Merge branch 'pygments' into 'master'

Pygments

Closes #4, #5, #6, #7, and #8

See merge request !1
This commit is contained in:
Ariejan de Vroom 2017-03-20 16:42:08 +01:00
commit bae0611961
125 changed files with 3315 additions and 2354 deletions


@ -1,13 +0,0 @@
HUGO_VERSION=0.19
HUGO_DOWNLOAD=hugo_${HUGO_VERSION}_linux-64bit.tar.gz
HUGO_FILENAME=hugo_${HUGO_VERSION}_linux_amd64
set -x
set -e
# Install Hugo if not already cached or upgrade an old version.
if [ ! -e $CIRCLE_BUILD_DIR/bin/hugo ] || ! [[ `hugo version` =~ v${HUGO_VERSION} ]]; then
wget https://github.com/spf13/hugo/releases/download/v${HUGO_VERSION}/${HUGO_DOWNLOAD}
tar xvzf ${HUGO_DOWNLOAD} -C $CIRCLE_BUILD_DIR/bin/
ln -sf $CIRCLE_BUILD_DIR/bin/$HUGO_FILENAME/$HUGO_FILENAME $CIRCLE_BUILD_DIR/bin/hugo
fi


@ -1,7 +0,0 @@
set -x
set -e
# Install yuicompressor if not already cached
if [ ! -e $CIRCLE_BUILD_DIR/bin/yuicompressor ]; then
npm install -g --prefix ${CIRCLE_BUILD_DIR} yuicompressor
fi


@ -3,6 +3,10 @@ languageCode = "en-us"
title = "ariejan de vroom"
copyright = "Ariejan de Vroom"
PygmentsCodeFences = true
pygmentsuseclasses = false
pygmentsstyle = "arduino"
[author]
name = "Ariejan de Vroom"


@ -9,15 +9,16 @@ I've been using a simple Rails application locally with a SQLite 3 database for
Here are some easy steps on how to migrate your data to MySQL. First of all you need to dump your SQLite3 database. This includes transaction statements and create commands. That's fine. Since we also migrate the schema information, our RoR app will not notice any difference after we change config/database.yml.
The biggest problem I encountered was that the SQLite3 dump places table names in double quotes, which MySQL won't accept.
~
First, make sure you create your MySQL database and create a user to access that database. Then run the following command. (It's a long one, so where you see a \, just continue on the same line.)
sqlite3 db/production.sqlite3 .dump | \
grep -v "BEGIN TRANSACTION;" | \
grep -v "COMMIT;" | \
perl -pe 's/INSERT INTO \"(.*)\" VALUES/INSERT INTO `\1` VALUES/' | \
mysql -u YOURUSERNAME -p YOURPROJECT_production
``` shell
sqlite3 db/production.sqlite3 .dump | \
grep -v "BEGIN TRANSACTION;" | \
grep -v "COMMIT;" | \
perl -pe 's/INSERT INTO \"(.*)\" VALUES/INSERT INTO `\1` VALUES/' | \
mysql -u YOURUSERNAME -p YOURPROJECT_production
```
This will take the SQLite 3 dump and remove the transaction commands. Next, perl replaces all INSERT statements that contain double-quoted table names with something MySQL will understand.


@ -4,22 +4,25 @@ title = "Tagging in ajax_scaffold"
tags = ["General", "Everything", "Web Development", "RubyOnRails", "Features"]
slug = "tagging-in-ajax_scaffold"
+++
I've been using the <a href="http://www.ajaxscaffold.com/">Ajax Scaffold</a> for quite some time now. It's a great piece of software by <a href="http://www.height1percent.com/">Mr. Richard White</a> for <a href="http://www.rubyonrails.com">Ruby on Rails</a>. It seems that the plugin version of AS is getting quite a bit more attention than the generator. I started out with the generator but quickly reverted to the plugin since it's way more flexible and easier to use.
Since I wanted to create a quick app to inventory my CD/DVD collection (which is now in a very sexy alu DJ case) I used Ajax Scaffold to get me started. In the spirit of Web 2.0 I wanted to add tags to every CD so it would be easier to find certain kinds of disks later on. So, I added <a href="http://wiki.rubyonrails.org/rails/pages/Acts+As+Taggable+Plugin">acts_as_taggable</a>.
Acts_as_taggable basically allows you to tag any model in your app. So, I made my Disk model taggable. Great. Now I could do this:
d = Disk.new(:number => 1, :name => "Mac OS X 10.4.6 Install DVD 1")
d.tag_with("macosx apple macbook install")
d.save
``` ruby
d = Disk.new(:number => 1, :name => "Mac OS X 10.4.6 Install DVD 1")
d.tag_with("macosx apple macbook install")
d.save
```
The real problem was: how do I get this functionality easily integrated into Ajax Scaffold?
~
First of all I had to show a column in AS that included the tags attached to a given disk. I specify all rows manually in the Manager controller. Manager is scaffolded using AS. Here's what my Manager controller looks like:
class ManagerController < ApplicationController
``` ruby
class ManagerController < ApplicationController
ajax_scaffold :disk
@@scaffold_columns = [
@ -28,7 +31,8 @@ First of all I had to show a column in AS that included the tags attached to a g
AjaxScaffold::ScaffoldColumn.new(Disk, { :name => "tags",
:eval => "row.tag_list", :sort => "tag_list"}),
]
end
end
```
This will show three columns, including a column named 'tags'. Every model that acts_as_taggable has some extra methods. tag_list is a single string containing all tags separated by spaces. So, the tags column shows the tag_list for that row. With :sort I specify that AS can just sort keywords alphabetically.
@ -38,9 +42,10 @@ Great! I now can see tags on disks! But, we also need to add those tags and that
Adding tags is not done by assignment but by calling a method with your tags, as shown before: tag_with(string). I could create custom create and update methods for the Disks, but there's a prettier solution available.
tag_list returns a string with the current tags. How about using that same name to assign tags? It's rather easy. Here's my Disk model:
`tag_list` returns a string with the current tags. How about using that same name to assign tags? It's rather easy. Here's my `Disk` model:
class Disk < ActiveRecord::Base
``` ruby
class Disk < ActiveRecord::Base
acts_as_taggable
validates_presence_of :name, :number
@ -48,12 +53,15 @@ tag_list returns a string with the current tags. How about using that same name
def tag_list=(new_tags)
tag_with new_tags
end
end
end
```
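Stripped of Rails, the accessor trick amounts to the following plain-Ruby sketch. Nothing here is the real acts_as_taggable plugin; tag_with and the tag storage are local stand-ins to show the pattern:

``` ruby
# Plain-Ruby stand-in for the tag_list accessor pattern (not the real plugin):
class Disk
  def initialize
    @tags = []
  end

  # Reader: all tags as one space-separated string
  def tag_list
    @tags.join(" ")
  end

  # Writer: assignment simply delegates to the tagging method
  def tag_list=(new_tags)
    tag_with(new_tags)
  end

  def tag_with(string)
    @tags = string.split
  end
end

d = Disk.new
d.tag_list = "apple macbook install"
d.tag_list # => "apple macbook install"
```

Because tag_list= is an ordinary writer method, form helpers can assign to it like any other model attribute.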
Now we can assign tags to tag_list as well as read them back. The only remaining step is to add a special text field to the form partial for AS.
<label class="required">Tags</label>
<%= text_field 'disk', 'tag_list' %>
``` erb
<label class="required">Tags</label>
<%= text_field 'disk', 'tag_list' %>
```
Now when a new disk is created or when one is updated, the tag_list will automagically be updated in a correct fashion.


@ -4,20 +4,23 @@ title = "WordpressMu: Dont allow new blogs"
tags = ["General", "Everything", "Features", "WordPressMu"]
slug = "wordpressmu-dont-allow-new-blogs"
+++
If you're using <a href="http://mu.wordpress.org">WordpressMu</a>, the blog hosting tool used on <a href="http://www.wordpress.com">Wordpress.com</a>, you may want to disable the creation of blogs by your visitors.
Whatever your reasons for this are, I wanted to prevent this, because I (and my team of editors) want to maintain several blogs on different topics. Users are free to register and post comments, but creating new blogs is reserved for the administrator.
So, how do you implement this in WordpressMu? There is no checkbox (yet) that disables this feature. So, I had to hack the WordpressMu code a bit.
~
First, open up wp-signup.php. If you access a blog that does not exist, you'll be redirected to the signup page and be presented a signup form for that particular blog.
In wp-signup.php, just above the get_header(); call, place the following code:
if (!is_user_logged_in() || $user_identity != 'admin') {
``` php
if (!is_user_logged_in() || $user_identity != 'admin') {
header("Location: http://example.com/gofishatthispage/");
exit();
}
}
```
What this does is make sure that only a logged-in user named 'admin' is allowed to proceed to the blog creation form. Others will be redirected to a location of your choice. A good idea is to send people to a page that explains why they can't create a blog or what they have to do to get an administrator to create one for them.


@ -7,19 +7,20 @@ slug = "cups-426-upgrade-required"
As I was installing my printer on my Ubuntu 6.06 Dapper LTS server with CUPS I noticed the following error:
**426 Upgrade Required**
> 426 Upgrade Required
After some research I came to the conclusion that CUPS, by default, tries to use SSL whenever possible. So, with this 426 error, you are redirected to the SSL domain. Chances are, you haven't configured SSL properly, if at all.
In my case, I didn't want to configure SSL. To get rid of this problem, the key lies in editing your configuration file (/etc/cups/cupsd.conf) and adding the following line:
<pre lang="bash">DefaultEncryption Never</pre>
``` text
DefaultEncryption Never
```
There are three options: Never, IfRequired and Required. By setting this to Never, SSL will never be enforced. Just restart your CUPS server with
$ /etc/init.d/cupsys restart
``` shell
/etc/init.d/cupsys restart
```
and you're good to go.


@ -30,7 +30,9 @@ In order to solve this problem I had to take a few, rather easy, steps.
To get started, let us assign a password to the default ubuntu user.
sudo passwd ubuntu
``` shell
sudo passwd ubuntu
```
Now enter something that you'll remember easily, twice.
@ -38,26 +40,30 @@ In order to get Ubuntu to recognize the native screen resolution automatically,
So, we now need to change <strong>/etc/apt/sources.list</strong> and add the universe repository. This is rather easy, because these repositories already exist, but are commented out. Just open up /etc/apt/sources.list and uncomment the two universe lines. Make sure your sources.list looks like this:
deb http://archive.ubuntu.com/ubuntu edgy main restricted
deb-src http://archive.ubuntu.com/ubuntu edgy main restricted
``` text
deb http://archive.ubuntu.com/ubuntu edgy main restricted
deb-src http://archive.ubuntu.com/ubuntu edgy main restricted
## Uncomment the following two lines to add software from the 'universe'
## repository.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## universe WILL NOT receive any review or updates from the Ubuntu security
## team.
deb http://archive.ubuntu.com/ubuntu edgy universe
deb-src http://archive.ubuntu.com/ubuntu edgy universe
## Uncomment the following two lines to add software from the 'universe'
## repository.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## universe WILL NOT receive any review or updates from the Ubuntu security
## team.
deb http://archive.ubuntu.com/ubuntu edgy universe
deb-src http://archive.ubuntu.com/ubuntu edgy universe
deb http://security.ubuntu.com/ubuntu edgy-security main restricted
deb-src http://security.ubuntu.com/ubuntu edgy-security main restricted
deb http://security.ubuntu.com/ubuntu edgy-security main restricted
deb-src http://security.ubuntu.com/ubuntu edgy-security main restricted
```
Now, we can update our system and install the 915resolution package.
apt-get update
apt-get install 915resolution
``` shell
apt-get update
apt-get install 915resolution
```
You'll notice that 915resolution spits out a lot of information on your chipset and native resolution. You should check here that the 1280x800 resolution has been detected.


@ -9,13 +9,15 @@ Many projects use SubVersion nowadays to store their project code. I do this als
The question, however, is how to release your current code properly to the public. You probably don't want your users to check out your current development code. Either you want them to check out a certain version (release) or you want to present them with a download archive containing the code.
I'm going to show you how to release a simple PHP application from SubVersion as an archive file to my users.
~
The base layout of my svn repository is like this. I have a directory named 'trunk' that always contains the most recent version of the software. This is the development branch, so to speak. I also have a 'branches' and a 'tags' directory. If you don't have these, you'll need to create them now:
$ svn mkdir -m "Creating branches directory" svn://yourrepository/branches
Committed revision 123.
$ svn mkdir -m "Creating tags directory" svn://yourrepository/tags
Committed revision 124.
``` shell
$ svn mkdir -m "Creating branches directory" svn://yourrepository/branches
Committed revision 123.
$ svn mkdir -m "Creating tags directory" svn://yourrepository/tags
Committed revision 124.
```
In this case my current development code, in the trunk of the svn repository, is at revision 10. All files in the trunk are marked to be development quality. This means that I don't display version numbers, but simply show 'HEAD' to indicate you're working with a development quality product. Before I release this code to the public, I want to tweak a few things. Since this is not general development, I create a <strong>Release Branch</strong>. This release branch is basically a copy of the current code in the trunk. Changes to that branch are stored separately from the development code, so I can easily tweak it to release quality.
@ -23,8 +25,10 @@ Creating the release branch is really easy. Let's say I want to release version
Well, create the Release Branch which is named, by convention, RB-1.1.0.
$ svn copy -m "Creating release branch 1.1.0" https://svn.sourceforge.net/svnroot/cse-tool/trunk https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.1.0
Committed revision 11.
``` shell
$ svn copy -m "Creating release branch 1.1.0" https://svn.sourceforge.net/svnroot/cse-tool/trunk https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.1.0
Committed revision 11.
```
You can see at <a href="http://cse-tool.svn.sourceforge.net/viewvc/cse-tool/branches/">http://cse-tool.svn.sourceforge.net/viewvc/cse-tool/branches/</a> that the new release branch (RB-1.1.0) was created as a directory containing a copy of the current development code.
@ -32,11 +36,15 @@ I can now do two things. Either I checkout a separate working copy of the releas
Just check out your code as you'd normally do, but make sure you specify the release branch. I've also specified to store this code in a directory named cse-tool-1.1.0 so I don't confuse it with the trunk code, which is stored in a directory named 'cse-tool'.
$ svn co https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.0.0 cse-tool-1.1.0
``` shell
$ svn co https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.0.0 cse-tool-1.1.0
```
I could also switch my current working copy to the release branch. This may be useful if your project is very large and you don't want to download the whole thing again. Switching between a release branch and the trunk is usually more efficient because you only need to download the differences between the two.
$ svn switch https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.0.0
``` shell
$ svn switch https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.0.0
```
Okay, now I can work on the release branch, branding it with the right version number among other things. In your case it might be a good place to add two SQL files to install or update a database you are using. You might want to update the changelog and other documentation.
@ -44,19 +52,22 @@ When you commit changes, they will be applied to the release branch only, <stron
When you're done you may switch back to the current development code:
$ svn switch https://svn.sourceforge.net/svnroot/cse-tool/trunk
``` shell
$ svn switch https://svn.sourceforge.net/svnroot/cse-tool/trunk
```
The code in the release branch is now ready to be shipped out. We want to mark this code as being Release 1.1.0. This is called tagging. A tag is nothing more than a copy of the repository at a given moment. Technically, a branch and a tag are the same. However, the conventions I use dictate that you don't change the code in a tag because it represents a certain state of your code, in this case the state the code was in at the time of Release 1.1.0.
Now, to actually create a release tag, named REL-1.1.0, we use the same procedure as with the creation of the release branch. Just note the differences in the source and destination repositories.
$ svn copy -m "Tag release 1.1.0" https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.1.0 https://svn.sourceforge.net/svnroot/cse-tool/tags/REL-1.1.0
Committed revision 13.
``` shell
$ svn copy -m "Tag release 1.1.0" https://svn.sourceforge.net/svnroot/cse-tool/branches/RB-1.1.0 https://svn.sourceforge.net/svnroot/cse-tool/tags/REL-1.1.0
Committed revision 13.
```
With the REL-1.1.0 tag we can create an archive that we can distribute to our users. Because we don't want to include svn metadata in our release we can't use checkout for this. SubVersion allows us to export our code, which is basically a check out, but without all the svn metadata. This is ideal to ship to our customers.
$ svn export https://svn.sourceforge.net/svnroot/cse-tool/tags/REL-1.1.0 cse-tool-1.1.0
``` shell
$ svn export https://svn.sourceforge.net/svnroot/cse-tool/tags/REL-1.1.0 cse-tool-1.1.0
```
Next I can tar up the cse-tool-1.1.0 directory and put the files on SourceForge. (<a href="http://sourceforge.net/project/showfiles.php?group_id=182622&package_id=211849">Download them here :)</a>)


@ -1,48 +0,0 @@
+++
date = "2006-11-22"
title = "Plugins used on Ariejan.net"
tags = ["General", "Everything", "Wordpress", "Ariejan.net", "Blog"]
slug = "plugins-used-on-ariejannet"
+++
After I released <a href="http://ariejan.net/2006/11/21/iariejan-wordpress-theme-10/">iAriejan</a> I got some questions about what plugins I run on Ariejan.net. So, upon your request, here is the full listing.
I've included links to all the sites where you can download the plugins.
Note that <a href="http://ariejan.net/2006/11/21/iariejan-wordpress-theme-10/">iAriejan</a> comes bundled with WP Wetfloor and AJAX Comments.
~
<ul>
<li><a href="http://www.mikesmullin.com/2006/06/05/ajax-comments-20/">AJAX Comments</a>
This plugin allows the instant posting of comments with a nice and sexy AJAX effect. Very Web 2.0ish.</li>
<li><a href="http://akismet.com/">Akismet</a><br />
Allows the screening of comments for SPAM. This is great in combination with the AJAX Comments!</li>
<li><a href="http://mightyhitter.com/main-page/plugins/mightyadsense/">Mighty Adsense</a><br />
Places Google Ads on my blog (without editing the theme) in order to cover my hosting expenses.</li>
<li><a href="http://maxpower.ca/">Adsense Target</a><br />
AdSense Target marks the important stuff so Google Ads match even better with the content on Ariejan.net. This is called Ad Targeting and is supported by Google. Invisible to you, but still very nice!</li>
<li><a href="http://blog.finke.ws/?p=11">FeedBurner Widget</a><br />
Allows me to publish my RSS feed through FeedBurner. This is great for the management of the feed and usage (not user!) tracking.</li>
<li><a href="http://boakes.org/analytics">Google Analytics</a><br />
This plugin makes use of Google Analytics' ability of tracking visitors on your site. This way I can see how people visiting Ariejan.net interact with my site and which parts need improvement.</li>
<li><a href="http://blog.igeek.info/wp-plugins/igsyntax-hiliter/">iG:Syntax Hiliter</a><br />
This plugin allows me to easily post different kinds of code like Ruby, PHP and C#. It will show the code properly formatted, syntax highlighting is included and it will allow you to copy it all as plain text! Want a <a href="http://ariejan.net/2006/11/21/svn-how-to-release-software-properly/">demonstration</a>?</li>
<li><a href="http://svn.wp-plugins.org/widgets/trunk">Sidebar Widgets</a><br />
Actually, I don't use widgets right now. I use a static sidebar. I plan on using it some time. I have it installed for the FeedBurner widget to do its work.</li>
<li><a href="http://www.siuyee.com/projects/wp-wetfloor/">WP Wetfloor</a><br />
This is a brilliant plugin. It uses Javascript to create reflections of images. It uses the current background colour as a basis and is fully configurable by adding a few CSS classes. Of course, this degrades very nicely on non-javascript clients. You'll just have to miss the eye-candy.</li>
</ul>
Well that's all there is. If you can think of any plugin that I should use (but am currently not using), feel free to drop a comment.
Edit: I completely forgot to mention <a href="http://www.mikesmullin.com/2006/06/05/ajax-comments-20/">AJAX Comments</a>. It has been added to the list.


@ -11,20 +11,24 @@ Fixing bugs can be as easy as fixing a few lines of code or as hard as rewriting
For this example let's say we have a project. It has a release branch named RB-1.0 and current development is going on in the trunk.
A user has submitted a bug report (numbered #3391) against your 1.0 release. Here's what to do:
~
### Easy bug fixes
Let's say bug #3391 is an easy fix. First, check out a working copy of the release branch.
$ svn co https://example.com/branches/RB-1.0 rb-1.0
``` shell
svn co https://example.com/branches/RB-1.0 rb-1.0
```
Now, go in there, write tests that expose the bug and fix it. As I said, it's an easy fix so you can commit all your changes at once to the release branch. When you do this, remember the new revision number.
**Note:** it's always smart to include the number of the bug (in this case #3391) in your commit message. This will make sure other developers (and later on, yourself) know what bug was fixed here.
$ svn commit -m "Bug fixed #3391"
...
Committed revision 183.
``` shell
$ svn commit -m "Bug fixed #3391"
...
Committed revision 183.
```
As I said, remember the revision number: 183.
@ -34,13 +38,17 @@ Don't start editing your working copy of the trunk and start fixing the bug all
Go into your trunk working copy and update it to the latest revision, which is now 183. But we only made changes to the release branch, not to the trunk, so we need to merge those changes. We can do this by running the following command:
$ svn merge -r182:183 https://example.com/branches/RB-1.0 rb-1.0
``` shell
svn merge -r182:183 https://example.com/branches/RB-1.0 rb-1.0
```
You'll now see the fix you applied in the release branch getting merged with your current development code. Great, isn't it?
Before you leave to party, don't forget to commit the changes to the trunk. Again, name the bug number you fixed here and also which revision you used to merge it.
$ svn commit -m "Merge r183 (bug fixed #3391)"
``` shell
svn commit -m "Merge r183 (bug fixed #3391)"
```
You can apply this merge process with any other release branch you have if that's necessary.
@ -52,22 +60,28 @@ If that's the case, you are better off creating a separate bug fix branch. This a
First, create a bug fix branch. By convention, release branches are called RB-1.0 and bug fix branches are called BUG-###, where ### corresponds to the bug report number. In this case we create a branch named BUG-3391. We also need to create a snapshot of the code before we start fixing the bug. We call this tag PRE-3391.
$ svn copy -m "Create bugfix branch" https://example.com/branches/RB-1.0 https://example.com/branches/BUG-3391
$ svn copy -m "Tag start of bug fix" https://example.com/branches/BUG-3391 https://example.com/tags/PRE-3391
``` shell
svn copy -m "Create bugfix branch" https://example.com/branches/RB-1.0 https://example.com/branches/BUG-3391
svn copy -m "Tag start of bug fix" https://example.com/branches/BUG-3391 https://example.com/tags/PRE-3391
```
Now, you can checkout the bug fix branch and start work. You may call in the help of others if you need to. It's okay to make multiple commits to this branch.
When you have reached the point where the bug is fixed, you'll need to mark the end of it. We create a new tag named POST-3391 to mark the end of the bugfix:
$ svn copy -m "Tag end of bug fix" https://example.com/branches/BUG-3391 https://example.com/tags/POST-3391
``` shell
svn copy -m "Tag end of bug fix" https://example.com/branches/BUG-3391 https://example.com/tags/POST-3391
```
Well. You're done! Your bug has been fixed! But wait a minute. The fix is not present in the release branch yet! Here, again, we need to merge the bug fix into the release branch (and possibly into the trunk also).
First, update your current working copy of the release branch and merge the changes between the PRE-3391 and POST-3391 tags with the release branch. When done, run your tests to make sure everything works as expected and commit your changes.
$ svn update
$ svn merge https://example.com/tags/PRE-3391 https://example.com/tags/POST-3391
$ svn commit -m "Merged bug fix for bug #3391"
``` shell
svn update
svn merge https://example.com/tags/PRE-3391 https://example.com/tags/POST-3391
svn commit -m "Merged bug fix for bug #3391"
```
### Final note


@ -9,25 +9,28 @@ I've seen it lots of times before, but I just added it to Ariejan.net (and the n
Since I haven't really looked into a plugin or anything, this is just a very simple theme hack.
You can apply it to your current theme with almost no effort at all.
~
Open up your comments.php file in your theme's directory and look for the following code:
<li class="<?php echo $oddcomment; ?>" id="comment-<?php comment_ID() ?>">
``` html+php
<li class="<?php echo $oddcomment; ?>" id="comment-<?php comment_ID() ?>">
```
and replace it with
<li class="<?php if ( $comment->comment_author_email == get_the_author_email() ) echo 'authorcomment'; else echo $oddcomment; ?>" id="comment-<?php comment_ID() ?>">
``` html+php
<li class="<?php if ( $comment->comment_author_email == get_the_author_email() ) echo 'authorcomment'; else echo $oddcomment; ?>" id="comment-<?php comment_ID() ?>">
```
What this will do is match the e-mail address of the poster with the e-mail address of the post author. This is in some way spoofable, as users may be able to post a comment with your e-mail address on it.
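The matching rule itself is tiny and independent of WordPress. As a plain-Ruby sketch of the same decision (the helper name and addresses are made up for illustration):

``` ruby
# Hypothetical helper mirroring the PHP ternary above: author comments get
# the 'authorcomment' class, all others keep the alternating odd/even class.
def comment_css_class(comment_email, author_email, odd_class)
  comment_email == author_email ? "authorcomment" : odd_class
end

comment_css_class("me@example.com", "me@example.com", "odd")  # => "authorcomment"
comment_css_class("you@example.com", "me@example.com", "odd") # => "odd"
```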
If you posted the comment, an extra CSS class named 'authorcomment' is added. So add the following to your style.css file (you may change this to suit your own taste, of course):
.authorcomment {
``` css
.authorcomment {
background-color: #363636;
border: 1px solid #969696;
}
}
```
To prevent this you can add your e-mail address (the one you use with your WP account) to Options -> Discussion -> Comment Moderation. This will keep any post that contains your email address back for moderation by you. This is the only fool-proof method I know right now to keep people from spoofing. There might be some other hacks for this, but I haven't had time to think about that yet.


@ -17,7 +17,7 @@ Most people know what Subversion is and that there's something called "The Trunk
As you may have read in my previous <a href="http://ariejan.net/tags/subversion">Subversion articles</a> the base of your Subversion repository are three directories: branches, tags and trunk.
Each directory in Subversion can be checked out separately. See the examples for more information.
~
### Trunk
The trunk contains the most current development code at all times. This is where you work up to your next major release of code.
@ -26,9 +26,7 @@ I see, almost too often, that people only use the trunk to store their code. Rel
The trunk should only be used to develop code that will be your next major release. Don't brand the trunk with version numbers or release names. Just keep the trunk in "development mode" at all times.
Example:
https://svn.example.com/svnroot/project/trunk
Example: `https://svn.example.com/svnroot/project/trunk`
### Branches
@ -44,7 +42,9 @@ The branch can be checked out separately and you can start branding and versioni
Of course, you can address a release branch directly to check it out:
https://svn.example.com/svnroot/project/branches/RB-1.0
``` text
https://svn.example.com/svnroot/project/branches/RB-1.0
```
#### Bug fix branches
@ -54,7 +54,9 @@ Bug fix branches are named after the ID they are assigned in your bugtracking to
Of course, you can access your bugfix branch like any other.
https://svn.example.com/svnroot/project/branches/BUG-3391
``` text
https://svn.example.com/svnroot/project/branches/BUG-3391
```
Read my <a href="http://ariejan.net/2006/11/22/svn-how-to-fix-bugs-properly/">how to fix bugs properly</a> article for more specific bug fixing information. Also read on in this article to the tags section.
@ -68,7 +70,9 @@ These experiments, maybe PHP 5 is a bridge too far for your app, should be given
Experimental branches may be abandoned when the experiment fails. If they succeed you can easily merge that branch with the trunk and deliver your big new technology. These branches are named after what you're experimenting with. I always prefix them with 'TRY-':
https://svn.example.com/svnroot/project/branches/TRY-new-technology
``` text
https://svn.example.com/svnroot/project/branches/TRY-new-technology
```
### Tags
@ -80,7 +84,9 @@ Release tags mark the release (and state) of your code at that release point. Re
You can access these tags easily:
https://svn.example.com/svnroot/project/tags/REL-1.0.0
``` text
https://svn.example.com/svnroot/project/tags/REL-1.0.0
```
See <a href="http://ariejan.net/2006/11/21/svn-how-to-release-software-properly/">my article on releasing software</a> for more information.
@ -92,10 +98,9 @@ The start-tag is called 'PRE' and the end-tag called 'POST'. Of course, you shou
You probably don't check out bug fix tags, but you want to reference them when merging bug fixes with your other code:
https://svn.example.com/svnroot/project/tags/PRE-3391
https://svn.example.com/svnroot/project/tags/POST-3391
``` text
https://svn.example.com/svnroot/project/tags/PRE-3391
https://svn.example.com/svnroot/project/tags/POST-3391
```
Read more on <a href="http://ariejan.net/2006/11/22/svn-how-to-fix-bugs-properly/">fixing bugs with Subversion</a> in my other article.


@ -11,7 +11,7 @@ In this first part I will show you how to install Subversion over WebDAV. All of
In future parts I will tell you more about installing Trac, FastCGI (with Apache) to host Rails applications and how to use Capistrano to deploy your app properly.
For now, let's get cracking at Subversion.
~
First off, I installed Ubuntu 6.10 on my server. Because I don't need a graphical user interface, I have installed Ubuntu in text-only mode.
<h3>Open up to the universe</h3>
@ -20,46 +20,50 @@ The first thing I always do when I install a Ubuntu box is to enable the univers
Edit /etc/apt/sources.list and uncomment all the Universe related lines. Also, comment out your install disk. Here's what my /etc/apt/sources.list looks like:
# deb cdrom:[Ubuntu 6.10 _Edgy Eft_ - Release i386 (20061025)]/ edgy main restricted
``` text
# deb cdrom:[Ubuntu 6.10 _Edgy Eft_ - Release i386 (20061025)]/ edgy main restricted
deb http://nl.archive.ubuntu.com/ubuntu/ edgy main restricted
deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy main restricted
deb http://nl.archive.ubuntu.com/ubuntu/ edgy main restricted
deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://nl.archive.ubuntu.com/ubuntu/ edgy-updates main restricted
deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy-updates main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://nl.archive.ubuntu.com/ubuntu/ edgy-updates main restricted
deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy-updates main restricted
## Uncomment the following two lines to add software from the 'universe'
## repository.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## universe WILL NOT receive any review or updates from the Ubuntu security
## team.
deb http://nl.archive.ubuntu.com/ubuntu/ edgy universe
deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy universe
## Uncomment the following two lines to add software from the 'universe'
## repository.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## universe WILL NOT receive any review or updates from the Ubuntu security
## team.
deb http://nl.archive.ubuntu.com/ubuntu/ edgy universe
deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy universe
## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
# deb http://nl.archive.ubuntu.com/ubuntu/ edgy-backports main restricted universe multiverse
# deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy-backports main restricted universe multiverse
## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
# deb http://nl.archive.ubuntu.com/ubuntu/ edgy-backports main restricted universe multiverse
# deb-src http://nl.archive.ubuntu.com/ubuntu/ edgy-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu edgy-security main restricted
deb-src http://security.ubuntu.com/ubuntu edgy-security main restricted
deb http://security.ubuntu.com/ubuntu edgy-security universe
deb-src http://security.ubuntu.com/ubuntu edgy-security universe
deb http://security.ubuntu.com/ubuntu edgy-security main restricted
deb-src http://security.ubuntu.com/ubuntu edgy-security main restricted
deb http://security.ubuntu.com/ubuntu edgy-security universe
deb-src http://security.ubuntu.com/ubuntu edgy-security universe
```
The next step is to make sure all software present is up-to-date. There are already a few updates available so run these two commands:
$ sudo apt-get update
$ sudo apt-get dist-upgrade
``` shell
sudo apt-get update
sudo apt-get dist-upgrade
```
That's it.
@ -67,7 +71,9 @@ That's it.
Because I have a MacBook, I'd like to use it to do my work. To do so I need to install the OpenSSH server on my server so I can access it over the network.
$ sudo apt-get install ssh
``` shell
sudo apt-get install ssh
```
This will install ssh and the OpenSSH server. It also generates everything you need automatically, like RSA keys.
@ -77,7 +83,9 @@ Now, try to log in with SSH from your desktop machine.
Since I want to use Subversion over WebDAV I will need to install Apache first. I'll grab a vanilla copy of Apache from Ubuntu here.
$ sudo apt-get install apache2
``` shell
sudo apt-get install apache2
```
If that's finished you should see a placeholder when you access your server with a browser. Check this now.
@ -85,19 +93,25 @@ If that's finished you should see a placeholder when you access your server with
Subversion is also easily installed.
$ sudo apt-get install subversion subversion-tools
``` shell
sudo apt-get install subversion subversion-tools
```
Next we should set up a location for our Subversion repositories. I chose to put them in /var/lib/svn. Create this directory now:
$ sudo mkdir -p /var/lib/svn
``` shell
sudo mkdir -p /var/lib/svn
```
I'll also create a repository now and set up a basic Subversion structure in it. In my case the project is called 'colt'.
$ sudo mkdir -p /var/lib/svn/colt
$ sudo svnadmin create /var/lib/svn/colt
$ sudo svn mkdir file:///var/lib/svn/colt/trunk -m "Trunk"
$ sudo svn mkdir file:///var/lib/svn/colt/tags -m "Tags"
$ sudo svn mkdir file:///var/lib/svn/colt/branches -m "Branches"
``` shell
sudo mkdir -p /var/lib/svn/colt
sudo svnadmin create /var/lib/svn/colt
sudo svn mkdir file:///var/lib/svn/colt/trunk -m "Trunk"
sudo svn mkdir file:///var/lib/svn/colt/tags -m "Tags"
sudo svn mkdir file:///var/lib/svn/colt/branches -m "Branches"
```
Note that you need to sudo the svn commands because only root has write access to your repository currently.
@ -105,7 +119,9 @@ Note that you need to sudo the svn commands because only root has write access t
Okay. You are already at revision 3 on your repository. Good work! Now let's make sure that your repositories are accessible over the web. First, we install libapache2-svn. This package includes WebDAV support for SVN.
$ sudo apt-get install libapache2-svn
``` shell
sudo apt-get install libapache2-svn
```
Next I open up /etc/apache2/mods-available/dav_svn.conf. This file contains configuration for the WebDAV and Subversion modules we just installed.
@ -117,15 +133,16 @@ Note: The authentication file we use here can be recycled later when we install
I made my configuration look like this:
# dav_svn.conf - Example Subversion/Apache configuration
#
# For details and further options see the Apache user manual and
# the Subversion book.
``` apache
# dav_svn.conf - Example Subversion/Apache configuration
#
# For details and further options see the Apache user manual and
# the Subversion book.
# <location URL> ... </location>
# URL controls how the repository appears to the outside world.
# In this example clients access the repository as http://hostname/svn/
<location /svn>
# <location URL> ... </location>
# URL controls how the repository appears to the outside world.
# In this example clients access the repository as http://hostname/svn/
<location /svn>
# Uncomment this to enable the repository,
DAV svn
@ -160,7 +177,8 @@ I made my configuration look like this:
Require valid-user
</limitexcept>
</location>
</location>
```
Now kick Apache to reload and you should be able to access your repository over the web! Try http://example.com/svn/colt.
@ -168,7 +186,9 @@ Now kick Apache to reload and you should be able to access your repository over
Reading of the repository is okay without authentication. But writing needs to be protected. We need to create a password file for this. This is easy and is already explained in /etc/apache2/mods-available/dav_svn.conf:
$ sudo htpasswd2 -c /etc/apache2/dav_svn.passwd ariejan
``` shell
sudo htpasswd2 -c /etc/apache2/dav_svn.passwd ariejan
```
Go ahead, add as many users as you need.
@ -176,12 +196,16 @@ Go ahead, add as many users as you need.
Before you start using Subversion, make sure you make the repositories owned by Apache. Apache is the one who will access the repositories physically. This is really easy:
$ sudo chown -R www-data.www-data /var/lib/svn
``` shell
sudo chown -R www-data.www-data /var/lib/svn
```
When you access your repository for write actions now, you will receive the following message:
Authentication realm: <http ://example.com:80> Subversion Repository Access
Password for 'ariejan':
``` text
Authentication realm: <http://example.com:80> Subversion Repository Access
Password for 'ariejan':
```
Alright sparky! Subversion access is ready for you now! Next time I'll tell you how to integrate Trac with your hot new Subversion repositories.

View File

@ -10,30 +10,37 @@ Also read <a href="http://ariejan.net/2006/12/01/how-to-setup-a-ubuntu-developme
In this part I will tell you how to install <a href="http://trac.edgewall.org/">Trac</a> on top of your Subversion repositories on your Ubuntu development server. Trac offers you a wiki, roadmap, tickets (tracking system) and access to your Subversion repository. All of this is bundled in a very sexy web interface.
Well, let's get to work now and get Trac installed. When you're done you will have trac available for all your Subversion repositories.
~
<h3>Install Trac</h3>
First thing to do is install Trac. Here I will also install mod_python for your Apache web server and python-setuptools, which we'll need later for the webadmin plugin.
$ sudo apt-get install trac libapache2-mod-python python-setuptools
``` shell
sudo apt-get install trac libapache2-mod-python python-setuptools
```
Now, I create a directory where all Trac information will be stored.
$ sudo mkdir -p /var/lib/trac
``` shell
sudo mkdir -p /var/lib/trac
```
Common sense dictates that you use the same name here for the trac environment as for the subversion repository.
Change to the trac directory and initialize the project:
$ cd /var/lib/trac
$ sudo trac-admin colt initenv
``` shell
cd /var/lib/trac
sudo trac-admin colt initenv
```
You'll need to name the project, choose a database file (the default is okay), specify where the Subversion repository resides (/var/lib/svn/colt, in this case) and a template (the default is okay here too).
I recommend you also create an administrator user right now. Make sure you add a user who's already in your /etc/apache2/dav_svn.passwd file.
$ sudo trac-admin colt permission add ariejan TRAC_ADMIN
``` shell
sudo trac-admin colt permission add ariejan TRAC_ADMIN
```
Well, that's it. Trac has been installed. Now let's make sure we can access trac through the web.
@ -41,25 +48,29 @@ Well, that's it. Trac has been installed. Now let's make sure we can access trac
Configuring Apache is rather easy when you know what to do. Add the following code to /etc/apache2/sites-available/default (at the bottom, before the end of the virtualhost tag) or put it in a separate virtual host file if you want to dedicate a special domain to this.
<location /projects>
``` apache
<location /projects>
SetHandler mod_python
PythonHandler trac.web.modpython_frontend
PythonOption TracEnvParentDir /var/lib/trac
PythonOption TracUriRoot /projects
</location>
</location>
<locationmatch "/projects/[^/]+/login">
<locationmatch "/projects/[^/]+/login">
AuthType Basic
AuthName "Trac Authentication"
AuthUserFile /etc/apache2/dav_svn.passwd
Require valid-user
</locationmatch>
</locationmatch>
```
Notice, again, that we use TracEnvParentDir to show we host multiple instances of Trac. You may change the TracUriRoot to something different.
Again, make sure to chown your Trac installation to www-data:
$ sudo chown -R www-data.www-data /var/lib/trac
``` shell
sudo chown -R www-data.www-data /var/lib/trac
```
Now, access your trac over the web: http://example.com/projects for a complete listing of hosted projects or http://example.com/projects/colt for the COLT project.
@ -75,18 +86,24 @@ Don't unzip this file, just remove the .zip extension.
Because we installed setuptools earlier, we can now use easy_install to install this plugin system-wide, enabling it for all our trac installations.
$ sudo easy_install TracWebAdmin-0.1.2dev_r4240-py2.4.egg
``` shell
sudo easy_install TracWebAdmin-0.1.2dev_r4240-py2.4.egg
```
Next we enable webadmin in the global configuration file of Trac. You may need to create the 'conf' directory in this case:
$ cd /usr/share/trac
$ sudo mkdir conf
$ sudo vi conf/trac.ini[/conf]
``` shell
cd /usr/share/trac
sudo mkdir conf
sudo vi conf/trac.ini
```
Next enter the following in trac.ini
[components]
``` ini
[components]
webadmin.* = enabled
```
Save the file and off you go. Login as the administrator user you specified earlier and you can make use of the 'admin' button that has appeared in the menu of Trac.

View File

@ -13,13 +13,14 @@ Installing Ruby on Rails on your Ubuntu box is not always as easy as it seems. H
This method was tested on both Dapper and Edgy systems. It may work on other Ubuntu releases as well. It's also possible that it works on Debian.
Besides Rails, I'll also install MySQL and SQLite 3 support.
<!--more-->
<h3>1. Install Ruby</h3>
Before putting anything on Rails, install Ruby.
$ sudo apt-get install irb1.8 libreadline-ruby1.8 libruby libruby1.8 rdoc1.8 ruby ruby1.8 ruby1.8-dev
``` shell
sudo apt-get install irb1.8 libreadline-ruby1.8 libruby libruby1.8 rdoc1.8 ruby ruby1.8 ruby1.8-dev
```
You may now check what version of Ruby you have by running `ruby -v`. It's just for your information.
@ -27,17 +28,21 @@ You may now check what version of Ruby you have by running `ruby -v`. It's just
Surf to <a href="http://rubyforge.org/frs/?group_id=126">http://rubyforge.org/frs/?group_id=126</a> and download the latest available gems package in tgz format. (You may also use the zip if you feel comfortable.)
$ wget http://rubyforge.org/frs/download.php/69365/rubygems-1.3.6.tgz
$ tar zxf rubygems-1.3.6.tgz
$ cd rubygems-1.3.6
$ sudo ruby setup.rb
$ sudo ln -sf /usr/bin/gem1.8 /usr/bin/gem
``` shell
wget http://rubyforge.org/frs/download.php/69365/rubygems-1.3.6.tgz
tar zxf rubygems-1.3.6.tgz
cd rubygems-1.3.6
sudo ruby setup.rb
sudo ln -sf /usr/bin/gem1.8 /usr/bin/gem
```
<h3>3. Install Rails!</h3>
You may now install Rails!
$ sudo gem install rails
``` shell
sudo gem install rails
```
That's it! Well, almost. You probably want some other things as well.
@ -45,7 +50,9 @@ That's it! Well, almost. You probably want some other things as well.
Before you continue, stop a moment to install some development tools. These tools are probably needed to compile and install the gems we are going to install next.
$ sudo apt-get install build-essential
``` shell
sudo apt-get install build-essential
```
<h3>MySQL Support</h3>
@ -53,15 +60,21 @@ You probably want MySQL Ruby support. The MySQL code in Rails sucks (no offence)
First, install MySQL.
$ sudo apt-get install mysql-server mysql-client
``` shell
sudo apt-get install mysql-server mysql-client
```
Next, install the development files for MySQL. We'll need these to build the Ruby bindings.
$ sudo apt-get install libmysqlclient15-dev
``` shell
sudo apt-get install libmysqlclient15-dev
```
Then, you may install the gem
$ sudo gem install mysql
``` shell
sudo gem install mysql
```
You have to choose what version you want to install. Enter the number corresponding with the latest version that is tagged 'ruby'. Installing win32 stuff on linux is generally not a good thing.
@ -69,8 +82,10 @@ You have to choose what version you want to install. Enter the number correspond
For SQLite 3 you need to install some packages before installing the gem.
$ sudo apt-get install sqlite3 libsqlite3-dev
$ sudo gem install sqlite3-ruby
``` shell
sudo apt-get install sqlite3 libsqlite3-dev
sudo gem install sqlite3-ruby
```
<h3>If things go wrong</h3>

View File

@ -15,32 +15,40 @@ First of all, in the controller, just get all the posts you need. In this case,
Controller:
def list
``` ruby
def list
@posts = Post.find :all
end
end
```
As you can see, I perform no ordering whatsoever here.
Now, in your view you normally would iterate over all posts like this:
<%= render :partial => 'post', :collection => @posts %>
``` erb
<%= render :partial => 'post', :collection => @posts %>
```
But, as I said, we want to group the posts by week. To make life easy, I add a method to the Post class that returns the week number in which a post was written:
Model Post:
def week
``` ruby
def week
self.created_at.strftime('%W')
end
end
```
Now, the magic will happen in our view:
<% @posts.group_by(&:week).each do |week, posts| %>
``` erb
<% @posts.group_by(&:week).each do |week, posts| %>
<div id="week">
  <h2>Week <%= week %></h2>
  <%= render :partial => 'post', :collection => posts %>
</div>
<% end %>
<% end %>
```
Let me explain the above. We specify that we want to call group_by for @posts. But we need to say how we want to group these posts. By specifying &:week we tell group_by that we want to group by the result of the week attribute of every post. This is the attribute we specified earlier in the model.
@ -52,7 +60,9 @@ As normal, we can now show the week number and iterate over the posts.
The result of group_by is not guaranteed to be ordered in any way. Simply call 'sort' before each and you're set:
@posts.group_by(&:week).sort.each do |week, posts|
``` ruby
@posts.group_by(&:week).sort.each do |week, posts|
```
Mostly, you'll find that the posts for every group are not sorted either. With the example above I think it's easy to figure out how to do that now. (hint: .sort)
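To see the grouping and sorting in action outside of Rails, here's a plain-Ruby sketch using a minimal stand-in for the Post model (the Struct and sample dates are mine, not from the app):

``` ruby
require 'time'

# Minimal stand-in for the Post model; just enough for group_by.
Post = Struct.new(:title, :created_at) do
  def week
    created_at.strftime('%W')
  end
end

posts = [
  Post.new('a', Time.parse('2007-01-01')),  # week 01
  Post.new('b', Time.parse('2007-01-03')),  # week 01
  Post.new('c', Time.parse('2007-01-10')),  # week 02
]

# group_by returns a hash of week => posts; sort orders it by week.
grouped = posts.group_by(&:week).sort
grouped.each do |week, group|
  puts "Week #{week}: #{group.map(&:title).join(', ')}"
end
```

The same `group_by(&:week).sort` call is all the view needs.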

View File

@ -1,18 +0,0 @@
+++
date = "2007-01-29"
title = "Hobo - And you thought Rails made life easy!"
tags = ["General", "Blog", "RubyOnRails"]
slug = "hobo-and-you-thought-rails-made-life-easy"
+++
If you've seen anything of <a href="http://www.rubyonrails.org">Ruby on Rails</a>, you know it makes your life really easy with all its generators and plugins. Well, check out <a href="http://hobocentral.net/blog/">Hobo</a>!
Hobo is a Rails plugin that adds tons of extra features that make your live even easier. Here's a quote from their site:
<blockquote>
Hobo is an Open-Source project that makes development with Rails even faster than it already is. It features:
<ul>
<li>A template engine that extends Rails' standard ERB templates with user-defined tags</li>
<li>A powerful library of pre-defined tags for knocking up ajaxified data-driven sites in a snap
(lots still to do

View File

@ -1,52 +0,0 @@
+++
date = "2007-02-21"
title = "RoR: link_to_remote with a text_field value as an argument"
tags = ["General", "RubyOnRails", "Features"]
slug = "ror-link_to_remote-with-a-text_field-value-as-an-argument"
+++
Forms are very well supported in Ruby on Rails. But in some cases you want to get information about a value a user has entered in a form, but without submitting the entire form. A good example of this is the sign up procedure at Yahoo. You enter your desired username, click "check availability" and you know if the name is available to you.
In my case I want to add locations to my database and use geocoding to get latituden/longitude values. So, I want users to enter an address and click "check" to verify that geocoding is successful. When successful, the full address and lat/lng values are automatically filled out in the form.
This article shows you how to update those text fields and most importantly, it shows how to add a text_field value as a paramater to your link_to_remote tag.
<!--more-->
This last part is easy with RJS:
<pre lang="ruby">render :update do |page|
page[:location_address].value = gecode.full_address
page[:location_lat].value = geocode.lat
page[:location_lng].value = geocode.lng
end</pre>
But now the tricky part! How to include the value of a text_field in a link_to_remote tag?? A normal link_to_remote tag might look like this:
<pre lang="ruby">link_to_remote "Check address",
:url {
:controller => 'locations',
:action => 'check_address',
:address => @location.address
},
:method => :get</pre>
The problem here is that we're at the 'new' form, so @location doesn't contain anything yet. Erb isn't going to help us either because it's already rendered and cannot change something 'onClick'.
You got it, JavaScript is your friend! You could now write a JS function that retrieves the value of the text_field and add it to the link_to_remote request. Well, you could. But there is an easier way!
Prototype is the answer! It comes default with Rails and if you already use the javascript defaults, it's already at your diposal. If not, include prototype in your layout first:
<pre lang="ruby">< %= javascript_include_tag 'prototype' %></pre>
Now, we can use Form.element.getValue to get the text_field value. Since Prototype is very nice, there's a shortcut available: $F(element).
The next question is how to integrate this with our link_to_remote tag. The :with paramater offers help!
<pre lang="ruby">link_to_remote "Check address",
:url {
:controller => 'locations',
:action => 'check_address',
:address => @location.address
},
:with => "'address='+encodeURIComponent($F(

View File

@ -6,7 +6,9 @@ slug = "subversion-how-to-revert-to-a-previous-revision"
+++
You've been there. You have been developing in your trunk for a while and at revision 127 you get the feeling you've done it all wrong! The production server is humming away at revision 123 and that's where you want to start out again. But how can you start again from revision 123? Easy as this with Subversion:
svn merge -rHEAD:123 .
``` shell
svn merge -rHEAD:123 .
```
This will see what changes you've made since r123 up until now (r127 in your case) and 'undo' them. Next you check in the code and you've got a sweet r128 that is exactly the same as r123. You can start over now!

View File

@ -8,20 +8,30 @@ slug = "tipsnippet-create-a-rss-feed"
RSS is hot! So, you want to fit your new Rails app with one too! That's easy, of course, but you just need to know what to do.
This snippet will show you how to create an RSS feed from your RESTful articles. I'll assume you know how to generate a resource named 'article' with a title, body and the default created_at and updated_at attributes.
<!--more-->
You'll first need to add a new collection to your resource in config/routes.rb
<pre lang="ruby">map.resources :articles, :collections => {:rss => :get}</pre>
``` ruby
map.resources :articles, :collection => {:rss => :get}
```
This will expose your RSS feed as http://localhost:3000/articles;rss
Create a corresponding action in the articles controller in app/controllers/articles_controller.rb. This method fetches the ten latest articles.
<pre lang="ruby">def rss
``` ruby
def rss
@articles = Article.find(:all, :limit => 10, :order => 'created_at DESC')
render :layout => false
end</pre>
end
```
I assume you render your articles in a layout. The render method here prevents your layout from rendering to create a plain XML file (which is what an RSS feed is).
Next we create a view. This is not the regular RHTML you're used to but RXML. This enables the XML generator which we'll use to generate the RSS feed. Create app/views/articles/rss.rxml
<pre lang="ruby">xml.instruct! :xml, :version=>"1.0"
Next we create a view. This is not the regular RHTML you're used to but RXML. This enables the XML generator which we'll use to generate the RSS feed. Create `app/views/articles/rss.rxml`
``` ruby
xml.instruct! :xml, :version=>"1.0"
xml.rss(:version=>"2.0"){
xml.channel{
xml.title("My Great Blog")
@ -39,10 +49,16 @@ xml.rss(:version=>"2.0"){
end
end
}
}</pre>
}
```
Well, that's it. You now have a working RSS feed!
If you want to enable auto discovery, you should add the following line to the header of your layout. (Auto discovery enables that little RSS icon in the address bar of your browser.)
<pre lang="ruby">< %= auto_discovery_link_tag(:rss, :controller => 'articles', :action => 'rss') %></pre>
``` erb
<%= auto_discovery_link_tag(:rss, :controller => 'articles', :action => 'rss') %>
```
Share and enjoy! Thank you.

View File

@ -6,48 +6,76 @@ slug = "rails-resources-and-permalinks"
+++
There has been quite a bit of discussion about creating permalinks with a rails resource. In this article I will show you how to create permalinks for a resource named 'pages' without giving up on any of the resource goodness!
<!--more-->
Before I start I'll presume you have a page scaffold_resource set up in your Rails application. Make sure you have at least the following fields in your page model:
<pre lang="ruby">t.column :title, :string
``` ruby
t.column :title, :string
t.column :permalink, :string
t.column :content, :text</pre>
t.column :content, :text
```
Okay, what you want is the permalink_fu plugin. This plugin greatly simplifies the act of generating a permalink from a title. Install it first:
<pre lang="bash">$ cd railsapp
$ ./script/plugin install http://svn.techno-weenie.net/projects/plugins/permalink_fu/</pre>
``` shell
cd railsapp
./script/plugin install http://svn.techno-weenie.net/projects/plugins/permalink_fu/
```
In your Page model you may now add the following line. This line will generate a permalink in the permalink attribute automatically, so you don't have to show the permalink field in any forms.
<pre lang="ruby">has_permalink :title</pre>
``` ruby
has_permalink :title
```
That's it for generating the appropriate permalink string in your database.
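To get a feel for what has_permalink stores, here's a rough plain-Ruby approximation of the title-to-permalink conversion (the regexp is my own; the real permalink_fu plugin also handles details like transliterating accented characters):

``` ruby
# Rough approximation of permalink_fu's slug generation (hypothetical;
# the real plugin has more rules).
def permalink_for(title)
  title.downcase.gsub(/[^a-z0-9]+/, '-').gsub(/\A-+|-+\z/, '')
end

puts permalink_for('Rails Resources and Permalinks')
```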
Rails goodness has already provided you with the basic RESTful routes:
<ul>
<li>/pages</li>
<li>/pages/123</li>
<li>/pages/new</li>
<li>/pages/123;edit</li>
</ul>
But what you really want, is something like:
<ul>
<li>/pages/perma-link-here</li>
</ul>
Notice that the permalink url is only a GET request and should not be used for editing or updating the page in question.
Since using any other identifier than :id in a resource is madness, I create two new routes that will allow me to access permalinked pages. Not only that, but I do maintain the format option. Basically this means that you get three routes:
<ul>
<li>/page/perma-link-here</li>
<li>/page/perma-link-here.html</li>
<li>/page/perma-link-here.xml</li>
</ul>
Notice that I removed the 's' from 'pages' here. This is to avoid confusion with the resource 'pages'. But more on that later.
Now in config/routes.rb add the following two lines:
<pre lang="ruby">map.permalink 'page/:permalink', :controller => 'pages', :action => 'permalink'
map.connect 'page/:permalink.:format', :controller => 'pages', :action => 'permalink', :format => nil</pre>
``` ruby
map.permalink 'page/:permalink', :controller => 'pages', :action => 'permalink'
map.connect 'page/:permalink.:format', :controller => 'pages', :action => 'permalink', :format => nil
```
The first line adds a named route to an action named 'permalink' in your PagesController. This gives you the ability to add permalink links easily:
<pre lang="ruby">permalink_url(@page.permalink)</pre>
``` ruby
permalink_url(@page.permalink)
```
The second route is unnamed, and allows you to specify a format like HTML or XML.
The permalink action looks like this:
<pre lang="ruby"># GET /page/perma-link
``` ruby
# GET /page/perma-link
# GET /page/perma-link.xml
def permalink
@page = Page.find_by_permalink(params[:permalink])
@ -56,7 +84,9 @@ def permalink
format.html { render :action => 'show' }
format.xml { render :xml => @page.to_xml }
end
end</pre>
end
```
This special permalink action uses the same 'show' view as your resource.
If you want to maintain the 'pages' part of the URL, that's possible. You'll have to write a condition that makes sure that the :permalink parameter is a string and not an integer (ID). This article does not cover this.
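That said, a rough sketch of such a condition (the regexp and helper are my own, not from the article): a permalink always contains at least one letter, while an ID is purely numeric.

``` ruby
# A permalink must contain at least one letter, so it can never be
# mistaken for a purely numeric :id. (Hypothetical helper, not from
# the article.)
PERMALINK_PATTERN = /\A[a-z0-9-]*[a-z][a-z0-9-]*\z/

puts(PERMALINK_PATTERN =~ 'perma-link-here' ? 'permalink' : 'id')
puts(PERMALINK_PATTERN =~ '123' ? 'permalink' : 'id')
```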

View File

@ -16,7 +16,7 @@ The best example of this is an Article, where you can select the author (the ass
There is only one point where I ran into trouble with ActiveScaffold: acts_as_taggable_on_steroids.
Acts_as_taggable_on_steroids allows you to easily attach tags to models and do all kinds of crazy stuff with them. But, if you want to integrate it into ActiveScaffold, you're in for a tough ride.
<!--more-->
ActiveScaffold supports has_many :through associations, but not in a way that is compatible with acts_as_taggable_on_steroids. Let me show you.
In your ArticlesController you specify which columns to show. "tag_list" is a stringified version of the tags associated with the Article, which is great for showing to a user.
@ -24,11 +24,15 @@ In your ArticlesController you specify which columns to show. "tag_list" is a st
However, if you want to edit an article (or create one), I don't want a text field where I have to enter tags manually; all I want is a bunch of check boxes, so I can check which tags apply to this article.
Showing the check boxes is easy with AS. By default I show 'tags', only in the list view do I use 'tag_list' instead. Also, make sure to set the ui_type for the tags column to :select. This will show you check boxes, instead of a sub form that allows you to create tags manually.
<pre lang="ruby">active_scaffold :article do |config|
``` ruby
active_scaffold :article do |config|
config.columns = [:title, :body, :tags, :author, :created_at]
config.list.columns = [:title, :author, :tag_list, :created_at]
config.columns[:tags].ui_type = :select
end</pre>
end
```
Well, very nice, right. You can now happily select the tags you want, and save your article. Not.
As you may have noticed, the tags are not saved. Why? Acts_as_taggable adds a 'tags' attribute to the model, however, when the Article model is saved, the tags attribute is overwritten by the tags specified in the "tags_list" attribute.
@ -36,15 +40,21 @@ As you may have noticed, the tags are not saved. Why? Acts_as_taggable adds a 't
The only way to solve this is to convert the tags selected in AS and store them as the tags_list attribute for the Article.
First, let's add a private method in the ArticlesController class:
<pre lang="ruby">private
``` ruby
private
def new_tag_list(tag_ids)
tag_ids.map {|k,h| h['id']}.collect {|i| Tag.find(i)}.map do |tag|
tag.name.include?(Tag.delimiter) ? "\"#{tag.name}\"" : tag.name
end.join(Tag.delimiter.ends_with?(" ") ? Tag.delimiter : "#{Tag.delimiter} ")
end</pre>
end
```
And add two protected methods that extend the functionality of ActiveScaffold:
<pre lang="ruby">protected
``` ruby
protected
def before_create_save(record)
record.tag_list = new_tag_list(params[:record][:tags])
@ -52,7 +62,9 @@ end
def before_update_save(record)
record.tag_list = new_tag_list(params[:record][:tags])
end</pre>
end
```
This will take the actual form values from AS and create a tags_list. This new tags_list is then assigned to the article (named 'record' here). The two protected methods process the tags every time an Article is created or updated.
With this in place, you can happily assign tags to your articles! Please let me know if it worked for you, or if you have made any improvements to this solution.
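Stripped of the Tag model lookups, the quoting-and-joining at the heart of new_tag_list can be shown with plain strings (the delimiter and sample names below are my own, assuming the default delimiter of a comma):

``` ruby
# Plain-Ruby sketch of new_tag_list's core: quote names that contain
# the delimiter, then join with "delimiter + space".
delimiter = ','
tag_names = ['rails', 'ruby on rails', 'tips,tricks']

list = tag_names.map do |name|
  name.include?(delimiter) ? "\"#{name}\"" : name
end.join("#{delimiter} ")

puts list
```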

View File

@ -5,16 +5,26 @@ tags = ["General", "RubyOnRails", "Features", "Ruby"]
slug = "action-mailer-all-mail-comes-from-mailer-daemon"
+++
Today I was trying to send mail from my Rails application through Action Mailer. This is quite simple, but I wanted to use a custom from-address. So, I create a setup_email method in my UserNotifier class that sets some defaults for every email sent out:
<pre lang="ruby">class UserNotifier < ActionMailer::Base
Today I was trying to send mail from my Rails application through Action Mailer. This is quite simple, but I
wanted to use a custom from-address. So, I create a setup_email method in my UserNotifier class that sets
some defaults for every email sent out:
``` ruby
class UserNotifier < ActionMailer::Base
protected
def setup_email(user)
@recipients = "#{user.email}"
@from = "My Application <no-reply@example.com">
    @from = "My Application <no-reply@example.com"
end
end</no-reply@example.com"></pre>
end
```
Maybe you spotted the problem already, but I didn't. All the mail sent came from "MAILER DAEMON".
<pre>From: MAILER DAEMON</pre>
``` text
From: MAILER DAEMON
```
The problem was that @from didn't contain a properly formatted from-address. It is missing the closing >, and so my email server ignores it.
If you have this issue, double check the from address, and make sure it's valid! Cheers.
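For reference, a well-formed from-address keeps the closing bracket inside the string. A quick sanity check along those lines (the regexp and helper are my own, not Action Mailer's validation):

``` ruby
# "Name <address>" from-headers need the closing bracket inside the
# string; a simple sanity check (my own regexp, not Action Mailer's).
def well_formed_from?(from)
  !from.include?('<') || from =~ /<[^<>@]+@[^<>@]+>\z/ ? true : false
end

puts well_formed_from?('My Application <no-reply@example.com>')
puts well_formed_from?('My Application <no-reply@example.com')
```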

View File

@ -23,95 +23,182 @@ I assume that you have just installed a fresh system with Ubuntu Linux 7.04 or D
I'll be deploying an imaginary Rails application named "myapp" which uses MySQL and is stored in Subversion. More on that later on.
Well, let's get going and get that Ruby on Rails server ready.
<!--more-->
<h3>Update your system</h3>
Before you do anything, use apt-get to update your system to the latest possible version.
<pre lang="bash">sudo apt-get update
sudo apt-get dist-upgrade</pre>
``` shell
sudo apt-get update
sudo apt-get dist-upgrade
```
This probably installs a new kernel, so a reboot may be in order.
<h3>Enable SSH</h3>
Most people will want to have SSH on their server to login remotely. In this case I install both the server and client so you can SSH out if you need to:
``` shell
sudo apt-get install openssh-server openssh-client
```
You will now be able to log in remotely with SSH.
<em>You'll need SSH later if you want to use Capistrano to deploy your Ruby on Rails application. In any case, SSH is a good thing to have around.</em>
<h3>Subversion</h3>
If you are serious about your Ruby on Rails server, you want to have Subversion around. Most deployment scripts pull the latest revision of your code from subversion. No configuration needed here.
``` shell
sudo apt-get install subversion
```
We only need the client on the production server. We're not going to host Subversion repositories here.
<h3>Install MySQL Server</h3>
This is the first serious step you'll have to take. Both Ubuntu 7.04 and Debian 4.0 come with MySQL 5.0.x.
``` shell
sudo apt-get install mysql-server mysql-client libmysqlclient15-dev
```
Be sure to set a password for the root MySQL user. Failing to do so will leave your database open for anyone who wishes to see.
``` shell
mysqladmin -u root -h localhost password 'secret'
mysqladmin -u root -h myhostname password 'secret'
```
Make sure to replace <em>secret</em> with your actual password.
Try logging in to MySQL with your new password to make sure everything works okay.
``` shell
mysql -u root -p
Enter password:
mysql>
```
MySQL is in place now, so let's get cracking at Ruby and Rails now.
<h3>Ruby, Gems, Rails</h3>
Installing Ruby is quite easy:
``` shell
sudo apt-get install ruby
```
You'll now have Ruby 1.8.5. You will also need to install some other development packages to help you build native Ruby Gems.
``` shell
sudo apt-get install make autoconf gcc ruby1.8-dev build-essential
```
I'll install ruby gems the conventional way (so I'm not going to use Ubuntu's packages here). Download the <a href="http://rubyforge.org/frs/?group_id=126">latest Gems .tgz here</a>.
``` shell
wget http://rubyforge.org/frs/download.php/17190/rubygems-0.9.2.tgz
tar xvf rubygems-0.9.2.tgz
cd rubygems-0.9.2/
sudo ruby setup.rb
gem -v
```
That's it for the gems. Now, install Rails and all its dependencies:
``` shell
sudo gem install rails --include-dependencies
```
If you get an error message complaining that the 'rails' gem cannot be found, include the --remote option.
<h3>Oh no! More gems!</h3>
Next, let's install some essential Ruby Gems that will make your life quite a bit easier. Here we'll install the following gems:
<ul>
<li>mysql - For good MySQL connectivity</li>
<li>capistrano - Just to have it handy when needed</li>
<li>mongrel - Rails server</li>
<li>mongrel-cluster - To operate mongrel clusters</li>
</ul>
``` shell
sudo gem install mysql capistrano mongrel mongrel-cluster --include-dependencies
```
You'll be asked several times to choose a version for different gems. Always choose the latest available version for 'ruby'. (Don't choose win32. I don't need to explain why.)
<h3>Test Rails and MySQL operability</h3>
Before you continue you may want to take some time to test Rails and MySQL. It's not essential, but I recommend it because it will save you a lot of trouble later on.
Create a new Rails application in your homedir and create the corresponding MySQL database. Also edit config/database.yml to reflect your root password!
``` shell
mysqladmin -u root -p create testapp_development
mkdir testapp
rails testapp
cd testapp
vi config/database.yml
rake db:migrate
```
If you get any errors from that last command, check the previous steps. Normally, all should be fine and you can continue safely.
Most people configure a 'socket' in their config/database.yml for MySQL. Note that this socket is in different places for different distributions. Ubuntu and Debian keep it in /var/run/mysqld/mysqld.sock. You may need to update your configuration in order to connect to the database.
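For example, a production entry in config/database.yml pointing at that socket might look like this sketch (database name, username, and password are the placeholders used elsewhere in this article):

``` yaml
production:
  adapter: mysql
  database: myapp_production
  username: myapp
  password: secret_password
  socket: /var/run/mysqld/mysqld.sock
```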
<h3>Apache 2.2</h3>
The good thing about Ubuntu/Debian is that they both include Apache 2.2.x now. This branch of Apache includes a balancing proxy, which allows you to distribute your workload over several Mongrel servers (in your mongrel cluster). I'll come back to that later.
``` shell
sudo apt-get install apache2
```
Before you continue, enable several modules which we'll be using later on.
``` shell
sudo a2enmod proxy_balancer
sudo a2enmod proxy_ftp
sudo a2enmod proxy_http
sudo a2enmod proxy_connect
sudo a2enmod rewrite
```
That's it for now on Apache. Let's move along.
<h3>Prepping the Rails application</h3>
Okay, let's prepare the Rails application 'myapp' for deployment now, shall we?
First, create a production database on your server, and configure it in config/database.yml:
``` shell
mysql -u root -p
```
``` sql
create database myapp_production;
grant all privileges on myapp_production.* to 'myapp'@'localhost'
identified by 'secret_password';
```
The next step is to install capistrano. You can install it on your development machine as a gem, as demonstrated above. Next, apply capistrano to your application:
``` shell
sudo gem install capistrano
cap --apply-to myapp
```
Take a look at myapp/config/deploy.rb. This file contains the configuration for the deployment of your application. Take special care of the following; here's an example for 'myapp':
``` ruby
require 'mongrel_cluster/recipes'

set :application, "myapp"
set :repository, "http://svn.myhost.com/svn/#{application}/trunk"

# We only have one host
role :app, "myapp.com"
role :db, "myapp.com", :primary => true

# Don't forget to change this
set :deploy_to, "/home/ariejan/apps/#{application}"
set :mongrel_conf, "#{current_path}/config/mongrel_cluster.yml"
```
As you can see, I've already included several lines that enable the use of mongrel, we'll get to that next.
Make sure you adapt this file to your own needs. myapp.com is the address of the server you're going to deploy your application to. The mongrel_cluster.yml file will be created in a moment.
On the server, make sure you create the 'apps' directory. You can now setup a basic file structure for the deployment:
``` shell
cd myapp
cap setup
```
On your server, you'll notice that the /home/ariejan/apps/myapp directory was created, including some subdirectories.
If you are annoyed with entering your SSH password every time, create and upload your public SSH key to automate this. (I'll write something up about that later on.)
Now, configure mongrel. For a normal setup, with moderate traffic, you can handle all traffic with two mongrel instances. The mongrel servers will only be accessible through 'localhost' on the server on non-default ports. Apache will do the rest later.
In your rails app, run the following command:
``` shell
mongrel_rails cluster::configure -e production -p 9000 \
-a 127.0.0.1 -N 2 -c /home/ariejan/apps/myapp/current
```
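The generated config/mongrel_cluster.yml should then look roughly like this (a sketch reconstructed from the command's flags; treat the exact key names as an assumption):

``` yaml
---
cwd: /home/ariejan/apps/myapp/current
port: "9000"
environment: production
address: 127.0.0.1
servers: 2
```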
The configuration file we saw earlier has been created. Check in all new files into Subversion now, and cold deploy your application!
``` shell
cap cold_deploy
```
The deployment will checkout the most recent code from Subversion and start the mongrel servers.
After the deployment, migrate your production database and restart the mongrel cluster:
``` shell
cap migrate
cap restart
```
To check that your application is running, issue the following command on your server. It should return you the HTML code from your app:
``` shell
curl -L http://127.0.0.1:9000
```
<h3>Configure Apache and the Balancing Proxy</h3>
You have two mongrel servers running, ready to handle incoming requests. But, you want your visitors to use 'myapp.com' and not an IP address with different port numbers. This is where apache comes in.
Create a new file in /etc/apache2/sites-available named 'myapp' and add the following:
``` apache
<proxy>
BalancerMember http://127.0.0.1:9000
BalancerMember http://127.0.0.1:9001
ErrorLog /home/ariejan/apps/myapps/shared/log/tankfactions_errors_log
CustomLog /home/ariejan/apps/myapps/shared/log/tankfactions_log combined
</virtualhost>
```
Now enable this new site in apache:
``` shell
sudo a2ensite myapp
sudo /etc/init.d/apache2 force-reload
```
In some cases you may need to make a small change to /etc/apache2/mods-enabled/proxy.conf and swap
``` apache
Order deny,allow
Deny from all
```
for
``` apache
Order allow,deny
Allow from all
```
That's all, you can now access your app on myapp.com!
<h3>Maintaining your application</h3>
Now, happily develop your application and make updates (checking them in to Subversion). To update your web server run:
``` shell
cap deploy
```
If you made changes in the database, you may want to run a long_deploy:
``` shell
cap long_deploy
```
And if for some reason, your mongrel cluster dies, just restart it.
``` shell
cap restart
```
That's it! Happy hacking :)
<strong><a href="http://digg.com/programming/Rails_production_server_setup_and_deployment_on_Ubuntu_Debian">Please digg this story</a> to help spread the word! Thanks a lot!</strong>

Together with a colleague (who wishes not to be named), I found a solution that is quite elegant. Instead of using check boxes, and creating all kinds of subforms in ActiveScaffold, we opted for an auto_completing, comma-separated list of tags.
This article describes the solution we found. I think you'll like it very much!
<!--more-->
When you try to use acts_as_taggable with ActiveScaffold, you might use something like this in your BooksController.
``` ruby
active_scaffold :books do |config|
  config.columns = [:title, :body, :tags]
  config.list.columns = [:title, :tag_list]
  config.columns[:tags].ui_type = :select
  # ...
end
```
This is not so useful when you want the flexibility of creating new tags instantly. Therefore, it's better to use the tag_list:
``` ruby
active_scaffold :books do |config|
  config.columns = [:title, :body, :tag_list]
  config.list.columns = [:title, :tag_list]
  # ...
end
```
You get a text_field for writing down the tags (comma-separated). The problem with this is that the user has to keep all the tags in mind and is not allowed to make any typos in a tag. To help our users out, I use Rails' auto_complete feature.
In your BooksController:
``` ruby
auto_complete_for :book, :tag_list
def autocomplete_tag_list
@all_tags = Tag.find(:all, :order => 'name ASC')
end
render :layout => false
end
```
We can now create the template for the results which are found.
In app/views/books/autocomplete_tag_list.rhtml:
``` erb
<ul class="autocomplete_list">
  <% @tags.each do |t| %>
    <li class="autocomplete_item"><%= t %></li>
  <% end %>
</ul>
```
Now comes the difficult part, integration of the auto_complete widget within ActiveScaffold.
ActiveScaffold has the possibility to change the way every attribute is displayed on the create and edit page. I want to change the form for the attribute 'tag_list'. To do this, I create a file named app/views/books/_tag_list_form_column.rhtml:
``` erb
<dl>
<dt>
<label for="record_tag_list">AutoCompleted Tag List</label>
//]]>
</script>
</dd>
</dl>
```
This shows a text field and generates a div that contains the available tags that we can show to the user. To populate the list of tags we use Ajax.Autocompleter, which requires three arguments: the id of the text_field; the id of the div where you want to show possible tags to the user; and third, the URL of the action we created before, that returns the proper tags.
The 'tokens' part of the last argument indicates that the user can separate multiple tags with a comma. So, if you've entered one tag, added a comma and start typing a new tag, the auto complete feature will only look up that second tag you're typing!
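The token behaviour is easy to see in a plain Ruby sketch (a hypothetical helper, not part of the plugin): only the text after the last comma is matched against the known tags.

``` ruby
# Hypothetical sketch of comma-token completion: match only the
# partial tag the user is currently typing (after the last comma).
def completions(field_value, all_tags)
  current = field_value.split(',').last.to_s.strip.downcase
  all_tags.select { |tag| tag.downcase.start_with?(current) }
end
```

So with tags 'ruby', 'rails' and 'testing', typing 'rails, ru' offers only 'ruby'.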

@ -10,21 +10,29 @@ In the essence of every application is data. One way or another your application
Downloading files is not the hardest thing around. But the problem is that some formats, like XML, are automatically parsed by the browser and this makes it harder for users to download files like that.
So, what you want to do is, ignore the browser and offer your data (in XML or whatever format you want) as a file that can be downloaded directly. The solution is rather easy, as always with Rails.
<!--more-->
Okay, in this example I have an action that renders current 'entries' as XML and offers this to the user:
``` ruby
def export_to_xml
  @entries = Entry.find(:all)
  render :xml => @entries.to_xml
end
```
This works as you'd expect it to. When calling this action, the user receives an XML file containing all entries. But really, do you want that in your browser? Especially when the XML file is rather large, this can be very annoying, because your browser will want to load it all in!
What you want here is offer the users a file named 'entries.xml' for download. In this case we use Rails' send_data method. The previous action now looks like this:
``` ruby
def export_to_xml
  @entries = Entry.find(:all)
  send_data @entries.to_xml,
    :type => 'text/xml; charset=UTF-8;',
    :disposition => "attachment; filename=entries.xml"
end
```
It's clear that we send the XML data to the client. I specify the type and charset of the data with the 'type' parameter. This way the browser knows what is being sent and allows the user to choose an application that can use the data. In this case an XML reader, for example.
The disposition parameter tells the browser this should be downloaded as a file (or attachment). It also specifies what the name of the attachment is, 'entries.xml'.
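In plain terms, those two options end up as two response headers. A rough sketch of the mapping (illustrative only, not Rails' actual implementation):

``` ruby
# Illustrative sketch: the headers that result from the send_data options.
def download_headers(data_type, disposition)
  {
    'Content-Type'        => data_type,
    'Content-Disposition' => disposition
  }
end

headers = download_headers('text/xml; charset=UTF-8;',
                           'attachment; filename=entries.xml')
```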

You're an island, and have no clue about the new revision being created.
When you commit your changes, you'll get an error message:
``` shell
$ svn commit -m "Updated README"
Sending README
Transmitting file data .svn: Commit failed (details follow):
svn: Out of date: '/myproject/README'
```
This is good. Subversion has detected that the file you want to commit has changed since you last updated it. Update the file to get it up-to-date again.
``` shell
$ svn update
C README
Updated to revision 6.
```
The 'C' indicates there is a conflict with the README file, and Subversion does not know how to solve this. You are called in to help.
If you now take a look at README, you'll notice that there are several markers that indicate what parts of the code are conflicting. You can easily see what you changed, and what has changed in the repository:
``` text
<<<<<<< .mine
This is fun stuff!
=======
This is a documentation file
>>>>>>> .r6
```
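To make the marker layout concrete, here is a small illustrative Ruby sketch (not an svn feature) that pulls the two competing versions out of a conflicted file:

``` ruby
# Illustrative only: extract the '.mine' and repository versions
# from Subversion's conflict markers.
def split_conflict(text)
  mine   = text[/^<{7} \.mine\n(.*?)^={7}\n/m, 1]
  theirs = text[/^={7}\n(.*?)^>{7} /m, 1]
  [mine, theirs]
end
```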
## What are your options?
You have three options for resolving the conflict. Whatever you choose, make sure you confer with your colleague on the matter.
<em>1. Scrap your changes, and go with the current work from your colleague.</em>
This is the easiest solution. All you have to do is revert the changes you made, and update your working copy:
``` shell
$ svn revert README
Reverted 'README'
$ svn update README
At revision 6.
```
<em>2. Keep your changes, and dump whatever your colleague did.</em>
Performing a simple 'ls' will show you that there are four files related to this conflict.
To check in your changes, copy your version over the original and tell Subversion you have resolved the conflict.
``` shell
$ cp README.mine README
$ svn resolved README
Resolved conflicted state of 'README'
```
The 'resolved' command will clean up all the special files that were generated.
If you choose this option, you will have to manually edit README. Remove the markers.
Subversion won't let you commit this file, so you'll have to mark it as 'resolved' as we saw during option 2:
``` shell
$ svn resolved README
Resolved conflicted state of 'README'
```
<em>Before you rush ahead</em>

This article will show you how to develop a plugin that adds functionality to a controller.
Let's take a basic Rails application for starters. You have set up a model with some attributes and a scaffolded controller that allows you to CRUD your items. In this tutorial I'll be working with books. The model is named 'Book' and the controller 'BooksController'. Start your web server now and add some random data to play with.
Before you dive into writing a plugin for the controller to export data to XML you should have some basic functionality in your controller first. I've found it easier to develop my code in the controller first, and then port it to a plugin.
<!--more-->
So, add a new method to your BooksController that'll export books to XML. This looks quite easy:
``` ruby
def export_to_xml
  books = Book.find(:all, :order => 'title')
  send_data books.to_xml,
    :type => 'text/xml; charset=UTF-8;',
    :disposition => "attachment; filename=books.xml"
end
```
Now, call /books/export_to_xml and you download a real XML file containing all your books! To make things a bit more complicated, we want to be able to feed this method some conditions to select books. A nice solution is to add a special method for this that defines these conditions. (You could also use them in listing books, for example.) I add a new method to the BooksController:
``` ruby
def conditions_for_collection
  ['title = ?', 'some title!']
end
```
The condition is of the same format you can feed to <em>find</em>. Here you could, for example, select only the books belonging to the currently logged in user.
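For instance, scoping the export to the logged-in user could look like this sketch (current_user is an assumed helper from your authentication setup; a stub stands in for it here so the example runs):

``` ruby
# Hypothetical sketch: restrict the export to one user's books.
# current_user would normally come from your authentication plugin;
# the Struct below is only a stand-in for this example.
CurrentUser = Struct.new(:id)

def current_user
  CurrentUser.new(42)
end

def conditions_for_collection
  ['user_id = ?', current_user.id]
end
```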
Next, update the export_to_xml method to use these conditions:
``` ruby
def export_to_xml
  books = Book.find(:all, :order => 'title', :conditions => conditions_for_collection)
  send_data books.to_xml,
    :type => 'text/xml; charset=UTF-8;',
    :disposition => "attachment; filename=books.xml"
end
```
Nice, that's it. Now, you like what you've made so far, and want to stuff it into a plugin and put it on your weblog. Here's how to go about that.
## Creating the plugin
First, generate the basic code for a plugin:
``` shell
./script/generate plugin acts_as_exportable
```
This will create a new directory in vendor/plugins containing all the basic files you need. First, we'll take a look at vendor/plugins/acts_as_exportable/lib/acts_as_exportable.rb. This is where all the magic happens.
What we want to do is add a method to ActionController::Base that allows you to easily enable the plugin in a certain controller. So, how do you want to activate the plugin? Right, you just call 'acts_as_exportable' from the controller, or optionally, you add the name of the model you want to use.
``` ruby
acts_as_exportable
acts_as_exportable :book
```
The vendor/plugins/acts_as_exportable/lib/acts_as_exportable.rb contains a module that's named after our plugin:
``` ruby
module ActsAsExportable
end
```
Next, we add a module named 'ClassMethods'. These class methods will be added to ActionController::Base when the plugin is loaded (we'll take care of that in a moment), and enable the functionality described above.
``` ruby
module ActsAsExportable
def self.included(base)
base.extend(ClassMethods)
end
instance_variable_get('@acts_as_exportable_config')
end
end
end
```
So, what happened? The first method you see extends the current class (that's every one of your controllers) with the methods from the ClassMethods module.
Every class now has the 'acts_as_exportable' method available. What does it do? The plugin automatically grabs the name of the model associated (by convention) with the controller you use, unless you specify something else.
Next, we create a new configuration object that contains information about the model we're working with. Later on this can contain more detailed information like what attributes to include or exclude from the export.
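The by-convention lookup boils down to simple name mangling on the controller's class name. A rough stand-alone sketch (real Rails uses Inflector pluralization rules; this naive singularization is an assumption for illustration only):

``` ruby
# Naive sketch of deriving the model name from a controller name:
# "BooksController" -> "Books" -> "Book". Real code would use the
# Rails Inflector instead of chopping a trailing 's'.
def model_name_for(controller_class_name)
  plural = controller_class_name.sub(/Controller\z/, '')
  plural.sub(/s\z/, '')
end
```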
Finally we include the module InstanceMethods, which we still have to define. The instance methods are only included when we enable the plugin. In our case, the instance methods include the 'export_to_xml' and 'conditions_for_collection' methods. We can simply copy/paste them into your plugin.
``` ruby
module InstanceMethods
def export_to_xml
data = Book.find(:all, :order => 'title', :conditions => conditions_for_collection)
send_data data.to_xml,
# Empty conditions. You can override this in your controller
def conditions_for_collection
end
end
end
```
Take note that we don't want to define any default conditions, because we don't know what model we're using here. By adding an empty method, the method is available and no conditions are used. Another developer can define 'conditions_for_collection' in his controller to override the one we write here.
In the 'export_to_xml' there are a few changes as well. First of all, I generalized 'books' to 'data'.
The most important step is yet to come. We still have application-specific code in our plugin, namely the Book model. This is where the Config class and @acts_as_exportable_config come in.
We have added a class variable to the controller named @acts_as_exportable_config. By default, this variable is not accessible by instance methods, so we need a little workaround:
``` ruby
self.class.acts_as_exportable_config
```
This will call the class method 'acts_as_exportable_config' we defined in ClassMethods and return the value of @acts_as_exportable_config.
Note that we store the configuration in each separate controller. This allows acts_as_exportable to be used with more than one controller at the same time.
With the model name made application independent, the whole plugin code looks like:
``` ruby
module ActsAsExportable
def self.included(base)
base.extend(ClassMethods)
end
def conditions_for_collection
end
end
end
```
Add the following line to your BooksController and restart your web server. (Oh, and make sure to remove the export_to_xml method from the controller as well.)
``` ruby
acts_as_exportable
```
Done! Or not?
## Enabling the plugin by default
We have a very nice plugin now, but it is not loaded by default! If you take a look at your plugin directory, you'll find a file named 'init.rb'. This file is executed when you (re)start your web server. This is the perfect place to add our class methods to the ActionController::Base. Just add the following three lines of code to 'init.rb':
``` ruby
ActionController::Base.class_eval do
  include ActsAsExportable
end
```
When we include our module, the 'self.included' method is called, and the ClassMethods module is added, thus enabling the acts_as_exportable method.
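The whole included/extend/include dance can be seen in isolation with this minimal, self-contained sketch (toy names, same mechanism):

``` ruby
# Minimal stand-alone version of the self.included / extend pattern.
module ActsAsGreeter
  def self.included(base)
    base.extend(ClassMethods)   # runs when the module is included
  end

  module ClassMethods
    def acts_as_greeter
      include InstanceMethods   # opt-in: adds the instance methods
    end
  end

  module InstanceMethods
    def greet
      "hello"
    end
  end
end

class FakeController
  include ActsAsGreeter
  acts_as_greeter
end
```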
That's all! Happy plugin writing!

tags = ["General", "RubyOnRails", "Features", "Ruby"]
slug = "using-iconv-to-convert-utf-8-to-ascii-on-linux"
+++
There are situations where you want to remove all the UTF-8 goodness from a string
(mostly because of legacy systems you're working with). Now, this is rather easy to do.
I'll give you an example: `çéß`
should be converted to `cess`. On my Mac, I can simply use the following snippet to convert
the string:
``` ruby
require 'iconv'

s = "çéß"
s = Iconv.iconv('ascii//translit', 'utf-8', s).to_s # returns "c'ess"
s.gsub(/\W/, '') # returns "cess"
```
Very nice and all, but when I deploy to my Debian 4.0 Linux system, I get an error
telling me that invalid characters were present. Why? Because the Mac has unicode goodness built in.
Linux does not (in most cases).
So, how do you go about solving this? Easy! Get unicode support!
``` shell
sudo apt-get install unicode
```
Now, try again.
## Bonus
If you want to convert a sentence (or anything else with spaces in it), you'll notice that spaces are removed by the gsub command. I solve this by splitting the string into words first, converting the words, and then joining them together again.
``` ruby
words = s.split(" ")
words = words.collect do |word|
  word = Iconv.iconv('ascii//translit', 'utf-8', word).to_s
  word = word.gsub(/\W/, '')
end
words.join(" ")
```
Like this? Why not write a mix-in for String?
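One way to do that, sketched without Iconv so it runs anywhere (the translit table is a deliberately tiny assumption; extend it for your own data):

``` ruby
# A possible String mix-in: transliterate a few known characters,
# then strip anything that still isn't plain ASCII word text.
module Asciify
  TRANSLIT = {
    'ç' => 'c', 'é' => 'e', 'è' => 'e', 'ê' => 'e',
    'á' => 'a', 'à' => 'a', 'ü' => 'u', 'ß' => 'ss'
  }.freeze

  def to_ascii
    chars.map { |c| TRANSLIT.fetch(c, c) }.join.gsub(/[^ \w]/, '')
  end
end

String.send(:include, Asciify)
```

Now `"çéß".to_ascii` gives "cess", and spaces survive thanks to the character class.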

I hereby proudly announce my <em>Super Simple Authentication</em> plugin and generator!
All right, what does it do? Sometimes you need to protect your actions and controllers, but you don't want to go about installing restful_authentication or anything like that. Adding a simple password for certain actions would suffice. So, I wrote a little plugin that can generate some code for you that allows you to easily protect your app with a simple password.
To get started, you must first install the plugin in your rails application:
``` shell
script/plugin install http://svn.ariejan.net/plugins/super_simple_authentication
```
When the plugin is installed, you may generate your SSA controller. This controller verifies your password and makes sure you stay authenticated for the duration of your visit.
<pre lang="bash">script/generate super_simple_authentication sessions</pre>
``` shell
script/generate super_simple_authentication sessions
```
Your password is located in config/super_simple_authentication.yml. Change it.
In the SessionsController, you'll find an include statement. Move this include to your application controller:
``` ruby
include SuperSimpleAuthenticationSystem
```
The generator automatically added routes to your config/routes.rb file. If you want easy access to login and logout functionality, add these two lines to your config/routes.rb file as well:
``` ruby
map.login '/login', :controller => 'sessions', :action => 'new'
map.logout '/logout', :controller => 'sessions', :action => 'destroy', :method => :delete
```
You can now protect your actions and controllers with a before_filter:
``` ruby
# Protect all actions in the controller
before_filter :authorization_required

# Protect all actions, except :index and :recent
before_filter :authorization_required, :except => [:index, :recent]

# Protect only :destroy
before_filter :authorization_required, :only => :destroy
```
In your views, you can check if you are authorized or not with authorized? E.g.
``` erb
<% if authorized? %>
<!-- do secret admin stuff -->
<% end %>
```
Please visit <a href="http://trac.ariejan.net">http://trac.ariejan.net</a> to report bugs. Ariejan.net will keep you updated on new major versions. <a href="http://feeds.feedburner.com/Ariejan">Please subscribe to the RSS Feed</a>.
I hope you enjoy this plugin. Please post a comment if you use it in your project, or if you just like it. Bugs, feature requests and support requests should go into <a href="http://trac.ariejan.net/newticket">Trac</a>.
@ -8,7 +8,7 @@ slug = "rails-20-new-features"
As <a href="http://www.loudthinking.com">David Heinemeier Hansson</a> already told us all during his RailsConf Europe 2007 keynote, it's time to take off the party hats. It's no longer a time to celebrate all the new stuff we get. It's time to celebrate what we already have.
With this statement DHH ends the revolution of Rails. During the past three years a lot of new and exciting features were added to Rails. However, now the time has come to evolve Rails further. No more new and exciting stuff, but fine-tuning. Making things even better than they already are.
<!--more-->
So, Rails 2.0 will not contain any major new features. But don't despair: there are quite a few nifty changes that you'll want to know about.
<strong>1. HTTP Authentication</strong>
@ -19,14 +19,18 @@ HTTP Authentication is a great way to limit access to specific areas of your app
The performance of a web application goes down the drain when you add too many JavaScript files and stylesheets. Each and every file must be downloaded separately. Rails 2.0 will be able to take all the JavaScript files, stuff 'em together, compress that one file and send it to the client, where it will be cached.
``` erb
<%= javascript_include_tag :all, :cache => true %>
<%= stylesheet_link_tag :all, :cache => true %>
```
<strong>3. Asset Server</strong>
If you have a large (and busy) site, serving static files can be quite a performance issue. Rails 2.0 adds the notion of an "asset server". An asset server (in combination with item #2) will serve static content quickly, allowing your app to respond even faster to a user's request. To enable asset hosts, add the following line to your configuration:
``` ruby
config.action_controller.asset_host = 'assets%d.example.com'
```
You can even cycle through multiple asset servers by simply creating the appropriate CNAME records in DNS. Neat, eh?
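For illustration only: with the default of four asset hosts, `%d` is replaced by 0 through 3, so the matching DNS zone entries could look like the following sketch (the host names here are hypothetical, not from the article):

``` text
; zone file for example.com (illustrative sketch)
assets0  IN  CNAME  www.example.com.
assets1  IN  CNAME  www.example.com.
assets2  IN  CNAME  www.example.com.
assets3  IN  CNAME  www.example.com.
```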
@ -54,22 +58,27 @@ Sexy Migrations have been around for some time as a plugin. Some people love the
Let's take an old migration:
``` ruby
create_table :people do |t|
t.column :first_name, :string, :null => false
t.column :last_name, :string, :null => false
t.column :group_id, :integer
t.column :description, :text
t.column :created_at, :datetime
t.column :updated_at, :datetime
end
```
We now have this:
``` ruby
create_table :people do |t|
t.integer :group_id
t.string :first_name, :last_name, :null => false
t.text :description
t.timestamps
end
```
<strong>8. Plugin Mania</strong>
@ -8,35 +8,47 @@ It may seem easy for some, but for others, installing MySQL on Ubuntu or Debian
First of all, make sure your package management tools are up-to-date. Also make sure you install all the latest software available.
``` shell
sudo apt-get update
sudo apt-get dist-upgrade
```
After a few moments (or minutes, depending on the state of your system), you're ready to install MySQL.
~
By default, recent Ubuntu/Debian systems install a MySQL Server from the 5-branch. This is a good thing, so don't worry.
First, install the MySQL server and client packages:
``` shell
sudo apt-get install mysql-server mysql-client
```
When done, you have a MySQL database ready to rock 'n' roll. However, there's more to do.
You need to set a root password, for starters. MySQL has its own user accounts, which are not related to the user accounts on your Linux machine. By default, the root account of the MySQL Server is empty. You need to set it. Please replace 'mypassword' with your actual password and myhostname with your actual hostname.
``` shell
sudo mysqladmin -u root -h localhost password 'mypassword'
sudo mysqladmin -u root -h myhostname password 'mypassword'
```
Now, you probably don't want just the MySQL Server. Most likely you have Apache+PHP already installed, and want MySQL to go with that. Here are some libraries you need to install to make MySQL available to PHP:
``` shell
sudo apt-get install php5-mysql
```
Or for Ruby:
``` shell
sudo apt-get install libmysql-ruby
```
You can now access your MySQL server like this:
``` shell
mysql -u root -p
```
Have fun using MySQL Server.
@ -7,30 +7,36 @@ slug = "rails-snippet-caching-expensive-calls"
In Rails, from time to time, you may find you have a method you call several times, but which always returns the same result. For example, take the following:
``` ruby
class Person < ActiveRecord::Base
has_many :articles
def get_approved_articles
self.articles.find(:all, :conditions => {:approved => true}, :order => 'approved_on DESC')
end
end
```
A query is fired every time you call Person#get_approved_articles. To cache the result of the query during this request, just add a bit of magic:
``` ruby
class Person < ActiveRecord::Base
has_many :articles
def get_approved_articles
@approved_articles ||= self.articles.find(:all, :conditions => {:approved => true}, :order => 'approved_on DESC')
end
end
```
This will return the @approved_articles value if it exists. If it doesn't, which is the first time you access the method, the query is run and stored in @approved_articles for later use.
Note: I know there's a much easier way to define this kind of behaviour, but it's just an illustration.
``` ruby
class Person < ActiveRecord::Base
has_many :articles
has_many :approved_articles, :class_name => "Article", :conditions => {:approved => true}, :order => 'approved_on DESC'
end
```
@ -23,48 +23,62 @@ will be reverted to HEAD (the last commit revision) of your code.
When you restore your stash, your changes are reapplied and you continue working
on your code.
## Stash your current changes
``` shell
$ git stash save <optional message for later reference>
Saved "WIP on master: e71813e..."
```
## List current stashes
Yes, you can have more than one!! The stash works like a stack. Every time you
save a new stash, it's put on top of the stack.
``` shell
$ git stash list
stash@{0}: WIP on master: e71813e...
```
Note the `stash@{0}` part? That's your stash ID, you'll need it to restore it
later on. Let's do that right now. The stash ID changes with every stash you
make. `stash@{0}` refers to the last stash you made.
## Apply a stash
``` shell
git stash apply stash@{0}
```
You may notice the stash is still there after you have applied it. You can drop
it if you don't need it any more.
``` shell
git stash drop stash@{0}
```
Or, because the stash acts like a stack, you can pop off the last stash you
saved:
``` shell
git stash pop
```
If you want to wipe all your stashes away, run the 'clear' command:
``` shell
git stash clear
```
It may very well be that you don't use stashes that often. If you just want to
quickly stash your changes to restore them later, you can leave out the stash
ID.
``` shell
$ git stash
# ...
$ git stash pop
```
Feel free to experiment with the stash before using it on some really important
work.
@ -17,39 +17,49 @@ The best example of such a surprise is RSpec. RSpec uses 'rake db:schema:dump' t
The solution is to disable the id column and create a primary key column named uuid instead.
``` ruby
create_table :posts, :id => false do |t|
t.string :uuid, :limit => 36, :primary => true
end
```
In your Post model you should then set the name of this new primary key column.
``` ruby
class Post < ActiveRecord::Base
set_primary_key "uuid"
end
```
The next step is to create the UUID itself. We'll have to do this in the Rails app, because most databases don't support UUIDs out of the box.
First, install the uuidtools gem:
``` shell
sudo gem install uuidtools
```
Create a file like lib/uuid_helper.rb and add the following content.
``` ruby
require 'rubygems'
require 'uuidtools'
module UUIDHelper
def before_create()
self.uuid = UUID.timestamp_create().to_s
end
end
```
Then, include this module in all UUID-enabled models, like Post in this example.
``` ruby
class Post < ActiveRecord::Base
set_primary_key "uuid"
include UUIDHelper
end
```
Now, when you save a new Post object, the uuid field is automatically filled with a Universally Unique Identifier. What else could you wish for?
@ -9,34 +9,39 @@ ActiveRecord is great in providing CRUD for your data models. In some cases, how
I'm going to show you how you can easily mark a Model as read only all the time. In this example I have a Item model like this:
``` ruby
class Item < ActiveRecord::Base
end
```
ActiveRecord::Base provides two methods that may be of interest here:
``` ruby
def readonly!
@readonly = true
end
def readonly?
defined?(@readonly) && @readonly == true
end
```
The first method sets the record to read only. This is great, but we don't want to set the read only property every time we load a model. The second, readonly?, returns true if the object is read only or false if it isn't.
So, if we return true on the readonly? method, our object is marked as read only. Great!
``` ruby
class Item < ActiveRecord::Base
def readonly?
true
end
end
```
That is all! All Item objects are now marked as read only all the time. If you try to write to the model, you'll receive an error.
``` ruby
item = Item.find(:first)
item.update_attributes(:name => 'Some item name')
=> ActiveRecord::ReadOnlyRecord
```
@ -12,7 +12,8 @@ Well, yes there is: create modules! Normally you'd write a module to reuse your
So, I package all related code (e.g. Authentication, state management, managing associated objects, etc) into different modules and place them in the /lib directory. Let's say you have a bunch of methods to keep a counter on your User model
``` ruby
class User < ActiveRecord::Base
attr_accessor :counter
def up
@ -26,16 +27,18 @@ So, I package all related code (e.g. Authentication, state management, managing
def reset
self.counter = 0
end
end
```
You could create a new file lib/counter.rb and include that module in your User model.
``` ruby
class User < ActiveRecord::Base
attr_accessor :counter
include Counter
end
module Counter
def up
self.counter += 1
end
@ -47,7 +50,8 @@ You could create a new file lib/counter.rb and include that module in your User
def reset
self.counter = 0
end
end
```
As you can see, this keeps your fat User model clean and makes it easier for you to find code that applies to a certain function.
@ -8,10 +8,12 @@ I'm currently writing some RSpec tests that use Time.now.
I want my model to calculate a duration and store the future time in the database. I've already specced the calculation of the duration, but I also want to spec that everything gets saved correctly. Here's my first spec:
``` ruby
it "should do stuff" do
m = Model.create()
m.expires_at.should eql(Time.now + some_value)
end
```
This fails.
@ -21,12 +23,13 @@ So how do you test this kind of behaviour? I was not going to let this one beat
What you need to do is stub out Time#now to return a constant value within this test. This way, both calls will use the same Time.now value and thus yield the same result. This in turn makes your test pass (if the saving goes well, of course).
``` ruby
it "should do stuff" do
@time_now = Time.parse("Feb 24 1981")
Time.stub!(:now).and_return(@time_now)
m = Model.create()
m.expires_at.should eql(Time.now + some_value)
end
```
@ -8,7 +8,9 @@ I just released version 0.1.0 of my IMDB gem which allows your app to search IMD
## Installation
``` shell
sudo gem install imdb
```
This will also install the dependencies Hpricot and HTTParty.
@ -16,29 +18,31 @@ This will also install the dependencies Hpricot and HTTParty.
In your project, include the gem (and possibly rubygems as well).
``` ruby
require 'rubygems'
require 'imdb'
search = Imdb::Search.new('Star Trek')
=> #<Imdb::Search:0x18289e8 @query="Star Trek">
puts search.movies[0..3].collect{ |m| [m.id, m.title].join(" - ") }.join("\n")
=> 0060028 - "Star Trek" (1966) (TV series)
0796366 - Star Trek (2009)
0092455 - "Star Trek: The Next Generation" (1987) (TV series)
0112178 - "Star Trek: Voyager" (1995) (TV series)
st = Imdb::Movie.new("0796366")
=> #<Imdb::Movie:0x16ff904 @url="http://www.imdb.com/title/tt0796366/", @id="0796366", @title=nil>
st.title
=> "Star Trek"
st.year
=> 2009
st.rating
=> 8.4
st.cast_members[0..2].join(", ")
=> "Chris Pine, Zachary Quinto, Leonard Nimoy"
```
As you can see, both `Imdb::Search` and `Imdb::Movie` are lazy loading, only doing an HTTP request when you actually request data. Also, the remote HTTP data is cached throughout the life span of your `Imdb::Movie` object.
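The lazy-loading pattern described here is easy to sketch in plain Ruby. This is only an illustration of the idea, not the gem's actual implementation; the `LazyMovie` class and its stubbed-out fetch are hypothetical:

``` ruby
# Illustration of the lazy-loading pattern, not the gem's real code.
class LazyMovie
  def initialize(imdb_id)
    @imdb_id = imdb_id   # constructing the object does no I/O
  end

  def title
    document[:title]     # first attribute access triggers the fetch
  end

  private

  # The fetched document is memoized for the life of the object,
  # so the (stubbed) remote request happens at most once.
  def document
    @document ||= fetch
  end

  def fetch
    { :title => "Star Trek" }  # stands in for the real HTTP request
  end
end
```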
@ -9,21 +9,25 @@ Active Records provides callbacks, which is great is you want to perform extra b
However, there are situations where you can easily fall into the trap of creating an infinite loop.
``` ruby
class Beer < ActiveRecord::Base
def after_save
x = some_magic_method(self)
update_attribute(:my_attribute, x)
end
end
```
The above will give you a nice infinite loop (which doesn't scale). It's possible to update your model without calling the callbacks and without resorting to SQL.
``` ruby
class Beer < ActiveRecord::Base
def after_save
x = some_magic_method(self)
Beer.update_all("my_attribute = #{x}", { :id => self.id })
end
end
```
This is a bit unconventional, but it works nicely. You can use all the following ActiveRecord methods to update your model without calling callbacks:
@ -6,14 +6,17 @@ slug = "rails-mysql-case-sensitive-strings-in-your-database"
+++
When using Rails + MySQL, you'll find that normal string (or varchar(255)) fields are case insensitive. This can be quite a nuisance, but it's easy to resolve. You need to set your table to the utf8_bin collation. By using the binary variant, you're basically enabling case sensitivity.
``` ruby
create_table :posts, :options => 'ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin' do |t|
t.string :title, :limit => 100
end
```
That's all. The <code>title</code> field is now case sensitive.
Another question I get a lot is how to change the collation and charset for an existing column. That's easy with the following query. Just make sure to pick the right column and data type:
``` sql
ALTER TABLE posts MODIFY `title` varchar(100) CHARACTER SET utf8 COLLATE utf8_bin;
```
@ -13,52 +13,63 @@ We use a base-10 system. This means that for any number, every digit can hold 10
You may be familiar with hexadecimal numbers. Hexadecimal has a base of 16, meaning every digit can hold 16 distinct values. 0 through 9, a through f.
## Converting an integer to a 32-base string
The key to shortening a number to a string is that you need to store as many different values in a single digit. Because we want the digits to be part of an URL, we can only use valid URL characters. Characters like '/' and '?' are a no-go in this situation.
Another reason for sticking with 32 (and not 62 (a-z, A-Z, 0-9)) is that I'm going to use bit encoding. But, first we need to define an alphabet to use to represent the 32 different values:
``` ruby
ENCODE_CHARS =
%w( B C D F G H J K
M N P Q R S T V
W Z b c d f h j
k m n p r x t v )
```
For a value of 3 we'd use 'F'. Easy, right?
Let's say we want to encode the number 123. What valerii does first is convert 123 to a binary string.
``` ruby
123.to_s(2) # => "1111011"
```
Cool, now split up this string in blocks of 5 bits. 5 bits can contain 32 different values. We need to start 'chopping' from the right to the left, so first we reverse the binary string and split it up.
``` ruby
"1111011".reverse.scan(/.{1,5}/) # => ["11011", "11"]
```
Now we convert the binary strings to 10-base numbers and map those to the characters defined in <code>ENCODE_CHARS</code>. The resulting characters are reversed again and joined to one string:
``` ruby
123.to_s(2).reverse.scan(/.{1,5}/).map do |bits|
ENCODE_CHARS[bits.reverse.to_i(2)]
end.reverse.join # => "Fp"
```
## Converting a 32-base string to an integer
Converting a 32-base string back to its original integer value is quite easy now. The only trick is to create another hash that maps each character to its integer value:
``` ruby
DECODE_MAP = ENCODE_CHARS.to_enum(:each_with_index).inject({}) do |h,(c,i)|
h[c] = i; h
end
```
This looks scary, but it's actually a common way to reverse key/values in a hash.
Next, we take each character from the string and push its 5 bits into an accumulator variable. The final value of this variable is the original integer.
``` ruby
"Fp".split(//).map { |char|
DECODE_MAP[char] or return nil
}.inject(0) { |result,val| (result << 5) + val } # => 123
```
## Notes
Once you have established your way of encoding/decoding you should not change the alphabet you're using, since it redefines the meaning and value of the encoded strings.
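For reference, the snippets above combine into a small, self-contained round-trip sketch. The alphabet is the one from the article; the helper method names `encode32`/`decode32` are mine, added for convenience:

``` ruby
ENCODE_CHARS =
  %w( B C D F G H J K
      M N P Q R S T V
      W Z b c d f h j
      k m n p r x t v )

DECODE_MAP = ENCODE_CHARS.to_enum(:each_with_index).inject({}) do |h,(c,i)|
  h[c] = i; h
end

# Encode a positive integer as a base-32 string, 5 bits per character.
def encode32(number)
  number.to_s(2).reverse.scan(/.{1,5}/).map do |bits|
    ENCODE_CHARS[bits.reverse.to_i(2)]
  end.reverse.join
end

# Decode a base-32 string back to its integer value; nil on invalid input.
def decode32(string)
  string.split(//).map { |char|
    DECODE_MAP[char] or return nil
  }.inject(0) { |result, val| (result << 5) + val }
end

encode32(123)   # => "Fp"
decode32("Fp")  # => 123
```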
@ -5,13 +5,15 @@ tags = ["git", "prune", "repack", "fsck"]
slug = "git-problem-error-unable-to-create-temporary-sha1-filename"
+++
I got <code>git problem: error: unable to create temporary sha1 filename</code> when pushing to a remote repository. The fix is rather easy.
~
On both your local and remote repositories perform the following magic:
``` shell
git fsck
git prune
git repack
git fsck
```
The last fsck should not report any problems.
@ -16,9 +16,11 @@ If you fix a bug or create a new feature do it in a separate branch!
Let's say you want to create a patch for my <a href="http://github.com/ariejan/imdb">imdb</a> gem. You should clone my repository and create a new branch for the fix you have in mind. In this sample we'll do an imaginary fix for empty posters.
``` shell
git clone git://github.com/ariejan/imdb.git
cd imdb
git checkout -b fix_empty_poster
```
Now, in the new <code>fix_empty_poster</code> branch you can hack whatever you need to fix. Write tests, update code etc. etc.
@ -28,10 +30,12 @@ When you're satisfied with all you changes, it's time to create your patch. FYI:
Okay, I've made some commits, here's the <code>git log</code> for the <code>fix_empty_poster</code> branch:
``` shell
git log --pretty=oneline -3
* ce30d1f - (fix_empty_poster) Added poster URL as part of cli output (7 minutes ago)
* 5998b80 - Added specs to test empty poster URL behaviour (12 minutes ago)
* aecb8cb - (REL-0.5.0, origin/master, origin/HEAD, master) Prepare release 0.5.0 (4 months ago)
```
In GitX it would look like this:
@ -39,7 +43,9 @@ In GitX it would look like this:
Okay, now it's time to go and make a patch! All we really want are the two latest commits, stuff them in a file and send them to someone to apply them. But, since we created a separate branch, we don't have to worry about commits at all!
``` shell
git format-patch master --stdout > fix_empty_poster.patch
```
This will create a new file <code>fix_empty_poster.patch</code> with all changes from the current (<code>fix_empty_poster</code>) against <code>master</code>. Normally, git would create a separate patch file for each commit, but that's not what we want. All we need is a single patch file.
@ -51,19 +57,25 @@ Now, you have a patch for the fix you wrote. Send it to the maintainer of the pr
First, take a look at what changes are in the patch. You can do this easily with <code>git apply</code>
``` shell
git apply --stat fix_empty_poster.patch
```
Note that this command does not apply the patch, but only shows you the stats about what it'll do. After peeking into the patch file with your favorite editor, you can see what the actual changes are.
Next, you're interested in how troublesome the patch is going to be. Git allows you to test the patch before you actually apply it.
``` shell
git apply --check fix_empty_poster.patch
```
If you don't get any errors, the patch can be applied cleanly. Otherwise you may see what trouble you'll run into. To apply the patch, I'll use <code>git am</code> instead of <code>git apply</code>. The reason for this is that <code>git am</code> allows you to <em>sign off</em> an applied patch. This may be useful for later reference.
``` shell
git am --signoff < fix_empty_poster.patch
Applying: Added specs to test empty poster URL behaviour
Applying: Added poster URL as part of cli output
```
Okay, patches were applied cleanly and your master branch has been updated. Of course, run your tests again to make sure nothing got borked.
@ -23,8 +23,10 @@ Now I have a few other sites running on my vps which I don't want to cache just
Here's the entire configuration for Varnish to accomplish just that:
``` text
backend default { .host = "127.0.0.1"; .port = "8080"; }
sub vcl_recv { if (req.http.host !~ "ariejan.net") { return(pass); } }
```
Yes, that is just two lines! What this does is forward everything you throw at varnish to the server at port 8080. The `vcl_recv` makes sure that if the hostname does not include ariejan.net varnish passes the request forward - no caching.
@ -36,33 +38,35 @@ When I first ran my `ab` benchmark with 10 concurrent connections I got to about
For the record:
``` shell
$ ab -c 1000 -n 60000 http://ariejan.net/2010/03/22/shields-up-rrrack-alert/
Server Software: Apache/2.2.15
Server Hostname: ariejan.net
Server Port: 80
Document Path: /2010/03/22/shields-up-rrrack-alert/
Document Length: 5117 bytes
Concurrency Level: 1000
Time taken for tests: 6.290 seconds
Complete requests: 60000
Failed requests: 0
Write errors: 0
Total transferred: 331434376 bytes
HTML transferred: 307460062 bytes
Requests per second: 9539.34 [#/sec] (mean)
Time per request: 104.829 [ms] (mean)
Time per request: 0.105 [ms] (mean, across all concurrent requests)
Transfer rate: 51459.38 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 56 364.9 12 6209
Processing: 2 22 76.7 16 3073
Waiting: 2 19 76.6 13 3070
Total: 3 78 374.6 29 6223
```
Interested? Check out [Varnish][1] now or ask us at [Kabisa][2] to help you out!
@ -8,7 +8,9 @@ To install Nokogiri on a Debian system you need some system packages in place. T
~
First, install the necessary debian packages if you don't have them already:
``` shell
apt-get install build-essential libxml2-dev libxslt1-dev
```
Then you can install Nokogiri without any problem with `gem install nokogiri`.
@ -15,7 +15,6 @@ _FYI: I'm currently running aj.gs on Firefly 0.1 with MySQL._
[1]: http://ariejan.net/2010/03/28/really-another-sinatra-url-shortener-in-ruby/
[2]: http://github.com/ariejan/firefly#readme
[3]: http://github.com/ariejan/firefly
### FireFly
@ -23,16 +22,19 @@ FireFly is a simple URL shortener for personal use.
### Installation
``` shell
sudo gem install firefly
```
After you have installed the Firefly gem you should create a `config.ru` file that tells your webserver what to do. Here is a sample `config.ru`:
``` ruby
require 'rubygems'
require 'firefly'
disable :run
app = Firefly::Server.new do
set :hostname, "localhost:3000"
set :api_key, "test"
@ -43,13 +45,16 @@ After you have installed the Firefly gem you should create a `config.ru` file th
# Make sure to install the do_mysql gem:
# sudo gem install do_mysql
# set :database, "mysql://root@localhost/firefly"
end

run app
```
Next you can start your web server. You may try thin:
``` shell
thin start -R config.ru
```
### Configuration
@ -67,24 +72,30 @@ It's possible to use all kinds of backends with DataMapper. Sqlite3 and MySQL ha
Adding a URL is done by doing a simple POST request that includes the URL and your API key.
``` shell
curl -d "url=http://ariejan.net" -d "api_key=test" http://localhost:3000/api/add
```
If you're on Mac OS X you could add the following function to your `~/.profile` to automate URL shortening:
``` shell
shorten(){
URL=$1
SHORT_URL=`curl -s -d "url=$URL&api_key=test" http://localhost:3000/api/add`
echo $SHORT_URL | pbcopy
echo "-- $URL => $SHORT_URL"
echo "Short URL copied to clipboard."
}
```
After you restart Terminal.app (or at least reload the `.profile` file) you can use it like this:
``` shell
$ shorten http://ariejan.net
-- http://ariejan.net => http://aj.gs/1
Short URL copied to clipboard.
```
### Bugs, Feature Requests, etc.
@ -13,11 +13,13 @@ Luckily, it's easy to detect web sockets support through JavaScript. All you rea
Here's a simple example, note that I'm using jQuery here.
``` javascript
$(document).ready(function() {
if( typeof(WebSocket) != "function" ) {
$('body').html("<h1>Error</h1><p>Your browser does not support HTML5 Web Sockets. Try Google Chrome instead.</p>");
}
});
```
Maybe there are better ways, but _it works for me_.
@ -15,34 +15,39 @@ Hot off the press is Firefly 0.3! Firefly is a simple URL shortener application
* Works with any Rack capable web server
Interested? Give it a go! Here's how:
~
### 1. Install Firefly
``` shell
gem install firefly
```
### 2. Ready?
Create a new directory and place the following `config.ru` file in it:
``` ruby
require 'rubygems'
require 'firefly'
disable :run
app = Firefly::Server.new do
set :hostname, "localhost:3000"
set :api_key, "test"
set :database, "sqlite3://#{Dir.pwd}/firefly.sqlite3"
end

run app
```
### 3. Go!
Start your engines! In this case I use `thin`:
``` shell
thin start -R config.ru
```
Now, when you visit `http://localhost:3000/` you'll be asked for your API key to login. Then, start shortening!
@ -54,11 +59,14 @@ When you update from 0.2 you'll notice your click stats are all indicating `0`.
1. Remove the `clicks` field from `firefly_urls`
``` sql
ALTER TABLE `firefly_urls` DROP `clicks`;
```
2. Rename the `visits` field to `clicks`
``` sql
ALTER TABLE `firefly_urls` CHANGE `visits` `clicks` int;
```
All done! No restart required.
@ -7,29 +7,37 @@ slug = "ruby-version-and-gemset-in-your-bash-prompt-yes-sir"
RVM is an easy way to switch between different ruby implementations and gemsets. If you don't know about it, [go check it out][1]. If you do know about it, you'll know how annoying it is never to know which ruby version and gemset you're currently using. Here is a nice `.profile` hack that shows your current ruby version and optional gemset in your prompt.
[1]: http://rvm.beginrescueend.com/
~
Firstly, add the following function to your `~/.profile`:
``` shell
function rvm_version {
local gemset=$(echo $GEM_HOME | awk -F'@' '{print $2}')
[ "$gemset" != "" ] && gemset="@$gemset"
local version=$(echo $MY_RUBY_HOME | awk -F'-' '{print $2}')
[ "$version" != "" ] && version="@$version"
local full="$version$gemset"
[ "$full" != "" ] && echo "$full "
}
```
Next, you can use this function in your prompt. Like this:
``` shell
export PS1="\$(rvm_version) \w \$(parse_git_branch) \$ "
```
The results? For standard ruby 1.8.7:
``` text
@1.8.7 ~ $
```
Or with the `rails3` gemset enabled:
``` text
@1.8.7@rails3 ~ $
```
So, now you always know which ruby you're using! Happy coding!
@ -6,7 +6,9 @@ slug = "firefly-041-released"
+++
I just pushed [Firefly 0.4.1][1] to [Rubygems][2]. Updating is easy:
``` shell
gem update firefly
```
Don't forget to restart your server, that's all.
@ -19,7 +21,7 @@ The 0.4.1 release covers the following changes:
[1]: http://github.com/ariejan/firefly/tree/v0.4.1
[2]: http://rubygems.org/gems/firefly
~
If you are interested in contributing to Firefly, please fork the project on [github][3]. Pull requests are very welcome.
[3]: http://github.com/ariejan/firefly
@ -7,21 +7,23 @@ slug = "bundler-passenger-with-rails-235-yes-please"
Bundler allows you to define the gems your application uses, resolve dependencies and load everything up. This is great, because you don't have to manage all those different gem versions yourself any more.
There is a little problem, though. When you want to use Bundler with Rails 2.3.5, you need to do a bit of extra work. You'll need to create a file `config/preinitializer.rb` that contains the following:
~
``` ruby
require "rubygems"
require "bundler"
if Gem::Version.new(Bundler::VERSION) <= Gem::Version.new("0.9.5")
raise RuntimeError, "Your bundler version is too old." +
"Run `gem install bundler` to upgrade."
end
begin
# Set up load paths for all bundled gems
Bundler.setup
rescue Bundler::GemNotFound
raise RuntimeError, "Bundler couldn't find some gems." +
"Did you run `bundle install`?"
end
```
Then you deploy your app with Capistrano (as `root`) and find that Passenger can't find your gems. True, you need to install them, so you add a Capistrano task to run `bundle install` after you update your code. Still, Passenger can't find the gems.
@ -29,10 +31,9 @@ The problem is that bundler installs the gems to your `~/.bundle`. When you run
A solution is easy: `bundle install .bundle` will install the gems to `./.bundle`, which should be your rails root directory. That solves your problem with passenger! Here's the full Capistrano task:
``` ruby
desc "Install bundled gems into ./.bundle"
task :bundle do
run "cd #{release_path}; bundle install .bundle"
end
```
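To run the task automatically on each deploy, you can hook it into Capistrano's update flow. This is a sketch for Capistrano 2; the exact hook point depends on your deploy recipe:

``` ruby
# Assumed Capistrano 2 callback: run the :bundle task defined above right
# after the code has been updated on the server.
after "deploy:update_code", "bundle"
```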
@ -6,29 +6,34 @@ slug = "upgrading-to-mongoid-beta-6"
+++
If you are working with Rails 3 and Mongoid, you're likely to upgrade to [`mongoid-2.0.0.beta6`][1]. That's okay, but you will run into a few problems. Among others, one will be:
``` text
Database should be a Mongo::DB, not NilClass
```
or
``` text
Mongoid::Errors::InvalidDatabase: Mongoid::Errors::InvalidDatabase
```
Another, Mongoid-related problem is the error `uninitialized constant OrderedHash`.
Luckily, these problems can be solved quite easily.
[1]: http://rubygems.org/gems/mongoid
~
The first thing you need to do is make sure you use the right version of `bson_ext`. Beta 6 requires you to run `bson_ext-1.0.1` or you'll get the `OrderedHash` error. Okay, with that out of the way, let's focus on the MongoDB errors.
The problem is that Mongoid is accessed/used before it is properly initialized. To resolve this issue, add the following line to your other requires at the top of `config/application.rb`.
``` ruby
require 'mongoid/railtie'
```
With that, you initialize Mongoid correctly. Hope it helps.
Update: I also ran into `keys must be strings or symbols` errors. This is now a _known issue_ with `mongoid-2.0.0.beta6` and has been fixed in `master`. If you are using Bundler (you are, aren't you?) then you can use the master branch instead of the gem:
``` ruby
# gem 'mongoid', '2.0.0.beta6'
gem 'mongoid', :git => 'http://github.com/durran/mongoid.git'
```
@ -9,24 +9,26 @@ Today version 0.4.3 of Firefly was released with some minor updates. To complete
The client library allows your Ruby application to easily shorten URLs with a remote Firefly server. It's very easy to use and lightweight.
[1]: http://github.com/ariejan/firefly-client
~
## Firefly 0.4.3 Changelog
* Handle invalid API keys correctly.
* Added a fix for MySQL users to update the `code` column to use the correct collation. Fixes [issue #9][2]
[2]: github.com/ariejan/firefly/issues/9
## Firefly Client
Using the Firefly Client is very easy, read the following snippet from the [README][3]:
``` ruby
require 'rubygems'
require 'firefly-client'
firefly = Firefly::Client.new("http://aj.gs", "my_api_key")
firefly.shorten("http://google.com")
=> "http://aj.gs/8ds"
```
Nice, huh? Get more info over at [github][1]
@ -8,7 +8,9 @@ I've always trouble uploading files with Curl. Some how the syntax for that comm
What I want to do is perform a normal `POST`, including a file and some other variables to a remote server. This is it:
``` shell
curl -i -F name=test -F filedata=@localfile.jpg http://example.org/upload
```
You can add as many `-F` as you want. The `-i` option tells curl to show the response headers as well, which I find useful most of the time.
~
@ -8,30 +8,38 @@ I'm often asked how to merge only specific commits from another branch into the
First of all, use `git log` or the awesome [GitX][1] tool to see exactly which commit you want to pick. An example:
``` text
dd2e86 - 946992 - 9143a9 - a6fd86 - 5a6057 [master]
\
76cada - 62ecb3 - b886a0 [feature]
```
Let's say you've written some code in commit `62ecb3` of the `feature` branch that is very important right now. It may contain a bug fix or code that other people need to have access to now. Whatever the reason, you want to have commit `62ecb3` in the master branch right now, but not the other code you've written in the `feature` branch.
~
Here comes `git cherry-pick`. In this case, `62ecb3` is the cherry and you want to pick it!
``` shell
git checkout master
git cherry-pick 62ecb3
```
That's all. `62ecb3` is now applied to the master branch and committed (as a new commit) in `master`. `cherry-pick` behaves just like `merge`. If git can't apply the changes (e.g. you get merge conflicts), git leaves you to resolve the conflicts manually and make the commit yourself.
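When git does stop on a conflict, the pick can be finished or abandoned like this (a sketch; the hash is the example commit from above, and `--continue`/`--abort` assume a reasonably recent git - on older versions you resolve and run `git commit` yourself):

``` shell
git cherry-pick 62ecb3
# ...git reports a conflict; edit the files, then mark them resolved:
git add .
git cherry-pick --continue
# Or throw the attempt away and restore the branch to its previous state:
git cherry-pick --abort
```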
## Cherry picking a range of commits
In some cases picking one single commit is not enough. You need, let's say three consecutive commits. `cherry-pick` is not the right tool for this. `rebase` is. From the previous example, you'd want commit `76cada` and `62ecb3` in `master`.
The flow is to first create a new branch from `feature` at the last commit you want, in this case `62ecb3`.
``` shell
git checkout -b newbranch 62ecb3
```
Next up, you rebase the `newbranch` commit `--onto master`. The `76cada^` indicates that you want to start from that specific commit.
``` shell
git rebase --onto master 76cada^
```
The result is that commits `76cada` through `62ecb3` are applied to `master`.
@ -8,9 +8,8 @@ In git, branching is cheap and easy. You do it all the time (you're not? Well, y
No problem for git! Renaming a branch is really easy:
``` shell
git branch -m old_branch new_branch
```
That's all.
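As a small addition: when you already have the branch checked out, `-m` with a single argument renames the current branch:

``` shell
# Rename the branch you are currently on:
git branch -m new_branch
```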
~
@ -18,11 +18,13 @@ To requeue jobs, you can use `Resque::Failure.requeue(index)` where index corres
To requeue all jobs in the _failed_ queue, you can simply run the following commands:
``` ruby
# Requeue all jobs in the failed queue
(Resque::Failure.count-1).downto(0).each { |i| Resque::Failure.requeue(i) }
# Clear the failed queue
Resque::Failure.clear
```
That's all there is to it, really. Happy processing!
@ -12,8 +12,10 @@ The first tool you need is ffmpeg. If you're on Mac, simple run `brew install ff
With `ffmpeg` setup and in your path, create a directory and stuff you WMA's there. Then open `irb` and run the following Ruby command:
``` ruby
ext = ".wma"
Dir.glob("*#{ext}").each {|f| m = f.gsub(ext, '.mp3'); `ffmpeg -i '#{f}' -ab 192k -ac 2 -ar 44100 '#{m}'` }
```
When done, you'll find the WMA files converted to MP3 (having the same filename, except for the extension).
@ -15,12 +15,16 @@ First, install the `pptpd` package. `pptpd` offers a `PPTP`-type VPN which is
supported by Microsoft and other network vendors. This is also the easiest to
setup.
``` shell
sudo apt-get install pptpd
```
Next up, edit `/etc/pptpd.conf` with `sudo vi /etc/pptp.conf`. At the bottom add the following lines:
``` text
localip 192.168.1.10
remoteip 192.168.1.230-239
```
Here `localip` references the IP of my home server. The `remoteip` variable
configures which IPs remote clients may use when they connect through VPN to my
@ -30,8 +34,10 @@ network. In this case I reserve 10 IP address: 192.168.1.230 through
With that out of the way, let's tell `PPTP` which users to allow. Edit
`/etc/ppp/chap-secrets`, just like you did before using `sudo`.
``` text
# client server secret IP Address
ariejan pptpd somepassword *
```
That's all! Yes, seriously. Just restart the `pptpd` daemon and you're good to
go.
@ -1,26 +0,0 @@
+++
date = "2010-10-12"
title = "SenTestCase: XCBuildLogCommandInvocationSection error in XCode 3.2"
tags = ["iphone", "sdk", "xcode"]
slug = "sentestcase-xcbuildlogcommandinvocationsection-error-in-xcode-32"
+++
Today I wanted to add some unit tests to an iPhone project I'm working on. I came across the following error when trying to run my tests:
An internal error occurred when handling command output: -[XCBuildLogCommandInvocationSection setTestsPassedString:]: unrecognized selector sent to instance 0x2017a22a0
An internal error occurred when handling command output: -[XCBuildLogCommandInvocationSectionRecorder endMarker]: unrecognized selector sent to instance 0x201719b60
~
It appears there's a bug in XCode somewhere that prevents unit tests from running
(yeah, I know, there should be tests for that at Apple). Anyway, you can
easily fix this.
Double-click on _Run Script_ for your testing target. Replace
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
with
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests" 1> /tmp/RunUnitTests.out
Now, build your test target again and you should be good to go.
@ -10,17 +10,23 @@ On Ubuntu I recently installed MySQL and set a password. Here's how to remove th
~
First, connect to MySQL and check what permissions are currently set:
``` shell
$ mysql -u root -p
use mysql;
select Host, User, Password from user;
```
You'll probably see three entries for the `root` user: `localhost`, `127.0.0.1` and your hostname like `hostname`. To clear the password for the root user issue the following query:
``` sql
update user set password = '' where user = 'root';
```
Optionally, you may only want to reset the password for `localhost`:
``` sql
update user set password = '' where user = 'root' and host = 'localhost';
```
Keep in mind that using no MySQL password is insecure. Always protect your MySQL database with at least a strong password in production.
@ -6,13 +6,17 @@ slug = "hide-last-login-on-bash-login"
+++
Every time I open a new Terminal on my Mac, I get a line like this:
``` text
Last login: Thu Nov 25 09:07:55 on ttys004
```
This annoys me. I don't care when I last opened a local Terminal.
~
To hide this "Last login" message when logging in to bash, you need to create a file in your home directory.
``` shell
touch ~/.hushlogin
```
With this `.hushlogin` file in place you won't see the "Last login" message and go directly to your prompt, where you want to be.
@ -6,7 +6,7 @@ slug = "why-did-errormessagesfor-disappear-from-rails-3"
+++
Today I learned that `error_messages_for` has disappeared from Rails 3. When I tried using it I got the following deprecation warning:
**DEPRECATION WARNING: form.error_messages was removed from Rails and is now available as a plugin.**
What happened? Why was this pulled from Rails 3?
~
@ -15,19 +15,20 @@ The reason `error_messages_for` was pulled from Rails 3 is a new guideline that
Okay, that's all well and good. How do you fix this problem? There are two ways. The first is to implement your own error handling. A HAML snippet:
``` haml
- if @post.errors.any?
  .errors
    %h2 There was a problem saving this post
    %ul
      - @post.errors.full_messages.each do |msg|
        %li= msg
```
This is by far the best way to go about it. However, if you have a current Rails 2 app you're upgrading, writing your own error handling may be rather difficult or time consuming. In that case you can install a plugin that restores the original functionality of `error_messages_for`.
``` shell
rails plugin install git://github.com/rails/dynamic_form.git
```
Just make sure to restart your server.
@ -10,7 +10,8 @@ Luckily, Amazon features _bucket policies_, which allow you to define permission
~
This example will give _read_ access to _Everyone_ on _all files_ in your bucket.
``` json
{
"Version":"2008-10-17",
"Statement":[{
"Sid":"AllowPublicRead",
@ -23,7 +24,8 @@ This example will give _read_ access to _Everyone_ on _all files_ in your bucket
]
}
]
}
```
**Make sure you replace `bucket` in `arn:aws:s3:::bucket/*` with your bucket name.**
@ -31,27 +31,28 @@ I don't want to configure anything, if possible.
Well, this is the rake task I currently use:
``` ruby
require 's3'
require 'digest/md5'
require 'mime/types'
## These are some constants to keep track of my S3 credentials and
## bucket name. Nothing fancy here.
AWS_ACCESS_KEY_ID = "xxxxx"
AWS_SECRET_ACCESS_KEY = "yyyyy"
AWS_BUCKET = "my_bucket"
## This defines the rake task `assets:deploy`.
namespace :assets do
desc "Deploy all assets in public/**/* to S3/Cloudfront"
task :deploy, :env, :branch do |t, args|
## Minify all CSS files
Rake::Task[:minify].execute
## Use the `s3` gem to connect my bucket
puts "== Uploading assets to S3/Cloudfront"
service = S3::Service.new(
@ -59,33 +60,33 @@ Well, this is the rake task I currently use:
:secret_access_key => AWS_SECRET_ACCESS_KEY)
bucket = service.buckets.find(AWS_BUCKET)
## Needed to show progress
STDOUT.sync = true
## Find all files (recursively) in ./public and process them.
Dir.glob("public/**/*").each do |file|
## Only upload files, we're not interested in directories
if File.file?(file)
## Slash 'public/' from the filename for use on S3
remote_file = file.gsub("public/", "")
## Try to find the remote_file, an error is thrown when no
## such file can be found, that's okay.
begin
obj = bucket.objects.find_first(remote_file)
rescue
obj = nil
end
## If the object does not exist, or if the MD5 Hash / etag of the
## file has changed, upload it.
if !obj || (obj.etag != Digest::MD5.hexdigest(File.read(file)))
print "U"
## Simply create a new object, write the content and set the proper
## mime-type. `obj.save` will upload and store the file to S3.
obj = bucket.objects.build(remote_file)
obj.content = open(file)
obj.content_type = MIME::Types.type_for(file).to_s
@ -100,10 +101,12 @@ Well, this is the rake task I currently use:
puts
puts "== Done syncing assets"
end
end
```
This rake task is hooked into my `rake deploy:production` script and generates the following output (I added a new file just to show you what happens.)
``` shell
$ rake deploy:production
(in /Users/ariejan/Code/Sites/ariejannet)
Deploying master to production
@ -121,6 +124,7 @@ This rake task is hooked into my `rake deploy:production` script and generates t
Total 30 (delta 17), reused 0 (delta 0)
-----> Heroku receiving push
```
### Conclusion ###
@ -32,7 +32,9 @@ added. I'll just summarize those and ignore the 1.2.x release I made.
Simply install the latest gem version and restart your server.
``` shell
gem install firefly -v1.3.0
```
Then restart your server. DataMapper will take care of migrating your
database for you.
@ -27,7 +27,9 @@ All right, you're now ready to enter your magic URL. Simply replace the
`YOUR_DOMAIN` and `YOUR_API_KEY` placeholders with your actual
domain name and API key (you can find them in your `config.ru`)
``` text
http://YOUR_DOMAIN/api/add?api_key=YOUR_API_KEY&url=%@
```
Then hit _"Save"_ and you're set! Twitter for iPhone will now
automatically use your Firefly instance to shorten URLs!
@ -16,9 +16,11 @@ citizen I want to test that method.
Using RSpec or Cucumber here just seems wrong. So, I've implemented
my own _Ruby Micro Test Framework_: *Narf*
``` ruby
def assert(message, &block)
puts "#{"%6s" % ((yield || false) ? ' PASS' : '! FAIL')} - #{message}"
end
```
(Yes, that's it.)
@ -28,48 +30,56 @@ method. And example:
Let's say you're going to write a method that counts how often the word
'ruby' occurs in a given String:
``` ruby
def count_rubies(text)
# TODO
end
#--- Ruby Micro Test Framework
def assert(message, &block)
puts "#{"%6s" % ((yield || false) ? ' PASS' : '! FAIL')} - #{message}"
end
#---
#--- Tests
assert "Count zero rubies" do
count_rubies("this is Sparta!") == 0
end
assert "Count one ruby" do
count_rubies("This is one ruby") == 1
end
assert "Count one RuBy" do
count_rubies("This is one RuBy") == 1
end
```
Now, simply save this file and feed it to ruby:
``` shell
$ ruby my_method.rb
! FAIL - Count zero rubies
! FAIL - Count one ruby
! FAIL - Count one RuBy
```
Now, implement your method...
``` ruby
def count_rubies(text)
text.scan(/ruby/i).size
end
```
And re-run your tests:
``` shell
$ ruby my_method.rb
PASS - Count zero rubies
PASS - Count one ruby
PASS - Count one RuBy
```
So with the addition of just a single method you can fully TDD/BDD your
single-method Ruby code. Pretty neat, huh?
@ -61,8 +61,10 @@ required - it's all built in into devise already!
Let me show you. First check that your (devise-powered) user has an
authentication token:
``` ruby
@user.authentication_token
=> "4R2bzzQRdoT_iz-ND4Bb"
```
In case your `authentication_token` is nil, you should generate one with
`@user.reset_authentication_token!`
@ -72,10 +74,12 @@ request to the server (while uploading files). Nothing fancy here
either. Note that this is a snippet of JavaScript, embedded in a HAML
template:
``` javascript
$('#image_file').uploadify({
// I omitted all other config options, since they're not relevant.
'script' : '#{images_path(:auth_token => current_user.authentication_token, :format => :json)}'
});
```
Rails will generate a URL like this:
`/images.json?auth_token=4R2bzzQRdoT_iz-ND4Bb`.
@ -83,13 +87,15 @@ Rails will generate a URL like this:
The final step is to protect your `ImagesController#create` action with
devise.
``` ruby
class ImagesController < ApplicationController
before_filter :authenticate_user!
def create
# Handle your upload
end
end
```
That's all. You don't even need to add rack middleware or hack Uploadify
to allow an authenticated devise user to upload images through flash.
@ -18,13 +18,16 @@ kick off the clean up process.
This is what I'd like:
``` shell
rake cleanup # Cleanup everything
rake cleanup:clicks # Aggregate click stats
rake cleanup:logs # Clean old logs
```
Here's what I put in `lib/tasks/cleanup.rake`:
``` ruby
namespace :cleanup do
desc "Aggregate click stats"
task :clicks => :environment do
Click.cleanup!
@ -36,10 +39,11 @@ Here's what I put in `lib/tasks/cleanup.rake`:
end
task :all => [:clicks, :logs]
end
desc "Cleanup everything"
task :cleanup => 'cleanup:all'
```
Notice that the `cleanup:all` task does not have a description. Without
it, it won't show up when you do a `rake -T` to view available tasks.
@ -4,6 +4,7 @@ title = "Vows and CoffeeScript"
tags = ["javascript", "bdd", "nodejs", "coffee-script", "vows", "v8"]
slug = "vows-and-coffeescript"
+++
CoffeeScript is a really nice way to write JavaScript code. Combined
with NodeJS you are empowered by a very fast platform to develop
server-side applications. Of course, you want to test these apps as well. [Vows][1] is really
@ -16,16 +17,19 @@ First off, make sure you have CoffeeScript and Vows installed. Here I
install them _globally_ so you can use the `coffee` and `vows` command
line utilities.
``` shell
npm install -g coffee-script
npm install -g vows
```
Next up, in your project directory, create a directory named `test`.
Here we'll create a (classic) example: `division-test.coffee`
``` coffee
vows = require 'vows'
assert = require 'assert'
vows
.describe('Division by zero')
.addBatch
'when dividing a number by zero':
@ -45,6 +49,7 @@ Here we'll create (classic) example: `division-test.coffee`
assert.notEqual topic, topic
.export(module)
```
I'm not going to explain the intimate details of Vows here, suffice it
to say that you calculate a value and store it in `topic`. Then you
@ -61,9 +66,9 @@ run them.
With the `test/division-test.coffee` saved, try running `vows` from
your console. Here's the output from `vows --spec`:
``` shell
vows --spec
♢ Division by zero
when dividing a number by zero
✓ we get Infinity
@ -71,7 +76,8 @@ your console. Here's the output from `vows --spec`:
✓ is not a number
✓ is not equal to itself
✓ OK » 3 honored (0.002s)
```
Another great tip is `vows -w`. This will keep vows running and monitor
your test files for changes. When a file changes, it will re-run your
@ -8,7 +8,9 @@ Sometimes it handy to get a list out of `git log` that tells you which files wer
~
Let's say you want to view all the changed files since the last tagged release, `v1.3.1`:
``` shell
git log --reverse --name-status HEAD...v1.3.1 | grep -e ^[MAD][[:space:]]
```
As you're used to, this shows each file that was *A*dded, *M*odified or *D*eleted. This command does not squash file changes, so it's possible for a file to first be added, then deleted, then added again and later modified. The `--reverse` option shows file changes historically, so the first file changed after the v1.3.1 release is shown first.
@ -18,7 +18,9 @@ After investigating, using _Activity Monitor_ I discovered the following:
So, something is using my disk. But what? The solution is to use the `iotop` utility:
``` shell
sudo iotop -C 5 12
```
A common entry here is the `mds` process, which has an insane amount of `BYTES`. So, this `mds` process is causing a lot of I/O, causing things to get slow.
@ -30,7 +32,9 @@ A quick Google search reveals that the `mds` process is actually the Spotlight i
I don't use Spotlight at all, so let's disable it - preventing the disk I/O.
``` shell
sudo mdutil -a -i off
```
That's all. Spotlight indexing disabled. After a few seconds the disk I/O dropped from ± 450w/s to 0w/s. Vim starts up again within a second. I'm happy.
@ -38,5 +42,6 @@ That's all. Spotlight indexing disabled. After a few seconds the disk I/O droppe
If, for some obscure reason, you want to re-enable Spotlight, use the following command:
``` shell
sudo mdutil -a -i on
```
@ -4,34 +4,43 @@ title = "Git: Squash your latests commits into one"
tags = ["git", "rebase", "squash"]
slug = "git-squash-your-latests-commits-into-one"
+++
With git it's possible to squash previous commits into one. This is a great way to group certain changes together before sharing them with others.
~
Here's how to squash some commits into one. Let's say this is your current `git log`.
``` text
* df71a27 - (HEAD feature_x) Updated CSS for new elements (4 minutes ago)
* ba9dd9a - Added new elements to page design (15 minutes ago)
* f392171 - Added new feature X (1 day ago)
* d7322aa - (origin/feature_x) Proof of concept for feature X (3 days ago)
```
You have a branch `feature_x` here. You've already pushed `d7322aa` with the proof of concept of the new feature X. After that you've been working to add new element to the feature, including some changes in CSS. Now, you want to squash your last three commits in one to make your history look pretty.
The command to accomplish that is:
``` shell
git rebase -i HEAD~3
```
This will open up your editor with the following:
``` text
pick f392171 Added new feature X
pick ba9dd9a Added new elements to page design
pick df71a27 Updated CSS for new elements
```
Now you can tell git what to do with each commit. Let's keep the commit `f392171`, the one where we added our feature. We'll squash the following two commits into the first one - leaving us with one clean commit with feature X in it, including the added elements and CSS.
Change your file to this:
``` text
pick f392171 Added new feature X
squash ba9dd9a Added new elements to page design
squash df71a27 Updated CSS for new elements
```
When done, save and quit your editor. Git will now squash the commits into one. All done!
@ -8,30 +8,38 @@ Sometimes you have to take your git repository's log to see what you did the day
~
To do this, I use the following custom `git log` command:
``` shell
git log --pretty=format:'%Cred%h%Creset - %C(yellow)%ae%Creset - %Cgreen%cd%Creset - %s%Creset' --abbrev-commit --date=iso
```
The result:
``` text
5d6ef1e - ariejan@ariejan.net - 2011-05-02 10:36:43 +0200 - Bumped to version 1.5.2
afede9e - ariejan@ariejan.net - 2011-05-02 10:35:53 +0200 - Fixed #29 - Sharing to facebook without a title now works properly.
d9985a1 - ariejan@ariejan.net - 2011-04-23 00:13:06 -0700 - Added Travis build status to README
3e1149c - ariejan@ariejan.net - 2011-04-22 13:44:21 +0200 - Bumped version to 1.5.1
fcab3bf - ariejan@ariejan.net - 2011-04-22 13:43:41 +0200 - Don't test on ruby 1.9.2 for now. See issue #27
93843ec - ariejan@ariejan.net - 2011-04-22 13:42:29 +0200 - Fixed issue #28 - Share-to-* double-encodes titles
ec56315 - ariejan@ariejan.net - 2011-04-22 12:43:41 +0200 - Fixed up utf-8 encoding for ruby 1.9.2
2146134 - ariejan@ariejan.net - 2011-04-22 12:39:37 +0200 - Bump Sinatra to 1.2.3
c5efbf4 - ariejan@ariejan.net - 2011-04-22 12:07:41 +0200 - Truncate long URLs in the back-end to maintain a correct layout.
```
Now, it's easy to see what I did on April 22nd using grep
``` shell
git log <snip> | grep 2011-04-22
```
You can also filter on email address to select only a specific user, etc. Of course, all this could be done with git, but I'm way more comfortable using grep.
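For completeness, git can also do the author and date filtering natively, no grep required (a sketch; the address and dates are illustrative):

``` shell
# Commits from a single day, by a single author:
git log --author="ariejan@ariejan.net" \
        --since="2011-04-22 00:00" --until="2011-04-22 23:59"
```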
To make life easy, you can create a shortcut for this log command. Add this to you `~/.gitconfig`:
``` ini
[alias]
timelog = log --pretty=format:'%Cred%h%Creset - %C(yellow)%ae%Creset - %Cgreen%cd%Creset - %s%Creset' --abbrev-commit --date=iso
```
You can now use `git timelog` in any git repository.
@ -8,15 +8,21 @@ We've all been there, you committed changes you now regret. If you didn't share
~
Use `git log` to see your most recent commits. Let's say you want to revert the last three commits, you can run the following command:
``` shell
git reset --hard HEAD~3
```
If you only want the last commit to be removed:
``` shell
git reset --hard HEAD~1
```
HEAD~1 is shorthand for the commit before HEAD. It's also possible to roll back to a specific commit using its SHA hash.
``` shell
git reset --hard d3f1a8
```
> Please note that all your uncommitted changes will be lost when you perform `git reset --hard`. You might want to use [git stash][1] to save your uncommitted changes.
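As a sketch of that workflow: in a throwaway repository (file names invented for illustration), `git stash` carries an uncommitted change safely across the hard reset:

``` shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "v1" > notes.txt; echo "clean" > scratch.txt
git add . && git commit -qm "first"
echo "v2" > notes.txt
git commit -qam "second"

echo "work in progress" > scratch.txt   # uncommitted change

git stash                   # save the uncommitted change
git reset --hard HEAD~1     # throw away the last commit
git stash pop               # re-apply the saved change
cat notes.txt               # back to "v1": the commit is gone
cat scratch.txt             # "work in progress": the edit survived
```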
@ -24,9 +30,11 @@ In case you already pushed your changes to a remote repository, you can't use `g
Note that git revert does not walk back into history, but only works on a specific commit or range of commits. To use my previous examples:
``` shell
git revert HEAD~3..HEAD
git revert HEAD~1..HEAD
git revert d3f1a8..master
```
Optionally specify the `--no-commit` option to see what's being reverted.
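For instance, in a scratch repository (contents invented) you can stage the revert of a range, inspect it, and only then commit:

``` shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "one" > file.txt;   git add . && git commit -qm "add one"
echo "two" >> file.txt;  git commit -qam "add two"
echo "three" >> file.txt; git commit -qam "add three"

# Stage the revert of the last two commits without committing yet
git revert --no-commit HEAD~2..HEAD
git diff --cached --stat           # inspect exactly what gets undone
git commit -qm "revert the last two commits"
cat file.txt                       # back to just "one"
```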
View File
@ -6,7 +6,9 @@ slug = "git-checkout-a-single-file-from-another-commit-or-branch"
+++
I recently worked on a new feature in a separate branch. It didn't work out well, so I branched master again and tried another solution. However, I needed specific files I committed in the first feature branch. To avoid placing those files back in my working copy by hand, I used git to check out the specific files from the other branch.
git checkout feature_1 -- path/to/file/iwant
``` shell
git checkout feature_1 -- path/to/file/iwant
```
This will *not* check out the `feature_1` branch, but instead check out the most recent version of `path/to/file/iwant` in the `feature_1` branch. Very handy indeed!
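The same trick works with any commit SHA, not just a branch name. A self-contained sketch (throwaway repository, invented file):

``` shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "old version" > config.yml
git add . && git commit -qm "first"
old=$(git rev-parse HEAD)       # remember the first commit's SHA
echo "new version" > config.yml
git commit -qam "second"

# Restore just this one file from the earlier commit; HEAD stays put
git checkout "$old" -- config.yml
cat config.yml                  # "old version" again
git log --oneline | head -n 1   # history unchanged, still at "second"
```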
View File
@ -43,90 +43,111 @@ There are a few things you need to setup before diving in. The first bit is run
Here's the full `apt-get` command list I used.
``` shell
apt-get update
apt-get upgrade -y
apt-get install build-essential ruby-full libmagickcore-dev imagemagick libxml2-dev \
libxslt1-dev git-core postgresql postgresql-client postgresql-server-dev-8.4 nginx curl
apt-get build-dep ruby1.9.1
```
You'll also need a separate user account to run your app. Believe me, you don't want to run your app as `root`. I call my user `deployer`:
``` shell
useradd -m -g staff -s /bin/bash deployer
passwd deployer
```
To allow `deployer` to execute commands with super-user privileges, add the following to `/etc/sudoers`. This requires `deployer` to enter a password before being granted super-user access.
``` shell
# /etc/sudoers
%staff ALL=(ALL) ALL
```
## Ruby and RVM
With that done, you're ready to install `rvm`. I performed a system-wide install, so make sure you run this as root.
``` shell
bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)
```
Next, install the required Ruby (in this case ruby-1.9.2-p290) and RubyGems:
``` shell
rvm install ruby-1.9.2-p290
wget http://production.cf.rubygems.org/rubygems/rubygems-1.8.10.tgz
tar zxvf rubygems-1.8.10.tgz
cd rubygems-1.8.10
ruby setup.rb
```
Create a `~/.gemrc` file; this sets some sane defaults for your production server:
``` yaml
# ~/.gemrc
---
:verbose: true
:bulk_threshold: 1000
install: --no-ri --no-rdoc --env-shebang
:sources:
- http://gemcutter.org
- http://gems.rubyforge.org/
- http://gems.github.com
:benchmark: false
:backtrace: false
update: --no-ri --no-rdoc --env-shebang
:update_sources: true
```
Also create this `~/.rvmrc` file to auto-trust your .rvmrc project files:
``` shell
# ~/.rvmrc
rvm_trust_rvmrcs_flag=1
```
_Note: do this for both `root` and the `deployer` user to avoid confusion later on._
Because you'll be running your app in production mode all the time, add the following line to `/etc/environment` so you don't have to repeat it with every Rails-related command you use:
``` shell
RAILS_ENV=production
```
## Postgres
I know not everybody uses Postgres, but I do. I love it and it beats the living crap out of MySQL. If you use MySQL, you'll know what to do. Here are instructions for setting up Postgres. First, create the database and log in as the `postgres` user:
``` shell
sudo -u postgres createdb my_site
sudo -u postgres psql
```
Then execute the following SQL:
``` sql
CREATE USER my_site WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE my_site TO my_site;
```
## Nginx
Nginx is a great piece of Russian engineering. You'll need some configuration though:
``` nginx
# /etc/nginx/sites-available/default
upstream my_site {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
# for UNIX domain socket setups:
server unix:/tmp/my_site.socket fail_timeout=0;
}
server {
# if you're running multiple servers, instead of "default" you should
# put your main domain name here
listen 80 default;
@ -169,23 +190,25 @@ Nginx is a great piece of Russian engineering. You'll need some configuration th
expires max;
break;
}
}
```
All dandy! One more then:
``` nginx
# /etc/nginx/nginx.conf
user deployer staff;
# Change this depending on your hardware
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay off;
@ -217,11 +240,14 @@ All dandy! One more then:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
```
Okay, that's Nginx for you. You should start it now, although you'll get a 500 or proxy error for now:
``` shell
/etc/init.d/nginx start
```
## Unicorn
@ -231,37 +257,40 @@ You'll be doing `cap deploy` 99% of the time. This command needs to be _fast_. T
Let's get started by adding some gems to your app. When done run `bundle install`.
``` ruby
# Gemfile
gem "unicorn"
group :development do
gem "capistrano"
end
```
The next step is adding a configuration file for Unicorn in `config/unicorn.rb`:
``` ruby
# config/unicorn.rb
# Set environment to development unless something else is specified
env = ENV["RAILS_ENV"] || "development"
# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete
# documentation.
worker_processes 4
# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/tmp/my_site.socket", :backlog => 64
# Preload our app for more speed
preload_app true
# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30
pid "/tmp/unicorn.my_site.pid"
# Production specific settings
if env == "production"
# Help ensure your application will always spawn in the symlinked
# "current" directory that Capistrano sets up.
working_directory "/home/deployer/apps/my_site/current"
@ -272,9 +301,9 @@ The next step is adding a configuration file for Unicorn in `config/unicorn.rb`:
stderr_path "#{shared_path}/log/unicorn.stderr.log"
stdout_path "#{shared_path}/log/unicorn.stdout.log"
end
before_fork do |server, worker|
# the following is highly recomended for Rails + "preload_app true"
# as there's no need for the master process to hold a connection
if defined?(ActiveRecord::Base)
@ -291,9 +320,9 @@ The next step is adding a configuration file for Unicorn in `config/unicorn.rb`:
# someone else did our job for us
end
end
end
after_fork do |server, worker|
# the following is *required* for Rails + "preload_app true",
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
@ -304,7 +333,8 @@ The next step is adding a configuration file for Unicorn in `config/unicorn.rb`:
# and Redis. TokyoCabinet file handles are safe to reuse
# between any number of forked children (assuming your kernel
# correctly implements pread()/pwrite() system calls)
end
end
```
Okay, as you can see there's some nice stuff in there to accomplish zero-downtime restarts. Let me tell you a bit more about that.
@ -318,14 +348,18 @@ All the while, you have restarted your app, without taking it down: zero downtim
Now for Capistrano, add the following to your `Gemfile`.
``` ruby
# Gemfile
group :development do
gem "capistrano"
end
```
And generate the necessary Capistrano files.
``` shell
capify .
```
Open up `config/deploy.rb` and replace it with the following.
@ -333,45 +367,46 @@ This deploy script does all the usual, but the special part is where you reset t
Also note that `update_code` is overwritten to do a simple `git fetch` and `git reset` - this is very fast indeed!
``` ruby
# config/deploy.rb
require "bundler/capistrano"
set :scm, :git
set :repository, "git@codeplane.com:you/my_site.git"
set :branch, "origin/master"
set :migrate_target, :current
set :ssh_options, { :forward_agent => true }
set :rails_env, "production"
set :deploy_to, "/home/deployer/apps/my_site"
set :normalize_asset_timestamps, false
set :user, "deployer"
set :group, "staff"
set :use_sudo, false
role :web, "123.456.789.012"
role :app, "123.456.789.012"
role :db, "123.456.789.012", :primary => true
set(:latest_release) { fetch(:current_path) }
set(:release_path) { fetch(:current_path) }
set(:current_release) { fetch(:current_path) }
set(:current_revision) { capture("cd #{current_path}; git rev-parse --short HEAD").strip }
set(:latest_revision) { capture("cd #{current_path}; git rev-parse --short HEAD").strip }
set(:previous_revision) { capture("cd #{current_path}; git rev-parse --short HEAD@{1}").strip }
default_environment["RAILS_ENV"] = 'production'
# Use our ruby-1.9.2-p290@my_site gemset
default_environment["PATH"] = "--"
default_environment["GEM_HOME"] = "--"
default_environment["GEM_PATH"] = "--"
default_environment["RUBY_VERSION"] = "ruby-1.9.2-p290"
default_run_options[:shell] = 'bash'
namespace :deploy do
desc "Deploy your application"
task :default do
update
@ -467,11 +502,12 @@ Also not that the `update_code` is overwritten to do a simple `git fetch` and `g
rollback.cleanup
end
end
def run_rake(cmd)
run "cd #{current_path}; #{rake} #{cmd}"
end
```
Now there is one little thing you'll need to do. I like my apps, even on the server, to use their own gemset. This keeps everything clean and isolated. Log in to the `deployer` account and create your gemset. Next, run `rvm info` and fill in the `PATH`, `GEM_HOME` and `GEM_PATH` variables accordingly.
@ -481,20 +517,24 @@ Now there is one little thing you'll need to do. I like to run my apps, even on
I always like to keep the database configuration out of git. I'll place it in the shared directory.
``` yaml
# /home/deployer/apps/my_site/shared/database.yml
production:
adapter: postgresql
encoding: unicode
database: my_site_production
pool: 5
username: my_site
password: password
```
## First setup
Now set up your deployment like this:
``` shell
cap deploy:setup
```
This will clone your repo and link your `database.yml` file. Optionally, you may want to run migrations or upload an SQL dump to get started quickly with your app.
View File
@ -16,39 +16,47 @@ There's a quite an easy solution for this, offered to us by Capistrano. Unfortun
The first thing you should do is update your Nginx configuration in `/etc/nginx/sites-available/default` and add the following snippet, just before the `location /`:
``` nginx
if (-f $document_root/system/maintenance.html) {
return 503;
}
error_page 503 @maintenance;
location @maintenance {
rewrite ^(.*)$ /system/maintenance.html last;
break;
}
```
As [iGEL][3] and [Anlek Consulting][4] pointed out in the comments, it's good practice to send a _503 Service Temporarily Unavailable_ HTTP code back to the client. A normal user won't notice the difference, but spiders do. Sending the 503 code will tell search engines, like Google, that your site is not available and that they should not re-index your maintenance message as the new site content. Instead, they'll come back later when your site returns an _HTTP 200_ code again.
You can try this fairly easily by putting your site in maintenance mode (per the instructions that follow) and doing a HEAD request:
``` shell
$ curl -I http://ariejan.net
HTTP/1.1 503 Service Temporarily Unavailable
Server: nginx/0.8.54
Date: Tue, 20 Sep 2011 18:22:35 GMT
Content-Type: text/html
Content-Length: 1276
Connection: keep-alive
```
## Use Capistrano
Now, you can run (from your own machine):
``` shell
cap deploy:web:disable
```
This task will upload a generic placeholder and place it in `public/system/maintenance.html`. If nginx sees this file exists, it will render it and abort further processing. So, as long as the maintenance.html file is present, your app is not accessible.
When your migrations are done, you can run:
``` shell
cap deploy:web:enable
```
This will remove the `maintenance.html` file, making your app accessible again.
@ -56,7 +64,8 @@ This will have remove the `maintenance.html` file, and thus making your app acce
What you probably want is a separate task to deploy new code *and* run migrations. Here's the task I used:
``` ruby
namespace :deploy do
desc "Deploy and migrate the database - this will cause downtime during migrations"
task :migrations do
transaction do
@ -67,13 +76,15 @@ What you probably want is a separate task to deploy new code *and* run migration
end
restart
end
end
```
## Customize your maintenance page
Of course you want to customize the maintenance page, because, frankly, it's kind of ugly by default. This does require you to write your own `deploy:web:disable` task:
``` ruby
namespace :deploy do
namespace :web do
task :disable, :roles => :web, :except => { :no_release => true } do
require 'erb'
@ -88,7 +99,8 @@ Of course you want to customize the maintenance page, because, frankly, it's kin
put result, "#{shared_path}/system/maintenance.html", :mode => 0644
end
end
end
```
Now you can create a maintenance page for your app at `app/views/layouts/maintenance.html.erb`. See [the original maintenance template][2] for inspiration.
View File
@ -15,7 +15,8 @@ The key is that you don't want ruby to decide when to do garbage collection for
To set this up for RSpec create a file `spec/support/deferred_garbage_collection.rb`:
``` ruby
class DeferredGarbageCollection
DEFERRED_GC_THRESHOLD = (ENV['DEFER_GC'] || 15.0).to_f
@ -33,21 +34,26 @@ To set this up for RSpec create a file `spec/support/deferred_garbage_collection
@@last_gc_run = Time.now
end
end
end
```
Next, add the following to your `spec/spec_helper.rb` so RSpec will use our deferred garbage collection.
``` ruby
config.before(:all) do
DeferredGarbageCollection.start
end
config.after(:all) do
DeferredGarbageCollection.reconsider
end
```
Now, when you run `rake spec` you should see a nice speed increase. Try altering the threshold value a bit to see what gives the best performance:
``` shell
DEFER_GC=20 rake spec
```
Enjoy!
View File
@ -8,12 +8,13 @@ slug = "properly-testing-rails-3-scopes"
Testing scopes has always felt a bit weird to me. Normally I'd do something like this:
``` ruby
class Post < ActiveRecord::Base
scope :published, where(:published => true)
scope :latest, order("created_at DESC")
end
describe Post do
context 'scopes' do
before(:all) do
@first = FactoryGirl.create(:post, :created_at => 1.day.ago, :published => true)
@ -28,7 +29,8 @@ Testing scopes has always felt a bit weird to me. Normally I'd do something like
Post.latest.should == [@first, @last]
end
end
end
```
This test is okay. It tests if the named scope does what it needs to do. And therein lies the problem. Scopes are part of ActiveRecord and are already extensively tested there. All we need to do is check if we _configure_ the scope correctly.
@ -36,7 +38,8 @@ What we need is a way to inspect what `where` and `order` rules are set for a pa
Here's another test that utilizes some Rails 3 methods you may not have heard of before.
``` ruby
describe Post do
context 'scopes' do
it "should only return published posts" do
Post.published.where_values_hash.should == {:published => true}
@ -46,7 +49,8 @@ Here's another test that utilizes some Rails 3 methods you may not have heard of
Post.latest.order_values.should == ["created_at DESC"]
end
end
end
```
The `where_values_hash` and `order_values` methods allow you to inspect what a scope is doing. By writing your test this way you achieve two important goals:
View File
@ -12,11 +12,12 @@ Also, your configuration may be correct, but what happens when you upgrade to a
So, this is the proper way of testing your scopes:
``` ruby
class Post < ActiveRecord::Base
scope :latest, order("created_at DESC")
end
describe Post do
context 'scopes' do
before(:all) do
@first = FactoryGirl.create(:post, :created_at => 1.day.ago)
@ -27,7 +28,8 @@ So, this is the proper way of testing your scopes:
Post.latest.should == [@first, @last]
end
end
end
```
## A note on speed
View File
@ -29,8 +29,9 @@ So, let's take a closer look at a real-world example.
This is a `Post` model; it `belongs_to` an author and can give you a summary of an article by returning the text above a '~' marker.
``` ruby
# app/models/post.rb
class Post < ActiveRecord::Base
DELIMITER = "~\n"
belongs_to :author
@ -42,26 +43,31 @@ This is a `Post` model, it `belongs_to` and author and it can give you a summary
body
end
end
end
```
This is a spec that's defined for the `Post` model:
``` ruby
# spec/models/post_spec.rb
require 'spec/helper'
describe Post do
context "summary" do
it "should return the summary" do
post = FactoryGirl.build(:post, :body => "Summary\n~\nNo summary.")
post.summary.should == "Summary"
end
end
end
```
Running this spec would take at least 10 seconds. Running `time rspec spec/models/post_spec.rb` would output something like this:
``` text
Finished in 0.05523 seconds
real 0m10.387s
```
This means that running the actual spec took 0.05s, but running the entire command took 10 seconds. What is slowing us down?
@ -91,8 +97,9 @@ There are a lot of scenarios where the answer to that question is _NO_.
This may seem a bit weird at first, but let's take another look at the `Post` model:
``` ruby
# app/models/post.rb
class Post < ActiveRecord::Base
DELIMITER = "~\n"
belongs_to :author
@ -104,7 +111,8 @@ This may seem a bit weird at first, but let's take another look at the `Post` mo
body
end
end
end
```
The `summary` method does not interact with the `Post` model at all, except that it accesses the `body` attribute. But the `body` attribute is just a `String`.
@ -112,8 +120,9 @@ So, if we wanted to test the `summary` method and remove all Rails dependencies,
Consider this:
``` ruby
# app/logic/myapp/summary.rb
module MyApp
class Summary
def self.for(text, delimiter)
summary = if text =~ /#{delimiter}/i
@ -123,7 +132,8 @@ Consider this:
end
end
end
end
```
I think you can quickly see that this method does exactly the same as `Post#summary`. But it does not have any dependency on ActiveRecord.
@ -144,8 +154,9 @@ That looks good! Note that this spec _does not_ include `spec_helper`. `spec_hel
The `Post` model should also be updated, of course, to utilise this new class.
``` ruby
# app/models/post.rb
class Post < ActiveRecord::Base
DELIMITER = "~\n"
belongs_to :author
@ -153,14 +164,16 @@ The `Post` model should also be updated, of course to utilise this new class.
def summary
MyApp::Summary.for(body, DELIMITER)
end
end
```
The spec for `Post` should also be changed. Since we already have tested that `MyApp::Summary#for` returns the right summary for a given text and delimiter, all we have left to do is make sure that `Post#summary` calls it correctly.
``` ruby
# spec/models/post_spec.rb
require 'spec/helper'
describe Post do
context "summary" do
it "should return the summary" do
post = FactoryGirl.build(:post)
@ -168,7 +181,8 @@ The spec for `Post` should also be changed. Since we already have tested that `M
post.summary
end
end
end
```
## Running fast specs
@ -176,12 +190,16 @@ The files above are located in `fast_spec` and `app/logic`. I do this because I
Running fast specs works like this:
``` shell
rspec -I app/logic fast_spec
```
Try it out with `time rspec -I app/logic fast_spec`:
``` text
Finished in 0.03223 seconds
real 0m0.421s
```
That's your same spec, down from about 10 seconds to 0.5 second.
View File
@ -10,10 +10,12 @@ For instance, a 404 "Not found" error can (and should) be handled correctly in y
Let me give you an example of how to handle an `ActiveRecord::RecordNotFound` exception. Let's assume you have an application that could show a user profile:
``` ruby
# GET /p/:name
def show
@profile = Profile.find(params[:name])
end
```
Now, it may happen that the `:name` parameter contains a value that cannot be found in our database, most likely because someone made a typo in the URL.
@ -25,12 +27,14 @@ Now, instead of showing the user the (by default ugly) 404 page from `public/404
Here's one solution:
``` ruby
# GET /p/:name
def show
@profile = Profile.find(params[:name])
rescue
render :template => 'application/profile_not_found', :status => :not_found
end
```
You can now create `app/views/application/profile_not_found.html.haml` and give a nice custom error message to your user.
@ -42,21 +46,25 @@ The above example only works for the specific profile `show` action. It's also p
Your `show` action still looks like this:
``` ruby
# GET /p/:name
def show
@profile = Profile.find(params[:name])
end
```
Then, in your `app/controllers/application_controller.rb` add this:
``` ruby
class ApplicationController < ActionController::Base
rescue_from ActiveRecord::RecordNotFound, :with => :rescue_not_found
protected
def rescue_not_found
render :template => 'application/not_found', :status => :not_found
end
end
```
Whenever an `ActiveRecord::RecordNotFound` exception is thrown (and not handled by the action itself), it will be handled by your `ApplicationController`.
@ -64,24 +72,29 @@ Whenever an `ActiveRecord::RecordNotFound` exception is thrown (and not handled
It's possible to throw your own custom exceptions and handle them in different ways. Like this:
``` ruby
# Define your own error
class MyApp::ProfileNotFoundError < StandardError
end
# GET /p/:name
def show
@profile = Profile.find_by_name(params[:name])
raise MyApp::ProfileNotFoundError if @profile.nil?
end
```
And add this to your `ApplicationController`:
``` ruby
rescue_from MyApp::ProfileNotFoundError, :with => :profile_not_found
```
Optionally, if you don't want to write that custom `profile_not_found` method, you may also supply a block:
``` ruby
rescue_from MyApp::ProfileNotFoundError do |exception|
render :text => "Profile not found.", :status => 404
end
```
View File
@ -4,6 +4,7 @@ title = "Automatically switch between SSL and non-SSL with Nginx+Unicorn+Rails"
tags = ["Rails", "rails3", "unicorn", "nginx", "ssl"]
slug = "automatically-switch-between-ssl-and-non-ssl-with-nginx-unicorn-rails"
+++
_Scroll down for setup instructions. Or, read this bit about SSL in the real world first._
SSL or Secure Socket Layer is a nice way to secure sensitive parts of your Rails application. It achieves two goals.
@ -25,9 +26,11 @@ In the example above Rabobank uses an [EV SSL Certificate][evssl]. EV stands for
The cost for such an EV SSL Certificate is $200 - $1000 per year. You probably don't need it for your site.
## SSL for you and me
When you are looking to secure the back-end of your site (where you log in, etc.), you only need the encryption part of SSL. There are two routes you can take:
### Self signed SSL
You are able to create a working SSL certificate yourself. This will give you encryption, but no identity validation. When you use a self-signed SSL certificate, all browsers will warn users about this.
* Encryption
@ -38,6 +41,7 @@ You are able to create a working SSL certificate yourself. This will give you en
For me, that's a reason not to use self-signed SSL for anything other than development and testing purposes.
### Standard SSL
Most SSL authorities provide you with a _Standard SSL_ product. These certificates only check if you own the domain name. They also offer encryption and work (without warnings) in your browser. You can get one of these for as little as $9 a year.
* Encryption
@ -46,6 +50,7 @@ Most SSL authorities provide you with a _Standard SSL_ product. These certificat
* Cheap ($10 - $20)
## Setting up SSL for your Rails application
Setting up SSL is a web server thing. It does not involve your Rails application directly (but more on that in a moment).
If you followed my [nginx+unicorn][nu] guide, you'll have Nginx and Unicorn setup already.
@ -56,13 +61,15 @@ If you followed my [nginx+unicorn][nu] guide, you'll have Nginx and Unicorn setu
First you need to create a private key. Do this on your server. The following command will generate a 2048-bit key. When it asks you to set a passphrase, do so.
``` shell
$ openssl genrsa -des3 -out example.com.key 2048
Generating RSA private key, 2048 bit long modulus
..................................+++
.................................................................................+++
e is 65537 (0x10001)
Enter pass phrase for example.com.key:
Verifying - Enter pass phrase for example.com.key:
```
### Creating a key and Certificate Sign Request (csr)
@ -78,27 +85,29 @@ Okay, you now have your key. Next step, create the Certificate Sign Request. The
Here's the full version:
``` shell
$ openssl req -new -key example.com.key -out example.com.csr
Enter pass phrase for example.com.key: <<passphrase>>
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Noord-Brabant
Locality Name (eg, city) []:Eindhoven
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Ariejan.net
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:example.com
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
```
Great, you now have two files:
@ -106,54 +115,69 @@ Great, you now have two files:
* `example.com.csr` - sign request
### Get your certificate (crt)
Now, go to your selected SSL authority, order your Standard SSL Certificate and upload the contents of `example.com.csr` when requested.
After doing some validations, which may require you to click some links in emails, your certificate should be ready for download. Save this file as `example.com.crt`.
### Intermediate certificates
Some SSL authorities work with so-called intermediate certificates. This requires you to include an intermediate certificate with your own certificate. If your SSL provider requires this, save the intermediate certificate as `intermediate.crt`.
For usage with nginx, you must place both your own and the intermediate certificates in a single file. This is easy:
``` shell
cat example.com.crt intermediate.crt > sslchain.crt
```
### Remove the passphrase from your key
Now, this is not recommended, but many people do this. The reason is that when your private key has a passphrase, your server requires that passphrase every time you (re)start it. This could mean that your server cannot boot up without manual interaction on your part.
``` shell
cp example.com.key example.com.key.orig
openssl rsa -in example.com.key.orig -out example.com.key
```
You now have `example.com.key.orig`, which is your original private key _with_ the passphrase. And you have `example.com.key`, which is the same private key, but _without_ the passphrase.
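After stripping the passphrase it's reassuring to confirm that the key still matches your certificate. A sketch, using a throwaway self-signed pair — for a real setup you'd compare `example.com.key` against `example.com.crt`:

``` shell
# Throwaway key/certificate pair for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout check.key \
  -out check.crt -days 1 -subj "/CN=example.com"

# A key and certificate belong together when their RSA moduli are identical.
key_mod=$(openssl rsa -in check.key -noout -modulus)
crt_mod=$(openssl x509 -in check.crt -noout -modulus)

[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```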
### Setup nginx for SSL
Finally, you can set up Nginx for SSL. Normally I add both an SSL and a non-SSL configuration. Setup is very easy.
First of all, become root and copy your keys and certificate to `/etc/ssl`.
``` shell
cp example.com.key example.com.crt /etc/ssl
```
or if you use an intermediate certificate:
``` shell
cp example.com.key sslchain.crt /etc/ssl
```
Next, you take your non-SSL `server` configuration and duplicate it. Then you add the following lines.
``` nginx
listen 443; # Instead of listen 80

ssl on;
ssl_certificate /etc/ssl/sslchain.crt; # or /etc/ssl/example.com.crt
ssl_certificate_key /etc/ssl/example.com.key;

location / {
  # Add this to the location directive.
  proxy_set_header X-Forwarded-Proto https;
}
```
Most of this is pretty straightforward. The `proxy_set_header` directive is needed to let your Rails application know whether the request came in over SSL or not. Normally this shouldn't matter, but you'll need it for the next part of this guide.
Save, restart nginx and your SSL connection should be available.
## Automatically switch between SSL and non-SSL with Rails
To take this site as an example, I don't want to run the front-end through SSL. First of all, you can't submit any data to my server. Second, I include several external resources (Disqus, Twitter, AdSense) that will give you warnings about using "insecure" content on an encrypted page.
What I _do_ want is to encrypt traffic to the backend, where I log in and write posts like these.
I need to make sure that your browser knows exactly when and when not to switch.
First, update your Gemfile:
``` ruby
# Gemfile
gem 'rack-ssl-enforcer'
```
After you've run `bundle install`, update `config/application.rb` (or `config/environments/production.rb` if you only want to configure this for your production environment).
``` ruby
# config/application.rb or config/environments/production.rb
config.middleware.use Rack::SslEnforcer,
:redirect_to => 'https://example.com', # For when behind a proxy, like nginx
:only => [/^\/admin\//, /^\/authors\//], # Force SSL on everything behind /admin and /authors
:strict => true # Force no-SSL for everything else
```
With this configuration, you achieve the following:
_Note: if you find yourself getting into an infinite redirect loop, make sure your proxy sets the `X-Forwarded-Proto` header as described above (see also the [readme][readme])._
[readme]: https://github.com/tobmatth/rack-ssl-enforcer#readme
## Wrapping up
You now know how to set up an SSL certificate with Nginx and how to make your Rails application switch automatically between SSL and non-SSL whenever you want.
This is just a short snippet on how to install Node.js (any version) and NPM (Node Package Manager).
## Step 1 - Update your system
``` shell
sudo apt-get update
sudo apt-get install git-core curl build-essential openssl libssl-dev
```
## Step 2 - Install Node.js
First, clone the Node.js repository:
``` shell
git clone https://github.com/joyent/node.git
cd node
```
Now, if you require a specific version of Node:
``` shell
git tag # Gives you a list of released versions
git checkout v0.4.12
```
Then compile and install Node like this:
``` shell
./configure
make
sudo make install
```
Then, check if node was installed correctly:
``` shell
node -v
```
## Step 3 - Install NPM
Simply run the NPM install script:
``` shell
curl -L https://npmjs.org/install.sh | sudo sh
```
And then check it works:
``` shell
npm -v
```
That's all.
Doing a little digging around I found a working solution.
With the above warning in the back of your mind, open your terminal (slowly) and issue the following command:
``` shell
sudo rm /private/var/log/asl/*.asl
```
Now quit and restart Terminal or iTerm2 and your prompt should present itself quickly again.
First create a fork of the original project. You can do this easily by clicking the "Fork" button on the project's GitHub page.
Then, check out your fork:
``` shell
git clone git@github.com:ariejan/repo-name.git
```
## Step 2 - Contribute
With your contribution done, don't merge it back into `master`.
First push your branch to Github, so you can share it with others.
``` shell
git push origin fix_for_this_or_that
```
You should now see this branch in your Github project page. You'll also notice there's a "Pull Request" button at the top. Click it if you want the project maintainer to pull your `fix_for_this_or_that` branch into the main project.
Over time the `master` of your fork will start lagging behind.
Before you can pull in changes you must add a git remote for it. You can use the _Git Read-only_ URL for this.
``` shell
git remote add upstream https://github.com/some_one/some-repo.git
```
Now, to pull in the changes:
``` shell
git checkout master
git fetch upstream
git merge upstream/master
```
## Step 4.5 - Keeping your feature branch up-to-date
If you pulled in changes from `upstream` and did not yet share your feature branch, you can rebase it onto the updated `master`.
<p class="important">Do <strong>not</strong> rebase if you already pushed your branch to Github. <a href="http://progit.org/book/ch3-6.html#the_perils_of_rebasing">Read why</a>.</p>
``` shell
git checkout fix_for_this_or_that
git rebase master
```
If any conflicts arise, fix them and continue your rebase:
``` shell
git add conflicting_file
git rebase --continue
```
You may also abort the rebase:
``` shell
git rebase --abort
```
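The whole flow is easy to try out in a throwaway repository first (a sketch — git assumed installed; branch and file names are made up for the example):

``` shell
# Set up a throwaway repository with one base commit on master.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name Demo

echo base > file && git add file && git commit -qm "base"
git branch -M master

# A feature branch with one commit...
git checkout -qb fix_for_this_or_that
echo fix > fix.txt && git add fix.txt && git commit -qm "the fix"

# ...while master moves ahead.
git checkout -q master
echo more > more.txt && git add more.txt && git commit -qm "upstream work"

# Replay the feature branch on top of the updated master.
git checkout -q fix_for_this_or_that
git rebase -q master
git log --oneline
```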
## Big picture
Here's how I set everything up.
First, clone the official Gitlab repository and name it `upstream`.
``` shell
git clone --origin upstream https://github.com/gitlabhq/gitlabhq.git my_git_server
```
Next I made all the changes I wanted: I updated `config/gitosis.yml`, added `unicorn` to the `Gemfile` and set up Capistrano.
I then pushed this to my own git server. This is the same server Capistrano will use to pull changes from.
``` shell
git remote add origin git@git.ariejan.net:my_git_server.git
git push origin master
cap deploy
```
That's all there is to deploying Gitlab from my own repository.
Now, the Gitlab crew is pushing out new features at an amazing rate. So, how do I get those new features (and the occasional bug fix) into my copy of Gitlab for deploying?
``` shell
git fetch upstream
```
Remember how we named the official Gitlab repository `upstream` earlier? With this `fetch` we get all changes from their repository (but we don't apply them to anything yet).
Then, merge the upstream changes with your own branch.
``` shell
git merge upstream/master
```
There may be merge conflicts; just resolve them and commit your merge. Then deploy again:
``` shell
git push origin master
cap deploy
```
## Why do this?
Today I upgraded a production PostgreSQL 8.4 database to version 9.1.
~
The first step is to make a full dump of your data. I personally like to store that dump somewhere safe before upgrading. As root:
``` shell
su - postgres
pg_dumpall > dump.sql
exit
cp ~postgres/dump.sql /root/
```
Now you can safely remove the postgresql-8.4 and install postgresql-9.1:
``` shell
aptitude purge postgresql-8.4
aptitude install postgresql-9.1
```
Next check the postgresql configuration in `/etc/postgresql/9.1/main`. If you make any changes, make sure to restart postgres with `/etc/init.d/postgresql restart`.
PostgreSQL 9.1 is now up and running; let's import our data back into it.
``` shell
su - postgres
psql < dump.sql
```
That's all. You're now fully upgraded to PostgreSQL 9.1.
While working on a [Gitlab][1] installation I noticed that all repository file permissions were incorrect.
Using the following commands (in plain Bash) allows you to recursively set permissions for files and directories. So, to fix the read permissions on your Gitlab repositories you can use this:
``` shell
# Go to your git repositories directory (as git or the gitlab user)
cd /home/git/repositories

# Fix ownership
sudo chown -R git:git *

# Fix directory permissions
sudo find -type d -exec chmod 770 {} \;

# Fix file permissions
sudo find -type f -exec chmod 660 {} \;
```
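If you want to see what those `find` commands do before running them on real repositories, here's a throwaway sketch (directory and file names are made up, and `sudo` isn't needed on files you own):

``` shell
# Build a small fake repository tree in a temporary directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/repo.git/objects"
touch "$tmp/repo.git/config" "$tmp/repo.git/objects/pack-1234.idx"

cd "$tmp"
# Directories become rwxrwx--- (770), files become rw-rw---- (660).
find . -type d -exec chmod 770 {} \;
find . -type f -exec chmod 660 {} \;

ls -lR repo.git
```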
After this, your Gitlab should have no trouble accessing your code (e.g. in the tree browser).
Decorators allow you to move view-related functionality for your models into separate classes.
Anyway, if you use Devise you're provided with a `current_user` helper. However, this helper returns an instance of `User` - without your decorators. To enable decorators for your `current_user` by default, simply add this to `app/controllers/application_controller.rb`:
``` ruby
def current_user
  UserDecorator.decorate(super) unless super.nil?
end
```
Now, anywhere in your views where you call `current_user` you'll get a decorated version instead.
Here's a Ruby snippet that might come in handy one day.
When the regex matches (input should end with " today"), you can directly grab the matched value using the special `$1` variable.
``` ruby
case input
when /(.*)\stoday$/i then
  puts "Today: #{$1}"
end
```
I think you can see how you can bend this to your own needs.
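As an aside, the same trick exists in plain Bash, where `[[ ... =~ ... ]]` fills the `BASH_REMATCH` array — a sketch, not from the original snippet:

``` shell
# Bash analog of the Ruby case/when: match input ending in " today"
# and grab the captured group from BASH_REMATCH.
input="Buy milk today"

if [[ $input =~ (.*)\ today$ ]]; then
  echo "Today: ${BASH_REMATCH[1]}"   # prints: Today: Buy milk
fi
```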
@ -42,8 +42,10 @@ If your content is good, readers will find it. Link to it. Tweet about it. And c
No, that's not what I'm saying. META tags _do_ provide information to crawlers, but you should not assume they use that information in any way. For example, Ariejan.net has these META tags:
``` html
<meta content='About Software Engineering, Ruby on Rails, Java, Git and the Cloud - by Ariejan' name='description'>
<meta content='ruby, rubyonrails, rails, git, svn, mysql, mac, ios, apple, web, web2.0, development, dev' name='keywords'>
```
In Google, this results in a view like this:
## Search and Replace in multiple files with Vim
I recently learned a nice VimTrick™ when pairing with [Arjan](http://arjanvandergaag.nl). We upgraded an app to Rails 3.2.6 and got the following deprecation message:
``` text
DEPRECATION WARNING: :confirm option is deprecated and will be removed from Rails 4.0.
Use ':data => { :confirm => 'Text' }' instead.
```
Well, nothing difficult about that, but we have quite a few `:confirm` in this app.
Firstly we checked where we used them (note we use ruby 1.9 hash syntax everywhere):
``` shell
ack -l "confirm:" app
```
Now you have a listing of all the files that contain the `:confirm` hash key. You can leave out the `-l` option to get some context for each find.
Now, we need to open Vim with those files:
``` shell
ack -l "confirm:" app | xargs -o vim
```
Vim will open the first of these files. Here's a snippet of what you may find:
``` haml
= link_to "Delete", something_path, confirm: "Are you sure?"
```
Now, search and replace is easy:
``` vim
:%s/confirm: ".*"/data: { & }/g
```
This will surround the current confirm with the `data` hash. Just the way Rails likes it. The `&` character will be replaced with whatever text matched the search pattern.
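sed uses the same `&` convention in its replacement side, so you can try the substitution outside Vim first (a quick sketch):

``` shell
# & expands to the whole matched text, exactly as in Vim's :s command.
echo '= link_to "Delete", something_path, confirm: "Are you sure?"' \
  | sed 's/confirm: ".*"/data: { & }/'
# prints: = link_to "Delete", something_path, data: { confirm: "Are you sure?" }
```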
You could repeat this for every file manually. But, you're using Vim.
``` vim
:argdo %s/confirm: ".*"/data: { & }/g | update
```
This will perform the search and replace on each of the supplied arguments (in this case the files selected with `ack`) and update (e.g. save) those files.
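If you'd rather script it than do it interactively, the same bulk edit can be sketched with grep and sed (assuming GNU sed's `-i` flag; the file names here are made up):

``` shell
# Throwaway files standing in for app/views.
tmp=$(mktemp -d)
echo '= link_to "Delete", a_path, confirm: "Sure?"' > "$tmp/one.haml"
echo '= link_to "Remove", b_path, confirm: "Sure?"' > "$tmp/two.haml"

# Select the files that mention confirm: and rewrite them in place,
# just like the :argdo ... | update combination did in Vim.
grep -rl 'confirm:' "$tmp" | xargs sed -i 's/confirm: ".*"/data: { & }/'

grep -h 'data:' "$tmp"/*.haml
```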
The normal solution would be to set up a VPN with one of your servers elsewhere.
As a last resort you might consider setting up an SSH tunnel for a specific service like this:
``` shell
ssh -N user@server -L 3306:127.0.0.1:3306
```
But, this only works for a single port, and thus a single application. It may help, but it can become tedious pretty quickly. You also have to rewrite any configuration you had for connecting to a remote host to use your localhost, most likely on some strange port.
Besides giving you access to all the services you need, you also encrypt (e.g. hide) your traffic.
Installing sshuttle on your Mac is a breeze
``` shell
brew install sshuttle
```
Then you can set up an IP-over-SSH connection to any remote server you have SSH access to. You'll need your local admin password in order to set up routing properly.
``` shell
sshuttle -r username@server 0/0 -vv
```
This routes all traffic over the tunnel towards `server`. Use one of those online IP checkers to see that you're actually using your `server`'s IP address.
The one thing this does _not_ do is DNS. DNS is still done using your locally configured DNS servers.
Not to worry, you can go 'full stealth' with the `--dns` option, which also routes DNS over to the remote server:
``` shell
sshuttle --dns -r username@server 0/0 -vv
```
To stop using your IP-over-SSH connection, simply press CTRL-C twice and sshuttle should restore your normal networking connections.
If sshuttle does not restore the connection properly, you can do so manually:
``` shell
sudo ipfw -q -f flush
```
I've already created a few aliases in my `~/.zshrc`:
``` shell
alias tunnel='sshuttle -r ariejan@server 0/0 -vv'
alias tunnel_dns='sshuttle --dns -r ariejan@server 0/0 -vv'
alias reset_tunnel='sudo ipfw -q -f flush'
```
So, no need to set up complicated VPN contraptions, just use plain old SSH and off you go.
Bonus: you can also connect to a non-standard SSH port, in case port 22 has been blocked in the firewall as well:
``` shell
sshuttle --dns -r username@server:port 0/0 -vv
```
[sshuttle]: https://github.com/apenwarr/sshuttle/
The situation is pretty straightforward. You have been making commits for that new feature directly on `master`.
Let's assume you have this:
``` text
A - B - (C) - D - E - F
```
`C` was the last commit you pulled from `origin` and D, E and F are commits you just made but should have been in their own branch. This is what you wanted:
``` text
A - B - (C)
         \ D - E - F
```
Step 1: Assuming you're at `F` on `master`, create a new branch with those commits:
``` shell
git branch my_feature_branch
```
Then, still on `master`, reset back to commit `C`. This is 3 commits back.
``` shell
git reset --hard HEAD~3
```
Okay, your `master` is now back at `C`, which you last pulled, and `my_feature_branch` includes D, E and F. Just checkout `my_feature_branch` and continue work as usual. I'm sure no one saw what you just did.
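The whole trick can be replayed in a throwaway repository (a sketch — git assumed installed; the commit messages stand in for A through F):

``` shell
# Set up a throwaway repository with six commits, A through F.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name Demo

for c in A B C D E F; do
  echo "$c" > file && git add file && git commit -qm "$c"
done
git branch -M master

# Keep D, E and F on a new branch...
git branch my_feature_branch
# ...then move master back three commits, to C.
git reset -q --hard HEAD~3

git log --oneline --all
```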
For those a bit tech-savvy, here's the electrical schematic for the Blink tutorial.
With the following, relatively simple code, you can make that LED blink.
``` arduino
int led = 13; // This refers to pin 13.

// setup() is run once during start-up
void setup() {
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level)
  delay(1000);             // wait for a second
  digitalWrite(led, LOW);  // turn the LED off by making the voltage LOW
  delay(1000);             // wait for a second
}
```
After making this first step you'll get to know more components and features of Arduino. You'll start using sensors (buttons, light sensors, sound sensors, GPS, magnetometers, gas sensors) and actuators (motors, steppers, relays, LED matrices). The possibilities are endless.