I have investigated this in detail, and the reason for it is really ridiculous: 80% of this time is spent executing "require" statements, and this is due to really bad design in RubyGems and/or Bundler. They both do their job by augmenting $LOAD_PATH to include all the directories containing gem contents. If you then look into how Ruby deals with $LOAD_PATH, it turns out that each time you call "require" it goes through all of those directories and their subdirectories in search of a file matching what you required. I used strace to see how this impacts running "rails console" on an application with around 30 gems: it ended up making 35000 open calls that returned ENOENT. It is beyond me how this has remained unfixed for so long.
If RubyGems instead maintained a cache built from all the gem directories, it could map requires to files without touching the filesystem and would be many times faster. Even building the cache at runtime and then using it for lookups would be many times faster. Unfortunately, it is hard to devise a strategy for building this cache that maps requires to files in exactly the same way as RubyGems does, because RubyGems has a pretty weird strategy for deciding which files take priority when you do an ambiguous require. Because of this, I haven't yet succeeded in implementing a fix myself, and I'm not 100% sure it is possible. I also tried to contact one of the Bundler guys, but so far I haven't had any reply about this.
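To make the idea concrete, here is a rough sketch of what such a cache could look like (my own illustration, not anything RubyGems or Bundler actually implements); it scans $LOAD_PATH once and deliberately ignores the ambiguous-require precedence rules that make the real problem hard:
<pre>
# Sketch only: build a one-time map from feature name ("foo/bar") to its
# absolute path, then consult it before falling back to the normal lookup.
$require_cache = {}

$LOAD_PATH.each do |dir|
  Dir.glob(File.join(dir, "**", "*.rb")).each do |path|
    feature = path.sub("#{dir}/", "").sub(/\.rb\z/, "")
    # First match wins here; RubyGems' real precedence is more subtle.
    $require_cache[feature] ||= path
  end
end

module Kernel
  alias_method :require_without_cache, :require

  def require(name)
    if (path = $require_cache[name])
      require_without_cache(path)   # cache hit: load the resolved file directly
    else
      require_without_cache(name)   # cache miss: fall back to the slow scan
    end
  end
end
</pre>
The hard part, as I said, is getting the precedence behind that ||= exactly right when two gems ship files with the same name.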
Ok, so first, this is not exactly an empty Rails project. The Gemfile.lock has 221 lines; that number of dependencies is not normal. To compare: the application code for http://www.getharvest.com/, a 5+ year old Rails project with customers and all, has 245 gem dependencies (including some that are our own). The 3+ year old http://www.coopapp.com has 181 dependencies. Both of these start up within 3-4 seconds on my desktop, and due to their age and size they admittedly have too many dependencies. Every time one is removed, I rejoice.
Please don't call something with 221 dependencies a blank Rails project. Moreover, the number of conflicting, half-baked gems you've added makes this an unfair complaint about Rails. I assure you there is no language or framework in the universe that does not take a hit when you add too many dependencies, whether in execution time or, more often, in just breaking your spirit.
Anyway, back to numbers: on a 3 year old desktop under Linux, executing the tests on the "blank" project yielded:
<pre>
real 0m8.698s
user 0m5.996s
sys 0m2.532s
</pre>
For some reason Ruby is much slower on OS X than on Linux, and it would be quite interesting to investigate why. I think the platform difference accounts for the 10 second gap, despite my slower CPU. Each to his own itch to scratch.
Why are you equating lines in Gemfile.lock with the number of dependencies?
There are 28 gems specified in the Gemfile. That's not an unreasonable number of gems. With dependencies, the total number of gems being required is 76.
Hell, only using Rails and sqlite-ruby will require 26 gems to be installed.
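If anyone wants the actual number rather than a line count, Bundler can parse the lockfile itself (quick sketch, assuming the Bundler 1.x API, run from the app root):
<pre>
# Count the gems actually resolved in Gemfile.lock, instead of counting lines.
require "bundler"

lockfile = Bundler::LockfileParser.new(File.read("Gemfile.lock"))
puts lockfile.specs.length
</pre>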
I'm not deep enough into Ruby/RubyGems/Bundler internals to be 100% sure, but on the other hand I really can't see how it could work reasonably fast given the strategy it uses for looking up libraries.
Great work! Yehuda is brilliant, although I'm sure he's busy now and I don't know how much bandwidth he would have to address that. I think he's full bore on Sproutcore.
If I calculate (20 seconds) x (number of times I bootstrap rails), it might even be worth my time to have a go at it.
There is a bug supposedly fixed in Ruby 1.9.3 that addresses something to do with the load path and bootstrap performance, but I have no idea when that's due to land. If it's truly the cause, it seems like it should be patched in 1.9.2 instead.