Tips to speed up Rails app (thesis)?

Hi List,

For my thesis I'm currently investigating how a Rails app can be sped up.

I've already tried:
- caching
- transactions
- hand-writing replacements for slow Ruby helpers
- using :include when doing a find (eager loading; quick sketch below)
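
By that last item I mean eager loading, roughly like this (Post/Comment are just placeholder models, not my actual schema):

    # One query with a join instead of one query per post (N+1).
    posts = Post.find(:all, :include => :comments)

    # Naive version for comparison -- fires an extra query per post:
    # Post.find(:all).each { |p| p.comments.size }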

I've also tried using Mongrel, but compared to WEBrick my results were slower (see http://users.telenet.be/jorambarrez/thesis/ ). Can anyone explain this?

Another question: how can I test concurrency with ab, when I have authentication filters?

Is there anything else I should try?

You might want to try different template engines/ERB implementations.

e.g. I found a significant improvement when I switched an app to Erubis:

http://jystewart.net/process/2007/02/speeding-up-rails-with-erubis/
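
If I remember right, hooking Erubis into a Rails 1.x app is roughly this one-liner (check the Erubis docs for the exact setup for your versions):

    # config/environment.rb
    # Swaps Erubis in as the ERB implementation used for the templates.
    require 'erubis/helpers/rails_helper'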

James.

The platform you run on makes a huge difference. Ruby is slow on Windows, for example. On a Windows system, Mongrel and WEBrick are equivalent. The major difference is that Mongrel continues to work for more than a few hours :-)

You’ll find that Mongrel, WEBrick, FCGI, etc. will perform around the same. The important thing you need to think about is using more than one server.

For example, page caching is great, but if you’re using Mongrel to serve the cached pages, you’re not getting all of the benefits that page caching provides. You want a static web server to serve your static content (images, JS, stylesheets, and the cached HTML files); reserve Mongrel for handling the dynamic requests. See Apache + mod_proxy_balancer + mongrel_cluster or nginx + mongrel_cluster.
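
To make the page-caching half of that concrete: the controller side is just a declaration plus an expiry call, and the win comes when Apache/nginx serves the generated .html file before the request ever reaches Mongrel. Controller and model names here are only illustrative:

    class ProductsController < ApplicationController
      # First hit writes public/products/show/<id>.html; after that a
      # front-end web server can serve the file without touching Rails.
      caches_page :show

      def show
        @product = Product.find(params[:id])
      end

      def update
        @product = Product.find(params[:id])
        @product.update_attributes(params[:product])
        # Remove the stale cached page whenever the data changes.
        expire_page :action => 'show', :id => @product
        redirect_to :action => 'show', :id => @product
      end
    end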

Thanks for the replies!

I'm running both WEBrick and Mongrel on Ubuntu. I also have a server that's been running for almost 3 weeks now, with WEBrick in production. So Brian, if I understand you correctly, there should be no big differences between Mongrel and WEBrick, but Mongrel handles concurrent load better?

The Apache combination is something I will look into!

Yeah… but to put it better… webrick is not meant for production. It’s just not designed for it and will usually choke under load.

Rails is single-threaded… if you have only one Mongrel running, you can only process one request at a time. That’s why we load-balance. (It’s not as bad as it sounds, though, as most requests finish quickly. Two Mongrels is decent, four is really good; I think I read that Twitter uses something like 16?)

I don’t know what tool you’re using for your tests, but you might also want to consider httperf.

In fact, there’s a great screencast on benchmarking that you should pick up. It only costs $9 and is quite awesome.

http://peepcode.com/products/benchmarking-with-httperf