Currently I'm running a staging environment whose purpose is to replicate the production one as closely as possible before changes are promoted to production.
To keep the staging environment's costs down I tried using a series of
m1.small instances; the Rails one hosts a stack composed of Rails 3.2
+ Nginx + Rainbows! + Ruby 1.9.3-p194.
My Capistrano deploy scripts are complete, and asset precompilation
happens on the server through Node.js.
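For reference, the relevant deploy step looks roughly like this (Capistrano 2 style; the task name and the rails_env variable are illustrative, not my exact script):

```ruby
# config/deploy.rb -- sketch, task name illustrative
namespace :deploy do
  desc "Precompile assets on the server (JS runtime provided by Node.js)"
  task :precompile_assets, :roles => :web, :except => { :no_release => true } do
    run "cd #{release_path} && RAILS_ENV=#{rails_env} bundle exec rake assets:precompile"
  end
end

after "deploy:update_code", "deploy:precompile_assets"
```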
- depending on the instance size, each deploy takes a long time
because of EC2's CPU steal time: the hypervisor can and will decide to
allocate resources to other instances, leaving you with a sort of
crippled system.
I'm reaching a point where just precompiling the assets takes
450,000 ms (seven and a half minutes). And that's without taking into
account that, since CPU usage skyrockets, there is little room left
for the actual processes serving the live app.
- the precompile task by default compiles the assets twice, digest
and non-digest. I might run only the primary (digest) precompile task
and get away with it, but still.
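Assuming the stock Rails 3.2 sprockets tasks, running only the digest pass would just mean invoking the primary task in the Capistrano deploy instead of the umbrella one:

```ruby
# Sketch: swap the umbrella task for the digest-only one,
# skipping the duplicate non-digest compile.
run "cd #{release_path} && RAILS_ENV=#{rails_env} bundle exec rake assets:precompile:primary"
```

Anything that links non-digest filenames directly (static error pages, for instance) would then need separate handling.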
- I benchmarked the precompile task on other kinds of instances; the
ones that give the best results are the high-CPU ones, and they are
not cheap, given that we don't need all that CPU power during the
normal lifecycle of the app. It seems silly to use that kind of
instance just to cut down deploy (precompile + startup) time.
- when the assets are finally precompiled I upload them to Amazon S3/
CloudFront via the excellent asset_sync gem.
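For completeness, the asset_sync side is just an initializer along these lines (credentials via environment variables; option names as in the gem's README):

```ruby
# config/initializers/asset_sync.rb
AssetSync.configure do |config|
  config.fog_provider          = 'AWS'
  config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
  config.fog_directory         = ENV['FOG_DIRECTORY'] # S3 bucket fronted by CloudFront
  config.gzip_compression      = true
  config.existing_remote_files = 'keep'
end
```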
- I can't make Rails accept that the assets are hosted elsewhere. If,
at some point during deployment, I precompile the assets on my local
machine and upload them to S3, leaving the app asset-free on the
server, the staging app won't start, or bombs out on the first
request. Disabling the asset pipeline doesn't help either, since Rails
then complains about missing manifests, etc.
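What I expected to work, sketched below (unverified; the CloudFront host is a placeholder), is shipping only the manifest so the asset helpers can resolve digested names against the remote host:

```ruby
# config/environments/staging.rb -- sketch, host is a placeholder
config.assets.compile = false   # never compile on the fly
config.assets.digest  = true
# Rails 3.2 maps logical names to digested filenames via
# public/assets/manifest.yml; in theory only that file needs to be
# on the server, not the compiled assets themselves.
config.action_controller.asset_host = "https://d1234.cloudfront.net"
```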
- Rails is very slow to start up, again because of the CPU steal time
and because some of the falcon patches are still missing from MRI.
This is another matter, but if you don't pay attention in your deploy
process you might end up with people seeing old assets, or with
pretty large downtimes if you don't have a zero-downtime deploy like
the one Unicorn/Rainbows! provides.
So, how do you scale horizontally with Rails 3.2?
Currently I have enough firepower to sustain a large number of users,
given the combination of fine tuning, Nginx, load balancers, reverse
proxying, Rainbows! and whatnot, but firing up another instance and
managing its assets is becoming a problem.
Do you precompile locally, to have a central asset-creation point and
avoid burning CPU and potentially long deploys on the servers?
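One approach I'm considering (hypothetical, untested) is precompiling on the deploy machine and letting asset_sync push to S3, so the EC2 instances never run sprockets at all:

```ruby
# config/deploy.rb -- sketch: compile locally, upload via asset_sync
# (which hooks into assets:precompile when enabled), leave servers alone.
namespace :assets do
  task :precompile_locally do
    system("RAILS_ENV=#{rails_env} bundle exec rake assets:precompile") or
      raise "local asset precompilation failed"
  end
end

before "deploy:update_code", "assets:precompile_locally"
```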
It might be off-topic, but what tool (Chef, Puppet, etc.) do you use
to clone and start/stop instances, given the requirement of deeply
customized config files (for example for Nginx), and to manage adding/
removing instances to an external load balancer like the AWS one?
I'm starting to suspect that EC2 might not be the friendliest cloud
hosting platform for a large Rails 3.2 app.
To point Rails at my assets I'm using something like this
(assets.example.com stands in for the real host):
config.action_controller.asset_host = Proc.new do |source, request|
  (request && request.ssl? ? "https" : "http") + "://assets.example.com"
end
If you have any tips on an asset-less deploy configuration (digested,
possibly) with the real assets hosted elsewhere, I'm all ears.
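For what it's worth, the protocol-switching logic in that proc can be exercised outside Rails with a stub request (assets.example.com is a placeholder):

```ruby
# Standalone check of the asset_host logic; FakeRequest stands in
# for an ActionDispatch request object.
FakeRequest = Struct.new(:ssl) do
  def ssl?
    ssl
  end
end

asset_host = Proc.new do |source, request|
  (request && request.ssl? ? "https" : "http") + "://assets.example.com"
end

puts asset_host.call("/assets/app.css", FakeRequest.new(true))  # https://assets.example.com
puts asset_host.call("/assets/app.css", FakeRequest.new(false)) # http://assets.example.com
puts asset_host.call("/assets/app.css", nil)                    # http://assets.example.com
```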