Hi Rodrigo,
For future reference, I suspect this question is more appropriate for Stack Overflow/Quora than rubyonrails-core, but here is my quick take on it:
I’m tuning the performance of a request in my application, and there’s
something puzzling me.
Rails reports the action takes 6ms:
Started GET "/templates/2.json" for 127.0.0.1 at 2013-07-22 10:49:15 -0300
Processing by TemplatesController#index as JSON
Parameters: {"id"=>"2"}
Rendered text template (0.1ms)
Completed 200 OK in 6ms (Views: 0.8ms | Sequel: 2.2ms)
Chrome’s “waiting” time for a local request (localhost) is 25ms.
Then nginx will report 98ms for the same request when proxying the Rails
app, but that is a separate issue I’ll investigate by other means…
First, I’d like to understand what this 6ms reported by Rails actually
measures. Does it cover the whole request from the moment it reaches the
web server, or only from the moment it reaches the Rails stack? Does it
include all the middleware? Any hint as to where the extra 19ms in
Chrome’s reported “waiting” time are being spent?
The response time reported by Rails is based on the “process_action.action_controller” event in ActionController, so it sits pretty low in the stack and only includes time spent in your controller. If you prefer a more accurate measurement, there is a Rack::Runtime middleware (included by Rails by default) that wraps around your entire middleware stack (well, almost). That middleware appends an X-Runtime response header, which you can probably see in Chrome’s inspector panel. The difference between this value and the one in your Rails log is usually pretty small though (a few ms), as a request typically spends the majority of its time in the controller. Nginx/Apache/Passenger etc. might add 1ms or so on top of this, but in the grand scheme of things that’s pretty much just noise.
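If you want to see exactly what that 6ms covers, you can subscribe to the same event yourself. Here is a minimal sketch using the public ActiveSupport::Notifications API (the initializer file name is made up; the payload keys are the ones ActionController publishes):

    # config/initializers/log_action_timing.rb (hypothetical file name)
    ActiveSupport::Notifications.subscribe("process_action.action_controller") do |name, start, finish, id, payload|
      elapsed_ms = (finish - start) * 1000
      Rails.logger.info(
        "#{payload[:controller]}##{payload[:action]} took #{elapsed_ms.round(1)}ms " \
        "(views: #{payload[:view_runtime].to_f.round(1)}ms)"
      )
    end

The start and finish timestamps bracket only the controller’s handling of the action, which is why time spent in middleware, the app server, or nginx never shows up in that number.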
The “waiting” time in Chrome is roughly the delta between “requestStart” and “responseStart” in the Navigation Timing spec, i.e. the time between the moment the browser sends the request to your server and the moment it receives the first byte of the response headers. This includes the time it takes to send the request across the wire (watch your cookie size, because cookies are sent on every request!) plus the time your server takes to generate a response.
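If you want to see how much of that gap is spent inside your Rack stack versus outside it, you can time the very top of the stack yourself, in the same spirit as Rack::Runtime. A quick sketch (the StackTimer class and the X-Stack-Runtime header name are made up for illustration):

    # a tiny Rack middleware that times everything below it in the stack
    class StackTimer
      def initialize(app)
        @app = app
      end

      def call(env)
        started = Time.now
        status, headers, body = @app.call(env)
        headers["X-Stack-Runtime"] = ((Time.now - started) * 1000).round(1).to_s
        [status, headers, body]
      end
    end

    # in config/application.rb, insert it before everything else:
    config.middleware.insert_before(0, StackTimer)

Whatever is left between this header and Chrome’s “waiting” time is being spent in nginx, the app server, or on the wire.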
The moral of the story is that if you care about the real-world performance of your app, you should probably look into some kind of RUM (real user monitoring) solution in addition to your server-side monitoring to get an accurate picture. If you’re already using New Relic, you should enable it there. There are also other standalone services for doing this. (Shameless plug – if you’re building single-page apps, we are working on this at caliper.io.)
Also, I’d like to take the chance to ask about another thing I’ve
noticed recently. If I remember correctly, Rails used to fill in the
ETag automatically on every request based on the response content, but
now it only does that if you use something like fresh_when or stale?.
Is that correct, or has it always worked the way it currently does?
I understand that the overhead may not be desired, but if the automatic
ETag header has been removed, I’d be curious what the reason was besides
the overhead.
The “automatic” ETag is generated by the Rack::ETag middleware, and I don’t remember it being removed in Rails 4. To be sure, you can always run rake middleware and check whether something like “use Rack::ETag” shows up in the list.
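One thing worth pointing out: the two mechanisms are not equivalent. Rack::ETag digests the fully rendered response body, so together with Rack::ConditionalGet (also in the default stack) a matching ETag gets the client a 304 with an empty body – that saves bandwidth but not server work – whereas fresh_when/stale? can skip rendering entirely. A minimal sketch of the explicit version (using an ActiveRecord-style finder; swap in the Sequel equivalent for your app):

    class TemplatesController < ApplicationController
      def show
        @template = Template.find(params[:id])
        # Sets the ETag/Last-Modified headers, and halts with
        # 304 Not Modified (skipping the render) if the client's
        # If-None-Match / If-Modified-Since headers still match.
        fresh_when etag: @template, last_modified: @template.updated_at
      end
    end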
Thanks in advance,
Rodrigo.
Godfrey
Founding Engineer at caliper.io