Reported action time doesn't match Chrome's "waiting" reported time

I'm tuning the performance of a request in my application and something is puzzling me.

Rails reports the action takes 6ms:

Started GET "/templates/2.json" for 127.0.0.1 at 2013-07-22 10:49:15 -0300
Processing by TemplatesController#index as JSON
   Parameters: {"id"=>"2"}
   Rendered text template (0.1ms)
Completed 200 OK in 6ms (Views: 0.8ms | Sequel: 2.2ms)

Chrome's "waiting" time for a local request (localhost) is 25ms.

Then nginx will report 98ms for the same request when proxying the Rails app, but that is a separate issue I'll investigate by other means...

First, I'd like to understand what this 6ms reported by Rails actually covers. Does it account for the whole request time from when it reaches the web server, or only from when it reaches the Rails stack? Does it include all middlewares? Any hint as to where the extra 19ms in Chrome's reported "waiting" time are spent?

Also, I'd like to take the chance to ask about another thing I've noticed recently. If I remember correctly, Rails used to fill in the ETag automatically on every request based on the response content, but now it only does that if you use something like fresh_when or stale?. Is that correct, or has it always worked the way it currently does?

I understand the overhead may not be desired, but if the automatic ETag header has indeed been removed, I'd be curious what the reason was besides the overhead.

Thanks in advance,
Rodrigo.

Hi Rodrigo,

For the future, I suspect this question is more appropriate for Stack Overflow or Quora than rubyonrails-core, but here is my quick take on it:


> First, I'd like to understand what this 6ms reported by Rails actually covers. Does it account for the whole request time from when it reaches the web server, or only from when it reaches the Rails stack? Does it include all middlewares? Any hint as to where the extra 19ms in Chrome's reported "waiting" time are spent?

The response time reported by Rails is based on a "process_action" event in ActionController, so it's pretty low in the stack and only includes time spent in your controller. There is a Rack::Runtime middleware (included by Rails by default) that wraps around almost your entire middleware stack, if you prefer a more accurate measurement. That middleware appends an X-Runtime response header, which you can probably see in Chrome's inspector panel. The difference between this value and the one in your Rails log is probably pretty small, though (a few ms), as a request typically spends the majority of its time in the controller. nginx/Apache/Passenger etc. might add another 1ms or so, but in the grand scheme of things that's pretty much just noise.
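For illustration, here is a minimal sketch of what such a middleware does. This is a simplified stand-in, not the actual Rack::Runtime source: it wraps the downstream app, times the call, and appends the elapsed time as an X-Runtime header.

```ruby
# Simplified stand-in for Rack::Runtime (illustration only, not the rack
# source): wrap the downstream app, time the call, and append the elapsed
# wall-clock time as an X-Runtime response header.
class RuntimeTimer
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    status, headers, body = @app.call(env)
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    headers["X-Runtime"] = format("%0.6f", elapsed)
    [status, headers, body]
  end
end

# A trivial app standing in for the rest of the stack:
app = ->(env) { [200, { "Content-Type" => "text/plain" }, ["ok"]] }
_status, headers, _body = RuntimeTimer.new(app).call({})
puts headers["X-Runtime"]
```

Because it sits near the outer edge of the middleware stack, the time it reports includes (almost) all the middlewares below it, which is why it is closer to what Chrome sees than the controller-level log line.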

The "waiting" time in Chrome is roughly equivalent to the delta between "requestStart" and "responseStart" in the navigation timing spec: the time between the moment the browser sends the request to your server and the moment it receives the first byte of the response header. This includes the time it takes to send the request across the wire (watch your cookie size, because cookies are sent on every request!) plus the time your server takes to generate a response.

The moral of the story is that if you care about the real-world performance of your app, you should probably look into some kind of RUM (real user monitoring) solution in addition to your server-side monitoring to get an accurate picture. If you're already using New Relic, you should enable it there. There are also standalone services for doing this. (Shameless plug: if you're building single-page apps, we are working on this at caliper.io.)

> Also, I'd like to take the chance to ask about another thing I've noticed recently. If I remember correctly, Rails used to fill in the ETag automatically on every request based on the response content, but now it only does that if you use something like fresh_when or stale?. Is that correct, or has it always worked the way it currently does?

> I understand the overhead may not be desired, but if the automatic ETag header has indeed been removed, I'd be curious what the reason was besides the overhead.

The "automatic" ETag is generated by the Rack::ETag middleware, and I don't remember it being removed in Rails 4. To be sure, you can always run rake middleware and check whether something like "use Rack::ETag" appears in the list.
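Conceptually it works something like the sketch below (simplified for illustration, not the actual rack source; the choice of MD5 here is an assumption). Note that Rack::ETag only sets the header; it is Rack::ConditionalGet, also in the default stack, that turns a matching If-None-Match into a 304 Not Modified.

```ruby
require "digest"

# Simplified sketch of Rack::ETag's job (illustration only): digest the
# response body and fill in an ETag header when the app didn't set one.
class EtagSketch
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if headers["ETag"].nil? && status == 200
      digest = Digest::MD5.hexdigest(body.join)  # MD5 is an assumption here
      headers["ETag"] = %(W/"#{digest}")
    end
    [status, headers, body]
  end
end

app = ->(env) { [200, {}, [%({"id":2})]] }
_status, headers, _body = EtagSketch.new(app).call({})
puts headers["ETag"]
```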


Godfrey

Founding Engineer at caliper.io

Thank you for pointing out X-Runtime (I had seen it before but was unaware of its exact meaning). You're right, Chrome's reported timing when I access the Rails application directly (no nginx proxy) is very close to the X-Runtime value:

Chrome "Waiting" time: 27ms
X-Runtime: 0.025152
Reported time in the log: Completed 200 OK in 7ms (Views: 0.9ms | Sequel: 3.0ms)

So now I know that the overhead around ActionController::Base is about 18ms, and I could try building some actions on top of ActionController::Metal if I want to improve it further. At least now I know where to look.
You're right that, comparing X-Runtime with Chrome's "Waiting" time, the nginx overhead itself seems to be pretty small (4ms at most). Still, Rails takes longer to process the request when nginx is involved, and I have no clue why that is. Here's what happens with nginx in front:

Chrome "Waiting" time: 88ms
X-Runtime: 0.083885
Completed 200 OK in 7ms (Views: 0.8ms | Sequel: 2.9ms)

This is what I'd really like to improve, since there's no reason Rails should take longer to respond when the request comes through nginx rather than directly from the browser...
I'm using Puma in development mode (where I'm running these tests) and Unicorn in production...

I'd appreciate any hints on why the proxied request is slower from the middleware's perspective, or maybe some settings I could try on nginx. Of course, I won't insist on this on the mailing list since it's unrelated to Rails, but in case you know the answer I'd be glad to hear your guesses.
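(For anyone following along, these are generic nginx proxy settings sometimes worth experimenting with in this situation. The upstream address is just a placeholder, and whether any of them help depends on where the bottleneck actually is:)

```nginx
upstream rails_app {
    server 127.0.0.1:3000;   # placeholder address
    keepalive 8;             # reuse connections to the app server
}

server {
    listen 80;

    location / {
        proxy_pass http://rails_app;
        proxy_http_version 1.1;          # needed for upstream keepalive
        proxy_set_header Connection "";  # don't forward "Connection: close"
        proxy_buffering off;             # stream the response as it arrives
    }
}
```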
I wasn't aware of Caliper before. We are indeed maintaining an SPA, and I'll try to find some time to take a closer look at what Caliper is and how it could help us. We currently monitor our applications with New Relic and have RUM enabled for them (we actually run a set of applications: Grails, Rails, Solr and a Spring MVC one). But I already have some hints about which requests I should be optimizing (almost exclusively JSON requests), and I'm also porting the handling of some requests from the Grails app to the Rails one, so I'm particularly interested in making sure those requests are as fast or faster when implemented on the Rails side (which has been true so far; the ported actions are much faster in Rails).
Hmm... maybe I saw it in the past for HTML requests and just assumed it worked for the JSON format as well? I've confirmed the middleware is enabled for my application. Maybe Rails skips JSON requests, since jQuery won't take advantage of Not Modified responses by default: the ifModified ajax option defaults to false, and if you enable it you have to take care of the client-side caching yourself. Maybe that was reason enough to disable automatic ETag caching for JSON requests... I've been handling this in jQuery with an ajax prefilter:
$.ajaxSetup({ cache: false, traditional: true, ifModified: true });
// traditional is required by Grails default binding rules

var _ajaxCache = {};
$.ajaxPrefilter('json', function(options, originalOptions, jqXHR) {
  options.success = function(data, textStatus, xhr) {
    if (xhr.status != 304) {
      // cache the payload so it can be replayed on a later 304
      if (xhr.getResponseHeader('Last-Modified'))
        _ajaxCache[originalOptions.url] = data;
      return originalOptions.success.apply(this, arguments);
    }
    // 304 Not Modified: replay the cached payload
    var args = Array.prototype.slice.call(arguments);
    args[0] = _ajaxCache[originalOptions.url];
    return originalOptions.success.apply(this, args);
  };
});
Thank you so much, Godfrey!
Cheers,
Rodrigo.

As suggested by Godfrey (off-list), I'm moving this discussion to Stackoverflow:

http://stackoverflow.com/questions/17797753/why-is-rails-middleware-overhead-so-high-when-nginx-is-used-as-a-proxy

Thanks for all input so far.

Best,
Rodrigo.