Moving route recognition outside the mutex

Page 24 of Ez's Mongrel presentation (http://brainspl.at/
mongrel_handlers.pdf) suggests he gets good throughput by doing
routing processing etc. before acquiring the mutex lock.
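In other words, the idea is something like this (a hypothetical sketch, not Mongrel's actual handler code):

```ruby
require 'thread'

# Hypothetical sketch: recognize the route outside the lock, then
# hold the mutex only for the actual dispatch.
class Handler
  def initialize(dispatcher)
    @dispatcher = dispatcher
    @lock = Mutex.new
  end

  def process(request)
    # Route recognition is read-only, so (if made thread-safe)
    # many threads could run it concurrently.
    route = recognize(request)

    # Only the non-thread-safe dispatch is serialized.
    @lock.synchronize { @dispatcher.call(route, request) }
  end

  private

  # Stand-in for real route recognition.
  def recognize(request)
    request[:path].split('/').reject(&:empty?)
  end
end
```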

[I've not looked at this code for Rails nor Merb] - is this something
we can port to Rails? Has anyone written about this already in this forum
or on blogs (to save you writing it up again!)? :slight_smile:

Cheers
Nic

Unspoken inference: the routing code would need refactoring/reducing
to support thread-happiness/safety.

Nic

> Page 24 of Ez's Mongrel presentation (http://brainspl.at/
> mongrel_handlers.pdf) suggests he gets good throughput by doing
> routing processing etc. before acquiring the mutex lock.
>
> [I've not looked at this code for Rails nor Merb] - is this something
> we can port to Rails? Has anyone written about this already in this forum
> or on blogs (to save you writing it up again!)? :slight_smile:

Since Ruby lives without async IO and uses green threads, I'm
suspicious that this will make a difference in a full application.
However, if people take a look, improve the thread safety, run
relevant and accurate benchmarks, and show an improvement, I can't see
why we wouldn't be interested.

I'm just not sure that all those steps are a worthwhile investment,
nor that thread safety will yield better returns than profiling a
representative app and fixing what's slow.

No matter how trimmed down the mutex lock is, it's still there, and
you don't want your users to be queuing within mongrel when there are
processes available elsewhere to service their requests. Because of
this you'll probably always want your load balancers to be doling out
one request at a time, so additional thread safety won't buy you a
whole lot of anything...
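One way to get that behaviour is to cap each backend at a single in-flight request at the balancer. A sketch with HAProxy (hypothetical server names and addresses, assuming HAProxy's per-server maxconn option):

```
backend mongrels
    # Queue requests at the balancer, not inside mongrel:
    # each server gets at most one request at a time.
    server app1 127.0.0.1:8000 maxconn 1
    server app2 127.0.0.1:8001 maxconn 1
```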

If the things (requests) going through a bottleneck (the Rails request
stack) one at a time can be reduced from 1 second to 0.1 seconds (by
squashing the mutex), then you'd get 10x more requests through per
second. Or doesn't it actually pan out that way?

Sorry if that's a naive question.
Nic

> If the things (requests) going through a bottleneck (the Rails request
> stack) one at a time can be reduced from 1 second to 0.1 seconds (by
> squashing the mutex), then you'd get 10x more requests through per
> second. Or doesn't it actually pan out that way?

While increasing the number of tasks being worked on at any one moment
may mean that work *starts* getting done sooner, all the users care
about is when the work is finished. As the Ruby VM uses green threads,
and neither the database drivers nor Mongrel use non-blocking IO, I
just don't think that increased parallelism will increase throughput
by anything like 10x.
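To be clear, the 10x figure is exactly the ceiling you'd get if the mutex-held section were the only limit. A back-of-envelope model of that claim (illustrative numbers only, not a benchmark):

```ruby
# If every request must pass one at a time through a section that
# holds the mutex for `serial_seconds`, that section caps throughput
# at 1 / serial_seconds requests per second, no matter how many
# threads run the rest of the request.
def mutex_capped_throughput(serial_seconds)
  1.0 / serial_seconds
end

mutex_capped_throughput(1.0)  # whole request locked: 1 req/s ceiling
mutex_capped_throughput(0.1)  # 0.1 s locked: 10 req/s ceiling, but only
                              # if the other 0.9 s truly runs in parallel
```

With green threads and blocking IO, the "other 0.9 s" doesn't run in parallel, which is why the ceiling isn't reached in practice.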

But we're all just guessing until we see benchmarks :).