Heavy queries stall application

Hi,

I have a Ruby application where part of the functionality means that the application has to run very time-consuming queries against a db slave. My problem is that when these queries are run, the application stalls until they finish. The queries are done like @result = Model.find(:all, :conditions => conditions, :joins => joins), and they use the available indexes as far as possible.

So, I need to find a way to allow my Rails application to handle simultaneous queries. Problem is, I don't really know where to start or where the issue lies. I know it's a configuration issue somewhere, but I don't know if it's my web server or my application that needs to be reconfigured.

Any tips?

This is a fundamental issue with Rails: a single Rails instance (a single mongrel, for example) cannot handle more than one request at a time. One approach is to offload such tasks to something like backgroundrb so that the mongrel is freed up straight away.
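Roughly speaking — and this is from memory of the backgroundrb 0.2-era API, so the class and method names may differ in your version, and LongQueryWorker and the args keys are just made-up names for illustration — the worker would live in lib/workers/ and do the slow find there:

  # lib/workers/long_query_worker.rb
  # Runs the slow query outside the request/response cycle so the
  # mongrel is free to serve other requests.
  class LongQueryWorker < BackgrounDRb::Worker::RailsBase
    def do_work(args)
      rows = Model.find(:all,
                        :conditions => args[:conditions],
                        :joins      => args[:joins])
      results[:rows] = rows   # results is the worker's shared results hash
      results[:done] = true   # the controller polls this flag
    end
  end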

Fred

Hmm, I should've probably mentioned this in the first post, but I have a dev environment (it used to be the live env. until I got a dedicated machine for it). In the development environment, parallel queries work perfectly fine, but in my live environment the problem above happens. I've checked with my db admins, and there's no max_connections limit set for the MySQL users for either application. The applications also use the same source code and the same httpd configuration. Some details regarding the setup for the two:

Working configuration (dev env):

Linux 2.6.17-1.2187_FC5smp #1 SMP Mon Sep 11 01:32:34 EDT 2006 i686 i686 i386 GNU/Linux

ruby 1.8.4 (2005-12-24)

Rails v 1.1.16

Apache/2.2.2

Not working configuration (prod env):

Linux 2.6.9-42.ELsmp #1 SMP Wed Jul 12 23:32:02 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux

Red Hat Enterprise 4

ruby 1.8.6 (2007-03-13 patchlevel 0) [x86_64-linux]

Rails v 1.2.3

Apache/2.0.52

It could be blind luck: if the request isn't routed to the mongrel that is busy with the long-running process, then you won't see the problem.

Fred

I don't use mongrel; I use Apache with mod_fastcgi. Maybe it's the same issue, but I've been observing the problem for about two months and it persists through reboots etc. Thanks for the info btw. I did some further testing and discovered that it actually doesn't stall until there are two queries blocking the application... I also discovered that the same applies when running two queries in the dev environment. :(

Do you think this could be handled with Apache's worker MPM or something like that? I reckon backgroundrb could do it, but it would be a lot of development work, since it's not cronjob-style requests that block the application but ad-hoc queries that dig through our databases for info.

You'll see similar issues eventually; it's just that it's a fastcgi worker that gets blocked instead. Depending on your settings, Apache may decide to fire up another fastcgi worker, or it will wait for one to become available.
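For what it's worth, with a static mod_fastcgi setup the number of concurrent Rails processes is fixed by the -processes flag on FastCgiServer (the path here is made up — point it at your own app's dispatch.fcgi):

  # httpd.conf: run 4 Rails fastcgi processes instead of 1, so a single
  # long-running query can't tie up every incoming request
  FastCgiServer /var/www/myapp/public/dispatch.fcgi -processes 4 -idle-timeout 120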

As for the worker MPM: don't know really, but I will say that backgroundrb has been great for that purpose for us. You just need a worker that does your database digging and returns the results. Your controller fires off the work request to backgroundrb and then shows a view which ajax-polls backgroundrb until it's done, before displaying the result data retrieved from backgroundrb.
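To make that concrete — again a sketch from memory of the 0.2-era MiddleMan API (new_worker, worker, and results may be named differently in your version), and ReportsController, :long_query_worker, and session[:job_key] are all made-up names:

  # app/controllers/reports_controller.rb
  class ReportsController < ApplicationController
    # Fire off the slow query and render the polling view immediately.
    def create
      # (in real code, build :conditions server-side rather than
      # trusting raw params)
      session[:job_key] = MiddleMan.new_worker(
        :class => :long_query_worker,
        :args  => { :conditions => params[:conditions],
                    :joins      => params[:joins] })
      render :action => 'waiting'   # this view ajax-polls the status action
    end

    # Hit repeatedly by the waiting view's ajax call.
    def status
      worker = MiddleMan.worker(session[:job_key])
      if worker.results[:done]
        render :partial => 'results',
               :locals  => { :rows => worker.results[:rows] }
      else
        render :text => 'working...'
      end
    end
  end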

Fred

Look at your queries and locking, especially if you're using MyISAM and MySQL: MyISAM uses table-level locking, so your long-running SELECT takes a lock on the entire table that other statements have to wait behind. The fact that it locks up once two queries get going suggests you have resource contention or a deadlock. If you have a table where frequent writes and frequent reads are occurring all the time, *especially* if you have selects and updates within single transactions, you should think about switching that table to InnoDB instead, or setting up another table and doing some of the heavy lifting for your query with INSERT triggers to preprocess some of the information.

http://dev.mysql.com/doc/refman/5.0/en/table-locking.html
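If you do switch to InnoDB, the conversion itself is a single statement per table (orders below is a stand-in for whichever table is contended; take a backup first, and remember that MyISAM-only features like full-text indexes won't survive the move):

  -- convert a contended table from MyISAM to InnoDB (MySQL 4.1/5.0 syntax)
  ALTER TABLE orders ENGINE=InnoDB;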

Good point. You can still get deadlocks with InnoDB (they'll time out eventually). From your mysql prompt,

  SHOW FULL PROCESSLIST;
  SHOW INNODB STATUS;

might show you useful info.

Fred