server overload

I've got a RoR site which runs on Mongrel.
There is a task which the admin uses to fill the database with new
data; the problem is that while the task runs, the whole webpage
becomes unresponsive.

The job he does from the admin panel also runs as a cron job at a time
when the page isn't used, but he insists on the manual run.

So I don't know if this is a Rails problem or a Mongrel problem or what,
but I would be very grateful for any help.


How much data is being loaded? If you are adding a large amount of
data inside a giant commit statement then the database could become
inaccessible for everyone else until the load completes.
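If the load really is one giant commit, a hedged sketch of batched commits (plain Ruby; `Item` and the ActiveRecord call are stand-ins for the real model, shown as comments so the sketch runs standalone):

```ruby
# Hypothetical sketch: commit in smaller batches instead of one giant
# transaction, so other requests are not locked out for the whole load.
def import_in_batches(records, batch_size = 500)
  imported = 0
  records.each_slice(batch_size) do |batch|
    # In Rails you would wrap each batch in its own transaction:
    #   ActiveRecord::Base.transaction { batch.each { |attrs| Item.create!(attrs) } }
    batch.each { |_attrs| imported += 1 }
  end
  imported
end
```

Each batch commits independently, so any locks are held only briefly.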

Worse than that, IIRC, Mongrel serves one request at a time. If your
request takes a long time, *nothing* else will respond while it's running.
Consider a Mongrel cluster (so there are other Mongrel instances to
handle other requests), or better yet, switch to Passenger with Apache
(or Nginx, etc.).
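For reference, a mongrel_cluster setup is driven by a small YAML file; the paths and ports below are only example values:

```yaml
# config/mongrel_cluster.yml -- example values, adjust for your deployment
cwd: /var/www/myapp/current
environment: production
port: "8000"        # instances listen on 8000, 8001, 8002
servers: 3
pid_file: tmp/pids/mongrel.pid
```

Your front-end web server (Apache or Nginx) then balances requests across the three ports.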

Well, it's from an XML file which is usually about ~20MB. I found one
solution, which isn't very pretty: run the web page on two different
ports using two different instances of Mongrel. But I'm looking for
something simpler for the end user, so he doesn't have to remember
to log in to a separate webpage for the admin work.

Yeah, I thought about a Mongrel cluster - I'm just imagining the discussion
with the server admins :wink:

Use a delayed_job task for the long-running task.

This will free your Mongrel process for further requests, as delayed_job runs as a separate Ruby process but loads the Rails environment. You can use “send_later” for this, and it requires minimal code change.
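A minimal sketch of the send_later change (`DataImporter` and its method are hypothetical names; the send_later call assumes the delayed_job gem is installed and migrated, so it is shown commented out here):

```ruby
# Hypothetical importer; the real work would parse the XML and insert rows.
class DataImporter
  def import(path)
    # ... long-running XML parse and database load ...
    "imported #{path}"
  end
end

importer = DataImporter.new
importer.import("public/data.xml")                 # inline: ties up the Mongrel
# importer.send_later(:import, "public/data.xml")  # delayed_job: enqueues and returns
```

With send_later, the controller action returns immediately and a worker started via `rake jobs:work` picks up the job.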

  • Gautam

This sounds very interesting, I'll check it out.

While delayed_job is handy, you will eventually run out of steam if
you only have one Mongrel. Others have suggested mongrel_cluster for
running multiple Mongrels; you might also want to consider Passenger,
which will fork extra Rails instances as needed.


I do agree - in fact my preferred setup is Nginx+Passenger.

However, given that the current problem at hand is one long-running task slowing things down and not a scaling issue (i.e. number of requests), delayed_job may just do the trick.

My 2 cents :slight_smile:
- Gautam

I'm currently trying to set up the delayed_job solution and running into
some problems, but if I get it to work I think it will do the job; the
import will only be done about once a day.

Since I'm on the topic: I've tried the Railscast setup for
delayed_job, but neither of the versions shown there works for me.
The first one, with the send_later method, shows "undefined method `perform' for
#<YAML::Object: ..."
I load the config in the environment.rb file:
raw_config = File.read(RAILS_ROOT + "/config/config.yml")
APP_CONFIG = YAML.load(raw_config)[RAILS_ENV]
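For what it's worth, the Railscast-style config load usually looks like the sketch below. `RAILS_ROOT` and `RAILS_ENV` are normally supplied by Rails; they are stubbed with a temp directory here so the sketch runs standalone:

```ruby
require "yaml"
require "tmpdir"
require "fileutils"

# Stub the Rails constants for this standalone sketch.
RAILS_ROOT = Dir.mktmpdir
RAILS_ENV  = "development"

# Fake config file, standing in for config/config.yml.
FileUtils.mkdir_p(File.join(RAILS_ROOT, "config"))
File.write(File.join(RAILS_ROOT, "config", "config.yml"),
           "development:\n  api_key: abc123\n")

# The two lines that belong in environment.rb:
raw_config = File.read(RAILS_ROOT + "/config/config.yml")
APP_CONFIG = YAML.load(raw_config)[RAILS_ENV]
```

The `[RAILS_ENV]` lookup selects the per-environment subtree, so `APP_CONFIG` is a plain hash of settings.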

The second option, using a new class, fails after rake jobs:work with:

"Job failed to load: Unknown handler. Try to manually require the
appropriate file."
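The "Unknown handler" error usually means the worker process could not load the class of the serialized job. A hedged sketch of the custom-job-class approach (`ImportJob` is a made-up name; the enqueue call assumes delayed_job, so it is shown as a comment):

```ruby
# The worker deserializes the job and calls #perform on it, so this class
# must be loadable in the worker process too (e.g. live under app/ or lib/);
# otherwise delayed_job reports "Unknown handler".
ImportJob = Struct.new(:xml_path) do
  def perform
    # parse xml_path and load the records here
    "processed #{xml_path}"
  end
end

# With delayed_job you would enqueue it:
#   Delayed::Job.enqueue ImportJob.new("public/data.xml")
ImportJob.new("public/data.xml").perform  # same logic, run inline
```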


I presume you have run the migration that is generated by delayed_job, and that you have the initializer file in place.

Earlier, how would you start the long-running task? If it was with send_later, it would definitely work.

If ‘start’ is an instance method in the Download model, it will work. If ‘start’ is a class method, you cannot (obviously) call send_later on an instance; you can still enqueue the task via send_later on the class itself.
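To illustrate the distinction (`Download` here is a bare stand-in for the real model; send_later itself comes from delayed_job, so those calls are shown as comments):

```ruby
class Download
  def start        # instance method
    "instance start"
  end

  def self.start   # class method
    "class start"
  end
end

# With delayed_job installed:
#   Download.new.send_later(:start)  # enqueues the instance method
#   Download.send_later(:start)      # enqueues the class method
Download.new.start
Download.start
```

The enqueued call must match where the method is defined, or the worker raises when it tries to invoke it.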

Send some more details - and I can help you sort this out.

  • Gautam


Hmm, I've played around a bit and it somehow started to work. Well, I
guess that's good; I just wish I knew what I did so I could set it up on
the server properly :wink: