We're using nginx with HAProxy and mongrel, as well as monit, and we're
integrating with SAP using Piers Harding's sapnwrfc gem. It contains a
C extension which in turn calls an SAP-provided C library for RFC
calls. In general it works great.
But there are SAP RFCs with very long runtimes, e.g. orders with a lot
of items or searches with insufficiently restrictive search criteria,
but also sporadic hiccups in SAP which can happen to any RFC call.
These can block a mongrel for more than 2 minutes. We therefore need a
timeout mechanism.
The two available solutions we found (SystemTimer and Terminator) just
wouldn't work, no matter what we tried.
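For illustration, here is roughly the kind of wrapper we tried with
SystemTimer (call_sap_rfc is a stand-in for the real sapnwrfc
invocation; the names are ours):

    require 'timeout'
    require 'system_timer'

    def rfc_with_timeout(seconds = 90)
      SystemTimer.timeout_after(seconds) do
        call_sap_rfc # placeholder for the actual sapnwrfc call
      end
    rescue Timeout::Error
      # we never got here -- presumably because the call blocks inside
      # the C extension, where the timer never gets a chance to fire
      raise
    end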
We therefore wrote an old-fashioned custom solution: each request
writes a file initially and deletes it again at the end of processing.
A monit process which runs periodically (in our case every 10s) checks
whether any of these files is older than a specific timeout period. If
it finds one, it kills the mongrel process (no soft kill) and starts a
new one. When the mongrel gets killed, nginx receives a 504 error. We
assume that this will happen mostly (only?) in timeout cases, so we
configured nginx to redirect to a page in our app with an error
message about the timeout.
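The redirect is a one-liner in the nginx configuration; a minimal
sketch, assuming a hypothetical /rfc_timeout route in our app that
renders the error message:

    # inside the server block that proxies to the mongrels
    error_page 504 = /rfc_timeout;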
This solution works perfectly so far. The only weird phenomenon we
have seen is that in one case the user never got the error
page/redirect; the browser just hung (Firefox and IE).
Comments? Ideas why SystemTimer and Terminator would not work?
Improvements to the current solution?
The solution details:
1. Setting a global constant with the mongrel port during server
startup:

    ObjectSpace.each_object(Mongrel::HttpServer) do |server|
      Const::App.port = server.port
    end
    Const::App.port ||= '3000' # fallback when testing etc.
2. Creating/deleting the file in before/after filters of
ApplicationController:
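A minimal sketch of those filters, assuming the request file lives in
tmp/ and is named after the mongrel port to match the monit check in
step 3 (the filter and method names are ours):

    require 'fileutils'

    class ApplicationController < ActionController::Base
      before_filter :touch_request_file
      after_filter  :remove_request_file

      private

      # tmp/mongrel.<port>.req, e.g. tmp/mongrel.3000.req
      def request_file_path
        File.join(RAILS_ROOT, 'tmp', "mongrel.#{Const::App.port}.req")
      end

      def touch_request_file
        FileUtils.touch(request_file_path)
      end

      def remove_request_file
        File.delete(request_file_path) if File.exist?(request_file_path)
      end
    end

The after filter never runs when a request hangs inside the RFC call,
which is exactly the situation in which monit finds a stale file and
kills the mongrel.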
3. Monit configuration:

    check file mongrel.3000.req path /var/www/apps/b2b2dot0/current/tmp/mongrel.3000.req
      if timestamp > 90 seconds
        then exec "/export/admin-scripts/kill_mongrel.sh 3000"
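The kill script itself is not shown above; here is a sketch of what
ours does (paths and the pid file layout are assumptions):

    #!/bin/sh
    # /export/admin-scripts/kill_mongrel.sh <port>
    # Hard-kills the mongrel listening on <port> and starts a fresh one.
    PORT=$1
    APP=/var/www/apps/b2b2dot0/current
    PID_FILE=$APP/tmp/pids/mongrel.$PORT.pid

    kill -9 `cat $PID_FILE` 2>/dev/null
    rm -f $PID_FILE
    # remove the request file as well, so monit doesn't fire again right away
    rm -f $APP/tmp/mongrel.$PORT.req

    cd $APP && mongrel_rails start -d -e production -p $PORT -P $PID_FILE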