What could be causing my pack of Mongrels to get so fat?

I have 15 mongrels running on one machine, each of which comes to life at about 35M. I found one source of their growth to be a caching mechanism that stored data in class variables and never expired it, so each Mongrel built its own cache store. This took them to 170M+ in less than 12 hours; removing it got them each down to growing to 100M+ over a weekend.
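A minimal sketch of the kind of cache that behaves this way (the class and its lookup are made up for illustration):

# A never-expiring, per-process cache: @@store lives as long as the
# Mongrel process does, so every process pays for its own growing copy.
class LookupCache
  @@store = {}

  def self.fetch(key)
    @@store[key] ||= expensive_lookup(key)  # nothing is ever evicted
  end

  def self.expensive_lookup(key)
    # stand-in for the real work (DB query, API call, ...)
    key.to_s * 100
  end
end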

100M is still bigger than I’d like, and I want to know why this is happening before I slap monit on there and tell it to euthanise any mongrel over 80M.
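For reference, a monit rule of the sort I mean might look like this (the pidfile path, port and cluster config file are illustrative, not our real setup):

check process mongrel_8000 with pidfile /var/run/mongrel.8000.pid
  start program = "/usr/bin/mongrel_rails cluster::start -C /etc/mongrel_cluster/app.yml --clean --only 8000"
  stop program = "/usr/bin/mongrel_rails cluster::stop -C /etc/mongrel_cluster/app.yml --only 8000"
  if totalmem > 80.0 MB for 2 cycles then restart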

Where can I find a description of what data persists beyond a Rails call in the Ruby environment? Class variables? Probably not instance variables, but anything else?

Cheers, Morgan Grubb.

My understanding is this:

When your mongrel is first loaded, it loads only the initial stuff needed to make your application start. Once people start hitting the site, it starts loading whatever it needs to serve the requests, hence your mongrels getting fat.

Also, pray tell how you have 15 mongrels running in 100MB.

> My understanding is this:
>
> When your mongrel is first loaded, it loads only the initial stuff needed to make your application start. Once people start hitting the site, it starts loading whatever it needs to serve the requests, hence your mongrels getting fat.

But which bits actually stay in memory and which bits get flushed out when the current page request ends? I’ve proven that class variables persist but I’m not sure that’s accounting for all of the memory they’re sucking up.

> Also, pray tell how you have 15 mongrels running in 100MB.

Apologies, each mongrel starts life at 35M and currently grows to 100M.

Cheers.

A thought that sprang to mind as soon as I clicked the send button:

What are you using to run the mongrels?

Class variables, constants etc. Beware also of Ruby extensions, as things can get a little complicated with them (RMagick is a good example of this; see http://rubyforge.org/forum/forum.php?thread_id=1374&forum_id=1618). Another thing is that at the start your mongrel has loaded none of your application's code (just the Rails framework), but as time goes on each mongrel will have more and more of your application loaded.
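As a hedged sketch of defensive RMagick use (resize_to_fit and destroy! exist in recent RMagick releases, but check your version before relying on them):

require 'RMagick'

def make_thumbnail(src, dest)
  img = Magick::Image.read(src).first
  thumb = img.resize_to_fit(120, 120)
  thumb.write(dest)
ensure
  # ImageMagick keeps pixel data outside the Ruby heap, so Ruby's GC
  # underestimates the real cost; freeing explicitly keeps Mongrels lean.
  img.destroy! if img && img.respond_to?(:destroy!)
  thumb.destroy! if thumb && thumb.respond_to?(:destroy!)
end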

Fred

Not quite sure what you mean. They’re being run with mongrel_cluster. They’re not running Swiftiply or evented Mongrel at the moment (I’d like to get the weight under control before I change the diet).

Cheers.

I’d like to be running quite a few more Mongrels, actually, but I have to limit it to 15 so I don’t run out of memory. That’s been fine up until now, but since we’re generating quite a bit more traffic lately we actually have to look at some proper scaling. There’s no point adding another box until I’m sure the current one is doing the best it can, and at the moment I’m really not sure of that at all.

Cheers.

From what I’ve read elsewhere, people generally accept 60M as a good size for a Mongrel to get to, so I’m wondering what they’re doing differently. It’s not possible that the few constants we use (all defined in environment.rb) would expand in memory to 100M+. I’d like to think this site isn’t so complicated that it bloats out by so much. It’s not doing much that is actually complicated (all the tricky work is in the spam detection).

We don’t process that many images, relatively speaking, but I’m currently investigating replacing RMagick (among other things) as I am aware of its leaking-like-a-sieve tendencies.

Would Singleton objects persist in memory, perhaps?
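For the record: yes. The Singleton module memoizes the instance on the class, so anything the instance holds lives as long as the process. A quick sketch, with a made-up class:

require 'singleton'

class SpamScorer
  include Singleton

  def initialize
    @seen = {}  # held by the memoized instance, so it persists
  end           # across requests just like a class variable would

  def record(key, score)
    @seen[key] = score
  end
end

SpamScorer.instance.record('some-ip', 0.9)  # survives until the process dies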

How big are the Mongrels you guys are running?

Cheers.

> From what I’ve read elsewhere, people generally accept 60M as a good size for a Mongrel to get to, so I’m wondering what they’re doing differently. It’s not possible that the few constants we use (all defined in environment.rb) would expand in memory to 100M+. I’d like to think this site isn’t so complicated that it bloats out by so much. It’s not doing much that is actually complicated (all the tricky work is in the spam detection).

Memory use depends on the application; 60M is an average, not a maximum. As you can see in (part of) my “top” dump, some apps are using substantially less, some substantially more.

> We don’t process that many images, relatively speaking, but I’m currently investigating replacing RMagick (among other things) as I am aware of its leaking-like-a-sieve tendencies.

RMagick doesn’t leak memory like a sieve, FYI; I have an app that uses it extensively and memory use is rock stable. In the past you had to start the garbage collection manually, and if you didn’t, memory consumption would rise. Since the latest release, however, garbage collection is automatic. You do need to know that, because of the size of the library, RMagick will consume quite a bit of memory. So if you’re only looking for thumbnailing or simple transformations and memory is a constraint, other solutions like MiniMagick or ImageScience might be more appropriate.
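For example, a thumbnail via MiniMagick (old-style MiniMagick API; filenames are illustrative) keeps the heavy lifting out of the Rails process entirely:

require 'mini_magick'

# MiniMagick shells out to ImageMagick's command-line tools, so the
# pixel data never lives inside the Rails process at all.
image = MiniMagick::Image.from_file('upload.jpg')
image.resize '120x120'
image.write 'thumb.jpg'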

> How big are the Mongrels you guys are running?

Too bad the server was restarted just a couple of days ago, because I can honestly tell you you’ll see about the same figures in a month or three. Also, having 15 mongrels for an application that’s only just been released is way overkill and will probably even slow your server down. There have been so many articles about this subject that a simple “how many mongrels” search on Google will turn up quite a few results. Please read those.

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
 2342 capistra  16   0 70108  51m 4384 S 15.3  7.2 486:47.38 mongrel_rails
 2336 capistra  16   0 70140  51m 4380 S  0.0  7.2 491:29.64 mongrel_rails
 2339 capistra  16   0 69940  50m 4384 S  0.0  7.1 492:32.39 mongrel_rails
 2354 capistra  16   0 49408  26m 3352 S  0.0  3.8   0:03.83 mongrel_rails
 2357 capistra  16   0 49420  21m 3352 S  0.0  3.0   0:03.78 mongrel_rails
 2361 capistra  16   0 61324  41m 3568 S  0.0  5.8   0:11.49 mongrel_rails
 2365 capistra  16   0 67224  33m 3076 S  0.0  4.6   0:09.13 mongrel_rails
 2368 capistra  16   0 68024  34m 3028 S  0.0  4.8   0:09.31 mongrel_rails
15881 capistra  16   0 59800  40m 3572 S  0.0  5.7   1:40.15 mongrel_rails
22270 capistra  16   0 61536  42m 3604 S  0.0  5.9   0:31.68 mongrel_rails

Best regards

Peter De Berdt

Morgan,

I get about the same numbers as you on the size of the Mongrels in the same time frame (although I have only 4 instances). I use monit to restart them when they get too big but, like you, I would like a better understanding of why they grow. Unless you use a lot of class variables or globals (I don't), I do think there has to be a memory leak somewhere. But of course it shouldn't really be possible to leak memory in a garbage-collected language. I haven't tried switching off monit to see if there's an upper limit yet; have you?

Cheers, Jan

To clarify things: in the first few days memory consumption rises up to the point where it is at the moment; after that it stays more or less the same. And at least one of our applications can be called quite complex and has a fair number of users, as well as a ton of media players polling the server every minute. In that app we use RMagick, as well as ffmpeg for extracting a frame from uploaded video, quite a few plugins, Hyper Estraier for fulltext indexing (which is a separate process, but anyhow), …

I don’t know the innards of Rails and Mongrel well enough to say what causes memory consumption to go up and what is released when, but fetching huge recordsets all at once instead of using pagination or lazy loading could be the cause of such memory use. If this is the case, you could basically have a single-controller, single-model app that uses 200MB of RAM because you’re fetching a couple of thousand records into an instance variable on each request.
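In Rails 1.x terms, the difference is roughly this (the model name and page size are illustrative):

# loads every row as an ActiveRecord object, all held in memory at once
@posts = Post.find(:all)

# loads one page at a time instead
page = (params[:page] || 1).to_i
@posts = Post.find(:all, :limit => 30, :offset => (page - 1) * 30)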

Best regards

Peter De Berdt

Jan,

I found my upper limit when each Mongrel was chewing 170M and the box went massively into swap. The rapid growth to 170M, I found, was down to a badly implemented caching mechanism from the previous developer that chewed memory and got very few cache hits. Taking it out didn’t harm performance at all and stopped the rapid memory consumption.

But since then I’m unsure as to where the rest of the memory is going. Globalize has its own internal caching mechanism, which I’m currently looking to monkey-patch to use memcached instead, but even that could only account for approximately 2M per mongrel. I’ll see what happens when I replace/update RMagick (next on the list), but we don’t do enough image processing in the mongrels for it to have much effect.
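The shape of that monkey-patch would be something like the following, using the memcache-client gem; the helper is hypothetical, not Globalize’s real API:

require 'memcache'

CACHE = MemCache.new('localhost:11211')

# Hypothetical helper: push cached lookups out of each Mongrel's heap
# into one shared memcached, paid for once instead of fifteen times.
def cached_fetch(key, ttl = 3600)
  value = CACHE.get(key)
  return value unless value.nil?
  value = yield               # the real lookup, e.g. Globalize's DB fetch
  CACHE.set(key, value, ttl)  # expires instead of living forever
  value
end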

I suspect that whatever is causing these fat dogs is probably also giving them diabetes - my mongrels die spontaneously far more often than I would like. Unfortunately I can only really deal with one massively anomalous issue at a time.

Cheers.

> I don’t know the innards of Rails and Mongrel well enough to say what causes memory consumption to go up and what is released when, but fetching huge recordsets all at once instead of using pagination or lazy loading could be the cause of such memory use. If this is the case, you could basically have a single-controller, single-model app that uses 200MB of RAM because you’re fetching a couple of thousand records into an instance variable on each request.

I could understand that if we were doing massive fetches on every single call, or if the mongrels went up and down in weight as large queries were turned into objects. Unfortunately neither is the case, as the first thing I did for massively improved performance was to fix the queries being run.

Cheers.

How about trying to narrow down the place where the memory consumption occurs by httperf’ing a few controllers/methods? Basically, simulate a user hitting a page a few thousand times, see which page is causing your memory consumption to go up so quickly, and work from there: see what is being called, what models, what plugins, …
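For example, with httperf (the host, port and URI are illustrative):

httperf --server www.example.com --port 8000 --uri /posts/index \
        --num-conns 5000 --rate 50

Then watch each Mongrel’s RES column in top between runs to see which action makes it grow.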

Best regards

Peter De Berdt