memory leak ... and why doesn't a process get smaller?

I am trying to pin down a memory leak in a production Rails 2.0.2 application, and as part of the investigation I created a very simple Rails 2.0.2 application for testing.

FYI: the testing script and all the testing results are available here:

   http://pastie.org/155135

I created a simple model that has two attributes:

   rails leaktest
   cd leaktest
   script/generate scaffold comment name:string body:text
   RAILS_ENV=production rake db:migrate

My test consists of measuring the real memory usage of the Rails process after each request; a test run consists of accessing the default Rails page four times and then the index page for the model four times.
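The actual script and numbers are in the pastie above; as a rough illustration of the approach (assuming a single server process listening on port 3000, with its PID passed as the first argument), it boils down to something like:

   require 'net/http'

   pid = ARGV[0].to_i
   rss_kb = lambda { `ps -o rss= -p #{pid}`.to_i }   # resident set size in KB

   paths = ['/'] * 4 + ['/comments'] * 4             # default page x4, then the scaffold index x4
   paths.each do |path|
     Net::HTTP.get('localhost', path, 3000)
     puts "#{path}\t#{rss_kb.call} KB"
   end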

Throughout these tests I was able to measure the small memory leak (approximately 8-20 bytes per request) that Ara Howard tried, and failed, to track down last October.

I then created 100 copies of the model, each holding about 1 KB of data, and repeated the test.

I subsequently ran the test with 1000 and 10000 model objects.
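The seeding step amounts to something like the following (a sketch, run via script/runner in the production environment; the record count and body size are the only knobs):

   # create N Comment records of roughly 1 KB each
   n = (ENV['N'] || 100).to_i
   n.times do |i|
     Comment.create(:name => "comment #{i}", :body => 'x' * 1024)
   end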

Between 0 and 1000 copies of the model object, the process size varied between 55 MB and 70 MB; however, when I increased the number of model objects to 10000, the process size ballooned to about 220 MB.

The database is SQLite, and the actual file is only 11 MB when filled with 10000 model objects. I was surprised that instantiating an ActiveRecord object holding around 1 KB of data appears to add around 15 KB to the process size (the jump from roughly 70 MB to 220 MB for the additional 9000 records works out to about 17 KB per record).

   Change in AR object count    Change in process size
   0 => 1000                    20 MB
   1000 => 10000                150 MB

I then deleted all the model objects and ran the tests again, and the process size stayed at 218 MB.
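The cleanup itself is nothing more elaborate than something like this (a sketch, from script/console in the production environment):

   Comment.delete_all   # remove every row from the comments table
   GC.start             # explicitly ask Ruby to run the garbage collector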

I was surprised, because I thought the objects would have been garbage collected and the process size would have shrunk. I have not yet tried instrumenting this process with Dike or BleakHouse.
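A much cruder check than Dike or BleakHouse would be to count the live ActiveRecord instances after forcing a GC run (a sketch using plain ObjectSpace from script/console):

   GC.start
   live = ObjectSpace.each_object(Comment) { }   # each_object returns the number of objects yielded
   puts "live Comment instances after GC: #{live}"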

Does the fact that the process size stays at 220 MB mean that there are unused objects that can no longer be garbage collected?

Is there a way to have the process size shrink back down?