Problem running migration

Hi, I'm trying to run a set of migrations that bring data over from a legacy project. The legacy project is in production and has lots of data, so while it runs the migration keeps allocating memory until it grabs it all. I'm paginating to bring the data over step by step. Besides, what is very weird is that the memory never goes down between migrations. For example, I do:

rake db:migrate

== 2 InitialMoveLegacyUsersToUsers: migrating

== 2 InitialMoveLegacyUsersToUsers: migrated (83.3700s)

== 3 AddPasswordResetCodeToUsers: migrating

-- add_column(:users, :password_reset_code, :string)
   -> 0.0877s
== 3 AddPasswordResetCodeToUsers: migrated (0.0883s)

== 4 AddOpenIdAuthenticationTables: migrating

-- create_table(:open_id_authentication_associations, {:force=>true})
   -> 0.0054s
-- create_table(:open_id_authentication_nonces, {:force=>true})
   -> 0.0044s
== 4 AddOpenIdAuthenticationTables: migrated (0.0105s)

And watching the memory usage between migrations, it never frees memory; it just keeps growing.

You probably need to show me what your migrations look like. You must be loading a load of database rows into Ruby objects?

Maybe you're setting them to a variable that remains in scope?

Don't know.

If you do rake db:migrate VERSION=2 and then separately do rake db:migrate VERSION=4, does that isolate the problem to the first migration?

The most problematic migration looks like this:

class MoveLegacySugarEntries < ActiveRecord::Migration
  class Entries < ActiveRecord::Base; end
  class Entry::SugarEntry < Entry; end

  def self.up
    Legacy::SugarReading.each do |le|
      u = User.find_by_login(le.user.login)
      unless u == nil
        TzTime.zone = u.tz
        Entry::SugarEntry.create!(
          :registered_at => DateTime.parse("#{le.date} #{le.time.hour}:#{le.time.min}"),
          :comment       => le.comment,
          :value         => le.reading,
          :user          => u)
      end
    end
  end
end

Where the ActiveRecord each is implemented like this:

class << ActiveRecord::Base
  def each(limit = 1000)
    rows = find(:all, :conditions => ["id > ?", 0], :limit => limit)
    while rows.any?
      rows.each { |record| yield record }
      rows = find(:all, :conditions => ["id > ?", rows.last.id], :limit => limit)
    end
    self
  end
end
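As far as I can tell, each assignment to rows replaces the previous batch, so the old records should become collectable. Two things I could still tighten, sketched below: the paging assumes the rows come back ordered by id (there's no explicit :order), and if query caching happens to be on while the migration runs, every batch's result set stays cached. The :order option and the clear_query_cache call are additions of mine:

class << ActiveRecord::Base
  def each(limit = 1000)
    # Explicit ordering: "id > last id" paging silently assumes the adapter
    # returns rows sorted by id, which isn't guaranteed without :order.
    rows = find(:all, :conditions => ["id > ?", 0],
                :order => "id ASC", :limit => limit)
    while rows.any?
      rows.each { |record| yield record }
      last_id = rows.last.id
      # Drop any cached result sets between batches; this only matters if
      # query caching is actually enabled during the migration.
      connection.clear_query_cache if connection.respond_to?(:clear_query_cache)
      rows = find(:all, :conditions => ["id > ?", last_id],
                  :order => "id ASC", :limit => limit)
    end
    self
  end
end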

So either way it's fetching 1000 rows at a time, which should be enough to keep memory from exploding. Which variable might not be going out of scope?
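To try to rule out the loop itself, here's a variation of the up method I could run. The logins hash and the GC.start call are my additions (memoise the user lookups, and force a sweep every 1000 rows), not something the original code does:

class MoveLegacySugarEntries < ActiveRecord::Migration
  # (same Entries and Entry::SugarEntry declarations as in the migration above)

  def self.up
    logins = {}   # memoise user lookups: one small entry per distinct login
    count  = 0

    Legacy::SugarReading.each do |le|
      u = (logins[le.user.login] ||= User.find_by_login(le.user.login))
      next if u.nil?

      TzTime.zone = u.tz
      Entry::SugarEntry.create!(
        :registered_at => DateTime.parse("#{le.date} #{le.time.hour}:#{le.time.min}"),
        :comment       => le.comment,
        :value         => le.reading,
        :user          => u)

      # Force a GC sweep every 1000 rows to see whether the allocated objects
      # are actually collectable or are being retained somewhere.
      GC.start if (count += 1) % 1000 == 0
    end
  end
end

If memory still climbs with the explicit GC.start in there, then something really is holding references to the old records rather than them just not having been collected yet.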

Anyway, how do you think it's possible for garbage from previous migrations to stick around? For example, if I run my migrations from 0, by the time I get to this one the process is already using about 1.5 GB of memory. Isn't that kind of weird?
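One thing I could do to answer that for myself is check whether the earlier migrations are actually retaining Ruby objects, as opposed to MRI simply not handing freed memory back to the OS (which, as far as I know, it generally doesn't). Something rough like this run between migrations, purely as a diagnostic sketch:

# Force a sweep, then count the ActiveRecord objects that are still reachable.
# If this number keeps climbing from one migration to the next, something is
# holding references; if it stays small while the process size keeps growing,
# it is more likely the Ruby heap simply never shrinking.
GC.start
counts = Hash.new(0)
ObjectSpace.each_object(ActiveRecord::Base) { |obj| counts[obj.class.name] += 1 }
counts.sort_by { |_, n| -n }.each { |klass, n| puts "#{klass}: #{n}" }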

Thanks for your help