I was searching around to see how other people handle the issue of
updating a production database during a release of a new version of
an application, and all I could find was that (according to one post)
it is typically done via migrations.
Migrations have a problem, though. After a while, the code changes to
a point where earlier migrations stop working. If you are doing
simple schema changes in your migrations, they will probably keep
working, but if you are modifying application data in your
migrations, then sooner or later someone will do something like add a
new column and write validation code for it, which causes earlier
migrations that worked with that table to fail. Usually this is only
a problem if a few weeks have elapsed since the last update.
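To make the failure mode concrete, here is a minimal sketch (all
names hypothetical) of a data-changing migration that ran fine when
written and is later broken by a model change:

  # An early migration that seeds some application data:
  class AddDefaultProducts < ActiveRecord::Migration
    def self.up
      Product.create!(:name => 'Widget')  # worked fine at the time
    end

    def self.down
      Product.delete_all(:name => 'Widget')
    end
  end

  # Months later, someone adds a column and a validation to the model:
  class Product < ActiveRecord::Base
    validates_presence_of :product_code
  end

  # Re-running the old migration from scratch now fails: create!
  # enforces the model's *current* validations, and at that point in
  # the migration sequence the product_code column does not even
  # exist yet.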
Have other people run into this issue? How are you working around it
(or avoiding it)?
> Migrations have a problem, though. After a while, the code changes to
> a point where earlier migrations stop working.
> [...]
> Usually this is only a problem if a
> few weeks have elapsed since the last update.
> Have other people run into this issue? How are you working around it
> (or avoiding it)?
You shouldn't be running lots of old migrations. In general, new
installations should be done with rake db:schema:load, not by running
every migration since the beginning.
Use migrations properly and this problem will go away.
I was actually talking about updating an existing production
database. In that case, the only migrations run would be the ones
between the previous release and the current one.
I have not tried your suggestion of rake db:schema:load but, judging
by the contents of schema.rb, it would drop all of the tables and
recreate them. We would lose both our application data and our user
data. Perhaps I misunderstood how you meant it to be used?
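For reference, each table in schema.rb is declared along these lines
(this excerpt is made up, not our real schema), and it is the
:force => true option that drops an existing table before recreating
it:

  ActiveRecord::Schema.define(:version => 20091117000000) do
    create_table "products", :force => true do |t|
      t.string   "name"
      t.datetime "created_at"
      t.datetime "updated_at"
    end
  end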
Thanks, Marnen, for mentioning seed_fu -- that is an intriguing
plugin, and I have been thinking about whether we could use it. I
think, though, that we have a situation in which it would not work
well for at least some of our tables.
As a concrete example, suppose you have a large table of product
information. Loading it from a fixture (or seed_fu file) could be
slow, and in any case some of that information will probably be
edited through a separate administrative web application, which would
not update the seed/fixture files.
Also, you might occasionally want to add a new field to the table
(say, another product code field) and compute its initial values via
a migration (there is a sketch of this after the list below). In this
case a migration is convenient because:
* you want to try out the change with the code that uses it in your
development area before checking in
* computing initial values for the new field might be very quick and
easy with ruby code
* you want the other developers to get the data change when they do
an update of the project files
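Such a migration might look roughly like this (the legacy_code and
sku columns are made up for illustration):

  class AddLegacyCodeToProducts < ActiveRecord::Migration
    def self.up
      add_column :products, :legacy_code, :string

      # Make sure the model sees the column we just added
      Product.reset_column_information

      # Computing the initial values is quick and easy in Ruby
      Product.find(:all).each do |product|
        product.update_attribute(:legacy_code, product.sku.to_s.rjust(8, '0'))
      end
    end

    def self.down
      remove_column :products, :legacy_code
    end
  end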
If it is very important that the new product code field have a value
for all records, you might put some validation code into the model
class to prevent new records from being created without it. Doing so
would break any earlier migration that created product records.
I don't really see any solution to this problem except identifying
which tables are like this and copying them directly (from a dump
file) into the production database. The migrations that modify them
can be written to do nothing when run on the production system; that
way the migrations can still be run to update the other tables (e.g.
user data tables) which cannot be copied.
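For example, a data-changing migration could guard itself with an
environment check, roughly like this (a sketch only; the table and
column are hypothetical, and on older Rails the RAILS_ENV constant
plays the role of Rails.env):

  class UpdateProductData < ActiveRecord::Migration
    def self.up
      # On production this table is refreshed from a dump file
      # instead, so skip the data changes there.
      return if Rails.env.production?

      Product.reset_column_information
      Product.update_all(:discontinued => false)  # hypothetical fix
    end

    def self.down
      # nothing to undo
    end
  end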