Soft-failing delete-type migrations

I'm sure we've all had the experience of our development database
getting out of sync with our migrations, and having to comment out
this and that to get the migrations working again.

It seems to me that it would be good to have the option to keep going
after a failed drop-column or drop-table operation. After all, the
table is now in the state you wanted it to be; it just wasn't before.

But I can see an argument that keeping the current behavior is
desirable to ensure the correctness of a production system. So what
if it were an option - SOFT_FAIL_DROPS=[none|up|down|both] - that
defaulted to "both" on development, "down" on test, and "none" on
production?
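To make the idea concrete, here is a minimal sketch of what such an option might look like. Everything here is hypothetical: `SOFT_FAIL_DROPS`, `soft_fail_drop`, and the mode names are invented for this proposal, not part of Rails.

```ruby
# Hypothetical sketch only -- none of these names exist in Rails.
def soft_fail_mode
  ENV.fetch("SOFT_FAIL_DROPS", "none")   # none | up | down | both
end

def soft_fail_drop(direction)
  yield
rescue StandardError => e
  # Swallow the failure only when soft-failing covers this direction;
  # otherwise behave exactly as today and re-raise.
  raise unless soft_fail_mode == "both" || soft_fail_mode == direction
  warn "soft-failed #{direction} drop: #{e.message}"
end

# Inside a migration one might then write:
#   soft_fail_drop("down") { drop_table :widgets }
```

With the per-environment defaults above, a drop that fails because the column is already gone would log a warning in development but still abort in production.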

Would such a patch likely be smiled upon by the core team?

Jay


Or maybe on error it could offer the option to continue anyway when in development mode?

Fred

-1 from me. This “forgiveness” option would just be encouraging slack when developing.

Mislav

-1 from me too. I can see the temptation but Mislav has nailed it:
this just encourages slackness.

Tom


-1 from me too.

My problem is not that my dev database is out of sync; in fact, I thought the use of migrations prevents this. My problem is that after a certain number of migrations I can't migrate from zero on a new dev instance or new prod server, because my model is now in sync with migration 100 and something fails at migration 8. This usually happens when the #8 migration adds or changes data. What would interest me more is some way to run a migration against the model as it was at the svn revision where that migration was last changed. This isn't painful enough to me to actually write that code, though. ;-)


I'd guess that the solution to this is purely plugin material. But my
preferred way is to include the model definition inside the migration for
cases like that.

Interesting... I didn't expect such a negative reaction!

When the problem happens to me, it's usually because (a) I've written
a migration that aborted, or (b) I tried to merge and renumber two
migrations but forgot to roll back first, thus often yielding problem
(a). When that happens, I've got, say, a schema_info of 34, but half
of the fields from migration 35. So I can't migrate down, because
there are fields that shouldn't be there in 34, but neither can I
repeat the migration. This doesn't happen to you folks?

The real solution would be wrapping migrations in transactions, as in
http://dev.rubyonrails.org/ticket/5470, but that doesn't work on MySQL
or Oracle.
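The transactional-DDL point can be illustrated with a toy connection in plain Ruby (this is an illustration of the principle, not the Rails API): on a database with transactional DDL the whole batch either applies or rolls back, while MySQL and Oracle implicitly commit each DDL statement, so there is nothing left to roll back.

```ruby
# Toy model of transactional DDL -- not the Rails connection API.
class ToyConnection
  attr_reader :applied

  def initialize
    @applied = []
  end

  def execute(sql)
    @applied << sql
  end

  def transaction
    snapshot = @applied.dup
    yield
  rescue StandardError
    @applied = snapshot   # undo everything since the transaction began
    raise
  end
end
```

With this, a typo halfway through a migration leaves `applied` exactly as it was before the transaction started, instead of in the half-migrated state Jay describes.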

> My problem is not that my dev database is out of sync, in fact I thought the
> use of migrations prevents this. My problem is that after a certain number
> of migrations I can't migrate from zero on a new dev instance or new prod
> server because my model is now in sync with migration 100, and something
> fails at migration 8. This usually happens when adding or changing data in
> the #8 migration.

That's actually a solved problem - you want to use a local copy of the
model in your migration, not the model itself.
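For anyone who hasn't seen the pattern, a sketch of what "local copy of the model" means (the `ActiveRecord` stubs at the top are stand-ins so the snippet loads outside Rails, and the migration and column names are made up):

```ruby
# Stand-ins for ActiveRecord so this sketch loads without Rails;
# in a real app you would inherit the real classes instead.
module ActiveRecord
  class Base
    def self.reset_column_information; end   # no-op stand-in
    def self.update_all(sql); end            # no-op stand-in
  end
  class Migration
    def self.add_column(*); end              # no-op stand-in
    def self.remove_column(*); end           # no-op stand-in
  end
end

class AddStatusToOrders < ActiveRecord::Migration
  # Local copy of the model: the migration talks to this snapshot,
  # not to app/models/order.rb, so later changes to the real model
  # (new validations, removed columns) can't break this migration
  # when it is run from scratch on a fresh database.
  class Order < ActiveRecord::Base; end

  def self.up
    add_column :orders, :status, :string
    Order.reset_column_information           # pick up the new column
    Order.update_all("status = 'new'")
  end

  def self.down
    remove_column :orders, :status
  end
end
```

The constant `Order` inside the migration resolves to the nested snapshot class, not the application model, which is the whole trick.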

It doesn’t happen to me for a few reasons:

  1. I am very careful that my down migration matches my up migration. A name I mistype in the up (say, via copy and paste in TextMate) will be mistyped in the down as well, so when I find and fix the up I am always careful to fix the down too.

  2. I don’t often merge migrations, and I never renumber.

  3. If the schema number did get updated but the migration failed, I go and change the schema number back. I repeat this until the migration works.

Steven

This happens to me sometimes, so I can feel Jay's pain. Basically it most often happens for a reason that isn't covered in Steven's points: a typo in a migration line where you needed to use plain SQL. That causes the migration to abort just like Jay describes, and because most db engines (most notably MySQL) don't support transactions when defining the schema, the db structure ends up in an inconsistent state.

This has nothing to do with slackness imho, since writing pure SQL is often more or less trial and error. It hasn't hit me hard enough yet, though, that I would have done anything about it. Just commenting out a few lines in the migration and down-migrating is usually enough to fix the situation.

Cheers,
//jarkko

In this case, log into the server and experiment with the SQL you want to use, much as you would with script/console or irb. I personally write all of my migrations as SQL (helped by an sql_migration generator I wrote). I've also written a couple of rake tasks to bump migration numbers around during development when I delete or combine migrations, but that's really only housekeeping, not something I'd recommend doing.

All of this is helped, of course, by having transactional DDL in PostgreSQL. :)

Michael Glaesemann
grzm seespotcode net