If you run rake cucumber then you get db:test:prepare as part of the
cucumber task. If you run cucumber straight from the command line
then you do not.
I agree that db:migrate:whatever, when run in the development
environment, should always produce db/schema.rb. It seems to me
utterly pointless to have a test db that is not in sync with the
current development db schema.
Rails sets up a test environment; you run tests via rake tasks for that reason.
The golden path is to run db:migrate if needed, e.g. after a pull brings in
new migrations, and then run the built-in testing tasks, or the ones
your third-party tool provides.
Cucumber only provides a way to run individual tests through its “cucumber” executable. I don’t think it should be the responsibility of outside gems or even the built-in testing frameworks to run rake db:test:prepare before the tests themselves.
The database should be taken care of when you run rake db:migrate for both the development AND test databases using the criteria I said earlier.
I'm not saying that's the job of an agnostic tool; in that case you would
normally have a plugin that does whatever is needed to give you a
normal workflow.
It already does. The point is that unless you do a db:test:prepare,
it's not loaded into the test env yet. Whenever you run "rake" or
"rake cucumber" or anything like that, db:test:prepare is then run;
the debate here is whether it should be run at migration time
rather than at test time.
I think a separate rake task that migrates and prepares test would be
useful. Since I have one project still on MySQL, I would be strongly
opposed to having this done as part of the normal db:migrate task,
because it takes MySQL about a minute to run db:test:prepare with
a moderate number of tables, and I don't want to have to wait for that
every time I migrate my development database.
But db:migrate:whatever => ['db:migrate', 'db:test:prepare'] would be useful.
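A minimal sketch of that compound task (the task name here is hypothetical; it assumes the standard Rails db tasks are already loaded):

namespace :db do
  namespace :migrate do
    desc 'Migrate the development database, then rebuild the test schema'
    task :with_test_prepare => ['db:migrate', 'db:test:prepare']
  end
end

Running rake db:migrate:with_test_prepare would then keep both databases in step without changing what plain db:migrate does.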
For me, personally, it is a smell that one needs to do that. The
public testing contract uses tasks; needing such a compound task a
posteriori means that you're not following the workflow, or that an
integration plugin is missing. You can of course implement that task
yourself if you want to, e.g. if such a plugin does not exist.
On the other hand, I don't like the idea of one environment touching
another environment. Not development, and not the rest either. You have
to think about the workflow for any given environment, including custom
ones, right? If I migrate the production database, that's it; I don't
want that task to be messing with anything else. If Rails is setting up
the test environment, that's the logical moment to do any housekeeping.
db/schema.rb is a bridge. Separation of concerns.
I agree that you normally want your test database and whatever your
environment is in sync. That's why the task aborts if it is not. But I
do think it is desirable that environments do not interfere with each
other.
The thing I "kinda" like about the current tasks is that they allow developers working with legacy databases to override/extend them in such a way that they can support scripting the database. This is useful when you have things like stored procedures or DB-specific data types. The process gets even more complicated when the developer is on a platform that does not provide native binaries to do the job, for instance UNIX talking to a SQL Server database. This article [1] on our adapter's wiki covers a process of extending rake so that you can do the equivalent of alias_method_chain for a rake task. The fact that there are discrete rake tasks that can be overridden or extended is a big asset for the edge cases others face, including myself at the day job.
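For illustration, one way to get that alias_method_chain-style wrapping of a rake task (a sketch of the general idea only, not the exact process from the wiki article; load_stored_procedures is a hypothetical helper):

# Save the framework's actions, clear them, then redefine the task so our
# legacy-database scripting runs before the original behaviour.
prepare = Rake::Task['db:test:prepare']
original_actions = prepare.actions.dup
prepare.clear_actions

namespace :db do
  namespace :test do
    task :prepare do |t|
      load_stored_procedures   # hypothetical: script procs, custom types, etc.
      original_actions.each { |action| action.call(t) }
    end
  end
end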
So that's my contribution to the conversation: if there is a core decision to consolidate things, just make sure you keep things far enough apart to accommodate those not living with a blessed db and work process.
require 'cucumber/rake/task'

Cucumber::Rake::Task.new(:features) do |t|
  t.cucumber_opts = "--format pretty"
end
# Rebuild the test schema before the features run.
task :features => 'db:test:prepare'
> I don't think it should be the responsibility of outside gems or
> even the built-in testing frameworks to run rake db:test:prepare before the
> tests themselves.
It's the responsibility of anyone running tests -- whether test/unit,
cucumber, autotest, textmate, etc -- to use the test harness.
> The database should be taken care of when you run rake db:migrate for both
> the development AND test databases using the criteria I said earlier.
The test database is ephemeral. Imagine it is created just before the
test run and destroyed just afterward. It is never migrated; it's
cloned on the fly *just for that test run.*
Anything that expects greater responsibility or longevity from the
test db raises red flags and design smells.
I see. I missed the distinction. My own workflow is that after any
migration I run rake cucumber, so I would not see this problem
in the normal course of events. On the other hand I do recall being
tripped up by this on at least one occasion. Some sort of
automation that loads the db schema into the test environment, driven
by db:migrate, seems like a worthwhile enhancement.
This is a nice ideal. However I agree with others who say that when
the schema gets large, recreating it can be slow. And slow tests are a
nightmare.
What if we could skip db:test:prepare if a NOPREP environment variable
is given? Or had an extra set of tasks, say test:quick:unit etc, which
skipped it? I've personally taken to doing the former in a large
project.
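As a rough sketch of the NOPREP idea (an assumption about how one might wire it up in an app-level rake file, not an existing Rails option; it assumes the framework's db tasks are already defined when this file loads):

# lib/tasks/noprep.rake (hypothetical file)
if ENV['NOPREP']
  # e.g. NOPREP=1 rake cucumber -- reuse the existing test schema as-is.
  # clear_actions keeps the task and its prerequisites defined;
  # it just no longer rebuilds the test database.
  Rake::Task['db:test:prepare'].clear_actions
end

Anything that depends on db:test:prepare still runs; the task simply does nothing for that invocation.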
Yeah, I've done this too. Basically, in my chained :clone_structure I copy the schema migrations to the test database, and I have that task, along with :dump, no-op unless the migrations are out of sync. This way I get to avoid a costly clone that takes almost 2 minutes for my legacy database.
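Something along these lines could implement that "out of sync" check (a sketch only; it assumes the schema_migrations table exists in both databases and that the environments are defined in config/database.yml):

def test_schema_in_sync?
  versions = %w(development test).map do |env|
    ActiveRecord::Base.establish_connection(env)
    ActiveRecord::Base.connection.select_values(
      'SELECT version FROM schema_migrations'
    ).sort
  end
  versions.first == versions.last
ensure
  # Put the connection back to whatever the surrounding task expects.
  ActiveRecord::Base.establish_connection(RAILS_ENV)
end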