I don't know how everyone else does things, but we've taken to declaring the fixtures for all our models in test_helper. While this may be contrary to the ideals, in practice it ends up being necessary: once you start travelling through the associations a lot of models end up being touched, and maintaining the set of fixtures used by each test file was an increasing hassle.
This does however lead to an inefficiency: fixtures are reloaded once per test case (they are cached in @@already_loaded_fixtures[self.class]), which makes sense if they are defined once per test case, but not if you use them the way we do. With fairly minor changes, we've made fixtures (optionally) cached for the duration of the whole test run. On our setups this has cut the time it takes to run the tests by anywhere between 30 and 50%, depending on the number of fixtures used.
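To make the trade-off concrete, here is a toy illustration of the difference (illustrative only, not the actual fixtures.rb code):

  load_count = 0
  cache = {}

  fixture_load = lambda do
    load_count += 1        # stands in for deleting and re-inserting the fixture rows
    :loaded
  end

  test_cases = %w(CustomerTest OrderTest InvoiceTest)

  # Cache keyed on the test case class (current Rails behaviour): three loads.
  test_cases.each { |klass| cache[klass] ||= fixture_load.call }
  puts load_count   # => 3

  # Cache keyed on the fixture set instead: one load for the whole run.
  load_count = 0
  cache = {}
  test_cases.each { |_klass| cache[[:customers, :orders]] ||= fixture_load.call }
  puts load_count   # => 1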
Anyway, what are people's thoughts on this? Worth writing a patch, or better as a plugin? (Or is everyone too busy with travel plans for RailsConf Europe?)
Please send me this patch. I have attempted this in the past but failed to get it working without regressions. If it works I'll most certainly merge it into trunk.
I've attached what we've been using. As long as you declare all your fixtures in advance it has worked fine in our apps (this isn't, however, the case for the unit tests that come with ActiveRecord - for example method_scoping.rb has several subclasses of Test::Unit::TestCase with different fixture sets). In its current state it's very much an 'all or nothing' setting. It also needs mocha.
Caveats aside, here's what I did (against Rails 1.2.3):
I've thought briefly about rewriting things so that you didn't have to have all of your fixture declarations in test_helper: i.e. the first time we see fixtures :customers we load it, but on subsequent occasions we don't. This would require a little more work, but I think it would be doable. You'd still need to stub out Time.now, though, or else you'd run into consistency issues: fixtures for different tables with created_at set to <%= Time.now.to_s(:db) %> are identical .yml files, but produce different rows in the database if the two files are loaded at significantly different times.
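The created_at issue is easy to see outside Rails; a quick standalone illustration (using strftime in place of ActiveSupport's to_s(:db)):

  # Evaluating the same ERB fixture template at two different points in the
  # test run yields two different rows - hence the need to pin Time.now.
  require 'erb'

  template = ERB.new("created_at: <%= Time.now.strftime('%Y-%m-%d %H:%M:%S') %>")

  first = template.result
  sleep 1
  second = template.result

  puts first == second   # => false: identical .yml source, different database rows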
Thoughts appreciated. In particular I'd love to know a better way than the horrible thing I had to do to be able to (effectively) alias_method_chain on setup_with_fixtures. Is that horribleness there to cope with the case that there may or may not be an end-user setup method that we don't want to clobber, or is there some other reason?
I've improved this so that it no longer cares whether your fixtures are declared in test_helper.rb or not - it simply ensures any given fixture is not loaded more than once.
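The bookkeeping amounts to roughly this (a sketch, not the plugin's actual code; the module and method names are made up):

  require 'set'

  # Remember which fixture tables have already been inserted for this run
  # and skip them on later declarations.
  module FixtureLoadTracker
    @loaded_tables = Set.new

    def self.load_once(table_names)
      to_load = table_names.map { |t| t.to_s } - @loaded_tables.to_a
      return if to_load.empty?
      yield to_load                     # stands in for Fixtures.create_fixtures
      @loaded_tables.merge(to_load)
    end
  end

  # The first declaration pays for the load; later identical ones are free.
  FixtureLoadTracker.load_once([:customers, :orders]) { |t| puts "loading #{t.inspect}" }
  FixtureLoadTracker.load_once([:customers]) { |t| puts "loading #{t.inspect}" }   # prints nothing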
So now you just install the plugin (http://svn1.hosted-projects.com/fcheung/faster_fixtures/trunk) and add require 'faster_fixtures' to test_helper.rb.
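That is, the only change is the one extra require in test_helper.rb; the rest below is just the stock Rails 1.2-era boilerplate, shown for context:

  # test/test_helper.rb
  ENV["RAILS_ENV"] = "test"
  require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
  require 'test_help'
  require 'faster_fixtures'   # enables the cross-test-case fixture cache

  class Test::Unit::TestCase
    self.use_transactional_fixtures = true
    self.use_instantiated_fixtures  = false
  end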
This is great. Highrise's unit tests went from 18 seconds to 10
seconds. Let's definitely get this in for Rails 2.0.
I love the plugin and I saw a speedup as well but it broke a few
tests. 9/250 tests failed - several had something to do with testing
whether model attributes had been set internally via a before_save or
similar callback. I can only guess that there’s some extra caching going on.
I don’t have time to debug it just yet but I recommend we get some more folks to
try it out on their app before we fold this into
core.
I'd be interested in seeing what's going on. It does introduce extra caching (indeed that's the whole point), but there always was some caching: it's just that instead of per testcase caching it's common across all test cases.
The hack for chaining the “setup” method that you declared ugly in the comments is really not bad. I’ve been thinking about it and I can’t really think of another way to make it happen in Ruby, except monkeypatching the TestCase run method to allow for custom, chainable callbacks.
It just irks me that I had to essentially reproduce the method_added from fixtures.rb, along with the messy business of first redefining method_added to be a no-op, etc. Good to know that I haven't missed any tricks.
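For anyone following along, the pattern in question looks roughly like this (a self-contained illustration with made-up names, not fixtures.rb or the plugin verbatim):

  class TestCaseBase
    def setup_with_fixtures
      puts "loading fixtures"
    end

    def self.method_added(name)
      if name == :setup && !method_defined?(:setup_without_fixtures)
        # The guard above is what stops the define_method below from
        # retriggering this hook forever.
        alias_method :setup_without_fixtures, :setup
        define_method(:setup) do
          setup_with_fixtures      # framework setup runs first...
          setup_without_fixtures   # ...then the user's own setup
        end
      end
      super
    end
  end

  class MyTest < TestCaseBase
    def setup
      puts "user setup"
    end
  end

  MyTest.new.setup   # => "loading fixtures" then "user setup"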
It looks like all the errors I'm seeing relate to the fact that the plugin freezes time during test cases. My app has a few tests that do things like:
  time = Time.now
  record = Model.wacky_find_or_create
  assert record.created_at > time
I took just a few minutes to refactor my tests and they all pass now but this is definitely a gotcha for the plugin.
Yes, when I first came up with this (we've been using this internally for a while) we had a few cases like this. I think we just changed all of our > in those cases to be >=.
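Concretely, the assertion from the example above becomes:

  # With Time.now pinned for the run, created_at can equal the captured
  # time, so the comparison has to allow equality:
  assert record.created_at >= time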
My internal justification for that was that, morally, my tests shouldn't care whether I'm running on a fast machine or a slow one, and if my computer were infinitely fast it could execute tests faster than the granularity of Time.now.
It would be nice not to have to freeze time, but you pretty much have to for things to be consistent.
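For reference, the mocha approach boils down to something like this (a sketch, not the plugin's actual code; FIXTURE_TIME is a made-up name, and newer mocha versions want 'mocha/api' or 'mocha/minitest' instead of a bare 'mocha'):

  require 'mocha'

  FIXTURE_TIME = Time.local(2007, 8, 1, 12, 0, 0)

  # Every subsequent call to Time.now answers with the pinned value, so an
  # ERB-evaluated fixture timestamp comes out identical no matter when in
  # the run the file is loaded.
  Time.stubs(:now).returns(FIXTURE_TIME)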
I'm seeing a 55% speedup in unit tests and 25% in functional tests. Wicked.
I've got a patch just about ready (all the tests in AR pass with it; I just need to add some tests specifically for the faster fixtures stuff).
I did have 2 queries:
- Should there be a flag to turn this on and off? I've already set things up so that encountering a non transactional test jettisons the cache (since the test could be junking the test data, requiring fixtures to be reloaded)
- I've used mocha as my time-fixing mechanism, which would mean an added dependency. activerecord/test/mixin_test.rb shows a different way of fixing the value of Time.now. Would it be preferable to use something similar instead (something along the lines of the sketch below)?
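Something like this, for instance (just a sketch; not necessarily how mixin_test.rb actually does it, and Time.forced_now is a made-up name):

  # One dependency-free way to pin Time.now.
  class Time
    class << self
      attr_accessor :forced_now

      alias_method :now_without_forcing, :now
      def now
        forced_now || now_without_forcing
      end
    end
  end

  Time.forced_now = Time.local(2007, 8, 1, 12, 0, 0)
  puts Time.now          # pinned value
  Time.forced_now = nil  # back to the real clock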
I’d definitely go for a solution without Mocha. Starting from scratch, a Rails app should have no dependencies. In theory, you can develop an app even without Rake.