RSpec - concerns about mocking

I've been looking at what it would take to establish a semi-automatic mapping between specs and mocks. For example, you could flag a spec as representing a mock, so that a change in the lower-level object (and its attendant spec) would change the upstream mock's behavior. In theory this would help with the sort of refactoring problems you're describing, although there would still be sharp edges.

-faisal

Earlier versions of Mocha used not to allow you to stub a non-existent
method. I'm intending to re-introduce this with a manual override.
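
For illustration, here's a minimal sketch of the kind of thing that can
currently slip through (Account is a hypothetical class, and this assumes
RSpec is configured to use Mocha as its mocking library):

    class Account
      def balance
        # imagine a real implementation here
      end
    end

    describe "a balance report" do
      it "reads the account balance" do
        account = Account.new
        account.stubs(:ballance).returns(100)  # typo: no such method exists
        account.ballance.should == 100         # passes anyway
      end
    end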

Also there is an experimental feature in Mocha called responds_like or
quacks_like (http://mocha.rubyforge.org/classes/Mocha/Mock.html#M000029)
which constrains a mock to only allow methods that exist on a
specified object or class.
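
As a rough sketch of responds_like in use (with the hypothetical Account
class from the sketch above; as I understand it, the check fires when the
unknown method is actually called):

    describe "a constrained mock" do
      it "only responds to methods that Account defines" do
        account = mock('account')
        account.responds_like(Account.new)

        account.stubs(:balance).returns(100)
        account.balance.should == 100   # fine: Account#balance exists

        account.stubs(:ballance).returns(100)
        account.ballance                # raises NoMethodError
      end
    end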

However, in the end there's no substitute for acceptance tests that
exercise critical business functionality.

> Earlier versions of Mocha used not to allow you to stub a non-existent
> method. I'm intending to re-introduce this with a manual override.

May I recommend that the default behaviour be to ignore these, but that
you can explicitly tell Mocha to fail or warn you about expectations for
methods that don't exist?
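
Something along those lines might look like this; the configuration calls
below are just a sketch of the idea, not necessarily the API in the Mocha
version under discussion:

    # Default: stubbing a method the real object doesn't define is allowed.
    # Opt-in strictness (illustrative only):
    Mocha::Configuration.warn_when(:stubbing_non_existent_method) # warn
    Mocha::Configuration.prevent(:stubbing_non_existent_method)   # fail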

> Also there is an experimental feature in Mocha called responds_like or
> quacks_like (http://mocha.rubyforge.org/classes/Mocha/Mock.html#M000029)
> which constrains a mock to only allow methods that exist on a
> specified object or class.
>
> However, in the end there's no substitute for acceptance tests that
> exercise critical business functionality.

Hear! Hear!

It's important to understand that mocking as we approach it today
comes from TDD as part of an XP process, which divides tests into
Customer Tests and Programmer Tests (note: the lingo has morphed over
time, but the distinction has not). The idea is that you begin with
customer-defined end-to-end tests (that fail miserably at first) and
use those to steer you in the right direction in terms of what objects
to develop. Then you drive the development of those objects with more
focused tests.

In this environment, mocking allows us to keep the programmer tests
focused on small bits of functionality. This makes it much easier to:

- develop code when the other pieces it relies on don't exist yet
- understand failures
- run the tests fast
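
As a rough illustration (OrderProcessor and the payment gateway are
hypothetical, not code from this thread), a focused programmer test can
mock the collaborator so the example stays small, runs fast, and can be
written before a real gateway exists:

    class OrderProcessor
      def initialize(gateway)
        @gateway = gateway
      end

      def process(total)
        @gateway.charge(total)
      end
    end

    describe OrderProcessor do
      it "charges the gateway for the order total" do
        gateway = mock('payment gateway')
        gateway.should_receive(:charge).with(100).and_return(true)

        OrderProcessor.new(gateway).process(100).should == true
      end
    end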

The cost is the scenario Alex described: programmer tests all pass,
but a customer test fails. This is a GOOD THING. This is why we have
different levels of testing. If every level of testing exercises
everything in its entire environment, then we really only have one
level of testing, and we lose the unique benefits we intend to reap
from having different levels of testing.

Conversely, if you are not doing any high level testing in addition to
the object-level testing, then you probably shouldn't be using mocks
at all.

Cheers,
David
http://blog.davidchelimsky.net

I’ve only recently started using RSpec and I’m still trying to get my head around the entire test suite that needs to be set up.

I think I have a handle on testing models, controllers, and views separately, thanks to looking at the generated code for examples and plenty of pestering the RSpec list. I have also used mocks and stubs for most of it.

Are there any tutorial-style posts or anything similar that someone could point me towards for the above “acceptance tests”? I’m assuming these are similar to integration tests in standard Rails testing, where everything plays together to prove it out.
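
Something along the lines of the following is what I have in mind, I
guess (the routes and User model here are made up, and I gather the exact
base class and helpers depend on the Rails version):

    require 'test_helper'

    class LoginFlowTest < ActionController::IntegrationTest
      def test_visitor_can_log_in_and_reach_the_dashboard
        User.create!(:login => 'daniel', :password => 'secret')

        get '/login'
        assert_response :success

        post '/session', :login => 'daniel', :password => 'secret'
        follow_redirect!
        assert_response :success
      end
    end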

I guess I’m a bit concerned that if I just try to combine everything at the level I’m currently at, I’ll balls it up badly, so I’m looking for pointers.

Cheers
Daniel