Testing to prevent regression

Hi guys,

I've been seeing lots of commits lately without tests, for instance:

   Special case in deprecated CGI proxy layer for Mongrel CGI cookies [#… · rails/rails@b6e56ef
   Update bundled rack to fix more parameter parsing issues · rails/rails@238a6bb
   Fixes a typo in initializer.rb producing error: undefined local varia… · rails/rails@1dab1d3
   Still need to setup view paths · rails/rails@5c63be1

Most of these commits appear to fix some sort of regression, but no tests are added to make sure we can catch such a regression in the future. Is this done on purpose?

Manfred

Hi,

I've also been worried about the lack of testing in some recent
commits. ActionPack, and especially the Rack-related code, seems
to lack a lot of necessary testing. AFAIK the Rails policy used to be: have good test coverage or don't
commit. Has this changed?

Eloy

Many of those changes are related to development reloading, which is really hard to test with traditional unit tests. This has been really frustrating on my part because I have to manually run through some stupid checklist for development reloading vs. production for every patch I accept related to this.

Seriously, if you have any good ideas how to automate real functional tests in this area I'd love to get them in.

My first intuition is to spawn a new Rails process from a test and see if it behaves the way it should. I'll try to free up some time tomorrow to give it a try.
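Roughly what I have in mind, as an untested sketch (the fixture app path, the test name, and the expected output are just placeholders):

    require 'test/unit'

    class SpawnedAppTest < Test::Unit::TestCase
      # A throwaway application generated just for these tests.
      RAILS_APP = File.expand_path(File.dirname(__FILE__) + '/fixtures/dummy_app')

      def test_app_boots_in_development_mode
        output = Dir.chdir(RAILS_APP) do
          `ruby script/runner -e development 'puts Rails::VERSION::STRING' 2>&1`
        end
        assert $?.success?, "runner exited with #{$?.exitstatus}: #{output}"
        assert_match(/\d+\.\d+\.\d+/, output)
      end
    end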

Manfred

That sounds like a plan. Thanks for the response.

Eloy

The tricky part of this would be to ensure that the spawned environment is identical to the current environment. For example, environment/rake variables, the Ruby interpreter being used, etc.
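For the interpreter at least, rbconfig already knows which Ruby is running the tests, so the child could be spawned with exactly the same one. A rough sketch:

    require 'rbconfig'

    # Build the full path to the interpreter running this process, rather than
    # trusting whatever `ruby` happens to be first on the PATH.
    current_ruby = File.join(
      RbConfig::CONFIG['bindir'],
      RbConfig::CONFIG['ruby_install_name'] + RbConfig::CONFIG['EXEEXT'].to_s
    )

    # ENV is inherited by spawned children by default, so only deliberate
    # overrides need to be passed along explicitly.
    system(current_ruby, '-v')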

-- Chad

That sounds like a job for forking.

That makes accumulating the failures from the child process essentially impossible, though; you'd end up with crazy output from the child intertwined with the parent process's.

My assumption was that we would create a pipe before forking; each of the two processes would close one end of the pipe and talk into the other, passing back the output/status. The parent would then check that it matches (assert_equal "everything worked", line_read_from_pipe).
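Something along these lines (untested sketch; the test name and the "everything worked" message are only placeholders):

    require 'test/unit'

    class ReloadingSmokeTest < Test::Unit::TestCase
      def test_child_reports_success_over_pipe
        reader, writer = IO.pipe

        pid = fork do
          reader.close                # the child only talks into the write end
          # ... exercise the reloading behaviour under test here ...
          writer.puts "everything worked"
          writer.close
          exit! 0                     # skip at_exit so the child doesn't re-run the suite
        end

        writer.close                  # the parent only listens on the read end
        line_read_from_pipe = reader.gets.to_s.chomp
        Process.wait(pid)

        assert_equal "everything worked", line_read_from_pipe
      end
    end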

But I think forking doesn't really work on Windows (I haven't tried, but it's certainly hard to implement in other languages, so I imagine it doesn't work in Ruby either).

But, this is all a problem with spawning in general, so if we do go with spawning/forking as suggested, testing on Windows would get quite difficult.
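We could at least skip the fork-based tests cleanly on platforms that can't fork, something like this rough sketch (the helper name is made up):

    # fork raises NotImplementedError on platforms that don't support it,
    # so probe for it once and skip the fork-based tests when it's missing.
    def fork_supported?
      pid = fork { exit! 0 }
      Process.wait(pid)
      true
    rescue NotImplementedError
      false
    end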

Anyway, I'm just saying that forking would be better than spawning in general.

As for whether this is the 'right' way to test Rack integration and so on, I'm not sure we can solve the problem all neatly wrapped up inside the test harness, personally. We might just have to deal with it at a higher level: have multiple continuous integration targets, one for each different thing we need to test against. We don't have to run the full test suite for each, and many could live on the same server with a bit of coaxing.

Well, let's just talk code, shall we (: I'll write up some stuff and present it to the mailing list, and we can debate whether it's a good solution afterwards. About support for testing on Windows: any help is appreciated, as I don't understand Windows.

Manfred

Ok, I spent some time digging through commits and writing tests. The tests I wrote actually don't have anything to do with reloading because I couldn't find a recent commit that required any forking or spawning to test.

  http://github.com/Manfred/rails/commits/railties-tests

Josh, can you point me to a specific piece of code that would require such a test?

Manfred

Those look nice (didn't run 'em myself). Mocks to the rescue! Good job, thanks.

-- Chad

Josh, can you point me to a specific piece of code that would require such a test?

Any of the changes like 69c049f5ab45bf9bfb0d269acea0773581905fd4

That fixed problems with storing AR objects in the session:

1) the session stores call Marshal.dump
2) the marshalling code checks respond_to?
3) the dispatcher hook had triggered reload!
4) so the class that the instance refers to has been nuked (cf. earlier discussions)
5) so the session serialization fails

Great work so far. If you can figure out a reliable way to do regression testing with reloaded code, then I'll gladly (gladly!) apply it :)