Test

Automated tests cannot prove the absence of bugs; they can only prove their presence.

In response to the last statement of the quote above: If you want to improve your software, refactor. If you cannot refactor your software because you're afraid you might break something, you don't have enough automated tests.

--max

Max Muermann wrote:

> Automated tests cannot prove the absence of bugs; they can only prove their presence.

Washing your hands before performing surgery cannot get rid of /all/ germs, so you may as well not bother! :wink:

> Washing your hands before performing surgery cannot get rid of /all/ germs, so you may as well not bother! :wink:

Well said :0)

All of this also assumes that the only benefit we get from testing is fewer bugs. There is a trend towards test-driven development in the Rails community, and once you start doing that, testing becomes far more than just 'finding bugs'.

Also, the original quote's metaphor breaks down when applied to TDD. Testing first is like asking "What weight do I want to be?" and then working out until you reach that weight - which makes far more sense :0)
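That "decide the target first" rhythm can be sketched in a few lines of Ruby. This is a minimal illustration, not anyone's actual code; the method name `average` and the numbers are invented for the example.

```ruby
# Hypothetical sketch of the test-first rhythm: state the target
# ("What weight do I want to be?") as a failing expectation, then
# write just enough code to meet it.

# Step 1 ("red"): the target, written before any implementation exists.
# Running this line first would raise NameError, which is the point --
# the test describes behaviour that does not exist yet:
#
#   raise unless average([2, 4, 6]) == 4

# Step 2 ("green"): the smallest implementation that meets the target.
def average(numbers)
  numbers.sum.to_f / numbers.size
end

# Step 3: the target now passes, and stays behind as a regression guard.
raise "target not met" unless average([2, 4, 6]) == 4
```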

And man... 'develop better'? - what a nerdy thing to say...

Writing tests informs the design of whatever I'm testing. I often refactor code to make it more testable, which generally means decreasing coupling, improving interfaces, and applying consistent conventions. These improvements don't just improve testability; they improve the overall quality of my code.

More tests + intelligent refactoring == better code.
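As a concrete (invented) illustration of that decoupling: a class that constructs its own collaborator is hard to observe from a test, while one that accepts the collaborator can be handed a fake. The class and names below are made up for the example.

```ruby
require "stringio"

# Hypothetical sketch: refactoring for testability by injecting a
# collaborator instead of hard-coding it.
#
# Tightly coupled version (not shown) would write straight to $stdout,
# so a test could only verify it by capturing the process's output.
# The loosely coupled version takes the output channel as a parameter:
class Report
  def initialize(out: $stdout)
    @out = out
  end

  def publish(lines)
    lines.each { |line| @out.puts(line) }
  end
end

# In a test, inject a StringIO instead of the real stream and inspect it:
fake = StringIO.new
Report.new(out: fake).publish(["total: 42"])
fake.string  # => "total: 42\n"
```

The same injection point that makes the class testable is also the seam that lets production code swap in a file, a socket, or a logger later - the testability win and the design win are the same change.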

That's not what I meant at all. Of course unit tests are incredibly important and even terrible test coverage is better than no coverage. But it is dangerous to assume a piece of software to be bug-free because all the unit tests pass. A failed test is one that has just caught a bug. A passed test is one that may just not be testing the right edge case.
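The "passed test that misses the right edge case" point can be made concrete with a deliberately buggy example. The method `leap_year?` and its bug are invented for illustration.

```ruby
# Hypothetical sketch: a green test suite that proves only the presence
# of one correct path, not the absence of bugs.

def leap_year?(year)
  year % 4 == 0  # bug: ignores the century rules, so 1900 wrongly passes
end

# The entire "suite": one happy-path test. It passes.
raise "red" unless leap_year?(2024)

# Yet leap_year?(1900) also returns true here, even though 1900 was not
# a leap year. The bug survives until someone writes the edge-case test.
```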

--max

Max Muermann wrote:

> > Automated tests cannot prove the absence of bugs; they can only prove
> > their presence.
>
> Washing your hands before performing surgery cannot get rid of /all/
> germs, so you may as well not bother! :wink:

> That's not what I meant at all.

There are actually software engineers out there who use the line you quoted as an excuse not to write any tests. (I generally figured you quoted it tongue-in-cheek, and I responded in kind, with a winky.)

> Of course unit tests are incredibly important and even terrible test coverage is better than no coverage. But it is dangerous to assume a piece of software to be bug-free because all the unit tests pass. A failed test is one that has just caught a bug. A passed test is one that may just not be testing the right edge case.

Focus on the word "assume". Tests don't exist in isolation, written requirements don't either, and version controllers don't, and so on. Nobody on a team who uses TDD and frequent iterations assumes that software is life-critical-ready just because we TDDed it. If anything, TDD adds to the metrics showing exactly how frail our software is. A unit test that springs loose half the time is valuable because we can still fix it, pass it, and check in. It's also valuable as a negative metric.

Max Muermann wrote:

> > > Automated tests cannot prove the absence of bugs; they can only prove
> > > their presence.
> >
> > Washing your hands before performing surgery cannot get rid of /all/
> > germs, so you may as well not bother! :wink:

> That's not what I meant at all.

> There are actually software engineers out there who use the line you quoted as an excuse not to write any tests. (I generally figured you quoted it tongue-in-cheek, and I responded in kind, with a winky.)

I did - only I sort-of missed the winky :wink:

> Of course unit tests are incredibly
> important and even terrible test coverage is better than no coverage.
> But it is dangerous to assume a piece of software to be bug-free
> because all the unit tests pass. A failed test is one that has just
> caught a bug. A passed test is one that may just not be testing the
> right edge case.

> Focus on the word "assume". Tests don't exist in isolation, written requirements don't either, and version controllers don't, and so on. Nobody on a team who uses TDD and frequent iterations assumes that software is life-critical-ready just because we TDDed it. If anything, TDD adds to the metrics showing exactly how frail our software is. A unit test that springs loose half the time is valuable because we can still fix it, pass it, and check in. It's also valuable as a negative metric.

I'm in violent agreement here. I cannot imagine working on a software project without great test coverage. I also cannot imagine not conducting design and code reviews. I have, on the other hand, seen projects forgo any sort of review process because they had (close to) 100% test coverage.

--max