There are certain skills in software development that are very subtle. They seem to take years of experience to really appreciate and learn to apply well. One of them is writing good automated software tests.
The “ideal” set of software tests should:
- [Strong] Always fail in the presence of bugs (the classic definition of a “good test”)
- [Cheap] Require little effort to build
- [Informative] Make the cause of a test failure obvious
- [Flexible] Never fail as a side effect of desired change to the system
The above goals conflict: improving any one of them tends to reduce the others. Hence the subtlety of finding the right balance.
When I first learned to unit test, I only knew about Strength and Cheapness. And I think I focused my attention on improving Strength more than Cheapness.
Then, I learned that you can achieve decent Strength and Cheapness with coarse integration tests. However, such tests are neither Informative nor Flexible.
Flexibility is very subtle, because it counters the simple heuristic of “more test, more good”. The negative effects of low Flexibility take time to perceive.
This is the first of several posts on Automated Testing. In them, I hope to describe the guidelines I use to optimize all four goals.
[But for the impatient, it really comes down to “use mostly fine-grained unit tests, including lots of interaction tests (probably using Mocks)”.]
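
To make the summary concrete, here is a minimal sketch of an interaction test using Python's `unittest.mock`. The names (`OrderService`, `EmailSender`-style collaborator, `place_order`) are hypothetical, invented for illustration; the point is only that the test asserts on the *collaboration* with a dependency rather than on resulting state:

```python
from unittest.mock import Mock


class OrderService:
    """A hypothetical class with one collaborator, an email sender."""

    def __init__(self, email_sender):
        self.email_sender = email_sender

    def place_order(self, order_id):
        # ... domain logic elided for the sketch ...
        self.email_sender.send_confirmation(order_id)


def test_placing_an_order_sends_a_confirmation():
    # The real email sender is replaced with a Mock, so the test is
    # Cheap (no infrastructure) and Informative (one obvious cause of failure).
    email_sender = Mock()
    service = OrderService(email_sender)

    service.place_order("order-42")

    # An interaction test: assert on how the collaborator was used.
    email_sender.send_confirmation.assert_called_once_with("order-42")


test_placing_an_order_sends_a_confirmation()
```

Whether such a test is Flexible depends on whether the asserted interaction is part of the class's intended contract or an incidental implementation detail; later posts in this series weigh that trade-off.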