Thursday, August 18, 2011

The automation contradiction – Four reasons why automated tests need to tell the human story

Ah, test automation. Those wonderful monkeys that free us from the routine drudgery of re-running boring tests over and over! And because computer monkeys, not people, execute these tests, some testers write them as though the only thing that matters is that the compiler understands them, in the compiler's 0s and 1s kind of way.


As a proponent of ATDD and Executable Specifications, this is not a view I share. In fact, I coach people who write automated tests in FitNesse to become storytellers, because getting a test to execute is not the only thing that matters. For the test to be an asset over the longer term, other team members must be able to read it and easily understand how the functionality works and what is being tested.

That means that tests need good names, and must be well organised in a tree structure or tagged. They need a short, one- or two-sentence statement of purpose, and should express the key business functionality under test as succinctly as possible. (A myriad of small details, such as 'click there, edit this', isn't enough.) Finally, if possible, all tests should also express the 'why': why the functionality was implemented and who the customer is.
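To make that concrete, here's a rough sketch of the difference in plain JUnit. (The same idea applies to FitNesse tests, just in table form.) The discount rule, the names and the numbers are all invented for this post; the only point is the contrast between a test that merely executes and one that also tells its story.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GoldMemberDiscountTest {

    // Tiny stand-in for the production pricing code, only here so the example compiles.
    static int priceToPay(int orderTotal, String memberLevel) {
        boolean discounted = "gold".equals(memberLevel) && orderTotal >= 200;
        return discounted ? orderTotal * 90 / 100 : orderTotal;
    }

    // Executes fine, but tells the next reader nothing.
    @Test
    public void test47() {
        assertEquals(270, priceToPay(300, "gold"));
    }

    // The same check, with the business rule in the name and the purpose
    // and the 'why' recorded alongside it.
    // Purpose: gold members get 10% off orders of 200 euro or more.
    // Why: rewards our highest-value customers (requested by the sales team).
    @Test
    public void goldMemberGetsTenPercentOffLargeOrders() {
        int fullPrice = 300;
        assertEquals(270, priceToPay(fullPrice, "gold"));
    }
}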

Now, none of this should be all that new to a seasoned IT professional, particularly one who has worked on a legacy feature for which such information is missing. What may come as more of a surprise is that I've come to realise that the readability of tests is actually more important for automated tests than it is for manual tests.

The main reason for this seems to be that manual tests are just that: manual. Every time they are run, the person running the test picks up implicit knowledge about what the test does, and hopefully also some tacit knowledge about how the functionality works. With automated tests we don't get this tacit learning. Therefore, it's more important than ever that a quick scan of a test provides correct insight into what the test covers.

Now, as anyone who has tried to maintain an automated test suite knows, the tests also sometimes break. (Otherwise what would be the use of running them in the first place?) Without knowing the purpose of the test, it's pretty hard to decide what to do with it and how to fix it. At that point you'll be grateful for any effort spent expressing the test in a more human-readable manner. So if you expect to alter existing functionality in the future, this is reason number two for spending some extra effort now on writing more readable tests.

Thirdly, well-written executable specifications are an immensely powerful way for the whole team to communicate. They are written in a form that everyone, from customers to BAs, testers and programmers (who like to debug things), can understand and talk about. And because they represent what the product does, not what people think it does, they are also very real. Quite frankly, I don't think this is something you can appreciate until you have experienced it in action. So if you are still sceptical at this point, I suggest reading Gojko Adzic's Specification by Example, which contains case studies of other teams who have tried this approach.
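If you haven't seen an executable specification before, here's a rough sketch of the shape they often take in FitNesse: a table the whole team can read, backed by a small Slim fixture. The pricing rule and all the names are invented for illustration, and in a real project the fixture would delegate to the production code rather than contain the rule itself.

|price to pay|
|order total|member level|price to pay?|
|300|gold|270|
|300|silver|300|
|150|gold|150|

And the fixture that connects the table to the system under test:

public class PriceToPay {
    private int orderTotal;
    private String memberLevel;

    // Input columns map to setters.
    public void setOrderTotal(int orderTotal) { this.orderTotal = orderTotal; }
    public void setMemberLevel(String memberLevel) { this.memberLevel = memberLevel; }

    // The output column "price to pay?" maps to this method; FitNesse compares
    // its return value with the expected value in each row.
    public int priceToPay() {
        boolean discounted = "gold".equals(memberLevel) && orderTotal >= 200;
        return discounted ? orderTotal * 90 / 100 : orderTotal;
    }
}

The customer reads the table; the programmer reads the fixture; both are talking about the same rule.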

The final reason that test organisation becomes really important in automated test systems is that, by their nature, we end up with more tests. This comes about because in a well-written test system the marginal cost of each new test becomes smaller. More tests mean more testing, but also more to organise, so we had better make sure we do it well.

Which really brings us full circle. Automated tests can give software teams the courage to refactor code and release more often, but only if the tests themselves don't become an area of technical debt that no one wants to change. For this reason it is super important that tests are written to be human readable, and organised so that it is easy to get a feel for test coverage in any area.