Friday, May 20, 2011

Automated tests must go green.

You might have any one of hundreds of reasons why some of your tests fail, and they might be very good reasons. Lack of time to get the automation working properly, functionality that is still buggy, the list goes on... But one thing I am passionate about is that the standard set of automated tests that are run regularly should all go green.

This means that the moment something breaks it is fixed immediately, or if that's not possible then it is removed from the suite and recorded as a task to do later. You don't have to delete it, just put it aside (maybe into another suite of similarly ailing tests) until it becomes a lovely green healthy test again.

I just want to be clear that I'm not saying the only way you should use automated testing tools is with perfectly green tests. I am a strong proponent of half-manual, half-automated testing. What I am recommending is that you separate out the tests that run reliably and are fully automated from those that aren't.

Now this is a simple rule, but people often struggle to get their heads around it. I don't know how many times I have had the following discussion in the last few weeks, but it's fundamental. So here it is in an abridged form.

Questioner - "So Clare I've got this test and the way we are currently have it implemented in the system is wrong. I doesn't quite comply with what it should. Josie says that you've told her we've got to make our Fitnesse test pass - but that's just wrong as it's testing the wrong thing. How will we know to fix it if I make the test pass?"

Me - "Yes that's right. Automated tests must go green. Otherwise how does Bob in the other team know that when he makes a (programming) improvement to the Test System that he hasn't broken anything? He won't have the tacit knowledge that you have that this test always fails and will think he's broken the test".

"But it's just wrong. We can't have tests that test for the wrong behaviour."

"Okay. Lets look at it another way. Is there anything at the moment in your personal life something that you would like to do daily if you had more time?"

"Huh? what do you mean? "

"Well say something like flossing your teeth every day, or stretching after exercise - those are the things that I wish I could persuade myself to do"

"Uh." - Thinking to herself that that Clare has gone crazy and wondering where all this is going.

"Now do you remind yourself every morning that you're not doing it, and spend a minute or so processing that reminder every day? And would you then send a reminder out to all your co-workers frequently that says 'Clare needs to be reminded to floss her teeth'? Because that's broken tests do."

"But..."

"Actually I haven't quite finished. Broken tests are actually much worse than this because they don't typically say what the problem is. Instead the reminder is 'Something is wrong here and you now need to spend some time investigating it to see if you caused it!'

"But..."

"And anyway have we actually planned to fix this bug, or is it just something we hope to do in the future when we have more time?"

And so the conversation continues...

We need to keep in mind that the value of automated regression testing comes from it being a pragmatic tool that gives us more information about how the changes we are making now are affecting our existing functionality. Automation needs to be about how our system is now, not about some magical Eden where we have fixed all the bugs we know we have.

So what do I suggest if you have a test that won't run reliably, or a test that you know is checking functionality that isn't right? Well, I don't have any hard and fast rules, but here are some ideas.

If the functionality is buggy then use your normal software development processes to deal with the bug, not the automated testing process. Write a story about it or raise a bug report and let it be prioritised as it normally would be (as long as it is not stopping lots of existing automated tests from running, in which case it is your job to make sure management knows the seriousness of the situation). That's how you deal with the fact that you have a bug on your hands.

Then make your test pass according to what the system is doing right now. However, write in the margin (use a comment field in the table, or write it in plain text above the table in Fitnesse) that 'currently the system isn't doing what is expected and the result should really be ___'.
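To make that concrete, here is a minimal sketch of what such a note might look like on a Fitnesse page. The Division fixture echoes Fitnesse's own introductory example; the numbers and the bug reference are invented for illustration:

    Note: the quotient below matches what the system does NOW, not what it
    should do. Bug 1234 covers the fix; the result should really be 3.33.
    Update this table when the bug is fixed.

    !|Division|
    |numerator|denominator|quotient?|
    |10|3|3|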

That way if the bug does get fixed, and the test starts to fail, no-one will waste time angsting over whether the new result is right or wrong. They will be able to quickly update the test and move on.

The reason I suggest doing this is that the system might be a little wrong now, but at least this way we make sure that the system is stable, and that new code isn't introducing unexpected side effects that make it more incorrect over time or that may confuse our users. This way we still get cheap information (because it's automated) which we wouldn't get if we skipped the test completely.

When you have tests that don't run reliably, then I suggest you move them into their own suite. Name this suite something more appropriate, such as "Manually assisted automated testing" or "Tooling assisted tests", because at this point in time that is exactly what such tests are! And make sure you use this terminology when you talk about these tests with management.
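In Fitnesse, for instance, a suite is just a wiki page with the Suite property set, so the separation can be as simple as a page hierarchy along these lines (all the page names here are invented):

    FrontPage
      RegressionSuite (fully automated - must always go green)
        LoginTests
        CheckoutTests
      ManuallyAssistedTests (unreliable or tooling-assisted - run on demand)
        ReportExportTests

Moving a misbehaving test out of the regression suite is then just a matter of moving its page under the new parent.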

This is really important, because the information you get from running such tests frequently is seldom worth the cost of running them. So it is important to separate that cost, and thus that return on investment (ROI), from the cost and ROI of running the fully automated tests regularly.

Putting such tests into their own test suite is not going to stop you running these tests before each release, but it allows you and management to separately control how often these tests are run and to focus development time on the most valuable work. And it may also help you highlight to management that more investment is needed in the automated test system.

This is a lesson that I have learnt the hard way, through my own blood, sweat and tears. Want to know what will happen if you try and cope with all the breakages yourself? Then watch this very short excerpt from my presentation at the ANZTB conference in March. I hope it lightens up your day.

Do you disagree with what I have written? Or do you have an even better strategy for dealing with such tests? Please write your comments below - I want to learn more.
