Thursday, July 16, 2009

Half and half testing - The best of both worlds.

Automated and manual testing. Testing is usually treated as a strict dichotomy: at any one time you are doing one or the other. A while back we asked the question "Why not have tests which utilise the strengths of both mankind and machine?" and the results amazed us.

The original idea came about because we were having trouble automating our gui tests with Selenium. At that point we simply didn't have time to solve all the issues, which meant we were only testing our gui in an ad hoc way - slow, frustrating and lacking in coverage.

At the time, we had good coverage of the backend processing and number crunching, but couldn't easily test that our gui worked correctly with the backend - a classic integration test problem. With a big release coming up, in which different teams had written the gui and backend parts, we realised this was something we had to tackle head on.

Our solution was to write a fixture that would temporarily halt the Fitnesse test, tell the user to do something, wait until they pushed "done", and then carry on with the test.



The resulting tests were partly manual, partly automated, and incredibly efficient compared to our completely manual testing. It takes about an hour to run all the tests; doing the same amount of testing completely manually would have taken over two weeks and been less complete. It was a huge win.

In general our half-and-half tests go like this (sketched below):
1. Fitnesse: Create some data for the tests.
2. Human: Use the gui to create more data.
3. Fitnesse: Check that the interactions between all the data are correct.
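
A test page for this pattern might look something like the sketch below - every fixture, column and account name here is made up purely for illustration:

```
!|CreateSellerData|
|seller name|product|
|seller1    |widgets|

!|AskUser|
|instruction                                               |done()|
|Log in as buyer1 and buy some widgets via the buyer wizard|true  |

!|CheckOrders|
|buyer |seller |product|found?|
|buyer1|seller1|widgets|true  |
```

The middle table is where the test pauses: the AskUser fixture blocks until the tester confirms the manual step is complete, and the final table then checks the result automatically.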

For example, our system needs both buyers and sellers, so one benefit is that we can now test the buyer and seller guis separately. By creating the seller data with Fitnesse first, we are able to test the buyer wizard on its own, without having to run both wizards (buyer and seller) together. Another win.


We see big benefits because the data created by the user is picked up by Fitnesse and works seamlessly with the rest of the fixtures. We do this by creating an account with Fitnesse first and telling the manual tester to use that account; that way, when the "done" button is pushed, Fitnesse can query the database to find out which objects were created. Even now that the tests are fully automated, we are still using this system: it has the added benefit of getting round the 'What is this word?' human-check on the create account page as well.
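
In code, that pick-up step can be as simple as a query keyed on the pre-created account. A rough sketch - the table and column names here are invented, not a real schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: after the tester clicks "done", a fixture can
// look up whatever the manual step created under the account that
// Fitnesse set up at the start of the test.
public class CreatedObjectFinder {
    public static List<String> objectsFor(Connection db, String accountId)
            throws SQLException {
        List<String> ids = new ArrayList<String>();
        String sql = "SELECT object_id FROM objects WHERE account_id = ?";
        PreparedStatement stmt = db.prepareStatement(sql);
        try {
            stmt.setString(1, accountId);
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                ids.add(rs.getString("object_id"));
            }
        } finally {
            stmt.close();
        }
        return ids;
    }
}
```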

While the initial idea behind the fixture was as a temporary solution, we have found it to be an extremely valuable tool. With it, testers can run tests even when one small part of the process is not yet properly automated. Previously, everything had to be automated or the test wouldn't run.

We have used it to manually run stored procedures on the database, and to write tests for critical bugs. We even use it for ad-hoc (but Fitnesse-assisted) testing. The long-term strategy is still to fully automate most of these tests, but this fixture allows us to get some value out of them before they are completely working. It also helps us to 'test the tests': we can assess how much work it would be to automate a test before we decide its priority for automation. And there are a few tests where we want to cast a human eye over the layout; those we plan to keep half manual for good.

The "Ask User" fixture may a simple idea, but one that can really help you get more value from your testing system. For that reason, I think it's a great tool to have in the toolbox. Here's the Ask User java code to get you started. It's not very pretty but it does the trick. However, to get most value out of it you probably want to find some way to integrate it with the rest of your Fitnesse tests. That part, I leave as an exercise for you to do.