Thursday, July 16, 2009

Half and half testing - The best of both worlds.

Automated and manual testing. Testing is often treated as a strict dichotomy: at any one time you are doing only one or the other. A while back we asked the question "Why not have tests which use the strengths of both human and machine?" and the results amazed us.

The original idea came about because we were having trouble automating our gui tests with Selenium and just didn't have the time to solve all the issues. This meant we were only testing our gui in an ad hoc way, which was slow, frustrating and lacking in coverage.

At the time, we had good coverage of the backend processing and number crunching, but couldn't easily test that our gui worked correctly with the backend - a classic integration test problem. With a big release coming up, where different teams had written the gui and backend parts, we realised this was something we had to tackle head on.

Our solution was to write a fixture that would temporarily halt the Fitnesse test, tell the user to do something and wait until they pushed "done", then carry on with the test.
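
To give a feel for how this works, here is a minimal sketch of what such a fixture might look like. This is an illustration rather than our exact code; it assumes fit's ColumnFixture base class and a Swing dialog for the blocking prompt.

    import javax.swing.JOptionPane;

    import fit.ColumnFixture;

    // A minimal "ask the user" fixture. The instruction comes from a column
    // in the Fitnesse table; done() blocks the test run until the tester
    // clicks a button, then reports whether the manual step was completed.
    public class AskUser extends ColumnFixture {
        public String instruction;

        public boolean done() {
            int answer = JOptionPane.showConfirmDialog(
                    null,
                    instruction,
                    "Manual step - click Yes when you are done",
                    JOptionPane.YES_NO_OPTION);
            return answer == JOptionPane.YES_OPTION;
        }
    }

In the wiki page it is just another table: one column for the instruction and a done() column with an expected value of true.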



The resulting tests were partly manual, partly automated, but incredibly efficient compared to our completely manual testing. It takes about an hour to run all the tests; doing the same amount of testing completely manually would have taken over two weeks and been less complete. It was a huge win.

In general our half and half tests go like this (sketched below):
1. Fitnesse: Create some data for the tests.
2. Human: Use the gui to create more data.
3. Fitnesse: Check that the interactions between all the data are correct.
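
A hypothetical Fitnesse page for such a test might look roughly like this. The fixture and column names are invented to show the shape of the test, not copied from our real tables.

    !|CreateCampaignData                |
    |account|campaign|budget|created() |
    |acme   |spring09|1000  |true      |

    !|AskUser                                                        |
    |instruction                                          |done()    |
    |Log in as acme and add a second campaign via the gui |true      |

    !|CheckAccountData       |
    |account|campaignCount() |
    |acme   |2               |

The middle table is where the run pauses: the dialog pops up, the tester does the manual step, clicks done, and the rest of the page carries on automatically.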

For example, our system needs both buyers and sellers, so one benefit is that we can now test the buyer and seller guis separately. By creating the seller data with Fitnesse first, we can exercise the buyer wizard on its own, without having to drive both wizards (buyer and seller) together. Another win.


We see big benefits because the data created by the user is picked up by Fitnesse and works seamlessly with the rest of the fixtures. We do this by creating an account with Fitnesse first and telling the manual tester to use that account. That way, when the done button is pushed, Fitnesse can query the database to find out what objects were created. Even now that these tests are fully automated we are still using this approach; it has the added benefit of getting around the 'What is this word?' human check on the create account page as well.
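
As a rough idea of how the database side can look, here is an invented fixture along the lines of the CheckAccountData table sketched earlier. It simply counts what appeared under the pre-created account once the tester has clicked done; the connection string, table and column names are made up for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import fit.ColumnFixture;

    // Hypothetical check fixture: after the manual step, count the objects
    // the tester created under the account that Fitnesse set up earlier.
    public class CheckAccountData extends ColumnFixture {
        public String account;   // the account created by an earlier fixture

        public int campaignCount() throws SQLException {
            Connection connection = DriverManager.getConnection(
                    "jdbc:mysql://testdb/ads", "test", "test");
            try {
                PreparedStatement statement = connection.prepareStatement(
                        "select count(*) from campaigns where account_name = ?");
                statement.setString(1, account);
                ResultSet result = statement.executeQuery();
                result.next();
                return result.getInt(1);
            } finally {
                connection.close();
            }
        }
    }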

While the initial idea behind the fixture was as a temporary solution, we have found it to be an extremely valuable tool. With it, testers can now run tests even when one small part of the process is not yet properly automated. Previously, everything had to be automated or the test wouldn't run.

We have used it to manually run stored procedures on the database, and to write tests for critical bugs. We even use it for ad hoc (but Fitnesse-assisted) testing. The long-term strategy is still to fully automate most of these tests, but this fixture allows us to get some value out of the tests before they are completely working. It also helps us to 'test the tests' and assess how much work it is to automate a test before we decide its priority for automation. There are also a few tests where we want to cast a human eye over the layout; those we plan to keep half manual.

The "Ask User" fixture may a simple idea, but one that can really help you get more value from your testing system. For that reason, I think it's a great tool to have in the toolbox. Here's the Ask User java code to get you started. It's not very pretty but it does the trick. However, to get most value out of it you probably want to find some way to integrate it with the rest of your Fitnesse tests. That part, I leave as an exercise for you to do.

Saturday, June 20, 2009

Creating a QA process out of thin air

This week I had the opportunity to present the work I have been doing over the last year to a group of testers at a Testing Professionals Network (TPN) meeting. I enjoyed sharing the changes we have made and how far we have come in a short time.

Agile Acceptance testing with Fitnesse

A little more about what I've been up to...

About a year ago, I swapped jobs within my company from being a developer to the challenging and (in the beginning) highly stressful job of QA team lead. This meant I was tasked with building a QA team to test a high-load, high-availability adserver.

This all happened about a year after the company started when we suddenly found we had enough customers that we had to be very careful about what we released. However, the product had been created with such speed that we had no automatic system tests and it was incredibly hard to get any test to run successfully on our test environment. We had also just hired our first tester, who was struggling to get anything working. So it was a big problem and it needed solving, quickly.

My response was to start building an automatic testing system, starting with the area of biggest ROI (return on investment). I chose budgeting, as I guessed that not going over our promised budget was pretty important to our customers, was almost impossible to test by hand, but was reasonably simple to automate.

From those small beginnings the suite slowly grew. As things started to take shape more and more developers could see the power of the system and were happy to contribute to it. Others required the boss to tell them that they must do it. Now 12ish months later, thanks partly to a re-write and thus the need to test everything, we probably have about 90% functionality coverage with our automated tests. Not only that, but we also have a team of developers who are enthusiastic about getting the tests running.

Having seen several other companies take a long time to develop automated tests I am truly amazed at our progress. I believe our success is due to getting everyone involved in the testing process. Making the software developers run the tests really helped: that way they got to see how their system design affects the system's testability.

So one of the main themes I hoped to share in this presentation was that in order to build a good, reliable automated system test suite you generally need to improve the testability of the system under test. Improving testability generally benefits both testers and developers, as the tools that are developed also help during development and debugging of the system. For example, we created an installer, tools for checking the system status, and ways to easily create test data.

So creating a quality test system quickly, I believe, requires the cooperation of both testers and developers. However, putting such a system in place is tricky because it generally requires a simultaneous culture and technology change.

An example of a culture change required is that developers often believe testers are solely responsible for testing. However, developers can make a huge difference to the testability of the system depending on how they design it. So to get everyone focused on the end goal, tested and releasable software, it is important that both managers and developers understand that developers and testers share the responsibility for testing the system.

However, if a developer has not worked on a project with a good automated system test suite, then they may not be able to see how they can help the tester. Then, if they see tests running erratically they are more likely to question the value of such tests, than to question the way that they themselves have designed the system.

Thus the first culture change requires the developer to be able to picture how they can assist the testing process. But getting them to make the first changes requires them to believe that the tests are worth building in the first place.

My approach to tackling this chicken-and-egg problem is to be agile. Start with a working test, then ask for small improvements that make the tests more reliable, quicker, or easier to write. Give some value, ask for an improvement, give value, ask again, give value, ask again...

So my second message to all test leads out there is: don't just stick your head down and suffer with unpredictable automated tests. Think about what you need changed about the system, and make sure your developers listen to you. It may seem like a huge effort for small gains in the beginning, but the gains snowball and save you time in the long run.

Please take a look at the presentation and send me any comments you have. In particular, if there are slides that don't make sense without the audio, write a comment and I'll explain them in more detail.

What small change can you make that will make your system more testable?

Friday, February 20, 2009

A lesson from the river

When the week is over, I love nothing more than to head out of town into the clean air and freedom of the wilderness. Being a bit of an adrenalin junkie, I often choose some rather adventurous way of doing this. In particular, I love to kayak the sort of rivers that send shivers down the spine of normal people. While I kayak purely for enjoyment, I have found that the experiences from the river often alter how I think back in the office.


You see, software development is a little like kayaking in that both are team games. On a river you depend on your mates watching out for you in order to paddle something too dangerous to paddle alone. This is not much different from a software project, which requires you to work with other developers to create something too big to finish alone.

The big difference between the two situations is the level of risk you play with. With kayaking you don't just play with your job and lifestyle, you literally play with your life. For those who can accept the danger it is a perfect way to gain risk assessment muscles.

Kayaking has taught me that I should not study just the rapid I want to paddle, but I must also look downstream. What will happen if I take a swim? Is there a nice pool after the tricky bit or does the river flow into a deadly rock sieve? In many cases a grade V river with pools is easier and less dangerous than a continuous grade IV river with no place to rescue a swimmer.

On a river I refer to this concept, the consequences if it all goes wrong, by the unforgettable term "F*** up factor" (FUF). In the office you may prefer to use "Consequence", but the concept remains the same. Risk management is more than just asking "What is the likelihood of it all going to plan?" It's also about asking "Can I live with the consequences if this doesn't work out as planned?"

While we all think this way to some degree, I often see people behaving in ways that make me think their FUF muscles aren't strong. Perhaps a person will balk at trying a new idea (high risk, but low FUF, as all you are losing is development time), yet would change the database on the production server without a dry run if the rest of the team didn't prevent it (medium risk, high FUF, because it could affect all our customers).

Or consider the FUF of an exception in test code versus production code. While it's equally important that both code bases are maintainable, the FUF of an exception in our test code is nearly zero. Therefore, when writing test code, putting in exception handling probably isn't worth the effort. But for production code we probably want to think about the FUF of every exception and handle it appropriately.
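
To make the contrast concrete, here is an invented example; the class, names and behaviour are mine for illustration, not something from our codebase.

    import java.math.BigDecimal;
    import java.sql.SQLException;
    import java.util.logging.Logger;

    // Illustration of the FUF contrast between test and production code.
    public class BudgetLookup {
        private static final Logger log = Logger.getLogger("BudgetLookup");

        // Pretend this talks to the database and can fail.
        private BigDecimal queryBudget() throws SQLException {
            throw new SQLException("database unavailable");
        }

        // Test-code style: the FUF of an exception is near zero, so just let
        // it propagate and fail the test - no handling code to maintain.
        public BigDecimal spentBudgetForTest() throws SQLException {
            return queryBudget();
        }

        // Production-code style: the consequence of the same failure is high
        // (we could overspend a customer's budget), so decide explicitly what
        // should happen when it goes wrong.
        public BigDecimal spentBudgetForProduction() {
            try {
                return queryBudget();
            } catch (SQLException e) {
                log.severe("Budget lookup failed, treating budget as exhausted: " + e);
                return BigDecimal.ZERO; // fail safe: serve no more ads
            }
        }
    }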

A further benefit of FUF analysis is that it may help you feel more confident when dealing with the unknown. After all, if you have accepted that you can live with the FUF should it all go to custard, then your brain can stop worrying about the possibility of it going wrong. That should allow you to focus on the more important task of making it go right.

I recommend you give FUF analysis a try. It should help you develop a more appropriate application for your users and hopefully make you more adventurous in the process.