
Thursday, January 5, 2012

ATDD using FitNesse course now available!

I am proud to announce that I have joined together with Clarus consulting to create a fabulous ATDD course based on FitNesse. So whether you are in Australia, New Zealand or further abroad, you can now get leading edge coaching to kick start the use of ATDD or FitNesse within your organisation.

However, this two-day course covers far more than just learning how to use FitNesse. It is a team-based learning session during which the whole team (BAs, testers, developers, product owners) works together to create executable specifications for a project.

I've already helped many teams quickly increase their automated testing coverage while at the same time improving communication using FitNesse. And I am truly excited about the huge positive effects that those teams have continued to enjoy after I left. It is these experiences that I leveraged when writing the course.

So if you are ready to find out how to build the right software, and how to build it right so that you can confidently release the software time and time again, then get in touch right now by contacting me here.

Course Overview:

Topics for the course are taken from the following.

Day 1:

Introduction to ATDD

  • ATDD – a new way of working
  • ATDD vs TDD
  • Why we should automate tests
  • Why the whole team should be involved


Specification Workshop

  • How to run a Specification Workshop


Writing up specifications (tests)

  • FitNesse Wiki Basics
  • Guidelines for writing executable specifications
  • Tricks for writing specifications in FitNesse
  • Introduction to FitNesse Automation


Day 2:

Making specifications (tests) executable

  • Coding basic tables
  • Structuring FitNesse code
  • Setting up FitNesse for the team
  • Debugging FitNesse code


Organising specifications

  • Creating a fixture toolbox
  • Organising specifications
  • Suites in FitNesse
  • Source Control and FitNesse


Advanced topics

  • Dealing with legacy systems
  • Solutions to common automation problems
  • Overview of other ATDD tools
  • Tips for successful FitNesse implementations


The course is suitable for teams who are completely new to FitNesse as well as those who are already using FitNesse but want to use it more effectively. Everyone should come away from the course with a solid understanding of how to use FitNesse to collaboratively build executable specifications. Teams familiar with FitNesse will also explore some of the more advanced topics in depth.





Thursday, August 18, 2011

The automation contradiction – Four reasons why automated tests need to tell the human story

Ah, test automation. Those wonderful monkeys that free us from the routine drudgery of re-running boring tests over and over! And because these computer monkeys, not people, execute the tests, some testers write them as though it's only important that the compiler understands them, in the compiler's 0s-and-1s kind of way.


As a proponent of ATDD and Executable Specifications, this is not a view that I share. In fact, I coach people who write automated tests in FitNesse to become storytellers. Simply getting a test to execute isn't what matters. For the test to be an asset over the longer term, it's also important that other team members can read it and easily understand how the functionality works and what is being tested.

That means that tests need to have good names, and must be well organised in a tree structure or tagged. They need a short one- or two-sentence statement of purpose, and should express the key business functionality under test as succinctly as possible. (A myriad of small details, such as 'click there, edit this', isn't enough.) Finally, where possible, tests should also express the 'why': why the functionality was implemented and who the customer is.

Now, none of this should be all that new to a seasoned IT professional, particularly one who has worked on a legacy feature for which such information is missing. What may come as more of a surprise is that I've come to realise that readability is actually more important for automated tests than it is for manual tests.

The main reason for this seems to be that manual tests are just that: manual. Every time they are run, the person running the test picks up implicit knowledge about what the test does, and hopefully also some tacit knowledge about how the functionality works. With automated tests we don't get this tacit learning. Therefore, it's more important than ever that a quick scan of a test provides correct insight into what it covers.

Now, as anyone who has tried to maintain an automated test suite knows, tests also sometimes break. (Otherwise what would be the use of running them in the first place?) Without knowing a test's original purpose, it's pretty hard to decide what to do with it and how to fix it. At that point you'll be grateful for any effort expended on expressing the test in a more human-readable manner. So if you ever see yourself altering existing functionality in the future, this is reason number two for spending some extra effort now on writing more readable tests.

Thirdly, well-written executable specifications are an immensely powerful way for the whole team to communicate. This is because they are written in a form that everyone, from customers to BAs, testers and programmers (who like to debug things), can understand and talk about. And because they represent what the product does, not what people think it does, they are also very real. Quite frankly, I don't think this is something you can appreciate until you have experienced it in action. So if you are still sceptical at this point, then I suggest reading Gojko Adzic's Specification by Example, which contains case studies of other people who have tried this approach.

The final reason that test organisation becomes really important in automated test systems is that, by their nature, we end up with more tests. This comes about because in a well-written test system the marginal cost of each new test becomes smaller. More tests mean more testing, but also more to organise, so we had better make sure we do it well.

Which really brings us full circle. Automated tests can give software teams the courage to refactor code and release more often, but only if the tests themselves don't become an area of technical debt that no one wants to change. For this reason it is super important that tests are written to be human-readable, and organised so it is easy to get a feel for test coverage in any area.

Friday, May 20, 2011

Automated tests must go green.

You might have one of hundreds of reasons why some of your tests fail and they might be very good reasons. Lack of time to get the automation working properly, functionality that is still buggy, the list goes on... But one thing I am passionate about is that the standard set of automated tests that are run regularly should all go green.

This means that the moment something breaks, it is fixed immediately, or if that's not possible, it is removed from the suite and recorded as a task to do later. You don't have to delete it; just put it aside (maybe into another suite of similarly ailing tests) until it becomes a lovely green healthy test again.

I just want to be clear that I'm not saying the only way you should use automated testing tools is with perfectly green tests. I am a strong proponent of half manual, half automated testing. What I am recommending is that you separate the tests that run reliably and are fully automated from those that aren't.

Now this is a simple rule, but people often struggle to get their head around it. I don't know how many times I have had the following discussion in the last few weeks, but it's fundamental. So here it is in abridged form.

Questioner - "So Clare, I've got this test, and the way we currently have it implemented in the system is wrong. It doesn't quite comply with what it should. Josie says that you've told her we've got to make our FitNesse test pass - but that's just wrong, as it's testing the wrong thing. How will we know to fix it if I make the test pass?"

Me - "Yes that's right. Automated tests must go green. Otherwise how does Bob in the other team know that when he makes a (programming) improvement to the Test System that he hasn't broken anything? He won't have the tacit knowledge that you have that this test always fails and will think he's broken the test".

"But it's just wrong. We can't have tests that test for the wrong behaviour."

"Okay. Let's look at it another way. Is there something in your personal life that you would like to do daily if you had more time?"

"Huh? what do you mean? "

"Well say something like flossing your teeth every day, or stretching after exercise - those are the things that I wish I could persuade myself to do"

"Uh." - Thinking to herself that Clare has gone crazy and wondering where all this is going.

"Now, do you remind yourself every morning that you're not doing it, and spend a minute or so processing that reminder every day? And would you then send a reminder out to all your co-workers frequently that says 'Clare needs to be reminded to floss her teeth'? Because that's what broken tests do."

"But..."

"Actually, I haven't quite finished. Broken tests are much worse than this, because they don't typically say what the problem is. Instead the reminder is 'Something is wrong here and you now need to spend some time investigating it to see if you caused it!'"

"But..."

"And anyway have we actually planned to fix this bug, or is it just something we hope to do in the future when we have more time?"

And so the conversation continues...

We need to keep in mind that the value of automated regression testing comes from it being a pragmatic tool that gives us more information about how the changes we are making now are affecting our existing functionality. Automation needs to be about how our system is now, not about some magical Eden where we have fixed all the bugs we know we have.

So what do I suggest if you have a test that won't run reliably or when you know it is testing functionality that isn't right? Well I don't have any hard and fast rules, but here are some ideas.

If the functionality is buggy, then use your normal software development processes to deal with the bug, not the automated testing process. Write a story about it or file a bug report and let it be prioritised as it normally would be (unless it is stopping lots of existing automated tests from running, in which case it is your job to make sure management knows the seriousness of the situation). That's how you deal with the fact that you have a bug on your hands.

Then make your test pass according to what the system is doing right now. However, write in the margin (use a comment field in the table, or write it in plain text above the table in FitNesse) that 'currently the system isn't doing what is expected and the result should really be ___ '.
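As a sketch of what this margin note can look like on a FitNesse page, plain wiki text sits above the table recording the known discrepancy. (The fixture name, values, and bug reference below are hypothetical, invented for illustration.)

```
The system currently rounds the discount down, so this table asserts the
current (wrong) behaviour. The result should really be 5.25 - see bug
DISC-42 (hypothetical id). Update this table when the bug is fixed.

!|DiscountCalculator|
|amount|discount?|
|100|5|
```

Because the table asserts the current behaviour, the suite stays green now, and the moment the bug is fixed the test fails for a documented, expected reason.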

That way if the bug does get fixed, and the test starts to fail, no-one will waste time angsting over whether the new result is right or wrong. They will be able to quickly update the test and move on.

The reason I suggest doing this is that the system might be a little wrong now, but at least this way we make sure the system is stable and we are not getting unexpected side effects from new code that make things more incorrect over time, or that may confuse our users. This way we still get cheap information (because it's automated) which we wouldn't get if we skipped the test completely.

When you have tests that don't run reliably, then I suggest you move them into their own suite. Name this suite something more appropriate such as "Manually assisted automated testing" or "Tooling assisted tests" because at this point in time that is exactly what such tests are! And make sure you use this terminology when you talk about these tests with management.

This is really important, because the benefit of running such tests frequently is seldom worth the cost of running them. So it is important to separate that cost, and thus the return on investment (ROI), from the cost and ROI of running the fully automated tests regularly.

Putting such tests into their own test suite is not going to stop you running these tests before each release, but it allows you and management to separately control how often these tests are run and to focus development time on the most valuable work. And it may also help you highlight to management that more investment is needed in the automated test system.

This is a lesson that I have learnt the hard way, through my own blood, sweat and tears. Want to know what will happen if you try and cope with all the breakages yourself? Then watch this very short excerpt from my presentation at the ANZTB conference in March. I hope it lightens up your day.

Do you disagree with what I have written? Or do you have an even better strategy for dealing with such tests? Please write your comments below - I want to learn more.

Thursday, July 16, 2009

Half and half testing - The best of both worlds.

Automated and manual testing. Testing is often treated as a strict dichotomy where at any one time you are only doing one or the other. A while back we asked the question "Why not have tests which utilise the strengths of both human and machine?" and the results amazed us.

The original idea came about because we were having trouble automating our gui tests with Selenium. At the time, we just didn't have the time to solve all the issues. This meant that we were only testing our gui in an ad hoc way, which was slow, frustrating and lacking in coverage.

At the time, we had good coverage of the backend processing and number crunching, but couldn't test easily that our gui worked correctly with the backend - a classic integration test problem. With a big release coming up, where different teams had written the gui and backend parts, we realised this was something we had to tackle head on.

Our solution was to write a fixture that would temporarily halt the FitNesse test, tell the user to do something, wait until they pushed "done", then carry on with the test.



The resulting tests were partly manual, partly automated, but incredibly efficient compared to our completely manual testing. It takes about an hour to run all of them. To do the same amount of testing completely manually would have taken over two weeks and been less complete. It was a huge win.

In general our half and half tests go like this:
1. FitNesse: Create some data for the tests.
2. Human: Use the gui to create more data.
3. FitNesse: Check that the interactions between all the data are correct.
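A half-and-half FitNesse page following these three steps might look like the sketch below. All fixture names here are hypothetical, invented for illustration; the original post's fixtures were specific to the adserver under test.

```
!|CreateSellerAccount|
|name|budget|
|seller1|1000|

!|AskUser|
|Use the buyer wizard to create a buyer account called buyer1, then press done|

!|CheckBuyerSellerMatching|
|buyer|seller|matched?|
|buyer1|seller1|yes|
```

The first and last tables run unattended; only the middle step pauses for a human, so one tester can drive many such tests in a session.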

For example, our system needs both buyers and sellers so one benefit is we can now test the buyer and seller gui separately. For instance, by creating the seller data using Fitnesse first, we are able to test the buyer wizard on its own without having to test both wizards (buyer and seller) together. Another win.


We see big benefits because the data created by the user is picked up by FitNesse and works seamlessly with the rest of the fixtures. We do this by creating an account with FitNesse first and telling the manual tester to use that account. That way, when the done button is pushed, FitNesse can query the database to find out what objects were created. Now that the tests are fully automated we still use this system; it has the added benefit of getting around the 'What is this word' human check on the create account page as well.

While the initial idea behind the fixture was to use it as a temporary solution we have found this fixture to be an extremely valuable tool. With it, testers can now run tests even when one small part of the process is not yet properly automated. Previously, everything had to be automated or the test wouldn't run.

We have used it to manually run stored procedures on the database, and to write tests for critical bugs. We even use it for ad-hoc (but FitNesse-assisted) testing. The long-term strategy is still to fully automate most of these tests, but this fixture allows us to get some value out of them before they are completely working. It also helps us to 'test the tests', assessing how much work it is to automate a test before we decide its priority for automation. There are also a few tests where we want to cast a human eye over the layout; those we plan to always keep half manual.

The "Ask User" fixture may be a simple idea, but it's one that can really help you get more value from your testing system. For that reason, I think it's a great tool to have in the toolbox. Here's the Ask User java code to get you started. It's not very pretty, but it does the trick. However, to get the most value out of it you probably want to find some way to integrate it with the rest of your FitNesse tests. That part I leave as an exercise for you.
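In case the linked code is no longer available, here is a minimal standalone sketch of the idea in Java. The class and method names are my own, not from the original post: a real FitNesse version would be a fixture (for example extending fit.Fixture) and would typically pop up a dialog with a "done" button, whereas this sketch reads the confirmation from an injected input stream so it can run unattended.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Scanner;

// Sketch of an "Ask User" helper: pause the automated run, show the
// tester an instruction, and block until they confirm completion.
public class AskUser {
    private final Scanner in;

    // The input source is injected so the class can be driven by the
    // console in real use, or by canned input in a test.
    public AskUser(InputStream source) {
        this.in = new Scanner(source);
    }

    // Blocks until the tester types "done"; returns true on confirmation.
    public boolean ask(String instruction) {
        System.out.println("MANUAL STEP: " + instruction);
        System.out.println("Type 'done' when you have finished.");
        while (in.hasNextLine()) {
            if (in.nextLine().trim().equalsIgnoreCase("done")) {
                return true;
            }
            System.out.println("Please type 'done' to continue.");
        }
        return false; // input ended without confirmation
    }

    public static void main(String[] args) {
        // Simulate the tester typing "done" so the sketch runs unattended.
        InputStream fake = new ByteArrayInputStream("done\n".getBytes());
        boolean finished = new AskUser(fake).ask("Create a buyer account via the gui");
        System.out.println("Tester finished: " + finished);
    }
}
```

In real use you would pass System.in (or swap the prompt for a dialog) so the test halts until the tester presses done, then carries on with the automated steps.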

Saturday, June 20, 2009

Creating a QA process from out of thin air

This week I had the opportunity to present the work I have been doing over the last year to a group of testers at a Testing Professionals Network (TPN) meeting. I enjoyed sharing the changes we have made and how far we have come in a short time.

Agile Acceptance testing with Fitnesse

A little more about what I've been up to...

About a year ago, I swapped jobs within my company from being a developer to the challenging and highly stressful job (in the beginning) of QA team lead. This meant I was tasked with building a QA team to test a high load, high availability adserver.

This all happened about a year after the company started when we suddenly found we had enough customers that we had to be very careful about what we released. However, the product had been created with such speed that we had no automatic system tests and it was incredibly hard to get any test to run successfully on our test environment. We had also just hired our first tester, who was struggling to get anything working. So it was a big problem and it needed solving, quickly.

My response was to start building an automated testing system, starting with the area of biggest ROI (return on investment). I chose budgeting because not going over our promised budget was pretty important to our customers, was almost impossible to test by hand, but was reasonably simple to automate.

From those small beginnings the suite slowly grew. As things started to take shape more and more developers could see the power of the system and were happy to contribute to it. Others required the boss to tell them that they must do it. Now 12ish months later, thanks partly to a re-write and thus the need to test everything, we probably have about 90% functionality coverage with our automated tests. Not only that, but we also have a team of developers who are enthusiastic about getting the tests running.

Having seen several other companies take a long time to develop automated tests, I am truly amazed at our progress. I believe our success is due to getting everyone involved in the testing process. Making the software developers run the tests really helped; that way they got to see how their system design affects the system's testability.

So one of the main themes I had hoped to share in this presentation was that, in order to build a good, reliable automated system test suite, you generally need to improve the testability of the system under test. Improving testability generally benefits both testers and developers, as the tools that are developed also help during development and debugging of the system. For example, we created an installer, tools for checking the system status, and ways to easily create test data.

So creating a quality test system quickly, I believe, requires the cooperation of both testers and developers. However, putting such a system in place is tricky because it generally requires a simultaneous culture and technology change.

An example of the culture change required is that developers often believe testers are solely responsible for testing. However, developers can make a huge difference to the testability of the system depending on how they design it. So to get everyone focused on the end goal, tested and releasable software, it is important that both managers and developers understand that developers and testers share the responsibility for testing the system.

However, if a developer has not worked on a project with a good automated system test suite, then they may not be able to see how they can help the tester. Then, if they see tests running erratically they are more likely to question the value of such tests, than to question the way that they themselves have designed the system.

Thus the first culture change requires the developer to be able to picture how they can assist the testing process. But getting them to make the first changes, requires them to believe that the tests are worth building in the first place.

My approach to tackling this chicken-and-egg problem is to be agile. Start with a working test, then ask for small improvements that make the tests more reliable, or quicker, or easier to write. Give some value, ask for an improvement, give value, ask again, give value, ask again...

So my second message to all test leads out there is: don't just stick your head down and suffer with unpredictable automated tests. Think about what you need changed in the system, and make sure your developers listen to you. It may seem like a huge effort for small gains in the beginning, but the gains snowball and save you time in the long run.

Please, take a look at the presentation and send me any comments you have. In particular if there are slides that don't make sense without the audio then write a comment and I'll explain them in more detail.

What small change can you make that will make your system more testable?