06 January, 2011

My testing cutting board

In his comments on my post "Tying up some loose ends", Darren McMillan introduced me to the concept of Inspectional Testing, invented by Markus Gärtner.

Reading about Inspectional Testing made me think of my own approach to testing, especially under strict time constraints, and I like to compare the way I do testing to the process of making a cutting board.

At first there is the piece of raw, untreated wood. I start out by using a plane to shave the wood and make the rough surface a little bit smoother. I make sure that I cover the entire surface, not skipping any parts. This is a fairly fast procedure, but on the other hand it only removes the worst irregularities.

Once I am done with the plane, I bring out the coarse P60 sandpaper, and again work my way across the whole surface, a little bit slower this time. Then I repeat the process with the slightly finer P100 paper. When I am down to the P150 sandpaper I have managed to turn the rough piece of wood into a smooth surface.
Finally I take the P180 sandpaper and work on those special spots that need some extra attention.

This pretty much sums up how I normally test: a layered approach going from coarse to fine, an approach that works very well with Thread-Based Test Management.

03 January, 2011

Tying up some loose ends

I think it is time for a follow-up on my experience of applying Thread-Based Test Management (TBTM). In October last year I blogged about my attempt at setting up my tests for a new project using TBTM.

It was a smaller project, with me as the only tester, which gave me a lot of freedom to try new things, and I actually did what I set out to do! First I created my mind map, or fabric. I added all test activities I could think of to the fabric as threads. Using different colours and icons I visualised the priority of the threads. I kept the first version of the fabric, as it looked before I started testing, as my test plan, and sent it to the external customer. To my joy the mind map version of a test plan was quite well received!



Thereafter I started testing. I would typically be working on two or three threads at the same time, and I would not "tie off" or finish a thread, but rather do some testing and then come back to the same thread later, after having worked on some other threads too. This way I ended up digging into the details of most threads in parallel, rather than focusing on one test task and "completing" it before moving on. As my testing progressed I kept updating the fabric. The visual presentation of the current status turned out to be a great help not only to me, but also to the developer I worked with. In the end, he would actually rely solely on the fabric, rather than our usual defect reporting system, to see what work remained.

I had planned to write daily status reports, but instead I actually ran my tests as open-ended, non-time-boxed, SBTM-inspired sessions and wrote session reports. It turned out I needed the reports to keep track of what I was doing. For test charters, I drew a lot of activity diagrams.

When I had done all the testing I had time for, I had a fabric with a lot of green threads and, unfortunately, some red ones too. I took my "final" fabric and used it as my test report. Together with some clarifying notes, it constituted all the test documentation the external customer received.

My conclusion after this experiment is that, for me personally, a hybrid of SBTM and TBTM is the way to go. I do feel the need to work on things in parallel, whereas I usually do not feel the need to time-box, but on the other hand I really like my session reports. I will continue to work like this - it has been exploratory, controlled (in the good sense) and well documented (in the sense of "as needed"). Oh, and I left very few defects for the customer to find, and they were all of low priority...

Know yourself as a tester

Who are you? How do you behave? And how does it affect your testing?

I was pair-testing with a colleague, which gave me an excellent opportunity to study his testing behaviour. He had to enter a five digit number repeatedly, and in this particular test it was irrelevant which number was entered. To my fascination I noticed that nine times out of ten he would enter "12458". Of course I had to ask what was so special about "12458", and it turned out he was completely unaware of his behaviour - on the contrary, he was convinced he was entering random numbers!

To me, this served as a bit of a wake-up call, making me contemplate my own behaviour and how it affects my testing in general and my exploratory testing in particular. Do I, too, have favourite numbers that I tend to enter more often? Am I more prone to using keyboard short-cuts than the mouse pointer to navigate? How often do I think I am exploring new, untrodden trails when in reality I am just walking down the same well-beaten path as always, and might almost as well have run a scripted test?

Here, session reports in the spirit of SBTM support me, helping me to be more creative and my testing to be more exploratory. By making short notes when I do exploratory testing, it is easier for me to identify patterns in my testing. These behaviours are not necessarily bad, but I believe I need to be aware of them. Furthermore, I use my old reports as inspiration - or "seeds" - for the next session.
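As an aside, spotting such habits in session notes does not have to be done by eye. A minimal sketch of the idea, assuming a purely hypothetical note format (one jotted-down action per line - not any real SBTM artifact), could simply tally how often each action recurs:

```python
from collections import Counter

# Hypothetical notes from a few exploratory sessions; the format and
# values are invented for illustration only.
session_notes = [
    "entered 12458 in amount field",
    "entered 12458 in amount field",
    "entered 99999 in amount field",
    "used keyboard shortcut Ctrl+S",
    "used keyboard shortcut Ctrl+S",
    "used mouse to open File menu",
]

def tally_actions(notes):
    """Count how often each noted action recurs across sessions."""
    return Counter(notes)

# The most frequent entries reveal habitual behaviour worth questioning.
for action, count in tally_actions(session_notes).most_common(3):
    print(f"{count}x {action}")
```

Even a rough count like this can make the "12458" effect visible: the actions that top the list are the well-beaten paths one keeps walking without noticing.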

Everyone has little quirks, even when testing, and I think it is important that we are aware of them.