28 October, 2010

Physicists - Testers in disguise?

My background is in science. I have spent 11 years (a third of my life, believe it or not) studying mathematics, statistics and most importantly - physics, experimental astroparticle physics to be specific.

I have been trained
  • to be sceptical
  • to question
  • to think analytically
  • to think logically
  • to be curious 
  • to try to understand how things work rather than accepting stated facts
  • to explore
All of these, I think, are very good qualities for a tester too.

My research consisted of searching for a signal in a data set made up mainly of background noise. Feel free to read my thesis. In order to do my research I had to write my own software, and since the results of processing the data with that software were going into my thesis, I had to test that it behaved as I expected, in an attempt (futile, maybe) to minimize the risk of making a complete fool of myself.
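To give a flavour of what that testing looked like, here is a minimal sketch (in Python, with invented names - it is not my actual thesis code) of the kind of sanity check I mean: inject a known signal into simulated background and verify that the analysis recovers roughly the right excess.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    def estimate_signal(observed, expected_background):
        """Toy 'analysis': total excess of observed counts over the expected background."""
        return observed.sum() - expected_background.sum()

    # Simulated experiment: Poisson background in 1000 bins plus 200 injected signal events.
    expected_background = np.full(1000, 5.0)              # 5 expected background events per bin
    background = rng.poisson(expected_background)         # fluctuating background
    signal = np.bincount(rng.integers(0, 1000, size=200), minlength=1000)
    observed = background + signal

    estimate = estimate_signal(observed, expected_background)
    stat_error = np.sqrt(expected_background.sum())       # rough Poisson error on the background sum

    # The check: the recovered signal should agree with the injected 200 events
    # within a few standard deviations, otherwise something is wrong with the code.
    assert abs(estimate - 200) < 5 * stat_error, f"recovered {estimate}, expected about 200"
    print(f"injected 200, recovered {estimate:.0f} +/- {stat_error:.0f}")

The real thing was of course far more involved, but the principle is the same: if the code cannot find a signal I put there myself, it has no business looking for one in real data.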

I claim that testing, in a wider sense of the word, comes naturally to experimental physicists, even when it comes to software testing. The life of any experimental physicist consists of
  • data acquisition
  • data analysis, more often than not using some homemade software
  • publishing results from data analysis
Publishing (preferably interesting) results is the basis of your career; if you do not publish, you do not exist. Imagine what would happen (and does happen) if you publish results you later have to retract because your software is found to have severe defects. Physicists are aware of what is at stake - and unlike what is generally the case in the software industry, every mistake is going to hurt the physicist personally.

Hence physicists - and all other scientists with integrity - test their software tools meticulously to make sure they understand how they work and that they work as expected. It is not the kind of strict, structured testing that ISTQB would approve of, but the physicists have their hearts in the right place. They want things to work and be reliable, and is that not really just what we all want?

SWET1 - Swedish Workshop on Exploratory Testing

Högberga gård, October 16-17, 2010 

Participants: Michael Albrecht, Henrik Andersson, James Bach, Anders Claesson, Oscar Cosmo, Rikard Edgren, Henrik Emilsson, Ann Flismark, Johan Hoberg, Martin Jansson, Johan Jonasson, Petter Mattsson, Simon Morley, Torbjörn Ryber, Christin Wiedemann

Several very nice accounts of SWET1 have already been given, but now that two weeks have passed I feel ready to share my personal reflections on the weekend.
 
I had never participated in anything similar before, and was unsure of what to expect. I did have rather high expectations, but even so I was overwhelmed by the sheer intensity of the discussions. There was so much energy and so many ideas and thoughts flying around that by the end of the first evening I was suffering from a total intellectual meltdown and had to go to my room and install some mind mapping software to relax...

The whole following week a swarm of ideas was bouncing around inside my head, but by the second week things had started to settle down and sink in, and by now I feel fairly recovered.

Spending a day and a half talking about nothing but exploratory testing was of course very stimulating and inspiring. Everyone took part actively and contributed in a unique way. My peers provided me with ideas, hints, tips, tool suggestions and general encouragement that really gave me a push forward as a tester.

The best thing though was the shared joy of testing.

Thank you everyone.


21 October, 2010

Spinning threads into yarn

Recently the approach I have had to my testing has been heavily influenced by session-based test management. I have made a test plan consisting of a high-level list of test tasks. The testing has been exploratory, performed in sessions on a given topic, e.g. a function. I have two problems with this:
  1. As much as I like lists, they make bad test plans - at least for me. There are always too many tasks, so the list becomes too long, covering several pages in a document and making it hard to get an overview. It is also difficult to depict relationships. I have tried different groupings and headers, and managed to create nightmare layouts that are impossible to read. A list is also highly binary: either a task is done or it is not; there is no equivalent of "work in progress".
  2. I would rarely be able to finish a session without interruption. Something urgent would come up and I would have to abort the session, and when restarting, the conditions might have changed. As I discussed in the post on October 17th, I also tended to feel obliged to complete the session before I took on a new task, even though more important matters might have surfaced after the session started. In this situation it was of course hard to keep track of the status of the tasks.
The appeal of thread-based test management is of course that I can perform test tasks in parallel - it is not necessary to say that a test task is done. Instead I can scrape the surface of everything once and sort of work my way down to the insignificant details from there.
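To make the contrast concrete, here is a hedged little sketch of what I mean - the thread names and depth levels are invented for illustration, not part of any real tool:

    from enum import Enum

    class Depth(Enum):
        UNTOUCHED = 0
        SURFACE_SCRAPED = 1   # a quick first pass over the area
        DUG_INTO = 2          # deeper, more focused work spent here
        CONFIDENT = 3         # no reason to spend more time here right now

    # Threads are never simply "done" or "not done"; they carry a depth of coverage.
    threads = {
        "installation": Depth.SURFACE_SCRAPED,
        "error handling": Depth.UNTOUCHED,
        "report generation": Depth.DUG_INTO,
    }

    def shallowest_threads(threads):
        """The threads touched the least - natural candidates for the next bit of testing."""
        lowest = min(depth.value for depth in threads.values())
        return [name for name, depth in threads.items() if depth.value == lowest]

    print(shallowest_threads(threads))   # -> ['error handling']

Being able to ask "what have I barely touched?" instead of "which tasks are not done?" is exactly the shift I am after.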

I have resolved to use a different approach for the next test period. This is what I have done so far:
  • I have installed the open source mind map tool FreeMind
  • I have created a mind map (I call it fabric) with
    • Error reports
    • Product heuristics
    • Generic heuristics
  • Since the fabric only contains short thread names, I have introduced knots that are (k)notes in simple text format that I link, or tie, onto the threads. The notes contain additional information such as hints, tips and reminders. (A small sketch of how this hangs together on disk follows below.)
  • I have compiled the stitch guide. The stitch guide provides guidelines on how I think that my project should use thread-based test management. The guidelines are not rules, but suggestions intended to promote consistency.
  • I have a template for daily status reports. The report can contain anything I feel needs writing down during the day, but should at least contain the names of the threads that have been tested in some way. I am currently looking for a more fun name than "Daily status report".
  • The actual testing will of course be exploratory.
[Image: A fabric. Colours and icons are used to show priorities and status. The red arrow indicates that there is a knot tied onto the thread.]
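For the curious, this is roughly what the fabric and a knot look like on disk. The .mm file is plain XML; the element and attribute names below reflect my understanding of the FreeMind format, and the file names are just examples, so treat this as an illustration rather than a specification.

    import xml.etree.ElementTree as ET

    # One knot: a plain text note with hints and reminders, tied onto a thread.
    with open("error_reports_knot.txt", "w", encoding="utf-8") as knot:
        knot.write("Remember to retest the old report layout on the new build.\n")

    # The fabric itself: a root node with the threads as child nodes.
    fabric = ET.Element("map", version="0.9.0")
    root = ET.SubElement(fabric, "node", TEXT="My test project")
    for thread in ("Error reports", "Product heuristics", "Generic heuristics"):
        ET.SubElement(root, "node", TEXT=thread)

    # Tie the knot onto the "Error reports" thread via a hyperlink.
    root[0].set("LINK", "error_reports_knot.txt")

    ET.ElementTree(fabric).write("fabric.mm", encoding="utf-8", xml_declaration=True)

In practice I of course create the nodes and links directly in FreeMind; the point is only that threads, knots and their ties are nothing more exotic than nodes, text files and hyperlinks.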
My plan is to use the first version of the fabric as my test plan. During the test period I will keep updating the fabric, and at the end of the test period the current status of the fabric will be my test report.

Note to the reader: This is my current interpretation of thread-based test management, and this constitutes my starting point. Hopefully it will evolve into something that I find useful, and turn out to be an improvement compared to today. If not, I will not hesitate to chuck it and try something new. I have big hopes though and cannot wait to get started!

17 October, 2010

Picking up a new thread

I have just spent the weekend at a very inspiring peer conference on exploratory testing (Swedish Workshop on Exploratory Testing, SWET1) in Stockholm.

There were many interesting presentations and discussions, but what is on my mind right now is James Bach's presentation on thread-based test management (http://www.satisfice.com/blog/archives/503). I have been trying to adopt Session-Based Test Management (SBTM) for a while, but never managed to do any proper time-boxing since I would typically be interrupted in the middle of a session and have to abort or restart.

Quite often I will start a test activity, be interrupted and not finish the session, start a new test activity, not complete that session either for whatever reason and so on. Working this way makes me stressed since
  • It feels like I never finish anything.
  • I do not have an overview of what I am doing and what the status of the different tasks is.
I have also come to realize that in some cases I feel a bit hemmed in by the actual session. Even if the conditions change or something urgent comes up, I still feel obliged to finish the session before I start a new activity. In those cases when I work in a chaos of sorts, I think this perceived need to "be loyal" to my session reduces my efficiency.

So, I'm going to have a go at thread-based test management instead, and I start by making a mind map!