Bug Return Policy


We find bugs, irregularities, things that should be there, and things that shouldn't. From that we create a bug report, someone looks into it, and then it's a wrap. Unless the information is not returned, and no one is the wiser. To me, a bug report is a representation of an observation of the system, usually something that's wrong. Some tools and vocabularies call these "defects", "bugs", "tickets", or "incidents". A bug report can be an email, a post-it, or even a mention in passing [2].

Here are some recent sample headlines:
– The design is unclear, please elaborate
– With this role, I can access this, which I shouldn’t
– When I compare the requirements to the delivery list, I find these ..
– There is no data here, but there should be
– We thought we wanted this, but now we want something else

Notice that a bug report usually originates with a person making an evaluation. This person is the tester, no matter the functional hat (SME, SDET, PO, VP). This may be tool-supported, coming from a log of automated checks, or from BDD or Jenkins or whatnot. No matter the number of tools, a person is making an informed decision and raising the bug [4]. Come to think of it, she could choose to do nothing. But something is bugging her [5].

Here are some recent replies to my bug reports:
– it is by design
– it works on the development environment
– that’s how the COTS (or framework or platform) handles it
– ok, got it. seems like an easy fix
– awrh, now we have to rethink the whole thing
– Defferred, FixedUpStream, Rejected,
– Hmm, I see what you mean. Let me look into it

These replies come from some other person than the tester – let's call him the fixer. First of all the fixer evaluates the report – he makes a decision, based on his context and his available information. Sometimes it's an easy fix, sometimes it cannot reasonably be fixed. Sometimes the fix has diminishing returns. And everything in between.

What is very important to me is that the fixer communicates his immediate evaluation to the tester. As quickly and transparently as possible. The fixer, to me, does not have the option to close it [1] alone. Nor can he fix the bug without letting the tester know. In the end the tester calls whether it is resolved or acceptable given the updated information. If the tester and fixer cannot agree, they should work it out between themselves first – and only then call for outside help.
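This tester–fixer flow can be sketched as a small state machine. A minimal, hypothetical illustration – not any actual tool's workflow, and the class and state names are made up:

```python
from enum import Enum

class BugState(Enum):
    RAISED = "raised"        # tester has made an observation and raised it
    EVALUATED = "evaluated"  # fixer has returned an immediate evaluation
    RESOLVED = "resolved"    # tester has confirmed the fix (or the no-fix)

class BugReport:
    """Toy model: the fixer may evaluate, but only the tester may close."""
    def __init__(self, title, tester):
        self.title = title
        self.tester = tester
        self.state = BugState.RAISED
        self.reply = None

    def evaluate(self, fixer_reply):
        # The fixer's evaluation is recorded and returned to the tester;
        # it never silently closes the report.
        self.reply = fixer_reply
        self.state = BugState.EVALUATED

    def close(self, closed_by):
        if closed_by != self.tester:
            raise PermissionError("only the tester who raised it may close")
        if self.state is not BugState.EVALUATED:
            raise ValueError("the fixer reply must reach the tester first")
        self.state = BugState.RESOLVED

bug = BugReport("There is no data here, but there should be", tester="Gitte")
bug.evaluate("it is by design")
bug.close(closed_by="Gitte")
print(bug.state)  # BugState.RESOLVED
```

The point of the sketch is the two guard clauses in `close`: the report cannot be closed by anyone but the tester, and never before the fixer's reply has been returned.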

The bug report and the "fixer reply" have to be returned to the tester. Either the fix has to be tested, or the no-fix has to be tested too. It's all part of the game – and it's integral to improving the quality in the short run, by fixing this specific product. It is equally integral to improving the quality in the long run, by adding knowledge and collaboration around the bugs found. Every bug, every clarification, every wish from the tester to investigate something about the product counts towards collaborating on the quality of the solution.

TL;DR: Always direct the reply to a bug back to the person who found it.

1: Closure http://www.ministryoftesting.com/2014/11/closure/
2: Mentioning in passing, aka “mipping” http://www.satisfice.com/blog/archives/97
3: 3 types of bugs http://cartoontester.blogspot.dk/2010/06/3-types-of-bugs.html
4: How to raise a bug http://cartoontester.blogspot.dk/2012/10/3-steps.html
5: Something that bugs someone whose opinion matters. http://www.satisfice.com/glossary.shtml#Bug


QA Aarhus – Exploratory Testing How and When


QA Network Aarhus is a local non-affiliated network of testers (and good friends) in Aarhus, where I had the great pleasure of talking about Exploratory Testing. This is the link collection; the slides are attached.



How to spot defects


Two checking / routinized tips for the basic tester:
  • Not doing something – it should
  • Doing something – it shouldn't

Two testing / bespoke tips for the advanced tester:
  • Doing something – it should
  • Not doing something – it shouldn't
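The four tips can be read as a 2×2 grid: observed behaviour (doing / not doing) against expectation (should / shouldn't). A minimal sketch of that grid – the function name and return strings are made up for illustration:

```python
def spot(observed_doing: bool, should_be_doing: bool) -> str:
    """Classify an observation against an expectation (toy 2x2 grid)."""
    if observed_doing and not should_be_doing:
        return "bug: doing something it shouldn't"   # routinized check
    if not observed_doing and should_be_doing:
        return "bug: not doing something it should"  # routinized check
    if observed_doing and should_be_doing:
        # checking passes - the bespoke question is: is it doing it well?
        return "ok so far: test how well it does it"
    # nothing observed, nothing expected - the bespoke question is:
    # would it start misbehaving if provoked?
    return "ok so far: test whether it could be provoked into doing it"

print(spot(observed_doing=True, should_be_doing=False))
# bug: doing something it shouldn't
```

The first two branches are the checking cases: a mismatch between observation and expectation is a bug. The last two branches are where the advanced tester earns the keep – everything looks fine, and exploration begins.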

See also: What if – and Does it Matter, and Testing AND Checking

3 types of bugs by Andy Glover @cartoontester

Use SCRUM for your testing


  • Put (testing) ideas on the backlog
  • Define an amount of ideas to take on (use SBTM?)
  • Do the work
    • Some things work
    • Some things don't work
  • Present/ship the findings
  • Get new ideas
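The loop above can be sketched in a few lines. A toy illustration only – `run_session` is a hypothetical stand-in for actually doing a timeboxed test session, and the idea strings are invented:

```python
def run_session(idea: str) -> dict:
    """Hypothetical stand-in for a timeboxed (SBTM-style) test session."""
    return {"idea": idea, "findings": f"notes from exploring: {idea}"}

backlog = ["check role access", "compare requirements to delivery",
           "explore behaviour with no data"]

sprint = backlog[:2]                               # define an amount of ideas
report = [run_session(idea) for idea in sprint]    # do the work
backlog = backlog[2:] + ["retest fixed bugs"]      # get new ideas
print(len(report), "sessions reported,", len(backlog), "ideas left")
```

The shape is the point: a visible backlog of test ideas, a fixed batch per iteration, findings presented at the end, and new ideas fed back in.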
Agile v Traditional Housekeeping methods by cartoontester

See also Exploratory Test in a nutshell

Close the tool and start exploring


A good colleague of mine said to me: "You know, Jesper – I would prefer if the testers only prepared the test cases in QC, and then closed the tool and started testing." What a great way to put it! A good test preparation should aim to prepare the testers for the application. Put on edge, the testers should cram like preparing for a closed-book exam (evil me ;-). When you start testing an application environment like this, you apply your knowledge and start exploration and learning (see Michael Bolton on testing vs checking and Ajay Balamurugadas on Scouting and testing).

Contrary to an exam – DO go back to your tool frequently to do the bookkeeping and remind yourself of areas left out. But use it as reference … mostly :-). Michael Bolton writes on "Handling an overstructured mission" that scripted/prepared test cases reveal problems or considerations for new test cases. So we should add them and execute them! We should also keep management informed of the added value. Kristoffer Nordström comments: "Excellent testing is a process of exploration, discovery, investigation, learning and communicating that information to the stakeholders." – indeed so!

Exploration goes for bug hunting as well. Fred Beringer has a good blog post on The Long Tail of Bugs (with a clear reference to the "long tail" and the 80-20 Pareto principle). He asks "What do you find in your long tail of bugs? … What depth of the tail needs to be addressed?" This is where the exploration of application capabilities begins. Fred also writes "On top of this long tail of bugs, you can easily imagine a parallel long tail of test cases". A good preparation leaves the skilled tester with knowledge from the business on what is most important – and the knowledge is well applied and exercised when closing the tool and starting to test … start thinking … start learning … START exploring.

Originally at http://www.eurostarconferences.com/blog/2010/9/6/close-the-tool-and-start-exploring—jesper-ottosen.aspx – more from EuroStar 2010: Perfects in softwaretesting

So Everything Must Be Tested?


Everything? Really? A 100% Coverage(1) of everything? As in E-v-e-r-y-t-h-i-n-g?  nah…

“Something must be tested” … oh wait (2) …

“Something – that matters to someone who matters – must be tested”.

So who matters – who decides what to test?

“Something – that matters to the stakeholders(3) of the project – must be tested”

Use both the right and the left brain (The Right Brain for the future), remember!

“Something – that matters to the stakeholders of the project – must be explored and confirmed” (4,5)

So what is this something that matters?

“A solution that solves a problem for the stakeholders – must be explored and confirmed”

Must? Always? Must-Should-Could-Won't (MoSCoW)? Every time? OK OK

“A solution that solves a problem for the stakeholders will be explored and confirmed within a given business context”. In short: the scope of testing is a business decision.

Cartoon Tester: Coverage


1: http://blog.asym.dk/2011/03/29/covering-test-coverage/

2: “A bug is something that matters to someone who matters .. to me” (Michael Bolton, James Bach, Cem Kaner, Bret Pettichord et al.)

3: Stakeholders – in the broadest sense: developers, customers, users,…

4: Michael Bolton http://www.developsense.com/blog/2009/08/testing-vs-checking/

  • Checked as in confirming, using scripts and sequences
  • Tested as in learning and exploring, sapience

5: Rapid Software Testing http://www.satisfice.com/rst.pdf pages 59-67.