Closing the Gaps

[Previously on the Ministry of Testing, Nov 2014 – now only on the Web Archives]: About Closure

When I’m in a testing activity I want my test cases [Passed], my user stories [done] and my coffee [black]. Stuff may have a start point, some states in between and an end state. Let’s look at ways to represent states and articulate the meaning of states.
One way to illustrate the status of the product being tested is to model our activities with states. An agile user story may be [ready], [in progress] or [done]. A document may be [final] or [approved], and a mind map node may be iconified, etc. States are so common that we sometimes forget the theory behind the model, and the benefits that theory gives us.

The representation of closure

For instance, we can look to graph theory[1] from computer science to help us understand and control state diagrams. It is the same graph theory that brings us state machines and state-transition diagrams, but that is another story[2]. In a graph-theory state model we want one unique start state (like [To do]) and one unique end state (like [Passed]); everything in between is intermediate.

A (single) end state helps prevent the state machine from going on forever[3], and us from going on forever too. [Deferred] and [Rejected] are temporary states to me. Setting [Rejected] back to “detected by” prompts the tester to reflect on the reasons. The reasons are then tested (in the brain). Sometimes it’s a “my bad”, but quite often the tester finds, with more data and examples, that the issue simply isn’t “rejected” after all.
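To make this concrete, here is a minimal sketch of such a state model in Python. The state names and transitions are my illustrative assumptions, not a prescription from any particular tool; the point is the unique start state, the unique end state, and the fact that [Rejected] routes back to the tester rather than terminating the flow.

```python
from enum import Enum, auto

class BugState(Enum):
    DETECTED = auto()     # unique start state
    IN_PROGRESS = auto()
    DEFERRED = auto()     # temporary, not terminal
    REJECTED = auto()     # temporary, routes back to the tester
    CLOSED = auto()       # unique end state

# Allowed transitions: one start, one end, everything else intermediate.
TRANSITIONS = {
    BugState.DETECTED:    {BugState.IN_PROGRESS},
    BugState.IN_PROGRESS: {BugState.DEFERRED, BugState.REJECTED, BugState.CLOSED},
    BugState.DEFERRED:    {BugState.IN_PROGRESS},
    BugState.REJECTED:    {BugState.DETECTED},   # back to "detected by"
    BugState.CLOSED:      set(),                 # terminal: no outgoing edges
}

def move(current: BugState, target: BugState) -> BugState:
    """Apply a transition, refusing edges the model does not allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target
```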

The understanding of closure

Similarly, the agile “Definition of Done”[5] and “Definition of Ready”[6] help the agile team phrase when a task is to change state; sometimes it’s explicit, sometimes it’s implied. The understanding of the terms (the semantics) usually matters more than the syntax (the rules and representation). Sometimes it’s necessary to “connect the lines”.

There are two related psychology theories on closure. One is the Gestalt law of closure[7] – that we tend to self-organize items into an orderly structure. The classic image of implied triangles isn’t really about triangles – it’s about our human tendency to connect the dots. The other is the psychological desire to close things to gain controllability[8]:

The need for closure is the motivation to find an answer to an ambiguous situation. This motivation is enhanced by the perceived benefits of obtaining closure, such as the increased ability to predict the world and a stronger basis for action.

Management and stakeholders often want a “firm and unambiguous” answer from our testing investigations, and this is often the business justification for setting states on our work products: we have states in order to illustrate work progress. Sometimes a loose representation and a strong shared understanding work well; sometimes a stricter representation and more elaboration are required.

The syntax of states can easily be explained and codified (and checked), while the semantics and perception are less direct – and need analysis (and testing). All work products (even mind maps and test charters) have states, and we must articulate both the syntax and the semantics to the team and stakeholders.

References

  1. Graph theory http://en.wikipedia.org/wiki/Graph_theory
  2. Fell in the trap of total coverage https://jlottosen.wordpress.com/2012/11/05/fell-in-the-trap-of-total-coverage/
  3. Finite state machines http://en.wikipedia.org/wiki/Finite-state_machine
  4. A Track down History
  5. Definition of Done https://www.scrum.org/Resources/Scrum-Glossary/Definition-of-Done
  6. Definition of Ready http://guide.agilealliance.org/guide/definition-of-ready.html
  7. Law of closure http://jeremybolton.com/2009/09/gestalt-design-principles-the-law-of-closure/
  8. Closure (Psychology) http://en.wikipedia.org/wiki/Closure_(psychology)

A Track down History

Currently I am doing a large V-model waterfall project with a three-month testing period and 500+ requirements. To track the progress I want to dust off my old “test progress tracking” method, which I matured and described in 2011 and 2014.

The approach was documented in two articles for “The Testing Planet”, a paper magazine by the Software Testing Club. The “STC” later became the now world-famous Ministry of Testing. Unfortunately the original articles are no longer available on the MoT site – they are on the Wayback Machine. So as not to have to track down that path, here’s a recap of:

A Little Track History that goes a Long Way

For large (enterprise, waterfall) projects, tracking test progress is important, as it helps us understand whether we will finish a given test scope on time. Tracking many projects has given me one key performance indicator: the daily percentage of passed tests compared to an S-curve. The S-curve is based on the following data points, drawn from numerous projects:

Time progress    Expected Passed Progress
10%              0%
20%              5%
30%              10%
40%              30%
50%              45%
60%              60%
70%              75%
80%              90%
90%              95%
100%             100%
Adding a third-order polynomial trend curve gives the S-curve.
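As a minimal sketch of how that trend curve can be derived – assuming Python with NumPy, my choice here rather than anything mandated by the original articles – fit a third-order polynomial to the data points above:

```python
import numpy as np

# Reference data points: time progress vs. expected percentage of passed tests.
time = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100])  # % of test period elapsed
passed = np.array([0, 5, 10, 30, 45, 60, 75, 90, 95, 100])  # % of tests expected passed

# Fit a 3rd-order polynomial; evaluating it gives the smooth S-curve.
coeffs = np.polyfit(time, passed, deg=3)
s_curve = np.poly1d(coeffs)

for t in (25, 50, 75):
    print(f"At {t}% of the period, expect ~{s_curve(t):.0f}% passed")
```

Comparing the daily “Actual” percentage of passed tests against this curve is what makes the indicator useful.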

If the “Actual” curve stays flat for several days, or falls below the trend line, then investigate what is blocking the test progress… defects, perhaps:

The Defect Count and Camel Image

During a large project like this the active bug count goes up and down. No one can tell what we will find tomorrow, or how many. In my experience, tracking the daily count of active defects (i.e. not yet resolved) is key; it will oscillate like the humps on a camel:

[Chart: daily active defect count – camel background is optional]

If the curve doesn’t bend down after a few days, there are bottlenecks in the timely resolution of the defects found. When the count goes up, it usually means a new (buggy) area is being tested. Over time the tops of the humps should get lower and lower, and by the end of the project the count should drop steeply to 0.
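Here is a minimal sketch of producing such a daily count, assuming defect records with an opened date and an optional resolved date (the record layout and dates are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical defect records: (opened, resolved); resolved=None means still active.
defects = [
    (date(2015, 10, 1), date(2015, 10, 6)),
    (date(2015, 10, 2), None),
    (date(2015, 10, 5), date(2015, 10, 12)),
]

start, end = date(2015, 10, 1), date(2015, 10, 14)
day = start
while day <= end:
    # A defect is active on a day if it was opened by then and not yet resolved.
    active = sum(1 for opened, resolved in defects
                 if opened <= day and (resolved is None or resolved > day))
    print(day, "#" * active)  # crude text chart: watch for the camel humps
    day += timedelta(days=1)
```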

And thus a little track history has come a long way.

The Testing Planet, July 2011

I wonder if

I wonder if… the Norwegian and Swedish texts in this picture are correct:

[Photo, 2015-10-29: a sign with Norwegian and Swedish text]

“I wonder if” is surely among the things that I, as a tester, say or think a lot. You will also hear me cheer when we find a critical bug. Every defect / bug / observation / issue / problem / incident we find is our chance to learn about the product. It’s a natural part of the game to find things and then to handle them. Defer them if so inclined, mitigate the risks, fix the bugs, update the processes – but always make a decision based on the new knowledge you have.

[Image: “awesome”]

Here are some other things I often say:

[Images: “revert”, “that’s odd”, “strange”]

Originally at the Ministry of Testing Facebook page, but the twists above are mine.

Bug Return Policy

We find bugs, irregularities, things that should be there, and things that shouldn’t. From that we create a bug report, someone looks into it, and then it’s a wrap. But if the information is not returned, no one is the wiser. A bug report to me is a representation of an observation of the system, usually something that’s wrong. Some tools and vocabularies call these “defects”, “bugs”, “tickets” or “incidents”. A bug report can be an email, a post-it, or even a mention in passing [2].

Here are some recent sample headlines [3]:
– The design is unclear, please elaborate
– With this role, I can access this, which I shouldn’t
– When I compare the requirements to the delivery list, I find these ..
– There is no data here, but there should be
– We thought we wanted this, but now we want something else

Notice that a bug report usually originates with a person making an evaluation. This person is the tester, no matter the functional hat (SME, SDET, PO, VP). The report may be tool-supported, coming from a log of automated checks, from BDD or Jenkins or whatnot. But no matter the number of tools, a person is making an informed decision and raising the bug.[4] Come to think of it, she could choose to do nothing. But something is bugging her [5].

Here are some recent replies to my bug reports:
– it is by design
– it works on the development environment
– that’s how the COTS (or framework or platform) handles it
– ok, got it. seems like an easy fix
– awrh, now we have to rethink the whole thing
– [Deferred], [FixedUpstream], [Rejected]
– Hmm, I see what you mean. Let me look into it

These replies come from some person other than the tester – let’s call him the fixer. First of all, the fixer evaluates the report – he makes a decision based on his context and his available information. Sometimes it’s an easy fix, sometimes it cannot reasonably be fixed. Sometimes the fix has diminishing returns. And everything in between.

What is very important to me is that the fixer communicates his immediate evaluation to the tester, as quickly and transparently as possible. The fixer, to me, does not have the option to close it [1] alone. Nor can he fix the bug without letting the tester know. In the end the tester calls whether it is resolved or acceptable, given the updated information. If the tester and fixer cannot agree, call for outside help – but only after the two people have tried to work it out between themselves first.

The bug report and the “fixer reply” have to be returned to the tester. Either the fix has to be tested, or the no-fix has to be tested too. It’s all part of the game – and it’s all integral to improving the quality in the short run, by fixing this specific project, and to improving the quality in the long run, by adding knowledge and collaboration to the solution of the bugs found. Every bug, every clarification, every wish from the tester to investigate something about the product counts towards collaborating on the quality of the solution.
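As a small illustration of the policy, here is a sketch in Python – the class and field names are my own invention, not any tool’s API – where every reply is routed back to the reporter, and only the reporter can close the report:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    reporter: str           # the tester who raised the bug
    status: str = "open"    # open -> answered -> closed

    def answer(self, fixer: str, reply: str) -> None:
        # The fixer's evaluation is always returned to the tester.
        print(f"{fixer} -> {self.reporter}: {reply}")
        self.status = "answered"

    def close(self, who: str) -> None:
        # Only the reporter may close: the tester makes the final call.
        if who != self.reporter:
            raise PermissionError("only the reporter can close a bug report")
        self.status = "closed"

bug = BugReport(reporter="tester")
bug.answer(fixer="fixer", reply="works by design")
bug.close(who="tester")  # the tester agrees with the reply and closes the report
```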

TL;DR: Always direct the reply to a bug back to the person who found it.

1: Closure http://www.ministryoftesting.com/2014/11/closure/
2: Mentioning in passing, aka “mipping” http://www.satisfice.com/blog/archives/97
3: 3 types of bugs http://cartoontester.blogspot.dk/2010/06/3-types-of-bugs.html
4: How to raise a bug http://cartoontester.blogspot.dk/2012/10/3-steps.html
5: Something that bugs someone whose opinion matters. http://www.satisfice.com/glossary.shtml#Bug
