A Track down History

Currently I am doing a large V-model waterfall project with a three-month testing period and 500+ requirements. To track the progress, I want to dust off my old “test progress tracking” method, which I matured and described in 2011 and 2014.

The approach was documented in two articles for “The Testing Planet”, a paper magazine by the Software Testing Club. The “STC” later became the now world-famous Ministry of Testing. Unfortunately the original articles are no longer available on the MoT site – they are on the WayBack Machine. To save you tracking down that path, here’s a recap of:

A Little Track History that goes a Long Way

For large (enterprise, waterfall) projects, tracking test progress is important, as it helps us understand whether we will finish a given test scope on time. Tracking many projects has given me one key performance indicator: the daily Percent Passed tests compared to an s-curve. The s-curve is based on the following data points, collected over numerous projects:

Time progress    Expected Passed Progress
10%              0%
20%              5%
30%              10%
40%              30%
50%              45%
60%              60%
70%              75%
80%              90%
90%              95%
100%             100%
Adding a 3rd order polynomial trend-curve gives the s-curve.
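As a rough sketch, the same trend curve can be reproduced outside a spreadsheet – here in Python with numpy, which is my assumption for tooling (the articles used a spreadsheet trend-curve):

```python
import numpy as np

# Data points from the table above, as fractions rather than percentages
time_progress = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
passed_progress = np.array([0.0, 0.05, 0.10, 0.30, 0.45, 0.60, 0.75, 0.90, 0.95, 1.00])

# A 3rd order polynomial fit gives the s-curve, like the spreadsheet trend line
coefficients = np.polyfit(time_progress, passed_progress, 3)
s_curve = np.poly1d(coefficients)

# Expected passed progress at any point in the test period, e.g. halfway through
expected_at_halfway = s_curve(0.5)
```

Plotting `s_curve` next to the daily “Actual” passed percentage gives the comparison described above.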

If the “Actual” curve is flat over several days, or is below the blue trend line, then investigate what is blocking the test progress… defects, perhaps:

The Defect Count and Camel Image

During a large project like this, the active bug count goes up and down. No one can tell what we will find tomorrow, or how much. In my experience, tracking the daily count of active defects (i.e. not yet resolved) is key; the count will oscillate like the humps on a camel:

Camel background is optional

If the curve doesn’t bend down after a few days, there are bottlenecks in the timely resolution of the defects found. When the count goes up, testing of a new (buggy) area is usually happening. Over time the tops of the humps should get lower and lower, and by the end of the project the curve should drop steeply to 0.
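A minimal sketch of how such a daily count can be derived from a defect log – the defect data, dates and field layout below are purely illustrative assumptions, not from the original articles:

```python
from datetime import date, timedelta

# Hypothetical defect log: (opened, resolved) dates; None means still open
defects = [
    (date(2024, 3, 1), date(2024, 3, 5)),
    (date(2024, 3, 2), date(2024, 3, 4)),
    (date(2024, 3, 3), None),
]

def active_count(day):
    """Defects opened on or before `day` and not yet resolved at end of `day`."""
    return sum(1 for opened, resolved in defects
               if opened <= day and (resolved is None or resolved > day))

# One data point per day gives the "camel" curve when plotted
start, end = date(2024, 3, 1), date(2024, 3, 6)
curve = [active_count(start + timedelta(days=i))
         for i in range((end - start).days + 1)]
print(curve)  # → [1, 2, 3, 2, 1, 1]
```

The humps appear when defects are opened faster than they are resolved, and the curve bends down again as resolutions catch up.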

And thus a little track history has come a long way.

The Testing Planet, July 2011

A 30 Days Agile Experience

In September 2017 the Ministry of Testing had a crowd-based knowledge sharing event called “30 Days of Agile Testing”, with a small learning activity for each day of the month. As with the similar security event, I set up a weekly schedule at work to meet for a time-boxed hour and discuss 3-5 selected topics each time.

Our score was 17 topics discussed – some more discussed than actually tried out, hence the half marks on the poster in the window below. My coworkers and I work on many different teams, so digging into specific team tools and processes was out of scope.

Here are a few of our findings:


Links to “the Club” on some of the topics we selected:
I wonder if

I wonder if… the Norwegian and Swedish texts are correct on this picture:


“I wonder if” is surely among the things that I, as a tester, say or think a lot. You will also hear me cheer when we find a critical bug. Every defect / bug / observation / issue / problem / incident we find is our chance to learn about the product. It’s a natural part of the game to find things and then to handle them. Defer them if so inclined, mitigate the risks, fix the bugs, update the processes – but always make a decision based on the new knowledge you have.


Here are some other things I often say:


Originally at the Ministry of Testing Facebook page, but the twists above are mine.

QA Aarhus – Exploratory Testing How and When

QA Network Aarhus is a local non-affiliated network of testers (and good friends) in Aarhus, where I had the great pleasure of talking about Exploratory Testing. This is the link collection; the slides are attached.



Mindmaps for 400

Finally, non-profit self-organizing software testing is happening in Denmark. On May 21, 2014 we actually had two events:

At the first I was glad to share my experiences using mind maps in software testing, note taking and information structuring. (Get the PDF Xmind mind map here: https://jlottosen.files.wordpress.com/2014/05/mindmaps-400.pdf)

You stop going deeper down the tree when there is no more knowledge to gain – just like good (exploratory) testing.

The cultural context of the “for 400” comes from the Jeopardy TV quiz, where the questions come in 4 levels – 100, 200, 300, 400 – for increasingly harder questions. The prize is similarly $100 for level 100, etc.

Software testing is only dead if it stands still

[ Software Testing Club Blog | January 23, 2012 ]

Here are some quotes from actual test mission statements from the last 10 years. Testing is always changing. Come to think of it, it’s only dead if it stands still.

2003

We discover and measure the quality of the system:
– to test everything is utopia,
– we should test something, something new, something old (and something blue?)

2006

The primary objective of testing is to reveal the quality of the delivery. The revealed quality is documented in test reports, which serve as input for decisions on approval or rejection of the delivery. Key principles: early involvement, structured testing, based on the V-model, documented, COTS tools (TestDirector).

2008

Testing is the structured activity of identifying the quality and improving the quality of an IT-supported product or project
– Identification of quality = preparing and executing a test activity
– Improving the quality = evaluation, fixing and retesting of raised incidents

Test provides information about the quality of an IT-supported product or project. With that information:
– Quality of products can be improved
– Project decisions can be based on facts
– Processes can be investigated for improvement

2011 

Testing has the mission of providing information to the project about the quality of the business features of the solution.

Testing is about learning about the quality of the solution, not about confirming, verifying or assuring.
Test is about adjusting, not about following a strict script and knowing everything up front.

Originally at my SoftwareTestingClub blog – more from my STC blog here: All oracles are failable, Testing decommissioning of ICBMs and software, and Testing a new version.