Focus on Goals over Risks

Looking into the discussion on what goes into a Test Plan and what goes into a Test Strategy, it's my personal opinion that we can improve our business alignment. Risk-based testing and Product Risk Analysis have been around for a long time – but better models have emerged that address what will be more impactful.


A Better ROI for testing

In classic test techniques and test approaches the test activity is a scarce resource. Due to time and money constraints, a risk-based priority was always required to make ends meet. We now have the tools and approaches for a better Return on Investment on the testing activity, and it's all about running more tests, more frequently and sooner.

You never have time to test everything. So in the context of classic test techniques and testing types (I'm looking at you, old fart) you had to prioritize and combine tests to do more with what you had.

  • “MoSCoW priorities” on test cases (Must, Should, Could, Won't). Yet when management put a 20% cap on Must cases, the business just requested more tests until that level was reached.
  • Pairwise combinatorics and equivalence classes might reduce the number of tests, but they always seemed to focus on low-level input fields and formats, never on business outcomes.
  • Discussions on whether a test case was a regression test, an integration test or what not. Sometimes regression tests mattered more than new functionality. Sometimes SIT and UAT seem to be the very same thing, so why test it twice? What the business really wanted was not window dressing of words, but results, no matter the name.

Counting tests is like..

An analogy to testing everything is counting all possible decimal numbers. There is always one more decimal position around the corner. For each number, I can select any number of new numbers around it. As with tests: I can select any number of new .. variations .. of a test (browser, time, user, preconditions…). It's hard to count something that spills into one another, as two tests can cover much the same ground and still be two different things in the end.

.. and the rocks overlap too.

The classic techniques above are filtering techniques: first they reduce the infinite space of possible tests into something distinct (a “test case”) – where every test is separated from the others (countable). A “rock” in Aaron's analogy. Secondly they filter it into something finite, so that the work can be completed and we can answer “when are we done testing“.

Filtering down from all possible numbers to a countable and finite set.

Old Cost of Software Testing

The above filtering rests on the premise that every value/test counted has a high price – that every test has a high cost to prepare and run. The average cost to write a formal test case could easily be 3 hours, plus 1 hour for a tester to run it, and then perhaps a 50% rerun/defect rate. So with 100 test cases, a simple cost model lands at no less than 450 hours, or 4.5 hours per test including the 50% rerun.
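A minimal sketch of that back-of-the-envelope model – the numbers are the assumptions above, not measurements:

```python
# Back-of-the-envelope cost model for fully manual testing.
# All figures are the assumptions from the text, not measurements.

TEST_CASES = 100
PREP_HOURS = 3.0      # writing a formal test case
RUN_HOURS = 1.0       # one manual execution
RERUN_RATE = 0.5      # share of tests rerun once after a defect/fix

prep_cost = TEST_CASES * PREP_HOURS                  # 300 hours
run_cost = TEST_CASES * RUN_HOURS                    # 100 hours
rerun_cost = TEST_CASES * RERUN_RATE * RUN_HOURS     # 50 hours

total = prep_cost + run_cost + rerun_cost
print(f"Total: {total} hours, {total / TEST_CASES} hours per test")
# Total: 450.0 hours, 4.5 hours per test
```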

No wonder management wants to reduce this and drive the cost down to the lowest denominator. Consider also that this only covers – at best – all the tests once, and half the tests twice. Is that a sufficient safety net to go live on?

A new view on ROI

Current tools and test approaches turn this around and focus on making testing faster, more frequent and cheaper (per test). The practices are so prevalent and well described that they should really be considered general development best practice (GxP) by now. Consider:

Every project will have its own ratio of automation, but for this simple model, let's assume 75% can be automated/tool-supported to such a degree that running them is approximately costless. I.e. they run as part of continuous testing, in a CI/CD pipeline with no hands or eyeballs on it.

Preparing tests still takes time; let's assume the same 3 hours. So the 25 tests with no automation still need 112.5 hours – and the automated tests, with running at zero, only account for their 225 hours of preparation. Just this simple model reduces the cost of testing by 25% (from 450 to 337.5 hours) – still including rerunning 50% of the tests once.
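The same sketch, extended with the 75% automation assumption (automated runs treated as free, preparation still 3 hours per test):

```python
# Same back-of-the-envelope model, now assuming 75% of the 100 tests are
# automated and that running an automated test in the pipeline is ~free.
TEST_CASES = 100
PREP_HOURS = 3.0
RUN_HOURS = 1.0
RERUN_RATE = 0.5
AUTOMATION_RATIO = 0.75   # assumption from the text; every project differs

manual = TEST_CASES * (1 - AUTOMATION_RATIO)          # 25 tests
automated = TEST_CASES * AUTOMATION_RATIO             # 75 tests

manual_cost = manual * (PREP_HOURS + RUN_HOURS + RERUN_RATE * RUN_HOURS)  # 112.5 h
automated_cost = automated * PREP_HOURS               # 225.0 h, runs cost ~0

total = manual_cost + automated_cost                  # 337.5 h vs. 450 h all-manual
print(f"{total} hours, {1 - total / 450:.0%} cheaper than the all-manual model")
```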

The modern approach is to make tests plentiful and commoditize them, rather than keep them scarce (see also “Theory of Constraints” with regard to bottlenecks). With the added benefits of CI/CD and a whole-team approach to quality, the research in Accelerate confirms the correlation with clear business benefits.

Since running the automated tests is cheap, we can run them “on demand”. Let's run 25% daily – is that a regression test? Perhaps; it doesn't really matter. Assuming we run a random 25% of the tests every day for two weeks, i.e. 250 test runs, we have increased both the count of test runs and the number of times each test has run. With this approach, our test preparation effort of 225 hours above is now utilized across 250 runs… or under 1 hour of cost per run.
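A sketch of that amortisation – a random 25% sample each working day for two weeks, with the one-off preparation effort spread over all the runs (test names and sampling details are illustrative):

```python
import random

# Sketch: pick a random 25% sample of the suite each working day for two
# weeks and amortise the one-off preparation effort over all those runs.
# Numbers are the assumptions from the text, not measurements.
SUITE = [f"test_{i:03d}" for i in range(100)]   # hypothetical test ids
SAMPLE_SIZE = 25                                # 25% of the suite per day
WORKING_DAYS = 10                               # two weeks
PREP_HOURS_AUTOMATED = 225.0                    # preparing the automated tests

total_runs = 0
for day in range(WORKING_DAYS):
    todays_sample = random.sample(SUITE, SAMPLE_SIZE)
    total_runs += len(todays_sample)            # in a pipeline: run todays_sample here

print(f"{total_runs} runs, {PREP_HOURS_AUTOMATED / total_runs:.1f} hours per run")
# 250 runs, 0.9 hours per run
```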

The whole idea is to (re)imagine the testing effort as fast and repeated sampling among all possible values, done over and over. The more the tests are run, the better – and the better the ROI for testing… and, if you dare, an even better performance by the organization.

Fast, Repeated Sampling of numbers

Shoot, Neglect or Train?

How you treat the bringer of (bad) news tells me a lot about the organisation and its potential for business growth. Go read Accelerate – that book is full of insights. One of the models is the organisational typology from Westrum:

[ Screen capture from the Kindle edition of Accelerate ]

Andy Kelk has a to-the-point description of Westrum on their blog:

To test your organisation, you can run a very simple survey asking the group to rate how well they identify with 6 statements:

  • On my team, information is actively sought.
  • On my team, failures are learning opportunities, and messengers of them are not punished.
  • On my team, responsibilities are shared.
  • On my team, cross-functional collaboration is encouraged and rewarded.
  • On my team, failure causes enquiry.
  • On my team, new ideas are welcomed.

The respondents rate each statement from a 1 (strongly disagree) to a 7 (strongly agree). By collecting and aggregating the results, you can see where your organisation may be falling short and put actions in place to address those areas. These questions come from peer-reviewed research by Nicole Forsgren.

https://www.andykelk.net/devops/using-the-westrum-typology-to-measure-culture
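A minimal sketch of how such a survey could be aggregated – the statement keys and the responses below are made up for illustration:

```python
from statistics import mean

# Hypothetical responses: one dict per respondent, rating each of the six
# Westrum statements from 1 (strongly disagree) to 7 (strongly agree).
responses = [
    {"info_sought": 6, "messengers_safe": 5, "shared_responsibility": 6,
     "cross_functional": 5, "failure_inquiry": 4, "new_ideas": 6},
    {"info_sought": 4, "messengers_safe": 3, "shared_responsibility": 5,
     "cross_functional": 4, "failure_inquiry": 3, "new_ideas": 5},
]

# Average score per statement shows where the organisation may fall short.
for statement in responses[0]:
    avg = mean(r[statement] for r in responses)
    print(f"{statement}: {avg:.1f}")

overall = mean(mean(r.values()) for r in responses)
print(f"overall: {overall:.1f} (closer to 7 = more generative)")
```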

So when a passionate person comes to you with (bad) news, what do you and your organisation do? Do you reflect on, ignore or hide the request? Do you say that it's not a good idea to bridge the organisation? Do you raise a non-conformity and set in motion events to bring “justice”? Or do you experiment with implementing the novel ideas and actively seek information?

FAIL = First Attempt In Learning.

Test ALL the things

TL;DR: We can add testing to all requirements and all business risks. Testing to document requirements and to debunk risks provides valuable information for the business. Let us not limit testing to things that can be coded. The intellectual activity of trial and learning is happening anyway; we might as well pitch in with ways to find important evidence for the decision makers.

Test all the requirements

Traditionally, testing was all about testing the functional requirements that could be coded. Non-functional requirements were left for the specialists, or plainly disregarded. I know I have done my share of test planning with a range of requirements left “N/A” with regards to testing. Especially performance scope, batch jobs, hardware specs, database table expansions and virus scanning have been left out of my functional test plans…

When I look at a list of requirements now, I see that we can indeed test all the things – or at least work on how to document that each requirement is fulfilled. Some requirements are actually quite easy to document: if it's on a screen somewhere, take a screenshot and attach it to a simple test case. Done deal, really. Additionally, with a testing mindset I can think of ways to challenge the details. But do we really, really need to fill up a disk to establish whether it's exactly a 1 GB allocation? Probably not. Do we really, really need to document all requirements? Yes – in some contracts/contexts it's important for the customer to know that everything has indeed been established. Sometimes the customer doesn't trust you otherwise; sometimes the tests are more about your ability to deliver and provide evidence that matters.

Test all the business risks

Look into the business case of your project and find the business risks. Sometimes they are explicitly stated and prioritized. At a recent Ministry of Testing meetup we looked into a case for a large national health system. We looked at the tangible benefits, the intangible benefits and the scored business risks. What worried the business and management most was budget, time and whether the new system would be used in a standardized way. There is an opportunity for testing here: to help address, document and challenge the most important business risks. Traditional testing would usually look at functional requirements that can be coded or configured, and totally miss what matters most to the business.

OK, how do we test the project costs? How about frequent checkpoints of expected spending – would that be similar to tracking test progress? Perhaps – let's find out. Testing is all about asking questions for the stakeholders and solving the most important problems first. Can we help to analyse risks and set up mitigation activities? Sure we can. We just have to step out of our traditional “software only” bubble.
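As a sketch of what “testing the project costs” could look like: compare actual spend against expected spend at each checkpoint and flag deviations, much like flagging a test-progress curve that drifts from the plan. All checkpoint names and figures below are made up for illustration:

```python
# Hypothetical budget checkpoints: expected vs. actual spend, with a simple
# tolerance for when a deviation should trigger investigation.
checkpoints = [
    # (checkpoint, expected spend, actual spend) -- illustrative figures
    ("end of sprint 2", 100_000, 95_000),
    ("end of sprint 4", 200_000, 230_000),
    ("end of sprint 6", 300_000, 360_000),
]

TOLERANCE = 0.10  # flag deviations above 10%

for name, expected, actual in checkpoints:
    deviation = (actual - expected) / expected
    status = "OK" if abs(deviation) <= TOLERANCE else "INVESTIGATE"
    print(f"{name}: expected {expected}, actual {actual}, {deviation:+.0%} -> {status}")
```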

[ Meme: Test ALL the things ]
Read also: Many Bits under the Bridge; Less Software, more Testing; Test Criteria for Outsourced Software; The Expected; Fell in the trap of total coverage.

Links: “A Context-Driven Approach to Delivering Business Value”; Cynefin In Software Testing; Testing during Application Transition Trials

The core purpose of software testing

[ Peak Performance | November 8, 2011 ]

“It seems clear to me that testing, then, should be all about helping companies create products that are viable for generating revenue and/or reducing costs, quickly and cheaply, while identifying, mitigating, and/or controlling business risk — not protecting the end-user from that annoying bug.”

See also Software test is always changing