In classic test techniques and approaches, the test activity is a scarce resource. Due to time and money constraints, risk-based prioritization was always required to make ends meet. We now have the tools and approaches for a better return on investment on the testing activity, and it’s all about running more tests, more frequently and sooner.
You never have time to test everything. So in the context of classic test techniques and testing types (I’m looking at you, old fart) you had to prioritize and combine tests to do more with what you had.
- “MoSCoW priorities” on test cases (Must-Should-Could-Would). Yet when management capped must-have cases at 20%, the business simply requested more tests until their must-haves fit within the cap.
- Pairwise combinatorics and equivalence classes might reduce the number of tests, but always seemed to focus on low-level input fields and formats, never on business outcomes.
- Discussions about whether a test case was a regression test, an integration test or something else. Sometimes regression tests mattered more than new functionality. Sometimes SIT and UAT seemed to be the very same thing, so why test twice? What the business really wanted was not a window dressing of words, but results, no matter the name.
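To make the pairwise point above concrete, here is a minimal sketch of greedy pairwise selection. The parameter names and values are hypothetical, and the greedy algorithm is just one common way to build a pairwise suite; the point is only that it covers every pair of values with far fewer runs than the full cartesian product.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy sketch of pairwise (2-wise) test selection.

    params: dict of parameter name -> list of values.
    Returns a list of test cases (dicts) that together cover every
    pair of values across any two parameters at least once.
    """
    names = list(params)
    # Every value pair that must be covered at least once.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    all_rows = [dict(zip(names, vals)) for vals in product(*params.values())]
    suite = []
    while uncovered:
        # Pick the candidate row that covers the most still-uncovered pairs.
        best = max(all_rows, key=lambda r: sum(
            ((a, r[a]), (b, r[b])) in uncovered
            for a, b in combinations(names, 2)))
        suite.append(best)
        uncovered -= {((a, best[a]), (b, best[b]))
                      for a, b in combinations(names, 2)}
    return suite

# Hypothetical variation axes for one business scenario.
params = {
    "browser": ["Chrome", "Firefox", "Edge"],
    "user": ["admin", "customer", "guest"],
    "locale": ["da-DK", "en-US", "de-DE"],
}
suite = pairwise_suite(params)
print(len(suite), "tests instead of", 3 * 3 * 3)
```

Note that the reduction says nothing about which combinations matter to the business: that judgment still has to come from somewhere else.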
Counting tests is like…
An analogy to testing everything is counting all possible decimal numbers. There is always one more decimal position around the corner. Around each number I can pick any number of new numbers, just as with tests I can pick any number of new variations of a test (browser, time, user, preconditions…). It’s hard to count things that spill into one another: two tests can cover much the same ground and still be two different things in the end.
The classic techniques above are filtering techniques: first they reduce the infinite space of possible tests into something distinct (a “test case”), where every test is separated from the others and thus countable – a “rock” in Aaron’s analogy. Secondly they filter it into something finite, so that the work can be completed and we can answer “when are we done testing?”.
Old Cost of Software Testing
The above filtering happens under the assumption that every value/test counted carries a high price – that every test is costly to prepare and run. The average cost to write a formal test case could easily be 3 hours, plus 1 hour for a tester to run it – and then perhaps with a 50% rerun/defect rate. With 100 test cases, a simple cost model would then be at least 450 hours (300 hours of preparation, 100 hours of first runs, 50 hours of reruns), or 4.5 hours per test including the 50% rerun.
No wonder management wants to reduce this and race the cost down to the lowest denominator. Also consider that this only covers – at best – all the tests once, and half the tests twice. Is that a sufficient safety net to go live on?
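The classic cost model above can be written out as a few lines of arithmetic. The function name and parameters are my own shorthand for the numbers in the text (3 hours preparation, 1 hour per run, 50% rerun rate):

```python
def classic_cost(n_tests, prep_h=3.0, run_h=1.0, rerun_rate=0.5):
    """Total effort for a fully manual suite:
    write every case, run every case once, rerun a share of them."""
    prep = n_tests * prep_h              # writing the formal test cases
    first_runs = n_tests * run_h         # every test executed once
    reruns = n_tests * rerun_rate * run_h  # half of them executed again
    return prep + first_runs + reruns

total = classic_cost(100)
print(total, total / 100)  # 450.0 hours in total, 4.5 hours per test
```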
A New View on ROI
Current tools and test approaches turn this around and focus on making testing faster, more frequent and cheaper (per test). The practices are so prevalent and well described that they should already be considered general development best practice (G x P). Consider:
- Read the Accelerate Book
- Many Bits under the Bridge
- Characteristics Change
- Could Modern Testing work in the enterprise
Now every project will have its own ratio of automation, but for this simple model let’s assume 75% of the tests can be automated/tool-supported to such a degree that running them is approximately costless – i.e. they run as part of continuous testing in a CI/CD pipeline with no hands or eyeballs on them.
Preparing tests still takes time; let’s assume the same 3 hours. The 25 tests with no automation still need 112.5 hours, while the 75 automated tests – since running them is free – only account for their 225 hours of preparation. Even this simple model reduces the cost of testing by 25% (from 450 to 337.5 hours), still including rerunning 50% of the tests once.
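The partially automated model can be sketched the same way. Again the names are mine; the numbers mirror the text (75% automated, 3 hours preparation, 1 hour per manual run, 50% rerun rate on the manual tests):

```python
def mixed_cost(n_tests, automated_share=0.75,
               prep_h=3.0, run_h=1.0, rerun_rate=0.5):
    """Split the suite into manual and automated parts and cost each."""
    manual = n_tests * (1 - automated_share)
    automated = n_tests * automated_share
    # Manual tests: prepare, run once, rerun a share of them.
    manual_cost = manual * (prep_h + run_h + rerun_rate * run_h)
    # Automated tests: preparation only; running is ~free in the pipeline.
    automated_cost = automated * prep_h
    return manual_cost, automated_cost

m, a = mixed_cost(100)
print(m, a, m + a)  # 112.5 225.0 337.5
```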
The modern approach is to make tests plentiful and commoditized, rather than scarce (see also the Theory of Constraints with regard to bottlenecks). With the added benefits of CI/CD and a whole-team approach to quality, the research behind Accelerate confirms the correlation with clear business benefits.
Since running the automated tests is cheap, we can run them “on demand”. Let’s run 25% daily – is this a regression test? Perhaps; it doesn’t really matter. Assuming we run 25% of the tests at random each day for two weeks, aka 250 test runs, we have increased both the count of test runs and the number of times each test has run. With this approach, the 225 hours of test preparation above are now utilized across 250 runs… or under 1 hour of cost per run.
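The amortization above is simple enough to check in a few lines; the figures are taken directly from the scenario in the text (225 preparation hours, 25% of a 100-test suite per day, ten working days in two weeks):

```python
prep_hours = 225.0    # preparation effort for the 75 automated tests
runs_per_day = 25     # 25% of the suite, drawn at random each day
working_days = 10     # two weeks

total_runs = runs_per_day * working_days
cost_per_run = prep_hours / total_runs
print(total_runs, cost_per_run)  # 250 runs at 0.9 hours per run
```

The more often the suite runs, the further the fixed preparation cost is spread out – which is exactly the point of the next paragraph.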
The whole idea is to (re)imagine the testing effort as fast, repeated sampling among all possible values, done many times over. The more the tests are run, the better – and the better the ROI for testing… and, if you dare, the better the performance of the organization.