A Ratio between Tests

During one of my recent projects I was considering the ratio between the checks and the tests – that is, the ratio between those tests that are primarily simple binary confirmations and those tests that are more tacit questions. This blog post is about my considerations on the idea/experiment/model.

First I observed that we have a range of different items in our requirements – some of them are [an actual copy of the current specification]:

Binary Confirmations

  • It must be possible to add a customer ticket reference
  • It must be possible to copy the ticket number

Tacit Questions

  • You must be able to navigate displayed lists easily
  • It must be easy to compare work log and audit log

You could argue that they need refinement, more testability and less “easy”. But this is what we have to work with for now. Even if we had all the time in the world (we don’t), we would not be able to write all of the requirements in a perfect form (if such a form exists).

As the system under test is a commercial standard system, some of the requirements are even given as “Out of the Box”; we will probably not even be testing those explicitly. Our coverage criterion is not ALL OF THEM.

Ordering the tests

It is a deliberate experiment on my side to divide the requirements (and hence the tests) into the piles of Closed and Open Questions. Perhaps there are even three piles – Rapid Software Testing has human checking, machine checking and human/machine checking; Wardley has Pioneers, Settlers and Town Planners. Perhaps the Rule of Three applies here too… perhaps it’s a continuum… let’s see.

Perhaps it’s a continuum

As part of the requirement workshops I will label the requirements and align with the stakeholders to get the expectations right – with the help of a few friends. This is a context/project-based “operationalization”.

I wrote about this ratio in my blog post around the Test Automation Pyramid, as I will use the labels to automate the confirmations (and only the confirmations). The assumption is that there are significantly more of the binary requirements, tested by machine checking – and more tacit questions, tested by humans. If we can get all the tedious tasks automated – that is really the end goal.
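The labelling idea above could be sketched in code. This is a minimal, hypothetical illustration – the `Requirement` structure and the label names are my own assumptions, not the project’s actual data model – showing how labelled requirements can be partitioned into an automation backlog (confirmations) and a human-testing backlog (tacit questions), and how the ratio falls out of that:

```python
# Hypothetical sketch: partition labelled requirements into binary
# confirmations (candidates for machine checking) and tacit questions
# (left to human testing). Labels and structure are assumptions.
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    label: str  # "confirmation" or "tacit"

requirements = [
    Requirement("It must be possible to add a customer ticket reference", "confirmation"),
    Requirement("It must be possible to copy the ticket number", "confirmation"),
    Requirement("You must be able to navigate displayed lists easily", "tacit"),
    Requirement("It must be easy to compare work log and audit log", "tacit"),
]

# Only the confirmations go into the automation backlog.
confirmations = [r for r in requirements if r.label == "confirmation"]
tacit = [r for r in requirements if r.label == "tacit"]

print(f"machine checks: {len(confirmations)}, human tests: {len(tacit)}")
```

With the real requirement set, the two list lengths would give the project-specific ratio discussed above.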

Automate all the things that should be automated

Alan Page

Every project/context will have its own ratio, depending on a range of factors. Saying there should always be more of one type than the other would not hold. As the above project is the configuration and implementation of a standard commercial business software package (like SAP, Salesforce etc.), my expectation is that most of the requirements are binary – also considering that this project is heavy on the right-hand side of the Wardley Map scale of evolution.

It’s a Reduction in Context

I am well aware that the two/three piles are an approximation/reduction, especially when looking at the “binary” requirements and “only” testing these by confirmation. They could as easily be explored to find further unknown unknowns, if we prioritize doing so – it is all about our choice of risk.

It is also a limitation, as “perfect testing” should consist of both testing and checking. I factor this into the test strategy by insisting that all of the requirements are tested both explicitly and implicitly. First of all, most of our binary requirements are about the configuration and customization of the out-of-the-box software solution. So when the subject matter experts are performing the testing of the business flows, they are also evaluating the configuration and customization. And I do want them to spot when something is odd:

The binary configuration is ok, but human know-how tells us otherwise.

Ultimately I want to use the experts to do the thinking and the machines to do both the confirmations and the tedious tasks.


Trending: Shift-Left

TL;DR: Shift-Left is about testing early and automating. Shift technical with this trend, or facilitate that the testing happens.

Shift-Left is the label we apply when testing moves closer to development and is integrated into the development activities. The concept is many IT years old, and there are already some excellent articles out there: What the Shift Left in Testing Means (SmartBear, no date), “Shift left” has become “drop right” (TestPlant, 2014), Shift Left QA. How to do it. Why it matters (Worksoft, 2015).

To me Shift-Left is still an active trend, and it changes how we do testing. It goes along with Shift-Right, Shift-Coach and Shift-Deliver, discussed separately. I discussed these trend labels at Nordic Testing Days 2016 during the talk “How to Test in IT Operations”.

Here are some contexts where Shift-Left happens:

  • Google has “Software Engineer in Test” as a job title, according to the book “How We Test Software at Google”.
  • Microsoft has the similar “Software Design Engineer in Test”, as discussed by Alan Page in “The SDET Pendulum” and in the e-book “A-word”.
  • A project I was in, regarding pharmaceutical Track and Trace, had no testers. I didn’t even test, but did compliance documentation of the test activities. The developers tested – first via peer review, then via peer execution of story tests, and then validation activities. No testers, just the same team – for various reasons.
  • A project I was in regarding a website and API for trading property information had no testers, but had continuous build and deploy with more user-oriented test cases than I could ever grab. (See: Fell in the trap of total coverage.)

The general approach to Shift-Left is that “checking” moves earlier in the cycle in the form of automation. More BDD, more TDD, more automated tests, continuous builds, frequent feedback and green bars – much of it based on the “Test automation pyramid” (blog discussion, Whiteboard Testing video). Discussing the pyramid model reveals that testing and checking go together at the lower levels too. I’m certain that (exploratory) testing happens among technicians and service-level developers – usually not explicitly, but still.
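A base-of-the-pyramid check of the kind this paragraph describes could look like the sketch below. The `Ticket` class is a hypothetical stand-in for the system under test, and the requirement it confirms (“it must be possible to copy the ticket number”) is one of the binary requirements quoted earlier – a fast, deterministic check that a continuous build can run on every commit:

```python
# Minimal sketch of an automated unit-level check (pytest-style),
# confirming one binary requirement. Ticket is a hypothetical stand-in.

class Ticket:
    def __init__(self, number: str):
        self.number = number

    def copy_number(self) -> str:
        # Returns the ticket number, e.g. for a clipboard integration.
        return self.number

def test_ticket_number_can_be_copied():
    ticket = Ticket("INC-00042")
    assert ticket.copy_number() == "INC-00042"

# A test runner would discover the function; called directly here.
test_ticket_number_can_be_copied()
```

The point is not this particular check but the feedback loop: green bars on every build, so human attention stays free for the tacit questions.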

To have “no QA” is not easy. Not easy on the testers, because they need to shift and become more SET/SDET-like, or shift to something else (Shift-Right, Shift-Coach and Shift-Deliver). Neither is it easy on the team, as the team has to own the quality activities – as discussed in “So we’re going “No QA’s”. How do we get the devs to do enough testing?”

Testers and test managers cannot complain when testing and checking are performed in new ways. When tool-supported testing takes over the boring, less-complex checks, we can either own these checks or move to facilitate that these checks are in place. Similarly when the (exploratory) brain-based testing of the complex and unknown is being handed over to some other person. Come to think of it, I always prefer testing done by the subject matter experts in the project, be it users, clients, testers or other specialists.

We need to shift to adapt to new contexts and new ways of aiding in delivering working solutions to our clients.


Black or white – it is the same box


  • Exploration, Finding, Testing, Setting up hypothesis, Manual, Bespoke, Check lists, What if
  • Steps, Confirmation, Checks, Proofs, Automated, Routine, Test cases, Does it matter

Does it indeed matter …

The “box” is a solution – if the box doesn’t fit the problem, it doesn’t work (period).

See also:

  • Testing and checking
  • What if – and Does it Matter
  • The small changes in the big scripts
  • Routinized and bespoke activities
  • left and right side of the brain

What if – and Does it Matter

Software testing has two core questions:

Very related to Testing AND Checking

  • Checking is something that we do with the motivation of confirming existing beliefs. 
  • Testing is a process of exploration, discovery, investigation, and learning.

Testing AND Checking

A lot of bits have traveled under the Internet bridge since Michael Bolton’s Testing vs Checking in 2009:

  • Checking is something that we do with the motivation of confirming existing beliefs. Checking is a process of confirmation, verification, and validation.
  • Testing is something that we do with the motivation of finding new information. Testing is a process of exploration, discovery, investigation, and learning. 

Revisiting it again and again, it dawns on me (1)… It’s not “versus” – it’s not “either or” – it’s not one over the other – it’s not “merely” either – it’s about testing and checking. Both at the same time – yet using different words helps us find a better understanding of what we do. The Rapid Software Testing course material, pages 59-67, illustrates this in detail (3):

Rapid Software Testing - Task Performing

We can look into scripts/checking as being task-performing “arrows”: a hypothesis is established and sought to be confirmed. Value seeking happens all through the checking process – you think and react based on the evidence found. (2)

Rapid Software Testing - Value seeking - used by permission

We can look into exploration/testing as being value-seeking “cycles”: an exploration acts on its own – but still consists of smaller sequential tasks being performed. (2)

You apply both the left and right side of your brain – you check and test – you do tasks and seek value – you apply routinized and bespoke activities. You can use the distinction to guide you to a context-driven testing approach. Read also Exploring Uncertainty for a good discussion on how Checking is Not Evil and how to illustrate what is Not Checking.

In 2013 Michael Bolton and James Bach refined the definition of checking:

TESTING AND CHECKING REFINED | March 26th, 2013

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

From that, we have identified three kinds of checking:

Human checking is an attempted checking process wherein humans collect the observations and apply the rules without the mediation of tools.

Machine checking is a checking process wherein tools collect the observations and apply the rules without the mediation of humans.

Human/machine checking is an attempted checking process wherein both humans and tools interact to collect the observations and apply the rules.
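The machine-checking definition above can be made concrete with a small sketch. This is my own illustration, not from the refined definitions: the `observe()` stub stands in for a tool collecting an observation of the product (say, an HTTP status code), and `decision_rule()` is the algorithmic decision rule applied to it, with no human mediating either step:

```python
# Hypothetical illustration of machine checking: a tool collects an
# observation and applies an algorithmic decision rule, without humans.

def observe() -> int:
    # In a real machine check, a tool would collect this observation
    # from the product, e.g. by calling an API. Stubbed here.
    return 200

def decision_rule(status: int) -> bool:
    # The algorithmic rule: pass iff the status signals success.
    return 200 <= status < 300

result = decision_rule(observe())
print("PASS" if result else "FAIL")
```

Human checking would have a person doing both steps unaided; human/machine checking would split them, e.g. a tool collecting the observation and a person applying the rule.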

_ _ _

1 with guidance from both James B & Michael B. Thanks!
2 with specific permission from James Bach.
3 also referenced in So Everything Must Be Tested?

Perfects in testing

…and how all the letters in UAT are gone.

One of the evenings of EuroStar 2010, a conference within the conference took place. This night, which went by many names (“Rebel Alliance”, “Oprørsalliancen”, “Danish Alliance”), was an informal meeting of friends. As was done at StarEast, it was “a mini conference after the conference, with beer”. There we had very special people and some famous names from the software testing world, and we spent the night talking testing, debating testing, listening to lightning talks on testing, playing games and debating some more. For me it was the best conference party ever.

“Jesper explains how their team uses visual artifacts to keep morale up, and how these relates to looking for “Perfects”.”

from http://testing.gershon.info/201012/eurostar2010-rebel-alliance-night/
