During a recent project I was considering the ratio between the checks and the tests – that is, the ratio between those tests that are primarily simple binary confirmations and those tests that are more open, tacit questions. This blog post is about my considerations on the idea/experiment/model.
First I observed that we have a range of different items in our requirements – some of them are [actual copies from the current specification]:
- It must be possible to add a customer ticket reference
- It must be possible to copy the ticket number
- You must be able to navigate displayed lists easily
- It must be easy to compare work log and audit log
You could argue that they need refinement, more testability and less “easy“. But this is what we have to work with for now. Even if we had all the time in the world (we don’t), we would not be able to write all of the requirements in a perfect form (if such a form even exists).
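As a toy illustration of the labelling idea, here is a minimal sketch that sorts requirements into closed (binary, machine-checkable) and open (tacit, needing human judgement) piles. The wording heuristic and the list of vague words are my own assumptions for the example – in practice the labelling happens in the requirement workshops, not by keyword matching:

```python
# Hypothetical heuristic: requirements phrased with vague words such as
# "easy" or "easily" tend to be open questions; the rest are treated as
# closed, binary confirmations. Requirement texts are from the post.
VAGUE_WORDS = {"easy", "easily", "fast", "intuitive", "user-friendly"}

def label_requirement(text: str) -> str:
    """Return 'open' if the requirement uses vague wording, else 'closed'."""
    words = {w.strip(".,").lower() for w in text.split()}
    return "open" if words & VAGUE_WORDS else "closed"

requirements = [
    "It must be possible to add a customer ticket reference",
    "It must be possible to copy the ticket number",
    "You must be able to navigate displayed lists easily",
    "It must be easy to compare work log and audit log",
]

for req in requirements:
    print(f"{label_requirement(req):6} | {req}")
```

On the four sample requirements above, the first two come out as closed questions and the last two as open ones – which matches the intuition that “easy” and “easily” resist a simple pass/fail verdict.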
As the system under test is a commercial standard system, some of the requirements are even given as “out of the box”, and we will probably not test those explicitly. Our coverage criterion is not ALL OF THEM.
Ordering the tests
It is a deliberate experiment on my side to divide the requirements (and hence the tests) into the piles of closed and open questions. Perhaps there are even three piles – Rapid Software Testing has human checking, machine checking and human/machine checking; Wardley has Pioneers, Settlers and Town Planners. Perhaps the Rule of Three applies here too. Perhaps it’s a continuum. Let’s see.
As part of the requirement workshops I will label the requirements and align with the stakeholders to get the expectations right – with the help of a few friends. This is a context/project-based “operationalization“.
I wrote about this ratio in my blog post on the Test Automation Pyramid, as I will use the labels to automate the confirmations (and only the confirmations). The assumption is that significantly more of the requirements are binary and tested by machine checking, while the tacit questions are tested by humans. Getting all the tedious tasks automated – that is really the end goal.
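Once the requirements carry labels, the ratio itself is a simple tally. The sketch below assumes a flat list of labelled requirements; the entries and counts are illustrative, not actual project data:

```python
# Compute the check/test ratio from labelled requirements.
# The requirement list below is illustrative (the last entry is invented
# for the example), not data from the actual project.
from collections import Counter

labelled = [
    ("add a customer ticket reference", "closed"),
    ("copy the ticket number", "closed"),
    ("navigate displayed lists easily", "open"),
    ("compare work log and audit log", "open"),
    ("export tickets to a file", "closed"),  # hypothetical requirement
]

counts = Counter(label for _, label in labelled)
total = sum(counts.values())
for label in ("closed", "open"):
    n = counts[label]
    print(f"{label}: {n}/{total} = {n / total:.0%}")
```

With this made-up sample the split lands at 60% closed / 40% open; the point is only that the ratio falls out of the labelling for free, whatever the real numbers turn out to be.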
“Automate all the things that should be automated” – Alan Page
Every project/context will have its own ratio, depending on a range of factors. Saying there should always be more of one type than the other would not hold. As the project above is the configuration and implementation of a standard commercial business software package (like SAP, Salesforce etc.), my expectation is that most of the requirements are binary – especially considering that this project sits heavily on the right-hand side of the Wardley Map scale of evolution.
It’s a Reduction in Context
I am well aware that the two/three piles are an approximation/reduction, especially when looking at the “binary” requirements and “only” testing these by confirmation. They could just as easily be explored to find further unknown unknowns, if we prioritize doing so – it is all about our choice of risk.
It is also a limitation, as “perfect testing” should consist of both testing and checking. I factor this into the test strategy by insisting that all of the requirements are tested both explicitly and implicitly. Most of our binary requirements concern the configuration and customization of the out-of-the-box software solution, so when the subject matter experts are testing the business flows, they are also evaluating the configuration and customization – and I do want them to spot when something is odd.
Ultimately I want the experts to do the thinking and the machines to do both the confirmations and the tedious tasks.