A typical acceptance criterion is that a specific percentage of test cases pass and/or no more than a specific number of critical defects remain unresolved. Yet these expressions have to be recognized as the stakeholder's way of expressing the expected level of confidence. In other words: "What are you OK with?" How can we help you gain confidence in the solution? What trends should we look for?
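To make that kind of criterion concrete, here is a minimal sketch of a threshold-based Go/NoGo gate. The function name and the thresholds are illustrative assumptions, not anything from a real process, and the sketch deliberately shows how mechanical such a gate is:

```python
def acceptance_gate(passed, total, critical_open,
                    min_pass_rate=0.95, max_critical=0):
    """Return "Go" if the run meets the (assumed) thresholds, else "NoGo".

    The 95% pass rate and zero-critical-defect limits are made-up
    examples of the kind of numbers stakeholders write down.
    """
    pass_rate = passed / total if total else 0.0
    if pass_rate >= min_pass_rate and critical_open <= max_critical:
        return "Go"
    return "NoGo"

print(acceptance_gate(passed=190, total=200, critical_open=0))  # Go
print(acceptance_gate(passed=198, total=200, critical_open=2))  # NoGo
```

The gate answers only "did the numbers clear the bar?" — it says nothing about whether the stakeholder actually has confidence in the solution, which is the point of the sections that follow.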
I have experienced situations where the solution was delivered even though the criteria were not met; the so-called Go-NoGo meeting usually turned into a Go-Go, and was jokingly called so. I have also seen the business defer a fix for a critical defect in production. And I have seen the opposite: a planned release fulfilled the elaborate acceptance criteria, yet the delivery was cancelled. It was a major enterprise release that was technically to spec, but because of other business risks it was postponed.
As a tester you may experience this when all the automated Factory Acceptance Tests (FAT) pass, but the system still fails to deliver. I had to text "FAT failed" to someone recently, but it auto-corrected to "FART failed". Indeed, if all the requirements pass but the system fails to deliver, it is a fart.
The challenge is that not everything that can be measured counts, and not everything that counts can be measured.
And that's not even considering that "100% of test cases passed" is a metric that makes no sense on its own. Quality is not only the sum of specific attributes, as ISO/IEEE standards may lead you to measure, but a relationship: "something that matters – to someone who matters – at some time". If you only look at the measurable, you miss half of the story.
The business needs
- fit for purpose (requirements) and fit for use (business context) [ITIL v3]
- to solve a business problem – if the problem isn't solved, the product doesn't work (even if all the tests are green) [Ben Simo]
- information from testing to aid in making business decisions