Similar to scope creep, we may also experience “test creep”. Test creep is when the tester adds more tests than are in scope. Just as more business functionality is added during scope creep, more testing is added during test creep. Neither is necessarily bad, but in time-boxed or otherwise budget-constrained projects, creeping does not necessarily add value. This is probably easiest to understand in an agile project focusing on a minimal viable product, but it may happen in other contexts too.
It is test creep when the tester feels an obligation to drill further into browser and OS configurations than the scope calls for. It is creeping the scope of testing when the tester feels a need to “write test cases for this first” even though exploratory sessions fit the mission. Consider test creep a form of gold plating, in that it tries to refine and perfect the product when good enough is the perfect fit. A small sketch of what that configuration drill-down can look like follows below.
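As a purely hypothetical illustration (the configurations, test name and choice of pytest are my own, not taken from the discussion above), the agreed scope might be two supported browser/OS combinations, while the “creep” is every extra combination the tester adds just to be sure:

```python
import pytest

# Configurations agreed with the business -- the mission's scope.
IN_SCOPE = [
    ("chrome", "windows"),
    ("safari", "macos"),
]

# Extra combinations a tester might add "just to be thorough".
# Each one costs execution and maintenance time without anyone
# having asked for the information it produces -- test creep.
CREEP = [
    ("firefox", "windows"),
    ("firefox", "linux"),
    ("edge", "windows"),
    ("chrome", "linux"),
]

# Swap IN_SCOPE for IN_SCOPE + CREEP and the matrix triples in size.
@pytest.mark.parametrize("browser,os_name", IN_SCOPE)
def test_login_page_renders(browser, os_name):
    # Placeholder for a real check against the given configuration.
    assert browser and os_name
```

The point is not that the extra combinations are worthless, but that each one expands the matrix beyond what the mission asked for.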
Test creep can happen intentionally, at the request of management or the product owner. It may also happen unintentionally, usually with the best intentions – because more testing is always better testing, right? (It depends.) Sometimes yes, we as testers are to blame for adding more scenarios, rigor and detail, because a testing mindset drives us to investigate the product.
In discussing this with Mohinder and Darren, we found that it is not only a matter of removing wait time for testers. That may add more time to test, but the creep in testing scope may happen nonetheless. A Lean mindset with focus on what adds value to the business, and a discussion of the minimum viable testing, will help the project avoid test creep.
[…] When good enough is the perfect fit […]
I generally do not see more tests as a bad thing. The difference is between the number of test cases and the number of test executions. Test creep for me would be testing things that are bound to change in the short term, before the next release; in that case the test results are meaningless. Likewise, testing a new or changed feature before all known bugs are fixed, because fixing bugs has the potential to introduce new bugs.
While I see the value in “minimal viable testing”, it is handicapped by the same issues as “minimal viable product”. Who is to say what is minimally viable? Especially when considering that most software shops do not end up using their own product. The information can only come from customers, and customers will only be able to tell you whether the product is minimally viable if they get to see and use it.
My suggestion is to test early (as in deeply inspect the hopefully verbose and comprehensive requirements), test often (but only when the results will be of value), and test a lot. I would rather test more than necessary than not enough.
[…] This might be similar to a test architect or an (internal) test consultant activity. It has nothing to do with diminishing testing. Rather, I see it as more testing happening, something that would not have been done without the coaching from a test manager. It’s all about finding a test approach that is fit for the context. […]