January 24, 2015
#TestingPlanet, closure, coverage, decisions, knowledge, mastering, skills, test_strategy
- Using Business Decisions to Drive Your Testing Coverage [Sticky Minds by TechWell, November 2014] In a business setting, software testers face a great challenge: articulating how they support the business lines. One way to approach this is by addressing the business decisions – and there are plenty around. Use them to drive your testing activities and increase the number of business decisions covered by testing. http://www.stickyminds.com/article/using-business-decisions-drive-your-testing-coverage
- The answer is: Why – because the answer depends on context. [The Testing Circus, vol. 6, 2nd ed., February 2015]: When asked about testing approaches, the options are so plentiful that the reply is often “It depends” – followed by a range of elaborations. But in our eagerness to reply, we forget to listen. http://www.testingcircus.com/february-2015/
- About Closure [The Testing Planet by Ministry of Testing, Nov 2014] When I’m in a testing activity I want my test cases [Passed], my user stories [done] and my coffee [black]. Things may have a starting point, some states in between and an end state. Let’s look at ways to represent states and articulate the meaning of those states. http://www.ministryoftesting.com/2014/11/closure/
7 articles for The Testing Planet since 2011: http://www.ministryoftesting.com/tag/jesper-lindholt-ottosen/
January 1, 2015
Family and fun, LEGO
brain, change, collaboration, communication, geeks, knowledge, learning, LEGO_friends, STEM, WomenInTech
Who had the family’s largest LEGO set this Christmas? Not the boys (age 8–10), nor the “boys” (age 40 and up) – it wasn’t me* – but the 11-year-old girl and her 8-wheel 42008 Service Truck: 1276 pieces, power functions, pneumatics, gears and 44 cm of forcefulness. There was no boy band merchandise, no glitter or similar gender framing. Quite a project – as is the story about the “Research Institute” mini-figure set.
December 30, 2014
asperger, brain, exploratory, geeks, test, wicked problem
Sometimes testing is like being Sherlock Holmes – You find your clues hidden in plain sight: Where the users scratch their nails; how the application user interface is cobbled together; odd patterns in the error logs….
But seldom without experimenting, seldom without pushing the subject under test or consulting the weather report, time tables – and getting out in the rain, doing some footwork.
He always seems to know better, always asking questions. He is so passionate about his problem-solving skills that his default manner seems arrogant (but that is usually not on purpose).
This is very clear in the recent BBC TV series “Sherlock”, which clearly illustrates and mentions his Asperger’s. Almost on par with The Curious Incident of the Dog in the Night-Time. Still, when he is out solving mysteries he is a hero – if there ever was one.
1: If your standard is to never be called arrogant, you’ve probably walked away from your calling. http://sethgodin.typepad.com/seths_blog/2014/12/in-search-of-arrogance.html
2: Don’t make people into heroes, John. Heroes don’t exist, and if they did, I wouldn’t be one of them. http://en.wikiquote.org/wiki/Sherlock_(TV_series)
Related: The yardstick of mythical normality, People are people – despite their labels
November 14, 2014
Enterprise IT, Softwaretesting
#TestingPlanet, atWork, cartoontester, community, CopenhangenContext, exploratory, mindmap, rapidsoftwaretesting, softwaretestingclub, testingclub, unknown_unknown
QA Network Aarhus is a local non-affiliated network of testers (and good friends) in Aarhus, where I had the great pleasure of talking about Exploratory Testing. This is the link collection; the slides are attached.
October 5, 2014
Ten years ago I was in software testing, and we looked into test process improvement by the guidelines of the Test Maturity Model (of 1998). Sometimes we tried to add more process and structure, insisting that unit testing was documented prior to integration test start, or that the lead developers participated in the reviews of the test strategy (gosh). Often it happened that our quick wins were not in line with the “maturity” of the rest of the delivery organisation.
Here is the picture we sometimes drew to understand it: imagine a wheel that has to be round to deliver systems to the business, where each group is a cog. The best thing is for the cogs to be of equal length. If the testing cog is too long / too “mature”, the wheel doesn’t roll around but bumps into testing. If a cog is too short, the wheel may roll, but the next cog catches the drift of the previous one.
Sometimes it was true that testing picked up where the application teams had failed to integrate and communicate. We could spend weeks snow-plowing, with one integration defect stopping all progress, only to find yet another bridge to cross in the enterprise delivery landscape.
Sometimes management added metrics – like the number of test cases to pass per day (linear progress) – or a real couch for the project manager to sit on, until we found yet another problem to have resolved among the many teams. But pouring more process descriptions, more contract interpretations or more rigor into the testing never fixed the real problem – that the cogs of the delivery wheel needed to grow across teams – not driven by management’s need to control testing (because testing picked up the tab in the end).
ISO 29119 may be a vocabulary and a guideline, and at best a body of knowledge. But the thing is, if organisations try to fix their delivery process by adding more rigor, ceremony and borrowed vocabulary, the testing activities will yet again be the odd cog in the wheel. I don’t see any ISO standard for management, for leadership, for a standard development methodology, or an ISO standard for how to solve unknown problems. And testing is about finding the unknown and informing the stakeholders.
Learn the context – and provide information to the business within that frame.
October 1, 2014
agile, cost, exploratory, mvp, value
Similar to scope creep, we may also experience “test creep”. Test creep is when the tester adds more tests than what is in scope. Just as more business functionality is added during scope creep, more testing is added during test creep. Neither is necessarily bad, but in time-boxed or similarly budget-constrained projects, creeping isn’t necessarily value-adding. This is probably easier to understand in an agile project focusing on the minimal viable product, but it may happen in other contexts too.
It is test creep when the tester feels an obligation to run an extra drill-down into browser and OS configurations, when the scope is less broad. It is creeping the scope of testing if the tester feels a need to “write test cases for this first” when exploratory sessions fit the mission. Consider test creep like gold plating, in that it tries to refine and perfect the product – when good enough is the perfect fit.
Test creep can happen intentionally, by management or product owner request. It may happen unintentionally, and usually it is with the best intentions – as more testing is always better testing, right? (But It Depends.) Sometimes yes, we as testers are to blame for adding more scenarios, rigor and detail, because a testing mindset drives us to investigate the product.
In discussing this with Mohinder and Darren, we found that it’s not only a matter of removing wait time for testers. That may add more time to test, but the scope creep in testing may happen nonetheless. A Lean mindset with a focus on what adds value to the business, and a discussion of the minimum viable testing, will help the project avoid test creep.
September 25, 2014
Enterprise IT, Softwaretesting
business, change, itil, knowledge, standards, value
What I know of ISO 29119 is that it specifies specific enumerated techniques, documents and document content. I know this from their website, where I can read that it will cost me $1000 to buy “the book” (club discounts available) – the body of knowledge.
It’s a collective work written by a number of people in the industry, and it has been years in the making. Some of the people work in consulting and provide training in the framework; some of the companies sponsoring the work provide consulting in implementation of the framework. Companies can have an audit for a certificate too. That will require a large investment, as the organisation has to conform (normatively) to plenty of “shall” statements.
But besides that, it’s a closed book (and it’s not even on Amazon). To me, 29119 was misguided from the beginning: it should be a book – a commercial body of knowledge, like TMAP or ITIL. Something that you could buy into or not – not something labelled in any respect as an international standard.
- It seems it requires either a range of documents and lot of tailoring
- It seems somewhat “dated” in addressing the ways of testing that have emerged in recent years
- It seems to claim that it has consensus in the industry
- It seems that some people have tried to participate, but failed
- It seems that some people did not want to participate on principle, even if invited
- It seems to claim that it is a silver bullet, a one-size-fits-all
I cannot evaluate the implications for my customers asking about compliance without elaboration – on the details of 29119, and on the customer’s objectives. What is the business driver for complying with said framework? Which is actually what I was looking for – what helps the customer’s business do business?
I doubt that someone else’s delivery framework can provide you with the DNA, the unique value proposition, of the specific context that is needed – for you! #ImLookingAtYou. If we blindly comply with the framework, what is the driver besides cost and commodity? If the driver is something else, then start right there. Start with how testing and artefacts implement the strategy, values and decisions that you have. Start with “innovative”, “quality of life”, “coherent” – how does that relate to your testing?