November 8, 2015
Enterprise IT, Softwaretesting
atWork, collaboration, communication
If all you bring to the meeting is what you have prepared in advance – what new insights do you gain by meeting?
I was once at a company whose mottos might as well have been “always prepared”, “semper fi” and “errors are not an option”. Every meeting had a check sheet, a resume form and a registration of preparation time. Because if it wasn’t measured, it didn’t count. If it wasn’t in the play book, it didn’t count either.
So one day we sat at a meeting reviewing a development story – the developer, an architect, me and the other roles required by the check sheet. Some had prepared written comments: everything from simple rephrasings to code changes and test case headlines. Some had not prepared a tangible thing – I had not.
So the interesting question came up: why meet at all, if the only purpose of the meeting was to go through the written changes? These could be sent directly to the developer or discussed bilaterally, rather than heard by all five. The answer to this was not in the play book, and the meeting was puzzled.
I had prepared nothing in advance, yet I had prepared myself in advance to listen, think and wonder during the meeting. To me the purpose of meeting* was the joint collaboration of the participants.
The sum of the whole. That 1+1+1+1 can be five.
*: meetings can have other purposes. Sometimes I hold orientation/elaboration meetings, when things cannot be elaborated sufficiently digitally.
November 4, 2015
Family and fun, Softwaretesting
I can be obnoxious, arrogant, a mister know-it-all and a devil’s advocate – someone who seems to only want it done my way, “the right way”. Like Skipper in this clip from the Penguins of Madagascar series battling Gus*: they both want it done the right way:
Sometimes this is just the way I am – sometimes I even do it on purpose. Usually it’s my way of challenging what is being discussed – let me rephrase: I am testing, I am wondering, I come with counter-examples, I articulate that testing is a business choice – that there are always more angles to it.
My personally preferred way is to test and think based on values – both explicit and intrinsic. I know testers that are way more process-driven than I am, and more detail-oriented too. That’s what makes the testing field great – we are a diverse bunch. But hey, that’s okay. We all have our own personal styles, and all styles are needed.
When I’m testing and challenging, I’m prepared to say something wrong. My suggestion may get rejected – that is part of the game. I probably know it’s not the 100% correct question in context. If you see only a challenger, you see only half of a tester’s competence. And you are reading her wrong.
I am ambitious about getting the job done and devoted to advancing and developing the testing craft of the company, myself and the testing community. My way to do this can be to speak boldly or to throw articles, stories and external information at you. Much like the Penguins throw everything but the kitchen sink at Gus. If you see only your own silver bullet, that is a shame – acknowledge my feedback and I am yours.
*: “Work order” is on the DVD “Operation Antarctica”.
October 29, 2015
Enterprise IT, Family and fun, LEGO, Softwaretesting
atWork, bugs, communication, community, decisions, gofigure, testingclub, value
I wonder if… the Norwegian and Swedish texts are correct on this picture:
“I wonder if” is surely among the things that I, as a tester, say or think a lot. You will also hear me cheer when we find a critical bug. Every defect / bug / observation / issue / problem / incident we find is our chance to learn about the product. It’s a natural part of the game to find things and then handle them. Defer them if so inclined, mitigate the risks, fix the bugs, update the processes – but always make a decision based on the new knowledge you have.
Here are some other things I often say:
Originally at the Ministry of Testing Facebook page, but the twists above are mine.
October 18, 2015
Enterprise IT, Family and fun, Softwaretesting
acceptance, atWork, communication, context, cynefin, exploratory, gofigure, requirements, test_strategy, unknown_unknown, validation, wicked problem
Many test processes and test tools insist that you establish the expected result of every test. It is risk ignorance, at best, not to take the notion of expected results with a kilo of salt #YMMV.
If you could establish the result of your solution in a deterministic, algorithmic and consistent way for a non-trivial problem, then you could solve the halting problem. But I doubt your requirements are trivial… or even always right. Even the best business analyst or subject matter expert may be wrong. Your best oracle may fail too. Or you may be focusing on only getting what you measure.
When working with validation in seemingly very controlled environments, changes and rework happen a lot, as every new finding needs a document trail back to square one… Stuff happens. Validation is not testing, but looking to show that the program did work as requested at some point in time. It is a race towards the lowest common denominator. IT suppliers can do better than just looking for “as designed”.
Still, the Cynefin framework illustrates that there are repeatable and known projects, and in those contexts you should probably focus on checking as many of the simple binary questions as possible in a tool-supported way, and work on the open questions in your testing.
Speaking of open ends – every time I see an explicit expected result I tend to add the following disclaimer, inspired by Michael Bolton (sung to the tune of “Nothing Else Matters”):
And nothing odd happens … that I know of … on my machine, in this situation 
And odd is something that may harm my user, business or program result
But I’d rather skip this test step and instead work the disclaimer into the description of the test and the guidelines.
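A minimal sketch in Python of the point above: an explicit expected result only checks exactly what it states, and the “nothing odd happens” part stays unasserted. The function name and behaviour here are assumptions for illustration, not from any real system:

```python
def format_invoice_total(amount: float) -> str:
    """Hypothetical function under test: formats an order total for display."""
    return f"Total: {amount:.2f}"

# The explicit expected result covers exactly what it states...
assert format_invoice_total(19.99) == "Total: 19.99"

# ...and nothing more. The check stays silent about negative amounts,
# locale, rounding mode – anything "odd" that may harm the user,
# business or program result:
print(format_invoice_total(-5))         # "Total: -5.00" – valid? nobody said
print(format_invoice_total(0.1 + 0.2))  # "Total: 0.30" – formatting hid the float drift
```

The passing assertion is the “as designed” check; everything the disclaimer sings about is precisely what the tool cannot assert for you.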
And now to something completely different:
See also: The unknown unknown unexpressed expectations, Eating wicked problems for breakfast
1: Anyone can beat us, unless we are the best; today’s innovation becomes tomorrow’s commodity
3: I’ve heard that somewhere…
September 14, 2015
communication, skills, unknown_unknown, wicked problem
As mentioned in “Diversity is important in testing”, my view is that the best testers are those that know that testing can be done in many ways. “Testing practices appropriate to the first project will fail in the second. Practices appropriate to the second project would be criminally negligent in the first.” ( http://context-driven-testing.com/) There is no “one way to do testing” (despite what standards tell you). Granted, there are – in context – best practices.
The best testers are those who can see beyond the current project and company framework. Those who realize that there is a fundamental difference between life-science validation and modern enterprise IT projects – and for agile projects even more so. If the company framework fails to keep current and allow clear tailoring, then “life finds a way“.
There will be contexts where UX is not very interesting, where there is no software as such, or where they release directly to production (so what we have is TitW). There will even be contexts where structured software testing has very little business value. Likewise, there will be contexts where it’s one shot only, and testing and dress rehearsals are done over and over again. (Consider, though, that for space launches superstition and good-luck charms play a very large role.)
But don’t assume that your one context, or what you have seen in some domains, is directly applicable in others. See beyond the visible, extrapolate your testing knowledge and approaches to different contexts, and you are the better tester.
September 13, 2015
Enterprise IT, Softwaretesting
atWork, change, management, questions, wicked problem
Not all my projects are thundering successes… Different early decisions have set the scene – but still the play, so to speak, has been full of lessons in what testers find when asked open questions:
- A team had previously synchronized information in the old context, so they were invited to test the new solution… which turned out to eliminate the need for synchronization, but nobody had troubled to tell them so.
- That the development team was not acting agile – sprints and scrums in words only
- That the biggest PBI (product backlog item) in hours, actually was to get GUI automation of modal-windows to work.
- That despite having a requirements sheet, the actual value of the sheet was zilch
- That despite putting in extra hours and overtime, the quality and stability of the deliverables did not improve
- That the biggest problem with integration was incompatibilities between the .NET and Java implementations of SOAP
- That simple requirements on virus scanning of documents were both hard and expensive to solve
- That part of the project deliveries was upgraded business guides – when tested, they needed corrections too
Calling them all defects or even software bugs sounds odd to me, because the ones that really take time are not the software issues themselves, but misunderstandings and not knowing everything (will we ever?). Standards tell you that testing finds risks in the Product, the Process and the Project – it seems to me we find even more issues in management decisions, architecture decisions, cultural issues and organisational change… it’s just software, but it does have a business impact.
September 6, 2015
acceptance, acceptance criteria, atWork, community, Quality, WomenInTech
I want the field of testing to have high diversity:
- Different personality types:
  - We need people to get ideas, and people to finish them
  - We need people to see the strategic view, and people to get into the details
- Different backgrounds:
  - We need people that can code
  - We need people that understand the business domain
- Different business domains:
  - We need testers in the field of software development
  - We need testers in the field of IT / ITIL service delivery
  - We need device testers, embedded software testers…
  - We need testers that understand the GxP regulations
  - We need testers that understand rapid and agile delivery
- Different people:
  - Parents, singles, women, men, people with kids and without
  - Young people, experienced people
  - People who take it as a lifestyle, and people to whom it’s just a job
…most of all, people. People who know that things can be done in many ways. Let’s get rid of the prejudice that testing is for the detailed and i-dotting only. Testing is about bringing information to the stakeholders about what works and what doesn’t – it’s never about “failure is not an option”.
Recently I was required to do a Cubiks Problem Solving test. It’s a 12-minute online test in word patterns, calculations and geometric patterns. Apparently I “failed” to complete all items in time, but had a high rate of right answers, so my score was “average” #whatever. That apparently made me perfect for the testing area… OH NO – it only tells you that I take pride in my own work. Everything else is pure speculation and prejudice; as Gerry Weinberg mentions in Psychology of Intelligent Problem Solving, there is a challenge with these kinds of tests for problem solving – they test, but not for problem solving.
Testing is about solving problems – business problems. Like can we ship?