Artificial Intelligence (AI) and Machine Learning (ML) can perhaps solve some testing challenges, but not all of testing. The testing vs. checking debate, and all the shift-left of checking, have revealed that some of testing is about critical thinking and some…
Many test processes and test tools mention that you have to establish the expected results for every test. It is risk ignorance, at best, not to take the notion of expected results with a kilo of salt. #YMMV
If you can establish the result of your solution in a deterministic, algorithmic and consistent way for a non-trivial problem, then you can solve the halting problem. But I doubt your requirements are trivial… or even always right. Even the best business analyst or subject matter expert may be wrong. Your best oracle may fail too. Or you may be focusing only on getting what you measure.
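One way to take a hard-coded expected result with that kilo of salt is to check properties that must hold rather than a single "correct" answer. A minimal sketch (the function and values are invented for illustration, not from any real system):

```python
# Contrast: one exact oracle vs. property-style checks that hedge
# an imperfect oracle.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: discount a price, round to cents."""
    return round(price * (1 - percent / 100), 2)

# Exact oracle: asserts the one "expected result" somebody had to establish.
assert apply_discount(100.0, 25) == 75.0

# Property checks: no single expected value, only things that must hold.
for price in (0.0, 9.99, 100.0, 100000.0):
    for percent in (0, 10, 50, 100):
        result = apply_discount(price, percent)
        assert 0.0 <= result <= price   # a discount never adds money
        if percent == 0:
            assert result == price      # a zero discount changes nothing
```

The property checks say less per run than the exact oracle, but they survive when nobody can state the exact answer up front, and they still catch the odd things that harm the user or the business result.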
When working with validation in seemingly very controlled environments, changes and rework happen a lot, as every new finding sends the document trail back to square one. Stuff happens. Validation is not testing; it is looking to show that the program did work as requested at some point in time. It is a race towards the lowest common denominator. IT suppliers can do better than just looking for “as designed”.
Still, the Cynefin framework illustrates that there are repeatable and known contexts, and in those you should probably focus on checking as many of the simple binary questions as possible in a tool-supported way, and save the open questions for your testing.
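The simple binary questions are exactly the ones a tool can run for you. A rough sketch of what that could look like (all names and endpoints here are invented stand-ins, not a real API):

```python
# Push the closed, yes/no questions into a tool-supported check run,
# keeping the humans for the open questions.

def status_code_of(endpoint: str) -> int:
    """Hypothetical stand-in for an HTTP call to the system under test."""
    known = {"/health": 200, "/login": 200, "/old-api": 410}
    return known.get(endpoint, 404)

# Each binary check has exactly one acceptable answer.
binary_checks = {
    "/health is up": (status_code_of("/health"), 200),
    "/login is up": (status_code_of("/login"), 200),
    "retired API is gone": (status_code_of("/old-api"), 410),
}

failures = [name for name, (got, want) in binary_checks.items() if got != want]
print("all checks passed" if not failures else f"failed: {failures}")
```

Everything in that table is checking, not testing: it frees you to spend your session time on the open-ended questions no tool can answer.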
Speaking of open ends – every time I see an explicit expected result I tend to add the following disclaimer, inspired by Michael Bolton (sung to the tune of “Nothing Else Matters”):
And nothing odd happens … that I know of … on my machine, in this situation 
And odd is something that may harm my user, business or program result
But I’d rather skip this explicit test step and instead have the test description and guidelines mention the above.
As mentioned in “Diversity is important in testing”, my view is that the best testers are those who know that testing can be done in many ways. “Testing practices appropriate to the first project will fail in the second. Practices appropriate to the second project would be criminally negligent in the first.” ( http://context-driven-testing.com/) There is no “one way to do testing” (despite what standards tell you). Granted, there are – in context – best practices.
The best testers are those who can see beyond the current project and company framework. Those who realize that there is a fundamental difference between life-science validation and modern enterprise IT projects – and for agile projects even more so. If the company frameworks fail to keep current and allow clear tailoring, then “life finds a way“.
There will be contexts where UX is not very interesting, where there is no software as such, where they release directly to production (so that what we have is TitW). There will even be contexts where structured software testing has very little business value. Likewise, there will be contexts where it is one shot only, and testing and dress rehearsals are done over and over again. (Consider, though, that for space launches superstition and good-luck charms play a very large role.)
But don’t assume that your one context, and what you have seen in some domains, is directly applicable in others. See beyond the visible, extrapolate your testing knowledge and approaches to different contexts, and you are the better tester.
Not all my projects are thundering successes… Different early decisions have set the scene – but still the play, so to speak, has been full of lessons in what testers find when asked open questions:
A team had previously synchronized information in the old context, so they were invited to test the new solution… which, it turned out, would eliminate the need for synchronization – but nobody had troubled to tell them so.
That the development team was not acting agile – sprints and scrums in words only
That the biggest PBI (product backlog item), measured in hours, was actually getting GUI automation of modal windows to work
That despite having a requirements sheet, the actual value of the sheet was zilch
That despite putting in extra hours and overtime, the quality and stability of the deliverables did not improve
That the biggest obstacle to integration was incompatibilities between the .NET and Java implementations of SOAP
That a simple requirement to virus-scan documents was both hard and expensive to solve
That part of the project deliverables was upgraded business guides – when tested, they needed corrections too
Calling them all defects, or even software bugs, sounds odd to me, because what really takes time is not the software issues themselves, but misunderstandings and not knowing everything (will we ever?). Standards tell you that testing finds risks in the product, the process and the project – it seems to me we find issues even more in management decisions, architecture decisions, cultural issues and organisational change… it’s just software, but it does have a business impact.
Sometimes testing is like being Sherlock Holmes – you find your clues hidden in plain sight: where the users scratch their nails; how the application user interface is cobbled together; odd patterns in the error logs…
But seldom without experimenting, seldom without pushing the subject under test, or consulting the weather report and the timetables – and getting out in the rain, doing some footwork.
He always seems to know better, always asking questions. He is so passionate about his problem-solving skills that by default he seems arrogant (but that is usually not on purpose).
There is more to motivation than carrots and sticks – or in the case of the image above: Gold and rotten potatoes.
The poor farmer above had his potato harvest fail, and he had to move, driven by fear, hunger and despair – at being targeted for outplacement… as modern management speak would label depleted human resources.
The wise guy with the pickaxe is out for the rewards of the gold – out for the cheat and greed of the quick fix. He, though, fails to deliver in the long run. His balanced scorecard is loaded for the current budget – containing only me, myself and I.
Lady Liberty in the background stands as a symbol of opportunities and unknown rewards – a new hope. I doubt that many immigrants of the day ever visited the monument in the turmoil – it remained only a beacon…
So what has this got to do with testing?
Leading testers is very much about motivating people. But the three “personas” above might also inspire thinking about things to test:
– Where are the burning platforms?
– Where are the quick rewards?
– Where are the long-term rewards?
If you are not Alan Page – go see RSA Animate’s “Drive: The surprising truth about what motivates us”.