QA Aarhus – Exploratory Testing How and When

QA Network Aarhus is a local, non-affiliated network of testers (and good friends) in Aarhus, where I had the great pleasure of talking about Exploratory Testing. This is the link collection; the slides are attached.




Mindmaps for 400

Finally, non-profit, self-organizing software testing is happening in Denmark. On May 21, 2014 we actually had two events:

At the first I was glad to share my experiences using mind maps in software testing, note-taking and information structuring. (Get the PDF of the XMind mind map here: https://jlottosen.files.wordpress.com/2014/05/mindmaps-400.pdf)

You stop going deeper down the tree when there is no more knowledge to gain, just like in good (exploratory) testing.
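To make that stop criterion concrete, here is a minimal sketch (my own illustration, not from the talk) of exploring a mind-map-like tree depth-first and pruning a branch as soon as it yields nothing new; the tree and the "facts" are invented examples:

```python
# Illustrative sketch (my own, not from the talk): exploring a mind-map-like
# tree depth-first and pruning a branch once it stops yielding new knowledge.

def explore(node, known, depth=0):
    """Walk the tree, collecting facts until a branch adds nothing new."""
    label, facts, children = node
    new_facts = set(facts) - known
    if depth > 0 and not new_facts:
        return  # no more knowledge to gain here: stop going deeper
    known |= new_facts
    print("  " * depth + label, sorted(new_facts))
    for child in children:
        explore(child, known, depth + 1)

# Hypothetical mind map: (label, facts observed at the node, children).
tree = ("login page", {"has form"}, [
    ("valid user", {"redirects", "sets cookie"}, [
        ("remember me", {"sets cookie"}, []),  # nothing new: branch is pruned
    ]),
    ("invalid user", {"error message"}, []),
])

explore(tree, known=set())
```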

The cultural context of the "for 100" comes from the Jeopardy TV quiz, where the questions come in 4 levels: 100, 200, 300, 400 for the increasingly harder questions. The prize is similarly $100 for level 100, etc.

FYI conference – workshop on ET

[ FYI IT QUALITY & TEST MANAGEMENT conference | 12.6.2013-13.6.2013 | Hotel First Copenhagen ]

WORKSHOP: How to get started with Exploratory Test – without losing the overview!
In Danish, Exploratory Test is also known as "udforskende test". It is easy to be led into performing it unstructured and without an overview. With a few techniques, however, you can lead an Exploratory Test so that the effort becomes an actively chosen approach, focused on what yields new knowledge. Some of the techniques that are important to know in the context of exploratory test are: mind maps, heuristics and session-based test management.
The workshop aims to give you new ideas for what might be relevant to test in the work you deal with every day. You will also get input on how to avoid losing the overview when you concretely face the choice of what is most valuable to test in a given situation.
This is a hands-on workshop – so bring your laptop!

About the workshop instructor:
Jesper Lindholt Ottosen has worked with structured testing for 10 years at Systematic, CSC and TDC. Since 2009 he has run internal and external blogs on software testing, and he follows the latest trends on the global scene within Exploratory Test and context-driven testing. Jesper focuses on tests making a difference by being value-driven and finding information about the project's characteristics.


Exploratory Test in a nutshell

At DWET (the Danish Workshop on Exploratory Testing) I presented a drawing of "Exploratory Testing in a nutshell" that has helped me structure my ET approach.

Case 1:
Supplementary to the scripted test strategy of a customer-facing website. Used to identify ideas, select the ideas to be in scope, and to structure the feedback loop. Test management tool support: HPQC (HP Quality Center), with ideas in the test plan and the "in scope" ideas in the test lab.

Case 2:
A social media website to be used by account management teams to collaborate with external contacts. The test strategy consists of three parts: Factory Acceptance at the supplier, Site Acceptance / "UAT", and "Exploratory Subject Matter Expert and Social Media Site Advocates". Tool support is based on blog posts and documents on the social media platform itself.


All oracles are failable

[ Software Testing Club Blog | Oct 6, 2011 | myself ]

All oracles are fail-able to a certain level of confidence.

Recently I had the opportunity to participate in Rapid Software Testing (game master: Michael Bolton) and the acclaimed dice game. I also had the chance to be game master for a variation of the dice game session for a small test team. Reflecting on the experiences, I had two considerations (some spoilers apply):

When are you confident enough?

The dice game is played as a loop of theories/ideas and tries/tests of those ideas. The goal is to produce a theory/algorithm that can successfully predict the number that the game master presents. How many tries/tests/checks would give you confidence in the theory you have in mind? Options (a minimal sketch of this loop follows the list):

  • When you successfully predict one throw, i.e. you say 7 and the game master says 7. Do you yell "LOSERS, see ya!"?
  • When you have 7 successful predictions in a row? (Why 7?)
  • After all 7776 combinations of 5 six-sided, one-colored standard dice?
  • Do you try every throw just once, or several times?
  • Would you know if, for every trial number divisible by 100, the game master would say "pi"? (Think leap years.)
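As promised above, here is a minimal sketch of the theory/try loop. The game master's real rule is secret, so the hidden rule and the candidate theory below are invented stand-ins; the quirk on the all-sixes throw echoes the "pi" idea above:

```python
# A minimal sketch of the theory/try loop. The game master's real rule is
# secret; the rule and the candidate theory below are invented stand-ins.
import itertools

def game_master(dice):
    # Hidden rule (assumed for this sketch): the sum of the dice,
    # except one edge case - compare the "pi on leap years" idea above.
    if dice == (6, 6, 6, 6, 6):
        return 31
    return sum(dice)

def my_theory(dice):
    return sum(dice)  # the tester's current model of the rule

streak = 0
# Try each of the 6**5 == 7776 possible throws of five six-sided dice once.
for throw in itertools.product(range(1, 7), repeat=5):
    if my_theory(throw) == game_master(throw):
        streak += 1  # confidence grows with every success...
    else:
        print(f"theory incomplete after {streak} successes: {throw}")
        break        # ...until prediction x+1 fails
else:
    print("all 7776 throws matched - how confident are you now?")
```

In this run the theory survives 7775 throws and still turns out incomplete on the very last one: confident to the level of x successes, failing at x+1.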


The dice game seems simple, but the problem domain of even the dice game is infinite, or at least practically infinite (7776 is practically infinite in the dice game, IMHO). James Bach replied to my tweet that "The number of tests doesn't matter, but the character of the test, relative to your mental models, matters a great deal." My purpose is not to find a fixed number of tries, but to make you consider the underlying assumption about confidence levels. That is, you have confidence in your model until it fails: you are confident to the level of x successful predictions, and then prediction x+1 fails. All you know at that time is that your theory is "incomplete" (not wrong, not right) – and this calls for more learning and more ideas…


All oracles are failable

The oracle in software projects is the resource of answers – the documents, the mind map, the subject matter expert. In the dice game the game master is the oracle (i.e. Michael Bolton, James Bach or yours truly). We are human, hence failable. The physical oracles (docs, …) even more so. This made me ponder (a small sketch follows the list):

  • Would you approach the dice game differently, actively knowing that the dice game master is failable?
  • If the game master (aka the oracle) made a deliberate error every once in a while – would you know?
  • If there is a bug (non-deliberate) in the game master's algorithm, would you know?
  • How would you test for the oracle making mistakes?
  • Do you test the dice game differently if it is a human or a machine oracle?
    • First, think about the dice game being computer-based.
    • Second, what if it was a human behind a computer-based interface?
    • Consider the implications of the Turing Test.
  • Oh, did you forget that I could make mistakes? Was that a rule, or an assumption?
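One way to probe the last questions is a consistency check: ask the oracle the same question twice and compare. The sketch below is hypothetical – the error rate and the underlying rule are my assumptions, not the real game:

```python
# Hypothetical sketch: a failable oracle that occasionally misreports, and a
# consistency check (ask the same question twice) that can expose it. The
# error rate and the underlying rule are assumptions, not the real game.
import random

def true_rule(dice):
    return sum(dice)

def failable_oracle(dice, error_rate=0.01):
    answer = true_rule(dice)
    if random.random() < error_rate:  # a deliberate or accidental slip
        answer += 1
    return answer

random.seed(7)
throw = (3, 1, 4, 1, 5)
inconsistencies = sum(
    failable_oracle(throw) != failable_oracle(throw) for _ in range(1000)
)
print(f"{inconsistencies} inconsistent answer pairs in 1000 repeats")
# Repetition catches random slips, but a systematic bug in the oracle's
# algorithm would answer consistently wrong - repetition alone won't find it.
```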


Michael Bolton replied to my tweet that "In the dice game, we need to be prepared to deal with any test that a tester offers. Is it a bug? That depends on the framing." The key framing of the dice game is usually a lesson in learning, in setting a theory and trialling that theory – still under the underlying assumption that the game master can deal with any test that is offered. What would happen if the game master was blindfolded? What would be the case if the algorithm was more complex – less humanly processable in a short time? There will always be a level of capability of the oracle, and it will fail – eventually.


[Image: 20 Sided Fuzzy Dice Danglers, http://www.thinkgeek.com/geektoys/plush/7ccc/?srp=13]

Testing a new version

[The Software Testing Club blog | November 29, 2011 | Jesper Lindholt Ottosen ]

Enterprise applications often come in packages and are purchased as Commercial Off-The-Shelf (COTS) products. Every now and then a new version of SharePoint, SAP, Jive, OeBS, Microsoft Windows… is made available, and the business and product owner decide to implement the upgrade.

Usually the setting is that there is a "Factory Acceptance Test" by the vendor of the COTS package and a "Site Acceptance Test" by the implementing IT service organization. Here are some ideas that have come to my mind the couple of times I have had to look into a testing strategy for an Enterprise COTS upgrade project. It's not a best practice – at best it's a heuristic :-)


Regression testing first – consider examining that quality didn't get worse. Select some of the key existing features that are most important to the product owner, and examine them. Involving the super users or application advocates in an exploratory testing activity will also provide benefits for the testers, the super users and the other project participants.

Interfaces – in an enterprise environment there is always interfaces to legacy systems and new “bud shots” to the IT tree. SOA services makes it even more important to look for known and unknown interfaces to the application. Similarly context specific customization’s (additions and removals) and applied “production hot fixes” having been applied or constructed based on v2.5. Analyzing the intermediate versions (above example 2.3, 2.4 and finally 2.5) including known fixes and known new features, can be another approach to identify the required levels of testing. Discuss with the product manager and business representative – the key is to find a level of test that they are OK with.