Taking My Own Medicine

Recently I had the chance to apply my own templates to myself and my active project, as I had to mentor a new test manager. I was challenged to explain how I read the upcoming IT environment project, and after looking into resources for new test leads, I realized I could take my own medicine.

Photo by PhotoMIX Company on Pexels.com

A year ago, I created a new test plan format – the Situational Aware Test Plan. Mind maps and one-page test plan canvases already exist, but I wanted to elaborate using the evolution principles from Wardley mapping – and stop writing test plan documents.

The table structure is there to provide guard rails for the elaboration, and I use the Darlings, Pets, Cattle, and GUIDs mnemonic as headlines. Our strategic decisions emerge as we fill in the worksheet based on the current situation and state; the strategies are the decisions that push a field in the grid to another state.

Delivery and Situation

(Worksheet columns: Darlings, Pets, Cattle, GUIDs)

  • New project – Fixed date
  • Existing delivery speed – Scheduled quarterly
  • Test environments, internal – Repeatable
  • Test environments with integrations – Crafted; some existing know-how
  • Environment infrastructure – Hosted data center practices
  • Test data – Known but cumbersome
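
To make the worksheet concrete, here is a minimal sketch in Python – the stage placements and helper names are my own illustration, not part of the format – of how the grid can be encoded and a strategy expressed as pushing a field to another state:

    # Evolution stages from the Darlings, Pets, Cattle, and GUIDs mnemonic,
    # ordered from least to most evolved (cf. Wardley mapping).
    STAGES = ["Darlings", "Pets", "Cattle", "GUIDs"]

    # Illustrative placements for the "Delivery and Situation" worksheet.
    worksheet = {
        "Existing delivery speed": "Cattle",            # scheduled quarterly
        "Test environments, internal": "Cattle",        # repeatable
        "Test environments with integrations": "Pets",  # crafted
        "Test data": "Pets",                            # known but cumbersome
    }

    def strategy(aspect: str, target: str) -> str:
        """A strategic decision: push a field in the grid to another state."""
        current = worksheet[aspect]
        if STAGES.index(target) <= STAGES.index(current):
            return f"{aspect}: already at {current}, no move needed"
        return f"{aspect}: move from {current} towards {target}"

    # Example: industrialize the crafted integration environments.
    print(strategy("Test environments with integrations", "Cattle"))

The worksheet itself only records the current state; the strategy is the list of moves you decide to make.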

While this project introduces new test environments, there is an existing environment with a quarterly delivery pace. This is a classic example of the core, chronic conflict of pursuing both: responding to the rapidly changing competitive landscape and providing stable, reliable, and secure services (DevOps Handbook, introduction, p. xxv), as elaborated in Align your Test Strategy to your Business Strategy.

The test team, besides me and the new test lead, consists of a new junior tester and a senior tester. We are on the same team, and most of us are even in the same office, so collaboration will be close and pervasive, with a focus on helping the new people grow.

The test team

  • Test team collaboration – Growing; pervasive
  • Test lead – Growing
  • Mentoring – Enabling
  • Domain know-how – Getting there

Test tools and approach

  • Test activity – Explore integrations; confirm internal requirements
  • Test cases – Existing can be updated
  • Test case repository – Create new repository

As mentioned in the blog post about visualization, we can now use the map to discuss why we need both confirmatory testing (CT) and exploratory testing (ET) for the project. Based on the project’s layout, I would advise an expert exploration of the integrations and more standard scripts for the known construction of the internal environments.

Strategy is about Making a Map

Strategy is not where you are heading, but how you are getting somewhere in the long run. That goes for all strategies – even test strategies. Yet for test strategies we often get so caught up in the mechanics of selector strategies, testing types, and techniques that we lose track of the higher purpose: moving the business towards a vision.


QA Aarhus – Exploratory Testing How and When

QA Network Aarhus is a local, non-affiliated network of testers (and good friends) in Aarhus, where I had the great pleasure of talking about Exploratory Testing. This is the link collection; the slides are attached.



Mindmaps for 400

Finally, non-profit, self-organizing software testing is happening in Denmark. On May 21, 2014, we actually had two events.

At the first, I was glad to share my experiences using mind maps in software testing, note taking, and information structuring. (Get the PDF of the Xmind mind map here: https://jlottosen.files.wordpress.com/2014/05/mindmaps-400.pdf)

You stop going deeper down the tree when there is no more knowledge to gain – just like in good (exploratory) testing.

The cultural context of “for 400” comes from the Jeopardy TV quiz, where the questions come in four levels – 100, 200, 300, and 400 – for increasingly harder questions. The prize is similarly $100 for level 100, and so on.

FYI conference – workshop on ET

[ FYI IT QUALITY & TEST MANAGEMENT conference | 12.6.2013-13.6.2013 Hotel First Copenhagen ]

WORKSHOP: How to get started with Exploratory Testing – without losing the overview!
In Danish, Exploratory Testing is also known as “udforskende test”. It is easy to be tempted into performing it unstructured and without an overview. With a few techniques, however, you can lead an Exploratory Test so that the effort becomes an actively chosen approach, focused on what provides new knowledge. Some of the techniques that are important to know in the context of exploratory testing are mind maps, heuristics, and session-based test management.
The workshop aims to give you new ideas for what might be relevant to test in the work you do every day. You will also get input on how to avoid losing the overview when you concretely have to choose what is most valuable to test in a given situation.
This is a hands-on workshop – so bring your laptop!

About the workshop instructor:
Jesper Lindholt Ottosen has worked with structured testing for 10 years at Systematic, CSC, and TDC. Since 2009 he has written internal and external blogs about software testing, and he follows the latest trends on the global scene within Exploratory Testing and context-driven testing. Jesper focuses on tests making a difference by being value-driven and by finding information about the characteristics of the project.


Exploratory Test in a nutshell

At DWET (Danish Workshop on Exploratory Testing) I presented a drawing of “Exploratory Testing in a nutshell” that has helped me structure my ET approach.

Case 1:
A supplement to the scripted test strategy for a customer-facing website. Used to identify ideas, select the ideas to be in scope, and structure the feedback loop. Test management tool support: HPQC, with ideas in the test plan and the “in scope” selection in the test lab.

Case 2:
A social media website to be used by account management teams to collaborate with external contacts. The test strategy consists of three parts: Factory Acceptance at the supplier, Site Acceptance / “UAT”, and “Exploratory Subject Matter Expert and Social Media Site Advocates”. Tool support is based on blog posts and documents on the social media platform itself.


All oracles are failable

[Software Testing Club Blog | Oct 6 2011 | myself ]

All oracles are fail-able to a certain level of confidence.

Recently I had the opportunity to participate in the acclaimed dice game. I also had the chance to be game master for a variation of the dice game session for a small test team. Reflecting on the experiences, I had two considerations (some spoilers apply):

When are you confident enough?

The dice game is played as a loop of theories/ideas and tries/tests of those ideas. The goal is to produce a theory/algorithm that can successfully predict the number the game master presents. How many tries/tests/checks would give you confidence in the theory you have in mind? Options:

  • When you successfully predict one throw – i.e., you say 7 and the game master says 7. Do you yell “LOSERS, see ya”?
  • When you have predicted 7 throws correctly in a row? (Why 7?)
  • After all 7776 combinations of 5 six-sided, one-colored standard dice (6^5 = 7776)?
  • Do you try every throw just once, or several times?
  • Would you know if, on every trial number divisible by 100, the game master said “pi”? (Think leap years.)

The dice game seems simple, but the problem domain of even the dice game is infinite – or at least practically infinite (7776 is practically infinite in the dice game, in my opinion). The number of tests doesn’t matter; the character of the tests, relative to your mental models, matters a great deal. My purpose is not to find a fixed number of tries, but to make you consider the underlying assumption about confidence levels: you have confidence in your model until it fails. You are confident at the level of x successful predictions – until the x+1st prediction fails. All you know at that point is that your theory is “incomplete” (not wrong, not right), and this calls for more learning and more ideas…
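
To play out the last option above and the point about confidence, here is a small Python simulation – my own sketch, not the actual game – where the game master’s hidden rule has exactly such a divisible-by-100 quirk:

    import itertools
    import random

    # All possible throws of five six-sided dice: 6**5 == 7776.
    ALL_THROWS = list(itertools.product(range(1, 7), repeat=5))

    def game_master(trial_no, throw):
        """The hidden rule, with a quirk on every 100th trial (think leap years)."""
        if trial_no % 100 == 0:
            return "pi"
        return sum(throw)

    def player_theory(throw):
        """The player's theory: the game master simply sums the dice."""
        return sum(throw)

    successes = 0
    for trial_no, throw in enumerate(random.choices(ALL_THROWS, k=150), start=1):
        if player_theory(throw) == game_master(trial_no, throw):
            successes += 1
        else:
            print(f"Theory failed on trial {trial_no} after {successes} successes")
            break

The player collects 99 confirmations and still holds an incomplete theory – the quirk only shows on the hundredth trial.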

All oracles are failable

The oracle in software projects is the resource of answers – the documents, the mind map, the subject matter expert. In the dice game, the game master is the oracle. We are humans, hence failable – and the physical oracles (docs, …) even more so. This made me ponder:

  • Would you approach the dice game differently if you actively knew that the game master is failable?
  • If the game master (a.k.a. the oracle) made a deliberate error every once in a while – would you know?
  • If there were a (non-deliberate) bug in the game master’s algorithm, would you know?
  • How would you test for the oracle making mistakes?
  • Would you test the dice game differently depending on whether the oracle was a human or a machine?
    • First, think about the dice game being computer-based.
    • Second, what if there was a human behind a computer-based interface?
    • Consider the implications of the Turing Test.
  • Oh, did you forget that I could make mistakes? Was that a rule, or an assumption?

The key framing of the dice game is usually a lesson in learning – in setting theories and trying them out – still under the underlying assumption that the game master can handle any test that is offered. What would happen if the game master were blindfolded? What would be the case if the algorithm were more complex – less humanly processable in a short time? There will always be a limit to the capability of the oracle, and it will fail – eventually.
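
Extending the same sketch, a failable oracle can be expressed as a wrapper around the true rule – again my own illustration. With a small error rate, a failed prediction no longer tells you whether your theory or the oracle is wrong:

    import random

    def true_rule(throw):
        return sum(throw)

    def failable_oracle(throw, error_rate=0.01):
        """Every once in a while the oracle answers wrongly - deliberately or
        as a plain bug; the player cannot tell which."""
        answer = true_rule(throw)
        if random.random() < error_rate:
            return answer + 1  # a subtle, hard-to-spot mistake
        return answer

A mismatch now has two candidate explanations – a wrong theory or a failing oracle – which is why repeated and varied trials matter more than any single result.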

http://www.thinkgeek.com/geektoys/plush/7ccc/?srp=13
20 Sided Fuzzy Dice Danglers

Testing a new version

[The Software Testing Club blog | November 29, 2011 | Jesper Lindholt Ottosen ]

Enterprise applications often come in packages and are purchased as COTS (Commercial Off The Shelf) software. Every now and then a new version of SharePoint, SAP, Jive, OeBS, Microsoft Windows… is made available, and the business and product owner decide to implement the upgrade.

Usually the setting is that there is a “Factory Acceptance Test” by the vendor of the COTS package and a “Site Acceptance Test” by the implementing IT service organization. Here are some ideas that have come to mind the couple of times I have had to look into a testing strategy for an enterprise COTS upgrade project. It’s not a best practice – at best, it’s a heuristic :-)

Regression testing first – consider examining whether quality got worse. Select some of the key existing features that are most important to the product owner, and examine them. Involving the super users or application advocates in an exploratory testing activity will also benefit the testers, the super users, and the other project participants.

Interfaces – in an enterprise environment there are always interfaces to legacy systems and new “bud shots” on the IT tree. SOA services make it even more important to look for known and unknown interfaces to the application. The same goes for context-specific customizations (additions and removals) and “production hot fixes” applied or constructed based on v2.5. Analyzing the intermediate versions (in the example above: 2.3, 2.4, and finally 2.5), including known fixes and known new features, can be another approach to identifying the required levels of testing. Discuss with the product manager and the business representative – the key is to find a level of test that they are OK with.