Visualize the Test Strategy

There are plenty of models/frameworks that list the testing activities you could perform – but there seems to be little assistance on WHAT to do in which situations. I have two visual ideas – let’s explore how to visualize the test strategy. And yes, there’s no one-size-fits-all – context matters.

It all started with one of my colleagues asking: “Do we have any test framework / strategy document that describes the purpose and which type of customers need which testing: Unit testing, Functional testing, Data Handling testing, Integrity testing,” … +16 other testing activities.

It was clear to me that while ISTQB lists and describes the above testing activities, and the Heuristic Test Strategy Model lists heuristics – there’s little guidance as to when these activities are more or less valuable, and in what contexts they apply best. We should obviously not throw in everything and the kitchen sink in every single testing project in the world. The value of any practice depends on its context – even the heuristics.

What we are looking for is a way to discuss and visualize the overall test approach. While this can be put in a test strategy document (I often do) – such a document is only a written narrative of the strategy selected. The test coverage strategy and the test automation strategy are parts of the test strategy – but we first have to see the relationship between the parts.

One way to go about it could be to visualize the pipeline:

Continuous Delivery Pipeline Example
Visualize the pipeline!

Visualizing the pipeline, as described by @lisacrispin & @aahunsberger, can help you find all the places from idea to deploy where the story can be tested. I think this model could work for non-DevOps deliveries too – testing can add value everywhere, and there’s more to testing than gatekeeping.

If you are not clear about when in the SDLC/pipeline to apply the different testing approaches, you need a discussion around value and visibility on one hand, and relevance of the test activities on the other. That discussion could also help elaborate the boxes on the above visualization.

Another way to go about it could be to make a Wardley map

A Wardley map is an illustration that has the characteristics of a landscape map, and can be used for orientation in an IT setting. Wardley maps have two dimensions: Visibility and Evolution.

Visibility to the stakeholder could be business need or perceived value, as used in “No Code, No Test“. Evolution is mostly about the relative position from unknown/uncharted to embedded/industrialized. For instance, looking at IT systems, it matters how evolved the system under test is:

It seems obvious that the more novel the SUT is, the more relevant exploratory testing is – and similarly, the more standardized the stack is, the more valuable continuous testing is. Relatively valuable is probably the better wording: the relative position of the elements is a key output of Wardley maps. (And more about the relative relationship between ET and CT later.)
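The mapping from evolution stage to testing emphasis can be sketched as a small heuristic. This is a minimal illustration, not part of the original post – the activities, coordinates and thresholds are my own assumptions for the sake of the example.

```python
# A minimal sketch of placing systems under test on the Wardley evolution
# axis and deriving a testing emphasis. Thresholds and example systems are
# illustrative assumptions.

def suggest_emphasis(evolution: float) -> str:
    """Suggest a testing emphasis from the SUT's evolution stage.

    evolution: 0.0 (genesis/uncharted) .. 1.0 (commodity/industrialised)
    """
    if evolution < 0.33:
        return "exploratory testing"        # novel SUT: learn by exploring
    if evolution < 0.66:
        return "mixed exploratory + scripted"
    return "continuous testing"             # standardised stack: automate

# Position some hypothetical systems under test on the evolution axis
systems = {
    "new ML prototype": 0.1,
    "custom web app": 0.5,
    "payroll on standard SaaS": 0.9,
}

for name, evo in systems.items():
    print(f"{name}: {suggest_emphasis(evo)}")
```

The point is the relative positions, not the exact numbers – just as on the map itself.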

Add more exploratory testing to uncharted systems

So first of all, we can position the test activities based on characteristics of the underlying IT structure. Secondly, as characteristics change, we can map the visibility and evolution of each testing activity. Continuous Testing, itself, can be made more or less visible and more or less embedded and industrialized.

As mentioned with ET and CT, we can now use the map to discuss why we need both for a specific project. Continuous Testing relates upwards in the value chain to Continuous Delivery, while exploratory testing ties into the more visible end-user goal of building the right thing, especially in a context of implicit and tacit knowledge.

To conclude on my colleague’s question: to plan a test strategy we need to understand the pipeline, the relative value of the test activities and the relative evolution of the test activities.

[Wardley Mapping by Simon Wardley; template by @HiredThought]

Go read Accelerate!

ACCELERATE – The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations

by NICOLE FORSGREN, JEZ HUMBLE, GENE KIM

The authors have “multiple examples of applying these practices within mainframe environments, traditional packaged software application delivery teams, and product teams“. It’s not just for business-to-consumer web-based organizations.

The book is a tour de force into software delivery performance – the research and statistics show a clear correlation between DevOps and Lean principles and high-achieving organisations. Every arrow on the below model is backed by research. Read each arrow as “drives”, “improves” or “leads to” – e.g. Continuous Delivery leads to less burnout.

Saved you a click: https://itrevolution.com/book/accelerate/

A last thing to highlight: High performing organisations have lower manual work percentages in areas like: configuration management, testing, deployments and in the (ITIL) change approval process.

So – if you want to improve the outcomes in the boxes on the right, go do the stuff on the left.

Read the book and act on it.

Characteristics Change

Susanne Kaiser @suksr did a talk at DDD Europe 2020 on the topic of: “Preparing for a future Microservices journey using DDD & Wardley Maps“. The below slide (from SlideShare) is a superb illustration of the Wardley map evolution scale. I just want to save it here – to savour it.

PS: Even Wardley Maps move on this scale…

Wardley Maps - Characteristics change
Wardley Maps – Characteristics change by @suksr

See also: To Transform Testing, Innovation in Testing, Broaden the scope of the SUT, today’s innovation becomes tomorrow’s commodity.

OpsDev – more dev work by ops

The hyped term “DevOps” is equally true the other way around: OpsDev – that is, more and more work in the operations and infrastructure departments happens as development activities, with scripts, code repositories and build managers. OpsDev is as tool-heavy as DevOps, and test involvement is similarly pipeline-focussed.

Guest blog post at http://www.plutora.com/blog/opsdev-test-environments-management 

Shift-Right, you wild one!

The Shift-Right label means that more and more testing (and checking) can happen on the live application in production. Some call it monitoring, some call it Testing in the Wild. It is a very wild idea for some people and some contexts #YMMV. It may very well be the best way of testing in some contexts.

Once I consulted on a network stabilization and delivery optimization project for a consumer bank. They had many issues in their production environment… I strongly advocated that they test the network changes and other operational activities in production in a controlled and structured way. (I have talked about “How to Test in IT operations“ at Nordic Testing Days 2016). More on testing during IT deliveries in Shift-Deliver.

Shift-Right is a trend that people have covered well before me; here are some pointers:

The key is really, as Alan puts it, that “testers should try to learn more from the product in use” – and with that come tools like Google’s canary builds, Netflix’s Chaos Monkey, etc.
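The canary idea above can be boiled down to a small decision rule: only promote the new build if the canary’s error rate stays close to the baseline’s. This is a minimal sketch of that idea under my own assumed names, thresholds and numbers – real canary analysis (as in the Google and Netflix tooling mentioned) is far more elaborate.

```python
# A minimal sketch of a shift-right style canary check: compare the error
# rate of a canary deployment against the stable baseline before promoting.
# Function name, tolerance and the example numbers are illustrative
# assumptions.

def canary_ok(baseline_errors: int, baseline_total: int,
              canary_errors: int, canary_total: int,
              tolerance: float = 0.01) -> bool:
    """Promote the canary only if its error rate is at most
    `tolerance` above the baseline's error rate."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return canary_rate <= baseline_rate + tolerance

# Baseline: 20 errors in 10,000 requests (0.2%);
# canary: 5 errors in 1,000 requests (0.5%) – within the 1% tolerance
print(canary_ok(20, 10_000, 5, 1_000))  # → True
```

Learning from the product in use, here, simply means the test oracle is live production traffic rather than a scripted expectation.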


This trend goes along with Shift-Coach, Shift-Left and Shift-Deliver, discussed separately. Initially I considered Shift-Right to be about consulting, but after hearing Declan O’Riordan at DSTB 2016 I realized that Shift-Right was the right label for test in production, testing in the wild, etc.

Similar posts regarding things in the wild: Bugs Happens, The Kcal bug, Tradition is a choice and Can you see beyond the visible.

Testing decommissioning of ICBMs and software

[ SoftwareTestingClub | October 24, 2011 | Jesper Ottosen ]

Recently I had the great opportunity to visit the only museum in the world for the ICBMs of the Cold War – http://www.rvsn.com.ua/en/. The museum has some ICBM vehicles, some remaining bunker tops – and the command bunker. I actually sat in the seat, 10 stories below ground, that was manned 24x7x365 during the Cold War! The decommissioning of the ICBMs was a HUGE project (in more ways than one). I came to consider the testing and checking required in that program… how can we test for something that is gone?

Decommissioning, closing, retiring and sunsetting applications have somewhat the same challenge – albeit the mission usually is of less… worldwide significance. The mission behind an application retirement project could be huge cost savings, as there is a high price on application maintenance – database and OS licenses etc. Other mission scenarios could be IT strategic initiatives to rearrange existing enterprise applications for an updated business fit. The retirement initiative could also be a side effect of moving applications to Software as a Service platforms.

The simplest approach to application retirement is to stop it and see if anybody notices. Alas – this might turn off something vital. It is important to do a retirement analysis and make some of the unknowns known. The actions involved in a more controlled application retirement analysis could be:

  • Identifying known interfaces to the application
  • Identifying known interfaces from the application
  • Identifying data that needs to be migrated elsewhere
  • Identifying close-down, ramp-down and retention periods
  • Identifying documents, manual processes, SLA’s, customer and vendor contracts
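The analysis areas above can also be captured as structured data, so the remaining unknowns are visible and trackable. This is a minimal sketch under my own assumptions – the field names, the `open_questions` helper and the example application are all illustrative, not from the original post.

```python
# A minimal sketch of tracking a retirement analysis as structured data.
# Field names and the example entries are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class RetirementAnalysis:
    application: str
    inbound_interfaces: list = field(default_factory=list)
    outbound_interfaces: list = field(default_factory=list)
    data_to_migrate: list = field(default_factory=list)
    retention_period_months: int = 0
    documents_and_contracts: list = field(default_factory=list)

    def open_questions(self) -> list:
        """List analysis areas that are still empty, i.e. still unknown."""
        gaps = []
        if not self.inbound_interfaces:
            gaps.append("inbound interfaces")
        if not self.outbound_interfaces:
            gaps.append("outbound interfaces")
        if not self.data_to_migrate:
            gaps.append("data migration")
        if self.retention_period_months == 0:
            gaps.append("retention period")
        if not self.documents_and_contracts:
            gaps.append("documents/contracts")
        return gaps

analysis = RetirementAnalysis(
    application="legacy-billing",
    inbound_interfaces=["CRM nightly feed"],
    retention_period_months=24,
)
print(analysis.open_questions())
# → ['outbound interfaces', 'data migration', 'documents/contracts']
```

An empty field here is a prompt for the team: an area where the analysis has not yet turned the unknown into the known.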

As in all testing projects, it is worth discussing the test strategy with the project team (sponsor included) up front. Discuss what the worst-case scenarios are, and base a risk analysis and evaluation on those. Discuss “Is there a problem if” scenarios. This will help guide the retirement road map and testing activities moving forward. As in all testing and analysis work, you will not be able to turn every stone and pledge every deadly mistletoe.

From a business perspective, the removal of the application from the underpinning contracts might be the one thing that enables the business case of the whole project – and it will be the worst case for the project should the cost reductions be missed. Don’t disregard the testing of removing the application from the contracts and SLA’s. Test and check all the identified documents, and consider all manual business processes that utilize the application being removed. How are the business processes affected by the application change?

The retirement road map may also address application “retention” time: a time frame where the application is “on hold”, being monitored for something unexpected to pop up. There might be an unknown yearly batch job around the fiscal year. There might be regulatory requirements for the data to be read-only for a number of years.

Some good old software testing & checking will be possible, considering the updated application portfolio:

  • The updated applications are not doing something they should be doing
  • The updated applications are doing something they shouldn’t be doing
  • The updated applications are doing something they should be doing, but it feels or looks wrong

[from http://cartoontester.blogspot.com/2010/06/3-types-of-bugs.html]

In the end, an application retirement project is as much about making a planned change as “build” projects are. Some of it is just the other way around – but the key mission of testers & checkers is to provide decision input for a successful outcome.