#270 – But what if we can’t release often?

With digital solutions there is an ongoing urge to release often. A quest for feature toggles and continuous delivery of new features and fixes. Automation of tedious tasks helps to drive consistent deliveries and aids in building high-performing teams. There is good research to support that.

Recently I have worked on a solution where the system only had to work for one month each year – and be closed down the other months. Other solutions I have looked at have years between active periods. We can't wait to ship features in the next release – the system has to have 100% of its features ready for that specific month.

Examples of business situations and domains:

  • Performances, like Eurovision
  • Sport events, like the Olympics
  • Elections
  • Black Friday, Christmas shopping?

How do you test when you can only perform the act once – and if it fails it will have serious consequences? You practise and rehearse until it becomes safely repeatable. You have stage-moms and support teams to train with you.

Let’s look at the ultimate example: rock-climbing with no rope. How do you test for climbing up Yosemite’s El Capitan 3000 feet / 900 meters?

Photo by Tobias Aeppli on Pexels.com

The accomplishment is more preparation than performance. Honnold climbed El Capitan roughly 50 times in the decade before his free soloing of the rock formation on June 3, 2017. While he is famous for the ridiculously fast 3-hour, 56-minute ascent, 99% of Honnold’s time on the wall was spent roped up, practicing the route. Knowing where and how to move was the culmination of hundreds of hours on that granite in advance.

The Seven Lessons From ‘Free Solo’ On Working Without A Rope

Besides the scaling of the IT infrastructure for peak load – the test strategy has to consider the fact that the event itself will be a one-off, where the show must go on – and there’s no fallback, only fall forward.

There’s a huge difference between continuously delivering web features every day in a business-to-consumer setting and one-off projects migrating legacy platforms. This is why my approach to creating situationally aware test plans starts with looking at delivery speed:

  • Rare – One-off
  • Regular – Quarterly
  • Often – Weekly
  • Pervasive – So often you don’t notice

A scale of delivery speed
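
As a rough illustration, the scale can be captured as a simple lookup that a test-plan template could start from – a minimal sketch of my own, where the structure and names are placeholders rather than anything from the project:

```python
# Illustrative sketch: the delivery-speed scale as a lookup table.
# The function name and structure are assumptions for the example.
DELIVERY_SPEED = {
    "Rare": "One-off",
    "Regular": "Quarterly",
    "Often": "Weekly",
    "Pervasive": "So often you don't notice",
}

def describe(speed: str) -> str:
    """Return the cadence that goes with a delivery-speed label."""
    return f"{speed}: {DELIVERY_SPEED[speed]}"

for label in DELIVERY_SPEED:
    print(describe(label))
```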

In one of the projects I worked on, we had extensive user rehearsals and dress rehearsals. Well, they are called something fancier in IT terms. But at the end of the day it was about training to turn the performance into muscle memory for the people performing the event. Much like the training Honnold did.

Lastly, my experience is that you get more organizational traction by aligning with goals rather than risks and issues. It’s a behavioral trick: simply talk about the thing you want to give attention to. And at the end of the day the CEO wants to talk about goals rather than risks. She would rather have a successful performance with a few flaws than a delay due to bruised hands (and egos) during waxing on and waxing off.

#265 – Using MTTR to Understand When to Test

It interests me deeply to explore why testing is happening. Often it’s because some decision-maker or framework dictates – “This is the Way“. And off we go on the quest to slay the dragon – or move items from point A to point B. Without much thinking about how the side quests help to move the main risks of the story.

The main risks are usually around something irreplaceable – and hence we test and try our best to shield it. But not all risks are equally dangerous. In IT we can build implicit testing into repeatable deliveries and reduce the time to fix things. The faster things are fixed, the better the time to information for the business.


Calculating Time To Information

The key metric for any knowledge work – IT deliveries and testing in particular – is more than Mean Time to Repair (MTTR). While fixing fast matters, timing is everything. Timing in getting information to the people who need it to make decisions. It’s no use being able to turn the ship around now if you needed it yesterday. The key elements in calculating time to information are how far away the information is and how evolved the information is.
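
As a rough illustration – a sketch of my own, with assumed scales and weights rather than an established formula – those two elements could be scored like this:

```python
# Illustrative sketch only: scoring "time to information" from the two elements
# mentioned above. The weights and categories are assumptions for the example.

def time_to_information(handoffs: int, evolution: str) -> float:
    """Estimate days before a decision-maker has usable information.

    handoffs:  how far away the information is (people/systems it passes through)
    evolution: how evolved the information is when it arrives
    """
    evolution_lag = {
        "raw-logs": 3.0,    # needs interpretation before it can inform a decision
        "report": 1.0,      # needs reading and follow-up questions
        "dashboard": 0.1,   # ready to act on
    }
    return handoffs * 0.5 + evolution_lag[evolution]

# A fix reported through raw logs and many handoffs reaches the decision-maker
# later than a slower fix surfaced on a shared dashboard.
print(time_to_information(handoffs=4, evolution="raw-logs"))   # 5.0
print(time_to_information(handoffs=1, evolution="dashboard"))  # 0.6
```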


Focus on Goals over Risks

Looking into the discussion of what goes into a Test Plan and what goes into a Test Strategy – it’s my personal opinion that we can improve our business alignment. Risk-based testing and Product Risk Analysis have been around for a long time – but better models have emerged to address what will be more impactful.


Strategy is about Making a Map

Strategy is not where you are heading, but how you’re getting somewhere in the long run. That goes for all strategies, and even for test strategies. Yet for test strategies we often get so caught up in the mechanics of selector strategies, testing types and techniques that we lose track of the higher purpose: moving the business towards a vision.


The Testing, not the Testers

Occasionally I see posts and discussions where testers are indignant that this or that company has no testers. How could they! Or similarly, when a product is released publicly with significant issues: See – it’s because they have no testers! Or that the testers are not taken seriously. OMG!


Relations – are about half of IT

You can’t have IT projects without relations. Relations matter more than they seem to.


To Transform Testing

There is no doubt that our long-lived testing narrative is under pressure. To continue to bring business value, both the IT field and testing are transforming to be about proactive business enabling.

The IT domain is currently buzzing with the word “IT transformation” – the idea that IT services should be more about “inspire, challenge and transform the digital businesses“. That it should be less about delivering IT products and artifacts and more about enabling digital business outcomes. Even for testing – it should be less about a product/service, and more about business necessity. As Anders writes:

Stop focusing on the things that bug people, and dare to do both IT and testing smarter and more business focused from the start. Build Quality in – smarter from start. That goes for IT services as a whole, and definitely also for the testing activities.

What can you do to transform your testing? I have three areas:

Discuss Business Strategy

Learn Wardley mapping – and use it like Chris McDermott to create context-specific maturity models with Wardley Maps informed by Cynefin. Use the mapping to Broaden the scope of the system under test.

Align with the Business Strategy

Leading Quality [Cummings-John, Peer 2019] has a whole chapter on “Align your team to your company growth metric“. Consider if the company you work for is driven by Attention, Transaction or Productivity metrics, and arrange your test activities accordingly.

Dare to Deliver in New Ways

We usually talk so much about optimizing the (IT and testing) delivery that we forget other ways to be innovative and enable the business. One way could be to dare to use new technology like RPA or a HoloLens to support tedious tasks in testing – to use an existing product for something new. Another is to actually test “all the things” that matter, or to apply testing to IT outside the realm of application delivery.

To Transform Testing I will discuss, align and dare – so that test solutions can be proactive business enablers, not only achieve shippable quality.

Mapping Mondays – Pioneers, Settlers, Town Planners

A Ratio between Tests

During one of my recent projects I was considering the ratio between the checks and the tests – that is, the ratio between those tests that are primarily simple binary confirmations, and those tests that are more tacit questions. This blog post is about my considerations on the idea/experiment/model.

First I observed that we have a range of different items in our requirements – some of them, copied verbatim from the current specification, are:

Binary Confirmations

  • It must be possible to add a customer ticket reference
  • It must be possible to copy the ticket number

Tacit Questions

  • You must be able to navigate displayed lists easily
  • It must be easy to compare work log and audit log

You could argue that they need refinement, more testability and less “easy“. But this is what we have to work with for now. Even if we had all the time in the world (we don’t) – we would not be able to write all of the requirements in a perfect form (if such a form exists).

As the system under test is a commercial standard system, some of the requirements are even given as “Out of the Box” – we will probably not even test those explicitly. Our coverage criterion is not ALL OF THEM.

Ordering the tests

It is a deliberate experiment from my side to divide the requirements (and hence the tests) into the piles of Closed and Open Questions. Perhaps there are even three piles – Rapid Software Testing has human checking, machine checking and human/machine checking; Wardley has Pioneers, Settlers and Town Planners. Perhaps the Rule of Three applies here too… perhaps it’s a continuum… let’s see.

Perhaps it’s a continuum

As part of the requirement workshops I will label the requirements and align with the stakeholders to get the expectations right – with the help of a few friends. This is a context/project-based “operationalization”.

I wrote about this ratio in my blog post on the Test Automation Pyramid, as I will use the labels to automate the confirmations (and only the confirmations). The assumption is that there are significantly more binary requirements tested by machine checking – and more tacit questions tested by humans. Getting all the tedious tasks automated – that is really the end goal.

Automate all the things that should be automated

Alan Page

Every project/context will have its own ratio, depending on a range of factors. Saying there should always be more of one type than the other would not hold. As the above project is the configuration and implementation of a standard commercial business software package (like SAP, Salesforce etc.), my expectation is that most of the requirements are binary – also considering that this project sits heavily on the right-hand side of the Wardley Map scale of evolution.
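
To make the labelling concrete, here is a minimal sketch of how the labels could yield the ratio – the label names are my own placeholders for the workshop outcome, and the requirement texts are just the examples quoted above:

```python
# Illustrative sketch: counting labelled requirements to get the ratio between
# binary confirmations (machine-checking candidates) and tacit questions
# (human testing). Labels are placeholders, not the project's actual tags.
from collections import Counter

requirements = [
    ("It must be possible to add a customer ticket reference", "binary"),
    ("It must be possible to copy the ticket number", "binary"),
    ("You must be able to navigate displayed lists easily", "tacit"),
    ("It must be easy to compare work log and audit log", "tacit"),
]

counts = Counter(label for _, label in requirements)
to_automate = [text for text, label in requirements if label == "binary"]

print(counts)                 # e.g. Counter({'binary': 2, 'tacit': 2})
print(len(to_automate), "requirements are candidates for machine checking")
```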

It’s a Reduction in Context

I am well aware that the two/three piles are an approximation/reduction. Especially when looking at the “binary” requirements and “only” testing these by confirmation. They could just as easily be explored to find further unknown unknowns. If we prioritize doing so – it’s all about our choice of risk.

It is also a limitation, as “perfect testing” should consist of both testing and checking. I factor this into the test strategy by insisting that all of the requirements are tested both explicitly and implicitly. First of all, most of our binary requirements are about the configuration and customization of the out-of-the-box software solution. So when the subject matter experts are performing the testing of the business flows, they are also evaluating the configuration and customization. And I do want them to spot when something is odd.

The binary configuration is ok, but human know-how tells us otherwise.

Ultimately I want the experts to do the thinking and the machines to do both the confirmations and the tedious tasks.