The authors have "multiple examples of applying these practices within mainframe environments, traditional packaged software application delivery teams, and product teams". It's not just for business-to-consumer web-based organizations.
The book is a tour de force into software delivery performance – the research and statistics show a clear correlation between DevOps and Lean principles and high-achieving organisations. Every arrow on the model below is backed by research. Read each arrow as "drives", "improves" or "leads to", e.g. Continuous Delivery leads to less burnout.
A last thing to highlight: high-performing organisations have a lower percentage of manual work in areas like configuration management, testing, deployments and the (ITIL) change approval process.
So – if you want to increase the boxes on the right, go do the stuff on the left.
There is no doubt that our long-lived testing narrative is under pressure. To continue to bring business value, both the IT field and testing are transforming to be about proactively enabling the business.
The IT domain is currently buzzing with the phrase "IT transformation" – the idea that IT services should be more about "inspire, challenge and transform the digital businesses", less about delivering IT products and artifacts and more about enabling digital business outcomes. Even for testing – it should be less about a product/service and more about a business necessity. As Anders writes:
Stop focusing on the things that bug people, and dare to do both IT and testing smarter and more business focused from the start. Build Quality in – smarter from start. That goes for IT services as a whole, and definitely also for the testing activities.
What can you do to transform your testing? I have three areas:
Leading Quality [Cummings-John, Peer 2019] has a whole chapter on “Align your team to your company growth metric“. Consider if the company you work for is driven by Attention, Transaction or Productivity metrics, and arrange your test activities accordingly.
Dare to Deliver in New Ways
We usually talk so much about optimizing the (IT and testing) delivery that we forget other ways to be innovative and enable the business. One way could be to dare to use new technology like RPA or a HoloLens to support tedious tasks in testing – to use an existing product for something new. Another is to actually test "all the things" that matter, or to apply testing to IT outside the realm of application delivery.
To transform testing I will discuss, align and dare, so that test solutions can be proactive business enablers – not only achieve shippable quality.
As of writing I am managing the testing of a large enterprise IT program, where we are implementing a new commercial enterprise solution (COTS).
Over the last many months there have been requirement workshop upon requirement workshop to write down what the new system should be able to do for the various business units. We have had many representatives from the business units as part of the workshops and now have about 1000 specific business requirements that need to be tested.
Some requirements are closed questions, others are more open-ended or otherwise require some thinking. Currently the ratio is that 70% is covered by test automation and 30% is left for a few of the subject matter experts to test. Management was happy with this, as it made the project faster, the solution more robust and the project less reliant on taking the business people away from their "real work".
The other day I reached out by mail to more of the business people involved in the workshops to let them know that testing had started, and that they would be able to access the solution under test when it had been “hardened”. But so far, only a few “track leads” would be involved.
The feedback surprised me – my message was both good news and bad news. Good in the sense that they would not be involved so much, but bad for exactly the same reason. One wrote back to me:
There is still a risk that the solution will not be as the workshops intended, as the requirements and solution might not capture precisely, what was agreed during the workshops
Having been part of the workshop, we are held responsible by our coworkers as to how well the new system supports the business
Why don’t the project want our involvement on this?
As a Test Manager I oversee the testing in a project or program. I am usually the only testing specialist in the project, so I need the right leadership skills and the right tools to succeed. I have to own the data about the testing and quality activities.
As the test manager I need to facilitate quite a range of testing activities, involving for instance:
Business Analysts or similar within the project
End users or end user representatives
I need to balance knowing what's going on (with regards to testing) against not micromanaging the people involved in testing and quality activities. My role is to facilitate that testing things happen – like the project manager making project things happen. I cannot own the activities without owning the data about them. I need to cover the full spectrum of tests – from engineered (RDA and CI/CD) to people-based (scripts and exploration).
The most practical tool for a test manager with this scope is PractiTest, as there is more to testing than just the test cases. The old term "ALM" comes to mind – it is still relevant when I look for a full test management tool. I need to cover both the "inputs" to testing (requirements, tickets and user stories) and the "outputs" (bugs) in one location. I need the requirements and user stories in my tool, as I need to base my test analysis and planning on the delivery model (which may not always be agile). I need the bugs in the testing tool too, as bugs can happen in any work product of the project: documents, code base and even the tests. PractiTest acknowledges that there is more to IT projects than code.
I appreciate the key driver of PractiTest – that all activities happen in-flow. You don't have to change windows, stack pop-ups or go to another tool in order to run the tests or create bugs. Creating bugs happens in the context of the test case and seamlessly moves all data about the run to the bug. Everything you need to do is context-based and available to you on screen. And it has some cool features like read-only links to graphs for management reporting, and a smart built-in "rapid reporter" for exploratory testing notes.
It can be a challenge to switch to PractiTest if you are in a compliance setting, if you need on-premise hosting, or if your team generally uses Azure DevOps (the tool formerly known as TFS). To get the full potential of Azure DevOps, though, you need the full Microsoft Test Pro licenses, so it's not a free tool either – nor is Azure DevOps intuitive for testing things that don't have code available. Like Azure DevOps, PractiTest is SaaS only, with multiple data centers for regional data compliance. As there is always inertia towards a commodity, it won't be long before there are no good arguments for keeping test management tools on-premise – the tool vendors will provide the compliance certificates instead (ISO/SOC really should be sufficient, IMO).
Out of the box PractiTest supports the categories of testing above (engineered, scripted, exploratory) and has the necessary integrations too: Surefire for unit testing, Maven for CI/CD, and Jira, ServiceNow or any other ITSM for requirement input. There is even a two-way integration to Azure DevOps. As the web design is "responsive" it could probably run off a tablet, which would enable easier test documentation for field tests. It would be even better to have a small version of it on a phone and be able to use the camera for "screenshots".
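To give a feel for what the CI/CD side of such an integration involves, here is a minimal, hypothetical sketch in Python: it parses the JUnit-style XML reports that Maven Surefire writes and posts a summary to a test management tool's REST API. The endpoint, token and payload fields below are placeholders, not PractiTest's actual API – check the vendor's API documentation for the real routes and fields.

```python
import glob
import xml.etree.ElementTree as ET

import requests  # third-party: pip install requests

# Placeholder endpoint and token - the real routes, auth scheme and payload
# fields depend on your test management tool's REST API.
API_URL = "https://testtool.example.com/api/runs"
API_TOKEN = "secret-token"


def collect_surefire_results(report_dir="target/surefire-reports"):
    """Parse the JUnit-style XML reports that Maven Surefire produces."""
    results = []
    for path in glob.glob(f"{report_dir}/TEST-*.xml"):
        suite = ET.parse(path).getroot()
        for case in suite.iter("testcase"):
            failed = case.find("failure") is not None or case.find("error") is not None
            results.append({
                "test": f"{case.get('classname')}.{case.get('name')}",
                "status": "FAILED" if failed else "PASSED",
                "duration_s": float(case.get("time", 0)),
            })
    return results


def push_results(results):
    """POST a summary so the engineered tests land next to the scripted and exploratory ones."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},  # assumed auth scheme
        json={"source": "maven-surefire", "results": results},
    )
    response.raise_for_status()


if __name__ == "__main__":
    push_results(collect_surefire_results())
```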
And I need to own the data around all of this if I want to be in charge of the testing (and not only the testers). We are very few software testing specialists on the project team, but as the manager of testing I need to cover many other people performing the testing. This transforms my role from test management to one about leadership, coaching, and facilitation of testing being performed by the SMEs – and anyone else really.
Among the currently shiny new test automation things are visual “script-less” test automation tools. But the visual test flows are still code – and thus require discipline to structure and maintain. Otherwise you are just adding yet another layer of spaghetti code.
Among the current shiny new test automation tools are visual "script-less" automation tools like LeapWork, Blue Prism and UiPath. These tools are part of a new class of business process automation tools called "Robotic Process Automation" (RPA). There are two sub-types: RPA, which focuses on processing data, and Robotic Desktop Automation (RDA).
All you can usually do to these types of business solutions is add customizations and configurations by entering or editing data directly through the GUI. Some of these systems allow configurations and customization in the form of config files – these really should be under change control, as they are part of the pipeline.
Visual tests are code – part of the ship, part of the crew.
– Bootstrap Bill Turner
Using RDA tools for test automation is a novel, uncharted approach. The editing of the "tests"/flows is usually done in a stand-alone application studio (graphical IDE) with interactions to the solution under test (across the GUI, and over Citrix and RDP) and to any test management and issue tracking system.
Interestingly, the other more "data processing" RPA tools like Automation Anywhere use a VBScript-like syntax. Writing and maintaining "scripts" like that is quite like the common approaches to GUI automation using frameworks like Sikuli, TagUI and Applitools. These are coding frameworks you can apply if you have the application code base or want to write test automation directly as code. There could be benefits in coding UI testing in all web-only projects directly using Selenium and Applitools. Most enterprise business solutions, however, are stand-alone applications, or their web code is horrible to hook into, as the selectors often seem randomly generated.
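For a web-only project where the code base and selectors are under your control, coded UI testing can stay small. A minimal sketch, assuming the Selenium and Applitools Eyes Python SDKs (method names may differ slightly between SDK versions); the login page URL and element IDs are made up for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from applitools.selenium import Eyes, Target  # pip install eyes-selenium

eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # assumed to be set for your account

driver = webdriver.Chrome()
try:
    # Start a visual test and navigate to a (hypothetical) login page
    eyes.open(driver, "Enterprise solution", "Login page renders correctly")
    driver.get("https://app.example.com/login")

    # Functional steps via Selenium selectors (IDs are hypothetical)
    driver.find_element(By.ID, "username").send_keys("sme_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    # Visual check: Applitools compares the whole window against the baseline
    eyes.check("Dashboard after login", Target.window())
    eyes.close()
finally:
    driver.quit()
    eyes.abort_if_not_closed()  # clean up if the test bailed out early
```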
Hence the primary driver for RDA adoption for test automation is to take the RDA & RPA tools and apply their strengths in process automation of enterprise business solutions to drive the test execution. Such a business flow could be "automating" activities during onboarding, or an SAP purchase order as in the images below. Another driver for adopting RPA for test automation is their visual approach to presenting interactions/tests as flows. Some do it gracefully and user-friendly (LeapWork) – others have a more old-school workflow/swim-lane approach (Blue Prism, UiPath). In both cases the visual flows illustrate an interaction across multiple GUI applications to perform business actions (yes, this still happens).
These drivers probably make the barrier to entry seem more manageable. But the visual ones very easily turn into visual spaghetti code if you don't keep an eye on it and use sub flows, low coupling and high cohesion – as with any other non-trivial code (of a certain McCabe complexity). One interesting way to go about a "coding" practice for visual test cases could be inspired by how BDD can be implemented in LeapWork with annotations and self-referencing unit tests.
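The same discipline carries over from ordinary test code. As a rough, hypothetical analogy in Python: a reusable "login" sub flow plays the same role as a well-named helper function, so the top-level flow reads like the business process instead of a wall of copy-pasted click steps (the application, URL and selectors are made up):

```python
from selenium.webdriver.common.by import By


def login(driver, username, password):
    """One reusable 'sub flow' - every test flow calls this instead of repeating the steps."""
    driver.get("https://erp.example.com/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()


def create_purchase_order(driver, item, quantity):
    """Another cohesive sub flow with a single responsibility."""
    driver.get("https://erp.example.com/purchase-orders/new")
    driver.find_element(By.ID, "item").send_keys(item)
    driver.find_element(By.ID, "quantity").send_keys(str(quantity))
    driver.find_element(By.ID, "submit").click()


def test_purchase_order_flow(driver):
    # The top-level flow mirrors the business process: low coupling, high
    # cohesion, and no duplicated click sequences to maintain in ten places.
    login(driver, "sme_user", "not-a-real-password")
    create_purchase_order(driver, item="laptop", quantity=2)
```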
At the end of the day even a visual test automation project is a coding project that should be part of the project code base like everything else – and probably best maintained by software engineers within the project team (where possible), unless you want a team of test engineers spending all day playing catch-up to maintain the automation code.
COTS/OOTB = Commercial off-the-shelf, out-of-the-box
During one of my recent projects I was considering the ratio between the checks and the tests – that is, the ratio between those tests that are primarily simple binary confirmations, and those tests that are more tacit questions. This blog post is about my considerations on the idea/experiment/model.
First I observed that we have a range of different items in our requirements – some of them are [actual copies from the current specification]:
It must be possible to add a customer ticket reference
It must be possible to copy the ticket number
You must be able to navigate displayed lists easily
It must be easy to compare work log and audit log
You could argue that they need refinement, more testability and less “easy“. But this is what we have to work with for now. Even if we had all the time in the world (we don’t) – we would not be able to write all of the requirements in a perfect form (if such a form exists).
I wrote about this ratio in my blog post on the Test Automation Pyramid, as I will use the labels to automate the confirmations (and only the confirmations). The assumption is that the binary requirements – by far the larger share – are tested by machine checking, while the tacit questions are tested by humans. If we can get all the tedious tasks automated – that is really the end goal.
Automate all the things that should be automated
Every project/context will have its own ratio, depending on a range of factors. Saying there should always be more of one type than the other would not hold. As the above project is the configuration and implementation of a standard commercial business software package (like SAP, Salesforce etc.), my expectation is that most of the requirements are binary – also considering that this project sits heavily on the right-hand side of the Wardley Map scale of evolution.
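To make the ratio tangible per project, a small, hypothetical sketch could label each requirement as "binary" (confirmable by a machine check) or "tacit" (needs a human) and report the split – the labels and sample data below are made up for illustration:

```python
# Label each requirement as "binary" (machine-checkable confirmation) or
# "tacit" (open-ended, needs a human), then report the ratio.
requirements = [
    {"id": "REQ-0001", "text": "It must be possible to add a customer ticket reference", "label": "binary"},
    {"id": "REQ-0002", "text": "It must be possible to copy the ticket number",          "label": "binary"},
    {"id": "REQ-0003", "text": "You must be able to navigate displayed lists easily",    "label": "tacit"},
    # ...the remaining ~1000 requirements would be labelled the same way
]


def check_vs_test_ratio(reqs):
    binary = sum(1 for r in reqs if r["label"] == "binary")
    return binary / len(reqs), (len(reqs) - binary) / len(reqs)


machine_share, human_share = check_vs_test_ratio(requirements)
print(f"machine checks: {machine_share:.0%}, human tests: {human_share:.0%}")
```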
It’s a Reduction in Context
I am well aware that the two/three piles are an approximation/reduction – especially when looking at the "binary" requirements and "only" testing these by confirmation. They could just as easily be explored to find further unknown unknowns, if we prioritize to do so – it is all about our choice of risk.
It is also a limitation, as "perfect testing" should consist of both testing and checking. I factor this into the test strategy by insisting that all of the requirements are tested both explicitly and implicitly. First of all, most of our binary requirements concern the configuration and customization of the out-of-the-box software solution. So when the subject matter experts are performing the testing of the business flows, they are also evaluating the configuration and customization – and I do want them to spot when something is odd.
Ultimately I want to use the experts to do the thinking, and the machines to do both the confirmations and the tedious tasks.