YMMV – this is a model for reflection, not a 1:1 scale map of everything in the universe. It might still be useful.
The space for the testing professional is under pressure – for my own role, and even more for the “traditional” testing professional. At least since 2017 there has been a shift and ongoing disruption. I finally have a way to visualize some of the trends that put the role of the tester under pressure:
SIT / UAT debate
Low-code trend
Modern Testing
Quality Engineering and whole team approach
I still see two key areas (stars below) for the classic tester to move into: exploratory testing based on weak signals, and supporting the end users’ low-code activities (as a test tool smith). As for the more managerial and coordinating role, I will have to get back to you in a future blog post.
There are so many different IT projects out there that assuming every IT project is about source code is quite a blind spot. Projects that deal with commercial standard systems or outsourced software might have source code underneath, but many teams do not have access to the source. Additionally, many legacy systems from the ’90s and earlier do not have the automation capabilities we have now.
Not all software projects are about consumer-facing native apps and websites. While those are numerous, there are still plenty of systems out there for internal and business-to-business use. The trends from CI/CD are picking up for B2B and internal systems, but things don’t move as fast there.
We often discuss where the “testing people” fit into the organisation – are they part of a delivery team, or of an enablement team independent of the delivery teams? The Team Topologies model enables a discussion about this challenge and gives guidance on what goes where and why.
I can see the benefits of having a central Test Center of all the testing people – and, on the other hand, of having them spread out in the delivery units. I work in an IT services company, where a project sometimes does not require full-time test attention, so we work for a range of customers at the same time. On the other hand, some projects last for years and years, and the testing people become dedicated to a specific customer’s IT stack and delivery team.
Notice that I use the term “testing people” to cover testing specialists, test analysts, automation specialists, test engineers and test managers like myself. Besides the moniker “testers” in the blog title, I try to avoid calling us “testers”. First of all, “testing people” do more than testing, and secondly, other people do testing too.
The Test Center I’m in is an organisational unit of “test consultants” in the various roles mentioned, working on a range of projects. But considering the “whole team approach to quality“, a Test Center unit sounds a bit… off. Would it be better to assign everyone to a delivery unit? What would be the reasoning behind what goes where and why?
I have found the Team Topologies model, which identifies these four team types:
Stream-aligned team: aligned to a flow of work from (usually) a segment of the business domain [yellow]
Enabling team: helps a stream-aligned team to overcome obstacles; also detects missing capabilities [cyan]
Complicated-subsystem team: where significant mathematics/calculation/technical expertise is needed [red]
Platform team: a grouping of other team types that provide a compelling internal product to accelerate delivery by stream-aligned teams [purple]
The model also identifies three modes of interaction during the flow of change (a small code sketch of the model follows the list):
Collaboration: working together for a defined period of time to discover new things (APIs, practices, technologies, etc.)
X-as-a-Service: one team provides and one team consumes something “as a Service”
Facilitation: one team helps and mentors another team
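To have something concrete to point at in the discussion below, here is a toy sketch of the model as data types. This is purely illustrative – the type names and fields are my own shorthand, not anything from the Team Topologies book:

```typescript
// A toy encoding of the Team Topologies vocabulary, only to reason about
// where "testing people" could sit. Names are my own shorthand.
type TeamType =
  | "stream-aligned"        // [yellow] aligned to a flow of business work
  | "enabling"              // [cyan] helps stream-aligned teams, spots missing capabilities
  | "complicated-subsystem" // [red] deep mathematical/technical expertise
  | "platform";             // [purple] internal product accelerating delivery

type InteractionMode = "collaboration" | "x-as-a-service" | "facilitation";

interface Team {
  name: string;
  type: TeamType;
  members: string[]; // e.g. developers, testing people, ops
}

interface Interaction {
  from: Team;
  to: Team;
  mode: InteractionMode;
}

// Example: a test coach from an enabling "Test Center" facilitating a
// stream-aligned delivery team that has its own dedicated tester.
const delivery: Team = {
  name: "Registrations",
  type: "stream-aligned",
  members: ["developer", "developer", "tester"],
};
const testCenter: Team = {
  name: "Test Center",
  type: "enabling",
  members: ["test coach"],
};
const coaching: Interaction = { from: testCenter, to: delivery, mode: "facilitation" };
console.log(`${coaching.from.name} -> ${coaching.to.name}: ${coaching.mode}`);
```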
Based on this model, the dedicated testing staff should be part of the stream-aligned delivery unit, while everyone working to enable the teams – testing coaches etc. – should be part of an enabling team, i.e. a center/unit/group/staff team for enablement (“Accelerate the Achievement of Shippable Quality”). I read it as the enabling team’s primary focus being to build self-reliance in the teams and then get out of Dodge – a key principle in Modern Testing.
How “testing people” could fit into the other two team types (Complicated-subsystem and Platform), I would have to consider a bit more. The testing activity exists in both, and the enabling team can facilitate testing into those two team types as well. So perhaps it’s not that different. What do you think?
Reality, of course, is a bit more complex. Even the Team Topologies model is just a model, and wrong at some point. It is still useful, though, in enabling a discussion on where the testing people fit in, and why.
This post is my current take on using Robotic Process Automation (RPA) tools for automation in testing. RPA tools come in different shapes; some are better at some things than others. How well they fit also depends on the tool stack of the system under test.
First, some definitions/terms.
RPA/RDA definition: I use the original Horses for Sources definitions: “increase productivity, support human operators in completing tasks and activities” (RDA) and “increase process efficiency, reduce manual labor by automating transaction intensive activities” (RPA). In more practical terms, RDA works across the desktop while RPA is more about background processing – a poor man’s integration between systems.
System under Test: There is more to a system under test than a web page with backends or an app for a smartphone. Explore the notion of the system under test. You might not have access to the source code, as the test pyramid assumes you do.
The enterprise challenge: Large businesses and organizations unfortunately struggle, as their IT stack is much more than web only. They want the benefits of continuous testing across their whole technology stack, but existing automation best practices don’t seem to address testing on top of IT systems that are older than SAFe, agile and Test-Driven Development.
Automation in Testing: See the AiT definition and namespace: “using automation to support their testing outside of automated checks“. Use tools and automation to handle all the tedious tasks of your active testing, and use automated checks to cover the binary requirement confirmations. (A small sketch of the former follows below.)
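To make “automation outside of automated checks” concrete, here is a minimal sketch of a helper that seeds deliberately awkward test data before an exploratory session. Everything here – the endpoint, the field, the inputs – is a hypothetical placeholder:

```typescript
// A minimal sketch of automation that supports testing without being an
// automated check: seed awkward inputs, then let a human explore the
// results. The URL and payload shape are hypothetical placeholders.
const BASE_URL = "https://test-env.example.com/api/registrations";

// Deliberately awkward inputs worth probing during the session.
const probes = [
  { name: "Ann", note: "plain happy path" },
  { name: "A".repeat(500), note: "very long name" },
  { name: "Åse O'Brien-Ñuñez", note: "diacritics and apostrophe" },
  { name: "", note: "empty name" },
];

async function seed(): Promise<void> {
  for (const probe of probes) {
    const res = await fetch(BASE_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: probe.name }),
    });
    // Log the outcome for the human tester to evaluate – the script
    // prepares the ground, it does not pass a verdict.
    console.log(`${probe.note}: HTTP ${res.status}`);
  }
}

seed().catch(console.error);
```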
Hence, the sweet spot for using RPA tools is as an execution muscle for mainframe solutions, commercial standard applications and legacy systems with inactive or unavailable codebases. The test management system is still key in providing an overview of all testing activities across CI/CD pipelines, RPA and tests based on human evaluation.
If your system under test is web only, you can follow the Modern Testing principles, build observability (https://charity.wtf/) and a lot of other things directly into the code. There are plenty of best practices around CI/CD for web systems. Obviously it depends on how well the knowledge about the system is codified – but you can work on that within your org/team too. It’s more tricky if the source code of your web SUT is not available to you and the locators render anew on every deploy or refresh. For that, consider moving up the stack and using Cypress or Testim.
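As a small illustration of that last point: when you cannot touch the SUT’s source and the generated locators churn on every deploy, a Cypress test can anchor on what the user actually sees instead. The route and texts below are hypothetical:

```typescript
// Cypress sketch: select by visible text rather than auto-generated
// locators that change on every deploy. Route and texts are hypothetical.
describe("order overview", () => {
  it("opens an order from the list", () => {
    cy.visit("/orders");
    // Anchor on user-visible text instead of something like
    // #ctl00_Grid_Row42_Link, which may not survive the next deploy.
    cy.contains("Open orders").click();
    cy.contains("tr", "Order 1001").should("be.visible");
  });
});
```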
In my current and primary projects the testing is not done by software testing professionals – and that’s probably for the better, too! It happens in contexts like these:
A Microsoft Dynamics “D365O” implementation of health registration forms, tested by public service clerks – partly by comparing to the previous solution, partly by testing the new system platform.
Moving 700+ servers running 50+ applications from one data center to another while keeping everything from mainframe to SaaS integrations live, tested by the application staff that have maintained the systems forever (10+ years).
Implementing a standard commercial off-the-shelf tool for 2000+ IT-savvy users. To most users this tool is their primary work tracking system, so they get to test it too.
In contexts like these, the act of testing is done by subject matter experts of the field – infrastructure specialists, public service clerks, support staff, application developers and the like. These people qualify as the “customer” in the Modern Testing principle that “the customer is the only one capable to judge and evaluate the quality of our product“. They might have a testing role during the project because of their high domain knowledge, but at the end of the project they continue with their “real business job” of using the system to produce stuff for the business.
It’s not their job to know ISTQB from “MT Principles” and “RST methodology“. That is up to me, as the manager of the testing. My role is more and more about setting the guidelines for the testing and facilitating the people doing the testing. My reach goes as far as asking them to think about how the product fails and succeeds, but I cannot expect them to know checking from testing.
Long gone are the days of managing testers that put all their skill into the niches of the testing craft. There are fewer software testing professionals doing the testing in projects like the above. Part of it is that describing the whole system explicitly is simply too expensive in time and money, which makes the requirements inherently fuzzy and undefined. And part of it is that learning the skills simply takes too long: some technical tests require the skills of a certified VMware specialist, others an eye for every unwritten, tacit business rule.
Another angle is that the skills the usual software testing specialist brings to the table are handled at a lower level. Testing is done by the organisation (like Microsoft) that builds the standard solutions and commercial off-the-shelf systems (SAP, D365O etc.). Another is that the test techniques of the software testing field simply no longer apply. I mean, how does boundary value analysis add value to an enterprise data center transition, when the system under test is not even software?
The better tester is neither the software developer nor the software testing specialist. It’s the person who ponders:
How could this go wrong…
I wonder if…
For this to work, we need to do…
Come to think of it, everyone in the project does that! Some do it more explicitly, some more experimentally. Everyone evaluates how their actions add value to the people that matter (at some point).