- It’s better to do it often, than to let it pile up
- It’s a tedious task that robots can do (partially)
- Automation can catch some base level hairy stuff
- Bigger hairy catches should be hand-picked
- It always involves using tools
- It’s better when it’s a whole team approach to cleaning
- If everybody does their area, it all adds up
- There’s always the usual spots …
- .. And the spots to see after you thought you were done
- In a hurry, you use the snow-plow method
- It catches bugs
This is an analogy blog post – consider it an experiment; not an absolute truth, but a model. And a model is always false, but sometimes useful.
This blog post also comes out of a community challenge from the Bloggers Club at the Ministry of Testing. There are regular challenges that aim to share community thoughts. This month, the challenge is to share a personal perspective on “Testing is like…”
The analogy is inspired by Heather’s post about her new vacuum robot below. If you want to consider how to test a robot vacuum, see the club post: How to Test a Robot Vacuum?
[Image of “Floor-a” with permission from Heather]
We often discuss where the “testing people” fit into the organisation – are they part of a delivery team, or an enablement team independent of the delivery teams? The Team Topologies model enables a discussion about this challenge and gives guidance on what goes where and why.
I can see the benefits of having a central Test Center with all the testing people – and, on the other hand, of having them spread out in the delivery units. I work in an IT services company, and sometimes a project does not require full-time test attention, so we work for a range of customers at the same time. On the other hand, sometimes projects last for years and years – and the testing people become dedicated to a specific company IT stack and delivery team.
Notice that I use the term “testing people” to cover testing specialists, test analysts, automation specialists, test engineers and test managers like myself. Besides the moniker “testers” in the blog title, I try to avoid calling us “testers”. First of all, “testing people” do more than testing and secondly other people do testing too.
The Test Center I’m in is an organisational unit of “test consultants” in various of the mentioned roles, working on various projects. But considering the “whole team approach to quality“, the Test Center unit sounds a bit .. off. Would it be better to assign everyone to a delivery unit? – What would be the reasoning behind what goes where and why?
I have found the Team Topologies model, which identifies these teams:
- Stream-aligned team: aligned to a flow of work from (usually) a segment of the business domain [yellow]
- Enabling team: helps a Stream-aligned team to overcome obstacles. Also detects missing capabilities [cyan]
- Complicated Subsystem team: where significant mathematics/calculation/technical expertise is needed [red]
- Platform team: a grouping of other team types that provide a compelling internal product to accelerate delivery by Stream-aligned teams [purple]
The model identifies three modes of interaction during the flow of change:
- Collaboration: working together for a defined period of time to discover new things (APIs, practices, technologies, etc.)
- X-as-a-Service: one team provides and one team consumes something “as a Service”
- Facilitation: one team helps and mentors another team
Based on this model, the dedicated testing staff should be part of the stream-aligned delivery unit, while everyone working to enable the teams – testing coaches etc. – should be part of an enabling team, aka a center/unit/group/staff team for enablement (Accelerate the Achievement of Shippable Quality). I read into it that the enabling team’s primary focus would be to build self-reliance in the team and then get out of Dodge. A key principle in Modern Testing.
How “testing people” could fit into the other two teams (Complicated Subsystem & Platform), I would have to consider a bit more. The testing activity is present in both, and the enabling team also facilitates (testing) into those two team types. So perhaps it’s not that different. What do you think?
The findings of the model also align with the “Product Team” and “Competence Team” of this article (in Danish): Hvilke karakteristika og udfordringer har dit team? (“Which characteristics and challenges does your team have?”)
Reality, though, is a bit more complex. Even the Team Topologies model is just a model, and wrong at some point. It is still useful, though, in enabling a discussion on where the testing people fit in, and why.
The testing activity has been changing for a long time, and it’s clear that the testing activity has shifted. Even the test managers have to re-calibrate, as other roles will be doing the test management activity. Be prepared that someone else will do your testing job. Work on building self-reliance in others and be prepared to hand over what you can do.
There is more to testing than testing specialists punching test cases. The testing activity as such has shifted (both left and right), and testing is being done by more roles than “testing people”. Depending on the context, the explicit testing activity is done by a mix of developers, testing specialists, end users and others.
I often find myself as the only testing person on the project. The testing activity is done by automation specialists and end users in one project, and by technical operations staff and end users in another. In these projects either the technology or the business knowledge is paramount – not so much the exploration of flaws and edge cases by specialized testers.
– me, 2020. YMMV
Similarly for the test managers – there’s a trend/shift where the test management activity sometimes moves away from the test managers. Even away from me – even if I’m sometimes more of a “project manager of the testing activity“, a “test coach” or similar. The trend is already there – sometimes coined as the “whole team approach to quality“. Yes, most of the test management activity can be done by scrum masters, Release Train Engineers and even project managers…
Recently I was asked to assist a large transition project for a holding company with many brands. Each brand had their own applications and technology stack, but the holding company had decided to move the hosting. So the holding company’s Project Management Office (PMO) was put in charge of facilitating the brand’s testing activities – an activity they had never considered nor done before. My role would only be to provide guidance, not do the actual facilitation.
Which got me thinking….
And after some deep thinking: I do have the privilege of being able to adapt. I don’t need to hoard knowledge or make power moves (anymore) or worry about health coverage or any of the lower tiers of the Maslow pyramid (anymore).
It’s very natural for me to hand over project approaches to my co-workers. I’m often on the “blue team” to outline the strategy. My best field of work is bringing clarity and consistency, not scalability or repeatability, to the practice.
I naturally hand over learning anyway, so why not re-calibrate when the thing I do has reached a stage where it’s repeatable – and then focus on building the skills in others and work myself out of the test management role as we know it.
And don’t worry that someone else will eventually do my (testing and test management) job. The first step is to acknowledge the trend/pattern, the second to redefine and bring clarity! Let’s explore and see what we find!
You, yourself, are responsible for getting the training, learning and knowledge you need. Don’t wait for your boss – be proactive; it drives your success. Here are some places to start:
Meetups are happening online now, which removes one primary barrier to attending great talks. Similarly, conferences are going online, some with a fee, some for free – some even in multiple time zones. Lastly, online training sites are abundant with relevant information for the challenges you have. Yes – also for you!
Just this week, April 2020, I’m attending:
- Smart Bear Connect 2020, online / live
- Lean Agile Scotland 2019 recording
- Test Bash @ home 24 hour online / live
With plenty of talks about risk-based testing, test management in the light of automated deliveries, BDD etc. With live Slack groups the experience is almost like the physical conferences :). Next up in May is the Online Test Conf, Spring 2020 with topics for everyone in convenient global time slots.
When your boss says there’s no budget for attending conferences in person this year (again!), there are other ways to attend – physically. You could try to submit a talk and get accepted, but the barrier is quite high. A great way is simply to reach out and volunteer to help the program committee. If you can time it with regard to the budget year, ask your boss based on the conference program aligned with your company strategy. At the very least the boss should allow it to be company time – else take the time off. …
If you are hungry to learn
What I see in the global testing community is that Scandinavians are complacently waiting for the company to pay time, money and effort for their learning, while people in emerging economies (hi Sfax and Argentina) are eager to learn and on the forefront of the trends of the trade. They are driving the change of a positive, inclusive community.
And for you!
“If you still work in silos, your success will be less.”
– Mike Lyles, SmartBear Connect 2020
In classic test techniques and test approaches the test activity is a scarce resource. Due to time and money constraints, a risk-based priority was always required to make ends meet. We now have the tools and approaches for a better Return on Investment for the testing activity, and it’s all about running more tests, more frequently and sooner.
You never have time to test everything. So in the context of classic test techniques and testing types (I’m looking at you, old fart) you had to prioritize and combine tests to do more with what you had.
- “MoSCoW priorities” on test cases (Must-Should-Could-Would). Yet when management put a 20% cap on Must cases, the business just requested more tests until the level was achieved.
- Pairwise combinatorics and equivalence classes might reduce the number of tests, but always seemed to focus on low-level input fields and formats, never on business outcomes.
- Discussions about whether the test case was a regression test, an integration test or what not. Sometimes regression tests mattered more than new functionality. Sometimes SIT and UAT seem to be the very same thing, so why test twice? What the business really wanted was not window dressing of words, but results, no matter the name.
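To illustrate the pairwise idea above, here is a minimal sketch of greedy all-pairs selection. The parameters (a hypothetical login form with browser, user and locale) are made up for the example – and notice how the reasoning stays at the level of input values, not business outcomes:

```python
from itertools import combinations, product

# Hypothetical test parameters, purely for illustration.
params = {
    "browser": ["Chrome", "Firefox", "Edge"],
    "user": ["admin", "member", "guest"],
    "locale": ["en", "da"],
}
names = list(params)

# Exhaustive testing: the full cartesian product (3 * 3 * 2 = 18).
all_tests = list(product(*params.values()))

def pairs_of(combo):
    # Every (parameter, value) pair the combination exercises.
    return {((names[i], combo[i]), (names[j], combo[j]))
            for i, j in combinations(range(len(combo)), 2)}

# Greedy all-pairs: keep adding the combination that covers the most
# not-yet-covered pairs, until every pair is covered at least once.
uncovered = set().union(*(pairs_of(c) for c in all_tests))
suite = []
while uncovered:
    best = max(all_tests, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(all_tests)} exhaustive combinations, {len(suite)} pairwise tests")
```

The suite shrinks noticeably even for three small parameters, which is exactly the appeal – and the limitation – the list above describes.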
Counting tests is like..
An analogy to testing everything is counting all possible decimal numbers. There is always one more decimal position around the corner. For each number, I can select any amount of new numbers around it. As with tests: I can select any amount of new .. variations .. of a test (browser, time, user, preconditions…). It’s hard to count something that spills into one another, as two tests can cover much the same, but still be two different things in the end.
The classic techniques above are filtering techniques: first to reduce the infinite space of possible tests into something distinct (a “test case”) – where every test is separated from the others (countable). A “rock” in Aaron’s analogy. Secondly, to filter it into something finite, so that it can be completed and answer “when are we done testing“.
Old Cost of Software Testing
The above filtering is done under the premise that every value/test counted has a high price – similarly, that every test has a high cost to prepare and run. The average cost to write a formal test case could easily be 3 hours, plus 1 hour to run/perform for a tester – and perhaps with a 50% rerun/defect rate. So with 100 test cases a simple cost model would be at least 450 hours, or 4.5 hours per test including the 50% rerun.
No wonder management wants to reduce this and race the cost to the lowest denominator. Also considering that this only covers – at best – all the tests once, and half the tests twice. Is that a sufficient safety net to go live on?
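The classic cost model can be sketched in a few lines – the hour figures are the illustrative assumptions from above, not measured data:

```python
def manual_cost(num_tests, prep_h=3, run_h=1, rerun_rate=0.5):
    """Classic model: write each case, run it once, rerun a fraction once."""
    return num_tests * (prep_h + run_h + rerun_rate * run_h)

# 100 * (3 + 1 + 0.5) = 450 hours for 100 manual test cases.
print(manual_cost(100))
```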
A new view on ROI
Current tools and test approaches turn the approach around and focus on making testing faster, more frequent and cheaper (per test). The practices are so prevalent and well described that they really should already be considered general development best practice. (G x P). Consider:
- Read the Accelerate Book
- Many Bits under the Bridge
- Characteristics Change
- Could Modern Testing work in the enterprise
Now, every project will have its own ratio of automation, but for this simple model let’s assume 75% can be automated/tool-supported to such a degree that running them is approximately costless. I.e. it runs as part of the continuous testing in a CI/CD pipeline with no hands or eyeballs on it.
Preparing tests still takes time; let’s assume the same 3 hours. So the 25 tests with no automation still need 112.5 hours – while the automated tests, since running them is free, only account for their 225 hours of preparation. Just this simple model of testing costs reduces the cost of testing by 25% (from 450 to 337.5 hours) – including rerunning 50% of the tests once.
The modern approach is to make the tests plentiful and commoditize them, rather than make them scarce. (See also “Theory of Constraints” wrt. bottlenecks). With the added benefits of CI/CD and a whole team approach to quality – the research of Accelerate confirms the correlation to clear business benefits.
Since running the automated tests is cheap, we can run them “on demand”. Let’s run 25% daily – is this a regression test? Perhaps; it doesn’t really matter. Assuming we run 25% random tests a day for two weeks, i.e. 250 test runs, we have increased both the count of test runs and the number of times each test has run. With this approach our test preparation effort of 225 hours above is now utilized over 250 runs… or under 1 hour of cost per run.
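Extending the same illustrative model with the assumptions above (75% automated, automated runs approximately free, 25% of the suite run daily for two working weeks):

```python
PREP_H, RUN_H, RERUN = 3, 1, 0.5  # same illustrative figures as before

# 25 tests stay manual; 75 are automated and essentially free to run.
manual_hours = 25 * (PREP_H + RUN_H + RERUN * RUN_H)  # 112.5 hours
automated_hours = 75 * PREP_H                         # 225 hours of preparation
total = manual_hours + automated_hours                # 337.5 vs. 450 before

# Run 25% of the suite (25 tests) daily for 10 working days = 250 runs.
runs = 25 * 10
cost_per_run = automated_hours / runs                 # preparation amortized
print(total, round(cost_per_run, 2))
```

The point of the sketch is the amortization: the more often the prepared tests run, the lower the cost per run falls.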
The whole idea is to (re)imagine the testing effort as fast and repeated sampling among all possible values, done multiple times. The more the tests are run the better – and the better the ROI for testing… and, if you dare, an even better performance by the organization.
If the testing activity can be integrated into the coding activity, who tests if there is no code involved? Does there have to be code in order for there to be a test activity – and when does the scale tip for testing to happen?
There is a new type of business application emerging – the “Low Code / No Code” products. The WordPress platform, like the one I’m writing on now, could be one example. AirTable could be another example of a higher-order solution that enables users to quickly, and without code, organize and automate information. What we see in the testing tools space with Cypress and Mabl is a similar trend, where the test cases and scripts are directly linked to the end-2-end business purpose, not the underlying technologies. Low Code tools have emerged as yet another type of “customization” and “configuration” business solution.
The trend is clear and has been on the horizon for a while.
Low-Code/No-Code will disrupt this entire pattern, as enterprises realize they can be even more successful with their digital transformations if they do away with hand-coding altogether, adopting Low-Code/No-Code across their organizations instead. “No-Code is here, and it doesn’t care about making your IT organization more efficient,” explains E. Scott Menter, Chief Strategy Officer at BP Logix. “Its only purpose is to turn your business into a digitally integrated, audit-defying, silo-resistant object of their customers’ desire.”
– The Low-Code/No-Code Movement: More Disruptive Than You Realize
If a consultant can automate her unique process into a tool in hours, she can solve the customer’s problem faster and show the value of her efforts. If a small business owner can build an app for his needs, he can increase business efficiency with automation and save valuable time to expand his business.
– No-code Revolution. Why Now?
To me there is a direct correlation between the amount of code required, the business needs and the end user visibility. The less “scripting” a business end user needs to do, and the fewer “scripting languages” to understand, the better. Airtable, as mentioned above, wins over spreadsheets in the end.
Similarly, the faster cycles and feedback of Low Code tools are more attractive to the business than having “code high” teams develop applications. The “slow and high” code projects are never realized.
One way to see this trends, is that while “Robot Framework” and other web-based open source “RPA like” frameworks exist, the emerging approach for testing standard software solutions trends towards Low Code:
- Automating Complex End-to-End Tests (SalesForce / Testim)
- What to look for in a SAP test automation tool (SAP / LeapWork)
Perhaps RPA tools and similar Low-Code tools can be compared to the macros of If This Then That, where you can automate tedious repetitive tasks – also among your business tools. But even with low-code tools the complexity of the scripts can make a mess, and the visual scripts end up needing coding practices.
Similarly, the need for explicit testing of the business functionality emerges at some point in the evolution of the “low code” solutions. Every solution moves from experiment to emerging practice and ends as a standard/best practice. The explicit testing need emerges along the way, but becomes less visible on the left-side products/commodities.
Yet to me – testing happens everywhere. Testing is key to the experiments of the pioneer, testing is key for the settler bundling solutions and testing is key for the town planner to secure stable operations.
Be aware that while testing is happening, it is not necessarily done by the tester. Don’t hawk the testing activity: let the experts play their part, have testers for the remaining exploration and have tools for the rest. The trend of fewer testers and more testing is still active, and testing is shifting to the future even faster these days. A test happens every time a person doing something thinks and asks questions like: let me try this, could you test this, what happens if?
There doesn’t have to be code for testing to happen.
This post is my current take on using Robotic Process Automation (RPA) tools for automation in testing. RPA tools come in different shapes; some are better at some things than others. Similarly, it has to do with the tool stack of the system under test.
First some definitions/terms.
RPA/RDA definition: I use the original Horses for Sources definitions: “increase productivity, support human operators in completing tasks and activities” (RDA) and “increase process efficiency, reduce manual labor by automating transaction intensive activities” (RPA). In more practical terms, RDA is across the desktop, while RPA is more about background processing – a poor man’s integration between systems.
The class of RPA tools: To me there are at least 4 groups of tools that all claim the RPA label: Test automation (LeapWork, Tricentis), Business Process Optimization (UiPath, BluePrism, Kofax kapow/RPA, Automation Anywhere…), Web only (Testim.io, Cypress.io) and Coding frameworks (Robot Framework, RoboCloud, AppliTools).
System under Test: There is more to a system under test than a web page with backends or an app for a smart phone. Explore the notion of System under Test. You might not have access to the source code – as the test pyramid assumes.
The enterprise challenge: Large businesses and organizations unfortunately struggle, as their IT stack is much more than web only. They want the benefits of continuous testing across their whole technology stack. Existing automation best practices don’t seem to address testing on top of IT systems that are older than SAFe, agile and Test-Driven Development.
Automation in Testing: See the AiT definition and namespace: “using automation to support his testing outside of automated checks“. Use tools and automation to handle all the tedious tasks of your active testing activity. Use automated checks to cover the binary requirement confirmations.
When to use RPA tools
Putting RPA tools in play in test automation does enable digital transformation and acceleration of the test activities. There are many parameters to evaluate regarding when to use RPA tools in a testing context:
Hence, the sweet spot for using RPA tools is as an execution muscle for mainframe solutions, commercial standard applications and legacy systems with inactive or unavailable codebases. The test management system is still key in providing an overview of all testing activities across CI/CD pipelines, RPA and tests based on human evaluation.
If your system under test is web only, you can follow the modern testing principles and build Observability (https://charity.wtf/) and a lot of other things into the code. There are plenty of best practices around CI/CD for web systems. Obviously it depends on how well the knowledge about the system is codified – but you can work on that within your org/team too. It’s more tricky if the source code of your web SUT is not available to you and renders new locators every time you deploy or refresh… for that, consider moving up the stack and using Cypress or Testim.
Remember – there are no silver bullets.
How you treat the bringer of (bad) news tells me a lot about the organisation and its potential for business growth. Go read Accelerate – that book is full of insights. One of the models is the organisational typology from Westrum:
Andy Kelk has a to-the-point description about Westrum on his blog:
To test your organisation, you can run a very simple survey asking the group to rate how well they identify with 6 statements:
- On my team, information is actively sought.
- On my team, failures are learning opportunities, and messengers of them are not punished.
- On my team, responsibilities are shared.
- On my team, cross-functional collaboration is encouraged and rewarded.
- On my team, failure causes enquiry.
- On my team, new ideas are welcomed.
The respondents rate each statement from 1 (strongly disagree) to 7 (strongly agree). By collecting and aggregating the results, you can see where your organisation may be falling short and put actions in place to address those areas. These questions come from peer-reviewed research by Nicole Forsgren.
– https://www.andykelk.net/devops/using-the-westrum-typology-to-measure-culture
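A minimal sketch of such an aggregation – the three respondents’ answers are made-up sample data, six ratings each, and the lowest-scoring statements surface first as the areas to act on:

```python
statements = [
    "Information is actively sought",
    "Messengers are not punished",
    "Responsibilities are shared",
    "Cross-functional collaboration is rewarded",
    "Failure causes enquiry",
    "New ideas are welcomed",
]
# Made-up sample answers from three respondents, each rating 1-7.
responses = [
    [6, 3, 5, 4, 2, 6],
    [5, 2, 6, 5, 3, 7],
    [7, 3, 5, 4, 2, 5],
]
# Average score per statement across all respondents.
averages = [sum(r[i] for r in responses) / len(responses)
            for i in range(len(statements))]
# Print lowest-scoring statements first: these are the gaps to address.
for avg, statement in sorted(zip(averages, statements)):
    print(f"{avg:.1f}  {statement}")
```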
So when a passionate person comes to you with (bad) news, what do you and your organisation do? Do you reflect on, ignore or hide the request? Do you say that it’s not a good idea to bridge the organisation? Do you raise a non-conformity and set in motion events to bring “justice”? Or do you experiment to implement the novel ideas and actively seek information?
FAIL = First Attempt In Learning.
ACCELERATE – The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations
The authors have “multiple examples of applying these practices within mainframe environments, traditional packaged software application delivery teams, and product teams“. It’s not just for business-to-consumer web-based organizations.
The book is a tour de force into software delivery performance – the research and statistics show a clear correlation from DevOps and Lean principles to high-achieving organisations. Every arrow in the model below is backed by research. Read each arrow as “drives”, “improves” or “leads to”. E.g. Continuous Delivery leads to less burnout.
A last thing to highlight: High performing organisations have lower manual work percentages in areas like: configuration management, testing, deployments and in the (ITIL) change approval process.