While my first book was on the more advanced topic of visual test strategies, my next book will be an entry-level guide for people moving into test management and test leadership roles – primarily for those without a background in testing, but also for testers moving into leadership, like Meg below.
@cactusflamingo asks: Folks who moved into test leadership roles – what do you wish you could go back and tell yourself before you started?
I have been playing with the idea of it being a guidebook to these destinations:
The Sea of Testing Know-How
The Community Deep
The Pearly Personalities
The Leadership Lands
Management Meadows
Shallow Shores
Project Roadways
Transition Highways
Agile Release Trains
Artifact Wetlands
Document Pitfalls
The Certification Wastelands
Rivers of Tenacity
And the various tool towns scattered across the land
Which parts of the testing world would you need a guidebook for? Let’s see how it goes – the voice and style will most likely be similar to the first book’s. Currently, the introduction starts with the following verse:
Once upon a time,
our protagonist set out on a quest to lead a testing activity.
While they searched and searched across the world,
they found no clear map to guide their quest.
And there, my friend, is where this story begins.
TL;DR: What I learned from applying the research from the book Accelerate (2018) by Forsgren et al. as a strategy legend on my Wardley Strategy Map. Spoiler alert – it’s more about people than technology.
This, of course, is not a map, as I have not considered the user need; rather, I have given each element a score based on my initial reading of the project. It’s a new project team that hasn’t worked together before, building a new system that, as far as we know, has no existing equivalent – so it’s in the Cynefin Complex domain. The tech stack and testing challenges are well-known and can be mostly automated.
There is probably a natural order to the items above: it’s a new situation (for the customer too) and a release is needed. The delivery needs staffing; staffing needs collaboration and tools. Delivery (releases) needs environments and verification, and verification needs test data. The table is not a strategy either, as I have not yet considered which items to move – but I had an idea.
If we zoom in, we can see the prerequisites for Continuous Delivery to be successful:
Test Automation
Deployment Automation
Trunk-based Development
Shift Left on Security
Loosely Coupled Architecture
Empowered teams
Continuous Integration
Version Control
Test Data Management
Monitoring
Proactive system notifications
These items are just in the order from the book – but I can similarly score our ability and tools. Because of the platform, most are a commodity or best practices: Loosely Coupled Architecture, version control, CI/CD, trunk-based development, and deployment automation. A few require a bit of a push from possibility to active use (proactive system notifications, monitoring). The sore spots are (as detailed in 24 Key Capabilities to Drive Improvement in Software Delivery):
For the development team to own test automation: Test automation is a practice where software tests are run automatically (not manually), continuously throughout the development process. Effective test suites are reliable – that is, tests find real failures and only pass releasable code. Note that developers should be primarily responsible for the creation and maintenance of automated test suites.
Establish test data management: Test data requires careful maintenance, and test data management is becoming an increasingly important part of automated testing. Effective practices include having adequate data to run your test suite, the ability to acquire necessary data on demand, the ability to condition your test data in your pipeline, and the data not limiting the number of tests you can run. We do caution, however, that teams should minimize, whenever possible, the amount of test data needed to run automated tests.
Empowered teams: Research shows that teams that can choose which tools to use do better at continuous delivery and, in turn, drive better software development and delivery performance. No one knows better than practitioners what they need to be effective.
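The scoring exercise described above can be sketched as a small script. All stage assignments here are hypothetical examples for illustration, not the project’s actual assessment:

```python
# Sketch: scoring the Accelerate capabilities on a simple evolution scale,
# loosely following Wardley's stages. The stage values are made-up examples.
STAGES = ["genesis", "custom", "product", "commodity"]

capabilities = {
    "loosely coupled architecture": "commodity",
    "version control": "commodity",
    "continuous integration": "commodity",
    "trunk-based development": "commodity",
    "deployment automation": "commodity",
    "monitoring": "product",
    "proactive system notifications": "product",
    "test automation": "custom",        # sore spot
    "test data management": "genesis",  # sore spot
}

def sore_spots(caps, threshold="product"):
    """Return capabilities that are less evolved than the threshold stage."""
    limit = STAGES.index(threshold)
    return sorted(name for name, stage in caps.items()
                  if STAGES.index(stage) < limit)

print(sore_spots(capabilities))  # ['test automation', 'test data management']
```

The point of the sketch is that once the capabilities are data, the sore spots fall out of the legend automatically – the strategy discussion is about which scores to push, not about the list itself.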
The Test Strategy
Hence my focus as a test manager would obviously be the first two. Let me be the first to push for test automation by the developers (the test domain is mostly API endpoints) and the first to ask for killing the test environment so that we can learn to bring it up again – and again.
But because I care – and because I have read in the book what is a requirement for the listed team capabilities above – I should be mindful to consider also empowering teams and encouraging transformational leadership: Vision, Inspirational Communication, Supportive Leadership, Intellectual Stimulation, and Personal Recognition.
Using the research from Accelerate as a mapping legend for the team’s Continuous Delivery Practices I can see that even with state-of-the-art tooling and a modern high-level stack – it’s a people problem first of all – and thus the test strategy is threefold:
Encourage testing by the developers (evolve test automation)
Prioritize the construction of stubs and drivers for test data (evolve test data)
Encourage collaboration, learning, and a supporting culture (evolve culture)
For reasons of regulations and data restrictions, we were recently doing a cloud project for a large customer of ours. Cloud resources had to be spun up, granted access to, utilized, and taken down (CRUD-like). But we had to do it without granting any of the customer’s employees access to the cloud controls. They couldn’t even have a cloud admin. So, for various organizational reasons, we designed the solution below.
The user would order cloud activities in a “shop”, and the requests were sent first to the customer’s ticketing system, then to our ticketing system, and finally to the cloud pipelines and scripts. And there, my friend, is where the testing started.
Testing Input Sanity
The first script was for initializing the cloud services: Resource Name. First off, I found out the (cloud) developers had already tested the happy paths. I loaded Bugmagnet and tried some funny names, as the requirements for that field hadn’t really been that specific.
I quickly learned that the cloud provider was already sanitizing the input, so no Little Bobby Tables were allowed. It made the customer think, though, and she added a rule for the shop with regard to the resource names. This was coded as a regular expression (start with a capital letter, then some letters, perhaps some numbers) – not that we or the cloud cared for that strict naming scheme; it was more of a business usage rule. Was that a bug? No, mostly the kind of thing that testing finds which isn’t a bug.
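A naming rule like the one described could look something like this. The exact pattern is my guess from the description (capital letter, then letters, perhaps numbers); the real business rule may have differed:

```python
import re

# Hypothetical reconstruction of the shop's resource-name rule:
# one capital letter, then at least one more letter, then optional digits.
RESOURCE_NAME = re.compile(r"^[A-Z][A-Za-z]+[0-9]*$")

def valid_resource_name(name: str) -> bool:
    """True if the name follows the business naming convention."""
    return RESOURCE_NAME.fullmatch(name) is not None

assert valid_resource_name("Billing01")
assert not valid_resource_name("billing01")                      # no capital first
assert not valid_resource_name("Robert'); DROP TABLE users;--")  # Little Bobby Tables
```

Note that this rule sits in the shop, in front of the cloud’s own input sanitizing – a business convention layered on top of the technical constraint.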
I Found a Bug – Can I Go Home Now?
The next request could assign privileges to the cloud resources. Based on the resource name and access level, the cloud resources would be enabled. Among the access level options was “admin” rights. This was directly opposite to the whole purpose of the project, and the customer risked breaching the regulatory controls.
And that was really the only formal bug we logged. And I didn’t go home – I quickly reached out to the team, had the access level parameter removed, and all was good. The intention all along was that only one contributor access level should be granted. That level was then hardcoded in the code repository.
And then we tested further around creating things already created, revoking resources that were not there, and race conditions for revoking the services. The only thing that really gave us a challenge was when a request failed (say, you request something already there). Having the two ticketing systems pick up the response and send the error message back to the users was trickier than expected, as they were more request-focused – but we managed. Without logging a bug, the team worked together and resolved the issues.
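The error-path checks described (create what already exists, revoke what isn’t there) can be sketched against a toy in-memory stand-in for the provisioning service. All names and the response shape are invented for illustration:

```python
# Hypothetical stand-in for the cloud provisioning flow, to show the
# error paths we probed: duplicate creation and revoking absent resources.
class ProvisioningService:
    def __init__(self):
        self.resources = set()

    def create(self, name):
        if name in self.resources:
            return {"status": "error", "message": f"{name} already exists"}
        self.resources.add(name)
        return {"status": "ok"}

    def revoke(self, name):
        if name not in self.resources:
            return {"status": "error", "message": f"{name} not found"}
        self.resources.remove(name)
        return {"status": "ok"}

svc = ProvisioningService()
assert svc.create("Billing01")["status"] == "ok"
assert svc.create("Billing01")["status"] == "error"  # creating what exists
assert svc.revoke("Payroll02")["status"] == "error"  # revoking what isn't there
assert svc.revoke("Billing01")["status"] == "ok"
```

The real challenge, as noted above, was not producing these error responses but routing them back through two request-focused ticketing systems to the user.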
Key Learnings
As always, this is a story retold from actual experience; there are details I can’t share and elements I would have preferred to do differently – but the organizational culture had its own way.
A mindset toward input sanity and access control is key to supporting business goals
Bugs, issues, and observations will be found; they are learnings for the team, not problems
If something as seemingly simple as ordering a chocolate car can be an absurd people problem, no wonder complex IT projects can be a calamity.
Once upon a time, there was a professional chocolate shop. It was an actual chocolate shop, full of products for many needs, and a staffed counter, where you could go and order. One fine day a customer arrived and asked if the shop could create a chocolate car. The shop chocolatier said: “Come by tomorrow and I will have it for you”.
The next day the customer arrived and was given a demo of the product: “Here is your chocolate car”. “Ah, very nice chocolate car. But please could the wheels turn.” The shop chocolatier, a little annoyed, said: “Come by tomorrow and I will have it for you”.
The next day the customer arrived and was given a demo of the product: “Here is your chocolate car, see the wheels turn”. “Ah, very nice chocolate car. But please could the doors open.” The shop chocolatier, annoyed, said: “Come by tomorrow and I will have it for you”.
The next day the customer arrived and was given a demo of the product: “Here is your chocolate car, see the wheels turn, see the doors open”. “Ah, very nice chocolate car. But please could it have a sunroof.” The shop chocolatier, a little more annoyed, said: “Come by tomorrow and I will have it for you”.
The next day the customer arrived and was given a demo of the product: “Here is your chocolate car, see the wheels turn, see the doors open, see the sunroof”. Just as the shop chocolatier was about to get even more annoyed, the customer said: “Thank you, no more updates”. Baffled, the chocolatier forgot all about steering, ABS, and passenger seats, and asked: “How would you like it packaged?” And the customer said: “Ah! No need, I’ll eat it right away.” And so he did. The End.
Despite all the product demos and product development, the customer consumed the product in a heartbeat – or probably two.
The Moral, as There is Always a Moral
The origin of this story is a two-person sketch from my local youth work [link in Danish] in the tradition of Abbott and Costello’s “Who’s on First” – so it’s probably from the 1960s. The sketch is usually acted with elaborate gesturing building up to the absurd punchline on the final day.
I was reminded of it recently – in a project where external reviewers kept coming back for more: more requests for bells and whistles that were not originally stated. While we did share the test approach initially, the feedback from the customer came towards the end of the project delivery.
All the extra effort seems absurd in contrast to communicating goals and collaborating up front. While each party might be trustworthy, it only takes one of them to extend trust and improve the relationship significantly. Extending trust would obviously defuse the absurdity of the exchange, and there would be no sketch. And we need the sketch to provide the backdrop to the moral:
As a customer, state your end goal clearly – don’t be a pushover
As a development team, stop and ask. If it’s outside your usual range, it’s OK to say no.
If something as simple as ordering a chocolate car can be an absurd people problem, no wonder complex IT projects can be a calamity.
I primarily work in situations that are less about application delivery and more about moving whole system stacks, implementing a standard system, or similarly changing the organizational IT landscape. Some would label these “staff projects”, if that helps you. I often find the terms regression test, SIT, and UAT misleading and unhelpful for describing the kind of testing my examples need.
Everything and the kitchen sink
Example 1: We are rebuilding all the environments in the delivery stack: all the Dev, Test, Integration, Preprod, and Prod environments, and their underlying databases, brokers, and websites. Every time we construct an environment, we will test the setup against a baseline version of the existing running systems – a baseline that we know is already functional. The classic software development testing types don’t really help us in this situation, as neither regression test, UAT, nor SIT conveys the things we want to confirm and the learnings we want to explore.
Example 2: We are setting up a new expense platform for employee reimbursements to go live with a new branding of the company. It’s a SaaS system, and we load it every month with data about the organization. So while it’s needed for various purposes, the risk is low and the mean time to repair is similarly low. The testing we will do will be a limited confirmation of an initial data load – snow-plow style – not a full system-integration test or a similar user-acceptance test. After all, this is SaaS, not a custom solution. It’s OK to shift right.
SIT and UAT have become such generic terms that they have lost the strength to convey the needed quality narrative. If you do CI/CD (which you should) for your application development, that should be sufficient. If you figure out that you need a “connection alive” test for the third-party integrations you move from one environment stack to another, that should be accepted with the acknowledgment that you actually considered the challenges ahead.
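A “connection alive” test can be as small as pinging each integration’s health endpoint when an environment stack comes up. This is a minimal sketch; the endpoint names and URLs are placeholders, and in practice they would come from the stack’s configuration:

```python
# Minimal "connection alive" check for third-party integrations.
# Endpoint URLs are placeholders, not real integration endpoints.
import urllib.request

ENDPOINTS = {
    "payment-gateway": "https://example.com/health",
    "erp-integration": "https://example.com/status",
}

def connection_alive(url, timeout=5):
    """True if the endpoint answers with a 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # DNS failure, refused connection, HTTP error, timeout
        return False

for name, url in ENDPOINTS.items():
    print(name, "alive" if connection_alive(url) else "unreachable")
```

The value is not in the sophistication of the check but in running it deliberately for each environment move – which is exactly the acknowledgment of the challenges ahead.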
It’s all about the risks and the mitigations – less about testing everything to the dot. One tool to read the landscape is to communicate curiously about what the stakeholders value more – and value less. And on the other hand, consider the nature of the solution being proposed.
Example 3: Setting up a cloud-based Azure Active Directory – the solution comes with a given security level out of the box (OOTB). As with other OOTB and Software-as-a-Service solutions, you have little impact on the security features of the solution besides some simple configurations. While you might think that all security requirements would require 100% acceptance testing coverage, what you want to accept is that they might be provided “by design” – or by a solution decision made long ago.
I would prefer that we call things what they are and not blindly apply old testing types.
A “testing mindset” of inquiry applies when you are a principal tester or even a senior advisor in testing. Asking open questions and approaching a challenge with exploration proves better than command and control. People are people – be mindful of where they are.
Currently, I’m involved in an extensive change program for a company. It’s not your usual agile feature factory, but more about the technology stack needed to run a 3,000-person company (payments, finances, etc.). While testing professionals could be onboarded, their learning curve would not scale – not even equipped with all the heuristics and testing vocabulary in the world. As it is a one-off, automation wouldn’t scale either.
Fortunately, most applications have a system owner (or product owner, or manager in charge) and usually a team around maintaining the application. To some degree, we could frame these teams as stream-aligned teams. The experts in testing the applications are the superusers – hence they do the testing.
One such team is the XYZ team anchored by a VP-level manager. A colleague and I had been trying to communicate with them to align on the testing needed but had met reluctance. They would not have time to help us. We discussed the matter and realized that we needed to approach them differently. Since the change program was such a fundamental shift for the team, they had already considered the implications. They were already testing and thinking about the impact. We just had to trust them – and build on what they already had instead of imposing specific ways of working.
Without requiring them to earn it first, tell everyone who works for you that you #trust them implicitly. From that point forward, just ask them to never give you reason to withhold it. (In my experience, most won’t).
Communicating curiosity can be a little thing like “Remind me, how does XYZ work?” or simply “sitting on your hands” during meetings that you would previously fill with questions and requests. The tables of the great program test lead are turning; it’s about being an enabling function for others to succeed. Similar to the freedom you should give your kids as they grow up – it’s about leadership.
Recently I had the chance to apply my own templates to myself and my active project, as I had to mentor a new test manager. I was challenged to explain how I read the upcoming IT environment project. After looking into resources for new test leads, I realized I could take my own medicine.
A year ago, I created a new test plan format – the Situational Aware Test Plan. While mind-maps and one-page test plan canvases exist, I wanted to elaborate using the evolution principles from Wardley mapping and stop writing test plan documents.
The table structure is there to provide guard rails for the elaboration. I will use the Darlings, Pets, Cattle, and GUIDs mnemonic as headlines. Our strategic decisions emerge as we use the worksheet based on the current situation and state. The strategies will be the decisions to push a field in the grid to another state.
Delivery and Situation (stages: Darlings, Pets, Cattle, GUIDs)

| Item | State |
| --- | --- |
| New project | Fixed date |
| Existing delivery speed | Scheduled quarterly |
| Test environments, internal | Repeatable |
| Test environments with integrations | Crafted; some existing know-how |
| Environment infrastructure | Hosted data center practices |
| Test data | Known but cumbersome |
While this project introduces new test environments, there is an existing environment with a quarterly delivery pace. This is a classic example of the core chronic conflict of pursuing both responding to the rapidly changing competitive landscape and providing stable, reliable, and secure services (DevOps Handbook introduction, p. xxv), as elaborated in Align your Test Strategy to your Business Strategy.
Besides me and the new test lead, the allocated test team consists of a new junior tester and a senior tester. We are in the same team, and most of us are even in the same office. So collaboration will be close and pervasive, with a focus on helping the new people grow.
The test team (stages: Darlings, Pets, Cattle, GUIDs)

| Item | State |
| --- | --- |
| Test team collaboration | Growing; pervasive |
| Test lead | Growing; mentoring; enabling |
| Domain know-how | Getting there |
Test tools and approach (stages: Darlings, Pets, Cattle, GUIDs)

| Item | State |
| --- | --- |
| Test activity | Explore integrations; confirm internal requirements |
| Test cases | Existing can be updated |
| Test case repository | Create new repository |
As mentioned in the blog post about visualization, we can now use the map to discuss why we need CT and ET for the project. Based on the project’s layout, I would advise expert exploration of the integrations and more standard scripts for the known construction of the internal environments.
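The worksheet idea, pushing a field from one state to another, can be sketched as data plus a list of moves. The stage names follow the mnemonic from the post; the field states and targets here are illustrative assumptions, not the project’s real worksheet:

```python
# Sketch: the Situational Aware Test Plan worksheet as data, with strategy
# expressed as pushing a field toward a more evolved stage. States below
# are example values, not the actual project assessment.
STAGES = ["Darlings", "Pets", "Cattle", "GUIDs"]

worksheet = {
    "Test environments, internal": "Cattle",
    "Test environments with integrations": "Pets",
    "Test data": "Pets",
}

def strategy_moves(sheet, targets):
    """List the pushes needed to take each field to its target stage."""
    moves = []
    for field, target in targets.items():
        current = sheet[field]
        if STAGES.index(current) < STAGES.index(target):
            moves.append(f"{field}: {current} -> {target}")
    return moves

print(strategy_moves(worksheet, {"Test data": "Cattle"}))
# ['Test data: Pets -> Cattle']
```

Expressed this way, the test strategy stops being a document and becomes the short list of moves the team has decided to make.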
The Ministry of Testing Bloggers Club suggested that I write a post based on “In testing, I have changed my mind about ________”. As this blog dates back to 2012 with a consistent stream of articles (220) about testing, and my career in the field dates back to 2002, 20 years of experience should have given me a few things. Testing is still not dead – and it’s still about the context (lower-case context, not CDT).
It’s not about: Testers being the only ones doing Testing
Yeah, not so much these days. Testing is an act that any role can perform in context. It’s about the testing – not so much the testers. And I have realized that even classic test management tasks can be done by someone else. Testing is not owned by the testers – it might be stewarded/facilitated by us, but it’s performed by a team member (who could be a tester).
It’s not about: Perfect Requirements
After decades in IT, it’s clear that requirements are never perfect. When we look closer, we see that business requirements can vary from a profound idea to a rudimentary feature of the system under test. Even in regulated industries, requirements can be anything from a specific configuration in a SaaS system to a loose idea of a relevant dashboard. Sometimes a requirement is met by the design of an underlying commodity product – there doesn’t need to be a test case for everything.
The more rigor you add to requirements management, the more fragile it becomes. It’s key to understand the risks and bets of the person paying for your solution – in that lies the true borders of the delivery. Much can go along informally if it aligns with stakeholder values.
It’s not about: Defects
Back in the day, defects needed to be accounted for, tracked, and distributed. Besides testing documents, defects were the only tangible delivery of the testers. The defects needed to be raised and closed. I recently wrote a guideline stating that only observations that couldn’t be fixed within a day should be raised to the project manager for shared handling. In that context, fixing things happens within the same team. If it’s for another team to fix, defects are simply something communicated between the teams (see Team Topologies for team interactions). Sure, you can still find a blocker or a P1 – what matters is how fast you can fix things.
It’s not about: Month-Long Testing Phases
The more time there is from idea to implementation, the more the requirement risks not addressing an up-to-date business objective. Timing is key. Some tools provide epics and user stories, but the structure is often misused as simple work aggregation rather than goal alignment.
The counter-intuitive trick is not to add formality and more time between releases – but less. Less time between idea and implementation, and less time between implementation and test. Shorter loops between the various forms of feedback add up to better results faster.
It still happens, I’m sure, that a business needs a month-long testing phase before a release, with a range of business staff participating in testing the latest release of the enterprise ERP or CRM. More often, the testing phase is one sprint behind the development activity. I have pondered this a lot.
At best, testing is an integrated activity in the team and in the sprint. But if testing is a more separate activity, it can still be both agile and context-relevant. So I have changed my mind about this anti-pattern.
It’s ok for testing to be in the next sprint –
if that adds consistency and less stress to the team
… So that we can learn to pick ourselves up, Alfred! I was recently reminded of this quote when watching “Batman Begins” with my 16-year-old. I really needed that reminder. Then I read the two blog posts by Beren, “Those who Failed” and “Versus the Endboss”. Let’s put it out there that we fall – and fail to remember that we fall.
I had been part of a large project but had read the culture all wrong, and we had failed hard – for a number of reasons, maybe mostly systemic ones. The team expected one mindset and one way of tooling; we provided another. Even with all my best intentions and know-how of change management, this crashed. As Hannes elegantly put it, we had cycled too far ahead of the team:
Something I learned trying to be a change agent: it is kind of like team cycling. If you want to help your organization forward, going too far ahead of the group will only wear you out. Riding in front of the group absorbing some of the air resistance is much more effective.
The team expected detailed test cases to be approved at all steps, we provided intentions and purpose
The team expected detailed handovers, we worked entrepreneurially to set things in motion
The team expected an error-free lead, we worked knowing I wouldn’t remember everything
At one point, I was arguing that the team needed to read up on Hofstede’s cultural dimensions theory – to understand the different cultures we would be interacting with (our customers). In retrospect, we should have applied it to ourselves first of all.
Hofstede’s cultural dimensions theory
It’s a little more detailed than Westrum – and even Westrum might have helped, that is, if we had been able to articulate the conflict well in advance. Perhaps a senior hire should have spotted the signals beforehand. As an outsider, I relied on people telling me things; I couldn’t hear or see the back-channel communications. This is a struggle for many staff people when switching roles:
People underestimate how much of their success is simply that they’ve been working at the same company for a long time and know the people & culture.
A recipe for success in switching companies as a senior hire is entering with a beginner’s mindset. When you don’t it’s painful. https://t.co/RJceWwKmm5
– innovation is hard without the right support structure and context – execution is hard without being well-versed in the culture – culture often has a steep learning curve https://t.co/AR2LtUpk27
Initially, no one from the operations organization or the latest implementation opted to lead the activity. As we had no playbook or project plan (only the produced artifacts), I made a scrum-board-inspired work tracking system. Perhaps I should have used a Wardley map first of all, as recommended by John Cutler in “TBM 18/52: We Need Someone Who Has Done “It” Before”.
What is Wardley Mapping doing for us here? It is letting us explore a more nuanced view of the problem space. Instead of treating things as one problem, we break the problem apart into a bunch of capabilities. When we do this exercise we typically find:
Not everything is an existing playbook. Not everything is a new playbook.
To solve new problems, we need a foundation of stable playbooks. For example, to solve that crazy new problem, the team might need a foundation of trustworthy data.
Yes, you can break things apart to see them better. But you’re also dealing with the whole thing.
TL;DR: Investing in basic tooling and automation improves your team besides expected metrics.
I work mostly with the implementation of enterprise SaaS systems these days. Large global companies are consolidating custom-built applications and on-premise applications with web-based standard solutions in the cloud aiming for “one standardized source of information to enable digital transformation”.
Yet the testing tooling hasn’t caught up. One company with €5,000 million in sales is still using Word documents for test cases and “partying like it’s 1999”. They are reluctantly considering tooling to support more agile ways of working. The whole “automate the known-knowns” is still pending an evaluation of return on investment (ROI) in technology from 2015. As of writing, Anno Domini 2022.
Assumptions
Writing test cases in documents takes about as long as writing automation
Maintaining automation is a more explicit task, humans can more easily apply a bit of fuzziness
When automation is in place, the execution requires limited efforts to run
The alternative to automated test execution is hours of people following and filling out the documents
With the investment in the tool, there’s a break-even around XX hours of document-based testing a month. That is, if we plan for more than XX hours of document-based testing a month, the investment pays off. Your mileage may vary.
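The break-even reasoning can be made explicit with a toy calculation. All numbers below are made-up examples; the post deliberately leaves the real figures out:

```python
# Toy break-even calculation for a test tooling investment.
# All figures are hypothetical examples, not the real case.
def break_even_hours(tool_cost_per_month, manual_hourly_rate,
                     automation_overhead_hours=0):
    """Monthly hours of document-based testing above which the tool pays off."""
    return tool_cost_per_month / manual_hourly_rate + automation_overhead_hours

hours = break_even_hours(tool_cost_per_month=2000, manual_hourly_rate=100)
print(hours)  # 20.0: above 20 hours/month of manual testing, the tool wins
```

The point of the assumptions above is that the math is simple; what shifts the outcome is how often you end up running the tests once running them is cheap – which the next section covers.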
But there’s more to it
First of all, when automated test execution costs little to run and can run independently at night, you will get the same effects that Continuous Integration and nightly builds have had in software development: you tend to run the tests more and more often.
This enables faster feedback both with regard to confirming new features and adds up to more effective regression testing. I have seen this happen both in custom application development and in the configuration of web-based standard solutions. In one project where I added automation, we have run nearly 8,000 automated runs in a year (and 200 SME-based). We actually run the tests more often, and we cover the important things every day – and everything often enough. We do in fact get more testing, and broader coverage, than any document-supported testing could ever scale to.
Believe the experts
While there is some vendor bias in the following two webinars, the story is the same: test automation can accelerate IT deliveries:
Alternatively, look into the research from Accelerate – and the DevOps handbooks. The ripple effects of automated test execution are plenty and go beyond the math of the testing effort. One thing to keep in mind is that test automation itself is not enough; first of all, you need transformational leadership.