In Charge of Testing

As a Test Manager I oversee the testing in a project or program. I am usually the only testing specialist in the project, so I need the right leadership skills and the right tools to succeed. I have to own the data about the testing and quality activities.

As the test manager I need to facilitate quite a range of testing activities.

I need to balance knowing what’s going on (with regard to testing) against micromanaging the people involved in testing and quality activities. My role is to facilitate that testing things happen – like the project manager makes project things happen. I cannot own the activities without owning the data about them. I need to cover the full spectrum of tests – from engineered (RDA and CI/CD) to people-based (scripted and exploratory).

The most practical tool for a test manager with this scope is PractiTest, as there is more to testing than just the test cases [2]. The old term “ALM” [3] comes to mind – it is still relevant when I look for a full test management tool. I need to cover both the “inputs” to testing (requirements, tickets and user stories) and the “outputs” (bugs) in one location. I need the requirements and user stories in my tool, as I need to base my test analysis and planning on the delivery model (which may not always be agile). I need the bugs in the testing tool too, as bugs can happen in any work product of the project: documents, the code base and even the tests. PractiTest acknowledges that there is more to IT projects than code.

I appreciate the key driver of PractiTest – that all activities happen in-flow. You don’t have to change windows, stack pop-ups or go to another tool in order to run the tests or create bugs. Creating a bug happens in the context of the test case and seamlessly moves all data about the run to the bug. Everything you need to do is context-based and available to you on screen. It also has some cool features: read-only links to graphs for management reporting, and a smart built-in “rapid reporter” for exploratory testing notes.

It can be a challenge to switch to PractiTest if you are in a compliance setting, if you need on-premise hosting, or if your team generally uses Azure DevOps (the tool formerly known as TFS). To get the full potential of Azure DevOps, though, you need the full Microsoft Test Pro licenses, so it’s not a free tool either – nor is Azure DevOps intuitive for testing things that don’t have code available. Like Azure DevOps, PractiTest is SaaS-only, with multiple data centers for regional data compliance. As there is always inertia towards a commodity, it won’t be long before there are no good arguments left for having test management tools on-premise, and the tool vendors will provide the compliance certificates (ISO/SOC really should be sufficient, IMO).

Out of the box PractiTest supports the categories of testing above (engineered, scripted, exploratory) and has the necessary integrations too: Surefire for unit testing, Maven for CI/CD, and Jira, ServiceNow or any other ITSM for requirement input. There is even a two-way integration to Azure DevOps. As the web design is responsive, it could probably run off a tablet, which would enable easier test documentation for field tests. It would be even better to have a small version of it on a phone and be able to use the camera for “screenshots”.

At work I am currently running a large project customizing and implementing a standard commercial software system. PractiTest would fit right in, as we have the following test activities:

  • Unit testing by the developers
  • Test automation by test engineers
  • Exploratory testing by subject matter experts
  • Formal scripted testing with end users

And I need to own the data around all of this if I want to be in charge of the testing (and not only the testers). We are very few software testing specialists on the project team, but as the manager of testing I need to cover the many other people performing the testing. This transforms my role from test management to one of leadership, coaching, and facilitation of testing performed by the SMEs – and anyone else really.

I will be talking about Leading When the Subject Matter Experts Test at ConTEST NYC 2019. Until then, read more about leadership:

  1. Anthropologists and people with similar humanities educations can be great BAs.
  2. Looking at you, TestRail 😉
  3. ALM = Application Lifecycle Management, like Micro Focus Quality Center etc.

Disclaimer: This is an influencer review sponsored by PractiTest.


Visual Tests are Still Code

Among the currently shiny new test automation things are visual “script-less” test automation tools. But the visual test flows are still code – and thus require discipline to structure and maintain. Otherwise you are just adding yet another layer of spaghetti code.

Among the current shiny new test automation tools are visual “script-less” automation tools like LeapWork [9], Blue Prism [10] and UiPath [7]. These tools are part of a new class of business process automation tools called “Robotic Process Automation” (RPA) [4]. There are two subtypes: “RPA”, which focuses on processing data, and Robotic Desktop Automation (RDA).

RDA is interesting in the context of test automation [9], as these tools can automate GUI interactions – also on top of enterprise package applications (SaaS, COTS, OOTB etc. [2]). The test automation challenge for most of these enterprise applications (SAP, MS Dynamics [6] etc.) is that they come with no access to the code base; even when they are pure-play web-based, the GUI is all there is.

All you can usually do to these types of business solutions is add customizations and configurations by entering or editing data directly through the GUI. Some of these systems allow configuration and customization in the form of config files – these really should be under change control [3], as they are part of the pipeline.

Visual tests are code – “part of the ship, part of the crew” (Bootstrap Bill Turner).

Using RDA tools for test automation [9] is a novel [1], uncharted approach [12]. The editing of the “tests”/flows is usually done in a stand-alone application studio (a graphical IDE) with interactions to the solution under test (across the GUI, and over Citrix and RDP) and to any test management and issue tracking system.

Interestingly, the other, more “data processing” RPA tools like Automation Anywhere [5] use a VBScript-like syntax. Writing and maintaining “scripts” like that is quite like the common approaches to GUI automation using frameworks like Sikuli, TagUI and Applitools [11].

Applitools etc. are coding frameworks you can apply if you have the application code base or want to write test automation directly as code. There could be benefits in coding UI testing directly with Selenium and Applitools in all web-only projects. Most enterprise business solutions, however, are stand-alone applications, or their web code is horrible to hook into, as the selectors often seem randomly generated (been-there-done-that).
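To make the selector problem concrete, here is a minimal sketch in Python with Selenium – the page, URL and element attributes are all hypothetical. The point is that generated IDs rot with every new build of the enterprise application, while an agreed-upon stable attribute makes hooking into the web GUI feasible:

```python
# Minimal sketch (Python, Selenium 4); page and selectors are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical enterprise web GUI

# Brittle: an auto-generated id like this changes with every re-render/build.
# driver.find_element(By.ID, "ctl00_Main_ctl47_txt8323")

# More robust: target a stable, semantic attribute instead.
user_field = driver.find_element(By.CSS_SELECTOR, "input[name='username']")
user_field.send_keys("test-user")

driver.quit()
```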

Hence the primary driver for RDA adoption for test automation is to take the RDA & RPA [4] tools and apply their strength – process automation of enterprise business solutions [2] – to drive the test execution. A business flow could be “automating” activities during onboarding [7] or an SAP purchase order, as in the images below:

Another key driver for adopting RPA for test automation is the visual approach of presenting interactions/tests as flows. Some do it gracefully and user-friendly (LeapWork) – others have a more old-school workflow/swim-lane approach (Blue Prism, UiPath). In both cases the visual flows illustrate an interaction across multiple GUI applications to perform business actions (yes, this still happens).

These drivers probably make the barrier to entry seem more manageable. But the visual flows easily turn into visual spaghetti code if you don’t keep an eye on them and use sub-flows, low coupling and high cohesion [13] – as with any other non-trivial code (of a certain McCabe complexity [14]). One interesting way to approach a “coding” practice for visual test cases could be inspired by how BDD can be implemented in LeapWork [8], with annotations and self-referencing unit tests.
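The sub-flow discipline is easiest to see in plain code. A hedged sketch, with hypothetical flow names: each function is one cohesive sub-flow (high cohesion) that communicates only through parameters and return values (low coupling), so the top-level test reads like the visual flow diagram:

```python
# Sketch of the sub-flow discipline in plain Python; all names hypothetical.

def log_in(session: dict, user: str) -> None:
    # One cohesive sub-flow: authentication only.
    session["user"] = user

def create_purchase_order(session: dict, items: list) -> str:
    # Returns an order id; knows nothing about login or verification.
    order_id = f"PO-{len(session.setdefault('orders', [])) + 1}"
    session["orders"].append((order_id, items))
    return order_id

def verify_order_listed(session: dict, order_id: str) -> bool:
    # Checks that the order shows up in the overview "screen".
    return any(oid == order_id for oid, _ in session.get("orders", []))

def test_purchase_order_flow():
    session = {}  # stand-in for a real GUI/driver session
    log_in(session, "sap-test-user")
    order_id = create_purchase_order(session, ["item-4711"])
    assert verify_order_listed(session, order_id)
```

A monolithic flow that logs in, orders and verifies in one long sequence is the visual-spaghetti equivalent; splitting it keeps each piece replaceable when the GUI changes.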

At the end of the day even a visual test automation project is a coding project that should be part of the project code base like everything else [3]. It is probably best maintained by software engineers within the project team (where possible) – unless you want a team of test engineers spending all day playing catch-up to maintain the automation code.

  1. Since 2017’ish.
  2. COTS/OOTB = commercial off-the-shelf / out of the box
  3. https://twitter.com/mipsytipsy/status/1146968926493929472
  4. https://www.horsesforsources.com/2019_RTS_survey_070619
  5. https://www.linkedin.com/pulse/automation-anywhere-example-neil-kolban/
  6. https://www.leapwork.com/blog/automate-testing-microsoft-dynamics-365-crm
  7. https://www.uipath.com/blog/how-rpa-can-help-companies-rethink-hr-tasks
  8. https://www.capgemini.dk/bdd-in-leapwork/#tab5
  9. https://dojo.ministryoftesting.com/dojo/lessons/rpa-as-a-power-tool-for-testing  
  10. https://crunchytechbytz.wordpress.com/2018/03/13/automation-with-blue-prism/
  11. https://applitools.com/features
  12. https://jlottosen.wordpress.com/2019/04/20/broaden-the-scope-of-sut/
  13. https://medium.com/clarityhub/low-coupling-high-cohesion-3610e35ac4a6
  14. https://en.wikipedia.org/wiki/Cyclomatic_complexity

A Ratio between Tests

During one of my recent projects I was considering the ratio between the checks and the tests – that is, the ratio between those tests that are primarily simple binary confirmations and those that are more tacit questions. This blog post is about my considerations on the idea/experiment/model.

First I observed that we have a range of different items in our requirements – some of them are [actual copies from the current specification]:

Binary Confirmations

  • It must be possible to add a customer ticket reference
  • It must be possible to copy the ticket number

Tacit Questions

  • You must be able to navigate displayed lists easily
  • It must be easy to compare work log and audit log

You could argue that they need refinement, more testability and less “easy”. But this is what we have to work with for now. Even if we had all the time in the world (we don’t), we would not be able to write all of the requirements in a perfect form (if such a form exists).

As the system under test is a commercial standard system, some of the requirements are even given as “out of the box”; we will probably not test those explicitly. Our coverage criterion is not ALL OF THEM.

Ordering the tests

It is a deliberate experiment on my side to divide the requirements (and hence the tests) into piles of closed and open questions. Perhaps there are even three piles – Rapid Software Testing has human checking, machine checking and human/machine checking; Wardley has Pioneers, Settlers and Town Planners. Perhaps the Rule of Three applies here too… perhaps it’s a continuum… let’s see.

Perhaps it’s a continuum

As part of the requirement workshops I will label the requirements and align with the stakeholders to get the expectations right – with the help of a few friends. This is a context/project-based “operationalization”.

I wrote about this ratio in my blog post on the Test Automation Pyramid, as I will use the labels to automate the confirmations (and only the confirmations). The assumption is that significantly more of the binary requirements will be tested by machine checking, and more of the tacit questions by humans. Getting all the tedious tasks automated – that is really the end goal.
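As a sketch of the labelling experiment (the labels and data structure are my own; the requirement texts are from the lists above): tag each requirement as binary or tacit, hand only the binary ones to machine checking, and the ratio falls out for free:

```python
# Sketch of the labelling experiment; labels and structure are my own.
from collections import Counter

requirements = [
    ("Add a customer ticket reference", "binary"),
    ("Copy the ticket number", "binary"),
    ("Navigate displayed lists easily", "tacit"),
    ("Compare work log and audit log", "tacit"),
]

# Automate the confirmations (and only the confirmations).
to_automate = [text for text, label in requirements if label == "binary"]

counts = Counter(label for _, label in requirements)
print(f"machine checks: {to_automate}")
print(f"binary:tacit ratio = {counts['binary']}:{counts['tacit']}")
```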

Automate all the things that should be automated

Alan Page

Every project/context will have its own ratio, depending on a range of factors. Saying there should always be more of one type than the other would not hold. As the above project is the configuration and implementation of a standard commercial business software package (like SAP, Salesforce, etc.), my expectation is that most of the requirements are binary – also considering that this project sits heavily on the right-hand side of the Wardley Map scale of evolution.

It’s a Reduction in Context

I am well aware that the two/three piles are an approximation/reduction, especially when looking at the “binary” requirements and “only” testing these by confirmation. They could just as easily be explored to find further unknown unknowns, if we prioritize doing so – it is all about our choice of risk.

It is also a limitation, as “perfect testing” should consist of both testing and checking. I factor this into the test strategy by insisting that all of the requirements are tested both explicitly and implicitly. First of all, most of our binary requirements concern the configuration and customization of the out-of-the-box software solution. So when the subject matter experts are performing the testing of the business flows, they are also evaluating the configuration and customization. And I do want them to spot when something is odd.

The binary configuration is ok, but human know-how tells us otherwise.

Ultimately I want to use the experts to do the thinking and the machines to do both the confirmations and the tedious tasks.

Expectations around Testing

I usually mention that the work I do as a test manager is more about managing the testing activity than managing testing specialists. “Managing the testing activity” to me is about:

  • Identifying what the expectations are around the testing activities
  • Facilitating the performance/execution of the testing activities
  • Administrating and documenting the testing activity
  • Making the people doing the testing self-reliant

… in Context

The project context is the most important frame: it is all about the project’s story, risk profile, culture, traditions, deadline, budget etc. I am as context-driven as contexts allow, in the classical “Seven Basic Principles of the Context-Driven School” sense*.

As I am motivated by finding solutions and making them work** my drive is more along the lines of “accelerate the achievement of shippable quality” [Modern Testing Mission] than “finding the problems that threaten the value of the product” [Rapid Software Testing].

Focusing on achievements over problems seems to work for me in the contexts I’m in: enterprise transition, infrastructure projects and the implementation of commercial standard systems.

Setting a Frame for Expectations

Finding the “test solution” (or test strategy) that fits the project context is the key activity to me. The rest of it is mostly about implementation – that can be quite interesting too, but plan first!

First of all we have to realize that the testing activities we choose are limited and affected by our context (and biases). We can never test everything, nor think of everything to test. Based on the context restrictions (time, space, money, etc.) the project gives me, I make a reduction of the testing theories and principles into a definition along the lines of:

In a specific context – testing will be a finite activity, to investigate if the shared interpretations of the requirements are implemented – at some time, for some configuration, evaluated by someone (that we trust), where nothing odd happens.

A reduction of the testing activity

Let me be the first to say: it’s not theoretically perfect! But it’s practical and based on context. The reduction gives me an achievable, goal-oriented focus. It helps me iron out the relation between the thing under test and someone to whom it matters.

Ironing out the Expectations

If there is an underlying risk that things will change a lot, we can argue for test automation to multiply the configurations and the number of “runs” we can complete (as sketched below). Not all IT projects are about software development, so test automation practices and tooling might not be in place.
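As a sketch of how automation multiplies configurations and runs (pytest assumed; the configurations and the helper are hypothetical): one parametrized check executes once per combination on every pipeline run:

```python
# Sketch: parametrization multiplies configurations x runs (pytest assumed).
import pytest

BROWSERS = ["chrome", "firefox", "edge"]
LOCALES = ["da-DK", "en-US"]

def add_ticket_reference(browser: str, locale: str) -> bool:
    # Stand-in for driving the real GUI; always passes in this sketch.
    return True

@pytest.mark.parametrize("browser", BROWSERS)
@pytest.mark.parametrize("locale", LOCALES)
def test_add_ticket_reference(browser, locale):
    # One automated check, executed 3 x 2 = 6 times per pipeline run.
    assert add_ticket_reference(browser, locale)
```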

We can ask open questions to explore the boundaries of the shared understanding. We can discuss how much total test coverage is needed here. We can challenge the requests for the kitchen sink – but also direct the testing to what matters. I have found that it is better to slowly impact the projects with questions from within, as discussed on the Guilty Tester Podcast, than to break down traditions up front. We can look into who is doing the investigation and how much we trust them.

To make the agreements around the reduction of the testing activity explicit, I establish a “Test Plan” document. I would often prefer to do without, and have a mutual team agreement – or even a mind map. But I know the enterprise contexts too well: shared expectations are best written down (even though a document is imperfect as well).

It’s all about the context and the expectations, really.

The expectations were that we could snorkel…

*: Even “CDT” is a context/model, and thus is flawed. One of the flaws of the model is that all test approaches are equally valid (as long as they add value to someone who matters), and thus that no approach is ever better than any other. Not even CDT…

**: See: Innovation in Testing, Less Software more Testing.

Innovation in Testing

Let’s look at testing and test management as something you can build expertise in; thus it can be placed in various places on a Wardley Map. Similarly, innovation activities in the field of testing can be modelled by “Pioneers, Settlers, Town Planners” [also originally swardley; article by Itamar Goldminz].

The model has three types of talent: those that experiment, those that build products and those that optimize the products/commodity. In short:

Simple illustration of the Pioneers, Settlers & Town Planners Model.

Each group innovates, but there is also a built-in drive from experiment to product, to optimal commodity, and back again as components to experiment on. As stated in the original article (from 2015), all three kinds are brilliant people. We can relate the model both to what value the customer looks for and to what kind of activity the organization strives for. We can apply it to the broader testing field, as not all testing is pure-play experiments and not all testing is a commodity.

PST by @swardley

Examples of Pioneer experiments could be all the fuss around RPA, AI and ML – and square lashings on the System Under Test – on the technology side. On the practice side, it could be the emerging practices for how to test in the space of infrastructure or IT service transition. It’s the “Pippi Longstocking” approach: “I have never done that before, so I probably can.”

The settler activities are all about taking the emerging practices, maturing them and making them repeatable – shortcutting the time to learn something or to repeat some novel practice in a new setting. Some examples could be: A Practical Guide to Testing in DevOps, and the shifts of testing (at their time of writing), as ways to codify emerging practices.

Example: In 2018 I did management of testing of a large enterprise IT transition of 700 servers, running 100+ applications – it was a novel first time, so we put together some testing practices that seemed to work (for that context). In 2019 I’m doing a similar transition of similar size, where we try to repeat the practices and approaches.

The brilliant quest of the settlers is to take ideas and build innovative, established solutions for the broader audience. Most settlers are probably framework (and content) creators… not framework maintainers.

As soon as a practice has been established, it’s up to the Town Planners to maintain and optimize it. To me, examples in this space include:

  • Using Selenium to test web applications
  • Using BDD/Gherkin for collaboration
  • Using agile practices and embedding testing in the agile teams
  • … following the ISTQB cookbook

You might find it harsh that I group all of those practices together. To me, they are so established by now that they can be purchased. It’s a commodity market, and it’s frowned upon if you don’t use them. But still – innovation happens, and town planners do a brilliant job. It’s about faster, better, smarter – and especially about building more effective teams.

The Town Planners also build the components that the Pioneers stand upon for their next novel idea. One example: to offer code-less testing of web applications, an RDA tool utilizes the Selenium framework.

Broaden the scope of the SUT

When testers talk about the SUT (System Under Test), there seems to be an implied context of it being software – developed, bespoke software, to be specific. Let me broaden the notion of an SUT using Wardley Maps, and with that illustrate how testing can add value across the board.

Bespoke software (aka custom-built) is where the solution (SUT) is built and maintained tailor-made for a specific company by a specific team that answers to the order giver. When you build and maintain an app or web site for a company and are embedded in the team delivering the code base, it’s usually a bespoke context.

Experiment/emerging example: build an internal web site to do some simple public service case management, based on Microsoft .NET and IIS technology. The solution is new and novel, so interaction with the user is important.

All the commotion about building MVP experiments and interacting with the Product Owner are the usual symptoms of a genesis situation. As the processes mature and products emerge, solution development becomes more of a customization activity.

Customization example: implement Dynamics 365, Sitecore, Salesforce etc. – but tailor and code them to your specific purpose. I have worked on a project taking Dynamics 365 and creating custom forms to handle public sector health events.

The last class of software “development” is the pure-play configuration of standard solutions. This is the context of SaaS – pay the license and get started. Think SAP or office applications, anything that is so accepted that it’s almost free (OpenOffice) and kills the IT department.

Let me draw this on the evolution axis of a Wardley Map:

Similarly, we can add the underlying infrastructure to the drawing. As solutions move to the cloud and infrastructure becomes code, the system under test could very well be the code around the infrastructure: initially bespoke infrastructure experiments (in Perl?), and as time moves on, even infrastructure becomes a commodity in the form of Amazon S3.

So where is your SUT – and what is its path down the stack? There is a huge difference between testing custom code for cloud services and testing product customizations on actual, physically owned hardware.

Let’s think about testing outside the bespoke areas on the map too. Some current examples I am working on are:

  • Infrastructure transition of 700 servers from being owned to being hosted
  • Application transition of 50+ applications from being owned to being outsourced
  • Transformation of a standard form management solution
  • Implementing a standard system for ITIL case management

These projects have no code; the SUT is a server, an environment (a collection of servers), a form, a process or something else. While we know a lot about testing in bespoke software contexts, the practices for testing in transition and transformation are emerging practices! This gives us an extra layer. And this is where it gets interesting.

There are plenty of standard practices (SAFe, agile…), but the practices for testing in the context of transition are yet to materialize.

The same model can be applied to IT as a whole. IT support and end-user computing (devices, desktop operation) are to the very far right as commodity services, while on the far left is the constant experimentation and tinkering (with AI, ML and RPA) to become actual products.

If we only see testing as part of building bespoke software, we fail – we fail to see the horizontal and vertical contexts where the testing disciplines can add similar value and impact.

Career paths for testing specialists

I believe it is possible to have challenging opportunities and career advancement within the broad field of testing – for all kinds of people and backgrounds.

I’m probably both spoiled and privileged – see the context for the following model at the bottom. It is a model for career paths that is in active use as of writing. Some might consider it old-fashioned or limited, but I do hope that you can learn a bit from it with regards to defining career paths for testing specialists.

Let’s look at the following titles:

  • Tester
    • prepares test cases in a test case management tool
    • performs the testing activity
  • Technical Test Analyst
    • prepares and initiates engineered tests
  • Test Manager
    • prepares test plans and test strategies
    • leads the testing activity

You might have other titles at your place – the point is to identify the titles and not take the work areas too literally. On smaller delivery teams the testing specialist is the analyst, the test manager and the one performing the tests, all in one. On larger projects there may be more testing professionals with more defined roles/titles. On other projects the test manager’s job is more about leading SMEs in testing (and less about leading testers).

Notice that the test manager manages the testing; she is not a people/line manager of the testing specialists. All the testing professionals may have the same manager or be distributed into the delivery teams. That usually depends on whether the company’s focus is on consultancy/projects or on products/deliveries.

There are strong trends towards test engineering being a developer’s discipline and management of testing being more about coaching. Still, some development organisations thrive on having exploratory testers on all teams, where they dig into the specific domain and application. The field is not all dead yet, and many organisations will have the above titles represented for years.

Based on the titles above you can identify the work usually being done, but not the skill level or span of control. This is where we usually add (promotion) levels like:

  • Junior
  • Advanced/Associate
  • Senior
  • Principal

Do use the promotion levels that your company uses for developers (and similar titles) and other roles! If you have ninja developers as a formal promotion level above lead developers, then by all means add that to your testing titles as well. Do insist and argue! If you fail, move away and let that company deal with not wanting to improve their people (having the option to turn away can be a privilege too).

The levels of formal training could follow the levels of promotion. ISTQB training is one approach that is similarly scaled. That can be helpful if your organisation has a quest for certificates (for some business reason). Certificates, though, are just a race to the bottom.

The advancement from one level to the next could be based on the independence of the person. A junior level is an entry level and will usually require that the person tries things out and has a skills mentor. Advanced and associate levels apply the know-how consistently on one project/delivery team.

The higher the level, the more teams the person can apply the knowledge to at the same time (span of control; see also the law of raspberry jam). Alternatively, it could be the ability to generalize practices learned in one project and apply them to a project/delivery team that works in a different way. Senior and principal roles lean more towards strategic work or work as an adviser: they could advise on bids and tenders for new projects, or act more as a test architect working on implementing principles for the testing activities.

Context: I work in the Danish IT outsourcing sector in a global IT company of 3000+ people. The software testing team working across projects is 30+ people (globally). The title “promotions” are used consistently in the company for various job titles: developers, project managers etc. I have applied a similar model for 20+ testing people at other outsourcing companies, and the job titles are consistent with similar software development companies in Denmark.

Denmark is a country where trust in others is very high, where wealth equality is high (Gini) and where women have privileges equal to men. In the company 30% are female, with many female “middle managers” and women high in the technical hierarchy.