When testing web features doesn’t matter

(aka not every testing problem can be solved by a webdriver)

Web features don’t matter as much in the contexts I usually work in. While some solutions may be delivered over the web, the focus for testing is on the whole system’s fitness for the business. Adding automation to the mix brings additional challenges, as there is no source code in the solution to interact with, and we have to find other ways to take the tedium out of testing.

One area where this is the case is when implementing commercial off-the-shelf software packages (COTS) for the enterprise or public sector: solutions like SAP for retail CRM and ERP, Microsoft Dynamics 365 for Finance and Operations, Epic for hospitals, ServiceNow for IT service management, etc. These are standard solutions that can be configured and customized, but the underlying source code is not available.

Thus the “test automation pyramid” falls short in helping us automate, as only the GUI is available for interacting with the solution. Test engineers might want to set up CI/CD, but the success of that depends on the system architecture and the vendor’s as-is methods of releasing updates. Some of the solutions above are provided as SaaS, but quite often they still run “on premise”, and the business still wants releases tested before launching things on a corporate scale.

Screenshot of an RPA test on SAP
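To make this concrete, here is a minimal sketch of what tool-supported testing can look like when the GUI is the only available hook – in Python with pywinauto, driving a Windows desktop client. The window title, control IDs and status text are hypothetical, not taken from any actual SAP installation:

```python
# Minimal sketch: automating a desktop client through the GUI layer only,
# because there is no source code or API to hook into.
# All titles, automation IDs and texts below are hypothetical.
from pywinauto.application import Application

# Attach to an already running desktop client (the title is an assumption)
app = Application(backend="uia").connect(title_re=".*Order Entry.*")
main = app.window(title_re=".*Order Entry.*")

# Drive the screen exactly the way a business user would
main.child_window(auto_id="CustomerId", control_type="Edit").type_keys("10042")
main.child_window(auto_id="CreateOrder", control_type="Button").click_input()

# The only "test hook" we have is what is visible on the screen
status = main.child_window(auto_id="StatusBar", control_type="StatusBar")
assert "Order created" in status.window_text()
```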

Another example is the many bespoke software solutions still in operation. My local electronics store has two interfaces for the salespeople: a web one to look things up (specs and availability) and a mainframe system to set up the actual purchase (point of sale, POS). Many public organisations and enterprises are only now transitioning from the desktop applications of the 1990s to more up-to-date solutions. Unfortunately, systems that are 10+ years old have very little in the way of live and relevant specifications, and no CI/CD suites either.

While some COTS and POS solutions are delivered over the web, testing web particularities is the very least of our focus areas. The web particularities seem to matter more when the solution is business-to-consumer, but not so much when it’s business-to-employee or business-to-business.

In a business-to-employee or business-to-business context, usually only one browser is in scope. And that’s it. There is little interest in HTTP status codes, broken links, browser compatibility or login forms.

The primary testing challenges of these projects cannot be solved by Selenium, Cypress or neat tweaks in the latest JavaScript library.

Focusing only on testing the web in contexts like these, we fail at:

  • covering the whole system landscape across applications of different technologies
  • addressing the real questions of the subject matter experts and the business

It’s in this context that RPA probably has some benefits, providing automation of tedious testing tasks for the tester with a business background. That is, they are business people first – and then they do the testing that matters to the context.

Don’t request the kitchen sink

More and more often I see outsourcing contracts that request 10-15 test phases. It looks like someone has simply thrown the book at it, without considering whether it is an infrastructure project, a software development project or a COTS implementation – or what on earth they actually want to learn from the testing.

These are the phases of a recent project:

  • Environment Acceptance testing
  • Hardware and integration testing
  • Component testing
  • Component integration testing
  • Installation test
  • System testing
  • Functional testing
  • Regression testing
  • Security testing
  • Performance testing
  • Operational acceptance testing
  • Service Level testing

It’s a challenge in the vendor reply. The vendor will want to respond to all test phases in order to be compliant with the tender and not lose points. There is no room for elaboration or discussion if you want in on the bid process.

Quite often the requester comes back and says “we didn’t want all those weird testing things, we just want something that works for us”. But once the contract is signed and the work set in motion, the project team has the challenge of making the testing practical within the framework of the contract. This goes for both sides. Many good hours can be wasted unwinding cumbersome contractual terms.

What I usually do in such a situation is to bundle the contract’s testing scope into fewer activities, and set up a mapping so that everything is covered. That is, if the client allows me to make the activities practical and context-driven. If not, my hands are tied, and we deliver according to spec – even if the chapters of the test plans are set in stone.
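As an illustration, such a mapping can be as simple as the sketch below. The three bundled activity names are my own invention; the phases are the ones from the tender list above:

```python
# Hypothetical bundling of twelve contractual test phases into three
# practical activities, with a mapping so the tender is still covered.
ACTIVITY_MAP = {
    "System & integration test": [
        "Hardware and integration testing",
        "Component testing",
        "Component integration testing",
        "Installation test",
        "System testing",
        "Functional testing",
        "Regression testing",
    ],
    "Non-functional test": [
        "Security testing",
        "Performance testing",
    ],
    "Acceptance test": [
        "Environment Acceptance testing",
        "Operational acceptance testing",
        "Service Level testing",
    ],
}

# Completeness check: every phase in the contract maps to exactly one activity
covered = [phase for phases in ACTIVITY_MAP.values() for phase in phases]
assert len(covered) == len(set(covered)) == 12, "all 12 phases covered once"
```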

Let’s work towards better deals for testing activities. If you are preparing a bid, include a test manager – and have a discussion about the value and learning from testing up front. There is no one book on how to do testing. Instead, spend the time and money figuring out your context. Figure out which phases are on the client side and which are on the vendor side. Have a test management consultant on retainer before and after the bid process. Do something to discuss your test strategy and put the guidelines in the contract, so that the vendors can propose a solution.

Don’t request everything and the kitchen sink too

Test ALL the things

TL;DR: We can add testing to all requirements and all business risks. Testing to document requirements and to debunk risks provides valuable information for the business. Let us not limit testing to things that can be coded. The intellectual activity of trial and learning happens anyway; we might as well pitch in with ways to find important evidence for the decision makers.

Test all the requirements

Traditionally, testing was all about testing the functional requirements that could be coded. Non-functional requirements were left to the specialists, or plainly disregarded. I know I have done my share of test planning with a range of requirements left “N/A” with regard to testing. Especially performance scope, batch jobs, hardware specs, database table expansions and virus scanning have been left out of my functional test plans…

When I look at a list of requirements now, I see that we can indeed test all the things – or at least work on how to document that each requirement is fulfilled. Some requirements are actually quite easy to document: if it’s on a screen somewhere, take a screenshot and attach it to a simple test case. Done deal, really. Additionally, with a testing mindset I can think of ways to challenge the details. But do we really, really need to fill up a disk to establish whether it’s exactly a 1 GB allocation? Probably not. Do we really, really need to document all requirements? Yes – in some contracts and contexts it’s important for the customer to know that everything has indeed been established. Sometimes the customer doesn’t trust you otherwise; sometimes the tests are more about your ability to deliver and to provide evidence that matters.

Test all the business risks

Look into the business case of your project and find the business risks. Sometimes they are explicitly stated and prioritized. At a recent Ministry of Testing meetup we looked into a case for a large national health system. We looked at the tangible benefits, the intangible benefits and the scored business risks. What worried the business and management most was budget, time and whether the new system would be used in a standardized way. There is an opportunity for testing here to help address, document and challenge the most important business risks. Traditional testing would usually look at functional requirements that can be coded or configured, and totally miss addressing what matters most to the business.

OK, how do we test the project costs? How about frequent checkpoints of expected spending – would that be similar to tracking test progress? Perhaps – let’s find out. Testing is all about asking questions on behalf of the stakeholders and solving the most important problems first. Can we help analyse risks and set up mitigation activities? Sure we can. We just have to step out of our traditional “software only” bubble.
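A sketch of what that could look like – treating the budget as a testable claim with frequent checkpoints. The figures and the 10% tolerance are made up for illustration:

```python
# Hypothetical spend checkpoints: (month, planned cumulative, actual cumulative)
checkpoints = [
    ("Jan", 100_000, 95_000),
    ("Feb", 220_000, 240_000),
    ("Mar", 350_000, 420_000),
]
TOLERANCE = 0.10  # flag deviations beyond 10%, like a failing check

for month, planned, actual in checkpoints:
    deviation = (actual - planned) / planned
    if abs(deviation) > TOLERANCE:
        print(f"{month}: spend is off plan by {deviation:+.0%} - investigate")
```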

Meme: Test ALL the things

Read also: “Many Bits under the Bridge”, “Less Software, more Testing”, “Test Criteria for Outsourced Software”, “The Expected”, “Fell in the trap of total coverage”.

Links: “A Context-Driven Approach to Delivering Business Value”, “Cynefin In Software Testing”, “Testing during Application Transition Trials”.

The domain expert is the tester

Sometimes the best tester is the domain expert, the person who knows all the ins and outs and corners of the system. I have worked with testers who have had hands-on experience with a system since the late 1970s, but I also know testers of mobile apps who revel in being the subject matter expert of the domain. Sometimes the professional tester doubles as agile Product Owner(1) too, given their vast knowledge. The tester becomes the SME…

The subject matter expert, though, is usually a business analyst, or perhaps a user experience expert. Those people might be better placed to test the system than testers with no prior knowledge. Often the SME is the best tester available. I see this happening in shift-left settings – but also in settings with heavy user and business involvement, like SAP releases to enterprise systems, where the business users and SAP architects still spend a month away from their “actual” work doing (user acceptance) testing of corporate configurations and customizations.

The UAT is not dead, but the classic role of the tester testing on behalf of the business is declining. The business would rather test on its own, with in-house subject matter experts. The field is active, as there is tool support for this activity. Panaya(2) is a tool that specializes in managing the UAT of a corporate system like SAP, and one of its key elements is that test cases can be broken up into steps and handed over between people. Not even the classic HP ALM handles hand-over between testers well. While ALMs assume that the tester does the testing, Panaya supports distributing tests across many people – people who have other (“real business”) tasks during the work day.
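To illustrate the key idea – this is not Panaya’s actual data model, just a sketch of the concept – a test case broken into steps, where each step can be assigned and handed over to a different business person:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    description: str
    assignee: str            # a business user, not a full-time tester
    status: str = "pending"  # pending / passed / failed

@dataclass
class DistributedTestCase:
    title: str
    steps: List[TestStep] = field(default_factory=list)

    def hand_over(self, step_index: int, new_assignee: str) -> None:
        """Pass a single step on to another person mid-test."""
        self.steps[step_index].assignee = new_assignee

case = DistributedTestCase("Purchase order flow", steps=[
    TestStep("Create purchase order", assignee="buyer@corp"),
    TestStep("Approve purchase order", assignee="manager@corp"),
    TestStep("Post goods receipt", assignee="warehouse@corp"),
])
case.hand_over(1, "deputy-manager@corp")  # hand-over between people
```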

Testing can also be pushed even further out to the users with crowd-based testing, beta releases, etc. In both crowd-based and UAT-based testing the role of the pro tester is missing, but the testing is still happening. It’s being done by those most skilled – most valuable – for the task.

So what can we as testers do when our tasks are gone? Skill up, go with the change and become the expert – or move on to other roles: coaches, delivery leads, etc.

The expert says 5 pieces should be in the build – though the customer is OK.

  1. If they double as Scrum Master, they are probably more of a Test Coach.
  2. This post is not sponsored. I’m just making observations – not recommendations.

Create, Maintain, Move and Close

Usually when we talk testing, it’s about the road to production – about getting requests from the customer/product owner and shipping them. We tend to forget that there is more to the life cycle of an application than adding to the pile. Inspired by the old CRUD acronym, I identify the following stages in an application’s life: Create, Maintain, Move and Close. Testing can add value in all four modes, with twists for each one.

Create: “oh shiny” – Creating an application is usually novel, but the more times you have built similar applications, the more routine it becomes. Some applications have to be built from scratch, others merely configured. It matters a lot whether you are building a unique app or rolling out yet another COTS application. The testing in “create” usually focuses on bugs to be fixed before go-live, and very little on what happens afterwards. Building a new application is usually a strategic decision for the business, solving a problem or building on a potential. Requests for new things are numerous.

Maintain: “ship, but don’t destroy production” – At some point the customer sends you more requests about the stuff already built than about new features. Application maintenance is all about balancing new features against updates to existing ones. Existing features are being used by the end users, who will eventually request updates and bug fixes. Fragmentation, merging and branching become an issue – especially if you maintain the application as a solution for a range of customers. Customers might want to differentiate between their requests, as they won’t want to pay for bugs from previous releases, but will rather pay for new additions.

Move: “It has to work as before, just with a new team” – To many businesses, maintaining an application is not their key area; they might be a public organisation with no need for their own staff of developers. So “application outsourcing” becomes a thing for many applications, and with deals being won and lost, it will happen that the development tasks move from one supplier to another. Testing can make sure that processes are in place in the new location and that the state of the application is known there. The testing during “move” doesn’t concern the functionality of the application, but rather the ability of the new team. Sometimes the hosting of the application stays the same; in other cases it is the hosting that changes and nothing else.

Close: “test that it’s gone” – Sometimes IT strategies and businesses decide to close down an application. Perhaps it’s being replaced, perhaps it’s redundant after a long time. Examples could be hospitals moving from one patient journal to another, or whole systems being sunsetted. It could also be the closing of end-of-life applications (Windows servers, HPQC, etc.). The value to the business is knowing that the application is gone, and that the information in the old systems is not relied upon anymore.

It is very much possible to have testing in all modes of the application life cycle. Similarly it is very much possible for testing to add value in all stages of the software development life cycle. It’s a matter of perspective.

Getting Testing in Early

Even before there is a “system development life cycle”, testing in the form of thought experiments and evaluation can take place and add valuable information to the context.

My test management tasks are often about the next thing coming up. Bids for outsourcing agreements and application development often come with a large document of test activities to be answered and elaborated. In this role I am the subject matter expert (in test), and I have to write the tender reply for my domain. Sometime down the line the bid material becomes an actual project, but by then I’m on to the next thing.

Sometimes I draw an analogy to the Secret Service advance team arriving two weeks before the president, setting up protection and identifying gaps – and then moving on to the next location before the president even gets there.

Another example of advance work for test people is where the organisation uses frequent releases of systems. While the majority of the test effort goes into the release currently being tested, some effort must go into looking at the frame of the coming release, where the test manager can look for headlines to test, review the initial high-level design, and find flaws and conflicts in the release content.

Sometimes I draw the analogy to the blue and gold crews of US nuclear submarines: while one full crew is out sailing/delivering, the shore crew prepares and trains for the next big push.

Testing early can also take the form of running simulations of various business case scenarios. Business simulation is all about experimenting and evaluating. For novel solutions, prototyping, wire-framing and user experience activities help develop minimum solutions to be tested for viability by the customer.

In the article “Continuous Testing in DevOps…” we see testing happening during Plan and Branch. In the article “A Context-Driven Approach to Delivering Business Value”, testing can help establish a viable market, match to the vision and identify business risks.

testing related to revenue generation may focus on functionality or regulatory compliance; testing related to revenue protection may focus on maintainability or legal defense; testing related to supporting revenue may focus on business process improvement or cost reduction.

Testing is a lot of things – also outside the SDLC.


Trending: Shift-Left

TL;DR: Shift-Left is about testing early and automating. Shift technical with this trend, or facilitate that the testing happens.

Shift-Left is the label we apply when testing moves closer to development and becomes integrated into the development activities. The concept is many IT years old, and there are already some excellent articles out there: “What the Shift Left in Testing Means” (SmartBear, no date), “‘Shift left’ has become ‘drop right’” (TestPlant, 2014), and “Shift Left QA. How to do it. Why it matters” (Worksoft, 2015).

To me, Shift-Left is still an active trend, changing how we do testing. It goes along with Shift-Right, Shift-Coach and Shift-Deliver, discussed separately. I discussed these trend labels at Nordic Testing Days 2016 in the talk “How to Test in IT Operations”.

Here are some contexts where Shift-Left happens:

  • Google has “Software Engineer in Test” as a job title, according to the book “How Google Tests Software”.
  • Microsoft has the similar “Software Development Engineer in Test”, as discussed by Alan Page in “The SDET Pendulum” and in the e-book “The A Word”.
  • A project I was in, regarding pharmaceutical track and trace, had no testers. I didn’t even test, but did compliance documentation of the test activities. The developers tested: first via peer review, then via peer execution of story tests, and then via validation activities. No testers, just the same team – for various reasons.
  • A project I was in, regarding a website and API for trading property information, had no testers, but had continuous build and deploy with more user-oriented test cases than I could ever grab (see: “Fell in the trap of total coverage”).

The general approach to Shift-Left is that “checking” moves earlier in the cycle, in the form of automation: more BDD, more TDD, more automated tests, continuous builds, frequent feedback and green bars. More based on the “test automation pyramid” (blog discussion, Whiteboard Testing video). Discussing the pyramid model reveals that testing and checking go together at the lower levels too. I’m certain that (exploratory) testing happens among technicians and service-level developers – usually not explicitly, but still.
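As a sketch of what the base of that pyramid looks like in practice – a fast, automated check that runs on every commit. The business rule and all names are invented for illustration:

```python
import pytest

def order_reference(customer_id: int, year: int, seq: int) -> str:
    """Invented business rule: references look like 'C10042-2016-00007'."""
    return f"C{customer_id}-{year}-{seq:05d}"

@pytest.mark.parametrize("cust, year, seq, expected", [
    (10042, 2016, 7, "C10042-2016-00007"),
    (1, 2016, 99999, "C1-2016-99999"),      # upper boundary of the sequence
])
def test_order_reference(cust, year, seq, expected):
    assert order_reference(cust, year, seq) == expected
```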

To have “no QA” is not easy. Not easy on the testers, because they need to shift and become more SET/SDET-like, or shift something else (Shift-Right, Shift-Coach or Shift-Deliver). Neither is it easy on the team, as the team has to own the quality activities – as discussed in “So we’re going ‘No QAs’. How do we get the devs to do enough testing?”

Testers and test managers cannot complain when testing and checking are performed in new ways. When tool-supported testing takes over the boring, less complex checks, we can either own these checks or move to facilitating that they are in place. Similarly when the (exploratory) brain-based testing of the complex and unknown is handed over to some other person. Come to think of it, I always prefer testing done by the subject matter experts in the project, be they users, clients, testers or other specialists.

We need to shift to adapt to new contexts and new ways of aiding in delivering working solutions to our clients.


Less Software, more Testing

I rarely test software these days. I mostly lead testing of IT solutions.

Testing in the context of:

  • Updating all corporate PCs from Windows 7/8 to Windows 10
  • Consolidating network equipment from many devices to one box, at 80 global locations *
  • Moving 40 live business applications from one data center to another *
  • Taking over application maintenance for a specialized public organisation
  • Implementing track and trace for pharma products from production to shops
  • Migrating HR data for 2,500 people from one system platform to another

Yes, it happens that I participate in a project about developing a new business application, but my activities are less about testing software and more about testing IT solutions in general.

Mostly I manage test activities and describe testing in these contexts. My preferred way of working is setting and implementing test strategies. I prefer complex and non-ordered projects (Complex and Chaotic – I’m looking at you); it fits well with my context-driven approach of finding the “test solution that fits the context”.

Testing is in itself a solution that must solve a business problem. Great testing is all about providing information to the stakeholders. I don’t particularly care whether this is done by someone TESTING or by a TESTER. It is my responsibility to set up the testing activities (information gathering) that support the team, face the business & technology, and challenge the product “sufficiently”.

Sometimes “sufficiently” is merely confirming and going through the motions of explicit requirement coverage. This is a special challenge to me, as I know of many effective and Rapid approaches that could add valuable information. When I face this challenge, I try to look at the full picture of the project and what the business wants to achieve.

The business of the business is business. What matters is not software or projects, but the solutions to the challenges the business has. And the context of testing is similarly so much more than the software.

*: As mentioned in “How to test in IT Operations” at Nordic Testing Days 2016.


The Expected

Many test processes and test tools insist that you establish the expected result of every test. It is at best risk-ignorant not to take the notion of expected results with a kilo of salt. #YMMV

If you could establish the result of your solution in a deterministic, algorithmic and consistent way for a non-trivial problem, you could solve the halting problem. But I doubt your requirements are trivial… or even always right. Even the best business analyst or subject matter expert may be wrong. Your best oracle may fail too. Or you may be focusing only on getting what you measure.

Nailed it

When working with validation in seemingly very controlled environments, changes and rework happen a lot, as every new finding needs a document trail back to square one. Stuff happens. Validation is not testing; it looks to show that the program did work as requested at some point in time. It is a race towards the lowest common denominator. IT suppliers can do better than just looking for “as designed” [1].

Still, the Cynefin framework illustrates that there are repeatable and known projects, and in those contexts you should probably focus on checking as many of the simple binary questions as you can in a tool-supported way, and work on the open questions in your testing.
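A sketch of such a tool-supported check of a simple binary question – here property-based, using the hypothesis library, so no single expected value has to be nailed down. The discount rule is invented for illustration:

```python
from hypothesis import given, strategies as st

def apply_discount(total: float) -> float:
    """Invented rule under test: 10% off orders above 1000."""
    return total * 0.9 if total > 1000 else total

@given(st.floats(min_value=0, max_value=1e6, allow_nan=False))
def test_discount_never_increases_the_total(total):
    # A binary question a tool can ask thousands of times.
    # The open question - is 10% even the right discount? - stays with humans.
    assert apply_discount(total) <= total
```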

Speaking of open ends – every time I see an explicit expected result, I tend to add the following disclaimer (sung to the tune of “Nothing Else Matters” [2]):

And nothing odd happens … that I know of … on my machine, in this situation [3]

And odd is something that may harm my user, business or program result

Significantly…

But I’d rather skip this test step and work on the description of the test and guidelines to mention:

See also: “The unknown unknown unexpressed expectations”, “Eating wicked problems for breakfast”.

1: “Anyone can beat us, unless we are the best”, “today’s innovation becomes tomorrow’s commodity”

2: https://www.youtube.com/watch?v=tAGnKpE4NCI 

3: I’ve heard that somewhere…

Whither the test manager?

[Paul Gerrard | Will The Test Leaders Stand Up | The Testing Planet, issue 10, page 4, March 2013, paywall], paraphrased:

There are five broad choices open to you if you are a test lead or test manager:

  1. Provide advice to the business leaders; as an independent agent, cajole project leadership and review performance
  2. Take control of the knowledge required to define and build systems; demand clarity and precision in requirements
  3. Help agile projects to recognise and react to risks; coach, mentor and manage testing
  4. Manage the information flow between the key groups and the continuous integration system; control change and delivery
  5. Manage outsourced and offshore teams; manage relationships

It’s probably 6) All of the above
