A 30-Day Agile Experience

In September 2017 the Ministry of Testing ran a crowd-based knowledge sharing event called “30 Days of Agile Testing”, with a small learning activity for each day of the month. As with the similar security event, I set up a weekly schedule at work to meet for a time-boxed hour and discuss 3-5 selected topics each time.

Our score was 17 topics discussed, some more discussed than actually tried out. Hence the half marks on the poster on the window below. My coworkers and I work on many different teams, so digging into specific team tools and processes was out of scope.

Here are a few of our findings:

Photo: the 30 Days of Agile Testing poster on the window

Links to “the Club” on some of the topics we selected:

Less Test Managers, More Test Coaches

One of the trends/shifts I experience in testing, and in test management in particular, is the Test Coach, as discussed initially here: The Shift-Coach Testing Trend (Oct 2016). Recently (Aug 2017) it came up again in a Twitter thread, where Stephen Janaway provided the inspiration for the title of this blog post.

Less Test Managers and more coaches. That’s how I see it going.

Fittingly, he also inspired the first post with his talk “How I Lost My Job As a Test Manager”, presented at Test Bash 2015. This post is a further elaboration of the Shift-Coach test management trend. Here are some of my experiences:

  • I have been assigned to an agile development team to introduce them to the 3 Amigos, test-data-driven test automation and similar practices (a small sketch of what I mean by data-driven automation follows this list). The purpose of my involvement was to enable the team to continue the practices without me, and without testers besides the business analyst / product owner (see The domain expert is the tester), as they are doing shift-left. Similar to an agile or scrum coach, my approach was to look at it as a change in the way of working.
  • Another project is an infrastructure project; there are no testers, only technicians configuring Cisco routers that in software can replace firewalls, IronPort appliances, VM servers and other network equipment. The project has to implement 80+ of these, so I set up both a test process and an ITIL change request process, acting as a test and release manager (another quite frequent trend). I could continue in the project for its duration, but instead I set up guidance and leave when it is sufficiently in place.
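
To show what I mean by test-data-driven automation in the first bullet, here is a minimal, hypothetical sketch in pytest. The discount rule and the data rows are invented for illustration; the point is that the team maintains the data table while the same check runs once per row.

import pytest

# Hypothetical test data; in practice the rows would come from the team's own test data sources.
DISCOUNT_CASES = [
    ("regular customer, small basket", 100.00, 0.00),
    ("regular customer, large basket", 1000.00, 50.00),
    ("order just below the discount limit", 999.99, 0.00),
]

def calculate_discount(order_total):
    """Stand-in for the system under test (invented rule: 50 off at 1000 or more)."""
    return 50.00 if order_total >= 1000.00 else 0.00

@pytest.mark.parametrize("label, order_total, expected_discount", DISCOUNT_CASES)
def test_discount_rules(label, order_total, expected_discount):
    # One test run per data row; a failure points to the offending row.
    assert calculate_discount(order_total) == expected_discount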

This might be similar to a test architect or an (internal) test consultant activity. It has nothing to do with diminishing testing. Rather, I see it as more testing happening, testing that would not have been done without the coaching from a test manager. It is all about finding a test approach that fits the context.

Here are some things others have written:

The competence of the test coach is to have enough change management expertise (people skills) and test management expertise (domain skills) to know how to coach and facilitate the change. Should test coaches test too? Perhaps, when required, but not necessarily. The activity is primarily to up-skill the team to continue on their own.

The “Test Coach” is a trend similar to “shift-left” and all the other shifts in testing and test management. I see it as a pattern, and what I read from the threads and discussions is that many test managers are gradually shifting towards becoming test coaches.

Don’t request the kitchen sink

More and more often I see outsourcing contracts that request 10-15 test phases. It looks like someone has simply thrown the book at it and not considered whether it is an infrastructure project, a software development project or a COTS implementation, or, for that matter, what they actually want to learn from the testing.

These are the phases of a recent project:

  • Environment Acceptance testing
  • Hardware and integration testing
  • Component testing
  • Component integration testing
  • Installation test
  • System testing
  • Functional testing
  • Regression testing
  • Security testing
  • Performance testing
  • Operational acceptance testing
  • Service Level testing

This is a challenge for the vendor reply. The vendor will want to respond to all test phases in order to be compliant with the tender and not lose points. There is no room for elaboration or discussion if you want in on the bid process.

Quite often the requester comes back and says “we didn’t want all those weird testing things, we just want something that works for us”. But when the contract is signed and the work set in motion, the project team has the challenge of making the testing practical within the framework of the contract. This goes for both sides. Many good hours can be wasted unwinding cumbersome contractual terms.

What I usually do in such a situation is to bundle the contract’s testing scope into fewer activities and set up a mapping so that everything is covered. That is, if the client allows me to make the activities practical and context-driven. If not, my hands are tied and we deliver according to spec, even if the chapters of the test plans are set in stone.
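
As an illustration of such a mapping, here is a minimal sketch in Python. The contracted phase names are taken from the list above, while the bundling into three practical activities is purely my own example; the coverage check is the part that matters.

# Illustrative only: bundle the contracted test phases into fewer practical activities
# and verify that every contracted phase is still covered somewhere.
CONTRACT_PHASES = {
    "Environment Acceptance testing",
    "Hardware and integration testing",
    "Component testing",
    "Component integration testing",
    "Installation test",
    "System testing",
    "Functional testing",
    "Regression testing",
    "Security testing",
    "Performance testing",
    "Operational acceptance testing",
    "Service Level testing",
}

# Hypothetical bundling; in a real engagement this comes out of the context discussion.
BUNDLES = {
    "Technical readiness": {
        "Environment Acceptance testing",
        "Hardware and integration testing",
        "Installation test",
        "Operational acceptance testing",
        "Service Level testing",
    },
    "Solution testing": {
        "Component testing",
        "Component integration testing",
        "System testing",
        "Functional testing",
        "Regression testing",
    },
    "Non-functional testing": {
        "Security testing",
        "Performance testing",
    },
}

def uncovered(contract_phases, bundles):
    """Return contracted phases that no bundled activity maps back to."""
    covered = set().union(*bundles.values())
    return contract_phases - covered

if __name__ == "__main__":
    missing = uncovered(CONTRACT_PHASES, BUNDLES)
    print("All contracted phases are covered" if not missing else f"Not covered: {sorted(missing)}")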

Let’s work towards better deals for testing activities. If you are preparing a bid, include a test manager and have a discussion of the value-add and learning from testing up front. There is no single book on how to do testing. Instead, spend the time and money figuring out your context. Figure out which phases are on the client side and which are on the vendor side. Have a test management consultant on retainer before and after the bid process. Do something to discuss your test strategy and put the guidelines in the contracts, so that the vendors can propose a solution.

Don’t request everything and the kitchen sink too

Everything and the kitchen sink
Everything and the kitchen sink too

Test ALL the things

TL;DR: We can add testing to all requirements and all business risks. Testing to document requirements and to debunk risks provides valuable information for the business. Let us not limit testing to things that can be coded. The intellectual activity of trial and learning is happening anyway, so we might as well pitch in with ways to find important evidence for the decision makers.

Test all the requirements

Traditionally, testing was all about testing the functional requirements that could be coded. Non-functional requirements were left for the specialists, or plainly disregarded. I know I have done my share of test planning with a range of requirements left “N/A” with regards to testing. Especially performance scope, batch jobs, hardware specs, database table expansions and virus scanning have been left out of my functional test plans…

When I look at a list of requirements now, I see that we can indeed test all the things, or at least work on how to document that each requirement is fulfilled. Some of the requirements are actually quite easy to document. If it’s on a screen somewhere, take a screenshot and attach it to a simple test case. Done deal, really. Additionally, with a testing mind-set I can think of ways to challenge the details. But do we really, really need to fill up a disk to establish whether it’s exactly a 1 GB allocation? Probably not. Do we really, really need to document all requirements? Yes, in some contracts/contexts it is important for the customer to know that everything has indeed been established. Sometimes the customer doesn’t trust you otherwise; sometimes the tests are more about your ability to deliver and provide evidence that matters.

Test all the business risks

Look into the business case of your project and find the business risks. Sometimes they are explicitly stated and prioritized. At a recent Ministry of Testing meetup we looked into a case for a large national health system. We looked at the tangible benefits, the intangible benefits and the scored business risks. What worried the business and management most was budget, time and whether the new system would be used in a standardized way. There is an opportunity for testing here to help address, document and challenge the most important business risks. Traditional testing would usually look at functional requirements that can be coded or configured, and totally miss what matters most to the business.

OK, how do we test the project costs? How about frequent checkpoints of expected spending? Would that be similar to tracking test progress? Perhaps; let’s find out. Testing is all about asking questions for the stakeholders and solving the most important problems first. Can we help to analyse risks and set up mitigation activities? Sure we can. We just have to step out of our traditional “software only” bubble.

MEME - Test ALL the things
Meme ALL the things

Read also: Many Bits under the Bridge; Less Software, more Testing; Test Criteria for Outsourced Software; The Expected; Fell in the trap of total coverage.

Links: “A Context-Driven Approach to Delivering Business Value”, Cynefin In Software Testing, Testing during Application Transition Trials

Create, Maintain, Move and Close

Usually when we talk testing, it is about the road to production. It is about getting requests from the customer/product owner and shipping them. We tend to forget that there is more to the life cycle of an application than adding to the pile. Inspired by the old CRUD acronym, I identify the following stages in the application life: Create, Maintain, Move and Close. Testing can add value in all four modes, with twists for each one.

Create: “oh shiny” – Creating an application is usually novel, but the more times you have built similar applications, the more it becomes routine. Some applications have to be built from scratch, others merely configured. It matters a lot whether you are building a unique app or rolling out yet another COTS application. The testing in “create” usually focuses on bugs to be fixed before go-live, and very little on what happens afterwards. Building a new application is usually a strategic decision for the business that solves a problem or builds on a potential. Requests for new things are numerous.

Maintain: “ship, but don’t destroy production” – At some point the customer sends you more requests about the stuff already built than about new features. Application maintenance is all about balancing new features and updates to existing features. Existing features are being used by the end users, and they will eventually request updates and bug fixes. Fragmentation, merging and branching become an issue, especially if you maintain the application as a solution for a range of customers. Customers might want to differentiate between their requests, as they won’t want to pay for bugs in previous releases, but rather want to pay for new additions.

Move: “It has to work as before, just with a new team” – To many businesses, maintaining an application is not their key area; they might be a public organisation with no need to have their own staff of developers. So “application outsourcing” becomes a thing for many applications, and with deals being won and lost it will happen that the development tasks move from one supplier to another. Testing can make sure that processes are in place in the new location and that the state of the application is known there. The testing during “move” doesn’t involve the functionality of the application, but rather the ability of the new team. Sometimes the hosting of the application stays the same; in other cases it is the hosting of the application that changes and nothing else.

Close: “test that it’s gone” – Sometimes IT strategies and businesses decide to close down an application. Perhaps it is being replaced, perhaps it is redundant after a long time. Examples could be hospitals moving from one patient journal system to another, or whole systems being sunsetted. It could also be the closing of end-of-life applications (Windows servers, HPQC, etc.). The value to the business is knowing that the application is gone, and that the information in the old systems is no longer trusted.

It is very much possible to have testing in all modes of the application life cycle. Similarly it is very much possible for testing to add value in all stages of the software development life cycle. It’s a matter of perspective.

Getting Testing in Early

Even before there is a “system development life cycle”, testing in the form of thought experiments and evaluation can take place and add valuable information to the context.

My test management tasks are often about the next thing coming up. Bids for outsourcing agreements and application development often come with a large document of test activities to be answered and elaborated. In this role I am the subject matter expert (in test), and I have to write the tender reply for my domain. Sometime down the line the bid material becomes an actual project, but by then I am on to the next thing.

Sometimes I draw an analogy to the Secret Service advance team arriving two weeks before the president, setting up protection and identifying gaps, and then moving on to the next location before the president even gets there.

Another example of advance work for test people is where the organisation uses frequent releases of systems. While the majority of the test effort is put into the release currently being tested, some effort must go into looking at the frame of the coming release. For the coming release the test manager can look for headlines to test, review the initial high-level design, and find flaws and conflicts in the release content.

Sometimes I draw the analogy to the blue and gold crews of US nuclear submarines. While one full crew is out sailing and delivering, the shore crew prepares and trains for the next big push.

Testing early can also take the form of running simulations on various business case scenarios. Business simulation is all about experimenting and evaluating. For novel solutions, prototyping, wire-framing and user experience activities help develop minimum solutions to be tested for viability by the customer.

In the article “Continuous Testing in DevOps…” we see testing happening during Plan and Branch. In the article “A Context-Driven Approach to Delivering Business Value”, testing can help establish a viable market, match the solution to the vision, and identify business risks.

testing related to revenue generation may focus on functionality or regulatory compliance; testing related to revenue protection may focus on maintainability or legal defense; testing related to supporting revenue may focus on business process improvement or cost reduction.

Testing is a lot of things – also outside the SDLC.

3 Sessions of Security Testing

One way to collaborate in a team is to build shared knowledge together. An example of this is the online “30 Days of Testing” activity that the Ministry of Testing has been putting out for the online community to participate in. My test team has a work group / special interest group on security testing, so when a 30-day challenge for security testing came up, we scheduled sessions to learn from the topics provided (see below).

As we are testing consultants doing work for our customers, we scheduled 3 sessions, initially of an hour each. At the start of the hour we picked 4-5 topics from the list and worked our way through them in prioritized order, within the time box of the hour. Come to think of it, we might as well have used the Lean Coffee format. As we have team members in two locations in DK and one in PH, it was a Skype call with screen sharing. After each call I summarized by sending out a “link mail” to everyone in the testing group (DK and PH). Evaluating the sessions, we extend our ordinarily scheduled WG meetings to make room for collaboratively investigating additional security testing topics.

12 from the list: ZAP, Google Gruyere, threat models, HTTP proxies, posture assessments, tiger boxes, recent hacks (elaborated by Troy Hunt), the OWASP Top 10, OWASP SQL injections, adding data integrity testing into a test plan, sharing ideas for security testing internally and externally, and discussing security testing with regards to EU GDPR compliance.

7 not on the list: Naughty Strings from GitHub, the Bug Magnet plugin, how real persons’ names trick IT systems, how to be careful with custom license plates, DDoS attacks, IoT privacy failures, Chaos Monkey / the Simian Army, and little Bobby Tables:

exploits_of_a_mom
XKCD: Exploits of a mom
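
For the little Bobby Tables topic in particular, here is a minimal sketch (my own illustration, not something from the sessions) of the standard defense against SQL injection: pass user input as a bound parameter instead of concatenating it into the SQL string. It uses Python’s built-in sqlite3 module.

import sqlite3

# "Little Bobby Tables" style input, as in the XKCD strip above.
student_name = "Robert'); DROP TABLE students;--"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Unsafe pattern (do not do this): building SQL by string concatenation
# lets the input terminate the statement and inject its own SQL.
# conn.executescript("INSERT INTO students (name) VALUES ('" + student_name + "');")

# Safe pattern: the driver binds the value as data, never as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (student_name,))
conn.commit()

print(conn.execute("SELECT name FROM students").fetchall())
# The malicious string is stored literally and the table survives.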

To sum up, we have learned about: which tools can make testing easier, where to read about vulnerabilities and simple exploits, how personal data and logins are used and stored, how to pitch security testing based on fear of breaches and safety concerns, and how to test the requirements for security “by design”.

30 Days of Security Testing
30 Days of Security Testing