Expectations around Testing

I usually mention that the work I do as a test manager is more around managing the testing activity than managing testing specialists. “Managing the testing activity” to me is about:

  • Identifying what the expectations are around the testing activities
  • Facilitating the performance/execution of the testing activities
  • Administrating and documenting the testing activities
  • Making the people doing the testing self-reliant

… in Context

The project context is the most important frame: it is all about the project's story, risk profile, culture, traditions, deadline, budget, etc. I am as Context-driven as contexts allow, in the classical “Seven Basic Principles of the Context-Driven School” sense*.

As I am motivated by finding solutions and making them work**, my drive is more along the lines of “accelerate the achievement of shippable quality” [Modern Testing Mission] than “finding the problems that threaten the value of the product” [Rapid Software Testing].

Focussing on achievements over problems seems to work for me in the contexts I'm in: enterprise transitions, infrastructure projects and the implementation of commercial standard systems.

Setting a Frame for Expectations

Finding the “test solution” (or test strategy) that fits the project context is the key activity to me. The rest is mostly implementation – that can be quite interesting too, but plan first!

First of all we have to realize that the testing activities we choose are limited and affected by our context (and biases). We can never test everything, or think of everything to test. Based on the context restrictions (time, space, money, etc.) the project gives me, I make a reduction of the testing theories and principles into a definition along the lines of:

In a specific context – testing will be a finite activity, to investigate if the shared interpretations of the requirements are implemented – at some time, for some configuration, evaluated by someone (that we trust), where nothing odd happens.

A reduction of the testing activity

Let me be the first to say: it's not theoretically perfect! But it is practical and based on context. The reduction gives me an achievable, goal-oriented focus. It helps me iron out the relation between the thing under test and someone to whom it matters.

Ironing out the Expectations

If there is an underlying risk that things will change a lot, then we can argue for test automation to multiply the configurations and the number of “runs” we can complete. Not all IT projects are around software development, so test automation practices and tooling might not be in place.
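As a sketch of what that multiplication can look like – assuming a pytest setup, where “open_form” and the configuration table are purely hypothetical stand-ins of my own – one parameterized test can be run across every configuration we care about:

```python
import pytest
from dataclasses import dataclass

# Hypothetical stand-in for whatever drives the system under test;
# stubbed out here so the sketch is self-contained and runnable.
@dataclass
class Page:
    status: str

def open_form(browser: str, locale: str) -> Page:
    # In a real project this would drive a browser or call an API.
    return Page(status="ready")

# Adding a configuration is one new row – the number of "runs" multiplies.
CONFIGURATIONS = [
    ("chrome", "da-DK"),
    ("edge", "da-DK"),
    ("firefox", "en-GB"),
]

@pytest.mark.parametrize("browser,locale", CONFIGURATIONS)
def test_registration_form_loads(browser, locale):
    page = open_form(browser=browser, locale=locale)
    assert page.status == "ready"
```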

We can ask Open Questions to explore the boundaries of the shared understanding. We can discuss: how much total test coverage is needed here? We can challenge the requests for the kitchen sink – but also direct the testing to what matters. I have found that it is better to slowly impact projects with questions from within, as discussed on the Guilty Tester Podcast, than to break down traditions up front. We can look into “who” is doing the investigation and how much we trust them.

To make the agreements around the reduction of the testing activity explicit, I establish a “Test Plan” document. I would often prefer to do without, and have a mutual team agreement – or even a mind map. But I know enterprise contexts too well: shared expectations are best written down (even though that too is imperfect).

It’s all about the context and the expectations, really.

The expectations were that we could snorkel…

*: Even “CDT” is a context/model, and thus is flawed. One of the flaws of the model is that all test approaches are equally valid (as long as they add value to someone who matters) and thus that no approach is ever better than any other. Not even CDT.

**: See: Innovation in Testing, Less Software more Testing.

Rant on Login Screen examples

If you are demonstrating testing technologies or testing examples around RPA, ML, Selenium and so on – Please: DO NOT USE A LOGIN FORM!

The test scenarios I usually deal with are not this… mundane. While a few testers probably still have to build login forms from scratch, a login feature is a commodity by now. Use OAuth for public-facing sites and Active Directory Federation internally in the organisation. Really – there's no need to reinvent the wheel. To the end user, and even the Product Owner, logging in is just a stepping stone.

I just want to log in, and then I’m done for the day

Said no user ever*

Showing that you can train an AI network or another framework to log in might solve a tedious testing task, but it is usually not the thing I'm after. When users are logged in, they are there to solve something, to process something, to do something – to engage with something. And this is where the best tests are heading too – these are the tests that add value to the business and tell something about the product.

For instance: in one project I did, we disabled the login entirely to let the CI/CD pipeline run feature testing. The plain login screen was temporary anyway, as the solution's authentication would be certificate-based. We never spent much time on it, nor on total coverage.
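A minimal sketch of how such a bypass can be wired up – the flag name and the certificate stub are my own hypothetical illustration, not the project's actual code:

```python
import os

def certificate_login(request: dict) -> dict:
    # Placeholder for the certificate-based authentication the solution
    # would eventually use; out of scope for this sketch.
    raise NotImplementedError("real authentication lives here")

def authenticate(request: dict) -> dict:
    # Test-only bypass so the CI/CD pipeline can reach the features under
    # test without going through the temporary login screen. The flag
    # must never be set in a production environment.
    if os.environ.get("CI_DISABLE_LOGIN") == "1":
        return {"user": "ci-test-user", "roles": ["tester"]}
    return certificate_login(request)
```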

Fewer Combinations – More Real-Life Scenarios

So what can you use as an example instead of login boxes and combinatoric bar stories? How about anonymizing the latest test you had to do on your latest live-action testing project? That would tell me about your challenges, your business domain and when you last tested – among other things.

Let me start: the latest test case I touched (the same day as writing this) was for a new public data registration project. The tester, an end-user “subject matter expert”, was testing the data registration form from both a GUI and a web-services perspective: export the entered form data as XML and import it again via web services to check consistency.

Could that be tool-supported? Maybe. The knowledge about the system is not very explicit – it's a bit complicated, actually. Could it be trained by an ML/AI system? I doubt it. There is no global training set for this class of systems – we have the old version, but are adding new things.
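If it were to be tool-supported, a rough sketch could look like the following – the endpoint URLs and field handling are hypothetical, and the system knowledge the testers hold is exactly what such a script would not capture:

```python
import xml.etree.ElementTree as ET
import requests

BASE = "https://registration.example.org/api"  # hypothetical endpoint

def roundtrip_is_consistent(case_id: str) -> bool:
    # 1. Export the registration entered via the GUI as XML.
    exported = requests.get(f"{BASE}/registrations/{case_id}/export").text
    # 2. Import the same XML again through the web service.
    created = requests.post(f"{BASE}/registrations/import", data=exported)
    created.raise_for_status()
    # 3. Export the re-imported record and compare field by field.
    new_id = created.json()["id"]
    reimported = requests.get(f"{BASE}/registrations/{new_id}/export").text
    original = {e.tag: e.text for e in ET.fromstring(exported)}
    copy = {e.tag: e.text for e in ET.fromstring(reimported)}
    return original == copy
```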

If you are (a tool vendor) demonstrating something – do try to understand the context and the problems your customers are trying to solve first. Ask them what their latest tests were and where the challenges are. If it's a login box, you're good to go – but I doubt it.

*: Unless you are somehow measured on “logging in”. For instance, to claim unemployment benefits you may have to log in to a job site daily… Snap! On the other hand, that is a shallow measure.


With A Little Help From New Friends

Do you ever feel guilty for not meeting the standards set by others in the Software Testing community? You’re in the right place then.

In this episode I talk to Dave (@theguiltytester). We discuss traditions, open questions and how to work within contracts which are specifically requesting traditional test practices based on large numbers of test cases. 

Listen and read all about it here: http://theguiltytester.libsyn.com/the-guilty-tester-episode-4-jesper-ottosen-with-a-little-help-from-new-friends 

Some of the blog posts mentioned are: 

New friends – the subject matter experts of all trades

What testers find

Not all my projects are thundering successes… Different early decisions have set the scene – but still the play, so to speak, has been full of lessons in what testers find – when asked open questions:

  • That a team had previously synchronized information in the old context, so they were invited to test the new solution… which, it turned out, would eliminate the need for synchronization – but nobody had troubled to tell them so
  • That the development team was not acting agile – sprints and scrums in words only
  • That the biggest PBI (product backlog item) in hours was actually getting GUI automation of modal windows to work
  • That despite having a requirements sheet, the actual value of the sheet was zilch
  • That despite putting in extra hours and overtime, the quality and stability of the deliveries did not improve
  • That the biggest problem towards integration was incompatibilities between the .NET and Java implementations of SOAP
  • That simple requirements on virus-scanning documents were both hard and expensive to solve
  • That part of the project deliveries was upgraded business guides – when tested, they needed corrections too

Calling them all defects or even software bugs sounds odd to me, because what really takes time is not the software issue itself, but misunderstandings and not knowing everything (will we ever?). Standards tell you that testing finds risks in the Product, the Process and the Project – it seems to me that even more we find issues in management decisions, architecture decisions, cultural issues and organisational change… it's just software, but it does have a business impact.


Asking Open Questions

It has always been a good interview technique to ask open questions. Then the person being interviewed has to elaborate and talk in full sentences – in contrast to closed questions, which are replied to in binary [1]: yes, no, 42, the red pill [2]. Until now I didn't really understand how simple yet powerful this questioning technique is in testing. I might have done it all along, for some time :-).

The primary eye-opener was the Copenhagen Context 2015 [4] workshop on Exploration Under Pressure by Jon Bach. One of the treats was a list of things to find on the ebay.com website – not specific items, but information about the items: finding the most expensive item, and by that stumbling over a live production bug in the max-value field; finding the number of blue shoes available, etc. What a fun “online scavenger hunt” – we could battle to find the oldest, longest and most odd details.

Later the same week, eBay Classified hosted a local meetup of “QA Aarhus” with a live demo of how they do testing sessions of their app. They had to host the session twice, due to popular demand, and what we got was an intro to a setting of exploration, thinking aloud and doing pair testing. And I got to try my new-found quest to ask open questions: to search for things – but look out of the corner of the eye for oddities and what-ifs.

But how could I apply this technique in my current testing project of migrating an HR solution for a large IT outsourcing company? I did today. A staff member was allocated to the project to test, during UAT [3], specifically the processes they use in the old system, and to distribute this knowledge back to the team. For reasons, the testing scope in this area had not yet been established, so they didn't really know where to start – but I did… open questions:

  • What processes do you have?
  • What kind of events do you need to register on an employee?
  • Tell me more about vacation calculation
  • Where, if anywhere, are your current processes described? (I'm fallible)
  • What has likely changed between the old and the new solution?

I asked them to go as deep as needed, until no new learning could be achieved, but not to detail it in scripts or discrete steps. Because from here we have test cases – test ideas – “a question that someone would like to ask (and presumably answer) about a program”.

Eureka!

[1]: Binary replies can be checked; open questions are testing. Testing is “the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.” http://www.satisfice.com/blog/archives/1509

[2]: I have seen how deep the rabbit hole goes…

[3]: Let’s pretend there is such a thing as a “user acceptance test”.

[4]: Disclaimer: I was part of the program committee, and by chance most speakers host their own testing conferences. See more on http://copenhagencontext.com/blog/2015/01/meet-jesper-at-the-copenhagen-context-conference-venue/

What if – and Does it Matter

Software testing has two core questions:

This is very related to Testing AND Checking:

  • Checking is something that we do with the motivation of confirming existing beliefs. 
  • Testing is a process of exploration, discovery, investigation, and learning.