Rant on Login Screen examples

If you are demonstrating testing technologies or testing examples around RPA, ML, Selenium and so on – Please: DO NOT USE A LOGIN FORM!

The test scenarios I usually deal with are not this… mundane. While a few testers probably still have to test login forms built from scratch, a login feature is a commodity by now. Use OAuth for public-facing sites and Active Directory Federation internally in the organisation. Really – there’s no need to reinvent the wheel. To the end user, and even the Product Owner, logging in is just a stepping stone.

I just want to log in, and then I’m done for the day

Said no user ever*

Showing that you can train an AI network or other framework to log in might solve a tedious testing task, but it is usually not the thing I’m after. When a user is logged in, they are there to solve something, to process something, to do something – to engage with something. And this is where the best tests are heading too – these are the tests that add value to the business and tell us something about the product.

For instance: in one project I did, we disabled the login entirely so the CI/CD pipeline could run feature testing. The plain login screen was temporary anyway, as the solution’s authentication would be based on certificates. We never spent much time on it, nor on total coverage.
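To make that idea concrete, here is a minimal sketch (not the actual project code – the flag name, decorator and handler are all hypothetical) of how a login step can be toggled off so a CI/CD pipeline goes straight to feature testing:

```python
import os

# Hypothetical toggle: when CI_BYPASS_AUTH=1 the pipeline skips the login step
# entirely and runs the feature tests as a fixed test identity.
CI_BYPASS_AUTH = os.environ.get("CI_BYPASS_AUTH") == "1"

def require_login(handler):
    """Guard a request handler; the guard is skipped when the CI flag is set."""
    def wrapper(request, *args, **kwargs):
        if CI_BYPASS_AUTH:
            request["user"] = "ci-test-user"  # fixed identity for feature runs
            return handler(request, *args, **kwargs)
        if not request.get("authenticated"):
            return {"status": 401, "body": "login required"}
        return handler(request, *args, **kwargs)
    return wrapper

@require_login
def submit_registration(request, payload):
    # ...the feature actually worth testing...
    return {"status": 200, "body": f"registered by {request['user']}"}
```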

Fewer Combinations – More Real-Life Scenarios

So what can you use as an example instead of login boxes and combinatoric bar stories? How about anonymizing the latest test you had to do on your latest live-action testing project? That will tell me about your challenges, your business domain and how recently you last tested – among other things.

Let me start: the latest test case I touched (the same day as writing this) was for a new public data registration project. The tester and end-user “subject matter expert” was testing the data registration form from both a GUI and a web-services perspective: export the entered form data as XML and import it again via web services to check consistency.
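As a rough illustration of that round trip (the element names and data below are made up – the real registration schema was of course richer), the consistency check boils down to comparing the exported document with what comes back after the web-service import:

```python
import xml.etree.ElementTree as ET

def registration_fields(xml_text):
    """Flatten a registration XML document into a dict of field name -> value."""
    root = ET.fromstring(xml_text)
    return {child.tag: (child.text or "").strip() for child in root}

def round_trip_differences(exported_xml, reimported_xml):
    """Return the fields where the GUI export and the re-imported data disagree."""
    exported = registration_fields(exported_xml)
    reimported = registration_fields(reimported_xml)
    return {field: (exported.get(field), reimported.get(field))
            for field in exported.keys() | reimported.keys()
            if exported.get(field) != reimported.get(field)}

# Inline example; in the real test the second document would come back from the
# import web service (e.g. an HTTP POST of the exported XML).
exported = "<registration><name>Ann</name><country>DK</country></registration>"
reimported = "<registration><name>Ann</name><country>Denmark</country></registration>"
print(round_trip_differences(exported, reimported))  # {'country': ('DK', 'Denmark')}
```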

Could that be tool-supported? Maybe. The knowledge about the system is not very explicit. It’s a bit complicated, actually. Could it be trained by an ML/AI system? I doubt it. There is no global training set for this class of systems – we have the old version, but we are adding new things.

If you are (a tool vendor) demonstrating something – do try to understand the context and problems your customers are trying to solve first. Ask them what their latest tests were and where the challenges are. If it’s a login box you’re good to go, but I doubt it. 

*: Unless you are somehow measured on “logging in” – for instance having to log in to the job site daily to claim unemployment benefit… Snap! On the other hand, that is a shallow measure.


The Guilty Tester Episode 7 – With A Little Help From New Friends

Do you ever feel guilty for not meeting the standards set by others in the Software Testing community? You’re in the right place then.

In this episode I talk to Dave (@theguiltytester). We discuss traditions, open questions and how to work within contracts that specifically request traditional test practices based on large numbers of test cases.

Listen and read all about it here: http://theguiltytester.libsyn.com/the-guilty-tester-episode-4-jesper-ottosen-with-a-little-help-from-new-friends 

Some of the blog posts mentioned are: 

New friends – the subject matter experts of all trades

What testers find

Not all my projects are thundering successes… Different early decisions have set the scene – but still the play, so to speak, has been full of lessons in what testers find – when asked open questions:

  • A team had previously synchronized information in the old context, so they were invited to test the new solution… which, it turned out, would eliminate the need for synchronization – but nobody had troubled to tell them so.
  • That the development team was not acting agile – sprints and scrums in words only.
  • That the biggest PBI (product backlog item) in hours was actually getting GUI automation of modal windows to work.
  • That despite having a requirements sheet, the actual value of the sheet was zilch.
  • That despite putting in extra hours and overtime, the quality and stability of the delivery did not improve.
  • That the biggest obstacle to integration was incompatibilities between the .NET and Java implementations of SOAP.
  • That a simple requirement to virus-scan documents was both hard and expensive to solve.
  • That part of the project deliveries was upgraded business guides – when tested, they needed corrections too.

Calling them all defects or even software bugs sounds odd to me, because what really takes time is not the software issue itself, but misunderstandings and not knowing everything (will we ever?). Standards tell you that testing finds risks in the Product, the Process and the Project – it seems to me we find even more issues in management decisions, architecture decisions, cultural issues and organisational change… it’s just software, but it does have a business impact.


Asking Open Questions

It has always been a good interview technique to ask open questions. The person being interviewed then has to elaborate and talk in full sentences, in contrast to closed questions, which are replied to in binary [1]: yes, no, 42 – the red pill [2]. Until now I hadn’t really understood how simple yet powerful this questioning technique is in testing. I might have been doing it all along, for some time :-).

The primary eye-opener was the Copenhagen Context 2015 [4] workshop on Exploration Under Pressure by Jon Bach. One of the treats was that he showed us a list of things to find on the ebay.com website. Not specific items, but information about the items: finding the most expensive item – and by that stumbling over a live production bug in the max-value field; finding the number of blue shoes available, etc. What a fun “online scavenger hunt” – we could battle to find the oldest, longest and oddest details.

Later the same week eBay Classified hosted a local meetup of “QA Aarhus” with a live demo of how they do testing sessions of their app. They had to host the session twice, due to popular demand, and what we got was an intro to a setting of exploration, thinking aloud and doing pair testing. And I got to try my new-found quest to ask open questions: to search for things – but look out of the corner of the eye for oddities and what-ifs.

But how could I apply this technique in my current testing project, migrating an HR solution for a large IT outsourcing company? I did today. A staff member was allocated to the project to test, during UAT [3], specifically the processes they use in the old system, and to distribute this knowledge back to the team. For various reasons the testing scope in this area had not yet been established, so she didn’t really know where to start – but I did… open questions:

  • What processes do you have?
  • What kind of events do you need to register on an employee?
  • Tell me more about vacation calculation.
  • Where, if anywhere, are your current processes described? (I’m fallible)
  • What has likely changed between the old and the new solution?

I asked her to go as deep as needed, until no new learning could be achieved, but not to detail it in scripts or discrete steps. Because from here we have test cases – test ideas – “a question that someone would like to ask (and presumably answer) about a program”.

Eureka!

 

[1]: Binary replies can be checked; open questions are testing. “Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.” http://www.satisfice.com/blog/archives/1509

[2]: I have seen how deep the rabbit hole goes…

[3]: Let’s pretend there is such a thing as a “user acceptance test”.

[4]: Disclaimer: I was part of the program committee, and by chance most speakers host their own testing conferences. See more on http://copenhagencontext.com/blog/2015/01/meet-jesper-at-the-copenhagen-context-conference-venue/

What if – and Does it Matter

Software testing has two core questions: “What if?” – and “Does it matter?”

This is very much related to Testing AND Checking:

  • Checking is something that we do with the motivation of confirming existing beliefs. 
  • Testing is a process of exploration, discovery, investigation, and learning.