Sunday, November 24, 2013
Scripting, checking, testing - enemies or friends?
"If you don't script test cases, your testing is not structured" or "Testing is checking if the software works as designed" are statements made by people who like to call themselves "Structured testers".
I don't completely disagree with statements like those, but I would also call myself a structured tester, even though I suppose I see things in a slightly wider context...
Scripting, checking and testing can live seamlessly together, just as we can live seamlessly together in multicultural societies.
Just because we don't do it very often doesn't mean we can't. A good tester knows when to check, when to test and when to script.
What do I check?
Checking is what I do when I have an exact understanding of what a function or process should have as an outcome.
This can be at a very low level, for example:
- a field should contain no more than 15 characters
- an email address should contain an @ sign followed by a "dot", in this order, with some characters in between
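Low-level checks like these translate directly into automated assertions. The sketch below is illustrative only: the function names and the email pattern are my own assumptions about what "contains an @ sign and a dot, in this order, with characters in between" could mean, not a full validation spec.

```python
import re

MAX_FIELD_LENGTH = 15  # the 15-character rule from the example above

def check_field_length(value: str) -> bool:
    """Check: the field should contain no more than 15 characters."""
    return len(value) <= MAX_FIELD_LENGTH

def check_email_shape(value: str) -> bool:
    """Check: an '@' followed later by a '.', with characters in between.

    This is only the shape described above, not full RFC-style validation.
    """
    return re.fullmatch(r"[^@]+@[^@.]+\.[^@]+", value) is not None
```

Each check has a single, exact expected outcome, which is what makes it a check rather than a test: anyone (or any machine) running it will reach the same verdict.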
It can be the outcomes of a calculation engine that applies the rules for advising whether or not to grant an insurance policy, based on a defined set of conditions.
It can be a set of acceptance criteria or business rules.
Checking is verification and is a part of software testing. Whether or not all checks have to be executed by a professional software tester is another question.
When there is a lot to check, the pairwise method can help: combining different parameter values so that checking becomes more efficient.
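As a sketch of the pairwise idea, the greedy algorithm below picks rows from the full cartesian product until every pair of parameter values is covered at least once. The parameter names and values are made-up examples, and this is a simple illustration rather than a production pairwise tool:

```python
from itertools import combinations, product

# Made-up example parameters: 3 x 2 x 3 = 18 full combinations.
parameters = {
    "browser": ["Chrome", "Firefox", "IE"],
    "os": ["Windows", "Linux"],
    "language": ["EN", "NL", "FR"],
}

def pairs_of(case, names):
    """All (parameter, value) pairs exercised by one test case."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

def pairwise_suite(params):
    """Greedily select cases until every value pair is covered once."""
    names = list(params)
    needed = {((a, va), (b, vb))
              for a, b in combinations(names, 2)
              for va, vb in product(params[a], params[b])}
    candidates = [dict(zip(names, row)) for row in product(*params.values())]
    suite = []
    while needed:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(candidates, key=lambda c: len(pairs_of(c, names) & needed))
        suite.append(best)
        needed -= pairs_of(best, names)
    return suite
```

For these parameters the greedy selection covers all value pairs with far fewer cases than the 18 combinations of the full product, which is exactly the efficiency gain the pairwise method aims for.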
What do I test?
Testing is what I do when I don't have an exact understanding of how functions and processes should co-exist or work together.
The software parts I tend to test are the complex combinations of possibilities an application offers.
In this case, I don't look only at a field, a screen or a function (though sometimes I do), but at functions or flows that involve complex steps and possibilities. I try to emulate an end user, taking into account the business processes, the company history, the development method and the team's knowledge and skills... and, combining this with my own knowledge and tools, I go searching for the steepest and largest pitfalls that developers, analysts or business may have overlooked.
Knowing what a function is supposed to do under certain circumstances, I try to find out whether it is consistent with itself under all relevant conditions.
Questions I may ask myself:
- How will the system behave if the interface is not available? This could have a big impact on the business function, and it is a case that happened a lot with the previous software package... and one of the reasons a new package was developed that was supposed to handle these issues.
- What will happen if I push the back button instead of going back using the described functionality? The previous application made use of the standard browser buttons. Many users are used to those, so I will check them all the time.
What do I script?
Scripting is what I do to make sure that I remember what to check before I start checking or to remember what I have tested. Scripts can possibly be used as evidence for bugs found during testing or as a working tool to follow up on all that needs verification.
I script what I should not forget.
I could script all those things that need checking - but most of the time that has already been done.
I could script the test cases that resulted in issues or bugs.
I could script the test cases that will help me find regression issues later on. Those I would prefer to automate.
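A scripted case that once found a bug can often be turned into a small automated regression harness. This is a hypothetical sketch: `normalize_email` and the recorded cases stand in for whatever function and bugs a real project would have.

```python
# Hypothetical function under test - stands in for any function
# that once had a bug and was fixed.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

# Each tuple records a past bug: the input that triggered it and the
# output the fix is expected to produce. The bug numbers are made up.
REGRESSION_CASES = [
    ("  User@Example.COM ", "user@example.com"),  # bug #1: stray whitespace
    ("ADMIN@example.com", "admin@example.com"),   # bug #2: casing not normalized
]

def run_regressions():
    """Replay all recorded cases; return the ones that fail again."""
    return [(raw, expected, normalize_email(raw))
            for raw, expected in REGRESSION_CASES
            if normalize_email(raw) != expected]
```

Run after every change, an empty result means none of the old bugs have crept back in - the script remembers the checks so the tester doesn't have to.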