Friday, November 30, 2012

The intake test

Before we can start testing a product, it needs to be testable at the level of testing we want to perform. To ensure the subject under test is testable, we could require a whole set of measurable requirements, for example:
  • The product (part) is delivered with all its functional dependencies
  • All interfacing products are in place, issues are known
  • The agreed ratio of defects of certain severities closed versus the total raised in previous test levels is met
  • The agreed ratio of elements tested versus planned in previous test levels is met
  • Test environment is set up and tested
  • Test data is prepared and consistently scrambled over the whole integrated platform
  • Batches are running
  • Configuration is set up and tested
  • Test preparation is done
  • Testers are trained
  • Licenses for tools are handed out and confirmed
  • ...
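A checklist like this can also be captured as data and evaluated mechanically before testing starts. The sketch below is purely illustrative; the criterion names and the simple all-or-nothing gate are assumptions, not a prescription:

```python
# Hypothetical entry-criteria gate: criterion names and values are
# illustrative only, mirroring a few of the bullets above.
entry_criteria = {
    "functional dependencies delivered": True,
    "interfacing products in place": True,
    "test environment set up and tested": False,
    "test data prepared and scrambled": True,
    "testers trained": True,
}

def assess_intake(criteria):
    """Return (ready, blockers): ready only if every criterion is met."""
    blockers = [name for name, met in criteria.items() if not met]
    return (len(blockers) == 0, blockers)

ready, blockers = assess_intake(entry_criteria)
print("Ready to test:", ready)
for blocker in blockers:
    print("Blocked by:", blocker)
```

Keeping the criteria as data rather than prose makes it trivial to re-run the assessment every time delivery pressure tempts the team to skip it.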
In reality, these entry criteria are formulated at the very start of the project, but they are often barely maintained and applied once delivery pressure increases.

But how often do we end up in situations where at least one of those entry criteria is not met, blocking us from proceeding with testing? And how many times did we assess entry criteria as passed when in fact they were not? A defect could be wrongly allocated, users have access to the tools but have the incorrect user rights, the environment is built but cannot be accessed... Releasing untestable chunks of code to testers, or releasing into an environment that is not fit for testing, is, despite many project managers' opinions, a waste of time and resources.
Environments need to be maintained longer, defects are discussed with more stakeholders and have longer turn-around times.

In short: Bad entry criteria management costs the project time and money.

A pragmatic and easy way to keep a decent check on the status of your entry criteria is to define an intake test for every substantial module under test.
This intake test should be executable in a very limited amount of time and describe the positive flow(s) through your modules under test. Based on the result of this intake test, delivery issues can be found instantly. And no statistical report can compare with simply showing that it works. From the moment the environment is ordered, the team's focus should be on making this test pass, and every reason why it doesn't will turn into a critical issue or defect that requires high priority to solve.
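As a minimal sketch of the idea, assuming a hypothetical billing module with a `create_invoice` positive flow (every name here is invented for illustration, not taken from any real product):

```python
# Hypothetical intake (smoke) test for a billing module. It only walks
# the positive flow, so a failure signals a delivery issue rather than
# starting a bug hunt.

def create_invoice(customer, amount):
    """Stand-in for the module under test; a real intake test would call
    the delivered product instead of this fake."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"customer": customer, "amount": amount, "status": "created"}

def intake_test():
    """The positive flow: create one invoice and check it exists."""
    invoice = create_invoice("ACME", 100.0)
    assert invoice["status"] == "created", "positive flow broken: delivery issue"
    return True

print("Intake test passed:", intake_test())
```

The point is brevity: one positive path per module, runnable in minutes, so it can be repeated on every delivery.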

When the intake test for a module is successful, it can be recorded and stored in a library as proof, and testing for this module can start.

Tuesday, November 20, 2012

An example of a simple defect life cycle

This blog post provides a practical example of what I consider a minimal defect flow, together with the accompanying roles and responsibilities.

In general, the flow has three end statuses:
  • Rejected - Closed: This will tell something about the quality of defects that are raised. Rejects occur most often when registering duplicates or when the desired functionality is not well understood.
  • Tolerated - Closed: Defects that end in this status are defects that are known. When going to production, these are the known issues that the product will be shipped with. We need to keep them separated from the rejected and fixed defects.
  • Fixed - Closed: Defects that end in this status are unambiguously fixed defects whose resolution has been verified.
The following roles are defined for the defect flow:
  • Issuer: Anyone who registers a defect. This is not necessarily a tester.
  • Tester: Someone who is trained in testing, knows the functionality of the product to be delivered and has a brief technical insight into development, infrastructure and architecture. The tester can challenge the not accepted defects and is able to retest defects in their context.
  • Defect manager: Someone who has development insight and the decision power to accept, not accept and assign defects. This role often maps onto a development management role.
  • Developer: A wide role for anyone who can fix defects in the code, infrastructure, parametrization or even in the design documentation. A defect does not necessarily reside only in code. A defect can also be an inconsistency in the design documentation.


On top of this, it is advisable to agree on SLAs for defect resolution times, and on defect severity levels and/or priority levels. They can be combined and interrelated.
I mostly use the following basic levels:

  • Urgent - Prio 1: Defect needs a patch - testers or users are blocked and no workaround can be put in place - Patch required within 2 working days
  • High - Prio 2: Defect might need a patch, but a workaround can be put in place or a less substantial amount of functionality is blocked - Update required within a week
  • Medium - Prio 3: Defect needs to be resolved, but there is a workaround possible - All defects need to be resolved before go-live
  • Low - Prio 4: Cosmetic defects - 70 percent of defects need to be resolved before go-live
Sometimes cosmetic defects increase in ranking, when for example they occur on documents that are sent out to customers.
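The levels above can be captured in a small lookup table so tooling can flag overdue defects automatically. The day counts mirror the SLAs listed; the structure itself is an assumption for illustration (note that plain calendar-day arithmetic is used here, whereas the Urgent SLA speaks of working days):

```python
from datetime import date, timedelta

# Severity levels and SLAs as listed above. resolution=None means the
# criterion is checked at release time (go-live), not on a daily clock.
SLA = {
    "Urgent": {"prio": 1, "resolution": timedelta(days=2)},  # patch in 2 days
    "High":   {"prio": 2, "resolution": timedelta(days=7)},  # update in a week
    "Medium": {"prio": 3, "resolution": None},               # before go-live
    "Low":    {"prio": 4, "resolution": None},               # 70% before go-live
}

def is_overdue(severity, raised_on, today):
    """True if the SLA clock for this defect has run out."""
    resolution = SLA[severity]["resolution"]
    if resolution is None:  # go-live criteria have no daily deadline
        return False
    return today > raised_on + resolution

print(is_overdue("Urgent", date(2012, 11, 1), date(2012, 11, 5)))  # → True
```

A nightly report built on such a table makes SLA breaches visible before they become escalations.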

The status flow and its description below depict the minimum requirements for a defect flow to be implemented.



Each transition is listed as: From status → To status (Responsible): Description.
  • — → Open (Anyone / Issuer): Anyone who finds a defect can register it in the system. A defect has an unambiguous title, a clear description of what is expected and what is observed, and is evidenced.
  • Open → Accepted (Defect manager): Assesses the opened defect and decides that it is valid. The defect is not yet in the pipeline of being resolved, but it has been agreed with a developer to start working on it, taking priorities and timelines into consideration.
  • Open → Not accepted (Defect manager): Assesses the opened defect and decides that it is not valid. The defect is updated with the rationale behind the disagreement.
  • Not accepted → Accepted (Tester): The tester assesses the non-accepted defects and, where they do not agree, discusses with the defect manager and project manager. TOGETHER, they agree that the defect is valid, move it to the Accepted status and directly assign it to a developer.
  • Not accepted → Closed-Rejected (Tester): The tester assesses the non-accepted defects and, where they agree, confirms with the issuer that the defect can be rejected.
  • Accepted → In progress (Developer): The developer indicates that they start working on the defect assigned to them.
  • In progress → On hold (Developer): The developer agrees with the project manager that other defects get priority over this particular defect under fix. The defect fix is put on hold.
  • In progress → Fixed (Developer): The developer indicates that the fix has been made and updates the defect with information on the fix that has been applied.
  • On hold → Closed-Tolerated (Defect manager): The defect manager or development manager proposes defects that can be tolerated in the release of the software to production. The defects that are accepted by all stakeholders move to the status Closed-Tolerated, so that they can be addressed in a later release.
  • Fixed → Retest (Defect manager): When a build or fix is released in a test environment, the defects that were fixed become re-testable. The defect manager updates those defects to the Retest status.
  • Retest → Open (Tester / Issuer): Any defect to be retested is taken up by a tester to verify whether the defect has really been fixed in all instances. When the defect has been partially fixed or not fixed (not when another regression defect occurs), the defect is set back to Open with the according evidence and comments.
  • Retest → Closed-Fixed (Issuer): Any defect to be retested is ultimately retested by the issuer of the defect. When the fix is confirmed, the issuer brings the defect to the final status of Closed-Fixed.
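The transitions in the flow above can be encoded as a small state machine, so a tool can reject illegal status changes. The statuses follow the flow as described; the code itself is only a sketch (the flow defines no transition back out of On hold other than Closed-Tolerated, and the sketch stays faithful to that):

```python
# Allowed defect transitions, taken from the status flow described above.
# None is the state of a defect before it is registered.
TRANSITIONS = {
    None:           {"Open"},
    "Open":         {"Accepted", "Not accepted"},
    "Not accepted": {"Accepted", "Closed-Rejected"},
    "Accepted":     {"In progress"},
    "In progress":  {"On hold", "Fixed"},
    "On hold":      {"Closed-Tolerated"},
    "Fixed":        {"Retest"},
    "Retest":       {"Open", "Closed-Fixed"},
}

class Defect:
    def __init__(self, title):
        self.title = title
        self.status = None  # not yet registered

    def move_to(self, new_status):
        """Apply a status change, refusing anything the flow does not allow."""
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition: {self.status} -> {new_status}")
        self.status = new_status

# Walk one defect along the happy path of the flow.
d = Defect("Billing - Field 'Street' should also accept numeric values")
for status in ["Open", "Accepted", "In progress", "Fixed", "Retest", "Closed-Fixed"]:
    d.move_to(status)
print(d.status)  # → Closed-Fixed
```

Configuring the same transition table in your defect tracker answers, in one place, all three questions a defect process must clarify: which statuses exist, what may follow each, and (by extending the table with the responsible role) who acts on each.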




Friday, November 9, 2012

Introducing proper defect management


A defect process, a process in which project-related defects (bugs) are addressed and resolved, can be run efficiently or can become frustrating and very time-consuming. Certain people don't take up their responsibilities; certain procedures are not clear and therefore not followed. Defects start flying up and down between parties and start revealing some people's deepest emotional frustrations or interesting dialogues; sometimes they even carry the only analysis of a certain topic available in the whole program...
A defect is not an information exchange process, nor is it an analysis addendum or personal ventilation tool. A defect should be clear and unambiguous. The turn-around time of a defect should be quick. Therefore a defect needs to be light and to the point.


Process

The defect status flow is the part most often discussed. You might be using the company's existing process flow. However, even when the process is defined, it is not always unambiguously explained. When your company is less mature in testing and test process implementation, it is advisable to keep status flows as simple as possible. More statuses require more overhead and result in more confusion.
The three main questions you need to clarify in order to describe a defect process are:
  1. What are the circumstances for a defect to arrive in a certain status?
  2. Which role is responsible to treat a defect when it arrives in a certain status?  
  3. What are the next possible statuses a defect can arrive in?


Defect procedures

Defect procedures look into your particular instance of the maybe generic defect flow. Who will take up certain roles? What are the steps to be undertaken in certain situations? Agree on clear responsibilities.
These are the defect procedures I found vital:
  • A defect coordination board, whose main purpose is treating defects that are not agreed upon.
  • Defect retest procedures: who retests where, and how do we know in which environment a defect fix is deployed?

Defect guidelines or instructions 

Defect guidelines or instructions are the good practices you share with all testers and developers when registering a defect. A developer needs to know how to reproduce a defect and a manager needs to see what a defect is exactly about at a first glance.

These are defect guidelines that I find most important:

  • Summary - One line issue description, following a convention of giving the location where the defect occurred and a short and clear description of the defect that occurs: e.g. Billing - Field 'Street' should also accept numeric values 
  • Comments - Comments are added to a defect after it has been discussed. No changes are made to a defect without a complete and concise explanation.
  • Prioritization - Define severity, impact and priority parameters unambiguously. The more levels you invent, the more complex it becomes to keep an overview. Keep it simple.
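Guidelines like the summary convention can even be enforced mechanically at registration time. The sketch below checks the "Location - short description" shape; the exact pattern is an assumption about how strictly the convention would be applied:

```python
import re

# Checks the "Location - short description" summary convention, e.g.
# "Billing - Field 'Street' should also accept numeric values".
# The regex is illustrative; a real tracker hook might be stricter.
SUMMARY_PATTERN = re.compile(r"^[A-Za-z][\w ]* - .+$")

def valid_summary(summary):
    """True if the defect summary follows the location-dash-description form."""
    return bool(SUMMARY_PATTERN.match(summary))

print(valid_summary("Billing - Field 'Street' should also accept numeric values"))  # → True
print(valid_summary("it doesnt work"))  # → False
```

Rejecting malformed summaries at entry is far cheaper than chasing their meaning in a defect board later.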

Defect coordination is mainly communication

You can introduce a flawless defect process, defect procedures and guidelines, but if they are not understood and carried by all involved parties, your defect coordination is doomed. When you are to become a defect coordinator, make sure you involve your main stakeholders. The easiest way to do so is to involve them in a workshop, starting from a decent proposal. In environments where defect management is new, it often gets confused with production issue management. If defect management is a completely new item in the company, it is recommended to do a dry run with your stakeholders.