Friday, November 30, 2012

The intake test

Before we can start testing a product, it needs to be testable for the level of testing we want to perform. To ensure the subject under test is testable, we could require a whole set of measurable requirements, for example:
  • The product (part) is delivered with all its functional dependencies
  • All interfacing products are in place, issues are known
  • The agreed ratio of closed versus total defects of certain severities in previous test levels is met
  • The agreed ratio of tested versus planned test elements in previous test levels is met
  • Test environment is set up and tested
  • Test data is prepared and consistently scrambled over the whole integrated platform
  • Batches are running
  • Configuration is set up and tested
  • Test preparation is done
  • Testers are trained
  • Licenses for tools are handed out and confirmed
  • ...
In reality, these entry criteria are formulated at the very start of the project but are barely maintained or enforced once delivery pressure rises.

But how often do we end up in situations where at least one of those entry criteria is not met, blocking us from proceeding with testing? And how many times did we assess entry criteria as passed when in fact they were not? A defect could be wrongly allocated, users have access to the tools but with the incorrect user rights, the environment is built but cannot be accessed... Releasing untestable chunks of code to testers, or releasing into an environment that is not fit for testing, is despite many project managers' opinion a waste of time and resources.
Environments need to be maintained longer, defects are discussed with more stakeholders and have longer turn-around times.

In short: Bad entry criteria management costs the project time and money.

A pragmatic and easy way to keep a decent check on the status of your entry criteria is to define an intake test for every substantial module under test.
This intake test should be executable in a very limited amount of time and describe the positive flow(s) through your modules under test. Based on the result of this intake test, delivery issues can be found instantly. And no statistical report can compete with simply showing that it works. From the moment the environment is ordered, the team's focus should be on making this test pass, and every reason why it doesn't turns into a critical issue or defect that requires high priority to solve.
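
To make this concrete, here is a minimal sketch of what such an intake test could look like, assuming a hypothetical billing module reachable over HTTP. The base URL, endpoints, credentials and flow are illustrative assumptions, not part of any real system, and the third-party 'requests' library is used for brevity:

    # intake_billing.py - a minimal intake (smoke) test for a hypothetical "billing" module.
    # Base URL, endpoints and credentials are illustrative assumptions.
    import sys
    import requests

    BASE_URL = "https://test-env.example.com/billing"

    def check(description, passed):
        print(("OK   " if passed else "FAIL ") + description)
        return passed

    def main():
        results = []
        # 1. The environment is reachable at all.
        try:
            health = requests.get(BASE_URL + "/health", timeout=10)
            results.append(check("environment reachable", health.status_code == 200))
        except requests.RequestException:
            results.append(check("environment reachable", False))
        if not results[-1]:
            sys.exit(1)
        # 2. A prepared test user with the right role can log in.
        login = requests.post(BASE_URL + "/login",
                              json={"user": "intake_tester", "password": "secret"}, timeout=10)
        results.append(check("test user can log in", login.status_code == 200))
        # 3. One positive flow, end to end: create an invoice.
        created = requests.post(BASE_URL + "/invoices",
                                json={"customer": "TEST-001", "amount": 10.0}, timeout=10)
        results.append(check("positive flow: invoice created", created.status_code in (200, 201)))
        # A non-zero exit code turns every unmet entry criterion into a visible, blocking issue.
        sys.exit(0 if all(results) else 1)

    if __name__ == "__main__":
        main()

Run on every delivery, a script like this gives an instant, binary answer to whether the module is testable at all.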

When the intake test for a module is successful, it can be recorded and stored in a library as proof, and testing for this module can start.

Tuesday, November 20, 2012

An example of a simple defect life cycle

This blog post provides a practical example of what I think is a minimal defect flow, accompanied by the roles and responsibilities involved.

In general the flow has 3 end statuses.
  • Rejected - Closed: This will tell something about the quality of defects that are raised. Rejects occur most often when registering duplicates or when the desired functionality is not well understood.
  • Tolerated - Closed: Defects that end in this status are defects that are known. When going to production, these are the known issues that the product will be shipped with. We need to keep them separated from the rejected and fixed defects.
  • Fixed - Closed: Defects that end in this status are defects whose fix has been verified by retesting.
The following roles are defined for the defect flow:
  • Issuer: Anyone who registers a defect. This is not necessarily a tester.
  • Tester: Someone who is trained in testing, knows the functionality of the product to be delivered and has a brief technical insight into development, infrastructure and architecture. The tester can challenge the not accepted defects and is able to retest defects in their context.
  • Defect manager: Someone who has development insight and the decision power to accept, not accept and assign defects. This role often maps onto a development management role.
  • Developer: A wide role for anyone who can fix defects in the code, infrastructure, parametrization or even in the design documentation. A defect does not necessarily reside only in code. A defect can also be an inconsistency in the design documentation.


On top of this, it is advised to agree on SLAs for defect resolution times and on defect severity and/or priority levels. These can be combined and interrelated.
I mostly use the following basic levels:

  • Urgent -  Prio 1: Defect needs a patch - testers or users are blocked and no workaround can be put in place - Patch required within 2 working days
  • High - Prio 2: Defect might need a patch but a workaround can be put in place or a less substantial amount of functionality is blocked. - Update required in a week
  • Medium - Prio 3: Defect needs to be resolved, but there is a workaround possible. - All defects need to be resolved before go-live
  • Low - Prio 4: Cosmetic defects. - 70 percent of defects need to be resolved before go-live
Sometimes cosmetic defects increase in ranking, for example when they occur on documents that are sent out to customers.
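
Purely as an illustration, these levels and their SLAs can be written down as a small lookup table, together with the escalation rule for customer-facing cosmetic defects mentioned above; the structure and field names are assumptions of this sketch:

    # Illustrative only: the severity levels above as a lookup table.
    SEVERITY = {
        "Urgent": {"prio": 1, "sla": "patch within 2 working days"},
        "High":   {"prio": 2, "sla": "update within 1 week"},
        "Medium": {"prio": 3, "sla": "resolve before go-live"},
        "Low":    {"prio": 4, "sla": "70 percent resolved before go-live"},
    }

    def effective_severity(severity, on_customer_document=False):
        # A cosmetic (Low) defect is ranked one level higher when it appears
        # on a document that is sent out to customers.
        return "Medium" if severity == "Low" and on_customer_document else severity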

The status flow and its description below depict the minimum requirements for a defect flow to be implemented.



Each transition is written as: From status → To status (Responsible) - Description.

  • (New) → Open (Anyone / Issuer) - Anyone who finds a defect can register it in the system. A defect has an unambiguous title, a clear description of what is expected and what is observed, and is evidenced.
  • Open → Accepted (Defect manager) - Assesses the opened defect and decides that the defect is valid. The defect is not yet in the pipeline to be resolved, but it has been agreed with a developer to start working on it, taking priorities and timelines into consideration.
  • Open → Not accepted (Defect manager) - Assesses the opened defect and decides that the defect is not valid. The defect is updated with the rationale behind the disagreement.
  • Not accepted → Accepted (Tester) - The tester assesses the non-accepted defects and, where they do not agree, discusses them with the defect manager and project manager. Together, they agree that the defect is valid, move it to the Accepted status and directly assign it to a developer.
  • Not accepted → Closed-Rejected (Tester) - The tester assesses the non-accepted defects and, where they agree, confirms with the issuer of the defect that it can be rejected.
  • Accepted → In progress (Developer) - The developer indicates that they have started working on the defect assigned to them.
  • In progress → On hold (Developer) - The developer agrees with the project manager that other defects get priority over this particular defect under fix. The defect fix is put on hold.
  • In progress → Fixed (Developer) - The developer indicates that the fix has been made and updates the defect with information on the fix that has been applied.
  • On hold → Closed-Tolerated (Defect manager) - The defect manager or development manager proposes defects that can be tolerated in the release of the software to production. The defects accepted by all stakeholders move to the status Closed-Tolerated so that they can be addressed in a later release.
  • Fixed → Retest (Defect manager) - When a build or fix is released in a test environment, the defects that were fixed become retestable. The defect manager updates those defects to the Retest status.
  • Retest → Open (Tester/Issuer) - Any defect to be retested is taken up by a tester to verify whether the defect has really been fixed in all instances. When the defect has been partially fixed or not fixed (not when another regression defect occurs), it is set back to Open with the according evidence and comments.
  • Retest → Closed-Fixed (Issuer) - Any defect to be retested ultimately needs retesting by the issuer of the defect. When the fix is confirmed, the issuer brings the defect to the final status of Closed-Fixed.
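
As an aside, the flow above can be made explicit as a tiny transition table that any defect tool or spreadsheet macro could enforce; the sketch below only restates the transitions and roles described here and is not tied to a specific tool:

    # The defect flow above as an explicit transition table:
    # (from_status, to_status) -> responsible role. None stands for a newly found defect.
    TRANSITIONS = {
        (None, "Open"): "Issuer",
        ("Open", "Accepted"): "Defect manager",
        ("Open", "Not accepted"): "Defect manager",
        ("Not accepted", "Accepted"): "Tester",
        ("Not accepted", "Closed-Rejected"): "Tester",
        ("Accepted", "In progress"): "Developer",
        ("In progress", "On hold"): "Developer",
        ("In progress", "Fixed"): "Developer",
        ("On hold", "Closed-Tolerated"): "Defect manager",
        ("Fixed", "Retest"): "Defect manager",
        ("Retest", "Open"): "Tester/Issuer",
        ("Retest", "Closed-Fixed"): "Issuer",
    }

    def move(current_status, new_status, role):
        """Return the new status if this transition is allowed for this role, otherwise raise."""
        responsible = TRANSITIONS.get((current_status, new_status))
        if responsible is None:
            raise ValueError(f"{current_status} -> {new_status} is not part of the agreed flow")
        if role not in responsible:
            raise ValueError(f"{new_status} is owned by the {responsible}, not the {role}")
        return new_status

    # Example: a defect manager sends a fixed defect out for retesting.
    status = move("Fixed", "Retest", role="Defect manager")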




Friday, November 9, 2012

Introducing proper defect management


A defect process, a process in which project-related defects (bugs) are addressed and resolved, can be run efficiently or can become frustrating and very time consuming. Certain people don't take up their responsibilities; certain procedures are not clear and therefore not followed. Defects start flying up and down between parties and start revealing some people's deepest emotional frustrations or interesting dialogues; sometimes they even carry the only analysis of a certain topic available in the whole program...
A defect is not an information exchange process, nor is it an analysis addendum or personal ventilation tool. A defect should be clear and unambiguous. The turn-around time of a defect should be quick. Therefore a defect needs to be light and to the point.


Process

The defect status flow is the part most often discussed. You might be using the company's existing process flow. However, even when the process is defined, it is not always unambiguously explained. When your company is less mature in testing and test process implementation, it is advised to keep status flows as simple as possible. More statuses require more overhead and result in more confusion.
The 3 main questions you need to clarify in order to describe a defect process are:
  1. What are the circumstances for a defect to arrive in a certain status?
  2. Which role is responsible for treating a defect when it arrives in a certain status?
  3. What are the next possible statuses a defect can arrive in?


Defect procedures

Defect procedures describe your particular instance of the, possibly generic, defect flow. Who will take up certain roles? What are the steps to be undertaken in certain situations? Agree on clear responsibilities.
These are the defect procedures I found vital:
  • A defect coordination board, whose main purpose is to treat defects that are not agreed upon.
  • Defect retest procedures: who retests where, and how do we know in which environment a defect fix is deployed?

Defect guidelines or instructions 

Defect guidelines or instructions are the good practices you share with all testers and developers for registering a defect. A developer needs to know how to reproduce a defect, and a manager needs to see at first glance what a defect is exactly about.

These are defect guidelines that I find most important:

  • Summary - A one-line issue description, following a convention of giving the location where the defect occurred and a short, clear description of the defect: e.g. Billing - Field 'Street' should also accept numeric values
  • Comments - Comments are added to a defect after it has been discussed. No changes are made to a defect without a complete and concise explanation.
  • Prioritization - Define severity, impact and priority parameters unambiguously. The more levels you invent, the harder it becomes to keep an overview. Keep it simple.

Defect coordination is mainly communication

You can introduce a flawless defect process, defect procedures and guidelines, but if they are not understood and carried by all involved parties, your defect coordination is doomed. When you are to become a defect coordinator, make sure you involve your main stakeholders. The easiest way to do so is to involve them in a workshop and start from a decent proposal. In environments where defect management is new, it often gets confused with production issue management. If defect management is a completely new item in the company, it is recommended to do a dry run with your stakeholders.


Wednesday, October 3, 2012

How software testing can contribute to a decrease in software quality

Time after time, project, program and line managers have tried to impress me by telling me they would increase the budget for testing, hoping to boost my interest and commitment as a test manager.
Hallelujah! I thought, when this happened to me for the first time.
I felt the pressure as well as my growing set of responsibilities. Doubling a test team in the heat of the moment, with only a couple of months left to deliver a massively buggy program, is not easy.

Especially when, at the same time, analysts are being fired, development is delayed, entry criteria are used to mop the vermin floor and the deadline remains carved in stone.

As we all know, project delivery is always a fragile balance between cost, time and quality.
While delivering within time and budget constraints and trying to aim for quality, in reality quality often decreases despite the increased focus on testing.

How can this possibly happen?
 
1. Start testing too early

It's the project manager's first responsibility to deliver on time and within budget. To do so, the test manager easily gets overruled when entry criteria are not met. Starting to test when a product is in fact not yet testable means that you start registering defects that are probably already known and being taken care of. Looking into known defects is a waste of development time and budget, which will without exception be rewarded with decreased quality.

2. Replace analysts with testers when analysis is done
The analysts are too expensive to keep on the project, but they are also the most aware of the difficulties and issues. Replacing them with testers is not only cheaper, but also increases the team's freshness. The real knowledge about the product and its issues walks away together with the analysts. The quality of the product inevitably decreases.

3. Make testers and client responsible for product quality
Test cases are the full responsibility of the testers. Testers get demotivated because their questions go unanswered now that the analysts have disappeared. As an extra safeguard, it is common practice to have the client validate the test cases, since they are responsible for acceptance. This altogether easily turns into an ass-covering exercise.
The issues have a smaller chance of being found during testing, and when they are found, they are more easily accepted.

4. CMMI Levels and TPI levels are used as silver bullets
The process is sacred. Making sure that everything is done by the book makes a project easy to manage. Following predefined processes seems to add quality. When complexity increases, it is more tempting to fall back on the known processes and procedures, avoiding failure. However, following processes without identifying anomalies decreases product quality.



5. Use test management tools to control testing activities
Test management tools can easily contribute to 'report-driven testing'. Tests, for example, need to be created before execution and cannot be added to or run twice, because we need to be able to plan how many test cases will be run per day per person and report against this plan. Test management tools give management the illusion that testing is defined and measurable, and testing then tends to get micromanaged easily. This can incapacitate testers in doing their actual job (finding defects as soon as possible) and turn them into mindless test script executors.



Tuesday, September 18, 2012

Quality Insurance


How does a change in the software affect your business process?
What's the risk of having an issue in production? What is the probability of failure, and what is the damage when it occurs?


Most business fear lies in signing off analysis documentation to send it to development. They suddenly become responsible for knowing what they want and how they want it, before ever trying it out. Making mistakes is human; not being able to see the future is human as well. So why would you sign a document, knowing mistakes are most probably inside?
What else could we do, apart from agile product delivery, to deliver the client's explicit and implicit changing needs? And even there, how can we prevent a business from going broke as a result of a software defect that was not found during development?

For example:
http://www.reuters.com/article/2012/08/02/knightcapital-loss-idUSL2E8J27QE20120802

The answer might be.. QUALITY INSURANCE (or QI)


Why the term QI?

Quality insurance is a term used in opposition to the often wrongly used term 'Quality Assurance'.

How would that conceptually work?


We would distinguish between software and hardware insurance.
By default, this insurance system would not focus on the underlying infrastructure, but neither is the infrastructure explicitly excluded from the insured scope.

A software insurance policy could be taken out after a positive Go/No-go decision and would start at the end of the supplier warranty.
For starters, the business processes in scope of the insurance policy would be described by a team of experienced analysts and testers.
An intake is done on the delivered product during a technical go-live in a production-like environment.
This intake would consist of an exploratory test.

Possible product issues and risks are reported. Per identified risk, an insurance fee is mapped to an insured amount.
The interesting risks to insure are the ones with a very low probability of occurrence, but having a very high impact on failure.

The company can then decide to reduce risks (and insurance costs) by fixing defects, if the fix cost is lower than the fee, or decide to insure itself against the risk.
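
As a toy illustration of that trade-off (all figures below are invented): the expected loss of a risk is its probability of occurrence times its impact on failure, and the company weighs the fix cost against the premium it would otherwise pay.

    # Toy illustration of the insure-or-fix decision; every figure is invented.
    def expected_loss(probability, impact):
        return probability * impact

    def decide(probability, impact, fix_cost, annual_premium):
        # Fix the defect when that is cheaper than insuring the residual risk.
        if fix_cost <= annual_premium:
            return f"fix now: {fix_cost} <= premium {annual_premium}"
        return (f"insure: premium {annual_premium} against an expected loss of "
                f"{expected_loss(probability, impact):,.0f}")

    # A low-probability, high-impact risk - the kind this post argues is the interesting one to insure.
    print(decide(probability=0.001, impact=400_000_000, fix_cost=250_000, annual_premium=50_000))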

   
What would be the Strengths?
  • Increase the business attention on product risks. Issues can be resolved before go-live.
  • The business can go live with a product they are not sure about, resulting in an overall decrease in project failure.
  • Direct linking of risks and issues with costs. This confronting exercise increases awareness.

 What would be the Weaknesses?

  • Resolving issues late in the development cycle is still more expensive. The direct cost of software delivery is not reduced but rather increased due to the additional insurance policy.
What would be the Opportunities?
  • Level out the currently overrated importance of certification - a tester will need to prove their worth. Development quality will also get more attention.
  • The real value of a good tester becomes visible. The tester's function would get more appreciation and direct value. 
  • The value of the Exploratory Testing method can be proven. Only by exploring, focusing and de-focusing can the dangerous underlying issues, inconsistencies and process gaps be found... and insured.
  • Increase software quality by increasing the awareness of the cost on failure.
What would be the Threats?
  • Decrease of software quality and user satisfaction due to competitive insurance policies. 
  • Decrease of insurance quality due to competitive insurance policies

Tuesday, September 4, 2012

10 Signs that you're not cut out to be a tester

I had quite some fun reading through the signs indicating why you would not be cut out for IT, or not cut out to be a developer or an IT consultant, on the TechRepublic blog:
http://www.techrepublic.com/blog/10things/10-signs-that-you-arent-cut-out-for-it/3072

I don't mean to say that it all made a whole lot of sense to me, but I did notice that one particular blog post was still missing... so I decided to suggest this one myself:

These are my top 10 signs that you're not cut out to be a tester:

1. You get nervous from buggy software

The software you're facing on a day-to-day basis is going to be full of bugs and will look like a poor Picasso imitation. You'll need to find workarounds. And when a bug has been resolved, you'll be facing the next problem. You'll have to go through it all again.

2. You're unaware of business expectations

If you're not aware of business expectations, you'll overlook half of the defects that are in front of you. You'll read past inconsistencies and you won't question the rationale behind an architectural solution that has been proposed. An overview of what business wants and why they need an application gives you structure and focus on finding the defects that potentially have a high impact.

3. You get tired of explaining defect occurrences

Not everyone knows the system like you do. Not everyone understands the reason why a bug is a bug. You'll need to explain it... again and again. Even when it works on their machine...

4. You don't read blogs, books or attend conferences about software testing

A tester keeps up to date on the latest evolutions in tools, techniques and methods and can learn from the practical experience of other testers. It's no use inventing what has already been invented for you. It is even less useful not to learn from others' mistakes.

5. You're ashamed of your role in the project

A tester could also be an analyst, a developer, a designer or an architect, but chooses to be a tester. A tester is not a developer or analyst approaching retirement. A tester is proud of their job and aims at improving software quality. They aim to challenge development, analysis and architecture on their correctness, completeness and coherence. A tester is proud of, and dedicated to, finding the human flaws in a program before it is released to production.


6. You know how to check but you don't know how to explore

Checking against expectations is a given. This needs to be done. After this, the software needs exploration. How else are we going to find those undefined features and prove that if you click the left mouse button 6 times, then do a page refresh while keeping your right mouse button clicked, then release the right mouse button and disconnect the keyboard, your money has gone from your account but has not been transferred?...


7. You are not keeping up to date on the technical aspects of IT

IT terms are not just hype words for a tester. A tester has to have an understanding of OO, has to know what a web service or a message queue is, what the function of an isolation layer is, or why both server- and client-side validations can be implemented when building a website. A tester needs to constantly improve and understand what they are technically dealing with, as every technicality has its weakness. That's most probably where the defects will be.

8. You don't like to communicate

Communication, communication, communication. Testing is about finding bugs and inconsistencies and communicating about them.

9. You are not aware of the application development life cycle

A tester needs to know about architecture, design, analysis, development, infrastructure, release management, ... and also how they fit together. A tester could even carefully address flaws in the application life cycle, since they might result in software bugs later on.

10. You can't get over defects you found and don't get fixed

Face it. You'll find those defects that are obvious and that you would never want to see in your software. However, you're not the one paying for every single defect to be fixed. Decisions will be made that might not make you happy. Your job is to point to the risks and issues of not fixing a defect. Your job is not to get them resolved.

Saturday, September 1, 2012

Becoming an Inspiring Test Coordinator

Many starting testers would like to become test coordinators instead of better testers. However, many good test coordinators are good all-round testers with extra coordination skills. I don't know how good a tester I would make myself, but I know I have always liked testing and I keep improving my testing skills, even though officially I haven't been testing for the last 3 years.

Seeing Test Coordinators, and working with and for them, I realized there are only a few inspiring leaders among them. The biggest pitfalls that keep a test coordinator from becoming an inspiring leader are the following:

"The common enemy" or "The common goal"?

As a coordinator, you'll need to run test activities like clockwork. The best way to do so is by making your individual testers work as a team.  Bringing people together can be done the easy way, by finding the common enemy, or the hard way, by identifying the common goal. The common goal sounds a bit naive maybe. Facing a goal that you might not reach also requires some sort of bravery. Defining achievable targets towards the common goal, is the key to move on and increase the team spirit on success and even on failure.


"Driving the heard" or "Leading by example"?

Telling people what to do, often results in the opposite effect. The best way to get respect,  is to take part in the test activities. So you better aim to be a good tester, if you ever want to become a good coordinator. Coaching is part of the job. If a tester doesn't know how to get started, you will have to come up with something that works. If your testers miss out on something, it's your job to find out about it and make them aware. 

"Responsibilization" or "Taking the problems away"?

Your testers will face issues constantly. Unstable environments, data restore issues, changing user expectations, defect discussions... Issues will overcome them on a daily basis. You have 2 options: try to let your testers solve the issues by making them responsible, or take their problems away. Taking problems away is often a hard job and might not be very rewarding. You'll need to make your team aware of how you manage their problems; you can even ask them for advice.


"Push pressure further" or "Push pressure back"?

Project managers like to hear that all is going well and tend to become a pain in the ass when testing is not proceeding as planned. Mostly that is because all the buffers they placed in the project have been used and your poor test activities are getting squeezed in the timeline. Your job is to make sure that testing continues like clockwork and to keep management's changing ideas, controls and unnecessary reports as far away from your testers as possible.

"Mushroom coordination" or "Transaparant coordination"?


Knowledge is power. As a test coordinator, you know more than your team members. You take part in steering and coordination meetings. You can use this knowledge either to outsmart your team members and stay one step ahead of them, or you can use it to inform your team members, for which they will reward you with trust.




Sunday, August 26, 2012

How to start a new mission as a Test Manager

Whether starting to work at a new company or starting a new mission that sometimes even looks pretty similar to the previous one, I always find that every mission turns out quite different in its objectives and challenges.

In this blog post I will focus on how I learnt to orient myself in a new job as a test coordinator or test manager.

Who will I be working for? 

This might be a Project or Programme manager or Line manager - but there might be other important stakeholders out there. Find out who they are. Who will need/want information from you? They can be persons trusted by higher management, key business users, release managers, operation teams...These are your stakeholders. Know who they are and what they want, as you will be working for them and reporting to them (this can be directly or indirectly).


Who will I be working with?
  • Who are your team members that will be reporting to you? What are their skills? What do you expect and what can they deliver? Find out what you can learn from them and what you will need to teach them. Also here, involve your stakeholders so that you can align the stakeholders' wishes with the budget they need to set aside for it.
    Your team can be on shore or off shore, external or internal, and work under different contractual situations. You need to get a hold of each of them and find a way to make all of them work together as ONE team.
  • Who are your direct colleagues that you need to share information with? Release management or deployment management, delivery management, analysts, designers, architects, project managers, development, infrastructure... Get to know them and find out for each of them how you will exchange information and what you will require or what might be required from you. You'll need a project plan, a methodology, an architectural blueprint, a data model, designs, analysis or a test environment delivered in a specific way so that you can use it for testing. You'll need to address questions, issues and defects in such a way that they accept them and understand your team's role.
Watch and observe. Treat your colleagues and team members with respect. Understand what they need from you. Make them understand what you need from them. Communicate with them.

What does the project team structure look like?

You can be working in many different kinds of structures using horizontal or vertical divisions, flat structures, resource pools, very hierarchical structures or even complete chaos. A project manager might in reality be an issue manager or a project management officer. A business analyst might be an architect, and a designer might be an analyst. Get an understanding of the project structure and your responsibilities. Find out how the informal and formal roles and responsibilities are defined and where you find yourself in this structure. You can even test the project structure, identify the gaps and bring them up to your management. It's always your job to consult in favor of quality delivery.
 

How is testing implemented and what are my responsibilities? 

  • Look at test levels - Are you responsible for unit, system testing, system integration testing, acceptance testing? Who will be performing the tasks? Define your scope and check with your stakeholders.
  • Look at reference material for testing - What will you need to perform testing and to prepare for testing? Assess the quality of the reference material and inform about possible risks or issues.
  • Look at entry and exit criteria - Are you responsible to report on entry criteria or to even monitor them up front?
  • Look at the testing approach - How is testing performed currently? Can it be improved? Point to gaps in the used methods and to their risks to your stakeholders and find out if those risks are accepted or if you suddenly increased your job scope.  
  • Look at test environments and infrastructure - Are you responsible for defining, ordering, maintaining test environments or test infrastructure, defining or improving release processes? 
  • Look at test types. Are you responsible for functional testing only or also for usability testing, security testing, regression testing, performance testing... ? Do you have the required skills in your team to address the required test types to be performed?
  • Look at test tools. Which test tools will you require, or which are standard in the company? Do you need to define or adhere to procedures? Make sure all your team members master the tools and, if required, contribute to the tool selection and setup.
  • Look at communication and reporting patterns - What is discussed in which structural meetings? Which deliverables, reports and metrics are expected at which times? What would you think would be required? Come forward with proposals.
Refer to existing, representative products and processes in the company. Find out how testing has been performed in the past. What was expected, what was delivered and how was the result perceived by stakeholders?

Don't go in and re-invent the wheel. Gather the existing knowledge.  If there is one, adopt the existing way of working. Bring your colleagues, team members and stakeholders towards improvement step by step.


Some tips and tricks that help me
  • Make a list of your colleagues' names. You can make a drawing of the desks in the office and put their names on it.
  • Document what you learn as you learn it (if not yet documented) and store it in a central place. Let other team members contribute to that.  (special phone numbers, creation of test users,...)
  • Hear people out. Ask for their opinions. Show that you can listen - and learn from their skills and experience.
  • Wear clean, washed clothes. Don't go meeting people directly after smoking.
  • Say good morning and good evening to everyone on the floor.
  • Don't directly say YES or NO if you don't completely understand the request or question, but say you will look into it and come back the next day.
  • Set up workshops with your business users and identify the important end-to-end scenarios with them as soon as possible. This will keep you focused during project delivery.
  • Always get to an agreement with at least your key stakeholders separately before proposing new elements to them together in a meeting.
  • Estimate and plan, using techniques and common sense. Make plans with input from your team members. Make them all agree on and commit to the plans before presenting and committing a plan to your management.
  • As a people manager, try to make a positive difference for your people. 

Monday, August 13, 2012

List of requirements for a new test environment

It was the first time we needed an integration test environment, and we didn't have one yet.
There was no infrastructure, no procedures, no version control, no data restore process,...
What we needed was a controlled test environment where we could test integration for a project of considerable size.

As a tester, I know that knowing what you want is not obvious, and putting it all together to make it work is even more difficult. So a long brainstorming session with all stakeholders, some Google sessions and blog-scans came afterwards.
It led to the following list of requirements that we used as a base to start building our integration test environment.


Testability requirements

  • All applications that exist or are to be delivered in the production environment have to exist or be simulated in the test environment.
  • A list of settings and configurations is available, together with a comparison against the settings and configurations of the production environment.
  • Required tools and licenses are available for testers (if for any reason tools cannot be provided, a workaround is in place to facilitate testing)
  • Front-end devices to access software through different channels are available (card readers, GPS units, desktops, laptops, pads, phones,...)
  • Batch procedures are internally controllable and automated as in a production environment (manual trigger, start, pause, stop batch procedures)
  • Measuring tools are in place to enable monitoring the communication between applications and application layers.
  • External Connectivity to test environments for support, deployment and testing is required (as different vendors need to deploy and test their software and integration on the environment)
  • Time traveling is possible in at least one of the following means: "time goes by faster (eg: 1 week = 1 hour)", "move forward and reset from a defined time in the future", "coherent data creation/update to a  time stamp in the past"
  • Input and/or output simulators are in place and documented for every interface type
  • All configurations of the different interfacing systems within the test environment have to be centrally managed 
  • At least 1 test user exists for every defined role in the application under test
  • Database read access is required for all testers
  • The test infrastructure has an availability of 95% (planned deployments are not included)
  • Defects are centrally managed, using 1 agreed defect managing system and flow 

Test data requirements

  • Any production data loaded into the test environment is scrambled so that it still fits logically together but is impossible to link back to actual clients in the production database (see the sketch after this list)
  • A representative set of production(-like) test data can be loaded into the test environment
  • Back-up and restore procedures exist to back up and restore a partial or full data extract
  • Back-up and restore procedure does not take longer than 4 working hours
  • Back-up and restore procedure should be possible at least on a weekly basis with a lead time of 1 working day
  • Full Data refresh with scrambled production data is possible on request  with a lead time of at least 1 working week. 
  • A procedure to request a data refresh is implemented
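
As referenced above, here is a minimal sketch of what consistent scrambling could mean in practice; the hashing scheme, salt, example rows and column names are illustrative assumptions only:

    # Illustrative, deterministic scrambling: the same production value always maps to the same
    # fake value, so relations across tables stay logically intact, while the mapping cannot be
    # reversed to an actual client.
    import hashlib

    SALT = b"rotate-me-on-every-refresh"

    def scramble(value):
        digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
        return "CLIENT_" + digest[:10].upper()

    def scramble_rows(rows, columns):
        # Apply the same deterministic scrambling to the given columns of every row (a dict).
        for row in rows:
            for column in columns:
                if row.get(column):
                    row[column] = scramble(row[column])
        return rows

    # The same client name in two different extracts ends up as the same scrambled value,
    # so the data still fits logically together:
    clients = scramble_rows([{"name": "Jane Doe"}], ["name"])
    invoices = scramble_rows([{"name": "Jane Doe", "amount": 120}], ["name"])
    assert clients[0]["name"] == invoices[0]["name"]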
 

 Deployment/Release management requirements

  • Version control is applied to all development products and system documentation
  • Deployments are always done from 1 agreed central data store
  • Roll-back scenarios are in place for every deployment cycle
  • Users are trained and informed about deployment management agreements
  • A patch procedure is known, documented and implemented
  • Deployment tasks follow formalized checklists - verified by the test team
  • Builds or data refreshes in a test environment only take place after approval of the impacted test teams
  • A deployment/release management role is assigned
  • Every defect fixed for a new deployment in the test environment is at least communicated within an agreed defect management tool 
  • Every deployed, fixed component is at least communicated through a build note
  • Builds are subject to formal and/or informal entry criteria

Maintainability requirements

  • Procedures to add, delete or update hardware components exist and are followed
  • The test environment components are sized in relation to the production environment
  • Log files are available for the different applications deployed on the test environment
  • Procedure exists to request and add test users for any application in the test environment
  • Disaster recovery plan is known, documented and implemented
  • Unexpected downtime of the (partial) test environment is communicated to all stakeholders (preferably using a central communication channel - for instance an intranet website)
  • Physical locations of different infrastructural components are identical to the production environment and documented. Any deviation is clearly identified and approved by the project team 



I definitely forgot some requirements, and not all of them are actually in place. But based on these we set up an environment that we can control with a handful of people and test in properly. We are agile enough to patch and refresh data properly, which improved the test experience drastically.
Any suggestions to improve this list are welcome.

Wednesday, August 8, 2012

How to deliver a quality product


The mind-map I'm posting has been created by looking back on different projects and consulting different colleagues, including my wife, the best Business Analyst I've ever seen. It is most probably not complete, but I think it is ready to be shared.

The idea behind this mind-map is to focus on the key aspects required to deliver a quality software product of considerable size, and on how testers can contribute to a higher level of quality in the product to be delivered.





If a tester wants to contribute to quality, they can do more than testing a product when it is delivered. Testers have many other duties during software development. They know the product and project in- and outputs and are aware of their risks. They understand the changing business needs and their explicit and implicit requirements. They challenge the architecture and analysis, coach developers and acceptants to check their delivered product effectively on standards. They facilitate in environment and data preparation. A tester understands and contributes to the release and deployment mechanisms. 


Planning & Control

It is the tester's job to:
  • Take part in the project planning and approach; challenge the planning
  • Look into the processes and challenge them in function of quality delivery
  • Find dependencies and flag them up to project/programme management
  • Understand the test requirements and build a log of them
  • Advise on test tool selection, taking into account the budget, timeline, experience and test requirements.

 Design

It is the tester's job to:
  • Understand the product needs and rationale behind the project
  • Test the designs and architectural models and point to their weaknesses
  • Find and report flaws in communication patterns and stakeholder involvement
  • Coach acceptants in the definition of E2E prototyping test cases; lead them and help them in the  E2E test preparation. 

Delivery

It is the tester's job to:
  • Test, test, test, and automate tests
  • Coach developers in checking against standards
  • Contribute to and cooperate closely with deployment and release management
  • Coordinate proper and clean bug reporting and the bug life cycle 
  • Provide and manage the required test infrastructure.

Acceptance

It is the tester's job to:
  • Coach the acceptants in their verification and testing process
  • Analyze and report test results in simple and understandable language
  • Provide release advice
  • Coordinate E2E test execution.

Communication

A tester is a great communicator. They understand the business needs, reason with analysts, challenge architects and managers and coach users and developers.

    Thursday, August 2, 2012

    Why TMap should be called CMap


    As I said before, I've been raised with TMap. And since I arrived in what one could call 'test puberty', I have started to make up my own mind about testing and to find my own way. My adventure led me to a great blog post that made me think about testing and checking (http://www.ministryoftesting.com/2012/07/mindmaptesting-and-checking/), which led me to Michael Bolton's blog (http://www.developsense.com/blog/2009/08/testing-vs-checking/) and made me realize that TMap (Test Management Approach) should actually be called CMap (Check Management Approach).
    If the above 2 links are new to you, I would recommend to go and read them.
    I don't have the time or the will to go into every detail of TMap to make my point, as this "bible" counts over 700 pages of valuable information about testing. Therefore, I will touch on the most important parts of the book and explain the changes I would propose for the next version(s).
    Let’s start with the definition of testing.
    TMap®: Testing is a process that provides insight into, and advice on, quality and the related risks
    I would like to split it out into 2 new definitions
    1.     Checking is a process that possibly provides insight into, and advice on, quality and the related risks
    2.     Testing is an investigation that provides insight into, and advice on, quality and the related risks.
    Why is testing not a process?
    A process is a set of interrelated tasks that, together, transform inputs into outputs (http://en.wikipedia.org/wiki/Process_(engineering))
    So we should have:
    -        Input: Test base, test cases, expected results
    -        Output: Defects, test report
    In reality, testing cannot be a process. Why not? Because we can't plan where we will find the bugs, so we can't plan where we will need to look and for how long. Was the analysis/development consistent with the company's history? Is it compliant with the business process? What were the real user expectations? In order to answer that, we have to plan an investigation and re-plan based upon our findings.
    Checking can be mapped out in a process and can be easily planned. What do we check? When do we check? How much re-checking do we estimate? Which bugs did we find, checking which requirements? We can check if requirements are implemented correctly. We can check if field validations represent the analyzed error messages, we can verify the load and stress requirements...
    Check specification (scripting of checks) can only be done for the parts that we can check against expectations or a predefined standard. Testing is what we do outside those boundaries. What else can the system do that was not specified? Do we want the application to be able to do this? Is it important to test this?

    What is testing?
    When we elaborate further on testing, according to TMap, each definition of testing compares a product with a predefined standard.
    Of course this is the ultimate way to deliver a set of beautiful metrics later on. But what about those expectations that were not predefined? What about those orphan features, those implicit requirements or poorly communicated company standards… or those gaps that nobody wanted to add to the predefined standard? Are those out of scope for testing because they were not defined?
    For Checking, I would agree. Checking is this very important task that great developers do and that is of high importance during the acceptance phase of product delivery.

    The 4 Essentials
    1. Business driven test management
    This area of TMap offers a great method and toolset to prepare for checking and testing with a correct business focus and performing proper product risk analysis. From the moment the risks are defined in the BDTM life cycle, testing and checking in my opinion part ways.
    Checking continues perfectly in line with the TMap methodology, where at a certain moment in time check goals are consolidated, product risks are defined, test and check thoroughness is defined and check-scripts are written and executed.
    Testing however will not consolidate. Risk areas might be reduced or increased, new goals might be introduced and other goals will disappear. Testing will be focused on those risks and goals within a changing environment. There will be more cycles passing through business, there will be less script preparation, but a lot of testing will take place and will get documented.
    I would propose to separate BDTM from TMap as this toolset and method is a very valuable one for testing as well as for checking. I can only recommend it to anyone who hasn’t used it before.

    2. The structured test process
    See definition of testing. Testing is not a process, it is an investigation.
    Checking however can follow a process. The one described in TMap offers a great way to do so. My proposal is to change testing into checking here for the sake of terminology and testing.

    I have to add here that there is one lovely part in the TMap test process. The specification of the test infrastructure, and the checklists provided to do so, helped me quite a lot to specify, and not forget, all the artefacts I had to order to enable testing.

    3. Complete toolbox
    For starters, a toolbox in my opinion is never complete. Only a sales department will try to convince me that they can offer me a complete toolbox... not an expert using tools. To be fair, TMap doesn't repeat the statement further in the book. It is mentioned that there is a large set of tools present, which cannot be denied.
    I would still request the addition of the tea-test and the shoe-test, for example. I tend to use those tools on every occasion I get to test software and, sadly enough, they remain very rewarding.

    TMap offers a great bunch of checklists and templates on http://www.tmap.net/
    I have seen and used almost all of the TMap test tools but always seem to come to the same conclusion that they are check-tools or used as check tools when they really could be test tools.
    For example: The well known BVA (Boundary Value Analysis)
    BVA is a great test tool but if you are looking for boundaries only in the predefined standard, you turn your test tool into a check tool. How do you know for sure if you know the real boundaries or if you know all boundaries? Checking applies BVA to the information provided in the standard. Testing is looking for the real boundaries in the system.
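
    To make the difference tangible, here is a small sketch with a made-up age field: checking applies BVA to the only boundary the specification mentions, while testing probes the input space to discover boundaries nobody wrote down. The field, spec and byte-sized storage are all invented for this illustration.

        # Made-up example: the specification for an age field only says "age must be at least 18".
        def accepts_age(age):
            # Imagine this is the system under test: it silently stores the age in a single byte.
            return age >= 18 and age == age % 256

        def check_documented_boundary():
            # Checking: BVA applied to the only boundary the predefined standard mentions (17/18).
            return {age: accepts_age(age) for age in (17, 18)}

        def explore_for_real_boundaries(limit=1000):
            # Testing: probe the input space to see whether the system has boundaries nobody wrote down.
            return [age for age in range(18, limit) if not accepts_age(age)][:3]

        print(check_documented_boundary())    # {17: False, 18: True} - the scripted checks all pass
        print(explore_for_real_boundaries())  # [256, 257, 258] - an undocumented boundary at 255/256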

    4. Adaptive method

    I have to admit that TMap is an adaptive method. You don’t have to use any of it if you don’t want to and you can pick and choose the useful things that are relevant for your context.
    I propose to use the following great features TMap offers
    -        BDTM (taking into account that testing scope changes)
    -        All provided checklists and techniques
    -        Test Infrastructure part of the test process

     

    Conclusion

    TMap is a great method for checking and offers a bunch of useful checklists and templates. Those are downloadable from http://www.Tmap.net.
    BDTM is a good check Management method and offers a good base for a test management method.
    I would therefore extract BDTM out of TMap and re-brand TMap to CMap.
    I would refer to for example http://www.ministryoftesting.com or http://www.developsense.com to get to know what you're missing out on if you only check and don't test.

    Thursday, July 26, 2012

    Why testers know best...

    Because testers ask questions when they don't understand
    Because testers think about what can go wrong and not how it should work
    Because testers are not dedicated to go live in time
    Because testers still want to go live in time more than finding bugs that don't matter
    Because testers love finding bugs that matter
    Because testers don't have much advantage in telling 'I told you so'
    Because in the end, if a problem is not fixed in time, the tester suffers from that so they are driven to find issues and defects early
    Because testers don't have 'babies' to protect and therefore are the least biased people in the project
    Because testers have a critical technical eye for detail and understand what business wants at the same time


    Tuesday, July 24, 2012

    Forming Storming Norming Performing


    We kicked off with a team of completely diverse people, starting a project that was going to be the biggest challenge most of us ever faced.

    The first months of the project were more difficult than on any other project I had worked on before. Finding out our places in the project, how we would fit into the puzzle, was one of the biggest challenges for everyone. At first conflicts were avoided, and specific tasks were not performed. Later on, some of us got caught up in quarrels and fights, and after months of hard work, as a team, we contributed to a late and poor quality delivery.


    After a couple of months of suffering, we learned at a team building event that this is very normal. After that we found that, with good leadership, team members are carried through the different phases of a team-building process.
    The model I'm talking about is Tuckman's model, which considers the Forming, Storming, Norming and Performing phases in team building.

    If you ever get a chance to look at your own team from a distance, you'll definitely recognize this model, and if you're the manager or leader of a newly created team, you definitely need to know these things if you want to improve your team atmosphere and quality of delivery.

    Forming

    The team is assembled. Team members don't know each other well and tend to act independently. There is no real group-feeling.  Conflict is avoided. Team members have a need for guidance.

    Storming

    Team members start to take their positions in the team. This process often leads to tensions between team members and contradictory ideas.

    Norming

    Team members are aware of the need for rules and methods. Those rules and methods are defined within the team. The common objectives become clear and the required roles are divided over the team members. The team, with all its members, takes responsibility for the delivery of the objectives. The risk during the Norming phase is that the team loses either its creative spirit or the drive that brought it to the Norming phase.

    Performing

    The team is working together harmoniously to achieve the common objectives. The team works independently and makes collaborative decisions. Performing teams are identified by high levels of independence, motivation, knowledge and competence.



    Unfortunately, many teams never make it past the Storming phase. However, I dare to say that we made it at least to the Norming phase as a team, yet we still have a lot to deliver...

    Monday, July 16, 2012

    Bug Fix Bingo

    How to dynamically turn frustration of testers into positive energy?


    Bug Fix Bingo
    Developers seem to return with interesting reasons why a bug is not 'really' a bug.
    Testers on the other hand, need to understand that developers don't produce code but create art-work.

    Now, tester, when you come back from a soft-skill battle with the developer in which you finally, without hurting their feelings, were able to make them aware of this little mistake they made, you deserve a little prize as a reward for your empathy and communication skills.
    Instead of getting frustrated and going back to your PC with a headache, you will be allowed to participate in the Bug Fix Bingo.


    bug fix bingo screenshot

    To the developers: please don't take it too personally. This Bug Fix Bingo keeps your testers sane and, in the end, those bugs really need fixing.
    The game was created by K. J. Ross & Associates ( http://www.kjross.com.au/page/Resources/Testing_Games/)


    Saturday, July 7, 2012

    Let's get rid of the safety net


     I couldn't possibly say it better than Gojko did... but I can write about it.

    What Gojko came to tell us at the EuroSTAR conference of 2011 was that software testing as we most often see it - a phase of executing test cases after the product development phase has ended - does not contribute to quality.
    It contributes to building a safety net for developers, managers, and even testers.


    Since testing is done after development, developers get encouraged to become lazy as they don't need to make sure that their development still really works.

    Testing should not be about
    • Logging defects that developers actually already know about. When, for example, simple client-side field validations don't work, developers could know before a tester knows.
    • Waiting for a product to start testing, and designing tests against a formalized analysis to cover our asses instead of the product
    • Answering business on their requests instead of providing what they really need
    • Assuming that all requirements and designs are clear and correctly defined

    Testing should be about
    • Helping analysts and developers by showing them how things can go wrong during design, development and delivery
    • Providing what business needs instead of what business wants
    • Helping non-testers be better at testing
    • Testing the complex and critical parts of a product
    • Finding the real requirements and sharing them with all team members
    • Being part of a team that is jointly responsible for the quality of the delivered product



     