Saturday, February 21, 2015

Test activities in Scrum

Lately I've been absorbed and immersed in the REAL agile development life cycle and how to do Agile properly. To do so, I got my Scrum Master certificate, started reading about Scrum, worked as a Scrum Master and went to work at the company that hired Arie van Bennekum, one of the co-authors of the Agile Manifesto.

On my path to learn about agile and Scrum, I learned a lot about unit testing and integration testing, Test Driven Development (TDD) and eXtreme Programming (XP), the definition of done and the definition of ready, continuous build and integration... I also learned about acceptance criteria and even Acceptance Test Driven Development (ATDD). Those terms were not entirely new to me, but let's say I understand them better now that I have seen them put into practice by the teams.

But what about exploratory testing, non-functional testing, front-end automation, test data preparation and test environment management, for example? And how do testers get their test-related jobs done when those jobs require considerable extra effort? I found that, next to the very structured application of unit testing and TDD, manual testing and test automation, as well as test-related tasks and their estimation, were not yet very mature.

As I have only been active in Agile development, and more particularly in Scrum, for two years, I do not claim to own Agile development. But I do claim to know something about testing and where I would take proper testing tasks into consideration while working in, for example, Scrum. This is how I did it:

I divided the test-related tasks into three parts:
- Tasks to be taken up during product backlog definition
- Tasks to be taken up during sprint backlog definition
- Tasks to be taken up during the sprint





Product Backlog definition


Production issues
It is important that production issues (and maybe also issues coming out of acceptance testing, if that is not covered by the definition of done of a sprint) get a proper description and impact assessment and become part of the product backlog.

Test related features and testability items
Features needed to accommodate testing (stubs, drivers, datasets, monitoring plugins, maybe even a button and input field within the application so beta testers can provide feedback) need to be defined and developed in time. A minimal sketch of such a stub is shown below.
Product-supporting items like test automation, performance tests, security tests and exploratory sessions could also be put on the product backlog.
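The sketch below (in Python) shows such a stub for a hypothetical payment gateway. The class, method names and card numbers are all invented for illustration; the point is that a stub like this lets checkout stories be tested before the real integration exists, and that building it is work that deserves a place on the backlog.

```python
# A minimal sketch of a test stub, assuming a (hypothetical) payment gateway
# dependency that the real order code talks to. The stub replaces it with
# predictable answers so stories can be tested before the real integration exists.

class PaymentGatewayStub:
    """Stands in for the real payment gateway during testing."""

    def __init__(self, fail_for=()):
        # Card numbers for which the stub should simulate a declined payment.
        self.fail_for = set(fail_for)
        self.calls = []          # record calls so tests can assert on them

    def charge(self, card_number, amount):
        self.calls.append((card_number, amount))
        if card_number in self.fail_for:
            return {"status": "declined", "amount": amount}
        return {"status": "approved", "amount": amount}


def checkout(basket_total, card_number, gateway):
    """Simplified production-like function that depends on the gateway."""
    result = gateway.charge(card_number, basket_total)
    return result["status"] == "approved"


if __name__ == "__main__":
    stub = PaymentGatewayStub(fail_for={"4000-0000-0000-0002"})
    assert checkout(59.99, "4111-1111-1111-1111", stub) is True
    assert checkout(59.99, "4000-0000-0000-0002", stub) is False
    print("stub-based checks passed, calls made:", stub.calls)
```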


Sprint backlog definition


Test estimation
During sprint backlog definition, testing has to make sure that all testing effort is estimated for every user story that goes into a sprint. Things that are easily forgotten are test data preparation and, quite often, the testing of negative paths and exploration sessions.

Acceptance Criteria
Making sure the user stories have adequate acceptance criteria defined.
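To make this concrete, here is a small sketch of acceptance criteria for a hypothetical user story ("As a shopper I can apply a discount code"), written as executable pytest checks. The story, the function and the values are assumptions for illustration only; what matters is that each criterion has an observable outcome, including the negative path.

```python
# Acceptance criteria made concrete and testable for a hypothetical story.
# apply_discount and its values are invented so the example runs on its own.

import pytest


def apply_discount(basket_total, code):
    """Toy implementation; in reality this lives in the product under test."""
    valid_codes = {"WELCOME10": 0.10}
    if code not in valid_codes:
        raise ValueError("unknown discount code")
    return round(basket_total * (1 - valid_codes[code]), 2)


# Criterion 1: a valid code reduces the basket total by the advertised percentage.
def test_valid_code_reduces_total():
    assert apply_discount(100.00, "WELCOME10") == 90.00


# Criterion 2: an unknown code is rejected with a clear error (the negative
# path, the kind of criterion that is easily forgotten).
def test_unknown_code_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.00, "SUMMER99")
```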

Evaluation
Proof-reading user stories: checking whether they are concrete and elaborated enough to go into development. Check them for corner cases and look for inconsistencies and interdependencies between stories.



Sprint


Daily scrums
Testing is, or should be, part of the definition of done of a sprint. Therefore testing is an activity that takes place within the sprint, and testing should be part of every daily scrum.

Manual testing
Manual testing, checking against acceptance criteria but also exploratory testing, is a very important part of the sprint. Any piece of code that is test-ready should be tested.
Exploratory testing is not always... or let's say almost never, related to one user story. Exploratory sessions could focus on company history, comparable products, internal consistency, browser and device compatibility... and are often a cross-feature test activity. This means that exploration sessions might need their own tasks, next to the user-story-related checks that need to be part of each user story's definition of done.

Automated testing
It is best to automate tests against the stable code of the previous sprint and to run a set of automated regression tests on top of the unit and integration test framework.
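A minimal sketch of what such an automated regression check could look like, using pytest. The shipping_cost function and its rates are hypothetical; the idea is that behaviour stabilised in an earlier sprint is pinned down so it keeps working while new features are built on top.

```python
# A sketch of an automated regression check on code that stabilised in an
# earlier sprint. Function and numbers are invented for illustration.

import pytest


def shipping_cost(weight_kg, country):
    """Stable function from a previous sprint (toy version for the example)."""
    base = 4.50 if country == "BE" else 9.00
    return round(base + 0.50 * weight_kg, 2)


# Marked as "regression" so the set can be selected with: pytest -m regression
# (the marker name would be registered in pytest.ini in a real project).
@pytest.mark.regression
@pytest.mark.parametrize("weight, country, expected", [
    (1.0, "BE", 5.00),
    (2.5, "BE", 5.75),
    (1.0, "NL", 9.50),
])
def test_shipping_cost_still_correct(weight, country, expected):
    assert shipping_cost(weight, country) == expected
```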

Test data preparation
During the sprint, testers or developers need to make sure the required test data is available. Sometimes this is part of release management; sometimes it needs to be prepared in files, directly in the database, or in the application using the front end.
This needs to be well coordinated.
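Below is a small sketch of scripted test data preparation, using SQLite as a stand-in for whatever database the product really uses. The table and the customer records are invented for illustration; the point is that a scripted, repeatable seed is easier to coordinate than data prepared by hand.

```python
# A sketch of scripted test data preparation. Scripting the seed keeps it
# repeatable: everyone can recreate the same starting point before a test run.

import sqlite3

CUSTOMERS = [
    ("C001", "Regular customer", "BE"),
    ("C002", "Customer with unpaid invoice", "NL"),   # negative-path data
    ("C003", "Customer marked for deletion", "BE"),   # edge case
]


def seed(db_path="test_data.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS customers")
    conn.execute(
        "CREATE TABLE customers (id TEXT PRIMARY KEY, description TEXT, country TEXT)"
    )
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", CUSTOMERS)
    conn.commit()
    conn.close()
    print(f"seeded {len(CUSTOMERS)} customers into {db_path}")


if __name__ == "__main__":
    seed()
```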

Test environment
As agile development means constant building and constant integration, the test environment needs to be ready for testing when testing needs it, and may need to stay in a given state for a short period of time before a new version of the product is released. This requires intensive communication: we need to know at all times which version is under test and against which version bugs are being reported.
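One small way to support that communication is to record, at the start of every test session, exactly which build is deployed. The sketch below assumes the application exposes a /version endpoint; that endpoint, the URL and the response fields are assumptions for illustration.

```python
# A sketch of keeping track of which version is under test, assuming a
# hypothetical /version endpoint on the test environment. The build identifier
# is fetched at the start of a test run and attached to every bug report.

import json
import urllib.request


def version_under_test(base_url="http://test-env.example.com"):
    with urllib.request.urlopen(f"{base_url}/version", timeout=5) as resp:
        info = json.load(resp)    # e.g. {"version": "1.4.2", "build": "1234"}
    return f'{info["version"]}+{info["build"]}'


if __name__ == "__main__":
    build = version_under_test()
    print(f"All findings from this session are reported against build {build}")
```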

User coaching
Another duty of testing is coaching users in acceptance testing, or even in the definition of requirements and acceptance criteria. Users need coaching. Often they don't understand the principles of Scrum too well, and a scrum master cannot go into detail involving and coaching all users. A tester has the competences and the product knowledge to take on this role.




Sunday, August 24, 2014

Why testing is NOT a "standard process"

In the beginning there was consumption...

It all starts with the consumption society in which we, software testers, live.
An average consumer, willing to spend some money, asks for a certain custom product.
Having been consumers for quite a while, with the experience of buying standard products, we know the price and the exact scope of what we're going to get for our money.

When I go and buy a car, for example, I know exactly what I'm going to pay, what my car will look like and when I'm going to receive it. The same goes, by the way, for a book, a CD or even a software package.

So our problem, of testing being a process, begins exactly here. We are all very familiar with the trade in standard products. But there is a big difference between standard products and custom-made products, and many clients and suppliers are not very competent in buying or selling such a non-standard, custom product.
And even though you think your car is customized when you have carefully chosen your options and colour, you have merely configured your car with the standard options and colours the supplier provides for that car. If you disagree, go to the garage and try to have an integrated GPS with bluetooth module, traffic sign recognition, a rear-view camera, ESP and ABS fitted in a 1992 Opel Corsa, and you'll see what I'm talking about.

When testing is required, we're mostly talking about new product development. We're going to build or integrate something that we have never built or integrated before. So this is NOT a standard product that we can just take off a shelf.

What we seem to like to omit is the fact that in custom software production, we're going to make something for the first time. We might integrate with components that aren't there yet, using technologies we haven't used yet, working with people we have never seen before. There are a lot of unknown factors here.


So how do we need to buy or sell a custom product?

When we sell or buy something that does not yet exist, we don't want to find out that what we wanted is not really what we needed at the moment we have spent all our money. We want to verify step by step, have the possibility to learn and steer, and even more, get a good understanding of the real cost of the complete product we're asking for.

What we need to do is learn quickly, as we will have made false assumptions about cost or complexity, forgotten important things and maybe had some misunderstandings at all sorts of levels. We have to keep our eyes open and our focus sharp. Is this product that we're building fit for purpose? Are we on the right track? We had better find out we made a mistake before we spend all those millions.

So ideally we plan, build and act in small iterations. Each iteration gives us time to act and learn, so that we can adapt in order to do better during the next iteration. Testing is part of acting, takes place in close cooperation with all stakeholders, and is therefore an important driver of the team's learning and improvement.

We buy and sell by building a solid trust relationship between buyer and seller, one that gives room for continuity and that, after a couple of iterations, results in a very clear view on budget, scope, planning and how they influence each other.


Process VS Learning

Now what is a process?
A process is a series of predefined actions that produce something or that lead to a particular result.
Processes tend to always start from the same starting point, with known input, and the output of a process is also known.

Processes are used to streamline factories and service companies that offer standard products or services. We don't want the people building cars, delivering letters or producing CDs to learn and improve. They have to follow the process. We want them to work effectively and at minimal cost.

When building custom products, it is close to impossible to have a concrete view of any standard process to follow right from the start, as there is no routine yet that could be the basis for a process. On the contrary, there are many unknown factors in the input, in the actions to be taken and even in the outputs.

This does not mean that no agreements can be made about communication and cooperation.
By learning and improving, processes and procedures might be put in place during software development. But those processes and procedures will be very specific and tailored to the people and the context of the piece of software that is being built.


Conclusion

I would like to claim that testing cannot be performed successfully by only following a standard process. The quality of a custom-made software product cannot be guaranteed by following any process. Good and effective testing requires expertise, knowledge, experience and skill.

During testing, it is the learning curve that can lead to the creation of specific processes that can be applied within projects, programmes or cooperating teams. But it is the learning curve and close stakeholder cooperation that lead to fewer issues in production, not any process.

If you read this and you agree with me, please go and sign here to support the testing community.







Sunday, August 10, 2014

The role of QA in the Agile organization

What is quality not (anymore)?

Some companies (still) think of quality as a standard set of KPIs that should be met for all projects in the company and reported on.
In the worst case, those KPIs consist of things like the number of analysis documents finished on time versus finished late, the number of test cases prepared per designed function, scores on how well templates are being followed, bugs per test case, reopened bugs, bug fix times, etc.
One role in the company is ultimately responsible for implementing quality in systems.
This role is the QA Lead or QA Manager, who acts as the project's "head of police". They take their best-practice sheet out of the magical best-practice box and start implementing it in the company.
The result of this way of implementing quality in a company producing software is a product that meets the KPIs, that was more costly to build, and people who got frustrated from working in a framework they don't support.
People on the floor don't seem to see the added value of the QA Manager and their KPIs, but those KPIs are sent way up, so they need to shine... in the worst case at the cost of product quality.
Maybe this was an acceptable way to approach quality when building products using the waterfall and V-model, and as a step to take in order to learn how to do better.
Now there are better ways to do this.



What is quality and how to pursue it?

"Quality" is like any software requirement. The idea of what an application needs to and may not do, changes over time and not all quality aspects can or should be fulfilled in order to deliver a successful project.
Each and every project member needs to understand the goal of what they are building and why they are building it. That's the first step in delivering software quality - a clear common goal.

The team, with all its individual members, owns quality; the QA lead is their coach.
The definition of what quality is, is formulated by its stakeholders and can change over time.
What this means in practice is that good software quality (quality as the stakeholders perceive it) can only be reached when a common understanding of quality is carried by the team.
Stakeholders in this case are not only the business, but also the developers, testers, end users, release team, support teams, etc.

The QA job in a project starts by making sure all stakeholders are identified in the first place and are aware that they need to provide (continuous) input on what they need from the software product in order to reach the level of quality they want to achieve.
The next thing QA needs to do is coach the product owner (or project sponsor, whoever owns the budget) in deciding which of all the identified aspects are more or less important, considering the budget and timelines.

Then QA talks with the team about how these quality aspects will be put in place. The team decides, together with QA, on the definition of done (the measurable aspects of the software product that need to be fulfilled before we can ever say any component of this software has been "released").

By doing so, the team defines how it will reach the quality standards that ensure development speed, release efficiency, product maintenance, product aesthetics, acceptance criteria, etc.
Based on the team's "definition of done", which evolves over time, the individual project will gain more maturity in understanding what quality means for that project and will start reporting on measurable, project-related KPIs.
The QA Lead can assist in setting up and maintaining those reports, but needs to take care not to become the owner of those reports.

Projects are further monitored by testers (or test engineers, QA engineers, or whatever you call the people who look for flaws in products and processes). They assess process and product, and they contribute to the definition of done by testing the product, but also by checking with the team and stakeholders on the completeness and correctness of the definition of done, which can be adapted after every iteration.

At the end of every iteration, it is important to look back on what went well and what went wrong.
Lessons learned and the resulting improvement actions are among the most valuable things a project can receive. Making sure those sessions take place and result in action points can also be a QA role.

The results of implementing quality are no longer statistics in the form of traditional company-wide KPIs, but project-specific KPIs and, even more importantly, more successful projects, more accurate cost estimations, faster release cycles, faster delivery cycles, faster defect fix rates, closer business cooperation and even higher employee retention.

Friday, May 30, 2014

Moving towards acceptance

For an average professional software tester, testing is a merely rational concept.
The tester discusses the test scope, creates a test plan, defines an approach, the test levels, and the entry and exit criteria. Then they install the tools, define working instructions, and test the analysis, the code, the procedures and maybe even the project members. They register bugs, shout about risks and issues that do or do not get solved, after which they retest, retest and retest again, and finally advise about moving to acceptance testing or go-live. Then (maybe after going through this procedure a couple of times) they start the hand-over to the standing organisation and move on to testing for another project. For a tester, most of the time, there is no life beyond the project. No fear of production issues and of how they will impact maybe thousands of people after the final go-live is given.

For an acceptant, testing is a totally different matter. Acceptance testers are generally a representative set of constructive people who KNOW the organisation, operate or lead operations, and therefore have a strong influence on their business colleagues. They will have the last say about going live or not (even when they are not formally asked: they can create a lot of commotion if their buy-in is not obtained by the time of go-live).
It's the business users who will face the issues coming from poor design and architecture, from parts forgotten in analysis, and from development and testing shortcomings.

They care less about the numbers in those professional and colourful defect reports. They don't look into most of your risk assessments and don't even care about your scope descriptions.
For them, the product needs to work at an acceptable level before go-live, and there is no number or colour that will convince them otherwise.
That is why those acceptants are a manager's biggest nightmare: they are outside the manager's control, yet they will decide on the added value of the project team's work.

It's those people who will have to work with the product for the next weeks, months or years, until a possible update of the product is made available. And that can take a while when you're not working in an agile structure.

So how can we create this buy-in that managers are so desperately looking for?

It's not all that hard. It all comes down to basic things. The project needs to be able to listen. It should not dismiss all those "extra feature requirements" that keep coming in as defects at the end of the project. Instead, the project should distill those extra features as early as possible, so that they can be estimated and built in time.

It is useless to build a program that fits incorrect or incomplete requirements. The goal is to get a good understanding of what we are building and why we are building it, making sure that we deliver value. To get to that understanding, feedback loops are of the highest importance.
Is this what you wanted, and is it working as you expected? That's what we need to find out as soon as possible.

So how can we do that?


1. Agile product delivery
Running short sprints of software delivery, focusing on the most important parts of the most important features, is one of the best ways I know to get early feedback from your stakeholders. They get involved in setting priorities and creating the backlog, and they can be involved in testing very early on. They do need to be coached in testing, though, as they might expect more than they will receive in the first deliveries and may be looking for the wrong defects.

If agile product delivery is not an option for you, there are still other ways to get a good and in time understanding of what your business actually really wants.

2. Dry-run
Testing, for the business, is not only about the software, but about doing their jobs with this software.
A dry-run can be seen as a role-playing game in which everyone from the business who is involved in accepting the product simulates their day-to-day operational work with the new software package.
This can already be done based on analysis material such as wireframes or a prototype.
Every part that is analysed and prototyped or wireframed can go through a dry-run session, which can eventually lead to proper documentation of the business processes and analysis, to be reused in all later testing.

3. Prototyping
Before building the actual product, it might be a good idea to build a complete prototype. Prototypes can help people understand what the development team understood it needs to build.
Prototypes can be run through from different angles:
- Does this prototype support the business process?
- Do we capture all possible errors that can occur?
- Does the screen layout and setup make sense?
- Will the software be able to support the amount of users we estimate to have?
- etc...
They can be evaluated individually by the stakeholders and testers and/or used in a dry-run session.

4. Demos
As soon as you have a feature ready that can be shown, the developers take the initiative to give a demo. A demo is a forum for receiving constructive feedback. The invitees should not be the managers high up in the chain, but a representative set of users.
After the developer finishes the demo, the stakeholders ask the developer to show other relevant scenarios of the same feature. The developer can find and solve defects, and the project leader might receive a set of changes to features, or new features.

There are probably more ways to build a good relationship with your stakeholders, but I have found the previous four very fruitful. Engaging your business early on in the project, making them part of the project delivery, is key to successful project delivery.




Sunday, November 24, 2013

Scripting, checking, testing - enemies or friends?


"If you don't script test cases, your testing is not structured" or "Testing is checking if the software works as designed" are statements made by people who like to call themselves "Structured testers".
I don't completely disagree with statements like those, but I also would call myself a structured tester, even though I suppose I see things in a slightly wider context...

Scripting, checking and testing can live seamlessly together, just as we can live seamlessly together in multicultural societies.
It is not a matter of choosing one over the other; in practice we can't. A good tester knows when to check, when to test and/or when to script.

What do I check?
Checking is what I do when I have an exact understanding of what a function or process should have as an outcome.
This can be at a very low level, for example:
- a field should contain no more than 15 characters
- an email address should contain an @ sign and a "dot", in that order, with some characters in between
It can be the outcomes of a calculation engine that contains the rules on when to advise granting an insurance or not, based on a defined set of conditions.
It can be a set of acceptance criteria or business rules.
Checking is verification and is a part of software testing. Whether or not all of these checks have to be executed by a professional software tester is another question.
When there is a lot to check, the pairwise method can help, combining different parameters so that checking becomes more efficient. A sketch of that idea follows below.
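Below is a rough sketch of the pairwise idea: instead of checking every combination of parameters, we pick a smaller set of cases that still covers every pair of values at least once. The parameters are invented for illustration, and the greedy selection is a simple home-grown version; dedicated pairwise libraries do this better.

```python
# A sketch of pairwise (all-pairs) case selection. Every pair of values across
# two different parameters must appear in at least one selected case.

from itertools import combinations, product
from math import prod

parameters = {
    "browser":  ["Chrome", "Firefox", "Safari"],
    "language": ["NL", "FR", "EN"],
    "payment":  ["Visa", "PayPal", "Transfer"],
}


def pairwise_cases(params):
    names = list(params)
    # Every value pair (across two different parameters) that must be covered.
    uncovered = {
        ((n1, v1), (n2, v2))
        for n1, n2 in combinations(names, 2)
        for v1 in params[n1] for v2 in params[n2]
    }
    cases = []
    for case in product(*params.values()):          # candidate full combinations
        assignment = dict(zip(names, case))
        pairs = {
            ((n1, assignment[n1]), (n2, assignment[n2]))
            for n1, n2 in combinations(names, 2)
        }
        if pairs & uncovered:                        # keep cases that add new pairs
            cases.append(assignment)
            uncovered -= pairs
        if not uncovered:
            break
    return cases


if __name__ == "__main__":
    selected = pairwise_cases(parameters)
    full = prod(len(v) for v in parameters.values())
    print(f"{len(selected)} cases instead of {full} full combinations:")
    for c in selected:
        print(c)
```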

What do I test?
Testing is what I do when I don't have an exact understanding of how functions and processes should co-exist or work together.
The software parts I tend to test are the complex combinations of possibilities an application offers.
In this case, I don't only look at a field, a screen or a function (sometimes I do), but at functions or flows that contain complex steps and possibilities. I try to emulate an end user, taking into account the business processes, the company history, the development method and the team's knowledge and skills... combined with my own knowledge and tools... and I go searching for the steepest and largest pitfalls that developers, analysts or the business may have overlooked.
Knowing what a function is supposed to do under certain circumstances, I try to find out whether it is consistent with itself under relevant conditions.
Questions I may ask myself:
- How will the system behave if the interface is not available? This could have a big impact on the business function, it is something that happened a lot with the previous software package... and a reason why a new package was developed that was supposed to handle these issues.
- What will happen if I push the back button instead of going back using the described functionality? The previous application made use of the standard browser buttons. Many users are used to those, so I will check them all the time.


What do I script?
Scripting is what I do to make sure that I remember what to check before I start checking, or to remember what I have tested. Scripts can be used as evidence for bugs found during testing, or as a working tool to follow up on everything that needs verification.
I script what I should not forget.
I could script all the things that need checking, but most of the time that has already been done.
I could script the test cases that resulted in issues or bugs.
I could script the test cases that will help me find regression issues later on. Those I would prefer to automate.








Saturday, November 16, 2013

Agile reporting


How do we report and what do we report on?
We often try to fall back on "facts" in order to provide test reports.
Those facts are mostly represented in bars or curves, giving a view on the "reality" of product quality and test progress.
I put reality in quotes because reality is different for every viewer.
I also put facts in quotes because facts are just a single dimension, showing a single aspect of a reality as seen by one person.
It becomes clear that "facts" are less solid than they may seem when pronounced by experienced managers.

Traffic in bars
Imagine you need traffic information in order to find out whether it is worth setting off for your destination. You go to your favourite traffic website and suddenly you see something like this:





Would you be able to find out, based on this information, where the traffic jams are and whether you will get home in time?
I would not.

The same goes for a test report.
A test report can give you a lot of valid information. But sometimes it lacks exactly what is really important to understand, for example:
- the context and underlying risks of the bugs that are still open
- what has been tested, what has not, and what the underlying risks are
- how testing was conducted, and which issues were found during testing that prevented us from testing efficiently, or from testing at all
- what can be improved in the testing activities

I got inspired by James Bach's website (https://www.developsense.org) and by the team I'm working with to find an alternative to the current reporting in bars... and when I started looking at the traffic information, I realized how silly we actually are, only making use of graphs and bars as "facts" to present "reality".

Let's have a look at an example from software development.

Building of a Web Shop
This conceptual web shop has a product search, a shopping basket, a secure login and some cross- and up-selling features, plus a back end with an admin tool and a customer support tool, and it interfaces with a logistics application, a CRM application, a BI application and a payments application.



The web shop is built in bi-weekly iterations of development, and the test results need to be reported.
The standard test report looks like this at the moment the go/no-go decision has to be made.

A standard report
Test cases were planned and prepared up front (460 of them).
  1. In Phase 1 of the project, the testers were motivated to find relevant bugs in the system and realized that they would not find them by only following the prepared test cases. They were not allowed to create extra test cases, so a lot of bugs were registered without a trace to a test.
  2. In Phase 2 of the project, the project manager extrapolated the number of test cases executed into the future and found that testing would delay his project if it continued like this. The test manager was instructed to speed up testing. A lot of unstructured testing had been going on, and this was not budgeted for. The testers anticipated this and started failing every test case they could, to get back on track with test execution. A lot fewer defects were registered, as the testers were not really testing anymore but rather executing test cases. Project management was happy: fewer defects got raised, so development was able to get back on track, testing was on track, and the delivery date was safe again.
  3. In Phase 3, end-to-end tests started and the business got involved. They also started looking around in the application and again registered a lot of extra bugs. Signing off those E2E test cases went very slowly, as the business did not want to sign off on a test case while they could still see other things not working well. It took project management up to 3 weeks to convince them to finalize the tests. In the meantime, the testers were instructed to pass as many test cases as they could, because the release had to go through and now everything depended on testing.
In the end, 3 blocking bugs were not resolved. However, the overview of test progress and defect progress looked pretty good. We can also see that development always reacts fast to blocking and critical issues, so if we find a blocking issue, we can be sure it will be fixed in a day. However, that turn-around time was measured from the moment a developer stated the defect was resolved, not from the moment the fix was confirmed. A lot of critical and blocking defects went through the mill for several iterations before they finally got resolved.




Clearly something went really wrong here. This report is unclear and seems fabricated. But it is only when we look at it with an eagle's eye that we're able to notice these oddities.
Based on these results, it is impossible to make a sound decision on whether or not to release this software.

We stumbled into the pitfall known as "Goodhart's law": "When a measure becomes a target, it ceases to be a good measure."

So what can be done to avoid this kind of "facts" representing "reality"?
The answer to this is:

A visual, agile report.
So what would this report look like? Well, it would look very much like a traffic map.
We could, for example, use functional blocks in an architectural model.
Coming back to the web shop, we see the different functions with an indication of whether they were tested or not, whether bugs were found and, if so, how severe they are, and how many blocking and critical issues were found in each module (visualized in the testing progress circles, showing first the number of blocking issues and then the number of criticals).
We also clearly indicate which of the existing functions are taken out of scope, or come into scope, of the test project, so that everyone has the same understanding of the scope we are really talking about. A sketch of the data behind such a map is shown below.
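The sketch below holds one status record per functional block of the web shop and translates it into the colour a map would show. The module names and numbers are invented for illustration; a real report would pull them from the defect tracker.

```python
# A sketch of the data behind a visual test report: one record per functional
# block, with tested-or-not and the number of blocking/critical issues.

MODULES = {
    "Product search":      {"tested": True,  "blocking": 0, "critical": 0},
    "Shopping basket":     {"tested": True,  "blocking": 1, "critical": 2},
    "Secure login":        {"tested": True,  "blocking": 0, "critical": 1},
    "Payments":            {"tested": True,  "blocking": 1, "critical": 3},
    "Interface logistics": {"tested": False, "blocking": 1, "critical": 0},
    "Admin tool":          {"tested": True,  "blocking": 0, "critical": 0},
}


def status(module):
    """Translate raw numbers into the colour a map would show."""
    if not module["tested"]:
        return "GREY   (not tested)"
    if module["blocking"]:
        return "RED    (blocking issues)"
    if module["critical"]:
        return "ORANGE (critical issues)"
    return "GREEN  (no severe issues)"


if __name__ == "__main__":
    for name, data in MODULES.items():
        print(f'{name:<20} {status(data):<26} {data["blocking"]}/{data["critical"]}')
```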

We then look one level deeper into the functions present in each functional block with blocking issues.
Take the shopping basket, for example. There we see that deleting items from the basket is not possible. This is an annoying one to go live with, but if it is solved one day after go-live, we can still agree to go live. Let's have a look at the transactions.

In the transactions section, we see that we cannot really pay with the money transfer option and that there are still some relevant issues in the Visa payment area. We can go live with PayPal, have the Visa and money transfer payments working in a day, and have the remaining Visa and money transfer issues fixed within a week. We're still going live...

In the interface with logistics, we see something more worrying. The stock order isn't working and none of the functions behind it have ever been tested. In the standard report those tests were marked as failed. Now we see that we won't be able to go live with this software yet. This defect needs to be resolved and further testing is required.

In general, we can conclude that this kind of reporting may not completely replace the classical reporting, but it can be a good addition. Reporting on test progress based on prepared tests, however tempting, only gives a fake feeling of control. That control is not real, so it is always a very bad idea.

The report was made to reflect and cope with agile development, but it can also be used for a classical phased delivery approach. Different colour indications can be used, or different "maps" (e.g. business processes or functional flows might also do the job).

This dashboard view still has to be accompanied by the test approach and by the efficiency of the testing that was conducted. That's going to be for another post.

This post was created with the help of some of my RealDolmen colleagues (Bert Jagers, Sarah Teugels, Stijn Van Cutsem and Danny Bollaert) and the inspiration of my wife. Thanks!

Sunday, June 16, 2013

Creating an effective defect report


It's easy to monitor defects, but when you start reaching your 1000th defect and you keep track of all kinds of data, you might start losing control over what the actual quality of your application looks like. You lose the overview of where the issues lie and of what actions need to be taken to control product quality within the delivery time. You drown in the graphs you need to present and, instead of reading your report yourself and drawing the correct conclusions, you use your energy creating a shiny report that nobody reads.


Many test coordinators and test managers let themselves be driven by all the information available and recorded in the defect registration tool, presenting all kinds of exotic graphs while losing focus on the actual purpose of the defect report: giving insight into product quality.

So how do we provide insight into product quality, making use of a defect report?


1. First of all, before starting to report, you need to get data into the defect registration system.
You only want to capture data that is useful to show product quality. All other data is waste.

  • Divide your software into logical blocks to report defects on (using functional and technical blocks), e.g. Intake; Contract; Interfaces System A; Interfaces System B, ... Make sure you don't have more than 9 of those. If you have more, regroup.
  • Define defect priorities and/or severities: 'Urgent - High - Medium - Low' will do for starters
  • Defect statuses to report on do not have to equal your actual workflow statuses; simplify and group them for the report (a mapping sketch follows after this list). Use:
    • Open (For the developer to pick up or to resolve)
    • Closed - Rejected (Closed - Agreed not to fix)
    • Resolved (Needs a retest)
    • Closed - Fixed (Closed and fixed)
    • Escalated (No agreement found - needs to be taken up in a defect board)
  • You want to know what the actual cause of a defect was once it is resolved. This can be data, test, design, front-end code, back-end code, business rule, migration... Again, make sure you don't have more than 9.
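The grouping announced above can be as simple as a lookup table. In the sketch below, the workflow status names on the left are examples of what a defect tool might use; the reporting statuses on the right are the ones from the list above.

```python
# A sketch of grouping a tool's actual workflow statuses into the handful of
# reporting statuses used in this post. The left-hand names are illustrative.

STATUS_GROUPS = {
    "New":              "Open",
    "Assigned":         "Open",
    "In progress":      "Open",
    "Ready for retest": "Resolved",
    "Retest failed":    "Open",
    "Verified":         "Closed - Fixed",
    "Won't fix":        "Closed - Rejected",
    "Duplicate":        "Closed - Rejected",
    "Disputed":         "Escalated",
}


def report_status(workflow_status):
    """Map a raw tool status onto the simplified reporting status."""
    return STATUS_GROUPS.get(workflow_status, "Open")  # default to Open if unknown


if __name__ == "__main__":
    for d in ["New", "Verified", "Ready for retest", "Duplicate", "Disputed"]:
        print(f"{d:<18} -> {report_status(d)}")
```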

2. Present your data in a way that anyone can understand what you want to show.

  • The titles of the graphs you present should be meaningful and unambiguous
  • The report should be printable on a maximum of 2 pages and still be readable
  • Use the correct chart types to display what you want to show. Trend views are no luxury (see the sketch after this list).
  • The colours you present need to be in line with their meanings
  • Always provide an interpretation. A report that only shows metrics can easily be misunderstood.
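Here is a small sketch of such a trend view, using matplotlib. The numbers per iteration are invented for illustration; the point is that opened versus closed over time says more than a single snapshot, and that the colours carry their meaning (red for newly opened, green for closed and fixed).

```python
# A sketch of a simple defect trend view: opened versus closed per iteration.
# The figures are invented; a real report would pull them from the defect tool.

import matplotlib.pyplot as plt

iterations = ["It 1", "It 2", "It 3", "It 4", "It 5", "It 6"]
opened     = [12, 18, 25, 22, 14, 9]    # new defects found per iteration
closed     = [5, 10, 16, 24, 20, 15]    # defects confirmed fixed per iteration

plt.figure(figsize=(7, 3))
plt.plot(iterations, opened, marker="o", color="tab:red", label="Opened")
plt.plot(iterations, closed, marker="o", color="tab:green", label="Closed - Fixed")
plt.title("Defect trend per iteration")
plt.ylabel("Number of defects")
plt.legend()
plt.tight_layout()
plt.savefig("defect_trend.png")   # include the image in the 2-page report
```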



An example of what a defect report could look like is shown below: