Sunday, August 26, 2012

How to start a new mission as a Test Manager

Whether I start at a new company or begin a new mission that sometimes even looks pretty similar to the previous one, I always find every mission to turn out quite different in its objectives and challenges.

In this blog post I will focus on how I learned to orient myself in a new job as a test coordinator or test manager.

Who will I be working for? 

This might be a project or programme manager or a line manager - but there might be other important stakeholders out there. Find out who they are. Who will need or want information from you? They can be persons trusted by higher management, key business users, release managers, operation teams... These are your stakeholders. Know who they are and what they want, as you will be working for them and reporting to them (directly or indirectly).

Who will I be working with?
  • Who are the team members that will be reporting to you? What are their skills? What do you expect and what can they deliver? Find out what you can learn from them and what you will need to teach them. Here too, involve your stakeholders so that you can align their wishes with the budget they need to set aside.
    Your team can be on shore or off shore, external or internal, and work under different contractual situations. You need to get a hold of each of them and find a way to make all of them work together as ONE team.
  • Who are your direct colleagues that you need to share information with? Release management or deployment management, delivery management, analysts, designers, architects, project managers, development, infrastructure... Get to know them, and find out for each of them how you will exchange information and what you will require from them or what might be required from you. You'll need a project plan, a methodology, an architectural blueprint, a data model, designs, analyses or a test environment set up in a specific way so that you can use it for testing. You'll need to address questions, issues and defects in such a way that they accept them and understand your team's role.
Watch and observe. Treat your colleagues and team members with respect. Understand what they need from you. Make them understand what you need from them. Communicate with them.

What does the project team structure look like?

You can be working in many different kinds of structures: horizontal or vertical divisions, flat structures, resource pools, very hierarchical structures or even complete chaos. A project manager might in reality be an issue manager or a project management officer. A business analyst might be an architect, and a designer might be an analyst. Get to an understanding of the project structure and your responsibilities. Find out how the informal and formal roles and responsibilities are defined and where you find yourself in this structure. You can even test the project structure, identify the gaps and bring them up to your management. It's always your job to consult in favor of quality delivery.

How is testing implemented and what are my responsibilities? 

  • Look at test levels - Are you responsible for unit, system testing, system integration testing, acceptance testing? Who will be performing the tasks? Define your scope and check with your stakeholders.
  • Look at reference material for testing - What will you need to perform testing and to prepare for testing? Assess the quality of the reference material and inform about possible risks or issues.
  • Look at entry and exit criteria - Are you responsible for reporting on entry criteria, or even for monitoring them up front?
  • Look at the testing approach - How is testing performed currently? Can it be improved? Point out gaps in the methods used and their risks to your stakeholders, and find out whether those risks are accepted or whether you just increased your job scope.
  • Look at test environments and infrastructure - Are you responsible for defining, ordering, maintaining test environments or test infrastructure, defining or improving release processes? 
  • Look at test types. Are you responsible for functional testing only or also for usability testing, security testing, regression testing, performance testing... ? Do you have the required skills in your team to address the required test types to be performed?
  • Look at test tools. Which test tools will you require, or which are standard in the company? Do you need to define or adhere to procedures? Make sure all your team members master the tools and, if required, contribute to the tool selection and setup.
  • Look at communication and reporting patterns - What is discussed in which structural meetings? Which deliverables, reports and metrics are expected at which times? What would you think would be required? Come forward with proposals.
Refer to existing, representative products and processes in the company. Find out how testing has been performed in the past: what was expected, what was delivered and how the result was perceived by stakeholders.

Don't go in and re-invent the wheel. Gather the existing knowledge. If there is one, adopt the existing way of working. Bring your colleagues, team members and stakeholders towards improvement step by step.

Some tips and tricks that help me
  • Make a list of your colleagues' names. You can make a drawing of the desks in the office and put their names on it.
  • Document what you learn as you learn it (if not yet documented) and store it in a central place. Let other team members contribute to it (special phone numbers, creation of test users,...).
  • Hear people out. Ask for their opinions. Show that you can listen - and learn from their skills and experience.
  • Wear clean, washed clothes. Don't go meeting people directly after smoking.
  • Say good morning and good evening to everyone on the floor.
  • Don't say YES or NO directly if you don't completely understand the request or question; say you will look into it and come back the next day.
  • Set up workshops with your business users and identify the important end-to-end scenarios with them as soon as possible. This will keep you focused during project delivery.
  • Always get to an agreement with at least your key stakeholders separately before proposing new elements to them together in a meeting.
  • Estimate and plan, using techniques and common sense. Make plans with input from your team members. Make them all agree on and commit to the plans before presenting and committing a plan to your management.
  • As a people manager, try to make a positive difference for your people. 
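The "estimate and plan, using techniques" tip above can be made concrete with, for example, three-point (PERT) estimation, a common technique for combining optimistic, most likely and pessimistic estimates gathered from team members. A minimal sketch; the task names and numbers are purely illustrative:

```python
# Three-point (PERT) estimation: weighted average of optimistic (o),
# most likely (m) and pessimistic (p) estimates, in days.
def pert_estimate(o, m, p):
    """Return expected effort and standard deviation for one task."""
    expected = (o + 4 * m + p) / 6
    std_dev = (p - o) / 6
    return expected, std_dev

# Hypothetical test tasks with (optimistic, most likely, pessimistic) days
tasks = {
    "test case design": (3, 5, 10),
    "test execution":   (5, 8, 15),
    "regression run":   (2, 3, 6),
}

total = sum(pert_estimate(*est)[0] for est in tasks.values())
for name, est in tasks.items():
    e, s = pert_estimate(*est)
    print(f"{name}: {e:.1f} days (+/- {s:.1f})")
print(f"total expected effort: {total:.1f} days")
```

Getting each team member to provide the three numbers per task, as the tip suggests, also gives them ownership of the resulting plan.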

Monday, August 13, 2012

List of requirements for a new test environment

This was the first time we needed an integration test environment, and we didn't have one yet.
There was no infrastructure, no procedures, no version control, no data restore process,...
What we needed was a controlled test environment where we could test integration for a project of considerable size.

As a tester, I know that knowing what you want is not obvious, and putting it all together to make it work is even more difficult. So a long brainstorming session with all stakeholders followed, along with some Google sessions and blog scans.
It led to the following list of requirements, which we used as a base to start building our integration test environment.

Testability requirements

  • All applications that exist or are to be delivered in the production environment have to exist or be simulated in the test environment.
  • A list of settings and configurations is available, together with a comparison against the settings and configurations of the production environment.
  • Required tools and licenses are available for testers (if for any reason tools cannot be provided, a work-around is in place to facilitate testing)
  • Front-end devices to access software through different channels are available (card readers, GPS units, desktops, laptops, pads, phones,...)
  • Batch procedures are internally controllable and automated as in a production environment (manual trigger, start, pause, stop batch procedures)
  • Measuring tools are in place to enable monitoring the communication between applications and application layers.
  • External connectivity to test environments for support, deployment and testing is required (as different vendors need to deploy and test their software and integration on the environment)
  • Time traveling is possible in at least one of the following ways: "time goes by faster (e.g. 1 week = 1 hour)", "move forward and reset from a defined time in the future", "coherent data creation/update to a time stamp in the past"
  • Input and/or output simulators are in place and documented for every interface type
  • All configurations of the different interfacing systems within the test environment have to be centrally managed 
  • At least 1 test user exists for every defined role in the application under test
  • Database read access is required for all testers
  • The test infrastructure has an availability of 95% (planned deployments are not included)
  • Defects are centrally managed, using 1 agreed defect managing system and flow 
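The time-traveling requirement above is easiest to satisfy when the application under test reads time through an injectable clock rather than calling the system clock directly. A minimal sketch of that idea; the `FakeClock` class and the speed-up factor are illustrative, not part of the original requirement list:

```python
import datetime

class FakeClock:
    """Test clock supporting the three time-travel modes from the list:
    speeding time up, jumping forward, and resetting to a defined moment."""

    def __init__(self, start, speedup=1.0):
        self._start = start                         # pinned "fake" start time
        self._real_start = datetime.datetime.now()  # real moment of pinning
        self._speedup = speedup                     # 168.0 => 1 week per real hour

    def now(self):
        """Fake current time: real elapsed time, scaled by the speed-up."""
        elapsed = datetime.datetime.now() - self._real_start
        return self._start + elapsed * self._speedup

    def jump(self, delta):
        """Move forward (or back) by a timedelta, e.g. to month-end."""
        self._start += delta

    def reset(self, moment):
        """Reset to a defined point in time."""
        self._start = moment
        self._real_start = datetime.datetime.now()

# Usage: the application calls clock.now() instead of datetime.datetime.now()
clock = FakeClock(datetime.datetime(2012, 8, 26), speedup=168.0)
clock.jump(datetime.timedelta(days=30))  # simulate month-end batch timing
```

This only works if the software under test can be configured to use such a clock; for systems reading the OS clock directly, time travel has to happen at the infrastructure level instead.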

Test data requirements

  • Any production data loaded into the test environment is scrambled in a way that it still fits logically together but is impossible to link back to actual clients in the production database
  • A representative set of production(-like) test data can be loaded into the test environment
  • Back-up and restore procedures exist to back up and restore a partial or full data extract
  • Back-up and restore procedure does not take longer than 4 working hours
  • Back-up and restore procedure should be possible at least on a weekly basis with a lead time of 1 working day
  • Full data refresh with scrambled production data is possible on request, with a lead time of at least 1 working week.
  • A procedure to request a data refresh is implemented
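The scrambling requirement above can be met with deterministic pseudonymization: the same real value always maps to the same fake value, so relational links between tables stay logically intact while real identities become unrecoverable. A minimal sketch; the salt, field names and sample records are illustrative only:

```python
import hashlib

SALT = "rotate-me-per-refresh"  # illustrative secret, changed per data refresh

def scramble(value: str) -> str:
    """Deterministically map a real value to an opaque token.
    Same input -> same output, so foreign-key links stay consistent."""
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
    return digest[:12]

# Two tables referencing the same client keep a consistent, fake key
clients = [{"client_id": "C-1001", "name": "Jane Doe"}]
orders  = [{"order_id": "O-1", "client_id": "C-1001"}]

for row in clients:
    row["client_id"] = scramble(row["client_id"])
    row["name"] = "Client " + scramble(row["name"])[:6]
for row in orders:
    row["client_id"] = scramble(row["client_id"])

assert clients[0]["client_id"] == orders[0]["client_id"]  # link preserved
```

Because the mapping is one-way (a salted hash), the test data still "fits logically together" while linking back to actual clients is practically impossible, as the requirement demands.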

Deployment/Release management requirements

  • Version control is applied to all development products and system documentation
  • Deployments are always done from 1 agreed central data store
  • Roll-back scenarios are in place for every deployment cycle
  • Users are trained and informed about deployment management agreements
  • A patch procedure is known, documented and implemented
  • Deployment tasks follow formalized checklists - verified by the test team
  • Builds or data refreshes in a test environment only take place after approval by the impacted test teams
  • A deployment/release management role is assigned
  • Every defect fixed for a new deployment in the test environment is at least communicated within an agreed defect management tool 
  • Every deployed, fixed component is at least communicated through a build note
  • Builds are subject to formal and/or informal entry criteria

Maintainability requirements

  • Procedures to add, delete or update hardware components exist and are followed
  • The test environment components are sized in relation to the production environment
  • Log files are available for the different applications deployed on the test environment
  • Procedure exists to request and add test users for any application in the test environment
  • Disaster recovery plan is known, documented and implemented
  • Unexpected downtime of the (partial) test environment is communicated to all stakeholders (preferably using a central communication channel - for instance an intranet website)
  • Physical locations of different infrastructural components are identical to the production environment and documented. Any deviation is clearly identified and approved by the project team 

I definitely forgot some requirements, and not all of them are actually in place. But based on these we set up an environment that we can control with a handful of people and test in properly. We are agile enough to patch and refresh data properly, which improved the test experience drastically.
Any suggestions to improve this list are welcome.

Wednesday, August 8, 2012

How to deliver a quality product

The mind-map I'm posting has been created by looking back on different projects and consulting different colleagues, including my wife, the best Business Analyst I've ever seen. It is most probably not complete, but I think it is ready to share.

The idea behind this mind-map is to focus on key aspects required to deliver a quality software product of considerable size, and on how testers can contribute to a higher level of quality in the product to be delivered.

If a tester wants to contribute to quality, they can do more than testing a product once it is delivered. Testers have many other duties during software development. They know the product and project in- and outputs and are aware of their risks. They understand the changing business needs and their explicit and implicit requirements. They challenge the architecture and analysis, and coach developers and acceptants to check their delivered product effectively against standards. They facilitate environment and data preparation. A tester understands and contributes to the release and deployment mechanisms.

Planning & Control

It is the tester's job to:
  • Take part in the project planning and approach; challenge the planning
  • Look into the processes and challenge them in the interest of quality delivery
  • Find dependencies and flag them up to project/programme management
  • Understand the test requirements and build a log of them
  • Advise on test tool selection based on the budget, timeline, experience and test requirements.


It is the tester's job to:
  • Understand the product needs and rationale behind the project
  • Test the designs and architectural models and point to their weaknesses
  • Find and report flaws in communication patterns and stakeholder involvement
  • Coach acceptants in the definition of E2E prototyping test cases; lead them and help them in the E2E test preparation.


It is the tester's job to:
  • Test, test, test, and automate tests
  • Coach developers in checking against standards
  • Contribute to and cooperate closely with deployment and release management
  • Coordinate proper and clean bug reporting and the bug life cycle 
  • Provide and manage the required test infrastructure.


It is the tester's job to:
  • Coach the acceptants in their verification and testing process
  • Analyze and report test results in simple and understandable language
  • Provide release advice
  • Coordinate E2E test execution.


A tester is a great communicator. They understand the business needs, reason with analysts, challenge architects and managers and coach users and developers.

Thursday, August 2, 2012

Why TMap should be called CMap

As I told before, I've been raised with TMap. And since I arrived in what one could call 'test puberty', I have started to make up my own mind about testing and to find my own way. My adventure led me to a great blog post that made me think about testing and checking, which in turn led me to Michael Bolton's blog, and made me realize that TMap (Test Management Approach) should actually be called CMap (Check Management Approach).
If those posts are new to you, I would recommend going to read them.
I don't have the time or will to get into every detail of TMap to make my point, as this "bible" counts over 700 pages of valuable information about testing. Therefore, I will touch on the most important parts of the book and explain the changes I would propose for the next version(s).
Let's start with the definition of testing.
TMap®: Testing is a process that provides insight into, and advice on, quality and the related risks.
I would like to split it into 2 new definitions:
1. Checking is a process that possibly provides insight into, and advice on, quality and the related risks.
2. Testing is an investigation that provides insight into, and advice on, quality and the related risks.
Why is testing not a process?
A process is a set of interrelated tasks that, together, transform inputs into outputs.
So we should have:
- Input: test base, test cases, expected results
- Output: defects, test report
In reality, testing cannot be a process. Why not? Because we can't plan where we will find the bugs, so we can't plan where we will need to look and for how long. Was the analysis/development consistent with the company's history? Is it compliant with the business process? What were the real user expectations? To answer those questions, we have to plan an investigation and re-plan based upon our findings.
Checking can be mapped out in a process and can be easily planned. What do we check? When do we check? How much re-checking do we estimate? Which bugs did we find while checking which requirements? We can check whether requirements are implemented correctly. We can check whether field validations represent the analyzed error messages, and we can verify the load and stress requirements...
Check specification (scripting of checks) can only be done for the parts that we can check against expectations or a predefined standard. Testing we can do outside those boundaries. What else can the system do that was not specified? Do we want the application to be able to do this? Is it important to test this?

What is testing?
When we elaborate further on testing according to TMap, each definition of testing compares a product with a predefined standard.
Of course this is the ultimate way to deliver a set of beautiful metrics later on. But what about those expectations that were not predefined? What about those orphan features, those implicit requirements or poorly communicated company standards... or those gaps that nobody wanted to add to the predefined standard? Are those out of scope for testing because they were not defined?
For checking, I would agree. Checking is this very important task that great developers do and that is of high importance during the acceptance phase of product delivery.

The 4 Essentials
1. Business driven test management
This area of TMap offers a great method and toolset to prepare for checking and testing with a correct business focus and to perform a proper product risk analysis. From the moment the risks are defined in the BDTM life cycle, testing and checking in my opinion part ways.
Checking continues perfectly in line with the TMap methodology, where at a certain moment in time check goals are consolidated, product risks are defined, test and check thoroughness is defined, and check scripts are written and executed.
Testing, however, will not consolidate. Risk areas might be reduced or increased, new goals might be introduced and other goals will disappear. Testing will be focused on those risks and goals within a changing environment. There will be more cycles passing through business, there will be less script preparation, but a lot of testing will take place and will get documented.
I would propose to separate BDTM from TMap, as this toolset and method is a very valuable one for testing as well as for checking. I can only recommend it to anyone who hasn't used it before.

2. The structured test process
See the definition of testing. Testing is not a process; it is an investigation.
Checking, however, can follow a process, and the one described in TMap offers a great way to do so. My proposal is to change testing into checking here, for the sake of terminology and testing.

I have to add that there is one lovely part in the TMap test process: the specification of the test infrastructure. The checklists provided to do so helped me quite a lot to specify, and not forget, all the artefacts I had to order to enable testing.

3. Complete toolbox
For starters, a toolbox in my opinion is never complete. Only a sales department will try to convince me that they can offer me a complete toolbox... not an expert using tools. To be fair, TMap doesn't repeat the statement further in the book. It is mentioned that there is a large set of tools present, which cannot be denied.
I would still request the addition of the tea-test and the shoe-test, for example. Those tools I tend to use on every occasion I get to test software and, sadly enough, they stay very rewarding.

TMap offers a great bunch of checklists and templates online.
I have seen and used almost all of the TMap test tools, but I always seem to come to the same conclusion: they are check tools, or are used as check tools, when they really could be test tools.
For example: the well-known BVA (Boundary Value Analysis).
BVA is a great test tool, but if you are looking for boundaries only in the predefined standard, you turn your test tool into a check tool. How do you know for sure that you know the real boundaries, or that you know all boundaries? Checking applies BVA to the information provided in the standard. Testing is looking for the real boundaries in the system.
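The difference can be shown in code. A minimal sketch, assuming a hypothetical spec that says a discount applies for ages 18 to 65: checking derives its values from the documented boundaries only, while testing probes the implementation for its actual boundaries and compares them to the spec. The `discount_applies` implementation is a deliberately buggy illustration:

```python
def discount_applies(age: int) -> bool:
    """Implementation under test; the spec says ages 18-65 inclusive.
    Deliberately buggy: the real upper boundary is 64, not 65."""
    return 18 <= age < 65

# CHECKING: apply BVA to the documented boundaries only
spec_low, spec_high = 18, 65
check_values = [spec_low - 1, spec_low, spec_low + 1,
                spec_high - 1, spec_high, spec_high + 1]
failures = [a for a in check_values
            if discount_applies(a) != (spec_low <= a <= spec_high)]

# TESTING: probe for the *actual* upper boundary and compare to the spec
actual_high = max(a for a in range(0, 150) if discount_applies(a))
print("checks failed at:", failures)          # [65]
print("actual upper boundary:", actual_high)  # 64, not the specified 65
```

Here the check catches the deviation at a documented boundary, but only the probing step reveals where the system's real boundary lies; against an undocumented or wrongly documented boundary, checking alone would stay silent.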

4. Adaptive method

I have to admit that TMap is an adaptive method. You don't have to use any of it if you don't want to, and you can pick and choose the useful things that are relevant for your context.
I propose to use the following great features TMap offers:
- BDTM (taking into account that testing scope changes)
- All provided checklists and techniques
- The test infrastructure part of the test process



TMap is a great method for checking and offers a bunch of useful checklists and templates, which are downloadable online.
BDTM is a good check management method and offers a good base for a test management method.
I would therefore extract BDTM out of TMap and re-brand TMap to CMap.
To get to know what you're missing out on if you only check and don't test, I would refer, for example, to the posts mentioned above.