- The product (or part) is delivered with all its functional dependencies
- All interfacing products are in place, issues are known
- The ratio of defects of given severities closed in previous test levels versus the total meets the agreed threshold
- The ratio of elements tested in previous test levels versus those planned meets the agreed threshold
- Test environment is set up and tested
- Test data is prepared and consistently scrambled over the whole integrated platform
- Batch jobs are running
- Configuration is set up and tested
- Test preparation is done
- Testers are trained
- Licenses for tools are handed out and confirmed
But how often do we end up in situations where at least one of those entry criteria is not met, blocking us from proceeding with testing? And how many times have we assessed entry criteria as passed when in fact they were not? A defect could be wrongly allocated; users have access to the tools but with the wrong user rights; the environment is built but cannot be accessed... Releasing untestable chunks of code to testers, or releasing into an environment that is not fit for testing, is, despite the opinion of many project managers, a waste of time and resources.
Environments need to be maintained longer, defects are discussed with more stakeholders and have longer turn-around times.
In short: Bad entry criteria management costs the project time and money.
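The checklist above can be managed as an explicit gate. The following is a minimal sketch, assuming entry criteria are tracked as named checks with a pass/fail status (the criterion names and the `blocked_criteria` helper are illustrative, not part of any standard tool):

```python
# Sketch of an entry-criteria gate: list every criterion that still
# blocks the start of testing. Names below are illustrative examples.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    passed: bool


def blocked_criteria(criteria):
    """Return the names of all entry criteria that are not yet met."""
    return [c.name for c in criteria if not c.passed]


criteria = [
    Criterion("Environment set up and tested", True),
    Criterion("Test data scrambled consistently", True),
    Criterion("Tool licenses handed out and confirmed", False),
]

blockers = blocked_criteria(criteria)
if blockers:
    print("Entry blocked by:", ", ".join(blockers))
else:
    print("All entry criteria met - testing can start")
```

Keeping the gate this explicit means the status discussion is about a short list of named blockers instead of a vague "are we ready?".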
A pragmatic and easy way to keep a decent check on the status of your entry criteria is to define an intake test for every substantial module under test.
This intake test should be executable in a very limited amount of time and describe the positive flow(s) through your modules under test. Based on the result of this intake test, delivery issues can be found instantly. And no statistical report can compete with simply showing that it works. From the moment the environment is ordered, the team's focus should be on making this test pass, and every reason why it doesn't will turn into a critical issue or a defect that requires high priority to solve.
When the intake test for a module is successful, its result can be recorded and stored in a library as proof, and testing for that module can start.
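An intake test like this can be sketched as a small runner that executes one check per step of the positive flow and produces a record for the intake library. This is an assumed structure for illustration; the check names, the `run_intake_test` helper, and the record fields are invented here:

```python
# Sketch of an intake-test runner: run each positive-flow check and
# produce a record that can be stored as proof in an intake library.
import datetime


def run_intake_test(module, checks):
    """Run all (name, check) pairs; return a record of the outcome."""
    failures = [name for name, check in checks if not check()]
    return {
        "module": module,
        "passed": not failures,
        "failures": failures,
        "recorded_at": datetime.datetime.now().isoformat(),
    }


# Illustrative checks; in practice each would drive the real system,
# e.g. log in with a test user or submit one happy-path transaction.
checks = [
    ("login works", lambda: True),
    ("order can be placed", lambda: True),
]

record = run_intake_test("order-module", checks)
print(record["module"], "intake passed:", record["passed"])
```

Because the runner reports which step failed, a failed intake immediately names the delivery issue to raise, rather than leaving testers to rediscover it mid-cycle.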