5.3 Test progress monitoring and control (K2)
5.3.1 Test progress monitoring (K1)
The purpose of test monitoring is to give feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Plotting test progress as a graph can be a powerful visual aid: the resulting curve, often called an S-curve because its shape resembles the letter ‘S’, gives early warning of problems such as slowing progress.
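As a minimal sketch of the data behind such a chart (the daily execution counts below are invented for illustration), the S-shape comes from accumulating test executions over time:

```python
# Tests executed each day of a test cycle (hypothetical figures).
daily_executed = [2, 5, 12, 20, 24, 26, 27]

# The cumulative totals are what an S-curve chart plots:
# slow start, steep middle, flattening end.
cumulative = []
total = 0
for n in daily_executed:
    total += n
    cumulative.append(total)

print(cumulative)
```

If the cumulative line flattens well before the planned total, the graph warns of a schedule problem long before the planned end date.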

Common test metrics include:
§ Percentage of work done in test case preparation (or percentage of planned test cases prepared).
§ Percentage of work done in test environment preparation.
§ Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
§ Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
§ Test coverage of requirements, risks or code.
§ Subjective confidence of testers in the product.
§ Dates of test milestones.
§ Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test.
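A few of these metrics can be derived directly from raw test results. The sketch below uses hypothetical records and field names (no particular test tool is assumed) to compute the execution percentage and pass rate:

```python
# Hypothetical test-run records; the field names are illustrative only.
test_cases = [
    {"id": "TC-1", "status": "passed"},
    {"id": "TC-2", "status": "failed"},
    {"id": "TC-3", "status": "passed"},
    {"id": "TC-4", "status": "not_run"},
]

executed = [tc for tc in test_cases if tc["status"] in ("passed", "failed")]
passed = [tc for tc in executed if tc["status"] == "passed"]

# Percentage of planned test cases that have been run.
execution_pct = 100 * len(executed) / len(test_cases)
# Pass rate among the tests executed so far.
pass_pct = 100 * len(passed) / len(executed)

print(f"Executed: {execution_pct:.0f}%  Passed: {pass_pct:.0f}%")
```

Tracked over time, these two figures feed both the progress graph and the exit-criteria assessment described above.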

5.3.2 Test reporting (K2)
Test reporting is concerned with summarizing information about the testing endeavour, including:
§ What happened during a period of testing, such as dates when exit criteria were met.
§ Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in tested software.
The outline of a test summary report, as given in the ‘Standard for Software Test Documentation’ (IEEE 829), is shown below.
Test Summary Report Outline:
o Purpose.
o Outline.
§ Test-Summary-Report Identifier.
§ Summary.
§ Variances.
§ Comprehensiveness Assessment.
§ Summary of Results.
§ Evaluation.
§ Summary of Activities.
§ Approvals.
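The outline above could be captured in a simple reporting skeleton. In the sketch below, only the section names follow IEEE 829; the rendering function and its content are illustrative:

```python
# IEEE 829 test summary report sections; everything else here is a
# hypothetical skeleton, not part of the standard.
REPORT_SECTIONS = [
    "Test summary report identifier",
    "Summary",
    "Variances",
    "Comprehensiveness assessment",
    "Summary of results",
    "Evaluation",
    "Summary of activities",
    "Approvals",
]

def render_report(content: dict) -> str:
    """Render every section heading, with its content where provided."""
    lines = []
    for section in REPORT_SECTIONS:
        lines.append(section.upper())
        lines.append(content.get(section, "(to be completed)"))
    return "\n".join(lines)

report = render_report({"Summary": "System test cycle 3 completed; exit criteria met."})
print(report)
```

Rendering every section, even when empty, makes gaps in the report visible rather than silently omitted.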
Metrics should be collected during and at the end of a test level in order to assess:
§ The adequacy of the test objectives for that test level.
§ The adequacy of the test approaches taken.
§ The effectiveness of the testing with respect to its objectives.

5.3.3 Test control (K2)
Test control involves the management actions and decisions that affect the testing process, the tasks and the people, in order to achieve the testing objectives. These objectives may be those of the original test plan or of a plan modified as new knowledge is acquired. Entry and exit criteria are a commonly used and highly effective control mechanism: an activity may start only when its entry criteria are met, and may stop only when its exit criteria are met. The exit criteria of one activity are often the entry criteria of the next. The test manager can tighten or relax the criteria for an activity, and can reallocate resources, for example by acquiring more testers or developers, or by moving people from one task to another to focus on the most important areas.
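Exit criteria of this kind can be expressed as a simple control gate. In the sketch below the thresholds and metric names are hypothetical, chosen only to illustrate the mechanism:

```python
# A minimal sketch of exit criteria as a control gate; the metric names
# and thresholds are hypothetical examples, not mandated values.
def exit_criteria_met(metrics: dict) -> bool:
    """Testing may stop only when every criterion holds."""
    return (
        metrics["requirements_coverage"] >= 0.95   # e.g. at least 95% of requirements covered
        and metrics["open_critical_defects"] == 0  # no unresolved critical defects
        and metrics["pass_rate"] >= 0.90           # e.g. at least 90% of executed tests pass
    )

# One open critical defect keeps the gate closed despite good coverage.
current = {"requirements_coverage": 0.97, "open_critical_defects": 1, "pass_rate": 0.93}
print(exit_criteria_met(current))
```

Making the criteria explicit and checkable like this is what lets the test manager tighten or relax them deliberately rather than by ad-hoc judgement.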

The test manager and testers should not make the decision about releasing the product. They are responsible for supplying accurate and objective information about software quality, so that whoever does make the release decision makes it on the basis of solid facts.

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions are:
§ Making decisions based on information from test monitoring.
§ Re-prioritizing tests when an identified risk occurs (e.g. software delivered late).
§ Changing the test schedule due to the availability or unavailability of a test environment.
§ Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.
