5.2 Test and estimation (K2)

5.2.1 Test planning (K2)

This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a project or master test plan, and in separate test plans for test levels, such as system testing and acceptance testing.

Outlines of test planning documents are covered by the ‘Standard for Software Test Documentation’ (IEEE 829).

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.

Test planning is a continuous activity and is performed in all life cycle processes and activities.
Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2 Test planning activities (K2)
Test planning activities may include:
§ Determining the scope and risks, and identifying the objectives of testing.
§ Defining the overall approach of testing (the test strategy), including the definition of the test levels and entry and exit criteria.
§ Integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.
§ Making decisions about what to test, which roles will perform the test activities, when and how the test activities should be done, and how the test results will be evaluated.
§ Scheduling test analysis and design activities.
§ Scheduling test implementation, execution and evaluation.
§ Assigning resources for the different activities defined.
§ Defining the amount, level of detail, structure and templates for the test documentation.
§ Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.
§ Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution.

5.2.3 Exit criteria (K2)
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may consist of:
§ Thoroughness measures, such as coverage of code, functionality or risk.
§ Estimates of defect density or reliability measures.
§ Cost.
§ Residual risks, such as defects not fixed or lack of test coverage in certain areas.
§ Schedules such as those based on time to market.
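As a sketch, exit criteria like those above can be evaluated mechanically at the end of a test level. The metric names and threshold values below are illustrative assumptions, not standard figures:

```python
# Hypothetical sketch: checking exit criteria against collected test metrics.
# Metric names and thresholds are invented for illustration.

def exit_criteria_met(metrics, thresholds):
    """Return (decision, unmet) where unmet lists the criteria still failing."""
    unmet = []
    if metrics["statement_coverage"] < thresholds["min_statement_coverage"]:
        unmet.append("thoroughness (coverage)")
    if metrics["open_critical_defects"] > thresholds["max_open_critical_defects"]:
        unmet.append("residual risk")
    if metrics["defects_per_kloc"] > thresholds["max_defects_per_kloc"]:
        unmet.append("defect density")
    return (len(unmet) == 0, unmet)

metrics = {"statement_coverage": 0.92,
           "open_critical_defects": 0,
           "defects_per_kloc": 0.8}
thresholds = {"min_statement_coverage": 0.90,
              "max_open_critical_defects": 0,
              "max_defects_per_kloc": 1.0}
done, unmet = exit_criteria_met(metrics, thresholds)
print(done, unmet)  # → True []
```

In practice a schedule-based criterion (time to market) may override the others, which is why such a check supports, rather than replaces, a management decision.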

5.2.4 Test estimation (K2)
The preferred time to prepare a test estimate is at the beginning of the project, since estimation is predictive by nature. The estimate must be revisited whenever any major change takes place during the development process.

Depending on whether the organization follows a top-down or a bottom-up hierarchy, the person doing the test estimate changes. In the top-down approach, senior management at the project level undertakes the estimation. In the bottom-up approach, testers at the lower level estimate the time their own area requires, and these figures are then aggregated. Both approaches have their advantages and disadvantages, and in most cases a compromise is made between the two. For example, a project manager may allocate a month for testing; this allocation is split into finer elements, testing staff provide feedback, and the estimates and allocations are then amended.
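The compromise between the two approaches can be sketched as reconciling a top-down allocation with the sum of bottom-up task estimates. The task names and figures below are hypothetical:

```python
# Illustrative sketch: reconciling a manager's top-down allocation with
# testers' bottom-up estimates. All numbers are invented.

def reconcile(top_down_days, bottom_up_tasks):
    """Return the bottom-up total and the gap versus the top-down allocation."""
    bottom_up_total = sum(bottom_up_tasks.values())
    gap = bottom_up_total - top_down_days
    return bottom_up_total, gap

tasks = {"test analysis": 5, "test design": 8, "execution": 10, "reporting": 2}
total, gap = reconcile(20, tasks)  # manager allocated 20 working days
print(total, gap)  # → 25 5
```

A positive gap (here, 5 days) is the feedback that triggers the amendment step: either the allocation grows or the test scope shrinks.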

Two approaches for the estimation of test effort are covered in this syllabus:
§ The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values.
§ The expert-based approach: estimating the tasks by the owners of these tasks or by experts.

The testing effort may depend on a number of factors, including:
§ Characteristics of the product:
· the quality of the specification and other information used for test models (i.e. the test basis),
· the size of the product,
· the complexity of the problem domain,
· the requirements for reliability and security,
· the requirements for documentation,
· the scope of the test requirements,
· the number of builds planned.

§ Characteristics of the development process:
· maturity and stability of the organization,
· tools used,
· test process,
· skills of the people involved,
· knowledge of the tools used,
· time pressure,
· domain knowledge,
· test team organization,
· risks,
· regulatory control,
· physical location,
· language.

§ The outcome of testing:
· the number of defects,
· the amount of rework required.

Estimation is basically a four-step approach in which we:
§ Estimate the size of the development product. This is either in LOC [Lines of Code] or FP [Function Points]. The concept of using UCP [Use Case Points] is still in its infancy.
§ Estimate the effort in person-months or person-hours.
§ Estimate the schedule in calendar months.
§ Estimate the cost in currency.
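The four steps above can be walked through with a small worked example. The productivity rate, team size and cost rate below are assumptions invented for illustration, not recommended values:

```python
# Hedged worked example of the four estimation steps.
# All conversion factors are illustrative assumptions.

function_points = 400                  # step 1: size estimate, in FP
effort_pm = function_points / 20.0     # step 2: assume 20 FP per person-month
schedule_months = effort_pm / 4        # step 3: assume a team of 4 people
cost = effort_pm * 10_000              # step 4: assume $10,000 per person-month

print(effort_pm, schedule_months, cost)  # → 20.0 5.0 200000.0
```

The point of the sequence is that each step consumes the previous one: size drives effort, effort plus staffing drives schedule, and effort plus rates drives cost.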

Conventional Approach to Test Effort Estimation
Test engineering managers use many different methods to estimate and schedule their test engineering efforts. Different organizations use different methods depending on the type of project, the inherent risks, the technologies involved, and so on. Most of the time, test effort estimates are lumped together with the development estimates and no separate figures are available.

Here is a description of some conventional methods in use:

The Ad-hoc Method
The test efforts are not based on any definitive timeframe. The efforts continue until some pre-decided deadline set by managerial or marketing personnel is reached, or until the budgeted finances run out. This practice is prevalent in extremely immature organizations and at times has error margins of over 100%.

Percentage of Development Time
The fundamental premise here is that the test engineering effort depends on the development time or effort. First, we estimate the development effort using a technique such as LOC or function points. Next, we apply a heuristic percentage to derive the test effort; this percentage varies widely and is usually based on previous experience. The method is hard to defend, since it rests on no scientific principle or technique, and schedule overruns can range from 50–75% of the estimated time. It is nevertheless by far the most widely used method.

From the Function Point Estimates
In this method, we assume that the number of test cases can be determined from the function point estimate, and then derive the corresponding effort from it.

The formula is: Number of Test Cases = (Function Points)^1.2

We calculate the actual effort in person-hours using a conversion factor obtained from previous project data. The disadvantage of using function points is that they require detailed requirements in advance. Another issue is that modern object-oriented systems are designed with use cases in mind, and this technique is incompatible with them.
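As a sketch, assuming the commonly cited Capers Jones relationship (test cases ≈ function points raised to the power 1.2), the estimate can be computed as follows; the hours-per-test-case conversion factor is an invented stand-in for the "previous project data" mentioned above:

```python
# Sketch of the function-point method, assuming the Capers Jones
# relationship: number of test cases ≈ FP ** 1.2.
# hours_per_test_case is an assumed conversion factor.

def estimate_test_effort(function_points, hours_per_test_case=1.5, exponent=1.2):
    """Return (estimated test cases, estimated effort in person-hours)."""
    num_cases = function_points ** exponent
    return num_cases, num_cases * hours_per_test_case

cases, hours = estimate_test_effort(300)
print(round(cases), round(hours))
```

Note how the detailed-requirements dependency shows up: without a credible function point count as input, the whole chain is unusable.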

A person with past project experience and expertise can make reasonably accurate estimates this way. However, it is not a good method for final estimates and can often be challenged.

Work breakdown structure (WBS)
Here we identify the tasks that make up the test activities and estimate each one in turn. Testers can then review each task estimate to verify that it is realistic; if not, the estimates are re-worked.

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
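The WBS approach can be sketched as summing per-task estimates after a review pass. The tasks, hours and the reviewer's correction factor below are hypothetical:

```python
# Hypothetical WBS sketch: estimate each task, apply reviewer corrections
# to estimates judged unrealistic, then sum to a total. Figures are invented.

wbs = {
    "review test basis": 8,
    "design test cases": 24,
    "prepare test data": 12,
    "execute tests": 40,
    "report results": 6,
}
# Reviewer feedback: the execution estimate looks optimistic, scale it up.
review_factor = {"execute tests": 1.25}

total = sum(hours * review_factor.get(task, 1.0) for task, hours in wbs.items())
print(total)  # → 100.0 person-hours
```

Because each task is visible and individually reviewed, this method makes an estimate far easier to defend than a single percentage pegged against development effort.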
5.2.5 Test approaches (test strategies) (K2)
One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
§ Preventative approaches, where tests are designed as early as possible.
§ Reactive approaches, where test design comes after the software or system has been produced.

Typical approaches or strategies include:
§ Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk.
§ Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).
§ Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality-characteristic-based approaches.
§ Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.
§ Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.
§ Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.
§ Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.

Different approaches may be combined, for example, a risk-based dynamic approach.

The selection of a test approach should consider the context, including:
§ Risk of failure of the project, hazards to the product and risks of product failure to humans, the environment and the company.
§ Skills and experience of the people in the proposed techniques, tools and methods.
§ The objective of the testing endeavour and the mission of the testing team.
§ Regulatory aspects, such as external and internal regulations for the development process.
§ The nature of the product and the business.
