1.1. Why is Testing Necessary? (K2)

1.1.1. Software systems context (K1)
Software systems are an increasing part of life, from business applications (e.g. banking) to consumer products (e.g. cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

We conduct testing to:
§ Learn what a system does or how it behaves.
§ Assess the quality of a system.
§ Ensure that a system performs as required.
§ Demonstrate to the user that a system conforms to requirements.
§ Demonstrate to the users that the system matches what they ordered.

1.1.2. Causes of software defects (K2)
We believe that software can never be made perfect. Consequently, we test the software to find as many defects as we can, so that we deliver a high-quality product with a minimum of defects. As testers we find defects, so we must understand how defects occur. Because testing covers a large number of activities spread throughout the lifecycle, organizations have a variety of reasons for testing.

We use the term bug informally to refer to a problem in the software. It has no precise meaning, and sometimes we use it subconsciously to deflect blame away from the real source of the problem.

Errors (or mistakes): Flaws in the human thought process, made while trying to understand given information, to solve problems, or to use methods and tools.

Defects: The concrete manifestations of errors within the software. One error may cause several defects and various errors may cause identical defects.

Failures: The departures of the operational software system behavior from the user requirements.

A human being can make an error (mistake), which produces a defect (fault, bug) in the code, in software or a system, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so. A particular failure may be caused by several defects, and some defects may never cause a failure.

This provides some insight into the difference between reliability and correctness. A system may be reliable but not correct: it may contain defects, but if the defective code is never executed, we consider the system reliable. On the other hand, if we define correctness as the conformance of the code to the specification, a system may be correct but not reliable, because a user may try to use the system in ways not permitted by the specification and the system may crash. In other words, we can have a defect without a failure; a program may contain a defect that never affects the user.


Fig. 1: Errors, Defects, and Failures

Example 1: As depicted in Fig. 1, the analyst makes an error of assuming that the year will always begin with 19. This in turn leads to a defect: the year is defined as a char(2) field. Hence a failure occurs when the difference between two dates is computed.
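A minimal sketch of the defect in Example 1 (the function name and the two-digit date format are illustrative assumptions, not taken from any real system):

# Hypothetical illustration: the analyst's error ("years always start with 19")
# becomes a defect in code that stores the year as a two-character field.
def years_between(start_yy: str, end_yy: str) -> int:
    # Defect: "19" is hard-coded when the two-digit year is expanded.
    return int("19" + end_yy) - int("19" + start_yy)

print(years_between("85", "99"))  # 14 -- no failure while both years fall in the 1900s
print(years_between("99", "01"))  # -98 -- failure: the correct difference is 2

The defect is present from the start, but it only turns into a failure once the defective code is executed with a post-1999 date.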

Example 2: An analyst makes an error of assuming that a minimum balance of Rs 3000 has to be maintained in a savings account, else the account is deactivated. The analyst's error becomes a defect in the software. This defect leads to a failure at runtime, due to which accounts are wrongly deactivated, leading to customer dissatisfaction.
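A similar sketch for Example 2 (the function, the account representation, and the correct business rule are assumptions made only for illustration):

MIN_BALANCE = 3000  # defect: encodes the analyst's wrong assumption about the business rule

def is_account_active(balance_in_rs: int) -> bool:
    # Accounts below the assumed minimum balance are deactivated.
    return balance_in_rs >= MIN_BALANCE

# Failure at runtime: a customer who keeps, say, Rs 2500 is wrongly deactivated.
print(is_account_active(2500))  # False, although the real requirement may allow this account to stay active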

Computer systems may fail due to one or many of the following reasons:
§ Communication. A lack of communication, or miscommunication, about what an application should do (the application's requirements).
§ Software complexity. It is difficult to comprehend the complexity of current software applications without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
§ Programming errors. Programmers, like anyone else, make mistakes.
§ Changing requirements (documented or undocumented). The end-user may not understand the effects of changes, or may understand them and request them anyway. In the case of a major change, or of many minor changes, known and unknown dependencies among parts of the project interact and cause problems. The complexity of coordinating changes, which may include redesigning, redoing some work or throwing it out entirely, changes in hardware requirements and redistribution of human resources, may result in errors. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and the QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
§ Time pressures. Scheduling of software projects often requires a lot of guesswork. With deadlines looming, programmers make mistakes.
§ Egos. People prefer to say “no problem” rather than admitting that a change can add a lot of complexity, and rather than taking the time to look more closely at the problem. The software ends up with a lot of bugs as a result of too many unrealistic ‘no problems’.
§ Poorly documented code. Maintenance and modification of badly written and poorly documented code results in bugs. In many organizations management does not provide an incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, the programmers get points for quickly turning out code, and consider their job secure if nobody else can understand it.
§ Software development tools. Visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or, because of poor documentation, result in added bugs.
§ Environmental conditions. Radiation, magnetism, electronic fields, and pollution can cause defects in firmware or influence the execution of software by changing hardware conditions.

1.1.3. Role of testing in software development, maintenance and operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if defects found are corrected before the system is released for operational use.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

We evaluate software project requests to determine a rough order-of-magnitude estimate, a risk analysis and the resource requirements. Senior management and Quality Assurance staff conduct this evaluation prior to entering into contractual commitments. The company must perform all inspection and testing of the product necessary to demonstrate conformity with contract requirements, and must maintain inspection and test records sufficient to demonstrate the conformity of the product to those requirements.


Testing is done to:
§ Verify that the software satisfies its specified requirements.
§ Identify differences between expected and actual results (see the sketch below).
§ Determine that the software meets its required results.
§ Measure software quality.
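
As a minimal sketch of the second point (the add function under test is a made-up example), an automated test makes the comparison between expected and actual results explicit:

import unittest

def add(a: int, b: int) -> int:
    # Hypothetical unit under test.
    return a + b

class AddTests(unittest.TestCase):
    def test_add_meets_specified_requirement(self):
        expected = 5        # result required by the specification
        actual = add(2, 3)  # result actually produced by the software
        self.assertEqual(expected, actual)  # any difference is reported as a test failure

if __name__ == "__main__":
    unittest.main()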

1.1.4. Testing and quality (K2)
Quality is defined as “meeting customer requirements”. Testing and quality are closely related: testing can measure the quality of a product and, indirectly, improve it. Testing gives us an insight into how closely the product meets its requirements/specifications, and so it provides an objective measure of its fitness for purpose. By assessing the rigor and number of the tests, and by counting the defects found, we can make an objective assessment of the quality of the system under test. If we do detect defects, we can fix them and so improve the quality of the product.

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g. reliability, usability, efficiency, maintainability, portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see ‘Software Engineering – Software Product Quality’ (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.


Testing should be integrated as one of the quality assurance activities (i.e. alongside development standards, training and defect analysis).

1.1.5. How much testing is enough? (K2)

There are an infinite number of possible tests, and software is never perfect. It is impossible (or at least impractical) to plan and execute all possible tests, and even after thorough testing we can never expect perfect, defect-free software. If we define ‘enough’ testing as ‘when all the defects have been detected’, we obviously have a problem: we can never do ‘enough’.

There are, however, objective measures of coverage (targets) that we can set, somewhat arbitrarily, and then meet. We base these on the traditional test design techniques: the test design and measurement techniques set out coverage items, and we can then design tests against those items and measure how many of them the tests exercise. Using these techniques, we can set an objective target and demonstrate that we have met it, as in the sketch below.
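As a minimal sketch (the grade function and the 100% decision-coverage target are illustrative assumptions), a coverage target turns ‘how much testing’ into something countable and measurable:

def grade(score: int) -> str:
    # Illustrative function with three decision outcomes.
    if score >= 80:
        return "distinction"
    elif score >= 50:
        return "pass"
    return "fail"

# A 100% decision-coverage target sets out three coverage items, so three
# test cases are enough to meet the target, and meeting it is measurable.
tests = [(95, "distinction"), (60, "pass"), (30, "fail")]
for score, expected in tests:
    assert grade(score) == expected
print("3 of 3 decision outcomes exercised: 100% decision coverage")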

But all too often, time is the limiting factor. The problem is that, for all but the most critical developments, even the least stringent test techniques generate many more tests than we can possibly run within the project budget available. In many cases testing is time-limited; ultimately, even in the highest-integrity environments, we have time limits on testing. The test measurement techniques give us an objective ‘benchmark’; based on that, we arrive at an acceptable level of testing by consensus and ensure that we do at least the most important tests. A tester plays an important role in providing enough information on risks and on the tests that address those risks, so that the business and technical experts understand the value of doing some tests and the risks of not doing others. In this way, we arrive at a balanced test approach.

The scope of testing depends heavily upon the following factors:
§ Industry standards that may impose a level of testing.
§ The size and complexity of the system.
§ The amount of new systems integration work required.
§ Any new or cutting-edge (i.e., relatively unproven) technology involved.
§ The performance of the system integrator to date.
§ The knowledge and experience of the customer or the customer’s project team.

We require only a small testing effort for a small system installed by a highly proficient system integrator for a very knowledgeable customer. If any of the factors listed above increases, the amount of test planning and execution required also increases. Two systems of identical size and complexity may therefore require the same or very different levels of testing, depending upon the performance of the integrator and the knowledge of the customer.

Deciding how much testing is enough should take account of the level of risk, including technical and business product and project risks, and project constraints such as time and budget. (Risk is discussed further in Chapter 5.)

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.
