2.2 Test levels (K2)

For each of the test levels, the following can be identified: their generic objectives, the work product(s) being referenced for deriving test cases (i.e. the test basis), the test object (i.e. what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.

2.2.1 Component Testing (K2)


Component/Unit Testing is testing at the lowest level; it sits at the bottom of the V-Model software development life cycle. Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used. In this type of testing, programmers generally test their own code (in the case of white box testing), and both black box and white box test techniques can be used. It is also known as Unit, Module or Program testing.

Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of
the development environment, such as a unit test framework or debugging tool, and, in practice,
usually involves the programmer who wrote the code. Defects are typically fixed as soon as they
are found, without formally recording incidents.
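
As a minimal sketch of what such a component test might look like, the example below uses Python's built-in unittest framework. The apply_discount function and its rules are hypothetical illustrations rather than anything defined in this syllabus; the point is that the component is exercised in isolation, directly from the development environment.

import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical component under test: applies a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Component tests run in isolation, without the rest of the system."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()

Running the file executes all three test cases and reports any failures immediately, which supports the fix-as-you-go style described above.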

The programmer who wrote the code generally performs component testing, making it a sensible and the most economic approach. A programmer who executes test cases on his/her own code can usually track down and fix any defects revealed by the tests relatively quickly. If a tester executes the test cases, he/she must document each failure. Eventually the programmer investigates each of the defect reports and perhaps reproduces them in order to determine their causes. Once they are fixed, the tester re-tests the software to confirm that each defect has indeed been fixed. This amounts to more effort and yet has the same outcome.

However, an organization must bring some independence into the test specification activity: someone other than the programmer must also specify test cases. Both functional and structural test case design techniques are appropriate, though the extent of their use must be defined during the test planning activity. This depends on the risks involved.

One approach in component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
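
A minimal sketch of this cycle, using hypothetical names and pytest-style test functions (plain assert statements), might look as follows: the tests are written first, initially fail, and then just enough code is added to make them pass.

# Test-first sketch (pytest-style): the two tests below were written first and
# initially failed because word_count did not exist; the implementation that
# follows is the minimal code needed to make them pass.

def test_empty_string_has_no_words():
    assert word_count("") == 0


def test_counts_whitespace_separated_words():
    assert word_count("to be or not to be") == 6


def word_count(text: str) -> int:
    # Minimal implementation, written only after the tests above were in place.
    return len(text.split())

In the next cycle another failing test would be added (for example, for punctuation handling), the code extended just enough to pass it, and so on.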

2.2.2 Integration Testing (K2)

Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system or hardware), and interfaces between systems.

It is performed to expose defects in the interfaces and in the interaction between integrated components. Testers should understand the architecture to plan the integration testing, and it is highly recommended that they be involved in integration test planning and design.

Integration is concerned with the process of combining components into an overall system. In software, we normally have integration at two levels. First, the integration of components at the module level into a system, sometimes known as component integration testing or integration testing in the small. Second, the integration of systems into a larger system, sometimes known as system integration testing or integration testing in the large.

Integration testing is all encompassing. It not only includes the correct implementation of the interface between components, but also checks whether the integrated components, now a system, behave as specified. This behavior covers both functional and non-functional aspects of the integrated system. Fig. 7 shows two components interacting to form an integrated system.



Fig. 7: Integration

For example, we consider two (or more) components as integrated when:
1 They have been compiled, linked, and loaded together.
2 They have successfully passed the integration tests at the interface between them.

Thus, we create a new, larger component (A, B) by integrating components A and B. Note that this does not conflict with the idea of incremental integration; it may simply mean that a small component 'B' has been added to a larger component 'A'. This level of testing is also called Link Testing or Sub-System Testing. The Practitioner Syllabus gives Link Testing a better name: Component Integration Testing.
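
As an illustration only, the sketch below shows a component integration test in Python for two hypothetical components: component A (order pricing) calls component B (tax calculation) through its real interface, so the test exercises the interaction between them rather than either component on its own.

# Hypothetical sketch of component integration testing: component A (order
# pricing) calls component B (tax calculation) through the real interface
# rather than a stub, so the test exercises the interaction between the two.

def calculate_tax(amount: float) -> float:           # component B
    return round(amount * 0.20, 2)


def price_order(net_amount: float) -> float:          # component A, calls B
    return round(net_amount + calculate_tax(net_amount), 2)


def test_order_price_includes_tax_from_component_b():
    # Passing tests at the A-B interface is what lets us treat (A, B)
    # as a single, larger integrated component.
    assert price_order(100.0) == 120.0

Once such tests at the A-B interface pass, the pair can be treated as the larger integrated component (A, B) described above.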

The greater the scope of integration, the more difficult it becomes to isolate failures to a specific
component or system, which may lead to increased risk.

As we combine more and more components together, we form a subsystem. This has a more system-like functionality that we can test. Testing non-functional aspects, such as performance, at this stage may prove useful. For integration testing in the small, we have two choices to make:
1 How many components to combine in one go?
2 In what order to combine the components?

There are three main reasons for Integration Testing:
1 To find defects not found in component testing because they become apparent only after integrating the components.
2 To generate credible information about the software under test, so that technical and business decisions can be made.
3 To confirm that, even as the system grows, the risk of failure remains under control.

The decision over which choices are made is called the ‘integration strategy’. The two main integration strategies are Big Bang and Incremental. 

Big Bang Integration
In “Big Bang” Integration, we put together all the components in one go. We assume that since all components have already undergone testing at the unit level and have no defects, we can now put them together.

The Big-Bang approach has a very simple philosophy: we construct and test all the modules or builds independently of each other and, when they are finished, put them all together at the same time. The main advantage of this approach is that it is very quick; no drivers or stubs are needed, which cuts down development time. However, it makes locating and fixing defects much more difficult if a problem arises, ultimately making the approach much more time consuming.

Incremental Integration
We define Incremental Integration as combining components a small number at a time. The minimum number of components added at each increment is one.

The main advantage of incremental integration is that it makes locating and fixing defects, and recovering by reverting to the last known good baseline, very easy. Three main incremental integration strategies determine the order in which the components are combined:
1 Top Down.
2 Bottom-Up.
3 Functional Incrementation.

Top Down Integration and Stubs
In top-down integration we develop and test modules starting at the top level of the program hierarchy and continuing on to the lower levels. In this type of integration, we combine components starting with the highest level in the hierarchy; we integrate and test all components at a given level before moving to the next level down. In place of unavailable lower-level components we use stubs, temporary programs written for testing purposes that stand in for the called components. Stubs, drivers and simulators are collectively known as the test harness or scaffolding.
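
The following is a small sketch, in Python, of how a stub might be used during top-down integration; summarise_orders and DatabaseStub are hypothetical names, and the stub simply returns canned data in place of an unfinished lower-level component.

# Hypothetical sketch of top-down integration with a stub: the high-level
# component is real, the lower-level component it calls is not yet available,
# so a stub returns canned answers in its place.

class DatabaseStub:
    """Stands in for the unfinished low-level database component."""

    def fetch_open_orders(self):
        return [{"id": 1, "total": 120.0}, {"id": 2, "total": 80.0}]


def summarise_orders(db) -> float:
    """High-level component under test; it only knows the interface it calls."""
    return sum(order["total"] for order in db.fetch_open_orders())


def test_summary_using_stubbed_database():
    assert summarise_orders(DatabaseStub()) == 200.0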

Fig. 8: Stubs and Drivers

Bottom Up Integration and Drivers
The bottom-up strategy, as the name suggests, is the opposite of the top-down method. This process starts with building and testing the low-level modules and works its way up the hierarchy. Because we start with the lowest-level components, the calling components may not yet be available, so we make use of drivers. We can also use stubs in bottom-up integration, though not frequently. A driver is a temporary program, written for testing purposes, that replaces a calling program in its absence.
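
A minimal sketch of a driver, again using hypothetical Python names: the low-level normalise_postcode component is real, and the throwaway driver takes the place of the calling program that does not yet exist.

# Hypothetical sketch of bottom-up integration with a driver: the low-level
# component is real, and a small throwaway driver takes the place of the
# calling program that does not exist yet.

def normalise_postcode(raw: str) -> str:
    """Low-level component under test."""
    return raw.replace(" ", "").upper()


def driver():
    """Temporary driver: feeds inputs to the component and checks the results."""
    cases = {" sw1a 1aa ": "SW1A1AA", "ec1a1bb": "EC1A1BB"}
    for raw, expected in cases.items():
        actual = normalise_postcode(raw)
        assert actual == expected, f"{raw!r}: expected {expected}, got {actual}"
    print("all driver checks passed")


if __name__ == "__main__":
    driver()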



Fig. 9: Top Down and Bottom Up Strategy

Functional Incrementation
The third integration strategy is Functional Incrementation, which we use to achieve basic functionality with a minimum number of components integrated. Functional Incrementation is defined as “a strategy for combining components in integration testing in the small where they are combined to achieve some minimum capability or to follow a thread of execution of transactions”.

Consider the following points when performing integration testing:
§ Adding small increments to the baselines.
§ Considering use of larger increments if using stubs and drivers.
§ Keeping stubs and drivers as simple as possible.
§ Planning integration testing in the small as early as possible in the SDLC; this saves a lot of time and effort.

Integration testing in the large, also called “System Integration Testing”, is the integration testing of two or more system components. Specifically, system integration testing is the testing of software components distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e., defects involving distribution and back-office integration).

The typical objectives of system integration testing are to:
§ Cause failures involving the distribution of integrated system components.
§ Cause failures involving integration of the system with other applications (e.g., legacy back-office systems).
§ Report these failures to the development team so that they can fix them.
§ Help the development team ensure that the software has been successfully distributed, in order to improve the effectiveness of system and launch testing.
§ Minimize the number of low-level defects that prevent effective system and launch testing.

System integration testing can typically begin when the following preconditions hold:
§ The integration team is adequately staffed and trained in system integration testing.
§ The integration environment is ready.
§ At least two software components have:
· Passed the relevant software integration testing.
· Been ported to the integration environment.
· Been integrated.

System integration testing is typically complete when the following postconditions hold:
§ All system components have been integrated (e.g., software components have been distributed to their hardware components).
§ A system integration test suite of test cases exists for each interface between software components distributed to different hardware components.
§ All system integration test suites execute successfully (i.e., the tests completely execute and the actual test results match the expected test results).

The system passes the subset of black-box functional system tests provided by the independent test team; these tests exercise major system interfaces, especially those between distributed software components and with external server systems. Once all of the package's components have been identified, we can finalize the package by including scripts and other external components, and then automate the package for deployment. This refined package supports the next battery of tests.

It ensures that:
§ Our system can interact with different networks, operating systems and communication middleware.
§ Our system can work with other systems such as inventory, payroll, etc.
§ Our system works with 3rd party packages.

Planning for system integration testing includes:
§ Testing of connections/interfaces one at a time.
§ Identifying risks and testing most critical areas first.
§ Identifying the various resources required.
§ Considering different operating systems.
§ Considering different machine configurations.
§ Considering different network configurations.
§ Considering a tie-up with some hardware manufacturers to use their sites for testing.

2.2.3 System Testing  (K2)

In System Testing, we test the entire product or a major portion of it at one time.

System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme.

In system testing, the test environment should correspond to the final target or production
environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business
processes, use cases, or other high level descriptions of system behavior, interactions with the
operating system, and system resources.

It is divided into two categories: Functional System Testing and Non-Functional System Testing. Functional System Testing is further divided into two more categories: Requirements-based testing and Business Process-based testing.

Functional System Testing
In Functional System Testing, we test the completed application as a whole, to determine that it provides all of the behaviors required of it. It also covers testing of completed increments that provide some degree of end-user functionality.

In this type of Testing, we search for defects or variances between the actual operation of the system and the requirements for the system. We treat the system as a black box and analyze the expected behavior of the system, according to its functional specification and generate a test procedure for each of the possible usage scenarios.

It also corresponds to use case scenarios. Here we analyze how a change in one part of the system affects other parts, and design test cases accordingly. The result of one test case produces the data that serves as the input for the next test case.
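
The sketch below illustrates this chaining with a hypothetical, in-memory stand-in for the system under test: the user id returned by the first step becomes the input to the second, just as the output of one test case feeds the next in a use-case flow. The class and method names are illustrative assumptions, not part of any real system.

# Hypothetical sketch of a functional system test scenario in which the output
# of one step feeds the next step: register a user, then place an order for
# that user, treating the system as a black box exposed only via public calls.

class ShopSystem:
    """Stand-in for the system under test."""

    def __init__(self):
        self._users, self._orders = {}, []

    def register_user(self, name: str) -> int:
        user_id = len(self._users) + 1
        self._users[user_id] = name
        return user_id

    def place_order(self, user_id: int, item: str) -> int:
        if user_id not in self._users:
            raise KeyError("unknown user")
        self._orders.append((user_id, item))
        return len(self._orders)


def test_use_case_register_then_order():
    system = ShopSystem()
    user_id = system.register_user("Asha")          # step 1: output ...
    order_id = system.place_order(user_id, "book")  # ... feeds step 2
    assert order_id == 1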

Requirements-Based Testing
Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests. We can use the table of contents of the requirements specification as an initial test inventory or list of items to test (or not to test). We must also prioritize the requirements based on risk criteria and use this to prioritize the tests. This ensures that the most important and most critical tests are included in the system testing effort. Requirements-based testing mainly:
1 Is based on functional requirements.
2 Uses specification of functional requirements.

Business Process-Based Testing
Business process-based testing is defined as “testing based on expected user profiles such as scenarios or use cases, used in system testing and acceptance testing.” This type of testing is based on business processes. Here we use knowledge of business profiles (situations arising in the day-to-day business use of the system) and create test cases from a business perspective.

Non-Functional System Testing
Non-functional system testing is defined as “testing of system requirements that do not relate to functionality, i.e. performance, usability, etc. Also known as quality attributes.” The various types of non-functional system testing include:
§ Load, performance & stress testing.
§ Usability testing.
§ Configuration & Installation testing.
§ Reliability testing.
§ Back-up & Recovery testing.
§ Documentation testing.

Non-functional system testing is discussed in detail in section 2.3.2.

2.2.4 Acceptance testing  (K2)
Acceptance testing is often the responsibility of the customers or users of a system; other
stakeholders may be involved as well.

The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur as more than just a single test level, for example:
§ A COTS software product may be acceptance tested when it is installed or integrated.
§ Acceptance testing of the usability of a component may be done during component testing.
§ Acceptance testing of a new functional enhancement may come before system testing.

Typical forms of acceptance testing include the following:

User Acceptance Testing
In User Acceptance Testing (UAT), we test the software against certain pre-determined criteria for acceptance. UAT is the final stage of validation. We must involve the users in the test specification for UAT. In this type of testing, we test in a replica of the real environment. UAT is also known as Beta Testing (usually in the PC world), QA Testing, Application Testing or End User Testing. This phase focuses on functional testing to check whether the system meets the user acceptance criteria.

By users we mean the real business users who operate the system, such as the staff of an organization, its suppliers or its customers. They understand the business exactly and know how it operates, and therefore they are the only people qualified to check a system to see if it delivers any benefit to the business or organization.

It is unlikely that system developers know much about the realities of running the organization beyond what they acquire from requirements specifications and similar documents. In addition, they have been closely involved in the design compromises that always take place during system development, and so have a commitment to the system as it is. In UAT, business users try to make the system fail, taking into account the real organization it must work in. UAT checks the system in the context of the business environment it must operate in.

In short we must protect the organization from harm. Any changes in a business, especially installing new computer systems, expose it to many risks like:
§ Reputation Risk: When external people such as customers, suppliers, or legal authorities perceive a problem with the organization and decide not to use it, or, in the case of legal authorities, give it more scrutiny than they have before.
§ Legal Risk: The possibility that the system could break laws, leaving the organization open to legal proceedings.
§ Time Risk: The system may not meet key business deadlines. We must discover this in testing and not when a system goes live.
§ Resource Risk: If a system does not properly integrate with our organization, we may have to expend a lot of resources for working around the system, adding to the cost.

Therefore the main reason for UAT is to find out what a system does to our organization before we implement it. The final decision is then made based on the evidence presented by the testing. 

Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
§ testing of backup/restore;
§ disaster recovery;
§ user management;
§ maintenance tasks;
§ periodic checks of security vulnerabilities.


Contract and regulation Acceptance Testing
If a system is the subject of a legally binding contract, there may be certain areas/aspects of the system that we have to test. In this type, we conduct testing as per the contract. When the contract is negotiated, both customer and supplier focus on agreeing about the specifications and the timetable for delivery. Often the parties are content to deal with the important area of acceptance on an “agreement to agree” basis, but this is legally unsatisfactory, as an “agreement to agree” is not enforceable. If the parties insist on taking this approach, the lawyer (for the customer) must try to use contractual wording to at least describe the parameters and timing of the testing process.
Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.

Alpha and Beta (Field) Testing
We perform Alpha and Beta testing when the software seems stable. The people who represent our market use the product as if they had bought the finished version and give us their comments. This helps the developers of market, or COTS, software get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha tests are performed at the developer's site, while beta tests are performed at the customers' sites. The customer, in the presence of the developer, conducts Alpha Testing at the developer's location, in a controlled environment. Alpha testing takes place at the software prototype stage, when the software is first able to run: it does not have all the intended functionality, but it has the core functions and is able to accept inputs and generate outputs.

Beta Testing is conducted at the customer's location by the end user; the developer is not present during testing. The end user conducts this pre-release test under live conditions. It is useful in making the final release acceptable to the end user.

Both Alpha and Beta testing are performed by potential customers, not the developers of the product. Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.





