Chapter Comments 13-1

Chapter        13
Software Testing Strategies


This chapter discusses a strategic approach to software testing that is applicable
to most software development projects. The recommended process begins with unit
testing, proceeds to integration testing, then to validation testing, and finally
to system testing.

13.1 A Strategic Approach to Software Testing

Testing is the process of exercising a program with the specific intent of
finding errors prior to delivery to the end user.

What Testing Shows

[Figure: what testing shows: requirements conformance, an indication of quality]
13-2   SEPA, 6/e Instructor’s Guide

Who Tests the Software?

    developer: understands the system and is driven by “delivery”
    independent tester: must learn about the system and is driven by quality

All S/W testing strategies provide the S/W developer with a template for testing,
and all have the following generic characteristics:

- Conduct effective formal technical reviews; by doing this, many errors will
  be eliminated before testing commences.
- Testing begins at the component level and works “outward” toward the
  integration of the entire computer-based system.
- Different testing techniques are appropriate at different points in time.
- Testing is conducted by the developer of the S/W and (for large projects) an
  independent test group.
- Testing and debugging are different activities, but debugging must be
  accommodated by any testing strategy.

13.1.1 Verification and Validation

Verification refers to the set of activities that ensure that S/W correctly
implements a specific function.

Validation refers to the set of activities that ensure that the S/W that has been
built is traceable to customer requirements.

Verification: Are we building the product right?
Validation: Are we building the right product?

The definition of verification and validation encompasses many of the activities
that are encompassed by software quality assurance (SQA).

Testing does provide the last fortress from which quality can be assessed and,
more pragmatically, errors can be uncovered. Testing should not be viewed as a
safety net that will catch all errors that occurred because of weak S/W
engineering practices. Stress quality and error detection throughout the S/W
process.

13.1.2 Organizing for Software Testing

For every S/W project, there is an inherent conflict of interest that occurs as
testing begins: the programmers who built the S/W are asked to test it.

Unfortunately, these developers have a vested interest in demonstrating that the
program is error free and works perfectly according to the customer’s requirements.

An independent test group does not have the conflict that builders of the S/W
might experience.

There are often a number of misconceptions that can be erroneously inferred
from the preceding discussion:

1. That the developer of the S/W shouldn’t test.
2. That the S/W should be tossed over the wall to strangers who will test it.
3. That testers get involved only when testing steps are about to begin.

These statements are all incorrect.

The role of an Independent Test Group (ITG) is to remove the inherent problems
associated with letting the builder test the S/W that has been built.

The ITG and the S/W engineers work closely throughout a S/W project to ensure
that thorough tests are conducted.

Testing Strategy

[Figure: the testing strategy spiral: unit test, integration test, validation test, system test]
13.2 Strategic Issues

Testing Strategy

- We begin by “testing-in-the-small” and move toward “testing-in-the-large.”
- For conventional software:
  - The module (component) is our initial focus.
  - Integration of modules follows.
- For OO software:
  - Our focus when “testing in the small” changes from an individual module
    (the conventional view) to an OO class that encompasses attributes and
    operations and implies communication and collaboration.

Specify product requirements in a quantifiable manner long before testing
commences (e.g., portability, maintainability, usability).

State testing objectives explicitly (e.g., test effectiveness, test coverage,
mean time to failure).

Understand the users of the software and develop a profile for each user category.
Build use cases that describe the interaction scenario for each class of user.
Develop a testing plan that emphasizes “rapid cycle testing.” Feedback generated
from rapid-cycle tests can be used to control quality levels and the corresponding
test strategies.

Build “robust” software that is designed to test itself.

Use effective formal technical reviews as a filter prior to testing.

Conduct formal technical reviews to assess the test strategy and test cases themselves.

Develop a continuous improvement approach for the testing process. The test strategy
should be measured by using metrics.

13.3 Test Strategies for Traditional Software

13.3.1 Unit Testing
Both black-box and white-box testing techniques have roles in testing individual
software modules.
Unit Testing focuses verification effort on the smallest unit of S/W design.
Unit Testing Considerations

[Figure: the module to be tested; test cases exercise its interface, local data
structures, boundary conditions, independent paths, and error handling paths]

Unit Test Considerations:
Module interface is tested to ensure that information properly flows into and out
of the program unit under test.
Local data structures are examined to ensure that data stored temporarily
maintains its integrity.
All independent paths through the control structure are exercised to ensure that
all statements in a module have been executed at least once.
All error handling paths are tested.
If data do not enter and exit properly, all other tests are moot.
Comparison and control flow are closely coupled. Test cases should uncover
errors such as
1. comparison of different data types
2. incorrect logical operators or precedence
3. expectation of equality when precision error makes equality unlikely
4. incorrect comparison of variables
5. improper loop termination
6. failure to exit when divergent iteration is encountered
7. improperly modified loop variables
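To make error class 3 concrete, here is a minimal sketch (the `total` helper is hypothetical, not from the text) showing why an exact floating-point equality test is fragile and how a tolerance-based comparison avoids the problem:

```python
import math

def total(values):
    """Hypothetical component under test: sums a list of floats."""
    return sum(values)

# Fragile: exact equality fails because 0.1 + 0.2 is not exactly 0.3 in
# binary floating point (it is 0.30000000000000004).
fragile = total([0.1, 0.2]) == 0.3

# Robust: compare within a tolerance instead of expecting exact equality.
robust = math.isclose(total([0.1, 0.2]), 0.3, rel_tol=1e-9)

print(fragile, robust)   # False True
```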
Boundary testing is essential. S/W often fails at its boundaries. Test cases that
exercise data structure, control flow, and data values just below, at, and just
above maxima and minima are very likely to uncover errors.
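As an illustration, a small sketch of boundary testing for a hypothetical `grade` unit, probing values just below, at, and just above each boundary:

```python
def grade(score):
    """Hypothetical unit: map an integer score in [0, 100] to pass/fail; 60 passes."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Probe just below, at, and just above the interesting boundaries (0, 60, 100).
for score, expected in [(0, "fail"), (59, "fail"), (60, "pass"),
                        (61, "pass"), (100, "pass")]:
    assert grade(score) == expected

# The out-of-range neighbors of the minima and maxima must be rejected.
for score in (-1, 101):
    try:
        grade(score)
        raise AssertionError("boundary violation not caught")
    except ValueError:
        pass
```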
Error handling: when error handling is evaluated, potential errors that should be tested include:
1. error description is unintelligible
2. error noted does not correspond to error encountered
3. error condition causes O/S intervention prior to error handling
4. exception-condition processing is incorrect
5. error description does not provide enough information to assist the location
   of the cause of the error.
Unit Test Procedures
Because a component is not a stand-alone program, driver and/or stub S/W
must be developed for each unit test.
In most applications, a driver is nothing more than a “main program” that accepts
test case data, passes such data to the component, and prints relevant results.
Stubs serve to replace modules that are subordinate to the component to be
tested. A stub (“dummy program”) uses the subordinate module’s interface, may
do minimal data manipulation, provides verification of entry, and returns
control to the module undergoing testing.
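A minimal sketch of this arrangement (all names are illustrative, not from the text): a driver feeds test-case data to the component under test, while a stub stands in for its subordinate module:

```python
def classify(reading, lookup):
    """Component under test: classifies a reading via a subordinate lookup module."""
    threshold = lookup(reading)          # call into the subordinate (stubbed here)
    return "high" if reading > threshold else "low"

def lookup_stub(reading):
    """Stub: honors the subordinate's interface, verifies entry, returns a fixed value."""
    print(f"stub entered with reading={reading}")    # verification of entry
    return 10                                        # minimal data manipulation

def driver():
    """Driver: a 'main program' that feeds test-case data and reports results."""
    results = []
    for reading, expected in [(5, "low"), (15, "high")]:
        actual = classify(reading, lookup_stub)
        results.append((reading, actual, actual == expected))
    return results

for reading, actual, ok in driver():
    print(reading, actual, "OK" if ok else "FAIL")
```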

[Figure: unit-test environment: a driver applies test cases to the module; stubs
replace its subordinate modules; tests cover the interface, local data structures,
boundary conditions, independent paths, and error handling paths]

13.3.2 Integration Testing Strategies
Section 13.3.2 focuses on integration testing issues. Integration testing often
forms the heart of the test specification document. Don't be dogmatic about a
"pure" top-down or bottom-up strategy. Rather, emphasize the need for an
approach that is tied to a series of tests that (hopefully) uncover module
interfacing problems.

- The “big bang” approach: all components are combined in advance, and the
  entire program is tested as a whole.
- Incremental integration: the antithesis of the big bang approach. The program
  is constructed and tested in small increments, where errors are easier to
  isolate and correct.
Integration Testing is a systematic technique for constructing the S/W architecture
while at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit tested components and build a program
structure that has been dictated by design.

Top-down Integration

Top-down Integration testing is an incremental approach to construction of the
S/W arch.

Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (main program).

Modules subordinate to the main control module are incorporated into the
structure in either depth-first or breadth-first manner.

Depth-first integration integrates all components on a major control path of the
program structure. Selection of a major path is somewhat arbitrary and depends
on application-specific characteristics.

Breadth-first integration incorporates all components directly subordinate at each
level, moving across the structure horizontally.

The integration process is performed in a series of 5 steps:
   1. The main control module is used as a test driver, and stubs are substituted
       for all components directly subordinate to the main control module.
   2. Depending on the integration approach selected, subordinate stubs are
       replaced one at a time with actual components.
   3. Tests are conducted as each component is integrated.
   4. On completion of each set of tests, another stub is replaced with the real
       component.
   5. Regression testing may be conducted to ensure that new errors have not
       been introduced.

   The process continues from step 2 until the entire program structure is built.
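The five steps above can be sketched as follows; the components (`report`, `fetch`, `fmt`) are hypothetical stand-ins, and the stubs are deliberately trivial:

```python
def report(fetch, fmt):      # main control module, used as the test driver (step 1)
    """Top module: combines the results of its two subordinate components."""
    return f"report: {fetch()} | {fmt()}"

def real_fetch():            # actual subordinate component
    return "42 records"

def real_fmt():              # actual subordinate component
    return "layout-v1"

fetch_stub = lambda: "<fetch stub>"    # stubs for the direct subordinates
fmt_stub = lambda: "<fmt stub>"

stages = []
# Step 1: the top module is tested with stubs substituted for all subordinates.
stages.append(report(fetch_stub, fmt_stub))
# Steps 2-4: stubs are replaced one at a time with actual components, and
# tests are re-run after each replacement (regression testing, step 5).
stages.append(report(real_fetch, fmt_stub))
stages.append(report(real_fetch, real_fmt))   # entire structure built
print(stages)
```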

[Figure: top-down integration: the top module is tested with stubs; stubs are
replaced one at a time, “depth first”; as new modules are integrated, some
subset of tests is re-run]

Top-down strategy sounds relatively uncomplicated, but, in practice, logistical
problems can arise.

Bottom-up Integration

Bottom-up integration testing begins construction and testing with atomic modules
(components at the lowest levels in the program structure).

The bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a specific S/W
   subfunction.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the
   program structure.
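A small sketch of these steps, with a hypothetical two-module cluster and a throwaway driver:

```python
# Step 1: low-level worker modules combined into a cluster.
def parse(line):
    """Worker module: parses a comma-separated line into integers."""
    return [int(tok) for tok in line.split(",")]

def total(nums):
    """Worker module: sums a list of integers."""
    return sum(nums)

def cluster_driver(cases):
    """Temporary driver (step 2): feeds test input to the cluster, checks output."""
    results = []
    for line, expected in cases:
        got = total(parse(line))        # exercise the cluster (step 3)
        results.append(got == expected)
    return results

# Once the cluster passes, the driver is removed and the cluster is combined
# upward with the next level of the program structure (step 4).
print(cluster_driver([("1,2,3", 6), ("10,20", 30)]))
```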


[Figure: bottom-up integration: worker modules are grouped into builds and
integrated; drivers are replaced one at a time, moving upward]

Regression testing
Regression testing is the re-execution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side effects.
It is the activity that helps to ensure that changes do not introduce unintended
behavior or additional errors.
The regression test suite contains three different classes of test cases:

1. A representative sample of tests that will exercise all S/W functions.
2. Additional tests that focus on S/W functions that are likely to be affected by
   the change.
3. Tests that focus on the S/W components that have been changed.
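One way to organize such a suite, sketched with toy functions (`add` and `scale` are illustrative; assume `scale` is the component that was just changed):

```python
def add(a, b):
    return a + b

def scale(a, k):        # assume scale is the component that was just changed
    return a * k

suite = [
    # 1. representative sample exercising all S/W functions
    ("representative", lambda: add(2, 3) == 5),
    ("representative", lambda: scale(2, 3) == 6),
    # 2. tests focused on functions likely to be affected by the change
    ("affected", lambda: add(scale(1, 4), 1) == 5),
    # 3. tests focused on the changed component itself
    ("changed", lambda: scale(0, 7) == 0),
]

def run_regression(suite):
    """Run every test case, grouped by regression class; True means all passed."""
    classes = {name for name, _ in suite}
    return {name: all(test() for n, test in suite if n == name)
            for name in classes}

print(run_regression(suite))
```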

Smoke Testing

Smoke testing is an integration testing approach that is commonly used when
product software is being developed. It is a common approach for creating
“daily builds” for product software.
Smoke testing steps:
1. Software components that have been translated into code are integrated into a
   “build.” A build includes all data files, libraries, reusable modules, and
   engineered components that are required to implement one or more product
   functions.
2. A series of tests is designed to expose errors that will keep the build from
   properly performing its function. The intent should be to uncover “show
   stopper” errors that have the highest likelihood of throwing the software
   project behind schedule.
3. The build is integrated with other builds and the entire product (in its current
   form) is smoke tested daily. The integration approach may be top down or
   bottom up.
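A minimal smoke-test sketch along these lines; the build contents here are stand-ins (stdlib modules) rather than real product components:

```python
import importlib

# Modules assumed to make up today's "build" (names are illustrative stand-ins).
BUILD_MODULES = ["json", "csv", "sqlite3"]

def smoke_test():
    """Daily smoke test: fail fast on 'show stopper' errors in the build."""
    failures = []
    for name in BUILD_MODULES:
        try:
            importlib.import_module(name)      # does the build even load?
        except Exception as exc:
            failures.append((name, repr(exc)))
    # One shallow end-to-end check of a critical function per build:
    import json
    if json.loads(json.dumps({"ok": True})) != {"ok": True}:
        failures.append(("json round-trip", "mismatch"))
    return failures

print("SMOKE PASS" if not smoke_test() else "SHOW STOPPER")
```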

Smoke testing provides a number of benefits when it is applied on complex,
time-critical S/W projects:

- Integration risk is minimized: because smoke tests are conducted daily,
  incompatibilities and other errors are uncovered early.
- The quality of the end product is improved: because the approach is construction
  oriented, smoke testing is likely to uncover functional errors as well as
  architectural and component-level design errors.
- Error diagnosis and correction are simplified: errors uncovered during smoke
  testing are likely associated with “new S/W increments.”
- Progress is easier to assess: with each passing day, more of the S/W has been
  integrated and more has been demonstrated to work.

13.4 Test Strategies for Object-Oriented Software

This section clarifies the differences between OOT and conventional testing with
regard to unit testing and integration testing. The key point to unit testing in an
OO context is that the lowest testable unit should be the encapsulated class or
object (not isolated operations) and all test cases should be written with this goal
in mind.

Given the absence of a hierarchical control structure in OO systems, integration
testing by adding operations to classes one at a time is not appropriate.

13.4.1 Unit Testing in the OO Context

An encapsulated class is the focus of unit testing; however, operations within the
class and the state behavior of the class are the smallest testable units.

Class testing for OO S/W is analogous to module testing for conventional S/W.
It is not advisable to test operations in isolation.
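A sketch of class-level testing, using a hypothetical `Account` class: the test drives a sequence of operations and checks the resulting state, rather than probing each operation in isolation:

```python
class Account:
    """Hypothetical class under test: balance state plus operations on it."""

    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Test the encapsulated class as a whole: a sequence of operations driving
# the object's state, checked against the expected resulting state.
acct = Account()
acct.deposit(100)
acct.withdraw(30)
assert acct.balance == 70
print("class state after deposit/withdraw:", acct.balance)
```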

13.4.2 Integration Testing in the OO Context

An important strategy for integration testing of OO S/W is thread-based testing.
Threads are sets of classes that respond to an input or event. Use-based tests
focus on classes that do not collaborate heavily with other classes.

Thread-based testing integrates the set of classes required to respond to one
input or event for the system. Each thread is integrated and tested individually.

Use-based testing begins the construction of the system by testing those classes
(called independent classes) that use very few (if any) server classes.

Next, the dependent classes, which use independent classes, are tested.

This sequence of testing layers of dependent classes continues until the entire
system is constructed.

Cluster testing is one step in the integration testing of OO S/W. A cluster of
collaborating classes is exercised by designing test cases that attempt to uncover
errors in the collaborations.
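A small cluster-test sketch with two hypothetical collaborating classes, exercising the collaboration between them rather than either class alone:

```python
class Catalog:
    """Server class: maps item names to prices."""

    def __init__(self, prices):
        self.prices = prices

    def price_of(self, item):
        return self.prices[item]

class Cart:
    """Dependent class that collaborates with Catalog to total its contents."""

    def __init__(self, catalog):
        self.catalog = catalog
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        # The collaboration under test: Cart queries Catalog per item.
        return sum(self.catalog.price_of(i) for i in self.items)

# Cluster test: drive the Cart-Catalog collaboration and check the joint result.
cart = Cart(Catalog({"pen": 2, "pad": 5}))
cart.add("pen")
cart.add("pad")
assert cart.total() == 7
print("cluster total:", cart.total())
```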

13.5 Validation Testing

In this section validation testing is described as the last chance to catch program
errors before delivery to the customer. If the users are not happy with what they
see, the developers often do not get paid. The key point to emphasize is
traceability to requirements. In addition, the importance of alpha and beta testing
(in product environments) should be stressed.

High Order Testing

Validation Test Criteria:

Focus is on software requirements. A test plan outlines the classes of tests to be
conducted, and a test procedure defines specific test cases. Both the plan and
procedure are designed to ensure that all functional requirements are satisfied,
all behavioral characteristics are achieved, all performance requirements are
attained, documentation is correct, and usability and other requirements are met.

Configuration Review:

It is important to ensure that the elements of the S/W configuration have been
properly developed.

Alpha/Beta testing:

The focus is on customer usage.

The alpha test is conducted at the developer’s site by end users. The S/W is used
in a natural setting with the developer “looking over the shoulder” of typical
users and recording errors and usage problems. Alpha tests are conducted in a
controlled environment.

The beta test is conducted at end-user sites. The developer is generally not
present. The beta test is a live application of the S/W in an environment that
cannot be controlled by the developer. End users record errors and all usage
problems encountered during the test and report the list to the developer.
S/W engineers then make modifications and prepare the S/W product for release
to the entire customer base.

13.6 System Testing

The focus is on system integration. “Like death and taxes, testing is both
unpleasant and inevitable.”

System testing is a series of different tests whose primary purpose is to fully
exercise the computer-based system. The following are the types of system tests:

13.6.1 Recovery Testing

Forces the software to fail in a variety of ways and verifies that recovery is
properly performed. “Data recovery”

13.6.2 Security Testing

It verifies that protection mechanisms built into a system will, in fact, protect it
from improper penetration.
Beizer: “The system’s security must, of course, be tested for invulnerability from
frontal attack, but must also be tested for invulnerability from flank or rear
attack.” (“Social engineering.”)

13.6.3 Stress Testing

It executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume.

For example:

1) Special tests may be designed to generate 10 interrupts per second, when one
   or two is the average rate,
2) Input data rates may be increased by an order of magnitude to determine
   how input functions will respond,
3) Test cases that require maximum memory or other resources are executed,
4) Test cases that may cause memory management problems are designed,
5) Test cases that may cause excessive hunting for disk-resident data are created.

A variation of stress testing is a technique called sensitivity testing. It
attempts to uncover data combinations within valid input classes that may cause
instability or improper processing.
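To illustrate item 2 of the list above, a minimal sketch (the `process` function is hypothetical) that drives an input function at nominal volume and then at an order of magnitude more, checking that its output stays stable:

```python
def process(batch):
    """Hypothetical input function under stress: must handle any batch size."""
    return sum(batch) / len(batch) if batch else 0.0

# Nominal load versus an order-of-magnitude increase in input volume.
nominal = [list(range(100)) for _ in range(10)]
stressed = [list(range(100)) for _ in range(100)]   # 10x the batch count

for load in (nominal, stressed):
    results = [process(b) for b in load]
    # Output should stay stable and correct regardless of load.
    assert all(r == results[0] for r in results)

print("stable under 10x load")
```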

13.6.4 Performance Testing

Performance testing evaluates the run-time performance of software within the
context of an integrated system.

Performance tests are coupled with stress testing and usually require both H/W
and S/W instrumentation (processing cycles, log events).

13.7 The Art of Debugging

13.7.1 The Debugging Process

[Figure: the debugging process: test cases are executed and results assessed;
suspected causes are identified and corrected; regression tests and new test
cases follow each correction]

Debugging occurs as a consequence of successful testing. That is, when a test
case uncovers an error, debugging is an action that results in the removal of
the error.
Debugging is not testing but occurs as a consequence of testing. Debugging
process begins with the execution of a test case.
Results are assessed and a lack of correspondence between expected and
actual performance is encountered. In many cases, the non-corresponding
data are a symptom of an underlying cause as yet hidden. Debugging
attempts to match symptom with cause, thereby leading to error correction.
Debugging will always have one of two outcomes:
1) The cause will be found and corrected, or
2) The cause will not be found.

Why is debugging so difficult?
1. The symptom and the cause may be geographically remote, especially in highly
   coupled components.
2. The symptom may disappear temporarily when another error is corrected.
3. The symptom may actually be caused by non-errors (round-off inaccuracies, for example).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (a real-time
   application on which input ordering is indeterminate).
7. The symptom may be intermittent. That is particularly common in
   embedded systems that couple H/W and S/W inextricably.
8. The symptom may be due to causes that are distributed across a number
   of tasks running on different processors.
13.7.2 Psychological Considerations
It appears that debugging prowess is an innate human trait. Although it may
be difficult to learn how to debug, a number of approaches to the problem
can be proposed.

Three debugging strategies have been proposed:
   1. Brute force
   2. Backtracking
   3. Cause Elimination
Each of these strategies can be conducted manually, but modern tools can
make the process much more effective.
Brute force is probably the most common and least efficient method for
isolating the cause of a software error. Applying a “let the computer find the
error” philosophy, memory dumps are taken, run-time traces are invoked, and the
program is loaded with output statements.
Although the mass of information may ultimately lead to success, it more
frequently leads to wasted effort and time.
Backtracking: beginning at the site where a symptom has been uncovered, the
source code is traced backward until the site of the cause is found. The
larger the program, the harder it is to find the problem.
Cause elimination: manifested by induction or deduction, it introduces the
concept of binary partitioning. Data related to the error occurrence are
organized to isolate potential causes.
A “cause hypothesis” is devised, and the aforementioned data are used to
prove or disprove the hypothesis. Alternatively, a list of all possible causes is
developed, and tests are conducted to eliminate each.
If initial tests indicate that a particular cause hypothesis shows promise, the
data are refined in an attempt to isolate the bug.
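Binary partitioning can be sketched as a search over the failure-inducing input; this assumes a single offending item and a repeatable failure check, both of which are illustrative assumptions:

```python
def find_bad_input(inputs, fails):
    """Binary partitioning: repeatedly halve the input set to isolate the single
    item whose presence makes the run fail (a cause-elimination sketch)."""
    lo, hi = 0, len(inputs)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Cause hypothesis: the offending item lies in the first half. Test it.
        if fails(inputs[lo:mid]):
            hi = mid        # hypothesis confirmed: discard the second half
        else:
            lo = mid        # hypothesis refuted: the cause is in the second half
    return inputs[lo]

# Suppose any batch containing the record "CORRUPT" crashes the (hypothetical) system:
crashes = lambda batch: "CORRUPT" in batch
data = ["r1", "r2", "CORRUPT", "r4", "r5"]
print(find_bad_input(data, crashes))   # isolates the offending record
```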
