

                                    UCLA CS130 – Software Engineering
                                             Winter, 2002
                                            Lecture Notes
                                  Testing Techniques and Strategies
                                          February 13, 2002

Today’s Lecture – Software Testing (Chapter 17 in text)

1.   Testing Techniques
      1.1. Fundamentals – objectives, principles, testability
      1.2. Test Case Design
      1.3. White Box Testing
      1.4. Basis Path Testing
      1.5. Control Structure Testing
      1.6. Black-Box Testing
      1.7. Testing for Specialized Environments
2.   Testing Strategies
      2.1. Overview
      2.2. Relationship of Testing, Verification and Validation
      2.3. Organization - Roles of Software Developers, Software Testers
      2.4. Generalized Testing Strategy
      2.5. Completion – knowing when to stop
      2.6. Strategic Issues

3.   Testing Fundamentals

     •    Testing done after “constructive” activities of requirements specification, design, and implementation.
     •    May be viewed as “destructive” – test cases may be developed to break software.
     •    Are test objectives really destructive?

         3.1.   Objectives – execute software to:
                3.1.1.     Find Faults – this may be viewed as somewhat destructive
                  Want to remove as many faults as possible before the software is
                                     delivered to the customer.
                  Good test cases have a high probability of finding previously
                                     undiscovered faults.
                  A successful test case uncovers previously unidentified faults.
                3.1.2.     Demonstrate software functions and behaves properly – in addition to
                           finding faults, customer and developer need to know that software
                           functionality is as expected. Test data can also be used to verify be-
                           havior is correct (e.g., performance, throughput, reliability).
                3.1.3.     Characterize software behavior – in some (most?) cases, the require-
                           ments specification may be incomplete. In this case, testing serves to
                           characterize the behavior of the software under expected operating

                   conditions. Customer and developer review may be needed to deter-
                   mine whether the results are as expected or not.
3.2.   Principles for developing effective test cases:
       3.2.1.      Traceability – test cases should ultimately be traceable to customer requirements.
       3.2.2.      Planned – test planning should begin before testing actually starts.
                   Can start as soon as requirements model is complete. Tests should be
                   planned and designed before code has been generated.
       3.2.3.      “80-20 Rule” (Pareto principle) – “80% of trouble associated with
                   20% of code”. Identify suspect portions of software and focus efforts
                   on those. Software reliability engineering lecture will describe meth-
                   ods for doing so.
       3.2.4.      Progress from “testing in the small” to “testing in the large” – start
                   with individual software components (unit testing) to integrated sets of
                   components and then entire system. In other words, don’t do “big
                   bang” testing where you put everything together before testing for the
                   first time.
       3.2.5.      Exhaustive testing is not possible – for real world systems, the number
                   of possible paths through the system precludes testing each one. It is
                   possible, however, to “adequately” cover program logic and all component-
                   level conditions.
       3.2.6.      Most effective to have independent third party doing testing – has
                   highest likelihood of finding faults. Developers are too closely tied to
                   their products to test effectively.
3.3.   Testability – ideally, software is designed to be “testable” – how easily can a
       piece of software be tested? The following properties are characteristic of
       testable software:
       3.3.1.      Operability – the better the software works, the more efficient the test-
                   ing. Operable software has few faults, no faults blocking test execu-
                   tion, and evolves in functional stages (permits simultaneous develop-
                   ment and test – we did this on GALILEO flight software – developers
                   would be working on version x+1, testers would be testing version x).
       3.3.2.      Observability – Output is readily visible, easy to analyze
                   • Distinct output for each input
                   • Incorrect output easily identified
                   • Automatic detection and reporting of internal errors
                   • Source code accessible
       3.3.3.      Controllability – How well the execution of the system can be con-
                   trolled
                   • Can generate all possible outputs with some input combination
                   • All code can be executed with some input combination
                   • Direct control of SW and HW states
       3.3.4.      Decomposability – Controlling test scope leads to quicker fault isola-
                   tion and smarter testing
                   • Software built from independent modules
                   • Modules can be independently tested

              3.3.5.      Simplicity – Fewer things to test mean quicker testing
                          • Minimal feature set necessary to meet requirements – no gold plating
                          • Modular architecture – limits fault propagation
                          • Code simplicity – coding standards adopted to simplify inspection
                              and maintenance
              3.3.6.      Stability – Fewer changes to software mean fewer testing disruptions
                          • Infrequent changes
                          • Changes are controlled
              3.3.7.      Understandability – Better knowledge of system means smarter testing
                          • Well-understood design
                          • Design changes are communicated
                          • Accurate, specific and detailed, accessible, well-organized techni-
                              cal documentation
4.   Test Case Design – wide variety of testing techniques for SW. They can be divided into
     two categories
       4.1. Black Box Testing – based on knowing function that software is intended to per-
              form. Conduct tests to demonstrate that each function is present while simultane-
              ously looking for faults.
               • Tests conducted at software interfaces (internal and external)
               • Demonstrate required functionality is present
               • Demonstrate input properly accepted, output properly produced
               • Demonstrate integrity of external information is maintained (e.g., don’t erase
                   or modify files unless you’re supposed to!)
       4.2. White Box Testing – based on detailed knowledge of internal software workings
               • Test cases exercise specific sets of instructions and/or loops
               • Test results may include queries of internal component states
       4.3. Both approaches used in combination to test software systems
               • Black box testing used predominantly when testing integrated sets of compo-
                   nents – far too many paths in a real system to test each one using white-box
                   testing. Focuses on testing satisfaction of program requirements.
               • White-box testing used primarily at individual module level
               • White-box testing also used for testing:
                    o (Selected) important logical paths through system
                    o Validity of important data structures
5.   White Box Testing – uses knowledge of procedural design control structure to derive test
     cases
       5.1. White-box Test Coverage – White box test cases categorized as follows:
              5.1.1.      Path Coverage - Execution of all independent paths in software at least
                          once
              5.1.2.      Branch Coverage – Exercise all logical conditions on true and false
                          sides at least once
              5.1.3.      Execute loops at boundaries and within operational bounds – may also
                          want to execute them outside of operational bounds

                5.1.4.    Internal Data Structure Testing – look at details of internal data struc-
                          tures to ensure their validity.
       5.2.     Why do white box testing? If black-box testing focuses on testing program re-
                quirements, shouldn’t we just do that instead?

                No – nature of software faults indicates white-box testing should be employed:

                5.2.1.      Faults tend to show up in infrequently-used places. “Everyday”
                            processing is well-understood; special cases aren’t as well understood.
                            White-box testing can get to the paths and data structures implement-
                            ing those special cases more easily than black box testing.
                5.2.2.      Beliefs about control flow may not match reality – a path that we think
                            won’t be frequently executed may, in fact, be executed very often.
                            This belief may lead to design errors that won’t be uncovered until we
                            do (white-box) path testing.
                5.2.3.      Typos are random – We make typos when coding, and can make them
                            just as easily when coding some obscure path as a mainstream path.
                            Path testing will systematically exercise all paths to find these types of
                            faults.
6.   Basis Path Testing
      • May be best-known of white-box test methods
      • Devised by McCabe in 1976
      • Provides method for ensuring that each statement is executed at least once.
      • Test cases derived from flow graph representation of control
      6.1. Flowgraph Notation – in developing basis path test cases, procedural design rep-
              resentations (e.g., program design language, flow charts) are transformed to flow-
              graph representation (see figure below)

          (Figure: the five flowgraph constructs – Sequence, If, While, Until, Case)

                •    All structured programs are built from these five constructs.
                •    Each circle represents one or more non-branching PDL or source code state-
                     ments. (draw flow chart example on board – make sure one leg has at
                     least two processing elements)

                6.1.1.      Circles are nodes, links are edges, areas bounded by edges and nodes
                            are regions.

        6.1.2.    Nodes containing conditions are predicate nodes – a predicate node has
                  two or more edges radiating from it.
       6.1.3.    All edges must terminate at a node
       6.1.4.    Compound conditions handled by making multiple nodes (e.g., if A
                 and B is represented by two “if” structures)
6.2.   Cyclomatic Complexity
        6.2.1.    Defines number of independent paths through the flowgraph representa-
                  tion.
         Independent path – two paths are independent if one has at
                            least one processing element the other doesn’t have. Trivial
                            example - the two legs of the “if” structure above are inde-
                            pendent paths.

                            Note – for loops, “x+1” cycles through the “while” or “until”
                            structures wouldn’t be considered to be independent of “x”
                            cycles through the loop, so basis path testing doesn’t help us
                            for testing loops.

       6.2.2.    Defines upper bound for number of tests that must be executed to en-
                 sure all statements have been executed at least once.
       6.2.3.    Cyclomatic complexity computation
        V(G) = Number of regions in flowgraph where V(G) = cyc-
                            lomatic complexity. Example – for “case” structure above,
                            the number of regions is 3 – don’t forget that the area “out-
                            side” the structure also counts as a region.
        V(G) = E - N + 2 where E = number of edges in flowgraph,
                            and N = number of nodes
        V(G) = P + 1, where P is the number of predicate nodes. I
                            think that this is a typo in the text, since it obviously doesn’t
                            work for the “case” structure in the figure above. This
                            should be either:
                            • V(G) = P’ + 1, where P’ counts a new set of predicate nodes
                                obtained by transforming all of the ones having multiple
                                conditions into single-condition predicate nodes
                            • V(G) = P + 1 + total number of “excess” edges from all
                                predicate nodes, where an “excess edge” is the third,
                                fourth, … outflowing edge from a predicate node.
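The computations above can be checked with a short sketch. The following is illustrative code (not from the text) that applies V(G) = E - N + 2 and the corrected predicate-node formula to the three-way “case” flowgraph; the node numbering is arbitrary.

```python
# Sketch: cyclomatic complexity for the 3-way "case" flowgraph -- one
# predicate node (1) fanning out to three case nodes (2, 3, 4) that
# rejoin at a merge node (5).  Node numbers are arbitrary.
EDGES = [(1, 2), (1, 3), (1, 4), (2, 5), (3, 5), (4, 5)]

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flowgraph."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

def out_degree(edges, node):
    return sum(1 for src, _ in edges if src == node)

def predicate_formula(edges):
    """V(G) = P + 1 + "excess" out-edges: every out-edge beyond the
    second at a predicate node counts once (handles n-way cases)."""
    predicates = {src for src, _ in edges if out_degree(edges, src) >= 2}
    excess = sum(out_degree(edges, p) - 2 for p in predicates)
    return len(predicates) + 1 + excess
```

Both formulas give 3 for this graph, matching the region count for the “case” structure noted above.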
6.3.   Deriving Test Cases – now that we know what flowgraph representations and cyclo-
       matic complexity are, we can use them to derive test cases. Follow these steps:
       6.3.1.    Using the design or code as a foundation, draw a flowgraph. (Use the
                 following nonsensical pseudo-C example on the board):

                   void fake_function(bool b)
                   {
                            int i, j, k;

                            i = rand();
                            j = initialize(i, b);
                            if (b && (i < j)) {      // Divides into two predicate nodes
                                     i = yah(j);
                                     j = hoo(i);
                                     for (k = i; k < LIMIT; k++) {
                                              this(i, j);
                                              that(i, j);
                                              other(i, j);
                                     }
                                     j = boo(i);
                                     i = hoo(j);
                                     rats(i, j);
                            }
                            cleanup(i, b);
                   }
              6.3.2.     Calculate the cyclomatic complexity of the flowgraph – calculate V(G)
                         of the example of above
              6.3.3.     Determine a set of basis paths
               6.3.4.     Prepare test cases that will force execution along each of the basis
                          paths
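As a sketch of steps 6.3.1–6.3.2, the flowgraph for fake_function can be encoded as an edge list and its cyclomatic complexity computed. This assumes the indentation implies that the block under the “if” extends through rats(); node numbering is arbitrary.

```python
# Sketch: flowgraph for the fake_function example, assuming indentation
# implies the block structure.  Nodes:
#   1: i = rand(); j = initialize(...)   2: if (b)
#   3: if (i < j)                        4: yah/hoo
#   5: for-loop test (k < LIMIT)         6: loop body
#   7: boo/hoo/rats                      8: cleanup
EDGES = [(1, 2),
         (2, 3), (2, 8),      # b true / false
         (3, 4), (3, 8),      # (i < j) true / false
         (4, 5),
         (5, 6), (6, 5),      # loop body and back-edge
         (5, 7),              # loop exit
         (7, 8)]

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flowgraph."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2
```

V(G) comes out to 4 (three predicate nodes plus one), so four basis paths are needed.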
7.   Control Structure Testing
      7.1. Condition Testing – exercises conditions in a module.
              • Simple condition – E1 <relational operator> E2
              • Compound condition – Condition-1 <boolean operator> Condition-2
                     o Condition-1 and condition-2 may be simple or complex
               7.1.1.     Goal – find faults in conditions within a module. Condition testing is
                          also likely to be effective for finding other types of faults.
              7.1.2.     Condition testing strategies
                Branch testing – execute true and false branch of every con-
                                   dition at least once
                                   • Compound conditions – need to test true and false condi-
                                       tions of compound condition as well as every simple con-
                                       dition within compound condition.
                Domain testing – extension of branch testing. Requires three
                                   or four tests to be derived for each relational expression

                                  Example: a+b < c+d

                                  •  Set a+b < c+d, a+b = c+d, a+b > c+d to test for er-
                                     rors in relational operator (did we choose the wrong
                                     operator?). Assumes expressions are correct.
                                 • To detect errors in expressions: make difference be-
                                     tween a+b and c+d as small as possible
                                  • For a Boolean expression with n variables, 2^n tests
                                      will be needed. Practical only for small n.
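A minimal sketch of picking domain-testing points for a+b < c+d (the concrete values are illustrative; epsilon stands in for “as small as practical”):

```python
# Sketch: domain-testing points for the condition  a+b < c+d.
# Given baseline values for c and d, choose values of (a + b) just
# below, equal to, and just above c+d to catch a wrong relational
# operator; keep epsilon small to catch expression errors as well.
def domain_test_points(c, d, epsilon=1):
    target = c + d
    # Each entry: a value for (a + b) and the relation it exercises.
    return [(target - epsilon, "<"),
            (target, "="),
            (target + epsilon, ">")]
```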
         BRO (branch and relational operator) testing. Guarantees de-
                           tection of branch and relational errors in a condition provided
                           all Boolean variables and relational operators in condition
                           occur only once and have no common variables
                          • BRO uses condition constraints – the i’th constraint in the
                             set of constraints for the condition specifies a constraint on
                             the outcome of the i’th simple condition


                              C1: bool_1 & bool_2

                              •   Condition constraint is (D1, D2) where D1 and D2 may
                                  each be TRUE or FALSE.
                              •   Suppose that for an execution (test case), the condition
                                  constraint (FALSE, TRUE) is covered by test for which
                                  bool_1 = FALSE and bool_2 = TRUE. BRO requires
                                   that we cover the constraint set {(TRUE, TRUE),
                                   (TRUE, FALSE), (FALSE, TRUE)}. If there are Boo-
                                   lean operator errors, this constraint set will cause C1 to
                                   fail.

                              C2: bool_1 && (a < b)

                              •   Substitute (a < b) for bool_2
                              •   Condition constraint (FALSE, <) is covered by test for
                                  which bool_1 = FALSE and a < b. BRO requires that
                                  we cover the constraint set {(TRUE, <), (TRUE, =),
                                  (TRUE, >), (FALSE, <)}. If there are Boolean or rela-
                                  tional operator errors, this constraint set will cause C1
                                  to fail.
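The C2 constraint set can be sketched as follows; the concrete test inputs are hypothetical, chosen only to cover the four required constraints:

```python
# Sketch: BRO constraint coverage for C2: bool_1 && (a < b).
# Each test maps to the pair (truth of bool_1, relation of a to b);
# BRO requires covering {(True,'<'), (True,'='), (True,'>'), (False,'<')}.
def constraint(bool_1, a, b):
    rel = "<" if a < b else ("=" if a == b else ">")
    return (bool_1, rel)

# Hypothetical concrete inputs covering the BRO constraint set:
BRO_TESTS = [(True, 1, 2),    # -> (True, '<')
             (True, 2, 2),    # -> (True, '=')
             (True, 3, 2),    # -> (True, '>')
             (False, 1, 2)]   # -> (False, '<')

covered = {constraint(*t) for t in BRO_TESTS}
```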

7.2.   Data Flow Testing
       7.2.1.    Defines test paths according to locations of variable definitions (e.g. i
                 = 1) and uses (e.g., j = i).
       7.2.2.    Start by considering variable definitions and uses for statement S
                 within a module:
                 • DEF(S) = {X | statement S contains a definition of X}
                 • USE(S) = {X | statement S contains a use of X}

                  •  DEF(S) for variable X is live at statement S’ if there’s path from S
                     to S’ that contains no other definitions of X.
       7.2.3.     Construct DU chains – DU = [X, S, S’]
                  • S and S’ are statement numbers
                  • X is in DEF(S) and USE(S’)
                  • Definition of X in S is live at S’
        7.2.4.     DU testing strategy – require that every DU chain be covered at least
                   once
                  • For each variable, find all of the statements in which it’s defined.
                  • For each definition of variable X, find its DU chains.
                  • Repeat for all variables

                   DU testing doesn’t guarantee that all branches of a program are cov-
                   ered. Branches are left uncovered only in rare cases:
                  • If-then-else construct for which “then” has no variable definitions
                      and “else” part doesn’t exist. The “else” branch of the “if” state-
                      ment is not necessarily covered in this case.

                  DU testing has been shown to be effective for detecting faults. More
                  difficult to measure test coverage and select test paths than for condi-
                  tion testing.
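A sketch of DU-chain construction for a made-up four-statement straight-line fragment (DEF/USE sets written out by hand; with no branches, liveness reduces to “no intervening redefinition”):

```python
# Sketch: DEF/USE sets and DU chains for a tiny straight-line program.
STATEMENTS = {              # stmt number: (DEF set, USE set)
    1: ({"i"}, set()),      # i = 1
    2: ({"j"}, {"i"}),      # j = i
    3: ({"i"}, {"j"}),      # i = j + 1
    4: (set(), {"i", "j"})  # print(i, j)
}

def du_chains(statements):
    """Return [X, S, S'] for each definition of X at S that is live at
    a use at S'.  Assumes straight-line flow, so "live" just means no
    redefinition of X strictly between S and S'."""
    chains = []
    stmts = sorted(statements)
    for s in stmts:
        for x in statements[s][0]:
            for s2 in stmts:
                if s2 <= s:
                    continue
                if x in statements[s2][1]:   # use of X at S'
                    chains.append([x, s, s2])
                if x in statements[s2][0]:   # X redefined: chain ends
                    break
    return chains
```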
7.3.   Loop Testing – focuses exclusively on validity of loop constructs in programs.
       There are four types of loops to be considered – simple, nested, concatenated, and
       unstructured (draw these on the board).
       7.3.1.     Simple Loops – test the following conditions:
         Skip the loop entirely
         Make one pass through the loop
         Make two passes through the loop
         Make less than the maximum number of passes through loop
         Make n-1, n, and n+1 passes through the loop – n = maxi-
                             mum number of passes through loop
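The simple-loop conditions above can be sketched as a test-count generator (n = maximum number of passes; “typical” is any in-bounds operational value):

```python
# Sketch: iteration counts for simple-loop testing: skip, one pass,
# two passes, a typical in-bounds count, and n-1 / n / n+1 passes.
def simple_loop_pass_counts(n, typical):
    counts = [0, 1, 2, typical, n - 1, n, n + 1]
    # Deduplicate while preserving order (small n collapses some cases).
    seen, result = set(), []
    for c in counts:
        if c not in seen:
            seen.add(c)
            result.append(c)
    return result
```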
       7.3.2.     Nested Loops
         Start at innermost loop – set all others to minimum values
         Conduct simple loop tests for innermost loop – hold all other
                             loops at their minimum value
         Work outward – keep outer loops at minimum values, nested
                             loops at “typical” values
         Continue until all loops have been tested
       7.3.3.     Concatenated Loops
         If they’re independent (separate loop counters, loops follow
                             each other sequentially), use approach for simple loops for
                             each loop
         If the loop counter for loop_2 uses counter for loop_1 as its
                             initial value, use the approach for nested loops.
       7.3.4.     Unstructured Loops – best advice is to redesign it so it becomes a
                  combination of the first three.

8.   Black-Box Testing

      8.1.   Overview
             8.1.1.     Goal – fully exercise all functional requirements for a program
              8.1.2.     Complementary to white-box testing – uncovers different classes of
                         faults:
              Incorrect or missing functionality
              Interface faults
              Faults in data structures, external database access
              Performance or other behavior faults
              Initialization and termination faults
             8.1.3.     Applied during later testing stages
             8.1.4.     Focuses attention on information, rather than control, domain. An-
                        swers following questions:
              Does the system exhibit all required functionality?
               Does the system behavior and performance meet require-
                                    ments?
              Is the system sensitive to certain input values?
              What data rates, data volume can the system tolerate?
              What will be the effect of specific combinations of input on
                                   the system’s operation?
      8.2.   Graph-Based Testing – models system under test as a directed graph of interacting
             objects. Attributes of objects and links between objects determine test cases
             8.2.1.     Graphical Representation – objects linked by edges
                           o Nodes – data objects we’ve previously discussed, as well as mod-
                               ules or other program objects
                           o Links – represent relationships between objects
                                    ▪   May be directed or bi-directional (symmetric)
                                    ▪   Parallel links used when a number of different relationships
                                        are established between nodes.
                          o Nodes and links have weights assigned to them.
                                    ▪   Node weights describe specific properties of a node (e.g.,
                                        data value [background screen color, default desktop
                                        wallpaper], state behavior)
                                    ▪   Link weights describe characteristics of a link. For in-
                                        stance, one object (menu selection) generates another one
                                        (Science Data Record) – “generates” is the link, and
                                        “generates a minimum of 100 per second” could be the
                                        link weight.
             8.2.2.     Graph-based testing steps
              Identify all nodes, node weights. Can use data model as a
                                   starting point, but nodes may also be program objects. Make
                                   sure to define entry and exit nodes.
              Identify links, link weights. Links should be named, but
                                   those representing control flow between objects don’t have to
                                   be named.

         Study each relationship separately to derive test cases.
                            ▪  Study transitivity to see how impact of relationships
                               propagates across objects.

                                Example: A required to produce B, B required to produce
                                C establishes A required to produce C.

                                Testing the production of C must consider values for both
                                A and B.

                             ▪  For symmetric links, the symmetry must be tested (e.g.,
                                exercise the relationship in both directions).
        Node coverage – first objective – make sure all nodes have
                           been included and that all node attributes are correct
        Link coverage – second objective – test each relationship
                           based on its properties (i.e., symmetric/directed, transitivity,
                           link weight). If links have loops, apply loop testing methods
                           previously discussed.
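Node and link coverage can be sketched by recording each test case’s path through the object graph. The graph here (menu/dialog/record nodes) is hypothetical:

```python
# Sketch: node and link coverage for graph-based testing.  Each test
# case is recorded as the path (node sequence) it drove through the
# object graph; node names are illustrative.
GRAPH_NODES = {"menu", "dialog", "record", "exit"}
GRAPH_LINKS = {("menu", "dialog"), ("dialog", "record"),
               ("record", "exit"), ("menu", "exit")}

def coverage(test_paths):
    """Return (uncovered nodes, uncovered links) after the test paths."""
    nodes, links = set(), set()
    for path in test_paths:
        nodes.update(path)
        links.update(zip(path, path[1:]))
    return (GRAPH_NODES - nodes, GRAPH_LINKS - links)
```

A single path through menu → dialog → record → exit covers every node but leaves the direct menu → exit link untested, so at least one more test case is needed for full link coverage.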
8.3.   Equivalence Partitioning
       8.3.1.    Based on dividing input space into equivalence classes. Equivalence
                 class represents a set of valid or invalid states for input conditions.
        8.3.2.    Equivalence classes should be known relatively early in the design, so
                  the planning for this type of testing can begin as the design is being
                  developed.
        8.3.3.    Guidelines for defining equivalence classes.
        Input condition specifies a range – one valid, two invalid
                           equivalence classes.
        Input condition specifies a value – one valid, two invalid
                           equivalence classes.
        Input condition specifies a member of a set – one valid, one
                           invalid equivalence class.
        Input condition is Boolean – one valid, one invalid equiva-
                           lence class.
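For the “range” guideline, a sketch (the bounds 1..999 used below are illustrative, assuming an integer input):

```python
# Sketch: equivalence classes for an input condition that specifies a
# range -- one valid class (in range) and two invalid classes (below
# and above the range), per the guideline above.
def range_equivalence_classes(low, high):
    return {
        "valid": (low, high),                 # in-range values
        "invalid_below": ("-inf", low - 1),   # below the range
        "invalid_above": (high + 1, "inf"),   # above the range
    }
```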
8.4.   Boundary Value Analysis (BVA)
       8.4.1.    Greater number of faults tend to occur at boundaries of input domain –
                 this method complements equivalence partitioning by selecting test
                 cases that exercise these boundaries. This method also develops test
                 cases from the output domain.
       8.4.2.    Guidelines for developing boundary value analysis test cases.
        Input is a range – design test cases to test input values just
                           above and below each range boundary.
        Input specifies a number of values – develop test cases exer-
                           cising minimum and maximum values. Also can use values
                           just above and below minimum and max values.
        Apply preceding two guidelines to output conditions.

        For internal data structures that have defined boundaries (e.g.,
                          bounded array), create test cases to exercise structures at their
                          boundaries (e.g., reference next to last array element, last ar-
                          ray element, just after last array element).
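The range guideline above can be sketched as a boundary-value generator (step = smallest meaningful increment; an integer input is assumed):

```python
# Sketch: boundary-value test inputs for a range input -- values at,
# just below, and just above each boundary, complementing the
# equivalence classes chosen earlier.
def boundary_values(low, high, step=1):
    return sorted({low - step, low, low + step,
                   high - step, high, high + step})
```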
8.5.   Comparison Testing
       8.5.1.   Developed for testing critical systems that must have high reliability
                (e.g., commercial avionics, medical device controllers). Related to
                idea of N-version programming.
       8.5.2.   Testing steps
       Develop specification
       Give specification to N development teams
       Each team develops separate version of SW
       Test team uses one set of test data to execute different ver-
                          sions in parallel – results are automatically compared
       If outputs from each version are the same, it is likely that all
                          implementations are correct
       If outputs differ, then one or more implementations has one
                          or more faults, and fault identification is required.

            Even though N versions are developed, only one is fielded.

            Even though versions may produce same results, it may be that:
            • The specification is incorrect
            • Each of the N versions has the same fault (correlated faults)
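The comparison-testing steps can be sketched with hypothetical stand-in “versions” of a squaring specification; one version is deliberately faulty for negative inputs:

```python
# Sketch: comparison-testing harness.  The "versions" are hypothetical
# independent implementations of the same specification (square x).
def version_a(x):
    return x * x

def version_b(x):
    return x ** 2

def version_c(x):
    return sum(x for _ in range(x))   # faulty for negative x

def compare_versions(versions, test_data):
    """Run every version on every input; report inputs where outputs
    disagree (a disagreement means at least one version is faulty)."""
    disagreements = []
    for x in test_data:
        outputs = [v(x) for v in versions]
        if len(set(outputs)) > 1:
            disagreements.append(x)
    return disagreements
```

Note that, per the caveats above, agreement on all inputs does not prove correctness: the versions may share a fault, or the specification itself may be wrong.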
8.6.   Orthogonal Array Testing
       8.6.1.    Can be applied to problems for which input domain is relatively small,
                 but too large to accommodate exhaustive testing (i.e., every possible
                 permutation of input domain). Basic idea is creating a Latin Square
                 experimental design. For example, a system having 3 parameters (A,
                 B, C) with 4 values (1, 2, 3, 4) would have the following test cases:

                        Test Case      A     B     C
                            1          1     1     1
                            2          1     2     2
                            3          1     3     3
                            4          1     4     4
                            5          2     1     4
                            6          2     2     1
                            7          2     3     2
                            8          2     4     3
                            9          3     1     3
                           10          3     2     4
                           11          3     3     1
                           12          3     4     2
                           13          4     1     2
                           14          4     2     3
                           15          4     3     4
                           16          4     4     1

                       (No two pairs of values appear more than once in the array.)

                8.6.2.       Provides good test coverage with far fewer test cases than exhaustive
                             testing would require. For the above example, exhaustive testing
                             would need 4^3 = 64 test cases.
                8.6.3.      Example shown above identifies single mode faults (i.e., fault is asso-
                            ciated with only one parameter) as well as double mode faults (interac-
                            tions between two parameters). Orthogonal arrays can also be devised
                            to find multi-mode faults.
                8.6.4.      Supposes that functionality of system can be parameterized, and that
                            parameters have ranges of values or values are discrete elements of a
                            finite set.
                8.6.5.      Commercial tools for creating orthogonal test arrays are available –
                            AETG by Telcordia is one of these.
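The 16-case array above can be generated with a simple Latin-square rule. This is one of several possible constructions, chosen here because it reproduces the table:

```python
# Sketch: the 16-case orthogonal array for 3 parameters (A, B, C),
# each with values 1..4.  C is derived from A and B by a Latin-square
# rule so that every pair of values for any two parameters appears
# exactly once across the 16 cases.
def orthogonal_array(levels=4):
    cases = []
    for a in range(1, levels + 1):
        for b in range(1, levels + 1):
            c = ((b - a) % levels) + 1
            cases.append((a, b, c))
    return cases
```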
9.    Testing for Specialized Environments
        9.1. Documentation – user documentation (e.g., user's guides, help system) must be
                tested as well as the software. Both review/inspection and "live test" can be
                used to test documentation. Review and inspection look at editorial clarity;
                live test uses the documentation in conjunction with the actual system.
       9.2. Documentation test should answer the following questions
                9.2.1.      Does the documentation accurately describe how to use the software?
                9.2.2.      Is the description of each interaction sequence accurate?
                9.2.3.      Are examples accurate?
                9.2.4.      Are terminology, menu descriptions, and system responses consistent
                            with the actual software?
                9.2.5.      Is it easy to locate guidance in the documentation?
                9.2.6.      Can troubleshooting be done using the documentation?
                9.2.7.      Are the document table of contents and index accurate and complete?
                9.2.8.      Does the document design (layout, graphics, typeface, spacing) make
                            it easy to understand and assimilate information?
                 9.2.9.      Are error messages displayed by the software elaborated in the docu-
                             ment? Does the documentation say what to do in response to an error?
                9.2.10.     If used, are hypertext links accurate and complete?
                9.2.11.     If hypertext is used, is the navigation design appropriate for the con-
                            tent displayed and required?
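
       One small piece of the hypertext questions above can be automated. The sketch
       below (the example page is invented; a real documentation test would also check
       links between files and to external URLs) collects link targets and anchor ids
       from an HTML page and reports in-page links that point at no anchor:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collects every href target and every element id so that internal
    links can be checked for dangling targets."""
    def __init__(self):
        super().__init__()
        self.targets = []    # where links point
        self.anchors = set() # ids that links may point to

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.targets.append(attrs["href"])
        if "id" in attrs:
            self.anchors.add(attrs["id"])

def dangling_links(html):
    """Return in-page fragment links ('#...') with no matching anchor."""
    auditor = LinkAuditor()
    auditor.feed(html)
    return [t for t in auditor.targets
            if t.startswith("#") and t[1:] not in auditor.anchors]

page = ('<h1 id="install">Install</h1>'
        '<a href="#install">ok</a><a href="#setup">broken</a>')
print(dangling_links(page))  # ['#setup']
```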
10.   Testing Strategy – Introduction and Overview
       10.1. Since testing can be planned ahead of time (in parallel with specification and de-
               sign), a strategy should be devised. Strategy identifies:
               • Which test case design techniques and testing methods to use
               • When to use them
       10.2. All test strategies have the following characteristics

               10.2.1.     Test from inside out – start at component (e.g., module) level and
                           work toward integrating entire system. Test strategy has to accommo-
                           date low-level and high-level testing.
               10.2.2.     Different testing techniques are appropriate at different times
                           • Testing at lowest level uses white box testing – black box testing is
                               not appropriate here.
                           • Black box testing used to test entire system – white box testing is
                               largely impractical here.
               10.2.3.     Testing is conducted both by developers and an independent test group
                           (for large systems)
               10.2.4.     Debugging must be part of strategy. Testing is not debugging, and
                           debugging is not testing, but debugging must be accommodated.
       10.3. A test strategy has the following goals
               10.3.1.     Provide guidance for developers – “what am I supposed to do now?”
               10.3.2.     Provide set of milestones for managers – “how much progress have we
                           made? Are we there yet? How far do we have to go?”
11.   Relationship of Testing, Verification and Validation (V&V) – how are they related?
       11.1. V&V defined
                • Verification – are we building the product right? Ensures that software
                    correctly implements a given functionality. If we're building a database
                    management system, are we building the indexing component so that it
                    operates correctly?
               • Validation – did we build the right product? Makes sure that what we’ve built
                   is traceable back to customer’s requirements. Did the customer really want a
                   database system, or were they asking us to build a system to control a chemi-
                   cal plant?
       11.2. Testing is one specific V&V activity – others include
                • Formal technical reviews of requirements, design, and code
               • Quality and configuration audits
               • Documentation review

            Testing provides final opportunities to find and remove faults, but don’t rely on test-
            ing alone to find faults – other V&V activities are important

       11.3. Does testing improve quality?
               11.3.1.    Testing does find faults, and removing faults does improve software
                          quality, but:
                11.3.2.    Quality is built into a system, not tested into it – the completeness
                           and accuracy of the specification, the system design, and the
                           soundness of management determine quality.
               11.3.3.    Testing confirms quality rather than putting it into software.
12.   Organization - Roles of Software Developers, Software Testers
       12.1. Developers and testing organization work together to test software
               12.1.1.    Although developers have knowledge of how system works, they
                          shouldn’t be responsible for testing of complete system:

                           •    They might have blind spots – “it always works that way – you just
                                have to know how to get around it”
                           • Conflict of interest - Since they built the system, developers have
                                an interest in demonstrating that it works. Demonstrating that a
                                system works is only one goal of testing.
     12.2. Common Misconceptions
             12.2.1.       “Developers shouldn’t do any testing”
              12.2.2.       “Throw the software over the wall to complete strangers to do the testing”
     12.3. Responsibilities of developers, testers
             12.3.1.       Developers always test individual modules (unit test) – make sure each
                           unit performs its function correctly
             12.3.2.       Developers may be responsible for at least some of the integration test-
                           ing – building some or all of the system from its components.
             12.3.3.       Independent testing group (ITG) removes conflict of interest, blind
                           spots that might arise if developers alone did testing
             12.3.4.       Developers and ITG work together – developers don’t throw software
                           over the wall to be tested.
                           • Cooperate during test planning – ITG must know about what it’s
                                going to test
                           • Developers help in confirming test results analyses made by ITG –
                                ITG may be uncertain whether a particular set of test results indi-
                                cates a failure or not – developers may confirm this
                           • Developers find and remove faults during test
             12.3.5.       ITG is part of software development team (participates in development
                           of a system starting at specification stage). May be part of develop-
                           ment organization, or may be part of different organization than devel-
                           opers (e.g., SQA organization).
                 Developers and ITG may switch places from time to time
                                     • GALILEO flight software was delivered incrementally;
                                         testers for version X of the software would become
                                         developers of version X+1.
                                        o Preserved some independence
                                        o Made job more interesting (each person got to have two
                                            roles instead of one, learned new and interesting skills).
13. Generalized Testing Strategy
     13.1. Testing goes from bottom to top (or from inside to outside) – draw inverted water-
             fall to illustrate
             13.1.1.       Unit test – make sure each component (module) correctly implements
                           its function
             13.1.2.       Integration test – focuses on design and construction of software archi-
                           tecture. Have the pieces been put together correctly?
              13.1.3.       Validation testing – makes sure that the system satisfies requirements
                            traceable to the customer. The customer often participates in
                            developing validation test cases.
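
       As a minimal illustration of the unit-test level, the sketch below tests one
       hypothetical component function in isolation with Python's unittest (the
       function and its test cases are invented for illustration):

```python
import unittest

def interest(principal, rate, years):
    """Hypothetical component under test: compound interest."""
    if principal < 0 or rate < 0 or years < 0:
        raise ValueError("arguments must be non-negative")
    return principal * (1 + rate) ** years

class InterestUnitTest(unittest.TestCase):
    """Unit test: exercises one component's function in isolation,
    including its error handling, before any integration testing."""
    def test_nominal(self):
        self.assertAlmostEqual(interest(100, 0.05, 2), 110.25)

    def test_zero_years(self):
        self.assertEqual(interest(100, 0.05, 0), 100)

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            interest(-1, 0.05, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(InterestUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

       Note that the test checks both nominal behavior and error handling – making
       sure the unit "performs its function correctly" includes rejecting bad input.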
14. Completion – knowing when to stop – several points of view

       14.1. “Testing never stops” – takes (extreme) point of view that when customer oper-
             ates software, they’re also testing it. Developers have simply passed the testing
             burden on to the customer. Underscores importance of software quality assurance
             activities prior to delivering system to customer.
       14.2. “We’re done when we run out of money/time”. Several variations on this one –
             probably guides more software releases than we’d like to know about. Software
             publishers are driven by the need to get to market before their competitors –
             “market window” established by management is often inflexible, and products are
             rushed out the door that are of poor quality.
       14.3. More rigorous criteria – want to release based on measurable characteristics of the
             software, particularly characteristics that are important to the user. Software reli-
             ability modeling can help
             14.3.1.     Software reliability models – statistical models that can estimate and
                         forecast the reliability/failure intensity of a software system during
                         test. Can help answer these questions:
                         • What is the current value of software reliability/failure intensity?
                              Has the required reliability been reached?
                          • How much more effort/time is required to achieve the required
                               reliability?
                         • If there aren’t enough resources available to achieve the required
                              reliability, how far away from the required reliability will we be?
             14.3.2.     Can stop when reliability has reached a required level
             14.3.3.     Can work to minimize cost over expected lifetime of system

                          [Figure: Total Cost, Cost of Failure, and Testing Cost plotted
                           against failure intensity]
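
       The notes don't name a specific reliability model; as one example, the widely
       used Goel-Okumoto model takes the expected number of failures by test time t
       to be mu(t) = a(1 - e^(-bt)), so the failure intensity is lambda(t) = a·b·e^(-bt).
       The sketch below (the parameter values a and b are hypothetical; in practice
       they are estimated from observed failure data) answers two of the questions
       above, the current intensity and the test time needed to reach a target:

```python
import math

def failure_intensity(t, a, b):
    """Goel-Okumoto model: mu(t) = a*(1 - exp(-b*t)) expected failures,
    so the failure intensity is its derivative, lambda(t) = a*b*exp(-b*t)."""
    return a * b * math.exp(-b * t)

def time_to_reach(target, a, b):
    """Test time at which failure intensity drops to `target`
    (solve a*b*exp(-b*t) = target for t)."""
    return math.log(a * b / target) / b

# Hypothetical parameters: a = 100 total expected faults,
# b = 0.05 per-fault detection rate (per hour of test).
a, b = 100.0, 0.05
print(failure_intensity(0, a, b))   # initial intensity a*b = 5.0 failures/hour
print(time_to_reach(0.5, a, b))     # hours of test to reach 0.5 failures/hour
```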

15.   Strategic Issues – successful testing strategy must address following issues
         15.1. Specify requirements in a quantifiable manner. To know with certainty
                 whether something works correctly, you have to be able to measure it.

15.2. State testing objectives explicitly – e.g., reliability, number of residual faults, time
      to next failure, test coverage target.
15.3. Know how users will use software – develop testing profiles for each type of user.
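
A testing profile can be made concrete as a table of operation probabilities per user
type (an operational profile). In the sketch below, the user types, operations, and
probabilities are all invented for illustration; test operations are then selected in
proportion to how often each user type actually performs them:

```python
import random

# Hypothetical operational profile: estimated probability that each
# user type invokes each operation.  Each row sums to 1.0.
PROFILES = {
    "clerk":   {"enter_order": 0.70, "query_status": 0.25, "run_report": 0.05},
    "manager": {"enter_order": 0.10, "query_status": 0.30, "run_report": 0.60},
}

def pick_operations(user_type, n, seed=0):
    """Select n operations to test, weighted by this user type's profile.
    A fixed seed keeps the test run reproducible."""
    profile = PROFILES[user_type]
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[o] for o in ops]
    return rng.choices(ops, weights=weights, k=n)

print(pick_operations("clerk", 5))
```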
15.4. Use formal technical reviews prior to test – for some types of faults, formal tech-
      nical reviews are more effective than testing. Formal technical reviews reduce the
      testing load by removing some of the faults.
15.5. Use formal technical reviews on the test strategy – make sure the strategy is com-
      plete, correctly reflects the system to be tested, is consistent (both internally and
      with other software artifacts), uses the right testing approach at the right time.
15.6. Make continuous improvement part of the testing approach – make sure that
       measurements are taken so that the testing process can be improved. For instance,
       measure test coverage, the number of faults identified, the mean time that failure
       reports remain open, and the time required to repair a fault.

