Goal Driven Approach to Testing

          Software verification, validation
            and testing

             March, 2001
          Course Objectives

 How do we measure software quality?
 How do we measure software robustness?
 How do we measure confidence in software?
 What is testing? How do we do it?
 What are the classical approaches to
  testing software?
 What is special about testing OO software?
                                          2
                   Resources
   Lecture
     presentations by me
     discussion

 quizzes, small project or paper
 WWW
 research papers




                                    3
               Textbook
There isn’t a good textbook available.
We’ll take material wherever we can get
it, including course notes & papers




                                          4
                    Syllabus
 What is software testing
 approaches to formalizing testing
    approaches to testing procedural or classic
     "action-oriented" software
    approaches to testing object-oriented software

 coverage
 metrics -- to measure software quality


                                                 5
What is Object-Oriented
   Programming?
    Do we really know
       what it is?


                          6
    Things you "hear" about OO:
    a grain of truth in all but one
 It's a "new way" of looking at things
 It's a paradigm shift
 You have methods instead of functions
 you pass messages
 If you've used the "procedural approach"
  then you can't transition to OO
 You use objects, whatever they are!
 You use polymorphism, whatever that is!
                                         7
          Let's compare OO
          to the "old ways"
 Previous approach stressed algorithms.
 Usually referred to as the "procedure-
  oriented" approach
 I like A. Riel's term: the "Action-Oriented"
  approach




                                             8
    Action-Oriented Programming
 Stresses “getting the algorithm correct”
 uses “functional decomposition”
 Has a “main” calling routine that
  choreographs the activity, calling
  functions to implement the application




                                             9
    Object-Oriented Programming
 Stresses the data
 Decompose the data, together with the
  functions that operate on that data
 Put the data and operations into a “class”
 An instance of the “class” is an object
 Each “object” maintains its own data and
  operations: must initialize and clean up!
 Decentralize control
                                          10
     Let's consider an example
 Let's compare the worst-case action-
  oriented with the best-case object-
  oriented
 Then, we'll consider the case when the
  action-oriented goes right!
 And, the object-oriented goes wrong!




                                           11
               A typical
        action-oriented topology

  [Figure: five functions f1()-f5() linked by an unstructured web of calls and shared data]
                                        12
 Consider a change to the data,
          marked X

  [Figure: same topology; after a change to the data marked X, functions f1() and f2()
   must be modified]
                                    13
 However, suppose you or another
 developer added f6() (and forgot you did it!)

  [Figure: the same topology with f6() also operating on the shared data]
                                              14
 However, suppose you or another
 developer added f6() (and forgot you did it!)

  [Figure: same topology; you compile, link and execute -- and it doesn't work!]
                                               15
How does the OO approach
 control this complexity?

  [Figure: two objects, D1 and D2; each encapsulates its own data, D1 with operations
   f1()-f3() and D2 with f4()-f6(); the objects interact only through calls between
   their interfaces]
                                  16
        Decompose the data

 Data is encapsulated in a class (or object)
 a well-defined public interface permits
  operations on the data




                                          17
    When action-oriented goes right
 Many people build systems using files, with
  each data structure in a separate file
 the functions that operate on data are placed
  in the same file
 some languages contain primitives to restrict
  operations within/across files (e.g. static,
  extern)
 here, AO takes on attributes of OO approach

                                            18
      How can OO go wrong
 The god class: performs most of the
  work, leaves minor details to other
  classes
 proliferation of classes: too many
  classes for the size of the problem




                                        19
        Class design heuristics
 All data is hidden in a class
 implement a minimum public interface that
  all classes understand
 do not put implementation details in the
  public interface
 A class should capture one, and only one,
  key abstraction
 Keep related data and behavior in one place
                                          20
        An important aspect of OO design is to
      determine the relationships between classes

  [Figure: class diagram for a library system -- Person (specialized by StudentPerson,
   StaffPerson and ClerkPerson), LibrarySystem, LibraryItem (specialized by JournalItem
   and BookItem), and CommandProcessor]
                                                                                21
Introduction to Testing




                          22
          First: a riddle about testing
                by Brian Marick
    A mathematician, a physicist, and an engineer are
     told: "All odd numbers are prime."
       The mathematician says, "That's silly; nine is a
        non-prime odd number."
       The physicist says, "Let's see: 3 is prime, 5 is
        prime, 7 is prime -- looks like it's true."
       The engineer says, "Let's see: 3 is prime, 5 is
        prime, 7 is prime, 9 is prime, 11 is prime -- looks
        like it's true."

                                                      23
               Software testing
    Historically, testing was not popular:
     with managers
     with testers
     with developers
     with students

   testing and many software innovations
    evolved out of the “software crisis”

                                            24
       Software Failure rate (ideal)

  [Figure: failure rate vs. time; the idealized curve decreases steadily and levels off]
                                      25
     Software Failure rate (real)

  [Figure: failure rate vs. time; each change produces a spike -- an increased failure
   rate due to side effects -- so the actual curve drifts upward away from the
   idealized curve]
                                                       26
    The Cost of Change

  [Figure: relative cost of a change -- 1x during definition, 1.5-6x during
   development, 60-100x after release]
                                           27
    An error found after release costs
         four times (W. Perry)
 1st cost: developing program erroneously
 2nd cost: system has to be tested to
  detect the error
 3rd cost: wrong specs/code removed,
  correct specs/code added
 4th cost: system must be retested!



                                         28
       The “software crisis”
 By the 1980s, "quality" in software
  became a goal; the SEI was born
 "software engineering" became popular
 the life cycle was studied
 software developers and testers began
  to work together
 by the 1990s, testing tools became
  available
                                          29
         What is software testing
    "The process of executing computer software
     in order to determine whether the results it
     produces are correct", Glass '79
    "The process of executing a program with the
     intent of finding errors", Myers '79
    "Program testing can be used to show the
     presence of bugs, but never their absence",
     Dijkstra '72


                                                   30
     What is software testing (cont)
 "The aim is not to discover errors but to
  provide convincing evidence that there
  are none, or to show that particular
  classes of faults are not present",
  Hennell '84
 "Testing is the measure of software
  quality", Hetzel '85


                                              31
    What is software testing (cont)
   “The process of operating a system or
    component under specified conditions,
    observing or recording the results, and
    making an evaluation of some aspect of
    the system or component.”
     IEEE/ANSI, 1990



                                          32
     Testing is a state of mind
 “If our goal is to show the absence of
  errors, we will find very few of them”
 “If our goal is to show the presence of
  errors, we will discover a large number
  of them”




                                            33
       Time spent on testing

 50%      Brooks/Myers, 1970s
 80%      Arthur Andersons‟ Director of
  testing in North America, 1990s




                                           34
      Tester-to-developer ratios

 1:5-10 Mainframes
  i.e., 1 tester for every 5 to 10 developers
 2:3 Microsoft, 1992
 2:1 Lotus (for 1-2-3 for Windows)
 1:2 Average of 4 major companies, 1992:
         Microsoft, Borland, WordPerfect,
         Novell

                                           35
    Difficulties in testing software
 poorly expressed requirements
 informal design techniques
 nothing executable until coding stage
 Huge input set: consider testing
  software that categorises an exam
  grade (0-100): 101 inputs
 consider testing software that
  categorises two exam grades:
  101*101 inputs!
                                          36
Difficulties in testing software (cont)

   Exhaustive software testing is intractable
   Even if all possible inputs could be identified, the
    problem of identifying non-halting cases is
    undecidable
   Weyuker (1979) has shown that there is no
    algorithm that can determine if a given
    statement, branch or path will be exercised!
    we'll look at this difficulty in more detail after we
     understand graphs

                                                     37
              control flow graph
    Directed graph G(V, E)
      V is the set of vertices
      E is the set of edges, E ⊆ V × V
 The granularity of a vertex can be an
  operation, a statement or a basic block
 The edges are directed; direction
  indicates flow of control from one vertex
  to another
                                          38
            basic block (defn)
   sequence of statements such that the
    only entrance to the block is through the
    first statement and the only exit from the
    block is through the last statement




                                             39
      Let's consider path testing
 Construct test cases to exercise all
  paths through a program.
 Called “path coverage”.




                                         40
 Finding the square root of an
  inputted value: an example

start
  read number
  root = square_root(number)
  print root
end




                                 41
        Finding the square root
start
  read number
  if number > 0
     root = square_root(number)
     print root
  else
     print error message
  endif
end

                                      42
           Finding the square root
start
  read number
  while number != 0
    if number > 0
       root = square_root(number)
       print root
    else
       print error message
    endif
    read number
  endwhile
end
                                     43
How many paths?




                  44
    Exam processing example
 consider a program to process one
  exam result for 10 students
 categorise the result as A, B, C, D, F
 How many paths through the program?




                                           45
Find the number of paths for 10 inputs




                                     46
             Kinds of testing
    static: don't execute the program
      code inspection or "walk-through"
      symbolic execution
      symbolic verification

   dynamic: generate test data and
    execute the program



                                         47
         Functional testing
 formerly known as “black box” testing,
  or specification based testing
 test cases are derived from the
  functional design specifications w/out
  knowledge of internal structure
 understanding the code changes the
  way the requirements are seen; test
  design should not be “contaminated” by
  this knowledge
                                           48
        Functional testing (cont)
   functional testing will not test hidden
    functions, i.e., functions implemented
    but not described in the functional
    design specs




                                              49
          white box testing
 requires knowledge of internal program
  structure
 test cases are derived from the code




                                           50
    The term “bug” is not very precise
 Error: mistake made by the developer;
  located in people's heads.
 Fault: an error in a program. An error
  may lead to one or more faults.
 Failure: execution of faulty code may
  lead to one or more failures. Failures are
  found by comparing the actual output
  with the expected output.
 Many people still call it a "bug"!
                                           51
                A test is

 an activity in which a system or
  component is executed under specified
  conditions, the results are observed or
  recorded, and an evaluation is made of
  some aspect of the system or
  component.
 a set of one or more test cases



                                            52
                A test case is

   A set of test inputs, execution
    conditions, and expected results
    developed for a particular objective




                                           53
                 Tests

 A test is made up of (many) test cases
 A test procedure is the detailed
  instructions for setting up, starting,
  monitoring and restarting a given test
  case. aka test plan
 a test case may be used in more than
  one test procedure
 a test case may include many subtests
                                           54
Tests

  [Figure]
        55
        Testing v. debugging

 Testing detects errors, faults & failures
 Debugging is the process of finding and
  removing errors in software




                                              56
Progressive and regressive testing
   Progressive
     testing new code to determine whether it
      contains errors
   Regressive
     process of testing a program to determine
      whether a change has introduced errors
      (regressions) in the unchanged code
     re-execution of some/all of the tests
      developed for a specific testing activity

                                                  57
                    Static analysis
   Static analysis is the process of evaluating a
    system or component based on its form,
    structure, content or documentation
       control flow analysis to find endless loops,
        unreachable code, etc.
       data-use analysis to find data used before
        initialisation, variables declared but not used, etc.
       range-bound analysis to find e.g. array indices
        outside the bounds




                                                                58
              Static analysis

 interface  analysis to find e.g. mismatches in
  argument lists between called and calling
  modules
 verification of conformance to project
  coding standards
 code volume analysis, e.g. counts of LOC
  in each module
 complexity analysis, e.g. cyclomatic
  complexity
                                               59
             Dynamic analysis

 The process of evaluating a computer
  program based on its behaviour during
  execution
 Usual to analyse the coverage of source
  code, e.g. statement, branch, path
    Industry best practice = 85% branch
     coverage
 Less efficient than static analysis
                                          60
                Testware

 So named because it has a life beyond
  its initial use
 it should be managed, saved and
  maintained
 includes verification checklists,
  verification error stats, test data and
  supporting documentation (e.g., test
  plan), test specifications, test
  procedures, test cases and test reports
                                          61
            Performance testing

    Verify that
      all worst-case performance targets have
       been met
      nominal performance targets are usually
       achieved
      any best-case performance targets have
       been met



                                                62
                Unit test
 Perform the tests required to provide the
  desired coverage for a given unit
 The current trend is to test subsystems,
  rather than units.




                                          63
         Integration Testing
 Testing across units or subsystems
 test cases to provide the desired
  coverage for the system as a whole
 Testing subsystem connectivity




                                       64
               Security tests

   Check that
     system  is password protected
     users only granted necessary system
      privileges
   Deliberately attempt to break the
    security mechanism by e.g.
     accessing  the files of another user
     breaking into the system authorisation files
     accessing a resource when it is locked
                                                     65
           Portability tests

 Run a representative selection of
  system tests in all the required
  environments
or
 Run the system on one platform and
  check for conformance to standards ( if
  not possible to run on all environments)

                                             66
                  Safety tests
 Deliberately cause problems under
  controlled conditions and observe the
  system behaviour (e.g. disconnecting
  the power during system operations)
 Observe system behaviour when faults
  occur during tests
      Requirements may identify functions whose
       failure would cause a critical or catastrophic
       hazard. Such functions need exhaustive
       testing.
                                                   67
                 Stress tests

   Measure maximum load the system
    under test can sustain for a time, e.g.
      maximum number of activities that can be
       supported simultaneously
      maximum quantity of data that can be
       processed in a given time
      volume test, i.e. maximum quantity of input
       data handled together

                                                68
        Which form to use ?
         Do risk analysis
 Full testing for critical software or critical
  parts of software that will have heavy
  and diverse usage
 Partial testing for small, non-critical
  software products with a small, captive
  user population



                                               69
Testing cost curve:
 by William Perry

  [Figure: cost of testing plotted against number of defects found, with an
   "optimum test" point marked on the curve]
                                       70
        Problems with testing
 Failure to define the testing objective
  (don't know when to stop)
 Testing at the wrong phase of the life
  cycle
 Use of ineffective test techniques




                                            71
Testing Object-Oriented
       Software
       An Overview,
   from McGregor/Sykes


                          72
    Advantages of OO Approach
 improvement to development process & to
  code
 if implemented correctly, OO software is
  of higher quality than software developed
  under the procedural paradigm:
  understandability, maintainability,
  extensibility, reliability and reusability


                                           73
              Adv of OO
 represents design results as refinement
  & extension of analysis
 represents implementation results as a
  refinement & extension of design
 testing OO software is based primarily
  on the requirements of the software



                                            74
What's different about testing OO
 Much carry-over from the procedural
  approach
 For example, we still do unit testing;
  however, what constitutes a unit may be
  different in OO




                                         75
          What is software?
 instruction codes and data necessary to
  accomplish a task, as well as all
  representations of those instructions and
  data
 what are representations? The models
  developed during analysis



                                          76
            Adv of models
 test cases can be identified earlier in the
  development process
 errors and faults can be detected earlier
 test cases developed for analysis
  models can be refined and extended to
  test design models, which can be
  refined and extended to test subsystems
  and systems
                                            77
            The approach
 analyze a little
 design a little
 code a little
 test what you can




                           78
      Kinds of testing for OO
 Model testing
 interaction testing; replaces integration
  testing
 system and subsystem testing
 acceptance testing: testing of the
  application by end users, usually before
  release

                                              79
         Testing Perspective
 questions the validity of software &
  utilizes thorough investigation to identify
  failures
 makes execution-based testing more
  powerful than reviews or inspections;
 reviews never find something that's
  missing -- they only validate what's there

                                            80
OO Programming centers around:
 object
 message
 encapsulation
 inheritance, and
 polymorphism




                             81
        For testing, an object:
 might be destroyed before it should be
 might persist longer than it should
 might be manipulated by another object
  so that its data becomes inconsistent
  with other data in the system
 might have inappropriate data or exhibit
  incorrect behavior in the system

                                             82
    For testing, a message can fail if:
 a sender requests an operation not defined
  by the receiver
 a message contains an improper actual
  parameter
 the receiver cannot perform the requested
  operation at the time of the request
 a sender doesn't handle the reply properly
 the receiver returns an incorrect reply
                                          83
         Two parts to each class
    specification: what each object in the
     class can do
      query operations (in C++, these should be const)
      modifier operations

   implementation: how each object in the
    class does it



                                             84
    Two special kinds of class
          operations
 constructor
 destructor
 These are different from queries and
  modifiers: invoked implicitly!




                                         85
    3 Kinds of statements to specify
        the semantics of a class
 pre-conditions: conditions that must hold
  before an operation can be performed
 post-conditions: conditions that must hold
  after the operation is performed
 class invariants: conditions that must
  always hold for an instance of a class; an
  implied post-condition for each operation;
  can be temporarily violated while an
  operation is executing
                                           86
                  methods
 Characterized by behavior
 behavior can be specified by:
    state diagrams
    sequence diagrams

 Two approaches to defining the
  interface between sender and receiver:
    contract approach, B. Meyer (1994)
    defensive programming approach
                                             87
           Contract Approach
 The interface is defined in terms of the
  obligations of the sender and receiver
  involved in the action
 obligations specified by pre/post conditions
 pre-condition: describes the obligation of the
  sender (to ensure pre-conditions are met)
 post-condition: describes the obligations of the
  receiver (to ensure post-conditions and class
  invariants are met)
                                             88
// PuckSupply.h
#ifndef PUCKSUPPLY_H
#define PUCKSUPPLY_H
class Puck;             // Puck is defined elsewhere
const int N = 3;        // a puck supply holds 3 pucks (see spec below)
class PuckSupply {
public:
         PuckSupply();
         ~PuckSupply();
         Puck * get();
         int count() const;
private:
         int _count;
         Puck* _store[N];
};
#endif

// PuckSupply.cpp
#include "PuckSupply.h"
PuckSupply::PuckSupply() : _count(N) {
    for (int i = 0; i < N; ++i)
        _store[i] = new Puck;
}

PuckSupply::~PuckSupply() {
    for (int i = 0; i < _count; ++i)
        delete _store[i];
}

Puck* PuckSupply::get() {
    return _count > 0 ? _store[--_count] : 0;
}

int PuckSupply::count() const {
    return _count;
}
                                                                       89
      Spec for PuckSupply based on contracts
     A puck supply is a set of pucks not in play that can be retrieved one at a time. The
                  pucks are created by a puck supply when it is created.


    Class invariant: the count associated w/ a puck supply is always
     an integer from 0 to 3, inclusive
    The count() operation can be applied at any time; it returns
     the no. of pucks left in the receiver and has no effect on the
     receiver
    The get() operation can only be applied if the receiver has
     at least one puck (count > 0)
    the constructor has no pre-conditions. The result of the
     constructor is a puck supply of 3 pucks (count = 3)
    the destructor has no pre-conditions; the destructor
     deletes any pucks that remain in the object
                                                                                      90
 Spec for PuckSupply based on Defensive Programming
A puck supply is a set of pucks not in play that can be retrieved one at a time. The pucks
                    are created by a puck supply when it is created.


     Class invariant: (count between 0 & 3) -- same
     The count() operation: (returns no. of pucks) -- same
      The get() operation can be applied at any time; if the
       receiver has at least one puck (count > 0) then the result
       of the operation is a pointer to a puck and the number of
       pucks is reduced by one; otherwise a NULL pointer is
       returned and the count attribute remains 0.
     Constructor: (gives you 3 pucks) -- same
     the destructor: (deletes remaining pucks) -- same


                                                                                      91
        Spec for PuckSupply based on Contract Programming
                       Using OCL Notation

PuckSupply
       inv:     count >= 0

PuckSupply::PuckSupply():
       pre:     -- none
       post:    count = 3 AND pucks->forAll(puck | not puck.inPlay())

int PuckSupply::count() const:
        pre:    -- none
        post:   result = count

Puck * PuckSupply::get()
        pre:   count > 0
        post:  result = pucks->asSequence()->first() AND
               count = count@pre - 1
                                                                        92
               Defensive vs Contract
   Defensive:
       interface defined in terms of receiver
       an operation typically uses return code, or exception
       goal is to identify “garbage in” to eliminate “garbage out”
       tends to increase software complexity because each
        sender must follow each message w/ check of return
        code (even though receiver already checked)
       complicates class testing: must test for all possible
        outcomes!
       Complicates interaction testing: all outcomes handled by
        sender
                                                               93
          Defensive vs Contract
   Contract:
     reflects mutual responsibility and trust
     eliminates need for receiver to check return
      codes
     important question: How are contracts
      enforced?
     Simplifies class testing but complicates
      interaction testing: ensure sender meets
      pre-conditions
                                                 94
    A class implementation has:
 Data members
 set of member functions
 set of constructors
 destructor
 set of private operations (in private
  interface)


                                          95
              Class testing
    Since a class is an abstraction of
     commonalities, the process must ensure
     that a representative sample of instances
     of the class is selected for testing




                                            96
    Potential causes of failure of class
         design/implementation
 Doesn‟t meet specification
 contains operations and data that affect
  the proper construction of instances
 might rely on collaboration w/ other objects
 might meet its spec, but spec might be
  incorrect, or violate a higher requirement
 might specify pre-conditions but not provide
  mechanism to check if pre-condition is met
                                            97
    In contract/defensive approach:
 Contract: must test to ascertain that
  every attempt to apply an operation
  must first satisfy pre-conditions
 defensive: ascertain that every possible
  outcome is handled properly




                                             98
              Inheritance
 Define a new class based on definition
  of an existing class
 should be used only to implement the
  is-a-kind-of relationship
 best use of inheritance is with respect to
  interfaces and not implementations



                                           99
     Inheritance, from a testing view
 Permits bugs to be propagated from a
  class to its descendants
 permits test case reuse
 verify it's used properly:
    use of inheritance solely for code reuse will
     probably lead to maintenance difficulty;
    this is a design issue, but so common that
     testers can make a significant contribution
     by making sure inheritance is used properly
                                                  100
            Polymorphism
 A sender in an OO program can use an
  object based on its interface and not on
  its exact class
 a derived class inherits the public
  interface of its base class and thus
  instances of either class can respond to
  the same messages
 in Java, supported through inheritance
  or interfaces
                                          101
              Poly (cont.)
 Because polymorphic references hide
  the actual class of a referent, both C++
  & Java provide support for determining
  at run time the actual class of a referent
 Good OO design holds these run-time
  inspections to a minimum because they
  create a maintenance point: extension of
  the class hierarchy introduces more
  types to be inspected
                                          102
            Testing polymorphism
   An operation can return a reply that is a
    polymorphic reference; the actual class of the
    referent could be incorrect
    any operation can have one or more parameters
     that are polymorphic references; a parameter
     could be incorrect
   Number of instances to check could be large: Need
    statistical analysis to determine which
    configurations will expose most faults for least cost

                                                    103
      OO development products
 Use cases
 class diagrams
 activity diagrams
      state diagrams
      sequence diagrams

 class specification (in OCL)
 state diagrams
 activity diagrams
                                 104
    Functional Testing
Specification Based Testing
      (Black Box testing)



                              105
     Testing is a state of mind
 “If our goal is to show the absence of
  errors, we will find very few of them”
 “If our goal is to show the presence of
  errors, we will discover a large number
  of them”




                                            106
                Why do it?
   Increase our confidence that the
    software works and works correctly




                                         107
Devise a test plan

  A program reads 3 integer values. The 3 values are
  interpreted as representing the lengths of the sides
  of a triangle. The program prints a message that
  states whether the triangle is scalene, isosceles, or
  equilateral.

  Write test cases that would adequately test this program.




                                                              108
               Myers Test Cases
   1. Valid scalene (5, 3, 4) => scalene
   2. Valid isosceles (3, 3, 4) => isosceles
   3. Valid equilateral (3, 3, 3) => equilateral
   4. First permutation of 2 sides (50, 50, 25) =>
    isosceles
   5. Second perm of 2 sides (25, 50, 50) =>
    isosceles
   6. Third perm of 2 sides (50, 25, 50) => isosceles
   7. One side zero (1000, 1000, 0) => invalid
                                                   109
                   More test cases
   8. One side has negative length (3, 3, -4) => invalid
   9. first perm of two equal sides (5, 5, 10) => invalid
   10. Second perm of 2 equal sides (10, 5, 5) => invalid
   11. Third perm of 2 equal sides (5, 10, 5) => invalid
   12. Three sides >0, sum of 2 smallest < largest (8,2,5) =>
    invalid
   13. Perm 2 of line lengths in test 12 (2, 5, 8) => invalid
   14. Perm 3 of line lengths in test 12 (2, 8, 5) => invalid



                                                           110
                  More test cases
   15. Perm 4 of line lengths in test 12 (8, 5, 2) => inv
   16. Perm 5 of line lengths in test 12 (5, 8, 2) => inv
   17. Perm 6 of line lengths in test 12 (5, 2, 8) => inv
   18. All sides zero (0, 0, 0) => inv
   19. Non-integer input, side a (@, 4, 5) => inv
   20. Non-integer input, side b (3, $, 5) => inv
   21. Non-integer input, side c (3, 4, %) => inv




                                                             111
                 More test cases
   22. Missing input a (, 4, 5) => invalid
   23. Missing input b (3, , 5) => invalid
   24. Missing input c (3, 4, ) => invalid
   25. Three sides > 0, one side equals the sum of the
    other two (12, 5, 7) => inv
   26. Perm 2 of line lengths in test 25 (12, 7, 5) => inv
   27. Perm 3 of line lengths in test 25 (7, 5, 12) => inv
   28. Perm 4 of line lengths in test 25 (7, 12, 5) => inv
   29. Perm 5 of line lengths in test 25 (5, 12, 7) => inv
   30. Perm 6 of line lengths in test 25 (5, 7, 12) => inv
                                                              112
                More test cases
   31. Three sides at max values (32767, 32767,
    32767) => inv
   32. Two sides at max values (32767, 32767, 1) =>
    inv
   33. One side at max values (32767, 1, 1) => inv




                                                       113
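One plausible implementation of the triangle program, against which the numbered cases above can be run (the function name and the use of `long` are illustrative; Myers' max-value cases 31-33 probe 16-bit overflow and so depend on the machine's integer width):

```cpp
#include <cassert>
#include <string>

// A way the triangle program might classify its three inputs;
// the guards mirror the invalid cases in the test list.
std::string classify(long a, long b, long c) {
    if (a <= 0 || b <= 0 || c <= 0) return "invalid";             // cases 7, 8, 18
    if (a + b <= c || a + c <= b || b + c <= a) return "invalid"; // cases 9-17, 25-30
    if (a == b && b == c) return "equilateral";
    if (a == b || b == c || a == c) return "isosceles";
    return "scalene";
}
```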
           Testing principles

1. Complete testing is not possible
2. Testing work is creative & difficult
   need to understand the system
   most systems are complex
   need business knowledge, testing
    experience, creativity, insight and testing
    methodology


                                                  114
               Testing principles
4. Testing is risk based
      safety-critical systems need a high level of testing
      use risk as basis for allocating the test time
       available and selecting what to test
5. Testing must be planned
       plan overall approach
        – what to test and when to stop
      design tests
      establish expected results for each selected test
       case
                                                           115
          Testing principles

6. Testing requires independence
   unbiased   measurement e.g. test team
   goal is to measure software quality
    accurately
   often conflict with the requirement to
    understand the system being tested
    (principle 2)



                                             116
         Black-box methods
 equivalence partitioning
 boundary-value analysis
 error guessing
 cause-effect graphing




                             117
        Equivalence partitioning
   based on the specification; no need to see the
    code
   divide up the input domain into equivalence
    partitions or classes of data
   each class is treated identically
   any datum chosen from a class is as valid as
    any other
   advantage: reduces the input domain to a
    manageable size
                                                118
    Equivalence partitioning (cont)
 to assist in identifying partitions, look in
  specs for terms such as “range”, “set”
  and other similar words
 equivalence classes should not overlap
 in addition to choosing one datum from
  each class, invalid data may also be
  chosen

                                             119
    Equivalence partitioning (cont)
             example #1
 consider the exam processing program
  where we categorise the grade as A, B,
  C, D, F
 assume A>=90, B>=80, C>=70, D>=60
 we can partition the grades, g, into the
  following partitions: 0<=g<60,
  60<=g<70, 70<=g<80, 80<=g<90,
  90<=g<=100, and the invalid classes
  g>100, g<0
                                         120
Equivalence partitioning (cont)
         example #1

  input    output    expected output
    50               F
    65               D
    75               C
    85               B
    95               A
    -5               invalid input
   105               invalid input




                                        121
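A sketch of the grading program with the partitions made explicit (the function name is illustrative); the table picks one representative datum from each class:

```cpp
#include <cassert>
#include <string>

// Grading sketch: each return corresponds to one equivalence class.
std::string grade(int g) {
    if (g < 0 || g > 100) return "invalid input";  // classes g < 0 and g > 100
    if (g >= 90) return "A";                       // 90 <= g <= 100
    if (g >= 80) return "B";                       // 80 <= g < 90
    if (g >= 70) return "C";                       // 70 <= g < 80
    if (g >= 60) return "D";                       // 60 <= g < 70
    return "F";                                    // 0 <= g < 60
}
```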
    Equivalence partitioning (cont)
             example #2
   string searching program: the program
    prompts the user for a positive integer in the
    range 1 to 20 and then for a string of
    characters of that length. The program then
    prompts for a character and returns the
    position in the string at which the character
    was first found or a message indicating that
    the character was not present in the string.
    The user has the option to search for more
    characters.
                                                     122
    Equivalence partitioning (cont)
             example #2
   There are 3 equivalence classes to
    consider
     one  class of integers in the stated range
     two classes of integers above and below
      the range
   output domain consists of two classes
     the  position at which the character is found
      in the string
     a message stating that it was not found
                                                   123
Equivalence partitioning (cont)
         example #2
  x    input string   search char   response   expected output
  34                                           wrong input
  0                                            wrong input
  3    abc            c             y          pos = 3
                      k             n          not in string




                                                              124
     strengths and weaknesses of
      equivalence partitioning
 strength: it does reduce size of input
  domain
 well suited to data processing
  applications where input variables may
  be easily identified and take on distinct
  values
 not easily applied to applications where
  input domain is simple yet the
  processing is complex
                                              125
     strengths and weaknesses of
      equivalence partitioning
 problem: although the specification may
  suggest that a class of data is
  processed identically, this may not be
  the case
 example: y2k problem
 also, the technique does not provide an
  algorithm for finding the partitions

                                        126
      Boundary value analysis
 used in conjunction with equivalence
  partitioning
 this technique focuses on likely sources
  of faults: boundaries of equivalence
  classes
 the technique relies on having created
  the equivalence classes

                                         127
Boundary value analysis (cont)
  input    output    expected output
    50               F
    65               D
    75               C
    85               B
    95               A
    -5               invalid input
   105               invalid input
    60               D
    70               C
    80               B
    90               A



                                        128
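A sketch of the same grading scheme (illustrative implementation) makes it easy to probe each partition edge, both the boundary values in the table and the values just outside them:

```cpp
#include <cassert>
#include <string>

// Grading sketch; boundary-value tests cluster at partition edges.
std::string grade(int g) {
    if (g < 0 || g > 100) return "invalid input";
    if (g >= 90) return "A";
    if (g >= 80) return "B";
    if (g >= 70) return "C";
    if (g >= 60) return "D";
    return "F";
}
```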
    Boundary value analysis (cont)
      string processing program
 integer values of 0, 1, 20 and 21 are
  obvious choices,
 as well as finding the character in the
  first and last position




                                            129
Boundary value analysis (cont)
  x    input string           search char   response   expected output
  21                                                   out of range
  0                                                    out of range
  1    a                      a             y          pos = 1
                              X             n          not in string
  20   abcdefghijklmnopqrst   a             y          pos = 1
                              t             n          pos = 20




                                                                   130
            Error guessing
 an ad hoc approach, based on intuition
  and experience
 identify tests that are likely to expose
  errors
 make a list of possible errors or error
  prone situations and then develop tests
  based on the list

                                         131
        Error guessing (cont)
         some items to try:
 empty or null lists/strings
 zero instances/occurrences
 blanks or null characters in strings
 negative numbers




                                         132
        Error guessing (cont)
        strengths/weaknesses
 intuition frequently accurate
 the technique is efficient
 technique relies on experience; not
  always available
 the ad hoc nature leaves doubt about
  quality of the test: “did I forget any
  typical error situations?”

                                           133
       Cause-effect graphing
 functional or spec-based technique
 systematic approach to selecting a set
  of high-yield test cases that explore
  combinations of input conditions
 rigorous method for transforming a
  natural-language spec into a formal-
  language spec
 exposes incompleteness and
  ambiguities in the spec                  134
    Cause-effect graphing (cont)
 the spec is analyzed, and
 all possible causes are identified: inputs,
  stimuli, anything that will elicit a
  response from the system
 all possible effects are identified:
  outputs, changes in the system state
 causes and effects must be stated so
  that they can be evaluated as either true
  or false                                  135
    Cause-effect graphing (cont)
 causes and effects are combined into a
  boolean graph that describes their
  relationship
 each cause and effect is allocated a
  unique number for reference
 create a boolean graph that shows the
  links between cause and effect
 construct test cases that cover all
  possible combinations of cause/effect 136
     Cause-effect graphing (cont)
   graphs are combined using operators:
     not
     and
     or
     nor




                                           137
     Cause-effect graphing (cont)
       string searching program
   causes:
     (1) integer in range 1-20
     (2) search char is in string
     (3) search for another char

   effects
     (20) integer out of range
     (21) report position of char in string
     (22) char not found in string
     (23) program terminates                  138
 Cause-effect graphing (cont)

   (the cause-effect graph linking causes 1-3
    to effects 20-23 is not reproduced)

                               139
    Cause-effect graphing (cont)
       strengths/weaknesses
 exercises combinations of test data
 expected results are part of test creation
  process
 major drawback is boolean graph: large
  number of causes and effects produce
  highly complex graph
 soln to weakness: identify sub-problems


                                          140
Verification Technique:
     Walkthrough




                          141
    Walkthrough: Two formats
 author is not the presenter
 author is the presenter




                               142
    Walkthrough: those present
 presenter or inquisitor
 oracle: the author of code
 administrator




                                 143
    Walkthrough: key elements
 objective: to detect faults and enforce
  standards
 input: element under test, objectives,
  applicable standards
 output: report, perhaps a checklist,
  perhaps a set of test cases



                                            144
              Walkthrough:
          requirements checklist
   precise, unambiguous and clear
   consistent: no item conflicts with another item
   relevant: each item is relevant to the problem
   testable: will it be possible to determine if the
    item is satisfied
   traceable: will it be possible to trace each item
    through the stages of development



                                                   145
     Walkthrough: code checklist
   data reference errors: is an uninitialized
    variable referenced
   data declaration errors: are there variables
    with similar names
   computation errors: is the l-value's type
    smaller than the r-value's (risking
    truncation or overflow)
   comparison errors: are there comparisons
    between variables of different type; are
    differences allowed/handled
                                                  146
    Walkthrough: code checklist
              (cont)
 control flow errors: is there a possibility
  of premature loop exit; infinite loop;
  extended loop
 interface errors: do formal and actual
  parameters match; is the interface
  clearly defined; are constraints specified
  in the interface (ie, pre/post conditions)
 input/output errors: grammatical errors
  in output text                              147
Structural Testing Methodology
          (white-box)
       Branch and Path Testing




                                 148
             Example program:
   string searching program: the program
    prompts the user for a positive integer in the
    range 1 to 20 and then for a string of
    characters of that length. The program then
    prompts for a character and returns the
    position in the string at which the character
    was first found or a message indicating that
    the character was not present in the string.
    The user has the option to search for more
    characters.
                                                     149
main() {
(1)  char a[20], ch, response = 'y';
(2)  int x, i;
(3)  bool found;
(4)  cout << "Input an integer between 1 and 20: ";
(5)  cin >> x;
(6)  while (x < 1 || x > 20) {
(7)     cout << "Input an integer between 1 and 20: ";
(8)     cin >> x;
     }
(9)  cout << "input " << x << " characters: ";
(10) for (int i = 0; i < x; ++i) cin >> a[i];
(11) cout << endl;
                                        150
(12) while (response == 'y' || response == 'Y') {
(13)    cout << "input character to search: ";
(14)    cin >> ch;
(15)    found = false;
(16)    i = 0;
(17)    while (!found && i < x)
(18)       if (a[i++] == ch) found = true;
(19)    if (found) cout << ch << " at: " << i << endl;
(20)    else cout << ch << " not in string" << endl;
(21)    cout << "search for another character? [y/n]";
(22)    cin >> response;
     }
                                                       151
       The sample program:
 Professional programmers will consider
  the sample program “trivial”
 developed to demo a large number of
  testing methods in a small space




                                       152
           Statement testing
 aka statement coverage
 generate test data to exercise every
  statement in the program at least once
 (1) need a datum out of range to get into
  the first while loop
 (2) need a search character that is in the
  input string and one that is not, to reach
  both arms of the if at lines 19-20
                                            153
Test data for statement testing
 x    input string   search char   response   expected output
 34                                           wrong input
 3    abc            c             y          pos = 3
                     k             n          not in string




                                                  154
        Strengths/weaknesses of
            statement testing
 minimum level coverage using structural
  testing
 may be impossible to achieve 100%
  statement coverage
     code only executed in exceptional or
      dangerous circumstances
     code that is unreachable



                                             155
           Strengths/weaknesses of
           statement testing (cont)
 Not very demanding technique; in the
  example we didn't generate a value for
  x<1
 does not provide coverage for the
  “NULL else”
     if   (number < 3) ++number;



                                           156
     Lesniak-Betley (1984) investigated
    the effectiveness of statement testing

   technique extended to generate test case for
    “NULL else”
   they found only two more faults in the extended
    technique, but generated many more test cases
    ==> “not worth it!”
   however, the software was a conversion project
    and the old code acted as a spec for the new
    ==> the problem was quite well defined and
    unlikely to be many faults because of poor specs
                                                 157
            Branch testing
 aka branch coverage or decision
  coverage
 generate test data to exercise the true
  and false outcome of every decision
 control flow graph (cfg) helpful in
  representing the program
 construct cfg for string processing
  example
                                            158
   (control flow graph of the string searching program is not reproduced)
                                                                        159
           Branch testing
     string searching example
 The branches for which we have to
  generate test data occur at nodes 6,
  10.2, 12, 17 and 19
 Look at each of the 5 branches to
  determine test sets to exercise each




                                         160
     Test data for branch testing
 x    input string   search char   response   expected output
 34                                           wrong input
 3    abc            c             y          pos = 3
                     k             n          not in string




                                                      161
        Strengths/weaknesses
            Branch testing
 Branch testing is one level up from
  statement testing in degree of coverage
 same problem in achieving 100%
  coverage
 resolves the “NULL else” problem
 Undemanding of compound conditions:
  only the overall outcome of a decision
  such as (x < 1 || x > 20) is exercised,
  not each individual condition




                                            162
                 DD-path testing
   Decision to decision path testing
   sub-paths in the graph starting at the start node or
    a decision node and finishing at a decision node or
    the end node.
   The path must contain no decision nodes within it
   The testing goal is to cover each DD-path at least
    once
   Find DD-paths & test data for string processing
    example
                                                    163
       Strengths/weaknesses
          DD-path testing
 Equivalent to branch testing; typically
  doesn't add any coverage
 Any advantages to DD-path testing over
  branch testing?




                                        164
           Condition coverage
 Generate test data such that all
  conditions in a decision take on both
  outcomes (if possible) at least once
 May not achieve branch coverage:
  consider while (x < 1 || x > 20 )
     x  = 0 causes first true, second false
      x = 34 causes first false, second true

   generate test data for string processing
    program                                  165
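The gap can be shown mechanically. This sketch wraps the line-6 guard as a function (an illustrative harness, not part of the sample program): x = 0 and x = 34 give each condition both outcomes, yet the decision is true both times, so the false branch is never taken:

```cpp
#include <cassert>

// The loop guard from line 6 of the sample program.
// x = 0:  first condition true,  second false
// x = 34: first condition false, second true
// Both make the decision true, so condition coverage alone
// never exercises the decision's false outcome.
bool out_of_range(int x) { return x < 1 || x > 20; }
```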
 Test data for condition coverage

 x    input string   search char   response   expected output
 34                                           wrong input
 0                                            wrong input
 3    abc            c             n          pos = 3
 1    x              a             n          not in string




                                                                  166
        Strengths/weakness
        condition coverage
 strength: focuses on condition outcomes
 weakness: may fail to achieve branch
  coverage!




                                       167
     Decision/condition coverage
   Generate test data such that all
    conditions in a decision take on both
    values at least once, and exercise the
    true and false outcomes of every
    decision




                                             168
        Test data for
Decision/Condition coverage
 x    input string   search char   response   expected output
 34                                           wrong input
 0                                            wrong input
 3    abc            c             y          pos = 3
                     k             n          not in string
 1    x              a             n          not in string




                                                          169
     Strengths/weaknesses of
    Decision/condition coverage
 Addresses one of deficiencies of
  condition coverage by forcing each
  branch to be exercised
 conditions can be masked by lazy
  (short-circuit) evaluation; consider:
  while (!found && i < x)



                                       170
    Multiple condition coverage
 Generate test data to exercise all
  possible combinations of true and false
  outcomes of conditions in a decision
 Consider compound conditions at lines
  6, 12, 17:
  while (x < 1 || x > 20)
   while (response == 'y' || response == 'Y')
  while (!found && i < x)
                                          171
      Strengths/weaknesses of
     multiple condition coverage
 Strength: tests all feasible combinations
  or outcomes ==> confidence in boolean
  operators
 weakness:
     no assistance given in how to generate test
      data; (x < 1) ==> x=0 or x = -9999
     expensive! how many combinations are
      there for a decision involving n conditions?
      How many test cases are needed?
                                                172
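For n conditions there are 2^n combinations, but not all of them need be feasible. A sketch that enumerates the combinations actually reachable for the line-6 guard (the sampling range is illustrative): true/true cannot occur, leaving only three testable combinations.

```cpp
#include <cassert>
#include <set>
#include <utility>

// Which of the 2^2 condition combinations of (x < 1 || x > 20)
// actually occur over a sample of the input domain?
std::set<std::pair<bool, bool>> feasible_combinations() {
    std::set<std::pair<bool, bool>> seen;
    for (int x = -100; x <= 100; ++x)   // illustrative sample of the domain
        seen.insert({x < 1, x > 20});
    return seen;                        // (true, true) never appears
}
```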
              Level-i paths
 Defn: a level-0 path is a simple acyclic
  path from start node to the stop node of
  a graph
 Defn: a level-i path, for i > 0, is a
  path that begins and ends on the nodes
  of a lower level and may be a circuit
 typically level-i paths, when i is at least
  1, are nested loops
                                             173
         Level-i paths (cont)
 Level-i paths are used to guide an
  incremental testing strategy that
  generates test data firstly to cover level-
  0 paths, then level-1 paths (accessing
  them through the level-0 paths),
  followed by level-2 paths, and so on.
 The method incrementally explores
  deeper and deeper levels of iteration
                                            174
   (numbered control flow graph for the level-i path example is not reproduced)
                                                                            175
          The level-i paths:
 level-0: 1-2-4-5-8-9
 level-1: 2-3-2, 5-6-7-5, 9-10-11-13-14-
  16-12, 9-10-11-13-15-16-12
 level-2: 11-12-11




                                            176
     Level-i paths are not always
     subsumed by branch testing
 There are 4 level-i paths
 there are two branches

   (the example graph is not reproduced)

                                    177
        Strengths/weaknesses of
             Level-i testing
   Readily applicable to both structured &
    unstructured programs
   finding all level-i paths is computationally
    intensive, and
   there are a lot of level-i paths; Woodward
    (1984) analyzed 116 Fortran programs and
    found 45,287,485 level-i paths, compared with
     164 DD-paths and 338 LCSAJs; (45,287,476
     of the level-i's were level-0)

                                               178
          Basis path testing
 Provides a method for identifying an
  upper bound for the number of paths
  necessary to achieve branch coverage
 the bound is given by McCabe's
  cyclomatic complexity number (1976)
 cyclomatic complexity: given a strongly
  connected directed graph G(n, e), with n
  nodes and e edges, V(G) = e - n + 1
                                        179
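As a quick arithmetic check (the helper is illustrative; e counts the edges of the flow graph after it is made strongly connected by an added exit-to-entry edge):

```cpp
#include <cassert>

// McCabe's number for a strongly connected graph: V(G) = e - n + 1.
// On the original flow graph this is the familiar e - n + 2, since
// making the graph strongly connected adds exactly one edge.
int cyclomatic(int e, int n) { return e - n + 1; }
```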
     Basis path testing (cont)
 Compute V(G) = 21 - 16 + 1 = 6
 Now choose 6 independent circuits from
  the graph, known as the basis set:

   (the six circuits are not reproduced)

                                       180
      Strengths/weaknesses of
         Basis path testing
 Basis set is easy to compute
 the technique is readily applicable to
  both structured and unstructured
  programs
 Drawback: the basis set is not unique;
  there will be obvious paths omitted
 drawback: the number of paths in the
  basis set can greatly exceed the number
  necessary to achieve branch coverage 181
              Path testing
 Generate test data to exercise all
  distinct paths in a program!
 a path that makes i iterations through a
  loop is distinct from a path that makes
  i+1 iterations through the loop, even if
  the same nodes are visited in both
  iterations.
 Thus, there are an infinite number of
  paths in the string processing program! 182
            Path testing (cont)
 Need to limit the number of paths:
  choose equivalence classes of paths.
 Two paths are considered equivalent if
  they differ only in the number of loop
  iterations, giving two classes of loops:
     one with 0 iterations
     one with n iterations (n > 0)



                                             183
             Path testing (cont)
   If the program is structured, Paige and
    Holthouse (1977) describe a technique
    for characterizing the program by a
    regular expression:
    .  is concatenation
     + is selection
      * is iteration (0 or more)



                                              184
 Use a regular expression to
describe the following graph:

   (the graph is not reproduced; its regular
    expression is 1.2.(3.(4+5).6.2)*.7)

                                185
              To generate paths
 replace (x)* with (x+0), where 0
  represents NULL
 this gives the expression:
  1.2.((3.(4+5).6.2)+0).7
 Expanding, gives three paths:
     1-2-7
     1-2-3-4-6-2-7
     1-2-3-5-6-2-7
                                     186
 Computing the number of paths:
By replacing all values, including null, by
 1, we can compute the number of paths:
 1.2.((3.(4+5).6.2)+0).7
  1.1.(1.(1+1).1.1+1).1 = 3
The regular expression and path number
 computation must be modified slightly
 for repeat/until loops

                                          187
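The substitution can be carried out directly. This sketch evaluates the expression 1.2.((3.(4+5).6.2)+0).7 with every symbol (including the NULL) replaced by 1, '.' by multiplication, and '+' by addition:

```cpp
#include <cassert>

// Path count for 1.2.((3.(4+5).6.2)+0).7 after the substitution.
int path_count() {
    return 1 * 1 * ((1 * (1 + 1) * 1 * 1) + 1) * 1;
}
```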
        Strengths/weaknesses of
              Path testing
   There are a lot of paths
     weakness    -- lots of computation
     strength -- combinations of paths are
      exercised that other methods do not
      achieve
   technique is not readily applicable to
    unstructured programs


                                              188
Linear Code Sequence and Jump
 (LCSAJ), Hennell et al. (1984)
 An LCSAJ start point is the target line of
  a jump or the first line of the program.
 An LCSAJ end point is any line that can
  be reached from the start point by an
  unbroken sequence of code and from
  which a jump can be made.
 An LCSAJ is characterized by a start
  line, an end line, and a target line
  (where it jumps to).                     189
               LCSAJ (cont)
   Having identified all the LCSAJs in the
    program, generate test cases to
    exercise all LCSAJs




                                              190
                 LCSAJ (cont)
   sample program:
(1) function factorial(n : integer) return integer is
(2)    result : integer := 1;
(3) begin
(4)    for I in 2 .. n loop
(5)       result := result * I;
(6)    end loop;
(7)    return result;
(8) end factorial;

                                                        191
               LCSAJ (cont)
   LCSAJs for factorial program:

    LCSAJ     start     end         target
    1         3         4           7
    2         3         6           4
    3         4         6           4
    4         4         4           7
    5         7         7           exit


                                             192
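A C++ transcription of the factorial function (loop bounds mirroring "for I in 2 .. n loop") makes it easy to pick test data for the LCSAJs: n < 2 skips the loop, n = 2 executes it exactly once, and n >= 3 also exercises the back-jump from line 6 to line 4:

```cpp
#include <cassert>

// C++ version of the Ada factorial used in the LCSAJ example.
int factorial(int n) {
    int result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}
```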
 Strengths/weaknesses of LCSAJ
 tends to exercise loops more thoroughly
  than the other techniques
 difficult to apply




                                        193
Structural Testing Methodology
          (white-box)
        Data Flow Testing




                                 194
      Data flow testing, overview
 Generate test data that follows the pattern
  of data definition & use through the
  program.
 Originally used to statically detect
  anomalies in the code; (e.g., referencing
  an undefined variable)
 Original paper by Laski and Korel (1983)
  suggested two testing strategies based on
  exercising pairs of definitions and uses
  within statements or basic blocks         195
               Data-flow coverage

   Variable V has a def at each place in the
    program where V acquires a value e.g. at
    assignments to V
   V has a c-use where it is used in the
    evaluation of an expression or an output
    statement
   It has a p-use where it occurs in a predicate
    and therefore affects the flow of control of the
    program
       def and c-use represented as nodes, p-use as
        edge on graph                                  196
                  d-u pairs

   The objective of data flow coverage is to
    identify and classify all occurrences of
    variables in a program and for each
    variable generate test data so that all
    definitions and uses are exercised
     no distinction between c-use and p-use
     known as d-u pairs



                                               197
           Example of DU path
1 x := 0;
2 while x < 2
3     begin
4          writeln (“looping”);
5          x := x + 1;
6     end

DU paths for x: 1-2, 1-2-3-4-5, 5-6-2, 5-6-2-3-4-5.
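An instrumented sketch of the loop above records the value of x reaching each evaluation of the predicate; the recorded values show both defs (lines 1 and 5) reaching the p-use at line 2:

```cpp
#include <cassert>
#include <vector>

// Instrumented version of the example loop. Each element of the
// returned vector is the value of x at the p-use (line 2): the first
// comes from the def at line 1, the rest from the def at line 5.
std::vector<int> values_at_predicate() {
    std::vector<int> at_test;
    int x = 0;                       // def of x (line 1)
    for (;;) {
        at_test.push_back(x);        // record the value reaching the p-use
        if (!(x < 2)) break;         // p-use of x (line 2)
        x = x + 1;                   // c-use and def of x (line 5)
    }
    return at_test;
}
```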
                                                      198
               DU pairs

 The number of DU pairs is always finite
 A testset achieves all-uses coverage if
  its data points cause the execution of
  each DU pair for each variable




                                        199
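As a sketch of what all-uses coverage bookkeeping looks like (in Python, with du-pairs written as (def-line, use-line) tuples taken from the loop example above; the trace recording is hypothetical):

```python
# du-pairs for x in the loop example: defs at lines 1 and 5,
# p-use at line 2, c-use at line 5
du_pairs_x = {(1, 2), (1, 5), (5, 2), (5, 5)}

# (def-line, use-line) events recorded during one run of the loop
# (x := 0 at line 1; predicate at line 2; x := x + 1 at line 5)
trace = [(1, 2), (1, 5), (5, 2), (5, 5), (5, 2)]

covered = set(trace)
all_uses_covered = covered >= du_pairs_x   # every du-pair exercised?
```

For this small loop a single run already exercises every du-pair of x; in general, several test points may be needed.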
Data flow testing, overview (cont)
   Rapps and Weyuker (1985) extended
    data flow testing by classifying
    occurrences of variables as:
     def:definitional
     c-use: computational-use
     p-use: predicate-use




                                        200
 Data flow testing, overview (cont)
          test coverage criteria:
 all-nodes (statement coverage)
 all-edges (branch coverage)
 all-defs
 all-p-uses
 all-c-uses/some-p-uses
 all-p-uses/some-c-uses
 all-uses
 all-du-paths
 all-paths                         201
Data flow testing, overview (cont)
test coverage criteria, Rapps/Weyuker (85):




                                          202
               all-du-paths
 Strongest form of data flow testing
 usage includes both p-use and c-use so
  we will just refer to uses
 identify all occurrences of variables in
  the program and then, for each variable,
  generate test data so that all definition to
  use pairs (du-pairs) are exercised.

                                            203
              factorial example
(1) function factorial(n : integer) return integer is
(2) result : integer := 1;
(3) begin
(4) for I in 2 .. n loop
(5)          result := result *I;
(6) end loop;
(7) return result;
(8) end factorial;


                                                        204
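A Python transcription of the example, annotated with the defs and uses of result (a sketch; the original is Ada-like pseudocode):

```python
def factorial(n):
    result = 1                   # def of result
    for i in range(2, n + 1):    # def of i on each iteration
        result = result * i      # c-use of result and i; def of result
    return result                # c-use of result
```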
         Find du-pairs for string
          processing program
   variables of interest are x, i, ch, found,
    response, and array a:
     there are 8 du-pairs for x
     9 du-pairs for i
     3 du-pairs for ch
     4 du-pairs for found
     and one du-pair for response



                                                 205
      Find du-pairs for string
     processing program (cont)
 Array a is a problem since it's difficult
  determine which element in the array is
  being used.
 For example, i is changing dynamically
  as the program executes.
 solution: treat the entire array as one
  variable ==> one du-pair (10-18)

                                             206
      Strengths/weaknesses of
          data flow testing
 Adv: strong form of testing
 Adv: generates test data in the pattern
  that data is manipulated in the program
  rather than following “artificial” branches
 Problem: pointer variables; difficult to
  determine which variable is being
  referenced


                                            207
         Strengths/weaknesses of
          data flow testing (cont)
   Disadv: theoretical limit on number of test
    cases is exponential. Why? 2^d, where d is the
    number of 2-way decisions in the program
   Weyuker's (1988) study indicated that, in
    practice, a small number of cases is needed
   The Weyuker study was corroborated by Bieman &
    Schultz (1989): of 143 subroutines, 80% could
    be covered with 10 or fewer paths, 91% with 25
    or fewer; one needed 2^32 paths!

                                               208
        Automating
     Data Flow Testing

Computing global data flow analysis




                                      209
Intro to global data flow analysis
 The computation of data flow
  information can be automated
 Need info about where definitions occur
  (l-values) and uses occur (r-values)
 "global" here means within a single function,
  but across the basic blocks of its control
  flow graph
 uses data flow equations
                                           210
       Data flow equations
out[S] = gen[S] U (in[S] - kill[S])
 Out[S] is the set of all defs that leave a
  block S (said to be “live”)
 gen[S] all new defs that are generated
  by the block S
 in[S] all defs that enter block S
 kill[S] all defs that are killed by a def in
  block S

                                                 211
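The equation maps directly onto set operations; a minimal sketch, with made-up definition names d1..d3:

```python
gen_S  = {"d3"}            # defs generated inside block S
kill_S = {"d1"}            # defs killed by a redefinition in S
in_S   = {"d1", "d2"}      # defs reaching the entry of S

# out[S] = gen[S] U (in[S] - kill[S])
out_S = gen_S | (in_S - kill_S)
```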
Find in[S], out[S], gen[S], kill[S]




                                  212
                Details
 When we write out[S] we imply that
  there is a unique end point from which
  control flows out of a block S
 There are subtleties attached to
  procedure calls, pointer variables and
  arrays



                                           213
         Reaching definition
 A definition of x is a statement that
  assigns to x, or may assign to x
 a definition d reaches a point p if there is
  a path from the point immediately
  following d to p, such that d is not killed
  along that path



                                            214
    More examples of gen, kill, out
 gen[S] = {d}
 kill[S] = D - {d}, where D is the set of all
  defs of the variable assigned by d
 out[S] = gen[S] U (in[S] - kill[S])




                                        215
    More examples of gen, kill, out
 gen[S] = gen[S1] U gen[S2]
 kill[S] = kill[S1] intersect kill[S2]
 in[S1] = in[S]
 in[S2] = in[S]
 out[S] = out[S1] U out[S2]




                                          216
                Representing sets
   Sets of definitions, such as gen[S] can be
    represented compactly using bit vectors
       assign a number to each definition of interest in the
        cfg
       the bit vector representing a set of definitions will
        have 1 in position i if the definition numbered i is in
        the set
   the C++ standard library has an efficient
    implementation of sets

                                                             217
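A bit-vector encoding can be sketched in Python using plain integers (bit i set iff definition number i is in the set); the definition numbers are made up for illustration:

```python
def to_bits(defs):
    """Encode a set of definition numbers as an integer bit vector."""
    v = 0
    for i in defs:
        v |= 1 << i
    return v

gen_s, kill_s, in_s = to_bits({2}), to_bits({0}), to_bits({0, 1})

# out[S] = gen[S] U (in[S] - kill[S]), as bitwise OR / AND-NOT
out_s = gen_s | (in_s & ~kill_s)
```

Union, intersection, and difference become single machine-word operations, which is why compilers favor this representation.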
       Algorithm to compute
        reaching definitions
Input: cfg for which kill[B] & gen[B] have
       been computed for each block B
output: in[B], out[B] for each block B

method: use an iterative approach, starting with
the estimate that in[B] is empty for all B.
We use a boolean variable, change, to record on
each pass through the blocks whether in has
changed; if not, we're finished.

                                                   218
       Algorithm to compute
     reaching definitions (cont)
For each block B do out[B] = gen[B]
change = true
while change do begin
      change = false
      for each block B do begin
             in[B] = U out[P], P a predecessor of B
             oldout = out[B]
             out[B] = gen[B] U (in[B] - kill[B])
             if out[B] != oldout then change = true
      end for
end while
                                                 219
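The pseudocode above can be sketched in Python, with the CFG given as predecessor lists and gen/kill as precomputed sets (the block and definition names below are made up for illustration):

```python
def reaching_definitions(blocks, preds, gen, kill):
    # initial estimate: out[B] = gen[B], in[B] empty for all B
    out = {b: set(gen[b]) for b in blocks}
    in_ = {b: set() for b in blocks}
    change = True
    while change:                  # iterate until a fixed point
        change = False
        for b in blocks:
            # in[B] is the union of out[P] over predecessors P
            in_[b] = set().union(*(out[p] for p in preds[b]))
            oldout = out[b]
            out[b] = gen[b] | (in_[b] - kill[b])
            if out[b] != oldout:
                change = True
    return in_, out

# tiny CFG: B1 -> B2 -> B3, with a back edge B3 -> B2;
# d1 and d2 define the same variable, so each kills the other
blocks = ["B1", "B2", "B3"]
preds  = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}
gen    = {"B1": {"d1"}, "B2": {"d2"}, "B3": {"d3"}}
kill   = {"B1": {"d2"}, "B2": {"d1"}, "B3": set()}
in_, out = reaching_definitions(blocks, preds, gen, kill)
```

Here d1 reaches the entry of B2 from B1, but is killed inside B2, so only d2 and d3 leave B3.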
Flow graph to illustrate
 reaching definitions




                           220
     Computation of in and out sets
Block B   initial    initial    pass 1     pass 1     pass 2     pass 2
          in[B]      out[B]     in[B]      out[B]     in[B]      out[B]

B1        000 0000   111 0000   000 0000   111 0000   000 0000   111 0000
B2        000 0000   000 1100   111 0011   001 1110   111 1111   001 1110
B3        000 0000   000 0010   001 1110   000 1110   001 1110   000 1110
B4        000 0000   000 0001   001 1110   001 0111   001 1110   001 0111
                                                                          221
