
                            Test Metrics

•       In order to properly manage testing, we need to:

    –     Define our test goals
    –     Define or develop test metrics
    –     Gather the data for the test metrics
    –     Use the gathered data and the test metrics to help manage:

         1.   Testing activities
         2.   Product quality assessment
         3.   Projecting / predicting the future

                          Test Progress Metric

•       The purpose of this metric is to track and manage
        the progress of testing activities over time.

•       A metric used by IBM (Rochester) is:

    –      (# of test cases executed) / (time unit)


•       This metric is tracked, across the scheduled test
        time, with three sets of numbers:

    1.    Planned
    2.    Attempted
    3.    Successfully completed
            Graphical Example of Test Cases per Time Unit
                       (Cumulative Graph)

[Figure: cumulative curves of planned, attempted, and successfully completed
test cases (successful completion does not mean no defects were found in the
code), plotted as # of test cases against time units.]
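As a minimal illustration (in Python, using hypothetical weekly counts), the
cumulative series plotted in such a chart could be computed as follows:

    from itertools import accumulate

    # Hypothetical weekly counts of test cases: planned, attempted, and
    # successfully completed (successful = ran to completion, not defect-free).
    planned_per_week    = [20, 25, 30, 30, 25, 20]
    attempted_per_week  = [18, 22, 27, 26, 24, 19]
    successful_per_week = [15, 20, 24, 23, 22, 18]

    # Cumulative series: one point per time unit on the chart.
    cum_planned    = list(accumulate(planned_per_week))
    cum_attempted  = list(accumulate(attempted_per_week))
    cum_successful = list(accumulate(successful_per_week))

    for week, (p, a, s) in enumerate(zip(cum_planned, cum_attempted, cum_successful), 1):
        print(f"week {week}: planned={p} attempted={a} successful={s}")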
    Discussion About the Test Progress Metric
•       The chart not only shows the progress, but is also used as a
        mechanism to trigger action:
    –      If the cumulative # of test cases attempted falls below the
           planned number by some predetermined threshold, we may need
           to increase resources
    –      If the % of successful tests to attempted tests is lower than some
           expected %, then we may need to look at testing procedures and
           techniques
•       This is a cumulative chart. Thus, for large projects with
        multiple testing areas progressing simultaneously, we may
        need a progress chart for each test group to ensure that
        no one group lags behind.
•       Instead of # of test cases, we may assign weights (e.g., 10 to 1) to
        each test case. Weighted test cases allow us to differentiate
        important or difficult ones from the others. The test progress
        metric may then be modified to the form below; both the trigger
        checks and the weighted variant are sketched after the formula:

                    (Number of test points executed) / (time unit)
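A minimal sketch (with hypothetical threshold values and weights) of the two
trigger checks above and of the weighted test-point variant:

    # Hypothetical thresholds for the trigger actions described above.
    ATTEMPT_SHORTFALL_THRESHOLD = 10      # cumulative test cases behind plan
    MIN_SUCCESS_RATE = 0.80               # successful / attempted

    def check_progress(cum_planned, cum_attempted, cum_successful):
        """Flag the two conditions that may trigger management action."""
        if cum_planned - cum_attempted > ATTEMPT_SHORTFALL_THRESHOLD:
            print("Behind plan: consider increasing test resources.")
        if cum_attempted and cum_successful / cum_attempted < MIN_SUCCESS_RATE:
            print("Low success rate: review testing procedures and techniques.")

    # Weighted variant: test points executed per time unit instead of raw counts.
    # Each executed case carries a weight (e.g., 10 = important/difficult, 1 = trivial).
    executed_case_weights_this_week = [10, 7, 3, 1, 1, 5]
    print("test points this week:", sum(executed_case_weights_this_week))

    check_progress(cum_planned=120, cum_attempted=105, cum_successful=80)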
           Test-Defect Arrival Pattern

• Track defect arrivals by:
  – Time
  – Test phase
• Tracking in-process test-defect arrivals
  should be performed as follows:
  – If possible, use past defect arrival data from a
    “like” product as the baseline
  – Use weeks or days as the time unit for tracking
  – Use the number of defects discovered in each
    time unit
               Tracking Test-Defect Arrival Example

[Figure: # of defects discovered per week, showing the historical baseline
curve, the current product's actual arrivals up to the current date, and the
current product's projection out to the release date.]
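A minimal sketch (with hypothetical weekly counts) of comparing the current
product's defect arrivals against a baseline from a “like” product:

    # Hypothetical defect arrivals per week: historical baseline from a
    # "like" product versus the current product's actuals so far.
    baseline_arrivals = [20, 45, 80, 95, 70, 40, 20, 10]
    current_arrivals  = [25, 50, 90, 100]          # weeks elapsed to date

    for week, current in enumerate(current_arrivals, 1):
        baseline = baseline_arrivals[week - 1]
        print(f"week {week}: current={current} baseline={baseline} "
              f"delta={current - baseline:+d}")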
    Discussion about Tracking Defect Arrivals
• Clearly we would like to see the defect arrivals “peak” as early as
  possible, assuming that the arrivals will decrease in the same
  manner as the “baseline” pattern.

• Tracking the test-defect arrival pattern provides us with:

   – In-process information about the current product
   – Comparative information against “baseline”
   – Potential projections:
       • Current product quality versus the baseline product
       • Defect arrival pattern after the testing period.

• The defect types or defect severities can also be included in the
  tracking (see the sketch after this list):

   – % of high-severity defects relative to total defect arrivals by time
     unit
   – Arrivals of different problem types by time unit
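A minimal sketch (with hypothetical severity counts) of tracking the share of
high-severity arrivals per time unit:

    # Hypothetical per-week arrivals broken out by severity (1 = highest).
    arrivals_by_week = [
        {1: 4, 2: 10, 3: 26},   # week 1
        {1: 6, 2: 18, 3: 41},   # week 2
        {1: 3, 2: 15, 3: 32},   # week 3
    ]

    for week, counts in enumerate(arrivals_by_week, 1):
        total = sum(counts.values())
        high_pct = 100.0 * counts.get(1, 0) / total
        print(f"week {week}: {total} defects, {high_pct:.1f}% severity 1")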
                      Defect Backlog

• Defect backlog is a metric that tracks how many unresolved
  defects exist at any instant in time. It can be
  represented as follows:

   Backlog = (# of new defects arrived + # of unfixed defects)
             - (# of fixed defects)


• Releasing a product with large defect backlog would
  clearly be inviting problems later.
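A minimal sketch (with hypothetical weekly arrivals and fixes) of tracking the
backlog week by week; the previous week's unfixed defects are carried forward,
matching the formula above:

    # Hypothetical weekly arrivals and fixes; the running backlog is
    # (carried-over unfixed defects + new arrivals) - fixes this week.
    arrived_per_week = [30, 45, 60, 50, 35]
    fixed_per_week   = [10, 25, 40, 55, 45]

    backlog = 0
    for week, (arrived, fixed) in enumerate(zip(arrived_per_week, fixed_per_week), 1):
        backlog = backlog + arrived - fixed
        print(f"week {week}: backlog = {backlog}")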
          Graphical Example of Defect Backlog

[Figure: # of defects over time (in weeks), showing curves for defects
arrived, defects fixed, and the defect backlog, with the target backlog
level marked at the target release date.]
       Discussions on Defect Backlog
• Goal: a “goal” should be set for the defect backlog prior to
  product release; this allows a “go or no-go” decision.
• Early Focus: while keeping the backlog low is important, do not
  focus on the backlog too early in the test cycle, or the team may
  decide not to focus on defect discovery.
• Developer Help: extra developer resources may be needed to help
  fix defects if the backlog continues to stay high; the test manager
  and the development manager must work together.
• Severity: backlog defects should also be broken out by severity and
  type; perhaps the high-severity backlog should get the most attention.
                         Some Other Test Metrics
•   (a pseudo test metric) Lines of code or function points tracked over
    development phases; if this changes, then perhaps testing resources and
    effort should be altered.

•   Stress Test metric:
     – Percentage of CPU utilization over a period of time
     – # of transactions per time unit over a period of time

•   # of system crashes and re-IPLs per unit of time (a test metric by defect type)

•   Mean time to unplanned IPL (MTI), sketched after this list:
     – MTI = H / (I + 1)
         • H = total hours of test execution
         • I = total # of unplanned IPLs
         • The 1 is added to the denominator to side-step division by zero when I = 0

•   Number of “Show-Stoppers” or “Critical” Problems
     – # of these problems, or % of these problems relative to total number of problems
     – where these occurred

•   Cyclomatic number of the software (complexity number = # of basis paths)
     – # of basis paths covered by the testing
         • # of problems found on each basis path
         • # of problems fixed for each basis path
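A minimal sketch of the MTI computation (with hypothetical hours and IPL
counts):

    def mean_time_to_unplanned_ipl(hours_of_execution, unplanned_ipls):
        """MTI = H / (I + 1); the +1 side-steps division by zero when I = 0."""
        return hours_of_execution / (unplanned_ipls + 1)

    # Hypothetical figures: 480 test-execution hours with 3 unplanned IPLs,
    # then the same hours with no unplanned IPLs.
    print(mean_time_to_unplanned_ipl(480, 3))   # 120.0 hours
    print(mean_time_to_unplanned_ipl(480, 0))   # 480.0 hours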
      Some More Metrics Related to Testing
1.   % of test cases attempted versus planned – an indicator of test
     progress (project progress)

2.   Number of defects per executed test case – an indicator of test
     case effectiveness or product quality

3.   Number of failing test cases without resolution – an indicator of
     test process effectiveness

4.   % of test cases that “passed” (no problem found) versus
     executed – an indicator of product quality

5.   % of failed fixes – an indicator of fix/change process quality

6.   % of code or functional completeness – an indicator of product
     completeness or product quality
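A minimal sketch (with hypothetical end-of-week totals) of deriving several of
these indicators:

    # Hypothetical end-of-week totals.
    planned, attempted, passed = 200, 170, 140
    defects_found = 65
    failing_without_resolution = 12
    fixes_shipped, fixes_that_failed = 50, 4

    print("% attempted vs planned:", 100 * attempted / planned)
    print("defects per executed test case:", defects_found / attempted)
    print("failing test cases without resolution:", failing_without_resolution)
    print("% passed vs executed:", 100 * passed / attempted)
    print("% failed fixes:", 100 * fixes_that_failed / fixes_shipped)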
          Product Ready for Release?


• During project planning, the goals for release should have been set
  (these goals will differ depending on the type of product); a small
  go/no-go sketch follows this list:

     •   System stability (mean time to failure?)
     •   Defect volume and trend (defect arrival pattern?)
     •   Outstanding critical problems (backlog by problem type?)
     •   Beta customer feedback (% of satisfied customers?)
     •   Others – e.g., % of testers recommending release?
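A minimal go/no-go sketch (the goal names and values are hypothetical, not
from the source):

    # Hypothetical release goals and current measurements.
    release_goals = {
        "mean_time_to_failure_hours": 200,   # at least
        "open_critical_problems": 0,         # at most
        "satisfied_beta_customers_pct": 90,  # at least
    }
    current = {
        "mean_time_to_failure_hours": 240,
        "open_critical_problems": 1,
        "satisfied_beta_customers_pct": 93,
    }

    go = (current["mean_time_to_failure_hours"] >= release_goals["mean_time_to_failure_hours"]
          and current["open_critical_problems"] <= release_goals["open_critical_problems"]
          and current["satisfied_beta_customers_pct"] >= release_goals["satisfied_beta_customers_pct"])
    print("GO" if go else "NO-GO")   # prints NO-GO: one critical problem is still open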

								