Software Reliability Estimates/Projections, Cumulative & Instantaneous
Dave Dwyer

With help from:
Ann Marie Neufelder, John D. Musa, Martin Trachtenberg, Thomas Downs,
Ernest O. Codier, and the Faculty of Rivier College Graduate School of
Math and Computer Science
      Martin Trachtenberg (1985):
• Simulation shows that, with respect to the number of detected errors:
     – Testing the functions of the software system in a random or round-robin
       order gives linearly decaying system error rates.
     – Testing each function exhaustively, one at a time, gives flat system
       error rates.
     – Testing different functions at widely different frequencies gives
       exponentially decaying system error rates [operational profile testing], and
     – Testing strategies which result in linearly decaying error rates tend to
       require the fewest tests to detect a given number of errors (a toy
       simulation of this comparison is sketched below).
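A minimal Monte Carlo sketch of that comparison, under made-up assumptions (the fault counts, detection probability, and strategy definitions below are illustrative, not Trachtenberg's actual simulation): each test exercises one function, and every fault still present in that function is found with a fixed probability. Different function-selection strategies then give differently shaped detection curves.

```python
import random

def simulate(pick_function, n_functions=20, faults_per_function=5,
             detect_prob=0.3, n_tests=2000, seed=1):
    """Toy model: each test exercises one function; every fault still present
    in that function is detected (and removed) with probability detect_prob."""
    rng = random.Random(seed)
    remaining = [faults_per_function] * n_functions
    cumulative, total = [], 0
    for t in range(n_tests):
        f = pick_function(t, n_functions, rng)
        found = sum(rng.random() < detect_prob for _ in range(remaining[f]))
        remaining[f] -= found
        total += found
        cumulative.append(total)
    return cumulative

# Toy versions of three of the strategies compared above (20 functions assumed):
round_robin = lambda t, n, rng: t % n                      # uniform rotation
one_at_a_time = lambda t, n, rng: min(t // 100, n - 1)     # exhaust each function in turn
weights = [1.0 / (i + 1) ** 2 for i in range(20)]          # widely different frequencies
operational = lambda t, n, rng: rng.choices(range(n), weights=weights)[0]

for name, strategy in [("round robin", round_robin),
                       ("one at a time", one_at_a_time),
                       ("operational profile", operational)]:
    # Cumulative detections at a few checkpoints; the local slope is the error rate.
    print(f"{name:20s}", simulate(strategy)[199::400])
```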

             Thomas Downs (1985):
“In this paper, an approach to the modeling of
software testing is described. A major aim of this
approach is to allow the assessment of the effects
of different testing (and debugging) strategies in
different situations. It is shown how the techniques
developed can be used to estimate, prior to the
commencement of testing, the optimum allocation
of test effort for software which is to be
nonuniformly executed in its operational phase.”
 There are Two Basic Types of Software
           Reliability Models
• Predictors - predict the reliability of software at some future time. The
  prediction is made prior to development or test, as early as the concept
  phase, and is normally based on historical data.
• Estimators - estimate the reliability of software at some present or future
  time, based on data collected from current development and/or test.
  Normally used later in the life cycle than predictors.
     A Pure Approach Reflects the
       True Nature of Software
• The execution of software takes the form of the execution of a sequence of
  paths, drawn from the program's M paths in total.
• The actual number of paths affected by an
  arbitrary fault is unknown and can be treated as a
  random variable, c.
• Not all paths are equally likely to be executed in a
  randomly selected execution profile.


[Diagram: program paths branching from Start, with faults x1 ... xN marked on
them; x1 affects 2 paths, x2 affects 1 path. 'M' = total paths, 'N' = total
faults initially, 'c' = paths affected by an arbitrary fault.]
Further...
• In the operational phase of many large software systems, some sections of
  code are executed much more frequently than others.
• Faults located in heavily used sections of code are much more likely to be
  detected early.




Downs (IEEE Trans. on SW Eng. April, 1985)
 Showed that Approximations can be Made

• Each time a path is selected for testing, all paths are equally likely to
  be selected.
• The actual number of paths affected by an arbitrary fault is a constant.




My Data Assumptions
• Cumulative 8-hour test shifts are recorded vs. the number of errors.
• Each first instance of an error is plotted.
• The last data point is placed at the end of the test time, even though no
  error occurred there, because a long interval without error is more
  significant than an interval with an error (a data-preparation sketch
  follows below).
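A small data-preparation sketch of these assumptions. The problem-report log, field names, and total test time below are hypothetical, made up for illustration: only first instances are counted, and the series is closed at the end of test even though no error occurred there.

```python
# Hypothetical log: (cumulative 8-hour test shifts, problem report, first instance?)
log = [
    (1.0, "PR-101", True), (1.5, "PR-102", True), (1.5, "PR-101", False),
    (3.0, "PR-103", True), (6.5, "PR-104", True),
]
total_shifts = 10.0          # test ended here with no further errors observed

points, count = [], 0
for shifts, report, first_instance in log:
    if first_instance:       # plot only the first instance of each problem
        count += 1
        points.append((shifts, count))

# Close the series at the end of test even though no error occurred there:
# a long error-free interval is more significant than an interval with an error.
points.append((total_shifts, count))
print(points)   # [(1.0, 1), (1.5, 2), (3.0, 3), (6.5, 4), (10.0, 4)]
```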
Other Assumptions
• Only integration & system test data are used.
• Problems will be designated as priority 1, 2 or 3 (Ref. DoD-STD-2167A), where:
     – Priority 1: "Prevents mission essential capability"
     – Priority 2: "Adversely affects mission essential capability with no
       alternative workaround"
     – Priority 3: "Adversely affects mission essential capability with
       alternative workaround"


Downs Showed λ ~ faults/path

• λj = (N – j)·φ, where:
   – N = the total number of faults,
   – j = the number of corrected faults,
   – φ = –r·log(1 – c/M), where:
        • r = the number of paths executed per unit time,
        • c = the average number of paths affected by each fault, and
        • M = the total number of paths

Failure Rate is proportional to failure number,
Downs: λj ≈ (N – j)·r·(c/M)
(since –log(1 – c/M) ≈ c/M when c/M is small, i.e., each fault affects only a
small fraction of the M paths).
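A short numerical sketch of the two forms above. The parameter values are made up for illustration, and the natural logarithm is assumed: when c/M is small, –r·log(1 – c/M) is close to r·c/M, so the exact and approximate failure rates nearly coincide.

```python
import math

def downs_failure_rate(j, N, r, c, M, approximate=False):
    """Failure rate after j faults have been corrected, in Downs' form:
    exact        lambda_j = (N - j) * (-r * ln(1 - c/M))
    approximate  lambda_j = (N - j) * r * c / M
    """
    phi = r * c / M if approximate else -r * math.log(1.0 - c / M)
    return (N - j) * phi

# Illustrative (made-up) numbers: 50 initial faults, 200 paths executed per
# unit time, each fault affecting 3 of 10,000 paths on average.
N, r, c, M = 50, 200.0, 3.0, 10_000.0
for j in (0, 10, 25, 40):
    exact = downs_failure_rate(j, N, r, c, M)
    approx = downs_failure_rate(j, N, r, c, M, approximate=True)
    print(f"j={j:2d}  exact={exact:.4f}  approx={approx:.4f}")
```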




 Failure rate plots against failure number for a range
  of non-uniform testing profiles, M1, M2 paths and
  N1, N2 initial faults in those paths. (Logarithmic?)




       Imagine two main segments

[Diagram: the code divided into two main segments, Segment 1 and Segment 2]




After testing segment 1, someone asks:

• Given 10 faults found, what's the reliability of the code?
• Responses:
     – We don't know how many other faults remain in section 1, let alone how
       many are in section 2.
     – We don't know how often sections 1 and 2 are used.
     – Did we plot failure intensity vs. faults?
     – Why didn't we test to the operational profile?
By reference to Duane's derivation for hardware reliability
(Ref. E. O. Codier, RAMS - 1968)




    Instantaneous Failure Intensity
             for Software
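A minimal sketch of the Duane/Codier relationship referenced on the previous slide, under the standard assumption that the cumulative failure intensity n(T)/T falls linearly on log-log paper with slope –α and that the instantaneous failure intensity equals (1 – α) times the cumulative value; the data points below are hypothetical.

```python
import math

# Hypothetical data: (cumulative 8-hour test shifts, cumulative first-instance errors)
data = [(1.0, 3), (2.0, 5), (4.0, 8), (8.0, 12), (16.0, 17), (24.0, 20)]

# Least-squares fit of log(cumulative failure intensity) against log(test time);
# the Duane slope is -alpha.
xs = [math.log(t) for t, n in data]
ys = [math.log(n / t) for t, n in data]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
alpha = -slope

T, n = data[-1]
lambda_cum = n / T                        # cumulative failure intensity at end of test
lambda_inst = (1.0 - alpha) * lambda_cum  # instantaneous failure intensity
print(f"alpha = {alpha:.3f}, cumulative = {lambda_cum:.3f}/shift, "
      f"instantaneous = {lambda_inst:.3f}/shift")
```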




             Priority 1 Data Plotted




       Priority 1 and 2 Data Plotted




 Point Estimates vs Instantaneous




For a copy of the paper, e-mail a request to:

     david.j.dwyer@baesystems.com

								