                    Objectives of Test Planning

• Record, organize and guide testing activities

• Schedule testing activities according to the test
  strategy and project deadlines

• Describe how confidence will be demonstrated to
  customers

• Provide a basis for re-testing during system
  maintenance

• Provide a basis for evaluating and improving the
  testing process
      IEEE 829 Standard Test Plan

• Revision: 1998

• Describes the scope, approach, resources, and schedule
  of intended testing activities.

• Identifies
   – test items,
   – features to be tested,
   – testing tasks,
   – who will do each task, and
   – any risks requiring contingency planning.
        IEEE 829 Test Plan Outline

1. Test plan identifier

2. Introduction (refers to Project plan, Quality
   assurance plan, Configuration management plan, etc.)

3. Test items - identify test items including
   version/revision level (e.g., requirements, design,
   code, etc.)

4. Features to be tested

5. Features not to be tested

6. Testing Approach

7. Significant constraints on testing (test item
   availability, testing-resource availability, etc.)

8. Item pass/fail criteria

9. Suspension criteria and resumption requirements

10. Test deliverables (e.g. test design specifications,
    test cases specifications, test procedure
    specifications, test logs, test incident reports,
    test summary report)

11. Testing tasks

12. Environmental needs

13. Responsibilities

14. Staffing and training needs

15. Schedule

16. Risks and contingencies

17. Approvals

18. References
  IEEE 829 Test Design Specification

1. Test design specification identifier

2. Features to be tested
   –   Features addressed by document

3. Approach refinements

4. Test identification
   –   Identifier and description of test cases
       associated with the design

5. Features pass/fail criteria
   –   Pass/fail criteria for each feature
    IEEE 829 Test Case Specification

1. Test case specification identifier

2. Test items
   –   Items and features to be exercised by the test case.

3. Input specifications
   –   Input required to execute the test case: databases,
       files, etc.

4. Output specifications

5. Environmental needs

6. Special procedural requirements

7. Inter-case dependencies
     IEEE 829 Test Procedure Outline
1. Purpose
2. Special requirements
3. Procedure steps
    a) Log – how to log results
    b) Set Up – how to prepare for testing
    c) Start – how to begin procedure execution
    d) Proceed – procedure actions
    e) Measure – how test measurements will be made
    f) Shut Down – how to suspend testing procedure
    g) Restart – how to resume testing procedure
    h) Stop – how to bring execution to an orderly halt
    i) Wrap Up – how to restore the environment
    j) Contingencies – how to deal with anomalous events during
       execution
           IEEE 829 Test Log Outline

1.   Test log identifier

2. Description
     –   Information on all the entries in the log

3. Activity and event entries
     a) Execution Description
     b) Procedure Results - observable results (e.g. messages)
     c) Environmental Information specific to the entry
     d) Anomalous Events (if any)
     e) Incident Report Identifiers (identifier of test incident
        reports if any generated)
    IEEE 829 Test Incident Report (1)
1. Test incident report identifier
2. Summary - items involved, references to linked documents (e.g.
   procedure, test case, log)
3. Incident description
    a. Inputs
    b. Expected results
    c. Actual results
    d. Date and time
    e. Anomalies
    f. Procedure step
    g. Environment
    h. Attempts to repeat
    i. Testers
    j. Observers
   IEEE 829 Test Incident Report (2)

4. Impact – on testing process
   –   S: Show Stopper – testing totally blocked,
       bypass needed
   –   H: High - major portion of test is partially
       blocked, test can continue with severe
       restrictions, bypass needed
   –   M: Medium - test can continue but with minor
       restrictions, no bypass needed
   –   L: Low – testing not affected, problem is of low
       importance; no bypass needed
      IEEE 829 Test Summary Report
1. Test summary report identifier
2. Summary - Summarize the evaluation of the test items,
   references to plans, logs, incident reports
3. Variances – of test items (from specification), plan,
   procedure, ...
4. Comprehensive assessment - of testing process against
   comprehensiveness criteria specified in test plan
5. Summary of results – issues (resolved, unresolved)
6. Evaluation - overall evaluation of each test item including its
   limitations
7. Summary of activities
8. Approvals - names and titles of all persons who must approve
   the report
                System Testing

• Performed after the software has been
  integrated

• Test of entire system, as customer would see it.

• High-order testing criteria should be expressed in
  the specification in a measurable way.
                  System Testing
• Check if system satisfies requirements for:
   – Functionality
   – Reliability
   – Recovery
   – Multitasking
   – Device and Configuration
   – Security
   – Compatibility
   – Stress
   – Performance
   – Serviceability
   – Ease/Correctness of installation
                  System Testing

• Acceptance Tests
   – System tests carried out by customers or under
     customers’ supervision
   – Verifies if the system works according to the customers’
     expectations
• Common Types of Acceptance Tests
   – Alpha testing: end user testing performed on a system
     that may have incomplete features, within the
     development environment
      – Performed by an in-house testing panel including
        end-users
   – Beta testing: end-user testing performed within the
     user environment.
              Functional Testing

• Ensure that the system supports its functional
  requirements

• Test cases are derived from the statement of
  requirements, in:
   – traditional form
   – use cases
Deriving Test Cases from Requirements

• Involve clarification and restatement of the requirements to
  put them into a testable form.
   – Obtain a point form formulation
       – Enumerate single requirements
       – Group related requirements
   – For each requirement:
       – Create a test case that demonstrates the
         requirement is satisfied
       – Create a test case that attempts to falsify the
         requirement
           – For example: try something forbidden.
       – Test boundaries and constraints when possible
Deriving Test Cases from Requirements
• Example: Requirements for a video rental system
• The system shall allow rental and return of films
   – 1. If a film is available for rental then it may be lent to a
      member
       – 1.1 A film is available for rental until all copies have
         been simultaneously borrowed.
   – 2. If a film was unavailable for rental, then returning the
     film makes it available.
   – 3. The return date is established when the film is lent
     and must be shown when the film is returned.
   – 4. It must be possible for an inquiry on a rented film to
     reveal the current borrower.
   – 5. An inquiry on a member will reveal any films they
     currently have on rental.
Deriving Test Cases from Requirements

•   Test situations for requirement 1
    – Attempt to borrow an available film.
    – Attempt to borrow an unavailable film.

•   Test situations for requirement 1.1
    – Attempt to borrow a film for which there are
      multiple copies, all of which have been rented.
    – Attempt to borrow a film for which all copies
      but one have been rented.
Deriving Test Cases from Requirements

• Test situations for requirement 2
    – Borrow an unavailable film.
    – Return a film and borrow it again.

•   Test situations for requirement 3.
    – Borrow a film, return it and check dates
    – Check date on a non-returned film.
Deriving Test Cases from Requirements

• Test situations for requirement 4
    – Inquiry on rented film.
    – Inquiry on returned film.
    – Inquiry on a film that has just been returned.

•   Test situations for requirement 5
    – Inquiry on member with no films.
    – Inquiry on member with 1 film.
    – Inquiry on member with multiple films.
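The test situations for requirements 1, 1.1, and 2 can be sketched as executable checks against a toy rental model. The `RentalSystem` class and its method names are assumptions invented for illustration, not part of the requirements:

```python
class RentalSystem:
    """Toy in-memory model of the rental requirements (illustration only)."""
    def __init__(self, copies):
        self.copies = dict(copies)          # film -> available copies

    def borrow(self, film):
        if self.copies.get(film, 0) == 0:   # requirement 1: refuse unavailable film
            raise ValueError("film unavailable")
        self.copies[film] -= 1

    def return_film(self, film):            # requirement 2: return restores availability
        self.copies[film] = self.copies.get(film, 0) + 1

# Requirement 1: borrow an available film, then attempt an unavailable one.
s = RentalSystem({"Alien": 1})
s.borrow("Alien")                            # available -> succeeds
try:
    s.borrow("Alien")                        # all copies out -> must be refused
    refused = False
except ValueError:
    refused = True
assert refused

# Requirement 1.1 boundary: all copies but one rented -> still available.
s2 = RentalSystem({"Brazil": 2})
s2.borrow("Brazil")
s2.borrow("Brazil")                          # last copy is still lendable

# Requirement 2: returning the film makes it available again.
s.return_film("Alien")
s.borrow("Alien")                            # succeeds after the return
```

Note how each requirement gets one demonstrating case, one falsifying case, and a boundary case, as the derivation procedure prescribes.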
    Deriving Test Cases from Use Cases

•   For all use cases:
    1.   Develop a graph of scenarios
    2. Determine all possible scenarios
    3. Analyze and rank scenarios
    4. Generate test cases from scenarios to meet a
       coverage goal
    5. Execute test cases
                   Scenario Graph

• Generated from a use case
• Nodes correspond to points where the system waits for an event
   – environment event, system reaction
• There is a single starting node
• End of use case is finish node
• Edges correspond to event occurrences
   – May include conditions and looping edges
• Scenario:
   – Path from starting node to a finish node
           Use Case Scenario Graph (1)
Title: User login                          1a: card is
                                            not valid    1
Actors: User
                                                 1a.1    2
Precondition: System is ON
1. User inserts a card                   1a.2
2. System asks for personal                                           4a.1
   identification number (PIN)
                                    4b:PIN invalid       4
3. User types PIN                  and attempts ≥ 4
                                                          4a:PIN invalid and
4. System validates user                          4b.1   5 attempts < 4
5. System displays a welcome             4a.2            6
   message to user
6. System ejects card
Postcondition: User is logged in
          Use Case Scenario Graph (2)
Alternatives:                                1a: card is
                                              not valid    1
1a: Card is not valid
                                                   1a.1    2
1a.1: System emits alarm
1a.2: System ejects card                   1a.2
4a: User identification is invalid                                      4a.1
   AND number of attempts < 4         4b:PIN invalid       4
                                     and attempts ≥ 4
4a.1 Ask for PIN again and go                               4a:PIN invalid and
   back                                             4b.1   5 attempts < 4

4b: User identification is invalid
                                           4a.2            6
  AND number of attempts ≥ 4
4b.1: System emits alarm
4b.2: System ejects card

• Paths from start to finish

• The number of times loops are taken needs to be restricted
  to keep the number of scenarios finite.

   ID   Events                           Description
   1    1-2-3-4-5-6                      User login with regular card.
                                         Correct PIN on first try. Normal
                                         scenario.
   2    1-1a.1-1a.2                      User login with non-regular card.
   3    1-2-3-4-2-3-4-5-6                User login with regular card.
                                         Wrong PIN on first try. Correct
                                         PIN on second try.
   4    1-2-3-4-(2-3-4)×3-4b.1-4b.2      User login with regular card.
                                         Wrong PIN on all four tries.
                  Scenario Ranking

• If there are too many scenarios to test:
   – Ranking may be based on criticality and frequency
   – Can use operational profile, if available
       – “Operational profile”: statistical measurement of
         typical user activity of the system.
       – Example: what percentage of users would typically be
         using any particular feature at any time.

• Always include main scenario
   – Should be tested first
             Test Case Generation

• Satisfy a coverage goal. For example:
   – All branches in graph of scenarios (minimal
     coverage goal)
   – All scenarios
   – n most critical scenarios
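The scenario-enumeration step can be sketched as a bounded graph walk. The adjacency list below is a hand-built approximation of the ATM login graph; the finish-node set and the loop bound of two visits per node are modelling assumptions:

```python
# Hand-built scenario graph for the ATM login use case. Finish nodes and
# the loop bound (two visits per node) are modelling assumptions.
GRAPH = {
    "1":    [("2", "card ok"), ("1a.1", "card invalid")],
    "1a.1": [("1a.2", "eject card")],
    "2":    [("3", "ask PIN")],
    "3":    [("4", "type PIN")],
    "4":    [("5", "PIN ok"), ("2", "PIN bad, retry"),
             ("4b.1", "PIN bad, last try")],
    "4b.1": [("4b.2", "eject card")],
    "5":    [("6", "welcome")],
}
FINISH = {"6", "1a.2", "4b.2"}

def scenarios(node="1", path=("1",), max_visits=2, counts=None):
    """Yield every start-to-finish path, bounding revisits to keep the
    number of scenarios finite (the loop restriction from the slides)."""
    counts = counts or {}
    if node in FINISH:
        yield path
        return
    for nxt, _event in GRAPH.get(node, []):
        if counts.get(nxt, 0) >= max_visits:
            continue
        new_counts = dict(counts)
        new_counts[nxt] = new_counts.get(nxt, 0) + 1
        yield from scenarios(nxt, path + (nxt,), max_visits, new_counts)

paths = list(scenarios())
```

With this bound the walk yields five scenarios, including the main course 1-2-3-4-5-6 and the invalid-card path 1-1a.1-1a.2; raising `max_visits` models more PIN retries.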
                 Example of Test Case

Test Case: TC1
Goal: Test the main course of events for the ATM system.
Scenario Reference: 1
Setup: Create a Card #2411 with PIN #5555 as valid user
   identification, System is ON
Course of test case

# External event            Reaction                                Comment
1   User inserts card #2411 System asks for Personal
                            Identification Number (PIN)
2   User types PIN #5555    System validates user identification.
                            System displays a welcome message
                            to the user.

Pass criteria: User is logged in
           Forced-Error Test (FET)

• Objective: to force the system into all error conditions
   – Basis: the set of error messages for the system.
• Checks
   – Error-handling design and communication
     methods consistency
   – Detection and handling of common error conditions
   – System recovery from each error condition
   – Correction of unstable states caused by errors
         Forced-Error Test (FET)

• Verification of error messages to ensure:
   – Message matches type of error detected.
   – Description of error is clear and concise.
   – Message does not contain spelling or
     grammatical errors.
   – User is offered reasonable options for getting
     around or recovering from error condition.
           Forced-Error Test (FET)

• How to obtain a list of error conditions?
   – Obtain list of error messages from the developers
   – Interviewing the developers
   – Inspecting the String data in a resource file
   – Information from specifications
   – Using a utility to extract test strings out of the binary
     or scripting sources
   – Analyzing every possible event with an eye to error cases
   – Using your experience
   – Using a standard valid/invalid input test matrix
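A valid/invalid input test matrix can be driven from a small table. `parse_age` below is a hypothetical function invented purely to illustrate the pattern; the matrix rows mix valid cases, boundary cases, and forced errors:

```python
def parse_age(text):
    """Hypothetical input parser used only to illustrate a forced-error matrix."""
    if not text.strip().isdigit():
        raise ValueError("age must be a whole number")
    age = int(text)
    if not 0 <= age <= 150:
        raise ValueError("age out of range 0-150")
    return age

# Each row: (input, expect_error). Valid and invalid cases sit side by side.
MATRIX = [
    ("42", False), ("0", False), ("150", False),   # valid, incl. boundaries
    ("-1", True), ("151", True),                   # just outside boundaries
    ("", True), ("abc", True), ("4 2", True),      # malformed input
]

for text, expect_error in MATRIX:
    try:
        parse_age(text)
        got_error = False
    except ValueError as e:
        got_error = True
        assert str(e)                 # an error message exists and is non-empty
    assert got_error == expect_error, f"case {text!r} misbehaved"
```

The same loop body is where the communication checks go: asserting that the message matches the detected error and offers a recovery option.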
             Forced-Error Test (FET)

•   For each error condition :
    1.   Force the error condition.
    2. Check the error detection logic
    3. Check the handling logic
         –   Does the application offer adequate forgiveness and
             allow the user to recover from mistakes?
         –   Does the application itself handle the error
             condition gracefully?
         –   Does the system recover gracefully?
         –   When the system is restarted, do all services
             restart successfully?
       Forced-Error Test (FET)

4. Check the error communication
   –   Determine whether an error message is displayed
   –   Analyze the accuracy of the error message
   –   Note that the communication can be in
       another medium such as an audio cue or
       visual cue
5. Look for further problems
                 Usability Testing

• Checks the ability to learn and use the system to perform
  required tasks
   – Usability requirements are usually not explicitly specified
• Factors influencing ease of use of system
   – Accessibility: Can users enter, navigate, and exit the
     system with relative ease?
   – Responsiveness: Can users do what they want, when they
     want, in an intuitive/convenient way?
   – Efficiency: Can users carry out tasks in an optimal
     fashion with respect to time, number of steps, etc.?
   – Comprehensibility: Can users quickly grasp how to use
     the system, its help functions, and associated
     documentation?
               Usability Testing

• Typical activities for usability testing
   – Controlled experiments in simulated working
     environments using novice and expert end users
   – Post-experiment protocol analysis by human
     factors experts, psychologists, etc.

• Main objective: collect data to improve the usability of
  the system
            Installability Testing

• Focus on requirements related to installation
   – relevant documentation
   – installation processes
   – supporting system functions
               Installability Testing
• Examples of test scenarios
   – Install and check under the various options given (e.g.
     minimum setup, typical setup, custom setup).
   – Install and check under minimum configuration.
   – Install and check on a clean system.
   – Install and check on a dirty system (loaded system).
   – Install of upgrades targeted to an operating system.
   – Install of upgrades targeted to new functionality.
   – Reduce amount of free disk space during installation
   – Cancel installation midway
   – Change default target installation path
   – Uninstall and check that all program files and installation
     directories (now empty) have been removed.
            Installability Testing

• Test cases should include
   – Start / entry state
   – Requirement to be tested (goal of the test)
   – Install/uninstall scenario (actions and inputs)
   – Expected outcome (final state of the system).
           Serviceability Testing

• Focus on maintenance requirements
  – Change procedures (for various adaptive,
    perfective, and corrective service scenarios)
  – Supporting documentation
  – All system diagnostic tools
   Performance/Stress/Load Testing

• Performance Testing
  – Evaluate compliance to specified performance
    requirements for:
     – Throughput
     – Response time
     – Memory utilization
     – Input/output rates
     – etc.
  – Look for resource bottlenecks
   Performance/Stress/Load Testing

• Stress testing - focus on system behavior at, near
  or beyond capacity conditions
   – Push system to failure
   – Often done in conjunction with performance
     testing
   – Emphasis near specified load, volume
   – Checks for graceful failures, non-abrupt
     performance degradation.
   Performance/Stress/Load Testing

• Load Testing - verifies handling of a particular
  load while maintaining acceptable response times
   – done in conjunction with performance testing
       Performance Testing Phases

• Planning phase

• Testing phase

• Analysis phase
       Performance Testing Process
              Planning phase
• Define objectives, deliverables, expectations

• Gather system and testing requirements
   – environment and resources
   – workload (peak, low)
   – acceptable response time

• Select performance metrics to collect
   – e.g. Transactions per second (TPS), Hits per
     second, Concurrent connections, Throughput, etc.
        Performance Testing Process
               Planning phase
• Identify tests to run and decide when to run them.
   – Often selected functional tests are used as the test
     load
   – Use an operational profile to match “typical” usage.
   – Decide how to run the tests:
      – Baseline Test
      – 2x/3x/4x baseline tests
      – Longevity (endurance) test
• Decide on a tool/application service provider option
   – to generate loads (replicate numerous instances of test
     users)
• Write test plan, design user-scenarios, create test scripts
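A minimal sketch of the baseline / 2x-baseline pattern, assuming the system under test is reachable as a plain callable; `handle_request` here is a stand-in that merely sleeps:

```python
import threading
import time

def handle_request():
    """Stand-in for the system under test (assumption: a callable entry point)."""
    time.sleep(0.001)             # simulate ~1 ms of server work

def run_load(n_users, requests_per_user):
    """Run n_users concurrent virtual users and collect simple metrics."""
    latencies, lock = [], threading.Lock()

    def user():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            handle_request()
            with lock:
                latencies.append(time.perf_counter() - t0)

    start = time.perf_counter()
    threads = [threading.Thread(target=user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return len(latencies) / elapsed, max(latencies)   # TPS, worst response time

# Baseline test, then 2x the baseline load (the slides' 2x/3x/4x pattern).
base_tps, base_worst = run_load(5, 20)
double_tps, double_worst = run_load(10, 20)
```

Comparing the worst response time at each multiple of the baseline is what reveals whether degradation is gradual or abrupt.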
      Performance Testing Process
             Testing Phase
• Testing phase
  – Generate test data
  – Set-up test bed
     – System under test
     – Test environment performance monitors
  – Run tests
  – Collect results data
       Performance Testing Process
             Analysis Phase
• Analyze results to locate source of problems
      – Software problems
      – Hardware problems

• Change system to optimize performance
      – Software optimization
      – Hardware optimization

• Design additional tests (if test objective not met)
                Configuration Testing

• Configuration testing – test all supported hardware and
  software configurations
   – Factors:
       – Hardware: processor, memory
       – Operating system: type, version
       – Device drivers
       – Run-time environments: JRE, .NET

• Consists of running a set of tests under different
  configurations exercising main set of system features
              Configuration Testing

• Huge number of potential configurations

• Need to select configurations to be tested
   – decide the types of hardware that need to be tested
   – select hardware brands, models, device drivers to test
   – decide which hardware features, modes, and options are
     to be tested
   – pare down identified configurations to a manageable set
       – e.g.: based on popularity, age
            Compatibility Testing

• Compatibility testing – test for
   – compatibility with other system resources in
     operating environment
      – e.g., software, databases, standards, etc.
   – source- or object-code compatibility with
     different operating environment versions
   – compatibility/conversion testing
      – when conversion procedures or processes
        are involved
               Security Testing

• Focus on vulnerabilities to unauthorized access or
  use

• Objective: identify any vulnerabilities and protect
  the system:
   – Data
      – Integrity
      – Confidentiality
      – Availability
   – Network computing resources
   Security Testing – Threat Modelling
• To evaluate a software system for security issues
   – Identify areas of the software susceptible to being
     exploited in security attacks.
• Threat Modeling steps:
   1. Assemble threat modelling team (developers, testers,
      security experts)
   2. Identify assets (what could be of interest to attackers)
   3. Create an architecture overview (major technological
      pieces and how they communicate, trust boundaries
      between pieces)
    4. Decompose the application (identify how/where data
       flows through the system and what the data protection
       mechanisms are)
        – Based on data flow and state diagrams
    Security Testing – Threat Modelling
•   Threat Modeling steps
     5. Identify the Threats
         – Consider each component as a target
         – How could components be improperly used?
         – Is it possible to prevent authorized users from
           accessing the system?
         – Could anyone gain access and take control of the system?
     6. Document the Threats (description, target, form of attack,
        etc.)
     7. Rank the Threats based on:
         – Damage potential
         – Reproducibility
         – Exploitability
         – Affected users
         – Discoverability
           Security Testing
      Common System Vulnerabilities
• Buffer overflow
• Command line (shell) execution
• Backdoors
• Web scripting language weakness
• Password cracking
• Unprotected access
• Information leaks
   – Hard coding of id/password information
   – Revealing error messages
   – Directory browsing
                 Security Testing
                 Buffer Overflow
•   One of the most commonly exploited vulnerabilities

•   Caused by:
    1. The fact that in x86 systems, the program stack
       can mix data (local function variables) with
       control information (return addresses) and even
       executable code.
    2. The way the program stack grows in x86 systems
       (downward, from high to low addresses, while
       buffer writes proceed upward).
    3. A lack of boundary checks when writing into a
       buffer in program code (a typical bug).
              Security Testing
              Buffer Overflow
void parse(char *arg)
{
   char param[1024];
   int localdata;
   /* ... arg is copied into param without a bounds check ... */
   return;
}

int main(int argc, char **argv)
{
   parse(argv[1]);   /* untrusted input reaches the vulnerable buffer */
}

Stack layout (high addresses at top):
   Exit address (4 bytes)
   Main stack (N bytes)
   Return address (4 bytes)
   param (1024 bytes)
   localdata (4 bytes)
                 Security Testing
                  SQL injection
• Security attack consisting of:
   – Entering unexpected SQL code in a form in order to
     manipulate a database in unanticipated ways
   – Attacker’s expectation: back-end processing is supported by
     an SQL database
• Caused by:
   – The ability to string multiple SQL statements together
     and to execute them in a batch
   – Using text obtained from the user interface directly in
     SQL statements
                    Security Testing
                     SQL injection
• Example: Designer intends to complete this SQL statement with
  values obtained from two user interface fields.
   SELECT * FROM bank
   WHERE LOGIN = '$id' AND PASSWORD = '$password'

• Malicious user enters:
   Login = ADMIN
   Password = anything' OR 'x'='x

• Result:
   SELECT * FROM bank
   WHERE LOGIN = 'ADMIN' AND PASSWORD = 'anything' OR 'x'='x'
                 Security Testing
                  SQL injection
• Avoidance:
   – Do not copy text directly from input fields through to
     SQL statements.
   – Input sanitizing (define acceptable field contents with
     regular expressions)
   – Escape/quote-safe the input (using predefined quoting
     functions)
   – Use bound parameters (e.g. prepareStatement)
   – Limit database access
   – Use stored procedures for database access
   – Configure error reporting (not to give too much
     information to an attacker)
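The bound-parameter advice can be illustrated with an in-memory SQLite table (Python's sqlite3 used in place of prepareStatement): the driver treats the inputs purely as values, so the 'x'='x' attack from the earlier slide no longer changes the query structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bank (login TEXT, password TEXT)")
conn.execute("INSERT INTO bank VALUES ('ADMIN', 'secret')")

def login_ok(login, password):
    # Bound parameters: attacker text can never terminate the quoted
    # literal and extend the WHERE clause.
    row = conn.execute(
        "SELECT * FROM bank WHERE login = ? AND password = ?",
        (login, password),
    ).fetchone()
    return row is not None

assert login_ok("ADMIN", "secret")                    # legitimate login
assert not login_ok("ADMIN", "anything' OR 'x'='x")   # injection now fails
```

Contrast this with string concatenation, where the same malicious password would have produced a WHERE clause that is always true.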
               Security Testing
• Penetration Testing – try to penetrate a system by
  exploiting crackers’ methods
   – Look for default accounts that were not
     protected by system administrators.

• Password Testing – using password cracking tools
   – Example: passwords should not be words in a
     dictionary
                  Security Testing
• Buffer Overflows - systematical testing of all buffers
   – Sending large amount of data
   – Check boundary conditions on buffers
      – Data that is exactly the buffer size
      – Data with length (buffer size – 1)
      – Data with length (buffer size + 1)
   – Writing escape and special characters
   – Ensure safe String functions are used
• SQL Injection
   – Entering invalid characters in form fields (escape,
     quotes, SQL comments, ...)
   – Checking error messages
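The buffer boundary checks above (size − 1, size, size + 1, plus escape and special characters) can be generated as test data. `safe_copy` is a hypothetical stand-in for the code under test:

```python
def boundary_probes(buffer_size):
    """Probe inputs around a known buffer size: size-1, size, size+1,
    plus strings of escape and special characters."""
    probes = ["A" * n for n in
              (buffer_size - 1, buffer_size, buffer_size + 1)]
    probes.append("%s%n" * 64)                      # format-string specials
    probes.append("\x1b[2J" + "B" * buffer_size)    # escape chars + overflow
    return probes

def safe_copy(data, buffer_size=1024):
    """Hypothetical stand-in for the code under test: enforces the bound."""
    if len(data) > buffer_size:
        raise ValueError("input exceeds buffer")
    return data

for probe in boundary_probes(1024):
    try:
        safe_copy(probe)
    except ValueError:
        pass      # a clean rejection is acceptable; a crash would not be
```

Against real C code the same probes would be fed through the external interface (command line, file, network), watching for crashes rather than exceptions.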
            Concurrency Testing

• Investigate simultaneous execution of multiple
  tasks / threads / processes / applications.
• Potential sources of problems
   – interference among the executing sub-tasks
   – interference when multiple copies are running
   – interference with other executing products
• Tests designed to reveal possible timing errors,
  force contention for shared resources, etc
   – Problems include deadlock, starvation, race
     conditions, and memory consistency errors
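A contention test can be sketched with a shared counter bumped by several threads. Whether the unsynchronized version actually loses updates varies from run to run, which is exactly why such tests must be repeated:

```python
import threading

def parallel_count(n_threads=4, per_thread=20_000, use_lock=True):
    """n_threads workers each bump a shared counter per_thread times."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(per_thread):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1      # unsynchronized read-modify-write: latent race

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

expected = 4 * 20_000
# The locked version must be correct on every run; the unlocked one may or
# may not lose updates on any particular run, so it is executed several times.
locked_ok = all(parallel_count(use_lock=True) == expected for _ in range(3))
racy_results = [parallel_count(use_lock=False) for _ in range(5)]
```

Only assertions about the synchronized variant are deterministic; the racy results are inspected across runs, never asserted from a single run.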
            Multitasking Testing

• Difficulties
   – Test reproducibility not guaranteed
      – varying order of tasks execution
      – the same test may find a problem in one run and fail
        to find any problem in other runs
      – tests need to be run several times
   – Behavior can be platform dependent (hardware,
     operating system, ...)
            Multitasking Testing

• Logging can help detect problems – log when
   – tasks start and stop
   – resources are obtained and released
   – particular functions are called
   – ...
• System model analysis can be effective for finding
  some multitasking issues at the specification level
   – using: FSM based approaches, SDL, TTCN,
     UCM, ...
   – intensively used in telecommunications
                 Recovery Testing
• Ability to recover from failures, exceptional conditions
  associated with hardware, software, or people
   – Detecting failures or exceptional conditions
   – Switchovers to standby systems
   – Recovery of execution state and configuration (including
     security status)
   – Recovery of data and messages
   – Replacing failed components
   – Backing-out of incomplete transactions
   – Maintaining audit trails
   – External procedures
       – e.g. storing backup media or various disaster
         recovery procedures
               Reliability Testing

• Popularized by the Cleanroom development
  approach (from IBM)
• Application of statistical techniques to data
  collected during system development and
  operation (an operational profile) to specify,
  predict, estimate, and assess the reliability of
  software-based systems.
• Reliability requirements may be expressed in
  terms of
   – Probability of no failure in a specified time
   – Expected mean time to failure (MTTF)
Reliability Testing – Statistical Testing
•    Statistical testing based on a usage model.

1.   Development of an operational usage model of the software
2.   Random generation of test cases from the usage model
3.   Interpretation of test results according to mathematical and
     statistical models to yield measures of software quality and
     test sufficiency
Reliability Testing – Statistical Testing

•   Usage model
    – Represents possible uses of the software
    – Can be specified under different contexts; for example:
       – normal usage context
       – stress conditions
       – hazardous conditions
       – maintenance conditions
    – Can be represented as a transition graph where:
       – Nodes are usage states.
       – Arcs are transitions between usage states.
Reliability Testing – Statistical Testing

•   Example – Security alarm system
    – For use on doors, windows, boxes, etc.
       – Has a detector that sends a trip signal when motion is
         detected.
    – Activated by pressing the Set button.
       – Light in the Set button is illuminated when the security
         alarm is set.
    – An alarm is emitted if a trip signal occurs while the device
      is set.
       – A 3-digit code must be entered to turn off the alarm.
       – If a mistake is made while entering the code, the user
         must press the Clear button before retrying.
    – Each unit has a hard-coded deactivation code.
Security alarm – Stimuli and Responses

 Stimulus         Description
 Set (S)          Device is activated
 Trip (T)         Signal from detector
 Bad digit (B)    Incorrect digit for 3-digit code
 Clear (C)        Clear entry
 Good digit (G)   Digit that is part of 3-digit code

 Response         Description

 Light on         Set button illuminated
 Light off        Set button not illuminated
 Alarm on         High-pitched sound activated
 Alarm off        High-pitched sound deactivated
Reliability Testing – Statistical Testing

• Usage model for alarm system
     Usage Model for Alarm System with Probabilities

[State diagram of the usage model omitted]

•   Usage Probabilities
     – Trip stimulus probability is 0.05 in
       states Ready, Entry Error, 1_OK, 2_OK.
     – Other stimuli that cause a state change
       have equal probability.

•   Results in a Markov chain
Reliability Testing – Statistical Testing

•   Usage probabilities are obtained from:
    – field data,
    – estimates from customer interviews,
    – instrumentation of prior versions of the software.

• For the approach to be effective, probabilities
  must reflect future usage
Reliability Testing – Statistical Testing
• Usage Model analysis
• Based on standard calculations on a Markov chain
• Possible to obtain estimates for:
   – Long-run occupancy of each state - the usage profile as a
     percentage of time spent in each state.
   – Occurrence probability - probability of occurrence of
     each state in a random use of the software.
   – Occurrence frequency - expected number of occurrences
     of each state in a random use of the software.
   – First occurrence - for each state, the expected number
     of uses of the software before it will first occur.
   – Expected sequence length - the expected number of
     state transitions in a random use of the software; the
     average length of a use case or test case.
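Long-run occupancy can be computed by power iteration on the transition matrix. The three-state chain below uses made-up probabilities for the sketch, not the alarm model's actual values:

```python
# Illustrative 3-state usage chain; the transition probabilities are
# invented for this sketch and are NOT the alarm model's actual values.
P = {
    "Ready": {"Ready": 0.0, "Entry": 0.9, "Alarm": 0.1},
    "Entry": {"Ready": 0.5, "Entry": 0.3, "Alarm": 0.2},
    "Alarm": {"Ready": 1.0, "Entry": 0.0, "Alarm": 0.0},
}
states = list(P)
pi = {s: 1.0 / len(states) for s in states}      # start from a uniform guess

for _ in range(200):                             # iterate pi <- pi * P
    nxt = {s: 0.0 for s in states}
    for s in states:
        for t, p in P[s].items():
            nxt[t] += pi[s] * p
    pi = nxt

occupancy = {s: round(v, 4) for s, v in pi.items()}
```

The converged vector `pi` is the usage profile: the percentage of time a long random use of the software spends in each state.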
Reliability Testing – Statistical Testing

• Test case generation:
   – Traverse the usage model, guided by the
     transition probabilities.
    – Each test case:
       – Starts at the initial node and ends at an exit
         node
       – Consists of a succession of stimuli
    – Test cases are random walks through the usage
      model
       – Random selection used at each state to
         determine the next stimulus.
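A random-walk generator over a usage model might look like this. The model is a simplified hand-coding of the alarm system: only the Trip probability of 0.05 comes from the slides; the remaining probabilities and the single-digit alarm shut-off are assumptions:

```python
import random

# Simplified usage model: per state, a list of (stimulus, next_state, prob).
MODEL = {
    "Start":       [("S", "Ready", 1.0)],
    "Ready":       [("G", "1_OK", 0.475), ("B", "Entry error", 0.475),
                    ("T", "Alarm", 0.05)],
    "1_OK":        [("G", "2_OK", 0.475), ("B", "Entry error", 0.475),
                    ("T", "Alarm", 0.05)],
    "2_OK":        [("G", "End", 0.475), ("B", "Entry error", 0.475),
                    ("T", "Alarm", 0.05)],
    "Entry error": [("C", "Ready", 0.95), ("T", "Alarm", 0.05)],
    "Alarm":       [("G", "End", 1.0)],   # simplification: one digit ends the walk
}

def generate_test_case(rng):
    """One random walk from Start to End; returns the stimulus sequence."""
    state, stimuli = "Start", []
    while state != "End":
        r, acc = rng.random(), 0.0
        for stimulus, nxt, p in MODEL[state]:
            acc += p
            if r <= acc:
                stimuli.append(stimulus)
                state = nxt
                break
    return stimuli

case = generate_test_case(random.Random(7))
```

Every generated sequence starts with the Set stimulus and ends with a good digit, mirroring the shape of the sample test case.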
     Randomly generated test case

#    Stimulus   Next state
1    S          Ready
2    G          1_OK
3    G          2_OK
4    C          Ready
5    B          Entry error
6    C          Ready
7    B          Entry error
8    C          Ready
9    G          1_OK
10   G          2_OK
11   G          Software terminated
Reliability Testing – Statistical Testing
• Measures of Test Sufficiency (when to stop testing?)
• Usage Chain - the usage model that generates test cases.
   – used to determine each state's long-run occupancy
• Testing Chain - used during testing to track the actual states visited.
• Add a counter to each arc, initialized to 0.
• Increment the counter of an arc whenever a test case
  executes that transition.
   – Discriminant - the difference between the usage chain and
     the testing chain (the degree to which testing experience is
     representative of the expected usage)
• Testing can stop when the discriminant plateaus.
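The stopping criterion can be sketched as a distance between the usage chain's arc probabilities and the arc frequencies observed during testing. The published discriminant is an information-theoretic divergence; plain Euclidean distance is used here only to illustrate the idea that the testing chain should converge to the usage chain (arcs and counts are hypothetical):

```python
import math

def discriminant(usage_probs, test_counts):
    """Distance between usage-chain arc probabilities and observed
    testing-chain arc frequencies. Both arguments are dicts keyed by
    arc, e.g. (state, stimulus)."""
    total = sum(test_counts.values()) or 1
    return math.sqrt(sum(
        (p - test_counts.get(arc, 0) / total) ** 2
        for arc, p in usage_probs.items()))

usage = {("Ready", "G"): 0.6, ("Ready", "B"): 0.2, ("Ready", "T"): 0.2}
counts = {("Ready", "G"): 5, ("Ready", "B"): 3, ("Ready", "T"): 2}
print(round(discriminant(usage, counts), 4))
```

As more test cases execute, the observed frequencies approach the usage probabilities and the discriminant flattens out, which is the signal that testing experience has become representative of expected usage.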
Reliability Testing – Statistical Testing

• Reliability measurement

• Failure states - added to the testing chain as
  failures occur during testing

• Software reliability - probability of taking a
  random walk through the testing chain from
  invocation to termination without encountering a
  failure state.

• Mean Time to Failure (MTTF) - average number of
  test cases until a failure occurs
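These two measures can be computed incrementally from the sequence of test verdicts: reliability as the fraction of test cases that have passed so far, and MTTF as the number of test cases run per failure observed. The sketch below reproduces the arithmetic of the failure example table further on:

```python
def running_stats(verdicts):
    """Return (n, verdict, MTTF, reliability) after each test case.
    MTTF is None (printed as "--") until the first failure occurs."""
    rows, failures = [], 0
    for n, v in enumerate(verdicts, start=1):
        if v == "Fail":
            failures += 1
        mttf = n / failures if failures else None
        rows.append((n, v, mttf, (n - failures) / n))
    return rows

verdicts = ["Pass", "Pass", "Fail", "Pass", "Pass", "Pass", "Fail"]
for n, v, mttf, rel in running_stats(verdicts):
    print(n, v, mttf, round(rel, 3))
```

For instance, after the failure at test 3 the MTTF is 3/1 = 3.0 and the reliability is 2/3 ≈ 0.667; after the second failure at test 7 the MTTF drops to 7/2 = 3.5.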
     Example: All tests pass

#    Verdict   D(U,T)   % states   % arcs
1    Pass      --       60.0       22.6
2    Pass      --       100.0      58.1
3    Pass      --       100.0      67.7
...
14   Pass      --       100.0      96.8
15   Pass      0.0059   100.0      100.0
16   Pass      0.0055   100.0      100.0
...
29   Pass      0.0020   100.0      100.0
30   Pass      0.0019   100.0      100.0
            Example with failure cases

#    Verdict        MTTF         Reliability
1    Pass           --           1.0
2    Pass           --           1.0
3    Fail           3.0          0.667
4    Pass           4.0          0.75
5    Pass           5.0          0.8
6    Pass           6.0          0.833
7    Fail           3.5          0.714
8    Pass           4.0          0.75
9    Pass           4.5          0.778
10   Pass           5.0          0.8
11   Pass           5.5          0.818
12   Fail           4.0          0.75
13   Pass           4.333        0.769
14   Pass           4.667        0.786
              Regression Testing

• Purpose: In a new version of software, ensure
  that functionality of previous versions has not
  been adversely affected.
   – Example: in release 4 of software, verify that
     all (unchanged) functionality of versions 1, 2,
     and 3 still work.

• Why is it necessary?
   – One of the most frequent occasions when
     software faults are introduced is when the
     software is modified.
        Regression Test Selection (1)

• In version 1 of the software, choose a set of tests (usually
  at the system level) that has the “best coverage” given the
  resources that are available to develop and run the tests.

• Usually take system tests that were run manually prior to
  the release of version 1, and create a version of the tests
  that can be run automatically:
   – boundary tests
   – tests that revealed bugs
   – tests for customer-reported bugs

• Depends on tools available, etc.
        Regression Test Selection (2)

• With a new version N of the software, the regression test
  suite will need updating:
   – new tests, for new functionality
   – updated tests, for previous functionality that has changed
   – deleted tests, for functionality that has been removed

• There is a tendency for “infinite growth” of regression tests
   – Periodic analyses are needed to keep the size of the test
     suite manageable, even for automated execution.
    Regression Test Management (1)

• The regression package for the previous version(s)
  must be preserved as long as the software version
  is being supported.
• Suppose that version 12 of software is currently
  being developed.
   – If versions 7, 8, 9, 10, and 11 of software are
     still being used by customers and are officially
     supported, all five of these regression packages
     must be kept
• Configuration management of regression suites is
  essential !
         Configuration Management of
              Regression Suites

               Software              Regression
                version                suite

                  7                      7
                  8                      8
                  9                      9
                  10                     10
                  11                     11
                  12 (in development)

               Regression suites include all functionality
               up to the indicated version number.
    When to run the regression suite?

• The version 11 suite would be run on version 12 of the
  software prior to the version 12 release
   – It may be run at all version 12 recompilations during
     development.

• If a problem report results in a bug fix in version 9 of the
  software, the version 9 suite would be run to ensure the bug
  fix did not introduce another fault.
   – If the fix is propagated to versions 10, 11, and 12, then
     the version 10 regression suite would run against product
     version 10, and the version 11 regression suite would run
     against product version 11 and new version 12.
              Bug fixes in prior releases

               Software          Regression
                version            suite

                  7                 7
                  8                 8
bug fix
here              9                 9
                  10                10
                  11                11
                  12 (in development)

               Suites 9, 10, and 11 must be re-run.
  Constraints on Regression Execution

• “Ideal”: would like to re-run entire regression test suite for
  each re-compile of new software version, or bug fix in a
  previous version

• Reality: test execution and (especially) result analysis may
  take too long, especially with large regression suites.
   – Test resources are often shared with the testing of new
     functionality.

• May be able to re-run the entire test suite only at
  significant project milestones
   – example: prior to product release
                Test selection

• When a code modification takes place, can we run
  “only” the regression tests related to the code that
  changed?
   – Issues:
      – How far does the effect of a change
        propagate through the system?
      – Traceability: keeping a link that stores the
        relationship between a test, and what parts
        of the software are covered by the test.
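A minimal form of such a traceability link is a map from each test to the parts of the software it covers; given a change set, only the tests whose coverage intersects it are selected. The file names and test ids below are hypothetical:

```python
# Hypothetical traceability map: test id -> source files it covers.
COVERAGE = {
    "test_login": {"auth.c", "session.c"},
    "test_report": {"report.c", "format.c"},
    "test_logout": {"auth.c"},
}

def select_tests(coverage, changed_files):
    """Select the regression tests whose coverage intersects the change set."""
    return sorted(t for t, files in coverage.items()
                  if files & changed_files)

print(select_tests(COVERAGE, {"auth.c"}))   # → ['test_login', 'test_logout']
```

This sidesteps neither issue above: the map must be kept current as the software changes, and it captures only direct coverage, not how far a change propagates.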
          Change Impact Analysis

• Module firewall strategy: if module M7 changes,
  retest all modules with a “use” or “used-by”
  relationship with M7.

               M2          M3

          M4    M5        M6    M7

          M8         M9        M10
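The module firewall can be sketched as a walk over the "uses" edges: when a module changes, retest it together with every module that directly uses it or is used by it. The edge list below is one plausible reading of the module tree on this slide:

```python
# Hypothetical "uses" relationships: (caller, callee) pairs.
EDGES = [("M2", "M4"), ("M2", "M5"), ("M3", "M6"), ("M3", "M7"),
         ("M5", "M8"), ("M5", "M9"), ("M7", "M10")]

def firewall(edges, changed):
    """Changed module plus every module with a direct 'use' or
    'used-by' relationship with it."""
    retest = {changed}
    for caller, callee in edges:
        if caller == changed:
            retest.add(callee)   # modules used by the changed module
        if callee == changed:
            retest.add(caller)   # modules that use the changed module
    return sorted(retest)

print(firewall(EDGES, "M7"))   # → ['M10', 'M3', 'M7']
```

With M7 changed, the firewall contains M7 itself, its user M3, and the module M10 that it uses.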
                 OO Class Firewalls

• Suppose A is a superclass of B, and B is modified. Then:
   – B should be retested
   – A should be retested if B has an effect on inherited
     members of A

• Suppose A is an aggregate class that includes B, and B is
  modified. Then A and B should be retested.

• Suppose class A is associated to class B (by access to data
  members or message passing), and B is modified. Then A and
  B should be retested.

• The transitive closure of such relationships also needs to be
  retested.
[Class diagram: classes A–H, K, L, N, and O connected by inheritance
and aggregation relationships; the modified class and the classes to
re-test via the firewall are highlighted.]
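Computing the transitive closure of these class relationships is a graph search: starting from the modified class, collect every class that reaches it through inheritance, aggregation, or association. The classes and relationships below are hypothetical:

```python
from collections import deque

# Hypothetical dependencies: class -> classes it depends on via
# inheritance, aggregation, or association.
DEPENDS = {
    "B": {"A"},   # B inherits from A
    "C": {"B"},   # C aggregates B
    "D": {"C"},   # D is associated with C
    "K": {"L"},
}

def class_firewall(depends, modified):
    """Modified class plus all classes that transitively depend on it."""
    retest, queue = {modified}, deque([modified])
    while queue:
        cls = queue.popleft()
        for dependant, uses in depends.items():
            if cls in uses and dependant not in retest:
                retest.add(dependant)
                queue.append(dependant)
    return sorted(retest)

print(class_firewall(DEPENDS, "B"))   # → ['B', 'C', 'D']
```

Modifying B pulls in C (which aggregates B) and then D (associated with C), showing how the firewall grows transitively even though D never touches B directly.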
