Testing Methodology
					Software Testing




    Confidential




      Cognizant Technology Solutions
                                                     Table of Contents
1         INTRODUCTION TO SOFTWARE ............................................................................ 7
    1.1      EVOLUTION OF THE SOFTWARE TESTING DISCIPLINE .................................................. 7
    1.2      THE TESTING PROCESS AND THE SOFTWARE TESTING LIFE CYCLE ............................ 7
    1.3      BROAD CATEGORIES OF TESTING ............................................................................. 8
    1.4      WIDELY EMPLOYED TYPES OF TESTING ..................................................................... 8
    1.5      THE TESTING TECHNIQUES....................................................................................... 9
    1.6      CHAPTER SUMMARY ................................................................................................ 9
2         BLACK BOX AND WHITE BOX TESTING .............................................................. 11
    2.1      INTRODUCTION ...................................................................................................... 11
    2.2      BLACK BOX TESTING............................................................................................... 11
    2.3      TESTING STRATEGIES/TECHNIQUES ........................................................................ 13
    2.4      BLACK BOX TESTING METHODS............................................................................... 14
    2.5      BLACK BOX (VS) WHITE BOX.................................................................................. 16
    2.6      WHITE BOX TESTING ........................................................................................ 18
3         GUI TESTING .......................................................................................................... 23
    3.1      SECTION 1 - WINDOWS COMPLIANCE TESTING ........................................................ 23
    3.2      SECTION 2 - SCREEN VALIDATION CHECKLIST ......................................................... 25
    3.3      SPECIFIC FIELD TESTS ........................................................................................... 29
    3.4      VALIDATION TESTING - STANDARD ACTIONS ............................................................ 30
4         REGRESSION TESTING ........................................................................................ 33
    4.1      WHAT IS REGRESSION TESTING .............................................................................. 33
    4.2      TEST EXECUTION ................................................................................................... 34
    4.3      CHANGE REQUEST................................................................................................. 35
    4.4      BUG TRACKING ...................................................................................................... 35
    4.5      TRACEABILITY MATRIX ........................................................................................... 36
5         PHASES OF TESTING ............................................................................................ 39
    5.1      INTRODUCTION ...................................................................................................... 39
    5.2      TYPES AND PHASES OF TESTING ............................................................................ 39
    5.3      THE "V" MODEL ...................................................................................................... 40
6         INTEGRATION TESTING ........................................................................................ 43
    6.1      GENERALIZATION OF MODULE TESTING CRITERIA ............................................................44
7         ACCEPTANCE TESTING ........................................................................................ 49
    7.1      INTRODUCTION – ACCEPTANCE TESTING ................................................................. 49
    7.2      FACTORS INFLUENCING ACCEPTANCE TESTING ....................................................... 49
    7.3      CONCLUSION ......................................................................................................... 50
8         SYSTEM TESTING.................................................................................................. 51
    8.1      INTRODUCTION TO SYSTEM TESTING ................................................................. 51
    8.2      NEED FOR SYSTEM TESTING .................................................................................. 51
    8.3      SYSTEM TESTING TECHNIQUES .............................................................................. 52
    8.4      FUNCTIONAL TECHNIQUES ...................................................................................... 53
    8.5      CONCLUSION: ........................................................................................................ 53
9       UNIT TESTING ........................................................................................................ 54
    9.1 INTRODUCTION TO UNIT TESTING ............................................................................ 54
    9.2 UNIT TESTING –FLOW: ........................................................................................... 55
    UNIT TESTING – BLACK BOX APPROACH........................................................................... 56
    UNIT TESTING – WHITE BOX APPROACH........................................................................... 56
    UNIT TESTING – FIELD LEVEL CHECKS ....................................................................... 56
    UNIT TESTING – FIELD LEVEL VALIDATIONS..................................................................... 56
    UNIT TESTING – USER INTERFACE CHECKS ...................................................................... 56
    9.3 EXECUTION OF UNIT TESTS .................................................................................... 57
    UNIT TESTING FLOW: ...................................................................................................... 57
    DISADVANTAGE OF UNIT TESTING ............................................................................... 59
    METHOD FOR STATEMENT COVERAGE.............................................................................. 59
    RACE COVERAGE ....................................................................................................... 60
    9.4 CONCLUSION ......................................................................................................... 60
10      TEST STRATEGY ................................................................................................... 62
    10.1        INTRODUCTION .................................................................................................. 62
    10.2        KEY ELEMENTS OF TEST MANAGEMENT: ............................................................. 62
    10.3        TEST STRATEGY FLOW: .................................................................................... 63
    10.4        GENERAL TESTING STRATEGIES......................................................................... 65
    10.5        NEED FOR TEST STRATEGY ............................................................................... 65
    10.6        DEVELOPING A TEST STRATEGY ......................................................................... 66
    10.7        CONCLUSION:.................................................................................................... 66
11      TEST PLAN ............................................................................................................. 68
    11.1   WHAT IS A TEST PLAN? ..................................................................................... 68
    CONTENTS OF A TEST PLAN ............................................................................................. 68
    11.2   CONTENTS (IN DETAIL) ....................................................................................... 68
12      TEST DATA PREPARATION - INTRODUCTION ................................................... 71
    12.1        CRITERIA FOR TEST DATA COLLECTION .............................................................. 72
    12.2        CLASSIFICATION OF TEST DATA TYPES ............................................................... 79
    12.3        ORGANIZING THE DATA ...................................................................................... 80
    12.4        DATA LOAD AND DATA MAINTENANCE................................................................. 82
    12.5        TESTING THE DATA ............................................................................................ 83
    12.6        CONCLUSION..................................................................................................... 84
13      TEST LOGS - INTRODUCTION .............................................................................. 85
    13.1        FACTORS DEFINING THE TEST LOG GENERATION................................................ 85
    13.2        COLLECTING STATUS DATA ............................................................................... 86
14      TEST REPORT ........................................................................................................ 92
    14.1        EXECUTIVE SUMMARY ....................................................................................... 92
15      DEFECT MANAGEMENT ........................................................................................ 95
    15.1        DEFECT ............................................................................................................ 95
    15.2        DEFECT FUNDAMENTALS ................................................................................... 95
    15.3        DEFECT TRACKING ............................................................................................ 96
    15.4        DEFECT CLASSIFICATION ................................................................................... 97
    15.5        DEFECT REPORTING GUIDELINES ....................................................................... 98
16     AUTOMATION ....................................................................................................... 101
    16.1       WHY AUTOMATE THE TESTING PROCESS? ........................................................ 101
   16.2       AUTOMATION LIFE CYCLE ................................................................................ 103
   16.3       PREPARING THE TEST ENVIRONMENT ............................................................... 105
   16.4       AUTOMATION METHODS ................................................................................... 108
17         GENERAL AUTOMATION TOOL COMPARISON ............................................ 111
   17.1       FUNCTIONAL TEST TOOL MATRIX ..................................................................... 111
   17.2       RECORD AND PLAYBACK .................................................................................. 111
    17.3       WEB TESTING ................................................................................................. 112
   17.4       DATABASE TESTS ............................................................................................ 112
   17.5       DATA FUNCTIONS ............................................................................................ 112
   17.6       OBJECT MAPPING ............................................................................................ 113
   17.7       IMAGE TESTING ............................................................................................... 114
   17.8       TEST/ERROR RECOVERY.................................................................................. 114
   17.9       OBJECT NAME MAP ......................................................................................... 114
   17.10      OBJECT IDENTITY TOOL ................................................................................... 115
   17.11      EXTENSIBLE LANGUAGE ................................................................................... 115
   17.12      ENVIRONMENT SUPPORT ................................................................................. 116
   17.13      INTEGRATION .................................................................................................. 116
   17.14      COST .............................................................................................................. 116
   17.15      EASE OF USE.................................................................................................. 117
   17.16      SUPPORT ........................................................................................................ 117
   17.17      OBJECT TESTS ................................................................................................ 117
   17.18      MATRIX ........................................................................................................... 118
   17.19      MATRIX SCORE ................................................................................................ 118
18     SAMPLE TEST AUTOMATION TOOL .................................................................. 119
   18.1       RATIONAL SUITE OF TOOLS .............................................................................. 119
   18.2       RATIONAL ADMINISTRATOR .............................................................................. 120
   18.3       RATIONAL ROBOT ............................................................................................ 124
   18.4       ROBOT LOGIN WINDOW .................................................................................... 125
   18.5       RATIONAL ROBOT MAIN WINDOW-GUI SCRIPT ................................................... 126
   18.6       RECORD AND PLAYBACK OPTIONS .................................................................... 127
   18.7       VERIFICATION POINTS ...................................................................................... 129
   18.8       ABOUT SQABASIC HEADER FILES .................................................................... 131
   18.9       ADDING DECLARATIONS TO THE GLOBAL HEADER FILE...................................... 131
   18.10      INSERTING A COMMENT INTO A GUI SCRIPT: ..................................................... 131
   18.11      ABOUT DATA POOLS ........................................................................................ 132
   18.12      DEBUG MENU .................................................................................................. 132
   18.13      COMPILING THE SCRIPT .................................................................................... 133
   18.14      COMPILATION ERRORS ..................................................................................... 134
19         RATIONAL TEST MANAGER ........................................................................... 136
   19.1       TEST MANAGER-RESULTS SCREEN................................................................... 137
20         SUPPORTED ENVIRONMENTS ...................................................................... 139
   20.1       OPERATING SYSTEM ........................................................................................ 139
   20.2       PROTOCOLS .................................................................................................... 139
    20.3       WEB BROWSERS ............................................................................................. 139
   20.4        MARKUP LANGUAGES....................................................................................... 139
   20.5        DEVELOPMENT ENVIRONMENTS ........................................................................ 139
21     PERFORMANCE TESTING .................................................................................. 140
    21.1        WHAT IS PERFORMANCE TESTING? .................................................................. 140
    21.2        WHY PERFORMANCE TESTING? ........................................................................ 140
   21.3        PERFORMANCE TESTING OBJECTIVES .............................................................. 141
   21.4        PRE-REQUISITES FOR PERFORMANCE TESTING ................................................ 141
   21.5        PERFORMANCE REQUIREMENTS ....................................................................... 142
22     PERFORMANCE TESTING PROCESS................................................................ 143
   22.1        PHASE 1 – REQUIREMENTS STUDY ................................................................... 144
   22.2        PHASE 2 – TEST PLAN ..................................................................................... 145
   22.3        PHASE 3 – TEST DESIGN ................................................................................. 145
   22.4        PHASE 4 –SCRIPTING ...................................................................................... 146
   22.5        PHASE 5 – TEST EXECUTION ............................................................................ 147
   22.6        PHASE 6 – TEST ANALYSIS .............................................................................. 147
   22.7        PHASE 7 – PREPARATION OF REPORTS ............................................................ 148
   22.8        COMMON MISTAKES IN PERFORMANCE TESTING ............................................... 149
   22.9        BENCHMARKING LESSONS ............................................................................... 149
23     TOOLS ................................................................................................................... 151
   23.1        LOADRUNNER 6.5 ........................................................................................... 151
   23.2        W EBLOAD 4.5 ................................................................................................. 151
   23.3        ARCHITECTURE BENCHMARKING ...................................................................... 158
   23.4        GENERAL TESTS ............................................................................................. 159
24     PERFORMANCE METRICS .................................................................................. 160
   24.1        CLIENT SIDE STATISTICS.................................................................................. 160
   24.2        SERVER SIDE STATISTICS ................................................................................ 161
   24.3        NETWORK STATISTICS ..................................................................................... 161
   24.4        CONCLUSION................................................................................................... 161
25     LOAD TESTING ..................................................................................................... 163
    25.1        WHY IS LOAD TESTING IMPORTANT? ................................................................. 163
    25.2        WHEN SHOULD LOAD TESTING BE DONE? .......................................................... 163
26     LOAD TESTING PROCESS .................................................................................. 164
   26.1        SYSTEM ANALYSIS........................................................................................... 164
   26.2        USER SCRIPTS ................................................................................................ 164
   26.3        SETTINGS ....................................................................................................... 164
   26.4        PERFORMANCE MONITORING ........................................................................... 165
   26.5        ANALYZING RESULTS ....................................................................................... 165
   26.6        CONCLUSION................................................................................................... 165
27     STRESS TESTING ................................................................................................ 167
   27.1        INTRODUCTION TO STRESS TESTING................................................................. 167
   27.2        BACKGROUND TO AUTOMATED STRESS TESTING .............................................. 168
   27.3        AUTOMATED STRESS TESTING IMPLEMENTATION .............................................. 170
   27.4        PROGRAMMABLE INTERFACES .......................................................................... 170
   27.5       GRAPHICAL USER INTERFACES ........................................................................ 171
   27.6       DATA FLOW DIAGRAM ...................................................................................... 171
   27.7       TECHNIQUES USED TO ISOLATE DEFECTS ......................................................... 172
28     TEST CASE COVERAGE ..................................................................................... 174
   28.1       TEST COVERAGE ............................................................................................. 174
   28.2       TEST COVERAGE MEASURES ............................................................................ 174
   28.3       PROCEDURE-LEVEL TEST COVERAGE .............................................................. 175
   28.4       LINE-LEVEL TEST COVERAGE........................................................................... 175
   28.5       CONDITION COVERAGE AND OTHER MEASURES ................................................ 175
    28.6       HOW TEST COVERAGE TOOLS WORK ............................................................... 175
   28.7       TEST COVERAGE TOOLS AT A GLANCE ............................................................. 177
29     TEST CASE POINTS-TCP .................................................................................... 178
    29.1       WHAT IS A TEST CASE POINT (TCP) ................................................................ 178
   29.2       CALCULATING THE TEST CASE POINTS: ............................................................ 178
   29.3       CHAPTER SUMMARY ........................................................................................ 180




1 Introduction to Software

1.1 Evolution of the Software Testing discipline

The effective functioning of modern systems depends on our ability to produce software
in a cost-effective way. The term "software engineering" was first used at a 1968 NATO
workshop in West Germany, which focused on the growing software crisis. Thus the
software crisis of quality, reliability and high costs started well before most of
today's software testers were even born!

The attitude towards Software Testing has undergone a major positive change in recent
years. In the 1950s, when machine languages were used, testing was nothing but
debugging. In the 1960s, when compilers were developed, testing started to be
considered an activity separate from debugging. In the 1970s, when software
engineering concepts were introduced, software testing began to evolve as a technical
discipline. Over the last two decades there has been an increased focus on better, faster
and more cost-effective software. There has also been a growing interest in software
safety, protection and security, and hence an increased acceptance of testing as a
technical discipline and as a career choice.

Now, to answer "What is Testing?" we can go by the famous definition of Myers:
"Testing is the process of executing a program with the intent of finding errors."



1.2 The Testing process and the Software Testing Life Cycle

Every testing project has to follow the waterfall model of the testing process.
The waterfall model is as given below:

        1. Test Strategy & Planning
        2. Test Design
        3. Test Environment Setup
        4. Test Execution
        5. Defect Analysis & Tracking
        6. Final Reporting

According to the respective projects, the scope of testing can be tailored, but the process
mentioned above is common to any testing activity.

Software Testing has been accepted as a separate discipline to the extent that there is a
separate life cycle for the testing activity. Involving software testing in all phases of the
software development life cycle has become a necessity as part of the software quality
assurance process. Right from the requirements study till implementation, testing
needs to be done in every phase. The V-Model of the Software Testing Life
Cycle along with the Software Development Life cycle given below indicates the various
phases or levels of testing.


[Figure: The V-Model, showing the SDLC - STLC correspondence. Requirement Study pairs
with Production Verification Testing, High Level Design with User Acceptance Testing,
and Low Level Design with System Testing, with Unit Testing and Integration Testing at
the base of the V.]


1.3 Broad Categories of Testing

Based on the V-Model mentioned above, we see that there are two categories of testing
activities that can be done on software, namely:
           -  Static Testing
           -  Dynamic Testing
The kind of verification we do on the software work products before compilation and
creation of an executable, such as requirement reviews, design reviews, code reviews,
walkthroughs and audits, is called Static Testing. When we test the software by executing
it and comparing the actual and expected results, it is called Dynamic Testing.

1.4 Widely employed Types of Testing

From the V-model, we see that there are various levels or phases of testing, namely, Unit
testing, Integration testing, System testing, User Acceptance testing etc.
Let us see a brief definition on the widely employed types of testing.

Unit Testing: The testing done on a unit, the smallest piece of software, to verify
whether it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub elements are combined (i.e.,
integrated) to form higher-level elements

Regression Testing: Selective re-testing of a system to verify that modifications (bug
fixes) have not caused unintended effects and that the system still complies with its
specified requirements

System Testing: Testing the software for the required specifications on the intended
hardware

Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria, which enables a customer to determine whether to
accept the system or not.

Performance Testing: To evaluate the time taken or response time of the system to
perform its required functions, in comparison with the specified performance requirements

Stress Testing: To evaluate a system beyond the limits of the specified requirements or
system resources (such as disk space, memory, processor utilization) to ensure the
system does not break unexpectedly

Load Testing: Load Testing, a subset of stress testing, verifies that a web site can
handle a particular number of concurrent users while maintaining acceptable response
times


Alpha Testing: Testing of a software product or system conducted at the developer‘s
site by the customer

Beta Testing: Testing conducted at one or more customer sites by the end users of a
delivered software product or system.



1.5 The Testing Techniques

To perform these types of testing, there are two widely used testing techniques. The
above said testing types are performed based on the following testing techniques.

Black-Box testing technique:
         This technique is used for testing based solely on analysis of requirements
(specification, user documentation, etc.). Also known as functional testing.

White-Box testing technique:
         This technique is used for testing based on analysis of internal logic (design,
code, etc.), but expected results still come from the requirements. Also known as
structural testing.

These topics will be elaborated on in the coming chapters.



1.6 Chapter Summary

          This chapter covered the introduction and basics of software testing, including:
                   -  Evolution of Software Testing
                   -  The Testing Process and the Software Testing Life Cycle
                   -  Broad Categories of Testing
                   -  Widely Employed Types of Testing
                   -  The Testing Techniques




2 Black Box and White Box testing

2.1 Introduction
Test Design refers to understanding the sources of test cases, test coverage, how to
develop and document test cases, and how to build and maintain test data. There are 2
primary methods by which tests can be designed and they are:

     -    BLACK BOX
     -    WHITE BOX

Black-box test design treats the system as a literal "black-box", so it doesn't explicitly
use knowledge of the internal structure. It is usually described as focusing on testing
functional requirements. Synonyms for black-box include: behavioral, functional, opaque-
box, and closed-box.

White-box test design allows one to peek inside the "box", and it focuses specifically on
using internal knowledge of the software to guide the selection of test data. It is used to
detect errors by means of execution-oriented test cases. Synonyms for white-box include:
structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer
the terms "behavioral" and "structural". Behavioral test design is slightly different from
black-box test design because the use of internal knowledge isn't strictly forbidden, but
it's still discouraged. In practice, it hasn't proven useful to use a single test design
method. One has to use a mixture of different methods so that they aren't hindered by
the limitations of a particular one. Some call this "gray-box" or "translucent-box" test
design, but others wish we'd stop talking about boxes altogether!!!

2.2 Black box testing
Black Box Testing is testing without knowledge of the internal workings of the item
being tested. For example, when black box testing is applied to software engineering,
the tester would only know the "legal" inputs and what the expected outputs should be,
but not how the program actually arrives at those outputs. It is because of this that black
box testing can be considered testing with respect to the specifications; no other
knowledge of the program is necessary. For this reason, the tester and the programmer
can be independent of one another, avoiding programmer bias toward his own work.
Test groups are often used for this kind of testing.

Though centered around the knowledge of user requirements, black box tests do not
necessarily involve the participation of users. Among the most important black box tests
that do not involve users are functionality testing, volume tests, stress tests, recovery
testing, and benchmarks. Additionally, there are two types of black box tests that involve
users, i.e. field and laboratory tests. In the following the most important aspects of these
black box tests will be described briefly.


2.2.1.1 Black box testing - without user involvement
The so-called "functionality testing" is central to most testing exercises. Its primary
objective is to assess whether the program does what it is supposed to do, i.e. what is
specified in the requirements. There are different approaches to functionality testing. One
is to test each program feature or function in sequence. The other is to test module
by module, i.e. each function where it is called first.

The objective of volume tests is to find the limitations of the software by processing a
huge amount of data. A volume test can uncover problems that are related to the
efficiency of a system, e.g. incorrect buffer sizes, a consumption of too much memory
space, or only show that an error message would be needed telling the user that the
system cannot process the given amount of data.

During a stress test, the system has to process a huge amount of data or perform many
function calls within a short period of time. A typical example could be to perform the
same function from all workstations connected in a LAN within a short period of time (e.g.
sending e-mails, or, in the NLP area, to modify a term bank via different terminals
simultaneously).

The aim of recovery testing is to determine to what extent data can be recovered after a
system breakdown. Does the system provide possibilities to recover all of the data or part
of it? How much can be recovered and how? Is the recovered data still correct and
consistent? Particularly for software that needs high reliability standards, recovery testing
is very important.
The notion of benchmark tests involves the testing of program efficiency. The efficiency
of a piece of software strongly depends on the hardware environment and therefore
benchmark tests always consider the software/hardware combination. Whereas for most
software engineers benchmark tests are concerned with the quantitative measurement of
specific operations, some also consider user tests that compare the efficiency of different
software systems as benchmark tests. In the context of this document, however,
benchmark tests only denote operations that are independent of personal variables.

2.2.1.2 Black box testing - with user involvement
For tests involving users, methodological considerations are rare in SE literature. Rather,
one may find practical test reports that distinguish roughly between field and laboratory
tests. In the following only a rough description of field and laboratory tests will be given.
An example is scenario tests. The term "scenario" entered software evaluation in the early
1990s. A scenario test is a test case which aims at a realistic user background for the
evaluation of software as it is defined and performed. It is an instance of black box
testing where the major objective is to assess the suitability of a software product for
every-day routines. In short, it involves putting the system into its intended use by its
envisaged type of user, performing a standardised task.

In field tests users are observed while using the software system at their normal working
place. Apart from general usability-related aspects, field tests are particularly useful for
assessing the interoperability of the software system, i.e. how the technical integration of
the system works. Moreover, field tests are the only real means to elucidate problems of
the organisational integration of the software system into existing procedures. Particularly
in the NLP environment this problem has frequently been underestimated. A typical

example of the organisational problem of implementing a translation memory is the
language service of a big automobile manufacturer, where the major implementation
problem is not the technical environment, but the fact that many clients still submit their
orders as print-out, that neither source texts nor target texts are properly organised and
stored and, last but not least, individual translators are not too motivated to change their
working habits.

Laboratory tests are mostly performed to assess the general usability of the system. Due
to the high cost of laboratory equipment, laboratory tests are mostly performed only at big
software houses such as IBM or Microsoft. Since laboratory tests provide testers with
many technical possibilities, data collection and analysis are easier than for field tests.

2.3 Testing Strategies/Techniques
         -  Black box testing should make use of randomly generated inputs (only a test
            range should be specified by the tester), to eliminate any guesswork by the
            tester as to the methods of the function (see the sketch after this list)
         -  Data outside of the specified input range should be tested to check the
            robustness of the program
         -  Boundary cases should be tested (top and bottom of the specified range) to make
            sure the highest and lowest allowable inputs produce proper output
         -  The number zero should be tested when numerical data is to be input
         -  Stress testing should be performed (try to overload the program with inputs to
            see where it reaches its maximum capacity), especially with real-time systems
         -  Crash testing should be performed to see what it takes to bring the system down
         -  Test monitoring tools should be used whenever possible to track which tests
            have already been performed and the outputs of these tests, to avoid repetition
            and to aid in software maintenance
         -  Other functional testing techniques include: transaction testing, syntax testing,
            domain testing, logic testing, and state testing
         -  Finite state machine models can be used as a guide to design functional tests
         -  According to Beizer the following is a general order by which tests should be
            designed:

                         1. Clean tests against requirements.
                         2. Additional structural tests for branch coverage,
                            as needed.
                         3. Additional tests for data-flow coverage as
                            needed.
                         4. Domain tests not covered by the above.
                         5. Special techniques as appropriate--syntax, loop,
                            state, etc.
                         6. Any dirty tests not covered by the above.
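
To make the first few strategies concrete, the following is a minimal illustrative sketch (not
taken from the original text) of randomly generated in-range inputs, boundary cases, zero and
out-of-range data being fed to a hypothetical routine computeDiscount, whose specified input
range is assumed here to be 0 to 100:

    import java.util.Random;

    public class BlackBoxInputSketch {
        // Hypothetical unit under test: the specification says valid input is 0..100.
        static int computeDiscount(int percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("out of range: " + percent);
            }
            return percent / 2;   // stand-in behaviour, for illustration only
        }

        public static void main(String[] args) {
            Random random = new Random(42);

            // Randomly generated inputs inside the specified range (0..100).
            for (int i = 0; i < 5; i++) {
                int input = random.nextInt(101);
                System.out.println(input + " -> " + computeDiscount(input));
            }

            // Boundary cases, zero, and out-of-range data to check robustness.
            int[] edgeCases = {0, 1, 99, 100, -1, 101};
            for (int input : edgeCases) {
                try {
                    System.out.println(input + " -> " + computeDiscount(input));
                } catch (IllegalArgumentException e) {
                    System.out.println(input + " -> rejected (" + e.getMessage() + ")");
                }
            }
        }
    }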




2.4 Black box testing Methods
2.4.1 Graph-based Testing Methods

         -  Black-box methods based on the nature of the relationships (links) among the
            program objects (nodes); test cases are designed to traverse the entire graph
         -  Transaction flow testing (nodes represent steps in some transaction and links
            represent logical connections between steps that need to be validated)
         -  Finite state modeling (nodes represent user observable states of the software
            and links represent transitions between states)
         -  Data flow modeling (nodes are data objects and links are transformations from
            one data object to another)
         -  Timing modeling (nodes are program objects and links are sequential
            connections between these objects; link weights are required execution times)

2.4.2     Equivalence Partitioning

         -  Black-box technique that divides the input domain into classes of data from which
            test cases can be derived
         -  An ideal test case uncovers a class of errors that might require many arbitrary
            test cases to be executed before a general error is observed
         -  Equivalence class guidelines (a worked sketch follows these guidelines):

               1. If input condition specifies a range, one valid and two invalid equivalence
                  classes are defined
               2. If an input condition requires a specific value, one valid and two invalid
                  equivalence classes are defined
               3. If an input condition specifies a member of a set, one valid and one
                  invalid equivalence class is defined
               4. If an input condition is Boolean, one valid and one invalid equivalence
                  class is defined
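
As an illustration (the field and its limits are assumptions, not taken from the document),
consider an input field that accepts an integer age between 18 and 60. Applying guideline 1,
one valid and two invalid equivalence classes are defined, and one representative value is
drawn from each class:

    public class EquivalencePartitioningSketch {
        // Hypothetical validator under test: accepts ages 18..60 inclusive.
        static boolean isValidAge(int age) {
            return age >= 18 && age <= 60;
        }

        public static void main(String[] args) {
            // One representative test value per equivalence class (guideline 1).
            System.out.println("valid class   (35): " + isValidAge(35));   // expected true
            System.out.println("invalid low   (10): " + isValidAge(10));   // expected false
            System.out.println("invalid high  (75): " + isValidAge(75));   // expected false
        }
    }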

2.4.3 Boundary Value Analysis

         -  Black-box technique that focuses on the boundaries of the input domain rather
            than its center

         -  BVA guidelines (a brief sketch follows this list):

                1. If an input condition specifies a range bounded by values a and b, test
                   cases should include a and b, and values just above and just below a and b
                2. If an input condition specifies a number of values, test cases should
                   exercise the minimum and maximum numbers, as well as values just
                   above and just below the minimum and maximum values
                3. Apply guidelines 1 and 2 to output conditions; test cases should be
                   designed to produce the minimum and maximum output reports

               4. If internal program data structures have boundaries (e.g. size limitations),
                  be certain to test the boundaries
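
Continuing the same hypothetical age field (valid range 18 to 60), guideline 1 of BVA leads to
test values at and immediately around each boundary; the sketch below simply enumerates them:

    public class BoundaryValueSketch {
        // Same hypothetical validator as above: accepts ages 18..60 inclusive.
        static boolean isValidAge(int age) {
            return age >= 18 && age <= 60;
        }

        public static void main(String[] args) {
            // The bounds themselves plus values just below and just above each bound.
            int[] boundaryCases = {17, 18, 19, 59, 60, 61};
            for (int age : boundaryCases) {
                System.out.println("age " + age + " -> valid = " + isValidAge(age));
            }
        }
    }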

2.4.4 Comparison Testing

         -  Black-box testing for safety critical systems in which independently developed
            implementations of redundant systems are tested for conformance to
            specifications
         -  Often equivalence class partitioning is used to develop a common set of test
            cases for each implementation

2.4.5 Orthogonal Array Testing

         -  Black-box technique that enables the design of a reasonably small set of test
            cases that provide maximum test coverage
         -  Focus is on categories of faulty logic likely to be present in the software
            component (without examining the code)
         -  Priorities for assessing tests using an orthogonal array:

               1. Detect and isolate all single mode faults
               2. Detect all double mode faults
               3. Multimode faults

2.4.6 Specialized Testing

         -  Graphical user interfaces
         -  Client/server architectures
         -  Documentation and help facilities
         -  Real-time systems

               1.   Task testing (test each time dependent task independently)
               2.   Behavioral testing (simulate system response to external events)
               3.   Intertask testing (check communications errors among tasks)
               4.   System testing (check interaction of integrated system software and
                    hardware)

2.4.7 Advantages of Black Box Testing

         -  More effective on larger units of code than glass box testing
         -  Tester needs no knowledge of implementation, including specific programming
            languages
         -  Tester and programmer are independent of each other
         -  Tests are done from a user's point of view
         -  Will help to expose any ambiguities or inconsistencies in the specifications
         -  Test cases can be designed as soon as the specifications are complete


2.4.8 Disadvantages of Black Box Testing

         -  Only a small number of possible inputs can actually be tested; to test every
            possible input stream would take nearly forever
         -  Without clear and concise specifications, test cases are hard to design
         -  There may be unnecessary repetition of test inputs if the tester is not informed of
            test cases the programmer has already tried
         -  May leave many program paths untested
         -  Cannot be directed toward specific segments of code which may be very
            complex (and therefore more error prone)
         -  Most testing-related research has been directed toward glass box testing

2.5 Black Box (Vs) White Box

An easy way to start up a debate in a software testing forum is to ask the difference
between black box and white box testing. These terms are commonly used, yet everyone
seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you're testing an electronics system.
It's housed in a black box with lights, switches, and dials on the outside. You must test it
without opening it up, and you can't see beyond its surface. You have to see if it works
just by flipping switches (inputs) and seeing what happens to the lights and dials
(outputs). This is black box testing. Black box software testing is doing the same thing,
but with software. The actual meaning of the metaphor, however, depends on how you
define the boundary of the box and what kind of access the "blackness" is blocking.

An opposite test approach would be to open up the electronics system, see how the
circuits are wired, apply probes internally and maybe even disassemble parts of it. By
analogy, this is called white box testing.

To help understand the different ways that software testing can be divided between black
box and white box techniques, consider the Five-Fold Testing System. It lays out five
dimensions that can be used for examining testing:

1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)

Let's use this system to understand and clarify the characteristics of black box and white
box testing.

People: Who does the testing?


Some people know how software works (developers) and others just use it (users).
Accordingly, any testing by users or other non-developers is sometimes called "black
box" testing. Developer testing is called "white box" testing. The distinction here is based
on what the person knows or can understand.


Coverage: What is tested?
If we draw the box around the system as a whole, "black box" testing becomes another
name for system testing. And testing the units inside the box becomes white box testing.
This is one way to think about coverage. Another is to contrast testing that aims to cover
all the requirements with testing that aims to cover all the code. These are the two most
commonly used coverage criteria. Both are supported by extensive literature and
commercial tools. Requirements-based testing could be called "black box" because it
makes sure that all the customer requirements have been verified. Code-based testing is
often called "white box" because it makes sure that all the code (the statements, paths,
or decisions) is exercised.

Risks: Why are you testing?
Sometimes testing is targeted at particular risks. Boundary testing and other attack-based
techniques are targeted at common coding errors. Effective security testing also requires
a detailed understanding of the code and the system architecture. Thus, these
techniques might be classified as "white box". Another set of risks concerns whether the
software will actually provide value to users. Usability testing focuses on this risk, and
could be termed "black box."

Activities: How do you test?
A common distinction is made between behavioral test design, which defines tests based
on functional requirements, and structural test design, which defines tests based on the
code itself. These are two design approaches. Since behavioral testing is based on
external functional definition, it is often called "black box," while structural testing, based
on the code internals, is called "white box." Indeed, this is probably the most commonly
cited definition for black box and white box testing. Another activity-based distinction
contrasts dynamic test execution with formal code inspection. In this case, the metaphor
maps test execution (dynamic testing) with black box testing, and maps code inspection
(static testing) with white box testing. We could also focus on the tools used. Some tool
vendors refer to code-coverage tools as white box tools, and tools that facilitate applying
inputs and capturing outputs (most notably GUI capture/replay tools) as black box tools.
Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you've found a bug?
There are certain kinds of software faults that don't always lead to obvious failures. They
may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are
examples. Certain test techniques seek to make these kinds of problems more visible.
Related techniques capture code history and stack information when faults occur, helping
with diagnosis. Assertions are another technique for helping to make problems more
visible. All of these techniques could be considered white box test techniques, since they
use code instrumentation to make the internal workings of the software more visible.
These contrast with black box techniques that simply look at the official outputs of a
program.


White box testing is concerned only with testing the software product; it cannot guarantee
that the complete specification has been implemented. Black box testing is concerned
only with testing the specification; it cannot guarantee that all parts of the implementation
have been tested. Thus black box testing is testing against the specification and will
discover faults of omission, indicating that part of the specification has not been fulfilled.
White box testing is testing against the implementation and will discover
faults of commission, indicating that part of the implementation is faulty. In order to fully
test a software product both black and white box testing are required.
White box testing is much more expensive than black box testing. It requires the source
code to be produced before the tests can be planned and is much more laborious in the
determination of suitable input data and the determination of whether the software is or is
not correct. The advice given is to start test planning with a black box test approach as soon
as the specification is available. White box planning should commence as soon as all
black box tests have been successfully passed, with the production of flowgraphs and
determination of paths. The paths should then be checked against the black box test plan
and any additional required test runs determined and applied.

The consequences of test failure at this stage may be very expensive. A failure of a white
box test may result in a change which requires all black box testing to be repeated and
the re-determination of the white box paths.

To conclude, apart from the above described analytical methods of both glass and black
box testing, there are further constructive means to guarantee high quality software end
products. Among the most important constructive means are the usage of object-oriented
programming tools, the integration of CASE tools, rapid prototyping, and last but not least
the involvement of users in both software development and testing procedures


Summary:
Black box testing can sometimes describe user-based testing (people); system or
requirements-based testing (coverage); usability testing (risk); or behavioral testing or
capture replay automation (activities). White box testing, on the other hand, can
sometimes describe developer-based testing (people); unit or code-coverage testing
(coverage); boundary or security testing (risks); structural testing, inspection or code-
coverage automation (activities); or testing based on probes, assertions, and logs
(evaluation).




2.6 WHITE BOX TESTING

White box testing comprises software testing approaches that examine the program structure
and derive test data from the program logic. Structural testing is sometimes referred to as
clear-box testing, since "white box" is something of a misnomer: an opaque white box would
not really permit visibility into the code.

Synonyms for white box testing

         -  Glass Box testing
         -  Structural testing
         -  Clear Box testing
         -  Open Box testing


Types of White Box testing

A typical rollout of a product is shown in figure 1 below.




The purpose of white box testing is to:

   -  Initiate a strategic initiative to build quality throughout the life cycle of a software
      product or service.
   -  Provide a complementary function to black box testing.
   -  Perform complete coverage at the component level.
   -  Improve quality by optimizing performance.

Practices:

This section outlines some of the general practices comprising the white-box testing process.
In general, white-box testing practices have the following considerations:
     1. The allocation of resources to perform class and method analysis and to
        document and review the same.
     2. Developing a test harness made up of stubs, drivers and test object libraries.
     3. Development and use of standard procedures, naming conventions and libraries.
     4. Establishment and maintenance of regression test suites and procedures.
     5. Allocation of resources to design, document and manage a test history library.
     6. The means to develop or acquire tool support for automation of
        capture/replay/compare, test suite execution, results verification and
        documentation capabilities.

1 Code Coverage Analysis
          1.1 Basis Path Testing
          A testing mechanism proposed by McCabe whose aim is to derive a logical
          complexity measure of a procedural design and use this as a guide for defining a
          basis set of execution paths. Test cases that exercise the basis set will execute
          every statement at least once.

                    1.1.1 Flow Graph Notation

                    A notation for representing control flow similar to flow charts and UML
                    activity diagrams.

                    1.1.2 Cyclomatic Complexity

                     The cyclomatic complexity gives a quantitative measure of the logical
                    complexity. This value gives the number of independent paths in the
                    basis set, and an upper bound for the number of tests to ensure that
                    each statement is executed at least once. An independent path is any
                    path through a program that introduces at least one new set of
                    processing statements or a new condition (i.e., a new edge). Cyclomatic
                    complexity provides upper bound for number of tests required to
                    guarantee coverage of all program statements.
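
                     As a worked illustration (not part of the original text), the
                     cyclomatic complexity can be computed from the flow graph as
                     V(G) = E - N + 2, where E is the number of edges and N the number of
                     nodes, or more simply as the number of binary decisions plus one. The
                     hypothetical routine below contains two decisions, so V(G) = 3 and
                     three test cases suffice to cover a basis set of independent paths:

                         public class CyclomaticSketch {
                             // Two decisions => V(G) = 2 + 1 = 3.
                             static int classify(int x) {
                                 int result = 0;
                                 if (x > 0) {            // decision 1
                                     result = 1;
                                 }
                                 if (x % 2 == 0) {       // decision 2
                                     result += 2;
                                 }
                                 return result;
                             }

                             public static void main(String[] args) {
                                 // A basis set of three paths, exercised by x = -1, 2, 3.
                                 System.out.println(classify(-1));   // both decisions false
                                 System.out.println(classify(2));    // both decisions true
                                 System.out.println(classify(3));    // first true, second false
                             }
                         }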

          1.2 Control Structure testing
                    1.2.1 Conditions Testing
                    Condition testing aims to exercise all logical conditions in a program
                    module. They may define:
                         -  Relational expression: (E1 op E2), where E1 and E2 are
                            arithmetic expressions.
                         -  Simple condition: Boolean variable or relational expression,
                            possibly preceded by a NOT operator.
                         -  Compound condition: composed of two or more simple
                            conditions, Boolean operators and parentheses.
                         -  Boolean expression: condition without relational expressions.

                    1.2.2 Data Flow Testing
                    Data flow testing selects test paths according to the locations of
                    definitions and uses of variables.

                    1.2.3 Loop Testing
                    Loops are fundamental to many algorithms. Loops can be classified as
                    simple, concatenated, nested, or unstructured.
                    Examples:




                    Note that unstructured loops are not tested; rather, they are
                    redesigned.
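
                    The following hypothetical Java sketch illustrates typical simple-loop
                    test cases (skip the loop, one pass, a typical number of passes, and
                    n-1, n and attempted n+1 passes); the method and values are invented:

public class LoopTestingExample {

    // Simple loop: copies at most 'limit' items from the source.
    public static int copyUpTo(int[] source, int limit) {
        int copied = 0;
        for (int i = 0; i < source.length && i < limit; i++) {
            copied++;
        }
        return copied;
    }

    public static void main(String[] args) {
        int[] data = new int[10];                 // n = 10
        System.out.println(copyUpTo(data, 0));    // skip the loop        -> 0
        System.out.println(copyUpTo(data, 1));    // exactly one pass     -> 1
        System.out.println(copyUpTo(data, 5));    // typical number of passes
        System.out.println(copyUpTo(data, 9));    // n - 1 passes
        System.out.println(copyUpTo(data, 10));   // n passes
        System.out.println(copyUpTo(data, 11));   // attempt n + 1 passes -> capped at 10
    }
}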


2 Design by Contract (DbC)

DbC is a formal way of using comments to incorporate specification information into the
code itself. Basically, the code specification is expressed unambiguously using a formal
language that describes the code's implicit contracts. These contracts specify such
requirements as:
               Conditions that the client must meet before a method is invoked.
               Conditions that a method must meet after it executes.
               Assertions that a method must satisfy at specific points of its execution

Tools that check DbC contracts at runtime, such as JContract
[http://www.parasoft.com/products/jtract/index.htm], are used to perform this function.
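
As a plain-Java approximation (not the JContract notation itself), pre- and postconditions
can be sketched with assert statements; the class below is invented for illustration and
assumes assertions are enabled with the -ea JVM flag:

public class Account {

    private long balance;   // class invariant: balance >= 0

    // Precondition : amount > 0 (the client must meet this before calling).
    // Postcondition: balance has increased by exactly 'amount'.
    public void deposit(long amount) {
        assert amount > 0 : "precondition violated: amount must be positive";
        long before = balance;
        balance += amount;
        assert balance == before + amount : "postcondition violated";
        assert balance >= 0 : "class invariant violated";
    }

    public long getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);                 // run with: java -ea Account
        System.out.println(a.getBalance());
    }
}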

3 Profiling

Profiling provides a framework for analyzing Java code performance for speed and heap
memory use. It identifies the routines that consume the majority of the CPU time so
that problems can be tracked down and performance improved.
Commonly used profilers include the Microsoft Java Profiler API and Sun's profiling tools
bundled with the JDK. Third-party tools such as JaViz
[http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this
function.
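
A very crude, hypothetical sketch of the idea - timing a suspect routine by hand - is shown
below; a real profiler would report per-method CPU samples and heap usage instead:

public class ProfilingSketch {

    static long slowRoutine() {
        long total = 0;
        for (int i = 0; i < 5_000_000; i++) {
            total += i % 7;
        }
        return total;
    }

    public static void main(String[] args) {
        // Crude measurement: wall-clock time around the suspect routine.
        // A real profiling agent would attribute CPU time per method instead.
        long start = System.nanoTime();
        long result = slowRoutine();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + ", elapsed=" + elapsedMs + " ms");
    }
}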


4 Error Handling



Exception and error handling is checked thoroughly by simulating partial and complete
fail-over using error-causing test vectors. Proper error recovery, notification and
logging are checked against references to validate the program design.
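
A hypothetical Java sketch of this kind of check is shown below: an error-causing test
vector is fed to a component, and the test verifies that the documented recovery behaviour
and logging occur instead of an unhandled exception:

import java.util.logging.Level;
import java.util.logging.Logger;

public class ErrorHandlingCheck {

    private static final Logger LOG = Logger.getLogger(ErrorHandlingCheck.class.getName());

    // Component under test: parses a quantity, falling back to a default on bad input.
    static int parseQuantity(String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            LOG.log(Level.WARNING, "invalid quantity, using default", e);
            return 0;   // documented recovery behaviour
        }
    }

    public static void main(String[] args) {
        // Error-causing test vector: the component must recover (return 0),
        // log the failure, and must not propagate the exception.
        int result = parseQuantity("not-a-number");
        System.out.println(result == 0 ? "recovery OK" : "recovery FAILED");
    }
}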

5 Transactions

Systems that employ transactions, local or distributed, may be validated to ensure that
the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each of these
properties is tested individually against a reference data set.

Transactions are checked thoroughly for partial/complete commits and rollbacks
encompassing databases and other XA compliant transaction processors.
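
A minimal JDBC sketch of such a check is shown below; the in-memory connection URL and
table are hypothetical (an H2 driver, or any other JDBC driver, is assumed to be on the
classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionCheck {

    public static void main(String[] args) throws SQLException {
        // Hypothetical connection URL and table; substitute your own.
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                st.executeUpdate("CREATE TABLE account(id INT, balance INT)");
                con.commit();

                st.executeUpdate("INSERT INTO account VALUES (1, 100)");
                con.rollback();   // rolled-back work must disappear (atomicity)

                st.executeUpdate("INSERT INTO account VALUES (2, 200)");
                con.commit();     // committed work must survive (durability)
            } catch (SQLException e) {
                con.rollback();   // on any error, nothing from the unit of work remains
                throw e;
            }
        }
    }
}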


Advantages of White Box Testing

         Forces test developer to reason carefully about implementation
         Approximate the partitioning done by execution equivalence
         Reveals errors in "hidden" code
         Beneficent side-effects

Disadvantages of White Box Testing

         Expensive
         Cases omitted from the code (missing functionality) may not be detected.




3      GUI Testing
What is GUI Testing?

GUI is the abbreviation for Graphical User Interface. It is essential that any
application be user-friendly. The end user should be comfortable while using all
the components on screen, and the components should also perform their functionality
with utmost clarity. Hence it becomes very essential to test the GUI components of any
application. GUI Testing can refer to just ensuring that the look-and-feel of the application
is acceptable to the user, or it can refer to testing the functionality of each and every
component involved.

The following is a set of guidelines to ensure effective GUI Testing and can be used even
as a checklist while testing a product / application.



3.1 Section 1 - Windows Compliance Testing
3.1.1 Application
Start the application by double clicking on its icon. The loading message should show the
application name, version number, and a larger pictorial representation of the icon. No
login is necessary. The main window of the application should have the same caption as
the caption of the icon in Program Manager. Closing the application should result in an
"Are you sure?" message box. Attempt to start the application twice; this should not be
allowed - you should be returned to the main window. Try to start the application twice as it
is loading. On each window, if the application is busy, then the hour glass should be
displayed; if there is no hour glass, then some enquiry-in-progress message should be
displayed. All screens should have a Help button, and the F1 key should work the same way.

If the window has a Minimize button, click it. The window should return to an icon on the bottom
of the screen. This icon should correspond to the original icon under Program Manager.
Double click the icon to return the window to its original size. The window caption for
every application should have the name of the application and the window name -
especially the error messages. These should be checked for spelling, English and clarity,
especially at the top of the screen. Check that the title of the window makes sense. If the
screen has a Control menu, then use all un-grayed options.

Check all text on window for Spelling/Tense and Grammar.
Use TAB to move focus around the Window. Use SHIFT+TAB to move focus backwards.
Tab order should be left to right, and Up to Down within a group box on the screen. All
controls should get focus - indicated by dotted box, or cursor. Tabbing to an entry field
with text in it should highlight the entire text in the field. The text in the Micro Help line
should change - Check for spelling, clarity and non-updateable etc. If a field is disabled
(grayed) then it should not get focus. It should not be possible to select them with either
the mouse or by using TAB. Try this for every grayed control.


Never updateable fields should be displayed with black text on a gray background with a
black label. All text should be left justified, followed by a colon tight to it. In a field that
may or may not be updateable, the label text and contents changes from black to gray
depending on the current status. List boxes are always white background with black text
whether they are disabled or not. All others are gray.

In general, double-clicking is not essential. In general, everything can be done using both
the mouse and the keyboard. All tab buttons should have a distinct letter.

3.1.2 Text Boxes
Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow
to an insert bar. If it doesn't, then the text in the box should be gray or non-updateable;
refer to the previous page. Enter text into the box. Try to overflow the text by typing too
many characters - this should be stopped. Check the field width with capital Ws. Enter
invalid characters - letters in amount fields, and strange characters like + , - , * etc. in
all fields. SHIFT and the arrow keys should select characters. Selection should also be
possible with the mouse. Double click should select all text in the box.

3.1.3 Option (Radio Buttons)
Left and Right arrows should move 'ON' Selection. So should Up and Down. Select with
mouse by clicking.

3.1.4 Check Boxes
Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE
should do the same.

3.1.5 Command Buttons
If Command Button leads to another Screen, and if the user can enter or change details
on the other screen then the Text on the button should be followed by three dots. All
Buttons except for OK and Cancel should have a letter Access to them. This is indicated
by a letter underlined in the button text. Pressing ALT+Letter should activate the button.
Make sure there is no duplication. Click each button once with the mouse - this should
activate it. Tab to each button and press SPACE - this should activate it.
Tab to each button and press RETURN - this should activate it. The above are VERY
IMPORTANT, and should be done for EVERY command button. Tab to another type of
control (not a command button). One button on the screen should be the default (indicated by
a thick black border). Pressing Return in any non-command-button control should activate
it.
If there is a Cancel Button on the screen, then pressing <Esc> should activate it. If
pressing the Command button results in uncorrectable data e.g. closing an action step,
there should be a message phrased positively with Yes/No answers where Yes results in
the completion of the action.

3.1.6 Drop Down List Boxes
Pressing the arrow should give a list of options. This list may be scrollable. You should not
be able to type text in the box. Pressing a letter should bring you to the first item in the list
that starts with that letter. Pressing 'Ctrl - F4' should open/drop down the list box.

Spacing should be compatible with the existing windows spacing (Word etc.). Items
should be in alphabetical order, with the exception of blank/none, which is at the top or the
bottom of the list box. A drop down with an item selected should display the list with the
selected item at the top. Make sure only one space appears; there shouldn't be a blank line
at the bottom.

3.1.7 Combo Boxes
Should allow text to be entered. Clicking Arrow should allow user to choose from list

3.1.8 List Boxes
Should allow a single selection to be chosen, by clicking with the mouse, or using the Up
and Down Arrow keys. Pressing a letter should take you to the first item in the list starting
with that letter. If there is a 'View' or 'Open' button beside the list box, then double
clicking on a line in the list box should act in the same way as selecting an item in the
list box and then clicking the command button. Force the scroll bar to appear, and make sure
all the data can be seen in the box.



3.2 Section 2 - Screen Validation Checklist
3.2.1 Aesthetic Conditions:

     1.    Is the general screen background the correct color?
     2.    Are the field prompts the correct color?
     3.    Are the field backgrounds the correct color?
     4.    In read-only mode, are the field prompts the correct color?
     5.    In read-only mode, are the field backgrounds the correct color?
     6.    Are all the screen prompts specified in the correct screen font?
     7.    Is the text in all fields specified in the correct screen font?
     8.    Are all the field prompts aligned perfectly on the screen?
     9.    Are all the field edit boxes aligned perfectly on the screen?
     10.   Are all group boxes aligned correctly on the screen?
     11.   Should the screen be resizable?
     12.   Should the screen be allowed to minimize?
     13.   Are all the field prompts spelt correctly?
     14.   Are all character or alphanumeric fields left justified? This is the default unless
           otherwise specified.
     15.   Are all numeric fields right justified? This is the default unless otherwise
           specified.
     16.   Is all the micro-help text spelt correctly on this screen?
     17.   Is all the error message text spelt correctly on this screen?
     18.   Is all user input captured in UPPER case or lowercase consistently?
     19.   Where the database requires a value (other than null) then this should be
           defaulted into fields. The user must either enter an alternative valid value or
           leave the default value intact.
     20.   Assure that all windows have a consistent look and feel.
     21.   Assure that all dialog boxes have a consistent look and feel.

3.2.2 Validation Conditions:

     1.    Does a failure of validation on every field cause a sensible user error message?
     2.    Is the user required to fix entries which have failed validation tests?
     3.    Have any fields got multiple validation rules and if so are all rules being applied?
     4.    If the user enters an invalid value and clicks on the OK button (i.e. does not TAB
           off the field) is the invalid entry identified and highlighted correctly with an error
           message?
     5.    Is validation consistently applied at screen level unless specifically required at
           field level?
     6.    For all numeric fields check whether negative numbers can and should be able to
           be entered.
     7.    For all numeric fields check the minimum and maximum values and also some
           mid-range values allowable?
     8.    For all character/alphanumeric fields check the field to ensure that there is a
           character limit specified and that this limit is exactly correct for the specified
           database size?
     9.    Do all mandatory fields require user input?
     10.   If any of the database columns don't allow null values then the corresponding
           screen fields must be mandatory. (If any field, which initially was mandatory, has
           become optional then check whether null values are allowed in this field.)

3.2.3 Navigation Conditions:

     1. Can the screen be accessed correctly from the menu?
     2. Can the screen be accessed correctly from the toolbar?
     3. Can the screen be accessed correctly by double clicking on a list control on the
        previous screen?
     4. Can all screens accessible via buttons on this screen be accessed correctly?
     5. Can all screens accessible by double clicking on a list control be accessed
        correctly?
     6. Is the screen modal? (i.e.) Is the user prevented from accessing other functions
        when this screen is active and is this correct?
     7. Can a number of instances of this screen be opened at the same time and is this
        correct?

3.2.4 Usability Conditions:

     1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the
        default unless otherwise specified.
     2. Is all date entry required in the correct format?
     3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
     4. Do the Shortcut keys work correctly?
     5. Have the menu options that apply to your screen got fast keys associated and
        should they have?
     6. Does the Tab Order specified on the screen go in sequence from Top Left to
        bottom right? This is the default unless otherwise specified.
     7. Are all read-only fields avoided in the TAB sequence?

     8. Are all disabled fields avoided in the TAB sequence?
     9. Can the cursor be placed in the microhelp text box by clicking on the text box
         with the mouse?
     10. Can the cursor be placed in read-only fields by clicking in the field with the
         mouse?
     11. Is the cursor positioned in the first input field or control when the screen is
         opened?
     12. Is there a default button specified on the screen?
     13. Does the default button work correctly?
     14. When an error message occurs does the focus return to the field in error when
         the user cancels it?
     15. When the user Alt+Tab's to another application does this have any impact on the
         screen upon return to the application?
     16. Do all the field edit boxes indicate the number of characters they will hold by
         their length? e.g. a 30-character field should be noticeably longer on screen.

3.2.5 Data Integrity Conditions:

     1. Is the data saved when the window is closed by double clicking on the close
        box?
     2. Check the maximum field lengths to ensure that there are no truncated
        characters?
     3. Where the database requires a value (other than null) then this should be
        defaulted into fields. The user must either enter an alternative valid value or
        leave the default value intact.
     4. Check maximum and minimum field values for numeric fields?
     5. If numeric fields accept negative values can these be stored correctly on the
        database and does it make sense for the field to accept negative numbers?
     6. If a set of radio buttons represents a fixed set of values such as A, B and C then
        what happens if a blank value is retrieved from the database? (In some situations
        rows can be created on the database by other functions, which are not screen
        based, and thus the required initial values can be incorrect.)
     7. If a particular set of data is saved to the database check that each value gets
        saved fully to the database. (i.e.) Beware of truncation (of strings) and rounding
        of numeric values.

3.2.6 Modes (Editable Read-only) Conditions:

     1. Are the screen and field colors adjusted correctly for read-only mode?
     2. Should a read-only mode be provided for this screen?
     3. Are all fields and controls disabled in read-only mode?
     4. Can the screen be accessed from the previous screen/menu/toolbar in read-only
        mode?
     5. Can all screens available from this screen be accessed in read-only mode?
     6. Check that no validation is performed in read-only mode.




3.2.7 General Conditions:

     1.    Assure the existence of the "Help" menu.
     2.    Assure that the proper commands and options are in each menu.
     3.    Assure that all buttons on all tool bars have a corresponding key command.
     4.    Assure that each menu command has an alternative (hot-key) key sequence,
           which will invoke it where appropriate.
     5.    In drop down list boxes, ensure that the names are not abbreviations / cut short
     6.    In drop down list boxes, assure that the list and each entry in the list can be
           accessed via appropriate key / hot key combinations.
     7.    Ensure that duplicate hot keys do not exist on each screen
     8.    Ensure the proper usage of the escape key (which is to undo any changes that
           have been made) and that it generates a caution message "Changes will be lost -
           Continue yes/no".
     9.    Assure that the cancel button functions the same as the escape key.
     10.   Assure that the Cancel button operates, as a Close button when changes have
           been made that cannot be undone.
     11.   Assure that only command buttons, which are used by a particular window, or in
           a particular dialog box, are present. – (i.e) make sure they don't work on the
           screen behind the current screen.
     12.   When a command button is used sometimes and not at other times, assure that
           it is grayed out when it should not be used.
     13.   Assure that OK and Cancel buttons are grouped separately from other command
           buttons.
     14.   Assure that command button names are not abbreviations.
     15.   Assure that all field labels/names are not technical labels, but rather are names
           meaningful to system users.
     16.   Assure that command buttons are all of similar size and shape, and same font &
           font size.
     17.   Assure that each command button can be accessed via a hot key combination.
     18.   Assure that command buttons in the same window/dialog box do not have
           duplicate hot keys.
     19.   Assure that each window/dialog box has a clearly marked default value
           (command button, or other object) which is invoked when the Enter key is
           pressed - and NOT the Cancel or Close button
     20.   Assure that focus is set to an object/button, which makes sense according to the
           function of the window/dialog box.
     21.   Assure that all option buttons (and radio buttons) names are not abbreviations.
     22.   Assure that option button names are not technical labels, but rather are names
           meaningful to system users.
     23.   If hot keys are used to access option buttons, assure that duplicate hot keys do
           not exist in the same window/dialog box.
     24.   Assure that option box names are not abbreviations.
     25.   Assure that option boxes, option buttons, and command buttons are logically
           grouped together in clearly demarcated areas "Group Box"
     26.   Assure that the Tab key sequence, which traverses the screens, does so in a
           logical way.
     27.   Assure consistency of mouse actions across windows.
     28.   Assure that the color red is not used to highlight active objects (many individuals
           are red-green color blind).
     29. Assure that the user will have control of the desktop with respect to general color
         and highlighting (the application should not dictate the desktop background
         characteristics).
     30. Assure that the screen/window does not have a cluttered appearance
     31. Ctrl + F6 opens next tab within tabbed window
     32. Shift + Ctrl + F6 opens previous tab within tabbed window
     33. Tabbing will open next tab within tabbed window if on last field of current tab
     34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed
         window
     35. Tabbing will go onto the next editable field in the window
     36. Banner style & size & display exact same as existing windows
     37. If 8 or less options in a list box, display all options on open of list box - should be
         no need to scroll
     38. Errors on continue will cause user to be returned to the tab and the focus should
         be on the field causing the error. (i.e the tab is opened, highlighting the field with
         the error on it)
     39. Pressing continue while on the first tab of a tabbed window (assuming all fields
         filled correctly) will not open all the tabs.
     40. On open of tab focus will be on first editable field
     41. All fonts to be the same
     42. Alt+F4 will close the tabbed window and return you to main screen or previous
         screen (as appropriate), generating "changes will be lost" message if necessary.
     43. Microhelp text for every enabled field & button
     44. Ensure all fields are disabled in read-only mode
     45. Progress messages on load of tabbed screens
     46. Return operates continue
     47. If retrieve on load of tabbed window fails window should not open

3.3 Specific Field Tests
3.3.1 Date Field Checks

     1. Assure that leap years are validated correctly & do not cause
        errors/miscalculations.
     2. Assure that month codes 00 and 13 are validated correctly & do not cause
        errors/miscalculations.
     3. Assure that month codes 00 and 13 are reported as errors.
     4. Assure that day values 00 and 32 are validated correctly & do not cause
        errors/miscalculations.
     5. Assure that Feb. 28, 29, 30 are validated correctly & do not cause
        errors/miscalculations.
     6. Assure that Feb. 30 is reported as an error.
     7. Assure that century change is validated correctly & does not cause
        errors/miscalculations.
     8. Assure that out-of-cycle dates are validated correctly & do not cause
        errors/miscalculations.
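
A hypothetical Java sketch of these date checks, using strict (non-lenient) parsing so that
out-of-range values such as month 13 or Feb. 30 are rejected:

import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DateFieldCheck {

    static boolean isValidDate(String value) {
        SimpleDateFormat fmt = new SimpleDateFormat("dd/MM/yyyy");
        fmt.setLenient(false);               // reject out-of-range day/month values
        try {
            fmt.parse(value);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidDate("29/02/2004"));  // true  - valid leap day
        System.out.println(isValidDate("29/02/2003"));  // false - not a leap year
        System.out.println(isValidDate("30/02/2004"));  // false - Feb. 30
        System.out.println(isValidDate("15/13/2004"));  // false - month 13
        System.out.println(isValidDate("00/01/2004"));  // false - day 00
    }
}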




3.3.2 Numeric Fields

     1.    Assure that lowest and highest values are handled correctly.
     2.    Assure that invalid values are logged and reported.
     3.    Assure that valid values are handled by the correct procedure.
     4.    Assure that numeric fields with a blank in position 1 are processed or reported as
           an error.
     5.    Assure that fields with a blank in the last position are processed or reported as
           an error.
     6.    Assure that both + and - values are correctly processed.
     7.    Assure that division by zero does not occur.
     8.    Include value zero in all calculations.
     9.    Include at least one in-range value.
     10.   Include maximum and minimum range values.
     11.   Include out of range values above the maximum and below the minimum.
     12.   Assure that upper and lower values in ranges are handled correctly.
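
A hypothetical Java sketch of boundary-value checks for a numeric field; the 0-999 range
and the test vectors are invented for illustration:

public class NumericFieldCheck {

    // Hypothetical field rule: integer between 0 and 999 inclusive.
    static boolean isValidQuantity(String raw) {
        if (raw == null || !raw.trim().equals(raw)) {
            return false;                        // leading/trailing blanks rejected
        }
        try {
            int value = Integer.parseInt(raw);
            return value >= 0 && value <= 999;
        } catch (NumberFormatException e) {
            return false;                        // invalid characters rejected
        }
    }

    public static void main(String[] args) {
        String[] vectors = {"0", "1", "500", "998", "999",   // in range, incl. boundaries
                            "-1", "1000",                    // just outside the range
                            " 7", "7 ", "", "abc"};          // blanks and invalid characters
        for (String v : vectors) {
            System.out.println("'" + v + "' -> " + isValidQuantity(v));
        }
    }
}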

3.3.3 Alpha Field Checks

     1.    Use blank and non-blank data.
     2.    Include lowest and highest values.
     3.    Include invalid characters & symbols.
     4.    Include valid characters.
     5.    Include data items with first position blank.
     6.    Include data items with last position blank.

3.4 Validation Testing - Standard Actions
3.4.1      Examples of Standard Actions - Substitute your specific
           commands
Add
View
Change
Delete
Continue - (i.e. continue saving changes or additions)
Add
View
Change
Delete
Cancel - (i.e. abandon changes or additions)
Fill each field - Valid data
Fill each field - Invalid data
Different Check Box / Radio Box combinations
Scroll Lists / Drop Down List Boxes
Help
Fill Lists and Scroll
Tab
Tab Sequence
Shift Tab

3.4.2 Shortcut keys / Hot Keys

Note: The following keys are used in some windows applications, and are included as a guide.

F1:         Help. With Shift: enters Help mode. With CTRL or ALT: N/A.
F2, F3:     N/A with any modifier.
F4:         With CTRL: closes the Document / Child window. With ALT: closes the application.
            Otherwise N/A.
F5, F6, F7: N/A with any modifier.
F8:         Toggles extend mode, if supported. With Shift: toggles Add mode, if supported.
            With CTRL or ALT: N/A.
F9:         N/A with any modifier.
F10:        Toggles menu bar activation. With Shift, CTRL or ALT: N/A.
F11, F12:   N/A with any modifier.
Tab:        Moves to the next active/editable field. With Shift: moves to the previous
            active/editable field. With CTRL: moves to the next open Document or Child window
            (adding SHIFT reverses the order of movement). With ALT: switches to the previously
            used application (holding down the ALT key displays all open applications).
Alt:        Puts focus on the first menu command (e.g. 'File'). With Shift, CTRL or ALT: N/A.



3.4.3 Control Shortcut Keys

Key                               Function

CTRL + Z                          Undo

CTRL + X                          Cut

CTRL + C                          Copy

CTRL + V                          Paste

CTRL + N                          New

CTRL + O                          Open

CTRL + P                          Print

CTRL + S                          Save

CTRL + B                          Bold*

CTRL + I                          Italic*

CTRL + U                          Underline*
* These shortcuts are suggested for text formatting applications, in the context for
which they make sense. Applications may use other modifiers for these
operations.


4 Regression Testing

4.1 What is regression Testing
         Regression testing is the process of testing changes to computer programs to
          make sure that the older programming still works with the new changes.
         Regression testing is a normal part of the program development process. Test
          department coders develop test scenarios and exercises that will test new
          units of code after they have been written.
         Before a new version of a software product is released, the old test cases are run
          against the new version to make sure that all the old capabilities still work. The
          reason they might not work is that changing or adding new code to a program
          can easily introduce errors into code that is not intended to be changed.
         The selective retesting of a software system that has been modified to ensure
          that any bugs have been fixed, that no other previously working functions have
          failed as a result of the repairs, and that newly added features have not
          created problems with previous versions of the software. Also referred to as
          verification testing.
         Regression testing is initiated after a programmer has attempted to fix a
          recognized problem or has added source code to a program that may have
          inadvertently introduced errors.
         It is a quality control measure to ensure that the newly modified code still
          complies with its specified requirements and that unmodified code has not been
          affected by the maintenance activity.
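
A minimal sketch of a regression suite, assuming JUnit 4 is available; the production
methods and test names are invented for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class RegressionSuiteExample {

    // Stand-ins for production code; a real project would test its own classes.
    static int taxFor(int amount)      { return amount / 5; }
    static int discountFor(int amount) { return amount < 100 ? 0 : 10; }

    public static class TaxCalculatorTest {
        @Test public void zeroAmountGivesZeroTax()     { assertEquals(0, taxFor(0)); }
        @Test public void taxIsTwentyPercent()         { assertEquals(20, taxFor(100)); }
    }

    public static class DiscountTest {
        @Test public void noDiscountBelowThreshold()   { assertEquals(0, discountFor(50)); }
        @Test public void discountAppliesAtThreshold() { assertEquals(10, discountFor(100)); }
    }

    // The regression suite groups the existing tests and is re-run against every
    // new build; a failure signals that a change broke previously working code.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({TaxCalculatorTest.class, DiscountTest.class})
    public static class RegressionSuite {
    }
}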




4.2 Test Execution
Test Execution is the heart of the testing process. Each time your application changes,
you will want to execute the relevant parts of your test plan in order to locate defects and
assess quality.



4.2.1 Create Test Cycles
During this stage you decide the subset of tests from your test database you want to
execute.
Usually you do not run all the tests at once. At different stages of the quality assurance
process, you need to execute different tests in order to address specific goals. A related
group of tests is called a test cycle, and can include both manual and automated tests.
Example: You can create a cycle containing basic tests that run on each build of the
application throughout development. You can run the cycle each time a new build is
ready, to determine the application's stability before beginning more rigorous testing.
Example: You can create another set of tests for a particular module in your application.
This test cycle includes tests that check that module in depth.
To decide which test cycles to build, refer to the testing goals you defined at the
beginning of the process. Also consider issues such as the current state of the
application and whether new functions have been added or modified.
Following are examples of some general categories of test cycles to consider:
         sanity cycle checks the entire system at a basic level (breadth, rather than
          depth) to see that it is functional and stable. This cycle should include basic-level
          tests containing mostly positive checks.
         normal cycle tests the system a little more in depth than the sanity cycle. This
          cycle can group medium-level tests, containing both positive and negative
          checks.
         advanced cycle tests both breadth and depth. This cycle can be run when more
          time is available for testing. The tests in the cycle cover the entire application
          (breadth), and also test advanced options in the application (depth).
         regression cycle tests maintenance builds. The goal of this type of cycle is to
          verify that a change to one part of the software did not break the rest of the
          application. A regression cycle includes sanity-level tests for testing the entire
          software, as well as in-depth tests for the specific area of the application that was
          modified.
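
One way to realise such cycles in code, assuming JUnit 4 is available, is to tag tests with
categories and build a suite per cycle; the classes and category below are invented for
illustration:

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class TestCycleExample {

    // Marker interface used to tag basic, positive checks.
    public interface Sanity { }

    public static class LoginTests {
        @Test
        @Category(Sanity.class)
        public void applicationStartsAndShowsLogin() { /* basic positive check */ }

        @Test
        public void lockedOutAfterThreeBadPasswords() { /* deeper, negative check */ }
    }

    // Sanity cycle: runs only the tests tagged with the Sanity category,
    // e.g. on every new build before more rigorous testing begins.
    @RunWith(Categories.class)
    @IncludeCategory(Sanity.class)
    @Suite.SuiteClasses(LoginTests.class)
    public static class SanityCycle {
    }
}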



4.2.2 Run Test Cycles (Automated & Manual Tests)
Once you have created cycles that cover your testing objectives, you begin executing the
tests in the cycle. You perform manual tests using the test steps. Testing Tools executes
automated tests for you. A test cycle is complete only when all tests, automated and
manual, have been run.
         With Manual Test Execution you follow the instructions in the test steps of each
          test. You use the application, enter input, compare the application output with the
          expected output, and log the results. For each test step you assign either pass or
          fail status.
         During Automated Test Execution you create a batch of tests and launch the
          entire batch at once. Testing Tools runs the tests one at a time. It then imports
          results, providing outcome summaries for each test.

4.2.3 Analyze Test Results
After every test run, analyze and validate the test results. Identify all the failed
steps in the tests and determine whether a bug has been detected or whether the
expected result needs to be updated.

4.3 Change Request
4.3.1 Initiating a Change Request
A user or developer wants to suggest a modification that would improve an existing
application, notices a problem with an application, or wants to recommend an
enhancement. Any major or minor request is considered a problem with an application
and will be entered as a change request.


4.3.2 Type of Change Request
Bug - the application works incorrectly or provides incorrect information (for example, a
letter is allowed to be entered in a number field).
Change - a modification of the existing application (for example, sorting the files
alphabetically by the second field rather than numerically by the first field makes them
easier to find).
Enhancement - new functionality or an item added to the application (for example, a new
report, a new field, or a new button).


4.3.3 Priority for the request
Low - the application works, but this change would make the function easier or more user friendly.
High - the application works, but this change is necessary to perform a job.
Critical - the application does not work; job functions are impaired and there is no work
around. This also applies to any Section 508 infraction.



4.4 Bug Tracking
         Locating and repairing software bugs is an essential part of software
          development.
         Bugs can be detected and reported by engineers, testers, and end-users in all
          phases of the testing process.
         Information about bugs must be detailed and organized in order to schedule bug
          fixes and determine software release dates.




Bug Tracking involves two main stages: reporting and tracking.

4.4.1 Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or
defects) that you detected. The bugs are stored in a database so that you can manage
them and analyze the status of your application.
When you report a bug, you record all the information necessary to reproduce and fix it.
You also make sure that the QA and development personnel involved in fixing the bug
are notified.

4.4.2 Track and Analyze Bugs
The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and
closed.
              First you report New bugs to the database, and provide all necessary
               information to reproduce, fix, and follow up the bug.
              The Quality Assurance manager or Project manager periodically reviews all
               New bugs and decides which should be fixed. These bugs are given the
               status Open and are assigned to a member of the development team.
              Software developers fix the Open bugs and assign them the status Fixed.
              QA personnel test a new build of the application. If a bug does not reoccur, it
               is Closed. If a bug is detected again, it is reopened.
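
The lifecycle described above can be sketched, purely as a hypothetical illustration, as a
small state machine; the allowed transitions are an assumption based on the steps listed:

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class BugLifecycle {

    enum Status { NEW, OPEN, FIXED, CLOSED, REOPENED }

    // Assumed transitions, following the lifecycle described above.
    private static final Map<Status, Set<Status>> TRANSITIONS = new EnumMap<>(Status.class);
    static {
        TRANSITIONS.put(Status.NEW,      EnumSet.of(Status.OPEN));
        TRANSITIONS.put(Status.OPEN,     EnumSet.of(Status.FIXED));
        TRANSITIONS.put(Status.FIXED,    EnumSet.of(Status.CLOSED, Status.REOPENED));
        TRANSITIONS.put(Status.REOPENED, EnumSet.of(Status.FIXED));
        TRANSITIONS.put(Status.CLOSED,   EnumSet.noneOf(Status.class));
    }

    static boolean canMove(Status from, Status to) {
        return TRANSITIONS.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(Status.NEW, Status.OPEN));     // true
        System.out.println(canMove(Status.FIXED, Status.CLOSED)); // true
        System.out.println(canMove(Status.CLOSED, Status.OPEN));  // false
    }
}
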
Communication is an essential part of bug tracking; all members of the development and
quality assurance team must be well informed in order to ensure that bug information is
up to date and that the most important problems are addressed.
The number of open or fixed bugs is a good indicator of the quality status of your
application. You can use data analysis tools such as reports and graphs to interpret bug
data.



4.5 Traceability Matrix
A traceability matrix is created by associating requirements with the products that satisfy
them. Tests are associated with the requirements on which they are based and the
product tested to meet the requirement. Below is a simple traceability matrix structure.

There can be more things included in a traceability matrix than shown below. Traceability
requires unique identifiers for each requirement and product. Numbers for products are
established in a configuration management (CM) plan.




Traceability ensures completeness, that all lower level requirements derive from
higher level requirements, and that all higher level requirements are allocated to
lower level requirements. Traceability is also used in managing change and
provides the basis for test planning.
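
As a minimal, hypothetical sketch, a traceability matrix can be represented
programmatically as a map from requirement identifiers to the tests that cover them (the
identifiers below are invented):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TraceabilityMatrix {

    public static void main(String[] args) {
        // Hypothetical identifiers: user requirements ("U") trace to test cases ("TC").
        Map<String, List<String>> matrix = new LinkedHashMap<>();
        matrix.put("U1", List.of("TC-01", "TC-02"));
        matrix.put("U2", List.of("TC-03"));
        matrix.put("U3", List.of());            // no covering test: a coverage gap

        for (Map.Entry<String, List<String>> row : matrix.entrySet()) {
            String tests = row.getValue().isEmpty() ? "NOT COVERED" : row.getValue().toString();
            System.out.println(row.getKey() + " -> " + tests);
        }
    }
}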

SAMPLE TRACEABILITY MATRIX

A traceability matrix is a report from the requirements database or repository.
The examples below show traceability between user and system requirements.
User requirement identifiers begin with "U" and system requirements with "S."




Tracing S12 to its source makes it clear this requirement is erroneous: it must be
eliminated, rewritten, or the traceability corrected.




In addition to traceability matrices, other reports are necessary to manage requirements.
What goes into each report depends on the information needs of those receiving the
report(s). Determine their information needs and document the information that will be
associated with the requirements when you set up your requirements database or
repository.




5 Phases of Testing
5.1 Introduction
The primary objective of the testing effort is to determine conformance to the requirements
specified in the contracted documents. The integration of this code with the internal code
is an important objective. The goal is to evaluate the system as a whole, not its parts.
Techniques can be structural or functional.
Techniques can be used in any stage that tests the system as a whole (System Testing,
Acceptance Testing, Unit Testing, Installation, etc.).

5.2 Types and Phases of Testing
SDLC Document                                          QA Document
Software Requirement Specification                     Requirement Checklist
Design Document                                        Design Checklist
Functional Specification                               Functional Checklist
Design Document & Functional Specs                     Unit Test Case Documents
Design Document & Functional Specs                     Integration Test Case Documents
Design Document & Functional Specs                     System Test Case Documents
Unit / System / Integration Test Case Documents        Regression Test Case Documents
Functional Specs, Performance Criteria                 Performance Test Case Documents
Software Requirement Specification, Unit / System      User Acceptance Test Case
/ Integration / Regression / Performance Test          Documents.
Case Documents




5.3 The “V” Model

The “V” model pairs each development phase with the corresponding test phase:

   Requirements      -   Acceptance Testing
   Specification     -   System Testing
   Architecture      -   Integration Testing
   Detailed Design   -   Unit Testing
                   Coding




The same mapping, shown as a flow from each SDLC document to the QA document it drives:

   Requirement Study                                  ->  Requirement Checklist; Software Requirement Specification
   Software Requirement Specification                 ->  Functional Specification Checklist; Functional Specification Document
   Functional Specification Document                  ->  Architecture Design
   Architecture Design                                ->  Detailed Design Document  ->  Coding
   Functional Specification Document                  ->  Unit Test Case Documents
   Design Document                                    ->  System Test Case Documents
   Functional Specification Document                  ->  Integration Test Case Documents
   Unit / Integration / System Test Case Documents    ->  Regression Test Case Documents
   Functional Specification Document; Performance Criteria  ->  Performance Test Cases and Scenarios
   Software Requirement Specification; Regression / Performance Test Cases  ->  User Acceptance Test Case Documents / Scenarios




An extended “V” model adds the review and regression activities at each level:

   Requirements      -   Requirements Review     -   Regression Round 3 / Performance Testing
   Specification     -   Specification Review    -   Regression Round 2 / System Testing
   Architecture      -   Architecture Review     -   Regression Round 1 / Integration Testing
   Detailed Design   -   Design Review           -   Unit Testing
   Code              -   Code Walkthrough




6 Integration Testing
One of the most significant aspects of a software development project is the integration
strategy. Integration may be performed all at once, top-down, bottom-up, critical piece
first, or by first integrating functional subsystems and then integrating the subsystems in
separate phases using any of the basic strategies. In general, the larger the project, the
more important the integration strategy.
Very small systems are often assembled and tested in one phase. For most real systems,
this is impractical for two major reasons. First, the system would fail in so many places at
once that the debugging and retesting effort would be impractical.
Second, satisfying any white box testing criterion would be very difficult, because of the
vast amount of detail separating the input data from the individual code modules. In fact,
most integration testing has been traditionally limited to "black box" techniques.
Large systems may require many integration phases, beginning with assembling modules
into low-level subsystems, then assembling subsystems into larger subsystems, and
finally assembling the highest level subsystems into the complete system.
To be most effective, an integration testing technique should fit well with the overall
integration strategy. In a multi-phase integration, testing at each phase helps detect
errors early and keep the system under control. Performing only cursory testing at early
integration phases and then applying a more rigorous criterion for the final stage is really
just a variant of the high-risk "big bang" approach. However, performing rigorous testing
of the entire software involved in each integration phase involves a lot of wasteful
duplication of effort across phases. The key is to leverage the overall integration structure
to allow rigorous testing at each phase while minimizing duplication of effort.
It is important to understand the relationship between module testing and integration
testing. In one view, modules are rigorously tested in isolation using stubs and drivers
before any integration is attempted. Then, integration testing concentrates entirely on
module interactions, assuming that the details within each module are accurate. At the
other extreme, module and integration testing can be combined, verifying the details of
each module's implementation in an integration context. Many projects compromise,
combining module testing with the lowest level of subsystem integration testing, and then
performing pure integration testing at higher levels. Each of these views of integration
testing may be appropriate for any given project, so an integration testing method should
be flexible enough to accommodate them all.

 Combining module testing with bottom-up integration.




6.1 Generalization of module testing criteria
Module testing criteria can often be generalized in several possible ways to support
integration testing. As discussed in the previous subsection, the most obvious
generalization is to satisfy the module testing criterion in an integration context, in effect
using the entire program as a test driver environment for each module. However, this
trivial kind of generalization does not take advantage of the differences between module
and integration testing. Applying it to each phase of a multi-phase integration strategy, for
example, leads to an excessive amount of redundant testing.
More useful generalizations adapt the module testing criterion to focus on interactions
between modules rather than attempting to test all of the details of each module's
implementation in an integration context. The statement coverage module testing
criterion, in which each statement is required to be exercised during module testing, can
be generalized to require each module call statement to be exercised during integration
testing. Although the specifics of the generalization of structured testing are more
detailed, the approach is the same. Since structured testing at the module level requires
that all the decision logic in a module's control flow graph be tested independently, the
appropriate generalization to the integration level requires that just the decision logic
involved with calls to other modules be tested independently.

Module design complexity
Rather than testing all decision outcomes within a module independently, structured
testing at the integration level focuses on the decision outcomes that are involved with
module calls. The design reduction technique helps identify those decision outcomes, so

that it is possible to exercise them independently during integration testing. The idea
behind design reduction is to start with a module control flow graph, remove all control
structures that are not involved with module calls, and then use the resultant "reduced"
flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for
performing design reduction. Although not strictly a reduction rule, the call rule states that
function call ("black dot") nodes cannot be reduced. The remaining rules work together to
eliminate the parts of the flow graph that are not involved with module calls. The
sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of
this rule removes one node and one edge from the flow graph, it leaves the cyclomatic
complexity unchanged. However, it does simplify the graph so that the other rules can be
applied. The repetitive rule eliminates top-test loops that are not involved with module
calls. The conditional rule eliminates conditional statements that do not contain calls in
their bodies. The looping rule eliminates bottom-test loops that are not involved with
module calls. It is important to preserve the module's connectivity when using the looping
rule, since for poorly-structured code it may be hard to distinguish the "top" of the loop
from the "bottom". For the rule to apply, there must be a path from the module entry to
the top of the loop and a path from the bottom of the loop to the module exit. Since the
repetitive, conditional, and looping rules each remove one edge from the flow graph, they
each reduce cyclomatic complexity by one.
Rules 1 through 4 are intended to be applied iteratively until none of them can be applied,
at which point the design reduction is complete. By this process, even very complex logic
can be eliminated as long as it does not involve any module calls.




Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and
it is important to limit the corresponding stages of testing as well. Hierarchical design is
most effective when the coupling among sibling components decreases as the
component size increases, which simplifies the derivation of data sets that test
interactions among components. The remainder of this section extends the integration
testing techniques of structured testing to handle the general case of incremental
integration, including support for hierarchical design. The key principle is to test just the
interaction among components at each integration stage, avoiding redundant testing of
previously integrated sub-components.


To extend statement coverage to support incremental integration, it is required that all
module call statements from one component into a different component be exercised at
each integration stage. To form a completely flexible "statement testing" criterion, it is
required that each statement be executed during the first phase (which may be anything
from single modules to the entire program), and that at each integration phase all call
statements that cross the boundaries of previously integrated components are tested.
Given hierarchical integration stages with good cohesive partitioning properties, this limits
the testing effort to a small fraction of the effort to cover each statement of the system at
each integration phase.
Structured testing can be extended to cover the fully general case of incremental
integration in a similar manner. The key is to perform design reduction at each integration
phase using just the module call nodes that cross component boundaries, yielding
component-reduced graphs, and exclude from consideration all modules that do not
contain any cross-component calls.
Figure 7-7 illustrates the structured testing approach to incremental integration. Modules
A and C have been previously integrated, as have modules B and D. It would take three
tests to integrate this system in a single phase. However, since the design predicate
decision to call module D from module B has been tested in a previous phase, only two
additional tests are required to complete the integration testing. Modules B and D are
removed from consideration because they do not contain cross-component calls, the
component module design complexity of module A is 1, and the component module
design complexity of module C is 2.




7 Acceptance Testing
7.1 Introduction – Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine
whether a system satisfies its acceptance criteria and thus whether the customer should
accept the system.
The main types of software testing are:
Component.
Interface.
System.
Acceptance.
Release.
Acceptance Testing checks the system against the "Requirements". It is similar to
systems testing in that the whole system is checked but the important difference is the
change in focus:
Systems Testing checks that the system that was specified has been delivered.
Acceptance Testing checks that the system delivers what was requested.
The customer, and not the developer, should always perform acceptance testing. The customer
knows what is required from the system to achieve value in the business and is the only
person qualified to make that judgment.

The forms of the tests may follow those in system testing, but at all times they are
informed by the business needs.

The test procedures that lead to formal 'acceptance' of new or changed systems. User
Acceptance Testing is a critical phase of any 'systems' project and requires significant
participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be
developed in order to plan precisely, and in detail, the means by which 'Acceptance' will
be achieved. The final part of the UAT can also include a parallel run to prove the system
against the current system.

7.2 Factors influencing Acceptance Testing
The User Acceptance Test Plan will vary from system to system but, in general, the
testing should be planned in order to provide a realistic and adequate exposure of the
system to all reasonably expected events. The testing can be based upon the User
Requirements Specification to which the system should conform.
As in any system though, problems will arise and it is important to have determined what
will be the expected and required responses from the various parties concerned;
including Users; Project Team; Vendors and possibly Consultants / Contractors.
In order to agree what such responses should be, the End Users and the Project Team
need to develop and agree a range of 'Severity Levels'. These levels will range from (say)
1 to 6 and will represent the relative severity, in terms of business / commercial impact, of
a problem with the system, found during testing. Here is an example which has been
used successfully; '1' is the most severe and '6' has the least impact:
'Show Stopper' i.e. it is impossible to continue with the testing because of the severity of
this error / bug.


Critical Problem; testing can continue but we cannot go into production (live) with this
problem.
Major Problem; testing can continue, but going live with this feature will cause severe
disruption to business processes in live operation.
Medium Problem; testing can continue and the system is likely to go live with only
minimal departure from agreed business processes.
Minor Problem; both testing and live operations may progress. This problem should be
corrected, but little or no change to business processes is envisaged.
'Cosmetic' Problem, e.g. colours, fonts, pitch size. However, if such features are key to
the business requirements they will warrant a higher severity level.
The users of the system, in consultation with the executive sponsor of the project, must
then agree upon the responsibilities and required actions for each category of
problem. For example, you may demand that any problems in severity level 1, receive
priority response and that all testing will cease until such level 1 problems are resolved.
Caution. Even where the severity levels and the responses to each have been agreed by
all parties; the allocation of a problem into its appropriate severity level can be subjective
and open to question. To avoid the risk of lengthy and protracted exchanges over the
categorisation of problems, we strongly advise that a range of examples is agreed in
advance to ensure that there are no fundamental areas of disagreement; or, if there
are, that these are known in advance and your organisation is forewarned.
Finally, it is crucial to agree the Criteria for Acceptance. Because no system is
entirely fault free, the End User and the vendor must agree the maximum number of
acceptable outstanding problems in any particular category. Again, prior consideration
of this is advisable.
N.B. In some cases, users may agree to accept ('sign off') the system subject to a range
of conditions. These conditions need to be analysed as they may, perhaps
unintentionally, seek additional functionality which could be classified as scope creep. In
any event, any and all fixes from the software developers must be subjected to rigorous
System Testing and, where appropriate, Regression Testing.

7.3 Conclusion
Hence the goal of acceptance testing is to verify the overall quality, correct operation,
scalability, completeness, usability, portability, and robustness of the functional
components supplied by the software system.




8 SYSTEM TESTING
8.1 Introduction to SYSTEM TESTING
For most organizations, software and system testing represents a significant element of a
project's cost in terms of money and management time. Making this function more
effective can deliver a range of benefits including reductions in risk, development costs
and improved 'time to market' for new systems.
Systems with software components and software-intensive systems are becoming more
and more complex every day. Industry sectors such as telecom, automotive, railway,
aeronautics and space are good examples. It is widely agreed that testing is essential to
manufacture reliable products. However, the validation process does not often receive
the attention it requires. Moreover, the validation process is closely related to other
activities such as conformance, acceptance and qualification testing.
The difference between function testing and system testing is that the focus is now on
the whole application and its environment. Therefore the complete program has to be
available. This does not mean that the individual functions of the program are tested
again, as that would be redundant. The main goal is rather to demonstrate the
discrepancies of the product from its requirements and its documentation. In other
words, this again addresses the question, "Did we build the right product?" and not just,
"Did we build the product right?"
However, system testing does not only deal with this question; it also covers aspects
that follow from the word "system". This means that the tests should be carried out in
the environment for which the program was designed, such as a multiuser network, and
even security guidelines have to be included. Once again, it is beyond doubt that this
test cannot be done completely; nevertheless, while it is one of the most incomplete test
methods, it is one of the most important.
A number of time-domain software reliability models attempt to predict the growth of a
system's reliability during the system test phase of the development life cycle. Studies
have, for example, applied several types of Poisson-process models to the development
of a large system for which system test was performed in two parallel tracks, using
different strategies for test data selection.
We test that the functionality of your systems meets your specifications, integrating with
whichever type of development methodology you are applying. We test for errors that
users are likely to make as they interact with the application, as well as your
application's ability to trap errors gracefully. These techniques can be applied flexibly,
whether testing a financial system, an e-commerce site, an online casino or a game.
 System Testing is more than just functional testing, however, and can, when appropriate,
also encompass many other types of testing, such as:
         o security
         o load/stress
         o performance
         o browser compatibility
         o localisation

8.2 Need for System Testing
Effective software testing, as a part of software engineering, has been proven over the
last 3 decades to deliver real business benefits including:


      Reduction of costs - reduces rework and support overheads.
      Increased productivity - more effort is spent on developing new functionality and
       less on "bug fixing" as quality increases.
      Reduced commercial risk - if it goes wrong, what is the potential impact on your
       commercial goals? Knowledge is power, so why take a leap of faith while your
       competition step forward with confidence?
These benefits are achieved as a result of some fundamental principles of testing, for
example, increased independence naturally increases objectivity.
 Your test strategy must take into consideration the risks to your organisation, both
commercial and technical. If you have a personal interest in the system's success, it is
only human for your objectivity to be compromised.

8.3 System Testing Techniques
Goal is to evaluate the system as a whole, not its parts
Techniques can be structural or functional
Techniques can be used in any stage that tests the system as a whole (acceptance,
installation, etc.)
Techniques not mutually exclusive
Structural techniques
stress testing - test larger-than-normal capacity in terms of transactions, data, users,
speed, etc.
execution testing- test performance in terms of speed, precision, etc.
recovery testing - test how the system recovers from a disaster, how it handles corrupted
data, etc.
operations testing - test how the system fits in with existing operations and procedures in
the user organization
compliance testing - test adherence to standards
security testing - test security requirements
Functional techniques
requirements testing - fundamental form of testing - makes sure the system does what it's
required to do
regression testing - make sure unchanged functionality remains unchanged
error-handling testing - test required error-handling functions (usually user error)
manual-support testing - test that the system can be used properly - includes user
documentation
intersystem handling testing - test that the system is compatible with other systems in
the environment
control testing - test required control mechanisms
parallel testing - feed same input into two versions of the system to make sure they
produce the same output
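A minimal sketch of parallel testing is given below, assuming two hypothetical versions
of a pricing routine (price_v1 and price_v2 are stand-ins, not functions from any real
system): the same inputs are fed to both versions and any difference in output is
reported.

/* Parallel testing sketch: run the same inputs through an old and a new
   version of a routine and compare the outputs. */
#include <stdio.h>

static int price_v1(int qty) { return qty * 10; }   /* existing version   */
static int price_v2(int qty) { return qty * 10; }   /* new implementation */

int main(void) {
    int inputs[] = {0, 1, 30, 31, 1000};
    int mismatches = 0;
    for (int i = 0; i < 5; i++) {
        int old_out = price_v1(inputs[i]);
        int new_out = price_v2(inputs[i]);
        if (old_out != new_out) {
            printf("MISMATCH for input %d: old=%d new=%d\n",
                   inputs[i], old_out, new_out);
            mismatches++;
        }
    }
    printf("%d mismatch(es) found\n", mismatches);
    return mismatches ? 1 : 0;
}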


Unit Testing
Goal is to evaluate some piece (file, program, module, component, etc.) in isolation
Techniques can be structural or functional
In practice, it's usually ad-hoc and looks a lot like debugging
More structured approaches exist



8.4 Functional techniques
input domain testing - pick test cases representative of the range of allowable input,
including high, low, and average values
equivalence partitioning - partition the range of allowable input so that the program is
expected to behave similarly for all inputs in a given partition, then pick a test case from
each partition
 boundary value - choose test cases with input values at the boundary (both inside and
outside) of the allowable range (see the sketch after this list)
syntax checking - choose test cases that violate the format rules for input
special values - design test cases that use input values that represent special situations
output domain testing - pick test cases that will produce output at the extremes of the
output domain
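As a minimal illustration of equivalence partitioning and boundary value selection, the
sketch below tests a hypothetical routine that accepts ages in the range 18 to 65 (the
routine and its limits are assumptions made purely for this example):

/* Equivalence partitions: below range, inside range, above range.
   Boundary values: 17, 18, 65, 66 (just outside and just inside the range). */
#include <assert.h>

static int accept_age(int age) {          /* hypothetical routine under test */
    return age >= 18 && age <= 65;
}

int main(void) {
    /* one representative value per equivalence partition */
    assert(accept_age(10) == 0);   /* below range  */
    assert(accept_age(40) == 1);   /* inside range */
    assert(accept_age(90) == 0);   /* above range  */

    /* boundary values on each side of the allowable range */
    assert(accept_age(17) == 0);
    assert(accept_age(18) == 1);
    assert(accept_age(65) == 1);
    assert(accept_age(66) == 0);
    return 0;
}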
Structural techniques
statement testing - ensure the set of test cases exercises every statement at least once
branch testing - each branch of an if/then statement is exercised
conditional testing - each condition within a decision is exercised both true and false
expression testing - every part of every expression is exercised
path testing - every path is exercised (impossible in practice)

Error-based techniques
basic idea is that if you know something about the nature of the defects in the code, you
can estimate whether or not you've found all of them
fault seeding - put a certain number of known faults into the code, then test until they are
all found (a worked estimate based on this idea follows this list)
mutation testing - create mutants of the program by making single changes, then run test
cases until all mutants have been killed
historical test data - an organization keeps records of the average numbers of defects in
the products it produces, then tests a new product until the number of defects found
approaches the expected number
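The fault seeding idea can be turned into a rough estimate of the faults remaining. The
sketch below uses the simple seeding estimate often attributed to Mills: if S faults are
seeded, s of them are rediscovered, and n genuine faults are found, the total number of
genuine faults is estimated as n * S / s. The figures used are invented for illustration only.

#include <stdio.h>

int main(void) {
    int seeded_total = 20;   /* faults deliberately planted            */
    int seeded_found = 15;   /* planted faults rediscovered by tests   */
    int real_found   = 45;   /* genuine faults found by the same tests */

    double estimated_total = (double)real_found * seeded_total / seeded_found;
    printf("Estimated genuine faults in the product : %.0f\n", estimated_total);
    printf("Estimated genuine faults still remaining: %.0f\n",
           estimated_total - real_found);
    return 0;
}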

8.5 Conclusion:
Hence the system Test phase should begin once modules are integrated enough to
perform tests in a whole system environment. System testing can occur in parallel with
integration test, especially with the top-down method.




9         Unit Testing
9.1 Introduction to Unit Testing
          Unit testing. Isn't that some annoying requirement that we're going to ignore?
          Many developers get very nervous when you mention unit tests. Usually this is
          because they picture a grand table with every single method listed, along with
          the expected results and pass/fail date. Such exhaustive documentation may look
          impressive, but that level of formality is rarely practical in most programming
          projects.

          The unit test will motivate the code that you write. In a sense, it is a little design
          document that says, "What will this bit of code do?" Or, in the language of object
          oriented programming, "What will these clusters of objects do?"

          The crucial issue in constructing a unit test is scope. If the scope is too narrow,
          then the tests will be trivial and the objects might pass the tests, but there will be
          no design of their interactions. Certainly, interactions of objects are the crux of
          any object oriented design.

          Likewise, if the scope is too broad, then there is a high chance that not every
          component of the new code will get tested. The programmer is then reduced to
          testing-by-poking-around, which is not an effective test strategy.


          Need for Unit Test
          How do you know that a method doesn't need a unit test? First, can it be tested
          by inspection? If the code is simple enough that the developer can just look at it
          and verify its correctness then it is simple enough to not require a unit test. The
          developer should know when this is the case.

          Unit tests will most likely be defined at the method level, so the art is to define the
          unit test on the methods that cannot be checked by inspection. Usually this is the
          case when the method involves a cluster of objects. Unit tests that isolate
          clusters of objects for testing are doubly useful, because they test for failures,
          and they also identify those segments of code that are related. People who revisit
          the code will use the unit tests to discover which objects are related, or which
          objects form a cluster. Hence: Unit tests isolate clusters of objects for future
          developers.

          Another good litmus test is to look at the code and see if it throws an error or
          catches an error. If error handling is performed in a method, then that method
          can break. Generally, any method that can break is a good candidate for having
          a unit test, because it may break at some time, and then the unit test will be there
          to help you fix it.

          The danger of not implementing a unit test on every method is that the coverage
          may be incomplete. Just because we don't test every method explicitly doesn't
          mean that methods can get away with not being tested. The programmer should

          know that their unit testing is complete when the unit tests cover at the very least
          the functional requirements of all the code. The careful programmer will know
          that their unit testing is complete when they have verified that their unit tests
          cover every cluster of objects that form their application.


Life Cycle Approach to Testing

Testing will occur throughout the project lifecycle, i.e., from Requirements till User
Acceptance Testing. The main objectives of unit testing are as follows:

• To execute a program with the intent of finding an error;
• To uncover an as-yet undiscovered error; and
• To prepare a test case with a high probability of finding an as-yet undiscovered error.

Levels of Unit Testing
• UNIT (aiming for 100% code coverage)
• INTEGRATION
• SYSTEM
• ACCEPTANCE
• MAINTENANCE AND REGRESSION

Concepts in Unit Testing:
• The most 'micro' scale of testing;
• Tests particular functions or code modules;
• Typically done by the programmer and not by testers, as it requires detailed
  knowledge of the internal program design and code;
• Not always easily done unless the application has a well-designed architecture
  with tight code.



9.2 Unit Testing –Flow:

[Figure: unit test harness - a driver exercises the module under test through stubs,
applying test cases and collecting results. The aspects examined include the module
interface, local data structures, boundary conditions, independent paths and
error-handling paths.]

Types of Errors detected

 The following are the Types of errors that may be caught
         •    Error in Data Structures
         •    Performance Errors
         •    Logic Errors
         •    Validity of alternate and exception flows
         •    Identified at analysis/design stages

 Unit Testing – Black Box Approach
       •      Field Level Check
          •         Field Level Validation
          •         User Interface Check
          •         Functional Level Check

Unit Testing – White Box Approach
          STATEMENT COVERAGE
          DECISION COVERAGE
          CONDITION COVERAGE
          MULTIPLE CONDITION COVERAGE (nested conditions)
          CONDITION/DECISION COVERAGE
          PATH COVERAGE


Unit Testing – FIELD LEVEL CHECKS
          •      Null / Not Null Checks
          •        Uniqueness Checks
          •        Length Checks
          •        Date Field Checks
          •        Numeric Checks
          •        Negative Checks
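The sketch below shows how a few of these field-level checks (not null, length and
numeric) might be coded; the field name and its length limit are hypothetical, not taken
from any particular application:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* returns 1 if the value passes the checks, 0 otherwise */
static int validate_account_no(const char *account_no) {
    if (account_no == NULL || account_no[0] == '\0')   /* null / not-null check */
        return 0;
    size_t len = strlen(account_no);
    if (len != 10)                                      /* length check          */
        return 0;
    for (size_t i = 0; i < len; i++)                    /* numeric check         */
        if (!isdigit((unsigned char)account_no[i]))
            return 0;
    return 1;
}

int main(void) {
    printf("%d\n", validate_account_no("1234567890"));  /* expected: 1            */
    printf("%d\n", validate_account_no("12345"));       /* expected: 0 (length)   */
    printf("%d\n", validate_account_no("12345abcde"));  /* expected: 0 (numeric)  */
    printf("%d\n", validate_account_no(""));            /* expected: 0 (not null) */
    return 0;
}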

Unit Testing – Field Level Validations
          •       Test all Validations for an Input field
          •        Date Range Checks (From Date / To Date)
          •        Date Check Validation with System date

Unit Testing – User Interface Checks
•     Readability of the Controls
•      Tool Tips Validation
•      Ease of Use of Interface Across
•         Tab related Checks
•         User Interface Dialog
•        GUI compliance checks

Unit Testing - Functionality Checks
•        Screen Functionalities
•         Field Dependencies
•         Auto Generation
•         Algorithms and Computations
•        Normal and Abnormal terminations
•        Specific Business Rules if any..

Unit Testing - OTHER MEASURES
           COVERAGE




9.3 Execution of Unit Tests
         Design a test case for every statement to be executed.
         Select the unique set of test cases.
         This measure reports whether each executable statement is encountered.
         Also known as: line coverage, segment coverage and basic block coverage.
         Basic block coverage is the same as statement coverage except the unit of code
          measured is each sequence of non-branching statements.


Example of Unit Testing:
/* Computes a discounted invoice total for x units at price 5 and y units at
   price 10. d2 is a quantity discount factor and d1 a value discount factor;
   both are percentages, so the result is scaled back by 10000. */
int invoice (int x, int y) {
  int d1, d2, s;
  if (x<=30) d2=100;          /* no quantity discount for small orders    */
  else d2=90;                 /* 10% quantity discount for x > 30         */
  s=5*x + 10*y;               /* undiscounted total                       */
  if (s<200) d1=100;          /* no value discount below 200              */
  else if (s<1000) d1 = 95;   /* 5% value discount for 200 <= s < 1000    */
  else d1 = 80;               /* 20% value discount for s >= 1000         */
  return (s*d1*d2/10000);     /* apply both discounts                     */
}
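A minimal set of unit tests for the invoice() routine above might look as follows. The
three cases are chosen so that every statement, and every outcome of every decision,
is executed at least once; the expected values follow directly from the discount rules in
the code.

#include <assert.h>

int invoice(int x, int y);   /* the routine shown above */

int main(void) {
    assert(invoice(10, 5)   == 100);   /* x <= 30 and s < 200        : d2=100, d1=100 */
    assert(invoice(40, 10)  == 256);   /* x > 30 and 200 <= s < 1000 : d2=90,  d1=95  */
    assert(invoice(100, 60) == 792);   /* x > 30 and s >= 1000       : d2=90,  d1=80  */
    return 0;
}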








Advantages of Statement Coverage
§ Can be applied directly to object code and does not require processing source code.
§ Performance profilers commonly implement this measure.


Disadvantages of Statement Coverage
§ Insensitive to some control structures (number of iterations).
§ Does not report whether loops reach their termination condition.
§ Completely insensitive to the logical operators (|| and &&).


Method for Decision Coverage
- Design a test case for the pass/failure of every decision point.
- Select a unique set of test cases.
§ This measure reports whether Boolean expressions tested in control structures (such as
the if-statement and while-statement) evaluated to both true and false.
§ The entire Boolean expression is considered one true-or-false predicate regardless of
whether it contains logical-and or logical-or operators.
§ Additionally, this measure includes coverage of switch-statement cases, exception
handlers, and interrupt handlers.
§ Also known as: branch coverage, all-edges coverage, basis path coverage, decision-
decision-path testing.
§ "Basis path" testing selects paths that achieve decision coverage.
§ ADVANTAGE:
Simplicity without the problems of statement coverage

DISADVANTAGE
§This measure ignores branches within boolean expressions which occur due to short-
circuit operators.

Method for Condition Coverage:
-Test if every condition (sub-expression) in decision for true/false
-Select unique set of test cases.
§Reports the true or false outcome of each Boolean sub-expression, separated by
logical-and and logical-or if they occur.
§ Condition coverage measures the sub-expressions independently of each other.
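The difference between decision and condition coverage can be seen on a small
hypothetical fragment:

int ship_free(int total, int is_member) {
    if (total >= 500 || is_member)   /* one decision containing two conditions */
        return 1;
    return 0;
}

Decision coverage only requires the whole expression to evaluate both true and false,
e.g. (600, 0) and (100, 0). Condition coverage requires each sub-expression to take both
outcomes, e.g. (600, 0) and (100, 1); note that with C's short-circuit || the second
condition is not even evaluated when the first is true. Multiple condition coverage would
require all four combinations of the two sub-expressions.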

Method for Multiple Condition Coverage
§ Reports whether every possible combination of Boolean sub-expressions occurs. As
with condition coverage, the sub-expressions are separated by logical-and and logical-or,
when present.
§ The test cases required for full multiple condition coverage of a condition are given by
the logical operator truth table for the condition.
DISADVANTAGE:
§ Tedious to determine the minimum set of test cases required, especially for very
complex Boolean expressions.
§ The number of test cases required could vary substantially among conditions that have
similar complexity.

Condition/Decision Coverage
§ A hybrid measure composed of the union of condition coverage and decision coverage.
§ It has the advantage of simplicity but without the shortcomings of its component
measures.

Path Coverage
§ This measure reports whether each of the possible paths in each function has been
followed.
§ A path is a unique sequence of branches from the function entry to the exit.
§ Also known as predicate coverage. Predicate coverage views paths as possible
combinations of logical conditions.
§ Path coverage has the advantage of requiring very thorough testing.

FUNCTION COVERAGE:
§       This measure reports whether you invoked each function or procedure.
§       It is useful during preliminary testing to assure at least some coverage in
all areas of the software.
§       Broad, shallow testing finds gross deficiencies in a test suite quickly.

LOOP COVERAGE
This measure reports whether you executed each loop body zero times, exactly once,
twice and more than twice (consecutively).
For do-while loops, loop coverage reports whether you executed the body exactly once,
and more than once.
The valuable aspect of this measure is determining whether while-loops and for-loops
execute more than once, information not reported by other measures.
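For example, to achieve loop coverage of the hypothetical routine below, test cases
must make the loop body execute zero times, exactly once, twice, and more than twice:

int count_down(int n) {
    int steps = 0;
    while (n > 0) {      /* loop under test */
        n--;
        steps++;
    }
    return steps;
}

/* Loop coverage test cases:
   count_down(0)  - body executed zero times
   count_down(1)  - body executed exactly once
   count_down(2)  - body executed twice
   count_down(10) - body executed more than twice */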

RACE COVERAGE
This measure reports whether multiple threads execute the same code at the same time.
Helps detect failure to synchronize access to resources.
Useful for testing multi-threaded programs such as in an operating system.
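The sketch below shows the kind of unsynchronized access that race coverage is
intended to expose. It is an illustrative example only (compile with -pthread): two threads
increment a shared counter without a mutex, so the final value is often less than the
expected 200000.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, unsynchronized resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                       /* racy read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000 with proper synchronization)\n", counter);
    return 0;
}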



9.4 Conclusion

Testing irrespective of the phases of testing should encompass the following :
    Cost of Failure associated with defective products getting shipped and used by
        customer is enormous
     To find out whether the integrated product works as per the customer
        requirements
         To evaluate the product with an independent perspective
         To identify as many defects as possible before the customer finds them
         To reduce the risk of releasing the product




10 Test Strategy
10.1 Introduction
This document provides an insight into the Test Strategy and its methodology.
It is the role of test management to ensure that new or modified service products meet
the business requirements for which they have been developed or enhanced.
The Testing strategy should define the objectives of all test stages and the techniques
that apply. The testing strategy also forms the basis for the creation of a standardized
documentation set, and facilitates communication of the test process and its implications
outside of the test discipline. Any test support tools introduced should be aligned with,
and in support of, the test strategy. Test Approach and Test Architecture are alternative
terms for Test Strategy.
Test management is also concerned with both test resource and test environment
management.

10.2 Key elements of Test Management:
Test organization –the set-up and management of a suitable test organizational
structure and explicit role definition. The project framework under which the testing
activities will be carried out is reviewed, high level test phase plans prepared and
resource schedules considered. Test organization also involves the determination of
configuration standards and the definition of the test environment.
Test planning – the requirements definition and design specifications facilitate in the
identification of major test items and these may necessitate the test strategy to be
updated. A detailed test plan and schedule is prepared with key test responsibilities being
indicated.
Test specifications – required for all levels of testing and covering all categories of test.
The required outcome of each test must be known before the test is attempted.
Unit, integration and system testing – configuration items are verified against the
appropriate specifications and in accordance with the test plan. The test environment
should also be under configuration control and test data and results stored for future
evaluation.
Test monitoring and assessment – ongoing monitoring and assessment of the integrity
of the development and construction. The status of the configuration items should be
reviewed against the phase plans and test progress reports prepared providing some
assurance of the verification and validation activities.
Product assurance – the decision to negotiate the acceptance testing program and the
release and commissioning of the service product is subject to the 'product assurance'
role being satisfied with the outcome of the verification activities. Product assurance may
oversee some of the test activity and may participate in process reviews.
A common criticism of construction programmes is that insufficient time is frequently
allocated to the testing and commissioning of the building systems together with the
involvement and subsequent training of the Facilities Management team. Testing and
commissioning is often considered by teams as a secondary activity and given a lower
priority particularly as pressure builds on the program towards completion.
Sufficient time must be dedicated to testing and commissioning as ensuring the systems
function correctly is fairly fundamental to the project's success or failure. Traditionally the

responsibility for testing and commissioning is buried deep within the supply chain as a
sub-contract of a sub-contract. It is possible to gain greater control of this process and
the associated risk through the use of specialists such as Systems Integration who can
be appointed as part of the professional team.
The time necessary for testing and commissioning will vary from project to project
depending upon the complexity of the systems and services that have been installed. The
Project Sponsor should ensure that the professional team and the contractor consider
realistically how much time is needed.
Fitness for purpose checklist:
      Is there a documented testing strategy that defines the objectives of all test
         stages and the techniques that may apply, e.g. non-functional testing and the
         associated techniques such as performance, stress and security etc?
      Does the test plan prescribe the approach to be taken for intended test activities,
         identifying:
      the items to be tested,
      the testing to be performed,
      test schedules,
      resource and facility requirements,
      reporting requirements,
      evaluation criteria,
      risks requiring contingency measures?
      Are test processes and practices reviewed regularly to assure that the testing
         processes continue to meet specific business needs?
For example, e-commerce testing may involve new user interfaces and a business focus
on usability may mean that the organization must review its testing strategies.

10.3 Test Strategy Flow :
Test Cases and Test Procedures should manifest Test Strategy.




Test Strategy – Selection
Selection of the Test Strategy is based on the following factors
         Product
           The test strategy is based on the application under test; in this example, an
           application that helps people and teams of people make decisions.
         Based on the Key Potential Risks
                        Suggestion of Wrong Ideas.
                        People will use the Product Incorrectly
                        Incorrect comparison of scenarios.
                        Scenarios may be corrupted.
                        Unable to handle Complex Decisions.
         Determination of Actual Risk.
                        Understand the underlying Algorithm.
                        Simulate the Algorithm in parallel.
                           Capability test each major function.
                        Generate large number of decision scenarios.
                        Create complex scenarios and compare them.
                        Review Documentation and Help.
                        Test for sensitivity to user Error.
Test Strategy Execution:
Understand the decision algorithm and generate a parallel decision analyzer using Perl
or Excel that will function as a reference for high-volume testing of the application.

         Create a means to generate and apply large numbers of decision scenarios to
          the product. This will be done using the GUI test automation system or through
          the direct generation of Decide Right scenario files that would be loaded into
          the product during test.
         Review the Documentation, and the design of the user interface and functionality
          for its sensitivity to user error.
         Test with decision scenarios that are near the limit of complexity allowed by the
          product
         Compare complex scenarios.
         Test the product for the risk of silent failures or corruptions in decision analysis.
Issues in Execution of the Test Strategy
         The difficulty of understanding and simulating the decision algorithm
         The risk of coincidental failure of both the simulation and the product
         The difficulty of automating decision tests



10.4 General Testing Strategies
         Top-down
         Bottom-up
         Thread testing
         Stress testing
         Back-to-back testing

10.5 Need for Test Strategy
The objective of testing is to reduce the risks inherent in computer systems. The strategy
must address the risks and present a process that can reduce those risks. The concerns
about system risks then establish the objectives for the test process. The two components
of the testing strategy are the Test Factors and the Test Phase.


[Figure: distribution of software errors - analysis and design errors account for about
64%, coding errors for about 36%.]



         Test Factor – The risk or issue that needs to be addressed as part of the test
          strategy. The strategy will select those factors that need to be addressed in the
          testing of a specific application system.
         Test Phase – The Phase of the systems development life cycle in which testing
          will occur.


          Not all the test factors will be applicable to all software systems. The
          development team will need to select and rank the test factors for the specific
          software systems being developed.
          The test phase will vary based on the testing methodology used. For example the
          test phases in a traditional waterfall life cycle methodology will be much
          different from the phases in a Rapid Application Development methodology.

10.6 Developing a Test Strategy
          The test Strategy will need to be customized for any specific software system.
          The applicable test factors would be listed against the phases in which the testing
          must occur.
          Four test steps must be followed to develop a customized test strategy.
              Select and rank Test Factors
              Identify the System Developmental Phases
              Identify the Business risks associated with the System under
                  Development.
              Place risks in the Matrix
[Test Factors / Phases matrix: the rows list the selected test factors and the columns list
the development phases (Requirements, Design, Build, Dynamic Test, Integrate,
Maintain); the identified business risks are placed in the cells of the matrix.]

10.7 Conclusion:
The Test Strategy should be developed in accordance with the business risks associated
with the software when the test team develops the test tactics. The test team therefore
needs to acquire and study the test strategy, asking the following questions:

         What is the relationship of importance among the test factors?
         Which of the high level risks are the most significant?
         What damage can be done to the business if the software fails to perform
          correctly?
         What damage can be done to the business if the software is not
          completed on time?
         Who are the individuals most knowledgeable in understanding the impact of the
          identified business risks?

Hence the Test Strategy must address the risks and present a process that can reduce
those risks. By focusing on the risks, the strategy establishes the objectives for the test
process.


11 TEST PLAN
11.1 What is a Test Plan?
        A Test Plan can be defined as a document that describes the scope, approach,
        resources and schedule of intended test activities. It identifies test items, the
        features to be tested, the testing tasks, who will do each task, and any risks
        requiring contingency planning.
        The main purpose of preparing a Test Plan is that everyone concerned with the
        project is in sync with regard to the scope, responsibilities, deadlines and
        deliverables for the project. It is in this respect that reviews and a sign-off are very
        important since it means that everyone is in agreement of the contents of the test
        plan and this also helps in case of any dispute during the course of the project
        (especially between the developers and the testers).
Purpose of preparing a Test Plan
        A Test Plan is a useful way to think through the efforts needed to validate the
        acceptability of a software product.
        The completed document will help people outside the test group understand the
        'why' and 'how' of product validation.
        It should be thorough enough to be useful but not so thorough that no one outside
        the test group will read it.

Contents of a Test Plan
               1.  Purpose
               2.  Scope
               3.  Test Approach
               4.  Entry Criteria
               5.  Resources
               6.  Tasks / Responsibilities
               7.  Exit Criteria
               8.  Schedules / Milestones
               9.  Hardware / Software Requirements
               10. Risks & Mitigation Plans
               11. Tools to be used
               12. Deliverables
               13. References
                        a. Procedures
                        b. Templates
                        c. Standards/Guidelines
               14. Annexure
               15. Sign-Off



11.2 Contents (in detail)
          Purpose
                This section should contain the purpose of preparing the test plan


Scope
This section should talk about the areas of the application which are to be tested by the
QA team and specify those areas which are definitely out of scope (screens, database,
mainframe processes etc).

Test Approach
This would contain details on how the testing is to be performed and whether any specific
strategy is to be followed (including configuration management).

Entry Criteria
This section explains the various steps to be performed before the start of a test (i.e.)
pre-requisites. For example: Timely environment set up, starting the web server / app
server, successful implementation of the latest build etc.

Resources
This section should list out the people who would be involved in the project and their
designation etc.

Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the
various members in the project.

Exit criteria
Contains tasks like bringing down the system / server, restoring system to pre-test
environment, database refresh etc.

Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met
in the course of the project.

Hardware / Software Requirements
This section would contain the details of PCs / servers required (with the configuration)
to install the application or perform the testing; specific software that needs to be installed
on the systems to get the application running or to connect to the database; connectivity
related issues etc.

Risks & Mitigation Plans
This section should list out all the possible risks that can arise during the testing and the
mitigation plans that the QA team plans to implement in case the risk actually turns into a
reality.

Tools to be used
This would list out the testing tools or utilities (if any) that are to be used in the project
(e.g.) WinRunner, Test Director, PCOM, WinSQL.

Deliverables
This section contains the various deliverables that are due to the client at various points
of time (i.e.) daily, weekly, start of the project, end of the project etc. These could include
Test Plans, Test Procedure, Test Matrices, Status Reports, Test Scripts etc. Templates
for all these could also be attached.

References
Procedures
Templates (Client Specific or otherwise)
Standards / Guidelines (e.g.) QView
Project related documents (RSD, ADD, FSD etc)

Annexure
This could contain embedded documents or links to documents which have been / will be
        used in the course of testing (e.g.) templates used for reports, test cases etc.
        Referenced documents can also be attached here.

Sign-Off
This should contain the mutual agreement between the client and the QA team with both
        leads / managers signing off their agreement on the Test Plan.




12 Test Data Preparation - Introduction
A System is programmed by its data. Functional testing can suffer if data is poor, and
good data can help improve functional testing. Good test data can be structured to
improve understanding and testability. Its contents, correctly chosen, can reduce
maintenance effort and allow flexibility. Preparation of the data can help to focus the
business where requirements are vague.
The first stage of any recogniser development project is data preparation.
Test data should however, be prepared which is representative of normal business
transactions. Actual customer names or contact details should also not be used for such
tests. It is recommended that a full test environment be set up for use in the applicable
circumstances.
Each separate test should be given a unique reference number which will identify the
Business Process being recorded, the simulated conditions used, the persons involved in
the testing process and the date the test was carried out. This will enable the monitoring
and testing reports to be coordinated with any feedback received.
Tests must be planned and thought out ahead of time; you have to decide such things as
what exactly you are testing and testing for, the way the test is going to be run and
applied, what steps are required, etc.
Testing is the process of creating, implementing and evaluating tests.
Effective quality control testing requires some basic goals and understanding:
You must understand what you are testing; if you're testing a specific functionality, you
must know how it's supposed to work, how the protocols behave, etc.
You should have a definition of what success and failure are. In other words, is close
enough good enough?
You should have a good idea of a methodology for the test, the more formal a plan the
better; you should design test cases.
You must understand the limits inherent in the tests themselves.
You must have a consistent schedule for testing; performing a specific set of tests at
appropriate points in the process is more important than running the tests at a specific
time.
Roles of Data in Functional Testing
Testing consumes and produces large amounts of data. Data describes the initial
conditions for a test, forms the input, is the medium through which the tester influences
the software. Data is manipulated, extrapolated, summarized and referenced by the
functionality under test, which finally spews forth yet more data to be checked against
expectations. Data is a crucial part of most functional testing.
This section sets out to illustrate some of the ways that data can influence the test
process, and shows that testing can be improved by a careful choice of input data. In
doing so, it concentrates most on data-heavy applications; those which use
databases or are heavily influenced by the data they hold. It focuses on input
data, rather than output data or the transitional states the data passes through during
processing, as input data has the greatest influence on functional testing and is the
simplest to manipulate. It does not consider areas where data is important to non-
functional testing, such as operational profiles, massive datasets and environmental
tuning.
A SYSTEM IS PROGRAMMED BY ITS DATA
Many modern systems allow tremendous flexibility in the way their basic functionality can
be used.

Configuration data can dictate control flow, data manipulation, presentation and user
interface. A system can be configured to fit several business models, work (almost)
seamlessly with a variety of cooperative systems and provide tailored experiences to a
host of different users. A business may look to an application's configurability to allow
them to keep up with the market without being slowed by the development process, an
individual may look for a personalized experience from commonly-available
software.
FUNCTIONAL TESTING SUFFERS IF DATA IS POOR
Tests with poor data may not describe the business model effectively, they may be hard
to maintain, or require lengthy and difficult setup. They may obscure problems or avoid
them altogether. Poor data tends to result in poor tests that take longer to execute.
GOOD DATA IS VITAL TO RELIABLE TEST RESULTS
An important goal of functional testing is to allow the test to be repeated with the same
result, and varied to allow diagnosis. Without this, it is hard to communicate problems to
coders, and it can become difficult to have confidence in the QA team's results, whether
they are good or bad. Good data allows diagnosis, effective reporting, and allows tests to
be repeated with confidence,.
GOOD DATA CAN HELP TESTING STAY ON SCHEDULE
An easily comprehensible and well-understood dataset is a tool to help communication.
Good data can greatly assist in speedy diagnosis and rapid re-testing. Regression
testing and automated test maintenance can be made speedier and easier by using good
data, while an elegantly-chosen dataset can often allow new tests without the overhead
of new data.
A formal test plan is a document that provides and records important information about a
test project, for example:
project and quality assumptions
project background information
resources
schedule & timeline
entry and exit criteria
test milestones
tests to be performed
use cases and/or test cases

12.1 Criteria for Test Data Collection
This section of the Document specifies the description of the test data needed to test
recovery of each business process.

     Identify Who is to Conduct the Tests
In order to ensure consistency of the testing process throughout the organization, one or
more members of the Business Continuity Planning (BCP) Team should be nominated to
co-ordinate the testing process within each business unit, a nominated testing and across
the organization. Each business process should be thoroughly tested and the coordinator
should ensure that each business unit observes the necessary rules associated with
ensuring that the testing process is carried out within a realistic environment.
This section of the BCP should contain the names of the BCP Team members nominated
to co-ordinate the testing process. It should also list the duties of the appointed co-
ordinators.
      Identify Who is to Control and Monitor the Tests
In order to ensure consistency when measuring the results, the tests should be
independently monitored. This task would normally be carried out by a nominated
member of the Business Recovery Team or a member of the Business Continuity
Planning Team.
This section of the BCP will contain the names of the persons nominated to monitor the
testing process throughout the organization. It will also contain a list of the duties to be
undertaken by the monitoring staff.

       Prepare Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the
tests. This feedback will hopefully enable weaknesses within the Business Recovery
Process to be identified and eliminated. Completion of feedback forms should be
mandatory for all persons participating in the testing process. The forms should be
completed either during the tests (to record a specific issue) or as soon after finishing as
practical. This will enable observations and comments to be recorded whilst the event is
still fresh in the person's mind.
This section of the BCP should contain a template for a Feedback Questionnaire.
Prepare Budget for Testing Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared
and approved. The 'Preparing for a Possible Emergency' Phase of the BCP process will
involve the identification and implementation of strategies for back up and recovery of
data files or a part of a business process. It is inevitable that these back up and recovery
processes will involve additional costs. Critical parts of the business process such as the
IT systems, may require particularly expensive back up strategies to be implemented.
Where the costs are significant they should be approved separately with a specific
detailed budget for the establishment costs and the ongoing maintenance costs.
This section of the BCP will contain a list of the testing phase activities and a cost for
each. It should be noted whenever part of the costs is already incorporated with the
organization's overall budgeting process.
      Training Core Testing Team for each Business Unit
In order for the testing process to proceed smoothly, it is necessary for the core testing
team to be trained in the emergency procedures. This is probably best handled in a
workshop environment and should be presented by the persons responsible for
developing the emergency procedures.
This section of the BCP should contain a list of the core testing team for each of the
business units who will be responsible for coordinating and undertaking the Business
Recovery Testing process.
It is important that clear instructions are given to the Core Testing Team regarding the
simulated conditions which have to be observed.
Conducting the Tests
The tests must be carried out under authentic conditions and all participants must take
the process seriously. It is important that all persons who are likely to be involved with
recovering a particular business process in the event of an emergency should participate
in the testing process. It should be mandatory for the management of a business unit to
be present when that unit is involved with conducting the tests.

Test each part of the Business Recovery Process
In so far as it is practical, each critical part of the business recovery process should be
fully tested. Every part of the procedures included as part of the recovery process is to be
tested to ensure validity and relevance.

This section of the BCP is to contain a list of each business process with a test schedule
and information on the simulated conditions being used. The testing co-ordination and
monitoring staff will endeavor to ensure that the simulated environments are maintained
throughout the testing process in a realistic manner.

Test Accuracy of Employee and Vendor Emergency Contact Numbers
During the testing process the accuracy of employee and vendor emergency contact
information is to be re-confirmed. All contact numbers are to be validated for all involved
employees. This is particularly important for management and key employees who are
critical to the success of the recovery process. This activity will usually be handled by the
HRM Department or Division.
Where, in the event of an emergency occurring outside of normal business hours, a large
number of persons are to be contacted, a hierarchical process could be used whereby
one person contacts five others. This process must have safety features incorporated to
ensure that if one person is not contactable for any reason then this is notified to a
nominated controller. This will enable alternative contact routes to be used.
Assess Test Results
Prepare a full assessment of the test results for each business process. The following
questions may be appropriate:
Were objectives of the Business Recovery Process and the testing process met - if not,
provide further comment
Were simulated conditions reasonably "authentic" - if not, provide further comment
Was test data representative - if not, provide further comment
Did the tests proceed without any problems - if not, provide further comment
What were the main comments received in the feedback questionnaires
Each test should be assessed as either fully satisfactory, adequate or requiring further
testing.

Training Staff in the Business Recovery Process
All staff should be trained in the business recovery process. This is particularly important
when the procedures are significantly different from those pertaining to normal
operations. This training may be integrated with the training phase or handled separately.
The training should be carefully planned and delivered on a structured basis. The training
should be assessed to verify that it has achieved its objectives and is relevant for the
procedures involved.
Training may be delivered either using in-house resources or external resources
depending upon available skills and related costs.

Managing the Training Process
For the BCP training phase to be successful it has to be both well managed and
structured. It will be necessary to identify the objective and scope for the training, what
specific training is required, who needs it and a budget prepared for the additional costs
associated with this phase.

Develop Objectives and Scope of Training
The objectives and scope of the BCP training activities are to be clearly stated within the
plan.
The BCP should contain a description of the objectives and scope of the training phase.
This will enable the training to be consistent and organized in a manner where the results
can be measured, and the training fine tuned, as appropriate.

The objectives for the training could be as follows :
"To train all staff in the particular procedures to be followed during the business recovery
process".
The scope of the training could be along the following lines :
"The training is to be carried out in a comprehensive and exhaustive manner so that staff
become familiar with all aspects of the recovery process. The training will cover all
aspects of the Business Recovery activities section of the BCP including IT systems
recovery".
Consideration should also be given to the development of a comprehensive corporate
awareness program for communicating the procedures for the business recovery
process.
Training Needs Assessment
The plan must specify which person or group of persons requires which type of training. It
is necessary for all new or revised processes to be explained carefully to the staff. For
example it may be necessary to carry out some process manually if the IT system is
down for any length of time. These manual procedures must be fully understood by the
persons who are required to carry them out. For larger organizations it may be practical
to carry out the training in a classroom environment, however, for smaller organizations
the training may be better handled in a workshop style.
This section of the BCP will identify for each business process what type of training is
required and which persons or group of persons need to be trained.
Training Materials Development Schedule
Once the training needs have been identified it is necessary to specify and develop
suitable training materials. This can be a time consuming task and unless priorities are
given to critical training programmes, it could delay the organization in reaching an
adequate level of preparedness.
This section of the BCP contains information on each of the training programmes with
details of the training materials to be developed, an estimate of resources and an
estimate of the completion date.
Prepare Training Schedule
Once it has been agreed who requires training and the training materials have been
prepared a detailed training schedule should be drawn up.
This section of the BCP contains the overview of the training schedule and the groups of
persons receiving the training.
Communication to Staff
Once the training is arranged to be delivered to the employees, it is necessary to advise
them about the training programmes they are scheduled to attend.
This section of the BCP contains a draft communication to be sent to each member of
staff to advise them about their training schedule. The communication should provide for
feedback from the staff member where the training dates given are inconvenient.
A separate communication should be sent to the managers of the business units advising
them of the proposed training schedule to be attended by their staff. Each member of
staff will be given information on their role and responsibilities applicable in the event of
an emergency.
Prepare Budget for Training Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared
and approved. Depending upon the cross charging system employed by the organization,
the training costs will vary greatly. However, it has to be recognized that, however well
justified, training incurs additional costs and these should be approved by the appropriate
authority within the organization.

This section of the BCP will contain a list of the training phase activities and a cost for
each. It should be noted whenever part of the costs is already incorporated with the
organization's overall budgeting process.
Assessing the Training
The individual BCP training programmes and the overall BCP training process should be
assessed to ensure its effectiveness and applicability. This information will be gathered
from the trainers and also the trainees through the completion of feedback
questionnaires.
Feedback Questionnaires
Assess Feedback
Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the
training programmes. This feedback will enable weaknesses within the Business
Recovery Process, or the training, to be identified and eliminated. Completion of
feedback forms should be mandatory for all persons participating in the training process.
The forms should be completed either during the training (to record a specific issue) or as
soon after finishing as practical. This will enable observations and comments to be
recorded whilst the event is still fresh in the person's mind.
This section of the BCP should contain a template for a Feedback Questionnaire for the
training phase.
Assess Feedback
The completed questionnaires from the trainees plus the feedback from the trainers
should be assessed. Identified weaknesses should be notified to the BCP Team Leader
and the process strengthened accordingly.
The key issues raised by the trainees should be noted and consideration given to
whether the findings are critical to the process or not. If there are a significant number of
negative issues raised then consideration should be given to possible re-training once the
training materials, or the process, have been improved.
This section of the BCP will contain a format for assessing the training feedback.
Keeping the Plan Up-to-date
Changes to most organizations occur all the time. Products and services change and
also their method of delivery. The increase in technology-based processes over the
past ten years, and particularly within the last five, has significantly increased the level
of dependency upon the availability of systems and information for the business to
function effectively. These changes are likely to continue and probably the only certainty
is that the pace of change will continue to increase. It is necessary for the BCP to keep
pace with these changes in order for it to be of use in the event of a disruptive
emergency. This chapter deals with updating the plan and the managed process which
should be applied to this updating activity.
Maintaining the BCP
It is necessary for the BCP updating process to be properly structured and controlled.
Whenever changes are made to the BCP they are to be fully tested and appropriate
amendments should be made to the training materials. This will involve the use of
formalized change control procedures under the control of the BCP Team Leader.
Change Controls for Updating the Plan
It is recommended that formal change controls are implemented to cover any changes
required to the BCP. This is necessary due to the level of complexity contained within the
BCP. A Change request Form / Change Order form is to be prepared and approved in
respect of each proposed change to the BCP.


This section of the BCP will contain a Change Request Form / Change Order to be used
for all such changes to the BCP.
Responsibilities for Maintenance of Each Part of the Plan
Each part of the plan will be allocated to a member of the BCP Team or a Senior
Manager within the organization who will be charged with responsibility for updating and
maintaining the plan. The BCP Team Leader will remain in overall control of the BCP but
business unit heads will need to keep their own sections of the BCP up to date at all
times. Similarly, the HRM Department will be responsible for ensuring that all emergency
contact numbers for staff are kept up to date. It is important that the relevant BCP
coordinator and the Business Recovery Team are kept fully informed regarding any
approved changes to the plan.
Test All Changes to Plan
The BCP Team will nominate one or more persons who will be responsible for co-
ordinating all the testing processes and for ensuring that all changes to the plan are
properly tested. Whenever changes are made or proposed to the BCP, the BCP Testing
Co-ordinator will be notified. The BCP Testing Co-ordinator will then be responsible for
notifying all affected units and for arranging for any further testing activities.
This section of the BCP contains a draft communication from the BCP Co-ordinator to
affected business units and contains information about the changes which require testing
or re-testing.
Advise Person Responsible for BCP Training
A member of the BCP Team will be given responsibility for co-ordinating all training
activities (BCP Training Co-ordinator). The BCP Team Leader will notify the BCP
Training Co-ordinator of all approved changes to the BCP in order that the training
materials can be updated. An assessment should be made on whether the change
necessitates any re-training activities.
Problems which can be caused by Poor Test Data
Most testers are familiar with the problems that can be caused by poor data. The
following list details the most common problems familiar to the author. Most projects
experience these problems at some stage - recognizing them early can allow their effects
to be mitigated.
Unreliable test results.
Running the same test twice produces inconsistent results. This can be a symptom of an
uncontrolled environment, unrecognized database corruption, or of a failure to recognize
all the data that is influential on the system.
Degradation of test data over time.
Program faults can introduce inconsistency or corruption into a database. If not spotted at
the time of generation, they can cause hard-to-diagnose failures that may be apparently
unrelated to the original fault. Restoring the data to a clean set gets rid of the symptom,
but the original fault is undiagnosed and can carry on into live operation and perhaps
future releases. Furthermore, as the data is restored, evidence of the fault is lost.
Increased test maintenance cost
If each test has its own data, the cost of test maintenance is correspondingly increased.

If that data is itself hard to understand or manipulate, the cost increases further.
Reduced flexibility in test execution
If datasets are large or hard to set up, some tests may be excluded from a test run.
If the datasets are poorly constructed, it may not be time-effective to construct further
data to support investigatory tests.
Obscure results and bug reports
Without clearly comprehensible data, testers stand a greater chance of missing important
diagnostic features of a failure, or indeed of missing the failure entirely. Most reports
make reference to the input data and the actual and expected results. Poor data can
make these reports hard to understand.
Larger proportion of problems can be traced to poor data
A proportion of all failures logged will be found, after further analysis, not to be faults at
all. Data can play a significant role in these failures. Poor data will cause more of these
problems.
Less time spent hunting bugs
The more time spent doing unproductive testing or ineffective test maintenance, the less
time spent testing.
Confusion between developers, testers and business
Each of these groups has different data requirements. A failure to understand each
other's data can lead to ongoing confusion.
Requirements problems can be hidden in inadequate data
It is important to consider inputs and outputs of a process for requirements modeling.
Inadequate data can lead to ambiguous or incomplete requirements.
Simpler to make test mistakes
Everybody makes mistakes. Confusing or over-large datasets can make data selection
mistakes more common.
Unwieldy volumes of data
Small datasets can be manipulated more easily than large datasets. A few datasets are
easier to manage than many datasets.
Business data not representatively tested
Test requirements, particularly in configuration data, often don't reflect the way the
system will be used in practice. While this may arguably lead to broad testing for a variety
of purposes, it can be hard for the business or the end users to feel confidence in the test
effort if they feel distanced from it.
Inability to spot data corruption caused by bugs
A few well-known datasets can be more easily checked than a large number
of complex datasets, and may lend themselves to automated testing / sanity checks.
A readily understandable dataset can allow straightforward diagnosis; a complex dataset
will positively hinder diagnosis.
Poor database/environment integrity
If a large number of testers, or tests, share the same dataset, they can influence and
corrupt each other's results as they change the data in the system. This can not only
cause false results, but can lead to database integrity problems and data corruption. This
can make portions of the application untestable for many testers simultaneously.




12.2 Classification of Test Data Types
In the process of testing a system, many references are made to "The Data" or "Data
Problems".
Although it is perhaps simpler to discuss data in these terms, it is useful to be able to
classify the data according to the way it is used. The following broad categories allow
data to be handled and discussed more easily.
Environmental data
Environmental data tells the system about its technical environment. It includes
communications addresses, directory trees and paths, and environmental variables. The
current date and time can be seen as environmental data.
Setup data
Setup data tells the system about the business rules. It might include a cross reference
between country and delivery cost or method, or methods of debt collection from different
kinds of customers.
Typically, setup data causes different functionality to apply to otherwise similar data. With
an effective approach to setup data, business can offer new intangible products without
developing new functionality - as can be seen in the mobile phone industry, where new
billing products are supported and indeed created by additions to the setup data.
Input data
Input data is the information input by day-to-day system functions. Accounts, products,
orders, actions, documents can all be input data. For the purposes of testing, it is useful
to split the categorization once more:
FIXED INPUT DATA
Fixed input data is available before the start of the test, and can be seen as part of the
test conditions.
CONSUMABLE INPUT DATA
Consumable input data forms the test input.
It can also be helpful to qualify data after the system has started to use it:
Transitional data
Transitional data is data that exists only within the program, during processing of input
data.
Transitional data is not seen outside the system (arguably, test handles and
instrumentation make it output data), but its state can be inferred from actions that the
system has taken. Typically held in internal system variables, it is temporary and is lost at
the end of processing.
Output data
Output data is all the data that a system outputs as a result of processing input data and
events. It generally has a correspondence with the input data (cf. Jackson's Structured
Programming methodology), and includes not only files, transmissions, reports and
database updates, but can also include test measurements. A subset of the output data
is generally compared with the expected results at the end of test execution. As such, it
does not directly influence the quality of the tests.
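
As a rough illustration, the sketch below (Python, with a hypothetical order-processing
system and invented field names) shows how test data might be tagged with these
categories so that each kind can be prepared and maintained separately:

# A sketch (hypothetical order-processing system) tagging test data with the
# categories above so each kind can be prepared and maintained separately.
from dataclasses import dataclass, field

@dataclass
class TestDataSet:
    environmental: dict = field(default_factory=dict)      # technical environment
    setup: dict = field(default_factory=dict)               # business rules
    fixed_input: dict = field(default_factory=dict)         # present before the test
    consumable_input: list = field(default_factory=list)    # forms the test input

dataset = TestDataSet(
    environmental={"db_host": "testhost", "timezone": "UTC"},
    setup={("UK", "standard"): 4.99, ("FR", "standard"): 6.99},   # delivery costs
    fixed_input={"customer_1001": {"name": "London safe-area customer", "country": "UK"}},
    consumable_input=[{"order": "ORD-1", "customer": "customer_1001", "qty": 2}],
)

# Transitional and output data are produced by the system under test, so they
# are observed and compared against expectations rather than prepared here.
print(dataset.setup[("UK", "standard")])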




12.3 Organizing the data
A key part of any approach to data is the way the data is organized; the way it is chosen
and described, influenced by the uses that are planned for it. A good approach increases
data reliability, reduces data maintenance time and can help improve the test process.
Good data assists testing, rather than hinders it.
Permutations
Most testers are familiar with the concept of permutation; generating tests so that all
possible permutations of inputs are tested. Most are also familiar with the ways in which
this generally vast set can be cut down.
Pairwise, or combinatorial, testing addresses this problem by generating a set of tests
that allow all possible pairs of combinations to be tested. Typically, for non-trivial sets,
this produces a far smaller set of tests than the brute-force approach for all permutations.
The same techniques can be applied to test data; the test data can contain all possible
pairs of permutations in a far smaller set than that which contains all possible
permutations.
This allows a small, easy-to-handle dataset which also supports a wide range of tests. It
allows complete pairwise coverage, and so is comprehensive enough to allow a great
many new, ad-hoc, or diagnostic tests. Database changes will affect it, but the data
maintenance required will be greatly lessened by the small size of the dataset and the
amount of reuse it allows. Finally, this method of working with fixed input data can help
greatly in testing the setup data.
This method is most appropriate when used, as above, on fixed input data. It is most
effective when certain conditions are satisfied; fortunately, these criteria apply to many
traditional database-based systems.
To sum up, permutation helps because:
     It gives good test coverage without having to construct massive datasets
     It helps in testing configuration data - particularly setup data
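
To make the idea concrete, the following sketch is a minimal greedy all-pairs generator in
Python; the parameter names and values are hypothetical, and a real project would more
likely use an established pairwise tool:

# A sketch of a greedy all-pairs generator: builds a small set of test rows that
# covers every pair of values across every pair of parameters.
from itertools import combinations, product

def pairwise_tests(parameters):
    names = list(parameters)
    # All parameter index pairs and the value pairs still to be covered.
    uncovered = {
        (i, j): set(product(parameters[names[i]], parameters[names[j]]))
        for i, j in combinations(range(len(names)), 2)
    }
    tests = []
    while any(uncovered.values()):
        best_row, best_gain = None, -1
        # Brute-force candidate search is fine for small illustrative sets.
        for row in product(*(parameters[n] for n in names)):
            gain = sum((row[i], row[j]) in pairs
                       for (i, j), pairs in uncovered.items())
            if gain > best_gain:
                best_row, best_gain = row, gain
        for (i, j), pairs in uncovered.items():
            pairs.discard((best_row[i], best_row[j]))
        tests.append(dict(zip(names, best_row)))
    return tests

if __name__ == "__main__":
    params = {"country":       ["UK", "FR", "RU"],
              "customer_type": ["retail", "corporate"],
              "delivery":      ["standard", "express", "collect"]}
    rows = pairwise_tests(params)
    print(len(list(product(*params.values()))), "full permutations")   # 18
    print(len(rows), "rows for all-pairs coverage")
    for r in rows:
        print(r)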

Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data.
Partitions can be used independently; data use in one area will have no effect on the
results of tests in another.
Data can be safely and effectively partitioned by machine / database / application
instance, although this partitioning can introduce configuration management problems in
software version, machine setup, environmental data and data load/reload. A useful and
basic way to start with partitions is to set up, not a single environment for each test or
tester, but to set up three shared by many users, so allowing different kinds of data use.
These three have the following characteristics:
Safe area
     No test changes the data, so the area can be trusted.
Change area
     Used for tests which change the data in known, controlled ways.
Scratch area
     Used for destructive update tests and those which have unusual requirements.

Testing rarely has the luxury of completely separate environments for each test and each
tester. Controlling data, and the access to data, in a system can be fraught. Many
different stakeholders have different requirements of the data, but a common requirement
is that of exclusive use. While the impact of this requirement should not be
underestimated, a number of stakeholders may be able to work with the same
environmental data and, to a lesser extent, setup data - and their work may not need to
change the environmental or setup data. The test strategy can take advantage of this by
disciplined use of text / value fields, allowing the use of 'soft' partitions.
'Soft' partitions allow the data to be split up conceptually, rather than physically. Although
testers are able to interfere with each other's tests, the team can be educated to avoid
each other's work. If, for instance, tester 1's tests may only use customers with Russian
nationality and tester 2's tests only those with French, the two sets of work can operate
independently in the same dataset. A safe area could consist of London addresses, the
change area Manchester addresses, and the scratch area Bristol addresses.
Typically, values in free-text fields are used for soft partitioning.
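
The sketch below illustrates the idea in Python, assuming a hypothetical customers table
whose free-text nationality field acts as the 'soft' partition key; each tester's helper only
ever selects rows from that tester's own conceptual partition:

# A sketch of 'soft' partitioning; table, columns and the partition split are invented.
import sqlite3

PARTITIONS = {"tester_1": "Russian", "tester_2": "French"}   # hypothetical split

def customers_for(conn, owner):
    nationality = PARTITIONS[owner]
    return conn.execute(
        "SELECT id, name FROM customers WHERE nationality = ?", (nationality,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, nationality TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Ivanov", "Russian"), (2, "Dubois", "French")])

print(customers_for(conn, "tester_1"))   # tester 1 never touches tester 2's rows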
Data partitions help because:
     Tests are less likely to corrupt each other's data and results
     They reduce the need for completely separate environments/machines

Clarity
Permutation techniques may make data easier to grasp by making the datasets small
and commonly used, but we can make our data clearer still by describing each row in its
own free text fields. This allows testers to make a simple comparison between the free
text (which is generally displayed on output) and actions based on fields which tend not
to be directly displayed. Use of free text fields with some correspondence to the internals
of the record allows output to be checked more easily.
Testers often talk about items of data, referring to them by anthropomorphic
personification - that is to say, they give them names. This allows shorthand, but also
acts as jargon, excluding those who are not in the know. Setting this data, early on in
testing, to have some meaningful value can be very useful, allowing testers to sense
check input and output data, and choose appropriate input data for investigative tests.
Reports, data extracts and sanity checks can also make use of these; sorting or selecting
on a free text field that should have some correspondence with a functional field can help
spot problems or eliminate unaffected data.
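
A minimal sketch of such a sanity check follows, assuming each record carries a
hypothetical free-text label that should mirror its functional fields; any row whose label
has drifted is flagged:

# A sketch of a sanity check; field names and rows are invented.
rows = [
    {"id": 1, "country": "RU", "type": "corporate", "label": "RU corporate"},
    {"id": 2, "country": "FR", "type": "retail",    "label": "FR retail"},
    {"id": 3, "country": "FR", "type": "corporate", "label": "FR retail"},   # drifted
]

def drifted(rows):
    # Flag rows whose label no longer matches the functional fields.
    return [r["id"] for r in rows if r["label"] != f'{r["country"]} {r["type"]}']

print(drifted(rows))   # -> [3]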


Data is often used to communicate and illustrate problems to coders and to the business.
However, there is generally no mandate for outside groups to understand the format or
requirements of test data.
Giving some meaning to the data that can be referred to directly can help with improving
mutual understanding.
Clarity helps because:
     Testers can sense-check input and output data more easily
     It is easier to choose appropriate data for investigative tests
     Meaningful data improves communication with developers and the business

12.4 Data Load and Data Maintenance
An important consideration in preparing data for functional testing is the ways in which
the data can be loaded into the system, and the possibility and ease of maintenance.
Loading the data
Data can be loaded into a test system in three general ways.

Loading through the system
The data can be manually entered, or data entry can be automated by using a
capture/replay tool.
This method can be very slow for large datasets. It uses the system's own validation and
insertion methods, and can both be hampered by faults in the system, and help pinpoint
them. If the system is working well, data integrity can be ensured by using this method,
and internally assigned keys are likely to be effective and consistent.
Data can be well-described in test scripts, or constructed and held in flat files. It may,
however, be input in an ad-hoc way, which is unlikely to gain the advantages of good
data listed above.

Loading with a data load tool
Data load tools directly manipulate the system's underlying data structures. As they do
not use the system's own validation, they can be the only way to get broken data into the
system in a consistent fashion. As they do not use the system to load the data, they can
provide a convenient workaround to known faults in the system's data load routines.
However, they may come up against problems when generating internal keys, and can
have problems with data integrity and parent/child relationships.
Data loaded can have a range of origins. In some cases, all new data is created for
testing. This data may be complete and well specified, but can be hard to generate. A
common compromise is to use old data from an existing system, selected for testing,
filtered for relevance and duplicates and migrated to the target data format. In some
cases, particularly for minor system upgrades, the complete set of live data is loaded into
the system, but stripped of personal details for privacy reasons. While this last method
may seem complete, it has disadvantages in that the data may not fully support testing,
and that the large volume of data may make test results hard to interpret.

Using existing data
Some tests simply take whatever is in the system and try to test with it. This can be
appropriate where a dataset is known and consistent, or has been set up by a prior round
of testing. It can also be appropriate in environments where data cannot be reloaded,
such as the live system. However, it can be symptomatic of an uncontrolled approach to
data, and is not often desirable.

Environmental data tends to be manually loaded, either at installation or by manipulating
environmental or configuration scripts. Large volumes of setup data can often be
generated from existing datasets and loaded using a data load tool, while small volumes
of setup data often have an associated system maintenance function and can be input
using the system. Fixed input data may be generated or migrated and is loaded using
any and all of the methods above, while consumable input data is typically listed in test
scripts or generated as an input to automation tools.
When data is loaded, it can append itself to existing data, overwrite existing data, or
delete existing data first. Each is appropriate in different circumstances, and due
consideration should be given to the consequences.
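
The sketch below illustrates the three load behaviours in Python, using a CSV file and
SQLite purely as stand-ins for whatever loader and database are actually in use; the
table, columns and file name are hypothetical:

# A sketch of append / overwrite / delete-first loading behaviour.
import csv
import sqlite3

def load_customers(conn, csv_path, mode="append"):
    if mode == "delete_first":
        conn.execute("DELETE FROM customers")                    # clean slate first
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if mode == "overwrite":
                conn.execute("DELETE FROM customers WHERE id = ?", (row["id"],))
            conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)",
                         (row["id"], row["name"]))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")
# load_customers(conn, "customers.csv", mode="delete_first")     # file is hypothetical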



12.5 Testing the Data
A theme brought out at the start of this paper was 'A System is Programmed by its Data'.
In order to test the system, one must also test the data it is configured with; the
environmental and setup data.
Environmental data is necessarily different between the test and live environment.
Although testing can verify that the environmental variables are being read and used
correctly, there is little point in testing their values on a system other than the target
system. Environmental data is often checked manually on the live system during
implementation and rollout, and the wide variety of possible methods will not be
discussed further here.
Setup data can change often, throughout testing, as the business environment changes –
particularly if there is a long period between requirements gathering and live rollout.
Testing done on the setup data needs to cover two questions:
     Does the setup data support the functionality that the business requires?
     Will the setup data to be used in live operation work correctly?
Testing for these two questions only becomes possible when that data is controlled.
Aspects of all the elements above come into play:
     Permutations and partitions within the setup data need to be considered
     The business needs to be involved with the setup data so that their setup for live can be
properly tested
When testing the setup data, it is important to have a well-known set of fixed input data
and consumable input data. This allows the effects of changes made to the setup data to
be assessed repeatably and allows results to be compared. The advantages of testing
the setup data include:


     The business can re-configure the software for new business needs with increased
confidence
     Data-related failures in the live system can be assessed in the light of good data
testing


12.6 Conclusion
Data can be influential on the quality of testing. Well-planned data can allow flexibility and
help reduce the cost of test maintenance. Common data problems can be avoided or
reduced with preparation and automation. Effective testing of setup data is a necessary
part of system testing, and good data can be used as a tool to enable and improve
communication throughout the project.
The following points summarize the actions that can influence the quality of the data and
the effectiveness of its usage:
     Plan the data for maintenance and flexibility
     Know your data, and make its structure and content transparent
     Use the data to improve understanding throughout testing and
         the business
     Test setup data as you would test functionality




13 Test Logs - Introduction
A test problem is a condition that exists within the software system that needs to be
addressed. Carefully and completely documenting a test problem is the first step in
correcting the problem.
The following four attributes should be developed for all the test problems:
Statement of condition – Tells what it is.
Criteria – Tells what should be.
These two attributes are the basis for a finding. If a comparison between the two gives
little or no practical consequence, no finding exists.
Effect – Tells why the difference between what is and what should be is significant.
Cause – Tells the reasons for the deviation. Identification of the cause is necessary as a
basis for corrective action.
A well-developed problem statement will include each of these attributes. When one or
more of these attributes is missing, questions almost always arise, such as:
           Criteria: Why is the current state inadequate?
           Effect: How significant is it?
           Cause: What could have caused the problem?

13.1 Factors defining the Test Log Generation
Document Deviation:
Problem statements begin to emerge by a process of comparison. Essentially, the user
compares 'what is' with 'what should be'. When a deviation is identified between what is
found to actually exist and what the user thinks is correct or proper, the first essential
step toward development of a problem statement has occurred. It is difficult to visualize
any type of problem that is not in some way characterized by this deviation. The 'what is'
can be called the statement of condition. The 'what should be' shall be called the
'criteria'. These concepts are the first two, and the most basic, attributes of a problem
statement.
The documenting of the deviation is describing the conditions, as they currently exist, and
the criteria, which represents what the user desires.
The actual deviation will be the difference or gap between 'what is' and 'what is desired'.
The statement of condition is uncovering and documenting the facts, as they exist.
What is a fact? The statement of condition will of course depend on the nature and extent
of the evidence or support that is examined and noted. For those facts, making up the
statement of condition, the I/S professional will need to ensure that the information is
accurate, well supported, and worded as clearly and precisely as possible.
The statement of condition should document as many of the following attributes as are
appropriate to the problem.

Activities Involved - The specific business or administrative activities that are being
performed during Test Log generation are as follows:
Procedures used to perform work - The specific step-by-step activities that are utilized
in producing the output from the identified activities.
Outputs/Deliverables - The products that are produced from the activity.

Inputs - The triggers, events, or documents that cause this activity to be executed.

Users/Customers served - The organization, individuals, or class of users/customers
serviced by this activity.
Deficiencies noted - The status of the results of executing this activity and any
appropriate interpretation of those facts.
The Criterion is the user's statement of what is desired. It can be stated in either negative
or positive terms. For example, it could indicate the need to reduce complaints or delays,
as well as the desired processing turnaround time.
A Work Paper is used to describe the problem and to document the statement of
condition and the statement of criteria. For example, the following Work Paper provides
the information for Test Log documentation:



Field Requirements (Field – Instructions for Entering Data):
Name of Software Tested: Put the name of the S/W or subsystem tested.
Problem Description: Write a brief narrative description of the variance uncovered from
expectations.
Statement of Condition: Put the results of actual processing that occurred here.
Statement of Criteria: Put what testers believe was the expected result from processing.
Effect of Deviation: If this can be estimated, testers should indicate what they believe
the impact or effect of the problem will be on computer processing.
Cause of Problem: The testers should indicate what they believe is the cause of the
problem, if known. If the testers are unable to do this, the work paper will be given to the
development team and they should indicate the cause of the problem.
Location of the Problem: The testers should document where the problem occurred as
closely as possible.
Recommended Action: The testers should indicate any recommended action they
believe would be helpful to the project team. If not approved, the alternate action should
be listed or the reason for not following the recommended action should be documented.
Name of the S/W tested:
Problem Description
Statement of Condition
Statement of Criteria
Effect of Deviation
Cause of a Problem
Location of the Problem
Recommended Action
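
As an illustration only, the work paper could be captured as a structured record so that
test logs are collected consistently and can later be loaded into a results database; the
field names follow the work paper above and the example values are invented:

# A sketch of the work paper as a structured record.
from dataclasses import dataclass, asdict

@dataclass
class TestLogEntry:
    software_tested: str
    problem_description: str
    statement_of_condition: str       # what actually happened
    statement_of_criteria: str        # what was expected
    effect_of_deviation: str = ""
    cause_of_problem: str = ""
    location_of_problem: str = ""
    recommended_action: str = ""

entry = TestLogEntry(
    software_tested="Billing subsystem",
    problem_description="Invoice total ignores the discount field",
    statement_of_condition="Total printed as 100.00",
    statement_of_criteria="Total expected as 90.00 (10% discount)",
    location_of_problem="Invoice summary screen",
)
print(asdict(entry))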

13.2 Collecting Status Data
Four categories of data will be collected during testing. These are explained in the
following paragraphs.
Test Results Data
This data will include,
Test factors -The factors incorporated in the plan, the validation of which
becomes the Test Objective.

Business objective –The validation that specific business objectives have been
met.
Interface Objectives-Validation that data/Objects can be correctly passed among
Software components.
Functions/Sub functions-Identifiable Software components normally associated
with the requirements of the software.
Units- The smallest identifiable software components
Platform- The hardware and Software environment in which the software system
will operate.
Test Transactions, Test Suites, and Test Events
These are the test products produced by the test team to perform testing.
Test transactions/events: The type of tests that will be conducted during the execution of
tests, which will be based on software requirements.
Inspections – A verification of process deliverables against deliverable specifications.
Reviews: Verification that the process deliverables / phases are meeting the user‘s true
needs.
Defect
This category includes a description of the individual defects uncovered during the
testing process. This description includes, but is not limited to:
Date the defect was uncovered
Name of the Defect
Location of the Defect
Severity of the Defect
Type of Defect
How the defect was uncovered (test data/test script)
The Test Logs should add to this information in the form of where the defect originated,
when it was corrected, and when it was entered for retest.
Storing Data Collected during Testing
It is recommended that a database be established in which to store the results collected
during testing. It is also suggested that the database be put online through client/server
systems so that those with a vested interest in the status of the project can readily
access status updates.
As described, the most common test report is a simple spreadsheet, which indicates the
project component for which the status is requested, the test that will be performed to
determine the status of that component, and the results of testing at any point in time.
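
A minimal sketch of the kind of status query such a spreadsheet or database answers is
shown below; the results table, its columns and the sample rows are hypothetical:

# A sketch of a per-component test status query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (component TEXT, test TEXT, status TEXT)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", [
    ("Order entry", "T1", "passed"),
    ("Order entry", "T2", "failed"),
    ("Dispatch",    "T1", "not run"),
])

# Per-component status: how many tests exist and how many have passed so far.
for component, total, passed in conn.execute(
        "SELECT component, COUNT(*), SUM(status = 'passed') "
        "FROM results GROUP BY component"):
    print(f"{component}: {passed}/{total} tests passed")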

Developing Test Status Reports
Report Software Status
Establish a Measurement Team
Inventory Existing Project Measures
Develop a Consistent Set of Project metrics
Define Process Requirements
Develop and Implement the Process
Monitor the Process
The Test process should produce a continuous series of reports that describe the status
of testing. The test reports are for use of testers, test managers, and the software


development team. The frequency of the test reports should be based on the discretion
of the team and extensiveness of the test process.

Use of Function/Test matrix:
This shows which tests must be performed in order to validate the functions and is also
used to determine the status of testing. Many organizations use a spreadsheet package
to maintain test results. The intersection can be coded with a number or symbol to
indicate the following:
1 = Test is needed, but not performed
2 = Test currently being performed
3 = Minor defect noted
4 = Major defect noted
5 = Test complete and function is defect free for the criteria included in this test
TEST
FUNCTION 1          2       3        4         5         6        7        8         9
A
B
C
D
E
                                         Function Test Matrix
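
As a small illustration, the matrix can also be held as a simple mapping keyed by function
and test number, using the status codes above; the functions and statuses shown are
hypothetical:

# A sketch of the function/test matrix as a mapping.
NEEDED, RUNNING, MINOR, MAJOR, COMPLETE = 1, 2, 3, 4, 5

matrix = {
    ("A", 1): COMPLETE, ("A", 2): MINOR,
    ("B", 1): RUNNING,  ("B", 2): NEEDED,
    ("C", 1): MAJOR,
}

def functions_with_open_defects(matrix):
    # Any function with a minor or major defect noted against one of its tests.
    return sorted({fn for (fn, _), status in matrix.items()
                   if status in (MINOR, MAJOR)})

print(functions_with_open_defects(matrix))   # -> ['A', 'C']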

13.2.1              Methods of Test Reporting
Reporting Tools - Use of word processing, database, defect tracking, and graphic tools
to prepare test reports.
Some database reporting tools are available; DataVision, for example, is a database
reporting tool similar to Crystal Reports. Reports can be viewed and printed from the
application or output as HTML, LaTeX2e, XML, DocBook, or tab- or comma-separated
text files. From the LaTeX2e and DocBook output files you can in turn produce PDF, text,
HTML, PostScript, and more.
Some query tools available for Linux-based databases include:
QMySQL
dbMetrix
PgAccess
Cognos Powerhouse
This is not yet available for Linux; Cognos is looking into what interest people have in the
product to assess what their strategy should be with respect to the Linux "market".
GRG - GNU Report Generator
The GRG program reads record and field information from a dBase3+ file, delimited
ASCII text file or a SQL query to a RDBMS and produces a report listing. The program
was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff,
PostScript, HTML or any other kind of ASCII based output format can be produced just
as easily.
Word Processing:
One way of increasing the utility of computers and word processors for the teaching of
writing may be to use software that will guide the processes of generating, organizing,
composing and revising text. This allows each person to use the normal functions of the
computer keyboard that are common to all word processors, email editors, order entry
systems, and database management products. From the Report Manager, however, you
can quickly scan through any number of these reports and see how each person's history
compares. A one-page summary report may be printed with either the Report Manager
program or from the individual keyboard or keypad software at any time. Individual
reports include all of the following information:
Status Report
Word Processing Tests or Keypad Tests
Basic Skills Tests or Data Entry Tests
Progress Graph
Game Scores
Test Report for each test

          Test Director:

                   Facilitates consistent and repetitive testing process
                   Central repository for all testing assets facilitates the adoption of a more
                    consistent testing process, which can be repeated throughout the
                    application life cycle
                   Provides Analysis and Decision Support
                   Graphs and reports help analyze application readiness at any point in the
                    testing process
                   Requirements coverage, run schedules, test execution progress, defect
                    statistics can be used for production planning
                   Provides Anytime, Anywhere access to Test Assets
                   Using Test Director's web interface, testers, developers, business
                    analysts and clients can participate and contribute to the testing process
                   Traceability throughout the testing process
                   Test Cases can be mapped to requirements providing adequate visibility
                    over the test coverage of requirements
                   Test Director links requirements to test cases and test cases to defects
                   Manages Both Manual and Automated Testing
                   Test Director can manage both manual and automated tests (Win
                    Runner)
                   Scheduling of automated tests can be effectively done using Test
                    Director



Test Report Standards - Defining the components that should be included in a test
report.
Statistical Analysis - Ability to draw statistically valid conclusions from quantitative test
results.
Testing Data used for metrics
Testers are typically responsible for reporting their test status at regular intervals.
The following measurements generated during testing are applicable:
Total number of tests
Number of Tests executed to date
Number of tests executed successfully to date
Data concerning software defects include
Total number of defects corrected in each activity
Total number of defects entered in each activity.
Average duration between defect detection and defect correction
Average effort to correct a defect
Total number of defects remaining at delivery
Software performance data is usually generated during system testing, once the
software has been integrated and functional testing is complete.
Average CPU utilization
Average memory Utilization
Measured I/O transaction rate
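
The sketch below shows how a few of these measurements might be computed from a
defect log; the log entries, field names and date values are hypothetical:

# A sketch computing defect metrics from a simple defect log.
from datetime import date

defects = [
    {"detected": date(2004, 3, 1), "corrected": date(2004, 3, 3), "effort_hrs": 4},
    {"detected": date(2004, 3, 2), "corrected": date(2004, 3, 7), "effort_hrs": 9},
    {"detected": date(2004, 3, 5), "corrected": None,             "effort_hrs": 0},
]

fixed = [d for d in defects if d["corrected"]]
avg_days = sum((d["corrected"] - d["detected"]).days for d in fixed) / len(fixed)
avg_effort = sum(d["effort_hrs"] for d in fixed) / len(fixed)
remaining = len(defects) - len(fixed)

print(f"Average duration between detection and correction: {avg_days:.1f} days")
print(f"Average effort to correct a defect: {avg_effort:.1f} hours")
print(f"Defects remaining: {remaining}")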
Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes
the following
      Individual Project Test Report
      Integration Test Report
      System Test Report
      Acceptance test Report

These test reports are designed to document the results of testing as defined in the
test plan. The test report can be a combination of electronic data and hard copy. For
example, if the function matrix is maintained electronically, there is no reason to print it,
as the paper report will summarize the data, draw appropriate conclusions, and present
recommendations.
Purpose of a Test Report:

The test report has one immediate and three long-term purposes. The immediate
purpose is to provide information to the customers of the software system so that they
can determine whether the system is ready for production, and if so, to assess the
potential consequences and initiate appropriate actions to minimize those consequences.
The first of the three long-term uses is for the project to trace problems in the event the
application malfunctions in production. Knowing which functions have been correctly
tested and which ones still contain defects can assist in taking corrective actions.
The second long-term purpose is to use the data to analyze the rework process for
making changes to prevent the defects from occurring in the future. These defect-prone
components identify tasks/steps that, if improved, could eliminate or minimize the
occurrence of high-frequency defects. The third long-term purpose is to show what was
accomplished in the case of a Y2K lawsuit.
     Individual Project Test Report
These reports focus on the individual projects (software systems). When different testers
test individual projects, each should prepare a report on their results.
     Integration Test Report
Integration testing tests the interfaces between individual projects. A good test plan will
identify the interfaces and institute test conditions that will validate them. The format is
the same as the Individual Project Test Report, except that the conditions tested are
interfaces.
1. Scope of Test – This section indicates which functions were and were not tested.
2. Test Results – This section indicates the results of testing, including any variance
between what is and what should be.
3. What works/What does not work – This section defines the functions that work and do
not work and the interfaces that work and do not work.
4. Recommendations – This section recommends actions that should be taken to
fix functions/interfaces that do not work and make additional improvements.
      System Test Report
A system test plan standard identifies the objective of testing, what is to be tested, how it
is to be tested, and when tests should occur. The System Test Report should present the
results of executing the test plan. If these details are maintained electronically, then they
need only be referenced, not included in the report.
      Acceptance Test Report
There are two primary objectives of the Acceptance Test Report. The first is to ensure
that the system as implemented meets the real operating needs of the user/customer. If
the defined requirements are those true needs, testing should have accomplished this
objective.
The second objective is to ensure that the software system can operate in the real-world
user environment, which includes people skills and attitudes, time pressures, changing
business conditions, and so forth. The Acceptance Test Report should encompass these
criteria for user acceptance.



13.2.2               Conclusion

The Test Logs obtained from the execution of the test results and finally the test reports
should be designed to accomplish the following objectives:

         Provide Information to the customer whether the system should be placed into
          production, if so the potential consequences and appropriate actions to minimize
          these consequences.
         One Long term objective is for the Project and the other is for the information
          technology function.
         The project can use the test report to trace problems in the event the application
          malfunctions in production. Knowing which functions have been correctly tested
          and which ones still contain defects can assist in taking corrective actions.
         The data can also be used to analyze the developmental process to make
          changes to prevent defects from occurring in the future.
         These defect prone components identify tasks/steps that if improved, could
          eliminate or minimize the occurrence of high frequency defects in future.




14 Test Report
A Test Report is a document that is prepared once the testing of a software product is
complete and the delivery is to be made to the customer. This document would contain a
summary of the entire project and would have to be presented in a way that any person
who has not worked on the project would also get a good overview of the testing effort.

Contents of a Test Report
The contents of a test report are as follows:

Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations

These sections are explained as follows:



14.1 Executive Summary

          This section comprises general information regarding the project, the client,
          the application, tools and people involved, presented in such a way that it can
          be taken as a summary of the Test Report itself (i.e.) all the topics mentioned
          here would be elaborated in the various sections of the report.

     1. Overview

          This comprises two sections – Application Overview and Testing Scope.

          Application Overview – This would include detailed information on the
          application under test, the end users and a brief outline of the functionality as
          well.

          Testing Scope – This would clearly outline the areas of the application that
          would / would not be tested by the QA team. This is done so that there would not
          be any misunderstandings between customer and QA as regards what needs to
          be tested and what does not need to be tested.
          This section would also contain information of Operating System / Browser
          combinations if Compatibility testing is included in the testing effort.

     2. Test Details

          This section would contain the Test Approach, Types of Testing conducted, Test
          Environment and Tools Used.

          Test Approach – This would discuss the strategy followed for executing the
          project. This could include information on how coordination was achieved
          between Onsite and Offshore teams, any innovative methods used for
          automation or for reducing repetitive workload on the testers, how information
          and daily / weekly deliverables were delivered to the client etc.

          Types of testing conducted – This section would mention any specific types of
          testing performed (i.e.) Functional, Compatibility, Performance, Usability etc
          along with related specifications.

          Test Environment – This would contain information on the Hardware and
          Software requirements for the project (i.e.) server configuration, client machine
          configuration, specific software installations required etc.

          Tools used – This section would include information on any tools that were used
          for testing the project. They could be functional or performance testing
          automation tools, defect management tools, project tracking tools or any other
          tools which made the testing work easier.

     3. Metrics

          This section would include details on total number of test cases executed in the
          course of the project, number of defects found etc. Calculations like defects
          found per test case or number of test cases executed per day per person etc
          would also be entered in this section. This can be used in calculating the
          efficiency of the testing effort.

     4. Test Results

          This section is similar to the Metrics section, but is more for showcasing the
          salient features of the testing effort. If many defects have been logged for the
          project, graphs can be generated accordingly and depicted in this section.
          The graphs can be for defects per build, defects based on severity, defects
          based on status (i.e.) how many were fixed and how many rejected etc.



     5. Test Deliverables

          This section would include links to the various documents prepared in the course
          of the testing project (i.e.) Test Plan, Test Procedures, Test Logs, Release
          Report etc.

     6. Recommendations

          This section would include any recommendations from the QA team to the client
          on the product tested. It could also mention the list of known defects which have
          been logged by QA but not yet fixed by the development team so that they can
          be taken care of in the next release of the application.




15 Defect Management

15.1 Defect
A mismatch between the application and its specification is a defect. A software error is
present when the program does not do what its end user expects it to do.

15.2 Defect Fundamentals

A Defect is a product anomaly or flaw. Defects include such things as omissions and
imperfections found during testing phases. Symptoms (flaws) of faults contained in
software that is sufficiently mature for production are considered defects. Deviations
from expectations that are to be tracked and resolved are also termed defects.

An evaluation of defects discovered during testing provides the best indication of
software quality. Quality is the indication of how well the system meets the requirements.
So in this context defects are identified as any failure to meet the system requirements.

Defect evaluation is based on methods that range from simple number count to rigorous
statistical modeling.

Rigorous evaluation uses assumptions about the arrival or discovery rates of defects
during the testing process. The actual data about defect rates are then fit to the model.
Such an evaluation estimates the current system reliability and predicts how the reliability
will grow if testing and defect removal continue. This evaluation is described as system
reliability growth modelling.




15.2.1              Defect Life Cycle




15.3 Defect Tracking
     After a defect has been found, it must be reported to development so that it can be
fixed.

         The Initial State of a defect will be ‗New’.

         The Project Lead of the development team will review the defect and set it to one
          of the following statuses:
          Open – Accepts the bug and assigns it to a developer.
          Invalid Bug – The reported bug is not valid one as per the requirements/design
          As Designed – This is an intended functionality as per the requirements/design
          Deferred –This will be an enhancement.
          Duplicate – The bug has already been reported.

          Document – If the defect is set to any of the above statuses apart from Open and
          the testing team does not agree with the development team, it is set to Document
          status.

         Once the development team has started working on the defect, the status is set to
          WIP (Work in Progress); or, if the development team is waiting for a go-ahead or
          some technical feedback, they will set it to Dev Waiting.

         After the development team has fixed the defect, the status is set to FIXED,
          which means the defect is ready to re-test.

         If, on re-testing the defect, the defect still exists, the status is set to
          REOPENED, which will follow the same cycle as an open defect.

         If the fixed defect satisfies the requirements/passes the test case, it is set to
          Closed.
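
As an illustration, the transitions described above can be encoded as a simple table so
that tooling can reject illegal moves; this is a sketch of the workflow as described here,
not the behaviour of any particular defect-tracking tool:

# A sketch of the defect status transitions as a lookup table.
ALLOWED = {
    "New":         {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
    "Invalid Bug": {"Document"},
    "As Designed": {"Document"},
    "Deferred":    {"Document"},
    "Duplicate":   {"Document"},
    "Open":        {"WIP", "Dev Waiting"},
    "WIP":         {"Fixed", "Dev Waiting"},
    "Dev Waiting": {"WIP"},
    "Fixed":       {"Reopened", "Closed"},
    "Reopened":    {"WIP", "Dev Waiting"},
    "Closed":      set(),
}

def move(current, new):
    # Raise if the requested transition is not in the table above.
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

status = "New"
for nxt in ("Open", "WIP", "Fixed", "Closed"):
    status = move(status, nxt)
print(status)   # -> Closed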




15.4 Defect Classification

      The severity of bugs will be classified as follows:

      Critical              The problem prevents further processing and testing. The Development Team
                            must be informed immediately and they need to take corrective action
                            immediately.
      High                  The problem affects selected processing to a significant degree, making it
                            inoperable, cause data loss, or cause a user to make an incorrect
                            decision or entry. The Development Team must be informed that day, and they
                            need to take corrective action within 0 – 24 hours.
      Medium                The problem affects selected processing, but has a work-around that allows
                            continued processing and testing. No data loss is suffered. These may be
                            cosmetic problems that hamper usability or divulge client-specific information.
                            The Development Team must be informed within 24 hours, and they need to
                            take corrective action within 24 - 48 hours.
      Low                   The problem is cosmetic, and/or does not affect further processing and testing.
                            The Development Team must be informed within 48 hours, and they need to
                            take corrective action within 48 - 96 hours.
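
As a small illustration, the notification and correction windows above can be captured as
a lookup (using the upper bound of each window, in hours) so a tracking tool could
compute deadlines; the helper and values below are a sketch, not part of any specific
tool:

# A sketch mapping severity to notification/correction windows.
from datetime import datetime, timedelta

SLA = {
    "Critical": {"notify_hrs": 0,  "fix_hrs": 0},    # inform and fix immediately
    "High":     {"notify_hrs": 24, "fix_hrs": 24},   # inform that day, fix in 0-24 hrs
    "Medium":   {"notify_hrs": 24, "fix_hrs": 48},
    "Low":      {"notify_hrs": 48, "fix_hrs": 96},
}

def fix_deadline(severity, reported_at):
    return reported_at + timedelta(hours=SLA[severity]["fix_hrs"])

print(fix_deadline("Medium", datetime(2004, 3, 1, 9, 0)))   # 2004-03-03 09:00:00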




15.5 Defect Reporting Guidelines

The key to making a good report is providing the development staff with as much
information as necessary to reproduce the bug. This can be broken down into 5
points:

       1) Give a brief description of the problem
       2) List the steps that are needed to reproduce the bug or problem
       3) Supply all relevant information such as version, project and data used.
       4) Supply a copy of all relevant reports and data, including copies of the expected
          results.
       5) Summarize what you think the problem is.

When you are reporting a defect the more information you supply, the easier it
will be for the developers to determine the problem and fix it.

Simple problems can have a simple report, but the more complex the problem, the more
information the developer is going to need.

For example: cosmetic errors may only require a brief description of the screen,
how to get it and what needs to be changed.

However, an error in processing will require a more detailed description, such as:

          1) The name of the process and how to get to it.
          2) Documentation on what was expected. (Expected results)
          3) The source of the expected results, if available. (This includes
             spreadsheets, an earlier version of the software and any formulas used.)
          4) Documentation on what actually happened. (Perceived results)
          5) An explanation of how the results differed.
          6) Identify the individual items that are wrong.
          7) If specific data is involved, a copy of the data both before and after the
             process should be included.
          8) Copies of any output should be included.

As a rule the detail of your report will increase based on a) the severity of the bug,
b) the level of the processing, c) the complexity of reproducing the bug.


Anatomy of a bug report

Bug reports need to do more than just describe the bug. They have to give
developers something to work with so that they can successfully reproduce the
problem.

In most cases, the more information (correct information) given, the better. The
report should explain exactly how to reproduce the problem and exactly what the
problem is.

The basic items in a report are as follows:


Version:            This is very important. In most cases the product is not static;
                    developers will have been working on it, and if they've found a
                    bug, it may already have been reported or even fixed. In either
                    case, they need to know which version to use when testing out the
                    bug.

Product:            If you are developing more than one product, identify the product
                    in question.

Data:               Unless you are reporting something very simple, such as a
                    cosmetic error on a screen, you should include a dataset that
                    exhibits the error.

                    If you’re reporting a processing error, you should include two
                    versions of the dataset, one before the process and one after. If the
                    dataset from before the process is not included, developers will be
                    forced to try and find the bug based on forensic evidence. With the
                    data, developers can trace what is happening.

Steps:              List the steps taken to recreate the bug. Include all proper menu
                    names, don’t abbreviate and don’t assume anything.

                    After you’ve finished writing down the steps, follow them - make
                    sure you’ve included everything you type and do to get to the
                    problem. If there are parameters, list them. If you have to enter
                    any data, supply the exact data entered. Go through the process
                    again and see if there are any steps that can be removed.

                    When you report the steps they should be the clearest steps to
                    recreating the bug.
Description: Explain what is wrong - Try to weed out any extraneous
             information, but detail what is wrong. Include a list of what was
              expected. Remember to report one problem at a time; don't combine
              bugs in one report.

Supporting documentation:
             If available, supply documentation. If the process is a report,
             include a copy of the report with the problem areas highlighted.
             Include what you expected. If you have a report to compare
             against, include it and its source information (if it’s a printout from
             a previous version, include the version number and the dataset
             used)

                    This information should be stored in a centralized location so that
                    Developers and Testers have access to the information. The
                    developers need it to reproduce the bug, identify it and fix it.
                    Testers will need this information for later regression testing and
                    verification.
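
As an illustration, the items above can be captured as a structured record so that every
report carries the same fields; the example content is invented:

# A sketch of a bug report record carrying the items listed above.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    version: str
    product: str
    description: str
    steps: list                      # exact steps needed to recreate the bug
    expected: str = ""
    data_before: str = ""            # dataset before the failing process
    data_after: str = ""             # dataset after the failing process
    supporting_docs: list = field(default_factory=list)

report = BugReport(
    version="2.3.1",
    product="Order entry",
    description="Discount not applied to corporate customers",
    steps=["Open order ORD-1", "Select a corporate customer", "Print the invoice"],
    expected="Invoice total of 90.00 with the 10% discount applied",
    data_before="orders_before.csv",
    data_after="orders_after.csv",
    supporting_docs=["invoice_printout.pdf"],
)
print(report.product, "-", report.description)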


15.5.1              Summary

A bug report is a case against a product. In order to work it must supply all
necessary information to not only identify the problem but what is needed to fix it
as well.

It is not enough to say that something is wrong. The report must also say what the
system should be doing.

The report should be written in clear concise steps, so that someone who has
never seen the system can follow the steps and reproduce the problem. It should
include information about the product, including the version number and what data
was used.

The more organized the information provided, the better the report will be.




16 Automation
What is Automation
Automated testing is automating the manual testing process currently in use.




16.1 Why Automate the Testing Process?
Today, rigorous application testing is a critical part of virtually all software development
projects. As more organizations develop mission-critical systems to support their
business activities, the need is greatly increased for testing methods that support
business objectives. It is necessary to ensure that these systems are reliable, built
according to specification, and have the ability to support business processes. Many
internal and external factors are forcing organizations to
ensure a high level of software quality and reliability.
In the past, most software tests were performed using manual methods. This required a
large staff of test personnel to perform expensive and time-consuming manual test
procedures. Owing to the size and complexity of today's advanced software applications,
manual testing is no longer a viable option for most testing situations.
Every organization has unique reasons for automating software quality activities, but
several reasons are common across industries.

Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software
development dictates that no matter which methods are employed to carry out testing
(manual or automated), they remain repetitious throughout the development lifecycle.
Automation of testing processes allows machines to complete the tedious, repetitive work
while human personnel perform other tasks.

Automation allows the tester to reduce or eliminate the required "think time" or "read time"
necessary for the manual interpretation of when or where to click the mouse or press the
enter key.

An automated test executes the next operation in the test hierarchy at machine speed,
allowing
tests to be completed many times faster than the fastest individual. Furthermore, some
types of
testing, such as load/stress testing, are virtually impossible to perform manually.



Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated
methods. The
reason is that computers can execute instructions many times faster, and with fewer
errors than

individuals. Many automated testing tools can replicate the activity of a large number of
users (and their associated transactions) using a single computer. Therefore, load/stress
testing using automated methods requires only a fraction of the computer hardware that
would be necessary to complete a manual test. Imagine performing a load test on a typical
distributed client/server application on which 50 concurrent users were planned.
To do the testing manually, 50 application users employing 50 PCs with associated
software, an available network, and a cadre of coordinators to relay instructions to the users
would be required. With an automated scenario, the entire test operation could be created on a
single machine having the ability to run and rerun the test as necessary, at night or on
weekends, without having to assemble an army of end users. As another example,
imagine the same application used by hundreds or thousands of users. It is easy to see
why manual methods for load/stress testing are expensive and a logistical nightmare.

Replicating Testing Across Different Platforms
Automation allows the testing organization to perform consistent and repeatable tests.
When
applications need to be deployed across different hardware or software platforms,
standard or
benchmark tests can be created and repeated on target platforms to ensure that new
platforms
operate consistently.

Repeatability and Control
By using automated techniques, the tester has a very high degree of control over which
types of
tests are being performed, and how the tests will be executed. Using automated tests
enforces
consistent procedures that allow developers to evaluate the effect of various application
modifications as well as the effect of various user actions.
For example, automated tests can be built that extract variable data from external files or
applications and then run a test using the data as an input value. Most importantly,
automated
tests can be executed as many times as necessary without requiring a user to recreate a
test
script each time the test is run.


Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations
to test
more often and more completely. Greater application test coverage also reduces the risk
of
exposing users to malfunctioning or non-compliant software. In some industries such as
healthcare and pharmaceuticals, organizations are required to comply with strict quality


regulations as well as being required to document their quality assurance efforts for all
parts of
their systems.




16.2 Automation Life Cycle




Identifying Tests Requiring Automation
Most, but not all, types of tests can be automated. Certain types of tests like user
comprehension
tests, tests that run only once, and tests that require constant human intervention are
usually not
worth the investment to automate. The following are examples of criteria that can be used
to
identify tests that are prime candidates for automation.

High Path Frequency - Automated testing can be used to verify the performance of
application paths that are used with a high degree of frequency when the software is
running in full production.

Examples include: creating customer records, invoicing and other high volume activities
where
software failures would occur frequently.

Critical Business Processes - In many situations, software applications can literally
define or control the core of a company's business. If the application fails, the company
can face extreme

disruptions in critical operations. Mission-critical processes are prime candidates for
automated
testing.

Examples include: financial month-end closings, production planning, sales order entry
and other core activities. Any application with a high degree of risk associated with a
failure is a
good candidate for test automation.

Repetitive Testing - If a testing procedure can be reused many times, it is also a prime
candidate for automation. For example, common outline files can be created to establish
a testing session, close a testing session and apply testing values. These automated
modules can be used again and again without having to rebuild the test scripts. This
modular approach saves time and money when compared to creating a new end-to-end
script for each and every test.

Applications with a Long Life Span - The longer an application is planned to be in
production, the greater the benefits gained from automation.

What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often
has enterprise-wide implications. Here are several key issues that should be
addressed when selecting an application testing solution.

Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide
organization for testing components, and create meaningful end-user and management
reports. It should also allow users to include non-automated testing procedures within
automated test plans and test results.
A robust tool will allow users to integrate existing test results into an automated test plan.
Finally, an automated test should be able to link business requirements to test results,
allowing users to evaluate application readiness based upon the application's ability to
support the business requirements.

Testing Product Integration
Testing tools should provide tightly integrated modules that support test component
reusability. Test components built for performing functional tests should also support
other types of testing including regression and load/stress testing. All products within the
testing product environment should be based upon a common, easy-to-understand
language. User training and experience gained in performing one testing task should be
transferable to other testing tasks. Also, the architecture of the testing tool environment
should be open to support interaction with other technologies such as defect or bug
tracking packages.

Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The
tests created for testing Internet or intranet-based applications should be portable across
browsers, and should automatically adjust for different load times and performance
levels.
Ease of Use
Testing tools should be engineered to be usable by non-programmers and application
end-users. With much of the testing responsibility shifting from the development staff to
the departmental level, a testing tool that requires programming skills is unusable by
most organizations. Even if programmers are responsible for testing, the testing tool itself
should have a short learning curve.
GUI and Client/Server Testing
A robust testing tool should support testing with a variety of user interfaces and create
simple-to-manage, easy-to-modify tests. Test component reusability should be a
cornerstone of the product
architecture.
Load and Performance Testing
The selected testing solution should allow users to perform meaningful load and
performance tests to accurately measure system performance. It should also provide test
results in an easy-to-understand reporting format.



16.3 Preparing the Test Environment
Once the test cases have been created, the test environment can be prepared. The test
environment is defined as the complete set of steps necessary to execute the test as
described in the test plan. The test environment includes initial set up and description of
the environment, and the procedures needed for installation and restoration of the
environment.

Description - Document the technical environment needed to execute the tests.
Test Schedule - Identify the times during which your testing facilities will be used for a
given test. Make sure that other groups that might share these resources are informed of
this schedule.
Operational Support - Identify any support needed from other parts of your organization.
Installation Procedures - Outline the procedures necessary to install the application
software to be tested.
Restoration Procedures - Finally, outline those procedures needed to restore the test
environment to its original state. By doing this, you are ready to re-execute tests or
prepare for a different set of tests.
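
As a rough illustration of the installation and restoration procedures described above, the
sketch below rebuilds a test environment from a pristine snapshot before installing the
application under test; the paths, installer name and silent-install flag are assumptions, not
part of the process definition.

    import shutil
    import subprocess
    from pathlib import Path

    BASELINE = Path("/srv/test-env/baseline")   # hypothetical pristine copy of the environment
    WORKDIR = Path("/srv/test-env/current")     # hypothetical working copy used during a test run

    def restore_environment() -> None:
        """Throw away the working copy and restore it from the baseline snapshot."""
        if WORKDIR.exists():
            shutil.rmtree(WORKDIR)
        shutil.copytree(BASELINE, WORKDIR)

    def install_application(installer: Path) -> None:
        """Install the application under test silently (the flag depends on the installer)."""
        subprocess.run([str(installer), "/quiet"], check=True)

    if __name__ == "__main__":
        restore_environment()
        install_application(Path("/srv/installers/app-setup.exe"))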
Inputs to the Test Environment Preparation Process
Technical Environment Descriptions
Approved Test Plan
Test Execution Schedules
Resource Allocation Schedule
Application Software to be installed




Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result
from
an automated testing program, those evaluating test automation should consider these
fundamental planning steps. The time invested in detailed planning significantly improves the
benefits resulting from test automation.
Evaluating Business Requirements
Begin the automated testing process by defining exactly what tasks your application software
should accomplish in terms of the actual business activities of the end-user. The definition of
these tasks, or business requirements, defines the high-level, functional requirements
of the software system in question. These business requirements should be defined in such a
way as to make it abundantly clear that the software system correctly (or incorrectly)
performs the necessary business functions. For example, a business requirement for a
payroll application might be to calculate a salary, or to print a salary check.

Creating a Test Plan
For the greatest return on automated testing, a testing plan should be created at the same
time
the software application requirements are defined. This enables the testing team to define the
tests, locate and configure test-related hardware and software products and coordinate the
human resources required to complete all testing. This plan is very much a "living document"
that should evolve as the application functions become more clearly defined. A good testing
plan should be reviewed and approved by the test team, the software development
team, all user groups and the organization's management. The following items detail the
input and output components of the test planning process.

Inputs to the Test Planning Process
Application Requirements - What is the application intended to do? These should be stated
in the terms of the business requirements of the end users.
Application Implementation Schedules - When is the scheduled release? When are
updates or
enhancements planned? Are there any specific events or actions that are dependent upon
the
application?
Acceptance Criteria for implementation - What critical actions must the application
accomplish before it can be deployed? This information forms the basis for making informed
decisions on whether or not the application is ready to deploy.



Test Design and Development

After the test components have been defined, the standardized test cases can be created
that will
be used to test the application. The type and number of test cases needed will be dictated by
the
testing plan.


A test case identifies the specific input values that will be sent to the application, the
procedures
for applying those inputs, and the expected application values for the procedure being tested.
A
proper test case will include the following key components:
Test Case Name(s) - Each test case must have a unique name, so that the results of these
test
elements can be traced and analyzed.
Test Case Prerequisites - Identify set up or testing criteria that must be established before a
test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders and dependencies that
might exist between test cases.
Test Procedures – Identify the application steps necessary to complete the test case.
Input Values - This section of the test case identifies the values to be supplied to the
application as input including, if necessary, the action to be completed.
Expected Results - Document all screen identifier(s) and expected value(s) that must be
verified as part of the test. These expected results will be used to measure the acceptance
criteria, and
therefore the ultimate success of the test.
Test Data Sources - Take note of the sources for extracting test data if it is not included in
the test case.
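
A minimal sketch of how those key components might be represented in code, with a tiny
runner that checks actual results against the expected results; the names and types are
illustrative only.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class TestCase:
        name: str                                   # unique name so results can be traced
        prerequisites: List[str]                    # set-up criteria that must hold first
        procedure: Callable[[Dict[str, str]], str]  # application steps; returns the observed result
        inputs: Dict[str, str]                      # input values supplied to the application
        expected: str                               # expected result used as the acceptance criterion
        run_after: List[str] = field(default_factory=list)  # execution-order dependencies

    def execute(case: TestCase) -> str:
        """Run one test case and return its completion status."""
        actual = case.procedure(case.inputs)
        return "Pass" if actual == case.expected else "Fail"

    login_case = TestCase(
        name="TC_Login_01",
        prerequisites=["Application installed", "Test user exists"],
        procedure=lambda data: "welcome" if data["password"] == "secret" else "denied",
        inputs={"user": "qa01", "password": "secret"},
        expected="welcome",
    )
    print(login_case.name, execute(login_case))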

Inputs to the Test Design and Construction Process
Test Case Documentation Standards
Test Case Naming Standards
Approved Test Plan
Business Process Documentation
Business Process Flow
Test Data sources
Outputs from the Test Design and Construction Process
Revised Test Plan
Test Procedures for each Test Case
Test Case(s) for each application function described in the test plan
Procedures for test set up, test execution and restoration



Executing the Test
The test is now ready to be run. This step applies the test cases identified by the test plan,
documents the results, and validates those results against expected performance. Specific
performance measurements of the test execution phase include:
Application of Test Cases – The test cases previously created are applied to the target
software application as described in the testing environment
Documentation - Activities within the test execution are logged and analyzed as follows:
Actual Results achieved during test execution are compared to expected application
behavior from the test cases
Test Case completion status (Pass/Fail)
Actual results of the behavior of the technical test environment
Deviations taken from the test plan or test process
Inputs to the Test Execution Process

Approved Test Plan
Documented Test Cases
Stabilized, repeatable, test execution environment
Standardized Test Logging Procedures
Outputs from the Test Execution Process
Test Execution Log(s)
Restored test environment
The test execution phase of your software test process will control how the test gets applied
to the application. This step of the process can range from very chaotic to very simple and
schedule driven. The problems experienced in test execution are usually attributed to not
properly performing steps from earlier in the process.
Additionally, there may be several test execution cycles necessary to complete all the
necessary types of testing required for your application. For example, a test execution may
be required for the functional testing of an application, and a separate test execution cycle
may be required for the stress/volume testing of the same application. A complete and
thorough test plan will identify this need and many of the test cases can be used for both test
cycles. The secret to a controlled test execution is comprehensive planning. Without an
adequate test plan in place to control your entire test process, you may inadvertently cause
problems for subsequent testing.



Measuring the Results
This step evaluates the results of the test as compared to the acceptance criteria set down in
the test plan. Specific elements to be measured and analyzed include:
Test Execution Log Review - The Log Review compiles a listing of the activities of all test
cases, noting those that passed, failed or were not executed.
Determine Application Status - This step identifies the overall status of the application after
testing, for example: ready for release, needs more testing, etc.
Test Execution Statistics - This summary identifies the total number of tests that were
executed, the type of test, and the completion status.
Application Defects - This final and very important report identifies potential defects in the
software, including application processes that need to be analyzed further.
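
A minimal sketch of how those measurements could be pulled out of a simple execution log;
the log format and test names are assumed for illustration.

    from collections import Counter

    # Hypothetical execution log entries: (test case name, test type, completion status)
    execution_log = [
        ("TC_Login_01", "functional", "pass"),
        ("TC_Login_02", "functional", "fail"),
        ("TC_Load_50users", "load", "pass"),
        ("TC_Report_03", "functional", "not executed"),
    ]

    status_counts = Counter(status for _, _, status in execution_log)
    type_counts = Counter(test_type for _, test_type, _ in execution_log)
    defect_candidates = [name for name, _, status in execution_log if status == "fail"]

    print("Total tests executed or attempted:", len(execution_log))
    print("Completion status:", dict(status_counts))
    print("Tests per type:", dict(type_counts))
    print("Potential defects to analyze further:", defect_candidates)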

16.4 Automation Methods

Capture/Playback Approach: Capture/playback tools record, in a test script, the sequence of
manual operations entered by the test engineer. These sequences are
played back during test execution. The benefit of this approach is that the captured
session can be re-run at some later point in time to ensure that the system performs the
required behavior.

The shortcoming of capture/playback is that in many cases, if the system functionality
changes, the capture/playback session will need to be completely re-recorded to capture the
new sequence of user interactions. Tools like WinRunner provide a scripting language, and it is
possible for engineers to edit and maintain such scripts. This sometimes reduces the effort
over the completely manual approach; however, the overall savings are usually minimal.



Data Driven Approach

The data-driven approach plays back the same user actions but with varying input
values. This allows one script to test multiple sets of data. It is applicable when
large volumes and different sets of data need to be fed to the application and tested for
correctness. The benefit of this approach is that it is faster and more accurate than
testing manually, and both positive and negative data can be exercised in the same
run.
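
A minimal sketch of the idea: one script, many input rows drawn from an external file. The
CSV file name, its columns and the discount routine standing in for the application are
assumptions made for illustration.

    import csv

    def apply_discount(price: float, rate: float) -> float:
        """Stand-in for the application behaviour being exercised."""
        return round(price * (1 - rate), 2)

    # testdata.csv is assumed to hold the columns: price, rate, expected
    with open("testdata.csv", newline="") as handle:
        for row in csv.DictReader(handle):
            actual = apply_discount(float(row["price"]), float(row["rate"]))
            status = "PASS" if actual == float(row["expected"]) else "FAIL"
            print(f"{status}: price={row['price']} rate={row['rate']} "
                  f"expected={row['expected']} actual={actual}")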

Test Script execution:

In this phase we execute the scripts that are already created. Scripts need to be reviewed
and validated for results and accepted as functioning as expected before they are used live.

Steps to be followed before and during execution of scripts:
1. The test tool is installed on the machine.
2. The test environment/application to be tested is installed on the machine.
3. Prerequisites for running the scripts, such as tool settings, playback options and any
necessary data table or data pool updates, are taken care of.
4. Select the script that needs to be executed and run it.
5. Wait until execution is done.
6. Analyze the results via Test Manager or in the logs.




      Test script execution process:




                                Test tool ready
                                       |
                                Test application ready
                                       |
                                Tool settings, playback options
                                       |
                                Script execution
                                       |
                                Result analysis
                                       |
                                Defect management


17 General automation tool comparison
Anyone who has contemplated the implementation of an automated test tool has
quickly realized the wide variety of options on the market in terms of both the kinds
of test tools being offered and the number of vendors. The best tool for any particular
situation depends on the system engineering environment that applies and the
testing methodology that will be used, which in turn will dictate how automation
will be invoked to support the process.
This appendix evaluates major tool vendors on their test tool characteristics,
test execution capability, tool integration capability, test reporting capability, performance
testing and analysis, and vendor qualification. The following tool vendors
evaluated are Compuware, Empirix/RSW, Mercury, Rational, and Segue.


17.1 Functional Test Tool Matrix
The Tool Matrix is provided for quick and easy reference to the capabilities of the test tools.
Each category in the matrix is given a rating of 1 - 5: 1 = excellent support for this
functionality; 2 = good support, but lacking, or another tool provides more effective support;
3 = basic support only; 4 = supported only through an API call or third-party add-in, but
not included in the general test tool / below average; 5 = no support. In general, a set of
criteria can be built up using this matrix and an indicative score obtained to help in the
evaluation process. Usually the lower the score the better, but this is subjective and is based
on the experience of the author and the test professionals' opinions used to create this document.

A detailed description is given below of each of the categories used in the matrix.


17.2 Record and Playback
     This category details how easy it is to record and play back a test. Does the tool support
     low-level recording (mouse drags, exact screen location)? Is there object recognition
     when recording and playing back, or does it appear to record OK but then fail on playback
     (without any environment change, unique IDs changing, etc.)? How easy is it to read the
     recorded script?
     When automating, this is the first thing that most test professionals will do. They will
     record a simple script, look at the code and then play it back. This is very similar to
     recording a macro in, say, Microsoft Access. Eventually record and playback becomes
     less and less a part of the automation process, as it is usually more robust to use the built-in
     functions to directly test objects, databases, etc. However, this should be done as a
     minimum in the evaluation process, because if the tool of choice cannot recognize the
     application's objects then the automation process will be a very tedious experience.




     17.3           Web Testing
     Web-based functionality in most applications is now a part of everyday life. As such, the
     test tool should provide good web-based test functionality in addition to its client/server
     functions.
     In judging the rating for this category I looked at the tool's native support for HTML tables,
     frames, the DOM, various browser platforms, web site maps and links.
     Web testing can be riddled with problems if various considerations are not taken into
     account. Here are a few examples:

         - Are there functions to tell me when the page has finished loading?
         - Can I tell the test tool to wait until an image appears?
         - Can I test whether links are valid or not? (A small sketch of this check follows below.)
         - Can I test web-based objects' functions, such as whether an object is enabled or
           contains data?
         - Are there facilities that will allow me to programmatically look for objects of a
           certain type on a web page or locate a specific object?
         - Can I extract data from the web page itself, e.g. the title or a hidden form element?

     With client/server testing the target customer is usually well defined: you know which
     network operating system you will be using, the applications and so on. On the web it
     is far different. A person may be connecting from the USA or Africa, they may be
     disabled, they may use various browsers, and the screen resolution on their computer will
     be different. They will speak different languages, have fast connections and slow
     connections, connect using Mac, Linux or Windows, and so on. So the cost to set up a test
     environment is usually greater than for a client/server test, where the environment is fairly
     well defined.
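
     One of the questions above, whether links are valid, can be approximated even without a
     commercial tool; a minimal sketch using only the Python standard library (the URLs are
     placeholders).

         from urllib.error import HTTPError, URLError
         from urllib.request import Request, urlopen

         links = ["https://www.example.com/", "https://www.example.com/missing-page"]

         for url in links:
             try:
                 # A HEAD request keeps the check lightweight; some servers may not support it.
                 response = urlopen(Request(url, method="HEAD"), timeout=10)
                 print(f"OK   {url} -> {response.status}")
             except HTTPError as err:
                 print(f"BAD  {url} -> HTTP {err.code}")
             except URLError as err:
                 print(f"BAD  {url} -> {err.reason}")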

17.4 Database Tests
          Most applications provide the facility to preserve data outside of themselves. This is
          usually achieved by holding the data in a database. As such, checking what is in the
          backend database usually verifies the proper validation of tests carried out on the
          front end of an application. Although there are many databases available (e.g. Oracle,
          DB2, SQL Server, Sybase, Informix, Ingres), all of them support a universal query
          language known as SQL and a protocol for communicating with these databases
          called ODBC (JDBC can be used in Java environments). I have looked at each tool's
          support for SQL and ODBC, and how they hold returned data, e.g. in an array, a
          cursor, a variable, etc. How does the tool manipulate this returned data? Can it call
          stored procedures and supply required input variables? What is the range of
          functions supplied for this testing?
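
          A minimal sketch of the pattern being described: drive the front end (simulated here by
          a plain function), then confirm the change in the backend with SQL. sqlite3 stands in for
          an ODBC/JDBC connection, and the table and column names are assumptions.

              import sqlite3

              def create_customer(conn: sqlite3.Connection, name: str, city: str) -> None:
                  """Stand-in for the front-end action under test."""
                  conn.execute("INSERT INTO customers (name, city) VALUES (?, ?)", (name, city))
                  conn.commit()

              conn = sqlite3.connect(":memory:")
              conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

              create_customer(conn, "Acme Ltd", "Chennai")

              # Backend check: the row really was persisted with the expected values.
              row = conn.execute(
                  "SELECT name, city FROM customers WHERE name = ?", ("Acme Ltd",)
              ).fetchone()
              assert row == ("Acme Ltd", "Chennai"), f"unexpected row: {row}"
              print("Backend verification passed:", row)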


17.5 Data Functions
          As mentioned above, applications usually provide a facility for storing data offline. So
          to test this, we will need to create data to input into the application. I have looked at
          each tool's facilities for creating and manipulating data. Does the tool allow you to
          specify the type of data you want? Can you automatically generate data? Can you
          interface with files, spreadsheets, etc. to create and extract data? Can you randomise the
          access to that data? Is the data access truly random? This functionality is normally
          more important than database tests, as the databases will usually have their own
          interface for running queries. However, applications (except for manual input) do not
          usually provide facilities for bulk data input.
          The added benefit (as I have found) is that this functionality can be used for a production
          reason, e.g. for the aforementioned bulk data input sometimes carried out in data
          migration or application upgrades.
          These functions are also very important as you move from the record/playback
          phase, to data-driven tests, to framework testing. Data-driven tests are tests that replace
          hard-coded names, addresses, numbers, etc. with variables supplied from an external
          source, usually a CSV (comma-separated values) file, spreadsheet or database.
          Frameworks are usually the ultimate goal in deploying automation test tools.
          Frameworks provide an interface to all the applications under test by exposing a
          suitable list of functions, databases, etc. This allows an inexperienced tester/user to
          run tests by just running/providing the test framework with known
          commands/variables. A test framework has parallels to software frameworks, where
          you develop an encapsulation layer of software (the framework) around the applications,
          databases, etc. and expose functions, classes, methods, etc. that are used to call the
          underlying applications, return data, input data, and so on.
          However, doing this requires a lot of time, skilled resources and money to facilitate the
          first two.
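
          A minimal sketch of the kind of bulk data generation discussed above, writing a CSV
          file that a data-driven script or a bulk-load job could consume; the field names and
          value ranges are invented for illustration.

              import csv
              import random
              import string

              def random_name(length: int = 8) -> str:
                  """Generate a simple random customer name."""
                  return "".join(random.choices(string.ascii_lowercase, k=length)).title()

              rows = [
                  {
                      "customer": random_name(),
                      "credit_limit": random.randint(1_000, 50_000),
                      "country": random.choice(["UK", "USA", "India", "Germany"]),
                  }
                  for _ in range(1000)
              ]

              with open("generated_customers.csv", "w", newline="") as handle:
                  writer = csv.DictWriter(handle, fieldnames=["customer", "credit_limit", "country"])
                  writer.writeheader()
                  writer.writerows(rows)
              print("Wrote", len(rows), "rows of test data")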


17.6 Object Mapping
          If you are in a role that can help influence the design of a product, try to get the
          development/design team to use standard and not custom objects. Then hopefully
          you will not need this functionality.
          However you may find that most (hopefully) of the application has been implemented
          using standard objects supported by your test tool vendor but there may be a few
          objects that are custom ones.
          Most custom objects will behave like a similar standard control. Here are a few
          standard objects that are seen in everyday applications:

                   Pushbuttons
                   Checkboxes
                   Radio buttons
                   List views
                   Edit boxes
                   Combo boxes

          If you have a custom object that behaves like one of these, are you able to map it
          (tell the test tool that the custom control behaves like the standard control)? Does it
          support all of the standard control's methods? Can you add the custom control to its
          own class of control?


17.7    Image Testing
          Let's hope this is not a major part of your testing effort, but occasionally you may have
          to use this to test bitmaps and similar images. Also, when the application has painted
          controls, like those in the calculator applet found on a lot of Windows systems, you
          may need to use this.
          At least one of the tools allows you to map painted controls to standard controls, but
          to do this you have to rely on the screen co-ordinates of the image.
          Does the tool provide OCR (optical character recognition)? Can it compare one
          image against another? How long does the compare take? If the compare fails, how
          long does that take? Does the tool allow you to mask certain areas of the screen
          when comparing?
          I have looked at these facilities in the base tool set.

17.8    Test/Error recovery
          This can be one of the most difficult areas to automate, but if it is automated it
          provides the foundation for a truly robust test suite. Suppose the application
          crashes while I am testing: what can I do? If a function does not receive the correct
          information, how can I handle this? If I get an error message, how do I deal with that?
          If I access a web site and get a warning, what do I do? If I cannot get a database
          connection, how do I skip those tests?
          The test tool should provide facilities to handle the above questions. I looked at the
          test tools' built-in wizards for standard test recovery (when you finish tests or when a
          script fails), and at error recovery caused by the application and environment. How
          easy is it to build this into your code?
          The rating given will depend on how many errors the tool can capture, the types of
          errors, how it recovers from errors, etc.
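
          A minimal sketch of the recovery idea at script level: every test runs inside a wrapper
          that records the failure, restores a known state and carries on, instead of letting one
          crash abort the suite. The restore step is a placeholder for whatever your tool or
          environment actually provides.

              import traceback

              def restore_known_state() -> None:
                  """Placeholder: restart the application, reconnect to the database, reload data, etc."""
                  print("  ...recovering to a known state")

              def run_with_recovery(tests) -> dict:
                  """Run (name, callable) pairs, recovering after any failure so the suite continues."""
                  results = {}
                  for name, test in tests:
                      try:
                          test()
                          results[name] = "pass"
                      except Exception:
                          results[name] = "fail"
                          traceback.print_exc()      # keep the evidence for the error report
                          restore_known_state()      # recover so the remaining tests can still run
                  return results

              def test_ok():
                  assert 1 + 1 == 2

              def test_crashes():
                  raise RuntimeError("simulated application crash")

              print(run_with_recovery([("test_ok", test_ok), ("test_crashes", test_crashes)]))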

17.9    Object Name Map
          As you test your application using the test tool of your choice, you will notice that it
          records actions against the objects that it interacts with. These objects are either
          identified through co-ordinates on the screen or, preferably, via some unique
          object reference referred to as a tag, object ID, index, name, etc. Firstly, the tool
          should provide services to uniquely identify each object it interacts with, by
          various means; the last and least desirable should be by co-ordinates on the screen.
          Once you are well into automation and build up tens and hundreds of scripts that
          reference these objects, you will want a mechanism that provides an easy
          update if the application being tested changes.
          All tools provide a search-and-replace facility, but the best implementations are those
          that provide a central repository to store these object identities. The premise is that it is
          better to change the reference in one place rather than having to go through each of
          the scripts to replace it there. I found this to be true, but not as big a point as some
          have stated, because those tools that don't support the central repository scheme
          can be programmed to reference window and object names in one place (say via a
          variable), and that variable can be used throughout the script (wherever that object
          appears).
          Does the Object Name Map allow you to alias the name, or change the name given
          by the tool to some more meaningful name?


17.10 Object Identity Tool
          Once you become more proficient with automation testing one of the primary means
          of identifying objects will be via an ID Tool. A sort of spy that looks at the internals of
          the object giving you details like the object name, ID and similar.
          This will allow you to reference that object within a function call.
          The tool should give you details of some of the object‘s properties, especially those
          associated with uniquely identifying the object or window. The tool will usually provide
          the tester with a point and ID service where you can use the mouse to point at the
          object, and in some window you will see all of that object's IDs and properties.
          A lot of the tools will allow you to search all the open applications in one swoop and
          show you the result in a tree that you can look at when required.


17.11 Extensible Language
          Here is a question that you will hear time and time again in automation forums: "How
          do I get {insert test tool name here} to do such and such?" There will be one of four
          answers:

              - I don't know
              - It can't do it
              - It can do it using function x, y or Z
              - It can't in the standard language, but you can do it like this

          What we are concerned with in this section is the last answer, i.e. if the standard test
          language does not support it, can I create a DLL or extend the language in some way
          to do it? This is usually an advanced topic and is not encountered until the trained
          tester has been using the tool for at least 6 - 12 months. However, when this is
          encountered, the tool should support language extension. If it is via DLLs, then the tester
          must have knowledge of a traditional development language, e.g. C, C++ or VB. For
          instance, if I wanted to extend a tool that could use DLLs created in VB, I would need
          to have Visual Basic, open say an ActiveX DLL project, and create a class containing
          various methods (similar to functions); then I would build the DLL file, register it on the
          machine, and reference that DLL from the test tool, calling the methods according to
          their specification. This will sound a lot clearer as you go on with the tools, and this
          document will be updated to include advanced topics like this on extending the tools'
          capabilities.
          Some tools provide extension by allowing you to create user-defined functions,
          methods, classes, etc., but these are normally a mixture of the already supported data
          types, functions, etc. rather than extending the tool beyond its released functionality.
          Because this is an advanced topic I have not taken ease of use into account, as
          those people who have got to this level should already have exhausted the current
          capabilities of the tools, want to use external functions such as Win32 API functions,
          and have a good grasp of programming.
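
          The same mechanism can be seen outside the commercial tools with Python's ctypes
          module, which loads a compiled library and calls an exported function directly. The
          function used here, strlen from the standard C runtime, is chosen purely to illustrate
          the calling pattern.

              import ctypes
              import ctypes.util

              # Locate and load the C runtime (libc on Unix-like systems, msvcrt on Windows).
              libc_name = ctypes.util.find_library("c") or "msvcrt"
              libc = ctypes.CDLL(libc_name)

              # Declare the signature of the exported function before calling it.
              libc.strlen.argtypes = [ctypes.c_char_p]
              libc.strlen.restype = ctypes.c_size_t

              print(libc.strlen(b"extensibility"))   # prints 13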




17.12 Environment Support
          How many environments does the tool support out of the box? Does it support the latest
          Java release? What about Oracle, PowerBuilder, WAP, etc.? Most tools can interface to
          unsupported environments if the developers in that environment provide classes, DLLs,
          etc. that expose some of the application's details, but whether a developer will, or has
          time to, do this is another question.
          Ultimately, environment support is the most important part of automation. If the
          tool does not support your environment/application then you are in trouble, and in
          most cases you will need to revert to manually testing the application (more
          shelfware).


17.13   Integration
          How well does the tool integrate with other tools? This is becoming more and more
          important. Does the tool allow you to run it from various test management suites?
          Can you raise a bug directly from the tool and feed the information gathered from
          your test logs into it? Does it integrate with products like Word, Excel or requirements
          management tools?
          When managing large test projects, with an automation team greater than five and
          testers totaling more than ten, the management aspect and the tool's integration
          move further up the importance ladder. An example could be a major bank that wants to
          redesign its workflow management system to allow faster processing of customer
          queries. The anticipated requirements for the new workflow software number in the
          thousands. To test these requirements 40,000 test cases have been identified, and 20,000
          of these can be automated. How do I manage this? This is where a test management
          tool comes in really handy.
          Also, how do I manage the bugs raised as a result of automation testing? Integration
          becomes very important; having separate systems that don't share data
          may require duplication of information.
          The companies that score higher here are those that provide tools outside the
          testing arena, as they can build integration with their other products; so, when it has
          come down to the wire on some projects, we have gone with the tool that integrated
          with the products we already had.


17.14   Cost
          In my opinion cost is the least significant factor in this matrix. Why? Because all the tools
          are similar in price except Visual Test, which is at least five times cheaper than the rest,
          but as you will see from the matrix there is a reason: although very functional, it does
          not provide the range of facilities that the other tools do.
          Price typically ranges from $2,900 - $5,000 (depending on quantity bought,
          packages, etc.) in the US and around £2,900 - £5,000 in the UK for the base tools
          included in this document.
          Since the tools all cost a similar price, it is usually a case of which one will
          do the job rather than which is the cheapest.
          I believe Visual Test will prove to be a bigger hit as it expands its functional range; it
          was not that long ago that it did not support web-based testing.
          The prices are kept this high because they can be. All the tools are roughly the same
          price, and the volume of sales is low relative to, say, a fully fledged programming
          language IDE like JBuilder or Visual C++, which are a lot more function-rich and
          flexible than any of the test tools.
          On top of the above prices you usually pay an additional maintenance fee of between
          10 and 20%. There are not many applications I know of that cost this much per license,
          not even some very advanced operating systems. However, it is all a matter of supply:
          the bigger the supply, the lower the price, as you can spread the development costs
          more. I do not anticipate the prices moving upwards, as this seems to be
          the price the market will tolerate.
          Visual Test also provides a free runtime license.


17.15   Ease Of Use
          This section is very subjective but I have used testers (my guinea pigs) of various
          levels and got them from scratch to use each of the tools. In more cases than not
          they have agreed on which was the easiest to use (initially). Obviously this can
          change as the tester becomes more experienced and the issues of say extensibility,
          script maintenance, integration, data-driven tests, etc are required. However this
          score is based on the productivity that can be gained in say the first three months
          when those issues are not such a big concern.
          Ease of use includes out-of-the-box functions, debugging facilities, layout on screen,
          help files and user manuals.

17.16   Support
          In the UK this can be a problem, as most of the test tool vendors are based in the
          USA with satellite branches in the UK.
          Just from my own experience and that of the testers I know in the UK, we have found
          Mercury to be the best for support, then Compuware, Rational and last Segue.
          However, having said that, you can find a lot of resources for Segue on the Internet,
          including a forum at www.betasoft.com that can provide most of the answers rather
          than ringing the support line.
          On their websites Segue and Mercury provide much useful user- and vendor-
          contributed material.
          I have also included various other criteria, like the availability of skilled resources,
          online resources, validity of responses from the helpdesk, speed of responses and
          similar.


17.17   Object Tests
          Now presuming the tool of choice does work with the application you wish to test
          what services does it provide for testing object properties?
          Can it validate several properties at once? Can it validate several objects at once?
          Can you set object properties to capture the application state?
          This should form the bulk of your verification as far as the automation process is
          concerned so I have looked at the tools facilities on client/server as well as web
          based applications.


17.18 Matrix
          What will follow after the matrix is a tool-by-tool comparison under the appropriate
          heading (as listed above) so that the user can get a feel for the tools functionality
          side by side.
          Each category in the matrix is given a rating of 1 - 5: 1 = excellent support for this
          functionality; 2 = good support, but lacking, or another tool provides more effective
          support; 3 = basic support only; 4 = supported only through an API call or
          third-party add-in, but not included in the general test tool / below average; 5 = no
          support.




  Category                WinRunner   QA Run   Silk Test   Visual Test   Robot
  Record & Playback           2          1         1            3          1
  Web Testing                 1          2         2            3          2
  Database tests              1          1         1            4          1
  Data functions              2          2         2            3          1
  Object Mapping              1          1         1            2          1
  Image testing               1          1         1            2          1
  Test/Error recovery         2          2         1            2          2
  Object Name Map             1          2         1            4          4
  Object Identity Tool        2          1         2            1          1
  Extensible Language         2          2         1            2          1
  Environment support         1          2         2            3          2
  Integration                 1          1         3            2          1
  Cost                        3          2         3            1          2
  Ease of use                 2          2         3            3          1
  Support                     1          2         2            2          2
  Object Tests                1          1         1            2          1

  (Categories are listed in the order of sections 17.2 - 17.17 above.)



17.19 Matrix score
         Win Runner = 24
         QARun = 25
         SilkTest = 24
         Visual Test = 39
         Robot = 24
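
         The indicative scores are simply the sum of each tool's ratings across the sixteen
         categories; a minimal sketch of that calculation is below. (Summing the Silk Test row
         exactly as printed gives 27 rather than the 24 quoted above, which suggests a
         transcription slip in one of its ratings.)

             ratings = {
                 "WinRunner":   [2, 1, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 3, 2, 1, 1],
                 "QA Run":      [1, 2, 1, 2, 1, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 1],
                 "Silk Test":   [1, 2, 1, 2, 1, 1, 1, 1, 2, 1, 2, 3, 3, 3, 2, 1],
                 "Visual Test": [3, 3, 4, 3, 2, 2, 2, 4, 1, 2, 3, 2, 1, 3, 2, 2],
                 "Robot":       [1, 2, 1, 1, 1, 1, 2, 4, 1, 1, 2, 1, 2, 1, 2, 1],
             }

             # Lower totals indicate broader and stronger support under this rating scheme.
             for tool, scores in ratings.items():
                 print(f"{tool}: {sum(scores)}")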




18 Sample Test Automation Tool
         Rational offers the most complete lifecycle toolset (including testing) of these vendors
for the Windows platform. When it comes to object-oriented development they are the
acknowledged leaders, with most of the leading OO experts working for them. Some of their
products are worldwide leaders, e.g. Rational Robot, Rational Rose, ClearCase, RequisitePro,
etc. Their Unified Process is a very good development model that I have been involved with,
which allows mapping of requirements to use cases and test cases, with a whole set of tools to
support the process.


18.1 Rational Suite of tools
        Rational RequisitePro is a requirements management tool that helps project teams
control the development process. RequisitePro organizes your requirements by linking
Microsoft Word to a requirements repository and providing traceability and change
management throughout the project lifecycle. A baseline version of RequisitePro is included
with Rational Test Manager. When you define a test requirement in RequisitePro, you can
access it in Test Manager.

        Rational ClearQuest is a change-request management tool that tracks and
manages defects and change requests throughout the development process. With
ClearQuest, you can manage every type of change activity associated with software development,
including enhancement requests, defect reports, and documentation modifications.

        Rational Purify is a comprehensive C/C++ run-time error checking tool that
automatically pinpoints run-time errors and memory leaks in all components of an application,
including third-party libraries, ensuring that code is reliable.

       Rational Quantify is an advanced performance profiler that provides application
performance analysis, enabling developers to quickly find, prioritize and eliminate
performance bottlenecks within an application.

        Rational PureCoverage is a customizable code coverage analysis tool that
provides detailed application analysis and ensures that all code has been exercised,
preventing untested code from reaching the end-user.

         Rational Suite Performance Studio is a sophisticated tool for automating
performance tests on client/server systems. A client/server system includes client
applications accessing a database or application server, and browsers accessing a Web
server. Performance Studio includes Rational Robot and Rational Load Test. Use Robot to
record client/server conversations and store them in scripts. Use Load Test to schedule and
play back the scripts.




        Rational Robot. Facilitates functional and performance testing by automating record
and playback of test scripts. Allows you to write, organize, and run tests, and to capture and
analyze the results.

          Rational Test Factory. Automates testing by combining automatic test generation
with source-code coverage analysis. Tests an entire application, including all GUI features
and all lines of source code.

        During playback, Rational Load Test can emulate hundreds, even thousands, of
users placing heavy loads and stress on your database and Web servers.

        Rational Test categorizes test information within a repository by project. You can
use the Rational Administrator to create and manage projects.


        The tools to be discussed here are:
Rational Administrator
Rational Robot
Rational Test Manager




18.2 Rational Administrator
What is a Rational Project?

         A Rational project is a logical collection of databases and data stores that associates
the data you use when working with Rational Suite. A Rational project is associated with one
Rational Test data store, one RequisitePro database, one ClearQuest database, and
multiple Rose models and RequisitePro projects, and optionally places them under
configuration management.
Rational Administrator is used to create and manage Rational repositories, users and groups,
and to manage security privileges.

How to create a new project?




          Open the Rational Administrator and go to File -> New Project.

        In the window that opens, enter details such as the project name and location.
Click Next.
        In the next window displayed, enter a password if you want to protect the
project; the password is then required to connect to, configure or delete the project.



Click Finish.
        In the Configure Project window displayed, click the Create button. To manage
requirements assets connect to RequisitePro, to manage test assets create an associated test
data store, and for defect management connect to a ClearQuest database.




        Once the Create button in the Configure Project window is chosen, the Create Test
Data Store window shown below will be displayed. Accept the default path and click the OK button.




        Once the window below is displayed, the test data store has been successfully
created; click OK to close the window.




        Click OK in the Configure Project window, and your first Rational project is
ready to play with.




Rational Administrator will display your "TestProject" details as below:




18.3 Rational Robot
       Rational Robot is used to develop three kinds of scripts: GUI scripts for functional testing, and
VU and VB scripts for performance testing.
Robot can be used to:

         Perform full functional testing. Record and play back scripts that navigate through
          your application and test the state of objects through verification points.

         Perform full performance testing. Use Robot and TestManager together to record and
          play back scripts that help you determine whether a multi-client system is performing
          within user-defined standards under varying loads.


         Create and edit scripts using the SQABasic, VB, and VU scripting environments. The
          Robot editor provides color-coded commands with keyword Help for powerful
          integrated programming during script development.

         Test applications developed with IDEs such as Visual Basic, Oracle Forms,
          PowerBuilder, HTML, and Java. Test objects even if they are not visible in the
          application's interface.

         Collect diagnostic information about an application during script playback. Robot is
          integrated with Rational Purify, Quantify, and PureCoverage. You can play back
          scripts under a diagnostic tool and see the results in the log.

     The Object-Oriented Recording technology in Robot lets you generate scripts quickly by
simply running and using the application-under-test. Robot uses Object-Oriented Recording
to identify objects by their internal object names, not by screen coordinates. If objects change
locations or their text changes, Robot still finds them on playback.
The Object Testing technology in Robot lets you test any object in the application-under-test,
including the object's properties and data. You can test standard Windows objects and IDE-
specific objects, whether they are visible in the interface or hidden.




18.4 Robot login window




          Once logged in, you will see the Robot window. Go to File -> New -> Script.




          In the screen displayed, enter the name of the script, say "First Script", by which
the script will be referred to from now on, and an optional description. The type of the
script is GUI for functional testing and VU for performance testing.




18.5 Rational Robot main window-GUI script








     The GUI Script window (top pane) displays GUI scripts that you are currently recording,
editing, or debugging. It has two panes:

         Asset pane (left) – Lists the names of all verification points and low-level scripts for
          this script.
         Script pane (right) – Displays the script.

The Output window (bottom pane) has two tabs:

         Build – Displays compilation results for all scripts compiled in the last operation. Line
          numbers are enclosed in parentheses to indicate lines in the script with warnings and
          errors.
         Console – Displays messages that you send with the SQAConsoleWrite command.
          Also displays certain system messages from Robot.

To display the Output window:
        Click View -> Output.
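
As a quick illustration of the Console tab, a GUI script can write its own progress messages with
the SQAConsoleWrite command. The fragment below is an illustrative sketch only; the message
text is arbitrary:

    ' Report progress to the Console tab of the Output window
    SQAConsoleWrite "Order entry navigation finished - starting verification points"

During playback, the message appears on the Console tab alongside any system messages from
Robot.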

How to record and play back a script?
        To record a script, go to Record -> Insert at Cursor.
        Then perform the navigation in the application to be tested and, once done, stop the
recording via Record -> Stop.
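
For illustration, a freshly recorded GUI script is ordinary SQABasic code, broadly similar to the
sketch below. The window caption, object names and coordinates here are hypothetical; what
Robot actually records depends entirely on the application under test:

    Sub Main
        ' Set the context to the application's main window
        Window SetContext, "Caption=Order Entry", ""
        ' User actions recorded while navigating the application
        PushButton Click, "Text=New Order"
        EditBox Click, "ObjectIndex=1", "Coords=25,10"
        InputKeys "1001"
        PushButton Click, "Text=Save"
    End Sub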




18.6 Record and Playback options
Go to Tools -> GUI Record Options; the window below will be displayed.




         In this window we can set options on the following tabs:
         General tab: general options such as identification of lists and menus, and recording of
think time.
         Web Browser tab: the browser type (IE or Netscape).
         Robot Window tab: how Robot should be displayed during recording, and hotkey details.
         Object Recognition Order tab: the order in which objects are recognized during recording.
For example, select a preference in the Object order preference list; if you will be testing C++
applications, change the object order preference to C++ Recognition Order.




18.6.1              Playback options
Go to Tools -> Playback Options to set the options needed while running the script.

This helps you handle unexpected windows during playback, configure error recovery, specify
the time-out period, and manage the log and log data.


18.7 Verification points
A verification point is a point in a script that you create to confirm the state of an object across
builds of the application-under-test. During recording, the verification point captures object
information (based on the type of verification point) and stores it in a baseline data file. The
information in this file becomes the baseline of the expected state of the object during
subsequent builds.
When you play back the script against a new build, Robot retrieves the information in the
baseline file for each verification point and compares it to the state of the object in the new
build. If the captured object does not match the baseline, Robot creates an actual data file.
The information in this file shows the actual state of the object in the build.

After playback, the results of each verification point appear in the log in Test Manager. If a
verification point fails (the baseline and actual data do not match), you can select the
verification point in the log and click View -> Verification Point to open the appropriate
Comparator. The Comparator displays the baseline and actual files so that you can compare
them.
A verification point is stored in the project and is always associated with a script. When you
create a verification point, its name appears in the Asset (left) pane of the Script window. The
verification point script command, which always begins with Result =, appears in the Script
(right) pane.
Because verification points are assets of a script, if you delete a script, Robot also deletes all
of its associated verification points.
You can easily copy verification points to other scripts if you want to reuse them.
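
To make the Result = convention concrete, the fragment below is a hedged sketch of how
verification point commands can appear in a GUI script. The window caption and verification
point names are hypothetical, and the exact arguments are generated by Robot from the
verification point type you record:

    ' Window Existence verification point - wait up to 30 seconds for the window
    Result = WindowVP (Exists, "Caption=Order Confirmation", "VP=ConfirmationWindow;Wait=2,30")

    ' Object Data verification point recorded against a list box
    Result = ListBoxVP (CompareData, "ObjectIndex=1", "VP=OrderListData")

If either verification point fails during playback, the corresponding entry in the log can be
opened in the Comparator as described above.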


18.7.1              List of Verification Points

The following table summarizes each Robot verification point.

              Type – Description

              Alphanumeric – Captures and compares alphabetic or numeric values.
              Clipboard – Captures and compares alphanumeric data that has been copied to the
                    Clipboard.
              File Comparison – Compares the contents of two files.
              File Existence – Checks for the existence of a specified file.
              Menu – Captures and compares the text, accelerator keys, and state of menus.
                    Captures up to five levels of sub-menus.
              Module Existence – Checks whether a specified module is loaded into a specified
                    context (process), or is loaded anywhere in memory.
              Object Data – Captures and compares the data in objects.
              Object Properties – Captures and compares the properties of objects.
              Region Image – Captures and compares a region of the screen (as a bitmap).
              Web Site Compare – Captures a baseline of a Web site and compares it to the Web
                    site at another point in time.
              Web Site Scan – Checks the content of a Web site with every revision and ensures
                    that changes have not resulted in defects.
              Window Existence – Checks that the specified window is displayed before continuing
                    with the playback.
              Window Image – Captures and compares the client area of a window as a bitmap
                    (the menu, title bar, and border are not captured).




18.8 About SQABasic Header Files
SQABasic header files let you declare custom procedures, constants, and variables that you
want to use with multiple scripts or SQABasic library source files.
SQABasic files are stored in the SQABas32 folder of the project, unless you specify another
location. You can specify another location by clicking Tools -> General Options, selecting the
Preferences tab and, under SQABasic path, using the Browse button to find the location. Robot
will check this location first; if the file is not there, it will look in the SQABas32 directory.
You can use Robot to create and edit SQABasic header files. They can be accessed by all
modules within the project. SQABasic header files have the extension .sbh.
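
As an illustrative sketch of what such a header file might contain (the constant, variable, and
procedure names below are hypothetical, not part of Robot itself):

    ' global.sbh - declarations shared by all scripts in the project
    Global Const APP_CAPTION = "Order Entry"   ' main window caption used by several scripts
    Global gOrderCount As Integer              ' counter shared across scripts
    ' Procedure implemented in a library source (.sbl) file, assumed here to be named Common
    Declare Sub LogInToApp BasicLib "Common" (UserName As String, Password As String)

Any script or library file that includes this header can then call LogInToApp and refer to
APP_CAPTION directly.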


18.9 Adding Declarations to the Global Header File
For your convenience, Robot provides a blank header file called Global.sbh. Global.sbh is a
project-wide header file stored in SQABas32 in the project. You can add declarations to this
global header file and/or create your own.
To open Global.sbh:

1. Click File -> Open -> SQABasic File.
2. Set the file type to Header Files (*.sbh).
3. Select global.sbh, and then click Open.


18.10 Inserting a Comment into a GUI Script:
During recording or editing, you can insert lines of comment text into a GUI script. Comments
are helpful for documenting and editing scripts. Robot ignores comments at compile time.
To insert a comment into a script during recording or editing:

1.        If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.

If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on
the Standard toolbar.

2.        Click the Comment button on the GUI Insert toolbar.

3.        Type the comment (60 characters maximum).

4.        Click OK to continue recording or editing.

Robot inserts the comment into the script (in green by default) preceded by a single
quotation mark. For example:

' This is a comment in the script

To change lines of text into comments or to uncomment text:

1.        Highlight the text.

          2.        Click Edit -> Comment Line or Edit -> Uncomment Line.

18.11 About Data pools
          A datapool is a test dataset. It supplies data values to the variables in a script during
          script playback.
          Datapools let you automatically pump test data to virtual testers under high-volume
          conditions that potentially involve hundreds of virtual testers performing thousands of
          transactions.
          Typically, you use a datapool so that:

         Each virtual tester that runs the script can send realistic data (which can include
          unique data) to the server.

         A single virtual tester that performs the same transaction multiple times can send
          realistic data to the server in each transaction.




18.11.1             Using Datapools with GUI Scripts

          If you are providing one or more values to the client application during GUI recording,
          you might want a datapool to supply those values during playback. For example, you
          might be filling out a data entry form and providing values such as order number, part
          name, and so forth. If you plan to repeat the transaction multiple times during
          playback, you might want to provide a different set of values each time.
          A GUI script can access a datapool when it is played back in Robot. Also, when a
          GUI script is played back in a TestManager suite, the GUI script can access the
          same datapool as other scripts.

          There are differences in the way GUI scripts and sessions are set up for datapool
          access:

         You must add datapool commands to GUI scripts manually while editing the script in
          Robot. Robot adds datapool commands to VU scripts automatically.

         There is no DATAPOOL_CONFIG statement in a GUI script. The SQADatapoolOpen
          command defines the access method to use for the datapool.

          Although there are differences in setting up datapool access in GUI scripts and
          sessions, you define a datapool for either type of script using TestManager in exactly
          the same way.
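
As a hedged sketch of what adding datapool commands to a GUI script by hand can look like:
the datapool name ("Orders") and column name ("PartName") below are hypothetical, and the
exact call sequence and arguments should be checked against the SQABasic language reference:

    Sub Main
        Dim dp As Long
        Dim rc As Integer
        Dim part As String

        ' Open the datapool defined in TestManager
        dp = SQADatapoolOpen("Orders")
        ' Advance to the next row of test data
        rc = SQADatapoolFetch(dp)
        ' Read the "PartName" column of the current row and type it into the application
        rc = SQADatapoolValue(dp, "PartName", part)
        InputKeys part
        ' Release the datapool when the transaction is complete
        rc = SQADatapoolClose(dp)
    End Sub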

18.12 Debug menu

          The Debug menu has the following commands:
          Go
          Go Until Cursor
          Animate
          Pause
          Stop
          Set or Clear Breakpoints
          Clear All Breakpoints
          Step Over
          Step Into
          Step Out
          Note: The Debug menu commands are for use with GUI scripts only.



18.13 Compiling the script
         When you play back a GUI script or VU script, or when you debug a GUI script,
Robot compiles the script if it has been modified since it last ran. You can also compile
scripts and SQABasic library source files manually.
           To compile the active script or library source file: click File -> Compile.
           To compile all scripts and library source files in the current project: click
           File -> Compile All. Use this if, for example, you have made changes to global
           definitions that may affect all of your SQABasic files.

        During compilation, the Build tab of the Output window displays compilation results
and error messages, with line numbers, for all compiled scripts and library source files.




18.14 Compilation errors




       After the script has been created, compiled, and any errors fixed, it can be executed.
The results are then analyzed in Test Manager.




19 Rational Test Manager

Test Manager is the open and extensible framework that unites all of the tools, assets,
and data both related to and produced by the testing effort. Under this single
framework, all participants in the testing effort can define and refine the quality goals
they are working toward. It is where the team defines the plan it will implement to
meet those goals. And, most importantly, it provides the entire team with one place to
go to determine the state of the system at any time.
In Test Manager you can plan, design, implement, and execute tests, and evaluate the results.
With Test Manager you can:
        Create, manage, and run reports. The reporting tools help you track assets such as
         scripts, builds, and test documents, and track test coverage and progress.
        Create and manage builds, log folders, and logs.
        Create and manage datapools and data types.

When script execution is started, the following window will be displayed.
The folder in which the log is to be stored and the log name need to be given in this
window.




19.1 Test Manager-Results screen




          In the Results tab of Test Manager, you can see the stored results.
          From Test Manager you can also view details of the run, such as the start time of the script.




20 Supported environments


20.1 Operating system
          WinNT 4.0 with Service Pack 5
          Win2000
          WinXP (Rational 2002)
          Win98
          Win95 with Service Pack 1



20.2 Protocols
          Oracle
          SQL server
          HTTP
          Sybase
          Tuxedo
          SAP
          PeopleSoft

20.3 Web browsers
          IE 4.0 or later
          Netscape Navigator (limited support)



20.4 Markup languages
          HTML and DHTML pages on IE4.0 or later.

20.5 Development environments
          Visual Basic 4.0 or above
          Visual C++
          Java
          Oracle Forms 4.5
          Delphi
          PowerBuilder 5.0 and above
          The basic product supports Visual Basic, VC++ and basic web pages. To test other
          types of applications, you have to download and run a free enabler program from
          Rational's website.

For more details, visit www.rational.com.




21 Performance Testing
      Performance testing measures the performance characteristics of an application. The
      main objective of performance testing is to demonstrate that the system functions to
      specification with acceptable response times while processing the required transaction
      volumes against a production-size database. In other words, the objective is to
      demonstrate that the system meets requirements for transaction throughput and
      response times simultaneously. The main deliverables from such a test, prior to
      execution, are automated test scripts and an infrastructure to be used to execute
      automated tests for extended periods.


21.1 What is Performance testing?
      Performance testing of an application is basically the process of understanding how the
      web application and its operating environment respond at various user load levels. In
      general, we want to measure the latency, throughput, and utilization of the web site
      while simulating attempts by virtual users to simultaneously access the site. One of the
      main objectives of performance testing is to maintain a web site with low latency, high
      throughput, and low utilization.



21.2 Why Performance testing?
      Performance problems are usually the result of contention for, or exhaustion of, some
      system resource. When a system resource is exhausted, the system is unable to scale
      to higher levels of performance. Maintaining optimum Web application performance is a
      top priority for application developers and administrators.

      Performance analysis is also carried out for various purposes, such as:

                During a design or redesign of a module or a part of the system, more than one
                alternative presents itself. In such cases, the evaluation of a design alternative is
                the prime mover for an analysis.
                Post-deployment realities create a need for tuning the existing system. A
                 systematic approach like performance analysis is essential to extract maximum
                 benefit from an existing system.
                Identification of bottlenecks in a system is more of an effort at troubleshooting.
                 This helps to direct and focus efforts at improving overall system response.
                As the user base grows, the cost of failure becomes increasingly unbearable. To
                increase confidence and to provide an advance warning of potential problems
                under load conditions, analysis must be done to forecast performance under
                load.

      Typically to debug applications, developers would execute their applications using
      different execution streams (i.e., completely exercise the application) in an attempt to
      find errors.
      When looking for errors in the application, performance is a secondary issue to features;
      however, it is still an issue.
Performance Testing Process & Methodology            Proprietary & Confidential
- 140 -
21.3 Performance Testing Objectives
      The objective of a performance test is to demonstrate that the system meets
      requirements for transaction throughput and response times simultaneously.
      This infrastructure is an asset and an expensive one too, so it pays to make as much
      use of this infrastructure as possible. Fortunately, this infrastructure is a test bed, which
      can be re-used for other tests with broader objectives. A comprehensive test strategy
      would define a test infrastructure to enable all these objectives to be met.

      The performance testing goals are:

                End-to-end transaction response time measurements.
                Measure Application Server components performance under various loads.
                Measure database components performance under various loads.
                Monitor system resources under various loads.
                Measure the network delay between the server and clients



21.4 Pre-Requisites for Performance Testing
We can identify five pre-requisites for a performance test. Not all of these need be in place
prior to planning or preparing the test (although this might be helpful), but rather, the list
defines what is required before a test can be executed.

First and foremost, the design specification or a separate performance requirements document
should:

                         Define specific performance goals for each feature that is instrumented.
                         Base performance goals on customer requirements.
                         Define specific customer scenarios.

    Quantitative, relevant, measurable, realistic, achievable requirements
     As a foundation to all tests, performance requirements should be agreed prior to the test.
     This helps in determining whether or not the system meets the stated requirements. The
     following attributes will help to have a meaningful performance comparison.
                     Quantitative - expressed in quantifiable terms such that when response
                        times are measured, a sensible comparison can be derived.
                     Relevant - a response time must be relevant to a business process.
                     Measurable - a response time should be defined such that it can be
                        measured using a tool or stopwatch and at reasonable cost.
                     Realistic - response time requirements should be justifiable when
                        compared with the durations of the activities within the business process
                        the system supports.


                         Achievable - response times should take some account of the cost of
                          achieving them.

     Stable system
     A test team attempting to construct a performance test of a system whose software is of
     poor quality is unlikely to be successful. If the software crashes regularly, it will probably
     not withstand the relatively minor stress of repeated use. Testers will not be able to
     record scripts in the first instance, or may not be able to execute a test for a reasonable
     length of time before the software, middleware or operating systems crash.


     Realistic test environment
      The test environment should ideally be the production environment or a close simulation
     and be dedicated to the performance test team for the duration of the test. Often this is
     not possible. However, for the results of the test to be realistic, the test environment
     should be comparable to the actual production environment. Even with an environment
     which is somewhat different from the production environment, it should still be possible to
     interpret the results obtained using a model of the system to predict, with some
     confidence, the behavior of the target environment. A test environment which bears no
     similarity to the actual production environment may be useful for finding obscure errors in
     the code, but is, however, useless for a performance test.


21.5 Performance Requirements
Performance requirements normally comprise three components:

               Response time requirements
               Transaction volumes detailed in 'Load Profiles'
               Database volumes

Response time requirements
When asked to specify performance requirements, users normally focus attention on
response times, and often wish to define requirements in terms of generic response times.
A single response time requirement for all transactions might be simple to define from the
user's point of view, but is unreasonable. Some functions are critical and require short
response times, but others are less critical and response time requirements can be less
stringent.

Load profiles
The second component of performance requirements is a schedule of load profiles. A load
profile is the level of system loading expected to occur during a specific business scenario.
Business scenarios might cover different situations when the users' organization has different
levels of activity or involve a varying mix of activities, which must be supported by the system.

Database volumes
Data volumes, defining the numbers of table rows which should be present in the database
after a specified period of live running, complete the load profile. Typically, data volumes
estimated to exist after one year's use of the system are used, but two-year volumes or
greater might be used in some circumstances, depending on the business application.

22 Performance Testing Process
The performance testing process follows these phases, each producing a deliverable:

        Requirements Collection Preparation – deliverable: Requirement Collection
        Test Plan Preparation – deliverable: Test Plan
        Test Design Preparation – deliverable: Test Design
        Scripting – deliverable: Test Scripts
        Test Execution – deliverable: Pre Test & Post Test Procedures
        Test Analysis – internal deliverable: Preliminary Report
        If the performance goal is not reached, the execution and analysis steps are repeated;
         once the goal is reached, Preparation of Reports produces the Final Report.
22.1 Phase 1 – Requirements Study
This activity is carried out during the business and technical requirements identification
phase. The objective is to understand the performance test requirements, Hardware &
Software components and Usage Model. It is important to understand as accurately and as
objectively as possible the nature of load that must be generated.
The following are the important performance test requirements that need to be captured during
this phase:
          Response Time
          Transactions Per Second
          Hits Per Second
          Workload
          Number of concurrent users
          Volume of data
          Data growth rate
          Resource usage
          Hardware and Software configurations


           Activity: Performance testing – Stress Test, Load Test, Volume Test, Spike Test,
           Endurance Test
           Work items:
                Understand the system and application model
                Server side and client side hardware and software requirements
                Browser emulation and automation tool selection
                Decide on the type and mode of testing
                Operational inputs – time of testing, client and server side parameters

22.1.1              Deliverables

        Deliverable                                                      Sample
  Requirement Collection




22.2 Phase 2 – Test Plan
        The following configuration information will be identified as part of performance testing
        environment requirement identification.

        Hardware Platform
                 Server Machines
                 Processors
                 Memory
                 Disk Storage
                 Load Machines configuration
                 Network configuration

        Software Configuration
                  Operating System
                  Server Software
                  Client Machine Software
                  Applications


           Activity                                                 Work items
  Test Plan Preparation                        Hardware and Software Details
                                               Test data
                                               Transaction Traversal that is to be tested with sleep times.
                                               Periodic status update to the client.

22.2.1              Deliverables

                        Deliverable                                                    Sample
       Test Plan




22.3 Phase 3 – Test Design
        Based on the test strategy detailed test scenarios would be prepared. During the test
        design period the following activities will be carried out:
                   Scenario design
                   Detailed test execution plan
                   Dedicated test environment setup
                   Script Recording/ Programming
                   Script Customization (Delay, Checkpoints, Synchronizations points)
                   Data Generation
                   Parameterization/ Data pooling
          Activity                                                   Work items
  Test Design Generation                        Hardware and software requirements, including the
                                                 server components, the load generators used, etc.
                                               Setting up the monitoring servers
                                               Setting up the data
                                               Preparing all the necessary folders for saving the results once
                                                 the test is over.
                                               Pre Test and Post Test Procedures


22.3.1              Deliverables

                         Deliverable                                                   Sample
  Test Design




22.4 Phase 4 –Scripting
              Activity                                                Work items
  Scripting                                    Browse through the application and record the transactions
                                                with the tool
                                               Parameterization, Error Checks and Validations
                                               Run the script for single user for checking the validity of
                                                scripts

22.4.1              Deliverables

                     Deliverable                                                       Sample
           Test Scripts




22.5 Phase 5 – Test Execution
The test execution will follow the various types of test as identified in the test plan. All the
scenarios identified will be executed. Virtual user loads are simulated based on the usage
pattern and load levels applied as stated in the performance test strategy.

The following artifacts will be produced during test execution period:
                   Test logs
                   Test Result

          Activity                                                   Work items
  Test Execution                               Starting the Pre Test Procedure scripts which includes start
                                                scripts for server monitoring.
                                               Modification of automated scripts if necessary
                                               Test Result Analysis
                                               Report preparation for every cycle




22.5.1               Deliverables
                    Deliverable                                                        Sample
           Test Execution




22.6 Phase 6 – Test Analysis
          Activity                                                  Work items
  Test Analysis                                Analyzing the run results and preparation of preliminary
                                                report.




22.6.1               Deliverables
                     Deliverable                                                       Sample
           Test Analysis




22.7 Phase 7 – Preparation of Reports
The test logs and results generated are analyzed based on performance under various
loads, transactions per second, database throughput, network throughput, think time, network
delay, resource usage, transaction distribution and data handling. Manual and automated
results analysis methods can be used for performance results analysis.

The following performance test reports/ graphs can be generated as part of performance
testing:-
                  Transaction Response time
                  Transactions per Second
                  Transaction Summary graph
                  Transaction performance Summary graph
                  Transaction Response graph – Under load graph
                  Virtual user Summary graph
                  Error Statistics graph
                  Hits per second graph
                  Throughput graph
                  Download per second graph
      Based on the performance report analysis, suggestions on improvement or tuning will be
      provided to the design team:
                  Performance improvements to application software, middleware,
                     database organization.
                  Changes to server system parameters.
                  Upgrades to client or server hardware, network capacity or routing.

           Activity                                                   Work items
  Preparation of Reports                       Preparation of final report.




22.7.1               Deliverables
                     Deliverable                                                       Sample
           Final Report




22.8 Common Mistakes in Performance Testing
                    • No Goals
                    • No general purpose model
                    • Goals =>Techniques, Metrics, Workload
                    • Not trivial
                    • Biased Goals
                    • 'To show that OUR system is better than THEIRS'
                    • Analysts = Jury
                    • Unsystematic Approach
                    • Analysis without Understanding the Problem
                    • Incorrect Performance Metrics
                    • Unrepresentative Workload
                    • Wrong Evaluation Technique
                    • Overlook Important Parameters
                    • Ignore Significant Factors
                    • Inappropriate Experimental Design
                    • Inappropriate Level of Detail
                    • No Analysis
                    • Erroneous Analysis
                    • No Sensitivity Analysis
                    • Ignoring Errors in Input
                    • Improper Treatment of Outliers
                    • Assuming No Change in the Future
                    • Ignoring Variability
                    • Too Complex Analysis
                    • Improper Presentation of Results
                    • Ignoring Social Aspects
                    • Omitting Assumptions and Limitations




22.9 Benchmarking Lessons
Every build needs to be measured. We should run the automated performance test suite
against every build and compare the results against previous results. Also, we should run the
performance test suite under controlled conditions from build to build. This typically means
measuring performance on "clean" test environments. Performance issues must be identified
as soon as possible to prevent further degradation.

Performance goals need to be enforced. If we decide to make performance a goal and a
measure of the quality criteria for release, the management team must decide to enforce the
goals. Establish incremental performance goals throughout the product development cycle.
All the members in the team should agree that a performance issue is not just a bug; it is a
software architectural problem.

Performance testing of Web services and applications is paramount to ensuring an excellent
customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides
Web server performance analysis; the tool can also assess Internet Server Application
Programming Interface (ISAPI) and Active Server Pages (ASP) applications.
Creating an automated test suite to measure performance is time-consuming and labor-
intensive. Therefore, it is important to define concrete performance goals. Without defined
performance goals or requirements, testers must guess, without a clear purpose, at how to
instrument tests to best measure various response times.


The performance tests should not be used to find functionality-type bugs. Design the
performance test suite to measure response times and not to identify bugs in the product.
Design the build verification test (BVT) suite to ensure that no new bugs are injected into the
build that would prevent the performance test suite from successfully completing.


The performance tests should be modified consistently. Significant changes to the
performance test suite skew or make obsolete all previous data. Therefore, keep the
performance test suite fairly static throughout the product development cycle. If the design or
requirements change and you must modify a test, perturb only one variable at a time for each
build.


Strive to achieve the majority of the performance goals early in the product development
cycle because:


              Most performance issues require architectural change.
              Performance is known to degrade slightly during the stabilization phase of the
               development cycle.

Achieving performance goals early also helps to ensure that the ship date is met because a
product rarely ships if it does not meet performance goals. You should reuse automated
performance tests. Automated performance tests can often be reused in many other
automated test suites. For example, incorporate the performance test suite into the stress
test suite to validate stress scenarios and to identify potential performance issues under
different stress conditions.

Tests are capturing secondary metrics when the instrumented tests have nothing to do with
measuring clear and established performance goals. Although secondary metrics look good
on wall charts and in reports, if the data is not going to be used in a meaningful way to make
improvements in the engineering cycle, it is probably wasted data. Ensure that you know
what you are measuring and why.

Testing for most applications will be automated. The tools used for testing would be those
specified in the requirement specification. The tools used for performance testing here are
LoadRunner 6.5 and WebLoad 4.5x.




23 Tools
23.1 LoadRunner 6.5
LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems.
LoadRunner enables you to test your system under controlled and peak load conditions. To
generate load, LoadRunner runs thousands of Virtual Users that are distributed over a
network. Using a minimum of hardware resources, these Virtual Users provide consistent,
repeatable and measurable load to exercise your client/server system just as real users
would. LoadRunner's in-depth reports and graphs provide the information that you need to
evaluate the performance of your client/server system.


23.2 WebLoad 4.5
Webload is a testing tool for testing the scalability, functionality and performance of Web-
based applications – both Internet and Intranet. It can measure the performance of your
application under any load conditions. Use WebLoad to test how well your web site will
perform under real-world conditions by combining performance, load and functional tests or
by running them individually.

WebLoad supports HTTP 1.0 and 1.1, including cookies, proxies, SSL, TLS, client certificates,
authentication, persistent connections and chunked transfer coding.

WebLoad generates load by creating virtual clients that emulate network traffic. You create
test scripts (called agendas) using JavaScript; these instruct the virtual clients about what to
do.

When WebLoad runs the test, it gathers results at a per-client, per-transaction and per-
instance level from the computers that are generating the load. WebLoad can also gather
information from the server's performance monitor. You can watch the results as they occur
(WebLoad displays them in graphs and tables in real time), and you can save and export the
results when the test is finished.




Performance Testing Tools - summary and comparison

This table lists several performance testing tools available on the market. For your convenience we
compared them based on cost and OS required.



 Tool Name              URL                         Cost              OS              Description

                                                                                      Load test tool
                                                                                      emphasizing ease-
                                                                                      of-use. Supports
                                                                                      all browsers and
                                                                                      web servers;
                                                                                      simulates up to
                                                                                      200 users per
                                                                                      playback machine
                                                                                      at various
                                                                                      connection
                                                                                      speeds; records
                                                                                      and allows viewing
                                                                                      of exact bytes
                                                                                      flowing between
                                                    Price ($)
                                                                                      browser and
                                                    per number
                                                                                      server. Modem
                                                    of virtual
                                                                                      simulation allows
 Web                    http://www.webperf          users:
                                                                      Windows NT,     each virtual user to
                                                    1400-100
 Performance            center.com/loadtesting.     2495-200
                                                                      Windows 2000,   be bandwidth
 Trainer                html                                          Linux Solaris   limited. Can
                                                    4995-300
                                                                                      automatically
                                                    7995-1000
                                                                                      handle variations
                                                    11995-
                                                                                      in session-specific
                                                    5000
                                                                                      items such as
                                                                                      cookies,
                                                                                      usernames,
                                                                                      passwords, and
                                                                                      any other
                                                                                      parameter to
                                                                                      simulate multiple
                                                                                      virtual users.
                                                                                      Notes:
                                                                                      downloadable, will
                                                                                      emulate 25 users,
                                                                                      and will expire in 2
                                                                                      weeks (may be
                                                                                      extend)


     Performance Testing Process & Methodology        Proprietary & Confidential
     - 152 -
                                                                                Mercury's
                                                                                load/stress testing
                                                                                tool; includes
                                                                                record/playback
                                                                                capabilities;
                                                                                integrated
                                                                                spreadsheet
                                                                                parameterizes
                                                                                recorded input to
                                                                                exercise
                                              Price ($)
                                                                                application with a
                                              per                 SunOS,
                                                                                wide variety of
                                              number of           HP-UX,
                                                                                data. 'Scenario
Astra                http://www.astratryand   virtual             IBM AIX,
                                                                                Builder' visually
LoadTest             buy.com                  users:              NCR,
                                                                                combines virtual
                                              9995-50             Windows NT,
                                                                                users and host
                                              17995-100           WIN2000
                                                                                machines for tests
                                              29995-250
                                                                                representing real
                                                                                user traffic.
                                                                                'Content Check'
                                                                                checks for failures
                                                                                under heavy load;
                                                                                Real-time monitors
                                                                                and analysis
                                                                                Notes:
                                                                                downloadable,
                                                                                evaluation version

                                                                                E-commerce load
                                                                                testing tool from
                                                                                Client/Server
                                                                                Solutions, Inc.
                                                                                Includes
                                                                                record/playback,
                                                                                web form
                                                                                processing, user
                                                                                sessions, scripting,
Benchmark            http://www.benchmark                         Windows NT,   cookies, SSL. Also
                                              $
Factory              factory.com                                  Windows2000   includes pre-
                                                                                developed industry
                                                                                standard
                                                                                benchmarks such
                                                                                as AS3AP, Set-
                                                                                Query, Wisconsin,
                                                                                WebStone, and
                                                                                others. Includes
                                                                                optimized
                                                                                database drivers


  Performance Testing Process & Methodology       Proprietary & Confidential
  - 153 -
                                                                                       for vendor-neutral
                                                                                       comparisons - MS
                                                                                       SQL Server,
                                                                                       Oracle 7 and 8,
                                                                                       Sybase System
                                                                                       11, ODBC, IBM's
                                                                                       DB2 CLI, Informix.
                                                                                       Notes:
                                                                                       downloadable (?),
                                                                                       after submitting
                                                                                       information A
                                                                                       Page with
                                                                                       suggestion to
                                                                                       apply for next infos
                                                                                       to closest dealers
                                                                                       appeared

                                                                                       Supports recording
                                                                                       of SSL sessions,
                                                                                       cookies, proxies,
                                                                                       password
                                                                        Win95/98       authentication,
Radview's                                                               Windows NT,    dynamic HTML;
                      http://www.radview.com        $
                                                                        Windows 2000   multiple platforms
WebLoad
                                                                        Solaris, AIX   Notes:
                                                                                       downloadable,
                                                                                       Evaluation version
                                                                                       does not support
                                                                                       SSl

                                                                                       Microsoft stress
                                                                                       test tool created by
                                                                                       Microsoft's Internal
                                                                                       Tools Group (ITG)
                                                                                       and subsequently
                                                                                       made available for
                                                                                       external use.
MS Web                                                                                 Includes
                      http://homer.rte.microsoft.                       Windows NT,
Application                                         Free
                                                                        Windows2000
                                                                                       record/playback,
                      com                                                              script recording
Stress Test
                                                                                       from browser,
                                                                                       SSL, adjustable
                                                                                       delay between
                                                                                       requests
                                                                                       Notes: one of the
                                                                                       advanced tools in
                                                                                       the listing…



   Performance Testing Process & Methodology            Proprietary & Confidential
   - 154 -
Tool: Rational Suite Performance Studio, Rational SiteLoad
URL: http://www.rational.com/products
Price: $
Platform: Windows NT, Windows 2000, Unix
Description: Rational's client/server and web performance testing tool. 'LoadSmart Scheduling' capabilities allow complex usage scenarios and randomized transaction sequences; handles dynamic web pages.
Notes: request a CD only; not downloadable.

Tool: Forecast
URL: http://www.facilita.co.uk
Price: $
Platform: Unix
Description: Load testing tool from Facilita Software for web, client-server, network, and database systems.
Notes: not downloadable.

Tool: Zeus
URL: http://webperf.zeus.co.uk/intro.html
Price: Free
Platform: Unix
Description: Free web benchmarking/load testing tool available as source code; will compile on any UNIX platform.
Notes: unsupportable (?), broken download link.

Tool: E-Load
URL: http://www.rswsoftware.com/products/eload_index.shtml
Price: $
Platform: Win95/98, Windows NT
Description: Load test tool from RSW geared to testing web applications under load and testing scalability of E-commerce applications. For use in conjunction with test scripts from their e-Tester functional test tool. Allows on-the-fly changes and has real-time reporting capabilities.
Notes: downloadable, free CD request, evaluation copy.

Tool: HTTP-Load
URL: http://www.acme.com/software/http_load
Price: Free
Platform: Unix
Description: Free load test application to generate web server loads.
Notes: free and easy.

Tool: QALoad
URL: http://www.compuware.com/products/auto/releases/QALoad.htm
Price: $
Platform: Win95/NT (manager); Unix, Windows NT (load test player)
Description: Compuware's QALoad for load/stress testing of database, web, and char-based systems; works with such middleware as SQLnet, DBLib or CBLib, SQL Server, ODBC, Telnet, and Web.
Notes: free CD request.

Tool: SilkPerformer
URL: http://www.segue.com/html/s_solutions/s_performer/s_performer.htm
Price: $
Platform: Windows NT, Windows 2000
Description: Load and performance testing component of Segue's Silk web testing toolset.
Notes: no download.

Tool: WEBArt
URL: http://www.oclc.org/webart
Price: $
Platform: Windows 98, Windows NT 4.0, Windows 2000, SunOS/Solaris, AIX, Linux
Description: Tool for load testing of up to 100-200 simulated users; also includes functional and regression testing capabilities, and capture/playback and a scripting language. Evaluation copy available.
Notes: downloadable.

Tool: Webload
URL: http://www.ca.com/products/platinum/appdev/fe_iltps.htm
Price: $
Platform: AIX, Windows NT, Windows 95, Sun Solaris
Description: Final Exam WebLoad integration and pre-deployment testing ensures the reliability, performance, and scalability of Web applications. It generates and monitors load stress tests - which can be recorded during a Web session with any browser - and assesses Web application performance under user-defined variable system loads. Load scenarios can include unlimited numbers of virtual users on one or more load servers, as well as single users on multiple client workstations.
Notes: downloadable, 15-day evaluation period.

Tool: Microsoft WCAT load test tool
URL: http://msdn.microsoft.com/workshop/server/toolbox/wcat.asp
Price: Free
Platform: Windows NT, Windows 2000
Description: Web load test tool from Microsoft for load testing of MS IIS on NT.

Tool: Webspray
URL: http://www.redhillnetworks.com
Price: $199 ($99 with discount)
Platform: Windows 98, Windows NT 4.0, Windows 2000
Description: Load testing tool; includes link testing capabilities; can simulate up to 1,000 clients from a single IP address; also supports multiple IP addresses with or without aliases.
Notes: not downloadable.

Tool: WebSizr, WebCorder
URL: http://www.technovations.com/home.htm
Price: $
Platform: Win95(98), Windows NT, Windows 2000
Description: Load testing and capture/playback tools from Technovations. WebSizr load testing tool supports authentication, cookies, and redirects.
Notes: downloadable, 30-day evaluation period.




  23.3 Architecture Benchmarking
          Hardware Benchmarking - Hardware benchmarking is performed to size the
           application for the planned hardware platform. It is significantly different from a
           capacity planning exercise in that it is done after development and before
           deployment.

          Software Benchmarking - Defining the right placement and composition of software
           instances can help in vertical scalability of the system without the addition of hardware
           resources. This is achieved through software benchmark tests.




23.4 General Tests
What follows is a list of tests adaptable to assess the performance of most systems. The
methodologies below are generic, allowing one to use a wide range of tools to conduct the
assessments.
Methodology Definitions
    Result: provides information about what the test will accomplish.
    Purpose: explains the value and focus of the test, along with some simple
        background information that might be helpful during testing.
    Constraints: details any constraints and values that should not be exceeded during
         testing.
    Time estimate: a rough estimate of the amount of time that the test may take to
        complete.
    Type of workload: in order to properly achieve the goals of the test, each test
        requires a certain type of workload. This methodology specification provides
        information on the appropriate script of pages or transactions for the user.
    Methodology: a list of suggested steps to take in order to assess the system under
       test.
    What to look for: contains information on behaviors, issues and errors to pay
        attention to during and after the test.




24 Performance Metrics
The common metrics selected/used during performance testing are listed below:
    Response time
    Turnaround time: the time between the submission of a batch job and the
      completion of its output.
    Stretch factor: the ratio of the response time with concurrent users to that with a
       single user.
    Throughput: the rate (requests per unit of time) at which work is completed. Examples:
    Jobs per second
    Requests per second
    Millions of Instructions Per Second (MIPS)
    Millions of Floating Point Operations Per Second (MFLOPS)
    Packets Per Second (PPS)
    Bits per second (bps)
    Transactions Per Second (TPS)
      Capacity:
       Nominal capacity: the maximum achievable throughput under ideal workload conditions,
      e.g. bandwidth in bits per second. The response time at maximum throughput is usually too
      high to be acceptable.
      Usable capacity: the maximum throughput achievable without exceeding a pre-specified
      response-time limit.
      Efficiency: the ratio of usable capacity to nominal capacity. Alternatively, the ratio of the
      performance of an n-processor system to that of a one-processor system is its
      efficiency.
    Utilization: the fraction of time the resource is busy servicing requests. For memory,
      the average fraction of memory in use.

As tests are executed, metrics such as response times for transactions, HTTP requests per
second and throughput should be collected. It is also important to monitor and collect
statistics such as CPU utilization, memory, disk space and network usage on individual web,
application and database servers, and to make sure those numbers recede as the load decreases.
Cognizant has built custom monitoring tools to collect these statistics. Third-party monitoring
tools are also used based on the requirement.
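
To make the arithmetic behind a few of these metrics concrete, the short Python sketch below computes throughput, stretch factor and utilization from one hypothetical test run; the sample figures and variable names are invented for illustration and do not come from any particular project.

# Illustrative only: the sample figures below are made up for this example.
single_user_response_time = 0.8     # seconds, measured with one virtual user
loaded_response_time = 2.4          # seconds, measured at the target concurrency
completed_transactions = 18000      # transactions completed during the run
test_duration = 600.0               # seconds
cpu_busy_time = 420.0               # seconds the CPU was busy during the run

throughput = completed_transactions / test_duration                 # transactions per second (TPS)
stretch_factor = loaded_response_time / single_user_response_time   # response time under load vs. single user
utilization = cpu_busy_time / test_duration                         # fraction of time the resource was busy

print(f"Throughput     : {throughput:.1f} TPS")
print(f"Stretch factor : {stretch_factor:.2f}")
print(f"CPU utilization: {utilization:.0%}")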



24.1 Client Side Statistics
         Running Vusers
         Hits per Second
         Throughput
         HTTP Status Code
         HTTP responses per Second
         Pages downloaded per Second
         Transaction response time
         Page Component breakdown time
         Page Download time
         Component size Analysis
         Error Statistics
         Errors per Second
         Total Successful/Failed Transactions

24.2 Server Side Statistics
         System Resources - Processor Utilization, Memory and Disk Space
         Web Server Resources–Threads, Cache Hit Ratio
         Application Server Resources–Heap size, JDBC Connection Pool
         Database Server Resources–Wait Events, SQL Queries
         Transaction Profiling
         Code Block Analysis

24.3 Network Statistics
         Bandwidth Utilization
         Network delay time
         Network Segment delay time

24.4 Conclusion

Performance testing is an independent discipline and involves all the phases of the
mainstream testing lifecycle, i.e. strategy, planning, design, execution, analysis and reporting.
Without the rigor described in this paper, executing performance testing does not yield
anything more than finding a few more defects in the system. However, if executed systematically
with appropriate planning, performance testing can unearth issues that cannot be found
through mainstream testing. It is very typical of a project manager to be overtaken by
time and resource pressures, leading to not enough budget being allocated for performance
testing, the consequences of which could be disastrous to the final system.

There is, however, a flip side to the coin. Before testing the system against its
performance requirements, the system should have been architected and designed to
meet the required performance goals. If not, it may be too late in the software
development cycle to correct serious performance issues.

Web-enabled applications and infrastructures must be able to execute evolving business
processes with speed and precision while sustaining high volumes of changing and
unpredictable user audiences. Load testing gives the greatest line of defense against poor
performance and accommodates complementary strategies for performance management
and monitoring of a production environment. The discipline helps businesses succeed in
leveraging Web technologies to their best advantage, enabling new business opportunities,
lowering transaction costs and strengthening profitability. Fortunately, robust and viable
solutions exist to help fend off disasters that result from poor performance. Automated load
testing tools and services are available to meet the critical need of measuring and optimizing
complex and dynamic application and infrastructure performance. Once these solutions are
properly adopted and utilized, leveraging an ongoing, lifecycle-focused approach, businesses
can begin to take charge and leverage information technology assets to their competitive
advantage. By continuously testing and monitoring the performance of critical software
applications, business can confidently and proactively execute strategic corporate initiatives
for the benefit of shareholders and customers alike.




25 Load Testing
Load Testing is the creation of a simulated load on a real computer system by using virtual users
who submit work as real users would at real client workstations, thus testing the
system's ability to support such a workload.

Testing of critical web applications during development and before deployment should
include functional testing to confirm conformance to the specifications, performance testing to
check whether the application offers an acceptable response time, and load testing to see what
hardware or software configuration will be required to provide an acceptable response time and
handle the load that will be created by the real users of the system.

25.1 Why is load testing important ?
Load Testing increases the uptime of critical web applications by helping you spot
bottlenecks in the system under large user stress scenarios before they occur in a
production environment.

25.2 When should load testing be done?
Load testing should be done whenever the probable cost of the load test is less than the
cost of a failed application deployment.

Thus load testing is accomplished by stressing the real application under a simulated load
provided by virtual users.




26 Load Testing Process
26.1 System Analysis

This is the first step once the project decides on load testing for its system. Evaluating the
requirements and needs of a system prior to load testing provides more realistic test
conditions. For this, one should know all key performance goals and objectives, such as the
number of concurrent connections, hits per second, etc.

Another important part of analyzing the system is choosing the appropriate strategy for
testing the application: load testing, stress testing or capacity testing.

Load testing is used to test the application against a requested number of users. The
objective is to determine whether the site can sustain the requested number of users with
acceptable response times. Stress testing is load testing over extended periods
of time to validate an application's stability and reliability. Similarly, capacity testing is used to
determine the maximum number of concurrent users that an application can manage. Hence,
for businesses, capacity testing provides the benchmark for the maximum load of
concurrent users the site can sustain before the system fails.

Finally, the test tool that supports load testing should also be taken into consideration,
in particular its multithreading capabilities and its ability to create the required number of
virtual users with minimal resource consumption and a maximal virtual user count.

26.2 User Scripts
Once the analysis of the system is done, the next step is the creation of user
scripts. A script recorder can be used to capture all the business processes into test
scripts; these are more often referred to as virtual users or virtual user scripts. A virtual
user is nothing but an emulated real user who drives the real application as a client. All
the business processes should be recorded end to end, so that these transactions can be
broken down into individual actions and timed in order to measure the performance of each
business process.

26.3 Settings
Run-time settings define the way the scripts should be run in order to
accurately emulate real users. Settings can configure the number of concurrent
connections, the test run time, whether to follow HTTP redirects, etc. System response times
can also vary based on connection speed; hence, throttling bandwidth can emulate dial-up
connections at varying modem speeds (28.8 Kbps, 56.6 Kbps, T1 (1.54 Mbps), etc.).
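
As a minimal sketch of what such run-time settings might look like in a home-grown load driver (the parameter names and values are illustrative assumptions, not the options of any specific tool):

# Hypothetical run-time settings for a simple, home-grown load driver.
run_settings = {
    "concurrent_users": 50,          # virtual users running in parallel
    "test_duration_sec": 900,        # total run time of the scenario
    "follow_http_redirects": True,   # follow HTTP 3xx responses automatically
    "think_time_sec": 2.0,           # delay between requests per virtual user
    "bandwidth_limit_bps": 56600,    # throttle to emulate a 56.6 Kbps dial-up link
}

A commercial tool exposes the same ideas through its own scenario or run-time configuration screens.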



26.4 Performance Monitoring

Every component of the system needs monitoring: the clients, the network, the web
server, the application server, the database, etc. This helps in instantly
identifying the performance bottlenecks during load testing. If the tools support
real-time monitoring, testers will be able to view the application performance
at any time during the test.

Thus running the load test scenario and monitoring the performance would accelerate
the test process, thereby producing a more stable application.


26.5 Analyzing Results
The last but most important step in load testing is collecting and processing the data
to resolve performance bottlenecks. The reports generated can range from the number of
hits, the number of test clients and requests per second to socket errors, etc.
Analyzing the results will isolate bottlenecks and determine which changes are
needed to improve system performance. After these changes are made, the load test
scenarios must be rerun to verify the adjustments.

Load Testing with WAST
Web Application Stress (WAST) is a tool to simulate a large number of users with a relatively
small number of client machines. Performance data on a web application can be
gathered by stressing the website and measuring the maximum requests per second
that the web server can handle. The next step is to determine which resource prevents
the requests per second from going higher, such as CPU, memory, or backend
dependencies.
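
The underlying measurement idea can be illustrated with a short Python sketch that drives a URL from several threads and reports requests per second; this is only a rough stand-in for a tool such as WAST, and the target URL, worker count and duration are arbitrary example values.

import threading, time, urllib.request

TARGET_URL = "http://localhost:8080/"   # example target; replace with the system under test
WORKERS = 10                            # number of concurrent client threads
DURATION = 30                           # seconds to run the measurement

completed = 0
lock = threading.Lock()
stop_at = time.time() + DURATION

def worker():
    global completed
    while time.time() < stop_at:
        try:
            urllib.request.urlopen(TARGET_URL, timeout=5).read()
            with lock:
                completed += 1
        except OSError:
            pass  # count only successful requests

threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Requests per second: {completed / DURATION:.1f}")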




26.6 Conclusion

Load testing is the measure of an entire Web application's ability to sustain a number
of simultaneous users and transactions, while maintaining adequate response times. It
is the only way to accurately test the end-to-end performance of a Web site prior to
going live.

Two common methods for implementing this load testing process are manual and
automated testing.
Manual testing would involve


         Coordinating the operations of users
         Measuring response times
         Repeating tests in a consistent way
         Comparing results


As load testing is iterative in nature, performance problems must be identified so
that the system can be tuned and retested to check for bottlenecks. For this reason,
manual testing is not a very practical option.

Today, automated load testing is the preferred choice for load testing a Web
application. The testing tools typically use three major components to execute a test:

         A console, which organizes, drives and manages the load
         Virtual users, performing a business process on a client application
         Load servers, which are used to run the virtual users

With automated load testing tools, tests can be easily rerun any number of times and
the results can be reported automatically. In this way, automated testing tools provide
a more cost-effective and efficient solution than their manual counterparts. Plus, they
minimize the risk of human error during testing.




27 Stress Testing
27.1 Introduction to Stress Testing

Typical testing is accomplished through reviews (of product requirements, software functional
requirements, software designs, code, test plans, etc.), unit testing, system testing (also
known as functional testing), expert user testing (like beta testing but in-house), smoke tests,
etc. All these 'testing' activities are important and each plays an essential role in the overall
effort, but none of them specifically looks for problems like memory and resource
management. Further, these testing activities do little to quantify the robustness of the
application or determine what may happen under abnormal circumstances. We try to fill this
gap in testing by using stress testing.

Stress testing can imply many different types of testing depending upon the audience. Even
in literature on software testing, stress testing is often confused with load testing and/or
volume testing. For our purposes, we define stress testing as performing random
operational sequences at larger than normal volumes, at faster than normal speeds
and for longer than normal periods of time as a method to accelerate the rate of finding
defects and verify the robustness of our product.

Stress testing in its simplest form is any test that repeats a set of actions over and over with
the purpose of "breaking the product". The system is put through its paces to find where it
may fail. As a first step, you can take a common set of actions for your system and keep
repeating them in an attempt to break the system. Adding some randomization to these
steps will help find more defects. How long can your application stay functioning doing this
operation repeatedly? To help you reproduce your failures, one of the most important things
to remember is to log everything as you proceed. You need to know exactly what was
happening when the system failed. Did the system lock up after 100 attempts or 100,000
attempts? [1]

Note that there are many other types of testing which have not been mentioned above, for
example risk-based testing, random testing, security testing, etc. We have found, and it
seems they agree, that it is best to review what needs to be tested, pick the multiple testing types
that will provide the best coverage for the product to be tested, and then master these testing
types, rather than trying to implement every testing type.

Some of the defects that we have been able to catch with stress testing that have not been
found in any other way are memory leaks, deadlocks, software asserts, and configuration
conflicts. For more details about these types of defects or how we were able to detect them,
refer to the section 'Typical Defects Found by Stress Testing'.

Table 1 provides a summary of some of the strengths and weaknesses that we have found
with stress testing.




                                          Table 1
                         Stress Testing Strengths and Weaknesses

Strength: Finds defects that no other type of test would find.
Weakness: Not a real-world situation.

Strength: Using randomization increases coverage.
Weakness: Defects are not always reproducible.

Strength: Tests the robustness of the application.
Weakness: One sequence of operations may catch a problem right away, while another sequence may never find it.

Strength: Helpful at finding memory leaks, deadlocks, software asserts, and configuration conflicts.
Weakness: Does not test the correctness of the system's response to user input.



27.2 Background to Automated Stress Testing

Stress testing can be done manually - which is often referred to as "monkey" testing. In this
kind of stress testing, the tester would use the application "aimlessly" like a monkey - poking
buttons, turning knobs, "banging" on the keyboard etc., in order to find defects. One of the
problems with "monkey" testing is reproducibility. In this kind of testing, where the tester uses
no guide or script and no log is recorded, it's often impossible to repeat the steps executed
before a problem occurred. Attempts have been made to use keyboard spyware, video
recorders and the like to capture user interactions, with varying (often poor) levels of success.

Our applications are required to operate for long periods of time with no significant loss of
performance or reliability. We have found that stress testing of a software application helps
in assessing and increasing the robustness of our applications, and it has become a required
activity before every software release. Performing stress testing manually is not feasible, and
repeating the test for every software release is almost impossible, so this is a clear example
of an area that benefits from automation: you get a return on your investment quickly, and it
will provide you with more than just a mirror of your manual test suite.

Previously, we had attempted to stress test our applications using manual techniques and
found that they were lacking in several respects. Some of the weaknesses of manual
stress testing we found were:
    1. Manual techniques cannot provide the kind of intense simulation of maximum user
         interaction over time. Humans cannot keep the rate of interaction high enough
         for long enough.
    2. Manual testing does not provide the breadth of test coverage of the product
         features/commands that is needed. People tend to do the same things in the same
         way over and over, so some configuration transitions do not get tested.
    3. Manual testing generally does not allow for repeatability of command sequences, so
         reproducing failures is nearly impossible.
    4. Manual testing does not perform automatic recording of discrete values with each
         command sequence for tracking memory utilization over time - critical for detecting
         memory leaks.


With automated stress testing, the stress test is performed under computer control. The
stress test tool is implemented to determine the application's configuration, to execute all
valid command sequences in a random order, and to perform data logging. Since the stress
test is automated, it becomes easy to execute multiple stress tests simultaneously across
more than one product at the same time.

Depending on how the stress inputs are configured, stress testing can do both 'positive' and
'negative' testing. Positive testing is when only valid parameters are provided to the device
under test, whereas negative testing provides both valid and invalid parameters to the device
as a way of trying to break the system under abnormal circumstances. For example, if a valid
input is in seconds, positive testing would test 0 to 59 and negative testing would try -1, 60,
etc.

Even though there are clear advantages to automated stress testing, it still has its
disadvantages. For example, we have found that each time the product application changes
we most likely need to change the stress tool (or, more commonly, commands need to be
added to or deleted from the input command set). Also, if the input command set changes,
then the output command sequence also changes, given the pseudo-randomization.

Table 2 provides a summary of some of these advantages and disadvantages that we have
found with automated stress testing.

                                      Table 2
              Automated Stress Testing Advantages and Disadvantages

Advantage: Automated stress testing is performed under computer control.
Disadvantage: Requires capital equipment and development of a stress test tool.

Advantage: Capability to test all product application command sequences.
Disadvantage: Requires maintenance of the tool as the product application changes.

Advantage: Multiple product applications can be supported by one stress tool.
Disadvantage: Reproducible stress runs must use the same input command set.

Advantage: Uses randomization to increase coverage; tests vary with new seed values.
Disadvantage: Defects are not always reproducible, even with the same seed value.

Advantage: Repeatability of commands and parameters helps reproduce problems or verify that existing problems have been resolved.
Disadvantage: Requires test application information to be kept and maintained.

Advantage: Informative log files facilitate investigation of problems.
Disadvantage: May take a long time to execute.


In summary, automated stress testing overcomes the major disadvantages of manual
stress testing and finds defects that no other testing type can find. Automated stress
testing exercises the various features of the system at a rate exceeding that at which
actual end-users can be expected to exercise them, and for durations of time that exceed
typical use. The automated stress test randomizes the order in which the product features are
accessed. In this way, non-typical sequences of user interaction are tested with the
system in an attempt to find latent defects not detectable with other techniques.

To take advantage of automated stress testing, our challenge then was to create an
automated stress test tool that would:
    1. Simulate user interaction for long periods of time (since it is computer controlled we
        can exercise the product more than a user can).
    2. Provide as much randomization of command sequences to the product as possible to
        improve test coverage over the entire set of possible features/commands.
    3. Continuously log the sequence of events so that issues can be reliably reproduced
        after a system failure.
    4. Record the memory in use over time to allow memory management analysis.
    5. Stress the resource and memory management features of the system.




27.3 Automated Stress Testing Implementation

Automated stress testing implementations will be different depending on the interface to the
product application. The types of interfaces available to the product drive the design of the
automated stress test tool. The interfaces fall into two main categories:

     1)           Programmable Interfaces: Interfaces like command prompts, RS-232,
          Ethernet, General Purpose Interface Bus (GPIB), Universal Serial Bus (USB), etc.
          that accept strings representing command functions without regard to context or the
          current state of the device.

     2)            Graphical User Interfaces (GUIs): Interfaces that use the Windows model
          to allow the user direct control over the device; individual windows and controls may
          or may not be visible and/or active depending on the state of the device.




27.4 Programmable Interfaces
These interfaces have allowed users to set up, control, and retrieve data in a variety of
application areas like manufacturing, research and development, and service. To meet the
needs of these customers, the products provide programmable interfaces, which generally
support a large number of commands (1000+), and are required to operate for long periods of
time, for example, on a manufacturing line where the product is used 24 hours a day, 7 days
a week. Testing all possible combinations of commands on these products is practically
impossible using manual testing methods.

Programmable interface stress testing is performed by randomly selecting from a list of
individual commands and then sending these commands to the device under test (DUT)
through the interface. If a command has parameters, then the parameters are also
enumerated by randomly generating a unique command parameter. By using a pseudo-
random number generator, each unique seed value will create the same sequence of
commands with the same parameters each time the stress test is executed. Each command
is also written to a log file, which can then be used later to reproduce any defects that were
uncovered.
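
A minimal Python sketch of this seeded, logged command generation is shown below; the command templates, parameter ranges and send_to_dut function are hypothetical placeholders, and a real tool would write to the actual programmable interface (RS-232, GPIB, Ethernet, etc.).

import random

# Hypothetical command set: (command template, parameter range) pairs.
COMMANDS = [
    ("SET:TIMEOUT {}", range(0, 60)),
    ("MEAS:VOLT {}", range(1, 11)),
    ("*RST", None),
]

def send_to_dut(command):
    # Placeholder for the real interface write (serial, GPIB, socket, ...).
    print("DUT <-", command)

def stress_run(seed, iterations, log_path="stress.log"):
    rng = random.Random(seed)            # same seed -> same command sequence
    with open(log_path, "w") as log:
        for _ in range(iterations):
            template, params = rng.choice(COMMANDS)
            command = template.format(rng.choice(params)) if params else template
            log.write(command + "\n")    # log everything for later reproduction
            send_to_dut(command)

stress_run(seed=42, iterations=500)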


For additional complexity, other variations of the automated stress test can be performed.
For example, the stress test can vary the rate at which commands are sent to the interface,
the stress test can send the commands across multiple interfaces simultaneously (if the
product supports it), or the stress test can send multiple commands at the same time.



27.5 Graphical User Interfaces
In recent years, Graphical User Interfaces have become dominant and it became clear that
we needed a means to test these user interfaces analogous to that which is used for
programmable interfaces. However, since accessing the GUI is not as simple as sending
streams of command line input to the product application, a new approach was needed. It is
necessary to store not only the object recognition method for the control, but also information
about its parent window and other information like its expected state, certain property values,
etc. An example would be a 'HELP' menu item. There may be multiple windows open with a
'HELP' menu item, so it is not sufficient to simply store "click the 'HELP' menu item"; you
have to store "click the 'HELP' menu item for the particular window". With this information it
is possible to uniquely define all the possible product application operations (i.e. each control
can be uniquely identified).

Additionally, the flow of each operation can be important. Many controls are not visible until
several levels of modal windows have been opened and/or closed, for example, a typical
confirm file overwrite dialog box for a 'File->Save As...' filename operation is not available
until the following sequence has been executed:
     1. Set Context to the Main Window
     2. Select 'File->Save As...'
     3. Select Target Directory from the tree control
     4. Type a valid filename into the edit box
     5. Click the 'SAVE' button
     6. If the filename already exists, either confirm the file overwrite by clicking the 'OK'
         button in the confirmation dialog or click the cancel button.

In this case, you need to group these six operations together as one "big" operation in order
to correctly exercise this particular 'OK' button.
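
One simple way to hold such a grouped operation inside a stress tool is as an ordered list of (window, control, action, argument) steps that are always replayed together. The sketch below only illustrates that idea; the window and control names are hypothetical and no specific GUI automation library is implied.

# Hypothetical composite operation: each step is (window, control, action, argument).
SAVE_AS_OPERATION = [
    ("MainWindow",       "MenuBar",       "click",  "File->Save As..."),
    ("SaveAsDialog",     "DirectoryTree", "select", "C:/data"),
    ("SaveAsDialog",     "FileNameEdit",  "type",   "results.txt"),
    ("SaveAsDialog",     "SaveButton",    "click",  None),
    ("ConfirmOverwrite", "OkButton",      "click",  None),   # only shown if the file exists
]

def run_operation(steps, perform):
    """Replay every step of a grouped operation through a GUI driver callback."""
    for window, control, action, argument in steps:
        perform(window, control, action, argument)

# Example: log each step instead of driving a real GUI.
run_operation(SAVE_AS_OPERATION, lambda *step: print("GUI step:", step))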



27.6 Data Flow Diagram
A stress test tool can have many different interactions and be implemented in many different
ways. Figure 1 shows a block diagram, which can be used to illustrate some of the stress
test tool interactions. The main interactions for the stress test tool include an input file and
Device Under Test (DUT). The input file is used here to provide the stress test tool with a list
of all the commands and interactions needed to test the DUT.




Figure 1: Stress Test Tool Interactions (block diagram: the Input File feeds the Stress Test
Tool, which drives the DUT; a System Resource Monitor observes the DUT, and the tool logs
the command sequence and the test results).

Additionally, data logging (commands and test results) and system resource monitoring are
very beneficial in helping determine what the DUT was trying to do before it crashed and how
well it was able to manage its system resources.

The basic flow control of an automated stress test tool is to set up the DUT in a known state
and then to loop continuously, selecting a new random interaction, trying to execute the
interaction, and logging the results. This loop continues until a set number of interactions
have occurred or the DUT crashes.
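
A bare-bones version of that control loop might look like the following Python sketch. The setup_dut, choose_interaction, execute and record_memory_in_use callbacks are stand-ins for tool- and product-specific code, not part of any real framework.

import random
import time

def stress_loop(seed, max_interactions, setup_dut, choose_interaction, execute, record_memory_in_use):
    """Run random interactions against the DUT until a limit is hit or it stops responding."""
    rng = random.Random(seed)                    # same seed -> same interaction sequence
    setup_dut()                                  # put the DUT into a known state first
    with open("stress_results.log", "w") as log:
        for count in range(max_interactions):
            interaction = choose_interaction(rng)
            log.write(f"{count}: {interaction}\n")   # log before executing, so a crash is traceable
            ok = execute(interaction)
            log.write(f"{count}: memory={record_memory_in_use()} ok={ok}\n")
            if not ok:                           # DUT crashed or stopped responding
                break
            time.sleep(0.01)                     # pacing between interactions

# Trivial stand-ins so the sketch runs on its own; a real tool would drive the product here.
stress_loop(
    seed=7,
    max_interactions=100,
    setup_dut=lambda: None,
    choose_interaction=lambda rng: rng.choice(["open", "save", "close", "help"]),
    execute=lambda interaction: True,
    record_memory_in_use=lambda: 0,
)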




27.7 Techniques Used to Isolate Defects
Depending on the type of defect to be isolated, two different techniques are used:
   1. System crashes – (asserts and the like) do not try to run the full stress test
      from the beginning, unless it only takes a few minutes to produce the defect.
       Instead, back up and run the stress test from the last seed (for us this is
      normally just the last 500 commands). If the defect still occurs, then continue
      to reduce the number of commands in the playback until the defect is isolated.
   2. Diminishing resource issues – (memory leaks and the like) are usually limited
      to a single subsystem. To isolate the subsystem, start removing subsystems
      from the database and re-run the stress test while monitoring the system
      resources. Continue this process until the subsystem causing the reduction in
      resources is identified. This technique is most effective after full integration of
      multiple subsystems (or, modules) has been achieved.

Some defects are just hard to reproduce – even with the same sequence of commands.
These defects should still be logged into the defect tracking system. As the defect re-occurs,
continue to add additional data to the defect description. Eventually, over
time, you will be able to detect a pattern, isolate the root cause and resolve the defect.

Some defects just seem to be un-reproducible, especially those that reside around
page faults, but overall, we know that the robustness of our applications increases
proportionally with the amount of time that the stress test will run uninterrupted.




28 Test Case Coverage


28.1 Test Coverage

Test Coverage is an important measure of quality for software systems. Test
Coverage analysis is the process of:

         Finding areas of a program not exercised by a set of test cases,
         Creating additional test cases to increase coverage, and
         Determining a quantitative measure of code coverage, which is an indirect
          measure of quality.

Also an optional aspect of test coverage analysis is:

         Identifying redundant test cases that do not increase coverage.

A test coverage analyzer automates this process.

Test coverage analysis is sometimes called code coverage analysis. The two terms are
synonymous. The academic world more often uses the term "test coverage" while
practitioners more often use "code coverage".

Test coverage analysis can be used to assure quality of the set of tests, and not the
quality of the actual product. Coverage analysis requires access to test program
source code and often requires recompiling it with a special command. Code
coverage analysis is a structural testing technique (white box testing). Structural
testing compares test program behavior against the apparent intention of the source
code. This contrasts with functional testing (black-box testing), which compares test
program behavior against a requirements specification. Structural testing examines
how the program works, taking into account possible pitfalls in the structure and
logic. Functional testing examines what the program accomplishes, without regard to
how it works internally.



28.2 Test coverage measures


A large variety of coverage measures exist. Here is a description of some fundamental
measures and their strengths and weaknesses.


28.3 Procedure-Level Test Coverage
Probably the most basic form of test coverage is to measure what procedures were
and were not executed during the test suite. This simple statistic is typically available
from execution profiling tools, whose job is really to measure performance
bottlenecks. If the execution time in some procedures is zero, you need to write new
tests that hit those procedures. But this measure of test coverage is so coarse-grained
it's not very practical.


28.4 Line-Level Test Coverage
The basic measure of a dedicated test coverage tool is tracking which lines of code
are executed, and which are not. This result is often presented in a summary at the
procedure, file, or project level giving a percentage of the code that was executed. A
large project that achieved 90% code coverage might be considered a well-tested
product.
Typically the line coverage information is also presented at the source code level,
allowing you to see exactly which lines of code were executed and which were not.
This, of course, is often the key to writing more tests that will increase coverage: By
studying the unexecuted code, you can see exactly what functionality has not been
tested.


28.5 Condition Coverage and Other Measures
It's easy to find cases where line coverage doesn't really tell the whole story. For
example, consider a block of code that is skipped under certain conditions (e.g., a
statement in an if clause). If that code is shown as executed, you don't know whether
you have tested the case when it is skipped. You need condition coverage to know.
There are many other test coverage measures. However, most available code
coverage tools do not provide much beyond basic line coverage. In theory, you
should have more. But in practice, if you achieve 95+% line coverage and still have
time and budget to commit to further testing improvements, it is an enviable
commitment to quality!
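
A small, self-contained example of that gap (the function and test values are invented for illustration): the single test below executes every line of free_shipping, so line coverage reports 100%, yet the path where the compound condition is false is never exercised.

def free_shipping(order_total, is_member):
    fee = 10
    if is_member and order_total > 50:   # compound condition
        fee = 0
    return order_total + fee

# This one test executes every line of free_shipping (100% line coverage) ...
assert free_shipping(100, True) == 100
# ... but the branch where the condition is false is never taken, and the
# individual sub-conditions (is_member false, order_total <= 50) are untested.
# Condition/branch coverage would expose that gap; line coverage does not.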


28.6 How Test Coverage Tools Work


To monitor execution, test coverage tools generally "instrument" the program by
inserting "probes". How and when this instrumentation phase happens can vary
greatly between different products.
Adding probes to the program will make it bigger and slower. If the test suite is large
and time-consuming, the performance factor may be significant.



28.6.1              Source-Level Instrumentation
Some products add probes at the source level. They analyze the source code as
written, and add additional code (such as calls to a code coverage runtime) that will
record where the program reached.
Such a tool may not actually generate new source files with the additional code. Some
products, for example, intercept the compiler after parsing but before code generation
to insert the changes they need.
One drawback of this technique is the need to modify the build process. A separate version,
namely a code coverage version, needs to be maintained in addition to the other versions,
such as debug (unoptimized) and release (optimized).
Proponents claim this technique can provide higher levels of code coverage
measurement (condition coverage, etc.) than other forms of instrumentation. This
type of instrumentation is dependent on programming language -- the provider of the
tool must explicitly choose which languages to support. But it can be somewhat
independent of operating environment (processor, OS, or virtual machine).


28.6.2              Executable Instrumentation
Probes can also be added to a completed executable file. The tool will analyze the
existing executable, and then create a new, instrumented one.
This type of instrumentation is independent of programming language. However, it is
dependent on operating environment -- the provider of the tool must explicitly choose
which processors or virtual machines to support.


28.6.3              Runtime Instrumentation
Probes need not be added until the program is actually run. The probes exist only in
the in-memory copy of the executable file; the file itself is not modified. The same
executable file used for product release testing should be used for code coverage.
Because the file is not modified in any way, just executing it will not automatically
start code coverage (as it would with the other methods of instrumentation). Instead,
the code coverage tool must start program execution directly or indirectly.
Alternatively, the code coverage tool will add a tiny bit of instrumentation to the
executable. This new code will wake up and connect to a waiting coverage tool
whenever the program executes. This added code does not affect the size or
performance of the executable, and does nothing if the coverage tool is not waiting.
Like Executable Instrumentation, Runtime Instrumentation is independent of
programming language but dependent on operating environment.



28.7 Test Coverage Tools at a Glance

There are lots of tools available for measuring Test coverage.
Company               Product            OS            Language
Bullseye              BullseyeCoverage   Win32, Unix   C/C++
CompuWare             DevPartner         Win32         C/C++, Java, VB
Rational (IBM)        PurifyPlus         Win32, Unix   C/C++, Java, VB
Software Research     TCAT               Win32, Unix   C/C++, Java
Testwell              CTC++              Win32, Unix   C/C++
Paterson Technology   LiveCoverage       Win32         C/C++, VB


Coverage analysis is a structural testing technique that helps eliminate gaps in a test
suite. It helps most in the absence of a detailed, up-to-date requirements specification.
Each project must choose a minimum percent coverage for release criteria based on
available testing resources and the importance of preventing post-release failures.
Clearly, safety-critical software should have a high goal. We must set a higher
coverage goal for unit testing than for system testing since a failure in lower-level
code may affect multiple high-level callers.




29 Test Case points-TCP

29.1 What is a Test Case Point (TCP)

A Test Case Point (TCP) is a measure of the complexity of an application. It is also used
as an estimation technique to calculate the size and effort of a testing project.
TCP counting ranks the requirements, and the test cases to be written for those
requirements, as simple, average or complex, and quantifies those rankings into a single
measure of complexity.

This courseware gives an overview of Test Case Points; it does not elaborate on using TCP
as an estimation technique.


29.2 Calculating the Test Case Points

Based on the Functional Requirement Document (FRD), the application is divided into
modules; for a web application, for example, 'Login and Authentication' could be one
module. Each module is then ranked as Simple, Average or Complex based on the number and
complexity of its requirements. A Simple requirement is one that can be given a value on
a scale of 1 to 3, an Average requirement is ranked between 4 and 7, and a Complex
requirement is ranked between 8 and 10.

The number of requirements falling in each band is recorded per module:

                                Complexity of Requirements
  Requirement Classification   Simple (1-3)   Average (4-7)   Complex (8-10)   Total
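
A minimal sketch of this ranking, using the bands just described (1-3 Simple, 4-7 Average, 8-10 Complex); the function name is illustrative only:

    def classify_requirement(rank):
        # rank is the value from 1 to 10 assigned to the requirement.
        if rank <= 3:
            return "Simple"
        if rank <= 7:
            return "Average"
        return "Complex"

    print(classify_requirement(2))   # Simple
    print(classify_requirement(9))   # Complex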

The test cases for a particular requirement are classified as Simple, Average or Complex
based on the following four factors:

   - Test case complexity for that requirement, OR
   - Interface with other test cases, OR
   - Number of verification points, OR
   - Baseline test data

Refer to the test case classification table given below.

29.2.1.1 Test Case Classification

  Complexity Type   Complexity of Test Case   Interface with other Test Cases   Number of Verification Points   Baseline Test Data
  Simple            < 2 transactions          0                                 < 2                             Not Required
  Average           3-6 transactions          < 3                               3-8                             Required
  Complex           > 6 transactions          > 3                               > 8                             Required

A sample guideline for classification of test cases is given below.

   - Any verification point containing a calculation is considered 'Complex'.
   - Any verification point that interfaces with or interacts with another application
     is classified as 'Complex'.
   - Any verification point consisting of report verification is considered 'Complex'.
   - A verification point comprising search functionality may be classified as 'Complex'
     or 'Average' depending on its complexity.

For each project, the complexity of test cases needs to be identified in a similar
manner; a minimal sketch of such a classification is given below.
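
The sketch below applies the thresholds from the classification table above; because the four factors are alternatives (OR), a test case takes the highest classification that any single factor triggers. The function and parameter names are illustrative, and the exact rules should be adapted per project as noted above.

    def classify_test_case(transactions, interfacing_test_cases, verification_points):
        # Thresholds taken from the Test Case Classification table; any one
        # factor is enough to push the test case into a higher band.
        if transactions > 6 or interfacing_test_cases > 3 or verification_points > 8:
            return "Complex"
        if transactions >= 3 or interfacing_test_cases >= 1 or verification_points >= 3:
            return "Average"
        return "Simple"   # < 2 transactions, no interfacing test cases, < 2 verification points

    print(classify_test_case(transactions=4, interfacing_test_cases=2, verification_points=5))   # Average
    print(classify_test_case(transactions=8, interfacing_test_cases=0, verification_points=1))   # Complex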

Based on the test case type, an adjustment factor is assigned to simple, average and
complex test cases. This adjustment factor was calculated after a thorough study and
analysis of many testing projects.
The adjustment factor in the table below is pre-determined and should not be changed
from project to project.




  Test Case Type           Complexity Weight   Adjustment Factor   Number                                        Result
  Simple                   1                   2 (A)               No. of Simple requirements in the project     Number x Adjustment Factor A (R1)
  Average                  2                   4 (B)               No. of Average requirements in the project    Number x Adjustment Factor B (R2)
  Complex                  3                   8 (C)               No. of Complex requirements in the project    Number x Adjustment Factor C (R3)
  Total Test Case Points                                                                                         R1 + R2 + R3

From the break-up of the complexity of requirements done in the first step, we get the
number of simple, average and complex test case types. By multiplying each number by its
corresponding adjustment factor, we get the simple, average and complex test case points.
Summing the three results gives the count of Total Test Case Points, as the worked
example below shows.
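
A worked example with invented counts (30 simple, 12 average and 5 complex requirements) using the pre-determined adjustment factors A = 2, B = 4 and C = 8 from the table above:

    ADJUSTMENT_FACTOR = {"Simple": 2, "Average": 4, "Complex": 8}      # A, B, C from the table

    requirement_counts = {"Simple": 30, "Average": 12, "Complex": 5}   # example project only

    r1 = requirement_counts["Simple"] * ADJUSTMENT_FACTOR["Simple"]      # 30 * 2 = 60
    r2 = requirement_counts["Average"] * ADJUSTMENT_FACTOR["Average"]    # 12 * 4 = 48
    r3 = requirement_counts["Complex"] * ADJUSTMENT_FACTOR["Complex"]    #  5 * 8 = 40

    total_test_case_points = r1 + r2 + r3
    print("Total Test Case Points =", total_test_case_points)            # 148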

29.3 Chapter Summary

This chapter covered the basics of:

   - What test coverage is
   - Test coverage measures
   - How test coverage tools work
   - A list of test coverage tools
   - What a TCP is and how to calculate the Test Case Points for an application



