
INTRODUCTION

What is Software Quality Assurance?

Software Quality Assurance is also known by the technical term Verification and Validation. Put simply, Verification asks "Are we building the system right?" while Validation asks "Are we building the right system?". SQA is an umbrella activity that is applied throughout the software process. Quality, as the American Heritage Dictionary defines it, is "a characteristic or attribute of something...".

What is Software Testing?

In general, testing is finding out how well something works. In terms of human beings, testing tells what level of knowledge or skill has been acquired. In computer hardware and software development, testing is used at key checkpoints in the overall process to determine whether objectives are being met. For example, in software development, product objectives are sometimes tested by product user representatives. When the design is complete, coding follows, and the finished code is then tested at the unit or module level by each programmer; at the component level by the group of programmers involved; and at the system level when all components are combined. At early or late stages, a product or service may also be tested for usability.

At the system level, the manufacturer or an independent reviewer may subject a product or service to one or more performance tests, possibly using one or more benchmarks. Whether viewed as a product, a service, or both, a Web site can also be tested in various ways: by observing user experiences, by asking questions of users, or by timing the flow through specific usage scenarios.

The primary role of software testing is twofold:
• Determine whether the system meets specifications (the producer's view).
• Determine whether the system meets business and user needs (the customer's view).

Testing encompasses three concepts:
• Demonstration of the validity of the software at each stage in the system development life cycle.
• Determination of the validity of the final system with respect to user needs and requirements.
• Examination of the behavior of a system by executing the system on sample test data.

Goal of Testing

The primary goal of testing is to uncover requirement, design, or coding errors in programs. Let us look at some fundamental concepts of software testing. "Verification" is the process of determining whether or not the products of a given phase of software development fulfill the specifications established during the previous phase. Verification activities include proving, testing, and reviews. "Validation" is the process of evaluating the software at the end of software development to ensure compliance with the software requirements. Testing is a common method of validation.

For high reliability we need to perform both activities; together they are called the "V & V" activities. Software testing is an art. A good tester can always be a good programmer, but a good programmer need not be a good tester.

Test Objectives

The following test objectives are given by Glenford Myers:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.

Testing Principles

• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• The Pareto principle applies to testing.
• Testing should begin "in the small" and progress toward testing "in the large".
• Exhaustive testing is not possible.
• To be most effective, testing should be conducted by an independent third party.

Verification Strategies

• Requirements Reviews
• Design Reviews
• Code Walkthrough
• Code Inspection

Validation Strategies

The basic validation test strategies are:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Performance Testing
5. Alpha Testing
6. User Acceptance Testing (UAT)
7. Installation Testing
8. Beta Testing

Software Testing Life Cycle

1. Requirements stage
2. Test Plan
3. Test Design
4. Design Reviews
5. Code Reviews
6. Test Cases preparation

7. Test Execution
8. Test Reports
9. Bugs Reporting
10. Reworking on patches
11. Release to production

Requirements Stage

Normally in many companies, developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved in this stage, since a tester thinks from the user's side in a way a developer may not. A separate panel should be formed for each module, comprising a developer, a tester, and a user, and panel meetings should be scheduled in order to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".

Test Plan

Without a good plan, no work succeeds. The testing process likewise requires a good plan, and the test plan document is the most important document for bringing in a process-oriented approach. A test plan document should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:
• The total number of features to be tested.
• The testing approaches to be followed.
• The testing methodologies.
• The number of man-hours required.
• The resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc.

Test Design

Test design is done based on the requirements of the project, and must reflect whether manual or automated testing will be done. For automation testing, the different paths for testing are to be identified first. An end-to-end checklist has to be prepared covering all the features of the project. The test design is represented pictorially and involves various stages, which can be summarized as follows:
• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.
• Then the design is drawn.

Test design is the most critical stage, because it determines the test case preparation and thus the quality of the whole testing process.
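The module-and-path stages above can be sketched in code. The following is a minimal illustration, with invented module names, of deriving end-to-end paths from a module graph as candidates for such a checklist:

```python
# Hypothetical sketch: a test design as a module graph.
# Module names and connecting paths are invented for illustration.

modules = ["Login", "Search", "Cart", "Checkout"]

# Paths connecting the modules, identified during test design
paths = [
    ("Login", "Search"),
    ("Search", "Cart"),
    ("Cart", "Checkout"),
    ("Login", "Checkout"),  # e.g. a "buy again" shortcut
]

def end_to_end_paths(start, goal, edges):
    """Enumerate simple paths from start to goal - checklist candidates."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    results = []

    def walk(node, path):
        if node == goal:
            results.append(path)
            return
        for nxt in graph.get(node, []):
            if nxt not in path:  # avoid cycles
                walk(nxt, path + [nxt])

    walk(start, [start])
    return results

for p in end_to_end_paths("Login", "Checkout", paths):
    print(" -> ".join(p))
```

Each printed path becomes one entry in the end-to-end checklist covering the features of the project.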

Test Cases Preparation

Test cases should be prepared based on the following scenarios:
• Positive scenarios
• Negative scenarios
• Boundary conditions
• Real-world scenarios

Design Reviews

The software design is done in a systematic manner, for instance using the UML language. The tester can review the design and suggest ideas and any modifications needed.

Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it, with his own unit test cases prepared. Though the developer does the unit testing, a tester must also do it: a developer may overlook some of the minute mistakes in the code that a tester will find.

Test Execution and Bugs Reporting

Once unit testing is completed and the code is released to QA, functional testing is done. A top-level pass is done at the beginning of testing to find the top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround. The test reports should be documented properly, and the bugs have to be reported to the developer after the testing is completed.

Release to Production

Once the bugs are fixed, another release is given to QA with the modified changes, and regression testing is executed. Once QA assures the software, it is released to production; before release, another round of top-level testing is done. The testing process is iterative: once bugs are fixed, testing has to be done again. Thus the testing process is, in effect, unending.
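The four scenario types above can be illustrated with a small unit test suite. The function under test, validate_age (accepting ages 18 through 120), is purely hypothetical:

```python
import unittest

def validate_age(age):
    """Hypothetical function under test: accepts ages 18 through 120."""
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    return 18 <= age <= 120

class TestValidateAge(unittest.TestCase):
    # Positive scenario: a clearly valid value
    def test_valid_age(self):
        self.assertTrue(validate_age(30))

    # Negative scenarios: clearly invalid values
    def test_too_young(self):
        self.assertFalse(validate_age(10))

    def test_wrong_type(self):
        with self.assertRaises(TypeError):
            validate_age("thirty")

    # Boundary conditions: values at the edges of the valid range
    def test_boundaries(self):
        self.assertFalse(validate_age(17))
        self.assertTrue(validate_age(18))
        self.assertTrue(validate_age(120))
        self.assertFalse(validate_age(121))

if __name__ == "__main__":
    unittest.main(exit=False)
```

Real-world scenarios would add cases drawn from actual usage data, e.g. values copied from production records.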


Many of the process models currently used can be more generally connected by the V-model, where the "V" describes the graphical arrangement of the individual phases. The "V" is also a synonym for verification and validation. The model is very simple and easy to understand. By ordering the activities in time sequence and by abstraction level, the connection between development and test activities becomes clear. Activities lying opposite one another complement one another, i.e. the left side serves as a base for the test activities. So, for example, the system test is carried out on the basis of the results of the specification phase. The coarse view of the model gives the impression that the test activities first start after the implementation. However, in the description of the individual activities the preparatory work is usually listed; for example, the test plan and test strategy should be worked out immediately after the definition of the requirements. Nevertheless, the model can contribute very well to the structuring of the software development process. Its disadvantage is the coarse division into constructive work (including the implementation) on the left-hand side of the "V" and the more destructive tasks on the right-hand side. It can also give the impression that, after the implementation phase, a ready product can be delivered; no planned-in removal of defects and regression testing is provided for.


One of the first models for software development is the so-called waterfall model. The individual phases, i.e. activities, defined in it are to be found in nearly all models proposed since. It set out that each activity in software development must be completed before the next can begin; a return in the development process was only possible to the immediately previous phase. In the waterfall model, testing directly follows the implementation, which suggests that testing activities can only start after the implementation; preparatory tasks for testing were not made clear. A further disadvantage is that testing, as the last activity before release, can be relatively easily shortened or omitted altogether, which in practice is unfortunately all too common. In this model, the expense of removing the faults and defects found is only recognizable through a return to the implementation phase.


From the view of testing, all of the models presented previously are deficient in various ways:
• the connection between the various test stages and the basis for the test is not clear
• the tight link between test, debug and change tasks during the test phase is not clear
• the first test activities start after the implementation

In the following, the W-model is presented. This is based on the general V-model and removes the disadvantages previously mentioned. The test process usually receives too little attention in the models presented and usually appears as an unattractive task to be carried out after coding. In order to place testing on an equal footing, a second "V" dedicated to testing is integrated into the model. Both "V"s together give the "W" of the W-model.

Advantages of the W-Model

In the W-model the importance of the tests and the ordering of the individual activities for testing are clear. Parallel to the development process, in a narrower sense, a further process - the test process - is carried out; it does not first start after development is complete.

The strict division between constructive tasks on the left-hand side and the more destructive tasks on the right-hand side that exists in the V-model is done away with. In the W-model it is clear that such a division between tasks is not sensible and that closer co-operation between development and testing activities must exist. From the project outset onwards, the testers and the developers are entrusted with tasks and are seen as equal partners. During the test phase, the developer is responsible for the removal of defects and the correction of the implementation. The early collaboration and tight co-operation between the two groups can, in practice, often avoid conflict. The W-model comes closer to practice when the test expenditure amounts to 40% or more. The model clearly emphasises the fact that testing is more than just the construction, execution and evaluation of test cases.

Disadvantages of the W-Model

Models simplify the real facts. In practice there are more relations between the different parts of a development process. However, there is a need for a simple model if all people involved in a project are to accept it; this is also a reason why the simple V-model is so frequently used in practice.

The models of software development presented do not clarify the expenditure of resources that needs to be assigned to the individual activities. In the W-model, too, it appears that the different activities have an equal requirement for resources (time, personnel, etc.). In practice this is certainly not the case. In each project the most important aspects may vary, so the resource allocation is unlikely to be equal across activities. For highly critical applications the test activities certainly have a higher weighting, or at least an equal weighting, with other activities.

Spiral Model

The spiral model presented a cyclical and prototyping view of software development. Tests were explicitly mentioned (risk analysis, validation of requirements and of the development) and the test phase was divided into stages. The test activities included module, integration and acceptance tests. However, in this model too, testing follows the coding, with the exception that the test plan should be constructed after the design of the system. The spiral model also identifies no activities associated with the removal of defects.

Extreme Programming

A further model of software development is currently frequently discussed: Extreme Programming. Taking a simplistic view of the model, one could say that Extreme Programming does not use specifications. The test cases initially defined are used as a description of the requirements, and are then used after the implementation to help check the (sub-)product. This aspect of Extreme Programming can also be found in the W-model: the left part of the "W" can simply be omitted, leaving just the testing activities as tasks up to the point of implementation. The requirements for the system to be developed are then extracted from the specified test cases.

Extreme Programming Questions and Answers

How are Software Quality Assurance and Software Configuration Management integrated into Extreme Programming?

XP defines two levels of testing. The first is unit testing, which must be performed by the programmers as they work. Each class implemented must have programmer-developed unit tests for everything that "could possibly break". These tests are to be written during coding of the class, preferably right before implementing a given feature. Tests are run as frequently as possible during development, and all unit tests in the entire system must be running at 100% before any developer releases his code. (By release, we mean transferring from his own code space to the code integration area; this is handled differently, of course, depending on the code management tools in place.) The second level of testing is called functional testing. Each feature of the system (which is defined by something we call a User Story, rather like a Use Case) must have one or more functional tests that test it. The functional tests are the responsibility of what we call the "customer", the body responsible for defining the requirements.

The implementation and running of functional tests can be done by the Software QA group, and in fact this is an ideal way to do it.
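The first level described above, programmer-written unit tests, can be sketched as follows. The Account class and its withdraw feature are invented for illustration; in XP the tests would be written first, immediately before implementing the feature:

```python
import unittest

# Hypothetical feature: an Account that rejects overdrafts.
# In XP, TestWithdraw below is written before Account.withdraw exists.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class TestWithdraw(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        self.assertEqual(Account(100).withdraw(30), 70)

    def test_overdraw_is_rejected(self):
        # "everything that could possibly break" includes the error path
        with self.assertRaises(ValueError):
            Account(100).withdraw(150)

if __name__ == "__main__":
    # All unit tests must run at 100% before the code is released
    # to the integration area.
    unittest.main(exit=False)
```
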

Within XP, are there any specification baselines, test baselines, QA acceptance testing, and CM release management/change control?

XP is an inherently incremental process, with software being released to "production" as frequently as possible. This generally means that programmers release their work to the common development pool approximately daily, and that means that if a full system were built on that day, their code would be included in that build. The time period between full system builds varies depending on the environment: since you have chosen a particularly difficult integration language (C++), I could imagine that you would build less frequently. We would recommend, however, that the full system be integrated as often as possible, at least daily. (This may seem aggressive to you. We'd have to talk about what is possible in your environment.)

Since XP is incremental, developers are working in short time increments we call iterations: we recommend about three weeks. Features (user stories) are broken down to the point of detail that allows a developer and his partner to implement the stories they're working on in that time period. We like the functional tests for that iteration to be complete and available no more than half-way through the iteration. (This usually means that QA is writing tests for the next iteration while this one is going on.) All through the iteration, programmers can use QA's functional tests to determine whether they have met the requirements. (They are also using their own unit tests to determine whether their individual classes are doing what they should. This is usually at a much finer level of detail.)

Baselines work this way: when the code for a story is released, all the functional tests for it should be in place, and will ideally be working. Inevitably some will not be, especially with teams just beginning with XP. One of the quality measures in the process is the daily graph of performance on functional tests. The general shape of this graph, over the course of the full system release period, is that of two s-curves: the upper curve is the total number of tests written, the lower curve is the number running at 100%. A healthy project of course shows these curves coming together at 100% by the end of the schedule. The code management software needs, of course, to reflect the requirements scheduled for release. This is determined by the "customers", as part of the planning components we call the commitment schedule (the overall plan for a major release) and the iteration plan (the plan for a three-week iteration). The baseline of what is in the system tracks what is actually requested by the customers. Development doesn't care whether this is new functionality or a change to old, or whether a given user story addresses something that was planned for. XP is completely flexible with regard to change management: development merely estimates how long any desired feature will take, and works on it when the "customer" schedules it into an iteration. (Dependencies of course exist, but we find that far fewer exist than most developers believe; drilling into that subject is beyond the scope of this email.)

When do all the customer sign-offs occur?

Customer sign-off is continuous. Each iteration has its functional tests. Everyone is fully up to date on which tests are working and which are not. If test scores are trailing implementation by too much, the customer will inevitably schedule more work against older features that are incorrect (or whose requirements have changed). When test scores are tracking implementation, the customer knows it and is comfortable requesting new functionality. Because the test scores are public and visible, everyone has the same level of understanding of where quality is. Generally scores show a good curve toward release, and everyone gets increasing comfort as the release date approaches. And, of course, if tests are not tracking, everyone knows that and the priority of getting things right naturally increases.

The overall idea of this part of the process is to provide the most rapid feedback possible to everyone, customers and developers alike. That's why we like all the functional tests run every night. Next morning, if anything was broken the day before, everyone knows it and can deal with it effectively (since it was only yesterday's work that could be the problem). The faster the feedback, the faster development of quality software can proceed.

What are the Quality Assurance and Software Configuration Management roles and responsibilities within Extreme Programming?

We prefer for there to be a separate organization for functional testing (probably exactly like your QA function, with testing results made public very quickly). XP, however, only says that there must be functional tests; it does not specify organizationally how they must be done. Experience is that testing is best done by a separate function - but one that is very tightly integrated with development rather than sitting at the end of a long pipeline. Configuration management is also up to the team. It is usually necessary to have one or more individuals responsible for CM. We have no special rules or practices addressing how a group would manage the requirement to build multiple systems from one code base. Our main approach would be: for each release configuration, there must be corresponding functional tests, and these must be run before that configuration is released to the (real) customer. We would expect development to proceed by running a kind of "union" of all the functional tests of all the configurations. We'd probably have to talk more specifically about how your specific organization needs to build configurations to say much more about that.

Do you use IEEE, SEI, or ISO 9000 standards as references to acquire the fundamentals of defining accurate requirements for customers and software engineering users? How can a person write storyboards without the basics of pinpointing and developing sound requirements?

We would agree that those who play the customer role have to know what they want. We do not, however, recommend any particularly formal requirements writing or recording mechanism. Instead, what we are working toward (XP is for small teams, after all) is a clear understanding in the heads of customers, developers, and testers as to what is wanted. Rather than have, say, an "analyst" sit down with the customer and laboriously translate his mumblings into something representing what is wanted, then having a "designer" take the analysis and build a design, and so on, small teams function best if the customers and designer/developers talk to one another until they develop a common vocabulary of what is needed and how it will be done. In XP, we would like to have a common level of understanding in all heads, each focused on its own particular interests:
• Customers: what's needed, what's the business value, when do we need it?
• Developers: what's needed, how can I build this, how can I test my code, how long will it take?
• Testers: what's needed by the customers, how can I test whether developers have done it?
As you can see, the testers' functional tests are what close the loop, assuring everyone that what was asked for is what we got. The best way to do XP is with a separate functional testing organization that is closely integrated into the process. It would be delightful to have that organization run by an experienced QA manager trained in XP.

Is Extreme Programming not for Software Quality Engineering and Software Configuration Management practitioners?

XP is a development discipline that is for customers (in their roles as specifiers, investors, testers, and acceptors) and for developers. As such, the Quality Engineering and Configuration Management roles are critical to the effort. They have to be assigned and played in a way that is consistent with the mission of the group, the level of criticality of quality, and so on. We'd need to talk in detail about your situation to see just where the XP terminology connects with yours, but your QA functions need to be done in any effective software effort, whether in a separate organization or not. So XP certainly is for software quality engineering and software configuration management, as part of a healthy overall process.

That said, XP is aimed at smaller projects (like yours) and it sounds like yours has a much higher level of QE and CM than is often seen in companies of your size. That should give you a strong leg up in building quality software, and we should strengthen your contribution to quality as we bring XP into the team.

Configuration management, as we know it today, started in the late 1960s. In the 1970s, the American government developed a number of military standards, which included configuration management. Later, especially in the 1990s, many other standards and publications discussing configuration management have emerged. In the last few years, the growing understanding of software development as a collection of interrelated processes has influenced work on configuration management. This means that configuration management is now also considered from a process point of view. There are many definitions of configuration management and many opinions on what it really is. In short: Configuration management is unique identification, controlled storage, change control, and status reporting of selected intermediate work products, product components, and products during the life of a system. Figure 1–1 shows the activity areas included in the definition of configuration management. It also shows their relations to each other, to common data, and to elements outside the configuration management process area.

Figure 1–1 Overview of Configuration Management Activities

When you work professionally with configuration management (as with anything else), it's important to have the fundamental concepts in place. If all else fails, you can go back and seek a solution there. Definitions of configuration management used in various standards are covered in Chapter 3, and definitions of configuration management used in various maturity models can be found in Chapter 2. The definitions in these standards and maturity models are similar to a large extent. However, they're expressed in slightly different words and with different divisions between the detailed activities that constitute configuration management. It's perfectly okay for a company to use its own definition of configuration management, but it's a good idea to investigate how that definition maps to the definition given above and to other relevant definitions, to make sure no activity has been left out.

1.1 Configuration Management Activities
The view of configuration management taken here is process oriented. Therefore, the definition includes activity areas, which can be described in terms of process descriptions. The activity areas described in detail in the following paragraphs are identification, storage, change control, and status reporting. Configuration management has many interactions with other development and support processes. Figure 1–1 illustrates the production and usage activity areas via their respective libraries.

All the activity areas in configuration management share metadata for items placed under configuration management. Metadata is a database concept that means data about the data stored in the database. So metadata in this context describes the configuration items. Metadata for a configuration item may include its name, the name of the person who produced the item, the production date, and references to other related configuration items. Figure 1–1 shows a logical separation of metadata, even though this data is often stored physically at the same location (in the same database) as the items in controlled storage. Change control uses metadata—for example, the trace information for a configuration item for which a change is suggested. Change control does not in itself contribute to metadata, because information produced during change control will be present only if a configuration item is affected by a suggested change. A configuration item can exist without change control information, but it can't exist without metadata.
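As a rough sketch, the metadata described above might be modeled like this. The field names are taken from the examples in the text; real CM tools store much richer records:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: fields are the ones named in the text
# (name, producer, production date, related items).

@dataclass(frozen=True)  # frozen: items under CM must never be changed
class ItemMetadata:
    name: str
    produced_by: str
    production_date: str
    related_items: tuple = field(default_factory=tuple)

# Hypothetical configuration item described by its metadata
meta = ItemMetadata(
    name="build-script",
    produced_by="j.doe",
    production_date="2004-05-17",
    related_items=("makefile", "release-notes"),
)
print(meta.name, meta.related_items)
```

Making the record frozen mirrors the rule, stated later in the text, that items placed under configuration management must never be changed; any attempt to assign to a field raises an error.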

Configuration Management Is Cyclic—or Is It?

In everyday language, "configuration item" is often used to refer to an item, which is then said to be produced in several versions. This is not strictly correct, but it's acceptable as long as the reference is clearly understood by all involved. In fact, each new version of a configuration item is a new configuration item in its own right. This can be illustrated by an analogy to an object-oriented approach. The "configuration item" may be seen as a class and the versions as instantiations of the class, as shown in Figure 1–2. Version chains of configuration items—that is, versions 1, 2, 3, and so on—may be formed by indicating which configuration item a given configuration item is derived from or based on.

Figure 1–2 Configuration Item Class and Instantiations

Configuration management activities may be viewed as cyclic for each item class placed under configuration management. This means that a configuration item class continuously goes "through the mill." The first cycle is initiated by a (planned) need for a configuration item, and later the driving force is a change request (and only this!). This is illustrated in Figure 1–3.

Figure 1–3 The Life of a Configuration Item Class

In each cycle, a configuration item will be identified, produced, stored, and released for usage. Event registrations will occur as a consequence of experience gained during usage. These will lead to change control and the creation of change requests, which in turn lead to the identification, production, and so on, of a new version of the original configuration item. Items placed under configuration management must never be changed, but new versions may be created. Configuration items that are different versions of the same original item are obviously strongly related, but each one is an individual item, which will be identified and may be extracted and used independently. This is one of the main points of configuration management: being able to revert to an earlier version of an item class.
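The version-chain idea can be sketched as follows. Each version is a separate, immutable item holding a link to the item it derives from, which is exactly what makes reverting possible (names and contents are invented):

```python
# Minimal sketch of a version chain: each version is a new configuration
# item in its own right, pointing at the item it is derived from.

class ConfigurationItem:
    def __init__(self, name, version, content, derived_from=None):
        self.name = name
        self.version = version
        self.content = content
        self.derived_from = derived_from  # link to the previous version

# Three versions of the same item class; v1 is never modified.
v1 = ConfigurationItem("parser", 1, "initial implementation")
v2 = ConfigurationItem("parser", 2, "bug fix", derived_from=v1)
v3 = ConfigurationItem("parser", 3, "new feature", derived_from=v2)

def revert(item, target_version):
    """Walk the derivation chain back to an earlier version."""
    while item is not None and item.version != target_version:
        item = item.derived_from
    return item

old = revert(v3, 1)
print(old.version, old.content)
```
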
Quality Assurance Process

Configuration management interacts with quality assurance, as illustrated by the item approval process that accompanies a configuration item from production to storage. The item approval, which may be a written quality record or a verbal approval, is a product of quality assurance. Some see it as a product of configuration management, but it's actually the gateway from production to configuration management, provided by quality assurance.

Auditing is included in some definitions of configuration management. An audit ensures that the product - the configuration item released for use - fulfills the requirements and is complete as delivered. This includes configuration management information, so that everything required is delivered in the expected versions and the history of each item can be thoroughly accounted for. Here, this activity area is not considered part of configuration management. It's viewed as an activity area under general quality assurance, which partly concerns the products and partly the processes, rather than a configuration management activity area. This may be a controversial point of view, but the idea of audits is a legacy from the Department of Defense origin of configuration management. Today there is a much broader understanding in the software industry of the importance of quality assurance and, therefore, also of configuration management. Auditing uses configuration information extensively in the form of status reports, but it also uses quality assurance techniques and methods, such as reviewing and testing. In practice, the people involved in configuration management also carry out the audit.

Microsoft Visual SourceSafe (VSS) helps you manage your projects, regardless of file type (text, graphics, binary, sound, or video files), by saving them to a database. When you need to share files between two or more projects, you can do so quickly and efficiently. When you add a file to VSS, the file is backed up in the database and made available to other people, and every change made to the file is saved, so you can recover an old version at any time. Members of your team can see the latest version of any file, make changes, and save a new version to the database. When a file (or set of files) is ready to deliver to another person, group, Web site, or any other location, VSS makes it easy to share and secure different versions of the selected set of files. More and more, developers are accessing VSS functions from within their development environment; VSS integrates easily with Microsoft Access, Visual Basic, Visual C++, Visual FoxPro, and other development tools. The major functions of VSS are:

1. Sharing: In VSS, one file can be shared among multiple projects. Changes to the file from one project are automatically seen by the other projects sharing the file, which encourages code reuse. For example, suppose the database group of the office tools team wants to add a spelling checker, and the word processing group already has one. The database group could simply add the spelling checker files to its project, but then the two groups would not be connected: if the database team found a bug in the spelling checker, the word processing group would have to duplicate the fix. By sharing the files, both groups work with common source files, and all changes are automatically seen by both projects.

2. Branching: Branching is the process of taking a file in two directions (that is, branches or paths) at once. VSS keeps track of these branches by tracking the different paths as a project.
Branching a file breaks the share link, making the file in that project independent of all other projects. The changes you make in the file are not reflected elsewhere, and vice versa. For example, suppose that after your project reaches version 3.1, development proceeds in two different directions. One team is working toward the next major release, version 4.0; the other team is working on a maintenance release, version 3.2. You create a new project, labeled $/PATCH, to represent the 3.2 version, and share and branch all the files into it. You branch because you don’t want the projects to track each other: versions 3.2 and 4.0 are going in different directions, so changes made in one should not automatically be reflected in the other. The projects now proceed independently of each other. Later, you may want to merge the version 3.2 changes (bug fixes or minor feature enhancements, for example) into version 4.0. You will then be able to merge those changes using the "merge branches" command.

3. Merging: Merging is the process of combining differences in two or more changed copies of a file into a single, new version of the file.
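The share/branch semantics described above can be pictured with a toy model: sharing means two projects reference the same underlying file object, and branching gives one project its own independent copy. (This is only an illustration of the semantics, not how VSS stores files; the project and file names are invented.)

```python
class File:
    """Toy stand-in for a version-controlled file."""
    def __init__(self, content):
        self.content = content

# Sharing: both projects reference the *same* object (the share link).
spellck = File("rev 1")
word_proc = {"spellck.bas": spellck}
database = {"spellck.bas": spellck}

# The word-processing group fixes a bug...
word_proc["spellck.bas"].content = "rev 2 (bug fixed)"
# ...and the database group automatically sees the fix.
assert database["spellck.bas"].content == "rev 2 (bug fixed)"

# Branching: break the share link by giving one project its own copy.
database["spellck.bas"] = File(database["spellck.bas"].content)
database["spellck.bas"].content = "rev 3 (database-only change)"
# The other project is no longer affected.
assert word_proc["spellck.bas"].content == "rev 2 (bug fixed)"
```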

VSS cannot resolve merge conflicts; instead, it presents them to you for resolution. There are two methods for viewing and resolving merge conflicts: Visual merge and Manual merge.

Basic Definitions of VSS:

Access rights: Levels of permission users are granted by the VSS administrator to use the VSS database. The levels of access rights are Read, Check Out, Add, and Destroy.

Automatic merge: When multiple users have the same file checked out, their changes to the file are merged by VSS during check-in.

Branched file: File whose share link has been broken using the Branch command.

Branching: Process of sharing a file with another project and then separating it into two or more branches. Once a branch has been created, the two files (the file in the project, and its counterpart in other projects) have a shared history up to a certain point, and divergent histories after that time.

Checked-in file: File stored in the VSS database and unavailable for modification.

Checked-out file: File that has been reserved for work by a user. Users check out files to make changes to them. In the default configuration, VSS allows only one user at a time to check out a file. Checking out a file copies its latest version into the user's working folder.

Check-out folder: Folder to which a file is checked out in VSS. It is not the same thing as the working folder: when you check out a file, it goes to your working folder, but to another user that same location appears as your check-out folder. The check-out folder is displayed in the Check Out Folder column of the file pane; the working folder is displayed under the toolbar.

Cloaking: Preventing a project from being affected by certain commands, for example, Get Latest Version, Check Out, Check In, Undo Check Out, and Project Show Difference.

Conflict:

Two or more different changes to the same line of code in a multiple check-out situation. VSS recognizes conflicts during a merge operation and flags them.

Conflict marker: Symbol used to designate conflicting changes to a file. These symbols are:

Symbol    Description
<<<<<<    SourceSafe version
======    Conflict separator
>>>>>>    Local version

VSS places these markers in the file after a conflicting check-in or merge operation, so conflicts can be easily found and resolved.

Cross-platform development: VSS supports transparent file compatibility across multiple processors and operating systems.

Current version: Version of a file most recently stored in the VSS database. The current version has the highest version number of a file in VSS.

Delete command: Removes files and projects from a VSS project and marks them as deleted; the items still exist, however, and can be recovered using the Recover command.
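As a concrete illustration of the conflict markers defined above, here is roughly what a file looks like after a conflicting merge, together with a toy resolver that keeps the local (bottom) version of each conflict block. (Real VSS markers may carry extra annotation after the symbols; this sketch uses the bare symbols only.)

```python
conflicted = """\
line 1
<<<<<<
total = subtotal * 1.05
======
total = subtotal * 1.08
>>>>>>
line 3
"""

def take_local(text: str) -> str:
    """Resolve each conflict block by keeping the local (bottom) version."""
    out, mode = [], "normal"
    for line in text.splitlines():
        if line.startswith("<<<<<<"):
            mode = "theirs"      # SourceSafe version: skip these lines
        elif line.startswith("======"):
            mode = "ours"        # local version: keep these lines
        elif line.startswith(">>>>>>"):
            mode = "normal"      # end of the conflict block
        elif mode != "theirs":
            out.append(line)
    return "\n".join(out)

print(take_local(conflicted))  # keeps "total = subtotal * 1.08"
```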

Delta: In VSS, a delta is the difference between version x of a file and version x-1 of the same file. VSS uses reverse delta technology to store changes.

Destroy command: Permanently removes deleted files and projects from the VSS database. Once destroyed, the items cannot be recovered.

Development environment:

Set of software development tools, presented as a unified environment in which the software developer can work efficiently. Microsoft Visual Basic and Microsoft Visual C++ are examples of such environments.

File list: List of files in the current project, found in the file pane of the VSS Explorer window.

File pane: Right side of the VSS Explorer window. This pane contains the file list, a list of all files in the current project.

History: Record of changes to a file since it was initially added to VSS. With the file history, you can return to any point in the file's history and recover the file as it existed at that point. The History of Project dialog box shows the record of significant events in the current project, such as labeling and deleting or adding files and subprojects.

Inheritance: Inherited effect of variables in VSS initialization files that subprojects receive from their parent projects. Variables in these files can be grouped under headings to specify behavior. If a variable is set before any group heading in the initialization file, the variable affects all projects; the effect is inherited by the subproject variables.

Label: User-defined name you can attach to a specific version number of a file or project.

Local copy: Copy of a file stored in your working folder on your local computer. The local copy may differ from the VSS master copy if the local copy has been changed since the last check-out, or if the master copy was changed by another user while you were working on the local copy.

Locking: System of ensuring that two processes do not affect the same record in a database at the same time. To coordinate record access, VSS applies native locking, which uses native operating system functions. VSS can also be set to use lockfiles, which create temporary files in the LOCKS folder.

Log on: Process of entering and verifying a user's name and password to access the VSS database.

Master copy: Most recently checked-in version of a file stored in the VSS database, as opposed to the local copy of a file in your working folder.

Merging: Process of combining differences in two or more changed copies of a file into a new version of the file. A merge involves at least two different files (which can be different versions of the same file, or changes made to the same version of the file) and creates a new file made up of the results of the merge. Merging can occur when the user merges two branches or when the Check In or Get Latest Version command is used.

Multiple check out: Simultaneous check-outs by two or more users. The VSS administrator must enable multiple check out.

Parent project: The project a file or subproject exists in. For example, the parent of the file $/Project/Abc.txt is $/Project, and the parent of the project $/Project is the root ($/).

Project: Group of related files, typically all the files required to develop a software component. Files can be grouped within a project to create subprojects. Projects can be defined in any way meaningful to the user(s), for example, one project per version, or one project per language. Projects tend to be organized in the same way as file directories.

Project list: List of all the projects available in the VSS database; the project list is found in the left pane of the VSS Explorer window.

Project security system: The more restrictive of the two security systems provided by VSS. By default, it is disabled. When enabled by the administrator, this feature allows the administrator to set access rights on a per-user, per-project basis.

Purge command: Permanently removes previously deleted files and projects from the VSS database. Once purged, the items cannot be recovered.

Read-only file:

File marked as read-only in its file attributes. Such a file can be viewed in an appropriate text editor, but cannot be modified. VSS marks the file as read-only when you use the Check In and Get Latest Version commands.

Recursive operation: Operation applied to a project and to all the files and subprojects of that project. For example, you can use the Check Out command recursively to check out all files in the project list simultaneously, rather than selecting each file individually.

Results pane: Portion of the VSS Explorer window where results from VSS operations are shown. For example, when you check in a file, this pane shows the name of the file being checked in.

Reverse delta: Change-storage technology used by VSS, in which incremental changes to a baseline file are stored, rather than each successive version of the file in its entirety. In VSS, the current version of a file is used as the baseline, and changes from the previous versions are saved. This results in reduced disk storage requirements and faster access times, because only the current version is stored in the database in its entirety.

Rights propagation: Default assignment of user-access rights in subprojects based on rights assigned in the parent project. This default assignment can be changed.
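The reverse delta entry above is worth a small sketch: keep only the newest version in full, and for each older version keep a delta that rebuilds it from the version after it. The class below is a toy illustration of that idea (VSS's actual on-disk format is not public), using Python's difflib to compute the deltas:

```python
import difflib

class ReverseDeltaStore:
    """Toy reverse-delta store. Only the latest version is kept in full;
    each older version is kept as a delta off the version after it."""

    def __init__(self, lines):
        self.current = list(lines)  # full text of the latest version
        self.deltas = []            # deltas[-1] turns current into the previous version

    def check_in(self, new_lines):
        new_lines = list(new_lines)
        # Opcodes that rebuild the old text (b) from the new text (a).
        sm = difflib.SequenceMatcher(a=new_lines, b=self.current)
        delta = [(tag, i1, i2, self.current[j1:j2])
                 for tag, i1, i2, j1, j2 in sm.get_opcodes()]
        self.deltas.append(delta)
        self.current = new_lines

    def get_back(self, steps):
        """Reconstruct the version `steps` check-ins before the current one."""
        lines = list(self.current)
        for delta in list(reversed(self.deltas))[:steps]:
            out = []
            for tag, i1, i2, old_segment in delta:
                out.extend(lines[i1:i2] if tag == "equal" else old_segment)
            lines = out
        return lines

store = ReverseDeltaStore(["hello"])
store.check_in(["hello", "world"])
store.check_in(["hello", "world", "again"])
assert store.get_back(0) == ["hello", "world", "again"]  # current version, no deltas applied
assert store.get_back(2) == ["hello"]                    # two deltas walk back to version 1
```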

Root project: The highest-level project, named $/, in the project list. All projects in a VSS database are subprojects of the root project.

Security: VSS has two levels of security: default security and project security. Default security provides two access rights: read/write and read-only. When project security is enabled, four access rights are available per user, per project: Read, Check Out, Add, and Destroy. Each succeeding right includes all rights preceding it. The Destroy access right provides unlimited access and is equivalent to read/write rights under default security.

Shadow folder: Central, optional folder that contains current versions of all the files in a project. The shadow folder does not contain the master copy of a file or the local copy of a file. Instead, it provides a central location from which to view the overall structure of the project and serves as a convenient place to build or compile the project.
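The cumulative access rights described under Security above (each right includes all rights preceding it) map naturally onto an ordered enumeration. This is an illustrative model, not VSS's implementation:

```python
from enum import IntEnum

class Right(IntEnum):
    """VSS project-security rights, ordered so that each right
    implicitly includes every lower one (per the Security entry above)."""
    READ = 1
    CHECK_OUT = 2
    ADD = 3
    DESTROY = 4

def has_right(granted: Right, needed: Right) -> bool:
    """A user granted `granted` implicitly holds every lower right."""
    return granted >= needed

# A user with Add may check out files, but may not destroy them.
assert has_right(Right.ADD, Right.CHECK_OUT)
assert not has_right(Right.ADD, Right.DESTROY)
```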

Share link: Link between a file and the one or more projects that share it. This link serves to update the shared file with any checked-in changes, regardless of which project the file was checked out from.

Shared file: File simultaneously used by, and part of, more than one project.

Source code control: The management of a file's change history and of the file's relation to a larger grouping of related files, known as a project. Source code control is a vital part of the efficient development of software applications. VSS is a project-oriented source code control system.

User list: List of users who can use the VSS database. The list is maintained by the VSS administrator and displayed in the VSS Administrator's main window.

Username: Unique identifying string for a given user, used for logging on.

Version control: VSS maintains multiple versions of a file, including a record of the changes to the file from version to version.

Version number: Number that indicates the number of revisions a file has undergone since it was added to VSS. This number is displayed in the History dialog box. Version numbers are always whole numbers.

Version tracking: Record-keeping process of tracking a file's history from the initial version to the current version. Changes to a file are tracked as part of this process.

Visual merge: Merge operation in which conflicts are resolved visually, in an easy-to-use graphical interface.

VSS Administrator: Individual responsible for the VSS database. The administrator uses the VSS Administrator program to control the location of the database, the user list, and the access rights of each user, and performs setup and backup duties on the database. The administrator's user name is always Admin.

VSS database: Central database where all master copies, history, project structures, and user information are stored. A project is always contained within one database; multiple projects can be stored in one database, and multiple VSS databases can exist to store multiple projects.

VSS Explorer: VSS’s graphical user interface. By default, it comprises three panes (the left project pane, the right file pane, and the results pane), as well as the toolbar, status bar, menus, and so on. VSS Explorer is displayed when you click the VSS icon.

Web site project: Project marked as a Web site project in the VSS Administrator. Such a designation allows special Web site commands, such as Deploy, to be used in the project.

Wildcard characters: The asterisk (*) and question mark (?) are wildcard characters. You can use these characters to match patterns, and you can combine wildcard and literal characters to further refine a search.

*: Like the MS-DOS asterisk (*) wildcard character, this symbol matches any number of characters. For example, wh* finds what, white, and why; *at finds cat, bat, and what.

?: Like the MS-DOS question mark (?) wildcard character, this symbol matches any single character. For example, b?ll finds ball, bell, and bill.


A backslash preceding an asterisk or question mark indicates a literal asterisk or question mark: \* or \?. (This is necessary if you want to search for actual asterisks, question marks, or backslashes.)

Working folder: Specified folder on a user's local computer used to store files when they are checked out of the VSS database. A user makes changes to files in the working folder and then checks the modified files back into the VSS database for version tracking.
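The * and ? semantics described under Wildcard characters are the familiar MS-DOS ones, which Python's fnmatch module also follows, so the examples above can be checked directly (an analogy for illustration, not VSS's own matching engine):

```python
from fnmatch import fnmatch

# * matches any number of characters.
assert fnmatch("white", "wh*")
assert fnmatch("bat", "*at")
# ? matches exactly one character.
assert fnmatch("bell", "b?ll")
assert not fnmatch("bll", "b?ll")   # zero characters is not a match for ?
```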

What is a Bug Life Cycle? The time span between the first time a bug is found (status: ‘New’) and the point at which it is closed successfully (status: ‘Closed’), rejected, postponed or deferred is called the ‘Bug/Error Life Cycle’. (Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. For more information about the various statuses used during a bug life cycle, refer to the article ‘Software Testing – Bug & Statuses Used During A Bug Life Cycle’.) There are seven different life cycles that a bug can pass through:

< I > Cycle I: 1) A tester finds a bug and reports it to the Test Lead. 2) The Test Lead verifies whether the bug is valid. 3) The Test Lead finds that the bug is not valid, and the bug is ‘Rejected’.

< II > Cycle II: 1) A tester finds a bug and reports it to the Test Lead. 2) The Test Lead verifies whether the bug is valid. 3) The bug is verified and reported to the development team with the status ‘New’. 4) The development leader and team verify whether it is a valid bug. The bug is invalid and is marked with the status ‘Pending Reject’ before being passed back to the testing team. 5) After getting a satisfactory reply from the development side, the Test Leader marks the bug as ‘Rejected’.

< III > Cycle III: 1) A tester finds a bug and reports it to the Test Lead. 2) The Test Lead verifies whether the bug is valid. 3) The bug is verified and reported to the development team with the status ‘New’. 4) The development leader and team verify that it is a valid bug, and the development leader assigns a developer to it, marking the status as ‘Assigned’. 5) The developer solves the problem, marks the bug as ‘Fixed’, and passes it back to the development leader. 6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The Test Leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest. 8) The tester retests the bug and it is working fine, so the tester closes the bug and marks it as ‘Closed’.

< IV > Cycle IV: 1) A tester finds a bug and reports it to the Test Lead. 2) The Test Lead verifies whether the bug is valid. 3) The bug is verified and reported to the development team with the status ‘New’. 4) The development leader and team verify that it is a valid bug, and the development leader assigns a developer to it, marking the status as ‘Assigned’. 5) The developer solves the problem, marks the bug as ‘Fixed’, and passes it back to the development leader. 6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest. 7) The Test Leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.

8) The tester retests the bug and the same problem persists, so after confirmation from the Test Leader the tester reopens the bug, marks it with ‘Reopen’ status, and passes it back to the development team for fixing.

< V > Cycle V: 1) A tester finds a bug and reports it to the Test Lead. 2) The Test Lead verifies whether the bug is valid. 3) The bug is verified and reported to the development team with the status ‘New’. 4) The developer tries to verify the bug but fails to replicate the scenario that existed at the time of testing, and asks the testing team for help. 5) The tester also fails to regenerate the scenario in which the bug was found, and the developer rejects the bug, marking it ‘Rejected’.

< VI > Cycle VI: 1) After confirmation that the data or certain functionality is unavailable, the solution and retest of the bug are postponed indefinitely, and the bug is marked as ‘Postponed’.

< VII > Cycle VII: 1) If the bug is not important, and can be or needs to be postponed, it is given the status ‘Deferred’.

This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
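The seven cycles above all draw on one underlying set of allowed status transitions. The table below is a distillation of those cycles into code; the exact transition set is an illustrative reading of this article, not an industry standard:

```python
# Allowed status transitions, distilled from the seven cycles above.
TRANSITIONS = {
    "New": {"Assigned", "Pending Reject"},
    "Pending Reject": {"Rejected"},
    "Assigned": {"Open", "Fixed"},
    "Open": {"Fixed"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
}
# Postponed and Deferred (Cycles VI and VII) are applied directly at triage.
TERMINAL = {"Closed", "Rejected", "Deferred", "Postponed"}

def is_valid_path(path):
    """Check that a sequence of statuses only uses allowed transitions."""
    return all(b in TRANSITIONS.get(a, set()) for a, b in zip(path, path[1:]))

# Cycle III: straight through to Closed.
assert is_valid_path(["New", "Assigned", "Fixed", "Pending Retest", "Retest", "Closed"])
# Cycle IV: the retest fails and the bug is reopened.
assert is_valid_path(["New", "Assigned", "Fixed", "Pending Retest", "Retest", "Reopen", "Assigned"])
# A bug cannot jump from New straight to Closed.
assert not is_valid_path(["New", "Closed"])
```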

Software Testing - Contents of a Bug

A complete list of the contents of a bug/error/defect that are needed at the time of raising a bug during software testing. These fields help in identifying a bug uniquely. When a tester finds a defect, he/she needs to report a bug and fill in certain fields, which help in uniquely identifying the bug. The contents of a bug are as given below:

Project: Name of the project under which the testing is being carried out.

Subject: A short description of the bug that helps in identifying it. This generally starts with the project identifier number/string, and should be clear enough to help the reader anticipate the problem/defect for which the bug has been reported.

Description: Detailed description of the bug. This generally includes the steps involved in the test case and the actual results. At the end, the step at which the test case fails is described, along with the actual result obtained and the expected result.

Summary: This field contains some keyword information about the bug, which can help in minimizing the number of records to be searched.

Detected By: Name of the tester who detected/reported the bug.

Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of the developer group leader, who then delegates the task to a member of his team and changes the name accordingly.

Test Lead: Name of the leader of the testing team, under whom the tester reports the bug.

Detected in Version: The version of the software application in which the bug was detected.

Closed in Version: The version of the software application in which the bug was fixed.

Date Detected: Date on which the bug was detected and reported.

Expected Date of Closure: Date by which the bug is expected to be closed. This depends on the severity of the bug.
Actual Date of Closure: As the name suggests, the actual date of closure of the bug, i.e. the date on which the bug was fixed and retested successfully.

Priority: Priority of the bug fix. This depends on the functionality the bug is hindering. Generally Low, Medium, High, and Urgent are the priority levels used.

Severity: This is typically a numerical field that displays the severity of the bug. It can range from 1 to 5, where 1 is the highest severity and 5 is the lowest.

Status: This field displays the current status of the bug. A status of ‘New’ is automatically assigned to a bug when it is first reported by the tester; the status is then changed

to Assigned, Open, Retest, Pending Retest, Pending Reject, Rejected, Closed, Postponed, Deferred, etc., as the bug-fixing process progresses.

Bug ID: A unique ID (number) created for the bug at the time of reporting, which identifies the bug uniquely.

Attachment: Sometimes it is necessary to attach screenshots of the tested functionality, which can help the tester explain the testing he has done and also help the developers recreate a similar testing condition.

Test Case Failed: This field contains the test case that failed for the bug.

Any of the fields given above can be made mandatory, in which case the tester has to enter valid data at the time of reporting a bug. Making a field mandatory or optional depends on the company's requirements and can change at any point during a software testing project. (Please note: all the contents listed above are generally present for any bug reported in a bug-reporting tool. In some cases, for customized bug-reporting tools, the number of fields and their meanings may change as per the company's requirements.)
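The fields listed above map naturally onto a simple record type. In the sketch below, Bug ID, Project, and Subject are treated as mandatory and the rest as optional; which fields are mandatory is a company choice, and all names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    # Mandatory fields; these three are picked purely for illustration.
    bug_id: int
    project: str
    subject: str
    # Optional fields drawn from the list above.
    description: str = ""
    detected_by: str = ""
    assigned_to: str = ""
    detected_in_version: str = ""
    priority: str = "Medium"   # Low / Medium / High / Urgent
    severity: int = 3          # 1 (highest severity) .. 5 (lowest)
    status: str = "New"        # a freshly reported bug always starts as 'New'

bug = BugReport(101, "OfficeTools", "OT-101: spelling checker crashes on empty document")
assert bug.status == "New" and bug.priority == "Medium"
```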

Software Testing - Bug and Statuses Used During A Bug Life Cycle

Find out what a bug or error is called in software testing, and the various statuses a bug is assigned during its life cycle. The main purpose behind any software development process is to provide the client (the final/end user of the software product) with a complete solution that will help him manage his business/work in a cost-effective and efficient way. A software product is considered successful if it satisfies all the requirements stated by the end user. Any software development process is incomplete if the most important phase, testing of the developed product, is excluded. Software testing is a process carried out in order to find and fix previously undetected bugs/errors in the software product. It helps in improving the quality of the software product and makes it safer for the client to use.

What is a bug/error? A bug or error in a software product is any exception that can hinder the functionality of either the whole software or part of it.

How do I find a bug/error? Basically, test cases/scripts are run in order to find any unexpected behavior of the software product under test. If any such unexpected behavior or exception occurs, it is called a bug.

What is a test case? A test case is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior for a certain set of inputs.

What do I do if I find a bug/error? In normal terms, if a bug or error is detected in a system, it needs to be communicated to the developer in order to get it fixed.

Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. (Please note that there are various ways to communicate the bug to the developer and track the bug status.)

Statuses associated with a bug:

New: When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader (Test Leader) in order to confirm that it is a valid bug. After getting confirmation from the Test Lead, the software tester logs the bug and the status ‘New’ is assigned to it.

Assigned: After the bug is reported as ‘New’, it comes to the development team. The development team verifies whether the bug is valid. If it is valid, the development leader assigns it to a developer to fix, and the status ‘Assigned’ is given to it.

Open: Once the developer starts working on the bug, he/she changes its status to ‘Open’ to indicate that he/she is working on a solution.

Fixed: Once the developer makes the necessary changes in the code and verifies them, he/she marks the bug as ‘Fixed’ and passes it to the development lead in order to pass it on to the testing team.

Pending Retest: After the bug is fixed, it is passed back to the testing team to be retested, and the status ‘Pending Retest’ is assigned to it.

Retest: The testing team leader changes the status of the bug from ‘Pending Retest’ to ‘Retest’ and assigns it to a tester for retesting.

Closed: After the bug is assigned the status ‘Retest’, it is tested again. If the problem is solved, the tester closes it and marks it with ‘Closed’ status.

Reopen: If, on retesting, the system behaves in the same way and the same bug arises once again, the tester reopens the bug and sends it back to the developer, marking its status as ‘Reopen’.
Pending Reject: If the developers think that a particular behavior of the system, which the tester has reported as a bug, is in fact intended and the bug is invalid, the bug is rejected and marked as ‘Pending Reject’.

Rejected:

If the Test Leader finds that the system is working according to the specifications, or that the bug is invalid as per the explanation from the development team, he/she rejects the bug and marks its status as ‘Rejected’.

Postponed: Sometimes, the fixing of a particular bug has to be postponed for an indefinite period. This situation may occur for many reasons, such as unavailability of test data or unavailability of a particular functionality. In that case, the bug is marked with ‘Postponed’ status.

Deferred: In some cases a particular bug is of no immediate importance and can be avoided for the time being; it is then marked with ‘Deferred’ status.

Risk Analysis
In this tutorial you will learn about Risk Analysis: technical definitions, risk analysis, risk assessment, business impact analysis, product size risks, business impact risks, customer-related risks, process risks, technical issues, technology risks, development environment risks, and risks associated with staff size and experience. Risk analysis is one of the important concepts in the software product/project life cycle. It is broadly defined to include risk assessment, risk characterization, risk communication, risk management, and policy relating to risk. Risk assessment is also called security risk analysis.

Technical Definitions:
Risk Analysis: A risk analysis involves identifying the most probable threats to an organization and analyzing the organization's related vulnerabilities to these threats.

Risk Assessment: A risk assessment involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats to the organization.

Business Impact Analysis: A business impact analysis involves identifying the critical business functions within the organization and determining the impact of not performing a business function beyond the maximum acceptable outage. Types of criteria that can be used to evaluate the impact include customer service, internal operations, legal/statutory, and financial.

Risks for a software product can be categorized into various types. Some of them are:

Product Size Risks:
The following risk item issues identify some generic risks associated with product size:
 Estimated size of the product, and confidence in the estimated size?
 Size of the database created or used by the product?
 Number of users of the product?
 Number of projected changes to the requirements for the product?
Risk will be high when a large deviation occurs between expected values and previous experience. All the expected information must be compared to previous experience when analyzing risk.
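The closing remark above, that risk is high when estimates deviate strongly from previous experience, can be made mechanical. The threshold below (25%) is an assumed value for illustration:

```python
def size_risk(estimate: float, historical: float, tolerance: float = 0.25) -> str:
    """Flag high risk when the estimate deviates from historical
    experience by more than `tolerance` (25% here, an assumed threshold)."""
    deviation = abs(estimate - historical) / historical
    return "high" if deviation > tolerance else "acceptable"

# Historical projects averaged ~40 KLOC; this one is estimated at 65 KLOC.
assert size_risk(65, 40) == "high"
assert size_risk(45, 40) == "acceptable"
```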

Business Impact Risks:
The following risk item issues identify some generic risks associated with business impact:

 Effect of this product on company revenue?
 Reasonableness of the delivery deadline?
 Number of customers who will use this product, and the consistency of their needs relative to the product?
 Number of other products/systems with which this product must be interoperable?
 Amount and quality of product documentation that must be produced and delivered to the customer?
 Costs associated with late delivery or a defective product?
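Checklist answers like these are often turned into a quantitative ranking. A common convention in software risk management (not something this checklist prescribes) is risk exposure = probability of the risk occurring times the cost if it does:

```python
# Hypothetical risk register; the items, probabilities, and losses
# are invented purely for illustration.
risks = [
    # (risk item, probability, estimated loss in person-days)
    ("Delivery deadline slips",             0.40, 25),
    ("Interoperability work underestimated", 0.10, 60),
    ("Documentation effort underestimated",  0.25, 10),
]

# Risk exposure = probability * loss; rank so that the largest
# exposures receive management attention first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, loss in ranked:
    print(f"{name}: exposure = {p * loss:.1f} person-days")
```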

Customer-Related Risks:
Different customers have different needs, and different personalities. Some customers accept what is delivered; others complain about the quality of the product. Some customers may have a very good association with the product and the producer, while others may not. A bad customer represents a significant threat to the project plan and a substantial risk for the project manager. The following risk item checklist identifies generic risks associated with different customers:
 Have you worked with the customer in the past?
 Does the customer have a solid idea of what is required?
 Will the customer agree to spend time in formal requirements gathering meetings to identify project scope?
 Is the customer willing to participate in reviews?
 Is the customer technically sophisticated in the product area?
 Does the customer understand the software engineering process?

Process Risks:
If the software engineering process is ill-defined, or if analysis, design, and testing are not conducted in a planned fashion, then risks for the product are high.
 Has your organization developed a written description of the software process to be used on this project?
 Are the team members following the software process as it is documented?
 Are third-party coders following a specific software process, and is there a procedure for tracking their performance?

 Are formal technical reviews done regularly by both the development and testing teams?
 Are the results of each formal technical review documented, including defects found and resources used?
 Is configuration management used to maintain consistency among system/software requirements, design, code, and test cases?
 Is a mechanism used for controlling changes to customer requirements that impact the software?

Technical Issues:
 Are specific methods used for software analysis?
 Are specific conventions for code documentation defined and used?
 Are any specific methods used for test case design?
 Are software tools used to support planning and tracking activities?
 Are configuration management software tools used to control and track change activity throughout the software process?
 Are tools used to create software prototypes?
 Are software tools used to support the testing process?
 Are software tools used to support the production and management of documentation?
 Are quality metrics collected for all software projects?
 Are productivity metrics collected for all software projects?

Technology Risk:
 Is the technology to be built new to your organization?
 Does the software interface with new hardware configurations?
 Does the software to be built interface with a database system whose function and performance have not been proven in this application area?
 Is a specialized user interface demanded by product requirements?
 Do requirements demand the use of new analysis, design or testing methods?
 Do requirements put excessive performance constraints on the product?

Development Environment Risks:
 Is a software project and process management tool available?
 Are tools for analysis and design available?
 Do analysis and design tools deliver methods that are appropriate for the product to be built?
 Are compilers or code generators available and appropriate for the product to be built?
 Are testing tools available and appropriate for the product to be built?
 Are software configuration management tools available?
 Does the environment make use of a database or repository?
 Are all software tools integrated with one another?
 Have members of the project team received training in each of the tools?

Risks Associated with Staff Size and Experience:
 Are the best people available, and are there enough of them for the project?
 Do the people have the right combination of skills?
 Is the staff committed for the entire duration of the project?
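The checklists above lend themselves to simple tallying: count the unfavorable answers and translate the proportion into a coarse risk level. As a minimal sketch (the function names and thresholds are illustrative assumptions, not part of any standard):

```python
# Hypothetical sketch: scoring a project-risk checklist.
# Each unfavorable answer (False) raises the score; the thresholds
# used in risk_level() are illustrative only.

def risk_score(answers):
    """answers: dict mapping checklist question -> True (favorable) or False."""
    total = len(answers)
    unfavorable = sum(1 for ok in answers.values() if not ok)
    return unfavorable / total if total else 0.0

def risk_level(score):
    # Illustrative cut-offs for low / moderate / high risk.
    if score < 0.25:
        return "low"
    if score < 0.5:
        return "moderate"
    return "high"

answers = {
    "Worked with the customer before": True,
    "Customer knows what is required": False,
    "Customer will attend requirements meetings": True,
    "Documented software process is followed": False,
}
print(risk_level(risk_score(answers)))  # 2 of 4 unfavorable -> 0.5 -> "high"
```

A real risk assessment would weight questions by impact rather than counting them equally; the even weighting here just keeps the sketch readable.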

Metrics Used In Testing
In this tutorial you will learn about the metrics used in testing: the product quality measures, namely 1. Customer satisfaction index, 2. Delivered defect quantities, 3. Responsiveness (turnaround time) to users, 4. Product volatility, 5. Defect ratios, 6. Defect removal efficiency, 7. Complexity of delivered product, 8. Test coverage, 9. Cost of defects, 10. Costs of quality activities, 11. Re-work, and 12. Reliability, followed by metrics for evaluating application system testing.

The Product Quality Measures:
1. Customer satisfaction index
This index is surveyed before product delivery and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:
 Number of system enhancement requests per year
 Number of maintenance fix requests per year
 User friendliness: call volume to customer service hotline
 User friendliness: training time per new user
 Number of product recalls or fix releases (software vendors)

 Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or on an ongoing basis (per year of operation), by level of severity and by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
 Turnaround time for defect fixes, by level of severity
 Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility
 Ratio of maintenance fixes (to repair the system and bring it into compliance with specifications) vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios
 Defects found after product delivery per function point
 Defects found after product delivery per LOC
 Pre-delivery defects: annual post-delivery defects
 Defects per function point of the system modifications

6. Defect removal efficiency
 Number of post-release defects (found by clients in field operation), categorized by level of severity
 Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
 All defects include defects found internally plus those found externally (by customers) in the first year after product delivery

7. Complexity of delivered product
 McCabe's cyclomatic complexity counts across the system
 Halstead's measure
 Card's design complexity measures
 Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
 Breadth of functional coverage
 Percentage of paths, branches or conditions that were actually tested
 Percentage by criticality level: perceived level of risk of paths
 Ratio of the number of detected faults to the number of predicted faults

9. Cost of defects
 Business losses per defect that occurs during operation
 Business interruption costs; costs of work-arounds
 Lost sales and lost goodwill
 Litigation costs resulting from defects
 Annual maintenance cost (per function point)
 Annual operating cost (per function point)
 Measurable damage to your boss's career

10. Costs of quality activities
 Costs of reviews, inspections and preventive measures
 Costs of test planning and preparation
 Costs of test execution, defect tracking, version and change control
 Costs of diagnostics, debugging and fixing
 Costs of tools and tool support
 Costs of test case library maintenance
 Costs of testing & QA education associated with the product
 Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work
 Re-work effort (hours, as a percentage of the original coding hours)
 Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
 Re-worked software components (as a percentage of the total delivered components)

12. Reliability

 Availability (percentage of time a system is available, versus the time the system needs to be available)
 Mean time between failures (MTBF)
 Mean time to repair (MTTR)
 Reliability ratio (MTBF / MTTR)
 Number of product recalls or fix releases
 Number of production re-runs as a ratio of production runs
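Several of these product quality measures are simple ratios. A minimal sketch of three of them (the function names and sample figures are illustrative; the formulas follow the definitions above):

```python
# Illustrative implementations of three product quality measures.

def defect_removal_efficiency(internal_defects, external_defects):
    """Defects found internally before release, as a fraction of all
    defects (internal plus those customers find in the first year)."""
    total = internal_defects + external_defects
    return internal_defects / total if total else 0.0

def availability(uptime_hours, required_hours):
    """Percentage of time the system is available versus needed."""
    return uptime_hours / required_hours * 100

def reliability_ratio(mtbf_hours, mttr_hours):
    """MTBF / MTTR -- higher means failures are rarer relative to
    how long they take to fix."""
    return mtbf_hours / mttr_hours

print(defect_removal_efficiency(90, 10))  # 0.9
print(availability(990, 1000))            # 99.0
print(reliability_ratio(500, 4))          # 125.0
```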

Metrics for Evaluating Application System Testing:
Metric = Formula (KLOC = thousand lines of code; FP = function points)
 Test coverage = Number of units (KLOC/FP) tested / total size of the system
 Number of tests per unit size = Number of test cases per KLOC/FP
 Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
 Defects per size = Defects detected / system size
 Test cost (in %) = Cost of testing / total cost * 100
 Cost to locate defect = Cost of testing / number of defects located
 Achieving budget = Actual cost of testing / budgeted cost of testing
 Defects detected in testing = Defects detected in testing / total system defects
 Defects detected in production = Defects detected in production / system size
 Quality of testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100
 Effectiveness of testing to business = Loss due to problems / total resources processed by the system
 System complaints = Number of third party complaints / number of transactions processed
 Scale of ten = Assessment of testing by giving a rating on a scale of 1 to 10
 Source code analysis = Number of source code statements changed / total number of tests

Effort productivity:
 Test planning productivity = No. of test cases designed / actual effort for design and documentation
 Test execution productivity = No. of test cycles executed / actual effort for testing
