MANUAL TESTING STUDY MATERIAL:
Tutorial 1 - Software Testing - Introduction - Importance (02:02)
Software testing is a process used to identify the correctness, completeness, and quality of
developed computer software. It includes a set of activities conducted with the intent of finding
errors in the software so that they can be corrected before the product is released to the end users.
In simple words, software testing is an activity to check whether the actual results match
the expected results and to ensure that the software system is defect free.
Why is testing important?
This is a China Airlines Airbus A300 crashing due to a software bug on April 26, 1994, killing 264 people.
Software bugs can potentially cause monetary and human loss; history is full of such examples.
In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and
delivered lethal radiation doses to patients, leaving 3 people dead and critically injuring 3 others.
In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the
costliest accident in history.
In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S.
bank to be credited with 920 million US dollars.
As you see, testing is important because software bugs could be expensive or even dangerous.
As Paul Ehrlich puts it - "To err is human, but to really foul things up you need a computer."
Tutorial 2 - Seven Fundamental Principles of Testing (05:18) (Must Watch)
Consider a scenario where you are moving a file from Folder A to Folder B. Think of all the
possible ways you can test this.
Apart from the usual scenarios, you can also test the following conditions
Trying to move the file when it is Open
You do not have the security rights to paste the file in Folder B
Folder B is on a shared drive and storage capacity is full.
Folder B already has a file with the same name; in fact, the list is endless.
Or suppose you have 15 input fields to test, each having 5 possible values; the number of
combinations to be tested would be 5^15.
If you were to test all the possible combinations, project EXECUTION TIME & COSTS would rise enormously.
Hence, one of the testing principles states that EXHAUSTIVE testing is not possible. Instead, we
need an optimal amount of testing based on the risk assessment of the application.
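The combinatorial explosion above is easy to check directly. The sketch below simply computes the count for 15 fields with 5 values each, and the time exhaustive execution would take at an assumed (illustrative) rate of one test per second:

```python
# Combinatorial explosion for 15 input fields with 5 possible values each
fields = 15
values_per_field = 5
combinations = values_per_field ** fields

print(combinations)  # 30517578125 (over 30 billion test cases)

# Even at one automated test per second, exhaustive execution is hopeless:
seconds_per_year = 60 * 60 * 24 * 365
print(round(combinations / seconds_per_year))  # roughly 968 years
```

That is why risk-based selection of test cases is needed instead of exhaustive execution.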
And the million dollar question is, how do you determine this risk?
To answer this, let's do an exercise.
In your opinion, which operation is most likely to cause your operating system to fail?
I am sure most of you would have guessed: opening 10 different applications all at the same time.
So if you were testing this operating system, you would realize that defects are likely to be
found in multi-tasking, which needs to be tested thoroughly. This brings us to our next
principle, Defect Clustering, which states that a small number of modules contain most of the defects.
By experience, you can identify such risky modules. But this approach has its own problems.
If the same tests are repeated over and over again , eventually the same test cases will no
longer find new bugs
This is another principle of testing, called the “Pesticide Paradox”.
To overcome this, the test cases need to be regularly reviewed & revised, adding new &
different test cases to help find more defects.
But even after all this sweat & hard work in testing, you can never claim your product is bug-free.
To drive home this point, let's see this video of the public launch of Windows 98.
Do you think a company like MICROSOFT would not have tested their OS thoroughly and would risk
their reputation just to see their OS crash during its public launch?
Hence, a testing principle states that Testing shows the presence of defects, i.e., software testing
reduces the probability of undiscovered defects remaining in the software, but even if no defects
are found, it is not a proof of correctness.
But what if you work extra hard, take all precautions, & make your software product 99%
bug-free, and the software still does not meet the needs & requirements of the clients?
This leads us to our next principle, which states that
Absence of Error is a Fallacy, i.e., finding and fixing defects does not help if the system
built is unusable and does not fulfill the user's needs & requirements.
To fix this problem, the next principle of testing states
Early Testing - Testing should start as early as possible in the Software Development Life
Cycle, so that any defects in the requirements or design phase are captured as well. More on this
principle in a later training tutorial.
And the last principle of testing states that Testing is context dependent, which basically
means that the way you test an e-commerce site will be different from the way you test a
commercial off-the-shelf application.
Summary of the Seven Testing Principles
Principle 1 Testing shows presence of defects
Principle 2 Exhaustive testing is impossible
Principle 3 Early Testing
Principle 4 Defect Clustering
Principle 5 Pesticide Paradox
Principle 6 Testing is context dependent
Principle 7 Absence of errors - fallacy
Tutorial 3 - SDLC & STLC (03:58)
Suppose you are assigned a task to develop custom software for a client.
Each block below represents a step required to develop the software.
Irrespective of your technical background, try and make an educated guess about the sequence
of steps you will follow, to achieve the task
The correct sequence would be:
Gather as much information as possible about the details & specifications of the desired
software from the client. This is nothing but the Requirements Gathering stage.
Plan the programming language like Java, PHP, .NET; the database like Oracle, MySQL, etc., which
would be suited for the project, as well as some high-level functions & architecture. This is the Design stage.
Actually code the software. This is the Build stage.
Next, you test the software to verify that it is built as per the specifications given by the
client. This is the Test stage.
Once your software product is ready, you may have to make some code changes to accommodate
enhancements requested by the client. This would be the Maintenance stage.
All these stages constitute the waterfall method of the software development life cycle. As you may
observe, testing in this model starts only after implementation is done.
But if you are working on a large project, where the systems are complex, it's easy to miss out
key details in the requirements phase itself. In such cases, an entirely wrong product will be
delivered to the client, and you will have to start afresh with the project.
Or, if you manage to note the requirements correctly but make serious mistakes in the design and
architecture of your software, you will have to redesign the entire software to correct the error.
Assessments of thousands of projects have shown that defects introduced during
requirements & design make up close to half of the total number of defects
Also, the cost of fixing a defect increases across the development life cycle. The earlier in the
life cycle a defect is detected, the cheaper it is to fix. As they say, "A stitch in time saves nine."
To address this concern, the V model of testing was developed, where for every phase in the
development life cycle there is a corresponding testing phase.
The left side of the model is Software Development Life Cycle - SDLC
The right side of the model is Software Test Life Cycle - STLC
The entire figure looks like a V , hence the name V - model
You will find a few stages different from the waterfall model.
These differences, along with the details of each testing phase, will be discussed in later tutorials.
Apart from the V model, there are iterative development models, where development is carried out
in phases, with each phase adding functionality to the software.
Each phase comprises its own independent set of development and testing activities.
Good examples of development life cycles following the iterative method are Rapid Application
Development and Agile Development.
Before we close this software testing training, a few pointers -
You must note that there are numerous development life cycle models. The development model
selected for a project depends on the aims and goals of that project.
Testing is not a stand-alone activity, and it has to adapt to the development model chosen for the project.
In any model, testing should be performed at all levels, i.e., right from requirements until maintenance.
Types of Testing
Tutorial 4 - Unit Testing (02:22)
Consider a scenario where you work for an IT outsourcing company as part of the testing team,
and your company is hired by a bank to develop an online banking application.
To understand the STLC , lets first quickly go through the SDLC
During the Requirement Analysis phase, after a series of meetings, the customer decides he wants
the following 5 functionalities in his system: Login on Valid Credentials, View Current Balance,
Deposit Money, Withdraw Money, Transfer Money to a 3rd party account.
Next, in the Functional Specification stage, the architecture, database, and operating
environment design are finalized.
Next, in the High Level Design stage, the application is broken down into modules & programs.
In the Detail Design stage, the pseudo code for the functions of each module is documented.
Then actual coding begins. This is the software development side of the V-model.
During all these phases, the tester is not sitting idle waiting for the coding to complete,
but is doing the corresponding testing activities.
Let's look at them one by one -
Unit Testing. It is also called Component Testing. It is performed on a standalone module to
check that it is developed correctly. For the Login module, typical unit test cases would be:
Check response for a valid login & password
Check response for an invalid login & password
Check response when the login id is empty and the login button is pressed
Developers do unit testing. In the practical world, developers are either reluctant to test their code
or do not have time to unit test. Many a time, much of the unit testing is done by testers.
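The three login unit test cases above can be sketched with Python's `unittest` module. The `validate_login` function and its credentials are hypothetical stand-ins for the banking application's Login module, not its real interface:

```python
import unittest

# Hypothetical login validator standing in for the Login module;
# the real application's API would differ.
VALID_CREDENTIALS = {"agent007": "s3cret"}

def validate_login(login_id, password):
    if not login_id:
        return "Login ID is required"
    if VALID_CREDENTIALS.get(login_id) == password:
        return "Login successful"
    return "Invalid login or password"

class LoginModuleUnitTest(unittest.TestCase):
    def test_valid_login_and_password(self):
        self.assertEqual(validate_login("agent007", "s3cret"), "Login successful")

    def test_invalid_login_and_password(self):
        self.assertEqual(validate_login("agent007", "wrong"), "Invalid login or password")

    def test_empty_login_id(self):
        self.assertEqual(validate_login("", "s3cret"), "Login ID is required")

if __name__ == "__main__":
    unittest.main(argv=["login_module_test"], exit=False)
```

Each test exercises the module in isolation, which is exactly what unit (component) testing means.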
Tutorial 5 - Integration Testing (03:24)
In this phase of testing, individual modules are combined and tested as a group.
Integration testing is carried out by testers. Data transfer between the modules is tested.
Consider this integration testing scenario. The customer is currently in the Current Balance module. His
balance is 1000. He navigates to the Transfer module and transfers 500 to a 3rd party account.
The customer navigates back to the Current Balance module, & now his latest balance should be 500.
The modules in the project are assigned to 5 different developers to reduce coding time.
Coder 2 is ready with the Current Balance module. Coder 5 is not ready with the Transfer module,
which is required to test your integration scenario. What do you do in such a situation?
One approach is to use Big Bang Integration Testing, where you wait for all modules to be
developed before you begin integration testing. The major disadvantage is that it increases project
execution time, since testers will be sitting idle until all modules are developed. Also, it becomes
difficult to trace the root cause of defects.
Alternatively, you can use the Incremental approach, where modules are checked for integration as
and when they are available.
Consider that the Transfer module is yet to be developed but the Current Balance module is ready.
You will create a Stub which will accept and give back data to the Current Balance module.
Note that this is not a complete implementation of the Transfer module, which will have lots
of checks, like: the 3rd party account # entered is correct, the amount to transfer should not be more
than the amount available in the account, and so on. It will just simulate the data transfer that takes
place between the two modules to facilitate testing.
On the contrary, if the Transfer module is ready but the Current Balance module is not developed, you
will create a Driver to simulate data transfer between the modules.
To increase the effectiveness of integration testing, you may use -
The Top-down approach, where higher-level modules are tested first. This technique requires the
creation of stubs.
The Bottom-up approach, where lower-level modules are tested first. This technique requires the
creation of drivers.
Other approaches would be Functional Increment & Sandwich, which is a combination of
top-down and bottom-up.
The choice of approach depends on the system architecture and the location of high-risk modules.
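The stub idea can be made concrete with a small sketch. All class and method names here are illustrative, not the real application's: the Current Balance module, which is ready, is wired to a stub that stands in for the unfinished Transfer module.

```python
class TransferModuleStub:
    """Stub for the not-yet-developed Transfer module. It performs none of
    the real checks (account number validity, sufficient funds, ...); it
    only simulates the data exchange so integration can be tested early."""
    def send(self, amount, account_number):
        return True  # always pretend the transfer succeeded

class CurrentBalanceModule:
    """The module that IS ready and under integration test."""
    def __init__(self, transfer_service, opening_balance):
        self.transfer_service = transfer_service
        self.balance = opening_balance

    def transfer(self, amount, account_number):
        # Data transfer between the two modules - the thing under test
        if self.transfer_service.send(amount, account_number):
            self.balance -= amount
        return self.balance

module = CurrentBalanceModule(TransferModuleStub(), opening_balance=1000)
print(module.transfer(500, "3RD-PARTY-001"))  # 500, matching the scenario above
</n```

A driver is the mirror image: a small piece of throwaway code that calls the module under test and feeds it data, used when the lower-level module is ready but its caller is not.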
Tutorial 6 - System - Acceptance Testing (01:30)
Unlike integration testing, which focuses on data transfer amongst modules, System Testing
checks complete end-to-end scenarios, the way a customer would use the system.
A good example of a test case in this phase would be - Log into the banking application, check the
current balance, transfer some money, log out.
Apart from functional requirements, NON-FUNCTIONAL requirements are also checked during system
testing. Non-functional requirements include performance and reliability. We will discuss non-
functional requirements in detail in a later tutorial.
Acceptance testing is usually done at the client location, by the client, once all the defects found
in the system testing phase are fixed.
The focus of acceptance testing is not to find defects but to check whether the system meets the
client's requirements, since this is the first time the client sees their requirements, which were plain
text, as an actual working system.
Acceptance Testing can be done in two ways
Alpha Testing – A small set of employees of the client (in our case, employees of the bank) will
use the system as the end user would.
Beta Testing – A small set of customers (in our case, bank account holders) will use the software
and recommend changes.
That’s all about the various testing levels in the V-model.
Tutorial 7 - Sanity - Smoke Testing (01:54)
Consider a scenario where, after fixing the defects found in integration testing, the system is made
available to the testing team for system testing.
You look at the initial screen, the system looks fine, and you delay system test execution to the next
day, since you have other critical testing requirements to attend to.
The next day, say you plan to execute the scenario login > View Balance > Transfer 500 > logout.
The deadline is 4 hours.
You begin executing the scenario: you enter a valid login id and password, click the login button, and
boom - you are taken to a blank screen with absolutely no links, no buttons, & nowhere for you to
proceed with the succeeding steps of your scenario.
This is not a figment of imagination but a very practical situation which could arise due to
developer negligence, time pressures, or test environment configuration & instability.
To fix this, the developer requires at least 5 hours, & the deadline would be missed.
In fact, none of your team members would be able to execute their respective scenarios,
since view balance is the starting point for performing any other operation, and the entire project
will be delayed.
Had you checked this yesterday itself, the system would have been fixed by now and you would be good to go.
To avoid such situations, sanity, also known as SMOKE, testing is done to check the critical
functionalities of the system before it is accepted for major testing. Sanity testing is quick and
non-exhaustive. The goal is not to find defects but to check the system's health.
Tutorial 8 - Maintenance - Regression Testing (01:09)
Suppose that in the Current Balance module, instead of just showing the current balance, the client
now wants customized reports based on the date & amount of transactions.
Obviously, any such change needs to be tested. Once deployed, testing any further system
changes, enhancements, or corrections forms part of Maintenance Testing.
Suppose that in our banking application your current balance is 2000.
Using the new enhancement, you check your balance for a year ago, which comes out to be 500.
You enter the Transfer module and try to transfer Rs 1000. In order to proceed, the Transfer
module checks the current balance.
Instead of sending the current balance, it sends the old balance of 500, and the transaction fails.
As you may observe, the code changes were in the Current Balance module only, but still the Transfer
module is affected. Regression testing is carried out to check that modifications in the software have
not caused unintended adverse side effects.
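A minimal regression check for this scenario might look like the sketch below. All names and amounts are illustrative; the point is that after the historical-report enhancement is added, the old transfer scenario is re-run to catch exactly the side effect described above.

```python
class Account:
    """Illustrative account with the new historical-report enhancement."""
    def __init__(self, balance):
        self._balance = balance
        self._history = {"1 year ago": 500}  # data for the new feature

    def current_balance(self):
        return self._balance  # must always return the LIVE balance

    def balance_on(self, when):
        return self._history[when]  # new customized-report feature

    def transfer(self, amount):
        # The Transfer module checks the current balance before proceeding
        if amount > self.current_balance():
            return "transaction failed"
        self._balance -= amount
        return "transaction successful"

# Regression suite: the enhancement must not leak stale balances into transfers
account = Account(balance=2000)
assert account.balance_on("1 year ago") == 500             # new feature works
assert account.transfer(1000) == "transaction successful"  # old behavior intact
assert account.current_balance() == 1000
print("regression suite passed")
```

If a code change made `transfer` consult the historical balance instead, the second assertion would fail, flagging the unintended side effect.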
Tutorial 9 - Non - Functional Testing (01:30)
Apart from functional testing, non-functional requirements like performance, usability, and load
factor are also important.
How many times have you seen long load-time messages while accessing an application? -
(pic in video) I am sure many times.
To address this issue , performance testing is carried out to check & fine tune system
response times. The goal of performance testing is to reduce response time to an acceptable level
Or you may have seen messages like - (pic in video). Hence, load testing is carried out to
check the system's performance at different loads, i.e., different numbers of users accessing the system.
Depending on the results and expected usage, more system resources may be added.
That’s all about Types of Testing.
In general, there are three testing types: 1) Functional 2) Non-Functional 3) Maintenance.
Under these types, you have multiple TESTING levels, but usually people call them testing types.
You may find some differences in this classification in different resources, but the general theme
remains the same.
This is not the complete list, as there are more than 150 types of testing, with more still being
added. No need to bother or worry; you will pick them up as you gain experience in the testing industry.
Also, note that not all testing types are applicable to all projects but depend on nature & scope
of the project. More on this in a later tutorial.
Test Case Development
Tutorial 10 - First Steps Test Case Development (01:30)
For a newbie, it's easy to assume that testing is executing various sections of code on an ad-hoc
basis and verifying the results. But in the real world, testing is a very formal activity and is
documented in detail.
The degree of formality depends on the type of application under test, the standards followed
by your organization, & the maturity of the development process.
The importance of documentation will be highlighted in the succeeding tutorials
For all hands-on exercises, we will be using the Flight Reservation application, which comes bundled
with the automation tool QTP.
To get this application, either install QTP or use the link given below.
Tutorials on this site for QTP & LoadRunner use Flight Reservation. Therefore, we have selected
Flight Reservation to reduce your learning curve while studying QTP & LoadRunner. Below, find the link
to the introduction video for the Flight Reservation application.
Tutorial 11 - Test Scenario (02:04)
Test Scenario - A scenario is any functionality that can be tested. It is also called a Test
Condition or Test Possibility.
For the Flight Reservation application, a few scenarios would be: 1) Check the Login
Functionality 2) Check that a New Order can be created 3) Check that an existing Order can be
opened 4) Check that a user can FAX an order 5) Check that the information displayed in the HELP
section is correct 6) Check that the information displayed in the About section, like version,
programmer name, copyright information, is correct
Apart from these six scenarios, try and list all the other possible scenarios for the application.
Pause the tutorial and complete the exercise.
I am sure you have identified many more, like Update Order, Delete Order, Check Reports,
Check Graphs, and so on. For the time being, let's stick to these six.
Next, we have already learned that exhaustive testing is not possible. Suppose you have time to
execute only 4 out of these 6 scenarios. Which two low-priority scenarios of these six will you
eliminate? Think - your time starts now.
I am sure most of you would have guessed scenarios 4 & 5, since they are not the core
functionality of the application. This is nothing but Test Prioritization.
Tutorial 12 - Test Case Specifications (03:56)
Now, consider the test scenario Check Login Functionality. There are many possible cases, like
Check response on entering valid Agent Name & Password, Check response on entering invalid Agent
Name & Password, Check response when Agent Name is empty & the Login button is pressed, and many more.
This is nothing but a Test Case. Test scenarios are rather vague and cover a wide range of
possibilities. Testing is all about being very specific. Hence, we need test cases.
Now just consider the test case Check response on entering valid Agent Name and Password.
It's obvious that this test case needs input values, viz. Agent Name & Password.
This is nothing but Test Data. Identifying test data can be time-consuming and may sometimes
require creating test data afresh. That is the reason it needs to be documented.
Before we proceed ahead, consider a quote from a witty man who said, "To ensure perfect aim,
shoot first and call whatever you hit the target." But if you do not live by this philosophy, which I am
sure most of you do not, then your test case must have an expected result.
For our test case, the expected result would be: Login should be successful.
If expected results are not documented, we may miss out on small differences in
calculated results which otherwise look OK.
Consider this example, where you are calculating monthly pay for an employee which involves
lots of calculations. The need for documenting expected results becomes obvious.
Suppose the author of the test case has left the organization, or is on vacation, or is sick
and off duty, or is very busy with other critical tasks, and you have recently been hired and have been
asked to execute the test case. Since you are new, it would certainly help to have the test steps
documented, which in this case would be: Launch application, Enter Agent Name, Enter Password,
Press the Login button.
You may think that for such simple test steps, documentation is not required.
But what if your test steps looked something like this? (pic in video) I think the need
becomes instantaneously obvious.
That apart, your test case may have a field like Pre-Condition, which specifies things that
must be in place before the test can run.
For our test case, a pre-condition would be: the Flight Reservation application should be installed,
which I am sure 50% of the people watching this tutorial do not have.
Also, your test case may include Post-Conditions, which specify anything that applies
after the test case completes.
For our test case, a post-condition would be: the time & date of login are stored in the database.
During test case execution, you will document the results observed in the Actual Results
column and may even attach some screenshots, and based on the results, give a PASS or FAIL status.
This entire table may be created in Word, Excel, or any other test management tool. That’s all
about Test Case Design.
Tutorial 13 - Test Basis (01:33)
Now, consider a scenario where the client sends in a request to add functionality to Flight
Reservation to allow sending an order via email. He also specifies the GUI fields and buttons he wants.
Even though the application is yet to be developed, try and develop a few test cases for this
requirement. A few test cases, amongst the many you could have thought of, would be -
Check response when a valid Email ID is entered and Send is pressed
Check response when an invalid Email ID is entered and Send is pressed
You may have also realized that to create test cases, you need to look at something to base
your tests on. This is nothing but the Test Basis.
This test basis could be the actual Application Under Test (AUT), or it may even be
experience, but most of the time, as in this case, it would be based on documents.
In fact, this is what happens during the different phases of the V-model, where test plans are
created using the corresponding documents, and once the code is ready, testing is done.
Tutorial 14 - Traceability Matrix (01:10)
Consider a scenario where the client changes the requirement, something quite usual in the
practical world, and adds a field, Recipient Name, to the functionality. So now you need to enter
both the email id and the name to send a mail.
Obviously, you will need to change your test cases to meet this new requirement.
But by now your test case suite is very large, and it is very difficult to trace the test cases
affected by the requirement change.
Instead, if the requirements were numbered and referenced in the test case suite, it
would have been very easy to track the test cases that are affected. This is nothing but Traceability.
The traceability matrix links a business requirement to its corresponding functional
requirement right up to the corresponding test cases.
If a Test Case fails, traceability helps determine the corresponding functionality easily .
It also helps ensure that all requirements are tested.
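In its simplest form, a traceability matrix is just a mapping from numbered requirements to the test cases that cover them. The requirement and test case IDs below are made up for illustration:

```python
# Illustrative traceability matrix: requirement ID -> covering test cases
traceability_matrix = {
    "REQ-01: Send order via email":       ["TC-101", "TC-102"],
    "REQ-02: Recipient Name field (new)": ["TC-103"],
    "REQ-03: Log email in audit trail":   [],
}

def impacted_test_cases(requirement):
    """When a requirement changes, which test cases must be revisited?"""
    return traceability_matrix.get(requirement, [])

def untested_requirements():
    """Helps ensure that ALL requirements have at least one test case."""
    return [req for req, tcs in traceability_matrix.items() if not tcs]

print(impacted_test_cases("REQ-02: Recipient Name field (new)"))  # ['TC-103']
print(untested_requirements())  # ['REQ-03: Log email in audit trail']
```

In practice the matrix lives in a spreadsheet or test management tool, but the lookups it supports are exactly these two.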
Tutorial 15 - Equivalence Partitioning & Boundary Value Analysis (03:01)
We have already learned that exhaustive testing is not possible due to time and budget constraints.
We need certain testing techniques to select, from the many possible test cases, the few with the most
likelihood of finding a defect.
Lets look into Equivalence Partitioning & Boundary Value Analysis testing techniques .
Equivalence Partitioning is a black box technique which can be applied to all levels of testing
like unit, integration, system, etc.
(A black box technique is one where the code is not visible to the tester.)
In Equivalence Partitioning , you divide set of test conditions into partitions that can be
considered the same.
To understand this better, let's consider the behavior of the tickets field in the Flight Reservation
application while booking a new flight. Ticket values 1 to 10 are considered valid & the ticket is booked.
Values 11 to 99 are considered invalid, and an error message "Only ten tickets may be ordered at
one time" is shown.
On entering values 100 and above, the ticket number defaults to a two-digit number.
On entering values 0 and below, the ticket number defaults to 1.
We cannot test all the possible values because, if we did, the number of test cases would be more
than 100. To address this problem, we use equivalence partitioning, where we divide the possible
values of tickets into groups or sets where the system behavior can be considered the same.
The divided sets are called Equivalence Partitions or Equivalence Classes. We then pick
only one value from each partition for testing.
The hypothesis behind this technique is that if one condition/value in a partition passes all
others will also pass. Likewise , if one condition in a partition fails , all other conditions in that
partition will fail.
In Boundary Value Analysis, you test the boundaries between equivalence partitions.
In our earlier example, instead of checking one value from each partition, you will check the
values at the partition boundaries, like 0, 1, 10, 11, and so on.
As you may observe, you test values at both valid and invalid boundaries
Boundary Value Analysis is also called range checking.
Equivalence partitioning and boundary value analysis are closely related and can be used
together at all levels of testing.
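Both techniques can be sketched in a few lines. The partitions below follow the ticket-field behavior described above; the helper functions are illustrative, not part of any real tool:

```python
# Partitions for the tickets field: 1..10 valid, 11..99 invalid message
partitions = {
    "valid (ticket booked)":   range(1, 11),    # 1..10
    "invalid (error message)": range(11, 100),  # 11..99
}

def representative_value(partition):
    """Equivalence partitioning: one test value per partition suffices."""
    return partition[0]

def boundary_values(partition):
    """Boundary value analysis: test at and just outside each edge."""
    lo, hi = partition[0], partition[-1]
    return [lo - 1, lo, hi, hi + 1]

print(representative_value(partitions["valid (ticket booked)"]))  # 1
print(boundary_values(partitions["valid (ticket booked)"]))       # [0, 1, 10, 11]
```

Note how the boundary values of the valid partition (0, 1, 10, 11) automatically probe both sides of each boundary, which is where defects most often hide.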
Tutorial 16 - Decision Table Testing (02:02)
Decision Table Testing is a good way to deal with combinations of inputs which produce different results.
To understand this with an example, let's consider the behavior of the Flights button for different
combinations of Fly From & Fly To.
When both Fly From & Fly To are not set, the Flights icon is disabled. In the decision table, we
register the value False for both Fly From & Fly To, and the outcome, which is that the Flights button
will be disabled, i.e., FALSE.
Next, when Fly From is set but Fly To is not set, the Flights button is disabled. Correspondingly,
you register True for Fly From in the decision table, and the rest of the entries are False.
When Fly From is not set but Fly To is set, the Flights button is disabled, and you make the entries
in the decision table.
Lastly, only when both Fly To and Fly From are set is the Flights button enabled, and you make the
corresponding entry in the decision table.
If you observe, the outcomes for Rules 1, 2 & 3 remain the same, so you can select any one of
them, plus Rule 4, for your testing.
The significance of this technique becomes immediately clear as the number of inputs
increases. The number of possible combinations is given by 2^n, where n is the number of inputs.
For n = 10, which is very common in web-based testing with big input forms, the number of
combinations will be 1024. Obviously, you cannot test all of them, but you will choose a rich subset
of the possible combinations using the decision table testing technique.
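The Flights-button decision table above can be generated mechanically. This sketch (the rule itself is taken from the example; the code is illustrative) enumerates all 2^n input combinations with `itertools.product`:

```python
from itertools import product

def flights_button_enabled(fly_from_set, fly_to_set):
    # Business rule from the example: enabled only when BOTH are set
    return fly_from_set and fly_to_set

# Build the full decision table: 2^n rules for n boolean inputs
n_inputs = 2
decision_table = {
    rule: flights_button_enabled(*rule)
    for rule in product([False, True], repeat=n_inputs)
}

for rule, enabled in sorted(decision_table.items()):
    print(rule, "->", "enabled" if enabled else "disabled")

# 2^n grows fast: with n = 10 inputs there are already 1024 rules
assert len(decision_table) == 2 ** n_inputs
assert 2 ** 10 == 1024
```

Since rules 1-3 all yield "disabled", a tester would execute one of them plus rule 4, exactly as the text suggests.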
Tutorial 17 - State Transition Diagram (02:52)
You can use a State Table to determine invalid system transitions.
In a state table, all the valid states are listed on the left side of the table, and the events
that cause them along the top.
Each cell represents the state the system will move to when the corresponding event occurs.
For example, while in state S1, if you enter the correct password, you are taken to state S6.
Or, in case you enter an incorrect password, you are taken to state S3.
Likewise, you can determine all other states.
Two invalid transitions are highlighted using this method, which basically means: what happens
when you are already logged into the application, you open another instance of Flight Reservation,
and you enter a valid or invalid password for the same agent?
The system's response to such a scenario needs to be tested.
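A state table maps naturally to a dictionary keyed by (state, event). The S1/S3/S6 labels mirror the ones used above; the table itself is an illustrative reconstruction, not the tutorial's exact table:

```python
# Illustrative state transition table for the login flow
STATE_TABLE = {
    ("S1", "correct password"):   "S6",  # login screen -> logged in
    ("S1", "incorrect password"): "S3",  # login screen -> error shown
    ("S3", "correct password"):   "S6",  # retry succeeds
}

def next_state(state, event):
    """Transitions missing from the table are invalid system transitions
    and are exactly the cases worth testing, e.g. entering a password
    while the agent is already logged in (state S6)."""
    return STATE_TABLE.get((state, event), "INVALID")

print(next_state("S1", "correct password"))  # S6
print(next_state("S6", "correct password"))  # INVALID - a case to test
```

Enumerating every (state, event) pair against such a table is how the invalid transitions get discovered systematically rather than by luck.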
Tutorial 18 - Use Case Testing (01:18)
In a use case, an actor is represented by "A" and the system by "S".
First, we list the Main Success Scenario.
Consider the first step of an end-to-end scenario, where the actor enters the Agent Name and Password.
In the next step, the system will validate the password.
Next, if the password is correct, access is granted.
There can be extensions of this use case:
In case the password is not valid, the system will display a message and ask for a re-try, up to four times.
Or, if the password is not valid four times, the system will close the application.
Here we will test the main success scenario and one case of each extension.
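The main success scenario and both extensions can be sketched as one small function. The password value and function names are made up; the four-attempt limit follows the use case text above:

```python
MAX_ATTEMPTS = 4  # per the extension: four invalid tries close the app

def login_use_case(entered_passwords, valid_password="s3cret"):
    """entered_passwords: the passwords the actor (A) types, in order.
    The system (S) validates each one, as in the use case steps above."""
    for attempt, password in enumerate(entered_passwords[:MAX_ATTEMPTS], 1):
        if password == valid_password:
            return "access granted"      # main success scenario
        # extension: system displays a message and asks for a re-try
    return "application closed"          # extension: 4 invalid attempts

print(login_use_case(["s3cret"]))            # access granted
print(login_use_case(["a", "b", "s3cret"]))  # access granted
print(login_use_case(["a", "b", "c", "d"]))  # application closed
```

The three calls correspond to the success scenario and one test per extension, which is exactly the coverage the tutorial recommends.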
Tutorial 19 - Testing Review (05:39) (Must Watch)
To understand a review in detail, let's consider the same example: adding email functionality to the
Flight Reservation application, for which the Functional Design Document is prepared by the technical lead.
The Technical Lead approaches his Manager and requests to initiate a review.
The Manager will quickly go through the document and check whether the document is of
acceptable quality to request a review by other people. For example, in this case, he finds a few
spelling mistakes and asks the technical lead to correct them.
The Manager will send out a meeting request to all stakeholders with the meeting location
information, the date and time of the meeting, and the agenda for the meeting, and will also attach
the Functional Design Document itself. This is the planning stage.
The next stage is the Kick-Off Meeting. It is an optional step. The goal is to get everybody on the
same wavelength regarding the document under review, and it is beneficial for new or highly complex documents.
The next stage is the Preparation stage, where review meeting participants individually go through
the document to identify defects, comments, and questions to be asked during the review meeting.
This phase is necessary to ensure that during the meeting, participants focus on the subject at hand
instead of daydreaming. This is your exercise: for this Functional Design Document, think of the
details missing which will help you test this functionality. Pause the training and think!
The next stage is the actual review meeting. Here, the meeting participants are
assigned different roles to increase the effectiveness of the meeting.
The Moderator is a role usually played by the manager, who leads the review meeting and
sets the agenda.
The creator of the document under review plays the role of the Author, who reads the
document and invites comments.
The task of the reviewer is to communicate any defects in the work product.
Suppose one of the reviewers says it would be nice to have a Reset button. The author agrees.
Another review comment is that there is no mention as to where in the menu the Email
functionality will appear. Again, the author agrees and accepts to make changes.
The meeting participant playing the role of the Scribe (also known as the Recorder) will note
down this defect or suggestion.
One young reviewer suggests the possibility of sharing a ticket via Facebook, Orkut, and so on.
The author strongly disagrees, and the reviewer and author enter into a heated argument. At
this juncture, the moderator intervenes and finds an amicable solution, which is to ask the client
whether he needs sharing via social networking.
Finally, all comments are discussed and the scribe gives a list of defects, comments and
suggestions that the author needs to incorporate into his work product.
The moderator then closes the review meeting. That's all there is to the meeting phase of the review.
The important roles here are: the Moderator, the Author, the Scribe/Recorder and the Reviewer.
The moderator and scribe can also play the role of reviewer, meaning they can give review
comments to the author.
The next phase of the review is the rework phase, where the author makes changes in
the document as per the action items of the meeting.
In the follow-up phase, the moderator circulates the reworked document to all review
participants and ensures that all changes have been incorporated satisfactorily.
This was a generic review.
Note that there are three types of reviews:
Walkthrough, which is led by the author
Technical Review, which is led by a trained moderator with no management participation
Inspection, which is led by a trained moderator and uses entry and exit criteria
All three types follow the same review process and the same stages as discussed.
Test Management & Control
Tutorial 20 - Estimation (02:42)
Let's do an exercise: for the Flight Reservation Application, prepare a Work Breakdown Structure
listing the various testing tasks, such as Check Login Functionality, Check New Order Functionality,
Check Fax Functionality and other similar functionality, and estimate the effort required to test each.
For example, login functionality can be tested in 2 hours. Likewise, prepare a list of all the tasks
and their corresponding effort. Pause the training tutorial and complete the exercise. I hope you made an
educated guess of the effort required.
This is the Bottom-Up strategy for test estimation. The technique is called bottom-up because,
based on the tasks at the lowest level of the work breakdown hierarchy, you estimate
the duration, dependencies and resources.
In the bottom-up strategy, estimates are not made by a single person but by all stakeholders,
individual contributors, experts and experienced staff members collectively. The idea is to draw
on the collaborative wisdom of the team members to arrive at accurate test estimates.
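The bottom-up calculation can be sketched in a few lines. The task names echo the exercise above, but the hours are invented for illustration:

```python
# Hypothetical bottom-up estimate for the Flight Reservation Application:
# each Work Breakdown Structure task carries its own effort guess
# (task names follow the exercise; the hours are illustrative assumptions).
wbs_estimates = {
    "Check Login Functionality": 2.0,       # hours
    "Check New Order Functionality": 4.0,
    "Check Fax Functionality": 3.0,
    "Check Reports": 2.5,
}

def total_effort(estimates):
    """Sum the lowest-level task estimates to get the project total."""
    return sum(estimates.values())

print(total_effort(wbs_estimates))  # 11.5
```

Each stakeholder would fill in their own numbers; the totals are then compared and reconciled to reach the team estimate.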
Now, since you have considerable experience with the Flight Reservation system, use this
experience to estimate the effort required for full functional testing of the website.
This site's functionality is identical to the Flight Reservation Application, except that it is web
based. Pause the tutorial and do the exercise now.
I hope that, based on your experience, you made a good estimate of the effort required to test the website.
This is the Top-Down approach to estimation, which is based on experience.
Another technique is to classify projects based on their size and complexity and then see
how long projects of a particular size and complexity have taken in the past.
Another approach is to determine the average effort per test case in past, similar projects
and then multiply by the estimated number of test cases in the current project to arrive at the total effort.
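The average-effort-per-test-case approach reduces to simple arithmetic; here is a sketch with invented numbers:

```python
# Illustrative "average effort per test case" estimate: take the historical
# average from similar projects and multiply by the planned test-case count.
# (All numbers below are made-up examples.)
past_effort_hours = [120.0, 150.0, 90.0]   # total effort on similar past projects
past_test_cases   = [300, 375, 225]        # test cases in those projects

avg_per_case = sum(past_effort_hours) / sum(past_test_cases)  # 0.4 hours per case

estimated_cases = 500                      # test cases planned for the current project
total_estimate = avg_per_case * estimated_cases
print(total_estimate)                      # 200.0 hours
```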
More sophisticated estimation models involve complex mathematical models. In practice,
the majority of projects use the top-down approach for estimation.
Test estimates can be affected by many factors like timing pressures , people factors ,
geographic distribution of the test team and so on
Tutorial 21 - Test Plan (03:08)
In earlier trainings we noted that there are more than 150 types of testing, and
you cannot possibly test your application for all the different types.
For the Flight Reservation system, you might want to test how the application
works when installed on different operating systems.
But testing how it works in different browsers does not make sense, since it is not
a web-based application.
Based on the above, you can make a list of in-scope testing types that will be tested, and
out-of-scope testing types that will not be executed for Flight Reservation.
A risk is any future event with a negative consequence. You need to identify the risks
associated with your project.
Risks are of two types: 1) Project Risks 2) Product Risks
An example of a project risk is a senior team member leaving the project abruptly.
Every risk is assigned a likelihood, i.e. the chance of it occurring, typically on a scale of 1 to
10. The impact of that risk is also rated on a scale of 1 to 10.
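The likelihood-and-impact scoring can be sketched as follows; the risk entries are illustrative, based on the examples in this tutorial:

```python
# Score each risk as likelihood x impact (both on the 1-10 scales described
# above) so that the highest-exposure risks get mitigation attention first.
risks = [
    {"risk": "Senior team member leaves abruptly",        "likelihood": 3, "impact": 8},
    {"risk": "App fails to install in test environment",  "likelihood": 5, "impact": 6},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest score first: plan mitigations for these risks first.
risks.sort(key=lambda r: r["score"], reverse=True)
print(risks[0]["risk"])  # App fails to install in test environment
```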
But just identifying the risk is not enough. You need to identify a mitigation. In this case the
mitigation could be knowledge transfer to other team members and having a buffer tester in place.
The second type of risk is the product risk. An example of a product risk would be the Flight
Reservation system not installing in the test environment.
The mitigation in this case would be conducting smoke or sanity testing. Accordingly, you will
change your scope items to include sanity testing.
This is the risk-based strategy of testing. There are many other testing strategies to help you
select testing types for your application under test.
Most of the time, your out-of-scope list will not contain out-of-context testing types but rather
in-context testing types excluded due to the test strategy chosen, budget and timing
considerations. So, for example, if timing considerations do not permit performance testing, it will
move from the in-scope to the out-of-scope list.
That apart, a test plan will contain information about the test estimates, test team,
schedule, and so on.
A test plan helps monitor the progress of the various testing activities and helps take
controlling action in case of any deviations from the planned activities. That's a brief overview of
how to create a test plan.
Tutorial 22 - Defects (02:42)
While executing test cases you may find that actual results vary from the expected results.
This is nothing but a defect, also called an incident, bug, problem or issue.
If you find a defect, what information would you convey to a developer to help him
understand the defect? Pause the training and think. Your bug report should contain the following:
Defect_ID – Unique identification number for the defect.
Defect Description – Detailed description of the defect including information about the module
in which defect was found.
Version – Version of the application in which defect was found.
Steps – Detailed steps, along with screenshots, with which the developer can reproduce the defect.
Date Raised – Date when the defect is raised
Reference – References to documents such as requirements, design, architecture, or even
screenshots of the error, to help understand the defect
Detected By – Name/ID of the tester who raised the defect
Status – Status of the defect , more on this later
Fixed by – Name/ID of the developer who fixed it
Date Closed – Date when the defect is closed
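The fields above map naturally onto a simple data structure; here is a minimal sketch (the field values are invented examples):

```python
from dataclasses import dataclass
from typing import Optional

# Minimal representation of the bug-report fields listed above.
@dataclass
class BugReport:
    defect_id: str
    description: str
    version: str
    steps: str
    date_raised: str
    detected_by: str
    status: str = "New"               # every new defect starts as New
    fixed_by: Optional[str] = None    # filled in once a developer fixes it
    date_closed: Optional[str] = None # filled in once the defect is closed

bug = BugReport(
    defect_id="FR-101",
    description="Fax order crashes when the fax number is left blank",
    version="1.0",
    steps="1. Open Fax Order 2. Leave fax number blank 3. Click Send",
    date_raised="2024-01-15",
    detected_by="tester01",
)
print(bug.status)  # New
```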
A sample bug report is provided for your reference. Apart from this, your bug report will also include:
Severity, which describes the impact of the defect on the application.
Priority, which is related to defect-fixing urgency.
Severity and priority can each be High/Medium/Low, based respectively on the impact of the
defect and the urgency with which it should be fixed.
A defect could have a very low severity but a high priority.
For example, if there is an error in the text of the logo of the Flight Reservation application, its
severity is low, since it can be fixed very easily and does not affect any functionality of the system.
But it needs to be fixed at high priority, since you do not want to ship your product with an incorrect logo.
Likewise, a defect could be high severity but low priority.
Suppose there is a problem with the Email functionality of Flight Reservation. This defect has high
severity since it causes the application to crash, but the functionality is scheduled for release in the next
cycle, which makes it a low priority.
Tutorial 23 - Defect Life Cycle (02:18)
From discovery to resolution, a defect moves through a definite lifecycle called the defect life cycle.
Let's look into it. Suppose a tester finds a defect. The defect is assigned the status New.
The defect is assigned to the development project manager, who will analyze the defect and
check whether it is a valid defect.
Consider that, in the Flight Reservation application, the only valid password is "mercury". But you
test the application with some random password, which causes a logon failure, and report it as a defect.
Such defects, caused by corrupted test data, misconfigurations in the test environment,
invalid expected results, etc., are assigned the status Rejected.
If the defect is valid, it is next checked for scope. Suppose you find a problem with the
email functionality, but it is not part of the current release. Such defects are postponed.
Next, the manager checks whether a similar defect was raised earlier. If yes, the defect is assigned
the status Duplicate.
If not, the defect is assigned to a developer, who starts fixing the code. During this stage the defect is
assigned the status In Progress.
Once the code is fixed, the defect is assigned the status Fixed.
Next, the tester will re-test the fix. If the test case passes, the defect is closed.
If the test case fails again, the defect is re-opened and assigned back to the developer.
Consider a situation where, during the first release of Flight Reservation, a defect was found in
Fax Order, which was fixed and assigned the status Closed.
During the second (upgrade) release the same defect resurfaced. In such cases, a closed
defect will be re-opened. That's all there is to the bug life cycle.
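The life cycle above can be sketched as a status transition table:

```python
# Defect life cycle as a transition table: each status maps to the statuses
# it may legally move to, following the flow described above.
TRANSITIONS = {
    "new":         {"rejected", "postponed", "duplicate", "in-progress"},
    "in-progress": {"fixed"},
    "fixed":       {"closed", "re-opened"},
    "re-opened":   {"in-progress"},
    "closed":      {"re-opened"},  # a closed defect can resurface in a later release
    "rejected":    set(),
    "postponed":   set(),
    "duplicate":   set(),
}

def can_move(current, new):
    """Return True if the defect may legally move from current to new."""
    return new in TRANSITIONS.get(current, set())

print(can_move("fixed", "closed"))      # True
print(can_move("closed", "re-opened"))  # True  (the Fax Order scenario above)
print(can_move("rejected", "fixed"))    # False (rejected defects are terminal)
```

A defect-tracking tool enforces exactly this kind of table so that, for example, a rejected defect cannot silently jump to Fixed.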
Tutorial 24 - Testing Tools
Type of Tool: Test Management Tools
Key Features: management of tests; scheduling of tests; management of testing activities; interfaces to other testing tools; traceability.
Example: Quality Center

Type of Tool: Test Execution Tools
Key Features: storing an expected result in the form of a screen or GUI object and comparing it with the run-time screen or object; executing tests from stored scripts; logging test results; sending a test summary to the test management tool; access to data files for use as test data.
Example: QTP

Type of Tool: Performance Measurement Tools
Key Features: ability to simulate high user load on the application under test; ability to create diverse load conditions; support for the majority of protocols; interfaces to other tools to interpret the performance logs.
Example: Loadrunner

Type of Tool: Requirements Management Tools
Key Features: storing requirements; identifying undefined, missing or to-be-defined requirements; traceability of requirements; interfacing with test management tools; requirements coverage.
Example: Vector

Type of Tool: Configuration Management Tools
Key Features: information about versions and builds of the software and testware; build and release management; access control (check in and check out).
Example: SourceAnywhere

Type of Tool: Review Tools
Key Features: sorting and storing review comments; keeping track of review comments, including defects; recording review comment status (Pass, Pass with corrections, ...).
Example: InView

Type of Tool: Static Analysis Tools
Key Features: calculating cyclomatic complexity; enforcing coding standards; analyzing structure and dependencies; helping in understanding code.
Example: PMD

Type of Tool: Modeling Tools
Key Features: identifying inconsistencies or defects in models; helping prioritize tests in accordance with the model; predicting system response under various levels of load; using UML to help understand system functions.
Example: Altova; ER

Type of Tool: Test Data Preparation Tools
Key Features: extracting selected data records from files or databases; data anonymization; creating new records populated with random data; creating a large number of similar records from a template.
Example: Clone & Test

Type of Tool: Test Harness / Unit Test Framework Tools
Key Features: supplying inputs to, or receiving outputs from, the software under test; recording pass/fail status; storing tests; support for debugging; generating stubs and drivers.
Example: Junit

Type of Tool: Coverage Measurement Tools
Key Features: identifying coverage items; identifying items which are not exercised; identifying the test inputs needed to exercise them; code coverage measurement.
Example: CodeCover

Type of Tool: Security Tools
Key Features: identifying viruses; identifying denial-of-service attacks; simulating various types of external attacks; finding weaknesses in passwords for files; probing for open ports or externally visible points of attack.
Example: Fortify
Tutorial 25 - Web Application Testing - A complete Guide
Web Application Testing Checklist:
Some or all of the following testing types may be performed, depending on your web testing requirements.
1. Functionality Testing :
This is used to check whether your product works as per the specifications you intended for it, as well as the
functional requirements you charted out in your development documentation. Testing activities include:
Test all links in your webpages are working correctly and make sure there are no broken links. Links to
be checked will include -
Test that forms are working as expected. This will include:
Scripting checks on the form are working as expected; for example, if a user does not fill a
mandatory field in a form, an error message is shown.
Check that default values are being populated.
Once submitted, the data in the forms is submitted to a live database or is linked to a
working email address.
Forms are optimally formatted for better readability.
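The mandatory-field scripting check above can be sketched as a small validator; the field names are invented for illustration:

```python
# Illustrative version of the "mandatory field" check a form's scripting
# should perform: report an error message for every empty mandatory field.
def validate_form(data, mandatory_fields):
    """Return a list of error messages for missing mandatory fields."""
    errors = []
    for f in mandatory_fields:
        if not data.get(f, "").strip():
            errors.append(f"'{f}' is a mandatory field")
    return errors

# A submission missing the email field should produce an error message.
print(validate_form({"name": "Ann", "email": ""}, ["name", "email"]))
# ["'email' is a mandatory field"]
```

A form test would submit such incomplete data and verify that the application shows the corresponding message instead of accepting the submission.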
Test that cookies are working as expected. Cookies are small files used by websites primarily to remember
active user sessions, so you do not have to log in every time you visit a website. Cookie testing will include:
Testing that cookies (sessions) are deleted either when the cache is cleared or when they reach their expiry.
Delete cookies (sessions) and test that login credentials are asked for when you next visit the site.
Test your HTML and CSS to ensure that search engines can crawl your site easily. This will include:
Checking for syntax errors
Readable color schemas
Standards compliance: ensure that standards such as W3C, OASIS, IETF, ISO, ECMA, or WS-I are followed.
Test the business workflow. This will include:
Testing your end-to-end workflow/business scenarios, which take the user through a series
of webpages to complete.
Test negative scenarios as well, such that when a user executes an unexpected step,
an appropriate error message or help is shown in your web application.
Tools that can be used: QTP , IBM Rational , Selenium
2. Usability Testing:
Usability testing has now become a vital part of any web-based project. It can be carried out by
testers like you or by a small focus group similar to the target audience of the web application.
Test the site Navigation:
Menus , buttons or Links to different pages on your site should be easily visible and consistent
on all webpages
Test the Content:
Content should be legible with no spelling or grammatical errors.
Images, if present, should contain "alt" text.
Tools that can be used: Chalkmark, Clicktale, Clixpy and Feedback Army
3. Interface Testing:
Three areas to be tested here are: Application, Web Server and Database Server.
Application: Test requests are sent correctly to the Database and output at the client side is
displayed correctly. Errors if any must be caught by the application and must be only shown to the
administrator and not the end user.
Web Server: Test Web server is handling all application requests without any service denial.
Database Server: Make sure queries sent to the database give expected results.
Test system response when connection between the three layers (Application, Web and
Database) can not be established and appropriate message is shown to the end user.
Tools that can be used: AlertFox,Ranorex
4. Database Testing:
The database is one critical component of your web application, and stress must be laid on testing it
thoroughly. Testing activities will include:
Test if any errors are shown while executing queries
Data Integrity is maintained while creating , updating or deleting data in database.
Check response time of queries and fine tune them if necessary.
Test data retrieved from your database is shown accurately in your web application
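The database checks above can be sketched against an in-memory SQLite database; the table and data are invented for illustration:

```python
import sqlite3
import time

# Verify data integrity after an insert and measure a query's response time,
# using an in-memory SQLite database as a stand-in for the real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT NOT NULL)")
conn.execute("INSERT INTO orders (customer) VALUES (?)", ("John Smith",))
conn.commit()

# Integrity check: the row we created must come back exactly as stored.
row = conn.execute("SELECT customer FROM orders WHERE id = 1").fetchone()
assert row == ("John Smith",)

# Response-time check: time the query and compare against a budget.
start = time.perf_counter()
conn.execute("SELECT * FROM orders").fetchall()
elapsed = time.perf_counter() - start
print(elapsed < 1.0)  # True -- well within a 1-second budget
```

Against a production database the same pattern applies, with the connection pointed at the real server and realistic data volumes.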
Tools that can be used: QTP
5. Compatibility Testing:
Compatibility tests ensure that your web application displays correctly across different devices. This will include:
Browser Compatibility Test: The same website may display differently in different browsers. You need to
test that your web application displays and functions correctly across browsers and that
authentication is working fine. You may also check for Mobile Browser Compatibility.
OS Compatibility: The rendering of web elements like buttons, text fields, etc. changes with the operating system.
Make sure your website works fine for various combinations of operating systems such as Windows,
Linux and Mac, and browsers such as Firefox, Internet Explorer, Safari, etc.
Tools that can be used: NetMechanic
6. Performance Testing:
This will ensure your site works under all loads. Testing activities will include, but are not limited to:
Website application response times at different connection speeds
Load test your web application to determine its behavior under normal and peak loads
Stress test your web site to determine its breaking point when pushed beyond normal loads
Test that if a crash occurs due to peak load, the site recovers gracefully from such an event
Make sure optimization techniques like gzip compression and browser- and server-side caching are
enabled to reduce load times
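The load and response-time checks above can be sketched with a tiny harness. The request handler below is a stub standing in for real HTTP calls to the application under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Fire N concurrent "requests" at a stubbed handler and report the worst
# response time. In a real load test the stub would be an HTTP call to the
# application under test, and N would be scaled to normal and peak loads.
def fake_request(_):
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for server processing time
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    timings = list(pool.map(fake_request, range(50)))

print(max(timings) < 1.0)  # every simulated request stayed under a 1-second budget
```

Dedicated tools such as Loadrunner or JMeter do the same thing at much larger scale and add protocol support, ramp-up schedules and reporting.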
Tools that can be used: Loadrunner, JMeter
7. Security testing:
Security testing is vital for e-commerce websites that store sensitive customer information like credit
cards. Testing activities will include:
Test unauthorized access to secure pages should not be permitted
Restricted files should not be downloadable without appropriate access
Check that sessions are automatically killed after prolonged user inactivity
When SSL certificates are in use, the website should redirect to encrypted SSL pages.
Tools that can be used: Babel Enterprise, BFBTester and CROSS
8. Crowd Testing:
You will select a large number of people (a crowd) to execute tests which would otherwise have been
executed by a select group of people in the company. Crowdsourced testing is an interesting,
up-and-coming concept that helps unravel many otherwise-unnoticed defects.
Tutorial 26 - Software Testing Types - An Exhaustive List of 100 Testing Types with Definition
1. Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its
acceptance criteria and to enable the customer to determine whether or not to accept the system. It is
usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product for people
with disabilities (deaf, blind, mentally disabled, etc.). The evaluation process is conducted by persons with disabilities.
3. Active Testing: Type of testing consisting in introducing test data and analyzing the execution
results. It is usually conducted by the testing teams.
4. Agile Testing: Software testing practice that follows the principles of the agile manifesto,
emphasizing testing from the perspective of customers who will utilize the system. It is usually
performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The
evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to 'break'
the system by randomly trying the system's functionality. It is performed by the testing teams.
7. Alpha Testing: Type of testing a software product or system conducted at the developer's site.
Usually it is performed by the end user.
8. Assertion Testing: Type of testing consisting in verifying if the conditions confirm the product
requirements. It is performed by the testing teams.
9. API Testing: Testing technique similar to unit testing in that it targets the code level. API Testing
differs from unit testing in that it is typically a QA task and not a developer task.
10. All-pairs Testing: Combinatorial testing method that tests all possible discrete combinations of
input parameters. It is performed by the testing teams.
11. Automated Testing: Testing technique that uses automation testing tools to control the
environment set-up, test execution and results reporting. It is performed by a computer and is used
inside the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity measure of a
procedural design and uses this as a guide for defining a basic set of execution paths. It is used by
testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the behavior of the developed
software with older versions of the test environment. It is performed by testing teams.
14. Beta Testing: Final testing before releasing application for commercial purpose. It is typically done
by end-users or others.
15. Benchmark Testing: Testing technique that uses representative sets of programs and data
designed to evaluate the performance of computer hardware and software in a given configuration. It
is performed by testing teams.
16. Big Bang Integration Testing: Testing technique which integrates individual program modules only
when everything is ready. It is performed by the testing teams.
17. Binary Portability Testing: Technique that tests an executable application for portability across
system platforms and environments, usually for conformation to an ABI specification. It is performed by
the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are designed to include
representatives of boundary values. It is performed by the QA testing teams.
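As an illustration of how boundary values are picked, assume an input field that accepts values from 1 to 100 (an invented example): test just below, at, and just above each boundary.

```python
# Derive the classic boundary-value test inputs for an inclusive range:
# one value below, at, and above each boundary.
def boundary_values(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```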
19. Bottom Up Integration Testing: In bottom-up integration testing, modules at the lowest level are
developed first, and other modules going toward the 'main' program are integrated and tested one
at a time. It is usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program source code are tested at
least once. This is done by the developer.
21. Breadth Testing: A test suite that exercises the full functionality of a product but does not test
features in detail. It is performed by testing teams.
22. Black box Testing: A method of software testing that verifies the functionality of an application
without having specific knowledge of the application's code/internal structure. Tests are based on
requirements and functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such as xUnit) that allow
the execution of unit tests to determine whether various sections of the code are acting as expected
under various circumstances. It is performed by the development teams.
24. Compatibility Testing: Testing technique that validates how well a software performs in a
particular hardware/software/operating system/network environment. It is performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product strengths and weaknesses
with previous versions or other similar products. Can be performed by tester, developers, product
managers or product owners.
26. Component Testing: Testing technique similar to unit testing but with a higher level of integration
- testing is done in the context of the application instead of just directly testing a specific method. Can
be performed by testing or development teams.
27. Configuration Testing: Testing technique which determines minimal and optimal configuration of
hardware and software, and the effect of adding or modifying resources such as memory, disk drives
and CPU. Usually it is performed by the performance testing engineers.
28. Condition Coverage Testing: Type of software testing where each condition is executed by making
it true and false, in each of the ways at least once. It is typically done by the automation testing teams.
29. Compliance Testing: Type of testing which checks whether the system was developed in
accordance with standards, procedures and guidelines. It is usually performed by external companies
which offer "Certified OGC Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the
same application code, module or database records. It is usually done by performance engineers.
31. Conformance Testing: The process of testing that an implementation conforms to the specification
on which it is based. It is usually performed by testing teams.
32. Context Driven Testing: An Agile Testing technique that advocates continuous and creative
evaluation of testing opportunities in light of the potential information revealed and the value of that
information to the organization at a specific moment. It is usually performed by Agile testing teams.
33. Conversion Testing: Testing of programs or procedures used to convert data from existing systems
for use in replacement systems. It is usually performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each condition/decision is executed by
setting it on true/false. It is typically made by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the specimen's failure, in
order to understand a specimen's structural performance or material behaviour under different loads. It
is usually performed by QA teams.
36. Dependency Testing: Testing type which examines an application's requirements for pre-existing
software, initial states and configuration in order to maintain proper functionality. It is usually
performed by testing teams.
37. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic
behavior of code. It is typically performed by testing teams.
38. Domain Testing: White box testing technique which checks that the program accepts
only valid input. It is usually done by software development teams and occasionally by automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability of the system to
properly process erroneous transactions. It is usually performed by the testing teams.
40. End-to-end Testing: Similar to system testing, involves testing of a complete application
environment in a situation that mimics real-world use, such as interacting with a database, using
network communications, or interacting with other hardware, applications, or systems if appropriate.
It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or other problems that may
occur with prolonged execution. It is usually performed by performance engineers.
42. Exploratory Testing: Black box testing technique performed without planning and documentation.
It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides the input data of a
software unit into partitions of data from which test cases can be derived. it is usually performed by
the QA teams.
44. Fault injection Testing: Element of a comprehensive test strategy that enables the tester to
concentrate on the manner in which the application under test is able to handle exceptions. It is
performed by QA teams.
45. Formal verification Testing: The act of proving or disproving the correctness of intended
algorithms underlying a system with respect to a certain formal specification or property, using formal
methods of mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black box testing that bases its test cases on the specifications of the
software component under test. It is performed by testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or random data to the
inputs of a program - a special area of mutation testing. Fuzz testing is performed by testing teams.
48. Gorilla Testing: Software testing technique which focuses on heavily testing of one particular
module. It is performed by quality assurance teams, usually when running full testing.
49. Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a
piece of software against its specification but using some knowledge of its internal workings. It can be
performed by either development or testing teams.
50. Glass box Testing: Similar to white box testing, based on knowledge of the internal logic of an
application’s code. It is performed by development teams.
51. GUI software Testing: The process of testing a product that uses a graphical user interface, to
ensure it meets its written specifications. This is normally done by the testing teams.
52. Globalization Testing: Testing method that checks proper functionality of the product with any of
the culture/locale settings using every type of international input possible. It is performed by the testing teams.
53. Hybrid Integration Testing: Testing technique which combines top-down and bottom-up
integration techniques in order to leverage the benefits of both kinds of testing. It is usually performed by the testing teams.
54. Integration Testing: The phase in software testing in which individual software modules are
combined and tested as a group. It is usually conducted by testing teams.
55. Interface Testing: Testing conducted to evaluate whether systems or components pass data and
control correctly to one another. It is usually performed by both testing and development teams.
56. Install/uninstall Testing: Quality assurance work that focuses on what customers will need to do to
install and set up the new software successfully. It may involve full, partial or upgrades
install/uninstall processes and is typically done by the software testing engineer in conjunction with
the configuration manager.
57. Internationalization Testing: The process which ensures that product’s functionality is not broken
and all the messages are properly externalized when used in different languages and locale. It is
usually performed by the testing teams.
58. Inter-Systems Testing: Testing technique that focuses on testing the application to ensure that
interconnection between application functions correctly. It is usually done by the testing teams.
59. Keyword-driven Testing: Also known as table-driven testing or action-word testing, is a software
testing methodology for automated testing that separates the test creation process into two distinct
stages: a Planning Stage and an Implementation Stage. It can be used by either manual or automation testing teams.
60. Load Testing: Testing technique that puts demand on a system or device and measures its
response. It is usually conducted by the performance engineers.
61. Localization Testing: Part of software testing process focused on adapting a globalized application
to a particular culture/locale. It is normally done by the testing teams.
62. Loop Testing: A white box testing technique that exercises program loops. It is performed by the development teams.
63. Manual Scripted Testing: Testing method in which the test cases are designed and reviewed by the
team before executing it. It is done by manual testing teams.
64. Manual-Support Testing: Testing technique that involves testing of all the functions performed by
the people while preparing the data and using the data from the automated system. It is conducted by testing teams.
65. Model-Based Testing: The application of Model based design for designing and executing the
necessary artifacts to perform software testing. It is usually performed by testing teams.
66. Mutation Testing: Method of software testing which involves modifying programs' source code or
byte code in small ways in order to test sections of the code that are seldom or never accessed during
normal tests execution. It is normally conducted by testers.
67. Modularity-driven Testing: Software testing technique which requires the creation of small,
independent scripts that represent modules, sections, and functions of the application under test. It is
usually performed by the testing team.
68. Non-functional Testing: Testing technique which focuses on testing of a software application for
its non-functional requirements. It can be conducted by the performance engineers or by manual testing teams.
69. Negative Testing: Also known as "test to fail" - testing method where the tests' aim is showing that
a component or system does not work. It is performed by manual or automation testers.
70. Operational Testing: Testing technique conducted to evaluate a system or component in its
operational environment. Usually it is performed by testing teams.
71. Orthogonal array Testing: Systematic, statistical way of testing which can be applied in user
interface testing, system testing, regression testing, configuration testing and performance testing. It
is performed by the testing team.
72. Pair Testing: Software development technique in which two team members work together at one
keyboard to test the software application. One does the testing and the other analyzes or reviews the
testing. This can be done between one Tester and Developer or Business Analyst or between two
testers with both participants taking turns at driving the keyboard.
73. Passive Testing: Testing technique consisting in monitoring the results of a running system without
introducing any special test data. It is performed by the testing team.
74. Parallel Testing: Testing technique which has the purpose to ensure that a new application which
has replaced its older version has been installed and is running correctly. It is conducted by the testing teams.
75. Path Testing: Typical white box testing which has the goal to satisfy coverage criteria for each
logical path through the program. It is usually performed by the development team.
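As a hypothetical illustration of path coverage: the function below contains two sequential decisions, giving four logical paths, and the chosen inputs exercise each path exactly once.

```python
# Hypothetical path-testing illustration: two independent decisions
# mean four logical paths through the function.

def classify(x, y):
    label = ""
    if x > 0:          # decision 1
        label += "x+"
    else:
        label += "x-"
    if y > 0:          # decision 2
        label += "y+"
    else:
        label += "y-"
    return label

# One input per path; four distinct results means all paths were taken.
paths = {classify(1, 1), classify(1, -1), classify(-1, 1), classify(-1, -1)}
```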
76. Penetration Testing: Testing method which evaluates the security of a computer system or
network by simulating an attack from a malicious source. Usually it is conducted by specialized
penetration testing companies.
77. Performance Testing: Functional testing conducted to evaluate the compliance of a system or
component with specified performance requirements. It is usually conducted by the performance engineer.
78. Qualification Testing: Testing against the specifications of the previous release, usually conducted
by the developer for the consumer, to demonstrate that the software meets its specified requirements.
79. Ramp Testing: Type of testing consisting in raising an input signal continuously until the system
breaks down. It may be conducted by the testing team or the performance engineer.
80. Regression Testing: Type of software testing that seeks to uncover software errors after changes
to the program (e.g. bug fixes or new functionality) have been made, by retesting the program. It is
performed by the testing teams.
81. Recovery Testing: Testing technique which evaluates how well a system recovers from crashes,
hardware failures, or other catastrophic problems. It is performed by the testing teams.
82. Requirements Testing: Testing technique which validates that the requirements are correct,
complete, unambiguous, and logically consistent and allows designing a necessary and sufficient set of
test cases from those requirements. It is performed by QA teams.
83. Security Testing: A process to determine that an information system protects data and maintains
functionality as intended. It can be performed by testing teams or by specialized security-testing companies.
84. Sanity Testing: Testing technique which determines if a new software version is performing well
enough to accept it for a major testing effort. It is performed by the testing teams.
85. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story to help a
person think through a complex problem or system for a testing environment. It is performed by the testing teams.
86. Scalability Testing: Part of the battery of non-functional tests which tests a software application
for measuring its capability to scale up - be it the user load supported, the number of transactions, the
data volume etc. It is conducted by the performance engineer.
87. Statement Testing: White box testing which satisfies the criterion that each statement in a
program is executed at least once during program testing. It is usually performed by the development team.
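As a hypothetical illustration (the function is an assumption, not from the source), the test data below executes every statement of a small function at least once:

```python
# Hypothetical statement-coverage illustration: the test data must
# include both a passing and a failing score to reach every statement.

def grade(score):
    if score >= 50:
        return "pass"   # reached only when score >= 50
    return "fail"       # reached only when score < 50

# Two test inputs are enough to cover every statement:
assert grade(75) == "pass"
assert grade(20) == "fail"
```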
88. Static Testing: A form of software testing where the software isn't actually used; it checks mainly
for the sanity of the code, algorithm, or document. It is used by the developer who wrote the code.
89. Stability Testing: Testing technique which attempts to determine if an application will crash. It is
usually conducted by the performance engineer.
90. Smoke Testing: Testing technique which examines all the basic components of a software system
to ensure that they work properly. Typically, smoke testing is conducted by the testing team,
immediately after a software build is made.
91. Storage Testing: Testing type that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected termination resulting from lack
of space. It is usually performed by the testing team.
92. Stress Testing: Testing technique which evaluates a system or component at or beyond the limits
of its specified requirements. It is usually conducted by the performance engineer.
93. Structural Testing: White box testing technique which takes into account the internal structure of
a system or component and ensures that each program statement performs its intended function. It is
usually performed by the software developers.
94. System Testing: The process of testing an integrated hardware and software system to verify that
the system meets its specified requirements. It is conducted by the testing teams in both development
and target environment.
95. System integration Testing: Testing process that exercises a software system's coexistence with
others. It is usually performed by the testing teams.
96. Top Down Integration Testing: Testing technique that involves starting at the top of a system
hierarchy at the user interface and using stubs to test from the top down until the entire system has
been implemented. It is conducted by the testing teams.
97. Thread Testing: A variation of top-down testing technique where the progressive integration of
components follows the implementation of subsets of the requirements. It is usually performed by the testing teams.
98. Upgrade Testing: Testing technique that verifies if assets created with older versions can be used
properly and that user's learning is not challenged. It is performed by the testing teams.
99. Unit Testing: Software verification and validation method in which a programmer tests if individual
units of source code are fit for use. It is usually conducted by the development team.
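A minimal unit-test sketch using Python's built-in unittest module (the add function and test names are illustrative, not from the tutorial):

```python
# Minimal unit-test sketch with the standard unittest module.
import unittest

def add(a, b):
    """Unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the tests programmatically and capture the overall result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice each unit test isolates one small piece of code, so a failure points directly at the unit that broke.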
100. User Interface Testing: Type of testing which is performed to check how user-friendly the
application is. It is performed by testing teams.
Bonus!!! It's always good to know a few extra.
101. Usability Testing: Testing technique which verifies the ease with which a user can learn to
operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by the end users.
102. Volume Testing: Testing which confirms that any values that may become large over time (such
as accumulated counts, logs, and data files), can be accommodated by the program and will not cause
the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.
103. Vulnerability Testing: Type of testing which regards application security and has the purpose to
prevent problems which may affect the application integrity and stability. It can be performed by the
internal testing teams or outsourced to specialized companies.
104. White box Testing: Testing technique based on knowledge of the internal logic of an
application’s code and includes tests like coverage of code statements, branches, paths, conditions. It
is performed by software developers.
105. Workflow Testing: Scripted end-to-end testing technique which duplicates specific workflows
which are expected to be utilized by the end-user. It is usually conducted by testing teams.
Tutorial 27 - Practical Tips and Tricks to Create Test Data
What is Test Data ? Why is it Important?
Test data is actually the input given to a software program. It represents data that affects or is
affected by the execution of the specific module. Some data may be used for positive testing, typically
to verify that a given set of input to a given function produces an expected result. Other data may be
used for negative testing to test the ability of the program to handle unusual, extreme, exceptional,
or unexpected input. Poorly designed testing data may not test all possible test scenarios which
will hamper the quality of the software.
What is Test Data Generation? Why should test data be created before test execution?
Depending on your testing environment you may need to CREATE test data (most of the time) or
at least identify suitable test data for your test cases (if the test data is already created).
Typically test data is created in-sync with the test case it is intended to be used for.
Test Data can be Generated -
Mass copy of data from production to testing environment
Mass copy of test data from legacy client systems
Automated Test Data Generation Tools
Typically test data should be generated before you begin test execution, since in many testing
environments creating test data takes many pre-steps or test environment configurations, which
is very time consuming. If test data generation is done while you are in the test execution phase, you
may exceed your testing deadline.
Below are described several testing types together with some suggestions regarding their testing data
Test Data for White Box Testing
In white box testing, test data is derived from direct examination of the code to be tested. Test data
may be selected by taking into account the following things:
It is desirable to cover as many branches as possible; testing data can be generated such that
all branches in the program source code are tested at least once
Path testing: all paths in the program source code are tested at least once - test data can be
designed to cover as many cases as possible
Negative API testing:
o Testing data may contain invalid parameter types used to call different methods
o Testing data may consist of invalid combinations of arguments which are used to call
the program's methods
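The two bullets above can be sketched with a hypothetical transfer method: the negative test data pairs an invalid parameter type with an invalid argument combination.

```python
# Hypothetical negative API test data: invalid parameter types and
# invalid argument combinations (the method and values are assumptions).

def transfer(amount, currency):
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be numeric")
    if currency not in {"USD", "EUR"}:
        raise ValueError("unsupported currency")
    return (amount, currency)

invalid_calls = [
    ("ten", "USD"),   # invalid parameter type
    (10, "XYZ"),      # invalid argument combination
]

failures_detected = 0
for amount, currency in invalid_calls:
    try:
        transfer(amount, currency)
    except (TypeError, ValueError):
        failures_detected += 1
# Both invalid inputs should be rejected by the method.
```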
Test Data for Performance Testing
Performance testing is the type of testing which is performed in order to determine how fast system
responds under a particular workload. The goal of this type of testing is not to find bugs, but to
eliminate bottlenecks. An important aspect of performance testing is that the set of test data used
must be very close to 'real' or 'live' data which is used on production. The following question arises:
‘Ok, it’s good to test with real data, but how do I obtain this data?’ The answer is pretty
straightforward: from the people who know the best – the customers. They may be able to provide
some data they already have or, if they don’t have an existing set of data, they may help you by
giving feedback regarding how the real-world data might look. In case you are in a maintenance
testing project you could copy data from the production environment into the testing bed. It is a good
practice to anonymize (scramble) sensitive customer data like Social Security Numbers, Credit Card
Numbers, Bank Details etc. while the copy is made.
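As a minimal sketch of this practice, the snippet below masks assumed sensitive fields with a one-way hash while copying a record; real anonymization tools are far more thorough, and the field names here are assumptions.

```python
# Minimal sketch of scrambling sensitive fields while copying production
# data to a test bed. Hashing keeps masked values consistent across
# records without exposing the originals.
import hashlib

SENSITIVE_FIELDS = {"ssn", "credit_card"}

def anonymize(record):
    """Returns a copy of the record with sensitive fields masked."""
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hashlib.sha256(str(value).encode()).hexdigest()
            masked[field] = digest[:12]   # short, irreversible token
        else:
            masked[field] = value
    return masked

production_row = {"name": "Alice", "ssn": "123-45-6789",
                  "credit_card": "4111111111111111"}
test_row = anonymize(production_row)
```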
Test Data for Security Testing
Security testing is the process that determines if an information system protects data from malicious
intent. The set of data that need to be designed in order to fully test a software security must cover
the following topics:
Confidentiality: All the information provided by clients is held in the strictest confidence and is
not shared with any outside parties. As a short example, if an application uses SSL, you can design a
set of test data which verifies that the encryption is done correctly.
Integrity: Determine that the information provided by the system is correct. To design suitable
test data you can start by taking an in depth look at the design, code, databases and file structures.
Authentication: Represents the process of establishing the identity of a user. Testing data can
be designed as different combination of usernames and passwords and its purpose is to check that
only the authorized people are able to access the software system.
Authorization: Tells what are the rights of a specific user. Testing data may contain different
combination of users, roles and operations in order to check that only users with sufficient privileges are
able to perform a particular operation.
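The authentication and authorization test data described above can be sketched as follows; the credentials, roles and expected outcomes are all assumptions for illustration.

```python
# Hypothetical authentication/authorization test data: combinations of
# usernames, passwords and roles with the expected outcome of each case.
import itertools

VALID_USER, VALID_PASS = "alice", "s3cret"   # assumed known-good credentials

usernames = [VALID_USER, "mallory", ""]
passwords = [VALID_PASS, "wrong", ""]

def login_outcome(user, password):
    # Authentication: only the one valid combination is granted access.
    return "granted" if (user, password) == (VALID_USER, VALID_PASS) else "denied"

auth_data = [(u, p, login_outcome(u, p))
             for u, p in itertools.product(usernames, passwords)]
# 3 usernames x 3 passwords = 9 cases, exactly one of them "granted"

# Authorization: pair each role with an operation and the expected decision.
ROLE_RIGHTS = {"admin": {"read", "write", "delete"}, "viewer": {"read"}}

def may_perform(role, operation):
    return operation in ROLE_RIGHTS.get(role, set())
```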
Test Data for Black Box Testing
In Black Box Testing the code is not visible to the tester. Your functional test cases can have test data
meeting following criteria -
No data: Check system response when no data is submitted
Valid data: Check system response when valid test data is submitted
Invalid data: Check system response when invalid test data is submitted
Illegal data format: Check system response when test data is in an invalid format
Boundary Condition Data set: Test data meeting bounding value conditions
Equivalence Partition Data Set : Test data qualifying your equivalence partitions.
Decision Table Data Set: Test data qualifying your decision table testing strategy
State Transition Test Data Set: Test data meeting your state transition testing strategy
Use Case Test Data: Test Data in-sync with your use cases.
Note: Depending on the software application to be tested, you may use some or all of the above test data sets.
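The boundary and equivalence criteria above can be sketched as concrete data sets for a hypothetical "age" field that accepts values 18 to 60 (the field and range are assumptions):

```python
# Hypothetical data sets for an "age" field accepting 18..60.

FIELD_MIN, FIELD_MAX = 18, 60

# Boundary Condition Data Set: values at and just around each boundary.
boundary_data = [FIELD_MIN - 1, FIELD_MIN, FIELD_MIN + 1,
                 FIELD_MAX - 1, FIELD_MAX, FIELD_MAX + 1]

# Equivalence Partition Data Set: one representative value per partition.
equivalence_data = {
    "below_range": 5,    # invalid partition
    "in_range": 35,      # valid partition
    "above_range": 99,   # invalid partition
}

def is_valid(age):
    return FIELD_MIN <= age <= FIELD_MAX

# Expected validity for the boundary set: only 17 and 61 are rejected.
results = [is_valid(v) for v in boundary_data]
```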
Automated Test Data Generation
In order to generate various sets of data, you can use a gamut of automated test data generation
tools. Below are some examples of such tools:
Test Data Generator by GSApps can be used for creating intelligent data in almost any database or text
file. It enables users to:
Complete application testing by inflating a database with meaningful data
Create industry-specific data that can be used for a demonstration
Protect data privacy by creating a clone of the existing data and masking confidential values
Accelerate the development cycle by simplifying testing and prototyping
Test Data Generator by DTM is a fully customizable utility that generates data and tables (views,
procedures etc.) for database testing (performance testing, QA testing, load testing or usability testing).
Datatect by Banner Software generates a variety of realistic test data in ASCII flat files or directly
generates test data for RDBMSs including Oracle, Sybase, SQL Server, and Informix.
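Aside from dedicated tools like the ones above, the core idea can be sketched with a minimal generator built only on Python's standard library; the field names and value ranges here are assumptions.

```python
# Minimal, hypothetical test data generator using only the standard
# library; real tools add schema awareness, masking, and volume control.
import random
import string

def random_record(rng):
    name = "".join(rng.choices(string.ascii_lowercase, k=8)).title()
    age = rng.randint(18, 90)
    email = f"{name.lower()}@example.com"
    return {"name": name, "age": age, "email": email}

def generate(count, seed=42):
    rng = random.Random(seed)   # seeded so test runs are reproducible
    return [random_record(rng) for _ in range(count)]

rows = generate(100)
```

Seeding the generator matters: a reproducible data set lets a failed test be re-run with exactly the same inputs.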
Tutorial 28 - Software Testing Life Cycle - STLC explained
The different stages in Software Test Life Cycle -
Each of these stages has definite Entry and Exit Criteria, Activities & Deliverables associated with it.
In an ideal world you will not enter the next stage until the exit criteria for the previous stage is met.
But practically this is not always possible. So for this tutorial, we will focus on activities and
deliverables for the different stages in STLC. Let's look into them in detail.
During this phase, the test team studies the requirements from a testing point of view to identify the
testable requirements. The QA team may interact with various stakeholders (Client, Business Analyst,
Technical Leads, System Architects etc) to understand the requirements in detail. Requirements could
be either Functional (defining what the software must do) or Non Functional (defining system
performance, security, availability). Automation feasibility analysis for the given testing project is also done in this stage.
Identify types of tests to be performed.
Gather details about testing priorities and focus.
Prepare Requirement Traceability Matrix (RTM).
Identify test environment details where testing is supposed to be carried out.
Automation feasibility analysis (if required).
Automation feasibility report. (if applicable)
This phase is also called Test Strategy phase. Typically , in this stage, a Senior QA manager will
determine effort and cost estimates for the project and would prepare and finalize the Test Plan.
Preparation of test plan/strategy document for various types of testing
Test tool selection
Test effort estimation
Resource planning and determining roles and responsibilities.
Test plan /strategy document.
Effort estimation document.
Test Case Development
This phase involves the creation, verification and rework of test cases & test scripts. Test data is
identified/created, reviewed and then reworked as well.
Create test cases, automation scripts (if applicable)
Review and baseline test cases and scripts
Create test data (If Test Environment is available)
Test Environment Setup
Test environment decides the software and hardware conditions under which a work product is tested.
Test environment set-up is one of the critical aspects of testing process and can be done in parallel
with Test Case Development Stage. Test team may not be involved in this activity if the
customer/development team provides the test environment in which case the test team is required to
do a readiness check (smoke testing) of the given environment.
Understand the required architecture, environment set-up and prepare hardware and software
requirement list for the Test Environment.
Setup test Environment and test data
Perform smoke test on the build
Environment ready with test data set up
Smoke Test Results.
During this phase the test team will carry out the testing based on the test plans and the test cases
prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.
Execute tests as per plan
Document test results, and log defects for failed cases
Map defects to test cases in RTM
Retest the defect fixes
Track the defects to closure
Completed RTM with execution status
Test cases updated with results
Test Cycle Closure
Testing team will meet, discuss and analyze testing artifacts to identify strategies that have to be
implemented in future, taking lessons from the current test cycle. The idea is to remove the process
bottlenecks for future test cycles and share best practices for any similar projects in future.
Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software Quality and Critical
Business Objectives.
Prepare test metrics based on the above parameters.
Document the learning out of the project
Prepare Test closure report
Qualitative and quantitative reporting of quality of the work product to the customer.
Test result analysis to find out the defect distribution by type and severity.
Test Closure report
Finally, summary of STLC along with Entry and Exit Criteria
Stage: Requirement Analysis
Entry Criteria: Requirements document available (both functional and non-functional); acceptance criteria defined; application architectural document available.
Activities: Analyse business functionality to know the business modules and module-specific functionalities; identify all transactions in the modules; identify all the user profiles; gather user interface/authentication requirements; identify types of tests to be performed; gather details about testing priorities and focus; prepare Requirement Traceability Matrix (RTM); identify test environment details where testing is supposed to be carried out; automation feasibility analysis (if required).
Exit Criteria: Signed-off RTM; test automation feasibility report signed off by the client.
Deliverables: RTM; automation feasibility report (if applicable).

Stage: Test Planning
Entry Criteria: Requirements documents; Requirement Traceability Matrix; test automation feasibility document.
Activities: Analyze various testing approaches available; finalize the best-suited approach; preparation of test plan/strategy document for various types of testing; test tool selection; test effort estimation; resource planning and determining roles and responsibilities.
Exit Criteria: Approved test plan/strategy document; effort estimation document signed off.
Deliverables: Test plan/strategy document; effort estimation document.

Stage: Test Case Development
Entry Criteria: Requirements documents; RTM and test plan; automation analysis report.
Activities: Create test cases and automation scripts (where applicable); review and baseline test cases and scripts; create test data.
Exit Criteria: Reviewed and signed test cases/scripts; reviewed and signed test data.
Deliverables: Test cases/scripts; test data.

Stage: Test Environment Setup
Entry Criteria: System design and architecture documents are available; environment set-up plan is available.
Activities: Understand the required architecture and environment set-up; prepare hardware and software requirement list; finalize connectivity requirements; prepare environment setup checklist; set up test environment and test data; perform smoke test on the build; accept/reject the build depending on the smoke test result.
Exit Criteria: Environment setup is working as per the plan and checklist; test data setup is complete; smoke test is successful.
Deliverables: Environment ready with test data set up; smoke test results.

Stage: Test Execution
Entry Criteria: Baselined RTM, test plan and test cases/scripts are available; test environment is ready; test data set-up is done; unit/integration test report for the build to be tested is available.
Activities: Execute tests as per plan; document test results and log defects for failed cases; update test plans/test cases if necessary; map defects to test cases in RTM; retest the defect fixes; regression testing of application; track the defects to closure.
Exit Criteria: All tests planned are executed; defects logged and tracked to closure.
Deliverables: Completed RTM with execution status; test cases updated with results; defect reports.

Stage: Test Cycle Closure
Entry Criteria: Testing has been completed; test results are available; defect logs are available.
Activities: Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives; prepare test metrics based on the above parameters; document the learning out of the project; prepare test closure report; qualitative and quantitative reporting of quality of the work product to the customer; test result analysis to find out the defect distribution by type and severity.
Exit Criteria: Test closure report signed off by client.
Deliverables: Test closure report; test metrics.
Tutorial 29 - Black Box Testing
In Black Box Testing we just focus on inputs and output of the software system without bothering
about internal knowledge of the software program.
The above Black Box can be any software system you want to test. For example: an operating system
like Windows, a website like Google, a database like Oracle, or even your own custom application.
Under Black Box Testing , you can test these applications by just focusing on the inputs and outputs
without knowing their internal code implementation.
Black box testing - Steps
Here are the generic steps followed to carry out any type of Black Box Testing.
Initially requirements and specifications of the system are examined.
Tester chooses valid inputs (positive test scenario) to check whether SUT processes them
correctly. Also some invalid inputs (negative test scenario) are chosen to verify that the SUT is able
to detect them.
Tester determines expected outputs for all those inputs.
Software tester constructs test cases with the selected inputs.
The test cases are executed.
Software tester compares the actual outputs with the expected outputs.
Defects if any are fixed and re-tested.
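The steps above can be sketched as a tiny black-box test: only inputs and expected outputs are used, and the implementation of the system under test (assumed here to be a square-root routine) stays opaque.

```python
# Tiny black-box test sketch: the tester only compares actual outputs
# against expected outputs; sut() stands in for any system under test.
import math

def sut(x):
    return math.sqrt(x)   # implementation details are irrelevant to the tester

# Selected inputs with their expected outputs (positive scenarios).
test_cases = [
    (4, 2.0),    # valid input
    (0, 0.0),    # boundary input
]

for given_input, expected in test_cases:
    actual = sut(given_input)
    assert actual == expected, f"FAIL: sut({given_input}) = {actual}"

# Negative scenario: an invalid input should be rejected by the SUT.
try:
    sut(-1)
    raised = False
except ValueError:
    raised = True
```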
Types of Black Box Testing
There are many types of Black Box Testing but following are the prominent ones -
Functional testing – This black box testing type is related to functional requirements of a
system; it is done by software testers.
Non-functional testing – This type of black box testing is not related to testing of a specific
functionality, but non-functional requirements such as performance, scalability, usability.
Regression testing – Regression testing is done after code fixes, upgrades or any other system
maintenance to check that the new code has not affected the existing code.
Tools used for Black Box Testing:
Tools used for Black box testing largely depend on the type of black box testing you are doing.
For Functional/ Regression Tests you can use - QTP
For Non-Functional Tests you can use - Loadrunner
Black box testing strategy:
Following are the prominent test strategies among the many used in Black Box Testing:
Equivalence Class Testing: It is used to minimize the number of possible test cases to an
optimum level while maintaining reasonable test coverage.
Boundary Value Testing: Boundary value testing is focused on the values at boundaries. This
technique determines whether a certain range of values is acceptable to the system or not. It is
very useful in reducing the number of test cases. It is most suitable for systems where input is
within certain ranges.
Decision Table Testing: A decision table puts causes and their effects in a matrix. There is a
unique combination in each column.
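A small decision-table sketch for a hypothetical login feature: each rule is a unique combination of conditions with its expected action (the conditions and actions are assumptions for illustration).

```python
# Hypothetical decision table for a login feature: each rule (a "column"
# of the table) is a unique combination of conditions and its action.
decision_table = [
    {"valid_user": True,  "valid_password": True,  "action": "show home page"},
    {"valid_user": True,  "valid_password": False, "action": "show error"},
    {"valid_user": False, "valid_password": True,  "action": "show error"},
    {"valid_user": False, "valid_password": False, "action": "show error"},
]

def expected_action(valid_user, valid_password):
    """Looks up the expected action for a combination of conditions."""
    for rule in decision_table:
        if (rule["valid_user"] == valid_user
                and rule["valid_password"] == valid_password):
            return rule["action"]
    raise LookupError("no matching rule")
```

Each row then becomes one test case: feed the combination to the system and compare its behaviour with the expected action.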
Comparison of Black Box and White Box Testing:
While White Box Testing (Unit Testing) validates internal structure and working of your software
code, the main focus of black box testing is on the validation of your functional requirements.
To conduct White Box Testing, knowledge of the underlying programming language is essential. Current-day
software systems use a variety of programming languages and technologies, and it's not possible
to know all of them. Black box testing gives abstraction from code and focuses testing effort on the
software system behaviour.
Also software systems are not developed in a single chunk but development is broken down in
different modules. Black box testing facilitates testing communication amongst
modules (Integration Testing) .
In case you push code fixes in your live software system , a complete system check (black box
regression tests) becomes essential.
White box testing has its own merits and helps detect many internal errors which may otherwise
degrade system performance.
Black Box Testing and Software Development Life Cycle (SDLC)
Black box testing has its own life cycle called the Software Test Life Cycle (STLC) and it is relevant to
every stage of the Software Development Life Cycle.
Requirement – This is the initial stage of SDLC and in this stage requirement is gathered.
Software testers also take part in this stage.
Test Planning & Analysis – Testing Types applicable to the project are determined. A Test
Plan is created which determines possible project risks and their mitigation.
Design – In this stage test cases/scripts are created on the basis of software requirement documents.
Test Execution- In this stage Test Cases prepared are executed. Bugs if any are fixed and re-