Definition by Levone


Test automation is the use of software to control the execution of tests, the comparison
of actual outcomes to predicted outcomes, the setting up of test preconditions, and other
test control and test reporting functions. Commonly, test automation involves automating
an existing manual process that already follows a formalized testing procedure. Automation is
the use of strategies, tools and artifacts that augment or reduce the need for manual or
human involvement in unskilled, repetitive or redundant tasks.

The purpose of automation is to make better use of time and resources, to avoid
redundancy in test execution, and to increase test coverage, thus increasing the quality
and reliability of the software.
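The core of the definition above, comparing actual outcomes to predicted outcomes, can be sketched in a few lines of Python. The function under test (`add`) and its expected results are hypothetical stand-ins:

```python
# Minimal sketch of automated outcome comparison.
# The function under test (add) and its test data are hypothetical examples.

def add(a, b):
    return a + b

# Each case pairs inputs with a predicted outcome.
test_cases = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def run_tests():
    """Execute every case and record actual vs. predicted outcome."""
    results = []
    for args, expected in test_cases:
        actual = add(*args)
        results.append((args, expected, actual, actual == expected))
    return results

if __name__ == "__main__":
    for args, expected, actual, passed in run_tests():
        print(f"add{args} -> {actual} (expected {expected}): "
              f"{'PASS' if passed else 'FAIL'}")
```

Real frameworks add reporting, setup/teardown and failure isolation on top of this loop, but the compare-and-report cycle is the same.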

From Manual to Automation
Manual testing is a process in which all six phases of the software testing life cycle
(test planning, test development, test execution, result analysis, bug tracking and
reporting) are carried out manually with human effort, whereas in automation
testing these activities are performed by tools such as QTP and WinRunner. Without
manual testing there is no automation testing.

What to test

Testing tools can help automate tasks such as product installation, test data creation, GUI
interaction, problem detection, defect logging, etc., without necessarily automating tests
in an end-to-end fashion.

One must keep the following points in mind when thinking of test automation:

      Platform and OS independence
      Data driven capability (Input Data, Output Data, Meta Data)
      Customizable Reporting (DB Access, crystal reports)
      Email Notifications (Automated notification on failure or threshold levels)
      Easy debugging and logging
      Version control friendly – minimum or zero binary files
      Extensible & Customizable (Open APIs to be able to integrate with other tools)
      Common Driver (Ant or Maven)
      Headless execution for unattended runs (For integration with build process or
       batch runs)
      Support distributed execution environment (distributed test bed)
      Distributed application support (distributed SUT)
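The data-driven capability in the list above means keeping test data (inputs and expected outputs) separate from test logic. A minimal sketch, assuming a hypothetical function under test (`to_upper`) and CSV test data inlined here so the example is self-contained; in practice the data would live in an external file:

```python
# Sketch of data-driven testing: the test data is held apart from the
# test logic, in CSV form. The function under test (to_upper) is hypothetical.
import csv
import io

# In practice this would be an external file; inlined here for a runnable sketch.
TEST_DATA = """input,expected
hello,HELLO
Mixed,MIXED
already,ALREADY
"""

def to_upper(s):
    return s.upper()

def run_data_driven():
    """Run every data row through the function; return rows that fail."""
    failures = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = to_upper(row["input"])
        if actual != row["expected"]:
            failures.append((row["input"], row["expected"], actual))
    return failures

if __name__ == "__main__":
    print("failures:", run_data_driven())
```

Because the logic never changes, new test cases are added by editing the data file only, which is what makes this style version-control friendly.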
What kinds of testing can be automated
        Functional - testing that operations perform as expected.
        Regression - testing that the behavior of the system has not changed.
        Exception or Negative - forcing error conditions in the system.
        Stress - determining the absolute capacities of the application and its
         operational environment.
        Performance - providing assurance that the performance of the system will
         be adequate for both batch runs and online transactions in relation to business
         projections and requirements.
        Load - determining the points at which the capacity and performance of the
         system become degraded to the situation that hardware or software upgrades
         would be required.
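Load and stress tests like those above are natural candidates for automation because they boil down to driving the system with increasing concurrency and measuring throughput. A minimal sketch, in which the operation is a simulated stand-in (a fixed `sleep`) rather than a real system under test:

```python
# Sketch of a simple load test: drive a (simulated) operation with an
# increasing number of concurrent workers and record throughput.
# The operation here is a stand-in; a real test would call the system under test.
import time
from concurrent.futures import ThreadPoolExecutor

def operation():
    time.sleep(0.01)  # simulate a fixed-cost request

def measure_throughput(workers, requests=50):
    """Return requests per second achieved with the given worker count."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(requests):
            pool.submit(operation)
        # exiting the 'with' block waits for all submitted work to finish
    elapsed = time.perf_counter() - start
    return requests / elapsed

if __name__ == "__main__":
    for workers in (1, 5, 10):
        print(f"{workers} workers: {measure_throughput(workers):.1f} req/s")
```

Plotting throughput against worker count shows where performance begins to degrade, which is exactly the knee point load testing is meant to find.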

What types of tools are used
There are a number of test automation tools available:

          Record and playback. This type of software operates only with mouse moves
           and clicks, so it is not very reliable.
          Click buttons, move mouse and playback. Software like this has more
           functions; for instance, it does not just move the mouse to a button, but can
           click the button by its internal name or caption.
          Using image templates. Tools like iTestBot can emulate the user's behavior by
           finding certain images. The resulting scripts are more flexible and reliable.

Capture and Playback mechanism

Capture and Playback tools provide an alternative mechanism for testing applications.
Instead of having developers write test cases using scripts or code, these tools enable
test cases to be created by recording input during a program execution. This input can
then be played back during the test run, and the output compared to the expected output.
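The record-then-replay idea can be sketched independently of any particular tool. Here the program under test (`echo_upper`) is a hypothetical stand-in; a real tool would record GUI or command-line interactions instead of function calls:

```python
# Sketch of capture/playback: record inputs and outputs from one run of a
# program, then replay the inputs later and compare the outputs.
# The program under test (echo_upper) is a hypothetical stand-in.

def echo_upper(line):
    return line.upper()

def capture(program, inputs):
    """Record each input together with the output the program produced."""
    return [(i, program(i)) for i in inputs]

def playback(program, recording):
    """Re-run recorded inputs; report any output that no longer matches."""
    return [(i, expected, program(i))
            for (i, expected) in recording
            if program(i) != expected]

if __name__ == "__main__":
    recording = capture(echo_upper, ["hello", "world"])
    print("mismatches:", playback(echo_upper, recording))
```

If the program's behavior changes between capture and playback, the mismatch list flags it, which is how these tools detect regressions without anyone writing explicit assertions.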

These tools are independent of the programming language used in the program, and are
very useful for testing interactive programs (command-line interfaces or graphical user
interfaces). These tools may be a quick solution for preparing functional or acceptance
tests.

Tools Used

    Functional Test Tools
         WinRunner
         QuickTest Professional
         Rational Robot
    Performance Test Tools
         LoadRunner
         Web Server Stress Tool
    Defect Tracking & Configuration Management Tools
         TestDirector
         ClearQuest
    Test Management Tools
         TestDirector
         SilkCentral Test Manager
         Rational Test Manager

Work With Developers

The same approach should be applied at each subsequent level of testing. Apply test
automation where it makes sense to do so. Whether homegrown utilities are used or
purchased testing tools, it's important that the development team work with the testing
team to identify areas where test automation makes sense and to support the long-term
use of test scripts.

Where GUI applications are involved the development team may decide to use custom
controls to add functionality and make their applications easier to use. It's important to
determine if the testing tools used can recognize and work with these custom controls. If
the testing tools can't work with these controls, then test automation may not be possible
for that part of the application. Similarly, if months and months of effort went into
building test scripts and the development team decides to use new custom controls which
don't work with existing test scripts, this change may completely invalidate all the effort
that went into test automation. In either case, by identifying up front in the application
design phase how application changes affect test automation, informed decisions can be
made which affect application functionality, product quality and time to market. If test
automation concerns aren't addressed early and test scripts cannot be run, there is a much
higher risk of reduced product quality and increased time to market.

Working with developers also promotes building 'testability' into the application code.
By providing hooks into the application, testing can sometimes be made more specific to
a given area of code. Also, some tests can be performed that could not be performed if
these hooks were not built.
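A test hook can be as simple as an optional callback through which the application reports its internal state at key points. A minimal sketch, with a hypothetical `OrderProcessor` class; all names here are invented for illustration:

```python
# Sketch of a testability hook: the application accepts an optional callback
# that tests can use to observe internal state. Names are hypothetical.

class OrderProcessor:
    def __init__(self, test_hook=None):
        self._queue = []
        self._test_hook = test_hook  # called with (stage, queue snapshot)

    def submit(self, order):
        self._queue.append(order)
        if self._test_hook:
            self._test_hook("submitted", list(self._queue))

    def process_all(self):
        processed = []
        while self._queue:
            processed.append(self._queue.pop(0))
            if self._test_hook:
                self._test_hook("processed", list(self._queue))
        return processed

if __name__ == "__main__":
    events = []
    p = OrderProcessor(test_hook=lambda stage, q: events.append((stage, len(q))))
    p.submit("a")
    p.submit("b")
    p.process_all()
    print(events)
```

In production the hook stays `None` and costs almost nothing; in tests it lets the suite assert on intermediate states that are invisible from the outside.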
Besides test drivers and capture/playback tools, code coverage tools can help identify
where there are holes in testing the code. Remember that code coverage may tell you if
paths are being tested, but complete code coverage does not indicate that the application
has been exhaustively tested. For example, it will not tell you what has been 'left out' of
the application.

Which areas should be automated
   Highly redundant tasks or scenarios
   Repetitive tasks that are boring or tend to cause human error
   Well-developed and well-understood use cases or scenarios first
   Relatively stable areas of the application, rather than volatile ones


Benefits of Automation

     Reliable: Tests perform precisely the same operations each time they are run,
      thereby eliminating human error.
     Repeatable: You can test how the software reacts under repeated execution of the
      same operations.
     Programmable: You can program sophisticated tests that bring out hidden
      information from the application.
     Comprehensive: You can build a suite of tests that covers every feature in your
      application.
     Reusable: You can reuse tests on different versions of an application, even if the
      user interface changes.
     Better Quality Software: Because you can run more tests in less time with fewer
      resources.
     Fast: Automated tools run tests significantly faster than human users.
     Cost Reduction: The number of resources required for regression testing is reduced.

Drawbacks of Automation

• Proficiency is required to write the automation test scripts.
• Debugging the test script is a major issue; an error in the test script itself can
sometimes lead to serious consequences.
• Test maintenance is costly in the case of playback methods. Even when a minor change
occurs in the GUI, the test script has to be re-recorded or replaced by a new test script.
• Maintenance of test data files is difficult if the test script tests many screens.
