Chapter 9
Tools for Business-Facing Tests That Critique the Product
Tool Strategy
    In this chapter we will talk about some of the tools that can be used to critique the product
from a business point of view. Tools chosen for this purpose should always have the customer’s
perspective in mind. Some questions to ask yourself are:

    •   Can the customers understand the results?
    •   Can the customers help to write the tests and evaluate the results?
    •   Who will maintain the tests and the test framework?
    •   How can we represent the results if required?
    Remember, we now have a product and need to find ways to test it so that we can find any
deviations from what we expect.
    We give some specific tool examples here, but by the time you read this book, these tools
will have changed drastically, and new ones will have become available. Use our examples to guide
your efforts to find and use test automation to help you learn about and critique the code you're
developing.


How to Choose a Tool
     There are many open source tools, as well as many vendors who have a wide variety of tools
to sell. In Chapter 6, we discussed tools that would support programming. Many of those tools
can be used to help decide if the product meets the final needs of the customer, so don’t choose
different tools if you don't have to. Leverage the ones you have. For specific ideas on evaluating
test automation tools, see Chapter 20, Implementing Your Automation Strategy.
     There are many ways to find tools. A quick search on Google produces many sites that list
tools. Some give a rating and a bit about the tool. Some just list the tools, but categorize them.
Most of these sites list both commercial and open source tools. We’re not including any specific
sites here, as the links are likely to go out of date.



How Do You Know When You’re Done?
    Your tools, whether they are part of your automation strategy or support your manual testing,
should give you the power to determine when you are comfortable saying that you are done testing.
This means that you have enough information so that the business can make an informed decision
about whether or not to release.




Critiquing the GUI
     In the previous chapter, we said that testing the GUI for consistency can be done using
automation. If you have a stable user interface, then a record / playback tool, accompanied by
good programming practices applied to test scripts, can effectively be used to test the UI. Having
a stable UI is a critical assumption, because in many cases the user interface doesn't start to
crystallize until the middle or end of a development cycle. As customers use the system, changes
may keep being made to it right up until release.
     Sometimes usability testing brings up shortcomings in the user interface, and changes need to
be made that make existing tests useless or expensive and time consuming to change. If you’ve
already spent a lot of time automating tests, it can be especially painful.
     Early GUI test tools recorded mouse movements using X-Y screen coordinates. Scripts using
those tools may also be sensitive to changes in screen resolution, color depth, and even where the
window is placed on the screen. Changing these tests is usually expensive and it is probably
cheaper in the long run to run your tests manually.
     Most modern GUI test tools use objects to recognize the controls in a graphical application,
like buttons, menus, and text input widgets, so they can refer to them symbolically rather than
with raw screen coordinates. This makes the application much more testable, because the tests are
more robust in standing up to changes. For example, a button's label may change, or its location may
be moved for usability purposes, but the basic functionality does not change, so the tests do not need to change.
        On one of Janet's teams, the testers started automating the tests using Ruby and
        Watir. The automation went fairly quickly at first, but then the tests
        started failing. The testers went to the developers to ask if they could
        change the way they were coding. The developers were just using the
        default WebLogic object names, which would change if a new object was
        added to the page. It took a little convincing, but once the developers
        realized the problems their practice was causing, they changed their
        habits. Over time, all the defaults were changed, and each object had an
        assigned name. The tests became much more robust.
    There are always drawbacks to any tool you use. For example, there are limitations to using
objects. Sometimes developers use custom controls or a new toolkit that your tool may not
understand.
        In a different project that Janet was on, the developers were automating
        the GUI tests separately from the functional tests. They originally were
        using Ruby / Watir, as that was what the functional tests were using.
        However, they found that they were constantly switching IDEs, and since
        Ruby was not their most familiar language, they were struggling. They
        did some research and decided to try Watij, the Java version of Watir.
        They had some issues, but overall, it was a better decision and the tests
        ran several times a day, ensuring that unintended changes in the GUI
        were caught.

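     To give a feel for the two tools, here is a minimal Watir sketch in Ruby; the URL and
control names are hypothetical, not taken from Janet's project.

        # Minimal Watir sketch (hypothetical page and field names).
        require 'watir'

        browser = Watir::IE.new                      # classic Watir drives Internet Explorer
        browser.goto('http://example.com/login')
        browser.text_field(:name, 'username').set('tester')
        browser.text_field(:name, 'password').set('secret')
        browser.button(:name, 'loginButton').click
        raise 'Login failed' unless browser.text.include?('Welcome')
        browser.close

     A Watij test reads much the same, but as Java code, so the programmers in the story above
could stay in their usual IDE.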

Specifying Tests
    Lisa learned about specifying automated tests, as opposed to scripting or coding them, when
she first read about Canoo WebTest (http://webtest.canoo.com). This tool was developed by a
group of programmers who were unable to find vendor test tools that met their needs on a web
application project. They didn’t want to have to spend time testing their tests, so they specified
them using XML, and ran them using Ant.
    When Lisa encountered WebTest, she was used to robust vendor tools that allowed data-
driven testing and contained a fair amount of logic. She was surprised to find that simple
WebTest scripts could and did catch regression bugs, while being inexpensive to create and
maintain. Consider a simple approach when you automate scripts to generate data or produce a
particular scenario for further manual testing. Always consider the ROI on test automation,
whether it’s for regression testing or to help you critique the product.
    Here’s an example of a script using WebTest.

    <project name="searchTest" default="test" basedir=".">
      <!-- Assumes the Canoo WebTest task definitions are already loaded on the
           Ant path; WebTest scripts run as ordinary Ant targets. -->
      <target name="test">
        <webtest name="googleSearch">
          <steps>
            <invoke description="Go to main page" url="http://www.google.com"/>
            <verifyText description="Make sure we got there" text="I'm Feeling Lucky"/>
            <verifyTitle description="check title" text="Google"/>
            <setInputField description="set query" name="q" value="Lisa Crispin"/>
            <clickButton description="submit query" label="Google Search"/>
            <verifyText description="check for result" text="Agile Tester"/>
          </steps>
        </webtest>
      </target>
    </project>

    Lisa’s team has used WebTest for GUI test automation with great success. It's quite a robust
tool with great support for an open source project, and the developer community is always adding
new features.



Tools for Functional Testing
     Tools evolve practically on a daily basis, and new tools become available all the time. Here
we present some examples of the types of tools you might choose to facilitate business-facing
tests that critique the product. One thing to remember when you are choosing tools that
customers may want to use is that they should describe the tests in English (or whatever language
the customer understands).


Keyword and Data-Driven Tools
     Data-driven testing is a technique that can help reduce test maintenance and allow you to share
your test automation with manual testers. There are many times when you want to run the same
test code over and over, varying only the inputs and expected results. Spreadsheets or tables,
such as those supported by FIT, are excellent ways to specify inputs. The test fixture, or method,
or script, can loop through each data value one at a time, matching expected results to actual. By
using data-driven tests, you are actually using examples to show what the application is supposed
to do.
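     As a simple illustration, here is a minimal data-driven sketch using Ruby's Test::Unit. The
shipping rules and values are hypothetical; the point is the pattern of looping one test method
through a table of inputs and expected results.

        require 'test/unit'

        # Hypothetical data-driven test: each row holds an input and the expected
        # result. In practice the rows might come from a spreadsheet, a CSV file,
        # or a FIT table.
        class ShippingRateTest < Test::Unit::TestCase
          ROWS = [
            # weight_in_kg, expected_charge
            [1,  5],
            [5,  5],
            [6,  10],
            [20, 10],
            [21, 20],
          ]

          # Stand-in for the real code under test (hypothetical rules).
          def shipping_charge(weight)
            return 5  if weight <= 5
            return 10 if weight <= 20
            20
          end

          def test_shipping_charges
            ROWS.each do |weight, expected|
              assert_equal expected, shipping_charge(weight), "weight #{weight} kg"
            end
          end
        end

     Adding a new example is then just a matter of adding a row, which is something a manual
tester or a customer can do.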
     Keyword-driven testing is another technique used in automated testing, in which predefined
keywords are used to define actions. These actions correspond to a process related to the
application. It is the first step toward creating a domain testing language. These keywords (or action
words) represent a very simple specification language that non-programmers can use to develop
automated tests. You still need programmers or technical automation testers to implement the
fixtures that the action words act on. If these keywords are extended to emulate the domain
language, customers and non-technical testers can specify tests that map to the workflow more
easily.
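     Here is a minimal sketch of the keyword idea in Ruby; the keywords and the Watir-style
calls behind them are hypothetical.

        # Each test step is a keyword plus its arguments. A small interpreter maps
        # keywords to fixture code: non-programmers write the steps, programmers
        # maintain the interpreter.
        class KeywordRunner
          def initialize(browser)
            @browser = browser
          end

          def run(steps)
            steps.each do |keyword, *args|
              case keyword
              when :open_page   then @browser.goto(args[0])
              when :enter_text  then @browser.text_field(:name, args[0]).set(args[1])
              when :click       then @browser.button(:name, args[0]).click
              when :verify_text then raise "missing text: #{args[0]}" unless @browser.text.include?(args[0])
              else raise "unknown keyword: #{keyword}"
              end
            end
          end
        end

        # A customer-readable test expressed as keywords:
        login_test = [
          [:open_page,   'http://example.com/login'],
          [:enter_text,  'username', 'tester'],
          [:enter_text,  'password', 'secret'],
          [:click,       'loginButton'],
          [:verify_text, 'Welcome'],
        ]
        # KeywordRunner.new(Watir::IE.new).run(login_test)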
     Combining data-driven and keyword-driven testing techniques can be very powerful. FIT
(http://fit.c2.com) or FitNesse (http://fitnesse.org) are tools that use both keywords and data to
drive their tests. FitNesse is a fully integrated, standalone wiki and acceptance testing framework
that uses FIT as the underlying test harness. FIT is a great tool to help specify requirements (see
Chapters 5 and 6). However, it can become a maintenance burden, so although you may do a
lot of the functional testing with FitNesse during development, you may not want to keep all of
those tests in your regression suite.
     Janet has been involved in a few projects that have used Ruby / Watir to create a full
framework for functional testing that customers can use to specify tests and then turn them into
their functional regression suite. Here's one of those success stories.

    Example here of PASFIT.



Record / Playback
     There are many record / playback tools on the market. Our only suggestion is to make sure
the scripts that are created are easily modified and refactored. All the good coding practices used
for writing production code apply to test scripts. Whether your test scripts are created from
scratch, or by using a capture tool, they should be modular; code shouldn’t be duplicated. They
should have the capability to be driven by different inputs. And they should be self-verifying; the
test result should be instantly apparent. Many vendor capture/playback tools have proprietary
languages. Programmers coding in a standard programming language don't want to have to learn
a new language just for the test scripts. Consider this when you are choosing a tool.
     Another problem you may have with record / playback tools is that you do not control the test
design patterns, so if you want to control your test framework structure, you may not have the
freedom to do as you wish.
     In addition to the many commercial capture/playback tools available, some open source test
tools offer a capture feature. Selenium is an example of an open source tool that has gained
popularity because it allows you to write automated web application UI tests in any programming
language, against any HTTP website, using any mainstream JavaScript-enabled browser. It
records tests against web applications in Firefox, and the scripts can be saved in 'Selenese' or in any
of six other languages. The tests run against Internet Explorer, Mozilla, and Firefox on Windows,
Linux, and Mac.
For these reasons, it is good for browser compatibility testing and system functional testing.
        John Overbaugh, test lead for http://www.lds.org and other related sites,
        has had success using Selenium running in FitNesse for content-heavy
        sites. He’s found that it's actually quite resilient to content and layout
        changes, and even if changes are made the tests can be updated
        quickly.

     However, Brian Marick posted this chunk of a Selenium test on the agile-
testing@yahoogroups.com mailing list to demonstrate what the test looked like. He was having
trouble maintaining it.
        To quote Brian: “The Selenium test fails. Sigh. That's almost certainly
        because I changed something that the test depended on (id of a field,
        most likely). That happens more often with Selenium tests than with my
        workflow tests, even though I try to write the Selenium tests in a
        reasonably change-resistant form:
        def test_tour
          with_server do
            @selenium.open("/"); and_then {
              assert_on_home_page
              assert_has_navigation_group_with_login
            }

            @selenium.click(navigation_group_show_all_link); and_then {
              assert_showing(FULL_NAME)
            }

            @selenium.click(user_view_link(LOGIN)); and_then {
              assert_on_non_editable_profile_page_for(FULL_NAME)
            }
            ...
        When Janet asked Brian what the problem with the test was, he went back
        and checked. He found that the last-check-before-deployment Selenium
        test had a step that asked "Are we now on the page displaying
        information about Staging Test User?"

        <h\d>Staging Test User</h\d>

        It worked when the test was first built, but someone later added
        additional text within the header, so the match didn't work. Using regular
        expressions to verify HTML might be a questionable practice. Brian
        made an explicit decision to fix that problem when it came up, which was
        his general strategy of doing less than the minimum he thinks can
        possibly work, then letting reality add on. In fact, if he hadn't wanted to
        play with Selenium, he wouldn't have done a staging test at all until the
        earlier gauntlet of tests had let something through.

        A better solution would be to parse the page and use XPath or
        something to pick out key values in a less fragile way, rather than match
        with regular expressions.
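        For instance, here is a small Ruby sketch of that approach; the markup is hypothetical,
        and Nokogiri is just one HTML parsing library with XPath support.

        # Parse the page and pick out the value by its id, rather than matching
        # the raw markup with a regular expression.
        require 'nokogiri'

        html = <<-HTML
          <html><body>
            <h2 id="profile-name">Staging Test User <span>(read only)</span></h2>
          </body></html>
        HTML

        doc = Nokogiri::HTML(html)
        name = doc.at_xpath("//*[@id='profile-name']").text
        raise 'wrong user shown' unless name.include?('Staging Test User')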





XUnit Testing Frameworks
    When we normally talk about functional testing, we automatically think of applications that
can be tested through the GUI. What about all those applications that don’t have a GUI?

     Janet has worked on a couple of applications like that. One was a message handling system
that was being deployed in an organization. The developers used JUnit for all the component and
integration testing. They built a load test framework that could make use of the JUnit tests, so no
other testing tools were needed. The GUI front end was so small that Janet was able to test it
manually. It made no sense to automate the GUI testing in this case.
     There are many XUnit frameworks out there that may be able to handle all your testing needs.
Programmers can work together with testers to apply XUnit frameworks to higher levels of
testing.




Tools to Assist with Exploratory Testing
     Exploratory testing is manual testing. Some of the best testing happens because a person is
paying attention to details that often get missed if we are following a script. Intuition is something
that we cannot make a machine learn. However, there are many tools that can assist us in our
quest for excellence.
     Tools shouldn’t replace human interaction, but should enhance the experience. Tools can
provide testers with more power to find the hard-to-reproduce bugs that often get filed away
because no one can get a handle on them. Exploratory testing is unconventional, so why shouldn’t
the tools be as well? Think about low-effort, high-value ways that tools can be incorporated into your
testing.
     Computers are good at doing repetitive tasks and performing calculations. These are two
areas where they are much better than we are, so let's use them for those tasks. In an agile environment
where we need to keep pace with the programmers, any time advantage we can gain is a bonus.


Test Set-up
     Let’s think about what we do when we test. We’ve just found a bug, but not one that is easily
reproducible. We're pretty sure it happens as a result of interactions between components. We
go back to the beginning and try one scenario after another. Soon, we’ve spent the whole day just
trying to reproduce this one bug. Ask yourself how you can make this easier. We’ve found that
one of the most time-consuming tasks is setup: getting to the right starting point for your
actual test. This is an excellent opportunity for some automation. Whatever tool you are using can
be adapted to run the scenario over and over, plugging in different inputs. Janet has successfully
used Ruby with Watir to set up tests to run multiple times to help identify bugs.
     Watij, like Watir, would work the same way, so your Java programmers may be able to help
you with these tests. Both tools drive Internet Explorer much the same way an end user would, and
because you can watch the playback on your monitor, you can watch for anything that might not
look as it should.
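     As a rough sketch of the idea (the URL, field names, and input values here are made up), a
Watir script can loop through a list of inputs and drive the same scenario each time:

        require 'watir'

        # Hypothetical inputs to cycle through while hunting for the bug.
        inputs = ['0', '-1', '9999999', 'abc', '']

        inputs.each do |value|
          browser = Watir::IE.new
          browser.goto('http://example.com/transfer')
          browser.text_field(:name, 'amount').set(value)
          browser.button(:name, 'submit').click
          puts "amount=#{value.inspect}  error shown? #{browser.text.include?('Error')}"
          browser.close
        end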


Data Generation
    PerlClip is an example of a tool that you can use to test a text field with different kinds of
inputs. James Bach provides it free of charge on his website www.satisfice.com. It can be very
helpful for validating fields. For example, if you have a field that will accept a maximum input of
200 characters, testing this field and its boundaries manually would be very tedious. Use PerlClip
to create a string, put it in your automation library, and have your automation tool call the string
to test the value.
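     If you would rather generate the strings inside your own automation code, the same idea is a
one-liner in Ruby (using the hypothetical 200-character limit from above):

        # Boundary strings for a field with a 200-character maximum.
        at_limit   = 'x' * 200   # exactly at the limit; should be accepted
        over_limit = 'x' * 201   # one past the limit; should be rejected

        puts at_limit.length     # => 200
        puts over_limit.length   # => 201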


Simulators
     Simulators are tools used to create data that represents key characteristics and behaviour of
real data for the system under test. If you do not have access to real data for your system,
simulated data will sometimes work almost as well. The other advantage of using a simulator is
for pumping data into a system over time. It can be used to help generate error conditions that are
difficult to create under normal circumstances, and can reduce time in boundary testing.
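     As an illustration only (the feed URL and the data are invented), a simple simulator might
look something like this:

        # Hypothetical simulator: pump generated readings into the system under
        # test over time, with an occasional bad value to provoke error handling.
        require 'net/http'
        require 'uri'

        FEED = URI.parse('http://example.com/sensor-feed')

        1000.times do |i|
          reading = (i % 50 == 49) ? 'NaN' : rand(100).to_s
          Net::HTTP.post_form(FEED, 'sensor' => 'tank-1', 'value' => reading)
          sleep 0.5                        # spread the data out over time
        end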
     See the ‘System Test’ example at the end of this chapter to see how a simulator was critical to
testing the whole system.


Monitoring
     Tools like the Unix/Linux command “tail -f”, or James Bach’s LogWatch, can help monitor
log files for error conditions. Many error messages are never displayed on the screen, so if you’re
testing via the GUI, you never see them. Get familiar with tools like these as they can make your
testing more effective and efficient.
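     The same idea is easy to script. Here is a tiny Ruby sketch, with a hypothetical log path and
error pattern, that follows a growing log file much like tail -f and flags suspicious lines:

        log = File.open('/var/log/myapp/server.log')
        log.seek(0, IO::SEEK_END)          # start at the end of the file, like tail -f

        loop do
          line = log.gets
          if line.nil?
            sleep 1                        # wait for the application to write more
          elsif line =~ /ERROR|FATAL/i
            puts "possible problem: #{line}"
          end
        end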


Emulators
     An emulator duplicates the functionality of a system, so that it behaves like the system under
test. There are many reasons to use an emulator.
        WestJet, an airline company, provides the ability for guests to use their
        mobile devices to check in at most airports. When testing this application,
        it is better for both the programmers and the testers to test various
        devices as early as possible. To make this feasible, they use
        downloadable emulators to test the Web Check-In application quickly
        and often during an iteration. Real devices, which are expensive to use,
        can then be used to verify already tested functionality.

        They also create another type of emulator to help test against the
        legacy system they interface with. The programmers on the legacy
        system have different priorities and delivery schedules and have a
        backlog of requests. To prevent this from holding up development, the
        programmers on the web application have created a type of emulator for
        the API into the legacy system that returns predetermined values for
        specific API calls. They develop against this emulator, and when the real
        changes are available, they test and make any modifications then. This
        change in process has allowed them to move ahead much more quickly.
        It has proved to be a simple but very powerful tool.
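        As a sketch of the pattern (the class, methods, and canned values here are
        invented, not WestJet's actual code), such an emulator can be as simple as an
        object that returns predetermined responses:

        # Hypothetical stand-in for the legacy system's API: it returns canned
        # values for specific calls so development and testing can proceed before
        # the real changes are available.
        class LegacyReservationSystemStub
          CANNED_RESPONSES = {
            'ABC123' => { :status => 'CONFIRMED', :seats => 2 },
            'XYZ789' => { :status => 'CANCELLED', :seats => 0 },
          }

          def lookup_booking(confirmation_code)
            CANNED_RESPONSES.fetch(confirmation_code) do
              { :status => 'NOT_FOUND', :seats => 0 }
            end
          end
        end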


Tools for Below the GUI Layer

Testing Web Services
    CrossCheck
    CrossCheck is another example of a tool to test web services. There is a free version that appears
to be enough to do some basic web services testing. You supply the WSDL; it compiles the page
and then presents you with a tabbed menu that contains textboxes for you to fill in. It has a Run
mode where you can add your tests to a suite and then run the suite. It also lists passes and fails and
the time each test took to run, and pops up a pie chart that shows the number of passes and fails. In
the main UI you can filter by passes, fails, or all tests. It also has a page in the setup mode where you
can specify what a "pass" is.
     The personal version only allows you to load one WSDL at a time, but it does allow you to
save off your suite to run in the future. After the initial setup, this can save a day or two of testing,
getting your testing more in tune with the development turnaround time.
     Ruby Test::Unit
     One project Janet was on used Ruby's unit testing framework, Test::Unit, to test its web
services. At the end of this chapter, you can see in the example how it was just one part of testing
a whole system.
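     The style is simple. Here is a minimal sketch; the service URL and the expected content are
hypothetical, not from Janet's project.

        require 'test/unit'
        require 'net/http'
        require 'uri'

        # Hypothetical check of a web service using Ruby's Test::Unit.
        class WeatherServiceTest < Test::Unit::TestCase
          SERVICE = URI.parse('http://example.com/weather?city=Calgary')

          def test_current_conditions_are_returned
            response = Net::HTTP.get_response(SERVICE)
            assert_equal('200', response.code, 'service should respond successfully')
            assert_match(/<temperature>/, response.body, 'response should include a temperature element')
          end
        end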



A Tool Selection Rationale
David Reed, a test automation engineer at BEA, and his team went with soapUI
       (www.soapui.org) to automate testing for their web services. Here are some
       reasons he gave for choosing this particular tool.
       •   It's got an open source version, so you can use it for free. You can learn it, kick
           the tires, expand stuff, learn its strengths and weaknesses.
       •   It was easy to figure out what requests to make for what service.
       •   The assertions provided for verifying the results from requests are great and
           expandable. One really helpful one is verifying that the response comes back in
           an acceptable amount of time, raising an error if it doesn't.
       •   The Pro version takes a lot of the hassle out of designing XPath queries to verify
           results. It also adds some nice touches for retrieving database data.
       •   It's expandable with Groovy, a Java-based scripting language. (They're working
           on a Java application, so it pays to have Java-friendly tools).
       •   Developers can use it without sneering at it as a "test tool".
       •   It's easily integrated with their continuous integration environment.
       •   It has a feature to check code coverage.
       •   The price is right.
       •   And Swedish people are such fun to work with.




API
    FitNesse
    FitNesse is just one example of a tool that tests “behind the GUI”; in fact, FitNesse tests
replace the GUI and invoke the production code. Chapter 6 contains some examples of using
FitNesse tests to drive development, and Lisa has found FitNesse invaluable for bootstrapping
exploratory testing, as well.
    If you’re trying to test a batch process, you could probably write a harness in Java or
whatever language you’re coding in to kick it off for testing. Lisa’s team has found that writing a
FitNesse fixture to kick off the job is an easier way to allow testers to provide inputs, such as an
‘as of” date or a record id. For example, there’s a batch job that checks all the loans in the system
to see if each is current on payments. The FitNesse fixture to kick off this job takes a run date:

                 |Loan Default Job Fixture|
                 |runDate|runJob!|
                 |01-03-2009|true|

     If the date provided isn’t the first business day of a quarter, the test will return “false”.
Otherwise, it will return true and actually run the job as if it’s that day, checking all the loans and
updating their payment status. Lisa can then spot-check loans and see if updates were made as
expected.
     Account statements are a vital feature of the retirement accounts that Lisa’s company
manages. The process to snapshot monthly account data and generate statements from it would be
almost impossible to test without FitNesse tests. Too many prerequisites are needed to run the
batch job for every retirement plan in the system. One FitNesse test allows Lisa to run the job to
snapshot monthly data for a given retirement plan account and a given date range, and generate
statements in PDF format, all at one time.
     [Figure: GenerateStatements.tiff, the FitNesse test page used to generate statements]

    Lisa can check the actual data that the job, invoked by the FitNesse fixture, persists in the test
database, and she can visually check the PDFs that are stored out on the file system. This has
been a huge timesaver, allowing much more exploratory testing of statement data than before.
    Lisa’s team has similar tests for other batch jobs. A fixture for the account rebalancing job
accepts an “as of” date, and creates records in the database to trigger the rebalancings. Another
fixture dispatches emails as of a given date.
    FitNesse is just one example of a tool that bypasses the GUI, making the tests easier to write and
maintain, and invokes production code with the set of inputs that you want. You can then use the
outputs for exploratory testing.

    JUnit
    JUnit can be a viable test framework for business-facing tests. It's a natural fit for testing
the API, since programmers are familiar with it and can make the transition readily.


Visible Results
     Tools don't only have to help you test. Stakeholders, including management, often want to
have easy access to test results. If you choose to use one of the large vendor products, they all have
built-in test result components. What do you do if you don't have one of the large integrated test
systems?
     Janet has been involved in two separate projects where the team built its own test management
system. One was built using PHP on MySQL, and the other, which was a little more sophisticated,
was built using Ruby on Rails. Both systems were maintained by the automation team with the
testers from the project teams acting as the customers to add new features when necessary.
     FitNesse can be tied in with your build and continuous integration, so you can get reports
back on passing and failing tests.





Summary

    •   We talked about how to find tools for your different types of testing
    •   GUI testing can be automated using a programming language with tools such as
        Watir or Watij, or using open source tools such as Canoo WebTest
    •   There are different types of tools that can be used for functional testing: keyword or
        data-driven tools, record / playback, and XUnit frameworks are just some of them.
    •   Tools can be used to supplement exploratory testing to make the testers more
        efficient and effective
    •   Visible results may be very important to management or customers so work with
        the stakeholders to determine the best way to capture them
