
                                     Usability Testing
                             Bedford St. Martins' New Website
                                            in
                            Texas Tech’s Usability Research Lab
                                 Dr. Locke Carter, Director
The following is a mini-proposal that identifies the scope, timeline, and budget for three small usability tests to be
conducted on the proposed revision of Bedford St. Martins' (BSM) current website. The purpose of these tests is to
a) help the development team identify areas of improvement in navigation structures in January-March, b) examine
how target users interact with the almost-finished code and design in March-June, and c) subject the finished
product to rigorous usability/functionality tasks in August-September. These usability tests will consist of a series
of tasks that will be developed jointly by the TTU Usability Research Lab (URL) and the BSM design team. The
usability tests will collect quantitative and qualitative results about the relative success or failure of target users
engaged in these tasks. At the end of the process, we will present you with a written report and a highlights DVD
that illustrate our key findings.

Rationale for Usability Testing
When users interact with BSM's website, they are usually driven by goals related to their teaching: they may be
looking for a book to use next semester, or checking for ancillaries to a book they previously chose (or that was
chosen for them, in the case of standardized programs). In many ways, the interface is like an e-commerce storefront:
if the customer does not see what s/he is looking for within a very short interval, s/he will leave. Similarly, if the
BSM site fails to satisfy its users' goals, they may give up or become frustrated. If they have not yet chosen a book,
this frustration may lead them to try another publisher; if they are looking for ancillaries, it will lead to poor
word-of-mouth advertising and possible losses in future semesters.

By observing representative users performing tasks in a usability test, the BSM designers can identify areas of
weakness (along with strengths) and may develop better approaches to satisfying user expectations. Usability testing
does not aim to ensure accuracy of information, nor does it strive to debug programming. Instead, usability testing
focuses on real users working with a real product toward real goals. Regardless of how attractive or well designed a
product may be, it is the user's experience that determines its usability.

While all three tests should be thought of as participating in the ongoing development of the project, each will
approach the BSM website differently. The first test (January-March) assumes only that the wireframe designs are
complete and still somewhat malleable. At this stage of the project, our usability tests will focus on the mental
models users bring to the search for textbooks, with an eye to testing how well the wireframe design matches their
navigation expectations. Full functionality is not necessary at this stage. Usability problems discovered in the first
test may be corrected early in the project, giving the design team confidence that the navigation metaphors
employed during the wireframe stage are valid.

The second test (March-June) assumes a functional prototype of the website, thus allowing us to run users through
full tasks: finding particular materials, learning about them, and conducting whatever business is assumed by the
BSM designers. Usability problems discovered during this stage of testing may involve simple interface corrections
(such as oddly named buttons), but are more likely to involve functional quirks: slow performance, confusing results
screens, bugs, error screens, and other events that surface when real users exercise a product heavily. These
problems can be fixed
relatively early in the project, giving the BSM team the summer to implement our suggestions and solidify the
product.

The third test (August-September) assumes a fully functional, tested product. Ideally, we will not find any usability
problems at this point, and if so, then BSM can use the usability test as a sort of "Good Housekeeping seal of
approval": this type of test is called a summative test and is often used to tout a product's usability and to develop a
product-comparison chart. However, if we find design, navigation, or functional problems, they ought to be
minimal, and can be corrected well before the spring semester.

Dr. Carter and members of the BSM design team will construct representative tasks that focus on user experience.
We will draw on all available sources of information to help craft these tasks: helpdesk calls, training feedback,
and the design team's intuition, to name a few.

Research Methods
The usability tests will be conducted in the usability lab at Texas Tech University. Participants will be recruited
from the university population and will be tested individually. Before testing begins, participants will be asked to
sign a release form and to complete a pre-test interview that will help identify their level of experience with
various types of software. During the test, participants will be asked to use a think-aloud protocol while they
complete each task.

Together, we will construct scenarios for our participants so that they are trying to accomplish typical tasks at
the BSM website instead of just browsing. The two scenarios we will certainly want to investigate involve a)
the teacher who doesn't know which book to use for the coming semester and b) the teacher who already has the
book, but is looking for help in getting ready for the semester. We can add another one or two scenarios to this list
based on BSM's experiences.

Operating within a given scenario, users will be asked to perform certain tasks, or operations that have a beginning
and an end and for which we can determine levels of success or failure. Again, we will develop these tasks together,
starting in places where BSM's design team suspects there might be problems, whether because of hunches, designer
questions, focus groups, or helpdesk calls. We will approach the tasks from a couple of different directions so that
we make sure what we're seeing is really a characteristic of a given web page or other feature of the project. In each
task we will be videotaping a) the computer screen that the user sees and b) the participant him/herself. We will be
interested in locating parts of tasks where participants are either satisfied or frustrated. The tasks we design will not
be difficult; indeed, they should all be achievable within a reasonable time. If a user cannot complete a task in a
reasonable time (perhaps 5 minutes), we will simply ask the user to move on to the next task and we will discreetly
note this as a failure.
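
To make the pass/fail bookkeeping concrete, the following is a minimal sketch in Python of how each task attempt
might be recorded. The TaskAttempt structure, its field names, and the sample data are hypothetical illustrations,
not part of the agreed test plan; only the five-minute cutoff comes from the paragraph above.

    # Hypothetical sketch: names and sample data are illustrative only; the
    # five-minute cutoff mirrors the "reasonable time" suggested above.
    from dataclasses import dataclass

    TASK_TIME_LIMIT_SECONDS = 5 * 60

    @dataclass
    class TaskAttempt:
        participant_id: str
        task_name: str
        seconds_elapsed: float
        completed: bool            # did the participant reach the task's end state?
        facilitator_notes: str = ""

        @property
        def outcome(self) -> str:
            # An attempt that was abandoned or ran past the cutoff counts as a failure.
            if self.completed and self.seconds_elapsed <= TASK_TIME_LIMIT_SECONDS:
                return "success"
            return "failure"

    # A participant who ran out of time is discreetly recorded as a failure.
    attempt = TaskAttempt("P03", "locate ancillaries for chosen textbook", 312.0, False,
                          "asked to move on to the next task")
    print(attempt.task_name, "->", attempt.outcome)   # prints: ... -> failure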

If BSM does not have specific problem spots in mind, then we can conduct a less guided, "fishing expedition" style
of usability test in which we construct scenarios that are general enough to get the participant to use
many parts of the website, but which are also specific enough so that we can tell where there are glitches in qualities
of usability. Even in more structured tasks and scenarios, we will be keeping a watch for usability problems that fall
outside our expectations.

While the test may make use of some quantitative data, the primary method of research will be direct observation.
The user sample is too small to allow us to perform statistical analyses, but even if it were larger, the purpose of
these small usability tests is to identify the most grievous usability problems faced by users of your redesigned
website. If we have to perform statistical analysis in order to identify minute differences in performance or
preference, then the findings would yield relatively meaningless recommendations for the design team. Of course,
such analysis may be valuable at some later point in the design process.

The qualitative data that the test will collect include the following:

     •    Running comments by the user, who will be briefed beforehand on thinking aloud (and encouraged to do so
          during the test by the facilitator)

     •    Facial expressions and other behaviors observed during the tasks

     •    The participant's own opinion as to success or failure during tasks




     •    Our own comparison of user performance against benchmarks, perhaps defined as something like "user
          performs task A within 5 screens (or 1 minute, or 17 mouse clicks, etc.)" or "teacher user identifies the
          proper link to ancillary materials within 10 seconds." These benchmarks may also be subjectively defined:
          "user locates the release date of a new textbook to his/her satisfaction, without frustration." (A brief
          sketch of how the objective checks might be tallied follows this list.)

User Profile
The BSM website's user base is composed mainly of university and community college instructors from the
"soft side" disciplines. Based on our telephone call, the user group we propose to test is this primary group
(instructors), broken down into technologically savvy instructors and technologically naïve instructors. Instructors
we choose for the tests will likely have the following characteristics:

   •      Academic background: A Master’s degree or above (areas of study will vary). Users are highly educated
          and are therefore likely to be independent and motivated workers, experienced researchers, and excellent
          problem solvers and critical thinkers. Because most instructor positions require an advanced degree, most
          users will hold at least a Master’s degree and at most a Doctorate.

   •      Work environment: An academic office or a home office (time of day may vary). Users perform work tasks
          in various environments, including private offices, laboratory settings, and classrooms.

   •      Age: Between 24 and 65 years of age. This approximation is based on the notion that most instructors will
          hold at least a Master’s degree, which in most cases requires 2 to 3 years of study beyond the Bachelor’s
          degree, at which point most college graduates are between 22 and 23 years of age.

   •      Computer & Internet experience: Have a wide range of experience using a computer for web-based
          services, such as searching for information. It is presumed that most college instructors will have some
          experience with conducting research online, but they may not have investigated, compared, or used
          textbooks online.

For each of the three phases of testing, we will examine 4-6 users, evenly mixed by their technological abilities.
Tests will take approximately one hour each and will be conducted in the English Department’s Usability Research
Lab. All institutional research waivers and permissions will be handled by Texas Tech.

It is possible that a very small usability test can be conducted late in the project on sales and marketing tools, which
would involve testing the usability of emailing customized lists of books and building custom school web sites.
Since this test would involve a different audience, it would lie outside the scope of this proposal, but Texas Tech
would be interested in helping with this stage of testing.

Deliverables
For each phase of testing, the BSM design team will receive a formal test report and a highlights DVD at the end of
the usability testing process. This report will identify areas of success and failure, will describe the methodology of
the usability test, and will make recommendations to the design team (if they are evident from the usability test). In
addition, we will be available (if BSM wishes a face-to-face briefing and is willing to fly one or more of us to
Boston) to present our findings in a 30- or 60-minute briefing to the web design team and other stakeholders, who
will have ample opportunity to ask specific questions about our findings.

Duties and Responsibilities
Bedford St. Martins’ Design Team
If the test aims to ask participants to interact with features that are still in beta testing, then we will need screen
shots, prototypes, or design docs, which we will use to construct user tasks. BSM will provide electronic access to
the evolving product as it becomes functional. Immediately after this agreement is finalized, the BSM design team
and TTU will need to communicate frequently by e-mail or phone to firm up the users’ tasks and to ensure that the
test will capture good-quality data. Finally, the BSM team needs to provide some sort of incentive for users—a scholarly or
educational book would be suitable for our target users (bearing in mind that they already receive plenty of exam
copies of textbooks).

Texas Tech User Research
For our part, we at TTU will familiarize ourselves with the BSM prototypes and the evolving website, design the tasks (along
with the BSM team), recruit test participants, conduct the usability tests, analyze the data, and write the report of our
findings (along with video highlights). If BSM wishes a face-to-face briefing, one or more of us would be available
with a week’s notice.

Approximate Schedule of Events and Milestones
The following schedule is an achievable estimate that assumes BSM and Dr. Carter arrive at a viable test plan based
on realistic user tasks in Week 0. It also assumes that participant recruitment is successful, giving us a pool of
participants in the two target groups sufficient to conduct the test. The testing schedule does not have to proceed this
quickly, of course. Since the goal of the test is to provide the design group with useful and usable information, the
schedule can adapt to its timetable. The main constraint lies with the academic calendar: after May 1, school is out,
and we would be well advised to wait until June 1, when we can recruit summer-school participants.

                                                                  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct
Test 1  Write tasks, scenarios, test plan                          x
        Recruit participants                                       x    x
        Conduct pilot test, fine-tune tasks, select participants        x
        Usability test                                                  x    x
        Compile data, create deliverables                                    x
        Report, client briefing                                              x
Test 2  Write tasks, scenarios, test plan                                    x
        Recruit participants                                            x    x
        Conduct pilot test, fine-tune tasks, select participants                  x
        Usability test                                                            x    x
        Compile data, create deliverables                                              x
        Report, client briefing                                                             x
Test 3  Write tasks, scenarios, test plan                                                        x
        Recruit participants                                            x    x    x    x    x    x
        Conduct pilot test, fine-tune tasks, select participants                                      x
        Usability test                                                                                x    x
        Compile data, create deliverables                                                                  x
        Report, client briefing                                                                            x    x




Personnel
The usability test will be conducted in the English Department’s Usability Research Lab by Dr. Locke Carter and
one or more of his graduate students.

Dr. Locke Carter is the director of usability research. An assistant professor of Technical Communication and
Rhetoric, he is also the TCR program's director of graduate studies. He teaches courses in usability testing,
document production, argumentation theory, and hypertext theory. His book, Market Matters: Applied Rhetoric
Studies and Free Market Competition, will be published by Hampton Press in the first quarter of 2005.



