
Evaluation Without Users: Cognitive Walkthroughs, Heuristic Evaluation

Loren Terveen
CS 5115, Fall 2008
October 20

Agenda

•  Deliverables for next week
•  Cognitive Walkthroughs
•  Heuristic Evaluation
•  Next time: bring your paper prototype!

Hall of Fame/Shame This Week

•  Today
   •  Sheng Wang & Weiqi Wei
   •  Rakesh Ramakrishnan & Subrahmanya Bhat
•  Wednesday
   •  Patrick Weygand & Steve Chou
   •  Rama Susarla & Ashish Kumar Sharma


Mobile Phone Shopping:
Websites vs. Mobile Retail Stores

By
Subrahmanya Bhat
Rakesh Ramakrishnan

HOS: Website Interface

[Screenshot caption: Complete info on cell phone deals, packages, and customization.]

•  Incomplete mental model.
   •  Cannot get a 100% feel for the phone of interest: immediate feedback is partial, and real feedback comes only when you buy it.
•  Prior knowledge of cell phones is necessary.
•  Not a natural interface.
•  Selective attention.
   •  There is too much data, which prevents focus.
•  Gulf of evaluation.
   •  Language issues.
   •  Computer/Internet inexperience.
   •  Keyboards and mice may not be intuitive to everyone, but we are stuck with them!

HOS: Retail Store Interface

[Image captions: sales executives and pamphlets; real phones; ...and tables.]

•  Memory.
   •  Need to memorize the features of interest.
•  Not a design for error.
   •  Human recommendations derived from memory / past experience may be incorrect or miss requirements.
   •  Incompatible / dangerous customizations due to human error.
•  Gulf of execution.
   •  Exploration is limited by time constraints, spatial separation, and search constraints.
   •  What if you don't get the best package!
•  Comparing the models is not easy.
   •  Only via pamphlets or word of mouth.

HOF: Microsoft Surface in Retail Stores

Review the features of a particular mobile device by simply placing it on the display.

HOF: Microsoft Surface in Retail Stores

Place two devices side by side on the unit and easily compare their features.

HOF: Microsoft Surface in Retail Stores

•  Drag and drop ring tones, graphics, video, and more by "grabbing" content with your hands from a menu on the display and "dropping" it into the phone.
•  Get compatible accessories by simply tapping on the list of accessories.
•  Find similar phones using the surface display: you actually drag, rotate, and see the features of these phones as if all of them were dropped on your table!

HOF: Summary of Points

•  Direct interaction with the devices of interest.
•  Natural User Interaction (NUI).
   •  Uses natural hand movements and physical objects.
   •  No gulf of execution or evaluation.
   •  Immediate feedback.
•  Affordance.
   •  It's a table!
•  Recognition rather than recall of accessories / features.
•  User control and freedom in exploring phones.
•  Flexibility and efficiency of use.
   •  Does not require any user training; works for novice and expert users.
•  Match between the system and the real world.
•  Incorporates collaboration.
•  Error prevention.

Deliverables for next week

•  Walkthrough evaluation report:
   •  For each of your three scenarios, walk through your prototype asking the "cognitive walkthrough questions" plus … (see project guide)
   •  Write up a list of interface problems discovered during the walkthrough
   •  Add brief notes about how you discovered them
•  I encourage individual members to do walkthroughs, then do a group walkthrough, then create a unified report
•  List of interface improvement ideas
•  Start working on your executable prototype
   •  Complete all the peripheral tasks so you are in a position to work on the interface
      •  Database back-ends, network connections to the DB
      •  Style sheets, banner images, backgrounds
      •  Icons
•  Questions?

Back-of-the-Envelope Action Analysis

•  Coarse-grain
   •  basic actions, e.g., at the level of a scenario list
   •  each action is at least 2-3 seconds
   •  what must be learned/remembered?
   •  what can be done easily?
   •  documentation/training?
•  Goal is to find major problems
   •  Example: 1950s 35mm camera

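To make the 2-3-seconds-per-action rule of thumb concrete, here is a minimal sketch in Python (the scenario steps are made-up examples, not from the lecture):

```python
# Back-of-the-envelope estimate: each basic action costs roughly 2-3 seconds.
# The scenario steps below are hypothetical.
scenario = [
    "open the product page",
    "select a model from the list",
    "click the Compare button",
    "read the comparison table",
]

LOW, HIGH = 2, 3  # seconds per basic action (rule of thumb)

print(f"{len(scenario)} actions -> roughly "
      f"{len(scenario) * LOW}-{len(scenario) * HIGH} seconds")
```
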
Expert Evaluation

•  Usability specialists are very valuable
   •  double specialists are even better
•  An inexpensive way to get a lot of feedback
•  Be sure the expert is qualified in your area

Cognitive Walkthroughs

Cognitive Walkthroughs - I

•  A task-oriented method of evaluating an interface without users
•  A systematic way to imagine users' thoughts and actions when they use an interface for the first time
•  Benefits of evaluation before user meetings
   •  Helps get rid of obvious problems that would waste users' time
   •  May catch problems that testing with a few users will miss

Cognitive Walkthroughs - II

•  Goals
   •  evaluate choice points in the interface
   •  detect confusing labels, icons, images, or options
   •  detect likely user navigation errors
•  Start with a complete TCUID scenario
   •  never try to "wing it" on a walkthrough

Best Approach

•  Tell a believable story
   •  How does the user accomplish the task, action by action?
   •  Based on user knowledge and the system interface
      •  Recall DOET principles (Is this visible? Is feedback clear? Is there a gulf of execution? …)
•  Work as a group
   •  don't partition the task
•  Be highly sceptical
   •  remember the goal!
•  Every gap is an interface problem

Cognitive Walkthrough How To - I

•  Interface prototype (start with LoFi)
•  Task description
•  Scenario – a written list of the actions needed to complete the task in the interface
•  An idea of who the users will be and their characteristics (so you can tell believable stories)
   •  Personas may be useful (Google?)

Cognitive Walkthrough How To - II

•  For each action in the sequence
   •  tell the story of why the user will do it
   •  ask critical questions (recall 7 Stages of Action); one way to record the answers is sketched below
      •  Will users be trying to produce the effect? I.e., will they form the goal the designers wanted them to?
      •  Will users see the correct control?
      •  Will users recognize that this is the control they're after, i.e., that it will advance them toward their goal?
         •  Or will they select a different control instead?
      •  Will users understand the feedback? That is, will they be able to tell that they achieved their intended goal, or at least made progress toward it?

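One lightweight way to keep a walkthrough honest is to record an explicit yes/no answer to each critical question, per action. A minimal sketch in Python (the class and field names are my own, not part of the method):

```python
from dataclasses import dataclass

@dataclass
class ActionRecord:
    """Answers to the four critical questions for one scenario action."""
    action: str
    forms_goal: bool            # will users try to produce this effect?
    sees_control: bool          # will users see the correct control?
    recognizes_control: bool    # will they know it advances their goal?
    understands_feedback: bool  # can they tell they made progress?

    def problems(self):
        """Every 'no' answer is a candidate interface problem."""
        notes = {
            "forms_goal": "user may not form the intended goal",
            "sees_control": "control may not be visible",
            "recognizes_control": "control may not look relevant to the goal",
            "understands_feedback": "feedback may be unclear",
        }
        return [msg for name, msg in notes.items() if not getattr(self, name)]

# Example: one step of a scenario, with one doubtful answer.
step = ActionRecord("Click the 'Look at' button", True, True, False, True)
print(step.problems())  # ['control may not look relevant to the goal']
```
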
Quick example

•  Task: Understand the change made in a geographic edit on Cyclopath
•  Scenario:
   •  Click the "Recent Changes" tab
   •  Click the "Changes" option
   •  Click the "Update" button
   •  Click the "Look at" button
      •  Alternatively, click the "Before" and "After" buttons
   •  (Zoom in if necessary)

Theory

1.  The user sets a goal to be accomplished with the system (for example, "check spelling of this document").
2.  The user searches the interface for currently available actions (menu items, buttons, command-line inputs, etc.).
3.  The user selects the action that seems likely to make progress toward the goal.
4.  The user performs the selected action and evaluates the system's feedback for evidence that progress is being made toward the current goal.

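The four steps form a loop that repeats until the goal is met. A toy simulation in Python: the "user" picks whichever control's label seems most related to the goal, then checks the feedback. The controls, labels, and similarity scoring are all invented for illustration:

```python
def similarity(label: str, goal: str) -> float:
    """Crude stand-in for the user's judgment of 'seems likely to help'."""
    goal_words = set(goal.lower().split())
    return len(goal_words & set(label.lower().split())) / len(goal_words)

goal = "check spelling of this document"
controls = ["Print", "Spelling and Grammar", "Word Count", "Check for Updates"]

achieved = False
while not achieved:                                           # step 1: goal held
    visible = controls                                        # step 2: search
    choice = max(visible, key=lambda c: similarity(c, goal))  # step 3: select
    feedback = f"ran '{choice}'"                              # step 4: act ...
    achieved = "spelling" in feedback.lower()                 # ... and evaluate
    print(feedback, "->", "goal met" if achieved else "keep searching")
```
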
Empirical Support

1.  Subjects will try label-guided actions first, before they experiment with direct manipulation of unlabeled objects.
2.  Providing few actions in the search set can help to narrow the search if labeling cannot be provided, or if criteria for a "good" label are difficult to establish.
3.  Users are reluctant to try atypical actions.
4.  Users are reluctant to extend their search beyond the readily available menus and controls.

Benefits of a Cognitive Walkthrough

•  Focuses most on first experiences – learnability
•  Easy to learn
•  Can be done early in the software cycle
•  Surfaces and examines assumptions about what users might be thinking
•  Can identify controls that are obvious to the designer but not to the user
•  Can suggest difficulties with labels and prompts
•  Can help find inadequate feedback
•  Can help find inadequacies in the spec

Shortcomings of Cognitive Walkthrough

•  Diagnostic, not prescriptive
•  Focuses mostly on novice users (someone who has to figure the interface out, rather than someone who already knows it)
•  Relies on the ability of designers to put themselves in the users' shoes

When to do a Cognitive Walkthrough

•  Before you do a formal evaluation with your users
•  Can be done on your own for small pieces of the whole
•  Can do a walkthrough of a complete task as the interface develops

Heuristic Evaluation

Heuristic Evaluation

•  Usability heuristics are broad "rules of thumb" that describe features of usable systems
   •  Derived by evaluating common design problems across a wide range of systems
•  Heuristic evaluation is a procedure for applying heuristics to evaluate a design – an "expert evaluation"
•  "Discount usability engineering"
•  See http://www.useit.com/papers/heuristic/

Pros / Cons

+ Cheap (no special lab or equipment)
+ Easy
+ Fast (about 1 day)
+ Cost-effective
+ Detects many problems without users
+ Complementary to task-centered approaches
+ Coverage
+ Catches cross-task interactions
- Requires subjective interpretation / application
- Does not specify how to fix problems
- Performance improves as evaluator knowledge increases

... vs. Cognitive Walkthroughs

•  H.E.s are not task-centered
•  H.E.s work better on higher-fidelity prototypes (but can be done on LoFi)

Procedure

•  A set of evaluators (3-5 is about optimal) evaluates a UI (some training may be needed)
•  Each one independently checks for compliance with the heuristics
   •  Different evaluators find different problems
•  Evaluators then get together and merge their findings (a sketch of the merge step follows below)
•  Collectively rate the severity of the problems
•  Debriefing/brainstorming on how to fix the problems (+ point out what's really good)

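The merge step is essentially de-duplication across evaluators' lists. A minimal sketch in Python (the problems and report structure are illustrative, not a prescribed format):

```python
from collections import defaultdict

# One dict per evaluator: problem description -> heuristics it violates.
# All entries below are invented examples.
reports = [
    {"'Update' button label is ambiguous": ["speak the user's language"]},
    {"'Update' button label is ambiguous": ["simple and natural dialog"],
     "no feedback after saving": ["simple and natural dialog"]},
    {"no feedback after saving": ["simple and natural dialog"]},
]

merged = defaultdict(set)
for report in reports:
    for problem, heuristics in report.items():
        merged[problem].update(heuristics)

# The unified list notes every heuristic any evaluator cited.
for problem, heuristics in merged.items():
    print(f"{problem}: {sorted(heuristics)}")
```
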
Why multiple evaluators?

Wisdom of Crowds… (even true for 'experts')

Why multiple evaluators?

[Figure: proportion of usability problems found (0-100%) vs. number of evaluators (0-20), averaged over 6 case studies; the curve rises steeply at first and then flattens.]

So how many evaluators?

•  One evaluator does very poorly – only 35% of problems detected
•  5 evaluators find about 75% of problems
•  So more is better, right?
   •  Well…
   •  More evaluators cost more
   •  And don't find many more problems
   •  So there are diminishing returns

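The shape of this curve is commonly modeled (Nielsen and Landauer) as found(i) = N(1 − (1 − λ)^i), where λ is the chance that a single evaluator finds a given problem; λ ≈ 0.3 is a commonly cited average, though the 35%/75% figures above come from case-study data and will not match the model exactly. A quick sketch:

```python
# Nielsen & Landauer: fraction of problems found by i evaluators
#   found(i) = 1 - (1 - lam) ** i
# lam is the per-evaluator detection rate; ~0.3 is a commonly cited average.
lam = 0.3

for i in (1, 3, 5, 10, 15):
    frac = 1 - (1 - lam) ** i
    print(f"{i:2d} evaluators -> ~{frac:.0%} of problems")
# The increments shrink quickly: the classic diminishing-returns curve.
```
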
Cost-benefit analysis

•  Based on estimates of the value of finding problems and the cost of doing the evaluation
•  Note: a ratio of 50 means that investing $10K leads to value of $500K

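The note's arithmetic, spelled out as a sanity check:

```python
cost = 10_000   # cost of running the heuristic evaluation, in dollars
ratio = 50      # benefit/cost ratio from the slide
value = cost * ratio
print(f"${cost:,} invested at a ratio of {ratio} -> ${value:,} of value")
# -> $10,000 invested at a ratio of 50 -> $500,000 of value
```
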
What an individual evaluator does

•  Each evaluator goes through the UI at least twice
•  First, get an overall feel for the system
•  Second, inspect the various interface elements and consider them in terms of the heuristics
   •  May use a supplementary list of domain-specific guidelines

Preparing the evaluators

•  If the system is intended to be "walk up and use", or the evaluators are domain experts, no particular training is needed
•  Otherwise, evaluators may need some knowledge about the domain and scenarios

Output of an individual Heuristic Evaluation

•  List of problems
•  For each problem, which heuristics were violated

Severity ratings

•  Used to allocate resources to fix problems
•  Based on
   •  Frequency with which the problem will occur
   •  Impact of the problem (hard or easy to overcome)
   •  Persistence (will users learn a workaround, or will they be bothered every time?)

The scale:
1 – cosmetic problem
2 – minor usability problem
3 – major usability problem; important to fix
4 – usability catastrophe – must fix

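Nielsen recommends collecting severity ratings from several evaluators independently and averaging them, since single-evaluator ratings are unreliable. A minimal sketch in Python (the problems and scores are invented):

```python
from statistics import mean

LABELS = {1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophe"}

# Independent 1-4 ratings from three evaluators, per merged problem
# (all entries invented for illustration).
ratings = {
    "'Update' button label is ambiguous": [3, 4, 3],
    "banner image uses low-contrast text": [1, 2, 1],
}

# Sort worst-first so fix effort can be allocated by severity.
for problem, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
    avg = mean(scores)
    print(f"{avg:.1f} ({LABELS[round(avg)]}): {problem}")
```
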
Debriefing

•  Conduct with evaluators, observers, and development team members
•  Discuss general properties of the UI, including good points
•  Brainstorm potential improvements to fix major usability problems
•  Development team rates how much effort each fix would require

The individual heuristics

Heuristic 1

[Example screenshots]

H1. Simple and natural dialog

•  Exploit the user's conceptual model
•  Match user tasks in as natural a way as possible
   •  Maximize the mapping between interface and task semantics

Simple and natural dialog

•  Info should appear in natural order (for the task)
•  Remove or hide irrelevant or rarely needed info
   •  It competes for users' cognitive attention
•  Less is more – easier to learn, fewer errors, less distraction…
•  Good graphic design
   •  Use grouping and proximity to present related info
   •  Use color appropriately

Heuristic 2

Poor use of language
[screenshot]

What does this do?
[screenshot]

H2. Speak the User's Language

•  Use terminology based on the user's language for the task
•  Avoid engineering jargon
•  Use the user's native language
•  Use conventional meanings
•  View the interaction from the user's perspective
•  Do not force naming conventions
•  Exploit natural mappings and metaphors