ITIS 6010/8010
Usable Privacy & Security

Dr. Heather Richter Lipford
richter@uncc.edu
Agenda

   Evaluation (from last time)
   Ethics & IRB
   Assignments update
   Chapter 2 & 3 discussion
When to do evaluation?
   Summative
    – assess an existing system
    – judge if it meets some criteria
   Formative
    – assess a system being designed
    – gather input to inform design


   Which you do depends on maturity of prototypes and
    goals of evaluation
   Same techniques work for both
Evaluation techniques
   Feedback from experts
    – Discount usability techniques: heuristic evaluation,
      cognitive walkthrough
   Observe users
    – Think-aloud & Cooperative evaluation
   Talk to users
    – Interviews & Focus groups
   Survey users
    – Questionnaires
   Test hypotheses
    – Experiments
Typical User Study
   Bring participants into a controlled
    setting (lab)
   Introductions and consent
   Gather demographic data and give
    instructions
   Ask participant to do a set of tasks
    – Prototype can be simulated or partially
      functional
   Observe and record behavior
   Ask participant for feedback about
    interface
Many variations

   Show or demonstrate mockup,
    storyboard, screenshots and gather
    feedback
   Observe or gather data about behavior
    in a natural setting
   Can be multiple sessions or just one
Evaluation planning
   Decide on techniques, tasks, materials
    – What are usability criteria?
     – How much authenticity is required?
   How many people, how long
   How to record data, how to analyze data
   Prepare materials – interfaces,
    storyboards, questionnaires, etc.
   Pilot the entire evaluation
    – Test all materials, tasks, questionnaires, etc.
    – Find and fix the problems with wording,
      assumptions
    – Get good feel for length of study
General Recommendations
   Clearly identify evaluation goals
   Include both objective & subjective data
    – e.g. “completion time” and “preference”
   Use multiple measures, within a type
    – e.g. “reaction time” and “accuracy”
   Use quantitative measures where possible
    – e.g. preference score (on a scale of 1-7)


Note: Only gather the data required; do so with
  minimum interruption, hassle, time, etc.
Performing the Study
   Be well prepared so participant’s time is not wasted
   Describe the purpose of the evaluation
    – “I’m testing the product; I’m not testing you”
   Explain procedures without compromising results
   Session should not be too long; the participant can
    quit anytime
   Never express displeasure or anger
   Data to be stored anonymously, securely, and/or
    destroyed
Consent
   Why important?
     – People can be sensitive about this process and these issues
    – Errors will likely be made, participant may feel inadequate
    – May be mentally or physically strenuous
 What are the potential risks (there are always
  risks)?
 “Vulnerable” populations need special care &
  consideration
    – Children; disabled; pregnant; students (why?)


   More later on IRB…
Now what do you do?
   Start just looking at the data
    – Were there outliers, people who fell asleep,
      anyone who tried to mess up the study, etc.?
   Sort & prioritize the data
   Identify & summarize issues:
    – Overall, how did people do?
    – “5 W’s” (Where, what, why, when, and for
      whom were the problems?)
   Compile aggregate results and descriptive
    statistics
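
As a concrete illustration of this step (not from the slides), here is a minimal Python sketch that computes descriptive statistics over hypothetical task-completion times and flags possible outliers. The data values and the two-standard-deviation cutoff are assumptions for illustration only.

    import statistics

    # Hypothetical task-completion times in seconds, one per participant.
    times = [12.0, 19.0, 13.0, 15.0, 17.0, 21.0, 14.0, 55.0]

    mean = statistics.mean(times)
    sd = statistics.stdev(times)

    # Rough screen: flag values more than 2 standard deviations from
    # the mean for manual inspection (the cutoff is a judgment call).
    outliers = [t for t in times if abs(t - mean) > 2 * sd]

    print(f"n={len(times)}  mean={mean:.1f}s  sd={sd:.1f}s  "
          f"median={statistics.median(times):.1f}s")
    print(f"possible outliers to inspect: {outliers}")

Flagged values are not dropped automatically; as the slide notes, you first look at the data and decide whether an outlier reflects a participant who fell asleep or tried to mess up the study.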
Making Conclusions

   Where did you meet your criteria? Where
    didn’t you?
   What were the problems? How serious are
    these problems?
   What design changes should be made?
    – Update task analysis, scenarios, etc.
   Prioritize and plan changes to the design
   Modify prototypes and go again
Experiments

   A controlled way to determine impact of
    design parameters on user experience
   Want results that rule out the possibility of
    chance

   Hypothesis: What you predict will happen
    – More specifically, the way you predict the
      dependent variable (e.g., accuracy) will depend
      on the independent variable(s)
Types of Variables
   Independent
    – What you’re studying, what you intentionally
      vary (e.g., interface feature, interaction device,
      selection technique)
   Dependent
    – Performance measures you record or examine
      (e.g., time, number of errors)
   Controlled
    – Factors you want to prevent from influencing
      results
“Controlling” Variables
    Prevent a variable from affecting the
     results in any systematic way
    Methods of controlling for a variable:
     – Don’t allow it to vary
           e.g., all males
     – Allow it to vary randomly
           e.g., randomly assign participants to different
            groups
     – Counterbalance - systematically vary it
           e.g., equal number of males, females in each
            group
    The appropriate option depends on
     circumstances
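
To make the counterbalancing option concrete, here is a minimal Python sketch (participant IDs and group sizes are invented) that randomly assigns participants to the two display groups while keeping an equal number of males and females in each:

    import random

    # Hypothetical participant pool, tagged by the variable to counterbalance.
    males = ["P1", "P3", "P4", "P8"]
    females = ["P2", "P5", "P6", "P7"]

    # Randomize within each gender, then split each evenly across groups,
    # so gender cannot vary systematically between conditions.
    random.shuffle(males)
    random.shuffle(females)
    color_group = males[:2] + females[:2]
    bw_group = males[2:] + females[2:]

    print("color group:", color_group)
    print("b/w group:  ", bw_group)

Shuffling within each gender covers the "vary randomly" option at the same time: any remaining individual differences are spread across groups by chance.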
Example
   Do people complete operations faster with a
    black-and-white display or a color one?
    – Independent - display type (color or b/w)
    – Dependent - time to complete task (minutes)
    – Controlled variables - same number of males and
      females in each group
    – Hypothesis: Time to complete the task will be
      shorter for users with color display
     – H0: Time_color = Time_b/w
Experimental Designs

   Within Subjects Design
    – Every participant provides a score for all
      levels or conditions
                  Color          B/W
    P1            12 secs.       17 secs.
    P2            19 secs.       15 secs.
    P3            13 secs.       21 secs.
    ...
Experimental Designs

   Between Subjects
    – Each participant provides results for only
      one condition
          Color             B/W
      P1 12 secs.      P2   17 secs.
      P3 19 secs.      P5   15 secs.
      P4 13 secs.      P6   21 secs.
      ...
Comparison
   Within subjects
    – More efficient: fewer trials and participants
     – But need to avoid “order effects” (see the
       counterbalancing sketch below)
          e.g. seeing color then b/w may be different from
           seeing b/w then color
   Between subjects
    – Simpler design & analysis because fewer order
      effects
     – Often shorter, so easier to recruit participants
    – More subjects for same statistical power
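
For the within-subjects order-effects problem above, the usual remedy is to counterbalance presentation order. A minimal sketch (participant IDs invented): half the participants see color first, half see b/w first.

    import itertools
    import random

    participants = [f"P{i}" for i in range(1, 9)]  # hypothetical IDs
    random.shuffle(participants)

    # Alternate the two possible orders so each occurs equally often.
    orders = itertools.cycle([("color", "b/w"), ("b/w", "color")])
    schedule = {p: next(orders) for p in participants}

    for p, order in schedule.items():
        print(p, "->", order)

With more than two conditions, the same idea scales up via a Latin square rather than simple alternation.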
Hypothesis Testing
    Tests to determine differences
     – t-test to compare two means
     – ANOVA (Analysis of Variance) to compare
       several means
     – Need to determine “statistical significance”

    “Significance level” (p):
     – The probability of getting a result at least this
       extreme when the null hypothesis is actually true
     – The threshold (“alpha” level) is often set at 0.05:
       a 5% chance of calling a difference real when it
       arose purely by chance
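
A minimal sketch of the two-means comparison named above, using scipy.stats (the timing data are invented). A paired t-test matches a within-subjects design; an independent t-test matches a between-subjects design.

    from scipy import stats

    # Hypothetical completion times in seconds for each display condition.
    color = [12, 19, 13, 14, 16, 15]
    bw = [17, 15, 21, 20, 18, 22]

    # Within subjects: the same participants produced both lists -> paired test.
    t_paired, p_paired = stats.ttest_rel(color, bw)

    # Between subjects: different participants per condition -> independent test.
    t_ind, p_ind = stats.ttest_ind(color, bw)

    alpha = 0.05
    print(f"paired:      t={t_paired:.2f}, p={p_paired:.3f}")
    print(f"independent: t={t_ind:.2f}, p={p_ind:.3f}")
    print("reject H0 at alpha 0.05?", p_ind < alpha)

For more than two conditions (e.g., three display types), scipy.stats.f_oneway gives the one-way ANOVA mentioned above.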
Discount Evaluation Techniques
   Basis:
    – Observing users can be time-consuming and
      expensive
    – Try to predict usability rather than observing it
      directly
    – Conserve resources (quick & low cost)
   Expert reviewers used
    – HCI experts interact with system and try to find
      potential problems and give prescriptive
      feedback
Example: Heuristic evaluation
    3-5 experts in HCI view or interact with a prototype.
     – May vary from mock-ups and storyboards to a working
       system
    They use high-level heuristics as guidelines, and
     identify any problems they see. For example:
     – Does the interface use natural and simple dialog?
     – Does the interface provide good error messages?
    Designers compile and summarize all the problems
     and iterate (a small compilation sketch follows below).

    Where to get heuristics?
     – http://www.useit.com/papers/heuristic/
     – http://www.asktog.com/basics/firstPrinciples.html
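
The compile-and-summarize step can be as simple as merging the reviewers' findings and counting them per heuristic. A minimal sketch, with invented findings:

    from collections import Counter

    # Hypothetical (heuristic, problem) reports from three expert reviewers.
    reports = [
        ("good error messages", "login failure gives no recovery hint"),
        ("natural dialog", "jargon in the certificate warning"),
        ("good error messages", "login failure gives no recovery hint"),  # duplicate
        ("visibility", "the lock icon is easy to miss"),
    ]

    distinct = set(reports)  # merge findings reported by multiple experts
    per_heuristic = Counter(heuristic for heuristic, _ in distinct)

    print(f"{len(distinct)} distinct problems")
    for heuristic, count in per_heuristic.most_common():
        print(f"  {heuristic}: {count}")

In practice, designers also rate each problem's severity before prioritizing fixes.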
Cognitive Walkthrough
   Assess learnability and usability by simulating
    the way novice users explore and become familiar
    with an interactive system
   Experts walk through all steps in representative
    tasks, identifying trouble spots based on 4
    questions
     • Will users be trying to produce whatever effect the action has?
    • Will users be able to notice that the correct action is available?
      (is it visible)
    • Once found, will they know it’s the right one for desired effect?
      (is it correct)
    • Will users understand feedback after action?
Advantages & Disadvantages
   Fast and cheap
   Does not need working system
   Detailed, careful examination that can
    cover entire interface
   Problems are subjective – are they
    really usability problems?
   Outcomes depend upon expertise and
    experience of the reviewers
For more info:

http://www.sis.uncc.edu/~richter/classes/2006/6010/index.html
or
http://www.sis.uncc.edu/~clatulip/ITIS6400/ITIS6400_Home.html

Or take the course in the spring.
Ethics of working with people

    Usability testing can be arduous; privacy is
     important
    Each person should know and understand
     what they are participating in:
     – what to expect, time commitments
     – what the potential risks are
     – how their information will be used
    Must be able to stop without danger or
     penalty
    All participants to be treated with respect
Attribution Theory

    Studies why people believe they succeeded or
     failed: do they credit themselves or outside
     factors? (There are gender and age differences.)

    Make sure participants do not feel
     that they did something wrong, that
     the errors are their problem
Respecting your participants
 Be well prepared so participant’s time is not wasted
 Make sure they know you are testing software, not them
 Explain procedures without compromising results
 Make them aware they can quit anytime
   Make sure participant is comfortable
   Session should not be too long
   Maintain relaxed atmosphere
   Never indicate displeasure or anger
   State how session will help you improve system
    (“debriefing”)
   Don’t compromise privacy (never identify people, only show
    videos with explicit permission)
IRB

 Institutional Review Board (IRB)
 Federal law governs procedures
 Reviews all research involving human (or
  animal) participants
 Safeguarding the participants, and thereby
  the researcher and university
 Not a science review (i.e., not to assess
  your research ideas); only safety & ethics
 http://www.research.uncc.edu/Comp/human.cfm
Ethics Certification

   Ethics is not just common sense
   Training is being standardized to ensure an
    even and equal understanding of the issues

   Go get your certification: due Sept. 18!
http://www.research.uncc.edu/tutorial/index3.cfm
IRB @ UNCC
http://www.research.uncc.edu/comp/human.cfm

   On-line tutorial
   Guidelines
   Consent procedures and template forms
   Protocol application forms

   IRB Protocol 101 Training
    – http://www.research.uncc.edu/comp/human_trng.cfm
    – 9/10, 9/11, 9/12, 9/18, 9/20 from 6-7pm
Assignments
Scenario
Your target users work in a hospital. Confidentiality of patient data
cannot be compromised. Different employees have different levels of
clearance within the one system that controls all of the patient records.
There are a limited number of public workstations that are highly
trafficked throughout the day. Current practice at the hospital is that
one worker logs in and often many people with different levels of
clearance work under that same account, even though they are not
authorized to do so. Often, the workstation remains logged in between
users; thus an unauthorized user could gain access to patient records.
In addition, passwords change on a monthly basis, so it is more
convenient for the workers to just use the account that has already
been logged in than to try to recall their ever-changing passwords.
Management insists that the passwords must change frequently to reduce
the risk of a hacker viewing the confidential data.

How would you address security needs with passwords or other forms
of authentication in this context?
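
One way to frame the clearance-level requirement is role-based access control with individual logins. The sketch below is a toy illustration only (all role names, users, and sensitivity levels are invented); it shows the per-user check that a shared, always-logged-in account makes impossible.

    # Toy role-based access check; not a real hospital policy.
    CLEARANCE = {"admin_staff": 1, "nurse": 2, "physician": 3}
    USERS = {"alice": "physician", "bob": "nurse", "carol": "admin_staff"}

    def can_view(user: str, record_sensitivity: int) -> bool:
        """Allow access only if this user's own clearance covers the record."""
        role = USERS.get(user)
        return role is not None and CLEARANCE[role] >= record_sensitivity

    print(can_view("bob", 3))    # False: a nurse cannot open physician-level data
    print(can_view("alice", 3))  # True

    # Individual logins also make every access attributable and auditable;
    # a workstation left logged in under one shared account defeats both.

Whether such per-user authentication can be made convenient enough to beat the shared-account habit is exactly the design question the scenario poses.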
My current scenario
Your target users are students and faculty doing studies in the
usability lab. This lab is a room with two Novell computers with
special usability recording software on them. Access to the lab is
controlled by 49er card. All study personnel need to be able to
access the study materials, but no one else should have access
to those materials. Study materials include consent forms,
questionnaires, and instructions to give study participants, both
in digital and physical forms. Additionally, on the computer are
the application and application data to be tested, as well as the
digital recordings of the study. An external hard drive contains
back up copies of all the recordings and application data. Most of
the people in the lab will have Novell accounts, but not everyone.

How can we provide shared access to the study materials? How
can we prevent unauthorized people from getting access to the
study materials and records?
Usable Privacy & Security: An Introduction
   “weakest link property” – attackers only
    have to exploit one error or vulnerability
   Sociotechnical system – complex system of
    technologies and people/organizations

   So are people really the weakest link in
    security or privacy systems? How much of this is
    a self-fulfilling prophecy?
   Are security and usability competing goals?
The product

The technologies and processes put in place
  for security and privacy protection

   Why don’t they work?
    – Users are unable to behave as required
          How many accounts with passwords do you have?
          How many actual passwords do you have?
    – Users are unwilling to behave as required
          Do you create strong passwords all the time?
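
To make “strong password” concrete, here is a rough sketch of one common strength estimate: bits of entropy based on length and character-class sizes. The pool sizes and any threshold are illustrative assumptions; real checkers also penalize dictionary words and predictable patterns, which this deliberately ignores.

    import math
    import string

    def entropy_bits(password: str) -> float:
        """Upper-bound estimate: length * log2(size of character pool).
        Overestimates strength for dictionary words and patterns."""
        pool = 0
        if any(c in string.ascii_lowercase for c in password): pool += 26
        if any(c in string.ascii_uppercase for c in password): pool += 26
        if any(c in string.digits for c in password):          pool += 10
        if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
        return len(password) * math.log2(pool) if pool else 0.0

    print(entropy_bits("password"))       # ~37.6 bits: weak, and a dictionary word
    print(entropy_bits("c0rrec7-H0rse!")) # far higher, mixing all four classes

The gap between what such a formula demands and what people can memorize across dozens of accounts is the usability conflict this chapter is about.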
User motivations
   Users underestimate their risk and the negative outcomes
     – Has anyone ever had a password compromised or misused?
      – How concerned are you about shoulder surfing for your
        passwords?
     – What could happen if someone could get into your email? Your
       blog? Your bank account?
   Users are not held accountable
     – Who makes sure you don’t write down your password? Who
       makes sure you don’t reuse passwords?
   Conflicts with social norms & self image
     – Have you ever shared a password with a friend/colleague?
     – Why wouldn’t you share your bank password with your spouse?

   Question: The real-world equivalent of good security is locking
    your home or car to protect your belongings. Yet those who
    follow good cybersecurity practices are perceived as “anal” or
    “paranoid.” Why the difference?
The process
The methods for creating the product.

   In your organization:
    – Who creates security policies and technologies for the
      employees?
    – Who creates the security policies and technologies for the
      customers/users?

   AEGIS
    – What are the benefits of this method?
    – What are its drawbacks?
    – Do the methods change if your users are non-technical?
The panorama
The context of the products, the larger environment

   Education
     – Teaching concepts and skills
   Training
     – Correct usage of security mechanisms through drills, monitoring,
       feedback, reinforcement
     – Should encompass all staff, not only those with immediate access to
       systems deemed at risk
   Attitudes
     – Role models

   What training/education have you had on good passwords?
   What training/education has your [favorite-non-technical-person]
    had?
     – What do you think they should have?
     – How could that be provided to them?
Tog’s advice
   Achieving balance
     – User context and bad guy context
     – User task and authentication
     – Security and privacy

   “RingWall” metaphor
      – Castle keep, ramparts, town wall, outside
     – Is this a reasonable metaphor?

   Question: Many security and privacy concerns have more to
    do with where your information is than with where you are. Does
    Tog’s desire for flexibility of privacy settings based on
    the user’s environment still apply? Do the same metaphors apply?

				