Questionnaire Design
Amit Das
aadas@ntu.edu.sg
Guiding Principle
   Respondents should be able and willing to
    provide the information requested
       respondents may not be able to recall the
        information
           “How much did you spend on films in the last 3 years?”
       questions may be unclear or ambiguous
           “Do you agree with the government’s philosophy?”
       questions may invade respondent’s privacy
           “How much did you earn last year?”
       the “good subject” effect (respondents answer to
        please the researcher rather than candidly)
Multiple Items
   Many theoretical constructs are multi-faceted;
    multiple questions are needed to assess them
       average of multiple items = score on construct
   multiple measures of a single construct
    increase reliability (freedom from noise)
       see the scoring sketch after this slide
   the multiple measures of one construct should
    be “sprinkled” across the questionnaire
       otherwise, responses to adjacent related
        questions “clump up”
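   A minimal Python sketch of these two points (not from the slides; the item
   names, response values, and use of pandas are illustrative assumptions;
   Cronbach’s alpha is the usual reliability check):

       import pandas as pd

       # Hypothetical responses: items q1, q7, q11 all measure one construct
       responses = pd.DataFrame({
           "q1":  [4, 5, 2, 3, 4],
           "q7":  [4, 4, 2, 3, 5],
           "q11": [5, 4, 1, 3, 4],
       })

       # Construct score = average of the construct's items, per respondent
       construct_score = responses.mean(axis=1)

       def cronbach_alpha(items: pd.DataFrame) -> float:
           # alpha = k/(k-1) * (1 - sum of item variances / variance of item totals)
           k = items.shape[1]
           item_vars = items.var(axis=0, ddof=1).sum()
           total_var = items.sum(axis=1).var(ddof=1)
           return (k / (k - 1)) * (1 - item_vars / total_var)

       print(construct_score.round(2).tolist())
       print(round(cronbach_alpha(responses), 3))   # closer to 1 = less noise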
Inter-relationship among items
   Measures of the same construct should show
    strong association (“hang together”)
        let items 1, 7, and 11 measure Construct A and
         items 4, 6, and 9 measure Construct B; the
         expected pattern of inter-item associations is:
                    Construct A                       Construct B
              1          7            11        4          6            9
     1    perfect    strong       strong    weak       weak         weak
     7    strong     perfect      strong    weak       weak         weak
    11    strong     strong       perfect   weak       weak         weak
     4    weak       weak         weak      perfect    strong       strong
     6    weak       weak         weak      strong     perfect      strong
     9    weak       weak         weak      strong     strong       perfect
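   To see this pattern in data, here is a hedged sketch with simulated
   (made-up) responses: each item is its latent construct plus noise, so
   same-construct items correlate strongly and cross-construct items weakly.

       import numpy as np
       import pandas as pd

       rng = np.random.default_rng(0)
       n = 200
       latent_a = rng.normal(size=n)   # latent Construct A
       latent_b = rng.normal(size=n)   # latent Construct B

       def item(latent):
           # each observed item = its construct plus measurement noise
           return latent + 0.5 * rng.normal(size=n)

       items = pd.DataFrame({
           "item1": item(latent_a), "item7": item(latent_a), "item11": item(latent_a),
           "item4": item(latent_b), "item6": item(latent_b), "item9": item(latent_b),
       })

       # the correlation matrix should reproduce the strong/weak blocks above
       print(items.corr().round(2))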
Open or closed-ended?
   Open-ended questions allow respondents
    more freedom to express their thoughts
       time-consuming to respond to
       difficult to analyze
           if open-ended responses are to be “coded” into a set
            of categories
               establish inter-rater reliability (Cohen’s Kappa;
                a computation sketch follows this slide)
               aren’t we better off with closed-ended questions?
   Closed-ended questions must anticipate the
    common responses
       the “other” category should be needed only rarely;
        frequent “other” responses mean the listed
        options are incomplete
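   If two raters code the open-ended answers independently, Cohen’s Kappa can
   be computed, for instance, with scikit-learn; the category labels and
   codings below are made up for illustration.

       from sklearn.metrics import cohen_kappa_score

       # each rater assigns every open-ended response to one category
       rater1 = ["price", "quality", "service", "price", "other", "quality"]
       rater2 = ["price", "quality", "service", "quality", "other", "quality"]

       # kappa corrects the raw agreement rate for agreement expected by chance
       print(round(cohen_kappa_score(rater1, rater2), 3))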
Scaling of responses
   To measure the strength of attitudes towards
    an issue, responses are located on a
    continuum anchored by opposites, e.g.
    “The NTU MBA program is …”

       awful --- not good --- so-so --- pretty good --- awesome

        easy to establish the ordinal nature of such data
        are these interval data, i.e. are the gaps between
         adjacent points equal? (see the coding sketch below)
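   A small coding sketch (anchor labels taken from the example above; the
   responses are hypothetical): numbering the anchors 1 to 5 captures the
   ordinal ranking, while averaging the numbers already treats the data
   as interval.

       anchors = ["awful", "not good", "so-so", "pretty good", "awesome"]
       codes = {label: i + 1 for i, label in enumerate(anchors)}   # 1..5

       responses = ["pretty good", "so-so", "awesome", "not good"]
       numeric = [codes[r] for r in responses]      # [4, 3, 5, 2], ordinal either way
       mean_score = sum(numeric) / len(numeric)     # meaningful only if gaps are equal
       print(numeric, mean_score)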
Response Biases
   Not enough variation among responses
        use scale with more points (7-point, 9-point, …)
   Too many “middle” responses
        use scale with even number of points
   Leniency bias (responses on “generous” side)
     use asymmetrical anchors e.g.
    “The candidate’s potential for graduate studies is”
       quite good --- good --- very good --- extremely good --- best I’ve seen
Forced-choice questions
   Sometimes respondents choose high levels of all
    attributes when the researcher wants them to
    choose among attributes
     use forced-choice questions, e.g.
    “Which characteristic best describes you – intelligent
      or hard-working?”
     variation: “Allocate 100 points over the following
      features – sound quality, build quality, weight,
      style, converged features (camera, MP3, PDA)”
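   A quick validation sketch for the constant-sum variation (feature names from
   the example above; the point values are hypothetical): the allocated points
   must total exactly 100 for the response to be usable.

       allocation = {"sound quality": 35, "build quality": 20, "weight": 10,
                     "style": 15, "converged features": 20}

       total = sum(allocation.values())
       if total != 100:
           print(f"Invalid response: points sum to {total}, not 100")
       else:
           print("Valid allocation:", allocation)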
Questions to Avoid
   Double-barrelled questions (two questions bundled into one)
    “Have you stopped beating your wife?”
     split into two or more separate questions

   Leading questions
    “Don’t you think REITs are going to take off?”
     research, not advocacy

   Questions with jargon
       “Are RDBMS better for TPS or DW/BI?”
Pilot Testing
   The best-laid plans can go haywire!
   The objective of pilot testing is to check that
    respondents consistently interpret questions
    as intended
       pilot-test respondents might be invited to
        comment on the instrument and procedure
       the researcher’s presence during survey
        administration helps spot problems more quickly
   pilot testing “uses up” respondents, who should
    not be reused in the main study
Using Existing Instruments
   Many researchers place their questionnaires
    in the public domain
       such questionnaires (or parts thereof) can be
        used (with proper credits) if our study examines
        the same or similar constructs
       re-use of previously validated instruments supports
           validity and reliability of measures
           comparability of results across studies
   Try to find existing measures before writing new ones
Interviews
   Interviews provide
       better rapport
       clarification of complex items
       greater flexibility in wording and sequence
   However, interviews
       are costly in terms of time and effort
       do not offer the anonymity of mail surveys
   If you do interviews,
       develop a script and stick to it

								