
Threats to Construct Validity

Inadequate Preoperational Explication of Constructs
• preoperational = before translating constructs
  into measures or treatments
• in other words, you didn't do a good enough
  job of operationally defining what you mean
  by the construct

Mono-method Bias

• pertains to the treatment or program
• you used only one version of the treatment or
  program

            Mono-method Bias

• pertains especially to the measures or outcomes
• you operationalized the measures in only one way
• for instance, only used paper-and-pencil tests

Hypothesis Guessing

• people guess the hypothesis and respond to it
  rather than responding to the treatment itself
• people want to look good or look smart
• this is a construct validity issue because the
  "cause" will be mislabeled. You'll attribute the
  effect to the treatment rather than to the
  guessing

           Evaluation Apprehension

• people make themselves look good because they
  know they're in a study
• or, their apprehension may make them
  consistently respond poorly -- you mislabel
  this as a negative treatment effect

           Experimenter Expectancies

• the experimenter can bias results
• consciously or unconsciously
• the bias becomes confounded (mixed up) with
  the treatment; we mislabel the results as a
  treatment effect

Confounding Constructs and Levels of Constructs

• you conclude that the treatment has no effect
  when it is only that level of the treatment
  which has no effect
• really a dosage issue - related to mono-
  operation bias because you only looked at one
  or two levels

   Interaction of Different Treatments

• people get more than one treatment
• this happens all the time in social ameliorative
  programs
• again, the construct validity issue is largely a
  labeling issue

    Interaction of Testing and Treatment

• does the testing itself make the groups more
  sensitive or receptive to the treatment?
• this is a labeling issue
• differs from testing threat to internal validity;
  here, the testing interacts with the treatment
  to make it more effective; there, it is not a
  treatment effect at all (but rather an
  alternative cause)

Restricted Generalizability Across Constructs

• you didn't measure your outcomes completely
• or, you didn't measure some key affected
  constructs at all (i.e., unintended effects)
