Health Program Effect
Evaluation Questions and
Data Collection Methods

CHSC 433
Module 5/Chapter 9
L. Michele Issel, PhD
UIC School of Public Health

1.   Develop appropriate effect
     evaluation questions
2.   List pros and cons for various data
     collection methods
3.   Distinguish between types of effect
     evaluation (impact vs. outcome)

      Involve Evaluation Users
            so they can:

   Judge the utility of the design
   Know strengths and weaknesses of the design
   Identify differences in criteria for
    judging evaluation quality
   Learn about methods
   Debate these issues BEFORE the data are in

The following terms are used in
 reference to basically the same set
 of activities and for the same purposes:
     Impact evaluation
     Outcome evaluation
     Effectiveness evaluation
     Summative evaluation
       Differences between
       Research - Evaluation

   Nature of problem addressed: new knowledge
    vs assess outcomes
   Goal of the research: new knowledge for
    prediction vs social accounting
   Guiding theory: theory for hypothesis testing
    vs theory for the problem
   Appropriate techniques: sampling, statistics,
    hypothesis testing, etc. vs fit with the problem

Characteristic      Research                  Evaluation

Goal or Purpose     Generate new              Social accounting and
                    knowledge for             program or policy decision
                    prediction                making

The questions       Scientist's own           Derived from program goals
                    questions                 and impact objectives

Nature of problem   Areas where knowledge     Assess impacts and outcomes
addressed           is lacking                related to the program

Guiding theory      Theory used as base for   Theory underlying the
                    hypothesis testing        program interventions, theory
                                              of evaluation

Appropriate         Sampling, statistics,     Whichever research techniques
techniques          hypothesis testing, etc.  fit with the problem

Setting             Anywhere that is          Usually wherever one can
                    appropriate to the        access the program recipients
                    question                  and non-recipient controls

Dissemination       Scientific journals       Internal and external program
                                              reports, scientific journals

Allegiance          Scientific community      Funding source, policy
                                              preference, scientific
                                              community
    Evaluation Questions…

   What questions do the stakeholders
    want answered by the evaluation?
   Do the questions link to the impact and
    outcome objectives?
   Do the questions link to the effect theory?
       From Effect Theory to
         Effect Evaluation

   Consider the effect theory as a source of
    variables
   Consider the effect theory as guidance
    on design
   Consider the effect theory as informing
    the timing of data collection
[Diagram: Effect theory components. The causative theory links antecedent
independent variables (xa, xb, ...) plus asset variables and determinant
independent variables plus intervening variables to the dependent impact
variables (Y, impact theory) and the dependent outcome variables (Y, outcome
theory). Contributing variables (xa, xb) are often confounding, moderating,
or mediating. The intervention theory underlies the causative theory.]
  From Effect Theory to
    Effect Evaluation

The next slide is an example of using the
 effect theory components to
 identify possible variables on which to
 collect evaluation data.
[Diagram: Prenatal program example of effect theory components.
Intervention theory: x0 control group; x1 prenatal vitamin group;
x2 nutrition education group; x3 vitamins and education group.
Antecedent-independent variable: xa knowledge.
Causative (impact) theory determinants: xa dietary habits; xb dietary
knowledge; xc iron intake; xd,e,f parity, age, income.
Impact variable Y: prenatal anemia (hematocrit). Outcome variable Y:
newborn weight. Contributing variables: none measured.]
       Impact vs Outcome

   Impact is more realistic because it
    focuses on the immediate effects and
    participants are probably more accessible
   Outcomes are more policy-oriented, longitudinal,
    and population based, and therefore more
    difficult and costly to evaluate. Also, causality
    (the conceptual hypothesis) is fuzzier.
      Effect Evaluation

Draws upon and uses what is known
  about how to conduct rigorous research:

  Design -- the overall plan, such as experimental,
   quasi-experimental, or longitudinal

  Method -- how the data are collected, such as
   telephone survey, interview, observation
    Methods --> Data Sources

   Observational --> logs, video
   Record review --> client records, patient charts
   Survey --> participants/non-participants, family members
   Interview --> participants/non-participants
   Existing records --> birth & death
    certificates, police reports
  Comparison of Data
  Collection Methods

Characteristics of each method to be
   considered when choosing a method:
1. Cost
2. Amount of training required for data collectors
3. Completion time
4. Response rate
    Validity and Reliability

   Method must use valid measures
   Method must use reliable processes for
    data collection
   Method must use reliable measures
      Variables, Indicators, Measures

   Variable: the “thing” of interest
   Measure: the way that the variable is known,
    i.e., how that thing gets measured
   Some agencies use “indicator” to mean the
    number that indicates how well the program is
    performing
It’s all just language…. Stay focused on what is
 being measured.
     Levels of Measurement

Level         Examples              Advantage                Disadvantage
Nominal,      Zip code, race,       Easy to understand.      Provides the least
Categorical   yes/no                                         information.

Ordinal,      Social class,         Ranks carry more         Limited information
Rank          Likert scale,         information than         from the data.
              “top ten” list        categories alone.
              (worst to best)

Interval,     Temperature, IQ,      Gives most               Can be difficult to
Ratio:        distances, dollars,   information; can         construct valid and
continuous    inches, dates of      collapse into nominal    reliable interval
              birth                 or ordinal categories.   variables.
                                    Used as a continuous
                                    variable.
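The table's point that interval/ratio data can be collapsed into ordinal categories (but not the reverse) can be sketched in code. The hematocrit cutoffs below are illustrative placeholders, not clinical values:

```python
# Sketch: collapsing an interval-level measure (hematocrit, in percent)
# into ordinal categories. Cutoffs are illustrative, not clinical guidance.

def to_ordinal(hematocrit: float) -> str:
    """Collapse a continuous hematocrit value into an ordinal category."""
    if hematocrit < 33.0:
        return "low"
    elif hematocrit < 36.0:
        return "borderline"
    else:
        return "normal"

readings = [31.5, 34.2, 38.0, 35.9]          # interval-level data
categories = [to_ordinal(r) for r in readings]  # ordinal data
print(categories)  # ['low', 'borderline', 'normal', 'borderline']
```

Note that the ordinal categories cannot be expanded back into the original continuous readings, which is why collecting data at the highest feasible level of measurement preserves the most options for analysis.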
Types of Effects as documented
      through Indicators

Indicators of physical change
Indicators of knowledge change
Indicators of psychological change
Indicators of behavioral change
Indicators of resources change
Indicators of social change

It is more productive to focus on
  a few relevant variables than to
  go on a wide-ranging fishing
  expedition.

            Carol Weiss (1972)

   Intervening variable: any variable that
    forms a link between the independent
    variable and the dependent variable
    (outcome), AND without which the
    independent variable is not related to
    the dependent variable.

   Confounding variable: an extraneous
    variable which accounts for all or part of
    the effects on the dependent variable
    (outcome); it can mask the underlying true
    relationship.

   Must be associated with the dependent
    variable AND the independent variable.

   Exogenous (outside of individuals) confounding
    factors are uncontrollable (selection bias,
    coverage bias).
   Endogenous (within individuals) confounding
    factors are equally important: secular drift in
    attitudes/knowledge, maturation (children or
    elderly), seasonality, interfering events that
    alter individuals.
            Variable story…

To get from Austin to San Antonio, there is one highway.
  Between Austin and San Antonio there is one town, San
  Marcos. San Marcos is the intervening variable because it is
  not possible to get to San Antonio from Austin without going
  through San Marcos.
The freeway is often congested, with construction and
  heavy traffic. Highway conditions are the
  confounding variable because they are associated with both
  the trip (my car, my state of mind) and with arriving
  (alive) in San Antonio.
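The confounding idea can be made concrete with a small simulation (an illustrative sketch, not part of the lecture): a variable Z drives both the "exposure" X and the "outcome" Y, so X and Y appear associated overall even though X has no direct effect on Y. Examining only one level of Z makes the association vanish.

```python
# Sketch: a confounder Z is associated with both X and Y, creating a
# spurious X-Y correlation. Stratifying on Z removes it.
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

z = [random.randint(0, 1) for _ in range(5000)]   # confounder
x = [zi + random.gauss(0, 1) for zi in z]         # exposure, driven by Z
y = [zi + random.gauss(0, 1) for zi in z]         # outcome, driven by Z only

overall = pearson(x, y)
within = pearson([xi for xi, zi in zip(x, z) if zi == 0],
                 [yi for yi, zi in zip(y, z) if zi == 0])

print(f"overall r = {overall:.2f}")            # noticeably positive
print(f"within Z=0 stratum r = {within:.2f}")  # near zero
```

This is exactly why a confounder must be associated with both the independent and the dependent variable: if Z drove only one of them, the overall X-Y correlation would already be near zero.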
           Measure Program Impact
             Across the Pyramid

[Pyramid, top to bottom:]

            Health Care

         Enabling Services

       Population-Based Services

        Infrastructure Services
