                                     CHAPTER 3

               PERFORMANCE MEASUREMENT FOR
                  COMMUNITY MOBILIZATION


                   Performance measurement is a tool to track
                  and improve community mobilization initiatives
                                 and outcomes.



       Chapter Contents

       Principle One: Know What Performance Measurement Is
       Principle Two: Begin with a Logic Model for Your Program
       Principle Three: Be Useful, Accurate, Realistic, and Respectful
       Principle Four: Know Your Capacity for Performance Measurement
       Principle Five: Carefully Choose Your Design: When, Who, What, and How?
       Principle Six: Use What Is Learned
       Facts and Findings
       Meaning and Judgements
       Recommendations
       Annotated Bibliography




2003                          Oregon State University Family Policy Program
                              Oregon Commission on Children and Families




                         Performance measurement is
                               about improving,
                              not about proving.


                                     Peter Bloome
                                  Associate Director
                                Oregon State University
                                Extension Service, 2000




          “Government and private efforts to... increase awareness and participation
          of citizens and organizations in actions that... have positive impact on
          children, youth, families, and communities.”  (OREGON SB555)

          Community Mobilization Strategies → Positive Community Outcomes →
          Positive Outcomes for People









Performance measurement builds on logic modeling. Performance measurement tracks the implementation
and results of community mobilization initiatives. Plans for community mobilization, summarized in a logic
model, are compared to actual resources invested, strategies implemented, and results achieved.


Despite its complexity, performance measurement is guided by six basic principles:


             Principle One: Know What Performance Measurement Is

             Principle Two: Begin with a Logic Model for Your Program

             Principle Three: Be Useful, Accurate, Realistic, and Respectful

             Principle Four: Know Your Capacity for Performance Measurement

             Principle Five: Carefully Choose Your Design: When, Who, What, and How

             Principle Six: Use What Is Learned

              This chapter reviews these principles, applying each to community mobilization.








                       Principle One: Know What Performance Measurement Is

Performance measurement determines the success of a specific community initiative by comparing plans to
actual activities, outputs, and outcomes in order to improve decision-making.

Performance measurement is one form of program evaluation. Like all evaluation, performance measurement
uses established research methods to document what was invested, what strategies were implemented, and
what results (both outputs and outcomes) were achieved. Because performance measurement compares what
actually happens to what was planned, it is essential to effective program management.

          Performance measurement is essential to implement, assess,
          and refine community mobilization initiatives.

One widely known evaluation model describes five tiers of evaluation. Three of the five levels of evaluation
are performance measurement activities1 (Figure 3-1). In this model, Tier 1 involves community mapping.
The fifth and highest tier addresses the question of causality. For example, a Tier 5 evaluation of a
volunteer or mentoring program could be designed to determine if the program was the most likely cause of a
particular outcome. Such an evaluation would have the following characteristics.

            Information is gathered from program participants as well as a similar group of people who did not
            participate. This is called a comparison or control group.

            Everyone is randomly assigned to one of two groups – a group that participates in the program and
            a control group that does not.

            Outcomes are tracked over time using the same indicators for both participants and the control
            group.

            Results are compared using inferential statistics that show how likely it is that the program activities
            “caused” any differences in outcomes for participants and control group members.
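The steps above can be sketched in code. This is a minimal illustration only, not a real evaluation: the roster size, group sizes, and outcome scores are all invented, and a genuine Tier 5 study would apply a formal inferential test.

```python
import random
import statistics

# Hypothetical roster of 60 eligible youth; IDs are placeholders.
everyone = list(range(60))
random.shuffle(everyone)

# Random assignment: every person has an equal chance of being
# placed in the participant group or the control group.
participants = everyone[:30]
controls = everyone[30:]

# Outcomes are tracked later with the same indicator for both
# groups (scores invented here for illustration).
scores = {person: random.gauss(50, 10) for person in everyone}
participant_mean = statistics.mean(scores[p] for p in participants)
control_mean = statistics.mean(scores[c] for c in controls)

# A real Tier 5 evaluation would apply an inferential test (for
# example, a t-test) to judge how likely it is that the program,
# rather than chance, produced any difference between the means.
print(round(participant_mean - control_mean, 1))
```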

Such Tier 5 evaluations are essential to advance knowledge and practice. Few community programs, however,
have the resources to conduct such in-depth, carefully controlled studies. Moreover, establishing causality is
almost impossible for comprehensive community mobilization initiatives. Connections can be made between
key strategies and outcomes, but it is rarely possible to control all the community variables that may have
contributed to success or failure.

In contrast to Tier 5 evaluation, performance measurement (Tiers 2, 3, and 4) does not prove that a community
initiative caused a particular outcome; rather, performance measurement reveals the initiative’s success in
conducting planned activities and achieving desired outputs and outcomes.

Most public and non-profit programs rely on empirically sound practices to design initiatives and then use
performance measurement to track the implementation of these initiatives in their community.2





Performance measurement provides valid answers to the questions that are most relevant to operating and
improving community initiatives:

           What was invested?

           What strategies were effective? Were planned activities and outputs (numbers served, for
           example) reached?

           How did the community benefit? How did people benefit?

           How can this information be used to improve future efforts?




                                  Figure 3-1: Tiers of Evaluation3

           TIER 5 – Research and Inferential Evaluation
           Compared to people not served, do participants do better on desired
           outcomes?

           TIER 4 – Performance Measurement
           Does the program or initiative achieve intended outcomes and results?

           TIER 3 – Performance Measurement
           Does the program or initiative implement planned activities and achieve
           desired outputs?

           TIER 2 – Performance Measurement
           What resources are invested?

           TIER 1 – Community Mapping
           What are community needs and resources? How will the program or
           initiative address these needs and resources?




                    Principle Two: Begin with a Logic Model for Your Program

Performance measurement compares what was planned to what occurred. The first critical step in performance
measurement is the creation of a logic model that shows the planned “chain” of activities and outputs that will
lead to the intermediate outcomes, and, eventually, high level outcomes and goals. Each step in the creation
of a logic model was shown in Chapter Two.



           Inputs → Key Strategies → Outputs → Intermediate Outcomes →
           High Level Outcomes and Goals

Effective logic models use research to align strategies, outcomes, and goals. Research can help to identify the
intermediate outcomes that contribute to high level outcomes and goals; research can also identify the key
strategies and best practices that are likely to lead to desired outcomes.


Sometimes logic models outline several intermediate outcomes. Each successive outcome is achieved “so that”
a larger, more important outcome can be achieved. This is called a “so that” chain. The direct influence of a key
strategic activity on the achievement of outcomes is strongest for the first intermediate outcomes in the chain.



                       A “so that” chain links strategic activities to
                 intermediate outcomes to higher level outcomes and goals




     Key Strategies and Outputs → Intermediate Community Outcomes →
     Intermediate “People” Outcomes → High Level Outcomes → Oregon Goals








                 Principle Three: Be Useful, Accurate, Realistic, and Respectful

When a performance measurement system is established, many decisions have to be made. How should
activities and outcomes be tracked? What indicators should be used? When and how should information be
gathered? Answering these questions demands clear guidelines.

Evaluation experts as well as program managers believe that an effective performance measurement system
provides useful and accurate information that is collected in a realistic and respectful manner. These
characteristics – utility, accuracy, feasibility, and propriety – are called the standards of evaluation.4

          Performance measurement should be useful, accurate,
          realistic, and respectful.

Utility. Good performance measurement provides timely information that is relevant to planning, delivery,
and improvement. Useful reports summarize findings and are timely and accessible for decision-making.

Accuracy. Good performance measurement provides information that is correct. Accurate information builds
on valid and impartial standards, reliable procedures, and reasonable interpretations and conclusions. A limited
amount of accurate information is better than a lot of inaccurate or incomplete information. Ultimately only
accurate information is believable or useful.

Feasibility. Good performance measurement uses resources realistically and wisely. This means that information
is gathered in a manner that is manageable and sustainable over time. Careful planning can assure that
measurement strategies are practical and do not add unnecessary work or record-keeping. When choosing
indicators, it is critical that time, skills, and other resources be considered. When the needed resources are
lacking, the information that is collected is likely to be incomplete and inaccurate.

Most communities select output and outcome indicators from records, observations, and other simple assessments
that are already being collected to inform the operation of the initiative. Whenever possible use databases or
other existing records as the sources for indicators. If needed information is not already gathered, databases
and records should be reformatted to be effective as sources for important indicators.

Propriety. Good performance measurement collects information in a respectful manner and provides fair
information that represents diverse perspectives. This means that performance assessments should:

             Ask questions that allow for both positive and negative responses

             Include both participants and non-participants (or drop-outs)

             Present successes and strengths as well as shortcomings and challenges

In addition, respectful performance measurement tracks worthwhile (not just measurable) outputs and outcomes.
It is important to balance the value of the information that will be gathered with the costs (staff time, money,
and other resources) of collecting that information. One cost to consider is opportunity cost – when a
person is collecting information, some other task is not being done. If the information that is collected directly
benefits participants and program operations, then people are more willing to invest the time and resources
needed to gather that information.





Good performance measurement protects the rights and welfare of participants and involved staff. This
means respecting confidentiality, dignity, time, and other non-service needs. Other specific ideas about
respecting these rights are presented later in this chapter.


Finally, respectful processes acknowledge that accountability and performance measurement are difficult.
Leadership and a willingness to experiment are essential to build the capacity for effective performance
measurement.


Balancing Utility, Accuracy, Feasibility, and Propriety

          Not everything that is measurable is important;
          not everything that is important is measurable.
                                       Michael Patton, Evaluator

Trade-offs always have to be made between what would produce the best, most accurate information and
what is actually possible and respectful to do. In short, utility, accuracy, feasibility, and respect have to
be balanced.

Community mobilization initiatives must be particularly sensitive to community members and organizations.
People and grassroots organizations are likely to be more invested in action than in data collection. Too
many forms and reports are likely to feel more like “government intrusion” than like helpful technical
assistance.


In community mobilization initiatives, it is critical that the information collected be:


            Directly useful to the mobilization initiative


            Accurate enough to be valued and believable


            Feasible to gather without investment of great resources and


            Very respectful of the time of volunteers and participants.








               Principle Four: Know Your Capacity for Performance Measurement

The capacity for performance measurement varies across programs, agencies, and initiatives. Capacity is
influenced by resources, experience, and the amount of contact with participants or stakeholders. The self-
assessment survey (Figure 3-2) on the next page was originally developed for service programs. This self-
assessment can also help community organizations or initiatives define their capacity for performance
measurement.

Level one capacity is limited to the most straight-forward data collection methods. Generally level one relies
on records and/or simple surveys completed at the end of key activities.

       EXAMPLE: At the end of a volunteer training workshop, participants could complete a simple
       survey describing how much the program helped them, if at all. To make this post-activity
       assessment stronger, participants are asked to rate themselves relative to the specific
       outcomes desired for the program.

          Most grassroots community organizations are at level one capacity for
          measurement – even if their formal partners are capable of level two or three.

Level two capacity indicates that a community initiative is able to collect all the level one information and
more. At level two, a program might use longer surveys or simple interviews collected at two or more points
in time.

       EXAMPLE: A program assesses the outcomes of a volunteer training program by surveying participants
       immediately after, and again six months after, a workshop series.

Level three capacity means that a community initiative can assess multiple outcomes and use more complicated
measures and data collection approaches. Level three may also involve a “pre-post” design that requires more
complicated record-keeping and analysis.

       EXAMPLE: A volunteer tutoring program uses records to collect information on both the community outcomes
       (more people volunteer to support children over the school year) and child outcomes (teacher reports of
       children’s academic performance increases).

       EXAMPLE: A community-wide mobilization initiative, such as a Community Progress Team, involves multiple
       diverse activities. This initiative uses focus group interviews and detailed key informant interviews to
       assess progress over time. Implementing this assessment requires time and training as well as strong
       analysis and writing skills. (See Appendix 3-A for an example: Our Communities Then and Now.)

Levels one, two, and three! A community mobilization initiative may operate at all three levels, depending
on the outcomes being tracked and the resources available.

       EXAMPLE: A community initiative on reading (a) reviews records to track numbers of participants at family
       reading fairs (Level one). The initiative also (b) conducts simple “one-minute interviews” during reading
       fairs (Level two), and tracks outcomes including (c) volunteer tutors’ knowledge (Level two) and (d)
       children’s reading scores (Level three).






        Figure 3-2: Self-Assessment of Performance Measurement Capacity


        Circle the number that BEST describes your program or initiative.

        How established is your program or initiative?
        How long has your program or initiative been operating? What is its size, stability of staff
        and leadership, community support, and funding level?
        1. Really just getting started – operating less than 3 years and many elements are still
            being developed.
        2. Established for 3-5 years and most elements are working pretty well.
        3. Established over 5 years and working smoothly; funding, activities and leadership are in
            place and pretty stable.

        How intense is your program or initiative?
        How frequent and how intense are contacts with participants?
        1. Most contacts pretty brief; one-to-one contact is rare; most contact is in group
           settings; total contact time is less than 3 or 4 hours.
        2. Most participants are seen in one-on-one and group settings for at least 5 hours.
        3. Extensive contacts with participants. Usually several contacts in small groups or
           extensive one-on-one contact.


        How much and how complicated is the information that you need?
        1.   Only basic information on our program activities and outputs (who we serve, what we do)
             and one or two simple outcomes.
        2.   Basic program information plus information about multiple outcomes or about outcomes
             that occur over longer periods of time.
        3. Basic information plus information on more complicated or longer-term outcomes that
             are more difficult to achieve.


        What resources do you have for performance assessment?
        What money, staff skills and time, equipment, and technical assistance are available for data
        collection, record-keeping, and analysis?
        1. Limited resources of all kinds
        2. Adequate resources if we make it a priority
        3. Good, reliable resources and high priority

        SCORING: Thinking is the most important part of scoring!
        What number did you circle most often?
                              Mostly number 1 = Level one
                              Mostly number 2 = Level two
                              Mostly number 3 = Level three
                              1, 2, and 3? = Use your judgment to assign a score.

        Adapted from: Goddard, W. et al. (1994). The Alabama Children’s Trust Fund Evaluation Manual.
        Auburn University, Auburn, AL.








        Principle Five: Carefully Choose Your Design: When, Who, What, and How?


Design is the master plan for performance measurement. Design answers four key questions:

            When will information be collected?

            From whom will information be collected?

            What tools will be used to collect information?

            How will participants’ rights and welfare be protected?

As discussed earlier, performance measurement designs must fit in the “real world” of community
mobilization. An appropriate design must be selected based on needs for information and capacity for
performance measurement.

When Will Information Be Collected?

Data collection plans identify when information (data) will be collected. Figure 3-3 describes commonly used
performance measurement designs, highlighting the times when information is collected. The most common
performance measurement design collects data only at the end of a major activity (post only). Pre-post designs
require collecting data before (pre) and after (post) an initiative. Retrospective pre-post designs are becoming
more common because they have been demonstrated to be accurate in describing change over time, but only
require data collection at one time - at the end (post) of some event or project.
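As a minimal sketch of how retrospective pre-post data might be summarized: suppose each participant, at the end of a workshop, rates a skill as it is "now" and as they recall it "before." The 1-5 scale and every score below are invented for illustration.

```python
# Retrospective pre-post survey responses collected at one time,
# at the end of a workshop. All values are invented for illustration.
responses = [
    {"before": 2, "now": 4},
    {"before": 3, "now": 5},
    {"before": 1, "now": 3},
    {"before": 4, "now": 4},
]

# Change scores: "now" rating minus recalled "before" rating.
changes = [r["now"] - r["before"] for r in responses]
average_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(average_change)   # 1.5
print(improved, "of", len(responses), "participants reported gains")
```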


An initiative may collect different kinds of information at different times. For example, a 2-year community
initiative provides education for child care providers and young families. The immediate desired outcome is to
increase provider and family literacy activities. The ultimate desired outcome is that more local children reach
school age with adequate pre-literacy (reading) skills.


To assess this community initiative, different kinds of information would need to be collected at different times,
and from different people. Some key questions might include:

            Do parents and child care providers report satisfaction with the various supports for increased
            literacy activities? (Post only: Satisfaction survey of parents and providers)

            Do parents and/or providers report increased literacy activities with children? Increased knowledge
            of how to develop literacy skills? (Retrospective pre-post: Survey of parents and providers)

            What do community partners identify as the challenges and successes of the initiative? (Post:
            Interviews and focus groups with community partners)

            Do more children reach school with adequate pre-literacy skills following the initiative? (Pre-post:
            Teacher survey, teacher review of student records)





                         Figure 3-3: Performance Measurement Designs

           Post Only Design
           Assesses behavior, attitudes, knowledge, skills, and/or circumstances
           following a key activity.

           Retrospective Pre-Post Design (also called Post-Then-Pre Design)
           Following a key program activity, participants describe their behavior,
           attitudes, skills, knowledge, and/or circumstances as they are now (post,
           or after the activity) and as they were before (pre) the program activity.
           Retrospective pre-tests can measure change more accurately when
           participants’ limited information before a program activity reduces their
           ability to correctly assess their initial behavior or circumstances.5
           Examples of retrospective pre-post survey items are found in other
           chapters, as well as later in this chapter.

           Pre-Post Design
           Describes participants’ behavior, attitudes, skills, knowledge, and/or
           circumstances prior to and after a program activity. Requires managing
           information collected at two points in time.

           Pre-Post-Long-Term Post Design
           Same as the pre-post design, with additional scores obtained again at a
           later point in time (e.g., six months, one year, two years). Demands more
           complicated data management and analysis.

           Comparison Group Design
           Comparisons can be added to any of the above designs. In community
           mobilization initiatives, comparisons are usually made to the community’s
           status before and after the initiative rather than to other communities.
           This is possible when comparable data are available before and after the
           initiative.

           Case Study Design
           Case studies empirically examine the contextual conditions that influence
           a multifaceted experience, such as the mobilization of a community. Case
           studies demand multiple sources of data and usually address questions of
           “how and why” events or results occurred. Generally data are collected
           that describe the experience of interest before, during, and at the end
           (or after) its development.






Case study designs effectively assess comprehensive community initiatives. Case studies demand information
about community conditions before and after the initiative, as well as information about processes and activities
during the initiative.

Most often a case study compares a community to itself, documenting and examining conditions and processes
before, during, and after the initiative. The fundamental questions are: “Is this community improving? How
and why?”6 Case studies are described further later in this chapter.

From Whom Will Information (Data) Be Collected?

In performance measurement, data are often collected from all participants in an initiative. For example, to
assess a diversity training workshop, data may be collected from all participants in the workshop. Similarly, all
commission members or community advisory group members may be asked to assess the linkages between
the community and the formal system.

Sometimes, however, it is not feasible or necessary to include everyone. For example, to assess reactions to a
family-friendly celebration event, randomly approached attendees could be asked four or five key questions. In
this situation, attendees are being sampled.

Sampling means to select a group of people from a larger group. The selected group is called “the sample.”
Sampling is done when the total number of participants is so large that it is not feasible or necessary to collect
information from everyone. In these situations, some participants are selected to represent all the participants.

Two issues are important in sampling to ensure a sample is representative: sample size and random selection.
There are complicated formulas to determine sample size. The basic rule is: as the size of the total group
(population) increases, the proportion needed in a sample decreases (see box below).

                              Guidelines for Sample Size

       If the total group (population) is:
            under 100, sample everyone
            200, sample about 120 people
            300 or 400, sample about 150
            500-900, sample about 200
            1,000, sample about 250
            2,000 or over, sample about 300

Random selection is the best way to be sure a sample represents a larger group. Random selection means
that every member of the population has an equal chance of being included in the sample. When a sample is
randomly selected and of adequate size, it is assumed to be representative of the total group. The same
procedures can be used to select a sample with particular characteristics, such as civic leaders, volunteers,
Latino families, providers, or parents in a particular area. Often programs rely on “convenience” samples, in
which information is collected only from people who are convenient or easy to engage. Convenience samples
can provide valuable information, but that information is not representative of a larger group.

For example, suppose you ask people who come to the library, “What activities are needed in the community for
school-aged children?” You can’t use their responses to describe the needs seen by the whole community. It
would be better to call every tenth family with children in local schools. This is called “random selection” of
families with children in school and who have telephones7.
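To make the sampling guidance concrete, here is a minimal Python sketch (not part of the handbook's toolkit). The roster is hypothetical, the `suggested_sample_size` lookup follows the guidelines box above (values between the listed population sizes are an interpolation, which is an assumption), and the draw uses simple random sampling so every member has an equal chance of selection:

```python
import random

def suggested_sample_size(population):
    """Suggested sample sizes, following the guidelines box above.
    Values between the listed population sizes are interpolated
    (an assumption, not stated in the handbook)."""
    if population < 100:
        return population              # under 100: include everyone
    for upper, size in [(200, 120), (400, 150), (900, 200), (1000, 250)]:
        if population <= upper:
            return size
    return 300                         # 2,000 or over: about 300

def draw_sample(participants, seed=None):
    """Simple random sample: every member of the population has an
    equal chance of being included."""
    size = suggested_sample_size(len(participants))
    if size >= len(participants):
        return list(participants)
    return random.Random(seed).sample(participants, size)

# Hypothetical roster of 400 event attendees
roster = [f"attendee_{i}" for i in range(1, 401)]
sample = draw_sample(roster, seed=42)
print(len(sample))  # 150
```

A seed is passed only so the draw can be reproduced later, for example when documenting the measurement design.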







Different sampling methods may be used to collect the diverse kinds of information needed to assess a community
initiative. Obviously, it is most appropriate to collect data from all members of a 20-person advisory or leadership
group. On the other hand, if 400 people participate in an event, randomly selecting 150 to interview would give
a good sense of what attendees learned from the event. Interviewing 150 people is feasible when the interview
is VERY short and focused.


Event or activity sampling can also be done. For example, collecting data at every training event may not be
feasible if there are many events and few resources for data collection and analysis. It may be more realistic to
randomly select events at which to collect information.

Not everyone who is sampled will respond. Some proven techniques can encourage responses without being
pushy or demanding. For example:


                     Tell respondents why their responses are important and how the information will be used.


                     Say “please” and “thank you.” Even say “thank you” to people who do not participate. If
                     substantial effort or time is required to respond, send thank-you notes. Offer summary
                     reports of findings to respondents.


                     Make surveys attractive and easy to complete. Mostly use questions that require only
                     marking answers, not long written responses. If you only want to ask two or three quick
                     questions, tell people that at the beginning.


                     Complete surveys in a group setting such as during the last few minutes of a meeting.
                     Provide pencils or pens and somewhere to write if needed.


                     For mail returns, provide self-addressed, stamped or business-reply envelopes. Number
                     mailed surveys so follow-up reminders or calls can be made after one week, two weeks,
                     and three weeks.


                     Schedule telephone or face-to-face interviews at times that are convenient for
                     the respondents. When contact is first made with potential respondents, tell
                     them about how long the interview will take and ask if this is a good time. If
                     not, ask when might be a better time for them to participate.

                     Provide incentives for completion. Offer chances to win savings bonds for
                     children, toys, books, free training workshops, or other prizes donated by
                     local businesses. At events, enter interview participants into a drawing.
                     (“Send your survey back and you will be entered in a drawing to win…”)








What Measurement Tools Will Be Used to Collect Information?

Records, staff observations and ratings, surveys, and self-reports are the most commonly used sources of
information in performance measurement. Most importantly, measurement tools should:

            Provide useful and accurate information about inputs, activities, outputs, and outcomes,

            Be feasible to use given available resources,

            Be respectful of participants, volunteers, and staff, and

            Demonstrate accomplishments as well as shortcomings experienced during the time of the initiative.

Program records can provide a wealth of information for describing a program’s inputs, outputs, and outcomes.
Records are a ready source of information and may demand less effort than other data collection methods. If
current records do not provide needed information, however, they must be revised to better monitor outputs
and outcomes. To be useful, records must be kept carefully and thoroughly. It is crucial to involve program staff
in decisions about the use of records.


Records from other agencies and institutions can also offer valuable information. For example, in a community
wide initiative to engage more volunteers to support children, a volunteer referral service can ask all people
who call:

            How did you hear about this opportunity?

If the answer is “the announcement on the radio,” that is an important indicator of how well the radio campaign
is reaching, and motivating, people. On the other hand, if no one answers “the brochures at the senior center,”
it may indicate that those brochures are not useful for recruiting new volunteers. In either case, a simple
record-keeping system for noting responses is critical.


Surveys or questionnaires are commonly used in performance measurement8. Survey data may be collected
by mail, telephone, or face to face. (See Appendix 3-B and 3-C for sample surveys.)


In surveys, people assess themselves or events and conditions that they have experienced. To increase the
accuracy of these self-reports, it is important to:


            Assure confidentiality of the responses.


            Let people know that their responses will be used to improve program effectiveness.


            Ask about specific behaviors and specific time periods to guide people’s thinking.








This last point is especially important to accuracy. For example, rather than ask “How often do you interact with
children in your neighborhood?” ask,


        “How often, in the past week, have you:
        1) seen children in your neighborhood?;
        2) waved or spoken to neighborhood kids just to say hi?;
        3) talked to one or more of the children about their interests?;
        4) had some other positive interaction?” (Please describe____)

If surveys are not “user-friendly,” the number of people who respond is likely to be very low. User-friendliness
begins with how questions are formatted. Surveys use either fixed-choice or free-response question formats.
Each format has particular advantages and disadvantages.


Fixed-choice formats ask direct questions with preset answer choices. These questions require people to
choose their responses from a list of offered alternatives. Fixed-choice questions are quick to answer and
simple to score, but it is essential that the choices fully reflect the range of responses that participants might
actually have.


                              QUESTION TYPES

     FIXED CHOICE
     Do you plan to stay in this community for at least 2 more
     years?
     __ Yes       __ Not sure   __ No

     Overall, how do you rate the available community activities
     for children ages 9 to 12?
     1   Poor        4   Very good
     2   Fair        5   Excellent
     3   Good        6   Don’t know

     FREE RESPONSE
     How would you describe your biggest concerns, if any, about
     kids in this community in the following 3 areas: schools,
     after-school activities, family relationships?

The simplest fixed-choice questions offer yes/no or true/false response choices. Always include a third
category: Don’t know or Not sure. This third choice yields more accurate results because people are not
forced to guess or choose an answer when they aren’t sure.

Other fixed-choice questions involve ratings. Response rating alternatives should:

            Offer all possible choices, including both negative and positive responses, and don’t
            know, not sure, or no opinion.

            Be mutually exclusive; that is, response alternatives should not overlap.

            Use rating scales of 3 to 7 points.

For example, a volunteer support program for higher-risk teen parents may want to know the impact of that
support. To assess this, satisfaction survey response choices should include possible negative impacts
(1 = things got worse), neutral impacts (2 = nothing changed), as well as positive impacts (3 = things got
better). Follow-up free-response questions can ask the parents to elaborate, perhaps giving examples of how
life got worse or better.
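Fixed-choice answers are simple to score because they reduce to tallies. As an illustrative sketch (the answers below are hypothetical), the fixed-choice yes/not sure/no question above could be summarized as frequencies and percentages with a few lines of Python:

```python
from collections import Counter

# Hypothetical answers to the fixed-choice question
# "Do you plan to stay in this community for at least 2 more years?"
answers = ["Yes", "Yes", "No", "Not sure", "Yes", "Yes",
           "Not sure", "No", "Yes", "Yes"]

# Tally each response choice, then report counts and percentages
counts = Counter(answers)
for choice in ("Yes", "Not sure", "No"):
    pct = 100 * counts[choice] / len(answers)
    print(f"{choice}: {counts[choice]} ({pct:.0f}%)")
```

Reporting all three choices, including “Not sure,” keeps the summary honest about respondents who were not forced to guess.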







Free-response formats require respondents to supply their own answers. Examples of free-response or
open-ended questions include:


       “During this program what is the most important thing you’ve learned about your
       community?”

        “Think about the last time your child needed help with school work. What did you do or
       say?”

       “Is there anything else you want to tell us?”

Like fixed-choice questions, free-response questions should specify a time period and/or behavior if possible.
The first two examples (above) specify time (“during this program;” “the last time…”). The second question
specifies behavior (“do or say”) in response to a child’s need for help with school work. The third question is
completely open.


In written surveys, free-response questions are often used to supplement information gained from fixed-choice
questions. Free-response questions can provide richer detail than fixed-choice questions. However, free-response
questions demand more time and a higher educational level from respondents, especially if the responses are
written. Free-response formats also demand greater skill and more time to code and analyze.








     SCALE: a set of three or more questions about the same topic, for which
     statistical data show the items have internal consistency.

Scales are a series of three or more survey questions on the same issue. Effective scales are internally
consistent; that is, statistical analysis indicates that each item is strongly correlated with each other item and
with the total scale. Because of this internal consistency, scales offer a more reliable way to measure a
person’s attitudes and ideas about an issue than depending on a single question.

Satisfaction scales are often used to assess participants’ reactions to a program or experience. Effective
satisfaction scales ask questions about four dimensions of satisfaction: responsiveness, reliability, accuracy,
and understanding (see Appendix 3-B).


Whether surveys use single items or scales to assess responses, they are gathering self-report information.
As noted earlier, self-reported information is most accurate when:

            Confidentiality is assured.

            The uses of the information are described.

            Questions are about specific behavior and periods of time.

            Scales are used.


The accuracy of self-reported data can also be improved by using a retrospective pre-post format for the
questions. Sometimes participants’ limited knowledge at the start of a program reduces their ability to
accurately assess their behavior before the program. In these situations, a retrospective pre-post survey can
measure change more accurately than a traditional pre-post survey.9


In a retrospective pre/post survey:

            Information is collected only once at the end of a program or other experience.

            Participants describe their behavior, attitudes, skills, knowledge and/or circumstances twice: First
            as they are now (POST) after the program activity and second, as they were before (PRE) the
            program activity.

            The difference between the PRE (before) and POST (now) indicates the amount of change resulting
            from the program or activity.
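The arithmetic behind the last bullet is simply POST minus PRE for each participant. As a minimal sketch (the ratings below are invented, on a 0-6 scale like the ladder questions that follow):

```python
# Each record: (pre_rating, post_rating) on a 0 (low) to 6 (high) scale,
# both collected at the end of the program (retrospective pre-post).
responses = [
    (1, 5), (2, 4), (0, 3), (3, 6), (2, 5),
]

# Change score for each participant: POST (now) minus PRE (then)
changes = [post - pre for pre, post in responses]
average_change = sum(changes) / len(changes)
print(f"Average change: {average_change:+.1f} points")
```

A positive average change suggests improvement attributable to the program; individual change scores can also be inspected for participants who reported no change or a decline.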


The retrospective pre-post approach can be adapted to almost any topic. Several examples are given in later
chapters and in the appendices. Figure 3-4 (next page) presents a sample of two retrospective pre-post
questions designed to assess volunteers’ knowledge before and after training on mentoring young adolescents
who are learning to read.







                 Figure 3-4: Two RETROSPECTIVE PRE-POST Questions


       [Graphic: a ladder with rungs numbered from 0 (Low) at the bottom to
       6 (High) at the top, captioned “Where are you on the ladder?”]


       Where are you on this ladder NOW, after this             Low                    High
       training?

       1. Your knowledge of how adolescents develop?             0    1    2   3   4   5    6

       2. Your ability to help young people learn to             0    1    2   3   4   5    6
          read?


       Think back to when you started this training to
       be a reading mentor. Where were you on the               Low                    High
       ladder THEN?

       1. Your knowledge of how adolescents develop?             0    1    2   3   4   5    6

       2. Your ability to help young people learn to             0    1    2   3   4   5    6
          read?








Focus group interviews are planned discussions designed to stimulate ideas on a particular (focused) topic.
Diverse perspectives, issues, concerns, and ideas that may be overlooked in individual interviews or surveys
are more likely to come up in a focus group discussion. (See Appendix 3-D for guidelines on focus groups.)


Focus groups involve only a small, non-representative sample of the people, but these groups can provide very
valuable information. Approximately 7-10 individuals, led by a trained interviewer, share their ideas and
perceptions on the topic of interest. The discussion is recorded and the content is reviewed at a later date.
Focus groups can address questions such as:

            What are the strengths and weaknesses of the current activities for families with young children in
            this community?

            What would improve the current activities?

Focus groups can also adapt a retrospective pre/post orientation to assess perceived changes. For example,

            Think about our community of Prattville as it is now and as it was two years ago. TWO YEARS AGO,
            what were the strengths and weaknesses of activities and supports for families with young children?

            What are the strengths and weaknesses of Prattville’s activities and supports for families with
            young children TODAY? (See Appendix 3-A, OUR COMMUNITIES THEN AND NOW, as an example of
            how a focus group used a retrospective approach to gather information on perceived changes as
            a result of community mobilization.)


Focus groups are not an ideal source for information about how a topic is perceived by a single individual.
Individual responses may be influenced by the comments of other members in the group. A written survey to
be completed by individuals either before or after the focus group session can provide better information about
individual ideas.


Analysis of the focus group information is time-consuming and often demanding. Be realistic about the time
this will take. Analysis centers on identifying patterns and trends that arose in the focus group session. Analysis
can be conducted separately for each focus group or across several groups.




Focus groups can be great sources of information for designing and assessing community mobilization
initiatives. Focus groups are especially useful to examine the context and structure of community initiatives
and to identify “what worked and what didn’t work.”






Observations of participant behavior, environments, and events are valuable tools in performance measurement.
Observations can be used to assess the circumstances and behavior of communities and people, such as:

            Developmentally appropriate guidance by volunteers.

            Collaboration in group decision-making processes.

            Conditions in parks or play areas before and after equipment grants are awarded.


When systematically guided and recorded, observations can provide especially important information. Observation
guides focus information gathering on particular behaviors or environmental characteristics. (See Appendix 3-
E for guidelines on observation measures.)


Observation rating scales provide specific criteria for categorizing what is observed. The two ends of the
rating scale are “anchor points,” where the behavior or other characteristics are described. For example, the
interactions between volunteer playground supervisors and children could be rated on a scale ranging from
1 (unfriendly) to 7 (very friendly). At these key anchor points, examples should be given to further guide
numerical ratings.


Physical environments can also be assessed with observations.
Below is an example of a scale for rating the cleanliness of a
playground. In this example all 4 points on the rating scale
are described. (See Chapter 5 for more resources on
observational methods for environments.)
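Since the original playground-cleanliness graphic does not reproduce here, a hypothetical version of such a 4-point scale might be encoded as follows (the anchor descriptions are invented for illustration, not taken from the handbook):

```python
# Hypothetical 4-point observation rating scale for playground cleanliness,
# with every point described, as the handbook recommends.
CLEANLINESS_SCALE = {
    1: "Litter throughout; equipment dirty or damaged",
    2: "Some litter; equipment mostly clean",
    3: "Little litter; equipment clean",
    4: "No litter; equipment clean and well maintained",
}

def record_observation(rating, scale=CLEANLINESS_SCALE):
    """Validate a rating against the scale and return the anchored label."""
    if rating not in scale:
        raise ValueError(f"Rating must be one of {sorted(scale)}")
    return f"{rating} - {scale[rating]}"

print(record_observation(3))
```

Writing the descriptions down, and validating ratings against them, supports the consistent application of criteria that observer training aims for.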


Training is almost always needed to use observation guides accurately. Training helps ensure that the
observation criteria are applied accurately and consistently, to avoid misinterpretation.








                         Case studies10 combine several sources of data to describe conditions,
                         processes, causal linkages and outcomes of a real life, contemporary experience.
                         Applied to a community mobilization initiative, a case study compares the
                         community to itself, contrasting and linking conditions and processes before,
                         during, and after the initiative. The fundamental questions are: “Is this
                         community improving? How and why or why not?” (See Appendix 3-A and
                         below for example.)

                         An effective case study clearly defines a specific purpose and limits data
                         collection to that purpose. Case studies are best when guided by specific
                         “theoretical propositions” or ideas. For example, many community mobilization
                         initiatives are built on the idea (proposition) that formal networks can
                         successfully engage informal networks through a public information campaign,
                         technical assistance, and training. A case study could examine information
                         from a number of sources to answer questions such as those shown below.








Other assessment methods include:


            Tests or examinations that measure knowledge.


            Journals or logs that record feelings and actions related to program activities.


            Anecdotes or testimonials that describe personal experiences or “success stories” that
            illustrate program outcomes.


            Photographs and videos that depict program activities or that demonstrate outcomes such
            as changes in behaviors or environments.


Often, several assessment strategies are combined. For example, records of the number of children using
a park can be supplemented with photographs and anecdotal comments about the park before and after it
is renovated.


No one measurement tool or strategy is better than another. The right tool (or tools) depends on the outputs
and outcomes that need to be measured, the resources available, and the participants.


Realistic performance measurement tools should:


         Demonstrate accomplishments (outputs and outcomes) achieved during the time of service.


         Be relevant, accurate, believable, and useful to stakeholders.


         Identify successes and shortcomings in order to improve operations.


         Be feasible to collect reliably over time given available resources.


Every method of data collection takes time, skills, and financial resources. Some methods require more time,
skills, and resources than others. Unless funding is available to pay outside evaluators, time and effort will
come from program staff and participants.


It is also critical to consider what method is appropriate for the people who will participate. Language, educational
level, time, and other factors determine how appropriate an information collection method is for participants or
other respondents.








How Will Participants’ Needs, Rights, and Welfare Be Protected?


Introductions to surveys, interviews, and focus groups should provide three pieces of information to potential
respondents:

            How the information will be used. (“We will use the information to improve services”)

            Participation is voluntary (“You don’t have to answer any question and you can stop whenever you
            want…”)

            Answers will be kept in confidence (“Your answers will be confidential and combined with other
            peoples’ answers in reports”).

IF potential respondents are given the information bulleted above, signed consent is NOT required in
most performance measurement situations. After people are given this information, they give implied
informed consent when they freely provide the requested information. Box 3-1 (below) gives two examples
of introductions that address the three elements needed to gain implied informed consent.

                    Box 3-1: Sample Wording for Implied Informed Consent

                       Written introduction to survey assessing volunteer training
        Now that you have finished this course, we want to know how the course has influenced
        you. We also want to learn your ideas for improving the course in the future (uses). Your
        answers will not be seen by anyone except the staff who evaluate the program. Your
        name will not appear anywhere in our reports (confidentiality). You don’t have to answer
        any question, but your ideas are very important and we hope you will share them with us
        (voluntary). Thank you!

                             Verbal introduction to a focus group discussion
        Each of you has been involved in the Prattville Campaign for Our Kids. We want to hear
        your ideas about how the Campaign has been going and to get ideas for improvements and
        priorities for the next year (uses). We’re interested in different points of view and
        negative comments as well as positive ones.

        We’re taping our conversation so we don’t miss any of your comments. No one will listen
        to the tapes but our evaluation team. We’ll only use the tape to help write a summary of
        this session. No names will be used in this summary or any report (confidentiality). You
        do not have to answer any question and you can stop whenever you want (voluntary). But
        please remember, your ideas are important to the Campaign. Do you have any questions?
        Shall we begin?


More detailed, signed informed consent is needed if sensitive information is being gathered or if the informa-
tion will be used in published research. In these cases, programs need to follow strict federal guidelines for
collecting and using information. It is best to work with a college, university, or agency with an established
Human Subjects Review Board (often called an Institutional Review Board or IRB).





                                  Principle Six: Use What Is Learned

Performance measurement is intended to improve public and non-profit programs and initiatives. Actual use
of performance measurement findings depends on several factors. One critical factor is the commitment by
leadership, staff, and advocates to continuous improvement, in which findings are used to refine and strengthen
programs and initiatives.

Moreover, even the best information is useless if it is not effectively communicated. Most people, from community
members to agency staff to legislators, want to review key findings and information quickly in order to answer
the question: What evidence is there that the program achieved its intended activities and outcomes? Effective
written and verbal reports11:

            Begin with a summary.

            Describe the program and the design of the performance
            measurement, highlighting what, how, and from whom information
            was collected.

            Report findings as frequencies, percentages, average scores, or
            other summary forms.

            Illustrate key findings with graphs, tables, and examples.

            Clearly separate facts from interpretations, judgments, and
            recommendations, and use relevant facts to support
            interpretations, judgments, and recommendations.

            State both positive and not-so-positive findings.

            Use active, positive words that emphasize continuous improvement of program operations and
            outcomes.

            End with a conclusion.

Facts and Findings

Facts are data, information, or other evidence. When presented in a report, facts are called findings. Usually
findings are presented as frequencies, percentages, average scores, or in other summary form.

Findings should report who participated in the assessment, what was measured, how and when it was
measured, and what was found. For example, findings from a retrospective pre-post test of volunteer knowledge
could be narratively reported in the following way:

        “On a 5-point scale (1 = low to 5 = high), twenty volunteers rated their knowledge of
        methods for supporting children’s learning before and after the training program. Volunteers
        reported an average increase of 0.7 points in knowledge; this difference was statistically
        significant.”






The previous example identifies who was assessed (twenty volunteers), what was measured (knowledge), when
it was measured (retrospectively, before and after training), how knowledge was measured (on a 5-point scale),
and what was found (a 0.7-point increase that was statistically significant).
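The arithmetic behind a retrospective pre-post finding like this one can be sketched in a few lines. The ratings below are invented for illustration (they are not data from an actual program), and the paired t statistic is computed directly from the paired differences:

```python
from statistics import mean, stdev

# Hypothetical retrospective pre-post self-ratings from 20 volunteers
# on a 5-point scale (1 = low, 5 = high); illustrative values only.
pre  = [3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3, 3, 2, 4, 3, 3, 2, 3, 4, 3]
post = [4, 3, 3, 4, 4, 3, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 3, 4, 4, 4]

# Paired differences: each volunteer's "after" rating minus "before" rating
diffs = [b - a for a, b in zip(pre, post)]
mean_diff = mean(diffs)

# Paired t statistic: mean difference divided by its standard error
n = len(diffs)
t = mean_diff / (stdev(diffs) / n ** 0.5)

print(f"Average 'before' rating: {mean(pre):.2f}")
print(f"Average 'after' rating:  {mean(post):.2f}")
print(f"Mean increase:           {mean_diff:.2f}")
print(f"Paired t statistic:      {t:.2f}")
```

A large t statistic (compared against a t distribution with n - 1 degrees of freedom) indicates a statistically significant increase; in practice, a statistics package or an evaluation consultant can supply the exact p-value.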

Similarly, parent ratings of the helpfulness of an after-school program could be reported as:

“Parents rated the helpfulness of three after-school services on a 5-point scale (1 = low to 5 = high).
Over the three service types, the average rating was 4.0. “Providing a safe, supervised place for children”
received the highest ratings (4.5). "Help with homework" (4.0) and "Transportation to activities" (4.0)
were rated most highly by parents whose children were 9 to 12 years of age.”

The above example identifies who was assessed (parents), what was measured (helpfulness in three areas of
service), how helpfulness was measured (on a 5-point scale), and what was found overall, for specific service
types, and for specific parents (those with children ages 9 to 12).

Written presentations of key findings are more powerful when they are also presented in graphs or tables. This
is especially important when the findings are complicated. When graphs and tables are included, it is not
necessary to report every finding in the written report. Rather, use the written report to highlight key findings.
For example, parent reports of satisfaction with an after-school program could be reported in the following
written and graphic forms:



“Fifty parents rated the quality of ABC After-School Coalition supports for parents. Ratings assessed
six quality indicators on a 5-point scale, with 1 being low and 5 being high. Across the six indicators, the
average score was 4.2. Courteous treatment and staff understanding of children’s needs received the
highest scores.”
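Findings like these parent ratings are typically just averages grouped by indicator. The sketch below uses invented ratings and mostly hypothetical indicator names (only ease of contact, courteous treatment, staff understanding, and knowledgeable staff appear in this chapter's examples):

```python
from statistics import mean

# Hypothetical parent ratings (1 = low, 5 = high) for six quality
# indicators; the values are illustrative, not actual program data.
ratings = {
    "Ease of contacting staff":                [4, 4, 4, 3, 4],
    "Courteous treatment":                     [5, 5, 4, 5, 4],
    "Staff understanding of children's needs": [5, 4, 5, 5, 4],
    "Knowledgeable staff":                     [4, 4, 5, 4, 4],
    "Convenient program hours":                [4, 3, 4, 4, 4],
    "Helpful information for parents":         [4, 4, 4, 4, 5],
}

# Average each indicator, then list from highest- to lowest-rated
averages = {name: mean(values) for name, values in ratings.items()}
for name, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<42} {avg:.1f}")

overall = mean(averages.values())
print(f"{'Overall average':<42} {overall:.1f}")
```

The sorted per-indicator averages map directly onto the bar graph or table presented in a written report.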


[Bar graph: average parent ratings, on a 5-point scale, of quality indicators
including "Ease of Contacting," "Courteous Treatment," and "Knowledgeable Staff."
Title: Quality of After School Supports for Parents.]




Pie charts also make key findings easier to interpret. Pie charts are useful when presenting findings to lay
audiences, because most people are familiar with the idea of dividing a pie. Pictorial forms can also be useful
with lay audiences.

On the next page, findings are shown in three forms: narrative, pie chart, and pictorial. Each form depicts the
percentage of playgrounds that met developmental and safety characteristics before and after a community
initiative.





                       Figure 3-5: Three Presentations:
        The Percentage of Playgrounds Meeting Developmental and Safety Standards
Narrative Form:

       “Before the Hernandez City for Kids Initiative, 25% of the community’s playgrounds failed to
       meet developmental and safety standards as measured by the America’s Playgrounds Report
       Card.12 One year later, only 3% of playgrounds in Hernandez City had one or more health or safety
       issues; 97% of the playgrounds were assessed as developmentally appropriate and safe.”

Pie Chart Form:




               In June 2002, before the                         By June 2003, following one
                Hernandez City for Kids                         year of the initiative, almost
                 Initiative, only 75% of                          100% of playgrounds met
            playgrounds met developmental                              developmental
                 and safety standards.                             and safety standards.


Pictorial Form:

In June 2002, only three out of four
Hernandez City playgrounds met safety and
developmental standards.



In June 2003, after the Hernandez City
for Kids Initiative, almost all (97%) of
the city’s playgrounds met safety
and developmental standards.








Pictures can be especially powerful when combined with factual descriptions that support the photo
images. In the previous Hernandez City for Kids example, a “before” photo might show broken or locked
equipment, while an “after” photo could show a child playing at the same park.




              [Photographs: June 2002, June 2003, June 2003]

       In June 2002, the north playground in Hernandez City was locked
       because of repeated crime and vandalism. One year later, after the
       Hernandez City for Kids Initiative, over 150 children play there each
       day.



Pictures, pictorials, graphs, pie charts, illustrations, and other images can powerfully convey information, but
should not be overused. Pages and pages of graphics can take attention away from the important facts that are
the basis for the report.


All graphics should:

        Highlight the most important findings.


        Be labeled with a title or other description.


        Be explained in greater detail in the narrative portion of the report.


Whether presented verbally or graphically, findings are the factual statements that describe the inputs, outputs,
and outcomes of an initiative. All reports must include findings. Once findings are clear, the meaning of these
findings can be discussed and judgments made about success or value.






Meaning and Judgments

Findings take on meaning when their importance is clear. For example, findings may be important because
they show that best practice principles are being followed. Findings may also mean that progress is being
made toward valued high-level outcomes and goals.


Once the meaning of findings is established, a report can make judgments about “success” or other values.
The fundamental question is: Given these findings and their meaning, how should the program be judged?


Judgments such as “success” or “failure” demand some standard of comparison. In performance measurement,
“success” or “failure” judgments are usually based on comparisons to initially planned or targeted activities,
outputs, or outcomes. It is also possible to base judgments on comparisons to best practices or the rates of
success of other initiatives.
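Comparison to planned targets is simple enough to sketch. The measures, target values, and actual figures below are hypothetical, chosen only to show how each measure is judged against its plan:

```python
# Hypothetical planned targets and actual results for one program year
targets = {"children served": 75, "volunteer hours": 2000, "average reading gain": 1.0}
actuals = {"children served": 80, "volunteer hours": 1800, "average reading gain": 0.7}

# Judge each measure against its planned target
results = {
    measure: ("met" if actuals[measure] >= target else "not met")
    for measure, target in targets.items()
}

for measure, status in results.items():
    print(f"{measure}: target {targets[measure]}, actual {actuals[measure]} -> {status}")
```

A mixed report card like this one invites the follow-up question posed later under Recommendations: were the unmet targets too ambitious, or does the program need to change?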



Findings are facts, meaning gives importance to factual findings, and judgments assess
“success” or value. The following three examples move from finding to meaning to judgment:

       Finding: The You Can READ Program now serves 80 children. Each child meets
       for one hour twice each week with a trained reading partner over the entire
       school year.
       Meaning: Regular reading and long-term one-to-one support for reading is a
       strong predictor of reading success in the elementary grades.
       Judgment: The You Can READ program volunteer reading partners are making an
       important contribution to reading during the school year. Maintenance of
       long-term relationships reflects an important best practice in reading support.

       Finding: During the first six months of the You Can READ after-school program,
       average daily attendance grew from 10 children under age 9 to 20 children
       under 9, and 10 children ages 8 to 12.
       Meaning: Growth in attendance was essential to justify the investment of
       resources and to maintain program quality.
       Judgment: The unexpected rate of growth in participation by older children
       greatly strained volunteer capacity for several months.

       Finding: Among children ages 9 to 12, rates of participation tripled from
       under 10% in the first month to over 40% by January. 80% had increased one
       grade level in reading scores by June.
       Meaning: Research indicates that children ages 9 to 12 are at high risk
       without essential reading skills by grade 5. New program activities have
       attracted and engaged older children who previously had no one-to-one
       support for reading.
       Judgment: This rate of participation by older children is higher than
       anticipated. The improvement in reading scores compares well to other
       programs.





                                           Recommendations

Recommendations open the door to the future because they present ideas about improving or strengthening
a program or activity.


Recommendations should be clearly based on facts, meaning, and judgments. Recommendations typically
address questions such as:



                                     Should changes be made in the key strategies or activities? In the target
                                     group? In collaborators or other partners?


                                     Should targeted outputs or outcomes be revised? Were targets too
                                     ambitious, too low, or just right?


                                     What are alternative courses of action that may improve the activities,
                                     outputs, and outcomes? What are the possible advantages and
                                     disadvantages of each course of action?



Ultimately, recommendations should emphasize continuous improvement. It is critical that what is learned
from performance measurement be included in decision-making and future planning.


When continuous improvement is emphasized, performance measurement is a great asset to programs and
communities. In contrast, punitive uses of performance measurement undermine improvements. If agencies
and community groups fear that performance measurement will be used to undercut funding or support, they
may be less willing to serve the hard-to-serve or to take on challenging projects. This is especially true of
community mobilization initiatives, which tend to be complex, multi-faceted, and collaborative: all characteristics
that reduce direct control.


It takes time and resources to plan, implement, try out, adjust, and improve performance measurement
systems. It also takes commitment by staff, administrators, decision-makers, legislators, and other stakeholders
to the idea of continuous improvement.








                                  Annotated Bibliography for Chapter 3

1
     Weiss, H. and Jacobs, F. (1988). Evaluating Family Programs. Hawthorne, NY: Aldine deGruyter.

     Jacobs, F., and Kapuscik, J. (2000). Making it Count: Evaluating Family Services. A Guide for State
     Administrators. Medford, MA: Tufts University, Child Development.

2
     Hatry, H. (1997). Where the rubber meets the road: Performance measurement of state and local public
     agencies. New Directions for Evaluation, 73. 31-44.

     Hatry, H. (1999). Performance Measurement: Getting Results. Washington, DC: Urban Institute.

3
     Jacobs, F., and Kapuscik, J. (2000). Making it Count: Evaluating Family Services. A Guide for State
     Administrators. Medford, MA: Tufts University, Child Development.

4
     In 1981, a group of researchers, evaluators, and educators joined to form the Joint Committee on
     Standards in Evaluation. Four standards were adopted: Utility, Accuracy, Feasibility, and Propriety. In
     1994, these standards were reaffirmed and have since been adopted by 15 professional organizations,
     including the American Evaluation Association, the American Psychological Association, the American
     Educational Research Association, and others. The standards are discussed in exceptional detail in: The
     Standards of Evaluation. Thousand Oaks, CA: Sage Publishers, 1998.

5
     Pratt, C., McGuigan, W. & Katzev, A. (2000). Measuring program outcomes using retrospective pretest
     methodology. American Journal of Evaluation.

6
     Yin, R. (1994). Case Study Research: Designs and Methods. Second Edition. Thousand Oaks, CA: Sage.

7
     Over 95% of households have telephones; however, the lowest-income families are the most likely to lack
     this resource.

8
     It is possible to make comparisons across cases – for example, comparing mobilization efforts in one
      community to mobilization efforts in another. This is called the comparative case study or multiple-case
      study design. Comparative case studies are more complex than single case studies because comparative
      studies demand comprehensive, similar data about multiple communities.

9
     Pratt, C., McGuigan, W. & Katzev, A. (2000). Measuring program outcomes using retrospective pretest
     methodology. American Journal of Evaluation.

10
     Yin, R. (1994). Case Study Research: Designs and Methods. Second Edition. Thousand Oaks, CA: Sage.

11
     Fink, A. (1995). How to write an evaluation report. Thousand Oaks, CA: Sage.

12
     http://www.uni.edu/playground/report.html#grades. Also see Chapter 6 in this guide.





