
Measurement

“If you can’t measure it, you can’t manage it.”
    Bob Donath, Consultant
Measurement

The measurement process:
  1. Selecting measurable phenomena
  2. Developing a set of mapping rules
  3. Applying the mapping rule to each phenomenon
Measurement

What is measured?
  Objects – things that are directly experienced, as well as those that are not very concrete
  Properties – characteristics of the object
Levels of Measurement

  Nominal    Classification
  Ordinal    Classification, Order
  Interval   Classification, Order, Distance
  Ratio      Classification, Order, Distance, Natural Origin
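A minimal sketch (not from the slides) of which summary statistics are meaningful at each level of measurement; the data are hypothetical and the variable names are illustrative only.

```python
# Hypothetical examples of the four levels of measurement.
import statistics

blood_group = ["A", "B", "O", "O", "AB", "O"]     # nominal: classification only
satisfaction = [1, 3, 2, 3, 3, 2]                 # ordinal: order, but gaps are not equal
temperature_c = [21.5, 23.0, 19.5, 22.0]          # interval: equal distances, arbitrary zero
monthly_sales = [120.0, 95.5, 143.2, 110.0]       # ratio: equal distances plus a natural origin

print(statistics.mode(blood_group))              # mode is the only meaningful "average" for nominal data
print(statistics.median(satisfaction))           # median uses order without assuming equal distances
print(statistics.mean(temperature_c))            # mean needs equal distances (interval or ratio)
print(max(monthly_sales) / min(monthly_sales))   # ratios are meaningful only with a natural origin
```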
Sources of Error in Measurement

  Respondent
  Situation
  Measurer
  Instrument
Characteristics of a Good Measurement

The criteria of a good measurement:
  Validity
  Reliability
  Practicality
Characteristics of a Good Measurement

  Validity
    The ability of a scale to measure what it was intended to measure.
  Reliability
    The degree to which measures are free from random error and therefore yield consistent results.
  Practicality
    Relates to economy, convenience and interpretability.
Validity and Reliability

Reliability and validity on target (rifle analogy):
  Target A – Old rifle: low reliability, low validity
  Target B – New rifle: high reliability, validity uncertain
  Target C – New rifle with sun glare: high reliability, low validity

The goal of measurement is validity.
Validity

  Content validity
    Face validity
  Criterion-related validity
    Concurrent validity
    Predictive validity
  Construct validity
    Convergent validity
    Discriminant validity
Content Validity

  Content – Does the measure adequately measure the concept?
    Techniques: judgment, panel evaluation, content validity ratio (sketched below)
  Face – Do experts validate that the instrument measures what its name suggests it measures?
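The content validity ratio mentioned above is commonly computed with Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists who rate an item “essential” and N is the panel size. A minimal sketch with hypothetical panel numbers:

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical item: 8 of 10 panel experts rate it "essential".
print(content_validity_ratio(8, 10))  # 0.6
```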
Criterion-related Validity

  Criterion-related – Does the measure differentiate in a manner that helps to predict a criterion variable?
    Technique: correlation (see the sketch below)
  Concurrent – Does the measure differentiate in a manner that helps to predict a criterion variable measured at the same time?
  Predictive – Does the measure differentiate in a manner that helps to predict a future criterion?
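As the technique column indicates, criterion-related validity is usually assessed by correlating the measure with the criterion variable. A minimal sketch on hypothetical data (selection-test scores at hiring versus annual sales a year later):

```python
import numpy as np

test_scores = np.array([62, 75, 58, 90, 70, 84])          # measure taken at hiring (hypothetical)
annual_sales = np.array([410, 520, 380, 640, 470, 600])   # criterion observed one year later

r = np.corrcoef(test_scores, annual_sales)[0, 1]
print(f"criterion validity coefficient r = {r:.2f}")  # a high r supports predictive validity
```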
Quality of Criterion

  Relevant (proper measure)
    e.g., sales performance measured using total sales in a year
  Free from bias (equal opportunity)
    Must be adjusted according to area, competition and potential
  Reliable (stable or reproducible)
    e.g., monthly sales versus yearly sales
  Available (ease of collection)
    Cost; difficulty in securing the data
Construct Validity

  Construct – Does the instrument tap the concept as theorized?
    Techniques: judgment; correlation of the proposed measure with established ones; convergent-discriminant technique (sketched below); factor analysis
  Convergent – Do two instruments measuring the same concept correlate highly?
  Discriminant – Does the measure have a low correlation with a variable that is supposed to be unrelated to it?
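The convergent-discriminant logic can be checked with a small correlation matrix: a new scale should correlate highly with an established measure of the same construct and weakly with a measure of an unrelated one. A minimal sketch on hypothetical scores; the variables are invented for illustration.

```python
import numpy as np

new_anxiety = np.array([12, 18, 9, 22, 15, 20])           # proposed anxiety scale (hypothetical)
established_anxiety = np.array([14, 19, 10, 24, 16, 21])  # established anxiety scale
shoe_size = np.array([42, 40, 39, 41, 44, 38])            # variable assumed to be unrelated

print(np.corrcoef(new_anxiety, established_anxiety)[0, 1])  # expect close to +1 (convergent evidence)
print(np.corrcoef(new_anxiety, shoe_size)[0, 1])            # expect a weak correlation (discriminant evidence)
```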
Reliability

  Stability
    Test-retest
  Equivalence
    Parallel form
    Inter-rater
  Internal consistency
    Split-half
    Inter-item
Reliability (Stability)

  Stability – Reliability of a test or instrument inferred from examinee scores; the same test is administered again after some time to check the stability of the two sets of scores.
    Technique: correlation (see the sketch below)
  Test-retest – The same test is administered again over a short interval (less than six months).
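A minimal sketch of the test-retest correlation on hypothetical scores from two administrations of the same instrument:

```python
import numpy as np

scores_time1 = np.array([34, 41, 28, 45, 37, 39])  # first administration (hypothetical)
scores_time2 = np.array([36, 40, 30, 44, 35, 41])  # same respondents, after an interval

stability = np.corrcoef(scores_time1, scores_time2)[0, 1]
print(f"test-retest reliability = {stability:.2f}")  # values near 1 indicate stable scores
```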
Problems in Test-retest

  Long delays between measurements lead to changes in situational factors
  Insufficient time between measurements permits respondents to remember their previous answers
  The respondent’s discernment of a disguised purpose may introduce bias – opinions related to the purpose are formed but not assessed
  Topic sensitivity occurs when the respondent seeks to learn more about the topic before the retest
  Introduction of other variables unrelated to the research
Reliability (Equivalence)

  Equivalence – Degree to which alternative forms of the same measure produce the same or similar results; the forms are administered simultaneously or with a delay.
    Technique: correlation
  Parallel forms – The same measurement, differing only in the sequence and wording of items.
  Inter-rater – The extent to which two raters agree on the rating of the same object or characteristic (see the sketch below).
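Inter-rater agreement is often summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance; kappa is one common choice, not necessarily the statistic the slide intends. A minimal sketch on hypothetical ratings by two raters:

```python
from collections import Counter

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]  # hypothetical ratings
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # raw agreement

counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2  # chance agreement

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```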
Reliability (Internal Consistency)

  Internal consistency – Degree to which instrument items are homogeneous and reflect the same underlying construct.
    Techniques: Cronbach’s alpha (sketched below); split-half correlation; KR-20
  Inter-item consistency – Measures the consistency of responses across the items of a given measurement tool.
  Split-half – The measurement tool is split into two halves and the correlation between the halves is measured.
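Cronbach's alpha, listed as a technique above, can be computed directly from the item-score matrix. A minimal sketch on a hypothetical four-item scale (rows are respondents, columns are items):

```python
import numpy as np

items = np.array([      # hypothetical responses on a 1-5 scale
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above roughly 0.7 are usually taken as acceptable
```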
Improving Reliability

  Minimize external sources of variation
  Standardize the conditions under which measurement occurs
  Use only trained and experienced investigators and motivated persons
  Broaden the sample of measurement questions
  Take into consideration only the extreme responses, then eliminate questions that do not discriminate
Practicality

  Economy
  Convenience
  Interpretability

								