Data Collection Considerations:
Validity, Reliability, Generalizability, and Ethics

JS Mrunalini
Lecturer
RAKMHSU
Reliability and Validity
• Before using a measuring instrument it is
  important to be assured that it has acceptable
  levels of reliability and validity.
Validity
• How we know that the data we collect (test
  scores, for example) accurately gauge what we
  are trying to measure

• The degree to which scientific observations
  actually measure or record what they purport to
  measure
Types of Validity
• Criterion
• Content
• Construct
Criterion Validity
• Using another measuring instrument as a
  criterion to check the validity of the new one.
Content Validity
• The content of the instrument needs to be
  relevant to the concept of what is being
  measured.
• For instance, an instrument designed to rate
  depression that asked questions not relevant to
  depression would have problems with content
  validity.
Construct Validity
• Describes the extent to which an instrument
  measures a theoretical construct.
• This is the most difficult validity to establish.
• Basically this approach ties the instrument in
  with a theoretical perspective.
Validity
• Linked to numerically based research
• Convinces the researcher and those being researched
  that the results of the research are right, accurate,
  and able to withstand scrutiny from other researchers
• Qualitative research –
 ▫ Trustworthiness (Kincheloe)
 ▫ Understanding (Wolcott)
Validity
• Guba: trustworthiness is established through
  credibility, transferability, dependability, and
  confirmability
• Credibility
  ▫ Researcher’s ability to take into account all of the
    complexities that present themselves in a study
    and to deal with patterns that are not easily
    explained
Validity
• Transferability
 ▫ Qualitative researchers’ beliefs that everything
   they study is context bound and that the goal of
   their work is not to develop “truth” statements
   that can be generalized to larger groups.
 ▫ Depends on whether the consumer of the research
   can identify with the setting
Validity
• Dependability
 ▫ The stability of the data
     Triangulation of data: two or more methods
     Audit trail: examine the interpretive accounts that
      are grounded in the language of the people studied
      and in their own words
Validity
• Theoretical validity
  ▫ The ability of the research report to explain the
    phenomenon that has been studied and described
• Generalizability
  ▫ Within the community that has been studied
    (internal)
  ▫ To settings that were not studied by the researcher
    (external)
Validity
• Evaluative validity
  ▫ Whether the researcher was objective enough to
    report the data in an unbiased way
Validity In Action Research
• Action researchers need a system for judging the
  quality of their inquiries that is specifically
  tailored to their classroom-based research
  projects
• Democratic validity requires that multiple
  perspectives of all participants have been
  accurately represented
Validity In Action Research
• Outcome validity requires that the action
  emerging from the study leads to the successful
  resolution of the problem
 ▫ Your study is valid if you learn something that can
   be applied to subsequent research
• Process validity requires that a study has been
  conducted in a dependable and competent
  manner
Validity In Action Research
• Catalytic validity requires that the participants
  are moved to take action on the basis of their
  heightened understanding of the subject of the
  study
• Dialogic validity involves having a critical
  conversation with others about your research
  findings and practices
Establishing Validity
•   Talk little, listen a lot
•   Record observations accurately
•   Begin writing early
•   Let readers see for themselves
•   Report fully
•   Be candid
•   Seek feedback
•   Write accurately
Reliability
• The consistency with which our data measures
  what we are attempting to measure over time
• Getting the same results over time
• As you think about the results of your inquiry,
  consider whether you think that your data would
  be consistently collected if the same technique
  were used over time
Reliability
• The extent to which a measure reveals actual
  differences in what is being measured. It is
  possible for a measuring instrument to show
  variance that has nothing to do with what it is
  actually supposed to be measuring.
• You could construct an IQ test that produces
  different scores but the difference in the scores is
  actually caused by the way the test is constructed
  rather than differences in the IQ of persons
  taking the test.
Sources of Error
• Sources of error in an instrument include:
 ▫ unclear or vague definitions
 ▫ retrospective information, i.e., information that is
   gathered by recall or recollection
 ▫ variations in collection conditions
 ▫ the structure of the instrument
Testing Reliability
• There are four basic methods for testing the
  reliability of an instrument.
 ▫   Test-retest
 ▫   Alternate form
 ▫   Split half
 ▫   Observer reliability
Test-Retest
• Repeated administration of the instrument to
  the same people on separate occasions.
• This is done to test the reliability of the
  instrument - not during the actual research
  project that the instrument is going to be used
  in.
Alternate Form
• Alternate versions of the instrument are administered
  to the same individuals and the results are compared.
Split Half
• Items in the instrument are divided into
  comparable segments in such a way that the
  different segments should have comparable
  scores.
• This type of test for reliability is used to screen
  the internal reliability, or consistency, of the
  instrument.
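
A minimal sketch of how a split-half check might be computed, assuming hypothetical per-respondent item scores, an odd/even split of the items, and the Spearman-Brown correction for estimating full-test reliability (the slides do not name a specific split or correction, so those choices are assumptions):

```python
# Split-half reliability sketch (hypothetical data, odd/even item split).
from scipy.stats import pearsonr

# Each row: one respondent's scores on a 6-item instrument (0-4 scale).
item_scores = [
    [3, 4, 3, 4, 2, 3],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 3, 4, 4, 3],
    [2, 2, 2, 3, 2, 2],
    [0, 1, 1, 0, 1, 0],
]

# Divide the items into two comparable segments: odd- vs. even-numbered items.
odd_totals = [sum(items[0::2]) for items in item_scores]
even_totals = [sum(items[1::2]) for items in item_scores]

# Correlate the two half-test scores.
r_half, _ = pearsonr(odd_totals, even_totals)

# Spearman-Brown correction estimates the reliability of the full-length test.
full_test_reliability = 2 * r_half / (1 + r_half)
print(f"Half-test correlation: {r_half:.2f}")
print(f"Estimated full-test (internal) reliability: {full_test_reliability:.2f}")
```
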
Observer Reliability
• Compares the results obtained when the instrument is
  administered by different administrators who are
  trained to use the same protocol.
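
The slides do not name a statistic for this comparison; one common choice is Cohen's kappa, which corrects simple percent agreement for agreement expected by chance. A minimal sketch, assuming two hypothetical raters scoring the same observations:

```python
# Observer (inter-rater) reliability sketch using Cohen's kappa.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category independently.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of the same eight observations by two trained observers.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Percent agreement: {agreement:.2f}")
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```
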
Correlation
Coefficient
Correlation Coefficient
• A statistical procedure that measures the extent to
  which the scores being compared are similar.
• Used for testing the reliability of a survey,
  instrument, or test.
• When used to assess reliability, the coefficient is
  interpreted on a scale from 0.0 to 1.0.
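
The slides do not specify which correlation coefficient is intended; the Pearson product-moment coefficient is the usual choice when comparing two sets of scores, with the standard definition:

```latex
r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
              {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^{2}} \; \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}}
```

where x_i and y_i are the paired scores being compared (for example, the first and second administrations of a test) and \bar{x} and \bar{y} are their means.
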
Interpreting the Correlation
Coefficient
• The correlation coefficient produces a value ranging
  from 0.0 to 1.0.
• 1.0 is a perfect correlation and is rarely, if ever,
  achieved. Usually a coefficient of .80 or better
  suggests that the instrument is reasonably reliable.
• 0.0 is the other end of the scale indicating that
  the instrument is not at all reliable.
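
A minimal sketch of how this interpretation might be applied in a test-retest check, assuming hypothetical scores from two administrations of the same instrument and the .80 benchmark mentioned above:

```python
# Test-retest reliability sketch using the Pearson correlation coefficient.
from scipy.stats import pearsonr

# Hypothetical scores for eight people on two administrations of the same instrument.
first_administration  = [12, 18, 25, 30, 22, 15, 28, 20]
second_administration = [14, 17, 24, 31, 20, 16, 27, 21]

r, _ = pearsonr(first_administration, second_administration)
print(f"Test-retest correlation: {r:.2f}")

# Rule of thumb from the slides: a coefficient of about .80 or better
# suggests the instrument is reasonably reliable.
if r >= 0.80:
    print("The instrument appears reasonably reliable.")
else:
    print("The instrument's reliability is questionable.")
```
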
Generalizability
• Refers to the applicability of findings to settings
  and context different from the one in which they
  were obtained.
• Can the findings be applied to a wider group of people?
• Goal of action research is to understand what is
  happening in your school or your classroom and
  to determine what might improve things in that
  context
Personal Bias
• If we conduct our research in a systematic,
  disciplined manner, we will go a long way
  toward minimizing personal bias in our findings
• Challenge is to remain open and objective, to
  look, and to reflect on what we see
Ethics
• How each of us treats the individuals with whom
  we interact
• There is little distance between teacher
  researchers and their subjects—their students
• Qualitatively oriented action research is open-ended
• Informed consent

				