Moving Patient Safety Improvement Practices to the Next Level:
Closing the Organizational Learning Loop

         AcademyHealth Annual Research Meeting
                                   June 3, 2007
                                    Orlando, FL
Peter E. Rivard, Ph.D.1,2
Victoria A. Parker, D.B.A. 1,2
Amy K. Rosen, Ph.D. 2

   1.   VA Center for Organization, Leadership and Management
        Research (COLMR), Boston, MA
   2.   VA Center for Healthcare Quality, Outcomes and Economics
        Research (CHQOER), Bedford, MA

                Supported by grant number IIR-02-144-1, the Department
                of Veterans Affairs, Veterans Health Administration, Health
                Services Research and Development (HSR&D) Service,
                and the Agency for Healthcare Research and Quality
                (AHRQ), Center for Delivery, Organization and Markets.
                     Overview
• Patient safety improvement process follows Quality
  Improvement (QI) cycle:
      Plan-Do-Study-Act
• How do hospitals “Study” (evaluate) their patient
  safety initiatives?
• Case studies (qualitative): 3 VA hospitals
• Implications for research and practice
               Background:
          Patient Safety in the VA
• VA & National Center for Patient Safety (NCPS)
  recognized for patient safety initiatives
• QI approaches to safety improvement:
   – Root Cause Analysis (RCA)
   – Healthcare Failure Mode and Effect Analysis
     (HFMEA)
• VA Patient Safety Indicators (PSI) Study
   – Feasibility of using AHRQ PSIs in VA
   – Site visits explored variation in safety
     improvement practices
             Research Questions

• General question: Are there systematic facility-level
  approaches to error detection and prevention that
  may account for lower PSI rates?
• Specific question: Are there facility-level differences
  in evaluation of patient safety improvement
  initiatives?
              Methods: Site Selection
• Population: all VA acute inpatient facilities (N=127)
• Theoretical sampling
• Selection criteria: FY 2004 PSI rates & geographic diversity
• AHRQ PSIs: proxy for patient safety outcomes
   – Use administrative data
   – Rates of potentially preventable adverse events
   – Mean of hospital’s within-VA rank on 11 PSIs
• 3 Sites selected
   – 1. Consistently high (unfavorable) PSI rates
   – 2. Most rates near VA median
   – 3. Consistently low (favorable) PSI rates
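
The composite used for site selection (the mean of a hospital's within-VA rank on each PSI) can be sketched in a few lines. Hospital names and rates below are invented for illustration, and rank 1 is assumed to correspond to the most favorable (lowest) rate; the study itself used 11 PSIs across 127 facilities.

```python
# Illustrative sketch of the site-selection composite: rank each
# hospital within-VA on each PSI (rank 1 = lowest, most favorable
# rate), then average each hospital's ranks across all PSIs.
# Hospital names and PSI rates are made up for illustration.

from statistics import mean

# rates[hospital] = rates on each PSI (same PSI order for every hospital)
rates = {
    "Hospital A": [2.1, 0.8, 5.0],
    "Hospital B": [1.4, 1.1, 3.2],
    "Hospital C": [3.0, 0.5, 4.1],
}

def composite_rank(rates):
    hospitals = list(rates)
    n_psis = len(next(iter(rates.values())))
    ranks = {h: [] for h in hospitals}
    for i in range(n_psis):
        # order hospitals by their rate on PSI i (lowest rate first)
        ordered = sorted(hospitals, key=lambda h: rates[h][i])
        for rank, h in enumerate(ordered, start=1):
            ranks[h].append(rank)
    # composite = mean of the hospital's within-sample ranks
    return {h: mean(r) for h, r in ranks.items()}

print(composite_rank(rates))
```

Sites with consistently low composites (favorable), near-median composites, and consistently high composites (unfavorable) would then be candidates for the three study sites.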
              Methods: Interviews
• Site visits June-September 2005
• Interviews:
   – Interviewers & interviewees blinded to site PSI rates
   – Interviewed executives, managers, non-managers
   – One-hour interviews; semi-structured
   – Interviews recorded & transcribed

• Data:
   – Transcripts coded using qualitative software (NVivo)
   – Coded for a priori constructs and emergent themes
   – Within-case & cross-case comparison
  Results: Two Levels of Evaluation
• Project evaluation: improvement efforts that target a
  particular structure, process or outcome
     e.g., handwashing or falls reduction
   – Implemented as planned?
   – Changes in practice or process?
   – Changes in event rates or other outcomes?

• Program evaluation: of patient safety improvement
  program, cutting across projects
     e.g., data collection or Root Cause Analysis process
        Results: Project Evaluation

Consistent across sites:
• All forms of project evaluation occurred at all sites
• No site perfect: staff reported gaps in follow-up at all
  sites
Variation across sites:
• Apparent quantity of project evaluation
• Emphasis on measurement and trends
      Results: Program Evaluation
                 Site 1
• Least evidence of program evaluation: in “start-up”
  mode
• Structure:
   – Hire patient safety support staff
   – Develop event reporting channels
   – Set RCA process
• Culture:
   – Sell patient safety improvement
   – Reduce fear of blame in order to increase reporting
        Results: Program Evaluation
                   Site 2
• Inadequate improvement action and follow-through →
       More measurement
       Increase executive oversight & involvement
       Increase accountability by units
       Report status & results back to staff
• Gaps in reporting, e.g., residents not reporting events →
       Facilitate event reporting by residents; educate them
• Summary:
   – Using increased measurement and centralized control to
     institutionalize patient safety improvement;
   – Now that reporting is occurring in general, addressing gaps in
     reporting.
        Results: Program Evaluation
                   Site 3
• Pockets of resistance →
   Strategic use of RCAs – educate “late adopters” & gain buy-in

• Some RCAs narrowly focused on clinical practice →
   Re-frame RCA data presentation

• Need QI staff to focus on program improvement →
   Shift routine monitoring back to units

• Measurement fatigue →
   Prioritize certain measures; “retire” others

• Staff frustration that successes are on-paper only →
   Project evaluation = dialog
      Results: Program Evaluation
                 Site 3
Summary:
• Fine-tuning aspects of the process, e.g., shared
  meaning of RCA
• “Post-centralization”: shifting responsibility back to
  departments
• “Post-measurement”:
   – addressing measurement fatigue;
   – evaluating projects in qualitative and dialogic ways
                   Discussion

Organizational learning for patient safety improvement
  occurred at multiple levels
• First-order:
     Did the project achieve the desired changes in
     structure, process, and/or outcomes?
• Second-order (Argyris & Schön 1996):
   – Are we doing the right projects?
   – How can we improve our patient safety
     improvement processes?
Variation among sites was greater in program
  evaluation than in project evaluation
   Implications for Future Research

Program evaluation variation across sites:
   – More evaluation associated with more favorable
     PSI rates
   – Qualitative differences in nature of program
     evaluation
Do program evaluation and improvement affect
  patient safety?
      Implications for Future Research
  Do patient safety improvement programs follow a
    developmental path?
  Organizational learning and stages of organizational
    design for safety (Carroll & Rudolph 2006):

  “Local” Stage: Improve individual skills & unit-level routines
     → “Control” Stage: Centralization, standardization
        → “Open” Stage: Cross-function & cross-discipline
          collaboration
           Implications for Practice

• PSIs as indicators of patient safety program
  performance
• Patient safety improvement:
   – More (projects) isn’t necessarily better (Weiner,
     Alexander et al 2006)
   – Program improvement may be key
      • Continual program improvement: dynamic
        capabilities (Zollo & Winter 2002)
      • Program integration (Weiner, Alexander et al 2006)
