

The GRACE Principles:
Best Practices in Observational
Comparative Effectiveness Research

Erin Holve, PhD
Senior Manager
            Learning Objectives
1.   Describe the differences between trials and
     observational studies, and each of their strengths
     and limitations for effectiveness research
2.   Understand how observational studies of
     comparative effectiveness are used
3.   Provide examples of how these observational
     studies are shaping health policy
4.   Describe the GRACE Principles and other recent
     efforts to improve the quality of observational
     studies of comparative effectiveness

 Questions and Resources
Questions and Comments?
Please E-mail:
Join a discussion on research with
registries at www.hsrmethods.org
Nancy Dreyer, PhD
Chief of Scientific Affairs
The GRACE Principles: Best Practices
in Observational Comparative
Effectiveness Research
Evidence Hierarchy for Evaluating Interventions

                                © Outcome, 2008
Different Paradigm for Discovery & Explanation

 Vandenbroucke JP (2008) Observational research, randomised trials, and two
              views of medical science. PLoS Med 5(3): e67


• Background on observational studies
• How are observational studies of CE being used
  “today” for safety, effectiveness and value?
• Are there different rules for good practice?
• Initiatives in furthering quality
• Discussion

What is the difference between
“efficacy” and “effectiveness?”

What are observational studies
and when should they be used?
Background: Efficacy vs. Effectiveness

Efficacy: “the extent to which medical interventions achieve
health improvements under ideal circumstances.”
      Can it work?

Effectiveness: “the extent to which medical interventions achieve
health improvements in real practice settings.”
      Does it work in the real world?
The Research Tool

Randomized Controlled Clinical Trials


   Strong Internal Validity*
   • Treatment is assigned by randomization
   • Behavior is largely driven by protocol
   • Inferences are limited by inclusion/exclusion criteria
   • Generally all data are collected specifically for the study
     at hand
   • Analysis is based on “intent to treat”
*how well the data collected reflects the truth about the pop’n under study

Observational Studies

• Individuals are enrolled on the basis of disease, condition, or exposure
• Physician (investigator) chooses who gets treated and how
• Results of ongoing disease process and medical care
  are observed
• Can be accomplished by direct data collection from
  physicians and/or patients
• May be conducted using existing data, collecting new
  data specific for a study, or some combination
• Analysis is based on people’s actual practice (e.g.,
  dose and duration of drug used, not “intent to treat”)
Observational Study Design Examples

• Cross-sectional studies – 1 point in time
• Cohort studies – follow-up over time
   • Follow-up time may already have occurred at time of study or
   • May have follow-up from the start of the study into the future
• Patient registries – with or without longitudinal follow-up
• Case-Control studies
• Case-Crossover studies
• Etc.

• Timing may be prospective, retrospective or mixed
Patient Registry

   A patient registry is an organized system that
   uses observational study methods to collect
   uniform data (clinical and other) to evaluate
   specified outcomes for a population defined by
   a particular disease, condition, or exposure,
   and that serves a predetermined scientific,
   clinical, or policy purpose(s).
   The registry database is the file (or files)
   derived from the registry.
Gliklich RE, Dreyer NA: Registries for Evaluating Patient Outcomes: A User’s
Guide: AHRQ publication No. 07-EHC001. Rockville, MD. April 2007

Observational Studies

Strong external validity*
•   Limited inclusion/exclusion criteria are used to recruit
    patients who are more representative of usual practice
•   Observed practice not dictated by protocol
•   Comparative information from actual practice
•   Estimates of impact of treatment are more realistic
•   Practical clinical research is favorably viewed by
    buyers, and is growing in favor with regulatory agencies
Traditional Clinical Trial
  • Why: Measure efficacy; generate preliminary safety data
  • How used: Component of NDA; content for label; marketing materials

Large Simple Trials
  • Why: Measure effectiveness and safety; evaluate QoL, cost and
    patient preferences; build clinical experience; support multiple
    comparisons
  • How used: Support NDA, label change, pricing, market positioning,
    reimbursement strategies

Registries & Other Observational Studies
  • Why: Similar purposes to LST; evaluate use and effects of drug in
    routine clinical practice
  • How used: Post-marketing publications and presentations; safety
    assessments; support reimbursement and market positioning
Classical Evidence Hierarchy

Why bother with observational studies?

Why not just use trials for all research?

 RCT are not well suited to answer all
 research questions.
  • Atypical behavior & setting
      • Protocol-driven behavior in narrowly defined study populations
     • May not be usual physician or usual practice
  • Analyzed by intent-to-treat, not how products are
    actually used
  • Do not give insights into why clinicians may use
    products off-label or in risky situations

Why do we need more/different evidence?

   Data from RCTs do not
   always reflect real-world
   practice and outcomes

A Reality Check?
 Observational Data
and Clinical Trial Data

JAMA 1998;279:1278-1281
Do we need more and/or different evidence?

Generalizability

Data from standard RCTs cannot necessarily be assumed to apply to
subpopulations not studied in those RCTs.

Available evidence may be a product of otherwise methodologically
rigorous evaluations but…
 • may not have evaluated outcomes that are relevant to Medicare
   beneficiaries.
 • [does not address] risks and benefits to Medicare beneficiaries for
   off-label or other unanticipated uses.
 • may not have included specific patient subgroups or patients with
   disease characteristics that are highly prevalent in the Medicare
   population.

 From: Coverage with Study Participation in National Coverage Determination with
         Data Collection as a Condition. Final Guidance: July 12, 2006
Do we need more and/or different evidence?

Example: bariatric surgery for obesity
 • ~180,000 operations in 2007
 • 40%-50% increase in procedures
 • 10-12 million eligible based on BMI

 “The questions are who should be treated, which operation is the best,
 who should be doing them, and what kind of center should this work be
 performed in.”

                    Walter Pories, President of the American
                                 Society of Bariatric Surgery
Why do we use observational CE studies?

    Data from RCTs do not always reflect real-world practice and
    outcomes
    Data from RCTs cannot necessarily be assumed to apply to
    subpopulations not studied in those RCTs
    Data from RCTs do not answer questions of physician
    practice behavior and the resulting outcomes of that behavior
    There are a limited number of RCTs relative to the number of
    decisions that must be made
 “A clinical trial is the best way to assess whether an
intervention works, but it is arguably the worst way to
       assess who will benefit from it.” David Mant

                 From Kravitz et al. Milbank Q. 2004;82:661-687
Different Paradigm for Discovery & Explanation

 Vandenbroucke JP (2008) Observational research, randomised trials, and two
              views of medical science. PLoS Med 5(3): e67

 How are observational
studies being used today
 for safety, effectiveness
        and value?

Observational Studies for Evidence

•   Determine clinical or cost
    effectiveness; comparative
    analyses or in isolation
•   Measure or monitor safety and
    harm, including comparative safety
•   Measure and/or improve quality of care
•   Natural history of disease process
Focus of Studies: Products, Services & Diseases
 Product
 ♦   Device registries
 ♦   Pharmaceutical product registries
 ♦   Pregnancy registries (exposed pop’n = fetus)
 Health care service, procedure or clinical practice
 ♦   Procedure or hospitalization registries
 ♦   Clinical service (and quality measurement) registries
 ♦   Pay for Performance
 Disease or Event
 ♦   Acute disease or event
 ♦   Chronic disease
 ♦   Rare disease
Goals and Objectives

Clinical Goals

• Understand long-term effects of products
  • Delayed effects from short-term use, and/or
  • Cumulative effects from long-term use
• Unanticipated beneficial effects may lead to new indications
• Identify best practices to achieve optimal outcomes
• Safety/risk management
Goals and Objectives

Marketing goals

• Assist market penetration / optimize product use
  • Understand utilization patterns
  • Special patient subpopulations
  • Further document safety
• Re-position through different outcomes (e.g., QOL)
• Develop relationships with providers
• Thought leadership
Goals and Objectives

Regulatory Mandated
• Product approved under accelerated review process
   • New device or drug to treat serious/life-threatening condition
   • Approval contingent on conducting post-marketing study
• Required due to pre-market safety ‘signals’
   • Risk Evaluation and Mitigation Strategies (REMS)
• Response to question by regulatory agency
   • Spontaneous reporting systems may generate a potential safety signal
• Monitor effects on birth outcomes
• Changes in national or private payers’ approach to
  coverage determinations
Litigation support
Comparative Effectiveness of Treatments

•   Large, on-line breast care registry
•   Systematically documents use patterns, effectiveness and safety
    of a range of disease treatments
•   13,000 patients enrolled from 250 breast centers and followed
    for 5 years

[Chart: % of surgeries associated with SLN biopsies, by month
(Apr-01 through Jul-01)]
Comparative Effectiveness: Hepatocellular Carcinoma

    Quantify the overall survival
•      Compare new drug to other therapies, using both historic (prior to
       new drug’s availability) and concurrent treatment groups for
       comparison

    Characterize and compare the treatment approaches
•      Evaluate based on the number and proportion of patients who
       received each treatment approach per calendar year, in aggregate
       & by country
•      Characterize patients treated with new drug vs. other therapies in
       the same time period (concurrent comparators)

    Evaluate the outcomes (survival, time to progression,
    incidence of treatment-limiting adverse effects, treatment
    failure) associated with each treatment approach
 Quality Improvement
 American Society of Clinical Oncology (ASCO)
  Registry for their Quality Oncology
  Practice Initiative (QOPI)

• Measures-based Certification Program
• Electronic Health Records/RFD:
   • Integrate QOPI registry with the commonly used
     EHRs in oncology practices
   • Allow reporting of QOPI through EHR
• Breast Cancer Registry for planning
  treatments and summaries for patients
• Provide PQRI for members regardless
  of participation in QOPI
Observational Studies to Quantify Value

• How do payers evaluate their needs?
• How do payers evaluate new products?
  • Will they provide long-term value or just add cost?
• How do sellers show convincing, quantitative
  arguments about value?
  • How will this product impact budget?
  • Who will likely be the most frequent users of the new
    product, and what information can be used to support
    their case?
Differentiate Treatment: Prostate Ca

•   Largest advanced prostate cancer registry
•   5,000 patients followed longitudinally,
    including quality of life and health outcomes
•   250 sites (community and academic)
Understanding Treatment Benefits:
Benign Prostatic Hyperplasia: A design

  • To examine the characteristics, management
    practices, and patient outcomes in symptomatic
    BPH patients in the U.S.
  • To explore the effects of demographic factors,
    comorbidities, and concomitant medications in
    BPH patients
  • To measure safety outcomes (common complaints
    and Serious Adverse Events) in this population
BPH Registry & Patient Survey

               Patients with benign prostatic hyperplasia

        Physicians recruit n eligible patients from y sites;
  advise treatment according to standard of care & clinical judgment

     TAKE ALL: Presently or recently             SAMPLE:
     treated with α blockers, 5-ARIs, or both    Watchful Waiting

       Evaluate sexual and urinary functioning over ~2 years
        • Patients complete forms every 6 months (mail to i3)
      • Physicians complete forms at every physician visit (EDC)
Safety, Effectiveness & Value – All in One

Human Avian Influenza Registry – EXAMPLE

• One study meets many purposes
• Study is designed to be adapted over time as
  situation and needs change

      A scalable repository for directly reported
      information about presentation, clinical
      course, response to treatment and
      outcome to promote understanding of the
      nature of this disease in humans

Human Avian Influenza Registry
One study meets many purposes
  • Safety – monitors marketed product(s)
  • Effectiveness – evaluates all treatments used,
    individually and in combination
  • Value – will be used to help make resource
    allocation decisions
Study is designed to be adapted over time as
  situation and needs change
  • E.g., effect of vaccines as they come on the market
Criticism/ Challenges

  Without randomization, there is more potential for
  bias, i.e., systematic errors, e.g.,
     •Selective recruitment
     •Selective loss to follow-up

  Without on-site monitoring, there is greater concern
  about errors in data entry and data accuracy
Criticism/ Challenges

 Confounding is the single biggest issue in observational
 studies of comparative effectiveness

Courtesy of S. Schneeweiss, MD, Sc.D., Brigham and Women’s DEcIDE
 More Challenges for Observational Studies

• Heterogeneity of the quality of observational
  studies and inability to discern those with
  greater or lesser risk of bias

• Dismissal or exclusion of observational data
  in effectiveness reviews, EBM projects
   • Oregon Drug Effectiveness Review Project
     (DERP) uses only systematic reviews of RCTs
 Long-standing Skepticism

Observational research has not been highly
 regarded or widely used by:
  • Regulatory authorities – not accepted for approval or
    significant labeling changes

  • Payers – not widely used for formulary decisions

  • Clinicians – not considered strong support for
    evidence-based medicine decisions

  If it is so difficult, what is the value of this information
     versus other investments (options)?

“Randomised controlled trials…have been put on
an undeserved pedestal. Their appearance at
the top of ‘hierarchies’ of evidence is
inappropriate; and hierarchies, themselves, are
illusory tools for assessing evidence. They
should be replaced by a diversity of approaches
that involve analysing the totality of the evidence-base.”

 Michael Rawlins’ Critique of RCTs
• Impossible - with treatments for very rare diseases
• Unnecessary – when a "dramatic" benefit is evident
    E.g., imatinib (Glivec) for chronic myeloid leukemia
• Stopping trials early after interim analyses show
  apparent benefit may be due to “random high”
• Costs of RCTs are substantial
    Median cost of over £3 million in ‘08
    Av. cost/pt increased from £6,300 in ‘05 to £9,900 in ‘07
• Generalizability

Some stakeholders see value

    • Some guidelines development
    • Some FDA approvals
    • Some Payer decisions
      • Medicare CED

New Indication Supported by Registry

Change in Indication for IOLs

Eydelman MB, Hilmantel G, Saviola J,Calogero D. US FDA: Ophthalmic devices and clinical epidemiology.
Chapter 28. Medical Device Epidemiology and Surveillance. SL Brown, RA Bright, DR Tavris, Eds,
John Wiley & Sons, Ltd, 2007.

Change in Labeling

Use in Practice Guidelines

Trends in Managed Care Decision-Making

  • Observational data are considered by both
     • P&T Committee
     • Formulary Management
  • They are looking for guidance as to how to apply the
    evidence to coverage decisions
• WellPoint is adopting new guidelines for formal
  health technology assessments
Covering PET Scans for Cancer Diagnosis

PET scan cancer registry mandated by CMS under
coverage with evidence development (CED)
Unanswered questions facing the decision maker
•   While the diagnostic sensitivity of a PET scan in identifying
    metastatic disease can be determined by standard trials, the utility of
    that additional information in decision making cannot.
•   It’s important to assess the utility in cases where physicians would
    choose to use the test.
Focus: Expansion to non-covered types of cancer
•   Participation required as a condition of coverage by CMS
•   Doctors asked in each case whether the results of the PET scan
    were used in decision making.
Design: National Oncologic PET Registry

• Targeted all Medicare beneficiaries receiving
  PET scans for cancer indications not currently
  covered by CMS
• Endpoint: Impact of PET scan on physicians’
  intended treatment
  • Treatment
     • Surgery, chemotherapy, radiation or other, alone or in
       combination
     • Non-Treatment: watching, noninvasive imaging, biopsy or
       supportive care
  • Intent – curative or palliative

• Enrolled 34,358 PET studies in 1 year (May ’06 –
National Oncologic PET Registry

Results: National Oncologic PET Registry

Some highlights
•   Physicians changed their intended management in
    36.5% (95% CI 35.9-37.2)
•   Biopsies avoided in 70%
•   Intended management was 3x more likely to
    lead to treatment than non-treatment
•   9% of PET scans led to a major change in the
    type of treatment
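The interval quoted above is a binomial-proportion confidence interval. As a hedged illustration (the NOPR denominators are not given on the slide, so the counts below are invented), the normal-approximation version can be computed as:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n                       # point estimate
    se = math.sqrt(p * (1 - p) / n)         # standard error of the proportion
    return p, p - z * se, p + z * se        # estimate, lower bound, upper bound

# Illustrative counts only, not the actual NOPR data
p, lo, hi = proportion_ci(365, 1000)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

Note that the NOPR interval is much tighter than this sketch because the registry's denominator (tens of thousands of scans) is far larger.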
Risk Sharing on Pricing

Do we need more/different
  rules and guidance for
observational studies than
     for clinical trials?

Guidance for Clinical Trials

  Good Clinical Practice guidelines by ICH
    • International Conference on Harmonization of
      Technical Requirements for Registration of
      Pharmaceuticals for Human Use is composed of
      regulatory authorities of Europe, Japan and the
      United States and experts from the pharmaceutical
      industry
    • Defines and describes Good Clinical Practice
    • Guidance documents are maintained and adapted on a
      regular basis

  CONSORT Guidelines for Reporting
    • Moher D, Schulz KF, Altman DG, for the CONSORT
      Group. The Lancet 2001;357(9263):1191-1194
Typical Registries* are NOT GCP Trials…

A Disease Registry typically…
•   Specifies no treatment
•   Observes without intervention*
•   Collects data as patients seek care
•   Enables data collection through broad patient authorization
•   Allows broad enrollment criteria
•   May gather data over the long term

But Does Not…
•   Assign treatments or dictate treatment
•   Pay for drugs, devices or routine services
•   Mandate a data collection schedule
•   Bind data collection based on informed consent
•   Verify source documentation*
•   Require restrictive inclusion criteria

*some pay for special tests; some verify a % of source docs
Best Practices for SAE Reporting to Regulatory Bodies in Registries of Marketed Products
                    Dreyer NA, Sheth N, Trontell A, et al: Drug Information Journal 42(5):421-428, 2008.

Decision flow:

1. Does the registry have data collection with individual patient
   interaction?
   • No → Does the registry receive sponsorship or financial support
     from any regulated industry?
      • No → Follow good public health practices for reporting new or
        serious AEs (recommended practice; not mandated).
      • Yes → Notify company and/or regulatory body about new or
        serious AEs*; report AEs in periodic regulatory reports or PSUR
        if applicable; aggregate study findings of adverse events.
   • Yes → Registry trains site(s) on identification and reporting of
     AEs, including events of special interest and serious AEs (SAEs),
     and establishes rules, roles, and responsibilities for involved
     parties for oversight and reporting in conformance with registry
     design and applicable regulations. Then:
      • Are SAEs recognized by a knowledgeable person in temporal
        association with a drug* under study? Is there a reasonable
        possibility that the drug caused the SAE? If yes, notify the
        responsible entity (e.g., company) ASAP, ideally within 24 hours.
      • The company determines whether the SAE is “unexpected” (based on
        labeling) in terms of type, specificity or severity.

The company reports SAEs considered unexpected and possibly related for
its own drugs to the regulatory body within 15 calendar days of the
original report; reports for device-related deaths, serious injuries, or
malfunctions are due within 10-30 calendar days.

*For devices, no attribution of expectedness is required;
“device-relatedness” is based on whether the device caused or contributed
to death or serious injury, or, in the case of malfunction, whether the
chance of death or serious injury is not remote if the malfunction were
to recur.
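The SAE reporting flow above is essentially a small decision procedure. A minimal sketch in Python (the function name and step wording are illustrative, not from the Dreyer et al. paper):

```python
def sae_reporting_steps(patient_interaction, industry_sponsored):
    """Return the recommended SAE-reporting steps for a registry design."""
    if not patient_interaction:
        if not industry_sponsored:
            # No patient interaction and no industry sponsor:
            # good public health practice only, not mandated
            return ["Follow good public health practices for reporting "
                    "new or serious AEs (recommended, not mandated)"]
        # No patient interaction, but industry-sponsored
        return ["Notify company and/or regulatory body about new or serious AEs",
                "Report AEs in periodic regulatory reports or PSUR if applicable",
                "Aggregate study findings of adverse events"]
    # Registry collects data with individual patient interaction
    return ["Train sites on identifying and reporting AEs and SAEs",
            "Establish rules, roles and responsibilities for oversight and reporting",
            "If an SAE is recognized in temporal association with the study drug "
            "and possibly caused by it, notify the responsible entity within 24 hours",
            "Company determines whether the SAE is 'unexpected' based on labeling",
            "Company reports unexpected, possibly related SAEs to the regulatory "
            "body within 15 calendar days (10-30 days for device events)"]
```

For example, `sae_reporting_steps(False, True)` returns the branch for an industry-sponsored registry with no individual patient interaction.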
  Congressional Budget Office on CER

CBO defined CER as
• “a rigorous evaluation of the impact of different
  options that are available for treating a given
  medical condition for a particular set of patients.”
• Studies may compare similar treatments, such
  as competing drugs, or very different
  approaches, e.g., surgery and drug therapy.
• The analysis may focus only on the relative
  medical benefits and risks of each option, or it
  may also weigh both the costs and the benefits
  of those options.”
  Goal: Provide patients, physicians & payers with
  evidence to support treatment decisions
Guidance Gap?

• Institute of Medicine estimates that less than
  half of all medical care is supported by
  adequate effectiveness research
• Head-to-head research is risky for
  pharmaceutical and biotech manufacturers to
  conduct
• Hard to get safety and effectiveness studies,
  especially observational ones, published
  without dramatic findings
Current Status
Observational Comparative Effectiveness
  • Important for clinical, policy, payer decision-makers
  • Little evidence on CE available
  • Not produced by industry b/c not accepted widely
    for regulatory purposes or by payers

Existing guidance does not explain how to
  design a strong observational CE study or
  how to evaluate the findings from a decision
  maker’s perspective
Contrasting Depth of Guidance Available
Clinical Trials: ICH & CONSORT

Observational Research has bits & pieces
  • Good practices for handling adverse events identified
    through registries. Drug Information Journal 2008;42(5):421-428
  • Guidelines for good pharmacoepidemiology practices
    (Pharmacoepidemiology & Drug Safety 2005;14:589-595)
  • Quality of Reporting of Observational Longitudinal Research
    (AJE 2005;161:280-288)
  • Guidance for Industry: Good pharmacovigilance practices &
    pharmacoepidemiology assessment. DHHS, March 2005
  • Guidance for Industry: Establishing Pregnancy Exposure
    Registries. DHHS, August 2002
Guidance on Good Practice
 for Observational Studies

ISPE Guidelines (2005)

• Guidelines for good pharmacoepidemiology practices
• Goal: “Assist investigators with issues
  pertaining to the planning, conduct, and
  evaluation of pharmacoepidemiologic studies”
• Published in Pharmacoepidemiology & Drug
  Safety in 2005, updated in 2008:
    Guidelines for good pharmacoepidemiology
    practices. Pharmacoepidemiology & Drug Safety.
    2008; 17: 200-208.
    ISPE Guidelines (2005)
Broadly address:
•   Protocol development
    • Detailed list of what should be included in a protocol
•   Responsibilities, personnel, facilities, resource
    commitment, and contractors
•   Study conduct
    • Discusses operations, human subject protection, data
      collection, management, & verification, analysis, and
      study report
•   Communication
•   Adverse event reporting
•   Archiving

But they do not address comparative effectiveness research
specifically, or what would constitute ‘good’ practices
  STROBE Statement (2007)

• “Strengthening the Reporting of Observational
  Studies in Epidemiology” Statement
• Goal: Promote consistent, thorough reporting
  of observational studies; not a study quality
  assessment
• Adopted by major journals
• Checklist and paper published in BMJ and
  Annals of Internal Medicine in October 2007
• www.strobe-statement.org
Handbook for Patient Registries

Sr. Editors: Drs. Richard Gliklich and Nancy
Dreyer, Outcome DEcIDE Center

Collaborative effort with broad multi-
stakeholder involvement
 •   Duke University EPC
 •   CMS Coverage and Analysis Group
 •   39 contributors from industry, academia, health plans,
     physician societies and government
 •   35 invited peer reviewers and public comment, including OCR,
     OHRP, IOM among others

Example driven: ~20 case studies illustrating
specific challenges and solutions.

Print copies available free of charge. PDFs
available at http://effectivehealthcare.ahrq.gov

Handbook for Patient Registries (2007)

Provides a framework for the evaluation of patient
registries:
  • Design – planning, design, data elements, data
    sources
  • Legal/ethics
  • Operations – recruitment & retention, data
    collection, quality assurance, AE/SAE reporting
  • Analysis
  • Evaluation
     But does not specifically address use of registries in
             comparative effectiveness research

  Evaluating the Quality of Registries

• Although all registries can provide useful
  information, there are levels of rigor that
  enhance validity and make the information
  from some registries more useful for guiding
  decisions
• We adapted quality evaluations used for
  trials, which address the confidence that the
  design, conduct, and analysis of the registry
  protect against systematic errors and errors
  in inference
    Evaluating Registries

• Quality component analysis
   • Research quality (scientific process)
        Planning; design; data elements & data sources; ethics, privacy
        and governance
   • Evidence quality (data/findings)
       Patients; data elements & data sources; QA; analysis; reporting
• Components classified as
   • Basic Good Registry Practice℠ or
   • Potential Enhancement to Good Registry Practice

 Research Quality: Good Registry Practice

Basic + Potential Enhancements
• PLANNING: A written study plan documents goals, design, study
  population, recruitment, data collection, human subject
  protection, data elements and sources, data review/QA.
  Feasibility is considered at the outset.
• Plans address how the data will be evaluated, including what
  comparative information will be used, if any, to support study
  hypotheses or objectives.
   • Formal study protocol, with review from key stakeholders; pilot studies for
     hard-to-reach or sensitive populations
   • Use concurrent comparators, especially for treatments that are changing rapidly
• DESIGN: Size required to detect an effect, should it exist, or
  achieve a desired level of precision is acknowledged, whether or
  not it is met.
   • Formal sample size calculations, whether achievable or not

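The DESIGN point above asks that the required sample size be documented, whether or not it is achievable. As a minimal sketch (illustrative only; the slides do not prescribe a method), the standard normal-approximation formula for comparing two event proportions can be computed with the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group to detect a difference between
    two proportions (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g., to distinguish a 10% vs. 15% event rate at alpha = 0.05, 80% power:
print(n_per_group(0.10, 0.15))  # 683 patients per group
```

Whether or not such a target is feasible for a given registry, the principle is that the calculation is performed and acknowledged up front.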
 Research Quality: Good Registry Practice

Basic + Potential Enhancements
• DESIGN: Follow-up time needed to detect events of interest is
  acknowledged, whether or not feasible to achieve. To the extent
  feasible, the follow-up time is adequate to address the main
  objectives
• DATA ELEMENTS: Outcomes are clinically meaningful and relevant,
  i.e., useful to the medical community for decision-making
   • Use scales that have been validated, when such tools exist
   • Adaptive QA based on observed performance
   • Coding consistent with nationally approved systems; standardized data
     dictionaries, etc.
• ETHICS, PRIVACY, GOVERNANCE: Registry has received
  review by required oversight committees
• COMMUNICATION PLAN for results is addressed.
   • Publication policies are specified in advance of collecting the data

Evidence Quality: Good Registry Practice
Basic + Potential Enhancements
• PATIENTS: Participants are similar to the target population;
  attention is paid to minimize selection bias to the extent feasible.
   • External validity is described
• For safety studies, registry personnel are trained to ask about
  AEs in a consistent, clear & specific manner, and know how to
  report them
• DATA ELEMENTS & SOURCES: Data are reasonably complete
  and accurate
   • Results can be confirmed by an unbiased observer (e.g., non-subjective
     measures)
• QA: Reasonable efforts have been expended to assure that
  appropriate patients have been systematically enrolled and
  followed in as unbiased a manner as possible and losses to
  follow-up have been minimized. Data are checked using range
  and consistency checks.
   • Potential sources of errors relating to accuracy and falsification are
     rigorously evaluated and quantified
   • A sample of data are compared with patient records.
Evidence Quality: Good Registry Practice

Basic + Potential Enhancements

•   ANALYSIS: Accepted analytic techniques are used; they may be
    augmented by new or novel approaches.

     • Quantitative description of risks and benefits, not just reporting p values

•   REPORTING: Results are reported for all main objectives; follow-up
    time is described so readers can assess its impact on conclusions
    drawn; report clearly states any conclusions drawn and implications of
    results, as appropriate.

     • Good reporting practices are employed (see STROBE)
     • Inferences are based on a variety of factors including the strength of the
       association, biases, temporal relation, etc.

Guidance on Observational
 Studies of Comparative
 Effectiveness
GRADE Collaborative

• “Grades of Recommendation, Assessment,
  Development and Evaluation Working Group”
• Goal: Promote a more consistent and
  transparent approach to grading evidence and
  recommendations
• Being adopted/piloted by ACP, ACCP, NICE,
  WHO, UpToDate, etc.
• Background paper published in BMJ April 2004
• www.gradeworkinggroup.org

   GRADE Evidence Levels (2004)
Observational studies                                        | Quality of Evidence | RCTs
Very strong association                                      | High                | Well-designed studies
Strong, consistent association with no plausible confounders | Moderate            | Study flaws
Well-designed studies                                        | Low                 | Sparse data
Few or inconsistent findings                                 | Very Low            | Publication bias
GRADE opens the door

• Well-designed observational CE studies are
  not as limited by being observational per se
• Still limited by lack of consensus and clarity
  on what constitutes “well-designed” studies or “strong” findings

 ISPOR – Real World Data Task Force (2007)

• “Using 'Real World' Data in Coverage and
  Reimbursement Decisions” Task Force
• Goal: Develop a framework to assist health-
  care decision-makers in dealing with real-world
  data, especially related to coverage and
  payment decisions.
• Published in Value in Health in 2007
    Garrison Jr. LP, Neumann PJ, Erickson P, et al.
    Using real-world data for coverage and payment
    decisions: The ISPOR real-world data task force
    report. Value Health 2007;10:326-35

 ISPOR – Real World Data Task Force (2007)


• RCTs remain the gold standard for demonstrating
  clinical efficacy in restricted trial settings, but…

• [RW data] can contribute to the evidence base needed
  for coverage and payment decisions

• It is critical that policymakers recognize the benefits,
  limitations, and methodological challenges in using
  RW data, and the need to carefully consider the costs
  and benefits of different forms of data collection in
  different situations

  Does not provide any specific insights into evaluating
  OCER
 GRACE Initiative
• Develop principles to address
  good practice for the design,
  conduct, analysis, and reporting of
  observational studies of
  comparative effectiveness

• Ultimate goal of the principles:
  enhance quality and facilitate the
  use of this research to support
  decision-making by patients,
  physicians, and payers

  www.graceprinciples.org

    GRACE Initiative

• Model for consensus
    • Draft principles posted online for public comment
    • Extending input and review to a broad group of
      stakeholders
    • Iterative postings/presentations/reviews/revisions
•   Industry seed funding from National
    Pharmaceutical Council

Hierarchy for Observational CE Studies

1.   Determinants of use are not related to
     determinants of outcomes
      •   Treatment decisions are largely driven by reimbursements,
          e.g., different formularies
2.   Appears to be a situation of clinical equipoise
      •   E.g., conflicting guidelines or widespread debate about appropriate
          patient management
3.   Known determinants of treatment, independent
     of patient characteristics, can be identified
      •   Strong treatment preferences, e.g., physician prescribing
          preference, surgical approaches
      •   Treatment choice affected by toxicity, e.g., warfarin tx decision
          relates to safety concern about stomach ulcers, but is unrelated to
          risk of stroke
4.   Little relevant evidence available
Draft GRACE Principles

1.   Design realistic and clinically meaningful
     comparisons for appropriate target populations
     •   Study Purpose
     •   Target population
     •   Comparisons
     •   Outcomes: clinical relevance, appropriateness
         for all products, safety, effectiveness
     •   Safety & tolerability
     •   Sample size and statistical power

Draft GRACE Principles

2.   Collect the most valid, clinically relevant
     data needed to answer the study question
     as efficiently as possible.
     •   Have data been collected in an unbiased
         manner for a study group that represents the
         target population?
     •   Are the appropriate data being collected?
     •   What data might be systematically missing?

Draft GRACE Principles

3.   Analyze the data to compare people who
     are similar in the characteristics that would
     cause them to receive the treatment and in
     their likelihood of benefiting from the treatment
      •   Actual medication use, not intent to treat
      •   Account for disease severity, risk factors &
          other potential confounders
      •   Provide quantitative estimates of CE for all
          products under study

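Principle 3's core idea, comparing patients who are alike in the factors driving treatment choice, can be made concrete with a minimal stratified comparison. This is a sketch only, using made-up patient records and a single confounder (disease severity); the slides do not mandate any particular adjustment method:

```python
# Hypothetical records: (treatment, severe_disease, had_event)
patients = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", False, True),
]

def event_rate(treatment: str, severe: bool) -> float:
    """Event rate for one treatment within one severity stratum."""
    group = [p for p in patients if p[0] == treatment and p[1] == severe]
    return sum(p[2] for p in group) / len(group) if group else float("nan")

# Compare like with like: treatment A vs. B within each severity stratum,
# rather than a crude comparison that mixes mild and severe patients.
for severe in (True, False):
    stratum = "severe" if severe else "mild"
    print(stratum, event_rate("A", severe), event_rate("B", severe))
```

In practice the same goal is pursued with multivariable regression or propensity-score methods that handle many confounders at once, but the stratified view shows what "comparing similar people" means.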
Draft GRACE Principles

4. Consider alternative explanations for
   the findings
    • Consider the role and impact of missing
      data and bias (due to selection,
      misclassification, detection,
      performance or attrition)
    • Evaluate loss to follow-up at various
      points in time to assess its impact on
      results

Draft GRACE Principles

5. Conduct the study in a manner that
   adheres to accepted good practices of
   evidence quality for observational research

Collaborators and Supporters*

           *As of December 2008
Examples of observational
studies that have been
used to support decision-making
• Real-world research is critical to the development of
  usable evidence for real-world decision-making.
• Observational comparative effectiveness research will be
  increasingly important for
   • Safety
   • Proof of real-world effectiveness, and
   • Product differentiation
• Unleashing the value of real-world research requires new
  methods to evaluate the quality of the research
  performed and the evidence produced.
• Good evidence depends on strong design, research
  quality, and information value.

Contact Information

Nancy A. Dreyer, MPH, PhD, FISPE
 Chief of Scientific Affairs

 201 Broadway, 5th Floor
 Cambridge, MA 02138
    Submitting questions
Join a discussion on this presentation
– www.HSRMethods.org
– > ‘Forum’ > ‘Continuing Education’ > ‘The
  GRACE Principles’
– hsrmethods@academyhealth.org
– www.academyhealth.org
– www.hsrmethods.org
  • AHRQ reports in ‘links’
– www.academyhealth.org/hsrproj
        Methods Updates
Receive updates on:
– Training
– Methods resources
To join:
– e-mail to hsrmethods@academyhealth.org
   • subject line: “join methods list.”
Thank You
