Bayesian Multiple Testing for Two-Sample Multivariate Endpoints

Mithat Gönen, Memorial Sloan-Kettering
Peter H. Westfall, Texas Tech University
Wesley O. Johnson, University of California at Davis
Q: Is Bayesian Testing Using Point Null Priors Defensible?
A: Yes:
1. High-throughput screening
2. Early Phase I and II trials
3. Bioequivalence
4. Adverse events with no biological connection to the drug
5. Phenotype comparisons among genotype subgroups for unlinked genes
6. Skeptical reviewers may doubt the sponsor’s prior
An Approximate Method
- Let Z be the k-vector of z-statistics.
- Then Z ≈ Z0 + d, where
  Z0 ~ N(0, R), R = the correlation matrix of the endpoints,
  d = the vector of noncentrality parameters.
- Treat R as known (substitute an estimate).
- Put a prior on d that accommodates all 2^k null/non-null states.
- Find P(d_i = 0 | Z).
Solution – Approximate Case
Let h be one of the 2^k states and let H denote the true state. Then

P(H = h | Z) = {marginal pdf of Z for state h} × p(h)
               / Σ_h' {marginal pdf of Z for state h'} × p(h')

Then P(d_i = 0 | Z) = sum of P(H = h | Z) over the 2^(k-1) states h with d_i = 0.
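This is just a finite mixture calculation over the 2^k states. The sketch below is illustrative Python, not the authors' software; it assumes the noncentrality parameters are a priori independent with a common prior mean and variance (as in the example that follows), and that prior_prob is a dict keyed by the 0/1 state tuples.

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def config_posteriors(z, R, prior_prob, d_mean, d_var):
    """Posterior probability of each of the 2^k states h.  Under state h,
    marginally Z ~ N(d_mean * h, R + diag(d_var * h)): d_i = 0 where h_i = 0,
    and d_i ~ N(d_mean, d_var), independently, where h_i = 1."""
    k = len(z)
    weights = {}
    for h in itertools.product([0, 1], repeat=k):
        hv = np.asarray(h, dtype=float)
        mean = d_mean * hv
        cov = np.asarray(R) + np.diag(d_var * hv)
        weights[h] = prior_prob[h] * multivariate_normal.pdf(z, mean, cov)
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def prob_null(posteriors, i):
    """P(d_i = 0 | Z): total posterior mass of the states with h_i = 0."""
    return sum(p for h, p in posteriors.items() if h[i] == 0)
```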
Example – Two Endpoints

R = [ 1   .7 ]
    [ .7   1 ],   Var(d_i) = 2,   E(d_i) = 2.5,   Corr(d1, d2) = 0; and

P(d1 = 0, d2 = 0) = P(d1 = 0, d2 ≠ 0) = P(d1 ≠ 0, d2 = 0) = P(d1 ≠ 0, d2 ≠ 0) = .25
Marginal pdfs of Z

h = (0,0):  Z ~ N( (0, 0)',     [ 1  .7; .7  1 ] )
h = (0,1):  Z ~ N( (0, 2.5)',   [ 1  .7; .7  3 ] )
h = (1,0):  Z ~ N( (2.5, 0)',   [ 3  .7; .7  1 ] )
h = (1,1):  Z ~ N( (2.5, 2.5)', [ 3  .7; .7  3 ] )
Observed Data: Z1 = 2.4, Z2 = 3.0
[Figure: contour plots of the marginal density of Z under each state, one panel per h = (0,0), (0,1), (1,0), (1,1); both axes run from -4 to 8.]
Posterior Analysis

h = (0,0):  f(Z) = 0.0022667    p{(0,0) | Z} = 0.0385
h = (0,1):  f(Z) = 0.0042732    p{(0,1) | Z} = 0.0726
h = (1,0):  f(Z) = 0.0004255    p{(1,0) | Z} = 0.0072
h = (1,1):  f(Z) = 0.0518999    p{(1,1) | Z} = 0.8817

Thus P(d1 = 0 | Z) = .0385 + .0726 = .1111
and P(d2 = 0 | Z) = .0385 + .0072 = .0457

Software: Westfall et al., 1999, Multiple Comparisons and Multiple
Tests using the SAS® System
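The numbers above can be double-checked with a few lines of scipy; this is an independent illustration, not the SAS macros cited:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

z = np.array([2.4, 3.0])
R = np.array([[1.0, 0.7],
              [0.7, 1.0]])
states = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Marginal pdf of Z under each state (matches the "Marginal pdfs of Z" slide).
f = {h: mvn.pdf(z, 2.5 * np.array(h), R + np.diag(2.0 * np.array(h))) for h in states}
post = {h: 0.25 * f[h] / sum(0.25 * f[g] for g in states) for h in states}

print(post)                           # approx .0385, .0726, .0072, .8817
print(post[(0, 0)] + post[(0, 1)])    # P(d1 = 0 | Z), approx .1111
print(post[(0, 0)] + post[(1, 0)])    # P(d2 = 0 | Z), approx .0457
```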
Q: Is the Approximate Method a Limiting Case of a Fully Bayesian Approach?

A: Yes, if you use the right prior distribution!
1) It can’t be “too vague”.
2) It must consider contiguity.
3) It should give something reasonable in the univariate case.
Fully Bayesian Approach
Trt:  X11, …, X1n1 iid N(m1, S)
Ctrl: X21, …, X2n2 iid N(m2, S)

Complete minimal sufficient statistics are (D, M, C), where

D = Xbar1 − Xbar2
M = (n1 Xbar1 + n2 Xbar2) / (n1 + n2)
C = Σ_i Σ_r (Xir − Xbar_i)(Xir − Xbar_i)'

and Xbar_i is the i-th sample mean vector.
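A minimal numpy sketch of these statistics (the function name and array shapes are assumptions of the sketch):

```python
import numpy as np

def sufficient_stats(X1, X2):
    """X1: (n1, k) treatment responses, X2: (n2, k) control responses.
    Returns D, M and the pooled cross-product matrix C defined above."""
    n1, n2 = len(X1), len(X2)
    xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
    D = xbar1 - xbar2
    M = (n1 * xbar1 + n2 * xbar2) / (n1 + n2)
    C = (X1 - xbar1).T @ (X1 - xbar1) + (X2 - xbar2).T @ (X2 - xbar2)
    return D, M, C
```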
Fully Bayesian Approach, II
Natural parameter space: (d, m, S), where

d = m1 − m2,
m = (n1 m1 + n2 m2) / (n1 + n2),
S = Cov(Xir).

Auxiliary vector: H = {H_j};
H_j = 0 if d_j = 0;  H_j = 1 if d_j ≠ 0.
Fully Bayesian Approach, III
Hierarchical prior:

p(d | m, S, H) = N(d | H s λ, s H S_d H s), where s = diag(S)^(1/2),
p(m, S | H) ∝ |S|^(−(k+1)/2),
P(H = h) = p_h,

with
λ_j = a priori mean of d_j / s_j when d_j ≠ 0,
S_d = a priori covariance matrix of s^(−1) d,
p_h = a priori probability of state h.
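Reading H as the diagonal matrix diag(H_1, …, H_k) and s as the diagonal matrix of endpoint standard deviations, the conditional prior of d given (m, S, H) can be written out directly. A small illustrative sketch (the names are mine, not from the paper):

```python
import numpy as np

def prior_moments_of_d(S, h, lam, S_d):
    """Mean H s lam and covariance s H S_d H s of p(d | m, S, H = h),
    with s = diag(S)^(1/2); the covariance is degenerate (zero) in any
    coordinate with h_j = 0, reflecting d_j = 0 under the null."""
    s = np.diag(np.sqrt(np.diag(S)))
    H = np.diag(np.asarray(h, dtype=float))
    mean = H @ s @ np.asarray(lam, dtype=float)
    cov = s @ H @ np.asarray(S_d, dtype=float) @ H @ s
    return mean, cov
```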
Fully Bayesian Approach, IV
Posterior probabilities:

P(H = h | D, M, C) = p_h p(D, M, C | H = h) / Σ_h' p_h' p(D, M, C | H = h'),

where p(D, M, C | H = h) is easily evaluated via Monte Carlo.

Thus P(H_j = 0 | D, M, C) = Σ_{h: h_j = 0} P(H = h | D, M, C).
Monte Carlo Evaluation
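As a purely illustrative sketch of the Monte Carlo idea (not the authors' implementation), the snippet below estimates a marginal density by averaging a likelihood over prior draws: it treats S as fixed at a plug-in value and integrates only over d, so it is a simplification of the full scheme. The function name and the plug-in simplification are assumptions; the sampling model D | d, S ~ N(d, (1/n1 + 1/n2) S) follows from the two-sample setup.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def mc_marginal_density_of_D(D, S, n1, n2, h, lam, S_d, n_draws=200_000, seed=0):
    """Simple Monte Carlo estimate of p(D | S, H = h): average the
    N(D | d, (1/n1 + 1/n2) S) likelihood over draws of d from its
    conditional prior N(H s lam, s H S_d H s)."""
    rng = np.random.default_rng(seed)
    s = np.diag(np.sqrt(np.diag(S)))
    H = np.diag(np.asarray(h, dtype=float))
    prior_mean = H @ s @ np.asarray(lam, dtype=float)
    prior_cov = s @ H @ np.asarray(S_d, dtype=float) @ H @ s
    d_draws = rng.multivariate_normal(prior_mean, prior_cov, size=n_draws)
    c = 1.0 / n1 + 1.0 / n2
    # N(D | d, cS) = N(d | D, cS), so evaluate one fixed-mean density at all draws.
    lik = mvn.pdf(d_draws, mean=D, cov=c * S)
    return lik.mean()
```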
Exact Results, Univariate Case

P(H = 0 | D, M, C) =
    p T(t | 0, 1) / [ p T(t | 0, 1) + (1 − p) T(t | n_d^(1/2) λ, 1 + n_d s_d^2) ],

where t is the usual (pooled-variance) two-sample t-statistic, p is the prior
probability that H = 0, and T(· | m, s^2) is the density of N(m, s^2) / (χ²_ν / ν)^(1/2).
Asymptotic Result
Theorem: Let ν → ∞ (degrees of freedom) so that (1) C/ν → S_0 pointwise, (2) Z is constant, (3) n_d^(1/2) λ is constant, and (4) n_d S_d is constant.

(Note: Conditions (2)-(4) reflect contiguity.)

Then the fully Bayesian method converges to the “approximate” method.

Proof: Gönen, Westfall and Johnson (2002).
Liver Resection Study
Group 1: <50% resection;  Group 2: >50% resection
Endpoints: length of stay; operative time; peak prothrombin; bilirubin

Implied t-values: 2.32, 2.21, 6.39, 3.95.
Priors
Prior probability on each null was .25.
Prior probability on the joint null was .10.
These imply a tetrachoric correlation of .75 among the {H_i}.

Prior mean and variance of λ are .56 and .075, suggested by a power analysis and a low probability of effects in the wrong direction.

Prior correlation between effect sizes was also .75.
Posterior Null Probabilities
Length of stay:     .0315   (asymptotic:  .0300)
Operation time:     .0354   (asymptotic:  .0265)
Peak prothrombin:  <.0001   (asymptotic: <.0001)
Bilirubin:          .0004   (asymptotic:  .0004)
Connection with Closed Testing
Selected Bibliography
Related research specific to point nulls:
Berger and Sellke (1987), JASA.
Westfall, Johnson and Utts (1997), Biometrika.
Gönen and Westfall (1998), Proceedings of the ASA, Biopharmaceutical Section.
Gopalan and Berry (1998), JASA.
Westfall et al. (1999), Multiple Comparisons and Multiple Tests Using the SAS System, SAS Books by Users.
Gönen, Westfall and Johnson (2002), Biometrics, in press.

Research relating to model selection in general:
Mitchell and Beauchamp (1988), JASA.
George and McCulloch (1993), JASA.
Kass and Raftery (1995), JASA.
Geweke (1996), Bayesian Statistics 5.