Chapter 7
Estimation




Criteria for Estimators
• Main problem in statistics: estimation of population parameters, say θ.
• Recall that an estimator is a statistic. That is, any function of the
observations {xi} with values in the parameter space.


• There are many estimators of θ. Question: Which is better?


• Criteria for Estimators
    (1) Unbiasedness
    (2) Efficiency
    (3) Sufficiency
    (4) Consistency
Unbiasedness
Definition: Unbiasedness
An unbiased estimator, say θ̂, has an expected value that is equal to the
value of the population parameter being estimated, say θ. That is,

    E[θ̂] = θ


Example:        E[x̄]  = μ
                E[s²] = σ²




Efficiency
Definition: Efficiency / Mean squared error
An estimator is efficient if it estimates the parameter of interest in
some best way. The notion of “best way” relies upon the choice of a
loss function. Usual choice of a loss function is the quadratic: ℓ(e) = e²,
resulting in the mean squared error criterion (MSE) of optimality:
            2   E    E ( )  E ( )   2   Var ( )  [b( )]2
 MSE  E ˆ
        
        
                 
                  
                       
                        
                           ˆ     ˆ        ˆ       
                                                    
                                                              ˆ
          ^              ^
b(θ): E[(  - θ) bias in  .

The MSE is the sum of the variance and the square of the bias.
=> trade-off: a biased estimator can have a lower MSE than an
unbiased estimator.
Note: The most efficient estimator among a group of unbiased
estimators is the one with the smallest variance => BUE (best unbiased
estimator).
Efficiency
Now we can compare estimators and select the “best” one.

Example: Three different estimators’ distributions

[Figure: sampling distributions of estimators 1, 2, and 3, based on samples
of the same size, plotted against the value of the estimator; the true value
θ is marked on the horizontal axis.]

 –   1 and 2: expected value = population parameter (unbiased)
 –   3: positively biased
 –   Variance decreases from 1, to 2, to 3 (3 is the smallest)
 –   3 can have the smallest MSE. 2 is more efficient than 1.
Relative Efficiency
It is difficult to prove that an estimator is the best among all estimators,
so a relative concept is usually used.

Definition: Relative efficiency
    Relative efficiency = Variance of first estimator / Variance of second estimator

 Example: Sample mean vs. sample median
     Variance of sample mean   = σ²/n
     Variance of sample median = πσ²/2n
     Var[median]/Var[mean] = (πσ²/2n) / (σ²/n) = π/2 ≈ 1.57

 The sample median is 1.57 times less efficient than the sample mean.
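A minimal simulation sketch in Python (assuming numpy is available; the sample size, seed, and replication count are arbitrary illustrative choices) that checks the π/2 ratio empirically:

    import numpy as np

    # Compare the variance of the sample mean and the sample median for
    # i.i.d. normal data; the ratio should be close to pi/2 ~ 1.57.
    rng = np.random.default_rng(42)
    n, reps = 200, 20_000
    samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
    means = samples.mean(axis=1)
    medians = np.median(samples, axis=1)
    print("Var[mean]   ~", means.var())
    print("Var[median] ~", medians.var())
    print("ratio       ~", medians.var() / means.var())   # close to 1.57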
Asymptotic Efficiency
• We compare two sample statistics in terms of their variances. The
statistic with the smallest variance is called efficient.

• When we look at asymptotic efficiency, we look at the asymptotic
variance of two statistics as n grows. Note that if we compare two
consistent estimators, both variances eventually go to zero.

Example: Random sampling from the normal distribution
    • Sample mean is asymptotically normal [μ, σ²/n]
    • Median is asymptotically normal [μ, (π/2)σ²/n]
    • Mean is asymptotically more efficient
Sufficiency
• Definition: Sufficiency
 A statistic is sufficient when no other statistic, which can be calculated
 from the same sample, provides any additional information as to the
 value of the parameter of interest.

 Equivalently, we say that conditional on the value of a sufficient
 statistic for a parameter, the joint probability distribution of the data
 does not depend on that parameter. That is, if
         P(X=x|T(X)=t, θ) = P(X=x|T(X)=t)
 we say that T is a sufficient statistic.

 • The sufficient statistic contains all the information needed to
 estimate the population parameter. It is OK to ‘get rid’ of the
 original data, while keeping only the value of the sufficient statistic.
Sufficiency
• Visualize sufficiency: Consider a Markov chain θ → T(X1, . . . ,Xn) →
{X1, . . . ,Xn} (although in classical statistics θ is not a RV). Conditioned
on the middle part of the chain, the front and back are independent.

Theorem
Let p(x|θ) be the pdf of X and q(t|θ) be the pdf of T(X). Then, T(X) is
a sufficient statistic for θ if, for every x in the sample space, the ratio

    p(x|θ) / q(T(x)|θ)

is constant as a function of θ.

Example: Normal sufficient statistic:
 Let X1, X2, … Xn be iid N(μ,σ2) where the variance is known. The
sample mean, x̄, is a sufficient statistic for μ.
Proof: Let's start with the joint density function:

    f(x|μ) = ∏_{i=1}^n (2πσ²)^(-1/2) exp(−(xi − μ)²/(2σ²))
           = (2πσ²)^(-n/2) exp(−Σ_{i=1}^n (xi − μ)²/(2σ²))
• Next, add and subtract the sample mean:

    f(x|μ) = (2πσ²)^(-n/2) exp(−Σ_{i=1}^n (xi − x̄ + x̄ − μ)²/(2σ²))
           = (2πσ²)^(-n/2) exp(−[Σ_{i=1}^n (xi − x̄)² + n(x̄ − μ)²]/(2σ²))
• Recall that the distribution of the sample mean is

    q(T(x)|μ) = (2πσ²/n)^(-1/2) exp(−n(x̄ − μ)²/(2σ²))
• The ratio of the information in the sample to the information in the
  statistic becomes independent of μ:

    f(x|μ)/q(T(x)|μ)
      = (2πσ²)^(-n/2) exp(−[Σ_{i=1}^n (xi − x̄)² + n(x̄ − μ)²]/(2σ²))
        / [(2πσ²/n)^(-1/2) exp(−n(x̄ − μ)²/(2σ²))]
      = n^(-1/2) (2πσ²)^(-(n-1)/2) exp(−Σ_{i=1}^n (xi − x̄)²/(2σ²))
Sufficiency
Theorem: Factorization Theorem
Let f(x|θ) denote the joint pdf or pmf of a sample X. A statistic T(X)
is a sufficient statistic for θ if and only if there exist functions g(t|θ)
and h(x) such that, for all sample points x and all parameter points θ,

    f(x|θ) = g(T(x)|θ) h(x)

• Sufficient statistics are not unique. From the factorization theorem
it is easy to see that (i) the identity function T(X) = X is a sufficient
statistic vector and (ii) if T is a sufficient statistic for θ then so is any
1-1 function of T. Then, we have minimal sufficient statistics.

Definition: Minimal sufficiency
A sufficient statistic T(X) is called a minimal sufficient statistic if, for any
other sufficient statistic T′(X), T(X) is a function of T′(X).
Consistency
Definition: Consistency
The estimator converges in probability to the population parameter
being estimated as n (the sample size) becomes larger. That is,

    θ̂_n →p θ.

We say that θ̂_n is a consistent estimator of θ.

Example:    x̄ is a consistent estimator of μ (the population mean).

• Q: Does unbiasedness imply consistency?
No. The first observation of {xn}, x1, is an unbiased estimator of μ. That
is, E[x1] = μ. But letting n grow is not going to cause x1 to converge in
probability to μ.
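A small illustrative sketch in Python (numpy assumed; μ, σ, and the sample sizes are arbitrary): the sample mean converges to μ as n grows, while the unbiased "first observation" estimator x1 does not.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 3.0
    for n in (10, 100, 10_000, 1_000_000):
        x = rng.normal(loc=mu, scale=2.0, size=n)
        # the sample mean approaches mu; x[0] stays a single noisy draw
        print(f"n={n:>9}: sample mean = {x.mean():.4f}, first obs = {x[0]:.4f}")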
Squared-Error Consistency
Definition: Squared Error Consistency
                ^
The sequence { n} is a squared-error consistent estimator of θ, if
               
             ^
  limn→∞ E[( n - θ)2] = 0

           ^
That is,  n   m.s.
                      θ.

• Squared-error consistency implies that both the bias and the variance
of an estimator approach zero. Thus, squared-error consistency implies
consistency.




 Order of a Sequence: Big O and Little o
• “Little o” o(.).
A sequence {x_n} is o(n^δ) (order less than n^δ) if n^(-δ) x_n → 0, as n → ∞.
Example: x_n = n³ is o(n⁴) since n^(-4) x_n = 1/n → 0, as n → ∞.

• “Big O” O(.).
A sequence {x_n} is O(n^δ) (at most of order n^δ) if n^(-δ) x_n → ψ, as n → ∞
(ψ ≠ 0, constant).
Example: f(z) = 6z⁴ − 2z³ + 5 is O(z⁴) and o(z^(4+δ)) for every δ > 0.
Special case: O(1): bounded by a constant.

• Order of a sequence of RV
The order of the variance gives the order of the sequence.
Example: What is the order of the sequence {x̄}?
     Var[x̄] = σ²/n, which is O(1/n), or O(n⁻¹).
Root n-Consistency
• Q: Let xn be a consistent estimator of θ. But how fast does xn
converge to θ?
The sample mean, x̄, has variance σ²/n, which is O(1/n). That is, the
convergence is at the rate of n^(-1/2). This is called “root n-consistency.”
 Note: n^(1/2) x̄ has a variance of O(1).

• Definition: n^δ convergence
 If an estimator has an O(1/n^(2δ)) variance, then we say the estimator is
 n^δ-convergent.
 Example: Suppose var(xn) is O(1/n²). Then, xn is n-convergent.

 The usual convergence is root n. If an estimator has a faster (higher
 degree of) convergence, it’s called super-consistent.
Estimation
• Two philosophies regarding models (assumptions) in statistics:
(1) Parametric statistics.
It assumes data come from a type of probability distribution and makes
inferences about the parameters of the distribution. Models are
parameterized before collecting the data.
Example: Maximum likelihood estimation.

(2) Non-parametric statistics.
It assumes no specific probability distribution –i.e., methods are “distribution free.”
Models are not imposed a priori, but determined by the data.
Examples: histograms, kernel density estimation.

• In general, parametric statistics makes more assumptions.
Least Squares Estimation
• Long history: Gauss (1795, 1801) used it in astronomy.
• Idea: There is a functional form relating Y and k variables X. This
function depends on unknown parameters, θ. The relation between Y
and X is not exact. There is an error, ε. We will estimate the
parameters θ by minimizing the sum of squared errors.

(1) Functional form known
        yi = f(xi, θ) + εi
(2) Typical Assumptions
- f(x, θ) is correctly specified. For example, f(x, θ) = Xβ
- X are numbers with full rank, or E(ε|X) = 0. That is, ε ⊥ X
- ε ~ iid D(0, σ²I)
Least Squares Estimation
• Objective function: S(xi, θ) = Σi εi²

• We want to minimize it with respect to θ. That is,
      minθ {S(xi, θ) = Σi εi² = Σi [yi − f(xi, θ)]²}
      => dS(xi, θ)/dθ = −2 Σi [yi − f(xi, θ)] f′(xi, θ)
       f.o.c. => −2 Σi [yi − f(xi, θLS)] f′(xi, θLS) = 0

Note: The f.o.c. deliver the normal equations.

The solution to the normal equations, θLS, is the LS estimator. The
estimator θLS is a function of the data (yi ,xi).
Least Squares Estimation
Suppose we assume a linear functional form. That is, f(x, θ) = Xβ.
Using linear algebra, the objective function becomes
        S(xi, θ) = Σi εi² = ε′ε = (y − Xβ)′(y − Xβ)
The f.o.c.:
        −2 Σi [yi − f(xi, θLS)] f′(xi, θLS) = −2 (y − Xb)′X = 0
where b = βOLS.             (Ordinary LS. Ordinary = linear)

Solving for b            => b = (X′X)⁻¹X′y

Note: b is a (linear) function of the data (yi ,xi).
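A minimal sketch in Python (numpy assumed; simulated data with arbitrary dimensions and coefficients) solving the normal equations for b:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant + 2 regressors
    beta = np.array([1.0, 2.0, -0.5])
    y = X @ beta + rng.normal(scale=0.7, size=n)

    b = np.linalg.solve(X.T @ X, X.T @ y)   # b = (X'X)^(-1) X'y
    print(b)                                 # close to beta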
Least Squares Estimation
The LS estimator of β when f(x, θ) = Xβ is linear is
        b = (X′X)⁻¹X′y
Note: b is a (linear) function of the data (yi, xi). Moreover,
        b = (X′X)⁻¹X′y = (X′X)⁻¹X′(Xβ + ε) = β + (X′X)⁻¹X′ε

Under the typical assumptions, we can establish properties for b.
1) E[b|X] = β
2) Var[b|X] = E[(b − β)(b − β)′|X] = (X′X)⁻¹X′E[εε′|X]X(X′X)⁻¹
            = σ²(X′X)⁻¹
   Under the typical assumptions, Gauss established that b is BLUE.
3) If ε|X ~ iid N(0, σ²In)     => b|X ~ N(β, σ²(X′X)⁻¹)
4) With some additional assumptions, we can use the CLT to get
        b|X →a N(β, (σ²/n)(X′X/n)⁻¹)
Maximum Likelihood Estimation
• Idea: Assume a particular distribution with unknown parameters.
Maximum likelihood (ML) estimation chooses the set of parameters
that maximize the likelihood of drawing a particular sample.

• Consider a sample (X1, ... , Xn) which is drawn from a pdf f(X|θ)
where θ are parameters. If the Xi’s are independent with pdf f(Xi|θ)
the joint probability of the whole sample is:
              L(X|θ) = f(X1, ..., Xn|θ) = ∏_{i=1}^n f(Xi|θ)


The function L(X|θ) --also written as L(X; θ)-- is called the likelihood
function. This function can be maximized with respect to θ to produce
the maximum likelihood estimate, θ̂_MLE.
Maximum Likelihood Estimation
• It is often convenient to work with the Log of the likelihood
function. That is,
         ln L(X|θ) = Σi ln f(Xi| θ).

• The ML estimation approach is very general. However, if the model is
not correctly specified, the estimates are sensitive to the
misspecification.




                    Ronald Fisher (1890 – 1962)
Maximum Likelihood: Example I
Let the sample be X = {5, 6, 7, 8, 9, 10}, drawn from a Normal(μ, 1).
The probability of each of these points based on the unknown
mean, μ, can be written as:

    f(5|μ)  = (1/√(2π)) exp(−(5 − μ)²/2)
    f(6|μ)  = (1/√(2π)) exp(−(6 − μ)²/2)
    ...
    f(10|μ) = (1/√(2π)) exp(−(10 − μ)²/2)

Assume that the sample is independent.
Maximum Likelihood: Example I
Then, the joint pdf can be written as:

    L(X|μ) = (2π)^(-6/2) exp(−[(5 − μ)² + (6 − μ)² + ... + (10 − μ)²]/2)

The value of μ that maximizes the likelihood function of the sample is
defined by max_μ L(X|μ).

It is easier, however, to maximize ln L(X|μ). That is,

    max_μ ln L(X|μ) = K − [(5 − μ)² + (6 − μ)² + ... + (10 − μ)²]/2

    f.o.c.: (5 − μ) + (6 − μ) + ... + (10 − μ) = 0

    => μ̂_MLE = (5 + 6 + 7 + 8 + 9 + 10)/6 = x̄
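A quick numerical check in Python (numpy and scipy assumed): maximizing the log likelihood of X = {5, ..., 10} under N(μ, 1) returns the sample mean, 7.5.

    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0])

    def neg_loglik(mu):
        # minus the N(mu, 1) log likelihood, up to an additive constant
        return 0.5 * np.sum((x - mu) ** 2)

    res = minimize_scalar(neg_loglik)
    print(res.x, x.mean())   # both 7.5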
Maximum Likelihood: Example I
• Let's generalize this example to an i.i.d. sample X = {X1, X2, ..., XT}
drawn from a Normal(μ, σ²). Then, the joint pdf is:

    L = ∏_{i=1}^T (2πσ²)^(-1/2) exp(−(Xi − μ)²/(2σ²))
      = (2πσ²)^(-T/2) exp(−Σ_{i=1}^T (Xi − μ)²/(2σ²))

Then, taking logs, we have:

    ln L = −(T/2) ln(2πσ²) − (1/(2σ²)) Σ_{i=1}^T (Xi − μ)²
         = −(T/2) ln 2π − (T/2) ln σ² − (1/(2σ²)) (X − μ)′(X − μ)

We take first derivatives:

    ∂ln L/∂μ  = (1/(2σ²)) Σ_{i=1}^T 2(Xi − μ) = (1/σ²) Σ_{i=1}^T (Xi − μ)

    ∂ln L/∂σ² = −T/(2σ²) + (1/(2σ⁴)) Σ_{i=1}^T (Xi − μ)²
Maximum Likelihood: Example I
• Then, we have the f.o.c. and jointly solve for the ML estimators:

(1)  ∂ln L/∂μ = (1/σ̂²_MLE) Σ_{i=1}^T (Xi − μ̂_MLE) = 0
     => μ̂_MLE = (1/T) Σ_{i=1}^T Xi = X̄

Note: The MLE of μ is the sample mean. Therefore, it is unbiased.

(2)  ∂ln L/∂σ² = −T/(2σ̂²_MLE) + (1/(2σ̂⁴_MLE)) Σ_{i=1}^T (Xi − μ̂_MLE)² = 0
     => σ̂²_MLE = (1/T) Σ_{i=1}^T (Xi − X̄)²

Note: The MLE of σ² is not s². Therefore, it is biased!
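A simulation sketch in Python (numpy assumed; T, σ², and the number of replications are arbitrary choices) showing the downward bias of σ̂²_MLE relative to s²:

    import numpy as np

    rng = np.random.default_rng(7)
    T, reps, sigma2 = 20, 50_000, 4.0
    X = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, T))

    sig2_mle = X.var(axis=1, ddof=0)   # divides by T   -> biased
    s2 = X.var(axis=1, ddof=1)         # divides by T-1 -> unbiased
    print(sig2_mle.mean())             # below 4.0
    print(s2.mean())                   # close to 4.0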
Maximum Likelihood: Example II
• Suppose we assume:
        yi = Xi β + εi,       εi ~ N(0, σ²)
   or   y = Xβ + ε,           ε ~ N(0, σ² I_T)

where Xi is a 1xk vector of exogenous numbers and β is a kx1 vector
of unknown parameters. Then, the joint likelihood function becomes:

    L = ∏_{i=1}^T (2πσ²)^(-1/2) exp(−εi²/(2σ²)) = (2πσ²)^(-T/2) exp(−Σ_{i=1}^T εi²/(2σ²))

• Then, taking logs, we have the log likelihood function:

    ln L = −(T/2) ln(2πσ²) − (1/(2σ²)) Σ_{i=1}^T εi²
         = −(T/2) ln 2π − (T/2) ln σ² − (1/(2σ²)) (y − Xβ)′(y − Xβ)
Maximum Likelihood: Example II
• The log likelihood function is:

    ln L = −(T/2) ln 2π − (T/2) ln σ² − (1/(2σ²)) (y − Xβ)′(y − Xβ)

• We take first derivatives of the log likelihood w.r.t. β and σ²:

    ∂ln L/∂β  = (1/(2σ²)) Σ_{i=1}^T 2 εi xi′ = (1/σ²) X′ε

    ∂ln L/∂σ² = −T/(2σ²) + (1/(2σ⁴)) Σ_{i=1}^T εi² = (1/(2σ²)) [ε′ε/σ² − T]

• Using the f.o.c., we jointly estimate β and σ²:

    ∂ln L/∂β = (1/σ²) X′ε̂ = (1/σ²) X′(y − X β̂_MLE) = 0
        => β̂_MLE = (X′X)⁻¹X′y

    ∂ln L/∂σ² = (1/(2σ̂²_MLE)) [e′e/σ̂²_MLE − T] = 0
        => σ̂²_MLE = e′e/T = Σ_{i=1}^T (yi − Xi β̂_MLE)²/T
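A sketch in Python (numpy and scipy assumed; simulated data with arbitrary dimensions): maximizing the normal log likelihood numerically over (β, σ²) reproduces the closed-form MLE β̂ = (X′X)⁻¹X′y and σ̂² = e′e/T.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    T, k = 300, 2
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    y = X @ np.array([0.5, 1.5]) + rng.normal(scale=2.0, size=T)

    def neg_loglik(params):
        beta, log_s2 = params[:k], params[k]
        s2 = np.exp(log_s2)                      # keep sigma^2 positive
        e = y - X @ beta
        return 0.5 * T * np.log(2 * np.pi * s2) + 0.5 * (e @ e) / s2

    res = minimize(neg_loglik, x0=np.zeros(k + 1), method="BFGS")
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    print(res.x[:k], np.exp(res.x[k]))   # numerical MLE
    print(b, (e @ e) / T)                # closed form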
ML: Score and Information Matrix
Definition: Score (or efficient score)

    S(X; θ) = ∂ log L(X|θ)/∂θ = Σ_{i=1}^n ∂ log f(xi|θ)/∂θ
S(X; θ) is called the score of the sample. It is the vector of partial
derivatives (the gradient), with respect to the parameter θ. If we have
k parameters, the score will have a kx1 dimension.


Definition: Fisher information for a single observation:

    E[(∂ log f(X|θ)/∂θ)²] = I(θ)

I(θ) is sometimes just called information. It measures the shape of the
log f(X|θ).
ML: Score and Information Matrix
• The concept of information can be generalized for the k-parameter
case. In this case:
    E[(∂ log L/∂θ)(∂ log L/∂θ)′] = I(θ)

This is a kxk matrix.
If L is twice differentiable with respect to θ, and under certain
regularity conditions, the information may also be written as

    E[(∂ log L/∂θ)(∂ log L/∂θ)′] = E[−∂² log L(X|θ)/∂θ∂θ′] = I(θ)

I(θ) is called the information matrix (the expected negative Hessian). It
measures the shape of the log likelihood function.
ML: Score and Information Matrix
• Properties of S(X; θ):
    S(X; θ) = ∂ log L(X|θ)/∂θ = Σ_{i=1}^n ∂ log f(xi|θ)/∂θ

(1) E[S(X; θ)] = 0.

    ∫ f(x; θ) dx = 1   =>   ∫ ∂f(x; θ)/∂θ dx = 0

    ∫ [1/f(x; θ)] [∂f(x; θ)/∂θ] f(x; θ) dx = 0

    ∫ [∂ log f(x; θ)/∂θ] f(x; θ) dx = 0   =>   E[S(x; θ)] = 0
ML: Score and Information Matrix
(2) Var[S(X; θ)] = n I(θ)

    ∫ [∂ log f(x; θ)/∂θ] f(x; θ) dx = 0

Let's differentiate the above integral once more:

    ∫ [∂ log f(x; θ)/∂θ] [∂f(x; θ)/∂θ] dx + ∫ [∂² log f(x; θ)/∂θ∂θ′] f(x; θ) dx = 0

    ∫ [∂ log f(x; θ)/∂θ] [1/f(x; θ)] [∂f(x; θ)/∂θ] f(x; θ) dx
        + ∫ [∂² log f(x; θ)/∂θ∂θ′] f(x; θ) dx = 0

    ∫ [∂ log f(x; θ)/∂θ]² f(x; θ) dx + ∫ [∂² log f(x; θ)/∂θ∂θ′] f(x; θ) dx = 0

    E[(∂ log f(x; θ)/∂θ)²] = E[−∂² log f(x; θ)/∂θ∂θ′] = I(θ)

    Var[S(X; θ)] = n Var[∂ log f(x; θ)/∂θ] = n I(θ)
ML: Score and Information Matrix
(3) If S(xi; θ) are i.i.d. (with finite first and second moments), then we
can apply the CLT to get:
        Sn(X; θ) = Σi S(xi; θ) →a N(0, n I(θ)).


Note: This is an important result. It will drive the distribution of ML
estimators.
ML: Score and Information Matrix – Example
• Again, we assume:
        yi = Xi β + εi,       εi ~ N(0, σ²)
   or   y = Xβ + ε,           ε ~ N(0, σ² I_T)

• Taking logs, we have the log likelihood function:

    ln L = −(T/2) ln 2π − (T/2) ln σ² − (1/(2σ²)) (y − Xβ)′(y − Xβ)

• The score function collects the first derivatives of ln L w.r.t. θ = (β, σ²):

    ∂ln L/∂β  = (1/(2σ²)) Σ_{i=1}^T 2 εi xi′ = (1/σ²) X′ε

    ∂ln L/∂σ² = −T/(2σ²) + (1/(2σ⁴)) Σ_{i=1}^T εi² = (1/(2σ²)) [ε′ε/σ² − T]
ML: Score and Information Matrix – Example
• Then, we take second derivatives to calculate I(θ):

    ∂²ln L/∂β∂β′  = −(1/σ²) Σ_{i=1}^T xi xi′ = −(1/σ²) X′X

    ∂²ln L/∂β∂σ²  = −(1/σ⁴) Σ_{i=1}^T εi xi′

    ∂²ln L/∂(σ²)² = −(1/(2σ⁴)) [ε′ε/σ² − T] − (1/(2σ²)) (ε′ε/σ⁴)
                  = −(1/(2σ⁴)) [2 ε′ε/σ² − T]

• Then,

    I(θ) = E[−∂²ln L/∂θ∂θ′] = [ (1/σ²) X′X        0
                                     0         T/(2σ⁴) ]
                                                                   
ML: Score and Information Matrix
In deriving properties (1) and (2), we have made some implicit
assumptions, which are called regularity conditions:
(i) θ lies in an open interval of the parameter space, Ω.
(ii) The 1st derivative and 2nd derivatives of f(X; θ) w.r.t. θ exist.
(iii) L(X; θ) can be differentiated w.r.t. θ under the integral sign.
(iv) E[S(X; θ) 2]>0, for all θ in Ω.
(v) T(X) L(X; θ) can be differentiated w.r.t. θ under the integral sign.


Recall: If S(X; θ) are i.i.d. and regularity conditions apply, then we can
apply the CLT to get:
         Sn(X; θ) →a N(0, n I(θ))
ML: Cramer-Rao inequality
Theorem: Cramer-Rao inequality
Let the random sample (X1, ... , Xn) be drawn from a pdf f(X|θ) and let
T=T(X1, ... , Xn) be a statistic such that E[T]=u(θ), differentiable in θ.
Let b(θ)= u(θ) - θ, the bias in T. Assume regularity conditions. Then,
                Var(T) ≥ [u′(θ)]²/(n I(θ)) = [1 + b′(θ)]²/(n I(θ))
Regularity conditions:
(1) θ lies in an open interval Ω of the real line.
(2) For all θ in Ω, δf(X|θ)/δθ is well defined.
(3) ∫L(X|θ)dx can be differentiated wrt. θ under the integral sign
(4) E[S(X;θ)2]>0, for all θ in Ω
(5) ∫T(X) L(X|θ)dx can be differentiated wrt. θ under the integral sign
ML: Cramer-Rao inequality

                Var(T) ≥ [u′(θ)]²/(n I(θ)) = [1 + b′(θ)]²/(n I(θ))
The lower bound for Var(T) is called the Cramer-Rao (CR) lower bound.
Corollary: If T(X) is an unbiased estimator of θ, then

                       Var(T) ≥ (n I(θ))⁻¹

Note: This theorem establishes the superiority of the ML estimate over
all others. The CR lower bound is the smallest theoretical variance. It
can be shown that ML estimates achieve this bound asymptotically; therefore,
any other estimation technique can at best only equal it.
ML: Cramer-Rao inequality
Proof: For any T(X) and S(X;θ) we have
       [Cov(T,S)]2 ≤ Var(T) Var(S)     (Cauchy-Schwarz inequality)
Since E[S]=0, Cov(T,S)=E[TS].
Also, u(θ) = E[T] = ∫ T L(X;θ) dx. Differentiating both sides:
           u′(θ) = ∫ T ∂L(X;θ)/∂θ dx = ∫ T [1/L ∂L(X;θ)/∂θ] L dx
                 = ∫ T S L dx = E[TS] = Cov(T,S)

Substituting in the Cauchy-Schwarz inequality:
        [u′(θ)]² ≤ Var(T) n I(θ)   =>   Var(T) ≥ [u′(θ)]²/[n I(θ)] ■
ML: Cramer-Rao inequality
Note: For an estimator to achieve the CR lower bound, we need
           [Cov(T,S)]2 = Var(T) Var(S).
This is possible if T is a linear function of S. That is,
           T(X) = α(θ) S(X;θ) + β(θ)
Since E[T] = α(θ) E[S(X;θ)] + β(θ) = β(θ), we have
           S(X;θ) = ∂ log L(X;θ)/∂θ = [T(X) − β(θ)]/α(θ).
Integrating both sides w.r.t. θ:
                  log L(X;θ) = U(X) – T(X) A(θ)+ B(θ)
That is,          L(X;θ) = exp{ΣiU(Xi) – A(θ) ΣiT(Xi) + n B(θ)}
Or,               f(X;θ) = exp{U(X) – T(X) A(θ)+ B(θ)}
ML: Cramer-Rao inequality
       f(X;θ) = exp{U(X) – T(X) A(θ)+ B(θ)}
That is, the exponential (Pitman-Koopman-Darmois) family of
distributions attain the CR lower bound.
• Most of the distributions we have seen belong to this family: normal,
exponential, gamma, chi-square, beta, Weibull (if the shape parameter is
known), Dirichlet, Bernoulli, binomial, multinomial, Poisson, negative
binomial (with known parameter r), and geometric.


• Note: The Chapman–Robbins bound is a lower bound on the variance
of estimators of θ. It generalizes the Cramér–Rao bound. It is tighter
and can be applied to more situations –for example, when I(θ) does
not exist. However, it is usually more difficult to compute.
Cramer-Rao inequality: Multivariate Case
• When we have k parameters, the covariance matrix of the estimator
T(X) has a CR lower bound given by:

    Covar(T(X)) ≥ [∂u(θ)/∂θ] I(θ)⁻¹ [∂u(θ)/∂θ]′
    Note: In matrix notation, the inequality A ≥ B means the matrix A-B is
    positive semidefinite.

If T(X) is unbiased, then

    Covar(T(X)) ≥ I(θ)⁻¹



C. R. Rao (b. 1920, India) & Harald Cramér (1893–1985, Sweden)
Cramer-Rao inequality: Example
We want to check if the sample mean and s² for an i.i.d. sample X = {X1,
X2, ..., Xn} drawn from N(μ, σ²) achieve the CR lower bound. Recall:

    I(θ) = E[−∂²ln L/∂θ∂θ′] = [ n/σ²        0
                                  0      n/(2σ⁴) ]

Since the sample mean and s² are unbiased, the CR lower bound is given by:

    Covar(T) ≥ I(θ)⁻¹

Then, Var(X̄) ≥ σ²/n   and   Var(s²) ≥ 2σ⁴/n.

We have already derived that Var(X̄) = σ²/n and Var(s²) = 2σ⁴/(n − 1).
Then, the sample mean achieves its CR bound, but s² does not.
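A simulation check in Python (numpy assumed; n, μ, σ², and the number of replications are arbitrary): Var(x̄) sits at its CR bound while Var(s²) sits above it.

    import numpy as np

    rng = np.random.default_rng(11)
    n, reps, mu, sigma2 = 30, 100_000, 5.0, 2.0
    X = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

    xbar = X.mean(axis=1)
    s2 = X.var(axis=1, ddof=1)
    print(xbar.var(), sigma2 / n)              # equal to the bound
    print(s2.var(), 2 * sigma2 ** 2 / n)       # above the bound (2*sigma^4/(n-1) vs 2*sigma^4/n)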
Concentrated ML
• We split the parameter vector θ into two vectors:

                  L(θ) = L(θ1, θ2)

Sometimes, we can derive a formula for the ML estimate of θ2, say:

                  θ̂2 = g(θ1)

If this is possible, we can write the likelihood function as

                  L(θ1, θ2) = L(θ1, g(θ1)) = L*(θ1)

This is the concentrated likelihood function.

• This process is often useful as it reduces the number of parameters
needed to be estimated.
Concentrated ML: Example
• The normal log likelihood function can be written as:

    ln L(μ, σ²) = −(n/2) ln σ² − (1/(2σ²)) Σ_{i=1}^n (Xi − μ)²

• This expression can be solved for the optimal choice of σ² by
  differentiating with respect to σ²:

    ∂ln L(μ, σ²)/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σ_{i=1}^n (Xi − μ)² = 0

    −n σ² + Σ_{i=1}^n (Xi − μ)² = 0

    σ̂²_MLE = (1/n) Σ_{i=1}^n (Xi − μ)²
Concentrated ML: Example
• Substituting this result into the original log likelihood produces:

    ln L(μ) = −(n/2) ln[(1/n) Σ_{i=1}^n (Xi − μ)²]
              − (1/2) Σ_{i=1}^n (Xi − μ)² / [(1/n) Σ_{j=1}^n (Xj − μ)²]

            = −(n/2) ln[(1/n) Σ_{i=1}^n (Xi − μ)²] − n/2

• Intuitively, the ML estimator of μ is the value that minimizes the sum
  of squared deviations Σ_i (Xi − μ)². Thus, the least squares estimate of
  the mean of a normal distribution is the same as the ML estimator under
  the assumption that the sample is i.i.d.
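A short sketch in Python (numpy and scipy assumed; simulated data with arbitrary values): maximize the concentrated log likelihood over μ alone, with σ² concentrated out as the mean squared deviation around μ.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    x = rng.normal(loc=4.0, scale=1.5, size=200)
    n = x.size

    def neg_conc_loglik(mu):
        s2_hat = np.mean((x - mu) ** 2)   # sigma^2 concentrated out
        return 0.5 * n * np.log(s2_hat) + 0.5 * n

    res = minimize_scalar(neg_conc_loglik)
    print(res.x, x.mean())                # the maximizer is the sample mean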
Properties of ML Estimators
(1) Efficiency. Under general conditions, the asymptotic variance of θ̂_MLE
  attains the CR lower bound:

      Var(θ̂_MLE) → (n I(θ))⁻¹

  The right-hand side is the Cramer-Rao lower bound (CR-LB). If an
  estimator can achieve this bound, ML will produce it.

(2) Consistency. We know that E[S(Xi; θ)] = 0 and Var[S(Xi; θ)] = I(θ).
  The consistency of ML can be shown by applying Khinchine's LLN to
  S(Xi; θ) and then to Sn(X; θ) = Σi S(Xi; θ).
  Then, do a 1st-order Taylor expansion of Sn(X; θ) around θ̂_MLE:

      Sn(X; θ) = Sn(X; θ̂_MLE) + Sn′(X; θn*)(θ − θ̂_MLE),
      where θn* lies between θ and θ̂_MLE.

  Since Sn(X; θ̂_MLE) = 0 by the f.o.c.,

      Sn(X; θ) = Sn′(X; θn*)(θ − θ̂_MLE)

  Sn(X; θ) and (θ̂_MLE − θ) converge together to zero (i.e., in expectation).
Properties of ML Estimators
(3) Theorem: Asymptotic Normality
  Let the likelihood function be L(X1, X2, ..., Xn|θ). Under general
  conditions, the MLE of θ is asymptotically distributed as

      θ̂_MLE →a N(θ, (n I(θ))⁻¹)

  Sketch of a proof. Using the CLT, we have already established that
      Sn(X; θ) →a N(0, n I(θ)).
  Then, using a first-order Taylor expansion as before, we get

      (1/n^(1/2)) Sn(X; θ) = (1/n^(1/2)) Sn′(X; θn*)(θ − θ̂_MLE)

  Notice that E[Sn′(xi; θ)] = −I(θ). Then, apply the LLN to get
      Sn′(X; θn*)/n →p −I(θ)          (using θn* →p θ).

  Now, algebra and Slutsky's theorem for RVs give the final result.
Properties of ML Estimators
(4) Sufficiency. If a single sufficient statistic exists for θ, the MLE of θ
  must be a function of it. That is, θ̂_MLE depends on the sample
  observations only through the value of a sufficient statistic.

(5) Invariance. The ML estimate is invariant under functional
  transformations. That is, if θ̂_MLE is the MLE of θ and if g(θ) is a
  function of θ, then g(θ̂_MLE) is the MLE of g(θ).
 Method of Moments Estimation
• Simple idea:
Suppose the first moment (the mean) is generated by the distribution
f(X, θ). The observed first moment from a sample of n observations is

    m1 = (1/n) Σ_{i=1}^n xi

Hence, we can retrieve the parameter θ by inverting the first population
moment implied by f(X, θ), say μ1(θ) = E[X]:

    m1 = μ1(θ)   =>   θ̂ = μ1⁻¹(m1)
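An illustration in Python (numpy assumed; the exponential example is not from the slides, just a simple case where the first moment pins down the parameter): for an exponential(λ) sample, E[X] = 1/λ, so inverting the first moment gives λ_MM = 1/m1.

    import numpy as np

    rng = np.random.default_rng(8)
    lam_true = 2.5
    x = rng.exponential(scale=1.0 / lam_true, size=5_000)

    m1 = x.mean()            # sample first moment
    lam_mm = 1.0 / m1        # invert E[X] = 1/lambda at m1
    print(lam_mm, lam_true)  # close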
• Let’s complicate the simple idea:
Now, suppose we have a model. This model implies certain knowledge
about the moments of the distribution. Then, we invert the model to
give us estimates of the unknown parameters of the model, which
match the theoretical moments for a given sample.
Method of Moments Estimation
• We have a model Y = h(X, θ), where θ are k parameters. Under this
model, we know what some moments of the distribution should be.
That is, the model provides us with k conditions (or moments), which
should be met:

    E[g(Y, X, θ)] = 0

• In this case, the (population) first moment of g(Y, X, θ) equals 0.
Then, we approximate the k moments –i.e., E[g]– with their sample
counterparts and invert g to get an estimate of θ:

    θ̂_MM = g⁻¹(Y, X, 0)

θ̂_MM is the Method of Moments estimator of θ.

Note: In this example we have as many moments (k) as unknown
parameters (k). Thus, θ is uniquely and exactly determined.
  Method of Moments Estimation: Example
We start with a model Y = X β + ε. In OLS estimation, we make the
assumption that the X’s are orthogonal to the errors. Thus,

    E[X′ε] = 0

The sample moment analogue for each xi is

    (1/n) Σ_{t=1}^n xit et = 0     or     (1/n) X′e = 0.

And, thus,

    (1/n) X′e = 0  =>  (1/n) X′(Y − X β_MM) = 0  =>  X′Y = X′X β_MM

Therefore, the method of moments estimator, βMM, solves the normal
equations. That is, βMM will be identical to the OLS estimator, b.
Generalized Method of Moments (GMM)
• So far, we have assumed that there are as many moments (l ) as
unknown parameters (k). The parameters are uniquely and exactly
determined.

• If l < k –i.e., fewer moment conditions than parameters– we would
not be able to solve them for a unique set of parameters (the model
would be under-identified).

• If l > k –i.e., more moment conditions than parameters– then all the
conditions cannot be met at the same time; the model is over-identified
and we have GMM estimation.

If we cannot satisfy all the conditions at the same time, we want to
make them all as close to zero as possible at the same time. We have
to figure out a way to weight them.
 Generalized Method of Moments (GMM)
• Now, we have k parameters but l moment conditions, l > k. Thus,

    E[mj(θ)] = 0,                          j = 1, ..., l      (l population moments)

    m̄j(θ) = (1/n) Σ_{t=1}^n mj(xt, θ) = 0,  j = 1, ..., l      (l sample moments)
• Then, we need to make all l moments as small as possible,
simultaneously. Let’s use a weighted least squares criterion:
    min_θ  q = m̄(θ)′ W m̄(θ)

That is, the weighted squared sum of the moments. The weighting
matrix is the lxl matrix W. (Note that we have a quadratic form.)

• First order condition:  2 [∂m̄(θ_GMM)/∂θ′]′ W m̄(θ_GMM) = 0
 Generalized Method of Moments (GMM)
• The GMM estimator, θGMM, solves the kx1 system of equations.
There is typically no closed form solution for θGMM. It must be
obtained through numerical optimization methods.
• If plim m̄(θ) = 0, and W (not a function of θ) is a positive definite
matrix, then θGMM is a consistent estimator of θ.

• The optimal W
Any such weighting matrix produces a consistent estimator of θ. We can
select the most efficient one –i.e., the optimal W.

The optimal W is the inverse of the covariance matrix of the moment
conditions. Thus,

    Optimal W = W* = [Asy Var(m̄)]⁻¹
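An illustrative sketch in Python (numpy and scipy assumed; this over-identified example is not from the slides): l = 3 moment conditions for the k = 2 parameters (μ, σ²) of an i.i.d. normal sample, minimized with W = I.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)
    x = rng.normal(loc=2.0, scale=1.5, size=2_000)

    def mbar(theta):
        mu, s2 = theta
        # sample averages of three moment functions (all zero at the truth)
        return np.array([np.mean(x) - mu,
                         np.mean(x ** 2) - (mu ** 2 + s2),
                         np.mean(x ** 3) - (mu ** 3 + 3 * mu * s2)])

    W = np.eye(3)
    q = lambda theta: mbar(theta) @ W @ mbar(theta)        # GMM criterion
    res = minimize(q, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
    print(res.x)   # close to (2.0, 2.25)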
Properties of the GMM estimator
• Properties of the GMM estimator.
(1) Consistency.
If plim m̄(θ) = 0, and W (not a function of θ) is a pd matrix, then
under some conditions, θGMM →p θ.

(2) Asymptotic Normality
Under some general conditions, θGMM →a N(θ, VGMM), with
    VGMM = (1/n)[G′V⁻¹G]⁻¹,
where G is the matrix of derivatives of the moments with respect to the
parameters and V = Var(n^(1/2) m̄(θ)).

                                      Lars Peter Hansen (b. 1952)
Bayesian Estimation: Bayes’ Theorem
• Recall Bayes’ Theorem:
                          Prob X |   Prob 
             Prob X  
                               Prob X 

- P(): Prior probability about parameter .
- P(X|): Probability of observing the data, X, conditioning on .
This conditional probability is called the likelihood –i.e., probability of
event X will be the outcome of the experiment depends on .
- P( |X): Posterior probability -i.e., probability assigned to , after X is
observed.
- P(X): Marginal probability of X. This the prior probability of
witnessing the data X under all possible scenarios for , and it
depends on the prior probabilities given to each .
Bayesian Estimation: Bayes’ Theorem
• Example: Courtroom – Guilty vs. Non-guilty
  G: Event that the defendant is guilty.
  E: Event that the defendant's DNA matches DNA found at the
  crime scene.
The jurors, after initial questions, form a personal belief about the
  defendant’s guilt. This initial belief is the prior.
The jurors, after seeing the DNA evidence (event E), will update their
  prior beliefs. This update is the posterior.
Bayesian Estimation: Bayes’ Theorem
• Example: Courtroom – Guilty vs. Non-guilty
- P(G): Juror’s personal estimate of the probability that the defendant
is guilty, based on evidence other than the DNA match. (Say, .30).
- P(E|G): Probability of seeing event E if the defendant is actually
guilty. (In our case, it should be near 1.)
- P(E): E can happen in two ways: defendant is guilty and thus DNA
match is correct or defendant is non-guilty with incorrect DNA match
(one in a million chance).
- P(G|E): Probability that defendant is guilty given a DNA match.


    P(G|E) = P(E|G) P(G) / P(E) = 1 × (.3) / [.3 × 1 + .7 × 10⁻⁶] ≈ .999998
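The same update as a short Python calculation (the 10⁻⁶ coincidental-match probability is the figure assumed above):

    p_g, p_e_given_g, p_e_given_not_g = 0.30, 1.0, 1e-6
    p_e = p_e_given_g * p_g + p_e_given_not_g * (1 - p_g)   # P(E)
    print(p_e_given_g * p_g / p_e)                           # P(G|E) ~ 0.999998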
Bayesian Estimation: Viewpoints
• Implicitly, in our previous discussions about estimation (MLE), we
  adopted a classical viewpoint.
   – We had some process generating random observations.
   – This random process was a function of fixed, but unknown
     parameters.
   – Then, we designed procedures to estimate these unknown
     parameters based on observed data.

• For example, we assume a random process such as CEO
  compensation. This CEO compensation process can be
  characterized by a normal distribution.
   – We can estimate the parameters of this distribution using
      maximum likelihood.
Bayesian Estimation: Viewpoints
   – The likelihood of a particular sample can be expressed as

    L(X1, X2, ..., Xn|μ, σ²) = (2πσ²)^(-n/2) exp(−(1/(2σ²)) Σ_{i=1}^n (Xi − μ)²)


   – Our estimates of μ and σ² are then based on the value of
     each parameter that maximizes the likelihood of drawing
     that sample.
                          Thomas Bayes (1702–1761)

 Bayesian Estimation: Viewpoints

• Turning the classical process around slightly, a
  Bayesian viewpoint starts with some kind of
  probability statement about the parameters
  (a prior). Then, the data, X, are used to update our prior beliefs (a
  posterior).
    – First, assume that our prior beliefs about the distribution
      function can be expressed as a probability density function π(θ),
      where θ is the parameter we are interested in estimating.
   – Based on a sample -the likelihood function, L(X,)- we can
      update our knowledge of the distribution using Bayes’ theorem:

                      Prob X |                 L X    
             X                       
                         Prob X 
                                                      L X    d
                                                  
                                              
                                              
Bayesian Estimation: Example
• Assume that the data are Bernoulli draws with success probability P.
  Our prior is that P is distributed Beta(a, β):

    π(P) = f(P; a, β) = (1/B(a, β)) P^(a−1) (1 − P)^(β−1)

    B(a, β) = ∫_0^1 x^(a−1) (1 − x)^(β−1) dx = Γ(a)Γ(β)/Γ(a + β)

    π(P) = [Γ(a + β)/(Γ(a)Γ(β))] P^(a−1) (1 − P)^(β−1)
Bayesian Estimation: Example

• Assume that we are interested in forming the posterior distribution
  after a single draw, X:

    π(P|X) = P^X (1 − P)^(1−X) [Γ(a + β)/(Γ(a)Γ(β))] P^(a−1) (1 − P)^(β−1)
             / ∫_0^1 P^X (1 − P)^(1−X) [Γ(a + β)/(Γ(a)Γ(β))] P^(a−1) (1 − P)^(β−1) dP

           = P^(X+a−1) (1 − P)^(β−X) / ∫_0^1 P^(X+a−1) (1 − P)^(β−X) dP
Bayesian Estimation: Example
• Following the original specification of the beta function,

    ∫_0^1 P^(X+a−1) (1 − P)^(β−X) dP = ∫_0^1 P^(a*−1) (1 − P)^(β*−1) dP

    where a* = X + a and β* = β − X + 1, so the integral equals
    Γ(X + a) Γ(β − X + 1) / Γ(a + β + 1).

• The posterior distribution, the distribution of P after the
  observation, is then

    π(P|X) = [Γ(a + β + 1)/(Γ(X + a) Γ(β − X + 1))] P^(X+a−1) (1 − P)^(β−X)
Bayesian Estimation: Example
• The Bayesian estimate of P is then the value that minimizes a loss
  function. Several loss functions can be used, but we will focus on the
  quadratic loss function, which is consistent with the mean squared error:

    min_{P̂} E[(P̂ − P)²]

    ∂E[(P̂ − P)²]/∂P̂ = 2 E[P̂ − P] = 0   =>   P̂ = E[P]

• Taking the expectation of the posterior distribution yields

    E[P] = ∫_0^1 [Γ(a + β + 1)/(Γ(X + a) Γ(β − X + 1))] P^(X+a) (1 − P)^(β−X) dP

         = [Γ(a + β + 1)/(Γ(X + a) Γ(β − X + 1))] ∫_0^1 P^(X+a) (1 − P)^(β−X) dP
Bayesian Estimation: Example
• As before, we solve the integral by creating a* = a + X + 1 and
  β* = β − X + 1. The integral then becomes

    ∫_0^1 P^(a*−1) (1 − P)^(β*−1) dP = Γ(a*)Γ(β*)/Γ(a* + β*)
                                     = Γ(a + X + 1) Γ(β − X + 1)/Γ(a + β + 2)

    E[P] = [Γ(a + β + 1)/Γ(a + β + 2)] [Γ(a + X + 1)/Γ(a + X)]

• Which can be simplified using the fact Γ(α + 1) = α Γ(α):

    E[P] = [Γ(a + β + 1)/((a + β + 1) Γ(a + β + 1))] [(a + X) Γ(a + X)/Γ(a + X)]
         = (a + X)/(a + β + 1)
Bayesian Estimation: Example
• To make this estimation process operational, assume that we have a
  prior distribution with parameters a = β = 1.4968, which yields a beta
  distribution with a mean of P of 0.5 and a variance of the estimate of
  0.0625.

• Extending the results to n Bernoulli trials yields

    π(P|X) = [Γ(a + β + n)/(Γ(a + Y) Γ(β + n − Y))] P^(Y+a−1) (1 − P)^(β−Y+n−1)

  where Y is the sum of the individual Xs, or the number of heads in
  the sample. The estimated value of P then becomes:

    P̂ = (Y + a)/(a + β + n)
Bayesian Estimation: Example

• Suppose that in the first sample Y = 15 and n = 50. This yields an
  estimated value of P of 0.31129. This value compares with the
  maximum likelihood estimate of 0.3000. Since the maximum
  likelihood estimator in this case is unbiased, the results imply that
  the Bayesian estimator is biased.
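A quick Python check of the posterior mean above (a = β = 1.4968 as assumed in the text):

    a = b = 1.4968        # prior Beta(a, b) parameters
    Y, n = 15, 50         # observed successes and sample size
    print((Y + a) / (a + b + n))   # Bayesian estimate ~ 0.3113
    print(Y / n)                   # MLE = 0.30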

				