Generalized Least Squares (GLS) Theory, Heteroscedasticity & Autocorrelation

                                                 Ba Chu

E-mail: ba_chu@carleton.ca

Web: http://www.carleton.ca/~bchu


    (Note that these are lecture notes; please refer to the textbooks suggested in the course outline for details. Examples will be given and explained in class.)


1     Objectives
Until now we have assumed that $\mathrm{Var}(u) = \sigma^2 I$, but it can happen that the errors have non-constant variance, i.e., $\mathrm{Var}(u_1) \neq \mathrm{Var}(u_2) \neq \cdots \neq \mathrm{Var}(u_T)$, or are correlated, i.e., $E(u_t u_s) \neq 0$ for $t \neq s$. Such behaviour of the errors is common for many economic variables.
    Suppose instead that $\mathrm{Var}(u) = \sigma^2\Omega$, where the matrix $\Omega$ contains terms for heteroscedasticity and autocorrelation. First, we study the GLS estimator and the feasible GLS estimator. Second, we apply these techniques to do statistical inference on the regression model with heteroscedastic errors and to test for heteroscedasticity. Third, we learn to estimate regression models with autocorrelated disturbances and to test for serial correlation. Fourth, we learn to use the ML technique to estimate regression models with autocorrelated disturbances.


2     GLS
Consider the model:
\[
  y = X\beta + u, \tag{2.1}
\]

where $E[u] = 0$ and $E[uu'] = \sigma^2\Omega$, with $\Omega$ a symmetric positive definite matrix, not necessarily diagonal.

2.1    Problem with the OLS

If we estimate (2.1) by OLS, we obtain $\hat{\beta}_{OLS} = (X'X)^{-1}X'y = \beta + (X'X)^{-1}X'u$. It is easy to verify that $\hat{\beta}_{OLS}$ is still unbiased. However, since
\[
  \mathrm{Var}(\hat{\beta}_{OLS}) = \sigma^2(X'X)^{-1}X'\Omega X(X'X)^{-1} \neq \sigma^2(X'X)^{-1},
\]
inferences based on the usual estimator $\hat{\sigma}^2(X'X)^{-1}$ that we have seen are misleading. Thus, the $t$ and $F$ tests are invalid. Moreover, $\hat{\beta}_{OLS}$ is not always consistent. This is the main reason for us to learn GLS.

2.2    Derivation of the GLS

Suppose $\Omega$ has eigenvalues $\lambda_1, \ldots, \lambda_T$. By the spectral (eigenvalue) decomposition, we obtain

\[
  \Omega = S\Lambda S',
\]

where $\Lambda$ is a diagonal matrix with diagonal elements $(\lambda_1, \ldots, \lambda_T)$ and $S$ is an orthogonal matrix. Thus,

\begin{align*}
  \Omega^{-1} &= S\Lambda^{-1}S' \\
              &= S\Lambda^{-1/2}\Lambda^{-1/2}S' \\
              &= PP',
\end{align*}

where $P = S\Lambda^{-1/2}$ and $\Lambda^{-1/2}$ is a diagonal matrix with diagonal elements $(1/\sqrt{\lambda_1}, \ldots, 1/\sqrt{\lambda_T})$. It is straightforward to verify that $P'\Omega P = I_T$. Now, multiplying both sides of (2.1) by $P'$ yields


\[
  P'y = P'X\beta + P'u.
\]

Setting $y^0 = P'y$, $X^0 = P'X$ and $u^0 = P'u$, we end up with the classical linear regression model

\[
  y^0 = X^0\beta + u^0,
\]

where $E[u^0] = 0$ and $E[u^0u^{0\prime}] = E[P'uu'P] = \sigma^2 P'\Omega P = \sigma^2 I_T$. The OLS estimate of $\beta$ in this transformed model is

\begin{align*}
  \hat{\beta}_{GLS} &= (X^{0\prime}X^0)^{-1}X^{0\prime}y^0 \\
                    &= (X'PP'X)^{-1}X'PP'y \\
                    &= (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y.
\end{align*}

The last expression defines the GLS estimator, denoted by $\hat{\beta}_{GLS}$. Note that the subscript GLS is sometimes omitted for brevity.
    The unbiased estimate of $\sigma^2$ is given by

\[
  \hat{\sigma}^2_{GLS} = \frac{1}{T-K}(y^0 - X^0\hat{\beta})'(y^0 - X^0\hat{\beta}) = \frac{1}{T-K}(y - X\hat{\beta})'\Omega^{-1}(y - X\hat{\beta}).
\]
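The computation is easy to check numerically. Below is a minimal NumPy sketch of the derivation above; the simulated design, the diagonal $\Omega$, and all names are illustrative assumptions rather than part of the notes. Any factor $P$ with $PP' = \Omega^{-1}$ yields the same estimator, so the sketch uses a Cholesky factor in place of the eigenvalue route.

```python
# Minimal numerical check of the GLS formulas above (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, K - 1))])
beta = np.array([1.0, 2.0, -0.5])

# An assumed diagonal Omega (pure heteroscedasticity) for the simulation.
omega_diag = np.exp(X[:, 1])
u = rng.normal(size=T) * np.sqrt(omega_diag)
y = X @ beta + u

# One-step GLS: (X' Omega^{-1} X)^{-1} X' Omega^{-1} y.
Omega_inv = np.diag(1.0 / omega_diag)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

# Equivalent route: transform by P' with P P' = Omega^{-1}, then run OLS.
P = np.linalg.cholesky(Omega_inv)   # lower triangular, P P' = Omega^{-1}
y0, X0 = P.T @ y, P.T @ X
beta_transformed = np.linalg.lstsq(X0, y0, rcond=None)[0]

# Unbiased sigma^2 estimate from the GLS residuals.
resid = y - X @ beta_gls
sigma2_gls = resid @ Omega_inv @ resid / (T - K)
print(beta_gls, beta_transformed, sigma2_gls)  # the two betas coincide
```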


2.3      Properties of the GLS estimator

   • The GLS estimator is unbiased.

   • $\mathrm{Var}(\hat{\beta}_{GLS}) = \sigma^2(X'\Omega^{-1}X)^{-1}$.

   • The GLS estimator is BLUE.

   • If $u \sim N(0, \sigma^2\Omega)$ then $\hat{\beta}_{GLS} \sim N(\beta, \sigma^2(X'\Omega^{-1}X)^{-1})$. The F-test statistic is given by
\[
  F = \frac{(R\hat{\beta} - r)'[R(X'\Omega^{-1}X)^{-1}R']^{-1}(R\hat{\beta} - r)}{q\,\hat{\sigma}^2_{GLS}},
\]
     where $R$ is a $q \times K$ restriction matrix.

   • If $\lim_{T\to\infty} X^{0\prime}X^0/T = Q_0 > 0$, then $\hat{\beta}_{GLS} \overset{p}{\longrightarrow} \beta$ as $T \longrightarrow \infty$.

   • $\sqrt{T}(\hat{\beta}_{GLS} - \beta) \overset{d}{\longrightarrow} N(0, \sigma^2 Q_0^{-1})$.




2.4    Feasible GLS

Since the matrix $\Omega$ is unknown, we need to use an estimator, say $\hat{\Omega}$. We have

\[
  \hat{\beta}_{FGLS} = (X'\hat{\Omega}^{-1}X)^{-1}X'\hat{\Omega}^{-1}y.
\]

If the following conditions hold:

\begin{align*}
  \operatorname*{p\,lim}_{T\to\infty}(X'\hat{\Omega}^{-1}X/T) &= \operatorname*{p\,lim}_{T\to\infty}(X'\Omega^{-1}X/T), \\
  \operatorname*{p\,lim}_{T\to\infty}(X'\hat{\Omega}^{-1}u/\sqrt{T}) &= \operatorname*{p\,lim}_{T\to\infty}(X'\Omega^{-1}u/\sqrt{T}),
\end{align*}

then $\hat{\beta}_{GLS}$ and $\hat{\beta}_{FGLS}$ are asymptotically equivalent.


3     Heteroscedasticity
Let's consider the model (2.1) in which the matrix $\Omega$ is a diagonal matrix with diagonal elements $(\omega_1^2, \ldots, \omega_T^2)$. This is a form of heteroscedasticity defined in terms of non-constant variances. We can estimate this model using either FGLS or the modified OLS technique proposed by White (1980). The FGLS procedure is straightforward, so we shall describe White's OLS procedure in detail.
    White (1980) proposes a consistent estimator $\hat{\Omega}$ with diagonal elements $(\hat{\omega}_1^2, \ldots, \hat{\omega}_t^2, \ldots, \hat{\omega}_T^2)$ given by one of the following variants (a code sketch follows the list):

HC-0: $\hat{\omega}_t^2 = \hat{u}_t^2$, where $\hat{u}_t$ is the OLS residual.

HC-1: $\hat{\omega}_t^2 = \frac{T}{T-K}\hat{u}_t^2$.

HC-2: $\hat{\omega}_t^2 = \frac{\hat{u}_t^2}{1-h_t}$, where $h_t$ is the $t$-th diagonal element of the matrix $X(X'X)^{-1}X'$.

HC-3: $\hat{\omega}_t^2 = \frac{\hat{u}_t^2}{(1-h_t)^2}$.
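As a rough illustration, the four variants and the corresponding sandwich covariances can be computed as follows; this is a sketch assuming $X$ is a $T \times K$ NumPy design matrix and $y$ the response, and the function name is ours.

```python
# Sketch of the HC-0 ... HC-3 variance estimates listed above (illustrative).
import numpy as np

def hc_covariances(X, y):
    T, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    u = y - X @ (XtX_inv @ X.T @ y)              # OLS residuals u_t
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)  # leverages h_t

    omega2 = {
        'HC0': u**2,
        'HC1': (T / (T - K)) * u**2,
        'HC2': u**2 / (1 - h),
        'HC3': u**2 / (1 - h)**2,
    }
    # White sandwich: (X'X)^{-1} X' diag(omega_t^2) X (X'X)^{-1}.
    return {name: XtX_inv @ (X * w[:, None]).T @ X @ XtX_inv
            for name, w in omega2.items()}
```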




The asymptotic variance-covariance matrix of $\hat{\beta}_{OLS}$ is given by

\[
  \mathrm{Avar}(\hat{\beta}_{OLS}) = \lim_{T\to\infty}(X'X/T)^{-1}\,\operatorname*{p\,lim}(\hat{\sigma}^2_{OLS}X'\hat{\Omega}X/T)\,\lim_{T\to\infty}(X'X/T)^{-1}.
\]

Based on this heteroscedasticity-consistent estimator, general linear hypotheses may be tested by the Wald statistic:

\[
  W = (R\hat{\beta}_{OLS} - r)'[R\,\widehat{\mathrm{Avar}}(\hat{\beta}_{OLS})\,R']^{-1}(R\hat{\beta}_{OLS} - r) \overset{d}{\longrightarrow} \chi^2(q).
\]


The White procedure has large-sample validity; it may not work well in finite samples.
    We can check whether heteroscedasticity is present in the noise by using White's test. The null hypothesis is homoscedasticity. To perform White's test, we regress the squared OLS residuals on a constant, the original regressors, the squares of the regressors, and the cross products of the regressors, and obtain the $R^2$ value. The test can be constructed as $TR^2 \sim \chi^2(q)$, where $q$ is the number of variables in the auxiliary regression less one. We reject the null if $TR^2$ is sufficiently large.
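A minimal sketch of this auxiliary regression in NumPy; the function name and interface are our assumptions, and the first column of $X$ is taken to be the constant.

```python
# Sketch of White's test as described above (illustrative).
import numpy as np
from itertools import combinations

def white_test(X, y):
    T, K = X.shape
    e2 = (y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2

    cols = [X]                                            # constant + regressors
    cols += [(X[:, j] ** 2)[:, None] for j in range(1, K)]         # squares
    cols += [(X[:, i] * X[:, j])[:, None]
             for i, j in combinations(range(1, K), 2)]             # cross products
    Z = np.hstack(cols)

    fit = Z @ np.linalg.lstsq(Z, e2, rcond=None)[0]
    R2 = 1 - ((e2 - fit) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
    q = Z.shape[1] - 1
    return T * R2, q   # compare T*R^2 with the chi2(q) upper tail
```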


4        Serial Correlation
Note that serial correlation and autocorrelation mean the same thing. Consider the model:

\[
  y_t = X_t\beta + u_t, \quad \forall\, t = 1, \ldots, T, \tag{4.1}
\]

where $E[u_t] = 0$, $E[u_t^2] = \sigma_u^2$ and $E[u_t u_s] \neq 0$ for $t \neq s$.

    For example, we can model serial correlation by an AR(1) model, i.e.,

\[
  u_t = \rho u_{t-1} + \epsilon_t, \tag{4.2}
\]

where $\epsilon_t \sim \mathrm{IID}(0, \sigma^2)$ and $|\rho| < 1$. We can compute

\begin{align*}
  E[u_t^2] &= \frac{\sigma^2}{1-\rho^2}, \\
  E[u_t u_s] &= \frac{\rho^{|t-s|}\sigma^2}{1-\rho^2}.
\end{align*}

More generally, we have

\[
  E[uu'] = \frac{\sigma^2}{1-\rho^2}
  \begin{pmatrix}
    1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\
    \rho & 1 & \rho & \cdots & \rho^{T-2} \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    \rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1
  \end{pmatrix}
  = \sigma_u^2\Omega, \tag{4.3}
\]

where $\sigma_u^2$ and $\Omega$ have the obvious meanings.

4.1     Estimate (4.1)-(4.2) by FGLS

The FGLS can be done in the following steps (a code sketch follows the list):

  1. Regress $y_t$ on $X_t$ and obtain $\hat{u}_t = y_t - X_t\hat{\beta}_{OLS}$.

  2. Regress $\hat{u}_t$ on $\hat{u}_{t-1}$ and obtain $\hat{\rho}_{OLS} = \dfrac{\sum_{t=2}^T \hat{u}_t\hat{u}_{t-1}}{\sum_{t=2}^T \hat{u}_{t-1}^2}$.

  3. Construct the FGLS estimator $\hat{\beta}_{FGLS} = (X'\hat{\Omega}^{-1}X)^{-1}X'\hat{\Omega}^{-1}y$, where $\hat{\Omega}$ is obtained from (4.3) by substituting $\rho$ with $\hat{\rho}_{OLS}$.

If $\operatorname*{p\,lim}_{T\to\infty}\frac{1}{T}\sum_{t=1}^T X_t'u_t = 0$, then $\hat{\beta}_{OLS} \overset{p}{\longrightarrow} \beta$ and $\hat{\rho}_{OLS} \overset{p}{\longrightarrow} \rho$. Therefore, $\hat{\beta}_{FGLS}$ is consistent and asymptotically normal.
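These steps translate directly into code. A minimal NumPy sketch follows; the function name and the brute-force inversion of $\hat{\Omega}$ are our assumptions (in practice one would use the Prais-Winsten transformation rather than invert a $T \times T$ matrix).

```python
# Sketch of the three FGLS steps above for AR(1) errors (illustrative).
import numpy as np

def fgls_ar1(X, y):
    # Step 1: OLS and residuals.
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    # Step 2: regress u_t on u_{t-1} to get rho_hat.
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
    # Step 3: build Omega_hat from (4.3) and apply the GLS formula.
    T = len(y)
    idx = np.arange(T)
    Omega = rho ** np.abs(idx[:, None] - idx[None, :])  # scalar factor cancels
    Omega_inv = np.linalg.inv(Omega)
    beta = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
    return beta, rho
```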




4.2         Estimate (4.1)-(4.2) by the ML

Applying the conditional probability rules, we have

\begin{align*}
  \mathrm{pdf}(y_1, y_2) &= \mathrm{pdf}(y_2|y_1)\,\mathrm{pdf}(y_1), \\
  \mathrm{pdf}(y_1, y_2, y_3) &= \mathrm{pdf}(y_3|y_1, y_2)\,\mathrm{pdf}(y_1, y_2) = \mathrm{pdf}(y_3|y_1, y_2)\,\mathrm{pdf}(y_2|y_1)\,\mathrm{pdf}(y_1), \\
  &\;\,\vdots \\
  \mathrm{pdf}(y_1, \ldots, y_T) &= \mathrm{pdf}(y_1)\prod_{t=2}^T \mathrm{pdf}(y_t|y_{t-1}, \ldots, y_1).
\end{align*}


Since

\[
  y_t = X_t\beta + u_t = X_t\beta + \rho u_{t-1} + \epsilon_t = X_t\beta + \rho(y_{t-1} - X_{t-1}\beta) + \epsilon_t
\]

and $\epsilon_t \sim N(0, \sigma^2)$, we have

\[
  y_t|y_{t-1} \sim N(\rho y_{t-1} + X_t\beta - \rho X_{t-1}\beta,\ \sigma^2).
\]


Moreover, since $y_1 \sim N\!\left(X_1\beta, \frac{\sigma^2}{1-\rho^2}\right)$, we can derive the log-likelihood function:

\begin{align*}
  \log L(\beta, \rho, \sigma^2) &= \log \mathrm{pdf}(y_1, \ldots, y_T) \\
  &= \log\left[\mathrm{pdf}(y_1)\prod_{t=2}^T \mathrm{pdf}(y_t|y_{t-1})\right].
\end{align*}

We then maximize this log-likelihood function to obtain the MLEs. The asymptotic variance of the MLEs can be obtained from the Cramér-Rao lower bound.
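For concreteness, here is a sketch of the (negative) exact log-likelihood in Python; we assume SciPy's optimizer is available, and the names and the $\log\sigma^2$ reparameterization are our choices.

```python
# Sketch of the exact log-likelihood for the AR(1)-error model (illustrative).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, X, y):
    K = X.shape[1]
    beta, rho, log_s2 = params[:K], params[K], params[K + 1]
    s2 = np.exp(log_s2)                       # keeps sigma^2 > 0
    u = y - X @ beta
    # Contribution of y_1 ~ N(X_1 beta, s2 / (1 - rho^2)).
    v1 = s2 / (1 - rho**2)
    ll = -0.5 * (np.log(2 * np.pi * v1) + u[0]**2 / v1)
    # y_t | y_{t-1} ~ N(rho*y_{t-1} + X_t beta - rho*X_{t-1} beta, s2),
    # i.e. epsilon_t = u_t - rho*u_{t-1} ~ N(0, s2) for t = 2, ..., T.
    e = u[1:] - rho * u[:-1]
    ll -= 0.5 * (len(e) * np.log(2 * np.pi * s2) + (e**2).sum() / s2)
    return -ll

# Usage (x0 is an initial guess; |rho| < 1 should be enforced, e.g. via bounds):
# res = minimize(neg_loglik, x0, args=(X, y), method='L-BFGS-B')
```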




4.3     Estimate (4.1) by the modified OLS

In general, the model (4.1) with a general form of heteroscedasticity and autocorrelation can be consistently estimated by OLS, i.e., $\hat{\beta}_{OLS} = (X'X)^{-1}X'y$. $\hat{\beta}_{OLS}$ is asymptotically normal (AN), i.e.,

\[
  \sqrt{T}(\hat{\beta}_{OLS} - \beta) \overset{d}{\longrightarrow} N(0, \sigma^2 Q^{-1}MQ^{-1}),
\]

where $Q = \lim_{T\to\infty} X'X/T$ and $M = \operatorname*{p\,lim}_{T\to\infty} X'\Omega X/T$ under the assumption $E[uu'] = \sigma^2\Omega$.
    $Q$ can be estimated by $\hat{Q} = X'X/T$; $\sigma^2 M$ can be consistently estimated by the Newey-West heteroscedasticity and autocorrelation consistent covariance (HAC) matrix estimator:

\[
  \widehat{\mathrm{HAC}} = \hat{\Gamma}_0 + \sum_{j=1}^m w(j, m)(\hat{\Gamma}_j + \hat{\Gamma}_j'),
\]

where

\[
  \hat{\Gamma}_j = \frac{1}{T}\sum_{t=j+1}^T X_t'\hat{u}_t\hat{u}_{t-j}X_{t-j}, \quad \forall\, j = 0, 1, \ldots, m,
\]

$w(j, m)$ is a lag window, and $m$ is a lag truncation parameter. If $w(j, m) = 1 - j/(m+1)$, then $w(j, m)$ is called the Bartlett window.
    Hence, the asymptotic covariance matrix of $\hat{\beta}_{OLS}$ can be estimated by $\hat{Q}^{-1}\,\widehat{\mathrm{HAC}}\,\hat{Q}^{-1}$.
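A sketch of this estimator with the Bartlett window, assuming $X$ holds the regressors row-wise and $u$ the OLS residuals (the function name is ours):

```python
# Sketch of the Newey-West HAC estimator above (illustrative).
import numpy as np

def newey_west(X, u, m):
    """X: (T, K) regressors; u: (T,) OLS residuals; m: lag truncation."""
    T = X.shape[0]
    Xu = X * u[:, None]            # t-th row is u_t * X_t
    S = Xu.T @ Xu / T              # Gamma_0
    for j in range(1, m + 1):
        w = 1 - j / (m + 1)        # Bartlett weight
        Gamma_j = Xu[j:].T @ Xu[:-j] / T
        S += w * (Gamma_j + Gamma_j.T)
    return S                       # estimates sigma^2 * M
```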

4.4     Test for serial correlation

   • The Durbin-Watson test statistic
\[
  DW = \sum_{t=2}^T (\hat{u}_t - \hat{u}_{t-1})^2 \Big/ \sum_{t=1}^T \hat{u}_t^2,
\]
     where $\hat{u}_t$ is the OLS residual, is used to test for first-order serial correlation (i.e., AR(1)).

       – For $H_0: \rho = 0$ vs. $H_1: \rho > 0$, reject $H_0$ if $DW \leq DW_l$, do not reject if $DW > DW_u$, and the test is inconclusive if $DW_l < DW < DW_u$, where $DW_l$ is the lower critical value and $DW_u$ is the upper critical value.

       – For $H_0: \rho = 0$ vs. $H_1: \rho < 0$, reject $H_0$ if $DW \geq 4 - DW_l$, and do not reject if $DW < 4 - DW_u$.


   • The Breusch-Godfrey LM test is used to test for higher-order serial correlation (i.e., the AR(p) model given by $u_t = \sum_{i=1}^p \phi_i u_{t-i} + \epsilon_t$). The test is carried out by running the OLS regression $\hat{u}_t = X_t\gamma + \sum_{i=1}^p b_i\hat{u}_{t-i} + \text{error}$. Next, we do either the LM test $TR^2 \overset{d}{\longrightarrow} \chi^2(p)$ under the null hypothesis of no serial correlation, or the F-test for $b_1 = b_2 = \cdots = b_p = 0$. A code sketch is given below.
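A minimal sketch of this auxiliary regression (the function name is ours; missing lags at the start of the sample are set to zero, a common convention):

```python
# Sketch of the Breusch-Godfrey LM test above (illustrative).
import numpy as np

def breusch_godfrey(X, y, p):
    T = X.shape[0]
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    # Lagged residuals; zeros where the lag runs off the sample.
    lags = np.column_stack([np.r_[np.zeros(i), u[:-i]]
                            for i in range(1, p + 1)])
    Z = np.hstack([X, lags])
    fit = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    R2 = 1 - ((u - fit)**2).sum() / ((u - u.mean())**2).sum()
    return T * R2   # compare with the chi2(p) upper tail
```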



5      Autoregressive Conditional Heteroscedasticity (ARCH)
Engle (1982) suggests that heteroscedasticity may occur in a time-series context. In speculative markets such as exchange rates and stock market returns, one often observes the phenomenon called volatility clustering: large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes. Observations of this phenomenon in financial time series have led to the use of ARCH models. Engle (1982) formulates the notion that the recent past might give information about the conditional disturbance variance. He postulates the relation:

\begin{align*}
  y_t &= X_t\beta + \sigma_t\epsilon_t, \quad \text{where } \epsilon_t \sim \mathrm{IID}(0, 1), \\
  \sigma_t^2 &= \alpha_0 + \alpha_1 u_{t-1}^2 + \cdots + \alpha_p u_{t-p}^2, \quad \text{where } u_t = y_t - X_t\beta.
\end{align*}




Testing for ARCH can be done in the following steps (a code sketch follows the list):

  1. Fit $y$ to $X$ by OLS and obtain the residuals $\{e_t\}$.

  2. Compute the OLS regression $e_t^2 = \alpha_0 + \alpha_1 e_{t-1}^2 + \cdots + \alpha_p e_{t-p}^2 + \text{error}$.

  3. Test the joint significance of $\alpha_1, \ldots, \alpha_p$.
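A sketch of these three steps via the usual $TR^2$ LM statistic (the function name and interface are our assumptions):

```python
# Sketch of the ARCH LM test above (illustrative).
import numpy as np

def arch_test(X, y, p):
    T = X.shape[0]
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # step 1: OLS residuals
    e2 = e**2
    # Step 2: regress e_t^2 on a constant and e_{t-1}^2, ..., e_{t-p}^2.
    Z = np.column_stack([np.ones(T - p)] +
                        [e2[p - i:T - i] for i in range(1, p + 1)])
    target = e2[p:]
    fit = Z @ np.linalg.lstsq(Z, target, rcond=None)[0]
    R2 = 1 - ((target - fit)**2).sum() / ((target - target.mean())**2).sum()
    # Step 3: TR^2 is asymptotically chi2(p) under the null of no ARCH effects.
    return (T - p) * R2
```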


6      Exercises
    1. Problems 6.1, 6.2, 6.3 and 6.7 (pp. 202-203, JD97).


2. Consider the model

\begin{align*}
  y_t &= \beta y_{t-1} + u_t, \quad |\beta| < 1, \\
  u_t &= \epsilon_t + \theta\epsilon_{t-1}, \quad |\theta| < 1,
\end{align*}

   where $t = 1, \ldots, T$ and $\epsilon_t \sim \mathrm{IID}(0, \sigma^2)$.

    (a) Show that the OLS estimator of $\beta$ is inconsistent when $\theta \neq 0$.

    (b) Explain how you would test $H_0: \theta = 0$ vs. $H_1: \theta \neq 0$ using the Lagrange multiplier test principle. Is such a test likely to be superior to the Durbin-Watson test?

3. Consider the model

\begin{align*}
  y_t &= \beta x_t + u_t, \\
  u_t &= \rho u_{t-1} + \epsilon_t, \quad |\rho| < 1,
\end{align*}

   where $x_t$ is non-stochastic and $\epsilon_t \sim \mathrm{IID}\ N(0, \sigma^2)$.

    (a) Given a sample $\{x_t, y_t\}_{t=1}^T$, construct the log-likelihood function.

    (b) Estimate the parameters $\beta$, $\rho$ and $\sigma^2$ by the ML method.

    (c) Derive the asymptotic variances of these MLEs.

    (d) Given $x_{T+1}$, what do you think is the best predictor of $y_{T+1}$?

4. Consider the linear regression model $y = X\beta + u$, where $y$ is a $T \times 1$ vector of observations on the dependent variable, $X$ is a $T \times K$ matrix of observations on $K$ non-stochastic explanatory variables with $\mathrm{rank}(X) = K < T$, $\beta$ is a $K \times 1$ vector of unknown coefficients, and $u$ is a $T \times 1$ vector of unobserved disturbances such that $E[u] = 0$ and $E[uu'] = \sigma^2\Omega$, with $\Omega$ positive definite and known.

    (a) Derive the GLS estimator $\hat{\beta}_{GLS}$ of $\beta$.

    (b) Assuming that $T^{-1/2}X'\Omega^{-1}u \overset{d}{\longrightarrow} N(0, \sigma^2\lim_{T\to\infty}X'\Omega^{-1}X/T)$, derive the limiting distribution of $\hat{\beta}_{GLS}$ as $T \longrightarrow \infty$. State clearly any further assumptions you make and any statistical results you use.

5. In the generalized regression model, suppose that $\Omega$ is known.

    (a) What are the covariance matrices of the OLS and GLS estimators of $\beta$?

    (b) What is the covariance matrix of the OLS residual vector $\hat{u}_{OLS} = y - X\hat{\beta}_{OLS}$?

    (c) What is the covariance matrix of the GLS residual vector $\hat{u}_{GLS} = y - X\hat{\beta}_{GLS}$?

    (d) What is the covariance matrix of the OLS and GLS residual vectors?

6. Suppose that $y_t$ has the pdf $\mathrm{pdf}(y_t|X_t) = \frac{1}{X_t\beta}\exp\{-y_t/(X_t\beta)\}$, $y_t > 0$, where $X_t$ is $1 \times K$ and $\beta$ is $K \times 1$. Then $E[y_t|X_t] = X_t\beta$ and $\mathrm{Var}[y_t|X_t] = (X_t\beta)^2$. For this model, prove that the GLS estimator and the MLE are the same, even though this distribution involves the same parameters in the conditional mean function and the disturbance variance.

7. Suppose that the regression model is $y_t = \mu + \epsilon_t$, where $\epsilon_t$ has zero mean, constant variance, and equal correlation $\rho$ across observations; then $\mathrm{Cov}(\epsilon_t, \epsilon_s) = \sigma^2\rho$ for $t \neq s$. Prove that the OLS estimator of $\mu$ is inconsistent.

8. Suppose that the regression model is $y_t = \mu + \epsilon_t$, where

\[
  E[\epsilon_t|x_t] = 0, \quad \mathrm{Cov}(\epsilon_t, \epsilon_s|x_t, x_s) = 0 \text{ for } t \neq s, \quad \text{but } \mathrm{Var}[\epsilon_t|x_t] = \sigma^2 x_t^2, \; x_t > 0.
\]

    (a) Given a sample of observations on $y_t$ and $x_t$, what is the most efficient estimator of $\mu$? What is its variance?

    (b) What is the OLS estimator of $\mu$, and what is its variance?

    (c) Prove that the estimator in part (a) is at least as efficient as the estimator in part (b).




References
Engle, R. F. (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50: 987–1008.

White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity, Econometrica 48: 817–838.



