                                    Australian National University
                                  Faculty of Economics and Commerce
                                          School of Economics

                                    Advanced Econometric Methods
                                       EMET3011/EMET8014

Final Exam. Semester 1, 2002.
Attempt all questions.
Questions 1), 3), and 4) are worth 20 points. Question 2) is worth 10 points.
Hypothesis tests should have size (or asymptotic size) equal to 5%. The 5% critical value from a chi-
squared distribution with one degree of freedom is 3.84.
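
A quick numerical check of this critical value (illustration only; assumes Python with SciPy):

    from scipy.stats import chi2
    # 95th percentile of the chi-squared distribution with 1 degree of freedom
    print(round(chi2.ppf(0.95, df=1), 2))   # prints 3.84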

1) Consider the following three-equation model:

           (1.1)                y1 = γ11 y2 + β11 x1 + β12 x2 + ε1
           (1.2)                y2 = γ23 y3 + β21 x1 + ε2
           (1.3)                y3 = β32 x2 + ε3

where the x’s are exogenous. The corresponding reduced form equations can be written as:

           (y1 y2 y3) = (x1 x2) Π + v

where Π = [π11 π21 π31; π12 π22 π32], a 2×3 matrix whose first row holds the coefficients on x1 and
whose second row those on x2. The data runs from i=1 to i=n, but the i indices have been dropped from
the variables.

   a) Write the structural equations in the form yΓ + xB = ε, where y = (y1 y2 y3) and x = (x1 x2).
      That is, give the forms of Γ and B.
   b) Check the order conditions for identification.
   c) In the reduced form, we know that Π satisfies ΠΓ + B = 0. Use the equation ΠΓ + B = 0 to
      find γ23 in terms of the π’s. Do the same for β21.
   d) Substitute the equation for y3 into that for y2 to obtain the reduced form equation for y2.
      What are the properties of the OLS estimator of this reduced form equation? (Assume the
      errors are homoskedastic and not serially correlated.) What are the properties of the OLS
      estimator of equation (1.3)?
   e) How do the estimators in part d) relate to the terms in part c)?
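
As an illustration of parts c)-e) (not part of the original exam; Python with NumPy assumed, all
parameter values hypothetical), the sketch below simulates the system and recovers γ23 and β21 from
the reduced-form OLS estimates (indirect least squares):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    gamma23, beta21, beta32 = 0.5, 1.0, 2.0        # hypothetical true values

    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    e2, e3 = rng.normal(size=n), rng.normal(size=n)
    y3 = beta32 * x2 + e3                          # equation (1.3)
    y2 = gamma23 * y3 + beta21 * x1 + e2           # equation (1.2)

    X = np.column_stack([x1, x2])
    # Reduced-form OLS: y2 on (x1, x2) gives (pi21, pi22); y3 on (x1, x2) gives (pi31, pi32)
    pi21, pi22 = np.linalg.lstsq(X, y2, rcond=None)[0]
    pi31, pi32 = np.linalg.lstsq(X, y3, rcond=None)[0]

    print(pi21)          # ~1.0: beta21 equals pi21
    print(pi22 / pi32)   # ~0.5: gamma23 equals pi22/pi32
    print(pi31)          # ~0.0: x1 is excluded from (1.3)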

2) In the standard linear regression model, y = Xβ + ε, define the residual vector eIV = y - XbIV, where
   bIV is the IV estimator of β, bIV = (Z’X)⁻¹Z’y.
   a) Show that Z’eIV = 0.
   b) Suppose that the first element of β is the intercept and the first column of Z is a column of
   ones. What does the first element of Z’eIV imply about the residuals?
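
A numerical sanity check of part a) (illustration only; Python with NumPy assumed, data simulated
with hypothetical values; the just-identified case, so Z’X is square):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    z = rng.normal(size=n)                       # instrument
    x = z + rng.normal(size=n)                   # regressor correlated with the instrument
    y = 1.0 + 2.0 * x + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x])
    Z = np.column_stack([np.ones(n), z])         # first column of Z is a column of ones

    b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)     # b_IV = (Z'X)^(-1) Z'y
    e_iv = y - X @ b_iv

    print(Z.T @ e_iv)    # both entries ~0; the first is the sum of the residuals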

3) Consider the heteroskedastic regression model, yi = xiβ + εi, i = 1,…,n, where the εi have mean
   zero, var(εi) = σi², and are not serially correlated. For simplicity, k = 1.
      a) Define the vector of errors ε, where ε’ = (ε1 … εn), and the covariance matrix Ω = E[εε’].
         Give the form of Ω.
      b) Show that ε’Ω⁻¹ε = ∑(i=1 to n) εi²/σi². [Hint: the inverse of a diagonal matrix, i.e., a matrix
         with all numbers off the main diagonal equal to zero, is a diagonal matrix in which the
         elements on the diagonal are the inverses of the elements of the original matrix.]
     c) Consider the transformed model, yi* = xi*β + εi*, where εi* = εi/σi , yi* = yi/σi, and xi* = xi/σi.
        Show that the εi* have mean zero and are homoskedastic.
     d) Give the form of the OLS estimator of β in the transformed model in c). Then give the form
        of this estimator in terms of yi, xi and σi.
      e) Consider the weighted sum of squares ∑(i=1 to n) ei²/σi², where ei = yi - xiα. (Thus, α is any
         value for the parameter β.) Find the estimator of β obtained by minimizing the weighted sum
         of squares with respect to α. Show that the estimator is the same as that obtained in d).
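
An illustration of parts c)-e) (not part of the original exam; Python with NumPy assumed, values
hypothetical): OLS on the transformed model and the minimizer of the weighted sum of squares give
the same estimate, since both reduce to ∑(xiyi/σi²)/∑(xi²/σi²) when k = 1.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5_000
    beta = 1.5                                   # hypothetical true value
    x = rng.normal(size=n)
    sigma = 0.5 + np.abs(x)                      # heteroskedastic standard deviations
    y = beta * x + sigma * rng.normal(size=n)

    # d) OLS on the transformed variables y* = y/sigma, x* = x/sigma
    x_star, y_star = x / sigma, y / sigma
    b_transformed = (x_star @ y_star) / (x_star @ x_star)

    # e) Minimizer of sum(e_i^2 / sigma_i^2) over alpha: the same expression
    b_weighted = np.sum(x * y / sigma**2) / np.sum(x**2 / sigma**2)

    print(b_transformed, b_weighted)             # equal (up to rounding), both near 1.5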

4)      Consider the linear regression model with serially correlated errors,

     (4.1)      zt = β1 + β2xt + εt
     (4.2)      εt = ρεt-1 + ut,

where t = 1,…,T and ut is homoskedastic white noise; and the alternative model,

     (4.3)      zt = γ0 + γ1zt-1 + γ2xt + γ3xt-1+ ut .

     a) Derive the restriction that must be imposed on the parameters in (4.3) to obtain (4.1)-(4.2).
     b) Suppose that you estimate the model (4.1) by OLS on a set of 150 observations and obtain the
        results in table 1 below. What does the value of the Durbin-Watson statistic say about the
        value of ρ?
     c) Regressing the residuals from OLS on (4.1) on an intercept, the lagged residuals, and xt gives
        the results in table 2. Using these results, or otherwise, test the null hypothesis that ρ = 0
        against the alternative hypothesis that ρ ≠ 0.
     d) Suppose that you decide to estimate the model (4.1) – (4.2) using MLE. The results are
        shown in table 3. Use the results to test the null hypothesis that ρ = 0 against the alternative
        hypothesis that ρ ≠ 0. What is the difference from the test in c)?
     e) Suppose that you estimate the model (4.3) by OLS. The results are shown in table 4. Is (4.3)
         to be preferred to (4.1)-(4.2)? Why or why not? What does your answer imply about the cause
        of the serial correlation in the residuals from estimating (4.1) by OLS?
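
A simulation sketch tying (4.1)-(4.2) to (4.3) (illustration only, not part of the original exam;
Python with NumPy assumed, parameter values hypothetical). Data generated with AR(1) errors satisfy
the dynamic form (4.3), and the fitted coefficients obey a restriction linking γ1, γ2, and γ3:

    import numpy as np

    rng = np.random.default_rng(3)
    T = 50_000
    b1, b2, rho = 2.0, 1.0, 0.5                  # hypothetical true values

    x = rng.normal(size=T)
    u = rng.normal(size=T)
    eps = np.zeros(T)
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + u[t]         # equation (4.2)
    z = b1 + b2 * x + eps                        # equation (4.1)

    # OLS on the unrestricted form (4.3): z_t on (1, z_{t-1}, x_t, x_{t-1})
    W = np.column_stack([np.ones(T - 1), z[:-1], x[1:], x[:-1]])
    g0, g1, g2, g3 = np.linalg.lstsq(W, z[1:], rcond=None)[0]

    print(g0, g1, g2, g3)     # ~1.0 (= b1*(1-rho)), ~0.5 (= rho), ~1.0 (= b2), ~-0.5 (= -rho*b2)
    print(g3, -g1 * g2)       # agree: the common-factor restriction g3 = -g1*g2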

                                         Table 1. Estimation of (4.1)
Dependent Variable: Z
Method: Least Squares
Sample: 1 150
Included observations: 150
Variable               Coefficient Std. Error      t-Statistic   Prob.
C                      2.932922       0.530331     5.530356      0.0000
X                      0.861947       0.312221     2.760691      0.0065
R-squared              0.048661       Mean dependent var         3.479133
Adjusted R-squared     0.042277       S.D. dependent var         6.178326
S.E. of regression     6.046317       Akaike info criterion      6.449932
Sum squared resid      5447.134       Schwarz criterion          6.489896
Log likelihood         -484.9699      F-statistic                7.621414
Durbin-Watson stat     0.912215       Prob(F-statistic)          0.006493
                     Table 2. Residuals from (4.1) on an Intercept, Lagged Residuals, and X
Dependent Variable: RES_4_1
Method: Least Squares
Sample (adjusted): 2 150
Included observations: 149 after adjusting endpoints
      Variable         Coefficient    Std. Error       t-Statistic       Prob.
        C                0.006936     0.448019      0.015481           0.9877
   RES_4_1_LAG           0.541791     0.069307      7.817290           0.0000
        X               -0.107554     0.263625     -0.407981           0.6839
R-squared                0.293648    Mean dependent var              -0.042263
Adjusted R-squared       0.284038    S.D. dependent var               6.023822
S.E. of regression       5.097030    Akaike info criterion            6.114990
Sum squared resid        3819.017    Schwarz criterion                6.175203
Log likelihood          -455.6243    F-statistic                      30.55578
Durbin-Watson stat       1.992793    Prob(F-statistic)                0.000000



                                  Table 3. Estimation of (4.1) – (4.2)
Dependent Variable: Z
Method: MLE
Sample (adjusted): 2 150
Included observations: 149
Convergence achieved after 8 iterations
      Variable         Coefficient    Std. Error       t-Statistic       Prob.
         C               2.694732     0.944007     2.854567            0.0049
         X               1.240580     0.396397     3.129638            0.0021
        RHO              0.542569     0.069070     7.855318            0.0000
R-squared                0.322214    Mean dependent var              3.479133
Adjusted R-squared       0.313055    S.D. dependent var              6.178326
S.E. of regression       5.120731    Akaike info criterion           6.124139
Sum squared resid        3880.839    Schwarz criterion               6.184085
Log likelihood          -459.3725    F-statistic                     35.17899
Durbin-Watson stat       1.912475    Prob(F-statistic)               0.000000



                                      Table 4. Estimation of (4.3)
Dependent Variable: Z
Method: Least Squares
Sample (adjusted): 2 150
Included observations: 149
      Variable         Coefficient    Std. Error       t-Statistic       Prob.
         C               1.610133     0.488199      3.298107           0.0012
       Z_LAG             0.522461     0.068548      7.621831           0.0000
         X               1.600170     0.418872      3.820193           0.0002
       X_LAG            -1.507101     0.418091     -3.604724           0.0004
R-squared                0.347113    Mean dependent var              3.479133
Adjusted R-squared       0.333789    S.D. dependent var              6.178326
S.E. of regression       5.042860    Akaike info criterion           6.099957
Sum squared resid        3738.274    Schwarz criterion               6.179885
Log likelihood          -456.5467    F-statistic                     26.05125
Durbin-Watson stat       1.876878    Prob(F-statistic)               0.000000

				