VECTOR AUTOREGRESSIONS, IMPULSE RESPONSES, AND FORECASTING

Consider the following bivariate symmetric system (the negative signs on the
coefficients b_{12} and b_{21} are for arithmetic convenience only). Assume that both y_t and z_t
are stationary and that the disturbances (also called innovations, or shocks) are both
uncorrelated white noise with standard deviations \sigma_y and \sigma_z:

y_t = b_{10} - b_{12} z_t + \gamma_{11} y_{t-1} + \gamma_{12} z_{t-1} + \varepsilon_{yt}
z_t = b_{20} - b_{21} y_t + \gamma_{21} y_{t-1} + \gamma_{22} z_{t-1} + \varepsilon_{zt}

This “primitive system” constitutes a first-order VAR that incorporates feedback:
since both b_{12} and b_{21} are nonzero, the system includes contemporaneous effects –
i.e., innovations to z_t in the form of \varepsilon_{zt} affect y_t contemporaneously, and vice versa. These
are not reduced-form equations (i.e., they are not estimable in their current form because of
the aforementioned simultaneity). The reduced-form expressions follow.

This is referred to as the standard form:

\begin{bmatrix} 1 & b_{12} \\ b_{21} & 1 \end{bmatrix}
\begin{bmatrix} y_t \\ z_t \end{bmatrix}
=
\begin{bmatrix} b_{10} \\ b_{20} \end{bmatrix}
+
\begin{bmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{bmatrix}
\begin{bmatrix} y_{t-1} \\ z_{t-1} \end{bmatrix}
+
\begin{bmatrix} \varepsilon_{yt} \\ \varepsilon_{zt} \end{bmatrix}

Define

B = \begin{bmatrix} 1 & b_{12} \\ b_{21} & 1 \end{bmatrix}; \quad
x_t = \begin{bmatrix} y_t \\ z_t \end{bmatrix}; \quad
\Gamma_0 = \begin{bmatrix} b_{10} \\ b_{20} \end{bmatrix}; \quad
\Gamma_1 = \begin{bmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{bmatrix}; \quad
\varepsilon_t = \begin{bmatrix} \varepsilon_{yt} \\ \varepsilon_{zt} \end{bmatrix}

so that

B x_t = \Gamma_0 + \Gamma_1 x_{t-1} + \varepsilon_t

B^{-1} = \frac{1}{1 - b_{12} b_{21}} \begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

A_0 = B^{-1} \Gamma_0, \quad A_1 = B^{-1} \Gamma_1, \quad e_t = B^{-1} \varepsilon_t


We can write the system as a first-order difference equation –

x_t = A_0 + A_1 x_{t-1} + e_t

which is equivalent, of course, to –

y_t = a_{10} + a_{11} y_{t-1} + a_{12} z_{t-1} + e_{1t}
z_t = a_{20} + a_{21} y_{t-1} + a_{22} z_{t-1} + e_{2t}

and this system can be estimated with OLS equation by equation. Note that SUR will not
yield any efficiency gains over OLS because both equations have identical regressors.
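A minimal simulation sketch of this equation-by-equation OLS (Python/NumPy; the coefficient values here are illustrative assumptions, not from the notes):

```python
import numpy as np

# Illustrative (made-up) reduced-form parameters for the bivariate VAR(1).
A0 = np.array([1.0, 0.5])          # intercepts a10, a20
A1 = np.array([[0.5, 0.2],         # a11, a12
               [0.1, 0.4]])        # a21, a22

rng = np.random.default_rng(0)
T = 50_000
x = np.zeros((T, 2))               # columns: y_t, z_t
e = rng.normal(size=(T, 2))        # composite errors e1t, e2t
for t in range(1, T):
    x[t] = A0 + A1 @ x[t - 1] + e[t]

# OLS equation by equation: regress each variable on a constant and both lags.
X = np.column_stack([np.ones(T - 1), x[:-1]])     # [1, y_{t-1}, z_{t-1}]
coef_y, *_ = np.linalg.lstsq(X, x[1:, 0], rcond=None)
coef_z, *_ = np.linalg.lstsq(X, x[1:, 1], rcond=None)
A1_hat = np.vstack([coef_y[1:], coef_z[1:]])
print(np.round(A1_hat, 2))         # close to A1
```

Because the two equations share the same regressor matrix X, stacking them into a SUR system would reproduce exactly these equation-by-equation estimates.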

STABILITY

Iterating backwards to solve this system yields:

x_t = A_0 + A_1 x_{t-1} + e_t
x_t = A_0 + A_1 (A_0 + A_1 x_{t-2} + e_{t-1}) + e_t
\vdots
x_t = A_0 (I + A_1 + \dots + A_1^n) + \sum_{i=0}^{n} A_1^i e_{t-i} + A_1^{n+1} x_{t-n-1}

In lag-operator form, the system is (I - A_1 L) x_t = A_0 + e_t, with

(I - A_1 L) = \begin{bmatrix} 1 - a_{11} L & -a_{12} L \\ -a_{21} L & 1 - a_{22} L \end{bmatrix}

Stability requires the roots of the determinant of this matrix to lie outside the unit circle
(equivalently, all eigenvalues of A_1 must lie strictly inside it).
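The eigenvalue form of the condition is easy to check numerically. A quick Python sketch (the matrix values are arbitrary examples):

```python
import numpy as np

def is_stable(A1):
    """A first-order VAR x_t = A0 + A1 x_{t-1} + e_t is stable iff every
    eigenvalue of A1 lies strictly inside the unit circle (equivalently,
    the roots of det(I - A1*L) = 0 lie outside it)."""
    return bool(np.max(np.abs(np.linalg.eigvals(A1))) < 1.0)

print(is_stable(np.array([[0.5, 0.2], [0.1, 0.4]])))   # stable: True
print(is_stable(np.array([[1.1, 0.0], [0.0, 0.5]])))   # explosive: False
```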

PROPERTIES OF COMPOSITE DISTURBANCES

Derive E(e_{it}), \operatorname{Var}(e_{it}), E(e_{1t} e_{2t}), and E(e_{1t} e_{1,t-i}). Use the following information:

e_t = B^{-1} \varepsilon_t
= \frac{1}{1 - b_{12} b_{21}}
\begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}
\begin{bmatrix} \varepsilon_{yt} \\ \varepsilon_{zt} \end{bmatrix}
= \begin{bmatrix} (\varepsilon_{yt} - b_{12} \varepsilon_{zt}) / (1 - b_{12} b_{21}) \\ (\varepsilon_{zt} - b_{21} \varepsilon_{yt}) / (1 - b_{12} b_{21}) \end{bmatrix}
= \begin{bmatrix} e_{1t} \\ e_{2t} \end{bmatrix}

For example, E(e_{1t} e_{2t}) = -(b_{12} \sigma_z^2 + b_{21} \sigma_y^2) / (1 - b_{12} b_{21})^2. Collect the moments as follows –

\Sigma = \begin{bmatrix} \operatorname{var}(e_{1t}) & \operatorname{cov}(e_{1t}, e_{2t}) \\ \operatorname{cov}(e_{1t}, e_{2t}) & \operatorname{var}(e_{2t}) \end{bmatrix}
= \begin{bmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{21} & \sigma_2^2 \end{bmatrix}

So the covariance of the composite errors is not necessarily zero. From the standard form we can
estimate nine parameters: a_{10}, a_{11}, a_{12}, a_{20}, a_{21}, a_{22}, \sigma_1^2, \sigma_2^2, \sigma_{12}. But the primitive system has
the following ten parameters:

b_{10}, b_{12}, \gamma_{11}, \gamma_{12}, \sigma_y^2
b_{20}, b_{21}, \gamma_{21}, \gamma_{22}, \sigma_z^2

Therefore, without further restrictions, it is impossible to recover the structural parameters from
the reduced-form coefficient estimates: the system is under-identified by one parameter.
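These moment formulas are easy to verify numerically: the covariance of e_t = B^{-1}ε_t is B^{-1}Σ_ε B^{-1}'. A Python sketch, with made-up parameter values:

```python
import numpy as np

# Hypothetical primitive-form values (illustrative only).
b12, b21 = 0.4, 0.3
s2y, s2z = 1.0, 2.0                      # structural variances sigma_y^2, sigma_z^2

B = np.array([[1.0, b12], [b21, 1.0]])
Binv = np.linalg.inv(B)
Sigma_eps = np.diag([s2y, s2z])

# Covariance matrix of the composite errors e_t = B^{-1} eps_t:
Sigma_e = Binv @ Sigma_eps @ Binv.T

d = 1.0 - b12 * b21
cov_formula = -(b12 * s2z + b21 * s2y) / d**2       # E(e1t e2t) from the text
var1_formula = (s2y + b12**2 * s2z) / d**2          # var(e1t)
print(np.isclose(Sigma_e[0, 1], cov_formula))       # True
print(np.isclose(Sigma_e[0, 0], var1_formula))      # True
```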

RESTRICTED VAR – IDENTIFICATION

Sims, in his 1980 Econometrica article, suggested restrictions on the primitive form that
yield a “recursive” system, which orders the model, so to speak. Consider (arbitrarily)
restricting the contemporaneous influence of y_t innovations on z_t to zero, i.e., setting b_{21}
to zero. We can rewrite the primitive form as:

y_t = b_{10} - b_{12} z_t + \gamma_{11} y_{t-1} + \gamma_{12} z_{t-1} + \varepsilon_{yt}
z_t = b_{20} + \gamma_{21} y_{t-1} + \gamma_{22} z_{t-1} + \varepsilon_{zt}

And solve its reduced form –

\begin{bmatrix} 1 & b_{12} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} y_t \\ z_t \end{bmatrix}
=
\begin{bmatrix} b_{10} \\ b_{20} \end{bmatrix}
+
\begin{bmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{bmatrix}
\begin{bmatrix} y_{t-1} \\ z_{t-1} \end{bmatrix}
+
\begin{bmatrix} \varepsilon_{yt} \\ \varepsilon_{zt} \end{bmatrix}

With B = \begin{bmatrix} 1 & b_{12} \\ 0 & 1 \end{bmatrix} and x_t, \Gamma_0, \Gamma_1, \varepsilon_t defined as before,

B x_t = \Gamma_0 + \Gamma_1 x_{t-1} + \varepsilon_t

B^{-1} = \begin{bmatrix} 1 & -b_{12} \\ 0 & 1 \end{bmatrix}

A_0 = B^{-1} \Gamma_0, \quad A_1 = B^{-1} \Gamma_1, \quad e_t = B^{-1} \varepsilon_t


This yields another first-order difference equation in vector form –

x_t = A_0 + A_1 x_{t-1} + e_t

Equivalently –

y_t = a_{10} + a_{11} y_{t-1} + a_{12} z_{t-1} + e_{1t}
z_t = a_{20} + a_{21} y_{t-1} + a_{22} z_{t-1} + e_{2t}

Now we can obtain OLS estimates of this system's parameters and recover the nine
structural parameters by solving the following nine relations:

a_{10} = b_{10} - b_{12} b_{20}
a_{11} = \gamma_{11} - b_{12} \gamma_{21}
a_{12} = \gamma_{12} - b_{12} \gamma_{22}
a_{20} = b_{20}
a_{21} = \gamma_{21}
a_{22} = \gamma_{22}
e_{1t} = \varepsilon_{yt} - b_{12} \varepsilon_{zt} \;\Rightarrow\; \operatorname{Var}(e_1) = \sigma_y^2 + b_{12}^2 \sigma_z^2
e_{2t} = \varepsilon_{zt} \;\Rightarrow\; \operatorname{Var}(e_2) = \sigma_z^2
\operatorname{Cov}(e_1, e_2) = -b_{12} \sigma_z^2

This is a set of nine equations in nine unknowns, so we have exact identification.
Note that the structural innovation series {\varepsilon_{yt}} and {\varepsilon_{zt}} can also be recovered. (Yippee!) Decomposing
the residuals in this “triangular” fashion constitutes a Choleski decomposition.
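Given estimates of the reduced-form error variances and covariance, the last three relations invert directly. A minimal Python sketch (hypothetical parameter values, b_{21} = 0 imposed):

```python
import numpy as np

# Hypothetical structural values to be recovered (b21 = 0 imposed).
b12_true, s2y_true, s2z_true = 0.4, 1.0, 2.0

# Reduced-form error covariance implied by the triangular system:
#   e1t = eps_yt - b12*eps_zt,   e2t = eps_zt
Sigma_e = np.array([
    [s2y_true + b12_true**2 * s2z_true, -b12_true * s2z_true],
    [-b12_true * s2z_true,              s2z_true],
])

# Recover the three structural parameters from the three second moments:
s2z_hat = Sigma_e[1, 1]                      # Var(e2) = sigma_z^2
b12_hat = -Sigma_e[0, 1] / s2z_hat           # Cov(e1,e2) = -b12*sigma_z^2
s2y_hat = Sigma_e[0, 0] - b12_hat**2 * s2z_hat
print(b12_hat, s2y_hat, s2z_hat)             # recovers 0.4, 1.0, 2.0
```

With this ordering, z_t is "first" in the Choleski sense: e_{2t} loads only on ε_{zt}, while e_{1t} loads on both shocks.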

IMPULSE RESPONSES

x_t = A_0 + A_1 x_{t-1} + e_t
x_t = A_0 + A_1 (A_0 + A_1 x_{t-2} + e_{t-1}) + e_t
\vdots
x_t = A_0 (I + A_1 + \dots + A_1^n) + \sum_{i=0}^{n} A_1^i e_{t-i} + A_1^{n+1} x_{t-n-1}

Letting n \to \infty,

x_t = \mu + \sum_{i=0}^{\infty} A_1^i e_{t-i}

Here we define \mu = (I - A_1)^{-1} A_0 as the long-run mean (evaluate the expected value of x_t
to confirm). Stability requires, of course, that the sum over A_1^i converge. Expand this back
out into matrix form –

\begin{bmatrix} y_t \\ z_t \end{bmatrix}
= \begin{bmatrix} \bar{y} \\ \bar{z} \end{bmatrix}
+ \sum_{i=0}^{\infty} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^i
\begin{bmatrix} e_{1,t-i} \\ e_{2,t-i} \end{bmatrix}

Substituting e_t = B^{-1} \varepsilon_t –

\begin{bmatrix} y_t \\ z_t \end{bmatrix}
= \begin{bmatrix} \bar{y} \\ \bar{z} \end{bmatrix}
+ \frac{1}{1 - b_{12} b_{21}} \sum_{i=0}^{\infty} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^i
\begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}
\begin{bmatrix} \varepsilon_{y,t-i} \\ \varepsilon_{z,t-i} \end{bmatrix}

Define

\phi_i = \frac{1}{1 - b_{12} b_{21}} A_1^i \begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

so that

\begin{bmatrix} y_t \\ z_t \end{bmatrix}
= \begin{bmatrix} \bar{y} \\ \bar{z} \end{bmatrix}
+ \sum_{i=0}^{\infty} \begin{bmatrix} \phi_{11}(i) & \phi_{12}(i) \\ \phi_{21}(i) & \phi_{22}(i) \end{bmatrix}
\begin{bmatrix} \varepsilon_{y,t-i} \\ \varepsilon_{z,t-i} \end{bmatrix}

x_t = \mu + \sum_{i=0}^{\infty} \phi_i \varepsilon_{t-i}


Take the partial derivatives with respect to the time-t innovations (i = 0). That is,

\frac{\partial x_t}{\partial \varepsilon_t'} = \phi_0 = \frac{1}{1 - b_{12} b_{21}} \begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

These are the impact multipliers \phi_{11}(0), \phi_{12}(0), \phi_{21}(0), \phi_{22}(0). For example,

\partial y_t / \partial \varepsilon_{yt} = \phi_{11}(0) = 1 / (1 - b_{12} b_{21})

This is the impact of a contemporaneous shock to \varepsilon_{yt} on y_t. The impact of this same
contemporaneous shock on z_t is

\partial z_t / \partial \varepsilon_{yt} = \phi_{21}(0) = -b_{21} / (1 - b_{12} b_{21})

It is interesting that the diagonal multipliers repeat (i.e., \partial y_t / \partial \varepsilon_{yt} = \partial z_t / \partial \varepsilon_{zt}). That is because we cannot
distinguish the source of the shocks without some kind of restriction – the same
restrictions that we imposed to identify the system. It is also interesting that restricting
b_{21} = 0 (i.e., no contemporaneous influence from y_t to z_t) changes these impact multipliers to
\partial y_t / \partial \varepsilon_{yt} = 1, \; \partial y_t / \partial \varepsilon_{zt} = -b_{12}, \; \partial z_t / \partial \varepsilon_{yt} = 0, \; \partial z_t / \partial \varepsilon_{zt} = 1.
You should satisfy yourself that these are indeed true.
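Since the impact multipliers are just the entries of B^{-1}, they are easy to check numerically. A Python sketch (coefficient values are arbitrary illustrations):

```python
import numpy as np

def impact_multipliers(b12, b21):
    """phi(0) = B^{-1}: contemporaneous response of (y_t, z_t) to unit
    shocks (eps_yt, eps_zt)."""
    B = np.array([[1.0, b12], [b21, 1.0]])
    return np.linalg.inv(B)

# Unrestricted system: every entry is scaled by 1/(1 - b12*b21).
print(impact_multipliers(0.4, 0.3))

# Recursive system (b21 = 0):
# dy/de_y = 1, dy/de_z = -b12, dz/de_y = 0, dz/de_z = 1.
print(impact_multipliers(0.4, 0.0))
```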

What about the influences generated from shocks that occurred in the past (i>0)?
\frac{\partial}{\partial \varepsilon_{t-i}'} \begin{bmatrix} y_t \\ z_t \end{bmatrix}
= \frac{1}{1 - b_{12} b_{21}} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^i
\begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

Let us evaluate \phi_{11}(1), \phi_{12}(1), \phi_{21}(1), \phi_{22}(1):

\phi_1 = \frac{1}{1 - b_{12} b_{21}}
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}
= \frac{1}{1 - b_{12} b_{21}}
\begin{bmatrix} a_{11} - a_{12} b_{21} & a_{12} - a_{11} b_{12} \\ a_{21} - a_{22} b_{21} & a_{22} - a_{21} b_{12} \end{bmatrix}

so, for example,

\partial y_t / \partial \varepsilon_{y,t-1} = \phi_{11}(1) = (a_{11} - a_{12} b_{21}) / (1 - b_{12} b_{21})

The other three follow accordingly. Again, note that the recursive system generated by
restricting b_{21} = 0 greatly simplifies these solutions, e.g., \partial y_t / \partial \varepsilon_{y,t-1} = a_{11} = \gamma_{11} - b_{12} \gamma_{21}. This
says that an innovation from one period past impacts (sorry, impact is a noun, not a verb)
y_t this period through its own one-period lag (\gamma_{11}), filtered by the influence it had on z_t and
by z_t's contemporaneous influence on y_t.

The set of functions \phi_{11}(i), \phi_{12}(i), \phi_{21}(i), \phi_{22}(i) constitutes the system’s impulse responses,
which can be either graphed or tabulated. These represent the current-period response to
the i-th lagged innovation \varepsilon_{y,t-i} or \varepsilon_{z,t-i}. Note how their values are linked to the
stability conditions imposed on the summation over A_1^i.

To summarize, we are looking at the sequence:

\phi_0 = \frac{1}{1 - b_{12} b_{21}} \begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

\phi_1 = \frac{1}{1 - b_{12} b_{21}} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

\phi_2 = \frac{1}{1 - b_{12} b_{21}} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^2 \begin{bmatrix} 1 & -b_{12} \\ -b_{21} & 1 \end{bmatrix}

etc.

With the restriction b_{21} = 0, the system is identified and all observed errors in the
sequence {e_{2t}} are attributable to shocks to z_t. So e_{2t} = \varepsilon_{zt}, which leads to e_{1t} = \varepsilon_{yt} - b_{12} \varepsilon_{zt}. Thus,
y_t has only a lagged influence on z_t.
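The whole sequence φ_0, φ_1, φ_2, … can be generated mechanically. A minimal Python sketch (illustrative parameter values):

```python
import numpy as np

def impulse_responses(A1, b12, b21, n):
    """phi_i = A1^i @ B^{-1} for i = 0..n: the response of (y, z) to a unit
    structural shock that occurred i periods earlier."""
    Binv = np.linalg.inv(np.array([[1.0, b12], [b21, 1.0]]))
    out, P = [], np.eye(2)
    for _ in range(n + 1):
        out.append(P @ Binv)
        P = P @ A1                      # accumulate A1^i
    return np.array(out)

# Illustrative values; b21 = 0 gives the recursive ordering from z to y.
A1 = np.array([[0.5, 0.2], [0.1, 0.4]])
phi = impulse_responses(A1, b12=0.4, b21=0.0, n=20)
print(phi[1][0, 0])   # 0.5 = a11, matching dy_t/deps_{y,t-1} = a11 above
```

For a stable system the φ_i die out geometrically, which is exactly the convergence condition on the sum over A_1^i.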

Variance Decomposition

Consider again the first-order difference equation updated one period. We can write out
the forecasts formed at time t by iterating forward:

x_{t+1} = A_0 + A_1 x_t + e_{t+1}
E_t x_{t+1} = A_0 + A_1 x_t
E_t x_{t+2} = (I + A_1) A_0 + A_1^2 x_t
\vdots
E_t x_{t+n} = A_0 (I + A_1 + \dots + A_1^{n-1}) + A_1^n x_t

Plainly, forecasts are functions of the estimated coefficients in the A0 and A1 matrices.
The associated forecast errors can be written as:

x_{t+1} - E_t x_{t+1} = e_{t+1}
x_{t+2} - E_t x_{t+2} = e_{t+2} + A_1 e_{t+1}
\vdots
x_{t+n} - E_t x_{t+n} = e_{t+n} + A_1 e_{t+n-1} + A_1^2 e_{t+n-2} + \dots + A_1^{n-1} e_{t+1}

And these errors can be written in terms of the underlying innovations \varepsilon_{yt} and \varepsilon_{zt}. The
variance decomposition gives the proportion of the movement in each variable (y_t and
z_t) that is due to its own shocks relative to the shocks from the other variable.

Generally, if zt innovations explain none of the forecast error variance of {yt}, then yt is
exogenous. Typically, a variable explains more of its forecast error variance in the short
term relative to the long term.
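Under the b_{21} = 0 ordering, the n-step forecast-error variance of each variable decomposes across the two structural shocks as the sum of squared impulse-response weights times the structural variances. A Python sketch (parameter values are made up for illustration):

```python
import numpy as np

def fevd(A1, b12, s2y, s2z, n):
    """Share of the n-step-ahead forecast-error variance of y and z due to
    eps_y vs. eps_z, in the recursive system (b21 = 0)."""
    Binv = np.array([[1.0, -b12], [0.0, 1.0]])      # B^{-1} with b21 = 0
    contrib = np.zeros((2, 2))                       # rows: variable; cols: shock
    P = np.eye(2)
    for _ in range(n):
        phi = P @ Binv                               # phi_i
        contrib += phi**2 * np.array([s2y, s2z])     # phi_jk(i)^2 * sigma_k^2
        P = P @ A1
    return contrib / contrib.sum(axis=1, keepdims=True)

A1 = np.array([[0.5, 0.2], [0.1, 0.4]])
shares = fevd(A1, b12=0.4, s2y=1.0, s2z=2.0, n=1)
print(shares)   # at horizon 1, all of z's error variance is due to eps_z
```

Note how the horizon-1 result mirrors the identification restriction: y_t has no contemporaneous influence on z_t, so none of z's one-step error is attributed to ε_y.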

Granger Causality

If {y_t} does not improve the forecasting performance of {z_t}, then {y_t} does not Granger-
cause {z_t}. This type of causality is tested by testing whether the lags of one variable
enter the equation for the other variable. Thus, in the first-order system below, if a_{21} = 0,
then {y_t} does not Granger-cause {z_t}:

y_t = a_{10} + a_{11} y_{t-1} + a_{12} z_{t-1} + e_{1t}
z_t = a_{20} + a_{21} y_{t-1} + a_{22} z_{t-1} + e_{2t}

The test statistic in this case is the standard F-test on the zero restrictions for the lags. When
restrictions are imposed across equations, standard F-tests are inappropriate – notice,
then, that such Granger-causality tests are reported as \chi^2. Note that even if one variable
does not Granger-cause the other, the other is not necessarily exogenous: exogeneity requires
that past and present values of y_t add no information in determining the path of z_t, whereas
Granger causality involves only past values.
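A simulation sketch of the single-restriction F-test (Python; the data-generating values are made up, with a_{21} = 0 so that y truly does not Granger-cause z — this illustrates the restricted-vs-unrestricted SSR comparison, not Stata's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
x = np.zeros((T, 2))
for t in range(1, T):
    y_lag, z_lag = x[t - 1]
    x[t, 0] = 0.5 * y_lag + 0.2 * z_lag + rng.normal()
    x[t, 1] = 0.0 * y_lag + 0.4 * z_lag + rng.normal()   # a21 = 0: y !-> z

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# z-equation: unrestricted (const, y_{t-1}, z_{t-1}) vs. restricted (drop y_{t-1}).
z = x[1:, 1]
Xu = np.column_stack([np.ones(T - 1), x[:-1, 0], x[:-1, 1]])
Xr = np.column_stack([np.ones(T - 1), x[:-1, 1]])
q, k = 1, Xu.shape[1]                     # one restriction (a21 = 0)
F = ((ssr(Xr, z) - ssr(Xu, z)) / q) / (ssr(Xu, z) / (T - 1 - k))
print(F)   # should be small: we fail to reject "y does not Granger-cause z"
```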

Lag Length and Block Exogeneity

Begin with a plausible lag length and pare it down; it is better to err on the side of a longer lag
length than to risk omitting relevant lags. Take a system of equations, for example, with p
lags, estimated using T observations with c coefficients in each equation. The
determinant of the variance-covariance matrix of the residuals is |\Sigma_p|. You'll need to think about this for a
minute: suppose we have N equations, each with c parameters (including a constant).
Estimating each equation yields a residual vector of size T \times 1.
Arrange these N columns into a matrix V with dimension T \times N. The
covariance matrix is \Sigma = V'V / T, which is N \times N. Now consider reducing the lag length to m
< p. Estimate this restricted system and compute |\Sigma_m|. Then

(T - c)(\log|\Sigma_m| - \log|\Sigma_p|) \sim \chi^2, \quad \text{df} = \#\text{restrictions} = (p - m) N^2

(each of the N equations drops (p - m) lags of all N variables)

where c is the number of parameters estimated in each equation of the unrestricted model.
This is a test of cross-equation restrictions. Alternatively, we could use the SBC
criterion:

SBC = T \log|\Sigma| + N \log(T)

Here, N is the total number of parameters in the system (not the number of equations as
above).
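Both statistics can be sketched in a few lines of Python (the residual matrices below are synthetic stand-ins; in practice they would come from the unrestricted and restricted VAR estimations):

```python
import numpy as np

def lag_length_lr(resid_u, resid_r, c):
    """LR statistic (T - c)(log|Sigma_m| - log|Sigma_p|) for paring down a
    VAR's lag length; compare to chi-square with (p - m)*N^2 restrictions.
    resid_u / resid_r: T x N residual matrices from the longer-lag (p) and
    shorter-lag (m) systems; c = coefficients per equation, unrestricted."""
    T = resid_u.shape[0]
    Su = resid_u.T @ resid_u / T
    Sr = resid_r.T @ resid_r / T
    return (T - c) * (np.log(np.linalg.det(Sr)) - np.log(np.linalg.det(Su)))

def sbc(resid, n_params):
    """SBC = T log|Sigma| + (total # of system parameters) log(T)."""
    T = resid.shape[0]
    return T * np.log(np.linalg.det(resid.T @ resid / T)) + n_params * np.log(T)

rng = np.random.default_rng(0)
resid_u = rng.normal(size=(500, 2))
resid_r = resid_u + 0.5 * rng.normal(size=(500, 2))   # restricted fit is worse
lr = lag_length_lr(resid_u, resid_r, c=5)
print(lr > 0)   # dropping lags can only raise the residual variance
```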

Some useful Stata commands

Suppose I want to estimate a VAR in two variables yt and zt with two lags. Then:

1. varbasic y z, lags(1/2) nograph

The option nograph suppresses graphing the impulse responses. Lengthen the lag length
by altering the option lags(1/) at will.

Suppose I impose the restriction b21 = 0 establishing the order from zt to yt. I want the
impulse response functions. Then follow varbasic with

2. varirf graph irf, impulse(z) response(y)

What if I just want to tabulate the IRF? Then

3. varirf table irf, impulse(z)

Now, consider forecasting. I want to estimate the model through time T (the end of the
sample) and then forecast n-periods out from that point. These are true out-of-sample
forecasts. In Stata, follow the varbasic command with

4. varfcast compute, step(n)

You can look at these using the list command in Stata. Stata will automatically append
more observations into the sample to store these forecasts. Stata will also include lower
and upper 95% CI bounds and the standard errors for each of these forecasts. Typically,
in a regression, forecasting the dependent variable requires also forecasting the
independent variables. Because a VAR's regressors are only lagged values of the system's
own variables, its forecasts can be built up recursively from past forecasts. For relatively
short horizons, these forecasts can be quite useful. In traditional models, you would have
to supply values of the regressors independently, either by making them up or by
estimating a separate model.

What if you wanted to test Granger-causality?

5. vargranger

This will tabulate the test statistics for yt to zt as well as zt to yt.

Finally, you can clear all the information that has been stored in memory by typing
varfcast clear, thus setting up another round of forecasts.

**********************************************
