Introduction to Applied Econometrics

National Graduate Institute for Policy Studies

Computer Exercise #3

“Introducing EViews”

Your Assignment: You should write solutions to the following questions. Though you can
discuss the assignment with others, you must write the solutions by yourself. Do not share
solutions! You should include the printed output for any EViews commands you used.

I will inform you of the due date during our class meeting.

Working with Macroeconomic Time Series Data in EViews

Computer Exercises #3 and #4 will use the same dataset. We will build towards estimating a VAR
model. A popular VAR model used by applied macroeconomists is the RMPY model. This model includes
R (interest rate), M (money supply), P (price level), and Y (real GDP). Monetary economists and
macroeconomists have spent a lot of time developing economic theories to explain the relationships
between these variables:

- The Aggregate Demand / Aggregate Supply model, or the Phillips Curve model, provides a way to link all four variables.
- The IS-LM model, modified to include inflation, can also link all four variables.
- The Fisher equation (i = r + π) implies a positive relationship between R (which is i in the Fisher equation) and P.
- The liquidity preference theory indicates that the money supply is inversely related to the interest rate.

The VAR does not rely on theory when developing the econometric specification. The VAR only
states that the four variables can be related to each other: each variable is explained by lags of itself
and lags of the other variables. But we can use the results of the VAR estimation to comment about
various economic theories:

- Milton Friedman’s theory that “Inflation is always and everywhere a monetary phenomenon” (Does M Granger cause P? Is M the only variable to Granger cause P?)
- Is the real economy affected by inflation? (Does P Granger cause Y?)
- Do changes in money growth affect the business cycle? (Does M Granger cause Y? Note, Keynesian theory says yes, but Monetarist theory says no.)
- What is the relationship between M and R, between R and Y, and between R and P?

The file RMPY.csv contains quarterly data on the variables for the US from 1947Q1 to 1992Q4. More
specifically:

R is the three month Treasury bill rate
M is the money supply (M2) measured in billions of dollars
P is the price level measured by the GDP deflator (a price index with 1987 = 1.00)
Y is the real GDP measured in billions of 1987 dollars

1. Import the data into EViews and examine the data. How many observations are in the dataset?

2. Generate a variable, nomY, which is the nominal GDP measured in billions of dollars.

3. Generate the logarithms, differences, and percent changes for these four variables.

4. What is the average quarterly inflation rate during this time period? What is the inflation rate on an
annualized basis?

5. Make time series graphs for real GDP and for the real GDP growth rate. Comment on the
appearance of these graphs.

6. Examine the autocorrelation functions (with 10 lags) for real GDP and real GDP growth rates.

7. Determine the appropriate AR(p) and whether a deterministic time trend should be included for
each of the four variables in logged form.
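For question 4, converting a quarterly inflation rate to an annual basis is simple arithmetic. A quick sketch of the two common conventions (the 0.9% quarterly rate below is a made-up placeholder, not the answer to the question):

```python
# Convert a quarterly inflation rate to an annualized rate.
# The quarterly rate here is a hypothetical placeholder.
q = 0.9  # quarterly inflation, in percent

simple = 4 * q                             # simple annualization
compound = ((1 + q / 100) ** 4 - 1) * 100  # compound annualization

print(round(simple, 3))    # 3.6
print(round(compound, 3))  # 3.649
```

For small quarterly rates the two conventions are close; the compound figure is the exact annual rate implied by four identical quarters.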

(Optional introductory exercise) If you are taking “Monetary Economics – Money and Banking”, you
may be interested in looking at this dataset in relation to the Mishkin textbook. The book includes a
variety of graphs showing the relationships between these variables. Can you re-create the graphs?
Can you develop a set of hypotheses to relate the cause and effect patterns between these variables?

(a) Check for Stationarity. To work with a VAR, you must use stationary data. So the first step is to
check if each of these variables has a unit root. But first, for each variable we need to decide on
the appropriate AR(p) model and decide whether to include a deterministic trend. We also need to
decide whether to use logged data. Find the appropriate specification for each variable (R, M, P,
Y) and then test for a unit root.

(b) Check for Stationarity in the differenced data. In (a) you should have found that unit roots exist
for all of the variables. We will also assume that cointegration does not occur. So we will work
with differenced data. Test to make sure that the data is indeed difference stationary.

(c) Estimating a VAR. Calculate the VAR(1) model, including a time trend.

-Do your results match those in Koop’s Table 11.4 (page 197)?
-Discuss Granger causality using the method described in Koop.
-Then use the vargranger command to see Stata’s Granger Causality Tests. Does the method for
testing Granger Causality discussed in Koop match the Stata procedure?

(d) Estimating a VAR. Calculate the VAR(2), including a time trend.

-Do your results match Koop’s Table 11.5?
-Discuss Granger Causality using the Stata procedure.
-Are there differences in Granger Causality between using a VAR(1) and VAR(2)?

(e) Choosing the Optimal Lag Length.

-Use the method described in Koop to determine the appropriate p for this VAR(p) model. Use a pmax
of 8.
-Use the varsoc command in Stata for determining the appropriate lag length with pmax of 8. What are
the results of Stata? What lag length do you think we should use?

(f) Forecasting. In order to match up our results with Koop, let’s work with the VAR(2) model.

-Estimate the VAR(2) with time trend for the data through 1991:4. Then use Stata to forecast data for
the year 1992. Compare the results to the actual data.
-Do the same thing in Excel.

(g) Impulse Response Functions. Examine the impulse response functions for the VAR(2) model.
How can we interpret these results? What are the implications for monetary policy?


(a) First, you need to decide whether or not to use logged data. Usually, you should use logged data
when working with macroeconomic time series variables. I will do the unit root tests first without
taking the logs of the data, and then with the logs taken. Hopefully, the results will be the same
both ways.

In Lecture 9, we discussed this procedure using a simple method. First we choose a reasonable pmax
and include a deterministic trend, and then we check the significance of the final pth coefficient. If it is
not significant, we re-estimate with p-1, and we repeat this process until we find a final coefficient that
is significant. Once we decide on the value of p, we check to see if the deterministic trend is
significant. If not, then we re-estimate the model without the deterministic trend. Then we are ready
for the unit root test. This is the procedure we will follow in Stata. Remember, we are estimating an
equation of the form:

ΔY_t = α + ρ·Y_(t-1) + γ_1·ΔY_(t-1) + ... + γ_(p-1)·ΔY_(t-p+1) + δ·t + e_t
which means that the number of differenced terms we include in the regression is one less than the
AR(p) we are estimating. For example, if we created d_y=D.y and obs is our time trend, then an
AR(8) model would be estimated as:

reg d_y L.y L(1/7).d_y obs

In the case of R, we find that the 7th differenced lag is significant, so we want an AR(8) model. The
time trend is not significant (p-value is .444), so we do not include it. Then we are ready to use the
Dickey-Fuller test. We cannot reject the null hypothesis of a unit root at 5% significance: the
t-statistic for the unit root is -1.78, the 5% critical value is -2.88, and we only reject the null
hypothesis if the test statistic is more negative than the critical value. We have found evidence
that R has a unit root.
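The lag-selection step in this general-to-specific search is mechanical, so it can help to see the decision rule in isolation. A minimal sketch (the p-values below are invented for illustration; in practice each one comes from the regression output for that value of p):

```python
# General-to-specific lag selection: start at pmax and drop the last
# differenced lag as long as its coefficient is insignificant.
# Hypothetical p-values of the final differenced term for p = 1..8.
pvalues = {8: 0.61, 7: 0.48, 6: 0.09, 5: 0.03,
           4: 0.22, 3: 0.15, 2: 0.01, 1: 0.002}

def choose_p(pvalues, pmax=8, alpha=0.05):
    """Return the first p (counting down from pmax) whose last lag is significant."""
    for p in range(pmax, 0, -1):
        if pvalues[p] < alpha:
            return p
    return 0  # no differenced lags significant

print(choose_p(pvalues))  # 5
```

With these invented p-values the search stops at p = 5, the first model (counting down) whose last differenced lag is significant at 5%.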

Following the procedure I just described, we obtain:

Data in Levels

variable   include time trend?   # of differenced terms   AR(p) model   t-stat   5% crit. value   unit root?
R          no                    7                        8             -1.78    -2.88            yes
M          yes                   7                        8             -2.39    -3.44            yes
P          yes                   3                        4             -1.72    -3.44            yes
Y          yes                   1                        2             -2.16    -3.44            yes

Again, the null hypothesis for the unit root test is that we have a unit root. We reject the null
hypothesis when the t-statistic is more negative than the critical value. If we reject the null hypothesis,
we have a stationary time series. If we cannot reject the null hypothesis, we have a unit root.
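The rejection rule can be stated compactly. A small sketch, using t-statistics and critical values taken from the table above:

```python
# Dickey-Fuller decision rule: reject the unit-root null only if the
# t-statistic is MORE NEGATIVE than the critical value.
def has_unit_root(t_stat, crit_value):
    """True when we cannot reject the null hypothesis of a unit root."""
    return not (t_stat < crit_value)

# Values from the "Data in Levels" table above.
print(has_unit_root(-1.78, -2.88))  # True  (R: cannot reject -> unit root)
print(has_unit_root(-2.39, -3.44))  # True  (M: cannot reject -> unit root)
```

A common mistake is to compare absolute values; the comparison must be on the signed t-statistic against the signed critical value.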

Here are the unit root tests for the logged data:

Data in Logs

variable   include time trend?   # of differenced terms   AR(p) model   t-stat   5% crit. value   unit root?
log_R      no                    6                        7             -2.23    -2.88            yes
log_M      yes                   5                        6             -2.17    -3.44            yes
log_P      yes                   2                        3             -1.78    -3.44            yes
log_Y      yes                   1                        2             -2.28    -3.44            yes

(Note) Since you found unit roots for all the variables, the next appropriate step would actually be to
check for cointegration among the variables. This topic is discussed in Koop, Chapter 10. We did not
have time to discuss this in class. We will not check for cointegration, which means that we could
potentially be ignoring some valuable information about the long-run relationships between the
variables. We will work with differenced data, so our VAR can only describe short-run
relationships.

The Stata code for completing everything so far would look something like this:

clear

insheet using "C:\rmpy.csv", clear

gen obs=_n
gen dates=obs-4*(1960-1947)-1
tsset dates, quarterly
format dates %tq

foreach name in r m p y {
    gen log_`name'=log(`name')
    gen d_`name'=d.`name'
    gen pc_`name'=100*d.log_`name'

    foreach num of numlist 7(-1)1 {
        * AR(p) model specification for data in levels:
        *   reg d_`name' L.`name' L(1/`num').d_`name' obs
        *   reg d_`name' L.`name' L(1/`num').d_`name'

        * AR(p) model specification for logged data:
        reg pc_`name' L.log_`name' L(1/`num').pc_`name' obs
        reg pc_`name' L.log_`name' L(1/`num').pc_`name'
    }
}

dfuller r, lags(7) reg
dfuller m, lags(7) trend reg
dfuller p, lags(3) trend reg
dfuller y, lags(1) trend reg

dfuller log_r, lags(6) reg
dfuller log_m, lags(5) trend reg
dfuller log_p, lags(2) trend reg
dfuller log_y, lags(1) trend reg

(b) If we are thinking in terms of levels, we should use differenced data. If we are thinking in terms of
logs, we should use percent changes. In order to match the method used by Koop in Chapter 11, let’s
use percent changes (but notice that Koop follows his usual practice of calling the percent changes
“differences”).
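The percent-change series we generated earlier is 100 times the first difference of the logs, which closely approximates the exact percent change for small moves. A quick check with arbitrary numbers:

```python
import math

# Log difference vs. exact percent change for a move from 100 to 102.
prev, curr = 100.0, 102.0

exact = 100 * (curr - prev) / prev                 # exact percent change
log_diff = 100 * (math.log(curr) - math.log(prev)) # what pc_* measures

print(round(exact, 4))     # 2.0
print(round(log_diff, 4))  # 1.9803
```

The gap between the two grows with the size of the change, but for typical quarterly movements it is negligible.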

Data in Percent Changes

variable   include time trend?   # of differenced terms   AR(p) model   t-stat   5% crit. value   unit root?
pc_R       no                    2                        3             -6.96    -2.88            no
pc_M       no                    0                        1             -4.71    -2.88            no
pc_P       no                    1                        2             -4.1     -2.88            no
pc_Y       no                    0                        1             -9.04    -2.88            no

We can see that none of the variables in percent change terms have unit roots. They are all stationary.
So we will be able to use OLS on each equation of the VAR (which is what Stata does).
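Since every equation of the VAR has the same right-hand-side variables, estimating the system is just running OLS equation by equation. A sketch on simulated data (the coefficient matrix below is invented for illustration, not the RMPY estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary 2-variable VAR(1): y_t = A @ y_{t-1} + e_t.
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

# Equation-by-equation OLS: regress each variable on the lagged vector.
X = y[:-1]  # regressors: y_{t-1}
A_hat = np.vstack([np.linalg.lstsq(X, y[1:, i], rcond=None)[0]
                   for i in range(2)])

print(np.round(A_hat, 2))  # close to the true A
```

Because the regressors are identical across equations, per-equation OLS is efficient here, which is why Stata can estimate the VAR without any system estimator.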

My Stata code for this part was:

foreach name in r m p y {
    gen pc2_`name'=d2.log_`name'

    foreach num of numlist 2(-1)1 {
        reg pc2_`name' L.pc_`name' L(1/`num').pc2_`name' obs
        reg pc2_`name' L.pc_`name' L(1/`num').pc2_`name'
    }
}

* AR(1) cases (no differenced terms):
reg pc2_m L.pc_m obs
reg pc2_m L.pc_m

reg pc2_y L.pc_y obs
reg pc2_y L.pc_y

dfuller pc_r, lags(2) reg
dfuller pc_m, lags(0) reg
dfuller pc_p, lags(1) reg
dfuller pc_y, lags(0) reg

(c) Estimating the VAR(1) model with a time trend:

var pc_r pc_m pc_p pc_y, lags(1) exog(obs)
Vector autoregression

Sample: 1947q3    1992q4                                              No. of obs       =          182
Log likelihood = -1252.744                                            AIC              =     14.03015
FPE            = 14.56671                                             HQIC             =     14.20143
Det(Sigma_ml) = 11.18873                                              SBIC             =     14.45266

Equation           Parms      RMSE     R-sq      chi2     P>chi2
----------------------------------------------------------------
pc_r                  6     13.5847   0.2277   53.65175   0.0000
pc_m                  6     .542227   0.6574   349.2755   0.0000
pc_p                  6      .56744   0.4533   150.9273   0.0000
pc_y                  6     .920751   0.2026   46.23159   0.0000

----------------------------------------------------------------
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
pc_r         |
pc_r         |
L1 |    .221877   .0730766     3.04   0.002     .0786494    .3651046
pc_m         |
L1 |   3.390594   1.219471     2.78   0.005     1.000475    5.780714
pc_p         |
L1 |   1.778738   1.444631     1.23   0.218    -1.052686    4.610162
pc_y         |
L1 |   3.224263   1.098898     2.93   0.003     1.070462    5.378064
obs          | -.0561754    .0214133    -2.62   0.009    -.0981447    -.014206
_cons        | -3.518505    2.561094    -1.37   0.169    -8.538156    1.501147
-------------+----------------------------------------------------------------
pc_m         |
pc_r         |
L1 |   -.012993   .0029168    -4.45   0.000    -.0187098   -.0072761
pc_m         |
L1 |   .7494569   .0486745    15.40   0.000     .6540566    .8448572
pc_p         |
L1 |   .0606078   .0576616     1.05   0.293    -.0524069    .1736225
pc_y         |
L1 | -.0315711    .0438619    -0.72   0.472    -.1175388    .0543967
obs          |   .0003412   .0008547     0.40   0.690     -.001334    .0020164
_cons        |   .3347869   .1022246     3.28   0.001     .1344303    .5351435
-------------+----------------------------------------------------------------
pc_p         |
pc_r         |
L1 |   .0099351   .0030524     3.25   0.001     .0039524    .0159178
pc_m         |
L1 |   .1206095   .0509378     2.37   0.018     .0207731    .2204458
pc_p         |
L1 |   .5190142   .0603429     8.60   0.000     .4007444     .637284
pc_y         |
L1 | -.0387775    .0459015    -0.84   0.398    -.1287427    .0511878
obs          |   .0018118   .0008944     2.03   0.043     .0000587    .0035648
_cons        |   .1570825    .106978     1.47   0.142    -.0525906    .3667555
-------------+----------------------------------------------------------------
pc_y         |
pc_r         |
L1 |   .0003811    .004953     0.08   0.939    -.0093266    .0100888
pc_m         |
L1 |    .283097   .0826537     3.43   0.001     .1210987    .4450952
pc_p         |
L1 | -.1168863    .0979146    -1.19   0.233    -.3087955    .0750229
pc_y         |
L1 |   .3085509   .0744815     4.14   0.000     .1625699    .4545319
obs          | -.0031337    .0014514    -2.16   0.031    -.0059783   -.0002891
_cons        |   .5017515   .1735866     2.89   0.004     .1615279     .841975
------------------------------------------------------------------------------

Compare carefully. We can see that, except for minor differences in rounding, these results match
Koop’s Table 11.4.

Regarding Granger causality, we can see that in the VAR(1) model an explanatory variable Granger
causes the dependent variable at 5% significance if its coefficient is significant at the 5% level.
We have to do hypothesis tests to see Granger causality, and Stata gives us the p-values for the
coefficients. Looking at the p-values, we can see that, at 5 percent significance, in addition to each
variable’s own lag being significant,

M(+) and Y(+) Granger cause R
R(-) Granger Causes M
R(+) and M(+) Granger Causes P
M(+) Granger Causes Y
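In a VAR(1), reading off Granger causality amounts to screening each equation's coefficient p-values at the 5% level. A sketch using the p-values from the pc_r equation in the output above:

```python
# p-values from the pc_r equation of the VAR(1) output above.
pvals_r_eq = {"pc_m L1": 0.005, "pc_p L1": 0.218, "pc_y L1": 0.003}

# A lagged variable Granger causes pc_r (at 5%) if its p-value < 0.05.
causes_r = [name for name, p in pvals_r_eq.items() if p < 0.05]

print(causes_r)  # ['pc_m L1', 'pc_y L1']
```

This reproduces the first line of the list above: M and Y Granger cause R, while P does not.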

Using the “vargranger” command after running the above VAR model, we obtain the same results:
vargranger

Granger causality Wald tests
+------------------------------------------------------------------+
|           Equation           Excluded |  chi2     df Prob > chi2 |
|--------------------------------------+---------------------------|
|               pc_r               pc_m | 7.7305     1    0.005    |
|               pc_r               pc_p |  1.516     1    0.218    |
|               pc_r               pc_y | 8.6089     1    0.003    |
|               pc_r                ALL | 24.214     3    0.000    |
|--------------------------------------+---------------------------|
|               pc_m               pc_r | 19.843     1    0.000    |
|               pc_m               pc_p | 1.1048     1    0.293    |
|               pc_m               pc_y | .51809     1    0.472    |
|               pc_m                ALL |  26.92     3    0.000    |
|--------------------------------------+---------------------------|
|               pc_p               pc_r | 10.594     1    0.001    |
|               pc_p               pc_m | 5.6064     1    0.018    |
|               pc_p               pc_y | .71368     1    0.398    |
|               pc_p                ALL | 15.251     3    0.002    |
|--------------------------------------+---------------------------|
|               pc_y               pc_r | .00592     1    0.939    |
|               pc_y               pc_m | 11.731     1    0.001    |
|               pc_y               pc_p | 1.4251     1    0.233    |
|               pc_y                ALL | 12.222     3    0.007    |
+------------------------------------------------------------------+
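With a single lag there is only one restriction per test, so each chi2 statistic in the vargranger table is just the square of the corresponding coefficient's z-statistic from the VAR output. Checking the pc_m entry of the pc_r equation:

```python
# From the VAR(1) output above: coefficient and standard error on
# L1.pc_m in the pc_r equation.
coef, se = 3.390594, 1.219471

z = coef / se
wald_chi2 = z ** 2

print(round(wald_chi2, 2))  # 7.73, matching the vargranger chi2 of 7.7305
```

This is why, for the VAR(1), the two ways of judging Granger causality (coefficient p-values and vargranger) agree exactly.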

(Think about what these results imply for monetary theory)

(d) Estimating the VAR(2) model with a time trend. We obtain:
var pc_r pc_m pc_p pc_y, lags(1/2) exog(obs)

Vector autoregression

Sample: 1947q4    1992q4                                        No. of obs          =        181
Log likelihood = -1204.875                                      AIC                 =   13.75553
FPE            = 11.07257                                       HQIC                =    14.0421
Det(Sigma_ml) = 7.113764                                        SBIC                =   14.46238

Equation           Parms      RMSE     R-sq      chi2     P>chi2
----------------------------------------------------------------
pc_r                 10     12.0595   0.3421    94.1107   0.0000
pc_m                 10     .533896   0.6767   378.8909   0.0000
pc_p                 10     .547858   0.5047   184.4367   0.0000
pc_y                 10     .907615   0.2471   59.39752   0.0000

----------------------------------------------------------------
------------------------------------------------------------------------------
|      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
pc_r         |
pc_r         |
L1 |   .3148494    .069509     4.53   0.000     .1786143    .4510845
L2 | -.3458243    .0709026    -4.88   0.000    -.4847909   -.2068577
pc_m         |
L1 |   2.823967   1.688882     1.67   0.095    -.4861806    6.134115
L2 |   -2.20095   1.709912    -1.29   0.198    -5.552316    1.150416
pc_p         |
L1 |   3.049076   1.573373     1.94   0.053    -.0346791    6.132831
L2 |   1.163968   1.517259     0.77   0.443    -1.809805    4.137741
pc_y         |
L1 |   3.696047   1.005351     3.68   0.000     1.725596    5.666499
L2 |   1.085008   1.021413     1.06   0.288    -.9169245    3.086941
obs          | -.0451783    .0198965    -2.27   0.023    -.0841747   -.0061818
_cons        | -3.909845    2.388825    -1.64   0.102    -8.591857    .7721668
-------------+----------------------------------------------------------------
pc_m         |
pc_r         |
L1 | -.0167695    .0030773    -5.45   0.000    -.0228009   -.0107381
L2 |   .0033744    .003139     1.07   0.282    -.0027779    .0095267
pc_m         |
L1 |   .6552472   .0747697     8.76   0.000     .5087012    .8017932
L2 |   .1574915   .0757008     2.08   0.037     .0091207    .3058623
pc_p         |
L1 | -.0196234     .069656    -0.28   0.778    -.1561466    .1168998
L2 |   .0951263   .0671717     1.42   0.157    -.0365278    .2267804
pc_y         |
L1 | -.0506719    .0445086    -1.14   0.255    -.1379072    .0365634
L2 |   .0356292   .0452197     0.79   0.431    -.0529999    .1242582
obs          | -.0002324    .0008809    -0.26   0.792    -.0019588    .0014941
_cons        |   .2618382   .1057575     2.48   0.013     .0545574     .469119
-------------+----------------------------------------------------------------
pc_p         |
pc_r         |
L1 |   .0093898   .0031578     2.97   0.003     .0032007    .0155789
L2 | -.0008624    .0032211    -0.27   0.789    -.0071756    .0054508
pc_m         |
L1 |   .0856009    .076725     1.12   0.265    -.0647774    .2359792
L2 |   .0249308   .0776804     0.32   0.748      -.12732    .1771817
pc_p         |
L1 |   .3660449   .0714775     5.12   0.000     .2259515    .5061383
L2 |   .2822199   .0689283     4.09   0.000      .147123    .4173169
pc_y         |
L1 | -.0097744    .0456726    -0.21   0.831     -.099291    .0797422
L2 | -.0462155    .0464023    -1.00   0.319    -.1371623    .0447313
obs          |   .0011716   .0009039     1.30   0.195       -.0006    .0029432
_cons        |   .1103431   .1085231     1.02   0.309    -.1023583    .3230445
-------------+----------------------------------------------------------------
pc_y         |
pc_r         |
L1 |   .0023011   .0052313     0.44   0.660    -.0079522    .0125543
L2 | -.0095239    .0053362    -1.78   0.074    -.0199828    .0009349
pc_m         |
L1 |   .3102452   .1271074     2.44   0.015     .0611192    .5593712
L2 | -.0937597    .1286902    -0.73   0.466    -.3459878    .1584684

pc_p         |
L1 |   .0738481   .1184141     0.62   0.533    -.1582393    .3059355
L2 | -.2328913    .1141908    -2.04   0.041    -.4567013   -.0090814
pc_y         |
L1 |   .2698176    .075664     3.57   0.000     .1215189    .4181164
L2 |    .153286   .0768729     1.99   0.046      .002618     .303954
obs          | -.0025195    .0014974    -1.68   0.092    -.0054545    .0004154
_cons        |   .5185362   .1797861     2.88   0.004      .166162    .8709104
------------------------------------------------------------------------------

. vargranger

Granger causality Wald tests
+------------------------------------------------------------------+
|           Equation           Excluded |  chi2     df Prob > chi2 |
|--------------------------------------+---------------------------|
|               pc_r               pc_m |  2.802     2    0.246    |
|               pc_r               pc_p | 8.1358     2    0.017    |
|               pc_r               pc_y | 17.501     2    0.000    |
|               pc_r                ALL | 31.675     6    0.000    |
|--------------------------------------+---------------------------|
|               pc_m               pc_r |  29.82     2    0.000    |
|               pc_m               pc_p | 2.2976     2    0.317    |
|               pc_m               pc_y | 1.5789     2    0.454    |
|               pc_m                ALL | 39.122     6    0.000    |
|--------------------------------------+---------------------------|
|               pc_p               pc_r |  8.859     2    0.012    |
|               pc_p               pc_m | 4.1246     2    0.127    |
|               pc_p               pc_y | 1.2081     2    0.547    |
|               pc_p                ALL | 15.169     6    0.019    |
|--------------------------------------+---------------------------|
|               pc_y               pc_r | 3.2269     2    0.199    |
|               pc_y               pc_m | 8.4903     2    0.014    |
|               pc_y               pc_p | 4.4325     2    0.109    |
|               pc_y                ALL | 22.056     6    0.001    |
+------------------------------------------------------------------+

Again, we are able to see a match with Koop Table 11.5, aside from minor rounding differences.

Using Stata’s Granger Causality Tests, we find a few differences from before. It is still the case that
only R Granger Causes M, and only M Granger Causes Y. But now, P and Y Granger Cause R, and
only R Granger Causes P.

By including a second lag, M lost its ability to Granger Cause R and P, once we hold other factors
constant. Meanwhile, P gained the ability to Granger cause R.

For your reference, here is a more complete list of Granger Causality results at the 5 percent
significance level:

                                                         ---------- Granger Cause? ----------
Dependent   Explanatory   Hypothesized   VAR(1)   VAR(1)   VAR(2)   VAR(3)       VAR(5)
Variable    Variable      Sign           sign
DR          DM            -              +        Yes      No       No           No
DR          DP            +              +        No       Yes      No (close)   No
DR          DY            +              +        Yes      Yes      Yes          No
DM          DR            none or -      -        Yes      Yes      Yes          Yes
DM          DP            unclear        +        No       No       No (close)   No
DM          DY            unclear        -        No       No       No           No
DP          DR            +              +        Yes      Yes      No           No
DP          DM            +              +        Yes      No       Yes          No
DP          DY            -              -        No       No       No           No
DY          DR            unclear        +        No       No       No           No
DY          DM            + or none      +        Yes      Yes      Yes          No
DY          DP            none or -      -        No       No       Yes          Yes

(e) Using a pmax of 8 and including a time trend, we do find a significant coefficient among the 8th
lags. Thus, by Koop’s criterion, we should use a lag length of 8.

But Koop’s criterion is only a simple rule that uses skills we have learned in the course. There are
many more sophisticated techniques we could use. With a pmax of 8, we obtain the following:

. varsoc pc_r pc_m pc_p pc_y, maxlag(8) exog(obs)

Selection order criteria
Sample:  1949q2   1992q4                     Number of obs      =       175
+----------------------------------------------------------------------------+
| lag |      LL       LR      df      p       FPE       AIC      HQIC    SBIC |
|-----+----------------------------------------------------------------------|
|  0  | -1335.78                            54.9273  15.3575  15.4162  15.5022 |
|  1  | -1183.37   304.83    16   0.000     11.5545  13.7985  13.9745  14.2325*|
|  2  | -1148.05    70.64    16   0.000     9.26895  13.5777  13.8711* 14.3011 |
|  3  | -1131.59    32.91    16   0.008     9.22892* 13.5725  13.9833  14.5852 |
|  4  | -1118.18   26.828    16   0.043     9.52047   13.602  14.1302  14.9041 |
|  5  | -1099.47   37.424    16   0.002     9.25234   13.571* 14.2166  15.1625 |
|  6  | -1089.35   20.226    16   0.210     9.93107  13.6383  14.4012  15.5191 |
|  7  | -1075.85   27.013*   16   0.041     10.2675  13.6668  14.5471   15.837 |
|  8  | -1064.67    22.36    16   0.132     10.9179  13.7219  14.7195  16.1814 |
+----------------------------------------------------------------------------+
Endogenous: pc_r pc_m pc_p pc_y
Exogenous: obs _cons
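The information criteria in varsoc's table come straight from the log likelihood, the total number of estimated parameters, and the sample size, e.g. AIC = (-2·LL + 2·k) / T. Checking the lag-0 and lag-1 rows above (k counts parameters across all four equations, including the obs trend and the constant):

```python
T = 175  # observations used by varsoc

def aic(ll, k, T=T):
    """AIC on varsoc's scale: (-2*LL + 2*k) / T."""
    return (-2 * ll + 2 * k) / T

# lag 0: 4 equations x (trend + constant)              =  8 parameters
# lag 1: 4 equations x (4 lag coefs + trend + constant) = 24 parameters
print(round(aic(-1335.78, 8), 4))   # 15.3575
print(round(aic(-1183.37, 24), 4))  # 13.7985
```

Both values match the table, which confirms how the penalty for extra lags enters: each additional lag adds 16 parameters (4 per equation), so the criteria trade off the improvement in LL against that cost.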

Unfortunately, the results are mixed. Different criteria suggest optimal lag lengths of 1, 2, 3, 5, and 7.

What should we do? It’s not clear. We will continue assuming that a lag length of 2 is optimal for the
rest of the assignment.

(f) Steps to forecast the VAR(2) model with time trend in Stata:

1. estimate the VAR(2) with time trend on the sample 1947:1 to 1991:4

var pc_r pc_m pc_p pc_y if dates<=q(1991q4), lags(1/2) exog(obs)

2. compute the forecasts for 1992 to be compared to the actual values of 1992.

fcast compute var2_1991, step(4)

3. list the results

list dates pc_* var2* if dates>=q(1991q1), sep(4)

          ------------- actual data -------------   ----------------- forecasts -----------------
dates      pc_r        pc_m       pc_p       pc_y       var2_199~r   var2_19~m   var2_19~p   var2_199~y
1992q1     -15.36699   .7966042   .928919    .8651733   -11.158003   1.034592    .62643653   -.01968567
1992q2     -5.634439   .0760078   .6889954   .6984711   -4.6516364   1.2538859   .73161304   .21987159
1992q3     -17.69122   .2095222   .2891243   .8378029   -4.2362703   1.3529998   .86177913   .27538361
1992q4     -.4323006   .6717682   .8133203   1.392746   -5.6864829   1.4720951   .94033463   .27067321

We can compare with Koop for the variables he includes on page 202 of the 2nd edition. pc_p and
pc_y are the actual values for the quarterly percent changes in 1992; var2_1991pc_p and var2_1991pc_y
are the forecasts. We can see that the forecasts match Koop.

To make the forecasts in Excel, we need to follow the forecasting approach described in Koop.
Making forecasts in Excel is not straightforward. We have to make a new spreadsheet whenever we
have a different number of endogenous and/or exogenous variables or a different VAR(p). I will show
you a spreadsheet for the VAR(2) case that we have used. You can find the spreadsheet on the class
webpage. You can use the calculation formulas in the spreadsheet as a model if you ever wish to
forecast a VAR using Excel in the future.
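Whether in Excel or elsewhere, the only machinery the forecast needs is the VAR(2) recursion: each new forecast is fed back in as a lag for the next step. A sketch with hypothetical 2-variable coefficient matrices (placeholders, not the estimated RMPY values):

```python
# Iterate a VAR(2) with trend: y_t = c + A1 y_{t-1} + A2 y_{t-2} + d*t.
# All numbers below are hypothetical placeholders.
c  = [0.5, 0.2]                    # intercepts
A1 = [[0.3, 0.1], [0.0, 0.4]]      # first-lag coefficients
A2 = [[0.1, 0.0], [0.0, 0.1]]      # second-lag coefficients
d  = [0.0, 0.0]                    # time-trend coefficients

def one_step(y1, y2, t):
    """Forecast y_t from its two lags y1 = y_{t-1} and y2 = y_{t-2}."""
    return [c[i]
            + sum(A1[i][j] * y1[j] for j in range(2))
            + sum(A2[i][j] * y2[j] for j in range(2))
            + d[i] * t
            for i in range(2)]

def forecast(y1, y2, t0, steps):
    """Multi-step forecast: earlier forecasts become the lags for later steps."""
    out = []
    for h in range(steps):
        f = one_step(y1, y2, t0 + h)
        out.append(f)
        y1, y2 = f, y1  # the forecast becomes the new first lag
    return out

f = forecast([1.0, 1.0], [1.0, 1.0], 180, 4)  # 4 quarters ahead
print([round(v, 6) for v in f[0]])            # [1.0, 0.7]
```

This is exactly what the spreadsheet cells compute: the only thing that changes with a different number of variables or a different p is the size of the coefficient matrices and the number of lags carried forward.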

(g) The impulse response functions show you how a one-time positive shock to one of the endogenous
variables affects not only that variable, but is also transmitted to all the other endogenous variables in
the VAR through the lag structure of the model.

If the error terms are uncorrelated with each other, then interpreting the impulse response is
straightforward: a shock to the i-th error term will simply affect the i-th endogenous variable directly,
and then feed through the whole system in subsequent periods.

But the error terms across the VAR are usually correlated. Using linear algebra, a transformation (such
as the Cholesky decomposition) can be made to the residuals so that they become uncorrelated. For
our VAR(2) model with time trend, we obtain the impulse response functions below.
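The mechanics are: take the Cholesky factor of the residual covariance matrix to define one-standard-deviation orthogonalized shocks, then pass the shock through the VAR's lag structure. A 2-variable VAR(1) sketch with hypothetical numbers (not the RMPY estimates):

```python
import math

# Hypothetical 2-variable VAR(1) coefficients and residual covariance.
A = [[0.5, 0.1],
     [0.2, 0.3]]
Sigma = [[1.0, 0.4],
         [0.4, 0.5]]

# Cholesky factor L of Sigma (lower triangular), done by hand for 2x2.
l11 = math.sqrt(Sigma[0][0])
l21 = Sigma[1][0] / l11
l22 = math.sqrt(Sigma[1][1] - l21 ** 2)

# A one-S.D. orthogonalized shock to the first variable = first column of L.
shock = [l11, l21]

# Responses over the horizon: r_0 = shock, r_h = A @ r_{h-1}.
responses = [shock]
for h in range(1, 10):
    prev = responses[-1]
    responses.append([sum(A[i][j] * prev[j] for j in range(2))
                      for i in range(2)])

print([round(v, 4) for v in responses[1]])  # [0.54, 0.32]
```

Note that the Cholesky ordering matters: a shock to the first variable is allowed to move the second on impact, but not vice versa, which is why the ordering of R, M, P, Y in the var command affects the orthogonalized IRFs.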

I will include the graphs of the impulse responses from EViews, because I think they look much nicer
than the graphs from Stata. However, here is some Stata code for producing the impulse response
functions in Stata:

irf set results
var pc_r pc_m pc_p pc_y, lags(1/2) exog(obs)
irf create results

irf graph oirf, impulse(pc_r pc_m pc_p pc_y) response(pc_r) title(How Increases in R M P Y Affect Interest Rates)
irf graph oirf, impulse(pc_r pc_m pc_p pc_y) response(pc_m) title(How Increases in R M P Y Affect Money Growth)
irf graph oirf, impulse(pc_r pc_m pc_p pc_y) response(pc_p) title(How Increases in R M P Y Affect Inflation)
irf graph oirf, impulse(pc_r pc_m pc_p pc_y) response(pc_y) title(How Increases in R M P Y Affect Real GDP Growth)

[Figure: "Response to Cholesky One S.D. Innovations" — a 4×4 grid of EViews impulse response graphs over horizons 1–10. Rows show the responses of DR, DM, DP, and DY; columns show the responses to shocks in DR, DM, DP, and DY.]

Let’s just consider the “Response of DY to DM.” What we see is that if we increase money growth for
one time period, at time period 0, then the growth rate of Y starts to increase, and Y grows more
quickly than it otherwise would for about one year (four quarters). Then the effects of the
expansionary monetary policy stimulus die out, or even turn slightly negative. This supports the
Keynesian position that monetary policy can affect real GDP. What other interesting results do you find?
