# Derivation of the Theoretical Autocorrelation Function of an AR(1) Process

We have studied and worked with the so-called autocorrelation function. To recap, $\rho_k$, or ACF($k$), is simply the correlation of a variable with its value $k$ periods in the past. In other words,

$$\rho_k = \mathrm{ACF}(k) = \mathrm{Corr}(x_t, x_{t-k}) \qquad (1)$$
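The sample counterpart of equation 1 is easy to compute directly. Here is a minimal sketch of our own in Python (numpy assumed; the helper name `acf` is ours, not from the notes):

```python
import numpy as np

def acf(x, k):
    """Sample autocorrelation of x at lag k, i.e. an estimate of Corr(x_t, x_{t-k})."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    if k == 0:
        return 1.0
    # Standard estimator: lagged cross-products over the full-sample sum of squares.
    return float(np.dot(x[k:], x[:-k]) / np.dot(x, x))

# An alternating series is perfectly negatively correlated with its lag-1 self,
# so its sample ACF is close to -1 at lag 1 and close to +1 at lag 2.
x = np.tile([1.0, -1.0], 50)
print(acf(x, 1), acf(x, 2))  # -> -0.99 0.98
```

(The estimates are slightly inside ±1 because the estimator divides by the full-sample sum of squares; this is the conventional biased ACF estimator.)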
Now suppose a time series variable $x_t$ is represented by the following equation:

$$x_t = \phi x_{t-1} + e_t \qquad (2)$$
where $e_t$ is a white noise process. The purpose of these pages is to derive the population (or theoretical) autocorrelation function for an AR(1) process. As we will see below, the shape of the population ACF is needed in forecasting with ARIMA models.

To begin with, for $x_t$ to be a zero-mean stationary AR(1) process, the following conditions must hold:
1. $E(e_t) = 0$
2. $\mathrm{cov}(x_{t-k}, e_t) = 0$
3. $\mathrm{var}(e_t) = \text{constant}$
4. $\mathrm{cov}(e_t, e_{t-k}) = 0$ for $k \neq 0$
Please note that assumptions 1 through 4 are very similar to the least squares assumptions we studied before.¹ In this context:
1. If the first assumption does not hold, the mean of $x_t$ will not be zero.
2. If assumptions 2 and 4 are violated, the process cannot be said to be AR(1), since the error process cannot be said to be purely random (white noise).
3. If the third assumption is violated, the process will not be stationary.

In addition to these assumptions regarding the behavior of the error process, we also need to assume that $|\phi| < 1$. For if $|\phi| \geq 1$, the time series will eventually grow (or decline, depending on whether $\phi$ is positive or negative) without bound, and hence the series is trended (non-stationary).
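To make the role of the $|\phi| < 1$ condition concrete, here is a small simulation of our own (numpy assumed, not part of the original notes): with $\phi = 0.6$ the series hovers around zero, while with $\phi = 1.05$ it wanders off without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(300)  # one shared white noise draw

def simulate_ar1(phi, e):
    """Generate x_t = phi * x_{t-1} + e_t starting from x_0 = 0."""
    x = np.zeros(len(e))
    for t in range(1, len(e)):
        x[t] = phi * x[t - 1] + e[t]
    return x

stable = simulate_ar1(0.6, e)      # |phi| < 1: fluctuates around zero
explosive = simulate_ar1(1.05, e)  # |phi| > 1: diverges
print(np.abs(stable).max() < 10, np.abs(explosive).max() > 100)
```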

Some Reminders from Statistics

The population variance of a variable $x$, and the covariance and correlation coefficient between any two variables $y$ and $x$, were given as

Population variance: $\mathrm{var}(x) = E(x - \bar{x})^2$

Population covariance: $\mathrm{cov}(x, y) = E[(x - \bar{x})(y - \bar{y})]$

Population correlation: $\mathrm{corr}(y, x) = \dfrac{\mathrm{cov}(y, x)}{\sqrt{\mathrm{var}(y)\,\mathrm{var}(x)}}$
Employing the correlation formula to find $\mathrm{corr}(x_t, x_{t-1})$ we have:

$$\mathrm{corr}(x_t, x_{t-1}) = \frac{\mathrm{cov}(x_t, x_{t-1})}{\sqrt{\mathrm{var}(x_t)\,\mathrm{var}(x_{t-1})}} \qquad (3)$$

¹ The second assumption is different from the one we had before, simply because the explanatory variable ($x_{t-k}$) is explicitly random.
Since $x_t$ is taken to be stationary, we have $\mathrm{var}(x_t) = \mathrm{var}(x_{t-1})$. Taking this into account, the correlation formula simplifies to

$$\mathrm{corr}(x_t, x_{t-1}) = \frac{\mathrm{cov}(x_t, x_{t-1})}{\mathrm{var}(x_t)}. \qquad (4)$$
Remembering that the mean of $x_t$ is zero, the formulas for variance, covariance, and correlation given above simplify further. In particular, the variance of $x_t$ becomes

$$\mathrm{var}(x) = E(x^2). \qquad (5)$$

The covariance between $x_t$ and $x_{t-1}$ simplifies to

$$\mathrm{cov}(x_t, x_{t-1}) = E(x_t x_{t-1}) \qquad (6)$$

Substituting these into the correlation formula we get

$$\mathrm{corr}(x_t, x_{t-1}) = \frac{E(x_t x_{t-1})}{E(x_t^2)} \qquad (7)$$
Now multiply both sides of equation 2 by $x_{t-1}$:

$$x_t x_{t-1} = \phi x_{t-1}^2 + x_{t-1} e_t$$

Taking the expected value of both sides we get

$$E(x_t x_{t-1}) = \phi E(x_{t-1}^2) + E(x_{t-1} e_t)$$

Now according to assumption 2 above, $E(x_{t-1} e_t)$, which is the covariance between $x_{t-1}$ and $e_t$, is equal to zero. Since stationarity also gives $E(x_{t-1}^2) = E(x_t^2)$, we therefore have

$$\mathrm{corr}(x_t, x_{t-1}) = \frac{E(x_t x_{t-1})}{E(x_{t-1}^2)} = \phi.$$

Finding $\mathrm{corr}(x_t, x_{t-2})$

Lag both sides of equation 2 to get

$$x_{t-1} = \phi x_{t-2} + e_{t-1} \qquad (8)$$

Substitute $x_{t-1}$ from equation 8 back into equation 2:

$$x_t = \phi(\phi x_{t-2} + e_{t-1}) + e_t$$

$$x_t = \phi^2 x_{t-2} + \phi e_{t-1} + e_t \qquad (9)$$
Multiplying both sides of equation 9 by $x_{t-2}$ we obtain

$$x_t x_{t-2} = \phi^2 x_{t-2}^2 + \phi x_{t-2} e_{t-1} + x_{t-2} e_t$$

Taking expected values of both sides,

$$E(x_t x_{t-2}) = \phi^2 E(x_{t-2}^2) + \phi E(x_{t-2} e_{t-1}) + E(x_{t-2} e_t)$$

The last two terms are the covariances of $x$ and the errors, which according to assumption 2 are equal to zero:

$$E(x_t x_{t-2}) = \phi^2 E(x_{t-2}^2)$$

Now using the correlation formula,

$$\mathrm{corr}(x_t, x_{t-2}) = \frac{E(x_t x_{t-2})}{E(x_{t-2}^2)} = \phi^2$$
By the same process we can show that $\rho_k = \mathrm{corr}(x_t, x_{t-k}) = \phi^k$. Since $|\phi| < 1$, the shape of the ACF for an AR(1) becomes clear: it declines geometrically toward zero as the lag grows.
[Figure: theoretical ACF of an AR(1) with coefficient 0.6, declining geometrically toward zero over lags 1 through 11.]
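The geometric decay $\rho_k = \phi^k$ can be tabulated directly for $\phi = 0.6$ (a sketch of ours):

```python
# Theoretical AR(1) autocorrelations rho_k = phi**k for phi = 0.6.
phi = 0.6
acf_values = [phi ** k for k in range(1, 11)]
print([round(v, 3) for v in acf_values])
# -> [0.6, 0.36, 0.216, 0.13, 0.078, 0.047, 0.028, 0.017, 0.01, 0.006]
# Each value is phi times the previous one, so the ACF decays geometrically.
```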

Autocorrelation Function of a Moving Average Process

First, remember that a moving average process represents the data as a weighted moving average of a white noise process. In general, a moving average of order $q$ is written as

$$x_t = e_t + \theta_1 e_{t-1} + \theta_2 e_{t-2} + \dots + \theta_q e_{t-q} \qquad (10)$$

This is different from an autoregressive process, which represents the data as a function of its own past.² Please note that in this regression all right-hand-side variables are the error term and its lags! As such, estimating $\theta_1, \theta_2, \dots$ is not like a regular regression, because the lagged errors are not directly observed.

An MA(1) process is:

$$x_t = e_t + \theta_1 e_{t-1} \qquad (11)$$
Finding $\mathrm{corr}(x_t, x_{t-1})$

As before we have

$$\mathrm{corr}(x_t, x_{t-1}) = \frac{E(x_t x_{t-1})}{E(x_t^2)} \qquad (12)$$

Lagging both sides of equation 11 we get

$$x_{t-1} = e_{t-1} + \theta_1 e_{t-2} \qquad (13)$$

Multiplying equations 11 and 13 together:

$$x_t x_{t-1} = (e_t + \theta_1 e_{t-1})(e_{t-1} + \theta_1 e_{t-2})$$

$$x_t x_{t-1} = e_t e_{t-1} + \theta_1 e_t e_{t-2} + \theta_1 e_{t-1}^2 + \theta_1^2 e_{t-1} e_{t-2}$$

Taking expected values,

$$E(x_t x_{t-1}) = E(e_t e_{t-1}) + \theta_1 E(e_t e_{t-2}) + \theta_1 E(e_{t-1}^2) + \theta_1^2 E(e_{t-1} e_{t-2})$$

The first, second, and fourth expected values on the right-hand side are zero by assumption 4 (the errors are white noise, so they are uncorrelated with each other). Therefore,

$$E(x_t x_{t-1}) = \theta_1 E(e_{t-1}^2) = \theta_1 \mathrm{Var}(e) \qquad (14)$$

2
Although there is a mathematical relation between the two.
On the other hand, the denominator $E(x_t^2)$ (the variance of $x$) can be calculated by squaring both sides of equation 11 and taking expected values:

$$x_t^2 = e_t^2 + \theta_1^2 e_{t-1}^2 + 2\theta_1 e_t e_{t-1}$$

$$E(x_t^2) = E(e_t^2) + \theta_1^2 E(e_{t-1}^2) + 2\theta_1 E(e_t e_{t-1})$$

The first and second expectations are each equal to the variance of $e$ (a constant), and the third term is zero. So

$$E(x_t^2) = \mathrm{Var}(e) + \theta_1^2 \mathrm{Var}(e)$$

$$E(x_t^2) = \mathrm{Var}(e)(1 + \theta_1^2) \qquad (15)$$
Replacing from equations 14 and 15 into 12 we get

$$\mathrm{corr}(x_t, x_{t-1}) = \frac{E(x_t x_{t-1})}{E(x_t^2)} = \frac{\theta_1 \mathrm{Var}(e)}{\mathrm{Var}(e)(1 + \theta_1^2)} = \frac{\theta_1}{1 + \theta_1^2} \qquad (16)$$
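For instance, with $\theta_1 = 0.7$, equation 16 gives $0.7/1.49 \approx 0.47$. A quick simulation of our own (numpy assumed) agrees:

```python
import numpy as np

theta = 0.7
rho1 = theta / (1 + theta ** 2)  # equation 16
print(round(rho1, 2))  # -> 0.47

# Cross-check by simulating a long MA(1): x_t = e_t + theta * e_{t-1}.
rng = np.random.default_rng(2)
e = rng.standard_normal(200_001)
x = e[1:] + theta * e[:-1]
r1 = np.corrcoef(x[1:], x[:-1])[0, 1]
print(round(r1, 2))  # close to 0.47
```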

Finding $\mathrm{corr}(x_t, x_{t-2})$

This derivation is very similar to what we did above and is left as an exercise to you. Hint: in the previous derivation we started by lagging equation 11 once and multiplying the result (equation 13) by equation 11. This time you need to lag equation 11 by two periods and multiply the result by equation 11. Everything else is exactly the same. The result that you will get is

$$\mathrm{corr}(x_t, x_{t-2}) = 0$$

This means that an MA(1) autocorrelation function has only one significant spike, at lag one; the rest are not significantly different from zero.
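A simulation of our own (numpy assumed) makes this cutoff visible: the sample ACF of a generated MA(1) shows one spike near 0.47 at lag 1 and is essentially zero at lags 2 and beyond.

```python
import numpy as np

# Simulate an MA(1) with theta = 0.7 and estimate the ACF at lags 1-3.
theta = 0.7
rng = np.random.default_rng(3)
e = rng.standard_normal(100_001)
x = e[1:] + theta * e[:-1]
xc = x - x.mean()

def sample_acf(z, k):
    """Sample autocorrelation of z at lag k."""
    return float(np.dot(z[k:], z[:-k]) / np.dot(z, z))

for k in (1, 2, 3):
    print(k, round(sample_acf(xc, k), 2))  # lag 1 near 0.47, lags 2-3 near 0
```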

[Figure: theoretical ACF of an MA(1) with θ = 0.7: a single spike of about 0.47 at lag 1; all higher lags are zero.]
