ARMA models

Gloria González-Rivera
University of California, Riverside
and
Jesús Gonzalo
U. Carlos III de Madrid
White Noise

A sequence of uncorrelated random variables is called a white noise process:

  E(a_t) = \mu_a  (normally \mu_a = 0)
  Var(a_t) = \sigma_a^2
  Cov(a_t, a_{t-k}) = 0  for k \neq 0

Autocovariance and autocorrelation:

  \gamma_k = \sigma_a^2  for k = 0;   \gamma_k = 0  for k \neq 0
  \rho_k = 1  for k = 0;   \rho_k = 0  for k \neq 0
  \phi_{kk} = 1  for k = 0;   \phi_{kk} = 0  for k \neq 0
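The defining moments above are easy to check by simulation. A minimal sketch with NumPy (the sample size, seed, and tolerance are arbitrary illustrative choices, not part of the notes):

```python
import numpy as np

def sample_acf(z, max_lag):
    """Sample autocorrelations rho_hat_1 .. rho_hat_max_lag of a series z."""
    z = np.asarray(z, dtype=float)
    zc = z - z.mean()
    gamma0 = np.mean(zc * zc)  # sample variance (gamma_0)
    return np.array([np.mean(zc[k:] * zc[:-k]) / gamma0
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=100_000)  # Gaussian white noise, sigma_a = 1

acf = sample_acf(a, 5)
# For white noise every rho_k with k != 0 should be near zero,
# within a few multiples of 1/sqrt(n) of sampling error.
assert np.all(np.abs(acf) < 0.02)
```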
The Wold Decomposition

If {Z_t} is a nondeterministic stationary time series, then

  Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j} + V_t = \psi(L) a_t + V_t,

where
1. \psi_0 = 1 and \sum_{j=0}^{\infty} \psi_j^2 < \infty,
2. {a_t} is WN(0, \sigma^2), with \sigma^2 > 0,
3. Cov(a_s, V_t) = 0 for all s and t,
4. a_t is the limit of linear combinations of Z_s, s \leq t, and
5. {V_t} is deterministic.
Some Remarks on the Wold Decomposition

(*)  a_t = Z_t - P[Z_t | Z_{t-1}, Z_{t-2}, ...]

(*)  What do we mean by \sum_{j=0}^{\infty}? Convergence in mean square:

  E[ Z_t - \sum_{j=0}^{n} \psi_j a_{t-j} ]^2 \to 0  as  n \to \infty
What the Wold theorem does not say

• The a_t need not be normally distributed, and hence need not be iid.
• Though P[a_t | Z_{t-j}] = 0, it need not be true that E[a_t | Z_{t-j}] = 0 (think about the possible consequences).
• The shocks a_t need not be the "true" shocks to the system. When will this happen?
• The uniqueness result only states that the Wold representation is the unique linear representation where the shocks are linear forecast errors. Non-linear representations, or representations in terms of non-forecast-error shocks, are perfectly possible.
Birth of the ARMA models

Under general conditions the infinite lag polynomial of the Wold Decomposition can be approximated by the ratio of two finite lag polynomials:

  \psi(L) \approx \theta_q(L) / \phi_p(L)

Therefore

  Z_t = \psi(L) a_t \approx (\theta_q(L) / \phi_p(L)) a_t,
  \phi_p(L) Z_t = \theta_q(L) a_t,
  (1 - \phi_1 L - ... - \phi_p L^p) Z_t = (1 - \theta_1 L - ... - \theta_q L^q) a_t,
  Z_t = \phi_1 Z_{t-1} + ... + \phi_p Z_{t-p} + a_t - \theta_1 a_{t-1} - ... - \theta_q a_{t-q}

      [AR(p) part]                        [MA(q) part]
MA(1) processes

Let a_t be a zero-mean white noise process, a_t ~ (0, \sigma_a^2).

  Z_t = \mu + a_t - \theta a_{t-1}   MA(1)

Expectation:
  E(Z_t) = \mu + E(a_t) - \theta E(a_{t-1}) = \mu
Variance:
  Var(Z_t) = E(Z_t - \mu)^2 = E(a_t - \theta a_{t-1})^2
           = E(a_t^2 + \theta^2 a_{t-1}^2 - 2\theta a_t a_{t-1}) = \sigma_a^2 (1 + \theta^2)
Autocovariance, 1st order:
  E(Z_t - \mu)(Z_{t-1} - \mu) = E(a_t - \theta a_{t-1})(a_{t-1} - \theta a_{t-2})
    = E(a_t a_{t-1} - \theta a_{t-1}^2 - \theta a_t a_{t-2} + \theta^2 a_{t-1} a_{t-2}) = -\theta \sigma_a^2
MA(1) processes (cont)

Autocovariance of higher order:
  \gamma_j = E(Z_t - \mu)(Z_{t-j} - \mu) = E(a_t - \theta a_{t-1})(a_{t-j} - \theta a_{t-j-1})
           = E(a_t a_{t-j} - \theta a_t a_{t-j-1} - \theta a_{t-1} a_{t-j} + \theta^2 a_{t-1} a_{t-j-1}) = 0  for j > 1

Autocorrelation:
  \rho_1 = \gamma_1 / \gamma_0 = -\theta \sigma^2 / ((1 + \theta^2) \sigma^2) = -\theta / (1 + \theta^2)
  \rho_j = 0  for j > 1

The MA(1) process is covariance-stationary because
  E(Z_t) = \mu,   Var(Z_t) = (1 + \theta^2) \sigma^2 < \infty.
The MA(1) process is ergodic because
  \sum_{j=0}^{\infty} |\gamma_j| = \sigma^2 (1 + \theta^2) + |\theta| \sigma^2 < \infty.
If a_t were Gaussian, then Z_t would be ergodic for all moments.

[Figure: plot of the function \rho_1 = -\theta/(1+\theta^2) against \theta. It ranges between -0.5 and 0.5, with max(\rho_1) = 0.5 at \theta = -1, and e.g. \rho_1 = -0.4 both for \theta = 0.5 and for \theta = 2.]

If in \rho_1 = -\theta/(1+\theta^2) we substitute \theta by 1/\theta:
  \rho_1 = -(1/\theta) / (1 + (1/\theta)^2) = -\theta / (1 + \theta^2)
so the two processes
  Z_t = a_t - \theta a_{t-1}
  Z_t = a_t - (1/\theta) a_{t-1}
share the same autocorrelation function.
The MA(1) is not uniquely identifiable, except for \theta = \pm 1.
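This identification problem can be seen numerically: \theta and 1/\theta imply the same \rho_1. A small sketch (the value \theta = 0.5 is an arbitrary choice):

```python
import numpy as np

def ma1_rho1(theta):
    """First autocorrelation of Z_t = a_t - theta*a_{t-1}: rho_1 = -theta/(1+theta^2)."""
    return -theta / (1.0 + theta**2)

theta = 0.5
# theta and its reciprocal give the same rho_1 (here both equal -0.4)
assert np.isclose(ma1_rho1(theta), ma1_rho1(1.0 / theta))
assert np.isclose(ma1_rho1(theta), -0.4)

# The maximum of |rho_1| over theta is 0.5, attained at theta = +/- 1
grid = np.linspace(-5, 5, 100_001)
assert np.isclose(np.max(np.abs(ma1_rho1(grid))), 0.5)
```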
Invertibility

Definition: An MA(q) process defined by the equation Z_t = \theta_q(L) a_t is said to be invertible if there exists a sequence of constants {\pi_j} such that \sum_{j=0}^{\infty} |\pi_j| < \infty and

  a_t = \sum_{j=0}^{\infty} \pi_j Z_{t-j},   t = 0, \pm 1, ...

Theorem: Let {Z_t} be an MA(q). Then {Z_t} is invertible if and only if \theta(x) \neq 0 for all x \in C such that |x| \leq 1. The coefficients {\pi_j} are determined by the relation

  \pi(x) = \sum_{j=0}^{\infty} \pi_j x^j = 1 / \theta(x),   |x| \leq 1.
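For the MA(1) case Z_t = (1 - \theta L) a_t the theorem gives \pi(x) = 1/(1 - \theta x) = \sum_j \theta^j x^j, i.e. \pi_j = \theta^j. A sketch that recovers these coefficients by long division of lag polynomials (the value of \theta is an arbitrary illustrative choice):

```python
import numpy as np

def inverse_poly_coeffs(theta_poly, n):
    """First n coefficients pi_0..pi_{n-1} of 1/theta(x),
    where theta_poly = [1, -theta_1, ..., -theta_q] (ascending powers)."""
    pi = np.zeros(n)
    pi[0] = 1.0 / theta_poly[0]
    for j in range(1, n):
        # the coefficient of x^j in theta(x)*pi(x) must be zero
        acc = sum(theta_poly[k] * pi[j - k]
                  for k in range(1, min(j, len(theta_poly) - 1) + 1))
        pi[j] = -acc / theta_poly[0]
    return pi

theta = 0.6  # illustrative MA(1) parameter, |theta| < 1
pi = inverse_poly_coeffs([1.0, -theta], 8)
assert np.allclose(pi, theta ** np.arange(8))  # pi_j = theta^j
```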
Identification of the MA(1)

• If we identify the MA(1) through the autocorrelation structure, we need to decide which value of \theta to choose, the one greater than one or the one less than one. Requiring the condition of invertibility (think why!), we will choose the value less than one.
• Another reason to choose the value less than one can be found by paying attention to the error variance of the two "equivalent" representations:

  Z_t = (1 - \theta_1 L) a_t,       V(a_t) = \gamma_0 / (1 + \theta_1^2)                (invertible)
  Z_t = (1 - \theta_1^{-1} L) a_t^*,  V(a_t^*) = \theta_1^2 \gamma_0 / (1 + \theta_1^2)   (non-invertible)

  so  V(a_t^*) = \theta_1^2 V(a_t)  for |\theta_1| < 1.
MA(q)

  Z_t = \mu + a_t - \theta_1 a_{t-1} - \theta_2 a_{t-2} - ... - \theta_q a_{t-q}

Moments:
  E(Z_t) = \mu
  \gamma_0 = Var(Z_t) = (1 + \theta_1^2 + \theta_2^2 + ... + \theta_q^2) \sigma_a^2
  \gamma_j = E(a_t - \theta_1 a_{t-1} - ... - \theta_q a_{t-q})(a_{t-j} - \theta_1 a_{t-j-1} - ... - \theta_q a_{t-j-q})
  \gamma_j = (-\theta_j + \theta_{j+1}\theta_1 + \theta_{j+2}\theta_2 + ... + \theta_q \theta_{q-j}) \sigma^2  for j = 1, ..., q
  \gamma_j = 0  for j > q
  \rho_j = (-\theta_j + \theta_{j+1}\theta_1 + ... + \theta_q \theta_{q-j}) / (1 + \sum_{i=1}^{q} \theta_i^2)  for j = 1, ..., q;   \rho_j = 0 for j > q

The MA(q) is covariance-stationary and ergodic for the same reasons as the MA(1).

Example, MA(2):
  \rho_1 = (-\theta_1 + \theta_1 \theta_2) / (1 + \theta_1^2 + \theta_2^2)
  \rho_2 = -\theta_2 / (1 + \theta_1^2 + \theta_2^2)
  \rho_3 = \rho_4 = ... = 0
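The MA(2) formulas can be cross-checked against a long simulated sample. A sketch, assuming arbitrary illustrative values \theta_1 = 0.4, \theta_2 = 0.2 (seed, sample size, and tolerance are also arbitrary):

```python
import numpy as np

def ma2_acf(theta1, theta2):
    """Theoretical rho_1, rho_2 of Z_t = a_t - theta1*a_{t-1} - theta2*a_{t-2}."""
    denom = 1.0 + theta1**2 + theta2**2
    return (-theta1 + theta1 * theta2) / denom, -theta2 / denom

theta1, theta2 = 0.4, 0.2
rng = np.random.default_rng(1)
a = rng.normal(size=400_000)
z = a[2:] - theta1 * a[1:-1] - theta2 * a[:-2]  # simulate the MA(2)

zc = z - z.mean()
g0 = np.mean(zc * zc)
sample = [np.mean(zc[k:] * zc[:-k]) / g0 for k in (1, 2, 3)]

rho1, rho2 = ma2_acf(theta1, theta2)
# rho_3 = 0: the ACF cuts off beyond lag q = 2
assert np.allclose(sample, [rho1, rho2, 0.0], atol=0.01)
```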
MA(infinite)

  Z_t = \mu + \sum_{j=0}^{\infty} \psi_j a_{t-j},   \psi_0 = 1

Is it covariance-stationary?
  E(Z_t) = \mu
  Var(Z_t) = \sigma_a^2 \sum_{i=0}^{\infty} \psi_i^2
  \gamma_j = E(Z_t - \mu)(Z_{t-j} - \mu) = \sigma^2 \sum_{i=0}^{\infty} \psi_i \psi_{i+j}
  \rho_j = \sum_{i=0}^{\infty} \psi_i \psi_{i+j} / \sum_{i=0}^{\infty} \psi_i^2

The process is covariance-stationary provided that \sum_{i=0}^{\infty} \psi_i^2 < \infty (square-summable sequence).
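For a concrete check of the \gamma_j formula, take geometric weights \psi_j = \phi^j (the MA(\infty) form of a stationary AR(1), derived later in these notes); then \sigma^2 \sum_i \psi_i \psi_{i+j} should collapse to \sigma^2 \phi^j / (1 - \phi^2). A sketch with an arbitrary \phi = 0.7:

```python
import numpy as np

phi, sigma2 = 0.7, 1.0
psi = phi ** np.arange(2000)  # psi_j = phi^j, truncated far out

def gamma(j):
    # gamma_j = sigma^2 * sum_i psi_i * psi_{i+j}, truncated sum
    return sigma2 * np.sum(psi[: len(psi) - j] * psi[j:])

closed_form = [sigma2 * phi**j / (1 - phi**2) for j in range(5)]
numeric = [gamma(j) for j in range(5)]
assert np.allclose(numeric, closed_form)
```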
Some interesting results

Proposition 1:
  \sum_{i=0}^{\infty} |\psi_i| < \infty  \Rightarrow  \sum_{i=0}^{\infty} \psi_i^2 < \infty
  (absolutely summable)                     (square summable)

Proposition 2:
  \sum_{i=0}^{\infty} |\psi_i| < \infty  \Rightarrow  \sum_{j=0}^{\infty} |\gamma_j| < \infty
                                              (ergodic for the mean)

Proof 1. Suppose \sum_{i=0}^{\infty} |\psi_i| < \infty. Then there exists N such that |\psi_i| < 1 for all i \geq N, and hence
  \psi_i^2 = |\psi_i|^2 \leq |\psi_i|   for i \geq N.
Now,
  \sum_{i=0}^{\infty} \psi_i^2 = \sum_{i=0}^{N-1} \psi_i^2 + \sum_{i=N}^{\infty} \psi_i^2 \leq \sum_{i=0}^{N-1} \psi_i^2 + \sum_{i=N}^{\infty} |\psi_i|
                                      (1)                 (2)
(1) is finite because N is finite; (2) is finite because {\psi_i} is absolutely summable.
Then \sum_{i=0}^{\infty} \psi_i^2 < \infty.

Proof 2. Suppose \sum_{i=0}^{\infty} |\psi_i| < \infty. Since \gamma_j = \sigma^2 \sum_{i=0}^{\infty} \psi_i \psi_{i+j},
  |\gamma_j| = \sigma^2 |\sum_i \psi_i \psi_{i+j}| \leq \sigma^2 \sum_i |\psi_i| |\psi_{i+j}|,
so
  \sum_{j=0}^{\infty} |\gamma_j| \leq \sigma^2 \sum_j \sum_i |\psi_i| |\psi_{i+j}| = \sigma^2 \sum_i |\psi_i| \sum_j |\psi_{i+j}| \leq \sigma^2 \sum_i |\psi_i| M \leq \sigma^2 M^2 < \infty,
because by assumption \sum_j |\psi_{i+j}| \leq M < \infty.
AR(1)

  Z_t = c + \phi Z_{t-1} + a_t

Using backward substitution:
  Z_t = c + \phi c + \phi^2 Z_{t-2} + \phi a_{t-1} + a_t = ...
      = c (1 + \phi + \phi^2 + ...) + a_t + \phi a_{t-1} + \phi^2 a_{t-2} + ...
        [geometric progression]       [MA(\infty)]

If |\phi| < 1:
  (1)  1 + \phi + \phi^2 + ... = 1/(1 - \phi)   (bounded)
  (2)  \sum_{j=0}^{\infty} |\psi_j| = \sum_{j=0}^{\infty} |\phi|^j = 1/(1 - |\phi|) < \infty

Remember: \sum_{j=0}^{\infty} |\psi_j| < \infty is the condition for stationarity and ergodicity.
AR(1) (cont)

Hence, this AR(1) process has a stationary solution if |\phi| < 1.
Alternatively, consider the root of the characteristic equation:
  1 - \phi x = 0  \Rightarrow  x = 1/\phi,  |x| > 1,
i.e. the root of the characteristic equation lies outside of the unit circle.

Mean of a stationary AR(1):
  Z_t = c/(1 - \phi) + a_t + \phi a_{t-1} + \phi^2 a_{t-2} + ...
  E(Z_t) = c/(1 - \phi) = \mu

Variance of a stationary AR(1):
  \gamma_0 = (1 + \phi^2 + \phi^4 + ...) \sigma_a^2 = \sigma_a^2 / (1 - \phi^2)

Autocovariance of a stationary AR(1). Rewrite the process as (Z_t - \mu) = \phi (Z_{t-1} - \mu) + a_t:
  \gamma_j = E[(Z_t - \mu)(Z_{t-j} - \mu)] = E[(\phi (Z_{t-1} - \mu) + a_t)(Z_{t-j} - \mu)]
           = \phi E[(Z_{t-1} - \mu)(Z_{t-j} - \mu)] = \phi \gamma_{j-1},   j \geq 1

Autocorrelation of a stationary AR(1):
  ACF:  \rho_j = \gamma_j/\gamma_0 = \phi \rho_{j-1} = \phi^2 \rho_{j-2} = \phi^3 \rho_{j-3} = ... = \phi^j \rho_0 = \phi^j
  PACF (from the Yule-Walker equations):
    \phi_{11} = \rho_1 = \phi
    \phi_{22} = (\rho_2 - \rho_1^2)/(1 - \rho_1^2) = (\phi^2 - \phi^2)/(1 - \phi^2) = 0
    \phi_{kk} = 0  for k \geq 2
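A quick numerical illustration of \rho_j = \phi^j and the PACF cutoff, using the lag-2 Yule-Walker determinant formula for \phi_{22} (the value \phi = 0.8 is an arbitrary choice):

```python
import numpy as np

phi = 0.8
rho = phi ** np.arange(10)  # AR(1) ACF: rho_j = phi**j

phi11 = rho[1]                                    # first partial autocorrelation
phi22 = (rho[2] - rho[1]**2) / (1 - rho[1]**2)    # Yule-Walker determinant formula

assert np.isclose(phi11, phi)
assert np.isclose(phi22, 0.0)  # PACF cuts off after lag 1 for an AR(1)
```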
Causality and Stationarity

Definition: An AR(p) process defined by the equation \phi_p(L) Z_t = a_t is said to be causal, or a causal function of {a_t}, if there exists a sequence of constants {\psi_j} such that \sum_{j=0}^{\infty} |\psi_j| < \infty and

  Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j},   t = 0, \pm 1, ...

Causality is equivalent to the condition

  \phi(x) \neq 0  for all x \in C such that |x| \leq 1.

Definition: A stationary solution {Z_t} of the equation \phi_p(L) Z_t = a_t exists (and is also the unique stationary solution) if and only if

  \phi(x) \neq 0  for all x \in C such that |x| = 1.

From now on we will be dealing only with causal AR models.
AR(2)

  Z_t = c + \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + a_t

Stationarity: study the roots of the characteristic equation

  1 - \phi_1 x - \phi_2 x^2 = 0

(a) Multiplying by -1 and solving \phi_2 x^2 + \phi_1 x - 1 = 0:
  x_1 = (-\phi_1 + \sqrt{\phi_1^2 + 4\phi_2}) / (2\phi_2),   x_2 = (-\phi_1 - \sqrt{\phi_1^2 + 4\phi_2}) / (2\phi_2)
(b) Dividing by x^2 and solving in 1/x, i.e. (1/x)^2 - \phi_1 (1/x) - \phi_2 = 0:
  1/x_1 = (\phi_1 + \sqrt{\phi_1^2 + 4\phi_2}) / 2,   1/x_2 = (\phi_1 - \sqrt{\phi_1^2 + 4\phi_2}) / 2

For a stationary causal solution it is required that
  |x_i| > 1  \Leftrightarrow  |1/x_i| < 1,   i = 1, 2.

Since
  (1/x_1)(1/x_2) = -\phi_2  \Rightarrow  |\phi_2| < 1
  1/x_1 + 1/x_2 = \phi_1,
necessary conditions for a stationary causal solution are
  \phi_1 + \phi_2 < 1
  \phi_2 - \phi_1 < 1

Roots can be real or complex:
  (1) Real roots:    \phi_1^2 + 4\phi_2 \geq 0
  (2) Complex roots: \phi_1^2 + 4\phi_2 < 0, i.e. \phi_2 < -\phi_1^2 / 4

From (1)  \phi_1 + \phi_2 < 1
From (2)  \phi_2 - \phi_1 < 1

[Figure: the stationarity region is the triangle \phi_1 + \phi_2 < 1, \phi_2 - \phi_1 < 1, |\phi_2| < 1 in the (\phi_1, \phi_2) plane, with \phi_1 on the horizontal axis from -2 to 2 and \phi_2 on the vertical axis from -1 to 1; roots are real above the parabola \phi_2 = -\phi_1^2/4 and complex below it.]
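The triangle conditions can be checked against the roots of the characteristic polynomial directly. A sketch (the tested (\phi_1, \phi_2) pairs are arbitrary; np.roots takes the highest-degree coefficient first):

```python
import numpy as np

def ar2_is_causal(phi1, phi2):
    """True if all roots of 1 - phi1*x - phi2*x^2 lie outside the unit circle."""
    roots = np.roots([-phi2, -phi1, 1.0])  # highest-degree coefficient first
    return np.all(np.abs(roots) > 1.0)

def in_triangle(phi1, phi2):
    return (phi1 + phi2 < 1) and (phi2 - phi1 < 1) and (abs(phi2) < 1)

# Real-root, complex-root, and non-stationary cases all agree with the triangle
for phi1, phi2 in [(0.5, 0.3), (1.2, -0.4), (0.5, 0.6), (-1.5, -0.9), (0.2, -0.9)]:
    assert ar2_is_causal(phi1, phi2) == in_triangle(phi1, phi2)
```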
    a  2 t Z 2  1 t Z1  c  t Z               t


                                            c
    Mean of AR(2)               
                                      1  1  2
    Variance and Autocorrelations of AR(2)
 0  E ( Z t   )2  1E ( Z t 1   )( Z t   )  2 E ( Z t 2   )Z t     E ( Z t   )at
 0  1 1  2 2   2 a
 0  1 1 0  2  2 0   2 a
             2a
0 
     1  1 1  2  2


       j  E( Zt   )( Zt  j   )  1 j 1  2 j 2                                 j 1
                                  Difference equation
 j  1 j1  2  j2   j 1
                                  different shapes according to the
                                  roots, real or complex
                                                       1 
                                                1       
           j  1 1  1  0  2 1                1  2
                                                          
                                                        
           j  2  2  1 1  2  0         1 2
                                         2          2 
                                              1  2      
                                                          
           j  3  3  1  2  2 1
 Partial autocorrelations: from Yule-Walker equations
                   1           2  12
      11  1         ; 22           ; 33  0
                 1  2         1  1
                                      2
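The Yule-Walker results above (\rho_1 = \phi_1/(1-\phi_2) and \phi_{22} = \phi_2) can be verified numerically. A sketch with arbitrary \phi_1 = 0.5, \phi_2 = 0.3:

```python
import numpy as np

phi1, phi2 = 0.5, 0.3  # an arbitrary causal AR(2)

# First autocorrelations from the Yule-Walker difference equation
rho1 = phi1 / (1 - phi2)
rho = [1.0, rho1]
for j in range(2, 10):
    rho.append(phi1 * rho[j - 1] + phi2 * rho[j - 2])

# Check rho_2 against its closed form
assert np.isclose(rho[2], phi1**2 / (1 - phi2) + phi2)

# The second partial autocorrelation recovers phi2
phi22 = (rho[2] - rho1**2) / (1 - rho1**2)
assert np.isclose(phi22, phi2)
```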
AR(p)

  Z_t = c + \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + ... + \phi_p Z_{t-p} + a_t

Stationarity: all p roots of the characteristic equation lie outside of the unit circle.

ACF:
  \rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2} + ... + \phi_p \rho_{k-p}
The first p autocorrelations solve the system (p unknowns and p equations):
  \rho_1 = \phi_1 \rho_0 + \phi_2 \rho_1 + ... + \phi_p \rho_{p-1}
  \rho_2 = \phi_1 \rho_1 + \phi_2 \rho_0 + ... + \phi_p \rho_{p-2}
  ...
  \rho_p = \phi_1 \rho_{p-1} + \phi_2 \rho_{p-2} + ... + \phi_p \rho_0
The ACF decays as a mixture of exponentials and/or damped sine waves, depending on whether the roots are real or complex.

PACF:
  \phi_{kk} = 0  for k > p
Relationship between AR(p) and MA(q)

Stationary AR(p):
  \phi_p(L) Z_t = a_t,   \phi_p(L) = (1 - \phi_1 L - \phi_2 L^2 - ... - \phi_p L^p)
  Z_t = (1/\phi_p(L)) a_t = \psi(L) a_t,   \psi(L) = (1 + \psi_1 L + \psi_2 L^2 + ...)
  1/\phi_p(L) = \psi(L)  \Leftrightarrow  \phi_p(L) \psi(L) = 1.   How to obtain \psi from \phi?

Example, AR(2):
  (1 - \phi_1 L - \phi_2 L^2)(1 + \psi_1 L + \psi_2 L^2 + ...) = 1
  1 + \psi_1 L + \psi_2 L^2 + \psi_3 L^3 + ...
    - \phi_1 L - \phi_1 \psi_1 L^2 - \phi_1 \psi_2 L^3 - ...
    - \phi_2 L^2 - \phi_2 \psi_1 L^3 - ...  = 1
Equating coefficients from both polynomials:
  \psi_1 - \phi_1 = 0               \Rightarrow  \psi_1 = \phi_1
  \psi_2 - \phi_1 \psi_1 - \phi_2 = 0   \Rightarrow  \psi_2 = \phi_1^2 + \phi_2
  \psi_3 - \phi_1 \psi_2 - \phi_2 \psi_1 = 0  \Rightarrow  \psi_3 = \phi_1 (\phi_1^2 + \phi_2) + \phi_2 \phi_1
In general, \psi_j = \psi_{j-1} \phi_1 + \psi_{j-2} \phi_2 for j \geq 2.
                                                        
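The recursion \psi_j = \psi_{j-1} \phi_1 + \psi_{j-2} \phi_2 can be checked by convolving \phi(L) with the \psi series, since \phi(L)\psi(L) must equal 1. A sketch (\phi_1, \phi_2 are arbitrary illustrative values):

```python
import numpy as np

def ar2_psi_weights(phi1, phi2, n):
    """psi_0..psi_{n-1} from psi_j = phi1*psi_{j-1} + phi2*psi_{j-2}, psi_0 = 1."""
    psi = np.zeros(n)
    psi[0] = 1.0
    if n > 1:
        psi[1] = phi1
    for j in range(2, n):
        psi[j] = phi1 * psi[j - 1] + phi2 * psi[j - 2]
    return psi

phi1, phi2 = 0.5, 0.3
psi = ar2_psi_weights(phi1, phi2, 20)
assert np.isclose(psi[2], phi1**2 + phi2)  # psi_2 from the slide

# phi(L)*psi(L) = 1: convolving [1, -phi1, -phi2] with psi gives 1, 0, 0, ...
prod = np.convolve([1.0, -phi1, -phi2], psi)[:20]
assert np.allclose(prod, np.eye(1, 20, 0).ravel())
```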
Invertible MA(q):
  Z_t = \theta_q(L) a_t,   \theta_q(L) = (1 - \theta_1 L - \theta_2 L^2 - ... - \theta_q L^q)
  \pi(L) Z_t = (1/\theta_q(L)) Z_t = a_t
  1/\theta_q(L) = \pi(L)  \Leftrightarrow  \theta_q(L) \pi(L) = 1.   How to obtain \pi from \theta?

Write an example, e.g. an MA(2), and proceed as in the previous example.
ARMA(p,q)

  \phi_p(L) Z_t = \theta_q(L) a_t

Invertibility: the roots of \theta_q(x) = 0 satisfy |x| > 1.
Causality: the roots of \phi_p(x) = 0 satisfy |x| > 1.

Pure AR representation:
  \pi(L) Z_t = (\phi_p(L)/\theta_q(L)) Z_t = a_t
Pure MA representation:
  Z_t = (\theta_q(L)/\phi_p(L)) a_t = \psi(L) a_t
Autocorrelations of ARMA(p,q)

  Z_t = \phi_1 Z_{t-1} + ... + \phi_p Z_{t-p} + a_t - \theta_1 a_{t-1} - ... - \theta_q a_{t-q}

Without loss of generality, assume the mean is equal to zero. Multiply by Z_{t-k},
  Z_t Z_{t-k} = \phi_1 Z_{t-1} Z_{t-k} + ... + \phi_p Z_{t-p} Z_{t-k} + a_t Z_{t-k} - \theta_1 a_{t-1} Z_{t-k} - ... - \theta_q a_{t-q} Z_{t-k},
and take expectations:
  \gamma_k = \phi_1 \gamma_{k-1} + \phi_2 \gamma_{k-2} + ... + \phi_p \gamma_{k-p}   for k \geq q+1
  \gamma_k depends on the \theta_i as well   for k \leq q
Note that E(Z_{t-k} a_{t-i}) = 0 for k > i.

PACF: behaves like that of an MA process (the ARMA contains an MA part), decaying without a cutoff.
ARMA(1,1)

  (1 - \phi L) Z_t = (1 - \theta L) a_t

Causal: |\phi| < 1.   Invertible: |\theta| < 1.
Pure AR form:  Z_t = \sum_{j \geq 1} \pi_j Z_{t-j} + a_t,  with \pi_j = (\phi - \theta) \theta^{j-1}, j \geq 1.
Pure MA form:  Z_t = \psi(L) a_t,  with \psi_0 = 1 and \psi_j = (\phi - \theta) \phi^{j-1}, j \geq 1.

ACF of ARMA(1,1). Multiply by Z_{t-k},
  Z_t Z_{t-k} = \phi Z_{t-1} Z_{t-k} + a_t Z_{t-k} - \theta a_{t-1} Z_{t-k},
and take expectations:
  \gamma_k = \phi \gamma_{k-1} + E(a_t Z_{t-k}) - \theta E(a_{t-1} Z_{t-k})

k = 0:  E(a_t Z_t) = \sigma_a^2,  E(a_{t-1} Z_t) = (\phi - \theta) \sigma_a^2
        \gamma_0 = \phi \gamma_1 + \sigma_a^2 - \theta (\phi - \theta) \sigma_a^2
k = 1:  \gamma_1 = \phi \gamma_0 - \theta \sigma_a^2
        (a system of 2 equations in 2 unknowns: solve for \gamma_0 and \gamma_1)
k \geq 2:  \gamma_k = \phi \gamma_{k-1}

ACF:
  \rho_k = 1                                                    for k = 0
  \rho_1 = (1 - \phi\theta)(\phi - \theta) / (1 + \theta^2 - 2\phi\theta)   for k = 1
  \rho_k = \phi \rho_{k-1}                                       for k \geq 2

PACF: behaves like that of an MA(1) (the ARMA(1,1) contains an MA(1) part): exponential decay.
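The \rho_1 formula can be cross-checked against the \psi-weights representation, since \gamma_k = \sigma^2 \sum_i \psi_i \psi_{i+k}. A sketch with arbitrary \phi = 0.7, \theta = 0.4:

```python
import numpy as np

phi, theta = 0.7, 0.4  # arbitrary causal, invertible ARMA(1,1)

# psi-weights: psi_0 = 1, psi_j = (phi - theta) * phi**(j-1) for j >= 1
j = np.arange(1, 2000)
psi = np.concatenate(([1.0], (phi - theta) * phi ** (j - 1)))

def gamma(k):
    # gamma_k = sigma^2 * sum_i psi_i * psi_{i+k}, with sigma^2 = 1, truncated
    return np.sum(psi[: len(psi) - k] * psi[k:])

rho1_psi = gamma(1) / gamma(0)
rho1_formula = (1 - phi * theta) * (phi - theta) / (1 + theta**2 - 2 * phi * theta)
assert np.isclose(rho1_psi, rho1_formula)

# For k >= 2 the ACF decays like an AR(1): rho_k = phi * rho_{k-1}
assert np.isclose(gamma(2) / gamma(0), phi * rho1_psi)
```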
[Figures: ACF and PACF of an ARMA(1,1); ACF and PACF of an MA(2); ACF and PACF of an AR(2).]
Problems

P1: Determine which of the following ARMA processes are causal and which of them are invertible (in each case a_t denotes a white noise):
  a. Z_t + 0.2 Z_{t-1} - 0.48 Z_{t-2} = a_t
  b. Z_t + 1.9 Z_{t-1} + 0.88 Z_{t-2} = a_t + 0.2 a_{t-1} + 0.7 a_{t-2}
  c. Z_t + 0.6 Z_{t-1} = a_t + 1.2 a_{t-1}
  d. Z_t + 1.8 Z_{t-1} + 0.81 Z_{t-2} = a_t
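For P1, causality and invertibility can be checked numerically from the roots of \phi(x) and \theta(x). A sketch for case (a) only, as an illustration of the method (np.roots expects the highest-degree coefficient first, so the ascending coefficient list is reversed):

```python
import numpy as np

def roots_outside_unit_circle(coeffs_ascending):
    """coeffs_ascending = [c0, c1, ..., cp] for c0 + c1*x + ... + cp*x^p."""
    roots = np.roots(coeffs_ascending[::-1])
    return np.all(np.abs(roots) > 1.0)

# (a) Z_t + 0.2 Z_{t-1} - 0.48 Z_{t-2} = a_t
#     phi(x) = 1 + 0.2x - 0.48x^2,  theta(x) = 1
causal = roots_outside_unit_circle([1.0, 0.2, -0.48])
invertible = True  # theta(x) = 1 has no roots at all
print(causal, invertible)  # prints: True True  (roots of phi are 5/3 and -5/4)
```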

P2: Show that the two MA(1) processes
  Z_t = a_t + \theta a_{t-1},   {a_t} is WN(0, \sigma^2)
  \tilde{Z}_t = \tilde{a}_t + (1/\theta) \tilde{a}_{t-1},   {\tilde{a}_t} is WN(0, \sigma^2 \theta^2),
where 0 < |\theta| < 1, have the same autocovariance functions.
Problems (cont)

P3: Let {Z_t} denote the unique stationary solution of the autoregressive equations
  Z_t = \phi Z_{t-1} + a_t,   t = 0, \pm 1, ...,
where {a_t} is WN(0, \sigma^2) and |\phi| > 1. Then Z_t is given by the expression
  Z_t = - \sum_{j=1}^{\infty} \phi^{-j} a_{t+j}.
Define the new sequence
  W_t = Z_t - (1/\phi) Z_{t-1};
show that {W_t} is WN(0, \sigma_W^2) and express \sigma_W^2 in terms of \sigma^2 and \phi.

These calculations show that {Z_t} is the (unique stationary) solution of the causal AR equations
  Z_t = (1/\phi) Z_{t-1} + W_t,   t = 0, \pm 1, ...
Problems (cont)

P4: Let {Y_t} be the AR(1) plus noise time series defined by Y_t = Z_t + W_t, where {W_t} is WN(0, \sigma_W^2) and Z_t = \phi Z_{t-1} + a_t, with |\phi| < 1, {a_t} a WN(0, \sigma_a^2), and E(W_s a_t) = 0 for all s and t.
• Show that {Y_t} is stationary and find its autocovariance function.
• Show that the time series U_t = Y_t - \phi Y_{t-1} is an MA(1).
• Conclude from the previous point that {Y_t} is an ARMA(1,1), and express the three parameters of this model in terms of \phi, \sigma_W^2, \sigma_a^2.
Appendix: Lag Operator L

Definition:  L Z_t = Z_{t-1}
Properties:
  1. L^k Z_t = Z_{t-k}
  2. L(\alpha Z_t) = \alpha L Z_t = \alpha Z_{t-1}
  3. L(Z_t + Y_t) = L Z_t + L Y_t = Z_{t-1} + Y_{t-1}
Examples:
  1. Z_t = \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + a_t  \Leftrightarrow  (1 - \phi_1 L - \phi_2 L^2) Z_t = a_t
  2. (1 - \lambda_1 L)(1 - \lambda_2 L) Z_t = (1 - (\lambda_1 + \lambda_2) L + \lambda_1 \lambda_2 L^2) Z_t
  3. Z_t = \mu + a_t - \theta a_{t-1} = \mu + (1 - \theta L) a_t
  4. (1 - L) Z_t = a_t
Appendix: Inverse Operator

Definition:
  (1 - \phi L)^{-1} = \lim_{j \to \infty} (1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + ... + \phi^j L^j),
  such that (1 - \phi L)^{-1} (1 - \phi L) = L^0 (the identity operator).

Note that if |\phi| \geq 1 this definition does not hold, because the limit does not exist.

Example, AR(1):
  (1 - \phi L) Z_t = a_t
  (1 - \phi L)^{-1} (1 - \phi L) Z_t = (1 - \phi L)^{-1} a_t
  Z_t = a_t + \phi a_{t-1} + \phi^2 a_{t-2} + ...
Appendix: Inverse Operator (cont)

Suppose you have the ARMA model \phi(L) Z_t = \theta(L) a_t and want to find the MA representation Z_t = \psi(L) a_t. You could try to crank out \phi^{-1}(L) \theta(L) directly, but that's not much fun. Instead you could write
  \theta(L) a_t = \phi(L) Z_t = \phi(L) \psi(L) a_t,   hence   \theta(L) = \phi(L) \psi(L),
and match terms in L^j to make sure this works.

Example: suppose \phi(L) = (\phi_0 + \phi_1 L) and \theta(L) = (\theta_0 + \theta_1 L + \theta_2 L^2). Multiplying both polynomials and matching powers of L:
  \theta_0 = \phi_0 \psi_0
  \theta_1 = \phi_1 \psi_0 + \phi_0 \psi_1
  \theta_2 = \phi_1 \psi_1 + \phi_0 \psi_2
  0 = \phi_1 \psi_{j-1} + \phi_0 \psi_j,   j \geq 3,
which you can easily solve recursively for the \psi_j. TRY IT!
Appendix: Factoring Lag Polynomials

Suppose we need to invert the polynomial
  \phi(L) = (1 - \phi_1 L - \phi_2 L^2).
We can do that by factoring it:
  (1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L)   with
  \lambda_1 \lambda_2 = -\phi_2
  \lambda_1 + \lambda_2 = \phi_1

Now we need to invert each factor and multiply:
  (1 - \lambda_1 L)^{-1} (1 - \lambda_2 L)^{-1} = (\sum_{j=0}^{\infty} \lambda_1^j L^j)(\sum_{j=0}^{\infty} \lambda_2^j L^j)
    = 1 + (\lambda_1 + \lambda_2) L + ... = \sum_{j=0}^{\infty} (\sum_{k=0}^{j} \lambda_1^k \lambda_2^{j-k}) L^j

Check the last expression!
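A numeric check of the factorization and of the product-of-geometric-series coefficients (\lambda_1, \lambda_2 are arbitrary illustrative values with |\lambda_i| < 1):

```python
import numpy as np

lam1, lam2 = 0.8, -0.5
phi1, phi2 = lam1 + lam2, -lam1 * lam2  # (1 - phi1*L - phi2*L^2) = (1 - lam1*L)(1 - lam2*L)

# Coefficient of L^j in (1 - lam1*L)^{-1} (1 - lam2*L)^{-1}
def conv_coeff(j):
    return sum(lam1**k * lam2**(j - k) for k in range(j + 1))

# Compare with the expansion of 1/(1 - phi1*L - phi2*L^2),
# whose coefficients obey c_j = phi1*c_{j-1} + phi2*c_{j-2}
c = [1.0, phi1]
for j in range(2, 8):
    c.append(phi1 * c[j - 1] + phi2 * c[j - 2])
assert np.allclose([conv_coeff(j) for j in range(8)], c)
```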
Appendix: Partial Fraction Tricks

There is a prettier way to express the last inversion, by using the partial fraction tricks. Find the constants a and b such that
  1/((1 - \lambda_1 L)(1 - \lambda_2 L)) = a/(1 - \lambda_1 L) + b/(1 - \lambda_2 L) = (a(1 - \lambda_2 L) + b(1 - \lambda_1 L)) / ((1 - \lambda_1 L)(1 - \lambda_2 L))

The numerator on the right-hand side must be 1, so
  a + b = 1
  \lambda_2 a + \lambda_1 b = 0
Solving,
  b = \lambda_2 / (\lambda_2 - \lambda_1),   a = \lambda_1 / (\lambda_1 - \lambda_2),
so
  1/((1 - \lambda_1 L)(1 - \lambda_2 L)) = (\lambda_1/(\lambda_1 - \lambda_2)) \cdot 1/(1 - \lambda_1 L) - (\lambda_2/(\lambda_1 - \lambda_2)) \cdot 1/(1 - \lambda_2 L)
    = \sum_{j=0}^{\infty} ((\lambda_1^{j+1} - \lambda_2^{j+1}) / (\lambda_1 - \lambda_2)) L^j
Appendix: More on Invertibility

Consider an MA(1):  Z_t = \mu + (1 - \theta L) a_t.
If |\theta| < 1, (1 - \theta L)^{-1} is defined, and
  (1 - \theta L)^{-1} (Z_t - \mu) = (1 - \theta L)^{-1} (1 - \theta L) a_t = a_t
  (1 + \theta L + \theta^2 L^2 + \theta^3 L^3 + ...)(Z_t - \mu) = a_t   \Leftarrow  an AR(\infty)

Definition: An MA process is said to be invertible if it can be written as an AR(\infty).

• For an MA(1) to be invertible we require |\theta| < 1  [1 - \theta x = 0 \Rightarrow x = \theta^{-1}, |x| > 1].
• For an MA(q) to be invertible, all roots of the characteristic equation should lie outside of the unit circle.
• MA processes have an invertible and a non-invertible representation.
• Invertible representation: the optimal forecast depends on past information.
• Non-invertible representation: the forecast depends on the future!

				