					        Discrete-Time Markov Chains

Many real-world systems contain uncertainty and
     evolve over time

Stochastic processes (and Markov chains)
     are probability models for such systems.

A discrete-time stochastic process
      is a sequence of random variables
      X0, X1, X2, . . . , typically denoted by { Xn },
      where the index n takes countable discrete values (n = 0, 1, 2, . . .).



                          1
    An Example Problem -- Gambler’s Ruin
At time zero I have X0 = $2, and each day I make a $1 bet.
I win with probability p and lose with probability 1– p.
I’ll quit if I ever obtain $4 or if I lose all my money.

  Xt = amount of money I have after the bet on day t.

     So, X1 =  { 3  with probability p
               { 1  with probability 1 – p

       if Xt = 4 then Xt+1 = Xt+2 =    •••   =4
        if Xt = 0 then Xt+1 = Xt+2 =   •••   = 0.

    The possible values of Xt form the state space S = { 0, 1, 2, 3, 4 }
                              2
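A minimal simulation sketch of this process (Python, not part of the original slides; the function name and parameters are illustrative):

```python
import random

def simulate_gambler(p=0.6, x0=2, goal=4, seed=1):
    """Simulate one path X0, X1, ... of the gambler's fortune until it hits 0 or `goal`."""
    random.seed(seed)
    path = [x0]
    x = x0
    while 0 < x < goal:
        x += 1 if random.random() < p else -1   # win $1 w.p. p, lose $1 w.p. 1-p
        path.append(x)
    return path

print(simulate_gambler())   # e.g. a path such as [2, 3, 2, ...] that stops at 0 or 4
```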
    Components of Stochastic Processes

The state space of a stochastic process is
      the set of all values that the Xt’s can take.
(we will be concerned with
      stochastic processes with a finite # of states )

Time: t = 0, 1, 2, . . .
State: one of m possible values; the state space can be written as
       s = (s1, s2, . . . , sm ) or (s0, s1, . . . , sm-1)
Sequence { Xt }: each Xt takes one of the m values, so Xt ∈ S.


                               3
                Markov Chain Definition

A stochastic process { Xt } is called a Markov Chain if

Pr{ Xt+1 = j | X0 = k0, . . . , Xt-1 = kt-1, Xt = i }

       = Pr{ Xt+1 = j | Xt = i }        transition probabilities

       for every i, j, k0, . . . , kt-1 and for every t.

The future behavior of the system depends only on the
current state i and not on any of the previous states.


                                   4
       Stationary Transition Probabilities
Stationary Markov Chains
 Pr{ Xt+1 = j | Xt = i } = Pr{ X1 = j | X0 = i } for all t

           (They don’t change over time)
 We will only consider stationary Markov chains.

The one-step transition matrix for a Markov chain
      with states S = { 0, 1, 2 } is
                     p00   p01   p02
               P =   p10   p11   p12
                     p20   p21   p22
     where pij = Pr{ X1 = j | X0 = i }
                             5
          Property of Transition Matrix
If the state space S = { 0, 1, . . . , m–1} then we have

          Σj pij = 1  for all i      and      pij ≥ 0  for all i, j

     (we must                          (each transition
      go somewhere)                     has prob ≥ 0)

The stationary property assumes that these values
do not change with time




                             6
  Transition Matrix of the Gambler’s problem
At time zero I have X0 = $2, and each day I make a $1 bet.
I win with probability p and lose with probability 1– p.
I’ll quit if I ever obtain $4 or if I lose all my money.
  Xt = amount of money I have after the bet on day t.

 Transition Matrix of Gambler’s Ruin Problem

               0     1     2     3     4
         0     1     0     0     0     0
         1     1-p   0     p     0     0
         2     0     1-p   0     p     0
         3     0     0     1-p   0     p
         4     0     0     0     0     1
                           7
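A quick numerical sketch (Python/numpy assumed) that builds this matrix for an arbitrary p and checks the row-sum property from the earlier slide:

```python
import numpy as np

def gamblers_ruin_P(p):
    """One-step transition matrix on states {0, 1, 2, 3, 4}; 0 and 4 are absorbing."""
    q = 1 - p
    P = np.array([
        [1, 0, 0, 0, 0],
        [q, 0, p, 0, 0],
        [0, q, 0, p, 0],
        [0, 0, q, 0, p],
        [0, 0, 0, 0, 1],
    ], dtype=float)
    assert np.allclose(P.sum(axis=1), 1.0)   # each row must sum to 1
    assert (P >= 0).all()                    # every transition probability is nonnegative
    return P

print(gamblers_ruin_P(0.75))
```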
              State Transition Diagram

State Transition Diagram
      Node for each state,
      Arc from node i to node j if pij > 0

The state-transition diagram of Gambler’s problem

      [State-transition diagram: nodes 0, 1, 2, 3, 4.
       Arcs 1→2, 2→3, 3→4 with probability p; arcs 1→0, 2→1, 3→2 with
       probability 1–p; self-loops at 0 and 4 with probability 1.]
 Notice nodes 0 and 4 are “trapping” nodes
                                  8
               Printer Repair Problem

• Two printers are used by the Russ Center.

• When both are working in the morning, there is a 30% chance that one
  will fail by evening and a 10% chance that both will fail.

• If only one printer is working at the beginning of the day,
  there is a 20% chance that it will fail by the close of
  business.

• If neither is working in the morning, the office sends all
  work to a printing service.

• If a printer fails during the day, it can be repaired
  overnight and returned early the next day.
                                9
  States for Computer Repair Example


Index      s                State definitions
                   No printers have failed. The office
 0      s0 = (0)   starts the day with both printers
                   functioning properly.

                   One printer has failed. The office
 1      s1 = (1)   starts the day with one working
                   printer and the other in the shop
                   until the next morning.

                   Both printers have failed. All
 2      s2 = (2)   work must be sent out for the day.



                     10
Events and Probabilities for Computer Repair Example

 Index   Current state   Event                                   Probability   Next state

   0     s0 = (0)        No printer fails.                           0.6       s = (0)
                         One printer fails.                          0.3       s = (1)
                         Both printers fail.                         0.1       s = (2)

   1     s1 = (1)        Remaining printer does not fail;            0.8       s = (0)
                         the failed one is returned.
                         Remaining printer fails;                    0.2       s = (1)
                         the failed one is returned.

   2     s2 = (2)        Both printers are returned.                 1.0       s = (0)

                                 11
       State-Transition Matrix and Network
State-Transition Matrix
The major properties of a Markov chain can be
described by the m × m matrix  P = (pij).

For the printer repair example:

            0.6   0.3   0.1
      P =   0.8   0.2   0
            1     0     0

State-Transition Network:
  Node for each state,
  Arc from node i to node j if pij > 0.

For the printer repair example the network has nodes 0, 1, 2 with arcs
  0→0 (0.6), 0→1 (0.3), 0→2 (0.1), 1→0 (0.8), 1→1 (0.2), 2→0 (1).


                               12
   Market Share/Brand Switching Problem
Market Share Problem:
You are given the original market shares of three brands. The
following table gives the number of consumers that
switched from brand i to brand j in two consecutive weeks.

        Brand   (j) 1      2        3      Total

          (i)

          1       90       7        3      100

          2        5      205      40      250

          3       30       18      102     150
        Total     125     230      145     500

How to model the problem as a stochastic process ?
                           13
      Empirical Transition Probabilities for
              Brand Switching, pij
Transition Matrix

  Brand   (j) 1                  2               3

    (i)

    1     90/100 = 0.90    7/100 = 0.07     3/100 = 0.03

    2      5/250 = 0.02   205/250 = 0.82   40/250 = 0.16

    3     30/150 = 0.20   18/150 = 0.12    102/150 = 0.68




                            14
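These empirical probabilities are simply the row-normalized count table; a short sketch of the computation (numpy assumed, array layout illustrative):

```python
import numpy as np

# Week-to-week brand switching counts (rows: from brand 1,2,3; columns: to brand 1,2,3)
counts = np.array([[90,   7,   3],
                   [ 5, 205,  40],
                   [30,  18, 102]], dtype=float)

P = counts / counts.sum(axis=1, keepdims=True)   # divide each row by its row total
print(np.round(P, 2))
# expected (from the slide): [[0.90 0.07 0.03]
#                             [0.02 0.82 0.16]
#                             [0.20 0.12 0.68]]
```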
                 Assumption Revisited
• Markov Property
  Pr{ Xt+1 = j | X0 = k0, . . . , Xt-1 = kt-1, Xt = i }

         = Pr{ Xt+1 = j | Xt = i }    transition probabilities

         for every i, j, k0, . . . , kt-1 and for every t.

• Stationary Property
   Pr{ Xt+1 = j | Xt = i } = Pr{ X1 = j | X0 = i } for all t

              (They don’t change over time)
  We will only consider stationary Markov chains.


                                15
     Transform a Process to a Markov Chain
  Sometimes a non-Markovian stochastic process can
  be transformed into a Markov chain by expanding
  the state space.
Example: Suppose that the chance of rain tomorrow
depends on the weather conditions for the previous two days
(yesterday and today).

Specifically,
P{ rain tomorrow | rain last 2 days (RR) }               = .7
P{ rain tomorrow | rain today but not yesterday (NR) }   = .5
P{ rain tomorrow | rain yesterday but not today (RN) }   = .4
P{ rain tomorrow | no rain in last 2 days (NN) }         = .2

Does the Markovian Property Hold ??
                            16
         The Weather Prediction Problem
How to model this problem as a Markovian Process ??

 The state space:        0 (RR) 1 (NR) 2(RN) 3(NN)

 The transition matrix:
                     0 (RR)   1 (NR)   2 (RN)   3 (NN)
      0 (RR)          .7        0        .3       0
      1 (NR)          .5        0        .5       0
 P =  2 (RN)          0         .4       0        .6
      3 (NN)          0         .2       0        .8

This is a Discrete Time Markov Process

                                   17
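A small sketch encoding this expanded-state chain and using it, for example, to compute the chance of rain two days ahead given rain on both of the last two days (Python/numpy assumed):

```python
import numpy as np

# States: 0 = RR (rain yesterday & today), 1 = NR, 2 = RN, 3 = NN
P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])

P2 = np.linalg.matrix_power(P, 2)
# "Rain two days from now" means landing in a state where it rains "today": RR or NR
prob_rain_in_2_days_given_RR = P2[0, 0] + P2[0, 1]
print(round(prob_rain_in_2_days_given_RR, 3))   # 0.7*0.7 + 0.3*0.4 = 0.61
```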
        Repair Operation Takes Two Days

One repairman, and repairs now take two days per machine.
       A new state definition is required: s = (s1, s2)
s1 = number of days the first machine has been in the shop
s2 = number of days the second machine has been in the shop

For s1, assign 0 if 1st machine has not failed
              1 if it is in the first day of repair
              2 if it is in the second day of repair
For s2, assign 0 or 1


                               18
  State Definitions for 2-Day Repair Times

Index        s                  State definitions

 0       s0 = (0, 0)   No machines have failed.

 1       s1 = (1, 0)   One machine has failed and is in
                       the first day of repair.

 2       s2 = (2, 0)   One machine has failed and is in
                       the second day of repair.

 3       s3 = (1, 1)   Two machines have failed and one
                       is in the first day of repair.

 4       s4 = (2, 1)   Two machines have failed and one
                       is in the second day of repair.

                         19
State-Transition Matrix for 2-Day Repair Times


              0     1     2     3     4
         0    0.6   0.3   0     0.1   0
         1    0     0     0.8   0     0.2
    P =  2    0.8   0.2   0     0     0
         3    0     0     0     0     1
         4    0     1     0     0     0



                      20
           Choosing Balls from an Urn
An urn contains two unpainted balls at present. We
choose a ball at random and flip a coin.

If the chosen ball is unpainted and the coin comes
up heads, we paint the chosen unpainted ball red

If the chosen ball is unpainted and the coin comes up
tails, we paint the chosen unpainted ball blue.
If the ball has already been painted, then (whether
heads or tails has been tossed), we change the color of
the ball (from red to blue or from blue to red)

Model this problem as a Discrete Time Markov Chain
(represent it using state diagram & transition matrix)
                            21
         Multi-step (t-step) Transitions

Example: IRS auditing problem:

Assume that whether a taxpayer is audited by the IRS
in year n + 1 depends only on whether he
was audited in year n.
• If he is not audited in year n, he will not be audited next year
with prob 0.6, and will be audited with prob 0.4
• If he is audited in year n, he will be audited next year with
prob 0.5, and will not be audited with prob 0.5


How to model this problem as a stochastic process ?

                           22
             An IRS Auditing Problem

State Space: Two states: s0 = 0 (no audit), s1 = 1 (audit)
                                 0      1
    Transition matrix   P =  0   0.6    0.4
                             1   0.5    0.5
The transition matrix P gives the probabilities of one-step transitions.
How do we calculate the probabilities for transitions
involving more than one step?

Notice:      p01 = 0.4 is the conditional probability of an
             audit next year given no audit this year,
     i.e.,   p01 = Pr( X1 = 1 | X0 = 0 )
                            23
            2-Step Transition Probabilities
Let pij(2) be the probability of going from i to j in 2 steps.
Suppose i = 0, j = 0; then

 p(x2 = 0 | x0 = 0) = p(x1 = 1 | x0 = 0) · p(x2 = 0 | x1 = 1)

                    + p(x1 = 0 | x0 = 0) · p(x2 = 0 | x1 = 0)

           p00(2) = p01 p10 + p00 p00

Similarly  p01(2) = p01 p11 + p00 p01
           p10(2) = p10 p00 + p11 p10
           p11(2) = p10 p01 + p11 p11

In matrix form,      P(2) = P · P
                             24
          n-Step Transition Probabilities
This idea generalizes to an arbitrary number of steps:

                     P(3) = P(2) P = P2 P = P3
             or, more generally,     P(n) = P(m) P(n–m)

The ij'th entry of this relation is

        pij(n) =  Σ k∈S  pik(m) pkj(n–m) ,       1 ≤ m ≤ n–1


                             Chapman-Kolmogorov Equations

 “The probability of going from i to k in m steps and then
 going from k to j in the remaining n–m steps,
 summed over all possible intermediate states k”
                                25
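A quick numerical check of the Chapman-Kolmogorov relation on the IRS chain from the earlier slide (a sketch, numpy assumed):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

P4 = np.linalg.matrix_power(P, 4)
# Chapman-Kolmogorov: P(4) = P(1) P(3) = P(2) P(2)
assert np.allclose(P4, P @ np.linalg.matrix_power(P, 3))
assert np.allclose(P4, np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 2))
print(np.round(P4, 4))   # matches the t = 4 matrix shown a few slides below
```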
         Transition Probabilities for n Steps

Property 1: Let {Xn : n = 0, 1, . . .} be a Markov chain
          with state space S and state-transition
          matrix P. Then for i and j ∈ S, and n = 1, 2, . . .

                   Pr{ Xn = j | X0 = i } = pij(n),

          where the right-hand side represents the ij-th
          element of the matrix P(n), and

                  P(n) = P × P × … × P    (n factors)
                             26
  Conditional vs. Unconditional Probabilities

Let the state space be S = {1, 2, . . . , m}.
Let pij(t) be the conditional t-step transition probability, the ij-th entry of P(t).

Let q(t) = (q1(t), . . . , qm(t)) be the vector of all unconditional
probabilities for the m states after t transitions.

Perform the following calculations:

                q(t) = q(0) P(t)      or      q(t) = q(t–1) P

where q(0) is the vector of initial unconditional probabilities.
The components of q(t) are called the transient
probabilities.

                                      27
              Brand Switching Example
The initial unconditional probabilities qi(0) can be obtained by dividing
the number of customers using brand i by the total sample size:
q(0) = (125/500, 230/500, 145/500) = (0.25, 0.46, 0.29)

To predict market shares for, say, 2 weeks into the future,
we simply apply the equation with t = 2:

       q(2) = q(0) P(2) = q(0) P²

                                     0.90  0.07  0.03  ²
        q(2) = (0.25, 0.46, 0.29)    0.02  0.82  0.16
                                     0.20  0.12  0.68

            = (0.327, 0.406, 0.267)
            = expected market shares for brands 1, 2, 3
                               28
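The same two-week forecast, reproduced numerically (a sketch, numpy assumed):

```python
import numpy as np

P = np.array([[0.90, 0.07, 0.03],
              [0.02, 0.82, 0.16],
              [0.20, 0.12, 0.68]])
q0 = np.array([0.25, 0.46, 0.29])

q2 = q0 @ np.linalg.matrix_power(P, 2)   # q(2) = q(0) P^2
print(np.round(q2, 3))                   # [0.327 0.406 0.267], as on the slide
```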
        Steady-State Solutions – n Steps
What happens as t gets large? Consider the IRS example.

           Time, t    Transition matrix, P(t)

             1            0.6      0.4
                          0.5      0.5

             2            0.56     0.44
                          0.55     0.45

             3            0.556    0.444
                          0.555    0.445

             4            0.5556   0.4444
                          0.5555   0.4445

                           29
Steady-State Solutions – n Steps
  the IRS example -- Continued.
  Time, t   Transition matrix, P(t)

    5            0.5555   0.4445
                 0.5555   0.4445

    6            0.5555   0.4445
                 0.5555   0.4445

    7            0.5555   0.4445
                 0.5555   0.4445

    8            0.5555   0.4445
                 0.5555   0.4445
                 30
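The convergence in these tables can be reproduced by iterating P(t) = P(t–1) P; a sketch (numpy assumed):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

Pt = P.copy()
for t in range(1, 9):
    if t > 1:
        Pt = Pt @ P                  # P(t) = P(t-1) P
    print(t, np.round(Pt, 4))        # rows approach (5/9, 4/9) = (0.5556, 0.4444)
```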
        Steady State Transition Probability
  Observation: as n gets large, the rows of the matrix P(n)
  become identical; the entries asymptotically
  approach steady-state values.

  What does it mean?

    The probability of being in any future state becomes
    independent of the initial state as time progresses.

 πj = lim n→∞ Pr{ Xn = j | X0 = i } = lim n→∞ pij(n)   for all i and j

  These asymptotic values are called
                      Steady-State Probabilities
                             31
       Compute Steady-State Probabilities
Let π = (π1, π2, . . . , πm) be the m-dimensional row
vector of steady-state (unconditional) probabilities for
the state space S = {1,…,m}.

Brand switching example:

                                       0.90  0.07  0.03
   (π1, π2, π3) = (π1, π2, π3)         0.02  0.82  0.16
                                       0.20  0.12  0.68

          π1 + π2 + π3 = 1,
          π1 ≥ 0, π2 ≥ 0, π3 ≥ 0

Solve the linear system:  π = πP,   Σj πj = 1,   πj ≥ 0,  j = 1,…,m

                                            32
       Steady-State Equations for Brand
              Switching Example

   π1 = 0.90π1 + 0.02π2 + 0.20π3
   π2 = 0.07π1 + 0.82π2 + 0.12π3
                                     Total of 4 equations in
   π3 = 0.03π1 + 0.16π2 + 0.68π3     3 unknowns.
   π1 + π2 + π3 = 1
    π1  0, π2  0, π3  0

Discard 3rd equation and solve the remaining system
We get :
             π1 = 0.474, π2 = 0.321, π3 = 0.205
 Recall:    q1(0) = 0.25, q2(0) = 0.46, q3(0) = 0.29
                             33
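A sketch of solving π = πP together with the normalization condition as one linear system (numpy assumed; replacing one redundant balance equation by Σπj = 1, as the slide describes):

```python
import numpy as np

P = np.array([[0.90, 0.07, 0.03],
              [0.02, 0.82, 0.16],
              [0.20, 0.12, 0.68]])
m = P.shape[0]

# pi (P - I) = 0  rewritten as  (P - I)^T pi^T = 0; replace the last (redundant)
# equation by the normalization  sum(pi) = 1.
A = (P - np.eye(m)).T
A[-1, :] = 1.0
b = np.zeros(m)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(np.round(pi, 3))   # approximately [0.474 0.321 0.205], as on the slide
```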
       Comments on Steady-State Results
1. Steady-state predictions are never achieved in
   actuality due to a combination of
  (i) errors in estimating P,
  (ii) changes in P over time, and
  (iii) changes in the nature of dependence
  relationships among the states.
Nevertheless, the use of steady-state values is an
  important diagnostic tool for the decision maker.

2. Steady-state probabilities might not exist unless
   the Markov chain is ergodic.

                           34
           A Steady State Does Not Always Exist
               -- Gambler’s Ruin Example

For the Gambler’s Problem, assume p = 0.75, t = 30

                  0      1     2     3     4
           0      1      0     0     0     0
           1   .325      0     0     0   .675
P(30) =    2      .1     0     0     0     .9
           3   .025      0     0     0   .975
           4      0      0     0     0     1


 What does this matrix mean?

A Steady State Probability Does Not Exist in This Case

                          35
         Existence of Steady-State Probabilities

A Markov chain is ergodic if it is aperiodic and allows
the achievement of any future state from any initial
state after one or more transitions. If these conditions
hold, then
                      j  lim pijt )
                                (
                         t 


  For example,                   State-transition network

         0.8 0 0.2
     P  0.4 0.3 0.3
                                    1              2
                    
          0 0.9 0.1
                    
                                           3


                            36
 Classification of States in Markov Chain

Example        1   0.4  0.6  0    0    0
               2   0.5  0.5  0    0    0
          P =  3   0    0    0.3  0.7  0
               4   0    0    0.5  0.4  0.1
               5   0    0    0    0.8  0.2

   [State-transition network: 1↔2 (arcs .6 and .5, self-loops .4 and .5);
    3→4 (.7), 4→3 (.5), 4→5 (.1), 5→4 (.8); self-loops .3, .4, .2 on 3, 4, 5]

                           37
A state j is accessible from state i if pij(t) > 0 for some t > 0.

      In example, state 2 is accessible from state 1
            & state 3 is accessible from state 5
            but state 3 is not accessible from state 2.

States i and j communicate if i is accessible from j
and j is accessible from i.

      States 1 & 2 communicate; also
              states 3, 4 & 5 communicate.
      States 2 & 4 do not communicate

States 1 & 2 form one communicating class.
    States 3, 4 & 5 form a 2nd communicating class.
                                38
 If all states in a Markov chain communicate
    (i.e., all states are members of the same communicating class)
            then the chain is irreducible.

 The current example is not an irreducible Markov chain.
How about the Gambler’s Ruin Problem?
    Not an irreducible Markov Chain. It
            has 3 classes: {0}, {1, 2, 3} and {4}.

Let fii = probability that the process will return to state i
           (eventually) given that it starts in state i.

      If fii = 1 then state i is called recurrent.

      If fii < 1 then state i is called transient.
                                 39
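Communicating classes can be found mechanically from mutual accessibility; a sketch for the 5-state example above (numpy assumed; states labelled 1–5 as on the slides):

```python
import numpy as np

P = np.array([[0.4, 0.6, 0,   0,   0  ],
              [0.5, 0.5, 0,   0,   0  ],
              [0,   0,   0.3, 0.7, 0  ],
              [0,   0,   0.5, 0.4, 0.1],
              [0,   0,   0,   0.8, 0.2]])

n = P.shape[0]
reach = (P > 0).astype(int) + np.eye(n, dtype=int)   # one-step accessibility (plus self)
for _ in range(n):
    reach = ((reach @ reach) > 0).astype(int)        # transitive closure

communicate = (reach > 0) & (reach.T > 0)            # i and j communicate if each reaches the other
classes = {frozenset((np.flatnonzero(communicate[i]) + 1).tolist()) for i in range(n)}
print([sorted(c) for c in classes])                  # two classes: [1, 2] and [3, 4, 5]
```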
If pii = 1 then state i is called an absorbing state.

   Above example has no absorbing states
   States 0 & 4 are absorbing in Gambler’s Ruin problem.

The period of a state i is the largest n > 1 such that
     all paths leading back to i have a length that is
     a multiple of n;
                i.e., pii(t) = 0 unless t = n, 2n, 3n, . . .

If a process can return to state i at both time t and time t + 1
(for some t), having started in state i, then state i is aperiodic.

Each of the states in the current example is aperiodic

                               40
        Example of Periodicity - Gambler’s Ruin

   States 1, 2 and 3 each have period 2.

                 0     1      2     3      4
          0      1     0      0     0      0
          1      1-p   0      p     0      0
          2      0     1-p    0     p      0
          3      0     0      1-p   0      p
          4      0     0      0     0      1

If all states in a Markov chain are
        recurrent, aperiodic, & the chain is irreducible
                    then it is ergodic.

                             41
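The period of a state can be computed as the gcd of the step counts at which return is possible; a sketch applied to the Gambler's Ruin chain (numpy assumed; p = 0.5 is used only for illustration):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_steps=50):
    """gcd of all n <= max_steps with P^n[i, i] > 0 (returns 0 if no return is seen)."""
    returns = []
    Pn = np.eye(P.shape[0])
    for n in range(1, max_steps + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:        # a return to i in exactly n steps is possible
            returns.append(n)
    return reduce(gcd, returns, 0)

p = 0.5
P = np.array([[1,   0,   0,   0,   0],
              [1-p, 0,   p,   0,   0],
              [0,   1-p, 0,   p,   0],
              [0,   0,   1-p, 0,   p],
              [0,   0,   0,   0,   1]])

print([period(P, i) for i in range(5)])   # [1, 2, 2, 2, 1]: states 1, 2, 3 have period 2
```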
       Classification of States (continued)
An absorbing state is one that locks in the system once it
enters.
      [Diagram: states 0, 1, 2, 3, 4 in a line; arc ai from state i to i+1
       and arc di from state i to i–1, for i = 1, 2, 3.]



 This diagram might represent the wealth of a gambler
 who begins with $2 and makes a series of wagers for
 $1 each.
 Let ai be the event of winning in state i and di the event
 of losing in state i.
 There are two absorbing states: 0 and 4.
                                    42
              Illustration of Concepts
Example 1
            State   0   1   2   3
              0     0   X   X   0
              1     X   0   0   0
              2     0   0   0   X
              3     X   0   0   X

   [State-transition network on nodes 0, 1, 2, 3 with an arc i→j wherever
    the table shows X]




 Every pair of states communicates, forming a single
 recurrent class; however, the states are not periodic.
 Thus the stochastic process is aperiodic and
 irreducible.

                                    43
               Illustration of Concepts
Example 2
             State   0   1   2   3   4
               0     X   X   0   0   0
               1     X   X   0   0   0
               2     0   0   X   0   0
               3     0   0   X   X   0
               4     X   0   0   0   0

   [State-transition network on nodes 0, 1, 2, 3, 4 with an arc i→j wherever
    the table shows X]



States 0 and 1 communicate and form a recurrent class.
States 3 and 4 form separate transient classes.
State 2 is an absorbing state and forms a recurrent class.

                                         44
               Illustration of Concepts
Example 3
                State   0   1   2   3
                  0     0   X   X   0
                  1     0   0   0   X
                  2     0   0   0   X
                  3     X   0   0   0

   [State-transition network on nodes 0, 1, 2, 3 with an arc i→j wherever
    the table shows X]




   Every state communicates with every other state,
   so we have an irreducible stochastic process.

  Periodic? Yes: every return to a state takes a multiple of 3 steps,
             so the Markov chain is irreducible and periodic.

                                        45
         Existence of Steady-State Probabilities

A Markov chain is ergodic if it is aperiodic and allows
the achievement of any future state from any initial
state after one or more transitions. If these conditions
hold, then
                      j  lim pijt )
                                (
                         t 


  For example,                   State-transition network

         0.8 0 0.2
     P  0.4 0.3 0.3
                                    1              2
                    
          0 0.9 0.1
                    
                                           3

   Conclusion: chain is ergodic.
                            46
                       Game of Craps
 The Game of Craps in Las Vegas is played as follows:

The player rolls a pair of dice and sums the numbers showing.
   A total of 7 or 11 on the first roll wins for the player,
   while a total of 2, 3, or 12 loses.
   Any other number is called the point.

The player rolls the dice again.

   If he/she rolls the point number, he/she wins
   If he/she rolls a 7, he/she loses
   Any other number requires another roll

 The game continues until he/she wins or loses
                                47
      Game of Craps as a Markov Chain

                All the possible states:

      Start,  Win,  Lose,  and the continuing point states
      P4, P5, P6, P8, P9, P10   (Pk = the point is k and the game continues)

                          48
                    Game of Craps Network

  [State-transition network: from Start, arcs to Win (7, 11), Lose (2, 3, 12),
   and to each point state P4, P5, P6, P8, P9, P10.
   Each point state Pk has a self-loop labeled "not (k, 7)", an arc to Win
   labeled k, and an arc to Lose labeled 7.]
                                          49
                                       Game of Craps
  Sum     2      3      4      5      6      7      8      9      10     11     12

  Prob.  0.028  0.056  0.083  0.111  0.139  0.167  0.139  0.111  0.083  0.056  0.028



Probability of win on the first roll  = Pr{ 7 or 11 }  = 0.167 + 0.056 ≈ 0.222
Probability of loss on the first roll = Pr{ 2, 3, 12 } = 0.028 + 0.056 + 0.028 ≈ 0.111

            Start    Win     Lose    P4      P5      P6      P8      P9      P10

    Start     0     0.222   0.111   0.083   0.111   0.139   0.139   0.111   0.083
    Win       0       1       0       0       0       0       0       0       0
    Lose      0       0       1       0       0       0       0       0       0
    P4        0     0.083   0.167   0.75    0       0       0       0       0
P=  P5        0     0.111   0.167   0       0.722   0       0       0       0
    P6        0     0.139   0.167   0       0       0.694   0       0       0
    P8        0     0.139   0.167   0       0       0       0.694   0       0
    P9        0     0.111   0.167   0       0       0       0       0.722   0
    P10       0     0.083   0.167   0       0       0       0       0       0.75
                                                               50
             Transient Probabilities for Craps

 Roll no. Start   Win     Lose     P4          P5      P6      P8      P9     P10
    0       1      0       0       0            0      0       0       0       0
    1       0     0.222   0.111   0.083    0.111      0.139   0.139   0.111   0.083
    2       0     0.299   0.222   0.063        0.08   0.096   0.096   0.080   0.063
    3       0     0.354   0.302   0.047    0.058      0.067   0.067   0.058   0.047
    4       0     0.394   0.359   0.035    0.042      0.047   0.047   0.042   0.035
    5       0     0.422   0.400   0.026    0.030      0.032   0.032   0.030   0.026




Recall: this is not an ergodic Markov chain,
  so where you start matters.

                                          51
Absorbing State Probabilities for Craps



     Initial     Win        Lose
      state
      Start     0.493      0.507
       P4       0.333      0.667
       P5       0.400      0.600
       P6       0.455      0.545
       P8       0.455      0.545
       P9       0.400      0.600
      P10       0.333      0.667


                  52
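The Start row of this table can be reproduced by building the craps matrix from the dice probabilities and raising it to a large power; a sketch (numpy assumed; state ordering and names are illustrative):

```python
import numpy as np

# Probability of each dice total 2..12 (out of 36 equally likely outcomes)
p_sum = {s: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s) / 36
         for s in range(2, 13)}

states = ['Start', 'Win', 'Lose', 4, 5, 6, 8, 9, 10]
idx = {s: i for i, s in enumerate(states)}
P = np.zeros((9, 9))
P[idx['Win'], idx['Win']] = 1.0
P[idx['Lose'], idx['Lose']] = 1.0
P[idx['Start'], idx['Win']] = p_sum[7] + p_sum[11]            # 7 or 11 on the first roll
P[idx['Start'], idx['Lose']] = p_sum[2] + p_sum[3] + p_sum[12]
for point in (4, 5, 6, 8, 9, 10):
    P[idx['Start'], idx[point]] = p_sum[point]
    P[idx[point], idx['Win']] = p_sum[point]                   # make the point
    P[idx[point], idx['Lose']] = p_sum[7]                      # roll a 7 and lose
    P[idx[point], idx[point]] = 1 - p_sum[point] - p_sum[7]    # keep rolling

Pt = np.linalg.matrix_power(P, 200)                            # t large enough to absorb
print(round(Pt[idx['Start'], idx['Win']], 3),                  # ~0.493
      round(Pt[idx['Start'], idx['Lose']], 3))                 # ~0.507
```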
          Interpretation of Steady-State Conditions

1. Just because an ergodic system has steady-state probabilities
   does not mean that the system “settles down” into any one state.
2. πj is simply the likelihood of finding the system in state j after a
   large number of steps.

3. The limiting probability πj that the process is in state j after a
   large number of steps also equals the long-run proportion of
   time that the process will be in state j.

4. When the Markov chain is finite, irreducible and periodic, we
   still have the result that the πj, j ∈ S, uniquely solve the
   steady-state equations, but now πj must be interpreted as the
   long-run proportion of time that the chain is in state j.
                                 53
           Insurance Company Example

 An insurance company charges customers annual
       premiums based on their accident history
       in the following fashion:

 No accident in last 2 years:     $250 annual premium
 Accidents in each of last 2 years: $800 annual premium
 Accident in only 1 of last 2 years: $400 annual premium

Historical statistics:
1. If a customer had an accident last year then they
   have a 10% chance of having one this year;
2. If they had no accident last year then they have a
   3% chance of having one this year.
                            54
Find the steady-state probability and the long-run
       average annual premium paid by the customer.

Solution approach: Construct a Markov chain with four
states: (N, N), (N, Y), (Y, N), (Y,Y) where these indicate
(accident last year, accident this year).

                       (N, N)   (N, Y)   (Y, N)   (Y, Y)
            (N,   N)   .97      .03      0        0
            (N,   Y)   0        0        .90      .10
       P=
            (Y,   N)   .97      .03      0        0
            (Y,   Y)   0        0        .90      .10




                                   55
    State-Transition Network for Insurance Company


  [State-transition network: (N,N) self-loop .97, (N,N)→(N,Y) .03,
   (N,Y)→(Y,N) .90, (N,Y)→(Y,Y) .10, (Y,N)→(N,N) .97, (Y,N)→(N,Y) .03,
   (Y,Y)→(Y,N) .90, (Y,Y) self-loop .10]



This is an ergodic Markov chain:
   All states communicate (irreducible);
   Each state is recurrent (you will return, eventually);
   Each state is aperiodic.


                                          56
 Solving the steady-state equations:

         π(N,N) = 0.97 π(N,N) + 0.97 π(Y,N)
         π(N,Y) = 0.03 π(N,N) + 0.03 π(Y,N)
         π(Y,N) = 0.9 π(N,Y) + 0.9 π(Y,Y)
         π(N,N) + π(N,Y) + π(Y,N) + π(Y,Y) = 1

 Solution:
 π(N,N) = 0.939, π(N,Y) = 0.029, π(Y,N) = 0.029, π(Y,Y) = 0.003

  & the long-run average annual premium is
 0.939*250 + 0.029*400 + 0.029*400 + 0.003*800 ≈ $260.5


                                   57
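A numerical check of the steady-state probabilities and the long-run premium (a sketch, numpy assumed; state order (N,N), (N,Y), (Y,N), (Y,Y)):

```python
import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])
premium = np.array([250, 400, 400, 800])   # annual premium charged in each state

A = (P - np.eye(4)).T
A[-1, :] = 1.0                             # replace one redundant equation by sum(pi) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(np.round(pi, 3))                     # ~[0.939 0.029 0.029 0.003]
print(round(pi @ premium, 2))              # ~260.48 (the slide rounds this to 260.5)
```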
                First Passage Times

Let ij = expected number of steps to transition
             from state i to state j

If the probability that we will eventually visit state j
    given that we start in i is less than one then
    we will have ij = +.

For example, in the Gambler’s Ruin problem,
   20 = + because there is a positive probability
   that we will be absorbed in state 4 given that we
   start in state 2 (and hence visit state 0).


                            58
     Computations for All States Recurrent
If the probability of eventually visiting state j given
   that we start in i is 1, then the expected number
   of steps until we first visit j is given by

        μij = 1 + Σ r≠j  pir μrj ,    for i = 0, 1, . . . , m–1



It will always take     We go from i to r in the first step
at least one step.      with probability pir, and then it takes μrj
                        steps to get from r to j.

For fixed j, this is a linear system of m equations in the
m unknowns μij , i = 0, 1, . . . , m–1.
                             59
 First-Passage Analysis for Insurance Company
Suppose that we start in state (N,N) and want to find
the expected number of years until we have accidents
in two consecutive years (Y,Y).

This transition will occur with probability 1, eventually.
For convenience number the states
        0     1     2     3
      (N,N) (N,Y) (Y,N) (Y,Y)

  Then,     μ03 = 1 + p00 μ03 + p01 μ13 + p02 μ23

            μ13 = 1 + p10 μ03 + p11 μ13 + p12 μ23
            μ23 = 1 + p20 μ03 + p21 μ13 + p22 μ23
                             60
                     (N, N)   (N, Y)   (Y, N)   (Y, Y)
           0 (N, N)   .97      .03      0        0
           1 (N, Y)   0        0        .90      .10
 Using P = 2 (Y, N)   .97      .03      0        0
           3 (Y, Y)   0        0        .90      .10


               μ03 = 1 + 0.97 μ03 + 0.03 μ13
               μ13 = 1 + 0.9 μ23
               μ23 = 1 + 0.97 μ03 + 0.03 μ13

   Solution: μ03 = 343.3, μ13 = 310, μ23 = 343.3
   So, on average it takes 343.3 years to transition
   from (N,N) to (Y,Y).

                                 61
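The three first-passage equations form a small linear system; a sketch that solves it (numpy assumed; the unknown vector is (μ03, μ13, μ23)):

```python
import numpy as np

# mu03 = 1 + 0.97 mu03 + 0.03 mu13
# mu13 = 1 + 0.90 mu23
# mu23 = 1 + 0.97 mu03 + 0.03 mu13
# Rewritten as  A mu = (1, 1, 1):
A = np.array([[1 - 0.97, -0.03,  0.00],
              [ 0.00,     1.00, -0.90],
              [-0.97,    -0.03,  1.00]])
mu = np.linalg.solve(A, np.ones(3))
print(np.round(mu, 1))   # [343.3 310.  343.3]
```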
            Expected First Passage Times
 The average time it takes to reach other states

              From:  0 (N,N)   1 (N,Y)   2 (Y,N)   3 (Y,Y)
       To  0 (N,N)     1.06      2.18      1.06      2.18
           1 (N,Y)     33.3      34.4      33.3      34.4
           2 (Y,N)     34.4      1.11      34.4      1.11
           3 (Y,Y)     343.3     310       343.3     310

       Steady state    0.938     0.029     0.029     0.003

Recurrence time: the first passage time from a state back to itself.
              It is the inverse of the steady-state probability:
                         μii = 1 / πi
                              62
                 Absorbing States
An absorbing state is a state j with pjj = 1.

Given that we start in state i, we can calculate the
probability of being absorbed in state j.

We essentially performed this calculation for the
Gambler’s Ruin problem by finding
         P(t) = (pij(t)) for large t.

But we can use a more efficient analysis
like that used for calculating first passage times.


                             63
Let 0, 1, . . . , k be transient states and
    k + 1, . . . , m – 1 be absorbing states.

Let qij = probability of being absorbed in state j
          given that we start in transient state i.

Then for each j we have the following relationship
                 qij = pij +  Σ r=0..k  pir qrj ,    i = 0, 1, . . . , k


         Go directly to j        Go to r first and then from r to j

  For fixed j (an absorbing state) we have k+1 linear
  equations in the k+1 unknowns qij , i = 0, 1, . . . , k.
                                  64
                Absorbing States – Gambler’s Ruin
Suppose that we start with $2 and want to calculate the
probability of going broke, i.e., of being absorbed in state 0.

We know q00 = 1 and q40 = 0, thus
        q20 = p20 + p21 q10 + p22 q20 + p23 q30 (+ p24 q40)
        q10 = p10 + p11 q10 + p12 q20 + p13 q30 + 0
        q30 = p30 + p31 q10 + p32 q20 + p33 q30 + 0

where               0     1     2     3      4
             0      1     0     0     0      0
             1      1-p   0     p     0      0
     P=      2      0     1-p   0     p      0
             3      0     0     1-p   0      p
             4      0     0     0     0      1
                              65
       Solution to Gambler’s Ruin Example

Now we have three equations with three unknowns.
Using p = 0.75 (probability of winning a single bet)
we have
          q20 = 0 + 0.25 q10 + 0.75 q30
          q10 = 0.25 + 0.75 q20

          q30 = 0 + 0.25 q20
Solving yields q10 = 0.325, q20 = 0.1, q30 = 0.025

(This is consistent with the values found earlier.)

                          66
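The same three absorption equations solved mechanically for general p, shown with p = 0.75 (a sketch, numpy assumed):

```python
import numpy as np

def ruin_probabilities(p):
    """q_i0 = Pr{absorbed at 0 | start in state i}, for the transient states i = 1, 2, 3."""
    q = 1 - p
    # q10 = q + p*q20 ;  q20 = q*q10 + p*q30 ;  q30 = q*q20
    A = np.array([[ 1, -p,  0],
                  [-q,  1, -p],
                  [ 0, -q,  1]], dtype=float)
    b = np.array([q, 0.0, 0.0])
    return np.linalg.solve(A, b)

print(np.round(ruin_probabilities(0.75), 3))   # [0.325 0.1   0.025], as on the slide
```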
            Applications of Markov Chain
                      (Reading)
  Linear Programming
                    Min cx
                Subject to Ax = b, x ≥ 0
     where A is an m × n matrix: n variables, m constraints, n ≥ m

Simplex algorithm:
    Search along the boundary for improving extreme points (vertices).

 There might be as many as C(n, m) vertices, an exponential number.

Does this mean that the simplex algorithm, on average, needs an
exponential number of iterations?

 NO; see a Markov Chain Model for the Simplex Algorithm
                     (Handout)
                         67

				