Al-Imam Mohammad Ibn Saud University

CS433: Modeling and Simulation
Lecture 06 – Part 01
Discrete Markov Chains

12 Apr 2009                              Dr. Anis Koubâa
Goals for Today

 • Understand what a stochastic process is
 • Understand the Markov property
 • Learn how to use Markov chains for modelling stochastic processes
The overall picture …

 • Markov Process
 • Discrete Time Markov Chains
   – Homogeneous and non-homogeneous Markov chains
   – Transient and steady state Markov chains
 • Continuous Time Markov Chains
   – Homogeneous and non-homogeneous Markov chains
   – Transient and steady state Markov chains
Markov Process
• Stochastic Process
• Markov Property
What is “Discrete Time”?

 [Figure: a time axis marked at the discrete instants 0, 1, 2, 3, 4]

 Events occur at specific points in time.
What is a “Stochastic Process”?

 State space = {SUNNY, RAINY}

 X(day i) = “S” or “R”: a RANDOM VARIABLE that varies with the DAY

 [Figure: a timeline of Day 1 (THU) through Day 7 (WED) with the observed
  values X(day 1) = “S”, X(day 2) = “S”, X(day 3) = “R”, X(day 4) = “S”,
  X(day 5) = “R”, X(day 6) = “S”, X(day 7) = “S”]

 X(day i) IS A STOCHASTIC PROCESS
 X(day i): status of the weather observed each DAY
Markov Processes

 • Stochastic process: X(t) is a random variable that varies with time.
 • A state of the process is a possible value of X(t).
 • Markov Process
   – The future of a process does not depend on its past, only on its present.
   – A Markov process is a stochastic (random) process in which the probability
     distribution of the current value is conditionally independent of the
     series of past values, a characteristic called the Markov property.
   – Markov property: the conditional probability distribution of future states
     of the process, given the present state and all past states, depends only
     upon the present state and not on any past states.
 • Markov Chain: a discrete-time stochastic process with the Markov property.
What is the “Markov Property”?

   Pr{X(day 6) = “S” | X(day 5) = “R”, X(day 4) = “S”, …, X(day 1) = “S”}
     = Pr{X(day 6) = “S” | X(day 5) = “R”}

 [Figure: the same Day 1–7 timeline. Days 1–5 are the PAST EVENTS and NOW,
  with X(day 1) = “S”, X(day 2) = “S”, X(day 3) = “R”, X(day 4) = “S”,
  X(day 5) = “R”; Day 6 onward are the FUTURE EVENTS. The question is the
  probability of “S” (or “R”) on Day 6 given all previous states.]

 Markov Property: The probability that it will be SUNNY on DAY 6 (FUTURE),
 given that it is RAINY on DAY 5 (NOW), is independent of the PAST EVENTS.
Notation

   X(t_k) or X_k = x_k

 • t_k or k: discrete time instant
 • x_k: value of the stochastic process at instant t_k (or step k)
 • X(t_k) or X_k: the stochastic process at time t_k (or step k)
Markov Chain
Discrete Time Markov Chains (DTMC)
Markov Processes

 • Markov Process
   – The future of a process does not depend on its past, only on its present:

     Pr{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, …, X(t_0) = x_0}
       = Pr{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k}

 • Since we are dealing with “chains”, X(t_i) = X_i can take discrete values
   from a finite or a countably infinite set.
 • The possible values of X_i form a countable set S called the state space
   of the chain.
 • For a Discrete-Time Markov Chain (DTMC), the notation is also simplified to

     Pr{X_{k+1} = x_{k+1} | X_k = x_k, …, X_0 = x_0} = Pr{X_{k+1} = x_{k+1} | X_k = x_k}

   where X_k is the value of the state at the k-th step.
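
 In code, the Markov property means a simulator needs only the current state
 to draw the next one. A minimal sketch (my own illustration, not from the
 slides), using the two-state weather chain whose probabilities appear later:

    import random

    def step(current_state, P):
        """Draw the next state; P[i][j] = Pr{X_{k+1} = j | X_k = i}."""
        r, cumulative = random.random(), 0.0
        for next_state, p in enumerate(P[current_state]):
            cumulative += p
            if r < cumulative:
                return next_state
        return len(P[current_state]) - 1  # guard against float rounding

    # 0 = SUNNY, 1 = RAINY (probabilities taken from the weather example below)
    P = [[0.7, 0.3],
         [0.6, 0.4]]
    state, path = 0, [0]
    for _ in range(7):
        state = step(state, P)   # depends only on the present state
        path.append(state)
    print(path)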
General Model of a Markov Chain

 [Figure: a three-state transition diagram with states S0, S1, S2; self-loops
  p00, p11, p22; and arcs p01, p10, p12, p21, p20 between the states]

 • S = {S0, S1, S2}: state space
 • Discrete time (slotted time): time ∈ {t_0, t_1, t_2, …, t_k} = {0, 1, 2, …, k}
 • i or S_i: state i
 • p_ij: transition probability from state S_i to state S_j
Example of a Markov Process
A very simple weather model

 [Figure: two-state diagram with p_SS = 0.7, p_SR = 0.3, p_RS = 0.6, p_RR = 0.4]

 • State space: S = {SUNNY, RAINY}
 • If today is sunny, what is the probability of having SUNNY weather after
   one week?
 • If today is rainy, what is the probability of staying rainy for 3 days?

 Problem: Determine the transition probabilities from one state to another
 after n events. (A numerical sketch follows below.)
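
 As a quick numerical sketch (my own, not on the original slide), both
 questions can be answered from the matrix P of one-step probabilities;
 “staying rainy for 3 days” is read here as two consecutive rainy-to-rainy
 transitions:

    import numpy as np

    # 0 = SUNNY, 1 = RAINY
    P = np.array([[0.7, 0.3],
                  [0.6, 0.4]])

    # Sunny after 7 days, starting sunny: the (SUNNY, SUNNY) entry of P^7.
    P7 = np.linalg.matrix_power(P, 7)
    print("Pr{sunny after 1 week | sunny today} =", P7[0, 0])

    # Rainy today and rainy the next two days: two self-transitions.
    print("Pr{rainy for 3 days | rainy today} =", P[1, 1] ** 2)  # 0.16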
Five Minutes Break
You are free to discuss the previous slides with your classmates, to take a
short rest, or to ask questions.
Chapman-Kolmogorov Equation
Determine transition probabilities from one state
to another after n events.
Chapman-Kolmogorov Equations

 • We define the one-step transition probabilities at instant k as

     p_ij(k) = Pr{X_{k+1} = j | X_k = i}

 • Necessary condition: for all states i, instants k, and all feasible
   transitions from state i we have

     Σ_{j ∈ Γ(i)} p_ij(k) = 1,  where Γ(i) is the set of all neighbor states of i

 • We define the n-step transition probabilities from instant k to k+n as

     p_ij(k, k+n) = Pr{X_{k+n} = j | X_k = i}

 [Figure: paths from state x_i at time k to state x_j at time k+n, each
  passing through one of the intermediate states x_1, …, x_R at some
  instant u, k < u < k+n]
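
 The necessary condition above says every row of a one-step transition matrix
 must be a probability distribution. A small validation sketch (my own
 illustration):

    import numpy as np

    def is_stochastic(P, tol=1e-9):
        """True if every row of P is non-negative and sums to 1."""
        P = np.asarray(P)
        return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

    print(is_stochastic([[0.7, 0.3], [0.6, 0.4]]))   # True
    print(is_stochastic([[0.7, 0.2], [0.6, 0.4]]))   # False: first row sums to 0.9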
Chapman-Kolmogorov Equations

 • Using the law of total probability,

     p_ij(k, k+n) = Pr{X_{k+n} = j | X_k = i}
                  = Σ_{r=1}^{R} Pr{X_{k+n} = j | X_u = r, X_k = i} · Pr{X_u = r | X_k = i}

 [Figure: the same diagram; the sum runs over the intermediate states
  x_1, …, x_R at instant u, k < u < k+n]
Chapman-Kolmogorov Equations

 • Using the memoryless property of Markov chains,

     Pr{X_{k+n} = j | X_u = r, X_k = i} = Pr{X_{k+n} = j | X_u = r}

 • Therefore, we obtain the Chapman-Kolmogorov equation

     p_ij(k, k+n) = Pr{X_{k+n} = j | X_k = i}
                  = Σ_{r=1}^{R} Pr{X_{k+n} = j | X_u = r} · Pr{X_u = r | X_k = i}

     p_ij(k, k+n) = Σ_{r=1}^{R} p_ir(k, u) · p_rj(u, k+n),   k ≤ u ≤ k+n
Chapman-Kolmogorov Equations
Example on the simple weather model

 [Figure: two-state diagram with p_SS = 0.7, p_SR = 0.3, p_RS = 0.6, p_RR = 0.4]

 • What is the probability that the weather is rainy on day 3, knowing that
   it is sunny on day 1?

     p_{sunny→rainy}(day 1, day 3)
       = p_{sunny→sunny}(day 1, day 2) · p_{sunny→rainy}(day 2, day 3)
       + p_{sunny→rainy}(day 1, day 2) · p_{rainy→rainy}(day 2, day 3)

     p_{sunny→rainy}(day 1, day 3) = p_SS · p_SR + p_SR · p_RR
                                   = 0.7 × 0.3 + 0.3 × 0.4 = 0.21 + 0.12 = 0.33
Transition Matrix
Generalization of the Chapman-Kolmogorov Equations
Transition Matrix
Simplify the transition probability representation

 • Define the n-step transition matrix as

     H(k, k+n) = [ p_ij(k, k+n) ]

 • We can re-write the Chapman-Kolmogorov equation as follows:

     H(k, k+n) = H(k, u) · H(u, k+n)

 • Choose u = k+n−1; then

     H(k, k+n) = H(k, k+n−1) · H(k+n−1, k+n)
               = H(k, k+n−1) · P(k+n−1)

 This is the forward Chapman-Kolmogorov equation; P(k+n−1) is the one-step
 transition probability matrix.
Transition Matrix
Simplify the transition probability representation

 • Choose u = k+1; then

     H(k, k+n) = H(k, k+1) · H(k+1, k+n)
               = P(k) · H(k+1, k+n)

 This is the backward Chapman-Kolmogorov equation; P(k) is the one-step
 transition probability matrix.
Transition Matrix
Example on the simple weather model

 [Figure: two-state diagram with p_SS = 0.7, p_SR = 0.3, p_RS = 0.6, p_RR = 0.4]

 • What is the probability that the weather is rainy on day 3, knowing that
   it is sunny on day 1?

     H(day 1, day 3) = [ p_{sunny→sunny}(day 1, day 3)   p_{sunny→rainy}(day 1, day 3) ]
                       [ p_{rainy→sunny}(day 1, day 3)   p_{rainy→rainy}(day 1, day 3) ]
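
 Since this chain is time-homogeneous (next slide), H(day 1, day 3) is simply
 the matrix square of P. A short sketch (my own illustration):

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.6, 0.4]])
    H13 = np.linalg.matrix_power(P, 2)
    print(H13)        # [[0.67 0.33]
                      #  [0.66 0.34]]
    print(H13[0, 1])  # 0.33: rainy on day 3 given sunny on day 1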
Homogeneous Markov Chains
Markov chains with time-homogeneous transition probabilities

 • Time-homogeneous Markov chains (or Markov chains with time-homogeneous
   transition probabilities) are processes where

     p_ij = Pr{X_{k+1} = j | X_k = i} = Pr{X_k = j | X_{k−1} = i}

 • The one-step transition probabilities are independent of time k:

     P(k) = P,   i.e.,   p_ij = Pr{X_{k+1} = j | X_k = i}

   p_ij is then said to be a stationary transition probability.

 • Even though the one-step transition probability is independent of k, this
   does not mean that the joint probability of X_{k+1} and X_k is also
   independent of k. Observe that

     Pr{X_{k+1} = j and X_k = i} = Pr{X_{k+1} = j | X_k = i} · Pr{X_k = i}
                                 = p_ij · Pr{X_k = i}
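
 A small numerical sketch of this last point (my own, using the weather
 chain): P is constant, yet the joint probability changes with k because
 Pr{X_k = i} does.

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.6, 0.4]])
    pi = np.array([1.0, 0.0])        # Pr{X_0 = SUNNY} = 1
    for k in range(3):
        joint = P[0, 1] * pi[0]      # Pr{X_{k+1} = RAINY and X_k = SUNNY}
        print(f"k={k}: Pr{{X_k=SUNNY}}={pi[0]:.4f}, joint={joint:.4f}")
        pi = pi @ P                  # advance the state probabilities
    # k=0: joint=0.3000; k=1: joint=0.2100; k=2: joint=0.2010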
Two Minutes Break
You are free to discuss the previous slides with your classmates, to take a
short rest, or to ask questions.
Example: Two Processors System

 • Consider a two-processor computer system where time is divided into time
   slots and that operates as follows:
   – At most one job can arrive during any time slot, and this happens with
     probability α.
   – Jobs are served by whichever processor is available; if both are
     available, the job is given to processor 1.
   – If both processors are busy, then the job is lost.
   – When a processor is busy, it completes the job with probability β during
     any one time slot.
   – If a job is submitted during a slot when both processors are busy but at
     least one processor completes a job, then the job is accepted
     (departures occur before arrivals).
 • Q1. Describe the automaton that models this system (not included).
 • Q2. Describe the Markov chain that describes this model.
Example: Automaton (not included)

 • Let the state be the number of jobs currently processed by the system;
   the state space is then X = {0, 1, 2}.
 • Event set:
   – a: job arrival
   – d: job departure
 • Feasible event set:
   – If X = 0, then Γ(X) = {a}
   – If X = 1 or 2, then Γ(X) = {a, d}
 • [Figure: state transition diagram over states 0, 1, 2; arcs are labeled
    with the triggering events (a, d, a·d, d·d, and so on)]
Example: Alternative Automaton (not included)

 • Let (X1, X2) indicate whether processor 1 or 2 is busy, Xi ∈ {0, 1}.
 • Event set:
   – a: job arrival
   – di: job departure from processor i
 • Feasible event set:
   – If X = (0,0), then Γ(X) = {a}
   – If X = (0,1), then Γ(X) = {a, d2}
   – If X = (1,0), then Γ(X) = {a, d1}
   – If X = (1,1), then Γ(X) = {a, d1, d2}
 • [Figure: state transition diagram over states 00, 10, 01, 11; arcs are
    labeled with the events a, d1, d2 and their combinations]
Example: Markov Chain

 • For the state transition diagram of the Markov chain, each transition is
   simply marked with its transition probability.

 [Figure: three-state diagram over states 0, 1, 2 with self-loops p00, p11,
  p22 and arcs p01, p10, p12, p21, p20]

   p00 = 1 − α          p01 = α                            p02 = 0
   p10 = β(1 − α)       p11 = αβ + (1 − α)(1 − β)          p12 = α(1 − β)
   p20 = β²(1 − α)      p21 = αβ² + 2β(1 − β)(1 − α)       p22 = (1 − β)² + 2αβ(1 − β)
Example: Markov Chain

 [Figure: the same three-state diagram]

 • Suppose that α = 0.5 and β = 0.7; then

     P = [p_ij] = [ 0.5    0.5    0    ]
                  [ 0.35   0.5    0.15 ]
                  [ 0.245  0.455  0.3  ]
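
 A sketch (my own illustration) that builds P from the formulas on the
 previous slide and reproduces the matrix above:

    import numpy as np

    def two_processor_P(a, b):
        """One-step transition matrix; a = arrival prob., b = completion prob."""
        p20 = b**2 * (1 - a)
        p21 = a * b**2 + 2 * b * (1 - b) * (1 - a)
        p22 = (1 - b)**2 + 2 * a * b * (1 - b)
        return np.array([
            [1 - a,        a,                         0          ],
            [b * (1 - a),  a * b + (1 - a) * (1 - b), a * (1 - b)],
            [p20,          p21,                       p22        ],
        ])

    P = two_processor_P(0.5, 0.7)
    print(P)              # matches the matrix above
    print(P.sum(axis=1))  # [1. 1. 1.] -- every row is a distribution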
State Holding Time
How much time does the chain spend in a state
before moving to another one?
State Holding Times

 Useful identity: Pr{A ∩ B | C} = Pr{A | B ∩ C} · Pr{B | C}

 • Suppose that at point k the Markov chain has transitioned into state
   X_k = i. An interesting question is how long it will stay at state i.
 • Let V(i) be the random variable that represents the number of time slots
   during which X_k = i.
 • We are interested in the quantity Pr{V(i) = n}:

     Pr{V(i) = n} = Pr{X_{k+n} ≠ i, X_{k+n−1} = i, …, X_{k+1} = i | X_k = i}
                  = Pr{X_{k+n} ≠ i | X_{k+n−1} = i, …, X_k = i}
                    · Pr{X_{k+n−1} = i, …, X_{k+1} = i | X_k = i}
                  = Pr{X_{k+n} ≠ i | X_{k+n−1} = i}
                    · Pr{X_{k+n−1} = i | X_{k+n−2} = i, …, X_k = i}
                    · Pr{X_{k+n−2} = i, …, X_{k+1} = i | X_k = i}
State Holding Times

     Pr{V(i) = n} = Pr{X_{k+n} ≠ i | X_{k+n−1} = i}
                    · Pr{X_{k+n−1} = i | X_{k+n−2} = i, …, X_k = i}
                    · Pr{X_{k+n−2} = i, …, X_{k+1} = i | X_k = i}
                  = (1 − p_ii) · Pr{X_{k+n−1} = i | X_{k+n−2} = i}
                    · Pr{X_{k+n−2} = i | X_{k+n−3} = i, …, X_k = i}
                    · Pr{X_{k+n−3} = i, …, X_{k+1} = i | X_k = i}
                  = …

     Pr{V(i) = n} = (1 − p_ii) · p_ii^{n−1}

 • This is the geometric distribution with parameter p_ii.
 • Clearly, V(i) has the memoryless property.
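
 A quick simulation sketch (my own, with an assumed p_ii = 0.5) confirming
 the geometric law:

    import random

    def sample_holding_time(p_ii):
        """Slots spent in state i before the first transition out of i."""
        n = 1
        while random.random() < p_ii:   # stay with probability p_ii each slot
            n += 1
        return n

    p_ii, trials = 0.5, 100_000
    samples = [sample_holding_time(p_ii) for _ in range(trials)]
    for n in range(1, 5):
        empirical = samples.count(n) / trials
        theory = (1 - p_ii) * p_ii ** (n - 1)
        print(f"Pr{{V(i)={n}}}: simulated {empirical:.4f}, theory {theory:.4f}")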
State Probabilities

 • An interesting quantity is the probability of finding the chain at the
   various states; i.e., we define

     π_i(k) = Pr{X_k = i}

 • For all possible states, we define the vector

     π(k) = [π_0(k), π_1(k), …]

 • Using total probability we can write

     π_i(k) = Σ_j Pr{X_k = i | X_{k−1} = j} · Pr{X_{k−1} = j}
            = Σ_j p_ji(k−1) · π_j(k−1)

 • In vector form, one can write

     π(k) = π(k−1) · P(k−1),   or, if the Markov chain is homogeneous,
     π(k) = π(k−1) · P
State Probabilities Example

 • Suppose that

     P = [ 0.5    0.5    0    ]      with   π(0) = [1, 0, 0]
         [ 0.35   0.5    0.15 ]
         [ 0.245  0.455  0.3  ]

 • Find π(k) for k = 1, 2, …

     π(1) = [1, 0, 0] · P = [0.5, 0.5, 0]

 • This is the transient behavior of the system.
 • In general, the transient behavior is obtained by solving the difference
   equation

     π(k) = π(k−1) · P
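
 A short sketch (my own illustration) iterating the difference equation to
 trace the transient behavior:

    import numpy as np

    P = np.array([[0.5,   0.5,   0.0 ],
                  [0.35,  0.5,   0.15],
                  [0.245, 0.455, 0.3 ]])
    pi = np.array([1.0, 0.0, 0.0])   # π(0)

    for k in range(1, 6):
        pi = pi @ P
        print(f"π({k}) =", np.round(pi, 4))
    # π(1) = [0.5, 0.5, 0] as above; successive π(k) approach the steady state.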

				