# AMS 216 Stochastic Differential Equations, Lecture #11

First, I want to go over a homework problem.

Exercise P5:
In the discussion of the Ornstein-Uhlenbeck process, we obtained

$$E\{Y(t)\} = e^{-\beta t}\,Y(0) \qquad \text{for } t > 0$$

Is this still valid for $-t < 0$? That is, is

$$E\{Y(-t)\} = e^{\beta t}\,Y(0)\,?$$

Recall the Ornstein-Uhlenbeck process

$$m\,dY = -b\,Y\,dt + q\,dW$$

$$\Longrightarrow\quad dY = -\beta\,Y\,dt + \sigma\,dW \qquad \left(\beta = \frac{b}{m},\ \ \sigma = \frac{q}{m}\right)$$

$$\Longrightarrow\quad e^{\beta t}\,dY + \beta\,e^{\beta t}\,Y\,dt = \sigma\,e^{\beta t}\,dW$$

$$\Longrightarrow\quad d\left(e^{\beta t}\,Y\right) = \sigma\,e^{\beta t}\,dW$$

$$\Longrightarrow\quad e^{\beta t}\,Y(t) - Y(0) = \sigma \int_0^t e^{\beta s}\,dW(s)$$

$$\Longrightarrow\quad Y(t) = e^{-\beta t}\,Y(0) + \sigma\,e^{-\beta t} \int_0^t e^{\beta s}\,dW(s)$$

This is valid for positive $t$ and for negative $-t$:

$$Y(-t) = e^{\beta t}\,Y(0) + \sigma\,e^{\beta t} \int_0^{-t} e^{\beta s}\,dW(s)$$
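As a sanity check for $t > 0$, the statistics implied by this solution can be reproduced by direct simulation. Below is a minimal Euler-Maruyama sketch (not part of the lecture; the values of $\beta$, $\sigma$, $Y(0)$, and $t$ are arbitrary choices):

```python
import numpy as np

# Euler-Maruyama simulation of dY = -beta*Y dt + sigma dW
# (sketch only; beta, sigma, y0, t are arbitrary choices)
rng = np.random.default_rng(0)
beta, sigma = 1.2, 0.8
y0, t = 2.0, 1.0
n_steps, n_paths = 500, 50_000
dt = t / n_steps

Y = np.full(n_paths, y0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    Y += -beta * Y * dt + sigma * dW

# Exact statistics from the solution formula
mean_exact = np.exp(-beta * t) * y0
var_exact = sigma**2 / (2 * beta) * (1 - np.exp(-2 * beta * t))
print(Y.mean(), mean_exact)   # sample mean vs e^{-beta t} Y(0)
print(Y.var(), var_exact)     # sample var vs (sigma^2/2beta)(1 - e^{-2 beta t})
```

With these parameters the sample mean and variance agree with the exact values to within Monte Carlo and discretization error.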

Let

$$\Delta s = \frac{-t}{N}, \qquad s_j = j\,\Delta s, \qquad \Delta W_j = W(s_{j+1}) - W(s_j)$$

We express the integral as

$$\int_0^{-t} e^{\beta s}\,dW(s) = \lim_{N\to\infty} \sum_{j=0}^{N-1} e^{\beta s_j}\,\Delta W_j$$

Recall that for $t > 0$ we have the property

$$\left\{\Delta W_j \mid Y(0) = \text{fixed}\right\} \sim N\!\left(0,\ |\Delta s|\right) \ \text{ and the } \Delta W_j \text{ are independent of each other} \qquad \text{(C1)}$$

If property (C1) is still true for $-t < 0$, then $\int_0^{-t} e^{\beta s}\,dW(s)$ is a Gaussian, and its mean and variance are calculated as

$$E\left\{\int_0^{-t} e^{\beta s}\,dW(s)\right\} = 0$$

$$\mathrm{var}\left(\int_0^{-t} e^{\beta s}\,dW(s)\right) = \lim_{N\to\infty} \sum_{j=0}^{N-1} e^{2\beta s_j}\,|\Delta s| = \left|\int_0^{-t} e^{2\beta s}\,ds\right| = \frac{1 - e^{-2\beta t}}{2\beta}$$

It follows that $Y(-t)$ is a Gaussian with mean and variance given by

$$E\{Y(-t)\} = e^{\beta t}\,Y(0)$$

$$\mathrm{var}\{Y(-t)\} = \sigma^2 e^{2\beta t}\,\mathrm{var}\left(\int_0^{-t} e^{\beta s}\,dW(s)\right) = \frac{\sigma^2}{2\beta}\left(e^{2\beta t} - 1\right)$$

That is,

$$Y(-t) \sim N\!\left(e^{\beta t}\,Y(0),\ \frac{\sigma^2}{2\beta}\left(e^{2\beta t} - 1\right)\right)$$
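The variance integral above can be checked against its defining Riemann sum. A small sketch (the values of $\beta$ and $t$ are arbitrary choices):

```python
import numpy as np

# Riemann-sum check of the variance integral (sketch; beta, t arbitrary):
#   sum_j e^{2 beta s_j} |Delta s|  ->  |int_0^{-t} e^{2 beta s} ds|
#                                    =  (1 - e^{-2 beta t}) / (2 beta)
beta, t, N = 1.2, 1.0, 100_000
ds = -t / N                    # negative step: s_j runs from 0 down to -t
s = ds * np.arange(N)          # s_j = j * Delta s
riemann = np.sum(np.exp(2 * beta * s)) * abs(ds)
exact = (1 - np.exp(-2 * beta * t)) / (2 * beta)
print(riemann, exact)
```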

We quickly consider the deterministic system

$$dY = -\beta\,Y\,dt$$

We have

$$Y(t) = e^{-\beta t}\,Y(0) \qquad \text{for all values of } t$$

$$\Longrightarrow\quad Y(-t) = e^{\beta t}\,Y(0)$$

So $Y(-t) = e^{\beta t}\,Y(0)$ is valid for the deterministic system.

Questions:
1) Are these results reasonable for the stochastic process?
2) Do we need to impose any condition on Y(-t)?
3) If we impose some condition on Y(-t), is property (C1) still valid?
4) If property (C1) is no longer valid, how do we calculate the distribution of Y(-t)?

These are difficult questions. We address them from a different angle.
We study conditional probability in the framework of random experiments. Specifically, we
distinguish two different situations in conditional probability:
Forward conditioning:         the condition is imposed on how the experiment will start
Backward conditioning: the condition is imposed on the observed result

Forward conditioning and backward conditioning in conditional probability
Consider testing for a disease Z. We first introduce some terminology:

Prevalence rate = Pr{a random person has disease Z}
False positive rate = Pr{testing positive | not having disease Z}
False negative rate = Pr{testing negative | having disease Z}

In more mathematical language, let

$A$ = having disease Z
$A^C$ = not having disease Z
$B$ = testing positive
$B^C$ = testing negative

Prevalence rate = $\Pr\{A\}$
False positive rate = $\Pr\{B \mid A^C\}$
False negative rate = $\Pr\{B^C \mid A\}$

Suppose we know

$$\Pr\{B \mid A^C\} = 0.01 = 1\%$$

$$\Pr\{B^C \mid A\} = 0.01 = 1\%$$

We want to calculate

$$\Pr\{A \mid B\} = \Pr\{\text{having disease Z} \mid \text{testing positive}\}$$
Two kinds of conditioning:

Forward conditioning: $\Pr\{B \mid A\}$
$\Pr\{B \mid A\}$ can be measured by testing patients with disease Z.

Backward conditioning: $\Pr\{A \mid B\}$
$\Pr\{A \mid B\}$ can be measured by testing people from a subpopulation. But we need to specify what subpopulation to use.

Bayes' Theorem

$$\Pr\{A \mid B\} = \frac{\Pr\{B \mid A\}\,\Pr\{A\}}{\Pr\{B\}}$$

$$\Pr\{B\} = \Pr\{B \mid A\}\,\Pr\{A\} + \Pr\{B \mid A^C\}\,\Pr\{A^C\}$$

Here
$\Pr\{A\}$ is called the prior probability, which characterizes the subpopulation.
$\Pr\{A \mid B\}$ is called the posterior probability.

Let $p = \Pr\{A\}$ denote the prevalence rate. We have

$$\Pr\{B\} = \Pr\{B \mid A\}\,\Pr\{A\} + \Pr\{B \mid A^C\}\,\Pr\{A^C\} = 0.99\,p + 0.01(1 - p) = 0.98\,p + 0.01$$

$$\Pr\{A \mid B\} = \frac{0.99\,p}{0.98\,p + 0.01}$$

$$p = 0.01 \quad\Longrightarrow\quad \Pr\{A \mid B\} = \frac{0.99\,p}{0.98\,p + 0.01} = 0.5 = 50\%$$

$$p = 0.001 \quad\Longrightarrow\quad \Pr\{A \mid B\} = \frac{0.99\,p}{0.98\,p + 0.01} = 9.02\times 10^{-2} = 9.02\%$$

$$p = 0.0001 \quad\Longrightarrow\quad \Pr\{A \mid B\} = \frac{0.99\,p}{0.98\,p + 0.01} = 9.80\times 10^{-3} = 0.98\%$$

Bayes' theorem tells us that
1) To calculate the posterior probability we need to specify the prior probability.
2) The posterior probability is highly affected by the prior probability.
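The three posterior values can be reproduced directly; the sketch below simply transcribes Bayes' theorem with a 1% false positive rate and a 1% false negative rate:

```python
# Posterior probability Pr{A|B} from Bayes' theorem, with a 1% false
# positive rate (fpr) and 1% false negative rate (fnr) as in the text.
def posterior(p, fpr=0.01, fnr=0.01):
    # Pr{B} = Pr{B|A} Pr{A} + Pr{B|A^C} Pr{A^C}
    pb = (1 - fnr) * p + fpr * (1 - p)
    return (1 - fnr) * p / pb

for p in (0.01, 0.001, 0.0001):
    print(p, posterior(p))   # -> approximately 0.5, 0.0902, 0.0098
```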

For the Ornstein-Uhlenbeck process:

Forward conditioning: $\{Y(t) \mid Y(0) = \text{fixed}\}$
It can be determined by solving the evolution starting from $Y(0) = \text{fixed}$.

Backward conditioning: $\{Y(-t) \mid Y(0) = \text{fixed}\}$
It can be determined by solving the evolution starting from $Y(-t)$, with $Y(-t)$ drawn from some distribution. But we need to specify what distribution to use.

Now we use Bayes' theorem to calculate $(Y(-t) \mid Y(0))$.
Recall that

$$(Y(t) \mid Y(0)) \sim N\!\left(e^{-\beta t}\,Y(0),\ \frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)\right)$$

$$\Longrightarrow\quad (Y(0) \mid Y(-t)) \sim N\!\left(e^{-\beta t}\,Y(-t),\ \frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)\right)$$

Let

$$X_1 = Y(-t), \qquad X_2 = Y(0)$$

We have

$$(X_2 = x_2 \mid X_1 = x_1) \sim N\!\left(e^{-\beta t}\,x_1,\ \frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)\right) \sim \exp\left(-\frac{\left(x_2 - e^{-\beta t}\,x_1\right)^2}{2\cdot\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}\right)$$

For $X_1 = Y(-t)$, we use a Gaussian distribution with the variance $\lambda^2$ as the free parameter:

$$(X_1 = x_1) \sim N\!\left(0,\ \lambda^2\right) \sim \exp\left(-\frac{x_1^2}{2\lambda^2}\right)$$
Using Bayes' theorem, we get

$$(X_1 = x_1 \mid X_2 = x_2) = \frac{(X_2 = x_2 \mid X_1 = x_1)\,(X_1 = x_1)}{(X_2 = x_2)} \sim (X_2 = x_2 \mid X_1 = x_1)\,(X_1 = x_1)$$

$$\sim \exp\left(-\frac{\left(x_2 - e^{-\beta t}\,x_1\right)^2 + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)x_1^2}{2\cdot\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}\right)$$

Keeping only the factors that involve $x_1$,

$$\sim \exp\left(-\frac{\left(e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)\right)x_1^2 - 2\,e^{-\beta t}\,x_1 x_2}{2\cdot\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}\right)$$

Completing the square in $x_1$,

$$\sim \exp\left(-\,\frac{\left(x_1 - \dfrac{e^{-\beta t}\,x_2}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)}\right)^{\!2}}{2\cdot\dfrac{\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)}}\right)$$

$$\sim N\left(\frac{e^{-\beta t}\,x_2}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)},\ \ \frac{\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)}\right)$$

$(Y(-t) \mid Y(0))$ is a Gaussian distribution with mean and variance given by

$$E\{Y(-t) \mid Y(0)\} = \frac{e^{-\beta t}\,Y(0)}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)}$$

$$\mathrm{var}\{Y(-t) \mid Y(0)\} = \frac{\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)}$$
We consider two cases:

Case 1: Suppose we know the process is already at equilibrium at time $= -t$ and it is not artificially altered.
At equilibrium, we have

$$(Y(-t)) \sim N\!\left(0,\ \frac{\sigma^2}{2\beta}\right)$$

$$\Longrightarrow\quad \lambda^2 = \frac{\sigma^2}{2\beta}$$

$$\Longrightarrow\quad E\{Y(-t) \mid Y(0)\} = \frac{e^{-\beta t}\,Y(0)}{e^{-2\beta t} + \left(1 - e^{-2\beta t}\right)} = e^{-\beta t}\,Y(0)$$

$$\Longrightarrow\quad \mathrm{var}\{Y(-t) \mid Y(0)\} = \frac{\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}{e^{-2\beta t} + \left(1 - e^{-2\beta t}\right)} = \frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)$$

$$\Longrightarrow\quad (Y(-t) \mid Y(0)) \sim N\!\left(e^{-\beta t}\,Y(0),\ \frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)\right)$$

Recall that

$$(Y(t) \mid Y(0)) \sim N\!\left(e^{-\beta t}\,Y(0),\ \frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)\right)$$

Therefore, at equilibrium, we have

$$(Y(-t) \mid Y(0)) = (Y(t) \mid Y(0))$$

This is called the time reversibility of equilibrium.
The time reversibility of equilibrium is true only if the system is not artificially altered.
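Time reversibility at equilibrium can also be seen in a Monte Carlo experiment: draw $Y(-t)$ from the equilibrium distribution, propagate it forward with the exact Gaussian transition, and check that the backward regression slope of $Y(-t)$ on $Y(0)$ equals the forward decay factor $e^{-\beta t}$. A sketch (parameter values are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of time reversibility at equilibrium (sketch;
# beta, sigma, t are arbitrary). For jointly Gaussian variables,
# E{Y(-t)|Y(0)} = slope * Y(0) with slope = cov(Y(-t), Y(0)) / var(Y(0)).
rng = np.random.default_rng(1)
beta, sigma, t, n = 1.2, 0.8, 1.0, 200_000

veq = sigma**2 / (2 * beta)                        # equilibrium variance
y_minus = rng.normal(0.0, np.sqrt(veq), size=n)    # Y(-t) at equilibrium
vtrans = veq * (1 - np.exp(-2 * beta * t))         # transition variance
y0 = np.exp(-beta * t) * y_minus + rng.normal(0.0, np.sqrt(vtrans), size=n)

slope = np.cov(y_minus, y0)[0, 1] / np.var(y0)
print(slope, np.exp(-beta * t))   # backward slope vs forward factor
```

The sample slope matches $e^{-\beta t}$ to within Monte Carlo error, consistent with $(Y(-t) \mid Y(0)) = (Y(t) \mid Y(0))$ at equilibrium.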

Case 2: Suppose we artificially alter the system at time $= -t$: we re-start the system at time $= -t$ with $Y(-t)$ drawn from the uniform distribution on $(-\infty, +\infty)$ (practically, we use a Gaussian distribution with very large variance). We consider the limit as $\lambda^2 \to \infty$.

$$\Longrightarrow\quad E\{Y(-t) \mid Y(0)\} = \lim_{\lambda^2\to\infty} \frac{e^{-\beta t}\,Y(0)}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)} = \frac{e^{-\beta t}\,Y(0)}{e^{-2\beta t}} = e^{\beta t}\,Y(0)$$

$$\mathrm{var}\{Y(-t) \mid Y(0)\} = \lim_{\lambda^2\to\infty} \frac{\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}{e^{-2\beta t} + \frac{\sigma^2}{2\beta\lambda^2}\left(1 - e^{-2\beta t}\right)} = \frac{\frac{\sigma^2}{2\beta}\left(1 - e^{-2\beta t}\right)}{e^{-2\beta t}} = \frac{\sigma^2}{2\beta}\left(e^{2\beta t} - 1\right)$$

In summary, $Y(-t) \sim N\!\left(e^{\beta t}\,Y(0),\ \frac{\sigma^2}{2\beta}\left(e^{2\beta t} - 1\right)\right)$ is valid if we artificially re-start the system at time $= -t$ with $Y(-t)$ uniformly distributed on $(-\infty, +\infty)$.
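The two cases can be checked numerically from the general posterior formulas; a sketch (the numerical values of $\beta$, $\sigma$, $t$, $Y(0)$ are arbitrary choices, and a very large finite value stands in for $\lambda^2 \to \infty$):

```python
import numpy as np

# Posterior mean/variance of (Y(-t) | Y(0) = y0) as functions of the prior
# variance lam2 (direct transcription of the general formulas; parameter
# values are arbitrary choices).
beta, sigma, t, y0 = 1.2, 0.8, 1.0, 2.0

def posterior(lam2):
    v = sigma**2 / (2 * beta) * (1 - np.exp(-2 * beta * t))  # transition var
    denom = np.exp(-2 * beta * t) + v / lam2
    return np.exp(-beta * t) * y0 / denom, v / denom         # (mean, var)

# Case 1: equilibrium prior lam2 = sigma^2/(2 beta) -> forward statistics
m1, v1 = posterior(sigma**2 / (2 * beta))
print(m1, np.exp(-beta * t) * y0)
print(v1, sigma**2 / (2 * beta) * (1 - np.exp(-2 * beta * t)))

# Case 2: flat prior, lam2 -> infinity -> the naive backward formulas
m2, v2 = posterior(1e12)
print(m2, np.exp(beta * t) * y0)
print(v2, sigma**2 / (2 * beta) * (np.exp(2 * beta * t) - 1))
```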
Feynman-Kac formula

Consider the stochastic differential equation

$$dX = b(X,t)\,dt + \sqrt{a(X,t)}\,dW$$

We use the Ito interpretation:

$$E\{dX \mid X(t) = x\} = b(x,t)\,dt + o(dt)$$

$$E\{(dX)^2 \mid X(t) = x\} = a(x,t)\,dt + o(dt)$$

$$E\{(dX)^n \mid X(t) = x\} = o(dt) \qquad \text{for } n \ge 3$$

Consider the function

$$u(x,t,T) = E\left\{\exp\left(-\int_t^T q(X(s),s)\,ds\right) \,\middle|\, X(t) = x\right\}$$
Meaning of $u(x,t,T)$:
Suppose $q(z,s)$ is the fatality rate at position $z$ at time $s$. Suppose we follow a fixed path $X(s)$. Let

$$\Delta s = \frac{T - t}{N}, \qquad s_j = t + j\,\Delta s$$

The probability of surviving from time $s_j$ to $s_{j+1}$ along the given path $X(s)$ is approximately

$$1 - q(X(s_j), s_j)\,\Delta s \approx \exp\left(-q(X(s_j), s_j)\,\Delta s\right)$$

The probability of surviving from time $t$ to time $T$ along the given path $X(s)$ is

$$\prod_{j=0}^{N-1}\left(1 - q(X(s_j), s_j)\,\Delta s\right) \approx \prod_{j=0}^{N-1}\exp\left(-q(X(s_j), s_j)\,\Delta s\right) = \exp\left(-\sum_{j=0}^{N-1} q(X(s_j), s_j)\,\Delta s\right)$$

$$\longrightarrow\ \exp\left(-\int_t^T q(X(s), s)\,ds\right) \qquad \text{as } N \to \infty$$

The probability of surviving from time $t$ to time $T$, averaged over all paths starting at $X(t) = x$, is

$$u(x,t,T) = E\left\{\exp\left(-\int_t^T q(X(s), s)\,ds\right) \,\middle|\, X(t) = x\right\}$$
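The path-average definition of $u$ can be tested by Monte Carlo in a case with a closed form. Taking $X$ to be a standard Brownian motion ($b = 0$, $a = 1$) and $q(z,s) = z$ (a choice made here only for checkability, not taken from the lecture), the integral $\int_t^T X(s)\,ds$ is Gaussian with mean $x(T-t)$ and variance $(T-t)^3/3$, so $u = \exp\left(-x(T-t) + (T-t)^3/6\right)$:

```python
import numpy as np

# Monte Carlo sketch of u(x,t,T) for X = Brownian motion and q(z,s) = z
# (an illustrative case with a closed form, not from the lecture).
rng = np.random.default_rng(2)
x, t, T = 0.5, 0.0, 1.0
n_steps, n_paths = 500, 100_000
dt = (T - t) / n_steps

X = np.full(n_paths, x)
integral = np.zeros(n_paths)      # accumulates sum_j q(X(s_j), s_j) * ds
for _ in range(n_steps):
    integral += X * dt            # q(z, s) = z
    X += rng.normal(0.0, np.sqrt(dt), size=n_paths)

u_mc = np.exp(-integral).mean()   # survival averaged over paths
u_exact = np.exp(-x * (T - t) + (T - t) ** 3 / 6)
print(u_mc, u_exact)
```

The path average matches the closed form to within Monte Carlo and discretization error.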

Examples:

X = temperature
u = the average size of a bacteria population at time = T

X = the predator population
u = the average size of the prey population at time = T

X = the prey population
u = the average size of the predator population at time = T

X = the oil price
u = the average price of an oil stock at time = T
