# CompSimFinance SS08

SS 2008 LV 1837

Computersimulation for Finance
R.Gismondi
FIRM-Research Institute for Computational Methods
Index

• Elements of Probability I                4
• Random Numbers                          27
• Elements of Statistics                  40
• Elements of Probability II              51
• Generating Random Variables             64
• Brownian Motion I                       89
• Geometric Brownian Motion, B&S Model,
Pricing of (exotic) options: 1-dim.    113

Folie 2   Riccardo Gismondi
Index

• Brownian Motion II                       151
• Geometric Brownian Motion,
pricing of (exotic) options: n-dim.     161
• Square root diffusion process:
CIR Model                               173
• Heston Model and stochastic Volatility   192
• Pricing of Exotic Options:               205
Structured Products (Equity)

Folie 3   Riccardo Gismondi
Elements of Probability I

Folie 4   Riccardo Gismondi
Introduction

• Ground-up review of probability necessary to do
and understand simulation
• Assume familiarity with
• Mathematical analysis (especially derivatives and integrals)
• Some probability ideas (especially probability spaces, random
variables, probability distributions)
• Outline
• Probability – basic ideas, terminology
• Random variables, joint distributions

Folie 5   Riccardo Gismondi
Probability Basics (1)

• Experiment – activity with uncertain outcome
• Flip coins, throw dice, pick cards, draw balls from urn, …
• Drive to work tomorrow – Time? Accident?
• Operate a (real) call center – Number of calls? Average customer
hold time? Number of customers getting busy signal?
• Simulate a call center – same questions as above
• Sample space – complete list of all possible individual outcomes
of an experiment
• Could be easy or hard to characterize
• May not be necessary to characterize

Folie 6   Riccardo Gismondi
Probability Basics (2)

• Event – a subset of the sample space
• Describe by either listing outcomes, “physical” description,
or mathematical description
• Usually denote by E, F, E1, E2, etc.
• Union, intersection, complementation operations
• Probability of an event is the relative likelihood that it will occur
when you do the experiment
• A real number between 0 and 1 (inclusively)
• Denote by P(E), P(E ∪ F), etc.
• Interpretation – proportion of time the event occurs in many
independent repetitions (replications) of the experiment
• May or may not be able to derive a probability
Folie 7   Riccardo Gismondi
Probability Basics (3)

• Some properties of probabilities
• If S is the sample space, then P(S) = 1
• Can have an event E ≠ S with P(E) = 1
• If Ø is the empty event (empty set), then P(Ø) = 0
• Can have an event E ≠ Ø with P(E) = 0
• If EC is the complement of E, then P(EC) = 1 – P(E)
• P(E ∪ F) = P(E) + P(F) – P(E ∩ F)
• If E and F are mutually exclusive (i.e., E ∩ F = Ø), then
P(E ∪ F) = P(E) + P(F)
• If E is a subset of F (i.e., the occurrence of E implies the
occurrence of F), then P(E) ≤ P(F)
• If o1, o2, … are the individual outcomes in the sample space, then
P(E) = Σ oi ∈ E P(oi)
Folie 8   Riccardo Gismondi
Probability Basics (4)

• Conditional probability
• Knowing that an event F occurred might affect the
probability that another event E also occurred
• Reduce the effective sample space from S to F, then measure
“size” of E relative to its overlap (if any) in F, rather than relative to
S
• Definition (assuming P(F) ≠ 0):  P(E|F) = P(E ∩ F) / P(F)

• E and F are independent if P(E ∩ F) = P(E) P(F)
• Implies P(E|F) = P(E) and P(F|E) = P(F), i.e., knowing that one
event occurs tells you nothing about the other
• If E and F are mutually exclusive, are they independent?

Folie 9   Riccardo Gismondi
Random Variables

• One way of quantifying, simplifying events and
probabilities
• A random variable (RV) “is a number whose value is
determined by the outcome of an experiment”.
• Technically, a function or mapping from the sample
space to the real numbers, but can usually define and
work with a RV without going all the way back to the
sample space
• Think: RV is a number whose value we don’t know for
sure but we’ll usually know something about what it can
be or is likely to be
• Usually denoted as capital letters: X, Y, W1, W2, etc.
• Probabilistic behavior described by distribution function

Folie 10   Riccardo Gismondi
Discrete vs. Continuous RVs

• Two basic “flavors” of RVs, used to represent or model
different things
• Discrete – can take on only certain separated values
• Number of possible values could be finite or infinite
• Continuous – can take on any real value in some range
• Number of possible values is always infinite
• Range could be bounded on both sides, just one side, or
neither

Folie 11   Riccardo Gismondi
Discrete Distributions (1)

• Let X be a discrete RV with possible values (range) x1,
x2, … (finite or infinite list)
• Probability mass function (PMF)
p(xi) = P(X = xi)   for i = 1, 2, ...
• The statement “X = xi” is an event that may or may not
happen, so it has a probability of happening, as measured by
the PMF
• Can express PMF as numerical list, table, graph, or formula
• Since X must be equal to some xi, and since the xi’s are all
distinct, Σi p(xi) = 1
Folie 12   Riccardo Gismondi
Discrete Distributions (2)

• Cumulative distribution function (CDF) – probability that
the RV will be ≤ a fixed value x:
F(x) = P(X ≤ x) = Σ xi ≤ x p(xi)

• Properties of discrete CDFs (the first four also hold for continuous CDFs)
• 0 ≤ F(x) ≤ 1 for all x
• As x → –∞, F(x) → 0
• As x → +∞, F(x) → 1
• F(x) is nondecreasing in x
• F(x) is a step function, continuous from the right, with jumps at the xi’s
of height equal to the PMF at that xi

Folie 13   Riccardo Gismondi
Discrete Distributions (3)

• Computing probabilities about a discrete RV – usually
use the PMF
• Add up p(xi) for those xi’s satisfying the condition for the event

• With discrete RVs, must be careful about weak vs. strong
inequalities – endpoints matter!

Folie 14   Riccardo Gismondi
Discrete Expected Values

• Data set has a “center” – the average (mean)
• RVs have a “center” – the expected value
E(X) = Σi xi p(xi)
• Also called the mean or expectation of the RV X
• Weighted average of the possible values xi, with weights being
their probability (relative likelihood) of occurring
• What expectation is not: the value of X you “expect” to get;
E(X) might not even be among the possible values x1, x2, …
• What expectation is:
Repeat “the experiment” many times, observe many X1, X2, …, Xn;
E(X) is what the sample mean (X1 + … + Xn)/n converges to (in a certain sense) as n → ∞

Folie 15   Riccardo Gismondi
Discrete Variances and
Standard Deviations

• Data set has measures of “dispersion”
• Sample variance s² = Σi (Xi – X̄)² / (n – 1)
• Sample standard deviation s = √s²
• RVs have corresponding measures
Var(X) = σ² = Σi (xi – E(X))² p(xi) = E[(X – E(X))²]
• Other common notation: σX², Var X
• Weighted average of squared deviations of the possible values xi
from the mean
• Standard deviation of X is σ = √Var(X)
• Interpretation analogous to that for E(X)

Folie 16   Riccardo Gismondi
Continuous Distributions (1)

• Now let X be a continuous RV
• Possibly limited to a range bounded on left or right or both
• No matter how small the range, the number of possible values for
X is always (uncountably) infinite
• Not sensible to ask about P(X = x) even if x is in the possible
range
• Technically, P(X = x) is always 0
• Instead, describe behavior of X in terms of its falling between two
values

Folie 17   Riccardo Gismondi
Continuous Distributions (2)

• Probability density function (PDF) is a function f(x) with
the following three properties:
f(x) ≥ 0 for all real values x
The total area under f(x) is 1:  ∫ f(x) dx over (–∞, +∞) = 1
For any fixed a and b with a ≤ b, the probability that X will fall
between a and b is the area under f(x) between a and b:
P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx

• Observed X’s are denser in regions where f(x) is high
• The height of a density, f(x), is not the probability of anything – it
can even be > 1
• With continuous RVs, you can be sloppy with weak vs. strong
inequalities and endpoints
Folie 18   Riccardo Gismondi
Continuous Distributions (3)

• Cumulative distribution function (CDF) – probability that
the RV will be ≤ a fixed value x:

F(x) = P(X ≤ x) = ∫ from –∞ to x of f(t) dt

F(x) may or may not have a closed-form formula

• Properties of continuous CDFs (the first four also hold for discrete CDFs)

• 0 ≤ F(x) ≤ 1 for all x
• As x → –∞, F(x) → 0
• As x → +∞, F(x) → 1
• F(x) is nondecreasing in x
• F(x) is a continuous function with slope equal to the PDF:
f(x) = F′(x)

Folie 19   Riccardo Gismondi
Continuous Expected Values,
Variances and Standard Deviations

• Expectation or mean of X is

E(X) = ∫ from –∞ to +∞ of x f(x) dx

• Roughly, a weighted “continuous” average of possible values for X
• Same interpretation as in discrete case: average of a large
number (infinite) of observations on the RV X
• Variance of X is

Var(X) = ∫ from –∞ to +∞ of (x – E(X))² f(x) dx = E[(X – E(X))²]

• Standard deviation of X is σ = √Var(X)

Folie 20   Riccardo Gismondi
Joint Distributions (1)

• So far: Looked at only one RV at a time
• But they can come up in pairs, triples, …, tuples, forming
jointly distributed RVs or random vectors
• Input: (T, P, S) = (type of part, priority, service time)
• Output: {W1, W2, W3, …} = output process of times in system of
exiting parts
• One central issue is whether the individual RVs are independent
of each other or related
• Will take the special case of a pair of RVs (X1, X2)
• Extends naturally (but messily) to higher dimensions

Folie 21   Riccardo Gismondi
Joint Distributions (2)

• Joint CDF of (X1, X2) is a function of two variables
F(x1, x2) = P(X1 ≤ x1, X2 ≤ x2)
• Same definition for discrete and continuous
• If both RVs are discrete, define the joint PMF
p(x1, x2) = P(X1 = x1, X2 = x2)
• If both RVs are continuous, define the joint PDF f(x1, x2) as a
nonnegative function with total volume below it equal to 1, and
P((X1, X2) ∈ A) = ∫∫ over A of f(x1, x2) dx1 dx2
• Joint CDF (or PMF or PDF) contains a lot of information – usually
won’t have it in practice

Folie 22   Riccardo Gismondi
Marginal Distributions

• What is the distribution of X1 alone? Of X2 alone?
• Jointly discrete
• Marginal PMF of X1 is p1(x1) = P(X1 = x1) = Σ x2 p(x1, x2)
• Marginal CDF of X1 is F1(x1) = P(X1 ≤ x1) = F(x1, +∞)
• Jointly continuous
• Marginal PDF of X1 is f1(x1) = ∫ from –∞ to +∞ of f(x1, x2) dx2
• Marginal CDF of X1 is F1(x1) = ∫ from –∞ to x1 of f1(t) dt
• Everything above is symmetric for X2 instead of X1
• Knowledge of joint ⇒ knowledge of marginals – but not vice
versa (unless X1 and X2 are independent)

Folie 23   Riccardo Gismondi
Covariance Between RVs

• Measures the linear relation between X1 and X2
• Covariance between X1 and X2 is

Cov(X1, X2) = E[(X1 – E(X1))(X2 – E(X2))] = E(X1X2) – E(X1)E(X2)

• If large (resp. small) X1 tends to go with large (resp. small) X2, then
covariance > 0
• If large (resp. small) X1 tends to go with small (resp. large) X2, then
covariance < 0
• If there is no tendency for X1 and X2 to occur jointly in agreement
or disagreement over being big or small, then Cov = 0
• Interpreting the value of covariance is difficult, since it depends on the
units of measurement

Folie 24   Riccardo Gismondi
Correlation Between RVs

• Correlation (coefficient) between X1 and X2 is

ρ = Cor(X1, X2) = Cov(X1, X2) / √(Var(X1) Var(X2))

• Has the same sign as the covariance
• Always between –1 and +1
• Numerical value does not depend on units of measurement
• Dimensionless – universal interpretation

Folie 25   Riccardo Gismondi
Independent RVs

• X1 and X2 are independent if their joint CDF factors
into the product of their marginal CDFs:

F(x1, x2) = F1(x1) F2(x2) for all x1, x2
• Equivalent to use PMF or PDF instead of CDF
• Properties of independent RVs:
• They have nothing (linearly) to do with each other
• Independence ⇒ uncorrelated
• But not vice versa, unless the RVs have a joint normal
distribution
• Important in probability – factorization simplifies greatly
• Tempting just to assume it whether justified or not
• Independence in simulation
• Input: Usually assume separate inputs are indep. – valid?
• Output: Standard statistics assumes indep. – valid?!?
Folie 26   Riccardo Gismondi
Random Numbers

Folie 27   Riccardo Gismondi
Introduction

• The building block of a simulation study is the ability to
generate random numbers.

• A random number represents the value of a random variable uniformly
distributed on (0,1)

Folie 28   Riccardo Gismondi
Pseudorandom Number Generation

•          Multiplicative congruential method

x0 … seed
xn = (a · xn−1) mod m,   un = xn / m
with, e.g., a = 7^5 = 16807,  m = 2^31 − 1

•          Mixed congruential method

x0 … seed
xn = (a · xn−1 + c) mod m,   un = xn / m
Folie 29     Riccardo Gismondi
Computer exercises ( MATLAB )

•          Let x0 = 5, a = 3, m = 150 and

xn = (a · xn−1) mod m

find x1, x2, …, x10
•          Let x0 = 3, a = 5, m = 200, c = 7 and

xn = (a · xn−1 + c) mod m

find x1, x2, …, x10

Folie 30    Riccardo Gismondi
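The two recursions above translate directly into code. The following is a minimal Python sketch (the course exercises use MATLAB, and the function name `lcg` is just an illustrative choice); with c = 0 it reduces to the multiplicative method:

```python
def lcg(x0, a, m, c=0, n=10):
    """Generate n values of the congruential recursion
    x_k = (a * x_{k-1} + c) mod m; c = 0 gives the multiplicative method."""
    xs = []
    x = x0
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    return xs

# Exercise 1: multiplicative method, x0 = 5, a = 3, m = 150
seq1 = lcg(5, 3, 150)
# Exercise 2: mixed method, x0 = 3, a = 5, m = 200, c = 7
seq2 = lcg(3, 5, 200, c=7)
# un = xn / m maps each value into (0,1)
us = [x / 150 for x in seq1]
```

With the exercise parameters, the multiplicative generator immediately falls into the short cycle 15, 45, 135, 105, 15, … — a reminder that small moduli give poor generators, which is why constants like a = 16807, m = 2^31 − 1 are used in practice.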
Using random numbers to evaluate
Integrals (1)

• Let g(x) be a function and suppose we want to compute

θ = ∫ from 0 to 1 of g(x) dx

• If U is uniformly distributed over (0,1), then we can
express θ as θ = E[g(U)]
• If U1, …, Uk are independent uniform (0,1) random variables, it follows
that the random variables g(U1), …, g(Uk)
are i.i.d. variables with mean θ

Folie 31   Riccardo Gismondi
Using random numbers to evaluate
Integrals (2)

• Therefore, by the strong law of large numbers,
it follows that, with probability 1:

lim k→∞ of (1/k) Σ from i=1 to k of g(Ui) = E[g(U)] = ∫ from 0 to 1 of g(x) dx = θ

• Thus, we can approximate θ by generating a large number of Ui
and taking as our approximation the average value of g(ui). This
approach to approximating integrals is called the Monte Carlo method

Folie 32   Riccardo Gismondi
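The recipe above can be sketched in a few lines of Python (the exercises use MATLAB; `mc_integral` is our name for the estimator), here applied to θ = ∫₀¹ x² dx, whose true value is 1/3:

```python
import random

def mc_integral(g, k=100_000):
    """Monte Carlo estimate of the integral of g over (0,1):
    theta = E[g(U)] is approximated by the average of g at k uniform draws."""
    return sum(g(random.random()) for _ in range(k)) / k

random.seed(0)
est = mc_integral(lambda x: x * x)   # true value: 1/3
```

By the strong law of large numbers the average converges to θ as k grows; the error shrinks like 1/√k.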
Computer exercises (MATLAB)

•          Use simulation (Monte Carlo method) to approximate
the following integrals:

1.   ∫ from 0 to 1 of x² dx

2.   ∫ from 0 to 1 of (2t − 1)² dt

Folie 33        Riccardo Gismondi
Using random numbers to evaluate
Integrals (3)

• If we want to compute

θ = ∫ from a to b of g(x) dx

then, by the substitution y = (x − a)/(b − a), dy = dx/(b − a),

θ = ∫ from 0 to 1 of g(a + [b − a] y) (b − a) dy = ∫ from 0 to 1 of h(y) dy

h(y) = (b − a) g(a + [b − a] y)

Folie 34   Riccardo Gismondi
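The substitution above can be sketched as follows (a Python sketch; `mc_integral_ab` is our name, and the test integral ∫₁³ x² dx = 26/3 is our own illustrative choice, not one of the course exercises):

```python
import random

def mc_integral_ab(g, a, b, k=100_000):
    """Estimate the integral of g over (a,b) via y = (x-a)/(b-a):
    average h(y) = (b-a)*g(a+(b-a)*y) over uniform (0,1) draws y."""
    total = 0.0
    for _ in range(k):
        y = random.random()
        total += (b - a) * g(a + (b - a) * y)
    return total / k

random.seed(1)
est = mc_integral_ab(lambda x: x * x, 1.0, 3.0)   # true value: 26/3
```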
Computer exercises (MATLAB)

•          Use simulation (Monte Carlo method) to approximate
the following integrals:

1.   ∫ from π/3 to π/2 of cos(x) dx

2.   ∫ from 1/3 to 2 of (e^(−t²) + 1) dt
Folie 35        Riccardo Gismondi
Using random numbers to evaluate
Integrals (4)

• Similarly, if we want to compute θ = ∫ from 0 to ∞ of g(x) dx,
then the substitution

y = 1/(x + 1),   dy = −dx/(x + 1)² = −y² dx

gives

θ = ∫ from 0 to 1 of h(y) dy   with   h(y) = g(1/y − 1) / y²

Folie 36   Riccardo Gismondi
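As a sketch of this half-line substitution (Python rather than the course's MATLAB; `mc_integral_halfline` is our name), applied to ∫₀^∞ e^(−x) dx, whose true value is 1:

```python
import math
import random

def mc_integral_halfline(g, k=200_000):
    """Estimate the integral of g over (0, infinity) via y = 1/(x+1):
    average h(y) = g(1/y - 1) / y**2 over uniform (0,1) draws y."""
    total = 0.0
    for _ in range(k):
        y = 1.0 - random.random()        # in (0, 1], avoids division by zero
        total += g(1.0 / y - 1.0) / (y * y)
    return total / k

random.seed(2)
est = mc_integral_halfline(lambda x: math.exp(-x))   # true value: 1
```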
Example: the estimation of π

• Suppose that the random vector (X,Y) is uniformly
distributed in the square of area 4 centered at the origin, i.e. the
square with corners (±1, ±1). Let us consider the probability that
this random point is contained within the inscribed circle of radius 1:

P((X,Y) is in the circle) = P(X² + Y² ≤ 1) = Area of the circle / Area of the square = π/4

• If U is uniform on (0,1), then 2U is uniform on (0,2),
and so 2U − 1 is uniform on (−1,1)
Folie 37   Riccardo Gismondi
Example: the estimation of π

• Therefore generate random numbers U1 and U2,
set X = 2U1 − 1, Y = 2U2 − 1, and define:

I = 1 if X² + Y² ≤ 1, and 0 otherwise;   E[I] = P(X² + Y² ≤ 1) = π/4

Hence we can estimate π/4, and hence π, by the
fraction of pairs for which

(2u1 − 1)² + (2u2 − 1)² ≤ 1
Folie 38   Riccardo Gismondi
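The estimator just described can be sketched as follows (Python rather than MATLAB; `estimate_pi` is our name for it):

```python
import random

def estimate_pi(n=200_000):
    """Fraction of points (2U1-1, 2U2-1) inside the unit circle, times 4."""
    inside = 0
    for _ in range(n):
        x = 2.0 * random.random() - 1.0
        y = 2.0 * random.random() - 1.0
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

random.seed(3)
pi_hat = estimate_pi()
```

Since each pair is a Bernoulli trial with success probability π/4, the standard error of pi_hat is 4·√(p(1 − p)/n), roughly 0.004 for n = 200,000.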
Home assignment (Matlab)

•          Write a code to generate uniform random numbers

•          Write a code to generate uniform random vectors

•          Write a code to estimate π and study the convergence

Folie 39    Riccardo Gismondi
Elements of Statistics

Folie 40   Riccardo Gismondi
Sampling

• Statistical analysis – estimate or infer something about
a population or process based on only a sample from it
• Think of a RV with a distribution governing the population
• Random sample is a set of independent and identically distributed
(IID) observations X1, X2, …, Xn on this RV
• In simulation, sampling is making some runs of the model and
collecting the output data
• Don’t know parameters of population (or distribution) and want to
estimate them or infer something about them based on the sample

Folie 41   Riccardo Gismondi
Sampling (cont’d.)

• Population parameter vs. sample estimate:
• Population mean μ = E(X) ↔ sample mean X̄
• Population variance σ² ↔ sample variance s²
• Population proportion p ↔ sample proportion p̂
• Parameter – need to know the whole population; fixed (but unknown)
• Sample statistic – can be computed from a sample; varies from one
sample to another – it is a RV itself, and has a distribution, called the
sampling distribution

Folie 42   Riccardo Gismondi
Sampling Distributions

• Have a statistic, like the sample mean or sample variance
• Its value will vary from one sample to the next
• Some sampling-distribution results
• Sample mean X̄: E(X̄) = μ and Var(X̄) = σ²/n,
regardless of the distribution of X
• Sample variance s²: E(s²) = σ²
• Sample proportion p̂: E(p̂) = p

Folie 43        Riccardo Gismondi
Point Estimation

• A sample statistic that estimates (in some sense) a
population parameter
• Properties
• Unbiased: E(estimate) = parameter
• Efficient: Var(estimate) is lowest among competing point
estimators
• Consistent: Var(estimate) decreases (usually to 0) as the sample
size increases

Folie 44   Riccardo Gismondi
Confidence Intervals

• A point estimator is just a single number, with some
uncertainty or variability associated with it
• Confidence interval quantifies the likely imprecision in a
point estimator
• An interval that contains (covers) the unknown population
parameter with specified (high) probability 1 − α
• Called a 100(1 − α)% confidence interval for the parameter
• Confidence interval for the population mean μ:

X̄ ± t(n−1, 1−α/2) · s/√n

where t(n−1, 1−α/2) is the point below which lies area
1 − α/2 in Student’s t distribution with n − 1 degrees of freedom
• CIs for some other parameters – in text

Folie 45   Riccardo Gismondi
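The interval above can be sketched in Python (our function name; note the sketch substitutes the standard normal quantile for t(n−1, 1−α/2), which is a close approximation for large n — a statistics library such as SciPy provides the exact t quantile):

```python
import math
import statistics

def mean_ci(data, alpha=0.05):
    """Approximate 100(1 - alpha)% confidence interval for the mean:
    xbar +/- z_{1-alpha/2} * s / sqrt(n), with the normal quantile z
    standing in for the t quantile (nearly identical for large n)."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)                       # sample std deviation
    z = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    half = z * s / math.sqrt(n)
    return xbar - half, xbar + half

lo, hi = mean_ci(list(range(1, 101)))                # toy data 1..100
```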
Confidence Intervals in Simulation

• Run simulations, get results
• View each replication of the simulation as a data point
• Random input  random output
• Form a confidence interval
• Brackets (with probability 1 − α) the “true” expected output
(what you’d get by averaging an infinite number of replications)

Folie 46   Riccardo Gismondi
Hypothesis Tests

• Test some assertion about the population or its
parameters
• Can never determine truth or falsity for sure – only get
evidence that points one way or another
• Null hypothesis (H0) – what is to be tested
• Alternative hypothesis (H1 or HA) – denial of H0
• H0: μ = 6 vs. H1: μ ≠ 6
• H0: σ < 10 vs. H1: σ ≥ 10
• H0: μ1 = μ2 vs. H1: μ1 ≠ μ2
• Develop a decision rule to decide on H0 or H1 based on sample
data

Folie 47   Riccardo Gismondi
Errors in Hypothesis Testing

                          H0 is really true          H1 is really true
Decide H0 (“Accept” H0)   No error                   Type II error
                          probability 1 − α          probability β
                          (α is chosen, controlled)  (β is not controlled –
                                                     affected by α and n)
Decide H1 (Reject H0)     Type I error               No error
                          probability α              probability 1 − β
                                                     = power of the test

Folie 48   Riccardo Gismondi
p-Values for Hypothesis Tests

• Traditional method is “Accept” or Reject H0
• Alternate method – compute p-value of the test
• p-value = probability of getting a test result more in favor of H1
than what you got from your sample
• Small p (like < 0.01) is convincing evidence against H0
• Large p (like > 0.20) indicates lack of evidence against H0
• If p < α, reject H0
• If p ≥ α, do not reject H0
• p-value quantifies confidence about the decision

Folie 49   Riccardo Gismondi
Hypothesis Testing in Simulation

• Input side
• Specify input distributions to drive the simulation
• Collect real-world data on corresponding processes
• “Fit” a probability distribution to the observed real-world data
• Test H0: the data are well represented by the fitted distribution
• Output side
• Have two or more “competing” designs modeled
• Test H0: all designs perform the same on output, or test H0: one
design is better than another
• Selection of a “best” model scenario

Folie 50   Riccardo Gismondi
Elements of Probability II

Folie 51   Riccardo Gismondi
Discrete Random Variables

• Binomial Random Variables
Suppose that n independent trials, each of which results in a
success (1) with probability p, are to be performed. If X
represents the number of successes in the n trials, then X is said
to be a binomial random variable with parameters (p, n).
Its PMF is given by

Pi = P(X = i) = C(n, i) p^i (1 − p)^(n−i),   i = 0, 1, …, n

where C(n, i) = n! / ((n − i)! i!). A binomial (1, p) random variable
is called a Bernoulli random variable

Folie 52   Riccardo Gismondi
Discrete Random Variables

• Binomial Random Variables
For a binomial random variable with parameters (p, n), the
expectation and variance are given by (home assignment!)

E[X] = np,   Var(X) = np(1 − p)

• Poisson Random Variables
A random variable that takes on one of the values 0, 1, 2, 3, …
is said to be a Poisson random variable with parameter λ
if its PMF is given by

Pi = P(X = i) = e^(−λ) λ^i / i!,   where e = lim n→∞ of (1 + 1/n)^n

Folie 53   Riccardo Gismondi
Discrete Random Variables

• Poisson Random Variables
Poisson random variables have a wide range of
applications. One reason is that such random
variables may be used to approximate the distribution of the number
of successes in a large number of trials, when each trial has a small
probability of success.
To see why, suppose that X is a binomial random variable with
parameters (n, p), and let λ = np. Then

Pi = P(X = i) = C(n, i) p^i (1 − p)^(n−i) = n! / ((n − i)! i!) · p^i (1 − p)^(n−i)

Folie 54   Riccardo Gismondi
Discrete Random Variables

• Poisson Random Variables

ni
n!                        
i

                              1  
(n  i)!i !  n                n
 
n

i 1 
n(n  1)..(n  i  1)   n   

ni           i !   i
1  
 n

Folie 55   Riccardo Gismondi
Discrete Random Variables

• Poisson Random Variable
Now for n large and p small,

(1 − λ/n)^n → e^(−λ),   n(n − 1)···(n − i + 1) / n^i → 1,   (1 − λ/n)^i → 1

Hence for n large and p small,

Pi = P(X = i) ≈ e^(−λ) λ^i / i!

Since the mean and variance of a binomial random variable Y
are given by E[Y] = np, Var(Y) = np(1 − p) ≈ np,
then for a Poisson variable with parameter λ we get

E[X] = Var(X) = λ

Folie 56   Riccardo Gismondi
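The approximation derived above is easy to check numerically; the following Python sketch (our helper names, with illustrative parameters n = 1000, p = 0.003, so λ = 3) compares the two PMFs pointwise:

```python
import math

def binom_pmf(n, p, i):
    """P(X = i) for a binomial (n, p) random variable."""
    return math.comb(n, i) * p**i * (1 - p) ** (n - i)

def poisson_pmf(lam, i):
    """P(X = i) for a Poisson random variable with parameter lam."""
    return math.exp(-lam) * lam**i / math.factorial(i)

# n large, p small, lambda = n*p held fixed
n, p = 1000, 0.003
max_diff = max(abs(binom_pmf(n, p, i) - poisson_pmf(n * p, i))
               for i in range(15))
```

For these parameters the largest pointwise discrepancy is well below 0.01, illustrating why the Poisson distribution is a good stand-in for the binomial in the rare-event regime.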
Continuous Random Variables

• Uniformly Distributed Random Variables
A random variable X is said to be uniformly distributed
over the interval (a,b) if its pdf is given by

f(x) = 1/(b − a) if a < x < b, and 0 otherwise

For the mean we have:

E[X] = (1/(b − a)) ∫ from a to b of x dx = (b² − a²) / (2(b − a)) = (a + b)/2

Folie 57   Riccardo Gismondi
Continuous Random Variables

• Uniformly Distributed Random Variables
For the variance we have:

E[X²] = (1/(b − a)) ∫ from a to b of x² dx = (b³ − a³) / (3(b − a)) = (b² + a² + ab)/3

(E[X])² = ((a + b)/2)² = (b² + a² + 2ab)/4

Var(X) = E[X²] − (E[X])² = (b² + a² + ab)/3 − (b² + a² + 2ab)/4 = (b − a)²/12

Folie 58   Riccardo Gismondi
Continuous Random Variables

• Normal Random Variables
A random variable X is said to be normally distributed if
its pdf is given by

f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)),   −∞ < x < ∞

It is not difficult to show (home assignment!) that:

E[X] = μ,   Var(X) = σ²

Folie 59   Riccardo Gismondi
Continuous Random Variables

• Normal Random Variables
An important fact about normal random variables is that
if X is normally distributed then (home assignment!)

X ~ N(μ, σ²)  ⇒  aX + b ~ N(aμ + b, a²σ²)

It follows from this that if

X ~ N(μ, σ²), then Z = (X − μ)/σ ~ N(0, 1)

Such a random variable is said to have the standard normal distribution

Folie 60   Riccardo Gismondi
Continuous Random Variables

• Normal Random Variables
Let Φ denote the CDF of X ~ N(0,1), that is

Φ(x) = (1/√(2π)) ∫ from −∞ to x of e^(−t²/2) dt,   −∞ < x < ∞

We have the Central Limit Theorem: let X1, X2, … be a sequence of
i.i.d. random variables with finite mean μ and finite variance σ²;
then

lim n→∞ of P( (X1 + X2 + … + Xn − nμ) / (σ√n) ≤ x ) = Φ(x)

Folie 61   Riccardo Gismondi
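The theorem can be illustrated by simulation (a Python sketch with names of our choosing): standardize sums of uniforms, for which μ = 1/2 and σ² = 1/12, and check that the result behaves like a standard normal:

```python
import random
import statistics

def standardized_sum(n, rng):
    """(X1 + ... + Xn - n*mu) / (sigma * sqrt(n)) for Xi ~ Uniform(0,1),
    where mu = 1/2 and sigma**2 = 1/12."""
    mu, sigma = 0.5, (1.0 / 12.0) ** 0.5
    s = sum(rng.random() for _ in range(n))
    return (s - n * mu) / (sigma * n ** 0.5)

rng = random.Random(4)
zs = [standardized_sum(30, rng) for _ in range(20_000)]
# the empirical CDF at 0 should be close to Phi(0) = 0.5
frac_below_zero = sum(z <= 0 for z in zs) / len(zs)
```

Even with n as small as 30, the standardized sums have mean near 0, standard deviation near 1, and roughly half the mass below zero, as the CLT predicts.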
Continuous Random Variables

• Exponential Random Variables
A random variable X is said to be exponentially distributed
with parameter λ if its pdf is given by

f(x) = λ e^(−λx),   x ≥ 0

Its CDF is given by

F(x) = ∫ from 0 to x of λ e^(−λt) dt = 1 − e^(−λx),   x ≥ 0

It is not difficult to show (home assignment!) that:

E[X] = 1/λ,   Var(X) = 1/λ²

Folie 62   Riccardo Gismondi
Continuous Random Variables

• Exponential Random Variables
The key property of the exponential random variable is the
“memoryless property”:

P(X > s + t | X > s) = P(X > t),   s, t ≥ 0

This equation is equivalent to

P(X > s + t) = P(X > s) P(X > t)

which is clearly satisfied whenever X is an exponential random
variable, since

P(X > x) = e^(−λx)

Folie 63   Riccardo Gismondi
Generating Random Variables

Folie 64   Riccardo Gismondi
Discrete Random Variables

• The Inverse Transform Method (1)
Suppose we want to generate a DRV X having PMF

P(X = xj) = pj,   j = 0, 1, …,   with Σj pj = 1

To accomplish this, we generate u ~ U(0,1) and set

X = x0, if u < p0
    x1, if p0 ≤ u < p0 + p1
    …
    xj, if p0 + … + p(j−1) ≤ u < p0 + … + pj
    …
Folie 65   Riccardo Gismondi
Discrete Random Variables

• The Inverse Transform Method (2)
Since for 0 ≤ a < b ≤ 1, P(a ≤ u < b) = b − a, we
have that

P(X = xj) = P( p0 + … + p(j−1) ≤ u < p0 + … + pj ) = pj

and so X has the desired distribution

Folie 66   Riccardo Gismondi
Computer exercises (MATLAB)

1.         Simulate a RV X such that

p1 = 0.2, p2 = 0.15, p3 = 0.25, p4 = 0.4,   pj = P(X = j)

2.         Simulate a RV X such that

p1 = 0.05, p2 = 0.15, p3 = 0.22, p4 = 0.08,
p5 = 0.12, p6 = 0.1, p7 = 0.1, p8 = 0.08,
p9 = 0.05, p10 = 0.03, p11 = 0.02,   pj = P(X = j)

3.         Simulate a RV X such that

pj = P(X = j) = 1/n,   j = 1, …, n

Folie 67    Riccardo Gismondi
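The inverse transform method for a DRV can be sketched as follows (Python rather than the course's MATLAB; the function name is ours), using the probabilities from exercise 1 as an illustration:

```python
import random
from collections import Counter

def discrete_inverse_transform(values, probs):
    """Return the first x_j whose cumulative probability exceeds u ~ U(0,1)."""
    u = random.random()
    cum = 0.0
    for x, p in zip(values, probs):
        cum += p
        if u < cum:
            return x
    return values[-1]          # guard against floating-point round-off

random.seed(5)
probs = [0.2, 0.15, 0.25, 0.4]                  # exercise 1
sample = [discrete_inverse_transform([1, 2, 3, 4], probs)
          for _ in range(50_000)]
freq = Counter(sample)
```

The empirical frequencies converge to the prescribed pj as the sample grows.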
Home assignment (Matlab)

•          Write a code to generate a DRV with the Inverse
Transform Method

[U,Hist(U)]=f(data,N)

Folie 68    Riccardo Gismondi
Continuous Random Variables

• The Inverse Transform Algorithm (2)
Consider a CRV having CDF F. A general method for
generating such a random variable is based on the following proposition.
• Proposition:
Let U be a uniform (0,1) RV. For any continuous distribution
function F, the RV X defined by X = F⁻¹(U) has
distribution F
• Proof: let FX denote the CDF of X = F⁻¹(U). Then

FX(x) = P(X ≤ x) = P(F⁻¹(U) ≤ x)
Folie 69   Riccardo Gismondi
Continuous Random Variables

• The Inverse Transform Algorithm (3)
Now since F is a CDF (monotone increasing), it follows that

a ≤ b  ⇔  F(a) ≤ F(b)

Hence, we see that

FX(x) = P( F(F⁻¹(U)) ≤ F(x) )
      = P( U ≤ F(x) )
      = F(x)

Folie 70   Riccardo Gismondi
Continuous Random Variables

• The Inverse Transform Algorithm (4)
1. Suppose we want to generate a CRV X with CDF
F(x) = x^n, 0 < x ≤ 1. If we let x = F⁻¹(u), then

u = F(x) = x^n,   i.e.   x = u^(1/n)

2. If X is an exponential RV with rate 1, then its CDF is given by
F(x) = 1 − e^(−x)

x = F⁻¹(u)  ⇔  u = F(x) = 1 − e^(−x)
x = −log(1 − u), or
x = −log(u), since Y ~ U(0,1)  ⇒  1 − Y ~ U(0,1)

Folie 71   Riccardo Gismondi
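Example 2 above, generalized to rate λ, can be sketched as (Python; our function name):

```python
import math
import random
import statistics

def exponential_rv(lam=1.0):
    """Inverse transform for the exponential: X = -log(1 - U) / lam
    has CDF 1 - exp(-lam * x); -log(U) works equally well, since
    1 - U is also uniform on (0,1)."""
    return -math.log(1.0 - random.random()) / lam

random.seed(6)
xs = [exponential_rv(lam=2.0) for _ in range(100_000)]
# mean should approach 1/lam = 0.5, variance 1/lam**2 = 0.25
```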
Continuous Random Variables

• The Box-Muller method for generating Standard Normal
Random Variables (1)
Let X and Y be independent standard normal RVs and let R and
θ denote the polar coordinates of the vector (X,Y), that is

R² = X² + Y²,   tan θ = Y/X   (so sin θ = Y/R, cos θ = X/R)

Folie 72     Riccardo Gismondi
Continuous Random Variables

• The Box-Muller method for generating Standard Normal
Random Variables (2)
Since X and Y are independent, their joint PDF is given (why?)
by

f(x, y) = (1/√(2π)) e^(−x²/2) · (1/√(2π)) e^(−y²/2) = (1/(2π)) e^(−(x² + y²)/2)

To determine the joint density of R² and θ we make the
change of variables:

d = x² + y²,   θ = arctan(y/x)
Folie 73   Riccardo Gismondi
Continuous Random Variables

• The Box-Muller method for generating Standard Normal
Random Variables (3)
Since the Jacobian of the transformation is equal to 2 (home
assignment!), it follows that the joint PDF of R² and θ is

f(d, θ) = (1/2) e^(−d/2) · (1/(2π)),   0 < d < ∞,  0 < θ < 2π

However, this is equal to the product of an exponential density
having mean 2 and the uniform density on (0, 2π)!
It follows that R² and θ are independent, with

R² ~ exp(mean 2),   θ ~ U(0, 2π)
Folie 74   Riccardo Gismondi
Continuous Random Variables

• The Box-Muller method for generating Standard Normal
Random Variables (4)
We can now generate a pair of independent standard normal
RVs with the following algorithm:

Step 1: generate uniform random numbers U1 and U2

Step 2: R² = −2 log(U1),   θ = 2π U2

Step 3: X = R cos(θ) = √(−2 log U1) cos(2π U2)
        Y = R sin(θ) = √(−2 log U1) sin(2π U2)

Folie 75   Riccardo Gismondi
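The three steps above translate directly into a Python sketch (the course's home assignment asks for MATLAB; `box_muller` is our naming):

```python
import math
import random
import statistics

def box_muller():
    """One Box-Muller step: two independent standard normals from two uniforms."""
    u1 = 1.0 - random.random()           # in (0, 1], keeps log(u1) finite
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(7)
zs = [z for _ in range(50_000) for z in box_muller()]
```

The sample mean, standard deviation, and the fraction within ±1.96 all match the standard normal values.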
Home assignment (Matlab)

•          Write a code to generate a standard normal RV with
the Box-Muller Method

[U,hist(U)]=f(N)

Folie 76    Riccardo Gismondi
Continuous Random Variables

• The Polar method for generating Standard Normal
Random Variables (1)
Unfortunately, the Box-Muller transformation is computationally
not very efficient: it requires the sine and cosine trigonometric functions.
Idea: indirect computation of the sine and cosine of a random angle.
If (V1, V2) is a random point in the square (−1,1) × (−1,1) and
R² = V1² + V2², then

sin θ = V2/R = V2/√(V1² + V2²),   cos θ = V1/R = V1/√(V1² + V2²)
Folie 77   Riccardo Gismondi
Continuous Random Variables

• The Polar method for generating Standard Normal
Random Variables (2)
Thus:

X = √(−2 log U) cos(2πU) = √(−2 log U) · V1/√(V1² + V2²)

Y = √(−2 log U) sin(2πU) = √(−2 log U) · V2/√(V1² + V2²)

If U1 and U2 are uniform over (0,1), then V1 = 2U1 − 1 and
V2 = 2U2 − 1 are uniform over (−1,1); setting
R² = V1² + V2² = S we obtain that
Folie 78   Riccardo Gismondi
Continuous Random Variables

• The Polar method for generating Standard Normal
Random Variables (3)
X = √(−2 log S) · V1/√S = V1 √(−2 log(S)/S)

Y = √(−2 log S) · V2/√S = V2 √(−2 log(S)/S)

are independent standard normal when (V1, V2) is a randomly
chosen point in the circle of radius 1 centered at the origin.
Summing up, we have the following algorithm to generate a pair
of independent standard normals

Folie 79   Riccardo Gismondi
Continuous Random Variables

• The Polar method Algorithm
STEP 1: generate random numbers U₁ and U₂
STEP 2: set  V₁ = 2U₁ − 1,  V₂ = 2U₂ − 1,  S = V₁² + V₂²
STEP 3: if S > 1, go to STEP 1;
        else return the independent standard normals

        X = √(−2 log(S)/S) · V₁
        Y = √(−2 log(S)/S) · V₂
Folie 80   Riccardo Gismondi
Home assignment (Matlab)

•          Write a code to generate a standard normal RV with
the Polar Method

[U,hist(U)]=f(N)

Folie 81    Riccardo Gismondi
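Again the assignment is a Matlab exercise; as a sketch, the acceptance-rejection loop of the polar method in Python/NumPy (function name and vectorized batching are my own choices):

```python
import numpy as np

def polar_method(n, rng=None):
    """Return 2*n standard normals via the polar (Marsaglia) method,
    which avoids evaluating sine and cosine."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(0)
    while out.size < 2 * n:
        v = 2.0 * rng.random((n, 2)) - 1.0        # candidates uniform on (-1,1)^2
        s = (v ** 2).sum(axis=1)
        keep = (s > 0) & (s < 1)                  # accept points inside the unit disk
        v, s = v[keep], s[keep]
        factor = np.sqrt(-2.0 * np.log(s) / s)    # sqrt(-2 log(S) / S)
        out = np.concatenate([out, (v * factor[:, None]).ravel()])
    return out[:2 * n]
```

About π/4 ≈ 79% of candidate points survive the rejection step, so the loop typically runs only a couple of times.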
Continuous Random Variables

• Generating Multivariate Normal Variables (1)
A d-dimensional multivariate normal distribution is
characterized by a d-vector m and a d × d matrix Σ;
we abbreviate it as N(m, Σ). Σ must be symmetric and
positive semidefinite, meaning that

        xᵀ Σ x ≥ 0   for all x ∈ ℝᵈ

If Σ is positive definite, then the normal distribution N(m, Σ)
has density

        φ_{m,Σ}(x) = (2π)^{−d/2} |Σ|^{−1/2} exp( −½ (x − m)ᵀ Σ⁻¹ (x − m) ),   x ∈ ℝᵈ
Folie 82   Riccardo Gismondi
Continuous Random Variables

• Generating Multivariate Normal Variables (2)
The standard d-dimensional multivariate normal distribution
N(0, I_d), with I_d the d × d identity matrix, is the special case

        φ_{0,I_d}(x) = (2π)^{−d/2} exp( −½ xᵀx )

If X ~ N(m, Σ), then its i-th component Xᵢ has distribution
N(mᵢ, σᵢ²) with σᵢ² = Σᵢᵢ. The i-th and j-th components have
covariance

        Cov(Xᵢ, Xⱼ) = E[ (Xᵢ − mᵢ)(Xⱼ − mⱼ) ] = Σᵢⱼ
Folie 83   Riccardo Gismondi
Continuous Random Variables

• Generating Multivariate Normal Variables (3)
Any linear transformation of a normal vector is again normal:

        X ~ N(m, Σ)  ⟹  AX ~ N(Am, AΣAᵀ)

for any d-vector m, any d × d matrix Σ, and any k × d matrix A, for any k.
Using any of the methods described before, we can generate
standard normal variables Z₁, Z₂, ..., Z_d and assemble them into
a vector Z ~ N(0, I). Thus the problem of sampling X from the
multivariate normal N(m, Σ) reduces to finding a matrix A for
which AAᵀ = Σ.

Folie 84   Riccardo Gismondi
Continuous Random Variables

• Generating Multivariate Normal Variables (4)
Among all such A, a lower triangular one is particularly
convenient, because it reduces the calculation of m + AZ
to the following:

        X₁ = m₁ + A₁₁Z₁
        X₂ = m₂ + A₂₁Z₁ + A₂₂Z₂
        ...
        X_d = m_d + A_{d1}Z₁ + A_{d2}Z₂ + ... + A_{dd}Z_d

A full multiplication of the vector Z by the matrix A would
require approximately twice as many multiplications and additions.

Folie 85   Riccardo Gismondi
Continuous Random Variables

• Generating Multivariate Normal Variables (5)
A representation of Σ as AAᵀ with A lower triangular is a
Cholesky factorization of Σ.
Consider a 2 × 2 covariance matrix Σ represented as

        Σ = [ σ₁²     ρσ₁σ₂ ]
            [ ρσ₁σ₂   σ₂²   ]

Assuming σ₁ > 0 and σ₂ > 0, the Cholesky factor is (verify it!)

        A = [ σ₁      0          ]
            [ ρσ₂     σ₂√(1−ρ²)  ]
Folie 86   Riccardo Gismondi
Continuous Random Variables

• Generating Multivariate Normal Variables (6)
Thus we can generate a bivariate normal distribution N(m, Σ)
by setting

        X₁ = m₁ + σ₁Z₁
        X₂ = m₂ + ρσ₂Z₁ + σ₂√(1−ρ²) Z₂

For the d-dimensional case we need to solve AAᵀ = Σ for a lower
triangular A:

        [ A₁₁                       ] [ A₁₁  A₂₁  ...  A_{d1} ]
        [ A₂₁    A₂₂                ] [      A₂₂  ...  A_{d2} ]  =  Σ
        [ ...    ...    ...         ] [           ...  ...    ]
        [ A_{d1} A_{d2} ... A_{dd}  ] [                A_{dd} ]
Folie 87   Riccardo Gismondi
Home assignment (Matlab)

•          Write a code to generate Multivariate Normal RVs

[ X₁, X₂, ..., X_d, hist(X₁), hist(X₂), ..., hist(X_d) ] = f(m, Σ)

Folie 88    Riccardo Gismondi
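The course asks for Matlab; a minimal Python/NumPy sketch of the X = m + AZ recipe, delegating the factorization to `numpy.linalg.cholesky` (function name and interface are my own):

```python
import numpy as np

def multivariate_normal(m, sigma, n, rng=None):
    """Draw n samples from N(m, Sigma) as X = m + A Z, where A is the
    lower-triangular Cholesky factor of Sigma and Z ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.linalg.cholesky(np.asarray(sigma, dtype=float))   # A A^T = Sigma
    z = rng.standard_normal((n, len(m)))
    return np.asarray(m, dtype=float) + z @ a.T
```

For a 2 × 2 Σ this reproduces the bivariate recursion X₁ = m₁ + σ₁Z₁, X₂ = m₂ + ρσ₂Z₁ + σ₂√(1−ρ²)Z₂ from the slides.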
Brownian Motion I

Folie 89   Riccardo Gismondi
Brownian Motion

• One Dimension:
• a standard one-dimensional Brownian motion on [0, T ] is
a stochastic process {W (t ),0  t  T } with the following
properties:
• W (0)  0
• t  W (t ) is, with probability 1, a continuous function on [0, T ]
• the increments {W (t1 )  W (t0 ), W (t 2 )  W (t1 ),..., W (t k )  W (t k 1 )}
are independent for any k and any 0  t0  t1  ...t k  T
• W (t )  W ( s) ~ N (0, t  s) for any 0  s  t  T

Folie 90   Riccardo Gismondi
Brownian Motion

• One Dimension:
• because of the properties above, W(t) ~ N(0, t) holds for 0 ≤ t ≤ T
• for constants μ and σ² > 0, the process X(t) is a
  Brownian motion with drift μ and diffusion coefficient σ², if

        ( X(t) − μt ) / σ   is a standard Brownian motion

• X(t) = μt + σW(t) … represents a Brownian motion with drift
  μ and diffusion coefficient σ²

Folie 91    Riccardo Gismondi
Brownian Motion

•          Standard Brownian Motion   W (t )

Folie 92    Riccardo Gismondi
Brownian Motion

•          Brownian Motion     X (t )

Folie 93   Riccardo Gismondi
Brownian Motion

• One Dimension:
• Consequently:
• X solves the stochastic differential equation (SDE)

        dX(t) = μ dt + σ dW(t)

• for deterministic, but time-varying μ(t) and σ(t) > 0:

        dX(t) = μ(t) dt + σ(t) dW(t)

• through integration

        X(t) = X(0) + ∫₀ᵗ μ(s) ds + ∫₀ᵗ σ(s) dW(s)

• we obtain the following results.

Folie 94   Riccardo Gismondi
Brownian Motion

• One Dimension:
• The process             X
• has continuous sample paths and
• independent increments
• Each increment X(t) − X(s) is normally distributed with
  • mean

        E[ X(t) − X(s) ] = ∫ₛᵗ μ(u) du

  • and variance

        Var[ X(t) − X(s) ] = Var[ ∫ₛᵗ σ(u) dW(u) ] = ∫ₛᵗ σ²(u) du

Folie 95   Riccardo Gismondi
Brownian Motion

• Random Walk Construction:
• subsequent values of a standard Brownian motion, starting
  from t₀ = 0 and W(0) = 0, can be generated as follows:

        W(t_{i+1}) = W(t_i) + √(t_{i+1} − t_i) Z_{i+1},   i = 0, ..., n−1,   Z_i ~ N(0,1) i.i.d.

• for X ~ BM(μ, σ²):

        X(t_{i+1}) = X(t_i) + μ(t_{i+1} − t_i) + σ√(t_{i+1} − t_i) Z_{i+1},   i = 0, ..., n−1

• with time-dependent coefficients:

        X(t_{i+1}) = X(t_i) + ∫_{t_i}^{t_{i+1}} μ(s) ds + √( ∫_{t_i}^{t_{i+1}} σ²(u) du ) Z_{i+1},   i = 0, ..., n−1

• with Euler approximation:

        X(t_{i+1}) = X(t_i) + μ(t_i)(t_{i+1} − t_i) + σ(t_i)√(t_{i+1} − t_i) Z_{i+1},   i = 0, ..., n−1
Folie 96     Riccardo Gismondi
Home assignment (Matlab)

•          Write a code to generate a Standard Brownian
Motion W (t )

•          Write a code to generate a Brownian Motion   X (t )

Folie 97    Riccardo Gismondi
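The assignment is in Matlab; the random-walk recursion above can be sketched in Python/NumPy as follows (function name and interface are my own; μ = 0, σ = 1 gives the standard Brownian motion W(t)):

```python
import numpy as np

def brownian_motion(mu, sigma, T, n, rng=None):
    """Random-walk construction of X ~ BM(mu, sigma^2) on an equal grid:
    X(t_{i+1}) = X(t_i) + mu*dt + sigma*sqrt(dt)*Z_{i+1}, with X(0) = 0."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return np.concatenate([[0.0], np.cumsum(increments)])
```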
Brownian Motion

• Random Walk Construction:
• for a standard Brownian motion, we know that

E W (ti )  0
and for the covariance matrix we get

CovW ( s), W (t )  CovW ( s), W ( s)  (W (t )  W ( s)) 
 CovW ( s), W ( s)  CovW ( s), W (t )  W ( s)
 s0  s

for the covariance matrix of   W (t1 ),..., W (t n )   we denote:
Cij  min( ti , t j )

Folie 98     Riccardo Gismondi
Brownian Motion

• Cholesky Factorization
• the vector (W(t₁), ..., W(t_n)) has the distribution N(0, C)
• and we may simulate it as the vector AZ, where
  Z = (Z₁, ..., Z_n)ᵀ ~ N(0, I)
• and A satisfies AAᵀ = C

• by applying the Cholesky factorization, we get A as a lower triangular
  matrix of the form

        A = [ √t₁       0           ...   0                ]
            [ √t₁    √(t₂ − t₁)     ...   0                ]
            [ ...       ...         ...   ...              ]
            [ √t₁    √(t₂ − t₁)     ...   √(t_n − t_{n−1}) ]
Folie 99   Riccardo Gismondi
Home assignment (Matlab)

•       Write a code to generate a Standard Brownian
Motion W (t ) with Random Walk construction
( Cholesky factor )

Folie    Riccardo Gismondi
100
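As a Python/NumPy sketch of the Matlab exercise (names are my own): build C with C_ij = min(t_i, t_j), factor it, and multiply by a standard normal vector. Note that the Cholesky factor of this C is exactly the √(t_{i+1} − t_i) matrix shown on the previous slide.

```python
import numpy as np

def bm_cholesky(t, rng=None):
    """Sample (W(t_1), ..., W(t_n)) ~ N(0, C), C_ij = min(t_i, t_j),
    in one shot as A Z with A the Cholesky factor of C."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t, dtype=float)
    c = np.minimum.outer(t, t)          # covariance matrix C_ij = min(t_i, t_j)
    a = np.linalg.cholesky(c)           # lower triangular
    return a @ rng.standard_normal(len(t))
```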
Brownian Motion

• Brownian Bridge Construction
• besides generating the vector W (t1 ),..., W (t n ) from
left to right, there also exists another method
• e.g.
  • generate the last sample step first: W(t_n)
  • then generate W(t_{⌈n/2⌉}) conditional on the value of W(t_n)
  • and proceed with filling in the intermediate values
• this method is useful for
  • variance reduction and
  • low-discrepancy methods
• but it does not give us any computational advantage

Folie      Riccardo Gismondi
101
Brownian Motion

• Brownian Bridge Construction
• suppose 0 < u < s < t
• consider the problem of generating W(s) conditional on
  W(u) = x and W(t) = y
• the unconditional (joint) distribution is given by

        ( W(u) )        (    ( u  u  u ) )
        ( W(s) )  ~  N  ( 0, ( u  s  s ) )
        ( W(t) )        (    ( u  s  t ) )

Folie      Riccardo Gismondi
102
Brownian Motion

• Brownian Bridge Construction
• first we permute the entries

        ( W(s) )        (    ( s  u  s ) )
        ( W(u) )  ~  N  ( 0, ( u  u  u ) )
        ( W(t) )        (    ( s  u  t ) )

• and by applying the conditioning formula to find the distribution of W(s)
  conditional on the value of (W(u), W(t)) we get the mean

        E[ W(s) | W(u) = x, W(t) = y ]  =  (u  s) ( u  u )⁻¹ ( x )  =  ( (t − s)x + (s − u)y ) / (t − u)
                                                  ( u  t )   ( y )
Folie      Riccardo Gismondi
103
Brownian Motion

• Brownian Bridge Construction
• and the variance

        Var[ W(s) | W(u), W(t) ]  =  s − (u  s) ( u  u )⁻¹ ( u )  =  ( (s − u)(t − s) ) / (t − u)
                                                ( u  t )   ( s )

• since

        ( u  u )⁻¹  =  1/(t − u) · ( t/u  −1 )
        ( u  t )                   ( −1    1 )

• the conditional mean is obtained by linearly interpolating between
  (u, x) and (t, y)
Folie      Riccardo Gismondi
104
Brownian Motion

• Brownian Bridge Construction
• suppose that W(s₁) = x₁, W(s₂) = x₂, ..., W(s_k) = x_k
• combining the observations on the mean and the variance, for s_i ≤ s ≤ s_{i+1}:

        ( W(s) | W(s₁) = x₁, ..., W(s_k) = x_k )
        ~  N( ( (s_{i+1} − s)x_i + (s − s_i)x_{i+1} ) / (s_{i+1} − s_i) ,  ( (s_{i+1} − s)(s − s_i) ) / (s_{i+1} − s_i) )

• and finally …

        W(s) = ( (s_{i+1} − s)x_i + (s − s_i)x_{i+1} ) / (s_{i+1} − s_i)  +  √( ( (s_{i+1} − s)(s − s_i) ) / (s_{i+1} − s_i) ) · Z

Folie      Riccardo Gismondi
105
Brownian Motion

• Brownian Bridge Construction

• Brownian bridge construction of a Brownian path: conditional on
  W(s_i) = x_i and W(s_{i+1}) = x_{i+1}, the intermediate value is normally distributed.
  [Figure: sample path with a bridge-sampled intermediate point]

Folie     Riccardo Gismondi
106
Home assignment (Matlab)

•       Write a code to generate a Standard Brownian
Motion W (t ) with Brownian Bridge Construction

Folie    Riccardo Gismondi
107
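As a Python/NumPy sketch of the Matlab exercise (names and the dyadic-grid choice are my own): sample the rightmost point first, then repeatedly fill midpoints from the conditional bridge distribution derived on the previous slides.

```python
import numpy as np

def brownian_bridge_path(T, levels, rng=None):
    """Build W on a dyadic grid of 2**levels steps over [0, T]:
    sample W(T) ~ N(0, T) first, then fill each midpoint s of an
    already-filled pair (u, t) from the conditional distribution
    N( ((t-s)W(u) + (s-u)W(t))/(t-u), (s-u)(t-s)/(t-u) )."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 ** levels
    t = np.linspace(0.0, T, n + 1)
    w = np.zeros(n + 1)
    w[n] = np.sqrt(T) * rng.standard_normal()      # rightmost point first
    step = n
    while step > 1:
        half = step // 2
        for i in range(half, n, step):             # midpoints at this refinement level
            u, s, tt = t[i - half], t[i], t[i + half]
            mean = ((tt - s) * w[i - half] + (s - u) * w[i + half]) / (tt - u)
            var = (s - u) * (tt - s) / (tt - u)
            w[i] = mean + np.sqrt(var) * rng.standard_normal()
        step = half
    return t, w
```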
Brownian Motion

• Brownian Bridge Construction
• How could the construction be modified for a Brownian
  motion with drift μ?
• only sampling the rightmost point would change:
  sample W(t_n) from N(μt_n, t_n) instead of from N(0, t_n)

Folie      Riccardo Gismondi
108
Brownian Motion

• Principal Components Construction
• to visualize the construction of a single Brownian path,
  we can write in vector form:

        ( W(t₁)  )     ( a₁₁    )      ( a₁₂    )            ( a₁ₙ    )
        ( W(t₂)  )  =  ( a₂₁    ) Z₁ + ( a₂₂    ) Z₂ + ... + ( a₂ₙ    ) Z_n
        (  ...   )     ( ...    )      ( ...    )            ( ...    )
        ( W(t_n) )     ( a_{n1} )      ( a_{n2} )            ( a_{nn} )

• with aᵢ = (a₁ᵢ, ..., a_{ni})ᵀ originally derived from AAᵀ = C through
  Cholesky factorization
• now we want to derive the aᵢ through a principal components construction,
  such that aᵢ = √λᵢ vᵢ
• where λ₁ ≥ λ₂ ≥ ... ≥ λ_n > 0 are the eigenvalues of C and the vᵢ
  are the corresponding eigenvectors
Folie      Riccardo Gismondi
109
Brownian Motion

• Principal Components Construction
• it can be shown that for an n-step path with equal
  spacing t_{i+1} − t_i = Δt:

        vᵢ(j) = 2/√(2n+1) · sin( (2i−1)jπ / (2n+1) ),   j = 1, ..., n

        λᵢ = (Δt/4) · sin⁻²( ((2i−1)/(2n+1)) · (π/2) ),   i = 1, ..., n

• because of Cv = λv we can write, for the discrete case,

        Σ_j min(tᵢ, tⱼ) v(j) = λ v(i)

• and   ∫₀¹ min(s, t) ψ(s) ds = λ ψ(t)   for the continuous limit, with the
  eigenfunction ψ on [0,1]

Folie      Riccardo Gismondi
110
Brownian Motion

• Principal Components Construction
• the solution to this equation is

        ψᵢ(t) = √2 · sin( (2i−1)πt / 2 ),   λᵢ = ( 2 / ((2i−1)π) )²

• Karhunen–Loève expansion of Brownian motion:

        W(t) = Σ_{i=1}^∞ √λᵢ ψᵢ(t) Zᵢ,   0 ≤ t ≤ 1

Folie      Riccardo Gismondi
111
Home assignment (Matlab)

•       Write a code to generate a Standard Brownian
Motion W(t) with the Principal Components
Construction (Karhunen–Loève)

Folie    Riccardo Gismondi
112
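A Python/NumPy sketch of the Matlab exercise (names are my own), using the closed-form eigenpairs of C from the previous slide; truncating `num_terms` below n gives the usual low-rank approximation:

```python
import numpy as np

def bm_principal_components(n, T, num_terms=None, rng=None):
    """Principal components construction of (W(t_1), ..., W(t_n)) on an
    equal grid: W = sum_i sqrt(lambda_i) v_i Z_i, with the closed-form
    eigenvalues/eigenvectors of C_ij = min(t_i, t_j)."""
    rng = np.random.default_rng() if rng is None else rng
    num_terms = n if num_terms is None else num_terms
    dt = T / n
    i = np.arange(1, num_terms + 1)
    j = np.arange(1, n + 1)
    lam = (dt / 4.0) / np.sin((2 * i - 1) * np.pi / (2 * (2 * n + 1))) ** 2
    v = (2.0 / np.sqrt(2 * n + 1)) * np.sin(np.outer(2 * i - 1, j) * np.pi / (2 * n + 1))
    z = rng.standard_normal(num_terms)
    return (np.sqrt(lam) * z) @ v       # sum_i sqrt(lambda_i) Z_i v_i
```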
Geometric Brownian Motion, B&S Model,
Pricing of (exotic) options: 1-dim

Folie   Riccardo Gismondi
113
Geometric Brownian Motion

• a stochastic process S (t ) is a geometric Brownian
Motion if log S (t ) is a Brownian motion with initial value
log S (0)
• in other words: a geometric Brownian motion is simply an
exponentiated Brownian motion
• all methods for simulating Brownian motion become methods for
simulating geometric Brownian motion through exponentiation
• a geometric Brownian motion is always positive, because the
  exponential function takes only positive values
• for a geometric Brownian motion, the relative changes

        (S(t₂) − S(t₁))/S(t₁),  (S(t₃) − S(t₂))/S(t₂),  ...,  (S(t_n) − S(t_{n−1}))/S(t_{n−1})

  are independent for t₁ < t₂ < ... < t_n, rather than the absolute changes
  S(t_{i+1}) − S(t_i)
Folie      Riccardo Gismondi
114
Geometric Brownian Motion

• Basic properties
• suppose W is a standard Brownian motion and X
a Brownian Motion generated from W, such that
X ~ BM ( m , s 2 )
• we set S(t) = S(0) exp(X(t)) = f(X(t))
• and with an application of Itô's formula

        dS(t) = f′(X(t)) dX(t) + ½ σ² f″(X(t)) dt
              = S(0) exp(X(t)) [μ dt + σ dW(t)] + ½ σ² S(0) exp(X(t)) dt
              = S(t)(μ + ½σ²) dt + S(t) σ dW(t)
Folie      Riccardo Gismondi
115
Geometric Brownian Motion

• Basic properties
• in contrast, a geometric Brownian motion is specified
  through the SDE (B&S model)

        dS(t)/S(t) = μ dt + σ dW(t)

• there seems to be an ambiguity in the role of μ
• the drift of this geometric Brownian motion is μS(t), and it implies

        d log S(t) = (μ − ½σ²) dt + σ dW(t)

  as can be verified through Itô's formula
• we will use the notation S ~ GBM(μ, σ²)

Folie      Riccardo Gismondi
116
Geometric Brownian Motion

• Basic properties
• if S has initial value S(0), then

        S(t) = S(0) exp( [μ − ½σ²]t + σW(t) )

• and respectively, if u < t,

        S(t) = S(u) exp( [μ − ½σ²](t − u) + σ(W(t) − W(u)) )

• this provides a recursive procedure for simulating values of S at
  0 = t₀ < t₁ < ... < t_n:

        S(t_{i+1}) = S(t_i) exp( [μ − ½σ²](t_{i+1} − t_i) + σ√(t_{i+1} − t_i) Z_{i+1} ),   i = 0, 1, ..., n−1
Folie      Riccardo Gismondi
117
Home assignment (Matlab)

•       Write a code to generate a Geometric Brownian
Motion S (t )

Folie    Riccardo Gismondi
118
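The assignment is again a Matlab exercise; the exact recursion above can be sketched in Python/NumPy as follows (function name and interface are my own):

```python
import numpy as np

def gbm_path(s0, mu, sigma, T, n, rng=None):
    """Exact simulation of S ~ GBM(mu, sigma^2) on an equal grid via
    S(t_{i+1}) = S(t_i) exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z_{i+1})."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    log_incr = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return s0 * np.exp(np.concatenate([[0.0], np.cumsum(log_incr)]))
```

Because the lognormal transition density is used exactly, there is no discretization error, unlike an Euler scheme.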
Geometric Brownian Motion

• B&S Model
• if S has initial value S(0), then

        S(t) = S(0) exp( [μ − ½σ²]t + σW(t) )

• and respectively, if u < t,

        S(t) = S(u) exp( [μ − ½σ²](t − u) + σ(W(t) − W(u)) )

• this provides a recursive procedure for simulating values of S at
  0 = t₀ < t₁ < ... < t_n:

        S(t_{i+1}) = S(t_i) exp( [μ − ½σ²](t_{i+1} − t_i) + σ√(t_{i+1} − t_i) Z_{i+1} ),   i = 0, 1, ..., n−1
Folie      Riccardo Gismondi
119
Geometric Brownian Motion

• Lognormal Distribution
• if S ~ GBM(μ, σ²), then the marginal distribution of
  S(t) is that of the exponential of a normal random variable,
  which is called a lognormal distribution
• Y ~ LN(μ, σ²) if the random variable Y has the distribution of
  exp(μ + σZ), Z ~ N(0,1)
• the distribution function thus is given by

        P(Y ≤ y) = P( Z ≤ (log(y) − μ)/σ ) = Φ( (log(y) − μ)/σ )

• and the density is

        (1/(yσ)) · φ( (log(y) − μ)/σ )
Folie      Riccardo Gismondi
120
Geometric Brownian Motion

• Lognormal Distribution
• for Y ~ LN(μ, σ²):

  • Expected value:     E[Y] = e^{μ + ½σ²}

  • Variance:           Var[Y] = e^{2μ + σ²} (e^{σ²} − 1)

• if S ~ GBM(μ, σ²), then (S(t)/S(0)) ~ LN( [μ − ½σ²]t, σ²t )
  and

        E[S(t)] = e^{μt} S(0)
        Var[S(t)] = e^{2μt} S(0)² (e^{σ²t} − 1)
Folie      Riccardo Gismondi
121
Geometric Brownian Motion

• Lognormal Distribution
• in fact, we have

        E[ S(t) | S(τ), 0 ≤ τ ≤ u ] = E[ S(t) | S(u) ] = e^{μ(t−u)} S(u),   u < t

• the first equality is the Markov property (it follows from the fact that S is
  a one-to-one transformation of a Brownian motion, itself a Markov process)
• μ acts as an average growth rate for S
• for a standard Brownian motion W, we have t⁻¹W(t) → 0
  with probability 1
• for S ~ GBM(μ, σ²) we therefore find that

        (1/t) log( S(t)/S(0) ) → μ − ½σ²

  with probability 1

Folie        Riccardo Gismondi
122
Geometric Brownian Motion

• Lognormal Distribution
• if the growth rate μ − ½σ² is positive:

        S(t) → ∞  as  t → ∞

• if it is negative:

        S(t) → 0  as  t → ∞

Folie      Riccardo Gismondi
123
Geometric Brownian Motion

• Risk-Neutral Dynamics
• the difficulty within this model lies in finding the correct
  probability measure under which to take the expectation, and
• in finding the appropriate discount rate
• this bears on how the paths of the underlying are to be generated and
  on how the drift parameter μ is chosen

• we assume the existence of a continuously compounded interest rate r
  for riskless borrowing and lending
• therefore a dollar investment grows to β(t) = e^{rt} by time t
• we call β the numeraire asset

Folie      Riccardo Gismondi
124
Geometric Brownian Motion

• Risk-Neutral Dynamics
• suppose the existence of an asset S that does not pay
  dividends
• then under the risk-neutral measure, the discounted price process is
  a martingale:

        S(u)/β(u) = E[ S(t)/β(t) | {S(τ), 0 ≤ τ ≤ u} ]

• if S is a GBM under the risk-neutral measure, then it must have μ = r
• and further

        dS(t)/S(t) = r dt + σ dW(t)

• in a risk-neutral world, all assets have the same average rate
  of return

Folie      Riccardo Gismondi
125
Geometric Brownian Motion

• Risk-Neutral Dynamics
• therefore the drift parameter for S(t) will equal the risk-free rate r
• should an asset pay dividends, the martingale property continues
  to hold, but
  • with S replaced by the sum of S, any dividends paid, and any
    interest earned from investing these dividends at r

• let D(t) be the value of all dividends paid over [0, t]
• and assume that the asset pays a continuous dividend yield of δ,
  such that it pays dividends at rate δS(t) at time t
Folie      Riccardo Gismondi
126
Geometric Brownian Motion

• Risk-Neutral Dynamics
• D grows at

        dD(t)/dt = δS(t) + rD(t)

• if S ~ GBM(μ, σ²), then the drift in (S(t) + D(t)) is

        μS(t) + δS(t) + rD(t)

• applying the martingale property to the combined process
  (S(t) + D(t)) requires that this drift equal r(S(t) + D(t))
• therefore, we must have μ + δ = r, i.e. μ = r − δ

Folie      Riccardo Gismondi
127
Geometric Brownian Motion

• Risk-Neutral Dynamics
• Example: Futures Contract
  • commits the holder to buying an underlying asset at a fixed
    price at a fixed date in the future
  • both the buyer and the seller agree to the futures price, specified in
    the contract, without either party paying anything immediately to the
    other
  • S(T) ... price of the asset at the future time T
  • F(t, T) ... futures price at time t ≤ T for a contract to be settled at T

Folie      Riccardo Gismondi
128
Geometric Brownian Motion

• Risk-Neutral Dynamics
• a futures contract having zero value at the inception
  time t entails

        0 = e^{−r(T−t)} E[ (S(T) − F(t, T)) | F_t ]

• where F_t is the history of the market prices up to time t
• at t = T the spot and futures prices must agree, so S(T) = F(T, T)
• and we may rewrite the condition as F(t, T) = E[ F(T, T) | F_t ]
• the futures price is a martingale under the risk-neutral measure
• when modeling a futures price with a GBM, you should therefore set the
  drift parameter to zero:

        dF(t, T)/F(t, T) = σ dW(t)

• and finally this leads to F(t, T) = e^{(r−δ)(T−t)} S(t)
Folie       Riccardo Gismondi
129
Geometric Brownian Motion

• B&S Model
The Black-Scholes model makes many theoretical assumptions:
1. the short interest rate is known and is constant through time;
2. the underlying asset pays no dividends;
3. the option is European;
4. there are no transaction costs;
5. it is possible to borrow any fraction of the price of a security to
   buy it or to hold it, at the short interest rate;
6. trading can be carried on continuously.

Folie     Riccardo Gismondi
130
Geometric Brownian Motion

• B&S Model
With the assumption that the underlying follows a GBM
model, S ~ GBM(μ, σ²), and with a no-arbitrage argument,
Black and Scholes obtained their famous formula for the
European call option price:

        C( S(t), T, r, σ, K ) = S(t) N(d₁) − K e^{−r(T−t)} N(d₂)

        d₁ = [ log(S(t)/K) + (r + ½σ²)(T − t) ] / ( σ√(T − t) )

        d₂ = [ log(S(t)/K) + (r − ½σ²)(T − t) ] / ( σ√(T − t) ) = d₁ − σ√(T − t)

Folie 131   Riccardo Gismondi
Home assignment (Matlab)

•       Write a code for pricing a European call/put option,
simulating S(t) as S ~ GBM(μ, σ²), and
study the asymptotic convergence to the theoretical
B&S price.
•       Describe and plot the convergence of the MC estimator.
•       What is the variance of the estimator?
•       When will convergence slow down?
•       How many simulations do you need to be “sure”?

Folie    Riccardo Gismondi
132
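The assignment is in Matlab; a Python/NumPy sketch of both sides of the comparison (function names and interfaces are my own). The reported standard error is the square root of the estimator variance, which answers the "how many simulations" question: the error shrinks like 1/√N.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s0, k, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s0 * norm_cdf(d1) - k * exp(-r * T) * norm_cdf(d2)

def mc_call(s0, k, T, r, sigma, n_paths, rng=None):
    """MC price under risk-neutral GBM (drift r): discounted mean payoff,
    together with the standard error of the estimator."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(st - k, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)
```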
Geometric Brownian Motion

• Path-Dependent Options
• the reason for simulating GBM paths is that they can be
  used for pricing options beyond plain vanillas
• particularly those whose payoff depends on the path of an
  underlying asset S and not simply on its terminal value S(T)
• the price of an option may be represented as an expected
  discounted payoff, so
  • simulate by generating paths of the underlying asset,
  • evaluate the discounted payoff on each path
  • and average over all paths

Folie      Riccardo Gismondi
133
Geometric Brownian Motion

• Path-Dependent Payoffs
• an option price could in principle depend on the complete
  path over an interval [0, T]
• e.g.
  • Asian options – discrete monitoring:
    an option on a time average of the underlying asset.

        Payoff Asian call:  ( S̄ − K )⁺

        Payoff Asian put:   ( K − S̄ )⁺

    with constant strike price K and

        S̄ = (1/n) Σ_{i=1}^n S(tᵢ)
Folie      Riccardo Gismondi
134
Geometric Brownian Motion

• Path-Dependent Payoffs
• Asian options – continuous monitoring:
  just replace the discrete average with the continuous average

        S̄ = 1/(t − u) · ∫ᵤᵗ S(τ) dτ

  over an interval [u, t]

Folie   Riccardo Gismondi
135
Geometric Brownian Motion

• Path-Dependent Payoffs
• Geometric average option:
  replacing the arithmetic average S̄ with

        ( ∏_{i=1}^n S(tᵢ) )^{1/n}

• such options are useful as test cases for computational procedures
• they are mathematically convenient to work with, because a geometric
  average of (jointly) lognormal values is again lognormal
• we find (with μ replaced by r) that

        ( ∏_{i=1}^n S(tᵢ) )^{1/n} = S(0) exp( [r − ½σ²] (1/n) Σ_{i=1}^n tᵢ + (σ/n) Σ_{i=1}^n W(tᵢ) )
Folie   Riccardo Gismondi
136
Home assignment (Matlab)

•       Write a code to price a Asian Option with
arithmetic/geometric mean and discrete monitoring :

function [Call, Put] = AsianOption( S0, K, T, r, σ, ts, MeanType )

S0 .............. value of S at t = 0 (start value)
K ............... strike
T ............... expiration, e.g. 1 year
r ............... risk-free rate
σ ............... volatility
ts .............. time step, e.g. daily, 1/252
MeanType ........ Arithmetic | Geometric

Folie      Riccardo Gismondi
137
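The assignment is in Matlab; a Python/NumPy sketch of the same pricer (names and the `mean_type` flag are my own). A geometric mean of positive values is the exponential of the mean of the logs, which is how it is computed here:

```python
import numpy as np

def asian_option(s0, k, T, r, sigma, n_steps, n_paths, mean_type="arithmetic", rng=None):
    """MC prices of Asian call and put with discrete monitoring at
    n_steps equally spaced dates, under risk-neutral GBM."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma ** 2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    if mean_type == "arithmetic":
        avg = np.exp(log_s).mean(axis=1)
    else:                                  # geometric mean = exp(mean of logs)
        avg = np.exp(log_s.mean(axis=1))
    disc = np.exp(-r * T)
    call = disc * np.maximum(avg - k, 0.0).mean()
    put = disc * np.maximum(k - avg, 0.0).mean()
    return call, put
```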
Geometric Brownian Motion

• Path-Dependent Payoffs
• Barrier Options:
  the option gets “knocked out” if the underlying asset
  crosses a prespecified level.
  e.g. a “down-and-out call” with
  • barrier b
  • strike K
  • and expiration T
  • has the payoff   1{τ(b) > T} ( S(T) − K )⁺
  • where τ(b) = inf{ tᵢ : S(tᵢ) < b } is the first time that the price
    of the underlying drops below b
  • and 1{ } represents the indicator function of the event in the
    braces
Folie   Riccardo Gismondi
138
Geometric Brownian Motion

• Path-Dependent Payoffs
• Barrier Options:
respectively, a “down-and-in call” has the payoff

        1{τ(b) ≤ T} ( S(T) − K )⁺

and gets “knocked-in” only when the underlying asset crosses
the barrier.

• Up-and-out and up-and-in calls and puts can be derived
analogously!

Folie   Riccardo Gismondi
139
Home assignment (Matlab)

•       Write a code to price a Barrier Option (down and in,
down and out) with discrete monitoring :
function [Call, Put] = BarrierOption( S0, K, T, r, σ, b, ts, Type )

S0 .............. value of S at t = 0 (start value)
K ............... strike
T ............... expiration, e.g. 1 year
r ............... risk-free rate
σ ............... volatility
b ............... barrier
ts .............. time step, e.g. daily, 1/252
Type ............ D_In | D_Out

Folie      Riccardo Gismondi
140
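A Python/NumPy sketch of the Matlab exercise (names are my own). Note the in-out parity: on any fixed path, the knocked-in and knocked-out payoffs sum to the vanilla payoff, a useful sanity check for the implementation.

```python
import numpy as np

def barrier_option(s0, k, T, r, sigma, b, n_steps, n_paths,
                   knock="down_and_out", rng=None):
    """MC prices of down-and-out / down-and-in call and put with
    discrete monitoring of the barrier b at n_steps equal dates."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma ** 2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    crossed = s.min(axis=1) <= b                 # indicator 1{tau(b) <= T}
    alive = ~crossed if knock == "down_and_out" else crossed
    disc = np.exp(-r * T)
    st = s[:, -1]
    call = disc * (alive * np.maximum(st - k, 0.0)).mean()
    put = disc * (alive * np.maximum(k - st, 0.0)).mean()
    return call, put
```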
Geometric Brownian Motion

• Path-Dependent Payoffs
• Lookback options:
  like barrier options, lookback options depend on
  extremal values of the underlying asset price.
  Such puts and calls expiring at t_n have the payoffs

        ( max_{i=1,...,n} S(tᵢ) − S(t_n) )   and   ( S(t_n) − min_{i=1,...,n} S(tᵢ) )

• e.g. a lookback call can be viewed as
  • the profit from buying at the lowest price over t₁, ..., t_n
  • and selling at the final price S(t_n)

Folie   Riccardo Gismondi
141
Geometric Brownian Motion

• Incorporating a Term Structure
• until now, we assumed that the risk-free rate r is constant
• therefore the price of a (riskless) zero-coupon bond maturing
  at T > t can be calculated as follows:

        B(t, T) = e^{−r(T−t)}

• now we assume that on the markets we observe bond prices B(0, T)
  that are incompatible with the above pricing formula
• therefore, to use this term structure in option pricing, we have to
  introduce a deterministic, but time-varying risk-free rate r(u):

        r(u) = −(∂/∂T) log B(0, T) |_{T=u}

Folie      Riccardo Gismondi
142
Geometric Brownian Motion

• Incorporating a Term Structure
• then we get   B(0, T) = exp( −∫₀ᵀ r(u) du )

• with this time-varying risk-free rate r(u), the dynamics of
  an asset price S(t) under the risk-neutral measure can be described
  by the SDE (stochastic differential equation)

        dS(t)/S(t) = r(t) dt + σ dW(t)

• with solution

        S(t) = S(0) exp( ∫₀ᵗ r(u) du − ½σ²t + σW(t) )

• and this process can be simulated over 0 = t₀ < t₁ < ... < t_n by

        S(t_{i+1}) = S(t_i) exp( ∫_{t_i}^{t_{i+1}} r(u) du − ½σ²(t_{i+1} − t_i) + σ√(t_{i+1} − t_i) Z_{i+1} )
Folie      Riccardo Gismondi
143
Home assignment (Matlab)

•       Write a code to price an Asian Option with discrete
monitoring and a time-varying interest rate:

function [Call, Put] = AsianOption_IntVar( S0, K, T, r(t), σ, ts, MeanType )

S0 .............. value of S at t = 0 (start value)
K ............... strike
T ............... expiration, e.g. 1 year
r(t) ............ time-dependent risk-free rate (e.g. in-line function)
σ ............... volatility
ts .............. time step, e.g. daily, 1/252
MeanType ........ Arithmetic | Geometric

e.g., you can take:   r(t) = (1/100)( sin(2πt) + t + 3 ),   0 ≤ t ≤ T
Folie      Riccardo Gismondi
144
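The path-generation core of this Matlab exercise can be sketched in Python/NumPy as follows (names are my own): the step recursion from slide 143, with ∫ r(u) du approximated by the trapezoidal rule on a fine subgrid.

```python
import numpy as np

def gbm_path_time_varying_rate(s0, r_func, sigma, T, n, rng=None):
    """GBM recursion with a deterministic, time-varying rate r(u):
    S(t_{i+1}) = S(t_i) exp( int_{t_i}^{t_{i+1}} r(u) du
                             - sigma^2/2 * dt + sigma sqrt(dt) Z_{i+1} )."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, T, n + 1)
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        u = np.linspace(t[i], t[i + 1], 33)            # subgrid for the integral
        ru = r_func(u)
        r_int = (0.5 * (ru[:-1] + ru[1:]) * np.diff(u)).sum()  # trapezoidal rule
        dt = t[i + 1] - t[i]
        z = rng.standard_normal()
        s[i + 1] = s[i] * np.exp(r_int - 0.5 * sigma ** 2 * dt
                                 + sigma * np.sqrt(dt) * z)
    return t, s
```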
Geometric Brownian Motion

• Incorporating a Term Structure
• if we observe bond prices B(0, t₁), ..., B(0, t_n)
• we can calculate the term structure as follows:

        B(0, tᵢ) / B(0, t_{i+1}) = exp( ∫_{t_i}^{t_{i+1}} r(u) du )

• and we can simulate S(t) using

        S(t_{i+1}) = S(t_i) · [ B(0, tᵢ) / B(0, t_{i+1}) ] · exp( −½σ²(t_{i+1} − tᵢ) + σ√(t_{i+1} − tᵢ) Z_{i+1} )

Folie      Riccardo Gismondi
145
Geometric Brownian Motion

• Simulating Off a Forward Curve
• now we do not just assume the observation of S(0)
• but also of a series of forward prices F(0, T) for the asset
• under the risk-neutral measure, F(0, T) = E[S(T)]
• this implies

        S(t) = F(0, t) exp( −½σ²t + σW(t) )

• and we can simulate

        S(t_{i+1}) = S(t_i) · [ F(0, t_{i+1}) / F(0, tᵢ) ] · exp( −½σ²(t_{i+1} − tᵢ) + σ√(t_{i+1} − tᵢ) Z_{i+1} )

Folie      Riccardo Gismondi
146
Geometric Brownian Motion

• Deterministic Volatility Functions
• on the markets it has been widely observed that option
  prices are incompatible with a GBM model for the
  underlying asset
• applying the Black-Scholes model to option prices would mean
  using the same volatility σ for all traded options with
  • different strikes and
  • different maturities
  on the same underlying asset
• in practice, the observed volatility of these traded options varies with
  • strike and
  • maturity

Folie      Riccardo Gismondi
147
Geometric Brownian Motion

• Deterministic Volatility Functions
• Black-Scholes has to be modified to match the observed
  market prices!
• consequently, we let the volatility depend on S and t, so that
  it becomes a function σ(S, t)
• this leads to the model

        dS(t)/S(t) = r dt + σ(S(t), t) dW(t)

• assuming a deterministic volatility function, an option can still be
  hedged through a position in the underlying asset
• this would not be the case in a stochastic volatility model

Folie      Riccardo Gismondi
148
Geometric Brownian Motion

• Deterministic Volatility Functions
• to obtain σ(S,t), a numerical optimization problem has to be
solved
• in general, there is no exact simulation procedure for these models
and it is necessary to use an Euler scheme of the form

S(t_{i+1}) = S(t_i) [ 1 + r(t_{i+1} − t_i) + σ(S(t_i), t_i) √(t_{i+1} − t_i) Z_{i+1} ]

• or the Euler scheme for log S(t):

S(t_{i+1}) = S(t_i) exp( [r − (1/2)σ²(S(t_i), t_i)](t_{i+1} − t_i) + σ(S(t_i), t_i) √(t_{i+1} − t_i) Z_{i+1} )
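A minimal sketch of the log-Euler scheme above, in Python rather than the course's Matlab (the constant rate r and the example volatility function are placeholder assumptions):

```python
import numpy as np

def simulate_local_vol_path(S0, r, sigma_fn, T, n_steps, rng=None):
    """Log-Euler scheme for dS/S = r dt + sigma(S, t) dW,
    where sigma_fn(S, t) is a deterministic local-volatility function."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    S = np.empty(n_steps + 1)
    S[0] = S0
    t = 0.0
    for i in range(n_steps):
        sig = sigma_fn(S[i], t)          # evaluate local volatility at (S, t)
        Z = rng.standard_normal()
        S[i + 1] = S[i] * np.exp((r - 0.5 * sig**2) * dt
                                 + sig * np.sqrt(dt) * Z)
        t += dt
    return S

# with a constant volatility function the scheme reduces to plain GBM
path = simulate_local_vol_path(100.0, 0.05, lambda S, t: 0.2, T=1.0, n_steps=252)
```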


Folie      Riccardo Gismondi
149
Home assignment (Matlab)

•       Write a code to price an Asian Option with
arithmetic/geometric mean, discrete monitoring and
time-varying volatility:

function [Call, Put] = AsianOption_VolVar( S0, K, T, r, σ(S(t),t), ts, MeanType )

S0 ....................... value of S at t = 0 (start value)
K ........................ strike
T ........................ expiration, e.g. 1 year
r ........................ risk-free rate
σ(S(t),t) ................ volatility function (inline function)
ts ....................... timestep, e.g. daily: 1/252
MeanType ................. Arithmetic | Geometric

Folie      Riccardo Gismondi
150
Brownian Motion II

Folie   Riccardo Gismondi
151
Brownian Motion

• Multiple Dimensions
• a process W(t) = (W_1(t), ..., W_d(t))^T, 0 ≤ t ≤ T,
is called a standard Brownian motion on R^d if it has
•   W(0) = 0
• continuous sample paths
• independent increments and
• W(t) − W(s) ~ N(0, (t − s) I) for all 0 ≤ s < t ≤ T
• I is the d × d identity matrix
• it follows that each component W_i(t), i = 1, ..., d, is a standard one-
dimensional Brownian motion and that W_i and W_j are independent
for i ≠ j

Folie      Riccardo Gismondi
152
Brownian Motion

• Multiple Dimensions
• suppose: μ is a vector in R^d and Σ is a d × d matrix,
positive definite or positive semidefinite
•   X is a Brownian motion with drift μ and covariance Σ:
X ~ BM(μ, Σ)

• X(t) − X(s) ~ N( (t − s)μ, (t − s)Σ )
• X(t) = μt + B W(t) …. whereby B B^T = Σ
• this process solves the SDE (stochastic differential equation)
dX(t) = μ dt + B dW(t)
• for deterministic, but time-varying μ(t) and Σ(t):
dX(t) = μ(t) dt + B(t) dW(t) … whereby B(t) B(t)^T = Σ(t)
Folie        Riccardo Gismondi
153
Brownian Motion

• Multiple Dimensions
• this process has continuous sample paths, independent
increments and

X(t) − X(s) ~ N( ∫_s^t μ(u) du , ∫_s^t Σ(u) du )

• Cov[ X_i(s), X_j(t) ] = min(s, t) Σ_ij

Folie       Riccardo Gismondi
154
Brownian Motion

• Random Walk Construction
• let Z1 , Z 2 ,... be independent N (0, I ) random vectors in
d
• we can construct a standard d-dimensional Brownian motion at times
0  t0  t1  ...  t n by setting W (0)  0 and
W (ti 1 )  W (ti )  ti 1  ti Zi 1                    i  0,...,n  1

• to simulate X ~ BM (m , ) we first have to find a matrix B
for which BBT  
• set X (0)  0 and
X (ti 1 )  X (ti )  m (ti 1  ti )  ti 1  ti BZi           i  0,...n  1
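The random walk construction above can be sketched in Python (the course uses Matlab; NumPy and the Cholesky factorization of Σ are assumptions of this sketch):

```python
import numpy as np

def simulate_bm(mu, Sigma, times, rng=None):
    """Random-walk construction of X ~ BM(mu, Sigma) on a time grid.

    mu: drift vector (d,); Sigma: covariance matrix (d, d);
    times: increasing array starting at t_0 = 0.
    Returns an array of shape (len(times), d) with X(t_0) = 0."""
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu, dtype=float)
    B = np.linalg.cholesky(np.asarray(Sigma, dtype=float))  # B @ B.T == Sigma
    d, n = len(mu), len(times) - 1
    X = np.zeros((n + 1, d))
    for i in range(n):
        dt = times[i + 1] - times[i]
        Z = rng.standard_normal(d)
        X[i + 1] = X[i] + mu * dt + np.sqrt(dt) * B @ Z
    return X
```

With mu = 0 and Sigma = I this reduces to a standard d-dimensional Brownian motion.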

Folie      Riccardo Gismondi
155
Brownian Motion

• Random Walk Construction
• thus, simulation of BM(μ, Σ) is straightforward once
Σ has been factored
• for the case of time-dependent coefficients, we may set

X(t_{i+1}) = X(t_i) + ∫_{t_i}^{t_{i+1}} μ(s) ds + B(t_i, t_{i+1}) Z_{i+1},    i = 0, ..., n−1

with

B(t_i, t_{i+1}) B(t_i, t_{i+1})^T = ∫_{t_i}^{t_{i+1}} Σ(u) du

Folie      Riccardo Gismondi
156
Home assignment (Matlab)

•       Write a code to generate a d-dimensional Standard
Brownian Motion W (t )

•       Write a code to generate a d-dimensional Brownian Motion   X (t )

Folie    Riccardo Gismondi
157
Brownian Motion

• Brownian Bridge Construction
• we can apply the independent one-dimensional constructions
to the multi-dimensional case as well
• to include a drift vector, it suffices to add μ_i t_n to W_i(t_n) at the first
step of the construction of the i-th coordinate

• let W be a standard k-dimensional Brownian motion
• and B a d × k matrix, with k ≤ d,
• satisfying B B^T = Σ
• then X(t) = μt + B W(t) gives X ~ BM(μ, Σ),
and X(t_1), ..., X(t_n) are recovered through a linear transformation

Folie      Riccardo Gismondi
158
Brownian Motion

• Principal Components Construction
• applying the principal components construction
to the multidimensional case
• the covariance matrix can be represented as the Kronecker product

           | C_11 Σ   C_12 Σ   ...   C_1n Σ |
C ⊗ Σ  =   | C_21 Σ   C_22 Σ   ...   C_2n Σ |
           |  ...      ...     ...    ...   |
           | C_n1 Σ   C_n2 Σ   ...   C_nn Σ |

the eigenvectors of this matrix can be written as v_i ⊗ w_j, where
v_i are the eigenvectors of C
w_j are the eigenvectors of Σ
and the eigenvalues as λ_i γ_j, where
λ_i are the eigenvalues of C
γ_j are the eigenvalues of Σ
Folie      Riccardo Gismondi
159
Brownian Motion

• Principal Components Construction
• ranking the products of the eigenvalues:

(λ_i γ_j)_(1) ≥ (λ_i γ_j)_(2) ≥ ... ≥ (λ_i γ_j)_(nd)

• then for any k = 1, ..., n

[ Σ_{r=1}^{k} (λ_i γ_j)_(r) ] / [ Σ_{r=1}^{nd} (λ_i γ_j)_(r) ]   ≤   [ Σ_{i=1}^{k} λ_i ] / [ Σ_{i=1}^{n} λ_i ]

• the variability explained by the first k factors is always smaller for a d-
dimensional Brownian motion than for a scalar Brownian motion over
the same time points
• since the d-dimensional process has greater total variability

Folie      Riccardo Gismondi
160
Geometric Brownian Motion,
Pricing of (exotic) options: n-dim

Folie   Riccardo Gismondi
161
Geometric Brownian Motion

• Multiple Dimensions
• a multidimensional geometric Brownian motion can be
written as the SSDE (system of stochastic differential
equations)

dS_i(t)/S_i(t) = μ_i dt + σ_i dX_i(t),    i = 1, ..., d

• where each X_i is a standard one-dimensional Brownian motion
• and X_i and X_j have the correlation ρ_ij

Folie      Riccardo Gismondi
162
Geometric Brownian Motion

• Multiple Dimensions
• by setting Σ_ij = σ_i σ_j ρ_ij (variance-covariance matrix)
• then (σ_1 X_1, ..., σ_d X_d) ~ BM(0, Σ)
• and S = (S_1, ..., S_d) ~ GBM(μ, Σ) with μ = (μ_1, ..., μ_d)

• the actual drift vector is given by (μ_1 S_1(t), ..., μ_d S_d(t))
• and the covariances are given by

Cov[ S_i(t), S_j(t) ] = S_i(0) S_j(0) e^{(μ_i + μ_j) t} ( e^{ρ_ij σ_i σ_j t} − 1 )

• with

S_i(t) = S_i(0) e^{ (μ_i − (1/2)σ_i²) t + σ_i X_i(t) },    i = 1, ..., d

Folie      Riccardo Gismondi
163
Geometric Brownian Motion

• Multiple Dimensions
• with a Cholesky factorization of Σ we get a matrix A
• such that A A^T = Σ

• and we can formulate

dS_i(t)/S_i(t) = μ_i dt + Σ_{j=1}^{d} A_ij dW_j(t)

• this leads to an algorithm for simulating GBM(μ, Σ) at the times
0 = t_0 < t_1 < ... < t_n:

S_i(t_{k+1}) = S_i(t_k) exp( (μ_i − (1/2)σ_i²)(t_{k+1} − t_k) + √(t_{k+1} − t_k) Σ_{j=1}^{d} A_ij Z_{k+1,j} )
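The algorithm above can be sketched in Python as follows (the course uses Matlab; parameter names are illustrative):

```python
import numpy as np

def simulate_mv_gbm(S0, mu, sigma, rho, times, rng=None):
    """Simulate a d-dimensional GBM on a time grid via Cholesky factorization.

    S0, mu, sigma: length-d arrays; rho: d x d correlation matrix.
    Returns an array of shape (len(times), d)."""
    rng = np.random.default_rng() if rng is None else rng
    S0, mu, sigma = map(np.asarray, (S0, mu, sigma))
    Sigma = np.outer(sigma, sigma) * np.asarray(rho)   # covariance matrix
    A = np.linalg.cholesky(Sigma)                      # A @ A.T == Sigma
    d, n = len(S0), len(times) - 1
    S = np.empty((n + 1, d))
    S[0] = S0
    for k in range(n):
        dt = times[k + 1] - times[k]
        Z = rng.standard_normal(d)
        S[k + 1] = S[k] * np.exp((mu - 0.5 * sigma**2) * dt
                                 + np.sqrt(dt) * A @ Z)
    return S
```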

Folie      Riccardo Gismondi
164
Home assignment (Matlab)

•       Write a code to generate a d-dimensional
Geometric Brownian Motion S (t )

Folie    Riccardo Gismondi
165
Geometric Brownian Motion

• Multiple Dimensions
• options depending on multiple assets
• e.g.
a call option on the spread between two assets S1 and S 2
has the payoff
([ S1 (T )  S 2 (T )]  K ) 

an option on a portfolio of underlying assets and has the payoff
([ c1S1 (T )  c2 S 2 (T )  ...  cd S d (T )]  K ) 
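Payoffs like the portfolio option above can be priced by Monte Carlo; a Python sketch (function name and parameters are illustrative; terminal values are sampled directly from the risk-neutral multidimensional GBM):

```python
import numpy as np

def price_basket_call_mc(S0, c, K, T, r, sigma, rho, n_paths=100_000, rng=None):
    """Monte Carlo price of a European basket call with payoff (c . S(T) - K)+.

    Samples S(T) from the terminal distribution of a d-dimensional GBM
    under the risk-neutral measure (drift r)."""
    rng = np.random.default_rng() if rng is None else rng
    S0, c, sigma = map(np.asarray, (S0, c, sigma))
    A = np.linalg.cholesky(np.outer(sigma, sigma) * np.asarray(rho))
    Z = rng.standard_normal((n_paths, len(S0)))
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + np.sqrt(T) * Z @ A.T)
    payoff = np.maximum(ST @ c - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```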

Folie      Riccardo Gismondi
166
Home assignment (Matlab)

•       Write a code to price a Basket Option with discrete
monitoring:

function [Call, Put] = BasketOption( (S_1, S_2, ..., S_d), (c_1, c_2, ..., c_d), K, T, r, ts )

(S_1, S_2, ..., S_d) ..... time series matrix of underlyings
(c_1, c_2, ..., c_d) ..... weights vector
K ........................ strike
T ........................ expiration, e.g. 1 year
r ........................ risk-free rate vector
ts ....................... timestep, e.g. daily: 1/252

Folie      Riccardo Gismondi
167
Geometric Brownian Motion

• Multiple Dimensions
• Outperformance Options
options on the maximum or minimum of multiple
assets; e.g. the payoff can be

( max{ c_1 S_1(T), c_2 S_2(T), ..., c_d S_d(T) } − K )⁺

• Barrier Options
e.g. a two-asset barrier – here a down-and-in put – with payoff

1{ min_{i=1,...,n} S_2(t_i) ≤ b } ( K − S_1(T) )⁺

this is a down-and-in put on S_1 that knocks in when S_2 drops
below the barrier b

Folie   Riccardo Gismondi
168
Home assignment (Matlab)

•       Write a code to price an Outperformance Option with
discrete monitoring:

function [Call, Put] = OutPerfOption( (S_1, S_2, ..., S_d), (c_1, c_2, ..., c_d), K, T, r, ts )

(S_1, S_2, ..., S_d) ..... time series matrix of underlyings
(c_1, c_2, ..., c_d) ..... weights vector
K ........................ strike
T ........................ expiration, e.g. 1 year
r ........................ risk-free rate vector
ts ....................... timestep, e.g. daily: 1/252

Folie      Riccardo Gismondi
169
Geometric Brownian Motion

• Multiple Dimensions
• Quantos
options that are sensitive to both a stock price and an
exchange rate (e.g. the option and the stock are
denominated in different currencies)

with S_1 as the stock price and S_2 as the exchange rate
(expressed as the quantity of domestic currency required per unit
of the foreign currency)

the payoff of the option in the domestic currency is given by

S_2(T) ( S_1(T) − K )⁺

Folie   Riccardo Gismondi
170
Geometric Brownian Motion

• Multiple Dimensions
• Quantos (cont.)
alternatively, the payoff

( S_1(T) − K / S_2(T) )⁺

corresponds to a quanto whereby the strike is fixed in the
domestic currency and the payoff of the option is made
in the foreign currency

Folie   Riccardo Gismondi
171
Home assignment (Matlab)

•       Write a code to price a Quanto Option with discrete
monitoring:

function [Call, Put] = QuantosOption( S, Currency_d/Currency_f, K, T, r, ts )

S ........................ time series of the underlying
Currency_d/Currency_f .... time series of the exchange rate
K ........................ strike
T ........................ expiration, e.g. 1 year
r ........................ risk-free rate vector
ts ....................... timestep, e.g. daily: 1/252

Folie     Riccardo Gismondi
172
Square root diffusion process:
CIR Model

Folie   Riccardo Gismondi
173
Square-Root Diffusions

• Basic properties
• Consider the class of processes that includes the square-root diffusion

dr(t) = a (b − r(t)) dt + σ √r(t) dW(t),

with W a standard one-dimensional Brownian motion.
• We consider the case in which
a > 0, b > 0
• If r(0) > 0, then r(t) ≥ 0 for all t
• If 2ab ≥ σ², then r(t) remains strictly positive for all t
• All of the coefficients could in principle be time-dependent,
i.e., with b = b(t):

dr(t) = a (b(t) − r(t)) dt + σ √r(t) dW(t)
Folie      Riccardo Gismondi
174
Square-Root Diffusions

• Applications
• CIR Model
• Heston Model

Folie     Riccardo Gismondi
175
Square-Root Diffusions

• Applications
• CIR Model
• The square-root diffusion process

dr(t) = a (b − r(t)) dt + σ √r(t) dW(t)

can be used as a model of the short rate.
• The model was proposed by Cox, Ingersoll and Ross
(CIR, 1985) to illustrate the workings of a general
equilibrium model.

Folie     Riccardo Gismondi
176
Square-Root Diffusions

• Applications
• Heston Model
• Heston proposed a stochastic volatility model in which
the price of an asset S(t) is governed by

dS(t)/S(t) = μ dt + √V(t) dW_1(t)

the squared volatility V(t) follows a square-root diffusion

dV(t) = a (b − V(t)) dt + σ √V(t) dW_2(t)

and (W_1, W_2) is a two-dimensional Brownian motion with
corr(dW_1, dW_2) = ρ dt

Folie     Riccardo Gismondi
177
Square-Root Diffusions

• Simulating r(t) with the Euler Method
• Discretization at times t_1, t_2, ..., t_n by setting:

r(t_{i+1}) = r(t_i) + a (b − r(t_i)) [t_{i+1} − t_i] + σ √(r(t_i)⁺) √(t_{i+1} − t_i) Z_{i+1}

with Z_1, Z_2, ..., Z_n independent N(0,1) random variables
• Notice:
• we have taken the positive part of r(t_i) inside the square root.
Some modification of this form is necessary because the values
of r(t_i) produced by the Euler discretization may become negative.
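The discretization above, as a Python sketch (Matlab is the course language; names are illustrative):

```python
import numpy as np

def simulate_cir_euler(r0, a, b, sigma, T, n_steps, rng=None):
    """Euler scheme for dr = a(b - r)dt + sigma*sqrt(r)dW,
    taking the positive part of r inside the square root."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        Z = rng.standard_normal()
        r[i + 1] = (r[i] + a * (b - r[i]) * dt
                    + sigma * np.sqrt(max(r[i], 0.0)) * np.sqrt(dt) * Z)
    return r
```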

Folie      Riccardo Gismondi
178
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• A noncentral chi-square random variable χ′²_ν(λ) with ν
degrees of freedom and noncentrality parameter λ
has distribution

P( χ′²_ν(λ) ≤ y ) = F_{χ′²_ν(λ)}(y)
  = e^{−λ/2} Σ_{j=0}^{∞} [ (λ/2)^j / j! ] · [ 1 / ( 2^{(ν/2)+j} Γ((ν/2) + j) ) ] ∫_0^y z^{(ν/2)+j−1} e^{−z/2} dz,

for y > 0
• The transition law of r(t) can be expressed as

r(t) = [ σ²(1 − e^{−a(t−u)}) / (4a) ] · χ′²_d( [ 4a e^{−a(t−u)} / ( σ²(1 − e^{−a(t−u)}) ) ] r(u) ),    t > u

where d = 4ba/σ²

Folie      Riccardo Gismondi
179
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• This says that, given r(u), r(t) is distributed as
σ²(1 − e^{−a(t−u)})/(4a) times a noncentral chi-square
random variable with d degrees of freedom and noncentrality
parameter

λ = [ 4a e^{−a(t−u)} / ( σ²(1 − e^{−a(t−u)}) ) ] r(u);

• Equivalently

P( r(t) ≤ y | r(u) ) = F_{χ′²_d(λ)}( 4a y / ( σ²(1 − e^{−a(t−u)}) ) )
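Because NumPy exposes a noncentral chi-square sampler, the transition-density method can be sketched directly (a > 0 is assumed; names are illustrative):

```python
import numpy as np

def simulate_cir_exact(r0, a, b, sigma, times, rng=None):
    """Sample r(t) on a time grid from the exact CIR transition density
    (a scaled noncentral chi-square distribution)."""
    rng = np.random.default_rng() if rng is None else rng
    d = 4 * b * a / sigma**2                             # degrees of freedom
    r = np.empty(len(times))
    r[0] = r0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        c = sigma**2 * (1 - np.exp(-a * dt)) / (4 * a)   # scale factor
        lam = r[i] * np.exp(-a * dt) / c                 # noncentrality
        r[i + 1] = c * rng.noncentral_chisquare(d, lam)
    return r
```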

Folie      Riccardo Gismondi
180
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• Chi-Square
• If ν is a positive integer and Z_1, Z_2, ..., Z_ν are independent
N(0,1) random variables, then the distribution of

Z_1² + Z_2² + ... + Z_ν²

is called the chi-square distribution with ν degrees of freedom.
• The chi-square distribution is given by

P( χ²_ν ≤ y ) = [ 1 / ( 2^{ν/2} Γ(ν/2) ) ] ∫_0^y e^{−z/2} z^{(ν/2)−1} dz,

where Γ(·) denotes the gamma function and Γ(n) = (n−1)! for
integer n

Folie      Riccardo Gismondi
181
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• Noncentral Chi-Square
• For integer ν and constants a_1, a_2, ..., a_ν, the distribution of

Σ_{i=1}^{ν} ( Z_i + a_i )²

is noncentral chi-square with ν degrees of freedom and
noncentrality parameter λ = a_1² + a_2² + ... + a_ν².

• If ν > 1, then

χ′²_ν(λ) = χ′²_1(λ) + χ²_{ν−1}

• Consequently

χ′²_ν(λ) = ( Z + √λ )² + χ²_{ν−1}

Folie      Riccardo Gismondi
182
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• If N is a Poisson random variable with mean λ/2, then

P( N = j ) = e^{−λ/2} (λ/2)^j / j!,    j = 0, 1, 2, ...

• Conditional on N = j, the random variable χ²_{ν+2N} has the ordinary chi-
square distribution with ν + 2j degrees of freedom:

P( χ²_{ν+2N} ≤ y | N = j ) = [ 1 / ( 2^{(ν/2)+j} Γ((ν/2) + j) ) ] ∫_0^y e^{−z/2} z^{(ν/2)+j−1} dz

• The unconditional distribution is thus given by

Σ_{j=0}^{∞} P(N = j) P( χ²_{ν+2N} ≤ y | N = j ) = Σ_{j=0}^{∞} e^{−λ/2} [ (λ/2)^j / j! ] P( χ²_{ν+2j} ≤ y )

Folie      Riccardo Gismondi
183
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• Simulation of the square-root diffusion

dr(t) = a (b − r(t)) dt + σ √r(t) dW(t)

on the time grid 0 = t_0 < t_1 < ... < t_n with d = 4ba/σ² > 1, by sampling
from the transition density

Folie      Riccardo Gismondi
184
Square-Root Diffusions

• Simulating r(t) with the Transition Density Method
• Comparison of the exact distribution (solid) and the one-step Euler
approximation (dashed) for a square-root diffusion with
a = 0.2, σ = 0.1, b = 5%, r(0) = 4%, at t = 0.25 (left) and t = 1 (right)
[figure omitted]

Folie      Riccardo Gismondi
186
Square-Root Diffusions

• Sampling Gamma and Poisson
• Gamma Distribution
• The gamma distribution with shape parameter a and scale
parameter β has the density

f(y) = f_{a,β}(y) = [ 1 / ( Γ(a) β^a ) ] y^{a−1} e^{−y/β},    y ≥ 0

with mean aβ and variance aβ²
• The chi-square distribution is a special case of the gamma
distribution, with scale parameter β = 2

Folie     Riccardo Gismondi
187
Square-Root Diffusions

• Sampling Gamma and Poisson
• Gamma Distribution
• Algorithm GKM1 for sampling from the gamma distribution with
parameters (a, 1), a > 1  [algorithm omitted]

Folie     Riccardo Gismondi
188
Square-Root Diffusions

• Sampling Gamma and Poisson
• Gamma Distribution
• Algorithm (Ahrens-Dieter) for sampling from the gamma
distribution with parameters (a, 1), a ≤ 1
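The algorithm itself is not reproduced on the slide; as a sketch, the standard Ahrens-Dieter acceptance-rejection method for shape a ≤ 1 (scale 1) can be written as:

```python
import math
import random

def gamma_ahrens_dieter(a, u=random.random):
    """Ahrens-Dieter rejection sampling from Gamma(a, 1) for 0 < a <= 1."""
    b = (a + math.e) / math.e
    while True:
        y = b * u()
        if y <= 1.0:                     # proposal from the y^(a-1) part
            x = y ** (1.0 / a)
            if u() <= math.exp(-x):
                return x
        else:                            # proposal from the e^(-y) tail
            x = -math.log((b - y) / a)
            if u() <= x ** (a - 1.0):
                return x
```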

Folie     Riccardo Gismondi
189
Square-Root Diffusions

• Sampling Gamma and Poisson
• Poisson Distribution
• The Poisson distribution with mean θ > 0 is given by

P( N = k ) = e^{−θ} θ^k / k!,    k = 0, 1, 2, ...

• We write N ~ Poisson(θ)
• Poisson(θ) is the distribution of the number of events in [0,1] when
the times between consecutive events are independent and
exponentially distributed with mean 1/θ.

Folie      Riccardo Gismondi
190
Square-Root Diffusions

• Sampling Gamma and Poisson
• Poisson Distribution
• Inverse transform method for sampling from the Poisson
distribution with mean θ > 0
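A sketch of the inverse transform method for Poisson sampling (pure Python; names are illustrative):

```python
import math
import random

def poisson_inverse_transform(theta, u=random.random):
    """Inverse transform sampling for Poisson(theta): walk up the CDF
    until it exceeds a uniform draw, updating p(k) recursively."""
    p = math.exp(-theta)      # P(N = 0)
    F = p                     # running CDF value
    k = 0
    U = u()
    while U > F:
        k += 1
        p *= theta / k        # P(N = k) = P(N = k-1) * theta / k
        F += p
    return k
```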

Folie      Riccardo Gismondi
191
Heston Model and stochastic Volatility

Folie   Riccardo Gismondi
192
The Heston Model

• Is a stochastic volatility model in which the price
of an asset S(t) is governed by

dS(t)/S(t) = μ dt + √V(t) dW_1(t)

the squared volatility V(t) follows a square-root diffusion

dV(t) = a (b − V(t)) dt + σ √V(t) dW_2(t)

and (W_1, W_2) is a two-dimensional Brownian motion with
corr(dW_1, dW_2) = ρ dt

Folie   Riccardo Gismondi
193
The Heston Model

• Equation description
• The first equation gives the dynamics of the stock price
• The parameters represent
• S(t) the stock price at time t
•   μ the risk-neutral drift
• V(t) the squared volatility (variance)

Folie      Riccardo Gismondi
194
The Heston Model

• Equation description
• The second equation gives the evolution of the
variance, which follows the square-root process
• The parameters represent
• b the long vol, or long-run average variance; as t tends to
infinity, the expected value of V(t) tends to b
• a the mean-reversion rate, at which V(t) reverts to b
• σ the vol of vol, or volatility of the volatility; this determines the
variance of V(t)

Folie      Riccardo Gismondi
195
The Heston Model

• Simulating S(t)
• 1. Simulate W(t) = (W_1(t), W_2(t))
(e.g. with the random walk construction) by setting

W(t_{i+1}) = W(t_i) + √(t_{i+1} − t_i) B Z_{i+1}

with B B^T = Σ,

Σ = | 1  ρ |
    | ρ  1 |

and Z_1, Z_2, ... independent standard normal random vectors

Folie      Riccardo Gismondi
196
The Heston Model

• Simulating S(t)
• 2. Simulate V(t) (e.g. with Euler discretization) at times
t_1, t_2, ..., t_n by setting

V(t_{i+1}) = V(t_i) + a (b − V(t_i)) [t_{i+1} − t_i] + σ √(V(t_i)⁺) ( W_2(t_{i+1}) − W_2(t_i) )

• 3. Simulate S(t) at times t_1, t_2, ..., t_n by setting

S(t_{i+1}) = S(t_i) + μ S(t_i) [t_{i+1} − t_i] + √(V(t_i)⁺) S(t_i) ( W_1(t_{i+1}) − W_1(t_i) )
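Steps 1-3 above can be combined into a single Python sketch (Matlab is the course language; the correlated increments are generated directly from two independent normals):

```python
import numpy as np

def simulate_heston_euler(S0, V0, mu, a, b, sigma, rho, T, n_steps, rng=None):
    """Euler scheme for the Heston model with correlated Brownian drivers,
    taking the positive part of V inside the square roots.
    Returns (S, V) arrays of length n_steps + 1."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    S = np.empty(n_steps + 1)
    V = np.empty(n_steps + 1)
    S[0], V[0] = S0, V0
    for i in range(n_steps):
        Z1, Z2 = rng.standard_normal(2)
        dW1 = np.sqrt(dt) * Z1
        dW2 = np.sqrt(dt) * (rho * Z1 + np.sqrt(1 - rho**2) * Z2)  # corr = rho
        Vp = max(V[i], 0.0)
        V[i + 1] = V[i] + a * (b - V[i]) * dt + sigma * np.sqrt(Vp) * dW2
        S[i + 1] = S[i] + mu * S[i] * dt + np.sqrt(Vp) * S[i] * dW1
    return S, V
```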

Folie       Riccardo Gismondi
197
The Heston Model

• Option pricing
• Call Option
• The time-t price of a European call option with
time to maturity (T − t), denoted Call(S, v, t), is given by

Call(S, v, t) = S(t) P_1 − K P(t,T) P_2

• Put Option
• The price of a European put option at time t is obtained through
put-call parity and is given by

Put(S, v, t) = Call(S, v, t) + K P(t,T) − S(t)

or

Put(S, v, t) = (P_1 − 1) S(t) + (1 − P_2) K P(t,T)

Folie      Riccardo Gismondi
198
The Heston Model

• Option pricing
• P_1, P_2 are given by

P_j = 1/2 + (1/π) ∫_0^∞ Re[ e^{−iu ln K} φ_j(S_0, V_0, t, T, u) / (iu) ] du,    j = 1, 2

• P(t,T) is given by

P(t,T) = e^{−(r−q)(T−t)}

Folie      Riccardo Gismondi
199
The Heston Model

• Option pricing
• with

φ_j(S_0, V_0, τ; u) = exp{ C_j(τ; u) + D_j(τ; u) V_0 + iu ln S_0 },    τ = T − t

dC_j(τ; u)/dτ − a b D_j(τ; u) − (r − q) u i = 0,

dD_j(τ; u)/dτ − σ² D_j²(τ; u)/2 + (b_j − ρσ u i) D_j(τ; u) − u_j u i + u²/2 = 0,

C_j(0; u) = D_j(0; u) = 0.

Folie      Riccardo Gismondi
200
The Heston Model

• Option pricing
• and solutions

C(τ; u) = (r − q) u i τ + (a b / σ²) [ (b_j − ρσ u i + d) τ − 2 ln( (1 − g e^{dτ}) / (1 − g) ) ]

D(τ; u) = [ (b_j − ρσ u i + d) / σ² ] · [ (1 − e^{dτ}) / (1 − g e^{dτ}) ]

where

g = (b_j − ρσ u i + d) / (b_j − ρσ u i − d),    d = √( (ρσ u i − b_j)² − σ²(2 u_j u i − u²) )

u_1 = 0.5,   u_2 = −0.5,   b_1 = a + λ − ρσ,   b_2 = a + λ
(λ denotes the market price of volatility risk)

Folie      Riccardo Gismondi
201
The Heston Model

• Option pricing
• The parameters represent the following
• S(t) the spot price of the asset
• K the strike price of the option
• r the interest rate
• q the dividend yield
• P(t,T) a discount factor from time t to time T
• P_1 and P_2 the probabilities that the call option expires in-
the-money

Folie      Riccardo Gismondi
202
The Heston Model

• Exact Simulation
• The stock price at time t, given the values of S(u) and V(u) for
u < t, can be written as:

S(t) = S(u) exp( μ(t − u) − (1/2) ∫_u^t V(s) ds + ∫_u^t √V(s) dW_2(s) )

• where the variance at time t is given by the equation:

V(t) = V(u) + a b (t − u) − a ∫_u^t V(s) ds + σ ∫_u^t √V(s) dW_1(s)

Folie      Riccardo Gismondi
203
The Heston Model

• Exact Simulation (Algorithm)
• Input: S(u), V(u);  Output: S(t), with u < t
• 1. Generate a sample from the distribution of V(t) given V(u)
• 2. Generate a sample from the distribution of ∫_u^t V(s) ds given
V(t) and V(u)
• 3. Recover ∫_u^t √V(s) dW_1(s) given V(t), V(u) and ∫_u^t V(s) ds
• 4. Generate a sample from the distribution of S(t) given
∫_u^t √V(s) dW_1(s) and ∫_u^t V(s) ds

Folie      Riccardo Gismondi
204
Pricing of Exotic Options:
Structured Products (Equity)

Folie   Riccardo Gismondi
205

```