# Variance Reduction Techniques

Chapter 6

## Introduction

Variance reduction is a procedure used to increase the precision of an estimator obtained from several simulation runs. To obtain greater precision and smaller confidence intervals for the output random variables, variance reduction techniques can be used. The important methods are common random numbers, control variables, antithetic variables, conditioning, importance sampling, and stratified sampling.

The common random numbers method is typically used when we need to compare two or more systems. The other methods are useful when we need to consider the performance measure of a single system.

## Common random numbers

We will see how the common random numbers method is used to reduce the variance of an estimator. Suppose X and Y are two random variables; then

$$\operatorname{Var}(X - Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) - 2\operatorname{Cov}(X, Y)$$

If X and Y are positively correlated, $\operatorname{Cov}(X, Y) > 0$, so the variance of $X - Y$ is smaller than it would be if X and Y were independent.

Positively correlated variables can be generated using common random numbers. If we use one set of random numbers to generate the X values and a different set to generate the Y values, the covariance of X and Y is zero. But if we use a common set of random numbers to generate both X and Y, the covariance between X and Y is positive.

Example:

Suppose there are two tellers and we are interested in comparing the average time the two tellers take to finish the service per customer. Let $\mu_1$ be the average time taken by teller 1 to finish the service per customer and $\mu_2$ the average time taken by teller 2.

We need to estimate $\theta = \mu_1 - \mu_2$. If we estimate $\theta$ using n customers, then $Z_j = X_{1j} - X_{2j}$ for $j = 1, 2, \ldots, n$, and all observations are independent. Now consider the variance of the estimator $Z_j$:

$$\operatorname{Var}(Z_j) = \operatorname{Var}(X_{1j} - X_{2j}) = \operatorname{Var}(X_{1j}) + \operatorname{Var}(X_{2j}) - 2\operatorname{Cov}(X_{1j}, X_{2j})$$

To reduce the variance of $Z_j$, we can use the same set of random numbers $\{U_1, \ldots, U_n\}$ to generate both $X_{1j}$ and $X_{2j}$.
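As a concrete sketch of the teller comparison, the two systems can be simulated once with independent uniforms and once with a common stream. The exponential service-time distributions and the rates below are hypothetical choices for illustration, not from the text:

```python
import math
import random
import statistics

def service_time(u, rate):
    # Inverse-transform sample of an Exponential(rate) service time from uniform u.
    return -math.log(1.0 - u) / rate

random.seed(42)
n = 10_000

# Independent streams: a separate uniform for each teller.
indep = [service_time(random.random(), 1.0) - service_time(random.random(), 1.2)
         for _ in range(n)]

# Common random numbers: the same uniform drives both tellers,
# making X_1j and X_2j positively correlated.
crn = []
for _ in range(n):
    u = random.random()
    crn.append(service_time(u, 1.0) - service_time(u, 1.2))

# Both estimate the same difference in means, but the CRN variance
# is far smaller than the independent-streams variance.
print(statistics.mean(crn))
print(statistics.variance(indep), statistics.variance(crn))
```

Because both tellers see the same stream of randomness, each $Z_j$ isolates the systematic difference between the systems rather than the noise of two unrelated draws.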

## Antithetic variable method

The method of antithetic variables makes use of the fact that if U is uniformly distributed on (0,1), then so is 1 − U. Furthermore, U and 1 − U are negatively correlated. Suppose $X_1$ and $X_2$ are the outcomes of two successive simulation runs of an identical system; then

$$\operatorname{Var}\!\left(\frac{X_1 + X_2}{2}\right) = \frac{1}{4}\operatorname{Var}(X_1) + \frac{1}{4}\operatorname{Var}(X_2) + \frac{1}{2}\operatorname{Cov}(X_1, X_2)$$

To reduce the variance of $\dfrac{X_1 + X_2}{2}$, $\operatorname{Cov}(X_1, X_2)$ must be negative. The antithetic variable method can be used to obtain a negative correlation between $X_1$ and $X_2$. From the fact that U and 1 − U are negatively correlated, we may expect that if we use the random numbers $U_1, \ldots, U_n$ to compute the outcome of the first simulation run ($X_1$) and then $1 - U_1, \ldots, 1 - U_n$ to compute the outcome of the second simulation run ($X_2$), then $X_1$ and $X_2$ are negatively correlated. The flow of the antithetic variable method is as follows.

1. Generate a set of random numbers $U_1, \ldots, U_n$.
2. Simulate $X_1$ using $U_1, \ldots, U_n$, and simulate $X_2$ using $1 - U_1, \ldots, 1 - U_n$.
3. Compute $X = \dfrac{X_1 + X_2}{2}$.
Example:
Suppose we are interested in using simulation to estimate $\theta = \int_0^1 e^x \, dx$.

$f(u) = e^u$ is clearly a monotone function, so the antithetic variable approach can be used to reduce the variance of the estimate. Here $\theta = E[e^U]$; that is, $\theta$ can be estimated by $\bar{X} = \dfrac{\sum_{i=1}^{n} X_i}{n}$, where $X_i = e^{U_i}$. To make this easy to understand, we consider two simulation outputs rather than n.

If independent random numbers are used, then

$$\operatorname{Var}\!\left(\frac{e^{U_1} + e^{U_2}}{2}\right) = \frac{\operatorname{Var}(e^U)}{2} \approx 0.1210$$

Note that

$$\operatorname{Var}(e^U) = E[e^{2U}] - (E[e^U])^2 = \frac{e^2 - 1}{2} - (e - 1)^2 \approx 0.2420$$

But if we use the antithetic variables U and 1 − U,

$$\operatorname{Var}\!\left(\frac{e^U + e^{1-U}}{2}\right) = \frac{\operatorname{Var}(e^U)}{2} + \frac{\operatorname{Cov}(e^U, e^{1-U})}{2} \approx 0.0039$$

Note that, since $e^U e^{1-U} = e$,

$$\operatorname{Cov}(e^U, e^{1-U}) = E[e^U e^{1-U}] - E[e^U]E[e^{1-U}] = e - (e - 1)^2 \approx -0.2342$$

This shows that a variance reduction of 96.7% [(0.1210 − 0.0039)/0.1210 × 100] is obtained using the antithetic variable method.
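These figures can be checked empirically. The sketch below (a minimal Monte Carlo experiment, with an arbitrary seed) compares pairs of independent draws against antithetic pairs for estimating $\int_0^1 e^x \, dx = e - 1$:

```python
import math
import random
import statistics

random.seed(0)
n = 100_000

plain = []       # averages of two independent draws e^U1, e^U2
antithetic = []  # averages of the antithetic pair e^U1, e^(1-U1)
for _ in range(n):
    u1, u2 = random.random(), random.random()
    plain.append((math.exp(u1) + math.exp(u2)) / 2)
    antithetic.append((math.exp(u1) + math.exp(1 - u1)) / 2)

print(statistics.mean(antithetic))      # both estimators are unbiased for e - 1
print(statistics.variance(plain))       # near the theoretical 0.1210
print(statistics.variance(antithetic))  # near the theoretical 0.0039
```

Both sample variances land close to the values derived above, confirming the roughly 97% reduction.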

## Control variable method

Suppose we want to estimate $\theta = E(X)$ using simulation, where X is the output of a simulation run. Suppose there is some other output variable Y whose expected value is known: $E(Y) = \mu_y$.

Then another unbiased estimator of $\theta$ can be defined, for any constant c, as $X + c(Y - \mu_y)$, since

$$E[X + c(Y - \mu_y)] = \theta + c(\mu_y - \mu_y) = \theta$$

Now consider

$$\operatorname{Var}(X + c(Y - \mu_y)) = \operatorname{Var}(X + cY) = \operatorname{Var}(X) + c^2\operatorname{Var}(Y) + 2c\operatorname{Cov}(X, Y)$$

It can be shown that the minimum of $\operatorname{Var}(X + c(Y - \mu_y))$ occurs when

$$c^* = -\frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(Y)}$$

Thus the variance of the estimator $X + c(Y - \mu_y)$ at $c^*$ is

$$\operatorname{Var}(X + c^*(Y - \mu_y)) = \operatorname{Var}(X) - \frac{[\operatorname{Cov}(X, Y)]^2}{\operatorname{Var}(Y)}$$

Y is called a control variable for the estimator X.

We will look at how the variance of the unbiased estimator $X + c(Y - \mu_y)$ of $\theta$ can be controlled using the control variable Y. Suppose X and Y are positively correlated; then $c^*$ is negative, and X is large when Y is large and vice versa. Thus, if $Y > \mu_y$, then probably $X > \theta$. In this case the error can be corrected by decreasing the value of the estimator X, and this is done since $c^*$ is negative. When X and Y are negatively correlated, a similar argument can be presented. Further, the value of $c^*$ can be computed from sample information. Since

$$\widehat{\operatorname{Cov}}(X, Y) = \sum_{i=1}^{n} \frac{(X_i - \bar{X})(Y_i - \bar{Y})}{n - 1} \quad \text{and} \quad \widehat{\operatorname{Var}}(Y) = \sum_{i=1}^{n} \frac{(Y_i - \bar{Y})^2}{n - 1},$$

we have

$$\hat{c}^* = -\frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}$$

Based on n runs, the variance of the controlled estimate is

$$\operatorname{Var}(\bar{X} + c^*(\bar{Y} - \mu_y)) = \frac{1}{n}\left(\operatorname{Var}(X) - \frac{(\operatorname{Cov}(X, Y))^2}{\operatorname{Var}(Y)}\right)$$
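The sample formula for $\hat{c}^*$ translates directly into code. A minimal sketch (the function name is my own) that computes the controlled estimate from paired outputs $(X_i, Y_i)$ and the known mean $\mu_y$:

```python
import math
import random
import statistics

def controlled_estimate(xs, ys, mu_y):
    """Control-variable estimate of E[X], using Y with known mean mu_y."""
    x_bar = statistics.mean(xs)
    y_bar = statistics.mean(ys)
    # Sample c* = -sum((Xi - Xbar)(Yi - Ybar)) / sum((Yi - Ybar)^2)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((y - y_bar) ** 2 for y in ys)
    c_star = -num / den
    return x_bar + c_star * (y_bar - mu_y)

# Hypothetical usage: X = e^U with control variable Y = U, mu_y = 1/2.
random.seed(3)
us = [random.random() for _ in range(10_000)]
xs = [math.exp(u) for u in us]
print(controlled_estimate(xs, us, 0.5))  # close to e - 1
```

Note that $\hat{c}^*$ estimated from the same runs introduces a small bias in practice, which vanishes as n grows.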

Example:
Suppose we are interested in simulating $\theta = \int_0^1 e^x \, dx = E(e^U)$. It can be estimated with variance reduction using the control variable U. Since $U \sim \operatorname{Uni}(0,1)$, we have $E(U) = \dfrac{1}{2}$ and $\operatorname{Var}(U) = \dfrac{1}{12}$.

Now an unbiased estimator of $\theta$, for any constant c, is $e^U + c(U - E(U))$. Note that $E[e^U + c(U - E(U))] = E[e^U]$, and the variance of the estimator using the control variable U is

$$\operatorname{Var}\!\left(e^U + c^*\!\left(U - \tfrac{1}{2}\right)\right) = \operatorname{Var}(e^U) - \frac{[\operatorname{Cov}(e^U, U)]^2}{\operatorname{Var}(U)}$$

But

$$\operatorname{Var}(e^U) = E[e^{2U}] - (E[e^U])^2 = \int_0^1 e^{2x} \, dx - (e - 1)^2 = \frac{e^2 - 1}{2} - (e - 1)^2 \approx 0.2420$$

and

$$\operatorname{Cov}(e^U, U) = E[Ue^U] - E[U]E[e^U] = \int_0^1 xe^x \, dx - \frac{e - 1}{2} = 1 - \frac{e - 1}{2} \approx 0.14086$$

Now

$$\operatorname{Var}\!\left(e^U + c^*\!\left(U - \tfrac{1}{2}\right)\right) = 0.2420 - \frac{(0.14086)^2}{1/12} \approx 0.0039$$

If we estimate $\theta = \int_0^1 e^x \, dx = E(e^U)$ directly, the variance is $\operatorname{Var}(e^U) = 0.2420$. But if we estimate $E(e^U)$ using the control variable U, the variance of the estimator, $\operatorname{Var}(e^U + c^*(U - \frac{1}{2}))$, is 0.0039. This shows that there is a 98.4% reduction in variance when we use the control variable method.

## Variance reduction using conditioning method

Suppose X and Y are output variables of a simulation run. We have

$$\operatorname{Var}(X) = E[\operatorname{Var}(X \mid Y)] + \operatorname{Var}[E(X \mid Y)]$$

Note that $E[\operatorname{Var}(X \mid Y)] \geq 0$ and $\operatorname{Var}[E(X \mid Y)] \geq 0$. It follows that $\operatorname{Var}(X) \geq \operatorname{Var}[E(X \mid Y)]$.

Suppose we are interested in estimating $\theta = E[X]$ using simulation, where X is an output variable of a simulation run. It is known that $E[E(X \mid Y)] = E[X] = \theta$. It follows that $E[X \mid Y]$ is an unbiased estimator of $\theta$. Suppose there is another variable Y such that $E[X \mid Y]$ can be determined from the simulation. Because $\operatorname{Var}(X) \geq \operatorname{Var}[E(X \mid Y)]$, it can be concluded that $E[X \mid Y]$ is superior to the estimator X of $\theta$.

Example:

Let us consider the use of simulation to estimate $\pi$. We have estimated $\pi$ by looking at how often a randomly chosen point in the square of area 4 centered at the origin falls within the inscribed circle of radius 1.

Let $V_i = 2U_i - 1$ for $i = 1, 2$, and let $I = 1$ if $V_1^2 + V_2^2 \leq 1$ and $I = 0$ otherwise. Then it is known that $E[I] = \dfrac{\pi}{4}$.

The value of I used to estimate $\dfrac{\pi}{4}$ can be improved upon by using $E[I \mid V_1]$ rather than I, since $E[E(I \mid V_1)] = E[I]$. Now consider

$$E[I \mid V_1 = v] = P\{V_1^2 + V_2^2 \leq 1 \mid V_1 = v\}$$
$$= P\{v^2 + V_2^2 \leq 1 \mid V_1 = v\}$$
$$= P\{V_2^2 \leq 1 - v^2\} \quad \text{(because } V_1, V_2 \text{ are independent)}$$
$$= P\{-(1 - v^2)^{1/2} \leq V_2 \leq (1 - v^2)^{1/2}\}$$
$$= \int_{-(1 - v^2)^{1/2}}^{(1 - v^2)^{1/2}} \frac{1}{2} \, dx \qquad \text{since } V_2 \sim \operatorname{Uni}(-1, 1)$$
$$= (1 - v^2)^{1/2}$$

Hence $E[I \mid V_1] = (1 - V_1^2)^{1/2}$.

So the estimator $(1 - V_1^2)^{1/2}$ has mean $\dfrac{\pi}{4}$ and has smaller variance than I. The estimator can also be simplified, because

$$E[(1 - V_1^2)^{1/2}] = \int_{-1}^{1} (1 - x^2)^{1/2} \left(\frac{1}{2}\right) dx = \int_0^1 (1 - x^2)^{1/2} \, dx = E[(1 - U^2)^{1/2}]$$

Thus an improvement in variance is obtained using the estimator $(1 - U^2)^{1/2}$ rather than I. When we consider the variance of the estimator $(1 - U^2)^{1/2}$,

$$\operatorname{Var}[(1 - U^2)^{1/2}] = E[1 - U^2] - \left(\frac{\pi}{4}\right)^2 = \frac{2}{3} - \left(\frac{\pi}{4}\right)^2 \approx 0.0498$$

Since I is a Bernoulli random variable with mean $\dfrac{\pi}{4}$,

$$\operatorname{Var}(I) = \frac{\pi}{4}\left(1 - \frac{\pi}{4}\right) \approx 0.1686$$

Thus using the estimator $(1 - U^2)^{1/2}$ rather than I results in a 70.44% reduction in variance.
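The reduction can be verified with a short simulation (a minimal sketch with an arbitrary seed), comparing the raw indicator I against the conditioned estimator $\sqrt{1 - U^2}$:

```python
import math
import random
import statistics

random.seed(2)
n = 100_000

raw = []          # Bernoulli indicator I of the point falling in the circle
conditioned = []  # E[I | V1] = sqrt(1 - U^2) with U ~ Uni(0, 1)
for _ in range(n):
    v1 = 2 * random.random() - 1
    v2 = 2 * random.random() - 1
    raw.append(1.0 if v1 * v1 + v2 * v2 <= 1 else 0.0)
    conditioned.append(math.sqrt(1 - random.random() ** 2))

print(4 * statistics.mean(raw))          # rough estimate of pi
print(4 * statistics.mean(conditioned))  # tighter estimate of pi
print(statistics.variance(raw))          # near the theoretical 0.1686
print(statistics.variance(conditioned))  # near the theoretical 0.0498
```

Both estimators scale by 4 to target $\pi$; the conditioned one reaches the same answer with about 70% less variance per sample.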
