Stochastic Optimization in Finance
Krastyu Gumnerov
Institute of Information Technologies – BAS
gumnerov@iinf.bas.bg

Introduction
Financial activity, like many other activities, has two characteristics:
1) Decisions are made under uncertainty: they depend on future values of parameters that are unknown at the moment of decision-making and are therefore random quantities with respect to the information available at that moment.
2) The decisions are optimal with respect to some objective.
It is therefore natural for a financial planning (portfolio management) system to have two modules:
 1. a module describing the random quantities of the model and their evolution (a scenario generator);
 2. an optimization module for a given objective function and evolution of the variables.
This review examines some methods for building the second module. Its purpose is to offer a brief description of the main approaches to dynamic stochastic optimization and to show some of their applications in finance. The author hopes in this way to attract the attention of Bulgarian experts in financial mathematics to these approaches, which are so far unpopular in Bulgaria.

I. General Statement of the Dynamic Stochastic Optimization Problem
(Stochastic Control)
To understand stochastic control it is useful to keep in mind the analogy with some simpler problems:
1) the elementary problem of finding the conditional extremum of a function;
2) the problems of the calculus of variations and the variational approach in classical mechanics and mathematical physics;
3) the problems of deterministic optimal control.
In general, the solution of these problems, including stochastic control problems, reduces to optimality conditions, most often in the form of equations. These are the equations of the considered system: they describe the evolution of the parameters defining the system. Characteristic examples are the equations for the stationary points of a function of numerical variables, the Kuhn-Tucker conditions, the Euler-Lagrange equations of the calculus of variations, the equations of mathematical physics, the Hamilton-Jacobi equation, and the Hamilton-Jacobi-Bellman equation.
The advantage of this approach is that it sometimes allows us to see the patterns of the system under consideration, and it sometimes permits a qualitative investigation of the solutions of the problem. Stochastic programming actually gives us methods for the numerical solution of these equations, but in principle other methods are also possible, and in some special cases solutions can be obtained in explicit form. An example is the Merton problem.
In the following we give a brief presentation of a basic theorem of stochastic control, as well as a simplified variant of the Merton problem.
Assume the system in question is described on a probability space $(\Omega, \mathcal{F}_t, P)$ by an Itô process of the form

(1)   $dX_t = dX_t^u = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dB_t$,

where $X_t \in \mathbb{R}^n$, $b: \mathbb{R} \times \mathbb{R}^n \times U \to \mathbb{R}^n$, $\sigma: \mathbb{R} \times \mathbb{R}^n \times U \to \mathbb{R}^{n \times m}$, $B_t$ is the $m$-dimensional Brownian motion, and $u_t \in U \subset \mathbb{R}^k$ is a parameter whose values in a given set $U$ we can choose at each moment $t$ to control the system. Thus $u_t = u(t, \omega)$ is also a stochastic process, $\mathcal{F}_t^{(m)}$-adapted. It will be called a "control".

Let $\{X_h^{s,x}\}_{h \ge s}$ be the solution of (1) with $X_t|_{t=s} = x$, i.e.

$$X_h^{s,x} = x + \int_s^h b(r, X_r^{s,x}, u_r)\,dr + \int_s^h \sigma(r, X_r^{s,x}, u_r)\,dB_r, \quad h \ge s.$$
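The dynamics (1) and their integral form can be illustrated numerically. The following is a minimal sketch, assuming the scalar case $n = m = 1$ and an Euler-Maruyama discretization of the controlled SDE; the drift, diffusion, and control functions used below are illustrative choices of ours, not data from the paper.

```python
import math
import random

def euler_maruyama(b, sigma, x0, t0, t1, n_steps, u, rng):
    """Simulate one path of dX = b(t, X, u)dt + sigma(t, X, u)dB
    (scalar case, n = m = 1) with the Euler-Maruyama scheme."""
    dt = (t1 - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment
        x = x + b(t, x, u(t, x)) * dt + sigma(t, x, u(t, x)) * dB
        t += dt
    return x

# Example: controlled wealth process dX = X[ua + (1-u)b]dt + X u sigma dB
# under the constant control u = 0.5 (hypothetical parameter values).
a_, b_, sig = 0.08, 0.03, 0.2
drift = lambda t, x, u: x * (u * a_ + (1 - u) * b_)
diff  = lambda t, x, u: x * u * sig
ctrl  = lambda t, x: 0.5

rng = random.Random(0)
terminal = [euler_maruyama(drift, diff, 1.0, 0.0, 1.0, 250, ctrl, rng)
            for _ in range(2000)]
mean_wealth = sum(terminal) / len(terminal)
# Analytically E[X_1] = exp(u*a + (1-u)*b) = exp(0.055) for these values.
```

A finer time grid and more paths would bring the Monte Carlo mean closer to the analytic expectation.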

Let $L: \mathbb{R} \times \mathbb{R}^n \times U \to \mathbb{R}$ and $K: \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ be given continuous functions, let $G \subset \mathbb{R} \times \mathbb{R}^n$ be a domain, and let $\hat{T}$ be the first exit time after $s$ from $G$ for the process $\{X_h^{s,x}\}_{h \ge s}$, i.e.

$$\hat{T} = \hat{T}^{s,x}(\omega) = \inf\{r : r \ge s,\ (r, X_r^{s,x}(\omega)) \notin G\}.$$

We define the quantity "performance"

(2)   $$J^u(s, x) = E^{s,x}\Big[\int_s^{\hat{T}} L(r, X_r, u_r)\,dr + K(\hat{T}, X_{\hat{T}})\,\chi_{\{\hat{T} < \infty\}}\Big].$$

To simplify the notation we introduce

$$Y_t = (s + t, X_{s+t}^{s,x})\ \text{for}\ t \ge 0, \quad Y_0 = (s, x) = y, \quad T = \hat{T} - s.$$

Then (1) becomes

$$dY_t = dY_t^u = b(Y_t, u_t)\,dt + \sigma(Y_t, u_t)\,dB_t.$$

The problem is, for each $y \in G$, to find a control $u^* = u^*(t, \omega) = u^*(y, t, \omega)$ such that

$$J^{u^*}(y) = \sup_{u(t,\omega)} J^u(y).$$

The function $\Phi(y) = J^{u^*}(y)$ is called the "optimal performance".
The optimality condition is given by equations satisfied by the function $\Phi(y)$ and formulated in the theorem that follows.
For $v \in U$ and $g \in C_0^2(\mathbb{R} \times \mathbb{R}^n)$ we define

$$(D^v g)(y) = \frac{\partial g}{\partial s}(y) + \sum_{i=1}^n b_i(y, v)\frac{\partial g}{\partial x_i}(y) + \sum_{i,j=1}^n a_{ij}(y, v)\frac{\partial^2 g}{\partial x_i \partial x_j}(y),$$

where $a_{ij} = \frac{1}{2}(\sigma\sigma^T)_{ij}$.

For each choice of the function $u: \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^k$ the operator $A$, defined by

$$(Ag)(y) = (D^{u(y)} g)(y), \quad g \in C_0^2(\mathbb{R} \times \mathbb{R}^n),$$

is the infinitesimal generator of the process $Y_t$ which solves the equation

$$dY_t = b(Y_t, u(Y_t))\,dt + \sigma(Y_t, u(Y_t))\,dB_t.$$
2
Theorem 1 ([30]). Let the function $\Phi$ be bounded and belong to $C^2(G) \cap C(\bar{G})$; let $T < \infty$ a.s. for each $y \in G$, and let an optimal control $u^*$ exist. Then

(3)   $$\sup_{v \in U}\{L(y, v) + (D^v\Phi)(y)\} = 0, \quad y \in G,$$

and

$$\Phi(y) = K(y), \quad y \in \partial G.$$

The supremum in (3) is attained at $v = u^*(y)$, where $u^*$ is an optimal control, i.e.

$$L(y, u^*(y)) + (D^{u^*(y)}\Phi)(y) = 0, \quad y \in G.$$

Equation (3) is called the Hamilton-Jacobi-Bellman (HJB) equation.

Example 1. Portfolio optimization ([30]).
Given are the assets $p_1$ and $p_2$ with price processes satisfying the equations

(4)   $$\frac{dp_1}{p_1} = a\,dt + \sigma\,dB_t,$$

(5)   $$\frac{dp_2}{p_2} = b\,dt, \qquad b < a.$$

At the moment $t$ let $X_t$ be the investor's wealth. He divides it into two parts, $u_t X_t$ and $(1 - u_t)X_t$, $0 \le u_t < 1$. With the part $u_t X_t$ he buys the asset $p_1$, and with the part $(1 - u_t)X_t$ the asset $p_2$. In this way he composes a portfolio which contains the amount $\frac{u_t X_t}{p_1(t)}$ of the asset $p_1$ and the amount $\frac{(1 - u_t)X_t}{p_2(t)}$ of the asset $p_2$. The increment $dX_t$ of the portfolio price will be

$$dX_t = \frac{u_t X_t}{p_1}\,dp_1 + \frac{(1 - u_t)X_t}{p_2}\,dp_2 = \frac{u_t X_t}{p_1}(p_1 a\,dt + p_1\sigma\,dB_t) + \frac{(1 - u_t)X_t}{p_2}\,p_2 b\,dt = X_t[u_t a + (1 - u_t)b]\,dt + X_t u_t\sigma\,dB_t.$$

At the initial moment $s < T$ the investor's wealth is given: $X_s = x$. Let the performance $N$ be an increasing concave function of the wealth $X_t$, for example $N(X_t) = X_t^r$, $0 < r < 1$. The investor has an investment horizon $T$ and wants, trading without borrowing, to maximize the performance at the moment $T$; more exactly, to maximize the quantity

$$J^u(s, x) = E^{s,x}[N(X_\tau^u)],$$

where $E^{s,x}$ is the expectation with respect to the probability law of the process which at the moment $s$ has the value $x$, and $\tau$ is the first exit time from the domain $G = \{(t, x): t < T,\ x > 0\}$.

The problem is to find a function $\Phi(s, x)$ and a stochastic (Markovian) process $u_t^*$, $0 \le u_t^* < 1$, satisfying the conditions

$$\Phi(s, x) = \sup\{J^u(s, x) : u\ \text{a Markovian process},\ 0 \le u < 1\} = J^{u^*}(s, x).$$
To solve this problem we compose the Hamilton-Jacobi-Bellman (HJB) equation. The operator $D^v$ is

$$(D^v f)(t, x) = \frac{\partial f}{\partial t} + x(av + b(1 - v))\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2 v^2 x^2\frac{\partial^2 f}{\partial x^2}.$$

The HJB equation is

(6)   $$\sup_v (D^v\Phi)(t, x) = 0\ \text{for}\ (t, x) \in G; \quad \Phi(t, x) = N(x)\ \text{for}\ t = T, \quad \Phi(t, 0) = N(0)\ \text{for}\ t \le T.$$

From this equation, for each $(t, x)$ we find $v = u(t, x)$ so that the function

(7)   $$\eta(v) = L^v\Phi = \frac{\partial \Phi}{\partial t} + x(b + (a - b)v)\frac{\partial \Phi}{\partial x} + \frac{1}{2}\sigma^2 v^2 x^2\frac{\partial^2 \Phi}{\partial x^2}$$

has a maximum. The function $\eta(v)$ is a polynomial of second order in $v$; hence, if $\frac{\partial \Phi}{\partial x} > 0$ and $\frac{\partial^2 \Phi}{\partial x^2} < 0$, it attains a maximum at

(8)   $$v = u(t, x) = -\frac{(a - b)\,\frac{\partial \Phi}{\partial x}}{\sigma^2 x\,\frac{\partial^2 \Phi}{\partial x^2}}.$$

Substituting (8) in (7) and (6), we obtain the following nonlinear boundary problem for $\Phi(t, x)$:

(9)   $$\frac{\partial \Phi}{\partial t} + bx\frac{\partial \Phi}{\partial x} - \frac{(a - b)^2}{2\sigma^2}\,\frac{\big(\frac{\partial \Phi}{\partial x}\big)^2}{\frac{\partial^2 \Phi}{\partial x^2}} = 0,$$
$$\Phi(t, x)|_{t=T} = N(x), \quad \Phi(t, x)|_{x=0} = N(0),$$
i.e. $\Phi(t, x)|_{\partial G} = N(x)$, where $\partial G = \{(T, x)\} \cup \{(t, 0)\}$.

Let $N(x) = x^r$, $0 < r < 1$, and let us look for $\Phi(t, x)$ in the form $\Phi(t, x) = f(t)x^r$. Substituting in (9) we get

(10)   $$\Phi(t, x) = e^{\lambda(T - t)}x^r, \qquad \lambda = br + \frac{(a - b)^2 r}{2\sigma^2(1 - r)}.$$

Now from (8) and (10) we obtain

(11)   $$u^*(t, x) = \frac{a - b}{\sigma^2(1 - r)}.$$

If $0 \le \frac{a - b}{\sigma^2(1 - r)} < 1$, then (11) is the solution of the problem ($u^*$ is in fact a constant).

This result means that in practical portfolio management the investor invests his capital at the initial moment in the proportion $\frac{a - b}{\sigma^2(1 - r)} : \Big(1 - \frac{a - b}{\sigma^2(1 - r)}\Big)$ and does not change it up to the horizon $T$.

If $u^*(t, x)$ depends on $t$ and $x$, the investor rebalances the portfolio at each moment $t$ (practically, at discrete moments): at the moment $t + dt$ he observes at the market the increments $dp_1$ and $dp_2$ of the two asset prices, calculates the increment

$$dX_t = X_t\Big[u^*(t)\frac{dp_1}{p_1(t)} + (1 - u^*(t))\frac{dp_2}{p_2(t)}\Big]$$

of his wealth, and composes a portfolio of the two assets in the proportion $u^*_{t+dt} : (1 - u^*_{t+dt})$, where

$$u^*_{t+dt} = u^*(t + dt, X_t + dX_t) = u^*(t + dt, X_{t+dt}).$$
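The closed form (11) can be checked numerically: $\eta(v)$ in (7) is a concave quadratic in $v$, so a simple grid search should recover the analytic maximizer. A small sketch, with illustrative parameter values that are not taken from the paper:

```python
# Numerical check of (11): for N(x) = x^r the fraction
# u* = (a - b) / (sigma^2 * (1 - r)) maximizes the quadratic (7) in v.
# Hypothetical parameter values, chosen only for illustration.
a, b, sigma, r = 0.08, 0.03, 0.25, 0.5

u_star = (a - b) / (sigma**2 * (1 - r))          # eq. (11)

# eta(v) from (7) with Phi = x^r evaluated at x = 1:
# Phi_x = r, Phi_xx = r(r - 1); the v-independent Phi_t term is dropped.
def eta(v):
    return (b + (a - b) * v) * r + 0.5 * sigma**2 * v**2 * r * (r - 1)

# Grid search over v confirms the analytic maximizer.
grid = [i / 1000.0 for i in range(0, 2001)]
v_best = max(grid, key=eta)
```

With these values $u^* = 1.6 > 1$, so the constraint $0 \le u < 1$ would bind in the constrained problem; the check only concerns the unconstrained maximizer of the quadratic.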
+dt

Example 2. The Merton problem ([14], [23]).
At the market $N + 1$ assets $p^0, p^1, \ldots, p^N$ are traded, with the following price processes:

(12)   $$\frac{dp_t^0}{p_t^0} = r\,dt, \qquad \frac{dp_t^i}{p_t^i} = \mu_i(p_t, t)\,dt + \sum_{j=1}^N \sigma_{ij}(p_t, t)\,dB_t^j, \quad i = 1, \ldots, N,$$

where $p_t = (p_t^1, \ldots, p_t^N)$ and $r = \mathrm{const}$.

The investor has a horizon $T$ and at an arbitrary moment $t \in [0, T]$ he possesses a portfolio of the assets $p^0, p^1, \ldots, p^N$ in quantities $\theta_t^0, \theta_t^1, \ldots, \theta_t^N$ respectively, consuming at a rate $c_t \ge 0$, where $\theta_t^0, \theta_t^1, \ldots, \theta_t^N, c_t$ are stochastic processes. At the initial moment his wealth is $X_0 \ge 0$ and he trades and consumes without external income. This means that his wealth $X_t$ at the moment $t$ is

$$X_t = \sum_{i=0}^N \theta_t^i p_t^i = X_0 + \sum_{i=0}^N \int_0^t \theta_\tau^i\,dp_\tau^i - \int_0^t c_\tau\,d\tau \ge 0, \quad t \in [0, T].$$

It follows that

(13)   $$dX_t = \sum_{i=0}^N \theta_t^i\,dp_t^i - c_t\,dt.$$

This is the "budget equation".
It is convenient, instead of the processes $\theta_t^i$, to introduce the processes

$$\alpha_t^i = \frac{\theta_t^i p_t^i}{X_t} = \frac{\theta_t^i p_t^i}{\sum_{j=0}^N \theta_t^j p_t^j}, \quad i = 0, \ldots, N, \qquad \sum_{i=0}^N \alpha_t^i = 1,$$

which represent the fraction of the wealth held in $p_t^i$, i.e. the proportion in which the wealth is distributed among the assets.
Substituting (12) in (13), the budget equation takes the form

$$dX = \sum_{i=1}^N \theta^i\Big(\mu_i p^i\,dt + \sum_{j=1}^N \sigma_{ij} p^i\,dB^j\Big) + \theta^0 r p^0\,dt - c\,dt = \sum_{i=1}^N \alpha^i\mu_i X\,dt + \sum_{i,j=1}^N \sigma_{ij}\alpha^i X\,dB^j + r\alpha^0 X\,dt - c\,dt,$$

i.e.

(14)   $$dX = \Big(X\sum_{i=1}^N \alpha^i\mu_i + X\alpha^0 r - c\Big)dt + X\sum_{j=1}^N\Big(\sum_{i=1}^N \alpha^i\sigma_{ij}\Big)dB_t^j.$$
The investor's "performance" is defined by the consumption in the period $[0, T]$ and by the wealth possessed at the moment $T$. More exactly, it is given by the formula

$$J^{(c,\alpha)}(t, x) = E^{t,x}\Big[\int_t^T L(\tau, X_\tau; c_\tau, \alpha_\tau)\,d\tau + K(X_T)\Big],$$

where $E^{t,x}$ is the expectation with respect to the probability law of the process $X$ which begins at the moment $t$ with the value $x$.
The investor's goal is, by trading and consuming, to maximize $J^{(c,\alpha)}(t, x)$. Let

$$\Phi(t, x) \stackrel{\mathrm{def}}{=} \sup_{(c,\alpha)} J^{(c,\alpha)}(t, x) = J^{(c^*,\alpha^*)}(t, x),$$

where $c^*, \alpha^*$ are the optimal consumption and investment strategies. The function $\Phi(t, x)$ satisfies the HJB equation and the boundary condition $\Phi(T, x) = K(x)$.
Let us compose the HJB equation. The infinitesimal generator of the process, according to (14), is

$$D^{(c,\alpha)}\Phi = \frac{\partial \Phi}{\partial t} + \Big(x\sum_{i=1}^N \alpha^i\mu_i + x\alpha^0 r - c\Big)\frac{\partial \Phi}{\partial x} + \frac{1}{2}x^2\sum_{j=1}^N\Big(\sum_{i=1}^N \alpha^i\sigma_{ij}\Big)^2\frac{\partial^2 \Phi}{\partial x^2}.$$

The HJB equation is

$$\sup_{(c,\alpha)}\big\{(D^{(c,\alpha)}\Phi)(t, x) + L(t, x; c, \alpha)\big\} = 0, \qquad \Phi(T, x) = K(x).$$

To solve the problem we follow this procedure: having fixed arbitrary $t, x$, we calculate the supremum of the function

$$\eta(c, \alpha) = \frac{\partial \Phi}{\partial t}(t, x) + \Big(x\sum_{i=1}^N \alpha^i\mu_i + x\alpha^0 r - c\Big)\frac{\partial \Phi}{\partial x}(t, x) + \frac{1}{2}x^2\sum_{j=1}^N\Big(\sum_{i=1}^N \alpha^i\sigma_{ij}\Big)^2\frac{\partial^2 \Phi}{\partial x^2}(t, x) + L(t, x; c, \alpha).$$

Setting the derivatives of $\eta(c, \alpha)$ with respect to $c$ and $\alpha$ equal to zero, we obtain a linear system of equations, from which we obtain $c$ and $\alpha$ as functions of $\frac{\partial \Phi}{\partial x}$ and $\frac{\partial^2 \Phi}{\partial x^2}$, i.e. $c = c(\Phi_x, \Phi_{xx})$, $\alpha^i = \alpha^i(\Phi_x, \Phi_{xx})$. Substituting in $\eta(c, \alpha)$ we obtain the equation

(15)   $$\eta(c(\Phi_x, \Phi_{xx}), \alpha(\Phi_x, \Phi_{xx})) = 0,$$

which is a nonlinear partial differential equation.
In the special case when $L = 0$ and $K(x) = x^s$, $0 < s < 1$, the problem can be solved explicitly: we look for a solution of (15) in the form $\Phi(t, x) = f(t)x^s$ and obtain an ordinary differential equation of first order for $f(t)$.
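The linear system for $\alpha$ mentioned above can be made concrete in the case $L = 0$, $K(x) = x^s$: the first-order condition gives $\sigma\sigma^T\alpha$ proportional to the excess drifts $\mu_i - r$, and $\Phi = f(t)x^s$ makes the proportionality factor $1/(1 - s)$. A sketch under hypothetical two-asset data (the drifts, covariance matrix, and exponent below are our choices, not the paper's):

```python
# First-order condition for alpha in the Merton problem (L = 0, K(x) = x^s):
#   sigma sigma^T alpha = -Phi_x / (x Phi_xx) * (mu - r*1),
# and for Phi = f(t) x^s the factor -Phi_x/(x Phi_xx) equals 1/(1 - s).
# Hypothetical two-asset data, chosen only for illustration.
mu = [0.10, 0.07]          # risky drifts mu_i
r = 0.03                   # riskless rate
S = [[0.04, 0.01],         # sigma sigma^T (covariance matrix)
     [0.01, 0.02]]
s = 0.5                    # utility exponent

def solve2(A, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det,
            (A[0][0] * rhs[1] - rhs[0] * A[1][0]) / det]

excess = [m - r for m in mu]
alpha = [a / (1 - s) for a in solve2(S, excess)]
alpha0 = 1 - sum(alpha)    # fraction in the riskless asset p^0
```

With these numbers the optimal $\alpha$ is leveraged ($\alpha^0 < 0$); a practical model would rule this out with explicit constraints on $\alpha$.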

II. Two-Stage Stochastic Linear Programs with Fixed Recourse (2S-SLPR)
In the general case, stochastic programs are generalizations of deterministic optimization programs in which some uncontrollable data are not known with certainty. Their typical features are many decision variables with many potential values, discrete time periods for the decisions, the use of expectation functionals for the objectives, and known (or partially known) distributions.

II.1. Formulation ([7])
The two-stage stochastic linear program with fixed recourse takes the following form:

(16)   $$\min z = c^T x + E_\xi\big[\min q(\omega)^T y(\omega)\big]$$
s.t. $Ax = b$,
$T(\omega)x + Wy(\omega) = h(\omega)$,
$x \ge 0$, $y(\omega) \ge 0$,

where:
$c$ is a known vector in $\mathbb{R}^{n_1}$,
$b$ is a known vector in $\mathbb{R}^{m_1}$,
$A$ and $W$ are known matrices of size $m_1 \times n_1$ and $m_2 \times n_2$, respectively.
$W$ is called the recourse matrix, which we assume here to be fixed.
For each $\omega$, $T(\omega)$ is an $m_2 \times n_1$ matrix, $q(\omega) \in \mathbb{R}^{n_2}$, $h(\omega) \in \mathbb{R}^{m_2}$. Piecing together the stochastic components of the problem, we obtain a vector $\xi(\omega)^T = (q(\omega)^T, h(\omega)^T, T_1(\omega), \ldots, T_{m_2}(\omega))$ with $N = n_2 + m_2 + m_2 \times n_1$ components, where $T_i(\omega)$ is the $i$-th row of $T(\omega)$. $E_\xi$ represents the mathematical expectation with respect to $\xi$.
A distinction is made between the first stage and the second stage. The first-stage decisions are represented by the vector $x$; corresponding to $x$ are the first-stage vector and matrices $c$, $b$ and $A$. In the second stage, a number of random events $\omega \in \Omega$ (the set of all random events) may be realized. For a given realization $\omega$, the second-stage problem data $q(\omega)$, $h(\omega)$ and $T(\omega)$ become known. Each component of $q$, $T$ and $h$ is thus a possible random variable.
Let $\Xi \subset \mathbb{R}^N$ be the support of $\xi$, i.e. the smallest closed subset in $\mathbb{R}^N$ such that $P(\xi \in \Xi) = 1$.
As just said, when the random event $\omega$ is realized, the second-stage problem data $q(\omega)$, $h(\omega)$ and $T(\omega)$ become known; then the second-stage decisions $y(\omega, x)$ must be taken. The dependence of $y$ on $\omega$ is of a completely different nature from the dependence of $q$ or the other parameters on $\omega$: it is not functional, but simply indicates that the decisions $y$ are typically not the same under different realizations of $\omega$. In a problem of this type we have two stages:
- in the first stage we make a decision;
- in the second stage we see a realization of the stochastic elements of the problem, but we are allowed to make further decisions to keep the constraints of the problem from becoming infeasible.
In other words, in the second stage we have recourse to a further degree of flexibility to preserve feasibility (but at a cost). Note particularly that in this second stage the decisions we make depend on the particular realization of the stochastic elements observed.
The objective function of (16) contains a deterministic term $c^T x$ and the expectation of the second-stage objective $q(\omega)^T y(\omega)$ taken over all realizations of the random event $\omega$. This second-stage term is the more difficult one because, for each $\omega$, the value $y(\omega)$ is the solution of a linear program.
Using discrete distributions, the resulting model can be reformulated as a linear programming problem called the deterministic equivalent program (DEP).
Problem (16) is equivalent to the DEP:

(17)   $$\min z = c^T x + \hat{Q}(x)$$
(18)   s.t. $Ax = b$, $x \ge 0$,

where

(19)   $$\hat{Q}(x) = E_\xi\big[Q(x, \xi(\omega))\big],$$

and, for a given realization $\omega$,

$$Q(x, \xi(\omega)) = \min_y\{q(\omega)^T y \mid Wy = h(\omega) - T(\omega)x,\ y \ge 0\}$$

is the second-stage value function.
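Problem (16) and its DEP (17)-(19) can be illustrated on a minimal instance. The following is a sketch under our own toy data (one first-stage variable, simple recourse with $W = 1$ and $T(\omega) = 1$, three scenarios), not an example from the paper; with a finite support $\Xi$, $\hat{Q}(x)$ is a finite weighted sum and the DEP can be minimized here by brute force over a grid:

```python
# A minimal instance of (16)/(17): a first-stage capacity x bought at
# unit cost c, and a second-stage recourse y(omega) covering the demand
# shortfall h(omega) - x at unit cost q. Toy data, not from the paper.
c, q = 1.0, 3.0
scenarios = [(0.3, 60.0), (0.5, 100.0), (0.2, 140.0)]  # (prob p_k, demand h_k)

def Q(x, h):
    """Second-stage value: min q*y s.t. y >= h - x, y >= 0."""
    return q * max(h - x, 0.0)

def z(x):
    """DEP objective (17): c*x + Q_hat(x), with Q_hat a weighted sum."""
    return c * x + sum(p * Q(x, h) for p, h in scenarios)

grid = [float(i) for i in range(0, 201)]
x_star = min(grid, key=z)
```

Marginal reasoning confirms the brute-force answer: increasing $x$ past a demand level drops the shortfall probability, and the optimum sits where $c$ first exceeds $q \cdot P(h > x)$, here at $x^* = 100$.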

II.2. Feasibility Sets ([7])
The expected value function of the second stage is given in (19). Because of its importance for applications, let us consider the situation when $\xi$ is a discrete random variable, i.e. with $\Xi$ a finite or countable set. The second-stage value function is then the weighted sum of the values $Q(x, \xi)$ over the various possible realizations of $\xi$.
Let $K_1 = \{x \mid Ax = b,\ x \ge 0\}$ be the set determined by the fixed constraints, namely those that do not depend on the particular realization of the random vector, and let $K_2 = \{x \mid \hat{Q}(x) < \infty\}$ be the second-stage feasibility set. We may now rewrite the DEP as follows:

$$\min z(x) = c^T x + \hat{Q}(x) \quad \text{s.t.}\ x \in K_1 \cap K_2.$$

Let $K_2(\xi) = \{x \mid Q(x, \xi) < +\infty\}$ be the elementary feasibility sets and

$$K_2^p = \{x \mid \forall \xi \in \Xi\ \exists y \ge 0\ \text{s.t.}\ Wy = h - Tx\} = \bigcap_{\xi \in \Xi} K_2(\xi).$$

The following results are valid:
Theorem 2. (a) For each $\xi$, the elementary feasibility set $K_2(\xi)$ is a closed convex polyhedron; hence the set $K_2^p$ is closed and convex.
(b) When $\Xi$ is finite, $K_2^p$ is also polyhedral and coincides with $K_2$.
Theorem 3. When $W$ is fixed and $\xi$ has finite second moments:
(a) $K_2$ is closed and convex.
(b) If $T$ is fixed, $K_2$ is polyhedral.
(c) Let $\Xi(T)$ be the support of the distribution of $T$. If $h(\omega)$ and $T(\omega)$ are independent and $\Xi(T)$ is polyhedral, then $K_2$ is polyhedral.
Theorem 4. For a stochastic program with fixed recourse where $\xi$ has finite second moments:
(a) $\hat{Q}(x)$ is a Lipschitzian convex function and is finite on $K_2$.
(b) When $\Xi$ is finite, $\hat{Q}(x)$ is piecewise linear.
(c) If $F(\xi)$ is an absolutely continuous distribution, $\hat{Q}(x)$ is differentiable on $K_2$.
The proofs of these theorems can be found in [7]. These theorems allow us to propose, without great effort, algorithms for solving 2S-SLPR (corresponding to the theorems' conditions).

III. Multistage Stochastic Linear Programs with Recourse (MS-SLPR)
III.1. Formulation ([7])
The multistage stochastic linear program with fixed recourse takes the following form:

$$\min z = c^1 x^1 + E_{\xi^2}\big[\min c^2(\omega)x^2(\omega^2) + \cdots + E_{\xi^H}[\min c^H(\omega)x^H(\omega^H)]\big]$$
s.t. $W^1 x^1 = h^1$,
$T^1(\omega)x^1 + W^2 x^2(\omega^2) = h^2(\omega)$,
(38)   $\cdots$
$T^{H-1}(\omega)x^{H-1}(\omega^{H-1}) + W^H x^H(\omega^H) = h^H(\omega)$,
$x^1 \ge 0$; $x^t(\omega^t) \ge 0$, $t = 2, \ldots, H$;

where:
$c^1$ is a known vector in $\mathbb{R}^{n_1}$,
$h^1$ is a known vector in $\mathbb{R}^{m_1}$,
$\xi^t(\omega)^T = (c^t(\omega)^T, h^t(\omega)^T, T_1^{t-1}(\omega), \ldots, T_{m_t}^{t-1}(\omega))$ is a random $N_t$-vector defined on $(\Omega, \mathcal{F}^t, P)$ (where $\mathcal{F}^t \subset \mathcal{F}^{t+1}$ for all $t = 2, \ldots, H$), and each $W^t$ is a known $m_t \times n_t$ matrix. The decisions $x^t$ depend on the history up to time $t$, which we indicate by $\omega^t$. We also suppose that $\Xi^t$ is the support of $\xi^t$.
A multistage stochastic program with recourse is a multi-period mathematical program where parameters are assumed to be uncertain along the time path. The term recourse means that the decision variables adapt to the different outcomes of the random parameters at each time period. Different formulations of MS-SLPR are proposed in the literature; see for example [5] and [12].
Using discrete distributions, the resulting model at every stage can be reformulated as a linear programming problem, also called the deterministic equivalent. In this way it is possible to represent the realizations of the random vector $\xi^t$ by a so-called scenario tree. As an illustration, Figure 1 presents an example of a scenario tree.

Figure 1. A tree of seven scenarios over four periods.

We first describe the deterministic equivalent form of this problem in terms of a dynamic program. If the stages are 1 to $H$, we can define the states as $x^t(\omega^t)$. Noting that the only interaction between periods is through this realization, we can define a dynamic-programming type of recursion. For the terminal conditions we have

(39)   $$Q^H(x^{H-1}, \xi^H(\omega)) = \min c^H(\omega)x^H(\omega) \quad \text{s.t.}\ W^H x^H(\omega) = h^H(\omega) - T^{H-1}(\omega)x^{H-1},\ x^H(\omega) \ge 0.$$

Letting $\hat{Q}^{t+1}(x^t) = E_{\xi^{t+1}}[Q^{t+1}(x^t, \xi^{t+1}(\omega))]$ for all $t$, we obtain the recursion, for $t = 2, \ldots, H - 1$,

(40)   $$Q^t(x^{t-1}, \xi^t(\omega)) = \min c^t(\omega)x^t(\omega) + \hat{Q}^{t+1}(x^t) \quad \text{s.t.}\ W^t x^t(\omega) = h^t(\omega) - T^{t-1}(\omega)x^{t-1},\ x^t(\omega) \ge 0,$$

where we use $x^t$ to indicate the state of the system. Other state information, in terms of the realizations of the random parameters up to time $t$, should be included if the distribution of $\xi$ is not independent of the past outcomes.
The value we seek is:

(41)   $$\min z = c^1 x^1 + \hat{Q}^2(x^1) \quad \text{s.t.}\ W^1 x^1 = h^1,\ x^1 \ge 0,$$

which has the same form as the two-stage deterministic equivalent program.
It is plain that an obvious extension of the simple 2S-SLPR above is to have more stages. In such cases it is common that:
- the stochastic elements have a discrete distribution;
- the realizations of the stochastic elements are represented as a number of scenarios of the future.
Observe that our stochastic program is then actually a deterministic program (in fact a linear program in this particular case). Although the original problem had stochastic elements, the use of scenarios to explicitly represent the set of all possible futures has enabled us to formulate the problem deterministically.

III.2. Approaches for Solving MS-SLPR
A stochastic programming implementation of a deterministic model means that one has to deal with two challenges:
- first, the generation of a much larger mathematical programming problem, which combines several variants of the deterministic version;
- second, a heavy computational burden, since the size of the model is roughly multiplied by the number of nodes in the scenario tree.
The real computational difficulty arises from the number of scenarios that can often appear: we essentially need a complete set of constraints for each possible scenario.

III.2.1. Nested Decomposition ([7])
Nested decomposition for the deterministic case was first presented in [18] and [17]. Their approaches actually represent an inner linearization: they treat all former periods as subproblems of the main problem for the current period.
The difficulty with these primal (originally) decomposition or inner linearization methods is that the set of inputs can be entirely different for different last periods of the realizations. Nevertheless, some results are obtained in [29], as will be described, by applying inner linearization to the dual problem, which is again an outer linearization of the main problem.
The common original (primal) approach is, therefore, to use outer linearization, built on the two-stage L-shaped method. The structure can be used to render the dynamic aspects of the problem, modeling the uncertainty in all parameters and handling random parameters for a moderate number of scenarios. The problem is decomposed into the subproblems of each scenario.
As before, one needs to be clear about the sequence that applies when moving down the tree in order to formulate the problem correctly. In any MS-SLPR it is important to be clear about the order that pertains as you move through a scenario; in particular, you should be clear at each stage about which stochastic elements have been realized and which remain unrealized. Note that this is a key point: scenarios with a common history must have the same set of decisions, and this must always be taken into account when formulating a scenario-based decision problem.
Louveaux was the first to present the multi-stage quadratic generalization; Birge extended the two-stage method to the linear case. The same approach is also found in Pereira and Pinto [31].
The basic concept of the "nested" L-shaped method (Benders decomposition) is to place cuts on $\hat{Q}^{t+1}(x^t)$ in (40) and to add further cuts in order to restrict $x^t$. The cuts represent successive linear approximations of $\hat{Q}^{t+1}(x^t)$. Because of the polyhedral structure of $\hat{Q}^{t+1}(x^t)$, this process converges to an optimal solution in a finite number of steps.
To summarize, as we move down the scenario tree (for any scenario) we have the following order of actions:
- in the first stage, a decision as to how much to produce; then
- in the second stage, a realization of the stochastic element; then
- a decision as to the values of the recourse variables; then
- in the second stage, a decision as to how much to produce; then
- in the third stage, a realization of the stochastic element;
- …;
- in the H-th stage, a realization of the stochastic element; and finally
- a decision as to the values of the recourse variables.

For every stage $t = 1, \ldots, H - 1$ and each scenario at that stage $k = 1, \ldots, N^t$ we have the following master problem, which generates cuts for stage $t$ and proposals for stage $t + 1$:

(42)   $$\min (c_k^t)^T x_k^t + \theta_k^t$$
(43)   s.t. $W^t x_k^t = h_k^t - T_k^{t-1} x_{a(k)}^{t-1}$,
(44)   $D_{k,j}^t x_k^t \ge d_{k,j}^t, \quad j = 1, \ldots, r_k^t$,
(45)   $E_{k,j}^t x_k^t + \theta_k^t \ge e_{k,j}^t, \quad j = 1, \ldots, s_k^t$,
(46)   $x_k^t \ge 0$,

where $a(k)$ is the ancestor scenario of $k$ at stage $t - 1$, $x_{a(k)}^{t-1}$ is the current solution from that scenario, and where for $t = 1$ we interpret $b^1 = h^0 - T^0 x^0$ as the initial conditions of the problem. We may also refer to the stage-$H$ problem, in which $\theta_k^H$ and the constraints (44) and (45) are not present. To designate the period and scenario of problem (42)-(46), we also denote this subproblem NLDS($t$, $k$). $D^t(j)$ denotes the scenarios in period $t$ that follow scenario $j$ of period $t - 1$. We also assume that all the random elements in (38) have finite support.
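A single optimality cut of the kind placed on $\hat{Q}^{t+1}$ can be sketched on a toy two-stage subproblem (our own data, not from the paper). For the simple-recourse second stage $Q(x, \xi) = \min\{qy : y \ge \xi - x,\ y \ge 0\}$, the optimal dual multiplier is $q$ when $\xi > x$ and $0$ otherwise, so at a trial point $x_0$ the cut $\theta \ge gx + e$ with $g = -q \cdot P(\xi > x_0)$ supports $\hat{Q}$ from below, in the spirit of (45):

```python
# One L-shaped optimality cut for a simple-recourse toy subproblem.
# Q(x, xi) = q * max(xi - x, 0); Q_hat is its expectation over a finite
# support, so it is piecewise linear and convex. Toy data of ours.
q = 2.0
scenarios = [(0.25, 1.0), (0.50, 2.0), (0.25, 4.0)]   # (prob, xi)

def Q_hat(x):
    return sum(p * q * max(xi - x, 0.0) for p, xi in scenarios)

def cut_at(x0):
    """Supporting cut theta >= g*x + e for Q_hat, built from the duals."""
    g = -q * sum(p for p, xi in scenarios if xi > x0)  # expected subgradient
    e = Q_hat(x0) - g * x0                             # intercept
    return g, e

g, e = cut_at(1.5)
# The cut supports Q_hat from below everywhere and is tight at x0 = 1.5.
valid = all(Q_hat(x / 10.0) >= g * (x / 10.0) + e - 1e-9
            for x in range(0, 61))
tight = abs(Q_hat(1.5) - (g * 1.5 + e)) < 1e-9
```

In the nested method such cuts accumulate as constraints (45) of the ancestor's master problem, tightening the outer linearization of $\hat{Q}^{t+1}$ at each pass.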

III.2.2. Parallel Decomposition
Yang and Zenios in [38] implemented the factorization in parallel on a Connection Machine CM-5 with up to 64 processors. Due to memory efficiencies in the parallel version, they actually achieve superlinear speedups compared to serial times. They also report solutions of problems with up to 18 million variables and almost 3 million constraints.
These data seem to show that the special factorizations offer exceptional advantages in the framework of interior-point methods for stochastic programs. The concrete form of the factorization is determined by the availability of parallel processing and by the possibility of solving the necessary subproblems in parallel. It is affirmed, however, that decomposition methods are the best choice for parallel solution.
IV. A Model of Financial Planning (Asset/Liability Management)
Recent papers analyze the effects of asset return predictability on the asset allocation decisions of long-term investors. These papers investigate how the investor's horizon or the uncertainty of the estimated parameters affects the allocation decision. There has been a growing interest in the development of multiperiod stochastic models for asset and liability management (ALM). Kusy and Ziemba in [20] developed a multistage stochastic linear programming model for the Vancouver City Savings Credit Union. Another successful application of multistage stochastic programming is the Russell-Yasuda Kasai model in [9]; the investment strategy suggested by the model resulted in extra income of \$79 million during the first two years of its application (1991 and 1992). For other examples see [24], [34] and [35].
Here we consider a model taken from [24].
Assume the investor, having a definite goal at the horizon $T$, invests in given assets, rebalancing his portfolio until the moment $T$ according to the information received and his expectations about market prices. The moments at which the portfolio is rebalanced are numbered $\{0, 1, \ldots, t, t + 1, \ldots, T\}$. Asset investment categories are defined by the set $A = \{1, 2, \ldots, I\}$, with category 1 representing cash. The possible scenarios are numbered $\{1, 2, \ldots, s, \ldots, S\}$. Two different scenarios can coincide up to a certain moment, i.e. the realizations of the random quantities in the two scenarios coincide up to that moment; hence the decisions based on these scenarios must also coincide up to that moment. This characteristic of the scenarios is called nonanticipativity.
There are two types of variables determining the model: the variables that do not depend on the investor are called parameters, while those depending on the investor are called decision variables.

"Parameters"
$r_{i,t}^s = 1 + \rho_{i,t}^s$, where $\rho_{i,t}^s$ is the return on asset $i$ at moment $t$ under scenario $s$;
$\pi^s$: probability that scenario $s$ occurs, $\sum_{s=1}^S \pi^s = 1$;
$w_0$: wealth at moment 0;
$\mu_{i,t}$: transaction costs incurred in rebalancing asset $i$ at the beginning of time period $t$ (the cost of selling equals the cost of buying);
$\beta_t^s$: borrowing rate in period $t$ under scenario $s$.
"Decision variables"
$x_{i,t}^s$: amount of money in asset category $i$ in time period $t$ under scenario $s$, after rebalancing;
$v_{i,t}^s$: amount of money in asset category $i$ at the beginning of time period $t$ under scenario $s$, before rebalancing;
$w_t^s$: wealth at the beginning of time period $t$ under scenario $s$;
$p_{i,t}^s$: amount of asset $i$ purchased for rebalancing in period $t$ under scenario $s$;
$d_{i,t}^s$: amount of asset $i$ sold for rebalancing in period $t$ under scenario $s$;
$b_t^s$: amount borrowed in period $t$ under scenario $s$.

Model SP

(47)   $$\max z = \sum_{s=1}^S \pi^s f(w_T^s),$$

s.t.

(48)   $$\sum_{i \in A} x_{i,0}^s = w_0, \quad \forall s,$$

(49)   $$\sum_{i \in A} x_{i,t}^s = w_t^s, \quad \forall s,\ t = 1, \ldots, T,$$

(50)   $$v_{i,t}^s = r_{i,t-1}^s x_{i,t-1}^s, \quad \forall s,\ t = 1, \ldots, T,\ i \in A,$$

(51)   $$x_{i,t}^s = v_{i,t}^s + p_{i,t}^s(1 - \mu_{i,t}) - d_{i,t}^s, \quad \forall s,\ i \ne 1,\ t = 1, \ldots, T,$$

(52)   $$x_{1,t}^s = v_{1,t}^s + \sum_{i \ne 1} d_{i,t}^s(1 - \mu_{i,t}) - \sum_{i \ne 1} p_{i,t}^s - b_{t-1}^s(1 + \beta_{t-1}^s) + b_t^s, \quad \forall s,\ t = 1, \ldots, T,$$

(53)   $x_{i,t}^s = x_{i,t}^{s'}$ for all scenarios $s$ and $s'$ with identical past up to time $t$.

The model works as follows. Assume we have solved the optimization problem, i.e., according to the given scenarios for the parameters (from the scenario generator), we have determined the optimal decision variables.
- Assume that at the moment $t - 1$ the scenario $s'$ is realized, and after rebalancing the investor invests (places funds) in the assets $i$ the sums $x_{i,t-1}^{s'}$.
- Assume that at the following moment $t$ the scenario $s$ is realized. It is identical with $s'$ up to the moment $t - 1$, and therefore (taking nonanticipativity into consideration) $x_{i,t-1}^s = x_{i,t-1}^{s'}$.
- Then we determine $v_{i,t}^s$ using (50): $v_{i,t}^s = r_{i,t-1}^s x_{i,t-1}^s = r_{i,t-1}^s x_{i,t-1}^{s'}$;
- and from (51) and (52) we determine $x_{i,t}^s$.
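The forward pass just described, i.e. constraints (50)-(52) applied along one realized scenario, can be sketched as follows; the returns, costs, and rebalancing decisions below are illustrative values of ours, not data from [24]:

```python
# Forward pass through (50)-(52) for one scenario and one period.
# Index 0 plays the role of category 1 (cash); borrowing is set to zero.
# Toy returns and decisions, chosen only for illustration.
mu_cost = 0.01                       # proportional transaction cost mu_{i,t}
x_prev = [40.0, 30.0, 30.0]          # holdings after the previous rebalance
r = [1.00, 1.10, 0.95]               # gross returns r_{i,t-1}^s
p = [0.0, 5.0, 0.0]                  # purchases (none into cash)
d = [0.0, 0.0, 10.0]                 # sales

v = [r_i * x_i for r_i, x_i in zip(r, x_prev)]          # eq. (50)
x = [0.0] * 3
for i in (1, 2):                                        # eq. (51), i != 1
    x[i] = v[i] + p[i] * (1 - mu_cost) - d[i]
x[0] = (v[0]
        + sum(d[i] * (1 - mu_cost) for i in (1, 2))     # sale proceeds
        - sum(p[i] for i in (1, 2)))                    # purchase payments
wealth = sum(x)                                         # eq. (49)
```

In the full model this recursion runs over every scenario of the tree, with the nonanticipativity constraints (53) tying together the decisions of scenarios that share a history.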
