# Closed-Form Solutions to the Hamilton-Jacobi-Bellman Equation


```
Closed-Form Solutions to the Hamilton-Jacobi-Bellman Equation
Solomon M. Antoniou

SKEMSYS
Scientific Knowledge Engineering and Management Systems
37 Oliatsou Street, Corinthos 20100, Greece
solomon_antoniou@yahoo.com

Abstract
We initiate a solution procedure for the Hamilton-Jacobi-Bellman equation. The method relies on the determination of the underlying Lie symmetries of the equation. After finding the symmetries, we consider a linear combination of the infinitesimal generators, which in turn determines two invariants. Using transformations based on the two invariants, we transform the HJB equation into a nonlinear ordinary differential equation, which can be converted into an Abel differential equation. The Abel equations obtained here admit closed-form (implicit) solutions. We have thus obtained four families of solutions.

Keywords: HJB equation, Lie symmetries, nonlinear PDEs, exact solutions, portfolio optimization.


1. Introduction
Stochastic Control ([1]-[5]) is an optimization procedure used in diversified areas of financial applications, such as:

• Portfolio Optimization ([6]-[11])
• Option Pricing ([12]-[15])
• Consumption-Investment Strategies ([16]-[19])
• Stochastic Interest Rates [8]
• Asset Allocation [20]
• Risk ([21]-[22])
• Pension Funds ([23]-[24])
• Insurance [25]

The basic equation used in the optimization process is the Hamilton-Jacobi-Bellman (HJB) equation, derived under the principle of Dynamic Programming ([26]). Despite the successful predictions of the models relying on the HJB equation, no general solution procedure is available. The solutions obtained so far are based on a linearization procedure, using a utility function as an input. It is the purpose of this paper to initiate a solution procedure (and to find some general solutions as well) based on Lie symmetry. The Lie symmetry approach to differential equations was introduced by S. Lie ([27]-[28]) and is best described in refs ([29]-[37]). This technique was used for the first time in Finance by Gazizov and Ibragimov ([38]). The same technique was also used by the author in solving the Bensoussan-Crouhy-Galai equation ([39]). There is, however, a growing list of papers applying Lie symmetry methods to Finance. For a (non-complete) set of references, see ([40]-[44]).


The paper is organized as follows: In section 2 we consider a simple model which leads to the Hamilton-Jacobi-Bellman equation. In section 3 we perform the Lie symmetry analysis of this equation; in other words, we find the infinitesimal generators of the Lie group. In section 4 we find general solutions to the equation, by considering first a linear combination of the infinitesimal generators and then determining the invariants of the equation. The invariants allow us to convert the equation into a nonlinear, second order ordinary differential equation. Using another transformation, we convert this nonlinear equation into an Abel equation which possesses a closed-form solution. In section 5 we present our conclusions and the four families of our solution.

2. The Hamilton-Jacobi-Bellman Equation
In this section we consider a portfolio selection model, optimized under the principle of Dynamic Programming, leading to an HJB equation. The model we consider is a Brownian model, in which the money market account and the stock are modeled by

(2.1)  dS_0(t) = S_0(t) r(t) dt,  S_0(0) = 1

(2.2)  dS(t) = S(t) [μ(t) dt + σ(t) dW(t)],  S(0) > 0

The coefficients r, μ and σ are progressively measurable processes which are uniformly bounded. Let also

φ(t): the number of stocks held at time t
φ_0(t) S_0(t): the money invested in the money market account at time t
X(t) = φ_0(t) S_0(t) + φ(t) S(t): the wealth of the investor
π(t) = φ(t) S(t) / X(t): the proportion of wealth invested in the stock
1 − π(t): the proportion invested in the money market account.


The following proposition holds true.

Proposition. The wealth dynamics of a self-financing trading strategy are described by the so-called wealth equation

(2.3)  dX(t) = X(t) [(r + π(μ − r)) dt + π σ dW(t)],  X(0) = x_0  (x_0 ≥ 0)

We come now to portfolio optimization: an investor with initial wealth X(0) = x_0 > 0 would like to attain the optimal final wealth X*(T) under the optimal strategy π*. For a given initial wealth x_0 > 0, the investor maximizes the utility functional

(2.4)  sup_{π ∈ A} E[U(X^π(T))],  where  A = {π : π admissible and E[U(X^π(T))^−] < ∞}

and X^π(t) is given by (2.3):

dX^π(t) = X^π(t) [(r(t) + π(t)(μ(t) − r(t))) dt + π(t) σ(t) dW(t)],  X^π(0) = x_0

The function U(x), defined for positive real numbers, is called the utility function; it is strictly concave, strictly increasing and continuously differentiable, satisfying the conditions

U′(0) = lim_{x↓0} U′(x) = ∞,  U′(∞) = lim_{x↑∞} U′(x) = 0

Define the value function

G(t, x) = sup_{π ∈ A} E_{t,x}[U(X^π(T))]

where

E_{t,x}[U(X^π(T))] = E[U(X^π(T)) | X(t) = x]

and G(T, x) = U(x). Suppose now that there is an optimal strategy π*. The corresponding HJB

equation, which maximizes the final wealth, derived under the principle of Dynamic Programming, is given by

(2.5)  G_t(t, x) + sup_π { G_x(t, x) [r + π(μ − r)] x + (1/2) π² σ² x² G_xx(t, x) } = 0

for any admissible strategy π, with final condition G(T, x) = U(x). We consider the case where r, μ and σ are all constants. Pointwise maximization over π yields the first order condition

(2.6)  G_x(t, x) (μ − r) x + σ² x² G_xx(t, x) π(t) = 0

implying that the optimal strategy is given by

(2.7)  π* = − (μ − r) G_x(t, x) / (σ² x G_xx(t, x))

Since the derivative of (2.6) with respect to π is σ² x² G_xx(t, x), the candidate π* is the global maximum if G_xx < 0. Substituting π* into the HJB equation (2.5) yields a PDE for G(t, x):

(2.8)  G_t(t, x) + r x G_x(t, x) − (1/2) k² G_x²(t, x)/G_xx(t, x) = 0

where k = (μ − r)/σ.

The traditional way of solving equation (2.8) is by separation of variables. In fact, considering that

(2.9)  G(t, x) = (1/γ) x^γ f(t)^(1−γ)

equation (2.8) yields the first order differential equation f′(t) = K f(t) for the function f(t), where

K = − (γ/(1−γ)) [ r + (1/(1−γ)) (k²/2) ]


The above differential equation has the solution f(t) = e^(−K(T−t)), satisfying the condition f(T) = 1. Therefore solution (2.9) is compatible with the condition G(T, x) = U(x) ≡ (1/γ) x^γ. From (2.9) and (2.7) we determine the value function and the optimal strategy respectively. For the type of solution we have obtained, one can check that G_xx < 0 for γ < 1.

In section 4 we provide our own method of solution, based on Lie Symmetry Analysis.
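The separation-of-variables solution above can be verified mechanically. The following sketch (an independent check, not part of the original derivation; it assumes the sympy library is available and takes γ = 1/2 for concreteness) confirms that the ansatz (2.9) with the stated K satisfies equation (2.8).

```python
import sympy as sp

t, x, T, r, k = sp.symbols("t x T r k", positive=True)
g = sp.Rational(1, 2)   # the exponent gamma; any gamma < 1 behaves the same way

K = -g / (1 - g) * (r + k**2 / (2 * (1 - g)))
f = sp.exp(-K * (T - t))
G = x**g / g * f**(1 - g)                   # ansatz (2.9)

# residual of (2.8): G_t + r x G_x - (1/2) k^2 G_x^2 / G_xx
res = (sp.diff(G, t) + r * x * sp.diff(G, x)
       - sp.Rational(1, 2) * k**2 * sp.diff(G, x)**2 / sp.diff(G, x, 2))

print(sp.simplify(res))  # 0
```

The residual vanishes identically in x, t, r and k, which is exactly the statement that K was chosen correctly.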

3. Lie Symmetry Analysis of the HJB Equation
We now consider the HJB equation (2.8) written in the equivalent form

(3.1)  u_t u_xx + r x u_x u_xx − (1/2) k² u_x² = 0

where we have changed the notation of the function from G(t, x) to the more traditional notation u(x, t). We are going to perform a Lie symmetry analysis of this equation. By this we mean that we shall determine the infinitesimal generators of the Lie symmetry group. That is the essential step, which is then used to determine the invariants necessary to convert the partial differential equation into an ordinary differential equation.

We introduce the following concepts and notation ([29]). We consider a base space M, which is the Cartesian product X × U of a 2-dimensional space X of independent variables (x, t) ∈ X by a 1-dimensional space U of dependent variables u ∈ U. Let U^(1) be the space of the first derivatives of the functions u(x, t) with respect to x and t: (u_x, u_t) ∈ U^(1). Let also U^(2) be the space of the second derivatives of the functions u(x, t) with respect to x and t:


(u_xx, u_xt, u_tt) ∈ U^(2). Since differential equation (3.1) is of second order, we introduce a second order jet bundle M^(2) by considering the Cartesian product

M^(2) = M × U^(1) × U^(2)

The coordinates of the space M^(2) are labeled by

z = (x, t, u, u_x, u_t, u_xx, u_xt, u_tt) ∈ M^(2)

In the space M^(2), equation (3.1) can be expressed as

(3.2)  Δ(x, t, u^(2)) = 0,  where  Δ(x, t, u^(2)) = u_t u_xx + r x u_x u_xx − (1/2) k² u_x²

Let L_Δ be the solution manifold of (3.1):

L_Δ = {z ∈ M^(2) | Δ = 0} ⊂ M^(2)

A symmetry group G_Δ of the equation Δ = 0 is defined by

G_Δ = {g ∈ Diff(M^(2)) | g : L_Δ → L_Δ}

We want to determine a subgroup of Diff(M^(2)) compatible with the structure of L_Δ. We shall first find the symmetry Lie algebra Diff_Δ(M^(2)) ⊂ Diff(M^(2)) and then use the main Lie theorem to determine G_Δ. Let us denote by V (V ∈ Diff_Δ(M)) an element of a vector field on M (the generator of Lie symmetries), defined by

(3.3)  V = ξ(x, t, u) ∂/∂x + τ(x, t, u) ∂/∂t + φ(x, t, u) ∂/∂u

where ξ(x, t, u), τ(x, t, u) and φ(x, t, u) are smooth functions of their arguments. The infinitesimal generators of g ∈ G_Δ will have the structure of (3.3) and will form an algebra Diff_Δ(M). The algebra Diff_Δ(M^(2)) will be spanned by the vectors pr^(2)V, the second prolongation of V, defined by


pr^(2)V = V + φ^x ∂/∂u_x + φ^t ∂/∂u_t + φ^xx ∂/∂u_xx + φ^xt ∂/∂u_xt + φ^tt ∂/∂u_tt

The symmetries are determined by the equation ([29], Theorem 2.31)

(3.4)  pr^(2)V[Δ(x, t, u^(2))] = 0

as long as

(3.5)  Δ(x, t, u^(2)) = 0

We implement next the equation pr^(2)V[Δ(x, t, u^(2))] = 0. We have that

pr^(2)V[Δ(x, t, u^(2))] = 0  ⇔  pr^(2)V[u_t u_xx + r x u_x u_xx − (1/2) k² u_x²] = 0

or

(3.6)  ξ (r u_x u_xx) + φ^x (r x u_xx − k² u_x) + φ^t u_xx + φ^xx (u_t + r x u_x) = 0

The coefficients φ^x, φ^t and φ^xx are calculated to be ([29], Example 2.38)

φ^x = φ_x + (φ_u − ξ_x) u_x − ξ_u u_x² − τ_x u_t − τ_u u_x u_t

φ^t = φ_t − ξ_t u_x + (φ_u − τ_t) u_t − ξ_u u_x u_t − τ_u u_t²

φ^xx = φ_xx + (2φ_xu − ξ_xx) u_x − τ_xx u_t + (φ_uu − 2ξ_xu) u_x² − 2τ_xu u_x u_t − ξ_uu u_x³ − τ_uu u_x² u_t + (φ_u − 2ξ_x) u_xx − 2τ_x u_xt − 3ξ_u u_x u_xx − τ_u u_t u_xx − 2τ_u u_x u_xt

Since for the functions ξ, τ and φ we assume the dependence ξ = ξ(x, t), τ = τ(t) and φ = φ(x, t, u), equation (3.6) becomes

(3.7)  ξ (r u_x u_xx) + (r x u_xx − k² u_x) {φ_x + (φ_u − ξ_x) u_x} + {φ_t − ξ_t u_x} u_xx + (φ_u − τ_t) u_xx u_t + {φ_xx + (2φ_xu − ξ_xx) u_x + φ_uu u_x² + (φ_u − 2ξ_x) u_xx} u_t + r x u_x {φ_xx + (2φ_xu − ξ_xx) u_x + φ_uu u_x² + (φ_u − 2ξ_x) u_xx} = 0

We now have to take into account the condition Δ = 0. Therefore we substitute u_t by − r x u_x + (1/2) k² u_x²/u_xx in the previous equation (3.7):

(3.8)  ξ (r u_x u_xx) + (r x u_xx − k² u_x) {φ_x + (φ_u − ξ_x) u_x} + (φ_t − ξ_t u_x) u_xx + (φ_u − τ_t) u_xx (− r x u_x + (1/2) k² u_x²/u_xx) + {φ_xx + (2φ_xu − ξ_xx) u_x + φ_uu u_x² + (φ_u − 2ξ_x) u_xx} (− r x u_x + (1/2) k² u_x²/u_xx) + r x u_x {φ_xx + (2φ_xu − ξ_xx) u_x + φ_uu u_x² + (φ_u − 2ξ_x) u_xx} = 0

We equate to zero the coefficients of the derivatives of the function u in the previous equation (after multiplying by u_xx):

Coefficient of (u_x)⁴:
(3.9)  (1/2) k² φ_uu = 0

Coefficient of (u_x)³:
(3.10)  (1/2) k² (2φ_xu − ξ_xx) = 0

Coefficient of (u_x)²:
(3.11)  (1/2) k² φ_xx = 0

Coefficient of u_x u_xx:
(3.12)  − k² φ_x = 0

Coefficient of (u_x)² u_xx:
(3.13)  − (1/2) k² τ′(t) = 0

Coefficient of (u_xx)²:
(3.14)  r x φ_x + φ_t = 0

Coefficient of u_x (u_xx)²:
(3.15)  r ξ − ξ_t + r x (τ′(t) − ξ_x) = 0

Equations (3.9)-(3.15) are the determining equations of the Lie symmetries of equation (3.1). We are now going to solve this system. From equation (3.9) we get that φ_uu = 0, which means that φ is a linear function with respect to u:

(3.16)  φ(x, t, u) = α(x, t)·u + β(x, t)

From equation (3.12) we get that φ_x = 0, i.e.

α_x·u + β_x = 0

from which we get α_x = 0 and β_x = 0. Therefore the functions α and β introduced in (3.16) are independent of x. On the other hand, since φ_x = 0, we get from (3.14) that φ_t = 0. Therefore α_t·u + β_t = 0, from which we have that α_t = 0 and β_t = 0, which means that the functions α and β are constants. We then obtain

(3.17)  φ(x, t, u) = a_1·u + a_2

Equation (3.11) then becomes an identity. We get from (3.13) that τ′(t) = 0. Therefore τ(t) is a constant:

(3.18)  τ(t) = a_3

We get from (3.10), because of (3.17), that ξ_xx = 0. Therefore ξ(x, t) is a linear function with respect to x:

(3.19)  ξ(x, t) = f(t)·x + g(t)

where f(t) and g(t) are functions to be determined. Using the previous expression, equation (3.15) becomes

− f′(t)·x + {r·g(t) − g′(t)} = 0

from which we get, since this equation should hold for any x, that f′(t) = 0 and r·g(t) − g′(t) = 0. We thus have

(3.20)  f(t) = a_4

and

(3.21)  g(t) = a_5·e^(rt)

Therefore the function ξ(x, t) takes the form

(3.22)  ξ(x, t) = a_4·x + a_5·e^(rt)

Relations (3.22), (3.18) and (3.17) allow us to write down the generator of the symmetries:

(3.23)  V = (a_4·x + a_5·e^(rt)) ∂/∂x + a_3 ∂/∂t + (a_1·u + a_2) ∂/∂u

Therefore we have the following

Theorem 1. The Lie algebra of the infinitesimal transformations of the HJB equation (3.1) is spanned by the five vector fields

(3.24)  X_1 = u ∂/∂u
(3.25)  X_2 = ∂/∂u
(3.26)  X_3 = ∂/∂t
(3.27)  X_4 = x ∂/∂x
(3.28)  X_5 = e^(rt) ∂/∂x
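As a spot-check of Theorem 1 (an illustration added here, not part of the original argument), one can verify that a generator such as X_5 = e^(rt) ∂/∂x indeed maps solutions of (3.1) to solutions: translating x by ε·e^(rt) in the explicit solution of section 2 must again yield a solution. A sympy sketch, with γ = 1/2 for concreteness:

```python
import sympy as sp

t, x, T, r, k, eps = sp.symbols("t x T r k epsilon", positive=True)

def residual(u):
    # left-hand side of the HJB form (3.1) for a candidate u(x, t)
    ux, ut, uxx = sp.diff(u, x), sp.diff(u, t), sp.diff(u, x, 2)
    return ut * uxx + r * x * ux * uxx - sp.Rational(1, 2) * k**2 * ux**2

# explicit solution from section 2, with gamma = 1/2
g = sp.Rational(1, 2)
K = -g / (1 - g) * (r + k**2 / (2 * (1 - g)))
u = x**g / g * sp.exp(-K * (T - t))**(1 - g)
assert sp.simplify(residual(u)) == 0

# finite transformation generated by X5: x -> x - eps*exp(r*t)
u_shift = u.subs(x, x - eps * sp.exp(r * t))
assert sp.simplify(residual(u_shift)) == 0
print("X5 maps the sample solution to another solution")
```

The same kind of check works for X_1 (scaling of u), X_2 (shifting u by a constant), X_3 (time translation) and X_4 (scaling of x).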

4. General Solutions to the HJB Equation
In order to find a solution to the HJB equation (3.1), we have to determine the invariants of the equation. For this purpose we consider a linear combination of the generators of the symmetries found above, namely the combination

X_3 + α X_4 + β X_5 + γ X_1 + δ X_2

In other words we consider the generator

(4.1)  ∂/∂t + (α·x + β·e^(rt)) ∂/∂x + (γ·u + δ) ∂/∂u

where the coefficients α, β, γ and δ will be determined on the basis of finding closed-form solutions to the equation. Based on (4.1), we consider the system

(4.2)  dt/1 = dx/(α·x + β·e^(rt)) = du/(γ·u + δ)

In order to solve the first equation

(4.3)  dt/1 = dx/(α·x + β·e^(rt))

of the above system, we have to consider two cases: α ≠ r and α = r.

4.1. The α ≠ r case. If α ≠ r, equation (4.3) has the general solution


(4.4)  C_1 = e^(−αt)·x − (β/(r−α))·e^((r−α)t)

Therefore one invariant of the HJB equation (3.1) is given by

(4.5)  y = e^(−αt)·x − (β/(r−α))·e^((r−α)t)

The equation dt/1 = du/(γ·u + δ) has the general solution C_2 + γt = ln(γ·u + δ). Therefore another invariant is given by

(4.6)  u = (1/γ)(θ·e^(γt) − δ),  θ = θ(y),  where  y = e^(−αt)·x − (β/(r−α))·e^((r−α)t)

Using (4.5) and (4.6), we find that the partial derivatives of the function u transform as

(4.7)  u_t = (1/γ) {γθ − [α·y + (βr/(r−α))·e^((r−α)t)] θ_y} e^(γt)

(4.8)  u_x = (1/γ) e^((γ−α)t) θ_y

(4.9)  u_xx = (1/γ) e^((γ−2α)t) θ_yy

Using (4.7)-(4.9), the original HJB equation (3.1) is transformed into

(4.10)  γθθ_yy + (r−α)·y·θ_yθ_yy − (1/2) k² (θ_y)² = 0

which is a nonlinear, second order ordinary differential equation. Under the transformation

(4.11)  w(y) = θ_y/θ

equation (4.10) takes the form

(4.12)  [γ + (r−α)·y·w] w_y + (γ − (1/2) k²) w² + (r−α)·y·w³ = 0

which is an Abel differential equation. Introducing the notation

(4.13)  λ = γ − (1/2) k²  and  μ = r − α

equation (4.12) can be written as

(4.14)  w_y = − (λw² + μ·y·w³)/(γ + μ·y·w)

which, in turn, under the substitution

(4.15)  w(y) = 1/Z(y)

takes the form

(4.16)  Z_y = (λ·Z(y) + μ·y)/(γ·Z(y) + μ·y)

Introducing the function V(y) by

(4.17)  Z(y) = y·V(y)

we transform equation (4.16) into the equation

V + y·V_y = (λV + μ)/(γV + μ)

which is equivalent to

[V(γV + μ) − (λV + μ)]/(γV + μ) = − y·V_y

Making the choice λ = μ (which fixes α = r − γ + (1/2)k²), the previous equation takes the form

(γV² − μ)/(γV + μ) = − y·V_y

or, introducing a new parameter ε by

(4.18)  μ = γ·ε²


we obtain the equation

(4.19)  (V² − ε²)/(V + ε²) = − y·V_y

The above differential equation can be written as

[ ((1+ε)/2)·1/(V−ε) + ((1−ε)/2)·1/(V+ε) ] V_y = − 1/y

which can be integrated to give

(4.20)  ((1+ε)/2)·ln(V−ε) + ((1−ε)/2)·ln(V+ε) = − ln y + C

The previous equation can be written in the equivalent form

(4.21)  (V−ε)^(1+ε) · (V+ε)^(1−ε) = A/y²

where A is a constant. The function θ = θ(y) is determined by integration of

(4.22)  θ_y/θ = 1/(y·V(y))

where V(y) is given implicitly by (4.21). Therefore we find that

(4.23)  θ(y) = B·exp{ ∫_{y_0}^{y} ds/(s·V(s)) }

where B is a constant.

4.2. The α = r case. In this case the system we consider is given by

dt/1 = dx/(r·x + β·e^(rt)) = du/(γ·u + δ)

The previous system has the general solutions

C_1 = x·e^(−rt) − βt  and  C_2 + γt = ln(γ·u + δ)

Therefore in this case we have the two invariants


(4.24)  y = x·e^(−rt) − βt

and

(4.25)  u = (1/γ)(θ·e^(γt) − δ)

where θ = θ(y).

Using (4.24) and (4.25), we find that the partial derivatives of the function u transform as

(4.26)  u_t = (1/γ) {γθ − [r·(y + βt) + β] θ_y} e^(γt)

(4.27)  u_x = (1/γ) e^((γ−r)t) θ_y

(4.28)  u_xx = (1/γ) e^((γ−2r)t) θ_yy

Substituting (4.26)-(4.28) into the original equation (3.1), we find that the function θ = θ(y) satisfies the equation

(4.29)  γθθ_yy − βθ_yθ_yy − (1/2) k² (θ_y)² = 0

which is a nonlinear, second order ordinary differential equation. Under the transformation

(4.30)  w(y) = θ_y/θ

equation (4.29) takes the form

(4.31)  (γ − β·w) w_y + λ·w² − β·w³ = 0

which is an Abel differential equation; here λ = γ − (1/2)k² as before. In solving (4.31), we shall consider two cases: β = 0 and β ≠ 0.

4.2.1. If β = 0, the general solution to equation (4.31) is given by

(4.32)  w(y) = γ/(λ·y + A)


where A is an arbitrary constant. Integrating further w(y) = θ_y/θ, we find that

(4.33)  θ(y) = B·(λ·y + A)^(γ/λ)

where B is an arbitrary constant (λ ≠ 0).

4.2.2. If β ≠ 0, equation (4.31) takes the form

(4.34)  w_y = (β·w³ − λ·w²)/(γ − β·w)

4.2.2a. If λ ≠ 0, under the substitution

(4.35)  w(y) = λ/(β·(Y(y) + 1))

equation (4.34) takes the form

[γ·Y + (γ − λ)]/Y · Y_y = λ²/β

and by integration we obtain

(4.36)  γ·Y + (γ − λ)·ln Y = (λ²/β)·y + A

where A is an arbitrary constant. Therefore θ(y) is the solution to the equation

(4.37)  θ_y/θ = λ/(β·(Y(y) + 1))

where Y(y) is given implicitly by (4.36). Therefore we find that

(4.38)  θ(y) = B·exp{ (λ/β) ∫_{y_0}^{y} ds/(Y(s) + 1) }

where B is an arbitrary constant.

4.2.2b. For λ = 0 (i.e. γ = (1/2)k²), equation (4.34) becomes


(4.39)  w_y = β·w³/(γ − β·w)

Under the substitution

w = 1/X(y)

the above equation (4.39) takes the form

X_y = − β/(γ·X − β)

which has the general solution

(4.40)  γ·X² − 2β·X + 2β·y = A

where A is a constant. Therefore θ(y) is the solution to the equation

(4.41)  θ_y/θ = 1/X(y)

where X(y) is given implicitly by (4.40). Hence, we find that

(4.42)  θ(y) = B·exp{ ∫_{y_0}^{y} ds/X(s) }

where B is a constant.
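The key reduction of this section, from the PDE (3.1) to the ODE (4.10), can be checked mechanically. The sketch below (sympy, added as an independent verification; θ is taken as a generic cubic with symbolic coefficients, since the reduction identity holds for arbitrary θ) confirms that under the invariants (4.5)-(4.6) the residual of (3.1) equals e^(2(γ−α)t)/γ² times the left-hand side of (4.10). The α = r case, equation (4.29), can be checked the same way.

```python
import sympy as sp

t, x, r, k = sp.symbols("t x r k", positive=True)
a, b, g, d = sp.symbols("alpha beta gamma delta", positive=True)  # alpha != r
c0, c1, c2, c3 = sp.symbols("c0:4")
y = sp.Symbol("y")

# generic cubic theta(y); the reduction identity holds for arbitrary theta
theta = c3 * y**3 + c2 * y**2 + c1 * y + c0

Y = sp.exp(-a * t) * x - b / (r - a) * sp.exp((r - a) * t)  # invariant (4.5)
u = (theta.subs(y, Y) * sp.exp(g * t) - d) / g              # invariant (4.6)

ux, ut, uxx = sp.diff(u, x), sp.diff(u, t), sp.diff(u, x, 2)
res = ut * uxx + r * x * ux * uxx - sp.Rational(1, 2) * k**2 * ux**2  # lhs of (3.1)

# left-hand side of the reduced ODE (4.10), evaluated at y = Y
ode = (g * theta * theta.diff(y, 2)
       + (r - a) * y * theta.diff(y) * theta.diff(y, 2)
       - sp.Rational(1, 2) * k**2 * theta.diff(y)**2)
claim = sp.exp(2 * (g - a) * t) / g**2 * ode.subs(y, Y)

print(sp.simplify(sp.expand(res - claim)))  # 0
```

In particular the e^((r−α)t) terms coming from u_t and from r x u_x cancel identically, which is why the reduced equation contains y alone.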

5. Conclusions and Discussion
We next list the four families of solutions we have found. Collecting the results of the previous section, we arrive at the following

Theorem 2. The HJB equation (3.1) admits the following four families of general solutions (I)-(IV).

Solution I


(4.43)  u(x, t) = B·e^(γt)·exp{ ∫_{y_0}^{y} ds/(s·V(s)) } + C

where V(y) is a function defined by

(4.44)  [V(y) − ε]^(1+ε) · [V(y) + ε]^(1−ε) = A/y²

and

(4.45)  y = e^(−αt)·x − (β/(r−α))·e^((r−α)t)   (α ≠ r)

A, B, C are arbitrary constants and β, γ are free parameters with γ ≠ 0, while ε is defined by μ = γ·ε², where μ = r − α and λ = γ − (1/2)k², the choice λ = μ fixing the value of α.

Solution II

(4.46)  u(x, t) = B·e^(γt)·(λ·y + A)^(γ/λ) + C

where

(4.47)  y = x·e^(−rt)

γ is a free parameter with γ ≠ 0, A, B, C are arbitrary constants and λ = γ − (1/2)k² with λ ≠ 0.

Solution III

(4.48)  u(x, t) = B·e^(γt)·exp{ (λ/β) ∫_{y_0}^{y} ds/(Y(s) + 1) } + C

where Y(y) is defined by

(4.49)  y = x·e^(−rt) − βt   (β ≠ 0)

and

(4.50)  γ·Y(y) + (γ − λ)·ln Y(y) = (λ²/β)·y + A   (λ ≠ 0)


A, B, C are arbitrary constants and γ, β are free parameters with γ ≠ 0, β ≠ 0, and λ = γ − (1/2)k² with λ ≠ 0.

Solution IV

(4.51)  u(x, t) = B·e^(γt)·exp{ ∫_{y_0}^{y} ds/X(s) } + C

where X(y) is defined by

(4.52)  y = x·e^(−rt) − βt   (β ≠ 0)

and

(4.53)  γ·X²(y) − 2β·X(y) + 2β·y = A

provided that γ = (1/2)k² (i.e. λ = 0). A, B, C are arbitrary constants and γ, β are free parameters with γ ≠ 0, β ≠ 0.

Only one of these solutions has been considered in the literature so far, namely solution (4.46)-(4.47). In fact, from this solution, under the choice A = C = 0 and B = 1 and an obvious adjustment (and renaming) of the parameters, we recover the solution (2.9), using the same boundary condition u(x, T) = U(x) ≡ (1/γ)x^γ.

The solutions (4.43)-(4.45), (4.48)-(4.50) and (4.51)-(4.53) appear for the first time in the literature. It remains to be seen whether the general solutions found in this article will have any practical interest in Finance and Stochastic Control.
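As a closing consistency check (added here as an illustration, assuming the sympy library), Solution II can be substituted back into (3.1); the sketch below confirms symbolically that (4.46)-(4.47) with λ = γ − k²/2 satisfy the equation for arbitrary constants A, B, C.

```python
import sympy as sp

t, x, r, k = sp.symbols("t x r k", positive=True)
g, A, B, C = sp.symbols("gamma A B C", positive=True)
lam = g - sp.Rational(1, 2) * k**2          # lambda = gamma - k^2/2

y = x * sp.exp(-r * t)                      # invariant (4.47)
u = B * sp.exp(g * t) * (lam * y + A)**(g / lam) + C   # Solution II, (4.46)

ux, ut, uxx = sp.diff(u, x), sp.diff(u, t), sp.diff(u, x, 2)
res = ut * uxx + r * x * ux * uxx - sp.Rational(1, 2) * k**2 * ux**2

print(res.equals(0))  # True
```

The cross terms proportional to r·y cancel between u_t·u_xx and r·x·u_x·u_xx, and the remaining factor vanishes precisely because γ − λ = k²/2, mirroring the hand computation in section 4.2.1.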

References
[1] W. H. Fleming and H. M. Soner: "Controlled Markov Processes and Viscosity Solutions", Springer, 1993
[2] B. Øksendal: "Stochastic Differential Equations", Springer, 2005
[3] N. Touzi: "Stochastic Optimal Control Problems, Viscosity Solutions, and Application to Finance", Lectures at Pisa, 2002
[4] H. M. Soner: "Stochastic Optimal Control in Finance", Lectures at Pisa, 2003
[5] L. C. Evans: "Partial Differential Equations", Graduate Studies in Mathematics, Vol. 19, AMS, 1997
[6] R. Korn and E. Korn: "Option Pricing and Portfolio Optimization", AMS, 2001
[7] R. Korn: "Optimal Portfolios: Stochastic Models for Optimal Investments and Risk Management in Continuous Time", World Scientific, 1997
[8] H. Kraft: "Optimal Portfolios with Stochastic Short Rates and Defaultable Assets", Springer, 2004
[9] H. Kraft: "Financial Mathematics I: Stochastic Calculus, Option Pricing, Portfolio Optimization", Lecture Notes, University of Kaiserslautern, August 2005
[10] R. C. Merton: "Continuous-Time Finance", Blackwell, 1990
[11] F. E. Benth and K. H. Karlsen: "A note on Merton's portfolio selection problem", Stochastic Analysis and Applications 23 (2005) 687-704
[12] E. N. Barron and R. Jensen: "A stochastic control approach to the pricing of options", Mathematics of Operations Research 15 (1990) 49-79
[13] M. H. A. Davis, V. G. Panas and T. Zariphopoulou: "European option pricing with transaction costs", SIAM Journal on Control and Optimization 31 (1993) 470-493
[14] A. L. Lewis: "Option Valuation under Stochastic Volatility", Finance Press, second printing, 2005
[15] S. Mudchanatongsuk, J. A. Primbs and W. Wong: "Optimal Pairs Trading", Stanford University preprint
[16] R. J. Elliott and P. E. Kopp: "Mathematics of Financial Markets", Springer, 2005
[17] I. Karatzas, J. P. Lehoczky, S. Sethi and S. E. Shreve: "Explicit solution of a general consumption/investment problem", Mathematics of Operations Research 11 (1986) 261-294
[18] I. Karatzas, J. P. Lehoczky and S. E. Shreve: "Existence and uniqueness of multi-agent equilibrium in a stochastic, dynamic consumption/investment model", Mathematics of Operations Research 15 (1990) 80-128
[19] T. Zariphopoulou: "Optimal investment and consumption models with non-linear stock dynamics", Mathematical Methods of Operations Research 50 (1999) 271-296
[20] C. Munk: "Dynamic Asset Allocation", Lecture Notes, University of Southern Denmark, March 2008
[21] T. R. Bielecki and S. R. Pliska: "Risk sensitive asset management with transaction costs", Finance and Stochastics 4 (2000) 1-33
[22] G. Dmitrasinovic-Vivodic, A. Lari-Lavassani, X. Li and A. Ware: "Dynamic Portfolio Selection under Capital-at-Risk", University of Calgary preprint
[23] R. Gerrard, S. Haberman and E. Vigna: "Optimal Investment Choices Post Retirement in a Defined Contribution Pension Scheme", City University, Cass Business School preprint
[24] J.-F. Boulier, E. Trussant and D. Florens: "A dynamic model for pension funds management", Proceedings of the 5th AFIR International Colloquium, 1995, pp. 361-384
[25] C. Hipp: "Stochastic Control with Application in Insurance", University of Karlsruhe report
[26] R. Bellman: "Dynamic Programming", Princeton University Press, 1957
[27] S. Lie: "Gesammelte Werke", Vols. 1-7, F. Engel and P. Heegaard (Eds.), Teubner, Leipzig, 1899
[28] S. Lie: "Vorlesungen über Differentialgleichungen mit bekannten infinitesimalen Transformationen", Teubner, Leipzig, 1912
[29] P. J. Olver: "Applications of Lie Groups to Differential Equations", Graduate Texts in Mathematics, Vol. 107, Springer-Verlag, New York, 1993
[30] H. Stephani: "Differential Equations: Their Solution Using Symmetries", Cambridge University Press, 1989
[31] G. Bluman and S. Kumei: "Symmetries and Differential Equations", Springer-Verlag, 1989
[32] L. V. Ovsiannikov: "Group Analysis of Differential Equations", Academic Press, New York, 1982
[33] N. H. Ibragimov: "Elementary Lie Group Analysis and Ordinary Differential Equations", Wiley, New York
[34] L. Dresner: "Applications of Lie's Theory of Ordinary and Partial Differential Equations", IOP Publishing, 1999
[35] R. L. Anderson and N. H. Ibragimov: "Lie-Bäcklund Transformations in Applications", SIAM, Philadelphia, 1979
[36] N. H. Ibragimov (ed.): "CRC Handbook of Lie Group Analysis of Differential Equations", Vol. 1 (1994), Vol. 2 (1995), Vol. 3 (1996), CRC Press, Boca Raton, FL
[37] W. Hereman: "Review of symbolic software for the computation of Lie symmetries of differential equations", Euromath Bulletin 2, No. 1, Fall 1993
[38] R. K. Gazizov and N. H. Ibragimov: "Lie symmetry analysis of differential equations in finance", Nonlinear Dynamics 17 (1998) 387-407
[39] S. M. Antoniou: "A General Solution to the Bensoussan-Crouhy-Galai Equation appearing in Finance", SKEMSYS report
[40] J. Goard: "New solutions to the bond-pricing equation via Lie's classical method", Mathematical and Computer Modelling 32 (2000) 299-313
[41] L. A. Bordag: "Symmetry reductions of a nonlinear model of financial derivatives", arXiv:math.AP/0604207
[42] L. A. Bordag and A. Y. Chmakova: "Explicit solutions for a nonlinear model of financial derivatives", International Journal of Theoretical and Applied Finance 10 (2007) 1-21
[43] L. A. Bordag and R. Frey: "Nonlinear option pricing models for illiquid markets: scaling properties and explicit solutions", arXiv:0708.1568v1 [math.AP], 11 Aug 2007
[44] G. Silberberg: "Discrete Symmetries of the Black-Scholes Equation", Proceedings of the 10th International Conference in Modern Group Analysis


```