Stochastic Optimization in Finance




Krastyu Gumnerov

Institute of Information Technologies – BAS

gumnerov@iinf.bas.bg

Introduction

Financial activity, like many other activities, has two characteristics:

1) Decisions are made under uncertainty, i.e., they depend on future values of parameters unknown at the moment of decision-making; with respect to the information available at that moment, these parameters are random quantities.

2) The decisions are optimal with respect to some objective.

Thus it is natural for a system for financial planning (portfolio management) to have two modules:

1) a module describing the random quantities of the model and their evolution (a scenario generator);

2) an optimization module for a given objective function and a given evolution of the variables.

This review examines methods for building the second module. Its purpose is to give a brief description of the main approaches to dynamic stochastic optimization and to show some of their applications in finance. The author hopes in this way to attract the attention of Bulgarian experts in financial mathematics to these approaches, which have so far been little known in Bulgaria.
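As a minimal sketch of the first module, a scenario generator for a single asset price can look as follows. The geometric Brownian motion model, the Euler-Maruyama discretization and all numerical parameters (drift, volatility, horizon) are assumptions made only for this illustration:

```python
import math
import random

def gbm_scenario(x0, mu, sigma, horizon, steps, rng):
    """Simulate one path of dX = mu*X dt + sigma*X dB by the Euler-Maruyama scheme."""
    dt = horizon / steps
    path = [x0]
    for _ in range(steps):
        db = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        path.append(path[-1] * (1.0 + mu * dt + sigma * db))
    return path

rng = random.Random(0)   # fixed seed: reproducible scenarios
scenarios = [gbm_scenario(100.0, 0.05, 0.2, 1.0, 252, rng) for _ in range(3)]
```

An optimization module would consume a whole bundle of such paths rather than a single one.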

I. General Statement of the Dynamic Stochastic Optimization Problem (Stochastic Control)

To understand stochastic control it is useful to keep in mind the analogy with some simpler problems:

1) The elementary problem of finding a constrained extremum of a function;

2) The problems of the calculus of variations and the variational approach in classical mechanics and mathematical physics;

3) The problems of deterministic optimal control.

In general, the solution of these problems, including stochastic control problems, reduces to optimality conditions, most often in the form of equations. These are the equations of the system under consideration: they describe the evolution of the parameters defining the system. Characteristic examples are the equations for the stationary points of a function of numerical variables, the Kuhn-Tucker conditions, the Euler-Lagrange equations of the calculus of variations, the equations of mathematical physics, the Hamilton-Jacobi equation, and the Hamilton-Jacobi-Bellman equation.

The advantage of this approach is that it sometimes allows one to see the patterns of the system under consideration and sometimes permits a qualitative investigation of the solutions of the problem. Stochastic programming actually provides methods for the numerical solution of these equations, but other methods are also possible in principle, and in some special cases solutions can be obtained in explicit form. An example is the Merton problem.

In what follows we give a brief presentation of a basic theorem of stochastic control, as well as a simplified variant of the Merton problem.

Assume the system in question is described in a probability space (Ω, F_t, P) by an Itô process of the form

(1) dX_t = dX_t^u = b(t, X_t, u_t)dt + σ(t, X_t, u_t)dB_t,

where X_t ∈ R^n, b: R × R^n × U → R^n, σ: R × R^n × U → R^(n×m), B_t is an m-dimensional Brownian motion and u_t ∈ U ⊂ R^k is a parameter whose values in a given set U we can choose at each moment t in order to control the system. Thus u_t = u(t, ω) is also a stochastic process, adapted to the filtration F_t^(m) generated by B_t. It will be called a "control".

Let {X_h^(s,x)}_(h≥s) be the solution of (1) with X_t|_(t=s) = x, i.e.

X_h^(s,x) = x + ∫_s^h b(r, X_r, u_r)dr + ∫_s^h σ(r, X_r, u_r)dB_r.

Let L: R × R^n × U → R and K: R × R^n → R be given continuous functions, let G ⊂ R × R^n be a domain and let τ = τ^(s,x) be the first exit time after s from G for the process {X_h^(s,x)}_(h≥s), i.e.

τ^(s,x) = inf{h > s: (h, X_h^(s,x)) ∉ G}.

We define the quantity "performance"

(2) J^u(s, x) = E^(s,x)[∫_s^τ L(r, X_r, u_r)dr + K(τ, X_τ)].

To simplify the notation we introduce

y = (s, x), Y_t = (t, X_t^(s,x)) for t ≥ s.

Then (1) becomes

dY_t = b(Y_t, u_t)dt + σ(Y_t, u_t)dB_t.

The problem is, for each y ∈ G, to find a control u* = u*(t, ω) = u*(y, t, ω) such that

J^(u*)(y) = sup_u J^u(y),

where the supremum is taken over all controls u = u(t, ω). The function Φ(y) = J^(u*)(y) is called the "optimal performance".
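For a fixed control, the performance J^u(y) can always be estimated by direct Monte Carlo simulation of (1). The sketch below does this for a hypothetical one-dimensional example with b = u, σ = 1, L(y, v) = -v², K(y) = -x² and G bounded only by a time horizon T; all of these concrete choices are assumptions made for the illustration:

```python
import math
import random

def performance(u, s, x, T, steps, n_paths, seed=0):
    """Monte Carlo estimate of J^u(s,x) = E[ int_s^T L dt + K(X_T) ]
    for dX = u dt + dB, with L = -u**2 and K(x) = -x**2 (constant control u)."""
    rng = random.Random(seed)
    dt = (T - s) / steps
    total = 0.0
    for _ in range(n_paths):
        xt, running = x, 0.0
        for _ in range(steps):
            running += -u * u * dt                        # accumulate the integral of L
            xt += u * dt + rng.gauss(0.0, math.sqrt(dt))  # Euler-Maruyama step
        total += running - xt * xt                        # add the terminal term K(X_T)
    return total / n_paths

j0 = performance(0.0, 0.0, 0.0, 1.0, 50, 2000)  # no control effort
j1 = performance(1.0, 0.0, 0.0, 1.0, 50, 2000)  # constant push u = 1
```

Comparing such estimates over a family of controls gives a crude but model-free way to rank candidate strategies; here the passive control is better, since pushing costs effort and drives X away from 0.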

The optimality condition is given by equations satisfied by the function Φ(y), formulated in the theorem that follows.

For v ∈ U and g ∈ C_0^2(R × R^n) we define

(D^v g)(y) = ∂g/∂s + Σ_i b_i(y, v) ∂g/∂x_i + (1/2) Σ_(i,j) a_ij(y, v) ∂²g/∂x_i∂x_j,

where a_ij = (σσ^T)_ij.

For each choice of the function u: R × R^n → R^k the operator A, defined by

(Ag)(y) = (D^(u(y))g)(y), g ∈ C_0^2(R × R^n),

is the infinitesimal generator of the process Y_t which is the solution of the equation

dY_t = b(Y_t, u(Y_t))dt + σ(Y_t, u(Y_t))dB_t.
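The defining property of the infinitesimal generator, (Ag)(y) = lim_(h→0) (E^y[g(Y_h)] - g(y))/h, can be checked numerically. In the deterministic case σ = 0 the expectation drops out and (Ag)(y) reduces to the directional derivative b(y)·∇g(y); the sketch below verifies this for the assumed example b(y) = -y, g(y) = y², for which (Ag)(y) = -2y²:

```python
def b(y):
    return -y      # assumed drift

def g(y):
    return y * y   # assumed test function

def generator_estimate(y, h):
    """One Euler step of dY = b(Y)dt (sigma = 0), then the difference quotient."""
    y_next = y + h * b(y)
    return (g(y_next) - g(y)) / h

approx = generator_estimate(1.0, 1e-4)
exact = -2.0 * 1.0 ** 2   # (Ag)(y) = b(y) * g'(y) = -y * 2y at y = 1
```

For small h the difference quotient agrees with the analytic value up to O(h).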

Theorem 1 ([30]). Let the function Φ be bounded and belong to C^2(G) ∩ C(Ḡ); let τ < ∞ a.s. for each y ∈ G and let there exist an optimal control u*. Then

(3) sup_(v∈U) {L(y, v) + (D^v Φ)(y)} = 0, y ∈ G,

and

Φ(y) = K(y), y ∈ ∂G.

The supremum in (3) is attained at v = u*(y), where u* is an optimal control, i.e.

L(y, u*(y)) + (D^(u*(y))Φ)(y) = 0, y ∈ G.

Equation (3) is called the Hamilton-Jacobi-Bellman (HJB) equation.

Example 1. Portfolio optimization ([30]).

There are given a risky asset p1 and a riskless asset p2, with price processes satisfying the equations

(4) dp_t^1 = p_t^1(μ dt + σ dB_t),

(5) dp_t^2 = ρ p_t^2 dt.

At the moment t let X_t be the investor's wealth. He divides it into two parts: u_t X_t and (1 - u_t)X_t, 0 ≤ u_t < 1. With the part u_t X_t he buys the asset p1, and with the part (1 - u_t)X_t the asset p2. In this way he composes a portfolio which contains the amount u_t X_t / p_t^1 of the asset p1 and the amount (1 - u_t)X_t / p_t^2 of the asset p2. The increase dX_t of the portfolio value will be

dX_t = (u_t X_t / p_t^1)dp_t^1 + ((1 - u_t)X_t / p_t^2)dp_t^2 = X_t[(ρ + (μ - ρ)u_t)dt + σ u_t dB_t].

At the initial moment s < T the investor's wealth is given: X_s = x. Let the performance N be an increasing concave function of the wealth, for example

N(x) = x^r, 0 < r < 1.

The investor has an investment horizon T and, trading without loans, wants to maximize the performance at the moment T, more exactly to maximize the quantity

J^u(s, x) = E^(s,x)[N(X_τ^u)],

where E^(s,x) is the expectation with respect to the probability law of the process which at the moment s has the value x; τ is the first exit time from the domain G = {(t, x): t < T, x > 0}.

The problem is to find a function Φ(s, x) and a stochastic (Markovian) process u_t^*, 0 ≤ u_t^* < 1, satisfying the conditions Φ(s, x) = sup{J^u(s, x): u a Markovian process, 0 ≤ u < 1} and Φ(s, x) = J^(u*)(s, x).

To solve this problem we compose the Hamilton-Jacobi-Bellman (HJB) equation. Writing Φ_s, Φ_x, Φ_xx for the partial derivatives of Φ, the operator D^v is

(D^v Φ)(s, x) = Φ_s + x(ρ + (μ - ρ)v)Φ_x + (1/2)σ²v²x²Φ_xx.

Since here L = 0, the HJB equation is

(6) sup_(0≤v<1) {Φ_s + x(ρ + (μ - ρ)v)Φ_x + (1/2)σ²v²x²Φ_xx} = 0, with Φ(T, x) = N(x).

From this equation, for each (t, x) we find v = u(t, x) such that the function

(7) η(v) = Φ_s + x(ρ + (μ - ρ)v)Φ_x + (1/2)σ²v²x²Φ_xx

has a maximum. The function η(v) is a polynomial of second degree in v, hence if Φ_xx < 0 it attains its maximum at

(8) v = u(t, x) = -(μ - ρ)Φ_x / (σ²xΦ_xx).
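Since (7) is quadratic in v, its maximizer can be confirmed by a brute-force grid search. In the sketch below all coefficients (μ, ρ, σ, and the values of x, Φ_x > 0, Φ_xx < 0 at one point) are hypothetical numbers chosen only for the check:

```python
mu, rho, sigma = 0.06, 0.03, 0.3     # hypothetical market parameters
x, phi_x, phi_xx = 1.0, 1.0, -2.0    # hypothetical point with Phi_xx < 0

def eta(v):
    """The quadratic (7), dropping the v-independent terms Phi_s + rho*x*Phi_x."""
    return x * (mu - rho) * v * phi_x + 0.5 * sigma ** 2 * v ** 2 * x ** 2 * phi_xx

v_formula = -(mu - rho) * phi_x / (sigma ** 2 * x * phi_xx)   # the closed form (8)
v_grid = max((i / 100000 for i in range(100000)), key=eta)    # grid search over [0, 1)
```

The grid argmax agrees with the closed form up to the grid resolution, since the v-independent terms do not shift the location of the maximum.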

Substituting (8) in (7) and (6) we obtain the following nonlinear boundary problem for Φ(t, x):

(9) Φ_s + ρxΦ_x - (μ - ρ)²Φ_x² / (2σ²Φ_xx) = 0 in G,

Φ(T, x) = N(x).

Let N(x) = x^r, 0 < r < 1, and let us look for Φ(t, x) in the form

Φ(t, x) = f(t)x^r.

Substituting in (9) we get f'(t) + λf(t) = 0, f(T) = 1, with λ = ρr + (μ - ρ)²r / (2σ²(1 - r)), hence

(10) Φ(t, x) = e^(λ(T - t)) x^r.

Now from (8) and (10) we obtain

(11) u*(t, x) = (μ - ρ) / (σ²(1 - r)).

If 0 ≤ (μ - ρ)/(σ²(1 - r)) < 1, then (11) is the solution of the problem (u* is in fact a constant).

This result means that in practical portfolio management the investor invests his capital at the initial moment in the proportion u* : (1 - u*) between the two assets and does not change this proportion up to the horizon T.

If u*(t, x) depended on t and x, the investor would rebalance the portfolio at each moment t (in practice at discrete moments): at the moment t + dt he observes at the market the increases dp^1 and dp^2 of the two asset prices, calculates the increase dX_t of his wealth and composes a portfolio of the two assets in the proportions u_(t+dt)^*, 1 - u_(t+dt)^*, where

u_(t+dt)^* = u*(t + dt, X_t + dX_t) = u*(t + dt, X_(t+dt)).
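As a numerical sketch of (10) and (11): with the hypothetical parameters μ = 0.06, ρ = 0.03, σ = 0.3 and utility exponent r = 0.5, the constant optimal fraction and the exponent λ can be computed and checked against the constraint 0 ≤ u* < 1:

```python
def merton_fraction(mu, rho, sigma, r):
    """Constant optimal risky fraction (11): u* = (mu - rho) / (sigma^2 * (1 - r))."""
    return (mu - rho) / (sigma ** 2 * (1.0 - r))

mu, rho, sigma, r = 0.06, 0.03, 0.3, 0.5   # hypothetical parameters
u_star = merton_fraction(mu, rho, sigma, r)
lam = rho * r + (mu - rho) ** 2 * r / (2 * sigma ** 2 * (1 - r))  # exponent in (10)
assert 0.0 <= u_star < 1.0   # the regime in which the formula applies
```

Note that u* grows with the excess return μ - ρ and shrinks with the variance σ² and with risk aversion 1 - r, so the no-loan constraint can easily be violated for other parameter choices.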

Example 2. The Merton problem ([14], [23]).

At the market N + 1 assets p^0, p^1, …, p^N are traded, with the following price processes:

(12) dp_t^0 = ρ p_t^0 dt, dp_t^i = p_t^i(μ_i dt + Σ_j σ_ij dB_t^j), i = 1, …, N.

The investor has a horizon T and at an arbitrary moment t ∈ [0, T] he possesses a portfolio of the assets p^0, p^1, …, p^N in the quantities θ_t^0, θ_t^1, …, θ_t^N respectively, consuming at a rate C_t ≥ 0, where θ_t^0, θ_t^1, …, θ_t^N, C_t are stochastic processes. At the initial moment his wealth is X_0 ≥ 0 and he trades and consumes without external income. This means that his wealth X_t at the moment t is

X_t = Σ_i θ_t^i p_t^i

and it follows that

(13) dX_t = Σ_i θ_t^i dp_t^i - C_t dt.

This is the "budget equation".

It is convenient, instead of the processes θ_t^i, to introduce the processes

a_t^i = θ_t^i p_t^i / X_t, i = 0, 1, …, N,

which represent the part of the wealth invested in the asset p^i, i.e. the proportions in which the wealth is distributed among the assets.

Substituting (12) in (13), the budget equation takes the form

dX_t = X_t[ρ + Σ_i a_t^i(μ_i - ρ)]dt - C_t dt + X_t Σ_(i,j) a_t^i σ_ij dB_t^j,

i.e., with a_t = (a_t^1, …, a_t^N), μ = (μ_1, …, μ_N) and 1 = (1, …, 1),

(14) dX_t = X_t[ρ + a_t·(μ - ρ1)]dt - C_t dt + X_t a_t^T σ dB_t.
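A quick sanity check of the budget equation: with the noise switched off (σ = 0) and constant proportions a^i and consumption rate C, it reduces to the linear ODE dX = (kX - C)dt with k = ρ + Σ_i a^i(μ_i - ρ), whose solution is X(t) = (X_0 - C/k)e^(kt) + C/k. The sketch below integrates the ODE by the Euler scheme and compares with this closed form (all numbers are hypothetical):

```python
import math

rho, mu, a = 0.03, [0.06, 0.05], [0.4, 0.2]   # hypothetical market and strategy
C, x0, T, steps = 2.0, 100.0, 1.0, 100000     # consumption rate, wealth, horizon

k = rho + sum(ai * (mi - rho) for ai, mi in zip(a, mu))  # effective drift

# Euler integration of dX = (k*X - C) dt, i.e. the budget equation with sigma = 0
x, dt = x0, T / steps
for _ in range(steps):
    x += (k * x - C) * dt

x_exact = (x0 - C / k) * math.exp(k * T) + C / k          # closed-form solution
```

With these numbers k·X_0 exceeds the consumption rate C, so the wealth grows; the Euler result matches the closed form to within the discretization error.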

The investor's "performance" is defined by the consumption done over the period [0, τ] and by the wealth possessed at the moment τ. More exactly, it is given by the formula

J^(c,a)(t, x) = E^(t,x)[∫_t^τ L(s, C_s)ds + K(X_τ)],

where E^(t,x) is the expectation with respect to the probability law of the process X which begins at the moment t with the value x.

The investor's goal is, by trading and consuming, to maximize J^(c,a)(t, x). Let

Φ(t, x) = sup_(c,a) J^(c,a)(t, x) = J^(c*,a*)(t, x),

where c*, a* are the optimal consumption and investment strategies. The function Φ(t, x) satisfies the HJB equation and the boundary condition

Φ(t, x) = K(x) for (t, x) ∈ ∂G.

Let us compose the HJB equation. According to (14), the infinitesimal generator of the process is

(D^(c,a)Φ)(t, x) = Φ_t + [x(ρ + a·(μ - ρ1)) - c]Φ_x + (1/2)x² a^T σσ^T a Φ_xx.

The HJB equation is

sup_(c≥0, a) {L(t, c) + Φ_t + [x(ρ + a·(μ - ρ1)) - c]Φ_x + (1/2)x² a^T σσ^T a Φ_xx} = 0.

To solve the problem we follow this procedure: having fixed t, x arbitrarily, we calculate the supremum of the function

η(c, a) = L(t, c) + Φ_t + [x(ρ + a·(μ - ρ1)) - c]Φ_x + (1/2)x² a^T σσ^T a Φ_xx.

Setting the derivatives of η(c, a) with respect to c and a equal to zero, we obtain the condition L_c(t, c) = Φ_x and the linear system σσ^T a = -(μ - ρ1)Φ_x / (xΦ_xx), from which we obtain c and a as functions of Φ_x and Φ_xx: c = c(Φ_x), a = a(Φ_x, Φ_xx). Substituting in η(c, a) we obtain the equation

(15) η(c(Φ_x), a(Φ_x, Φ_xx)) = 0,

which is a nonlinear partial differential equation for Φ.
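The linear system for a, which arises from setting the a-derivative of η to zero, can be solved explicitly in small dimensions. The sketch below does this for a hypothetical two-asset case (N = 2, diagonal σ, Φ_xx < 0 at one point; all numbers are assumptions for the illustration) by Cramer's rule:

```python
def optimal_a(sigma, mu, rho, x, phi_x, phi_xx):
    """Solve (sigma sigma^T) a = -(mu - rho*1) * phi_x / (x * phi_xx)
    for N = 2 by Cramer's rule."""
    # covariance matrix m = sigma sigma^T
    m = [[sum(sigma[i][k] * sigma[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    c = -phi_x / (x * phi_xx)
    rhs = [c * (mu[0] - rho), c * (mu[1] - rho)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    a0 = (rhs[0] * m[1][1] - m[0][1] * rhs[1]) / det
    a1 = (m[0][0] * rhs[1] - rhs[0] * m[1][0]) / det
    return [a0, a1]

# hypothetical data: two independent assets, Phi_x = 1, Phi_xx = -2 at x = 1
a = optimal_a([[0.2, 0.0], [0.0, 0.3]], [0.06, 0.05], 0.03, 1.0, 1.0, -2.0)
```

Each component has the familiar structure: excess return divided by variance, scaled by the curvature ratio -Φ_x/(xΦ_xx).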

In the special case when L = 0 and K(x) = x^s, 0 < s …