Lecture 8: Stochastic Differential Equations

Readings

Recommended:

• Pavliotis (2014) 3.2-3.5
• Oksendal (2005) Ch. 5

Optional:

• Gardiner (2009) 4.3-4.5
• Oksendal (2005) 7.1, 7.2 (on Markov property)
• Koralov and Sinai (2010) 21.4 (on Markov property)

We'd like to understand solutions to the following type of equation, called a Stochastic Differential Equation (SDE):

dX_t = b(X_t, t) dt + σ(X_t, t) dW_t .   (1)

Recall that (1) is short-hand for an integral equation

X_t = X_0 + ∫_0^t b(X_s, s) ds + ∫_0^t σ(X_s, s) dW_s .   (2)

In the physics literature, you will often see (1) written as

dx/dt = b(x, t) + σ(x, t) η(t) ,

where η(t) is a white noise: a Gaussian process with mean 0 and covariance function E η(s)η(t) = δ(t - s).

Each term in (1) has a different interpretation.

• The term b(X_t, t) dt is called the drift term. It describes the deterministic part of the equation. When this is the only term, we obtain a canonical ODE.

• The term σ(X_t, t) dW_t is called the diffusion term. It describes random motion proportional to a Brownian motion. Over small times, this term causes the probability to spread out diffusively, with a diffusivity locally proportional to σ².

If the diffusion term is constant, i.e. σ(x, t) ≡ σ ∈ R, then the noise is said to be additive. If the diffusion term depends on x, i.e. ∂σ(x, t)/∂x ≠ 0 in (1), the noise is said to be multiplicative. We will see that equations with multiplicative noise have to be treated more carefully than equations with additive noise.
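To make the distinction concrete, here is a minimal numerical sketch (added for illustration; the Euler-Maruyama scheme, coefficients, and function names below are not from the lecture) that simulates one path of (1), once with additive and once with multiplicative noise:

import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, seed=0):
    """Simulate one path of dX_t = b(X_t,t) dt + sigma(X_t,t) dW_t
    with the Euler-Maruyama scheme on a uniform time grid."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment, N(0, dt)
        x[k + 1] = x[k] + b(x[k], t[k]) * dt + sigma(x[k], t[k]) * dW
    return t, x

# Additive noise: sigma is a constant; multiplicative noise: sigma depends on x.
t, x_add  = euler_maruyama(lambda x, t: -x, lambda x, t: 0.5,     x0=1.0, T=5.0, n_steps=5000)
t, x_mult = euler_maruyama(lambda x, t: -x, lambda x, t: 0.5 * x, x0=1.0, T=5.0, n_steps=5000)

Both runs use the same drift b(x, t) = -x; only the form of σ differs.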

We learned how to define the integrals in the expressions above last class. In this one we'll look at properties of the solutions themselves. We will ask: when do solutions exist? Are they unique? And how can we actually solve them, and extract useful information?


8.1 Existence and uniqueness

Definition. A stochastic process X = (X_t)_{t≥0} is a strong solution to the SDE (1) for 0 ≤ t ≤ T if X is continuous with probability 1, X is adapted¹ (to W_t), b(X_t, t) ∈ L¹(0, T), σ(X_t, t) ∈ L²(0, T), and Equation (2) holds with probability 1 for all 0 ≤ t ≤ T.

Definition. A strong solution X to an SDE of the form (1) is called a diffusion process.

Remark. To be a diffusion process, it is important that the coefficients of (1) depend only on (X_t, t); they can't be general adapted functions f(ω, t).

Theorem. Given equation (1), suppose b ∈ R^n, σ ∈ R^{n×m} satisfy global Lipschitz and linear growth conditions:

|b(x, t) - b(y, t)| + |σ(x, t) - σ(y, t)| ≤ K|x - y| ,
|b(x, t)| + |σ(x, t)| ≤ K(1 + |x|) ,

for all x, y ∈ R^n, t ∈ [0, T], where K > 0 is a constant. Assume the initial value X_0 = ξ is a random variable with E ξ² < ∞ and which is independent of (W_t)_{t≥0}. Then (1) has a unique strong solution X.

Remark. "Unique" means that if X1, X2 are two strong solutions, then P(X1(t, ) = X2(t, ) for all t) = 1. That is, the two solutions are equal everywhere with probability 1. This is different from the statement that X1, X2 are versions of each other ? you should think about how.

This theorem has a lot in common with the analogous theorems on existence and uniqueness of solutions to ODEs. Counterexamples that show the necessity of each condition of the theorem for ODEs can also be used for SDEs.

Example. To construct an equation whose solution is not unique, we drop the condition of Lipschitz continuity. Consider the ODE dX_t = 3X_t^{2/3} dt, which has solutions X_t = 0 for t ≤ a, X_t = (t - a)^3 for t > a, for any a > 0. However, b(x) = 3x^{2/3} is not Lipschitz continuous at 0. For an example involving a Brownian motion, consider

dX_t = 3X_t^{1/3} dt + 3X_t^{2/3} dW_t ,   X_0 = 0.

This has (at least) two solutions: X_t = 0 and X_t = W_t^3. But, again, the coefficients of the SDE are not Lipschitz continuous.
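As a quick check of the second solution (a standard Itô-formula computation added here), apply Itô's formula to X_t = W_t^3:

d(W_t^3) = 3W_t² dW_t + 3W_t dt = 3(W_t^3)^{2/3} dW_t + 3(W_t^3)^{1/3} dt = 3X_t^{2/3} dW_t + 3X_t^{1/3} dt ,

using real cube roots, and X_0 = W_0^3 = 0, so X_t = W_t^3 indeed solves the SDE.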

Example. To construct an equation which has no global solution, we drop the linear growth conditions. Consider

dX_t = X_t² dt ,   X_0 = x_0.

The solution is X_t = 1/(1/x_0 - t), which blows up at t = 1/x_0.
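As a check (added here), differentiating X_t = (1/x_0 - t)^{-1} gives dX_t/dt = (1/x_0 - t)^{-2} = X_t², and for x_0 > 0 (an assumption on the sign of x_0, which the notes leave implicit) the solution escapes to +∞ as t ↑ 1/x_0.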

Proof (Uniqueness, 1d). (from Evans (2013), section 5.B.3) Let X_t, X̂_t ∈ V([0, T]) be two strong solutions to (1). Then

X_t - X̂_t = ∫_0^t [b(X_s, s) - b(X̂_s, s)] ds + ∫_0^t [σ(X_s, s) - σ(X̂_s, s)] dW_s .

¹Actually we ask for something slightly stronger, namely that X be progressively measurable with respect to F, the filtration generated by (W_t)_{t≥0}.


Square each side, use (a + b)² ≤ 2a² + 2b², and take expectations to get

E|X_t - X̂_t|² ≤ 2E |∫_0^t [b(X_s, s) - b(X̂_s, s)] ds|² + 2E |∫_0^t [σ(X_s, s) - σ(X̂_s, s)] dW_s|² .

We estimate the first term on the right-hand side using the Cauchy-Schwarz inequality, which implies that (∫_0^t f ds)² ≤ t ∫_0^t |f|² ds. We then use the Lipschitz continuity of b. The result is

E |∫_0^t [b(X_s, s) - b(X̂_s, s)] ds|² ≤ T E ∫_0^t |b(X_s, s) - b(X̂_s, s)|² ds ≤ K²T ∫_0^t E|X_s - X̂_s|² ds .

Now we estimate the second term using the Itô isometry and the Lipschitz continuity of σ:

E |∫_0^t [σ(X_s, s) - σ(X̂_s, s)] dW_s|² = E ∫_0^t |σ(X_s, s) - σ(X̂_s, s)|² ds ≤ K² ∫_0^t E|X_s - X̂_s|² ds .

Putting these estimates together shows that

E|X_t - X̂_t|² ≤ C ∫_0^t E|X_s - X̂_s|² ds

for some constant C, for 0 ≤ t ≤ T. If we let φ(t) ≡ E|X_t - X̂_t|², then the inequality is

φ(t) ≤ C ∫_0^t φ(s) ds

for all 0 ≤ t ≤ T.

Now we can use Gronwall's Inequality, which says that if we are given a function f and nonnegative numbers a, b ≥ 0, then

f(t) ≤ a + b ∫_0^t f(s) ds   ⟹   f(t) ≤ a e^{bt} .

The proof is given in the appendix. Applying Gronwall's Inequality with f(t) = φ(t) = E|X_t - X̂_t|² and a = 0, b = C shows that E|X_t - X̂_t|² = 0 for all 0 ≤ t ≤ T.

Therefore for each fixed t ∈ [0, T] we have that X_t = X̂_t a.s. We have to show this holds for all t simultaneously, i.e. the whole trajectory is equal, except on a set of measure 0. We can argue that X_r = X̂_r a.s. for all rational 0 ≤ r ≤ T, i.e. P(X_t = X̂_t ∀ t ∈ Q ∩ [0, T]) = 1. This is because we can extend the equality to a countable set of t-values, say {t_1, t_2, . . .}: for each t_i, the "bad" ω-values {ω : X_{t_i}(ω) ≠ X̂_{t_i}(ω)} form a measure-zero set, and a countable union of measure-zero sets is measure zero. By assumption X, X̂ have continuous sample paths almost surely, so we can extend the equality to all values of t using the fact that the rationals form a dense set in R, so P(X_t = X̂_t ∀ t ∈ [0, T]) = 1.

Proof (Existence, for a simpler equation). (Based on Evans (2013), section 5.B.1) We will show existence for the simpler equation

dX_t = b(X_t) dt + dW_t ,   X_0 = x ∈ R ,   (3)

where b ∈ C¹ with |b′| ≤ K for some constant K. The proof in the more general case uses similar ideas; see e.g. Evans (2013) section 5.B.3 or Oksendal (2005) section 5.2.


The proof is based on Picard iteration, as for the typical ODE existence proof. Let X_t^0 = x, and define

X_t^{n+1} = X_0 + ∫_0^t b(X_s^n) ds + W_t ,   n = 0, 1, . . . .

Define

D^n(t) ≡ max_{0≤s≤t} |X_s^{n+1} - X_s^n| .

Notice that for a given, continuous sample path of Brownian motion we have

D^0(t) = max_{0≤s≤t} |∫_0^s b(x) dr + W_s| ≤ C

for all 0 ≤ t ≤ T, where the constant C depends on the sample path via ω.

We claim that

D^n(t) ≤ C K^n t^n / n! .

We show this by induction. The base case n = 0 is true. Assume it holds for n - 1 and calculate

D^n(t) = max_{0≤s≤t} |∫_0^s [b(X_r^n) - b(X_r^{n-1})] dr|
       ≤ K ∫_0^t D^{n-1}(s) ds
       ≤ K C ∫_0^t K^{n-1} s^{n-1} / (n-1)! ds     (by the induction assumption)
       = C K^n t^n / n! .

Therefore for m ≥ n we have

max_{0≤t≤T} |X_t^m - X_t^n| ≤ C ∑_{k=n}^∞ K^k T^k / k!  →  0   as n → ∞.

Therefore with probability 1, X^n converges uniformly for t ∈ [0, T] to a limit process X. One can check that X is continuous, adapted and solves (3). (See Varadhan (2007), p.90 for a more explicit construction of the uniform convergence argument.)
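As an aside (not part of the notes), the Picard construction above is easy to reproduce numerically: fix one sampled Brownian path on a grid, iterate the map X_t^{n+1} = x + ∫_0^t b(X_s^n) ds + W_t, and watch the sup-norm differences between successive iterates shrink at the factorial rate the bound predicts. The drift, step count, and names below are illustrative.

import numpy as np

def picard_iterates(b, x0, T, n_steps, n_iter, seed=0):
    """Picard iteration for dX = b(X) dt + dW along one fixed Brownian path:
    X^{n+1}_t = x0 + int_0^t b(X^n_s) ds + W_t, discretized on a uniform grid."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    W = np.concatenate([[0.0], np.cumsum(dW)])        # W_0 = 0
    X = np.full(n_steps + 1, float(x0))               # X^0_t = x0
    iterates = [X.copy()]
    for _ in range(n_iter):
        integral = np.concatenate([[0.0], np.cumsum(b(X[:-1]) * dt)])  # left-endpoint rule
        X = x0 + integral + W
        iterates.append(X.copy())
    return iterates

# Example: b(x) = -x is Lipschitz with K = 1; successive differences D^n shrink like K^n T^n / n!.
its = picard_iterates(lambda x: -x, x0=1.0, T=1.0, n_steps=1000, n_iter=8)
D = [np.max(np.abs(its[n + 1] - its[n])) for n in range(len(its) - 1)]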

8.2 Examples of SDEs and their solutions

Let's look at some specific SDEs and their solutions. First we recall some useful properties of the Itô integral. We showed last lecture that

• The non-anticipating property: E ∫_0^t f(s, ω) dW_s = 0.

• The Itô isometry: E (∫_0^t f(s, ω) dW_s)² = E ∫_0^t f²(s, ω) ds.


Another useful property is the following: for adapted processes g, h,

E [∫_0^t g(s, ω) dW_s ∫_0^t h(s, ω) dW_s] = ∫_0^t E[g(s, ω) h(s, ω)] ds .   (4)

To prove this property, apply Itô's isometry with f = h + g.
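A quick Monte Carlo sanity check of (4) (an illustrative sketch added here, not part of the notes): approximate both Itô integrals by left-endpoint Riemann sums over many independent Brownian paths. With the particular choices g(s, ω) = W_s and h(s, ω) = s W_s, both sides should be close to ∫_0^1 s² ds = 1/3, up to sampling and discretization error.

import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 500, 5000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))                 # Brownian increments
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

g = W[:, :-1]              # g(s, w) = W_s, evaluated at left endpoints (adapted)
h = t[:-1] * W[:, :-1]     # h(s, w) = s * W_s

lhs = np.mean(np.sum(g * dW, axis=1) * np.sum(h * dW, axis=1))        # E[ int g dW * int h dW ]
rhs = np.sum(np.mean(g * h, axis=0)) * dt                             # int_0^T E[g h] ds
# Both are approximately 1/3 here, since E[s W_s^2] = s^2 and int_0^1 s^2 ds = 1/3.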

Formally, (4), as well as the Itô isometry, can be derived from the substitutions E dW_u = E dW_v = 0, E dW_u dW_v = δ(u - v) du dv, and the fact that g(u), h(v) are adapted, so they are each independent of dW_u, dW_v respectively. That is, write

E [∫_0^t g(u) dW_u ∫_0^t h(v) dW_v] = ∫_0^t ∫_0^t E[g(u) h(v) dW_u dW_v] .

Now decompose the integrand into different pieces, depending on the relationship between u and v.
