Essentials of Stochastic Processes


Rick Durrett

[Cover figure: option values (0 to 70) at expiry and on 10-May, 10-Jun, and 10-Sep, plotted against strike prices from 500 to 700.]

Almost final version of the 2nd edition, December 2011. Copyright 2011, all rights reserved.


Preface

Between the first undergraduate course in probability and the first graduate course that uses measure theory, there are a number of courses that teach Stochastic Processes to students with many different interests and with varying degrees of mathematical sophistication. To allow readers (and instructors) to choose their own level of detail, many of the proofs begin with a nonrigorous answer to the question "Why is this true?" followed by a Proof that fills in the missing details. As it is possible to drive a car without knowing about the working of the internal combustion engine, it is also possible to apply the theory of Markov chains without knowing the details of the proofs. It is my personal philosophy that probability theory was developed to solve problems, so most of our effort will be spent on analyzing examples. Readers who want to master the subject will have to do more than a few of the twenty dozen carefully chosen exercises.

This book began as notes I typed in the spring of 1997 as I was teaching ORIE 361 at Cornell for the second time. In Spring 2009, the mathematics department there introduced its own version of this course, MATH 474. This started me on the task of preparing the second edition. The plan was to have this finished in Spring 2010 after the second time I taught the course, but when May rolled around completing the book lost out to getting ready to move to Durham after 25 years in Ithaca. In the Fall of 2011, I taught Duke's version of the course, Math 216, to 20 undergrads and 12 graduate students and over the Christmas break the second edition was completed.

The second edition differs substantially from the first, though curiously the length and the number of problems have remained roughly constant. Throughout the book there are many new examples and problems, with solutions that use the TI-83 to eliminate the tedious details of solving linear equations by hand. My students tell me I should just use MATLAB and maybe I will for the next edition.

The Markov chains chapter has been reorganized. The chapter on Poisson processes has moved up from third to second, and is now followed by a treatment of the closely related topic of renewal theory. Continuous time Markov chains remain fourth, with a new section on exit distributions and hitting times, and reduced coverage of queueing networks. Martingales, a difficult subject for students at this level, now come fifth, in order to set the stage for their use in a new sixth chapter on mathematical finance. The treatment of finance expands the two sections of the previous treatment to include American options and the capital asset pricing model. Brownian motion makes a cameo appearance in the discussion of the Black-Scholes theorem, but in contrast to the previous edition, is not discussed in detail.

As usual the second edition has profited from people who have told me about typos over the last dozen years. If you find new ones, email: rtd@math.duke.edu.

Rick Durrett

Contents

1 Markov Chains 1

1.1 Definitions and Examples . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Multistep Transition Probabilities . . . . . . . . . . . . . . . . . 7

1.3 Classification of States . . . . . . . . . . . . . . . . . . . . . . . . 11

1.4 Stationary Distributions . . . . . . . . . . . . . . . . . . . . . . . 17

1.5 Limit Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

1.6 Special Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

1.6.1 Doubly stochastic chains . . . . . . . . . . . . . . . . . . . 29

1.6.2 Detailed balance condition . . . . . . . . . . . . . . . . . . 31

1.6.3 Reversibility . . . . . . . . . . . . . . . . . . . . . . . . . 35

1.6.4 The Metropolis-Hastings algorithm . . . . . . . . . . . . . 36

1.7 Proofs of the Main Theorems* . . . . . . . . . . . . . . . . . . . 39

1.8 Exit Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 44

1.9 Exit Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

1.10 Infinite State Spaces* . . . . . . . . . . . . . . . . . . . . . . . . 53

1.11 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 59

1.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

2 Poisson Processes 77

2.1 Exponential Distribution . . . . . . . . . . . . . . . . . . . . . . . 77

2.2 Defining the Poisson Process . . . . . . . . . . . . . . . . . . . . 80

2.3 Compound Poisson Processes . . . . . . . . . . . . . . . . . . . . 86

2.4 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

2.4.1 Thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

2.4.2 Superposition . . . . . . . . . . . . . . . . . . . . . . . . . 89

2.4.3 Conditioning . . . . . . . . . . . . . . . . . . . . . . . . . 90

2.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 91

2.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

3 Renewal Processes 101

3.1 Laws of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . 101

3.2 Applications to Queueing Theory . . . . . . . . . . . . . . . . . . 106

3.2.1 GI/G/1 queue . . . . . . . . . . . . . . . . . . . . . . . . 106

3.2.2 Cost equations . . . . . . . . . . . . . . . . . . . . . . . . 107

3.2.3 M/G/1 queue . . . . . . . . . . . . . . . . . . . . . . . . . 108

3.3 Age and Residual Life* . . . . . . . . . . . . . . . . . . . . . . . 109

3.3.1 Discrete case . . . . . . . . . . . . . . . . . . . . . . . . . 110

3.3.2 General case . . . . . . . . . . . . . . . . . . . . . . . . . 111

3.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 113


3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

4 Continuous Time Markov Chains 119

4.1 Definitions and Examples . . . . . . . . . . . . . . . . . . . . . . 119

4.2 Computing the Transition Probability . . . . . . . . . . . . . . . 123

4.3 Limiting Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . 127

4.4 Exit Distributions and Hitting Times . . . . . . . . . . . . . . . . 133

4.5 Markovian Queues . . . . . . . . . . . . . . . . . . . . . . . . . . 137

4.6 Queueing Networks* . . . . . . . . . . . . . . . . . . . . . . . . . 143

4.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 149

4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

5 Martingales 159

5.1 Conditional Expectation . . . . . . . . . . . . . . . . . . . . . . . 159

5.2 Examples, Basic Properties . . . . . . . . . . . . . . . . . . . . . 161

5.3 Gambling Strategies, Stopping Times . . . . . . . . . . . . . . . . 165

5.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

5.5 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

6 Mathematical Finance 179

6.1 Two Simple Examples . . . . . . . . . . . . . . . . . . . . . . . . 179

6.2 Binomial Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

6.3 Concrete Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 187

6.4 Capital Asset Pricing Model . . . . . . . . . . . . . . . . . . . . . 191

6.5 American Options . . . . . . . . . . . . . . . . . . . . . . . . . . 194

6.6 Black-Scholes formula . . . . . . . . . . . . . . . . . . . . . . . . 198

6.7 Calls and Puts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

6.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

A Review of Probability 207

A.1 Probabilities, Independence . . . . . . . . . . . . . . . . . . . . . 207

A.2 Random Variables, Distributions . . . . . . . . . . . . . . . . . . 211

A.3 Expected Value, Moments . . . . . . . . . . . . . . . . . . . . . . 215

Chapter 1

Markov Chains

1.1 Definitions and Examples

The importance of Markov chains comes from two facts: (i) there are a large number of physical, biological, economic, and social phenomena that can be modeled in this way, and (ii) there is a well-developed theory that allows us to do computations. We begin with a famous example, then describe the property that is the defining feature of Markov chains.

Example 1.1. Gambler's ruin. Consider a gambling game in which on any turn you win $1 with probability p = 0.4 or lose $1 with probability 1 - p = 0.6. Suppose further that you adopt the rule that you quit playing if your fortune reaches $N. Of course, if your fortune reaches $0 the casino makes you stop.

Let X_n be the amount of money you have after n plays. Your fortune X_n has the "Markov property." In words, this means that given the current state X_n, any other information about the past is irrelevant for predicting the next state X_{n+1}. To check this for the gambler's ruin chain, we note that if you are still playing at time n, i.e., your fortune X_n = i with 0 < i < N, then for any possible history of your wealth i_{n-1}, i_{n-2}, . . . , i_1, i_0

P(X_{n+1} = i + 1 | X_n = i, X_{n-1} = i_{n-1}, . . . , X_0 = i_0) = 0.4

since to increase your wealth by one unit you have to win your next bet. Here we have used P (B|A) for the conditional probability of the event B given that A occurs. Recall that this is defined by

P(B|A) = P(B ∩ A) / P(A)
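Since most of our effort will be spent analyzing examples, it may help to see the gambler's ruin chain in executable form. The sketch below is our own illustration, not from the book; the function names `step` and `play` are made up. It simulates the chain with p = 0.4 and uses the paths to estimate the probability of ruin:

```python
import random

def step(fortune, p=0.4):
    """One play: win $1 with probability p, otherwise lose $1."""
    return fortune + 1 if random.random() < p else fortune - 1

def play(start, N, p=0.4):
    """Run the chain from `start` until the fortune hits 0 or N.

    Returns the full path X_0, X_1, ..., so the history as well as
    the final state can be inspected.
    """
    path = [start]
    while 0 < path[-1] < N:
        path.append(step(path[-1], p))
    return path

random.seed(1)
# Estimate the probability of ruin (hitting $0 before $10) from $3.
runs = [play(3, 10) for _ in range(10_000)]
print(sum(r[-1] == 0 for r in runs) / len(runs))
```

With p < 1/2 the game is unfavorable, so ruin is very likely here. Exit distributions, treated in Section 1.8, give the exact value of this probability.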

If you need help with this notion, see section A.1 of the appendix.

Turning now to the formal definition, we say that X_n is a discrete time Markov chain with transition matrix p(i, j) if for any j, i, i_{n-1}, . . . , i_0

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, . . . , X_0 = i_0) = p(i, j)    (1.1)
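Definition (1.1) says a chain is specified by its transition matrix. As a concrete illustration (our own small example, with N = 4 chosen so the matrix is easy to read), the gambler's ruin chain's matrix can be written out directly:

```python
# Transition matrix p(i, j) for gambler's ruin with p = 0.4 and N = 4.
# States are the fortunes 0, 1, 2, 3, 4; 0 and N are absorbing.
N, p = 4, 0.4

P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][0] = 1.0              # broke: the casino makes you stop
P[N][N] = 1.0              # reached $N: you quit playing
for i in range(1, N):
    P[i][i + 1] = p        # win $1 with probability 0.4
    P[i][i - 1] = 1 - p    # lose $1 with probability 0.6

# Every row sums to 1: from each state the chain must go somewhere.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12
```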

Here and in what follows, boldface indicates a word or phrase that is being defined or explained.

Equation (1.1) explains what we mean when we say that "given the current state X_n, any other information about the past is irrelevant for predicting the next state X_{n+1}."
