Introduction to Stochastic Processes - Lecture Notes


(with 33 illustrations)

Gordan Žitković, Department of Mathematics, The University of Texas at Austin

Contents

1 Probability review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1 Random variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2 Countable sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.3 Discrete random variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.4 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.5 Events and probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.6 Dependence and independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.7 Conditional probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.8 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Mathematica in 15 min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.1 Basic Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.2 Numerical Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.3 Expression Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.4 Lists and Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.5 Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.6 Predefined Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.7 Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.8 Solving Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.9 Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.10 Probability Distributions and Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.11 Help Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.12 Common Mistakes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1 The canonical probability space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.2 Constructing the Random Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.3 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.3.1 Random number generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.3.2 Simulation of Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.4 Monte Carlo Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4 The Simple Random Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.1 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.2 The maximum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36


5 Generating functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

5.1 Definition and first properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

5.2 Convolution and moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

5.3 Random sums and Wald's identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

6 Random walks - advanced methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

6.1 Stopping times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

6.2 Wald's identity II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

6.3 The distribution of the first hitting time T1 . . . . . . . . . . . . . . . . . . . . . . . . . . 52

6.3.1 A recursive formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

6.3.2 Generating-function approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

6.3.3 Do we actually hit 1 sooner or later? . . . . . . . . . . . . . . . . . . . . . . . . . 55

6.3.4 Expected time until we hit 1? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

7 Branching processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

7.1 A bit of history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

7.2 A mathematical model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

7.3 Construction and simulation of branching processes . . . . . . . . . . . . . . . . . . . . 57

7.4 A generating-function approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

7.5 Extinction probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

8 Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

8.1 The Markov property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

8.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

8.3 Chapman-Kolmogorov relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

9 The "Stochastics" package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

9.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

9.2 Building Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

9.3 Getting information about a chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

9.4 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

9.5 Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

9.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

10 Classification of States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

10.1 The Communication Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

10.2 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

10.3 Transience and recurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

10.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

11 More on Transience and Recurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

11.1 A criterion for recurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

11.2 Class properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

11.3 A canonical decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

Last Updated: December 24, 2010


12 Absorption and reward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

12.1 Absorption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

12.2 Expected reward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

13 Stationary and Limiting Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

13.1 Stationary and limiting distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

13.2 Limiting distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

14 Solved Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

14.1 Probability review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

14.2 Random Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

14.3 Generating functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

14.4 Random walks - advanced methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

14.5 Branching processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

14.6 Markov chains - classification of states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

14.7 Markov chains - absorption and reward . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

14.8 Markov chains - stationary and limiting distributions . . . . . . . . . . . . . . . . . . . . 148

14.9 Markov chains - various multiple-choice problems . . . . . . . . . . . . . . . . . . . . . 156


Chapter 1

Probability review

The probable is what usually happens.
--Aristotle

It is a truth very certain that when it is not in our power to determine what is true, we ought to follow what is most probable.
--Descartes, "Discourse on Method"

It is remarkable that a science which began with the consideration of games of chance should have become the most important object of human knowledge.
--Pierre Simon Laplace, "Théorie Analytique des Probabilités", 1812

Anyone who considers arithmetic methods of producing random digits is, of course, in a state of sin.
--John von Neumann, quoted in "Conic Sections" by D. MacHale

I say unto you: a man must have chaos yet within him to be able to give birth to a dancing star. I say unto you: ye have chaos yet within you . . .
--Friedrich Nietzsche, "Thus Spake Zarathustra"

1.1 Random variables

Probability is about random variables. Instead of giving a precise definition, let us just mention that a random variable can be thought of as an uncertain, numerical (i.e., with values in R) quantity. While it is true that we do not know with certainty what value a random variable X will take, we usually know how to compute the probability that its value will be in some subset of R. For example, we might be interested in P[X ≥ 7], P[X ∈ [2, 3.1]] or P[X ∈ {1, 2, 3}]. The collection of all such probabilities is called the distribution of X. One has to be very careful not to confuse the random variable itself and its distribution. This point is particularly important when several random variables appear at the same time. When two random variables X and Y have the same distribution, i.e., when P[X ∈ A] = P[Y ∈ A] for any set A, we say that X and Y are equally distributed and write X (d)= Y.
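The distinction between a random variable and its distribution can be made concrete with a quick simulation. The sketch below (written in Python purely for illustration; the notes themselves use Mathematica) takes X to be the result of a fair coin flip, with heads recorded as 1 and tails as 0, and sets Y = 1 - X. On every single outcome X and Y take different values, yet X (d)= Y, since P[X = 1] = P[Y = 1] = 1/2.

```python
import random

random.seed(0)  # fixed seed so the experiment is reproducible

# One fair coin flip: X = 1 for heads, 0 for tails, and Y = 1 - X.
# X and Y differ on every outcome, yet have the same distribution.
n = 100_000
x_samples = [random.randint(0, 1) for _ in range(n)]
y_samples = [1 - x for x in x_samples]

x_freq = sum(x_samples) / n  # empirical estimate of P[X = 1]
y_freq = sum(y_samples) / n  # empirical estimate of P[Y = 1]

print(x_freq, y_freq)  # both close to 0.5: same distribution
print(any(x == y for x, y in zip(x_samples, y_samples)))  # False: never equal
```

Both empirical frequencies sit near 1/2, confirming the equality of distributions, while the final line shows that X and Y never coincide as random variables.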

