Formulas

Trigonometric Identities

sin(x+y) = sin x cos y + cos x sin y        cos(x+y) = cos x cos y - sin x sin y

sin x sin y = ½ [ cos(x-y) - cos(x+y) ]        cos x cos y = ½ [ cos(x-y) + cos(x+y) ]

sin x cos y = ½ [ sin(x+y) + sin(x-y) ]

sin²x = ½ [ 1 - cos 2x ]        cos²x = ½ [ 1 + cos 2x ]

sinh x = ½ [ e^x - e^(-x) ]        cosh x = ½ [ e^x + e^(-x) ]

e^(iy) = cos y + i sin y        e^(-iy) = cos y - i sin y

sin x = (1/(2i)) [ e^(ix) - e^(-ix) ]        cos x = ½ [ e^(ix) + e^(-ix) ]

Sums, Integrals, and Counting

1 + x + x² + ⋯ + x^n + ⋯ = 1/(1-x) for |x| < 1        1 + x + x² + ⋯ + x^n = (1 - x^(n+1))/(1-x)

1 + 2x + 3x² + ⋯ + n x^(n-1) + ⋯ = 1/(1-x)² for |x| < 1        1 + 2x + 3x² + ⋯ + n x^(n-1) = (1 - (n+1)x^n + n x^(n+1))/(1-x)²

x + 2x² + 3x³ + ⋯ + n x^n + ⋯ = x/(1-x)² for |x| < 1

∫ u dv = uv - ∫ v du (integration by parts)        ∫ x e^(-ax) dx = - ( x/a + 1/a² ) e^(-ax) + C

# of ways to select r objects from n objects with order being taken into account = n!/(n-r)! = n(n-1)⋯(n-r+1)

# of ways to select r objects from n objects disregarding the order in which they are selected = C(n, r) = n!/(r!(n-r)!)
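
A quick check of these two counting formulas in Python (an added illustration, not part of the original sheet); math.perm and math.comb compute exactly these quantities:

```python
# Check the permutation and combination formulas (Python 3.8+).
import math

n, r = 10, 3
perm = math.perm(n, r)    # n!/(n-r)!  -- ordered selections
comb = math.comb(n, r)    # n!/(r!(n-r)!)  -- unordered selections

assert perm == math.factorial(n) // math.factorial(n - r)
assert comb == perm // math.factorial(r)
print(perm, comb)         # 720 120
```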

Probability

Notation

S = sample space = set of all possible outcomes

a = symbol used to represent a typical outcome (i.e. something one observes)

event = a set of outcomes. A, E are symbols often used for typical events

Pr{A} = probability of outcome being in A, if A is an event. If one were to make many independent observations then the fraction of times the outcome was in A should be close to Pr{A}.

Addition Formulas

Pr{A ∪ B} = Pr{A} + Pr{B}, if A and B are disjoint.

Pr{A ∪ B} = Pr{A} + Pr{B} - Pr{A ∩ B}

Pr{A ∪ B ∪ C} = Pr{A} + Pr{B} + Pr{C} - Pr{A ∩ B} - Pr{A ∩ C} - Pr{B ∩ C} + Pr{A ∩ B ∩ C}

Pr{E1 ∪ ⋯ ∪ En} = Pr{E1} + ⋯ + Pr{En}, if E1, …, En are disjoint.

Pr{E1 ∪ ⋯ ∪ En ∪ ⋯} = Pr{E1} + ⋯ + Pr{En} + ⋯, if E1, …, En, … are disjoint.

Pr{A^c} = 1 - Pr{A}, where A^c = complement of A.

Conditional Probability, Multiplication Formulas, and Independence

Pr{A | B} = Pr{A ∩ B} / Pr{B}

Pr{A ∩ B} = Pr{B} Pr{A | B} (multiplication formula)

The addition formulas above hold for conditional probability, e.g.

Pr{E1 ∪ ⋯ ∪ En | B} = Pr{E1 | B} + ⋯ + Pr{En | B}, if E1, …, En are disjoint.

Pr{A ∩ B | C} = Pr{B | C} Pr{A | B ∩ C}

A and B are independent ⇔ Pr{A | B} = Pr{A} ⇔ Pr{A ∩ B} = Pr{A} Pr{B}

Random Variables

A random variable X assigns a value X(a) to each outcome a.

Pr{X ≤ x} is short for Pr{a : X(a) ≤ x} = the probability the value of the random variable is ≤ x.

F(x) = FX(x) = Pr{X ≤ x} = cumulative distribution function (CDF) of X.

X and Y are independent ⇔ Pr{X ≤ x | Y ≤ y} = Pr{X ≤ x} for all x, y

⇔ Pr{X ≤ x and Y ≤ y} = Pr{X ≤ x} Pr{Y ≤ y} for all x, y

Discrete Random Variables

X is discrete ⇔ X assumes only a finite or countably infinite set of values.

f(k) = fX(k) = Pr{X = k} = F(k) - F(k-) = probability mass function (PMF) of X.

F(x) = Σ_{k ≤ x} f(k)

μ = μX = E(X) = Σ_k k f(k) = expectation of X = mean of X = average of X

E( h(X) ) = Σ_k h(k) f(k) = expectation of h(X).

σ² = Var(X) = E( (X - μ)² ) = Σ_k (k - μ)² f(k) = variance of X = E(X²) - μ² = Σ_k k² f(k) - μ²

σ = σX = √Var(X) = standard deviation of X

fX,Y(j, k) = Pr{X = j and Y = k} = joint probability mass function of X and Y

fX(j) = Σ_k fX,Y(j, k)

fY(k) = Σ_j fX,Y(j, k)

fX|Y(j | k) = Pr{X = j | Y = k} = fX,Y(j, k) / fY(k)

X and Y are independent ⇔ Pr{X = j | Y = k} = Pr{X = j} for all j, k

⇔ Pr{X = j and Y = k} = Pr{X = j} Pr{Y = k} for all j, k

⇔ fX|Y(j | k) = fX(j)

⇔ fX,Y(j, k) = fX(j) fY(k)

E(X | Y = k) = Σ_j j fX|Y(j | k) = expectation of X given Y = k
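
A minimal numeric illustration of these PMF formulas (added here; the joint PMF below is made up):

```python
# Marginals, mean, variance, and a conditional expectation from a joint PMF.
f = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}   # f_{X,Y}(j,k)

fX, fY = {}, {}
for (j, k), p in f.items():
    fX[j] = fX.get(j, 0.0) + p      # fX(j) = sum over k of f(j,k)
    fY[k] = fY.get(k, 0.0) + p      # fY(k) = sum over j of f(j,k)

mu = sum(j * p for j, p in fX.items())                  # E(X) = 0.7
var = sum((j - mu) ** 2 * p for j, p in fX.items())     # Var(X) = 0.21

# E(X | Y = 1) = sum over j of j * f(j,1) / fY(1)
print(mu, var, sum(j * f[(j, 1)] for j in fX) / fY[1])  # 0.7 0.21 0.666...
```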

Continuous Random Variables

X is continuous ⇔ Pr{a ≤ X ≤ b} = ∫_a^b f(x) dx for some function f(x)

f(x) = F′(x) = probability density function (PDF) of X.

F(a) = ∫_{-∞}^a f(x) dx

μ = μX = E(X) = ∫ x f(x) dx = expectation of X = mean of X

E( h(X) ) = ∫ h(x) f(x) dx = expectation of h(X).

σ² = Var(X) = E( (X - μ)² ) = ∫ (x - μ)² f(x) dx = variance of X = E(X²) - μ² = ∫ x² f(x) dx - μ²

σ = σX = √Var(X) = standard deviation of X

X and Y are independent ⇔ Pr{X ≤ x and Y ≤ y} = Pr{X ≤ x} Pr{Y ≤ y} for all x, y

⇔ fX,Y(x, y) = fX(x) fY(y)

Z = X + Y and X and Y independent ⇒

fZ(z) = ∫ fX(x) fY(z - x) dx

If fX(x) = 0 for x < 0 and fY(y) = 0 for y < 0 then fZ(z) = 0 for z < 0 and

fZ(z) = ∫_0^z fX(x) fY(z - x) dx for z ≥ 0.

W = min{X, Y} and X and Y independent ⇒

FW(w) = 1 - (1 - FX(w))(1 - FY(w))

Common Distributions

Uniform–discrete:

f(k) = 1/n for k = a, a+1, …, a+n-1. (e.g. roll a die)

E(X) = a + (n-1)/2        σ² = (n² - 1)/12

Binomial:

f(k) = C(n, k) p^k (1-p)^(n-k) for k = 0, 1, 2, …, n

= Pr{k successes in n independent trials if probability of success = p on each trial}

E(N) = np        σ² = np(1-p)

Geometric:

f(k) = p (1-p)^(k-1) for k = 1, 2, ….

= Pr{k trials are required to get a success if probability of success = p on each trial}

E(N) = 1/p        σ² = (1-p)/p²

Poisson:

f(k) = e^(-μ) μ^k / k! for k = 0, 1, 2, ...

= Pr{there are k occurrences of some event (e.g. an arrival) in a given time period}

E(N) = μ = average number of occurrences        σ² = μ

Important example: If times between occurrences of some event have exponential distribution with occurrence rate λ and are independent, then the number of occurrences in time t has Poisson distribution with parameter μ = λt.
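
A simulation sketch of this fact (an added illustration; the rate, horizon, and trial count are arbitrary):

```python
# Count exponential(lam) inter-occurrence times falling in [0, t]:
# the count should behave like a Poisson random variable with mean lam*t.
import random

lam, t, trials = 2.0, 3.0, 50_000
total = 0
for _ in range(trials):
    elapsed = 0.0
    while True:
        elapsed += random.expovariate(lam)   # one exponential gap
        if elapsed > t:
            break
        total += 1
print(total / trials)    # close to lam * t = 6.0
```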

Uniform–continuous:

f(x) = 1/(b-a) for a ≤ x ≤ b

E(X) = (a+b)/2        σ² = (b-a)²/12

Normal:

f(x) = (1/(σ√(2π))) e^(-(x-μ)²/(2σ²))

E(X) = μ        standard deviation = σ

Exponential:

f(t) = λe^(-λt) for t > 0

Distribution for times between occurrences of some event

E(T) = 1/λ = average time between occurrences        σ = 1/λ

λ = rate of occurrence of the event

Erlang:

f(t) = λ^n t^(n-1) e^(-λt) / (n-1)! for t > 0

Distribution for the time until the nth occurrence of some event, i.e. density function of Sn = T1 + ⋯ + Tn where T1, …, Tn are independent exponential random variables all with occurrence rate λ.

E(Sn) = n/λ        σ² = n/λ²

Central limit theorem

Let X1, X2, …, Xn, … be an infinite sequence of independent random variables with the same distribution. Let μ and σ be the mean and standard deviation of each Xn. Then for n large the distribution of S = X1 + X2 + ⋯ + Xn is approximately normal with mean nμ and standard deviation σ√n, and the distribution of (X1 + X2 + ⋯ + Xn)/n is approximately normal with mean μ and standard deviation σ/√n.
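
A simulation sketch of the theorem (added illustration) with uniform(0,1) summands, for which μ = 1/2 and σ = √(1/12):

```python
# Compare the sample mean/sd of S = X1 + ... + Xn with n*mu and sigma*sqrt(n).
import math, random

n, trials = 30, 20_000
mu, sigma = 0.5, math.sqrt(1 / 12)
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

m = sum(sums) / trials
s = math.sqrt(sum((x - m) ** 2 for x in sums) / trials)
print(m, n * mu)                  # both ~15
print(s, sigma * math.sqrt(n))    # both ~1.58
```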

Single Period Inventory

w = wholesale cost

y = the number we buy wholesale

r = retail price

D = demand (the amount the public wants to buy)

fk = Pr{ D = k }        Fk = Pr{ D ≤ k } = f0 + f1 + ⋯ + fk

R = revenue from retail sales = r min{D, y}

s = salvage amount.

S = revenue from the salvage = s max{y - D, 0}

P = total profit = R + S - wy

y* = value of y that maximizes the expected profit

= smallest y such that Fy ≥ (r - w)/(r - s)
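
A sketch of this critical-ratio rule (added illustration; the prices and demand PMF are made up):

```python
# y* = smallest y with F_y >= (r - w)/(r - s).
w, r, s = 4.0, 10.0, 1.0
f = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.25, 4: 0.15}   # Pr{D = k}

ratio = (r - w) / (r - s)       # = 2/3
F = 0.0
for k in sorted(f):
    F += f[k]                   # F_k = f_0 + ... + f_k
    if F >= ratio:
        print("y* =", k)        # F_2 = 0.60 < 2/3 <= F_3 = 0.85, so y* = 3
        break
```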

Markov Chains

Notation & Markov property

Xn = state of system at time step n. n = 0, 1, 2, …

i, j, k = typical states. States are often numbered 1, 2, …, m. Some of the following formulas may require additional hypotheses if there are an infinite number of states.

Pr{Xn+1 = j | Xn = i, X0 = k0, X1 = k1, …, Xn-1 = kn-1} = Pr{Xn+1 = j | Xn = i} (Markov property)

Pr{Xn+1 = j | Xn = i} = Pr{X1 = j | X0 = i} (time invariance)

Single step transition probabilities

pij = Pr{Xn+1 = j | Xn = i} = probability next state is j given current state is i.

P = (pij) = Markov matrix (or transition matrix).

αj = Pr{X0 = j} = probability system starts in state j. α = (α1, …, αm) = initial probability vector.

(αP)j = probability next state is j if α are the probabilities of the current state.

c(i) = ci = cost (or profit) for being in state i at one time step. c = (c1, …, cm)ᵀ = cost vector (a column vector).

(Pc)i = E[ c(X1) | X0 = i ] = expected cost for next state if current state is i.

αPc = expected cost for next state if α are the probabilities of the current state.

Multistep transition probabilities

pij(n) = Pr{Xn = j | X0 = i} = probability state is j in n time steps given current state is i.

pij(n) = (P^n)ij (This is a theorem, not a definition)

(αP^n)j = probability of being in state j in n time steps if α are the probabilities of the current state.

(P^n c)i = E[ c(Xn) | X0 = i ] = expected cost in n steps if current state is i.

αP^n c = expected cost in n steps if α are the probabilities of the current state.

P^n = TD^nT⁻¹ where D^n = diagonal matrix with the powers λk^n of the eigenvalues λk on the diagonal.

T = matrix whose columns are the eigenvectors.

To find the eigenvalues λ, solve det(P - λI) = 0. At least one eigenvalue is equal to 1; call this λ1. All of the eigenvalues have absolute value no more than 1. If some of the eigenvalues are equal then this formula for P^n may have to be modified.

To find the eigenvectors v for an eigenvalue λ, solve (P - λI)v = 0. At least one of the eigenvectors corresponding to the eigenvalue 1 is the vector of all 1's.

pij(n) = πij + cp+1 λp+1^n + ⋯ + cm λm^n where the ck are constants which also depend on i and j and are related to the eigenvectors. We are assuming λ1 = λ2 = ⋯ = λp = 1. If some λk is the same as another then the term ck λk^n may be modified to ck n^q λk^n where q is a positive integer.

Steady state probabilities

πij = lim_{n→∞} (1/n) [ pij(1) + ⋯ + pij(n) ] = long run average probability of being in state j if current state is i.

πij = lim_{n→∞} pij(n) = steady state probability of being in state j if current state is i, if the only eigenvalues with absolute value equal to 1 are equal to 1.

πij = πj if you can go from any recurrent state to any other recurrent state. π = (π1, …, πm).

πP = π, π1 + ⋯ + πm = 1.

πc = expected steady state cost rate.
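
A sketch of solving πP = π with Σπj = 1 numerically (added illustration; the matrix and costs are made up):

```python
# Replace one balance equation with the normalization condition and solve.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
m = P.shape[0]

A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])   # (P^T - I) pi = 0, sum(pi) = 1
b = np.zeros(m); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                        # [0.8 0.2]

c = np.array([5.0, 20.0])        # made-up cost vector
print(pi @ c)                    # expected steady state cost rate = 8.0
```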

Time to get to a state

T(j) = min{n ≥ 1 : Xn = j} = first time n ≥ 1 at which you visit state j (∞ if you never visit j).

Fij(n) = Pr{T(j) ≤ n | X0 = i} = probability you visit state j by time n if you start in state i.

Modify the Markov chain (MC) to stay in state j once you get there. If i ≠ j then Fij(n) is the probability of being in state j in the modified MC at time n if you start in state i. If i = j, make an additional modification by introducing a second state j, called the "starting state j", while the original state j is called the "ending state j". There are no transitions into the starting state j, and transitions out of the starting state j are the same as in the original MC. If there is a transition from j to j in the original MC, then there is a transition from the starting state j to the ending state j in the modified MC. Then Fjj(n) is the probability of being in the ending state j at time n if you start in the starting state j.

Fij = Pr{T(j) < ∞ | X0 = i} = lim_{n→∞} Fij(n) = probability of reaching state j if you start in state i.

Fi = Fii = Pr{T(i) < ∞ | X0 = i} = probability of returning to state i if you start in state i.

i is recurrent if Fi = 1 and transient if Fi < 1. So i is recurrent if you always return to i when you start at i, and transient if you might not return to i when you start at i. If there is a finite number of states then i is recurrent if and only if whenever you can go from i to some other state k, you can also go from k back to i.

Fij = ( (I - PT)⁻¹ b )i if i is transient and j is recurrent. PT is the sub-matrix of P corresponding to the transient states and b is the vector obtained by first summing the columns of P which correspond to the states that you can go to from j and then restricting this vector to the transient states. If i and j are both transient then modify the Markov chain as in the calculation of Fij(n) above (which makes j recurrent). Or use the formula sij = Fij sjj (see below).

fij(n) = Pr{T(j) = n | X0 = i} = probability you visit j for the first time at time n if you start in i.

Modify the MC to go to a dead-end state m+1 after visiting state j. If i ≠ j then fij(n) is the probability of being in state j in the modified MC at time n if you start in state i. If i = j, make an additional modification by introducing a starting state j as in the calculation of Fij(n) above.

Fij(n) = fij(1) + ⋯ + fij(n)        Fij = Σ_{n=1}^{∞} fij(n)

E( T(j) | X0 = j ) = Σ_{n=1}^{∞} n fjj(n) = 1/πj = expected time to return to j if you start in j (assuming j is recurrent).

Number of visits to a state

N(j) = Σ_{n=0}^{∞} δ(Xn, j) = number of times you visit state j (including time n = 0 if you start in state j). Here δ(i, j) is 1 if i = j and 0 if i ≠ j.

sij = E( N(j) | X0 = i ) = Σ_{n=0}^{∞} pij(n) = expected number of times you visit state j (including time n = 0 if you start in state j). This will be infinite if j is recurrent and one can go from i to j.

S = (sij) = Σ_{n=0}^{∞} P^n

S = (I - PT)⁻¹ if we restrict S to the transient states and PT is the restriction of P to the transient states.

sjj = 1/(1 - Fj), i.e. Fj = 1 - 1/sjj, if j is transient

sij = Fij sjj if i and j are transient and i ≠ j.

C = Σ_{n=0}^{∞} c(Xn) = total cost (or profit) for all time. Assume cj = 0 if j is recurrent.

E( C | X0 = i ) = Σ_{n=0}^{∞} (P^n c)i = ( (I - PT)⁻¹ c )i = expected total cost if you start in state i. (The last equation assumes i is transient.)

α(I - PT)⁻¹c = expected total cost if α are initial state probabilities restricted to the transient states.
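
A sketch of these fundamental-matrix computations (added illustration; PT and c are made up):

```python
# S = (I - P_T)^(-1) gives expected visit counts; S c gives expected total cost.
import numpy as np

PT = np.array([[0.5, 0.2],        # P restricted to two transient states
               [0.3, 0.4]])
S = np.linalg.inv(np.eye(2) - PT)
print(S)                          # s_ij = expected number of visits to j from i

c = np.array([2.0, 1.0])          # cost per step in each transient state
print(S @ c)                      # expected total cost from each start state
```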

Markov Processes

Matrix Operations

For a 2×2 matrix:  [a b; c d]⁻¹ = (1/(ad - bc)) [d -b; -c a]

e^(tR) = T diag(e^(λ1t), …, e^(λmt)) T⁻¹ where the λk are eigenvalues of R and T is the matrix whose columns are the eigenvectors.

Example: R = [-a a; b -b] ⇒ e^(tR) = (1/(a+b)) [b + a e^(-(a+b)t)  a - a e^(-(a+b)t); b - b e^(-(a+b)t)  a + b e^(-(a+b)t)]

Notation & Transition Rates

Yt = state of system at (continuous) time t, where t ( 0.

i, j, k = typical states. States are often numbered 1, 2, …, m. Some of the following formulas may require additional hypotheses if there are an infinite number of states.

T = time of first (or next) transition = min{t : Yt ≠ Y0}.

qij = transition rate from state i to state j (for i ≠ j) = lim_{h→0} Pr{Yh = j | Y0 = i}/h. (Often one is given or can measure the qij.)

vi = qi,1 + ⋯ + qi,i-1 + qi,i+1 + ⋯ + qi,m = transition rate out of state i. T is an exponential random variable with rate vi.

pij = qij/vi = probability next state is j if current state is i = Pr{YT = j | Y0 = i}.

R = matrix with off-diagonal entries qij and diagonal entries -vi = generator matrix.

Pr{Ys+t = j | Ys = i, Yr0 = k0, Yr1 = k1, …, Yrn-1 = kn-1} = Pr{Ys+t = j | Ys = i} if the rj are less than s (Markov property)

Pr{Ys+t = j | Ys = i} = Pr{Yt = j | Y0 = i} (time invariance)

Transition probabilities

pij(t) = Pr{Yt = j | Y0 = i} = probability state is j at time t given starting state is i.

P(t) = (pij(t)) = e^(tR) = transition matrix at time t

P(s+t) = P(s) P(t) (Chapman-Kolmogorov equation) ((6.8) on p. 363)

P′(t) = P(t) R (Kolmogorov's forward differential equations, pp. 367, 388)

αj = Pr{Y0 = j} = probability system starts in state j. α = (α1, …, αm) = initial probability vector.

(αP(t))j = probability state is j at time t if α are the probabilities of the initial state.

ri = reward (or cost) per unit time while in state i. r = (r1, …, rm)ᵀ = reward vector (a column vector).

hij = profit/cost for a transition from state i to state j.

gi = ri + Σ_{j ≠ i} qij hij = effective reward per unit time while in state i.

(P(t)g)i = E[ g(Yt) | Y0 = i ] = expected reward per unit time at time t if initial state is i.

∫_a^b (P(t)g)i dt = expected profit/cost during the time interval a ≤ t ≤ b if initial state is i.

∫_a^b pij(t) dt = expected time in state j during the time interval a ≤ t ≤ b if initial state is i. (a special case)

Steady state probabilities

pij = lim_{t→∞} pij(t) = steady state probability of being in state j if current state is i.

pij = pj if you can go from any state to any other state. p = (p1, …, pm).

pR = 0, p1 + ⋯ + pm = 1.

pg = expected steady state profit/cost rate.
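
A sketch of solving pR = 0 with Σpi = 1 (added illustration; the generator is made up):

```python
import numpy as np

R = np.array([[-3.0,  3.0],       # off-diagonals are the rates q_ij
              [ 1.0, -1.0]])      # each row sums to 0
m = R.shape[0]

A = np.vstack([R.T[:-1], np.ones(m)])   # R^T p = 0 plus normalization
b = np.zeros(m); b[-1] = 1.0
p = np.linalg.solve(A, b)
print(p)                          # [0.25 0.75]
```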

Machine Replacement

c1 = replacement cost of the machine.

c2 = additional cost if machine fails before replacement.

T = length of time until the machine fails.

F = cumulative distribution function for T.

f = probability mass (if time is discrete) or density (if time is continuous) function for T.

q = time at which you replace the machine if it hasn't already failed.

C = Cq = cost for one replacement if you replace at time q if it hasn't already failed.

E(C) = expected cost of one replacement if you replace at time q if it hasn't already failed

= c1 + c2F(q)

Tq = replacement time of one machine if you replace at time q if it hasn't already failed

= min{T, q}

E(Tq) = expected time of one replacement if you replace at time q if it hasn't already failed

= Σ_{t ≤ q} t f(t) + q (1 - F(q)) if time is discrete;  ∫_0^q t f(t) dt + q (1 - F(q)) if time is continuous.

z(q) = expected long run average cost for the machine

= E(C)/E(Tq)

= (c1 + c2F(q)) / (Σ_{t ≤ q} t f(t) + q(1 - F(q))) if time is discrete;  (c1 + c2F(q)) / (∫_0^q t f(t) dt + q(1 - F(q))) if time is continuous.

q* = value of q which minimizes z(q)

= value of q satisfying f(q) E(Tq) = (c1/c2 + F(q)) (1 - F(q)), if time is continuous.
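
A numeric sketch of minimizing z(q) (added illustration): the costs are made up, and the failure time is taken to be Weibull with increasing hazard so that preventive replacement can pay off.

```python
# Grid-search the long run average cost z(q) = (c1 + c2 F(q)) / E(T_q).
import math

c1, c2, beta, k = 10.0, 25.0, 5.0, 2.0      # assumed costs, Weibull scale/shape

def F(t):
    return 1.0 - math.exp(-((t / beta) ** k))

def ET(q, steps=1000):                      # E(T_q) = integral_0^q (1 - F(t)) dt
    h = q / steps
    return h * sum(1.0 - F(i * h) for i in range(steps))

def z(q):
    return (c1 + c2 * F(q)) / ET(q)

q_star = min((0.05 * i for i in range(1, 400)), key=z)
print(q_star, z(q_star))
```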

Renewal Theory

T1 = length of time until the first occurrence of some event.

Tn = length of time between the (n-1)st occurrence of the event and the nth occurrence. Assume all the Tn are independent and have the same distribution.

f(t) = probability density function for any Tn.

F(t) = cumulative distribution function for any Tn.

Sn = T1 + ⋯ + Tn = time until the nth occurrence.

fn(t) = f(t) * f(t) * ⋯ * f(t)

= convolution of f with itself n times

= probability density function for Sn.

Fn(t) = cumulative distribution function for Sn. (Let F0(t) = 1.)

N(t) = number of occurrences between times 0 and t

pn(t) = Pr{ N(t) = n }

= probability of exactly n occurrences between times 0 and t

= Fn(t) - Fn+1(t)

m(t) = E{ N(t) }

= average number of occurrences between times 0 and t

= renewal function

= Σ_{n=1}^{∞} n pn(t)

= Σ_{n=1}^{∞} Fn(t)

Renewal equation: m(t) = F(t) + f(t) * m(t)

Limit theorems:

N(t)/t → 1/E(T1) as t → ∞ with probability 1

m(t)/t → 1/E(T1) as t → ∞
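
A simulation sketch of the first limit theorem (added illustration; the gap distribution is a made-up Erlang with E(T) = 2):

```python
# N(t)/t should approach 1/E(T) = 0.5 for large t.
import random

lam, shape, t = 1.0, 2, 10_000.0
elapsed, n = 0.0, 0
while True:
    elapsed += sum(random.expovariate(lam) for _ in range(shape))  # Erlang gap
    if elapsed > t:
        break
    n += 1
print(n / t)    # close to 0.5
```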

Queues

λ = arrival rate

Ta = 1/λ = average time between arrivals

μ = service rate at one of the servers

Ts = 1/μ = average time to be served

M/M/1 queue

ρ = λ/μ = traffic intensity

p0 = 1 - ρ = probability that no one is being served

1 - p0 = ρ = probability that someone is being served = average number being served

pn = (1 - ρ) ρ^n = probability that n are in system including one being served

L = ρ/(1 - ρ) = λ/(μ - λ) = average number in system including one being served

W = L/λ = 1/(μ - λ) = average time in system including being served

Wq = W - Ts = ρ/(μ - λ) = average waiting time to be served not including time to be served

Lq = λWq = L - ρ = ρ²/(1 - ρ) = average number in system excluding one being served

c = cost per unit time for each customer in the system

ccustomer = cW = average waiting cost for each customer

ctime = cL = average waiting cost per unit time
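
Plugging illustrative rates into the M/M/1 formulas above (added sketch; λ, μ, and c are made up):

```python
lam, mu = 3.0, 4.0
rho = lam / mu                 # traffic intensity = 0.75

L = rho / (1 - rho)            # 3.0 in system on average
W = 1 / (mu - lam)             # 1.0 time unit in system
Wq = W - 1 / mu                # 0.75 waiting before service
Lq = lam * Wq                  # 2.25 in queue

c = 10.0                       # assumed cost per customer per unit time
print(L, W, Wq, Lq, c * W, c * L)
```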

M/M/2 queue (2 servers)

ρ = λ/(2μ) = traffic intensity

p0 = (1 - ρ)/(1 + ρ) = probability that no one is in the system

L = 2ρ/(1 - ρ²) = average number in system including one being served

W = L/λ = 1/(μ(1 - ρ²)) = 4μ/(4μ² - λ²) = average time in system including being served

Wq = W - Ts = ρ²/(μ(1 - ρ²)) = λ²/(μ(4μ² - λ²)) = average waiting time to be served not including time to be served

Lq = λWq = 2ρ³/(1 - ρ²) = average number in system excluding one being served

M/M/c queue

c = number of servers

ρ = λ/(cμ)

p0 = [ 1 + cρ + (cρ)²/2! + ⋯ + (cρ)^(c-1)/(c-1)! + (cρ)^c/(c!(1 - ρ)) ]⁻¹ = probability that the system is empty

pn = ((cρ)^n/n!) p0 for n = 0, 1, …, c-1;        pn = (c^c ρ^n/c!) p0 for n = c, c+1, …

Lq = p0 (cρ)^c ρ / (c! (1 - ρ)²)

Wq = Lq/λ

W = Wq + Ts

L = λW = Lq + cρ
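
A sketch of the M/M/c formulas above (added illustration; the rates and c = 3 are made up):

```python
import math

lam, mu, c = 5.0, 2.0, 3
rho = lam / (c * mu)           # 5/6 < 1, so the queue is stable
a = lam / mu                   # offered load = c * rho

p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
            + a**c / (math.factorial(c) * (1 - rho)))
Lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)
Wq = Lq / lam
W = Wq + 1 / mu
print(p0, Lq, Wq, W, lam * W)  # lam * W = Lq + c*rho
```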

M/M/1/K queue

K = maximum number at queue including one being served

ρ = λ/μ

pn = (1 - ρ) ρ^n / (1 - ρ^(K+1)) for n = 0, 1, …, K (if ρ ≠ 1)

L = ρ/(1 - ρ) - (K+1) ρ^(K+1)/(1 - ρ^(K+1))

W = L/(λ(1 - pK))

Wq = W - Ts

Lq = L - ρ(1 - pK)

Queuing networks

pij = probability of going to queue j after leaving queue i. PT = (pij) = matrix of these routing probabilities.

S = Σ_{n=0}^{∞} PT^n = (I - PT)⁻¹        sij = Sij

sij = expected number of times a customer visits queue j if it starts at queue i

rj = rate at which entering customers go to queue j. r = (r1, …, rm)

αj = rj/(r1 + ⋯ + rm) = probability an entering customer goes to queue j. α = (α1, …, αm)

λj = rate at which customers are arriving at queue j

= r1s1j + ⋯ + rmsmj

λ = (λ1, …, λm) = rS

μj = service rate of a server at queue j.

Wj = average time a customer spends at queue j (including time being served) for each visit to queue j (calculated using the formula for the M/M/1 or M/M/c queue)

W = (W1, …, Wm)ᵀ = column vector of these times

αSW = average time for a job to be in the system.
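
A sketch of the network formulas (added illustration; the routing matrix, external rates, and service rates are made up):

```python
# lambda = r S with S = (I - P_T)^(-1); then per-queue M/M/1 waits.
import numpy as np

PT = np.array([[0.0, 0.5],     # routing probabilities between two queues
               [0.2, 0.0]])
r = np.array([1.0, 0.5])       # external arrival rates
mu = np.array([4.0, 3.0])      # service rates

S = np.linalg.inv(np.eye(2) - PT)
lam = r @ S                    # lambda_j = r_1 s_1j + ... + r_m s_mj
Wj = 1 / (mu - lam)            # M/M/1 wait at each queue (stable: lam < mu)
alpha = r / r.sum()
print(lam, Wj, alpha @ (S @ Wj))   # last value = average time in system
```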

Brownian Motion

Regular

X(t) = position (or value) at time t of some object (or quantity) whose motion is a Brownian motion.

X(t) - X(0) has a normal distribution with mean μt and standard deviation σ√t

μ = drift rate - units: (length)/(time)

σ² = variance parameter - units: (length)²/(time)

D = σ²/2 = diffusion coefficient - units: (length)²/(time)

pt(x|x0) = density function of X(t) given X(0) = x0 = (1/√(2πσ²t)) e^(-(x-x0-μt)²/(2σ²t)) = (1/√(4πDt)) e^(-(x-x0-μt)²/(4Dt))

c(x,t) = concentration at position x and time t of particles that are moving (diffusing) according to a Brownian motion

∂c/∂t = D ∂²c/∂x² - μ ∂c/∂x

Geometric

X(t) = e^(Y(t)) where Y(t) is a regular Brownian motion, i.e. ln[X(t)] = Y(t) is a regular Brownian motion. Thus Y(t) - Y(0) has a normal distribution with mean μt and standard deviation σ√t.

E[ X(t) | X(0) = x0 ] = x0 e^(μt + σ²t/2)

Stock Option Pricing

X(t) = price of a stock. Assume it is a geometric Brownian motion with ln[X(t)] a regular Brownian motion with drift parameter μ and variance parameter σ². Unless stated otherwise, assume t is measured in years.

σ = volatility of stock

α = (continually compounded) annual interest rate, i.e. an amount M invested now would be worth e^(αt)M at a time t in the future, while a payment of v at a time t in the future would be worth e^(-αt)v now – this is called the present value of a payment of v at a time t in the future.

μ = α - σ²/2 if the expected present value of the stock price is constant in time. This assumption is often made in options pricing.

call option: allows the holder (called the buyer) to buy an item (e.g. a share of a certain stock) at a certain future date for a certain price K (called the strike price). However, the holder is not obligated to buy the item. (This is actually a European call option. An American call option allows the holder to buy the item at any time before a certain future date for a certain price.)

Value of option = Present value of expected gain to the buyer of a stock option assuming μ = α - σ²/2

= E[ e^(-αt) (X(t) - K)⁺ | X(0) = x0 ]

= x0 Φ(-b + σ√t) - Ke^(-αt) Φ(-b)

b = (ln(K/x0) - μt)/(σ√t)        Φ(-b) = probability price exceeds K at time t

Ke^(-αt) Φ(-b) = present value of what one expects to pay to buy the stock at time t under terms of option

x0 Φ(-b + σ√t) = present value of what one expects to make from selling the stock under terms of option.

Φ(x) = cumulative distribution function of a normal random variable with mean zero and standard deviation one.
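
A sketch of the option-value formula above (added illustration; the inputs are arbitrary), with Φ computed from math.erf:

```python
import math

def Phi(x):                     # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def option_value(x0, K, t, sigma, alpha):
    mu = alpha - sigma**2 / 2   # pricing assumption from above
    b = (math.log(K / x0) - mu * t) / (sigma * math.sqrt(t))
    return x0 * Phi(-b + sigma * math.sqrt(t)) - K * math.exp(-alpha * t) * Phi(-b)

print(option_value(x0=100.0, K=105.0, t=0.5, sigma=0.3, alpha=0.05))  # ~7.4
```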
