Formulas



Trigonometric Identities:

sin(x+y) = sin x cos y + cos x sin y        cos(x+y) = cos x cos y - sin x sin y

sin x sin y = (1/2)[cos(x-y) - cos(x+y)]        cos x cos y = (1/2)[cos(x-y) + cos(x+y)]

sin x cos y = (1/2)[sin(x+y) + sin(x-y)]

sin 2x = 2 sin x cos x        cos 2x = cos²x - sin²x

sinh x = (e^x - e^(-x))/2        cosh x = (e^x + e^(-x))/2

Curves:

x²/a² + y²/b² = 1 ellipse        x²/a² - y²/b² = 1 hyperbola

Derivatives and Integrals:

S(x,a) = sensitivity of x with respect to a

S(x,a) = (dx/da)(a/x)

d/dx sin⁻¹x = 1/√(1 - x²)        d/dx cos⁻¹x = -1/√(1 - x²)        d/dx tan⁻¹x = 1/(1 + x²)

d/dx tan x = sec²x        d/dx sec x = sec x tan x        d/dx a^x = a^x ln(a)

dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt) (Chain rule)        ∇z = (∂z/∂x)i + (∂z/∂y)j (Gradient)

∫u dv = uv - ∫v du (integration by parts)        ∫x e^(-ax) dx = -(x/a + 1/a²) e^(-ax)

∫x sin x dx = -x cos x + sin x        ∫x cos x dx = x sin x + cos x

Maxima and Minima:

Quadratic functions: z = Ax² + Bxy + Cy² + Dx + Ey + F

B² - 4AC > 0 ⇒ hyperbolic paraboloid (saddle surface)

B² - 4AC < 0 & A > 0 ⇒ elliptic paraboloid opening up (bowl shaped)

B² - 4AC < 0 & A < 0 ⇒ elliptic paraboloid opening down (upside-down bowl)

B² - 4AC = 0 ⇒ parabolic cylinder (trough shaped)

General functions: z = f(x,y) (Let A = ∂²z/∂x², B = ∂²z/∂x∂y, C = ∂²z/∂y²)

Interior maximum or minimum ⇒ ∂z/∂x = 0 and ∂z/∂y = 0.

∂z/∂x = 0 and ∂z/∂y = 0 and AC > B² and A > 0 and C > 0 ⇒ local minimum

∂z/∂x = 0 and ∂z/∂y = 0 and AC > B² and A < 0 and C < 0 ⇒ local maximum

∂z/∂x = 0 and ∂z/∂y = 0 and AC < B² ⇒ saddle point
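The second-derivative test above is easy to mechanize. A minimal sketch (the helper name classify and the sample values are illustrative, not part of the formula sheet):

```python
def classify(A, B, C):
    """Second-derivative test: A = z_xx, B = z_xy, C = z_yy at a critical point."""
    if A * C > B * B:
        return "local minimum" if A > 0 else "local maximum"
    if A * C < B * B:
        return "saddle point"
    return "test inconclusive"

# z = x^2 + y^2 has z_xx = 2, z_xy = 0, z_yy = 2 at (0,0): a local minimum.
print(classify(2, 0, 2))    # local minimum
print(classify(-2, 0, -2))  # local maximum
print(classify(2, 0, -2))   # saddle point
```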

Newton’s method

Problem: Solve h(z) = 0, where h(z) is either a real-valued function of a real variable z or a vector-valued function of a vector z.

Solution: Make a guess z0 of a solution. Then a revised estimate is

z1 = z0 - [h'(z0)]⁻¹ h(z0)

This can be repeated with z0 replaced by z1 to get another estimate z2, and so on, to generate a sequence zn of estimates. If one starts close enough to a true solution z*, then the zn converge to z*, provided h'(z*) has an inverse.

If z = (x, y) and h(x,y) = (f(x,y), g(x,y)), then h'(x,y) is the Jacobian matrix with rows (∂f/∂x, ∂f/∂y) and (∂g/∂x, ∂g/∂y).
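For a real-valued h, the update above reduces to z1 = z0 - h(z0)/h'(z0). A minimal sketch of the scalar iteration (the function names and the example equation z² - 2 = 0 are illustrative):

```python
def newton(h, dh, z0, tol=1e-12, max_iter=50):
    """Iterate z <- z - h(z)/h'(z) until |h(z)| < tol."""
    z = z0
    for _ in range(max_iter):
        z = z - h(z) / dh(z)
        if abs(h(z)) < tol:
            break
    return z

# Solve z^2 - 2 = 0 starting from z0 = 1; the iterates converge to sqrt(2).
root = newton(lambda z: z * z - 2, lambda z: 2 * z, 1.0)
```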

Lagrange multipliers: Optimize z = f(x,y) subject to g(x,y) = c. Solve the equations

∂f/∂x = λ ∂g/∂x        ∂f/∂y = λ ∂g/∂y        g(x,y) = c

If x = x(c), y = y(c), λ = λ(c) is a solution and z = z(c) = f(x(c), y(c)) is the optimal value of the objective function, then dz/dc = λ.
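A hypothetical worked example, chosen for illustration: maximize f(x,y) = xy subject to x + y = c. The Lagrange equations y = λ, x = λ, x + y = c give x = y = λ = c/2, and the snippet checks dz/dc = λ by a central finite difference:

```python
def solve(c):
    """Analytic solution of: maximize x*y subject to x + y = c (illustrative)."""
    lam = c / 2.0          # Lagrange multiplier
    x = y = lam            # optimal point
    z = x * y              # optimal objective value, c**2 / 4
    return x, y, lam, z

c = 4.0
x, y, lam, z = solve(c)
eps = 1e-6
# dz/dc estimated by a central difference; it should equal the multiplier lam.
dz_dc = (solve(c + eps)[3] - solve(c - eps)[3]) / (2 * eps)
```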

Linear Programming: Maximize or minimize a linear function z = z0 + c1x1 + … + cnxn where the variables x1, x2, …, xn are subject to a collection of linear constraints. A typical constraint has one of the following forms:

a1x1 + … + anxn ≤ b,        a1x1 + … + anxn ≥ b,        a1x1 + … + anxn = b

It is possible to transform any linear programming problem into the following form.

Standard Maximum Problem: Maximize z = c1x1 + … + cnxn subject to

a11x1 + … + a1nxn ≤ b1

⋮

am1x1 + … + amnxn ≤ bm

x1 ≥ 0, ... , xn ≥ 0.

Once it is in this form one can find the solution using the following algorithm.

Simplex method: Introduce slack variables xn+1, xn+2, …, xn+m

xn+1 = b1 - a11x1 - … - a1nxn

⋮

xn+m = bm - am1x1 - … - amnxn

The problem can now be phrased as

Maximize z = c1x1 + … + cn+mxn+m subject to

a11x1 + … + a1,n+mxn+m = b1

⋮

am1x1 + … + am,n+mxn+m = bm

x1 ≥ 0, ... , xn+m ≥ 0.

with cn+1 = … = cn+m = 0, ai,n+j = 1 if i = j, and ai,n+j = 0 if i ≠ j. We start out with x1, x2, …, xn as the non-basic variables and xn+1, xn+2, …, xn+m as the basic variables.

Let ci = max{c1, ..., cn+m}. If ci ≤ 0, then the maximum is z = z0 when the non-basic variables are 0 and the basic variables have the values on the right sides of the constraints. If ci > 0, then let Qk = bk/aik for 1 ≤ k ≤ m. If all the Qk are negative or undefined (if aik = 0), then z is unbounded and one can stop. Otherwise, let Qj be the minimum of the positive Qk. Solve the jth constraint for xi. Substitute this formula for xi into the equation for z and the other constraints. The variable xi becomes a basic variable, and whatever basic variable appears in the jth constraint becomes a non-basic variable. Repeat this process until all the cj are non-positive or one has determined that z is unbounded.
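The simplex method always ends at a vertex of the feasible region. The sketch below is not the simplex itself but a brute-force check of that fact for a small made-up problem (maximize z = 2x + 3y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0): it intersects every pair of constraint boundaries and keeps the best feasible vertex.

```python
from itertools import combinations

# Each constraint written as a*x + b*y <= rhs (x >= 0 becomes -x <= 0, etc.).
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]
c1, c2 = 2, 3  # maximize z = 2x + 3y (illustrative coefficients)

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= r + tol for a, b, r in cons)

best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines, no vertex
    x = (r1 * b2 - r2 * b1) / det  # Cramer's rule for the 2x2 system
    y = (a1 * r2 - a2 * r1) / det
    if feasible(x, y):
        z = c1 * x + c2 * y
        if best is None or z > best[0]:
            best = (z, x, y)
```

For this data the winning vertex is (3, 1) with z = 9, which the simplex would also reach.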

If, when you reach the end, one of the original variables xi is still a non-basic variable and the coefficient of this variable in the final objective function is -p, then raising the original coefficient of xi by a little more than p will cause xi to become a basic variable in the final solution.

If, when you reach the end, one of the slack variables xi is still a non-basic variable and the coefficient of this variable in the final objective function is -p, then being able to "purchase" additional amounts of whatever this variable measures the slack of, at less than p per unit, will increase the profit.

The dual of the standard maximum problem is the following

Standard Minimum Problem: Minimize w = b1p1 + … + bmpm subject to

a11p1 + … + am1pm ≥ c1

⋮

a1np1 + … + amnpm ≥ cn

p1 ≥ 0, ... , pm ≥ 0.

Differential and Difference Equations:

Single Differential Equations: dx/dt = f(x)

ξ is an equilibrium point ⇔ x(t) = ξ for all t is a solution ⇔ f(ξ) = 0.

In the following we assume ξ is an equilibrium point.

ξ is a sink ⇔ any solution that starts near ξ approaches ξ as t → ∞

⇔ f(x) changes from positive to negative as x crosses ξ from negative to positive.

ξ is a source ⇔ any solution that starts near ξ moves away from ξ as t increases

⇔ f(x) changes from negative to positive as x crosses ξ from negative to positive.

ξ is neither ⇔ solutions that start near ξ on one side of ξ approach ξ as t → ∞ and solutions that start near ξ on the other side of ξ move away from ξ as t increases

⇔ f(x) has the same sign for all values of x near ξ.

Derivative test: f'(ξ) < 0 ⇒ ξ is a sink        f'(ξ) > 0 ⇒ ξ is a source
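A quick numerical illustration of sinks and sources, using Euler's method on the logistic equation dx/dt = x(1 - x) (an example chosen for this sketch): ξ = 1 is a sink (f'(1) = -1 < 0) and ξ = 0 is a source (f'(0) = 1 > 0).

```python
def euler(f, x0, dt=0.01, steps=2000):
    """Approximate x(t) at t = dt*steps for dx/dt = f(x), x(0) = x0."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

f = lambda x: x * (1 - x)  # equilibria at 0 and 1; f'(0) = 1, f'(1) = -1

# Solutions starting on either side of the sink at 1 approach it;
# a solution starting just above the source at 0 moves away from 0.
near_sink_below = euler(f, 0.5)
near_sink_above = euler(f, 1.5)
away_from_source = euler(f, 0.01)
```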

Single Difference Equations: xn+1 = f(xn)

ξ is an equilibrium point ⇔ xn = ξ for all n is a solution ⇔ f(ξ) = ξ.

In the following we assume ξ is an equilibrium point.

ξ is a sink ⇔ any solution that starts near ξ approaches ξ as n → ∞

ξ is a source ⇔ any solution that starts near ξ moves away from ξ as n increases

ξ is neither ⇔ solutions that start near ξ on one side of ξ approach ξ as n → ∞ and solutions that start near ξ on the other side of ξ move away from ξ as n increases

Derivative test:

| f'(ξ) | < 1 ⇒ ξ is a sink        | f'(ξ) | > 1 ⇒ ξ is a source

If f'(ξ) = 1 then

y = f(x) crosses the line y = x from above to below as x crosses ξ from negative to positive ⇒ ξ is a sink

y = f(x) crosses the line y = x from below to above as x crosses ξ from negative to positive ⇒ ξ is a source

y = f(x) stays on one side of the line y = x for x near ξ ⇒ ξ is neither

If f'(ξ) = -1, then consider the sequence yn defined by yn = x2n. It satisfies the difference equation yn+1 = g(yn), where g(y) = f( f(y) ). Note that ξ is also an equilibrium point for the equation yn+1 = g(yn) and that g'(ξ) = 1, so the f'(ξ) = 1 tests above apply to g. If ξ is a sink for yn+1 = g(yn), then it is also a sink for xn+1 = f(xn). If ξ is a source for yn+1 = g(yn), then it is also a source for xn+1 = f(xn).
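The derivative test is easy to see numerically. For the illustrative map f(x) = 2.5x(1 - x), the fixed point ξ = 0.6 has f'(0.6) = -0.5, so |f'(ξ)| < 1 and ξ is a sink, while ξ = 0 has f'(0) = 2.5 and is a source:

```python
def iterate(f, x0, n=200):
    """Return x_n for the difference equation x_{k+1} = f(x_k), x_0 = x0."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: 2.5 * x * (1 - x)  # fixed points 0 (source) and 0.6 (sink)

at_sink = iterate(f, 0.3)        # orbit is drawn to the sink at 0.6
left_source = iterate(f, 0.01, 10)  # orbit quickly leaves the source at 0
```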

Eigenvalues and Eigenvectors

λ is an eigenvalue of A ⇔ Au = λu for some u ≠ 0 ⇔ det(A - λI) = 0. Let the eigenvalues of A be λ1, λ2, …, λp.

X is an eigenvector of A corresponding to the eigenvalue λ ⇔ AX = λX ⇔ (A - λI)X = 0. Let X1, X2, …, Xp be eigenvectors of A corresponding to λ1, λ2, …, λp.

A = TDT⁻¹        T = matrix whose columns are the eigenvectors of A,

D = diagonal matrix with the eigenvalues of A on the diagonal.

A^n = TD^nT⁻¹        D^n = diagonal matrix with the powers of the eigenvalues on the diagonal.

e^(tA) = Te^(tD)T⁻¹        e^(tD) = diagonal matrix whose diagonal entries are e^(λjt).
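A small check of A^n = TD^nT⁻¹ in pure Python, using the illustrative matrix A = [[2, 1], [1, 2]], which has eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, -1):

```python
def matmul(A, B):
    """Product of two 2x2 matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]             # eigenvalues 3 and 1
T = [[1, 1], [1, -1]]            # columns are the eigenvectors
Tinv = [[0.5, 0.5], [0.5, -0.5]] # inverse of T, computed by hand
D5 = [[3 ** 5, 0], [0, 1 ** 5]]  # D^5

A5_spectral = matmul(T, matmul(D5, Tinv))  # T D^5 T^{-1}

A5_direct = A                    # A^5 by repeated multiplication
for _ in range(4):
    A5_direct = matmul(A5_direct, A)
```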

Linear Homogeneous Difference Equations

un+1 = Aun ⇒ un = A^n u0

Linear Homogeneous Differential Equations

du/dt = Au ⇒ u(t) = e^(tA) u(0)

Nonlinear Systems of Differential Equations

dx/dt = f(x,y)        dy/dt = g(x,y)

The phase plane is the plane of the dependent variables. In this case it is the xy-plane.

If (x(t), y(t)) is a solution to the system then the corresponding trajectory is the directed curve in the phase plane defined by (x, y) = (x(t), y(t)) as t varies.

The x-nullclines are the curves in the phase plane defined by dx/dt = 0, i.e. f(x,y) = 0.

The y-nullclines are the curves in the phase plane defined by dy/dt = 0, i.e. g(x,y) = 0.

(x*, y*) is an equilibrium point

⇔ (x(t), y(t)) = (x*, y*) for all t is a solution to the system.

⇔ f(x*, y*) = 0 and g(x*, y*) = 0

⇔ (x*, y*) is a point of intersection of an x-nullcline and a y-nullcline.

Linearization: Let (x*, y*) be an equilibrium point and let A be the Jacobian matrix with rows (∂f/∂x, ∂f/∂y) and (∂g/∂x, ∂g/∂y), where the partial derivatives are evaluated at (x*, y*). The phase portrait near (x*, y*) is similar to the phase portrait of the linear system du/dt = Au near the origin, provided all the eigenvalues of A have non-zero real part. In particular,

i. If both eigenvalues are negative, the equilibrium point is a stable node.

ii. If both eigenvalues are positive, the equilibrium point is an unstable node.

iii. If one eigenvalue is positive and the other negative, it is a saddle.

iv. If the eigenvalues are complex with negative real part, it is a stable spiral.

v. If the eigenvalues are complex with positive real part, it is an unstable spiral.
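Cases i–v can be decided directly from the eigenvalues of the 2×2 Jacobian, which follow from the quadratic formula applied to its trace and determinant. A minimal sketch (the helper name classify is illustrative):

```python
import cmath

def classify(A):
    """Classify the equilibrium of du/dt = Au from the eigenvalues of 2x2 A."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1 = (tr + disc) / 2  # the two eigenvalues
    l2 = (tr - disc) / 2
    if l1.imag != 0:
        if l1.real < 0:
            return "stable spiral"
        if l1.real > 0:
            return "unstable spiral"
        return "center (linearization inconclusive)"
    r1, r2 = l1.real, l2.real
    if r1 < 0 and r2 < 0:
        return "stable node"
    if r1 > 0 and r2 > 0:
        return "unstable node"
    if r1 * r2 < 0:
        return "saddle"
    return "degenerate (zero eigenvalue)"
```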

Sums and Counting

1 + x + x² + ... + x^n + ... = 1/(1 - x), |x| < 1        1 + x + x² + ... + x^n = (1 - x^(n+1))/(1 - x)

1 + 2x + 3x² + ... + nx^(n-1) + ... = 1/(1 - x)², |x| < 1        1 + 2x + 3x² + ... + nx^(n-1) = (1 - (n+1)x^n + nx^(n+1))/(1 - x)²

x + 2x² + 3x³ + ... + nx^n + ... = x/(1 - x)², |x| < 1

# of ways to select r objects from n objects with order being taken into account = n!/(n - r)!

# of ways to select r objects from n objects disregarding the order in which they are selected = C(n, r) = n!/(r!(n - r)!)
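Both counting formulas are in Python's standard library, and the finite geometric-series formula can be spot-checked numerically (the values x = 0.5, n = 10 below are arbitrary choices for this sketch):

```python
import math

# Ordered selections: n!/(n-r)!   Unordered selections: n!/(r!(n-r)!)
perms = math.perm(5, 2)  # ways to pick 2 of 5 when order matters
combs = math.comb(5, 2)  # ways to pick 2 of 5 when order is ignored

# Spot-check of 1 + x + ... + x^n = (1 - x^(n+1))/(1 - x) at x = 0.5, n = 10
x, n = 0.5, 10
finite_sum = sum(x ** k for k in range(n + 1))
closed_form = (1 - x ** (n + 1)) / (1 - x)
```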

Probability

S = sample space = set of all possible outcomes

a = symbol used to represent a typical outcome (i.e. something one observes)

event = a set of outcomes.

Pr{A} = probability of outcome being in A, if A is an event. If one were to make many independent observations then the fraction of times the outcome was in A should be close to Pr{A}.

Addition Formulas

Pr{A∪B} = Pr{A} + Pr{B}, if A and B are disjoint.

Pr{A∪B} = Pr{A} + Pr{B} - Pr{A∩B}

Pr{E1 ∪ … ∪ En} = Pr{E1} + … + Pr{En}, if E1, …, En are disjoint.

Pr{Ac} = 1 - Pr{A}, where Ac = complement of A.

Conditional Probability, Multiplication Formulas, and Independence

Pr{A | B} = Pr{A∩B} / Pr{B}

Pr{A∩B} = Pr{B} Pr{A | B} (multiplication formula)

Pr{E1 ∪ … ∪ En | B} = Pr{E1 | B} + … + Pr{En | B}, if E1, …, En are disjoint.

Pr{A∩B | C} = Pr{B | C} Pr{A | B∩C}

A and B are independent ⇔ Pr{A | B} = Pr{A} ⇔ Pr{A∩B} = Pr{A} Pr{B}
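These definitions can be checked by brute-force enumeration over a finite sample space. An illustrative example with two fair dice: A = "first die is even" and B = "sum is 7" turn out to be independent.

```python
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]  # sample space S

def pr(event):
    """Probability of an event (a predicate on outcomes) by counting."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 0     # first die shows an even number
B = lambda o: o[0] + o[1] == 7  # the two dice sum to 7

pr_A = pr(A)
pr_B = pr(B)
pr_AB = pr(lambda o: A(o) and B(o))
pr_A_given_B = pr_AB / pr_B     # conditional probability formula
```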

Random Variables

A random variable X assigns a value X(a) to each outcome a.

Pr{X ≤ x} is short for Pr{a: X(a) ≤ x} = the probability the value of the random variable is ≤ x.

F(x) = FX(x) = Pr{X ≤ x} = cumulative distribution function (CDF) of X.

X and Y are independent ⇔ Pr{X ≤ x | Y ≤ y} = Pr{X ≤ x} for all x, y

⇔ Pr{X ≤ x and Y ≤ y} = Pr{X ≤ x} Pr{Y ≤ y} for all x, y

Discrete Random Variables

X is discrete ⇔ X assumes only a finite or countably infinite set of values.

f(k) = fX(k) = Pr{X = k} = F(k) - F(k-) = probability mass function (PMF) of X.

F(x) = Σ f(k), summed over k ≤ x

μ = μX = E(X) = Σ k f(k) = expectation of X = mean of X = average of X

E( h(X) ) = Σ h(k) f(k) = expectation of h(X).

σ² = σX² = E( (X - μ)² ) = Σ (k - μ)² f(k) = variance of X = E(X²) - μ² = Σ k² f(k) - μ²

σ = σX = √(variance of X) = standard deviation of X

X and Y are independent ⇔ Pr{X = j | Y = k} = Pr{X = j} for all j, k

⇔ Pr{X = j and Y = k} = Pr{X = j} Pr{Y = k} for all j, k
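The mean and both variance formulas above can be checked exactly for a small PMF; a fair six-sided die serves as the illustrative example:

```python
from fractions import Fraction

f = {k: Fraction(1, 6) for k in range(1, 7)}  # PMF of a fair die

mu = sum(k * p for k, p in f.items())                  # E(X)
var = sum((k - mu) ** 2 * p for k, p in f.items())     # E((X - mu)^2)
var_alt = sum(k * k * p for k, p in f.items()) - mu ** 2  # E(X^2) - mu^2
```

Both routes give the same variance, as the shortcut formula promises.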

Continuous Random Variables

X is continuous ⇔ Pr{a ≤ X ≤ b} = ∫_a^b f(x) dx for some function f(x)

f(x) = F'(x) = probability density function (PDF) of X.

F(a) = ∫_-∞^a f(x) dx

μ = μX = E(X) = ∫_-∞^∞ x f(x) dx = expectation of X = mean of X

E( h(X) ) = ∫_-∞^∞ h(x) f(x) dx = expectation of h(X).

σ² = σX² = E( (X - μ)² ) = ∫_-∞^∞ (x - μ)² f(x) dx = variance of X = E(X²) - μ² = ∫_-∞^∞ x² f(x) dx - μ²

σ = σX = √(variance of X) = standard deviation of X

X and Y are independent ⇔ Pr{X ≤ x and Y ≤ y} = Pr{X ≤ x} Pr{Y ≤ y} for all x, y

⇔ fX,Y(x, y) = fX(x) fY(y)

Z = X + Y and X and Y independent ⇒

fZ(z) = ∫_-∞^∞ fX(x) fY(z - x) dx

If fX(x) = 0 for x < 0 and fY(y) = 0 for y < 0, then fZ(z) = 0 for z < 0 and

fZ(z) = ∫_0^z fX(x) fY(z - x) dx for z ≥ 0.
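As an illustrative check of the convolution formula, the snippet below approximates the integral on [0, z] by a midpoint Riemann sum for two independent exponential densities with rate λ = 2 and compares it with the known sum density (the Erlang density with n = 2 from the next section); the rate and evaluation point are arbitrary choices:

```python
import math

lam = 2.0
fX = lambda x: lam * math.exp(-lam * x) if x >= 0 else 0.0  # exponential PDF
fY = fX  # second, independent exponential with the same rate

def f_Z(z, steps=20000):
    """Midpoint Riemann sum for the convolution integral on [0, z]."""
    if z <= 0:
        return 0.0
    dx = z / steps
    return sum(fX((i + 0.5) * dx) * fY(z - (i + 0.5) * dx)
               for i in range(steps)) * dx

z = 1.5
erlang2 = lam ** 2 * z * math.exp(-lam * z)  # Erlang density, n = 2
```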

Common Distributions

Binomial:

f(k) = C(n, k) p^k (1 - p)^(n-k) for k = 0, 1, 2, …, n

= Pr{k successes in n independent trials if probability of success = p on each trial}

E(N) = np        σ = √(np(1 - p))

Geometric:

f(k) = p (1 - p)^(k-1) for k = 1, 2, ….

= Pr{k trials are required to get a success if probability of success = p on each trial}

E(N) = 1/p        σ = √(1 - p) / p

Poisson:

f(k) = e^(-λ) λ^k / k! for k = 0, 1, 2, ...

= Pr{there are k occurrences of some event (e.g. an arrival) in a given time period}

E(N) = λ = average number of occurrences        σ = √λ

Important example: If the times between occurrences of some event have exponential distribution with occurrence rate r and are independent, then the number of occurrences in time t has Poisson distribution with parameter λ = rt.

Uniform–continuous:

f(x) = 1/(b - a) for a ≤ x ≤ b

E(X) = (a + b)/2        σ = (b - a)/√12

Normal:

f(x) = (1/(σ√(2π))) e^(-(x-μ)²/2σ²)

E(X) = μ        standard deviation = σ

Exponential:

f(t) = λe^(-λt) for t > 0

Distribution for times between occurrences of some event

E(T) = 1/λ = average time between occurrences        σ = 1/λ

λ = rate of occurrence of the event

Erlang:

f(t) = λ^n t^(n-1) e^(-λt) / (n - 1)! for t > 0

Distribution for the time until the nth occurrence of some event, i.e. density function of Sn = T1 + … + Tn where T1, …, Tn are independent exponential random variables all with occurrence rate λ.

E(Sn) = n/λ        σ = √n / λ

Central limit theorem

Let X1, X2, … , Xn, … be an infinite sequence of independent random variables with the same distribution. Let μ and σ be the mean and standard deviation of each Xn. Then for n large the distribution of S = X1 + X2 + … + Xn is approximately normal with mean nμ and standard deviation σ√n, and the distribution of (X1 + X2 + … + Xn)/n is approximately normal with mean μ and standard deviation σ/√n.
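A quick Monte Carlo illustration of the theorem; the Uniform(0, 1) distribution, the seed, and the sample sizes are arbitrary choices for this sketch. Averages of n = 30 uniforms should cluster around μ = 1/2 with spread close to σ/√n, where σ = 1/√12.

```python
import random
import statistics

random.seed(0)  # fixed seed so the experiment is reproducible
n, trials = 30, 10000
mu, sigma = 0.5, (1 / 12) ** 0.5  # mean and sd of Uniform(0, 1)

# Each entry is the average of n independent Uniform(0, 1) draws.
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

sample_mu = statistics.fmean(means)
sample_sd = statistics.stdev(means)  # CLT predicts roughly sigma / sqrt(n)
```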
