Econ 604 Advanced Microeconomics

Davis

Spring 2006

Reading. Chapter 3 (pp. 66-86) for today

Chapter 4 (pp. 91-113) for next time

Problems: To collect today: Chapter 2, problems 2.3, 2.5, 2.6, 2.9.

For next time: Chapter 3, 3.2, 3.4, 3.5, 3.7

Lecture #3.

REVIEW

II. The Mathematics of Optimization

A. Maximization of a Function of One Variable.

1. Necessary and sufficient conditions for a maximum.

Recall, our rule is that an interior optimum requires that the first derivative equal zero (the first-order condition), and that the second derivative be negative (for a maximum) or positive (for a minimum).

Example: f(x) = 150x - 5x^2

f'(x) = 150 - 10x

f''(x) = -10
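The example above can be checked numerically. A minimal sketch (my own illustration, not part of the text), confirming that the critical point x* = 15 satisfies both conditions:

```python
# Check the first- and second-order conditions for f(x) = 150x - 5x^2.
def f(x):
    return 150 * x - 5 * x ** 2

def f_prime(x):
    return 150 - 10 * x

x_star = 15          # solves the FOC: 150 - 10x = 0
print(f_prime(x_star))                            # 0
print(f(x_star))                                  # 1125
# f''(x) = -10 < 0 everywhere, so x* is a maximum:
print(f(x_star) > f(14) and f(x_star) > f(16))    # True
```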

B. Functions of Several Variables

1. Partial Derivatives. Given y = f(x1, …, xn), the partial derivative of y with respect to x1 is denoted ∂y/∂x1, or f1. Second-order partials are ∂²y/∂xi∂xj = fij = fji. Young's Theorem assures that the cross partial is independent of the order in which the derivatives are taken.

2. Maximizing Functions of Several Variables.

a. Total differentiation. Given several variables, the total differential is

dy = f1dx1+ f2dx2+… + fndxn

b. First-Order Condition. Critical points arise when dy = 0 for any choice of the dxi, which requires f1 = f2 = … = fn = 0.

c. Second -Order Condition. The same as the single variable case, except that there are “cross effects.” For a maximum or a minimum, own effects must dominate cross effects, or (in the 2 variable case)

f11 f22 - f12^2 > 0

Note: the same cross-effect condition f11 f22 - f12^2 > 0 holds for both a maximum and a minimum; the own effects distinguish the two cases (f11, f22 < 0 for a maximum; f11, f22 > 0 for a minimum).
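As a concrete illustration (my own example, not from the text), take f(x1, x2) = -x1^2 + x1 x2 - x2^2, which has a critical point at the origin:

```python
# Second-order check for f(x1, x2) = -x1^2 + x1*x2 - x2^2.
# First partials: f1 = -2x1 + x2, f2 = x1 - 2x2, so (0, 0) is critical.
def f(x1, x2):
    return -x1 ** 2 + x1 * x2 - x2 ** 2

f11, f22, f12 = -2.0, -2.0, 1.0      # constant second partials

# Own effects dominate cross effects:
print(f11 * f22 - f12 ** 2)          # 3.0 > 0
# With f11 < 0 this is a maximum; nearby points give lower values:
print(f(0, 0), f(0.1, 0.1), f(-0.2, 0.1))
```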

i. Implicit Functions. Note, we can write functions in "implicit form" (that is, an expression set equal to zero). Provided the conditions of the implicit function theorem hold (in particular, the partial derivative with respect to the variable being solved for is nonzero), we can solve for one variable in terms of the others.

ii. The Envelope Theorem. A major application of the implicit function theorem that we will use frequently. Consider an objective y = f(x, a), where x is a choice variable and a is a parameter, and let x*(a) denote the optimal choice of x, with y* = f(x*(a), a).

The envelope theorem states that for small changes in a, dy*/da can be computed by holding x constant at its optimal value x* and simply calculating ∂f/∂a evaluated at x = x*(a).
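A quick numerical sketch of this result (my own example, not from the text): let y = -x^2 + 2ax, so x*(a) = a and y*(a) = a^2, and both routes to dy*/da give 2a:

```python
# Envelope theorem check on y = f(x, a) = -x^2 + 2*a*x.
# Maximizing over x: x*(a) = a, y*(a) = a^2, so dy*/da = 2a.
def f(x, a):
    return -x ** 2 + 2 * a * x

a, h = 3.0, 1e-6
x_star = a                            # optimal x for this a

# Direct route: differentiate the optimized value y*(a) numerically
direct = (f(a + h, a + h) - f(a - h, a - h)) / (2 * h)
# Envelope route: partial derivative w.r.t. a only, x held at x*
envelope = (f(x_star, a + h) - f(x_star, a - h)) / (2 * h)

print(round(direct, 4), round(envelope, 4))   # both 6.0 = 2a
```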

C. Constrained Maximization. A problem that arises frequently in practice (e.g., consumers maximize utility subject to an income constraint).

1. Lagrangian Multiplier Method. In general, we optimize a function f(x1, …, xn) subject to a constraint on those variables, g(x1, …, xn) = 0. Write L = f(x1, …, xn) + λg(x1, …, xn) and take the first-order necessary conditions (FONC), a system of n+1 equations in n+1 unknowns (the n x's and λ).

2. Interpretation of the Lagrangian Multiplier. Each of the first-order conditions in the above system, fi + λgi = 0, may be solved for λ. That is, f1/(-g1) = f2/(-g2) = … = fn/(-gn) = λ, or

λ = marginal benefit of xi / marginal cost of xi

Observation: One way to better understand the role of λ at an intuitive level is to consider a constrained function of a single variable. Suppose f(x) = 10x - x^2, subject to the constraint x = 3.

Then, L = 10x - x^2 + λ(3 - x)

From FONC

10 - 2x = λ and 3 - x = 0

When x = 3, λ = 10 - 2(3) = 4. Suppose we relax the constraint and increase x to 4. Then

10 - 2x = λ and 4 - x = 0

and λ = 2. In (x, f(x)) space, this is just the slope of the line tangent to f(x).

Reasoning similarly, when x = 5 or more, λ = 0 and the constraint no longer binds. So λ is the marginal increase in the objective brought about by relaxing the constraint, as long as the constraint binds.
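The worked example lends itself to a short numerical check (a sketch of my own, not part of the text): λ falls from 4 to 2 to 0 as the constraint is relaxed from x = 3 to x = 5, tracking the marginal gain in the objective:

```python
# lambda as the marginal value of relaxing the constraint x = c,
# for f(x) = 10x - x^2 (unconstrained maximum at x = 5).
def f(x):
    return 10 * x - x ** 2

def lam(c):
    # From the FOC 10 - 2x = lambda with x = c, while the constraint binds;
    # once c reaches 5 the constraint stops binding and lambda = 0.
    return max(10 - 2 * c, 0)

for c in (3, 4, 5, 6):
    print(c, lam(c), f(min(c, 5)))
# c=3: lambda=4, f=21;  c=4: lambda=2, f=24;  c=5 and beyond: lambda=0, f=25
```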

3. Duality. Observe that for every constrained maximization problem there is an implied constrained minimization problem. (This is conceptually no more complicated than looking at a simple sum and realizing that there is an implied difference associated with it. Thus, for my 8-year-old daughter, given the problem 24 - 17 = 7, it was much easier to solve the dual problem 17 + 7 = 24.) We encounter the same relationship in optimization, albeit in more complex form.

4. Envelope Theorem in Constrained Maximization Problems. The envelope theorem applies just as it did for unconstrained problems. Find the optimal value of the dependent variable in terms of the parameters at the critical point, then take the derivative w.r.t. the parameter of interest.

5. Second Order Conditions with Constrained Optimization. A final observation. Optimize an objective f(x1, x2) subject to a linear constraint, g(x1, x2) =c - b1x1 - b2x2=0. Form the Lagrangian expression

L = f(x1, x2) + λ(c - b1x1 - b2x2)

Totally differentiating the objective twice and substituting in the constraint, we find that

d^2y = (f11 f2^2 - 2 f12 f1 f2 + f22 f1^2)(dx1^2 / f2^2)

For a maximum, d^2y must be negative, which requires f11 f2^2 - 2 f12 f1 f2 + f22 f1^2 < 0.
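To make the condition concrete, here is a small check (my own example, not from the text): maximize f(x1, x2) = x1 x2 subject to x1 + x2 = 8, where the FONC give x1 = x2 = 4 and λ = 4, and the quantity f11 f2^2 - 2 f12 f1 f2 + f22 f1^2 comes out negative:

```python
# Constrained SOC check for f(x1, x2) = x1*x2 s.t. 8 - x1 - x2 = 0.
# L = x1*x2 + lambda*(8 - x1 - x2); FONC: x2 = lambda, x1 = lambda,
# x1 + x2 = 8, so x1 = x2 = 4 and lambda = 4.
def f(x1, x2):
    return x1 * x2

x1, x2 = 4.0, 4.0
f1, f2 = x2, x1                      # first partials at the optimum
f11, f22, f12 = 0.0, 0.0, 1.0        # second partials (constant here)

soc = f11 * f2 ** 2 - 2 * f12 * f1 * f2 + f22 * f1 ** 2
print(soc)                           # -32.0 < 0: a constrained maximum
print(f(4, 4), f(3, 5), f(2, 6))     # 16 beats nearby feasible points
```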
