Econ 604 Advanced Microeconomics

Davis

Spring 2004

Reading. Chapter 2 (pp. 26-56) for today

Chapter 3 (pp. 66-86) for next time

Problems: You should be able to work the problems at the end of Chapter 2. (Brief answers to many of the problems appear in the back of the book.) I would like you to hand in complete answers to the following problems next time:

Handout for Chapter 1, problems 1 and 2.

Chapter 2, problems 2.3, 2.6, 2.7, and 2.9.

Lecture #2.

REVIEW

I. Introduction.

A. The logical status of theoretical models. Models are necessarily oversimplifications. We usually evaluate models indirectly, that is, in terms of predictive performance. Thus empirically testable implications are an essential ingredient of useful models.

B. General Features of Economic Models:

1. The Ceteris Paribus Assumption: We “abstract” away from reality by holding many things constant. This assumption becomes controversial when statistical rather than laboratory methods must be used to evaluate a model.

2. Optimization Assumptions: Optimization is useful because it allows for solutions, and thus testable implications.

3. Positive/Normative Distinction. Most of what we do will involve positive analysis (in as much as we can be “positive”)

C. Some Historical Perspective: The Development of the Economic Theory of Value. The notion of value as being determined by price is a consequence of marginal value theory, something essential to modern economic analysis.

D. Basic Models and Analytical Contexts.

1. Supply-Demand Equilibrium. We set up a very simple linear demand and supply model. Then we

a. Solved for an equilibrium.

b. Derived comparative statics effects (and solved for them).

2. General Equilibrium Analysis and the Production Possibilities Frontier. We used the production possibilities frontier to illustrate general social choices, including economic efficiency and increasing marginal opportunity cost.

II. Chapter 2. The Mathematics of Optimization. A chapter that reviews some of the primary mathematical tools that we will use in this course.

A. Maximization of a Function of One Variable.

1. The logic of a derivative

2. Some rules for derivatives

3. Necessary and sufficient conditions for a maximum. Recall, our rule is that optimization requires that the first derivative equal zero (the first order condition), and that the second derivative be negative (for a maximum) or positive (for a minimum).

Example: f(x) = 10x – x^2

f’(x) = 10 – 2x

f’’(x) = -2

Here x = 5 maximizes this function: x = 5 is a flat place, and the negative second derivative means you had been climbing ever more slowly as you approached it.

Example: f(x) = 26 + x^2 - 10x

f’(x) = 2x - 10

f’’(x) = 2

Here x = 5 minimizes this function: x = 5 is a flat place, and the positive second derivative means the slope is rising, so you had been descending ever more slowly as you approached it.
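These checks are easy to automate. Below is a minimal sketch in Python using the sympy library (my choice of tool, not part of the notes) that recovers the critical point and the sign of the second derivative for both examples:

    # Verify both examples: locate critical points and classify them.
    import sympy as sp

    x = sp.symbols('x')

    for f in (10*x - x**2, 26 + x**2 - 10*x):
        fprime = sp.diff(f, x)          # first derivative
        fsecond = sp.diff(f, x, 2)      # second derivative
        critical = sp.solve(fprime, x)  # solve f'(x) = 0
        print(critical, fsecond)        # [5] -2 (a max), then [5] 2 (a min)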

PREVIEW

B. Functions of Several Variables

1. Partial Derivatives

a. Intuition

b. Calculating Partial Derivatives

2. Maximizing Functions of Several Variables.

a. Total differentiation

b. Second order Conditions

c. Implicit Functions

d. The Envelope Theorem

C. Constrained Maximization

1. Lagrangian Multiplier Method.

2. Interpretation of the Lagrangian Multiplier

3. Duality

4. Envelope Theorem in Constrained Maximization Problems.

E. Final Comments

LECTURE____________________________________________

B. Functions of Several Variables. In most problems of economic interest agents optimize by choosing among several independent variables. Firms, for example, create output by using a mix of possible inputs. Households, similarly, optimize utility by selecting among a combination of possible products. Let’s review how to optimize in such contexts.

1. Partial Derivatives

a. Intuition

Generically, an agent faces the problem

maximize y = f(x1, x2, …, xn)

Conceptually, the agent is looking at an n-dimensional mound. One way to optimize would be for the agent to parachute to different points on the mound and then evaluate whether he or she is at a flat place in all directions. One could evaluate this by stepping in one direction at a time. Analytically, we could take a partial derivative in a particular direction by finding the slope in that dimension, treating the values of all other dimensions as constants.

Consider the partial derivative of y = f(x1, …, xn) with respect to x1. We denote this as ∂y/∂x1 = f1.

b. Calculating Partial Derivatives. As a practical matter, partial derivatives are calculated in much the same way as derivatives of a single variable.

Example: Consider the function y = f(x1, x2) = ax1^2 + bx1x2 + cx2^2

Then f1 = 2ax1 + bx2 and f2 = bx1 + 2cx2. We could solve for the optimum by simultaneously solving these first order conditions.
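A sketch of the same computation in sympy (the tool choice is an assumption, as above); the solver confirms that for generic a, b, c the only critical point is the origin:

    # Compute the partials f1, f2 and solve the first order conditions jointly.
    import sympy as sp

    x1, x2, a, b, c = sp.symbols('x1 x2 a b c')
    f = a*x1**2 + b*x1*x2 + c*x2**2

    f1 = sp.diff(f, x1)                  # 2*a*x1 + b*x2
    f2 = sp.diff(f, x2)                  # b*x1 + 2*c*x2
    print(sp.solve([f1, f2], [x1, x2]))  # {x1: 0, x2: 0}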

i. Second Order Partial Derivatives. The interpretation of second order partial derivatives parallels the single variable case. The only difference is that one may also calculate cross partial effects. Generally, we write

∂^2y/∂xi∂xj = fij

If i=j, then the second order partial derivative parallels the second order condition in the single variable case. Using the above example,

f11 = 2a, f22 = 2c

However, we can also calculate two additional cross partial derivatives:

f12 = b, f21 = b

These “cross-partial” derivatives assess the change in one direction as you move in another. In terms of climbing a mountain, if your primary variables were north/south and east/west, the cross partial derivative would tell you how much a slight movement west would affect your progress up the hill from the north. These cross partial effects are important for finding a maximum, a topic to which we’ll return at the end of this chapter.

ii. Young’s Theorem. One observation from the above example that does generalize is that fij = fji. Intuitively, going a little north and then a little west has the same effect on altitude as going first a little west and then a little north. This result is known as Young’s Theorem.
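A quick symbolic check of this symmetry (the particular function is arbitrary, chosen only for illustration):

    # Young's theorem: the order of differentiation does not matter.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    f = sp.exp(x1)*sp.sin(x2) + x1**3 * x2**2

    f12 = sp.diff(f, x1, x2)             # differentiate wrt x1, then x2
    f21 = sp.diff(f, x2, x1)             # differentiate wrt x2, then x1
    assert sp.simplify(f12 - f21) == 0   # fij = fji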

2. Maximizing Functions of Several Variables. Using the partial derivative, we can now talk about optimization of functions of several variables. We do this by elaborating on some notation from the single variable case. Given the function y = f(x), we could optimize by considering the consequences of a small change in x on y. That is,

dy = f’(x)dx

The necessary condition for a maximum is that f’(x) = 0. That is, y doesn’t change at all in response to a small change in x. (This is the differential form of the familiar point-slope formula.) We can develop the analogous expressions one variable at a time in the multiple variable case:

dy = f1dx1 (holding x2, …, xn constant)

Intuitively, if you’re traveling north (x1), then this partial derivative reflects the change in altitude (dy) from traveling dx1 units north.

i. The Total Differential. Now consider the effect on y of varying all the independent variables by a small amount:

dy = f1dx1 + f2dx2 + … + fndxn

This expression is called the total differential of f.

ii. First-Order Condition. The only way to be at a maximum or minimum is if dy = 0 for any combination of small changes. Alternatively stated, the total differential must equal zero. Since dy must vanish for every choice of the dxi’s, this condition holds only if

f1 = f2 = … = fn = 0.

Points where this condition holds are called critical points.

iii. Second-Order Condition. Analogous to the single variable case, a critical point can be a maximum, a minimum, or neither. Determining which requires satisfaction of a second order condition.

The second order condition is somewhat more complicated than in the single variable case. To see this, consider yet again the problem of climbing a mountain by moving north (x1) and west (x2). To be at an optimum, it must be the case that you are at a critical point (a “flat place”). Further, you must have been climbing ever more slowly in the north direction (f11 < 0) and in the west direction (f22 < 0). Even this is not sufficient, since cross partial effects can overturn concavity along diagonal paths. The complete second order conditions for a critical point to be a maximum are

f11 < 0, f22 < 0, and f11f22 - f12^2 > 0.
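As a sketch of checking these conditions, take the quadratic example from above with the illustrative values a = c = -1 and b = 1, so that the critical point (0, 0) should be a maximum:

    # Second order check: f11 < 0, f22 < 0, and f11*f22 - f12**2 > 0.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    f = -x1**2 + x1*x2 - x2**2           # a = c = -1, b = 1

    H = sp.hessian(f, (x1, x2))          # matrix of second order partials
    f11, f12, f22 = H[0, 0], H[0, 1], H[1, 1]
    print(f11 < 0, f22 < 0, f11*f22 - f12**2 > 0)   # True True True: a maximum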

a. Implicit Functions. Recall that in our last lecture we observed that the normal expression for demand was Q = f(P, K), where K represents a number of other variables that are held constant. For the purposes of graphing, however, we needed to solve for inverse demand, that is, an expression P = g(Q, K). Situations like these arise frequently in economics. In such instances, it is often useful to move all the variables to one side of an equation that equals zero. Then, provided that appropriate underlying conditions hold, we can develop relationships between the variables of interest by total differentiation.

Example. Consider the function

y = 3x + 5.

In implicit function form, we write

y - 3x – 5 = 0

To find the effects of a change in x on y, totally differentiate this implicit function:

dy – 3dx = 0 or dy/dx = 3.

Similarly, dx/dy = 1/3.

Now, in this case it is so easy to solve explicitly for x and y that direct expressions are most easily used. In more complicated instances, however, solutions are not so easy. Notice that the function f(x, y, m) = 0 (with m a constant) is totally differentiated as

fxdx + fydy = 0

Thus,

dy/dx = -fx/fy

This is a very useful result: in any implicit function (satisfying the conditions of the implicit function theorem), the effect of a change in variable x on another variable y equals the negative of the ratio of the partial derivatives, fx/fy.

Example: Consider the production possibility frontier

2x^2 + y^2 = 225.

In implicit function form,

2x^2 + y^2 - 225 = 0.

Totally differentiating,

4xdx + 2ydy = 0.

Thus,

dy/dx = -fx/fy = -4x/2y = -2x/y
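sympy can carry out implicit differentiation directly; a sketch using its idiff helper reproduces the slope of the frontier:

    # dy/dx along the frontier 2x^2 + y^2 = 225, from the implicit form.
    import sympy as sp

    x, y = sp.symbols('x y')
    frontier = 2*x**2 + y**2 - 225       # implicit function form

    dydx = sp.idiff(frontier, y, x)      # computes -fx/fy
    print(sp.simplify(dydx))             # -2*x/y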

A final observation: when can one use the implicit function theorem? It is necessary that there be a unique (one to one and onto) relationship between the variables. We won’t develop the appropriate conditions here, but it turns out that the same second order conditions necessary for a critical value to be an optimum suffice.

b. The Envelope Theorem. Now we consider a major application of the implicit function theorem that we will use frequently. Consider an implicit function f(x, y, a) = 0, where x and y are variables and a is a parameter. We can find optimal values for x and y in terms of a. But what happens if the parameter changes? The envelope theorem provides a straightforward way to answer this question.

Application: Suppose you are the manager of a competitive firm, and you wish to know minimum per unit costs given a cost condition TC = q^2 + k, where q denotes output and k denotes fixed costs. How do minimum unit costs y change with changes in fixed cost conditions?

Development. Write the above function

y = f(q, k) = (q^2 + k)/q

= q + k/q

A standard, but rather lengthy, way to solve this problem involves finding the optimal q expressed in terms of k, and then expressing the objective in terms of k.

To find the optimal value of q, take a derivative and solve:

f’(q) = 1 - k/q^2 = 0

Thus the optimal value of q, which we denote q*, is q* = k^(1/2). Now, to consider the effects of changes in fixed costs on minimum per unit cost, insert q* into f(q, k):

y = k^(1/2) + k^(1/2) = 2k^(1/2)

Now, taking the derivative of y with respect to the parameter k,

dy/dk = k^(-1/2)

Thus, for example, if k = 4, then q* = 2 and dy/dk = 1/2.

The envelope shortcut. The envelope theorem states that for small changes in k, dy*/dk can be computed by holding q constant at its optimal value q* and simply calculating ∂y/∂k evaluated at q = q*(k). Application:

y = f(q, k) = (q^2 + k)/q

∂y/∂k = 1/q

Now, at an optimum, q* = k^(1/2). Thus,

∂y/∂k{q = q*(k)} = 1/k^(1/2) = k^(-1/2)

as before.
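A sketch confirming that the lengthy route and the envelope shortcut agree for this example (sympy again, by assumption):

    # Compare substitute-then-differentiate with the envelope shortcut.
    import sympy as sp

    q, k = sp.symbols('q k', positive=True)
    y = (q**2 + k)/q                           # per unit cost

    qstar = sp.solve(sp.diff(y, q), q)[0]      # q* = sqrt(k)
    long_route = sp.diff(y.subs(q, qstar), k)  # substitute q*, then d/dk
    shortcut = sp.diff(y, k).subs(q, qstar)    # d/dk first, then hold q = q*
    print(sp.simplify(long_route - shortcut))  # 0: both equal k**(-1/2)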

The many variable case. The envelope theorem is particularly useful for problems involving many variables. Consider the general function

y = f(x1, …, xn, a)

To find an optimal value, you would take a series of n first order conditions, f1 = 0, …, fn = 0, and solve for optimal values x1*, …, xn*. These values, however, are sensitive to (and, if the conditions of the implicit function theorem hold, functions of) a. Thus, we can write x1*(a), …, xn*(a). Substituting these optimal values back into the objective yields an expression for the optimal value in terms of a,

y* = f(x1*(a), …, xn*(a), a)

Totally differentiating this expression with respect to a

dy*/da = f1(dx1*/da) + … + fn(dxn*/da) + fa

But, because the first order conditions for all the independent variables have been satisfied, f1 = … = fn = 0. Thus,

dy*/da = fa{x1*, …, xn*}

Example: Consider again the problem y = f(x1, x2) = -(x1 - 3)^2 - (x2 - 4)^2 + 25. This might be interpreted as a health status problem, where x1 and x2 are quantities of medication and 25 is a general health level. Taking first order conditions and optimizing, we found that

x1* = 3, x2* = 4, and

y* = 25

Suppose now we change 25 to an arbitrary parameter a.

y = f(x1, x2, a) = -(x1 - 3)^2 - (x2 - 4)^2 + a.

In this case the optimal values x1* and x2* do not depend on a; thus

dy*/da = 1.

Intuitively, a change in general health level improves health status at a 1 for 1 rate.
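A sketch of this calculation (again in sympy, by assumption):

    # dy*/da = fa at the optimum for y = -(x1-3)^2 - (x2-4)^2 + a.
    import sympy as sp

    x1, x2, a = sp.symbols('x1 x2 a')
    y = -(x1 - 3)**2 - (x2 - 4)**2 + a

    sol = sp.solve([sp.diff(y, x1), sp.diff(y, x2)], [x1, x2])  # {x1: 3, x2: 4}
    ystar = y.subs(sol)              # y* = a
    print(sp.diff(ystar, a))         # 1, as claimed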

C. Constrained Maximization. To this point, we have maximized objective functions in the absence of any restrictions on admissible values for the independent variables. However, in many instances of economic relevance, restrictions are important. Consumers, for example, maximize utility subject to a budget constraint. Sellers maximize profits subject to a cost constraint. In this section we review the most standard method for optimizing in light of such constraints.

1. Lagrangian Multiplier Method. In general form, we optimize a function f(x1, …, xn) in light of a constraint on those independent variables, g(x1, …, xn) = 0. These constraints can take on a variety of forms, such as a budget constraint (e.g., I - x1 - x2 = 0) or another economic relationship (e.g., 100 - x1 - x2^2 = 0). Notice that if the function f has n independent variables, the first order conditions will generate a system of n equations in n unknowns that will typically allow a solution. One way to include the constraint would be to impose it directly on the first order conditions by substitution. A second approach, one that also generates additional useful information, is to include the constraint as an (n+1)st equation involving a new variable λ. Formally, we write

L = f(x1, …, xn) + λg(x1, …, xn)

FONC (first order necessary conditions) become

∂L/∂x1 = f1 + λg1 = 0

∂L/∂x2 = f2 + λg2 = 0

...

∂L/∂xn = fn + λgn = 0

∂L/∂λ = g(x1, …, xn) = 0

This is a system of n+1 equations in n+1 unknowns. The solution to this system will differ from the unconstrained (n variable) case unless the constraint is not binding, in which case λ = 0.

2. Interpretation of λ. Notice that each of the first order conditions in the above system may be solved for λ. That is,

f1/(-g1) = f2/(-g2) = … = fn/(-gn) = λ

Notice that these ratios are all equal. Further, each numerator is a marginal “benefit” of increasing xi, while each denominator is a marginal “cost” associated with displacing other variables. Thus these equalities imply that at a maximum the ratio of marginal benefit to marginal cost is the same in each dimension, and further that for each i,


λ = marginal benefit of xi / marginal cost of xi

Now, at an optimum, the marginal benefit to marginal cost ratios for increments to each xi are identical. On the other hand, at an optimum, the only way to increase “costs” is by relaxing the constraint. Thus λ reflects the marginal benefit of relaxing the constraint. For example, for a consumer maximizing utility f(x1, …, xn) subject to a budget constraint g(x1, …, xn), λ reflects the marginal utility of income. Similarly, for a firm maximizing output subject to an input expenditure constraint, λ reflects the marginal productivity of increasing input expenditures.

3. Duality. As the above examples make clear, a tight relationship exists between an objective function and its constraint. Rather than maximizing utility subject to a budget constraint, for example, a consumer might minimize expenses subject to a given level of utility. Again, rather than maximizing output subject to a resource or input expenditure constraint, a firm might minimize costs subject to a given output level. The capacity to recast constrained maximization problems as constrained minimization problems is termed duality theory. In many instances it will be convenient to look at the dual of a problem rather than the primal problem.

Example: Consider again our health maximization problem.

y = f(x1, x2) = -(x1 - 3)^2 - (x2 - 4)^2 + 25.

But this time suppose that x1 and x2 are medicines, and an individual can only safely consume one dose a day. That is, x1 + x2 = 1, or

1 - x1 - x2 = 0

To solve this constrained problem, write the Lagrangian expression

L = f(x1, x2) + λg(x1, x2)

= -(x1 - 3)^2 - (x2 - 4)^2 + 25 + λ(1 - x1 - x2)

Taking FONC

∂L/∂x1 = -2x1 + 6 - λ = 0

∂L/∂x2 = -2x2 + 8 - λ = 0

∂L/∂λ = 1 - x1 - x2 = 0

From the first two conditions,

-2x1 + 6 = λ = -2x2 + 8

Thus, x1 = x2 – 1

Inserting this result into the constraint yields

1 - (x2 – 1) - x2 = 0

or x2 = 1, so x1 = 0.

This result implies that if you can only take one pill per day, take the x2 pill, since it contributes more to health improvement than x1. Finally, inserting the values for x1 and x2 into either of the first two FONC gives λ = 6. That is, the marginal effect of relaxing the constraint (say, to allow 2 pills per day) would be roughly 6 increments in health status. Comparing values of the constrained and unconstrained objectives makes this clear. Absent the constraint, f(x1*, x2*) = 25. With the constraint, -(0 - 3)^2 - (1 - 4)^2 + 25 = -18 + 25 = 7.

Consider what would happen to health status if you could take 2 pills per day.
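A sketch that solves the FONC as a system and, by generalizing the dose limit to a parameter D (my addition for illustration, not in the notes), addresses that question:

    # Solve the Lagrangian FONC for the dose limit D: x1 + x2 = D.
    import sympy as sp

    x1, x2, lam, D = sp.symbols('x1 x2 lam D')
    L = -(x1 - 3)**2 - (x2 - 4)**2 + 25 + lam*(D - x1 - x2)

    fonc = [sp.diff(L, v) for v in (x1, x2, lam)]
    sol = sp.solve(fonc, [x1, x2, lam])
    print(sol)                  # {x1: D/2 - 1/2, x2: D/2 + 1/2, lam: 7 - D}
    print(sol[lam].subs(D, 1))  # 6, matching the text

Setting D = 2 gives x1 = 1/2, x2 = 3/2, and λ = 5, with health status 12.5; the gain over the one pill optimum (12.5 - 7 = 5.5) is bracketed by the two λ values, just as the interpretation of λ suggests.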

4. Envelope Theorem in Constrained Maximization Problems. The envelope theorem discussed above for unconstrained problems also applies to constrained problems. Specifically, if

y = f(x1, …, xn, a), and this equation is optimized subject to the constraint g(x1, …, xn, a) = 0, we may evaluate the effect of a change in a as follows. First write

L = f(x1, …, xn, a) + λg(x1, …, xn, a)

Take first order conditions for the Lagrangian expression and solve for the optimal values x1*, …, xn*. Then compute

dy*/da = ∂L/∂a (x1*, …, xn*, a)

That is, the effect of a small change in a on the optimal value equals the partial derivative of the Lagrangian expression with respect to a, evaluated at the optimal values of the x's.

5. Second Order Conditions with Constrained Optimization. Recall that we developed second order conditions in the single variable case, as well as in the multi-variable case without constraints. In those cases, evaluating the second order conditions was equivalent to establishing a concavity condition. In closing this section, we consider a special case of second order conditions with a constraint. Consider the objective

f(x1, x2), optimized subject to a linear constraint

g(x1, x2): c - b1x1 - b2x2 = 0

Write the Lagrangian expression

L = f(x1, x2) + λ(c - b1x1 - b2x2)

Via partial differentiation, recover

f1 - λb1 = 0

f2 - λb2 = 0

c - b1x1 - b2x2 = 0

Now, in general we can solve this system for x1, x2, and λ.

Absent constraints, we could use the concavity condition to ensure that the critical point that defines this solution is indeed a maximum. That is, we would check that

d2y = f11dx1^2 + 2f12dx1dx2 + f22dx2^2

is negative for all small changes. With the constraint, however, dx1 and dx2 are not independent: admissible changes must satisfy b1dx1 + b2dx2 = 0. Substituting dx2 = -(b1/b2)dx1 into d2y, the requirement that d2y < 0 for admissible changes becomes

f11b2^2 - 2f12b1b2 + f22b1^2 < 0,

the quasi-concavity condition.
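A sketch of this substitution, reusing the illustrative quadratic from the unconstrained case (the objective and the positive b's are assumptions for the example):

    # Constrained second order check: substitute the admissible direction
    # dx2 = -(b1/b2)*dx1 into d2y and inspect the sign.
    import sympy as sp

    x1, x2, dx1, b1, b2 = sp.symbols('x1 x2 dx1 b1 b2', positive=True)
    f = -x1**2 + x1*x2 - x2**2            # f11 = -2, f12 = 1, f22 = -2

    f11 = sp.diff(f, x1, 2)
    f12 = sp.diff(f, x1, x2)
    f22 = sp.diff(f, x2, 2)
    dx2 = -(b1/b2)*dx1                    # from b1*dx1 + b2*dx2 = 0
    d2y = f11*dx1**2 + 2*f12*dx1*dx2 + f22*dx2**2
    print(sp.factor(d2y * b2**2 / dx1**2))  # -2*(b1**2 + b1*b2 + b2**2) < 0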
