
Lecture 4: Equality Constrained Optimization

Tianxi Wang
wangt@essex.ac.uk

Section 2.1: Lagrange Multiplier Technique


(a) Classical Programming

$$\max f(x_1, x_2, \dots, x_n) \quad \text{(objective function)}$$

where $x_1, x_2, \dots, x_n$ are instruments/control variables, subject to a constraint $g(x_1, x_2, \dots, x_n) = b$.
The most popular technique to solve this constrained optimization
problem is to use the Lagrange multiplier technique.
(b) Lagrangean Method
We introduce a new variable $\lambda$, called the Lagrange multiplier, and
set up the Lagrangean.
For example, if we want to maximize $f(x_1, x_2)$ subject to $g(x_1, x_2) = b$, the Lagrange function is
$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda[b - g(x_1, x_2)]$$
Necessary conditions for a local maximum/minimum require setting all the first-order partial derivatives of $L$ equal to zero:
$$\frac{\partial L}{\partial x_1} = \frac{\partial f}{\partial x_1} - \lambda\frac{\partial g}{\partial x_1} = 0 \qquad (1)$$
$$\frac{\partial L}{\partial x_2} = \frac{\partial f}{\partial x_2} - \lambda\frac{\partial g}{\partial x_2} = 0 \qquad (2)$$
$$\frac{\partial L}{\partial \lambda} = b - g(x_1, x_2) = 0 \qquad (3)$$

The third first-order condition (equation (3)) automatically guarantees the constraint is satisfied.
From the first-order conditions we then solve for the critical values
$x_1^*$, $x_2^*$ and $\lambda^*$.
Divide equations (1) and (2) to eliminate $\lambda$:
$$\frac{\partial f/\partial x_1}{\partial f/\partial x_2} = \frac{\partial g/\partial x_1}{\partial g/\partial x_2}$$
and re-arranging equation (3),
$$b = g(x_1, x_2)$$
we now have two equations to find $x_1^*$, $x_2^*$.
To compute $\lambda^*$, re-arrange equation (1):
$$\lambda = \frac{\partial f/\partial x_1}{\partial g/\partial x_1}$$
and plug in $x_1^*$ and $x_2^*$.



Example: $f = x^{1/2} y^{1/2}$ s.t. $ax + cy = b$.
$$L = x^{1/2} y^{1/2} + \lambda[b - ax - cy]$$
F.O.C.s:
$$L_x = \tfrac{1}{2} x^{-1/2} y^{1/2} - \lambda a = 0 \qquad (i)$$
$$L_y = \tfrac{1}{2} x^{1/2} y^{-1/2} - \lambda c = 0 \qquad (ii)$$
$$L_\lambda = b - ax - cy = 0 \qquad (iii)$$

Dividing (i) by (ii) yields:
$$\frac{y}{x} = \frac{a}{c} \;\Longrightarrow\; y = \frac{ax}{c} \qquad (iv)$$

Using (iv) to substitute out $y$ from (iii) yields $ax + c\left(\frac{ax}{c}\right) = 2ax = b$, and hence $x^*$:
$$x^* = \frac{b}{2a}$$
and substituting $x^*$ into (iv) yields $y^*$:
$$y^* = \frac{b}{2c}$$
To find $\lambda^*$, re-arrange (i) and substitute the values obtained for $x^*$ and $y^*$:
$$\lambda^* = \frac{1}{2a}\, x^{*\,-1/2}\, y^{*\,1/2} = \frac{1}{2a}\left(\frac{b}{2a}\right)^{-1/2}\left(\frac{b}{2c}\right)^{1/2} = \frac{1}{2a^{1/2}c^{1/2}}$$
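As a quick cross-check of this example (a minimal sympy sketch of our own, not part of the notes; it assumes sympy is available), we can solve the three first-order conditions directly and compare with the closed forms above:

    import sympy as sp

    x, y, lam, a, c, b = sp.symbols('x y lam a c b', positive=True)

    # Lagrangean for: max x^(1/2) y^(1/2)  s.t.  ax + cy = b
    L = sp.sqrt(x)*sp.sqrt(y) + lam*(b - a*x - c*y)

    # First-order conditions (i)-(iii): dL/dx = dL/dy = dL/dlam = 0
    foc = [sp.diff(L, v) for v in (x, y, lam)]
    sol = sp.solve(foc, (x, y, lam), dict=True)[0]

    print(sol[x])  # b/(2*a)
    print(sol[y])  # b/(2*c)
    print(sp.simplify(sol[lam] - 1/(2*sp.sqrt(a)*sp.sqrt(c))))  # 0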

Section 2.2: Interpretation of the Lagrange Multiplier

Since $x_1^*$, $x_2^*$ and $\lambda^*$ are all functions of the constraint parameter $b$ (where
$b$ is a fixed exogenous parameter), what happens to these critical values
when we change $b$?

$$L = f(x_1, x_2) + \lambda[b - g(x_1, x_2)]$$
Totally differentiating $L$ with respect to $b$ we find
$$\frac{dL}{db} = f_{x_1}\frac{dx_1^*}{db} + f_{x_2}\frac{dx_2^*}{db} + [b - g(x_1, x_2)]\frac{d\lambda^*}{db} + \lambda\left[1 - g_{x_1}\frac{dx_1^*}{db} - g_{x_2}\frac{dx_2^*}{db}\right]$$
Re-arranging:
$$\frac{dL}{db} = (f_{x_1} - \lambda g_{x_1})\frac{dx_1^*}{db} + (f_{x_2} - \lambda g_{x_2})\frac{dx_2^*}{db} + [b - g(x_1, x_2)]\frac{d\lambda^*}{db} + \lambda$$
where $f_{x_1}$, $f_{x_2}$, $g_{x_1}$ and $g_{x_2}$ are all evaluated at the optimum.
Now at the optimum we have $f_{x_1} - \lambda g_{x_1} = 0$, $f_{x_2} - \lambda g_{x_2} = 0$ and
$b - g(x_1, x_2) = 0$.
Thus we obtain:
$$\frac{dL}{db} = \lambda^*$$
Therefore the Lagrange multiplier $\lambda^*$ tells us the effect of a change in the
constraint, via the parameter $b$, on the optimal value of the objective function
$f$.

Note: for this interpretation of $\lambda$ the Lagrangean must be formulated
so that the constraint enters as $\lambda[b - g(x_1, x_2)]$ and not $\lambda[g(x_1, x_2) - b]$.
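As a check against the earlier example (our own verification, using the values derived above): there the maximized objective is
$$f^* = (x^*)^{1/2}(y^*)^{1/2} = \left(\frac{b}{2a}\right)^{1/2}\left(\frac{b}{2c}\right)^{1/2} = \frac{b}{2a^{1/2}c^{1/2}},$$
so
$$\frac{df^*}{db} = \frac{1}{2a^{1/2}c^{1/2}} = \lambda^*,$$
exactly as the result above predicts.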

Section 2.3: Second-Order Conditions for Constrained Optimization

(a) Sufficient conditions for a local max/min


Suppose we have an $n$-variable function $f(x_1, x_2, \dots, x_n)$ and one
constraint $g(x_1, x_2, \dots, x_n) = b$.
Construct a bordered Hessian matrix of the Lagrange function, where
the border elements are the first-order partial derivatives of the constraint $g$ and the remaining elements are the second-order partial derivatives of the Lagrangean function $L$.

$$H^B = \begin{pmatrix}
0 & g_1 & g_2 & \cdots & g_n \\
g_1 & L_{11} & L_{12} & \cdots & L_{1n} \\
g_2 & L_{21} & L_{22} & \cdots & L_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
g_n & L_{n1} & L_{n2} & \cdots & L_{nn}
\end{pmatrix}$$

Note that all the partial derivatives in this matrix are evaluated at
the critical values $(x_1^*, x_2^*, \dots, x_n^*; \lambda^*)$.
Check the signs of the leading principal minors.

Sufficient condition for a local max: the bordered Hessian is negative definite:
$$|H_1^B| < 0, \quad |H_2^B| > 0, \quad |H_3^B| < 0, \quad |H_4^B| > 0, \dots$$
where
$$|H_1^B| = \begin{vmatrix} 0 & g_1 \\ g_1 & L_{11} \end{vmatrix}, \qquad
|H_2^B| = \begin{vmatrix} 0 & g_1 & g_2 \\ g_1 & L_{11} & L_{12} \\ g_2 & L_{21} & L_{22} \end{vmatrix}$$
and so on.

Sufficient condition for a local min: the bordered Hessian is positive definite:
$$|H_1^B| < 0, \quad |H_2^B| < 0, \quad |H_3^B| < 0, \dots$$
Example: $z = xy$ s.t. $x + y = 6$
$$L = xy + \lambda[6 - x - y]$$
Necessary conditions:
$$L_x = y - \lambda = 0$$
$$L_y = x - \lambda = 0$$
$$L_\lambda = 6 - x - y = 0$$
$$\Rightarrow \; x^* = y^* = \lambda^* = 3$$
Sufficient conditions:
$$L_{xx} = L_{yy} = 0, \quad L_{xy} = L_{yx} = 1, \quad g_x = g_y = 1$$

Construct the bordered Hessian matrix of the Lagrange function:
$$|H^B| = \begin{vmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{vmatrix}$$

Check the signs of the leading principal minors:
$$|H_1^B| = \begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = -1 < 0$$
$$|H_2^B| = -1(0 - 1) + 1(1 - 0) = 2 > 0$$
Thus we have a local max at $x^* = y^* = 3$.
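A short sympy sketch (our own cross-check, not from the notes; it assumes sympy is available) confirms both the critical point and the sign pattern of the bordered minors:

    import sympy as sp

    x, y, lam = sp.symbols('x y lam')

    # Lagrangean for: max xy  s.t.  x + y = 6
    L = x*y + lam*(6 - x - y)
    sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)[0]
    print(sol)  # {x: 3, y: 3, lam: 3}

    # Bordered Hessian: constraint gradient on the border, second derivatives of L inside
    HB = sp.Matrix([[0, 1, 1],
                    [1, 0, 1],
                    [1, 1, 0]])
    print(HB[:2, :2].det(), HB.det())  # -1 and 2: the local-max sign pattern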

(b) A Further Look at the Bordered Hessian

The bordered Hessian used in constrained optimization problems:

$$|H^B| = \begin{vmatrix}
0 & g_1 & g_2 & \cdots & g_n \\
g_1 & L_{11} & L_{12} & \cdots & L_{1n} \\
g_2 & L_{21} & L_{22} & \cdots & L_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
g_n & L_{n1} & L_{n2} & \cdots & L_{nn}
\end{vmatrix}$$

The bordered Hessian used to test quasiconcavity/quasiconvexity:
$$|B| = \begin{vmatrix}
0 & f_1 & f_2 & \cdots & f_n \\
f_1 & f_{11} & f_{12} & \cdots & f_{1n} \\
f_2 & f_{21} & f_{22} & \cdots & f_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_n & f_{n1} & f_{n2} & \cdots & f_{nn}
\end{vmatrix}$$

Two key differences:

(1) The border elements in $|B|$ are first-order partial derivatives of the function $f$ rather than of $g$.
(2) The remaining elements in $|B|$ are second-order partial derivatives of $f$ rather than of the Lagrange function $L$.
However, in the special case of a linear constraint $g(x_1, x_2, \dots, x_n) = a_1 x_1 + \dots + a_n x_n = m$:

(1) The second-order partial derivatives of the Lagrange function $L$ reduce to the second-order partial derivatives of $f$: $L_{ij} = f_{ij}$.
(2) The border in $|B|$ is simply that of $|H^B|$ multiplied by the positive scalar $\lambda$, since the first-order conditions give $f_i = \lambda g_i$.
(3) Therefore $|B| = \lambda^2 |H^B|$.

Under this special case the leading principal minors $|B_i|$ and $|H_i^B|$ must share the same sign, i.e. if $|B|$ satisfies the sufficient condition for strict quasiconcavity then $|H^B|$ must satisfy the second-order sufficient condition for constrained maximization.
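As an illustration (our own check, using the $z = xy$ example above, whose constraint $x + y = 6$ is linear): at $x^* = y^* = \lambda^* = 3$ the border entries of $|B|$ are $f_x = f_y = 3 = \lambda \cdot 1$, and
$$|B| = \begin{vmatrix} 0 & 3 & 3 \\ 3 & 0 & 1 \\ 3 & 1 & 0 \end{vmatrix} = 18 = 3^2 \cdot 2 = \lambda^2 |H^B|,$$
so the leading principal minors indeed share signs.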

Section 2.4: Economic Applications

(a) Utility Maximization and Consumer Demand


$$\max U(x, y) \quad \text{s.t.} \quad P_x x + P_y y = M$$

This is the standard consumer problem: the consumer maximizes utility subject to spending all her income $M$ on the two goods $x$ and $y$, whose prices are market determined and hence exogenous. We assume that the marginal-utility functions are continuous and positive, i.e. $U_x, U_y > 0$.
$$L = U(x, y) + \lambda[M - P_x x - P_y y]$$
$$L_x = U_x - \lambda P_x = 0$$
$$L_y = U_y - \lambda P_y = 0$$
$$L_\lambda = M - P_x x - P_y y = 0$$
The first-order conditions imply that the consumer equalizes the ratio of
marginal utility to price for each good:
$$\frac{U_x}{P_x} = \frac{U_y}{P_y} = \lambda \qquad (i)$$

Here the Lagrange multiplier can be interpreted as the marginal
utility of money when utility is maximized:
$$\lambda^* = \frac{\partial U^*}{\partial M}$$
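This is just the $dL/db = \lambda^*$ result of Section 2.2 with $b = M$. For instance (our own cross-reference), with $U = x^{1/2}y^{1/2}$ the consumer problem is exactly the earlier example with $a = P_x$, $c = P_y$, $b = M$, so $\lambda^* = \frac{1}{2P_x^{1/2}P_y^{1/2}}$ is the marginal utility of an extra unit of income.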

An alternative interpretation of (i) is:
$$\frac{U_x}{U_y} = \frac{P_x}{P_y}$$

i.e. the marginal rate of substitution (the slope of the indifference curve)
equals the price ratio (the slope of the budget constraint).
Recalling that an indifference curve is the locus of combinations of
$x$ and $y$ that yield a constant level of utility $U$:
$$dU = U_x\,dx + U_y\,dy = 0$$
$$U_x + U_y\frac{dy}{dx} = 0$$
$$\frac{dy}{dx} = -\frac{U_x}{U_y}$$
Since Ux , Uy > 0 the slope must be negative.
Re-arranging the budget constraint:
$$y = \frac{M}{P_y} - \frac{P_x}{P_y}x$$

Figure 1: Utility Maximization. The indifference curve (slope $\frac{dy}{dx} = -\frac{U_x}{U_y}$) is tangent to the budget constraint (slope $\frac{dy}{dx} = -\frac{P_x}{P_y}$) at the tangency point E.


Second-order conditions:
$$|H^B| = \begin{vmatrix} 0 & P_x & P_y \\ P_x & U_{xx} & U_{xy} \\ P_y & U_{xy} & U_{yy} \end{vmatrix}$$
$$|H_1^B| = -P_x^2 < 0$$
$$|H_2^B| = -P_x^2 U_{yy} - P_y^2 U_{xx} + 2P_x P_y U_{xy} > 0$$
Recalling from the F.O.C.s that $P_x = \frac{U_x}{\lambda}$ and $P_y = \frac{U_y}{\lambda}$, therefore:
$$|H_2^B| = \frac{-U_x^2 U_{yy} - U_y^2 U_{xx} + 2U_x U_y U_{xy}}{\lambda^2} > 0$$
i.e.
$$U_x^2 U_{yy} + U_y^2 U_{xx} - 2U_x U_y U_{xy} < 0 \quad \text{for a local max}$$
But this is just the condition that the utility function be strictly
quasiconcave [For Homework check this!].
Therefore the tangency point E is a local max when the indifference
curve is strictly convex to the origin i.e. when the utility function is
strictly quasiconcave.
Note that strict quasiconcavity of the utility function also ensures
that the local maximum is unique and globally optimal.
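A small sympy sketch (our own illustration using the assumed utility $U = x^{1/2}y^{1/2}$; it checks this one function, not the general homework claim) verifies the strict-quasiconcavity condition derived above:

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    U = sp.sqrt(x)*sp.sqrt(y)

    Ux, Uy = sp.diff(U, x), sp.diff(U, y)
    Uxx, Uyy, Uxy = sp.diff(U, x, 2), sp.diff(U, y, 2), sp.diff(U, x, y)

    # Condition for a local max: Ux^2*Uyy + Uy^2*Uxx - 2*Ux*Uy*Uxy < 0
    expr = sp.simplify(Ux**2*Uyy + Uy**2*Uxx - 2*Ux*Uy*Uxy)
    print(expr)              # -1/(4*sqrt(x)*sqrt(y))
    print(expr.is_negative)  # True, given x, y > 0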
Theorem 1. In a constrained maximization problem
$$\max f(x) \quad \text{s.t.} \quad g(x) = 0$$
where f and g are increasing functions of x, if:
(a) f is strictly quasiconcave and g is quasiconvex, or
(b) f is quasiconcave and g is strictly quasiconvex,
then a locally optimal solution is unique and also globally optimal.


(b) Cost Minimization under Cobb-Douglas Production Technology

$$\min\; rk + wl \quad \text{s.t.} \quad y = k^\alpha l^\beta, \qquad \text{where } \alpha, \beta > 0$$
$$L = rk + wl + \lambda[y - k^\alpha l^\beta]$$
First-order conditions:
$$L_k = r - \lambda\alpha k^{\alpha - 1} l^\beta = 0$$
$$L_l = w - \lambda\beta k^\alpha l^{\beta - 1} = 0$$
$$L_\lambda = y - k^\alpha l^\beta = 0$$
Solve for $k^*$ and $l^*$ to obtain the demand functions for capital and
labor. Here $\lambda^*$ = marginal cost.
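A sympy sketch of the F.O.C. solution (our own; the exponents $\alpha = \beta = 1/2$ are an illustrative assumption, since the fully general symbolic solve is unwieldy) recovers the demand functions and the marginal-cost interpretation of $\lambda^*$:

    import sympy as sp

    k, l, lam, r, w, y = sp.symbols('k l lam r w y', positive=True)

    # Cobb-Douglas technology with alpha = beta = 1/2 (assumed for illustration)
    g = sp.sqrt(k)*sp.sqrt(l)
    L = r*k + w*l + lam*(y - g)

    sol = sp.solve([sp.diff(L, v) for v in (k, l, lam)], (k, l, lam), dict=True)[0]
    print(sol[k])    # y*sqrt(w)/sqrt(r)   (capital demand)
    print(sol[l])    # y*sqrt(r)/sqrt(w)   (labor demand)
    print(sol[lam])  # 2*sqrt(r)*sqrt(w)   (= marginal cost, d(cost)/dy)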
Second-order conditions:
$$L_{kk} = -\lambda\alpha(\alpha - 1)k^{\alpha - 2}l^\beta$$
$$L_{ll} = -\lambda\beta(\beta - 1)k^\alpha l^{\beta - 2}$$
$$L_{kl} = L_{lk} = -\lambda\alpha\beta k^{\alpha - 1}l^{\beta - 1}$$
$$g_k = \alpha k^{\alpha - 1}l^\beta, \qquad g_l = \beta k^\alpha l^{\beta - 1}$$
Homework: derive the bordered Hessian and show that the conditions for a local minimum are satisfied, i.e. $|H_1^B| < 0$ and $|H_2^B| < 0$, given $\alpha, \beta > 0$.
Recall from Problem Set 1 Q3(b), where we showed that a Cobb-Douglas production function is strictly quasiconcave for any $\alpha, \beta > 0$. Since the objective function is linear and hence quasiconvex, the standard cost minimization problem of the firm under Cobb-Douglas technology generates a local minimum which is unique and is also a global minimum.
Theorem 2. In a constrained minimization problem
$$\min f(x) \quad \text{s.t.} \quad g(x) = 0$$
(a) if f is strictly quasiconvex and g is quasiconcave, or
(b) if f is quasiconvex and g is strictly quasiconcave,
then a local minimum is unique and a global minimum.
