Karush-Kuhn-Tucker conditions

The Karush-Kuhn-Tucker conditions extend the work of Lagrange on optimisation problems with equality constraints to problems with inequality constraints. The most basic type of inequality constraint restricts a variable to be positive or, more strictly, non-negative. A tank of petrol, for example, can hold petrol or be empty, but it cannot hold a negative amount of petrol. Similarly, it is not usually possible to have negative prices for goods or services, and a negative speed makes little sense unless the travel is in reverse. There are many similar situations. To recognise the optimum solution of problems that require non-negative variables, the Karush-Kuhn-Tucker conditions (more commonly known as the Kuhn-Tucker conditions) are used (Intriligator 1971; Chiang 1984; Simon and Blume 1994).

The Kuhn-Tucker conditions were developed by Harold W. Kuhn and Albert W. Tucker
and first presented at the Second Berkeley Symposium on Mathematical Statistics and
Probability in 1950. William Karush had proved the theorem in 1939 in his master's thesis, but this received little attention, and Fritz John derived a similar result in 1948
(Kjeldsen 2000; Kuhn and Tucker 1951). However, the paper by Kuhn and Tucker
received much attention and has laid the foundation for a very extensive system of
practical mathematical modelling.

Consider the simple problem in which a function, f(x), in one variable, x, is maximised
(or minimised) subject to the constraint that the variable is restricted to non-negative
values.

(3.29) Maximise f(x)


subject to x ≥ 0 ,

where f(x) is assumed to be differentiable and well-behaved (concave and continuous).

To find the maximum of a function, a stationary point is found, where the gradient of the function is zero. If only non-negative values are allowed, x ≥ 0, then it is possible to have a solution on the boundary where x is zero. Two types of solution are then possible: the first is where the slope of the function is equal to zero, either at a positive value of x (Figure 3.12, first diagram) or at the boundary (middle diagram), and the second is where the slope of the function is negative at the boundary (third diagram). Thus, there are three cases for a maximum when a non-negativity restriction is involved (Figure 3.12). What Karush, and Kuhn and Tucker, observed was that multiplying the slope of the function by the level of the variable gives a value of zero in each case.
[Figure: three panels showing f(x) against x, with df(x*)/dx = 0 at x* > 0, df(x*)/dx = 0 at x* = 0, and df(x*)/dx < 0 at x* = 0]
Figure 3.12 Three possible solutions for maximum problems when restricted to non-
negative values.

By considering the value of the solution variable at the optimum (x* in this case) and the
gradient, df(x*)/dx, the nature of the solution can be determined. The very important
observation is that the product of the level of the variable and the gradient will always be
zero in each case. Thus, mathematically there are three cases (a fourth case, an increasing
gradient at the boundary, either ends up as the first case below or is unbounded) that can
be described as follows, where x* is the relative maximum:

(3.30) df(x*)/dx = 0, x* > 0, x* df(x*)/dx = 0 ,


(3.31) df(x*)/dx < 0, x* = 0, x* df(x*)/dx = 0 , and
(3.32) df(x*)/dx = 0, x* = 0, x* df(x*)/dx = 0 .
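As a quick numerical illustration of these three cases, the short Python sketch below evaluates the slope at the maximising value x* and confirms that the product x* df(x*)/dx is zero in each case. The example functions are chosen here purely for illustration and are not taken from the text.

```python
import numpy as np

# Three illustrative functions (chosen for illustration only):
# case 1: maximum at an interior point x* > 0
# case 2: slope zero exactly at the boundary x* = 0
# case 3: slope negative at the boundary, maximum at x* = 0
cases = {
    "interior max   ": (lambda x: -(x - 2.0) ** 2, 2.0),  # f'(2) = 0, x* = 2 > 0
    "boundary, f'=0 ": (lambda x: -x ** 2, 0.0),          # f'(0) = 0,  x* = 0
    "boundary, f'<0 ": (lambda x: -(x + 1.0) ** 2, 0.0),  # f'(0) = -2 < 0, x* = 0
}

def slope(f, x, h=1e-6):
    """Central-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

for name, (f, x_star) in cases.items():
    g = slope(f, x_star)
    # In every case the product x* . f'(x*) is (numerically) zero.
    print(name, "f'(x*) =", round(g, 3), "  x* f'(x*) =", round(x_star * g, 3))
```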

This can be summarised as:

(3.33) df(x*)/dx ≤ 0, x* ≥ 0, x* df(x*)/dx = 0 ,

which forms the first-order necessary conditions for a local maximum where the choice
variable, x, is non-negative. These conditions can also be generalised to many variables
and many constraints by letting x = {x1, x2, x3, … xn} be a vector rather than a single
variable. The more general problem can then be written as:

(3.34) Maximise f(x)


Subject to g(x) ≥ 0
x ≥ 0 ,

where f(x) is a concave function and g(x) ≥ 0 is a set of inequality constraints that forms a
closed and bounded feasible set and satisfies a set of constraint qualifications (Simon and
Blume 1994).

First, form the Lagrangian function. To do so, the inequality constraints, g(x), must be
converted into equality constraints by subtracting a vector of non-negative 'slack' variables. Thus,

(3.35) g(x) – s = 0 ,

where s is a non-negative vector, s ≥ 0. The Lagrangian function is now:

(3.36) L = f(x) + λ (g(x) – s) ,

where  is a vector of Lagrangian multipliers which are not restricted to be non-negative


but the non-negativity conditions apply to x and s. The problem then becomes:

(3.37) Maximise L = f(x) + λ (g(x) – s)


Subject to x, s ≥ 0 .

The first-order and complementary conditions for this problem at the optimum are then: 1

(3.38) ∂L/∂x = ∂f(x)/∂x + λ ∂g(x)/∂x ≤ 0 ,

(3.39) (∂L/∂x) x = (∂f(x)/∂x + λ ∂g(x)/∂x) x = 0 ,

(3.40) ∂L/∂λ = g(x) – s = 0 ,

(3.41) ∂L/∂s = –λ ≤ 0 ,

(3.42) (∂L/∂s) s = –λ s = 0 ,

(3.43) x ≥ 0 and

(3.44) s ≥ 0 .

To obtain what is known as the Karush-Kuhn-Tucker conditions, the above conditions can
be simplified by eliminating s using (3.40) in (3.42), so that: 2

(3.38) ∂L/∂x = ∂f(x)/∂x + λ ∂g(x)/∂x ≤ 0 , (∂L/∂x) x = 0

(3.40) ∂L/∂λ = g(x) ≥ 0 , (∂L/∂λ) λ = 0

(3.43) x ≥ 0 and λ ≥ 0 .
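As a sketch of how conditions (3.38), (3.40) and (3.43) might be checked numerically for a candidate solution, the following Python function takes the gradients of f and g, a trial point x and multipliers λ, and reports whether each condition holds, with the constraints written in the g(x) ≥ 0 form used above. The helper name kkt_check and the tiny example problem are illustrative assumptions, not part of the original text.

```python
import numpy as np

def kkt_check(grad_f, grad_g, g, x, lam, tol=1e-6):
    """Check conditions (3.38), (3.40) and (3.43) at a candidate solution.

    grad_f(x): gradient of the objective, shape (n,)
    grad_g(x): Jacobian of the constraints, shape (m, n),
               with feasibility written as g(x) >= 0
    g(x):      constraint values, shape (m,)
    x, lam:    candidate solution and multipliers, shapes (n,) and (m,)
    """
    dL_dx = grad_f(x) + lam @ grad_g(x)        # dL/dx = df/dx + lam' dg/dx
    return {
        "dL/dx <= 0": bool(np.all(dL_dx <= tol)),
        "x (dL/dx) = 0": abs(x @ dL_dx) <= tol,        # complementary slackness in x
        "g(x) >= 0": bool(np.all(g(x) >= -tol)),
        "lam g(x) = 0": abs(lam @ g(x)) <= tol,        # complementary slackness in lam
        "x >= 0, lam >= 0": bool(np.all(x >= -tol) and np.all(lam >= -tol)),
    }

# Tiny illustration: maximise f(x) = 10 - x^2 subject to g(x) = x - 1 >= 0, x >= 0.
# The maximum is at x = 1 with multiplier lam = 2 (from -2x + lam = 0).
checks = kkt_check(
    grad_f=lambda x: np.array([-2.0 * x[0]]),
    grad_g=lambda x: np.array([[1.0]]),
    g=lambda x: np.array([x[0] - 1.0]),
    x=np.array([1.0]),
    lam=np.array([2.0]),
)
print(checks)   # every condition should report True
```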

In order to determine whether the solution is a maximum or a minimum, a further set of
conditions needs to be satisfied. These are known as the second-order conditions as they
involve the rates of change of the functions involved (Chiang 1984, ch. 21; Intriligator
1971, ch. 3; Simon and Blume 1994, ch. 19). These will not be considered in this
chapter. For many practical problems, the nature and properties of the functions being
used will be clear. However, as the complexity of the functions increases and, in particular, if
nonlinear constraints are being used, the nature of the problem may not be clear. In
this case, examination of the second-order conditions will be useful. For standard spatial
equilibrium models, the use of a quadratic objective function, which may be maximised or
minimised, subject to a set of linear constraints is straightforward.

1 For convenience, the notation x* to indicate the optimum has been omitted.
2 Note that since λ is not restricted to be non-negative, there is no complementary condition
relating to λ. The complementary condition for x is equation (3.39) and for s is (3.42).

Consider an example:

(3.45) Maximise Z = F(x, y) = 10 – x² + xy – 4y²

Subject to x + y ≥ 1
x ≥ 0, y ≥ 0 .

The maximum for this problem is an objective function value of 9.375 at x = 0.75 and y =
0.25 as is illustrated in Figure 3.13 by the red dot. The constraint is shown as a vertical
panel at the rear of the diagram and the volume behind the panel is excluded from the
problem. The red point is positioned on the function surface at the maximum.

Figure 3.13 Maximisation of a quadratic function subject to a linear constraint

The Lagrangian function, with the constraint written as x + y – 1 ≥ 0, is:

(3.46) L = 10 – x² + xy – 4y² + λ (x + y – 1) .

The first-order Kuhn-Tucker conditions for this problem are:

(3.47) ∂L/∂x = –2x + y + λ ≤ 0 , (∂L/∂x) x = (–2x + y + λ) x = 0

(3.48) ∂L/∂y = x – 8y + λ ≤ 0 , (∂L/∂y) y = (x – 8y + λ) y = 0

(3.49) ∂L/∂λ = x + y – 1 ≥ 0 , (∂L/∂λ) λ = (x + y – 1) λ = 0

At the solution the constraint is binding, x + y = 1, and the Lagrange multiplier is λ = 1.25.

It is worth observing that these particular conditions do not provide a means of obtaining
the optimum. The conditions allow for testing a solution to see if an optimum has been
obtained. Optimising algorithms that allow for efficient search paths are needed to find
the optimum. A number of such algorithms exist, such as Solver in MS Excel and GAMS,
which includes MINOS and various other solvers (see the relevant websites for the
commercial packages).
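For instance, the quadratic example above can be checked with an off-the-shelf nonlinear routine. The sketch below uses SciPy's SLSQP method (an assumption; any similar nonlinear programming routine would do), maximising the objective by minimising its negative, with the constraint written as x + y – 1 ≥ 0. The reported solution can then be tested against the Kuhn-Tucker conditions above.

```python
import numpy as np
from scipy.optimize import minimize

# Maximise 10 - x^2 + x y - 4 y^2 subject to x + y >= 1, x, y >= 0,
# by minimising the negative of the objective.
objective = lambda v: -(10 - v[0] ** 2 + v[0] * v[1] - 4 * v[1] ** 2)

result = minimize(
    objective,
    x0=np.array([0.5, 0.5]),                     # any feasible starting point
    method="SLSQP",
    bounds=[(0, None), (0, None)],               # x >= 0, y >= 0
    constraints=[{"type": "ineq", "fun": lambda v: v[0] + v[1] - 1}],
)

print(result.x, -result.fun)   # expected: approximately [0.75, 0.25] and 9.375
```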

Integer Programming

Nonlinear integer programming is now possible with some of the algorithms that are
available. Integer programming involves developing models in which some or all of the
variables are allowed to take only integer values (Lee, Moore and Taylor 1990). For example, it
makes sense to have one car or tractor or harvester but not 0.75 of a car. There are spatial
equilibrium type models which have been built in which goods are produced and traded
and the parcels of land used to produce the products can also be traded. In this case
quadratic programming with some variables as integer variables is appropriate. Similarly,
there are models in which goods are traded and the decision of whether or not to build a
factory in a location is examined.

There are three main types of integer programming. The first is zero-one programming
in which the integer variables take on only the values of zero or one. The second type is
where the variables may take on integer values over a range of values and the third is
mixed integer programming where continuous and integer variables are mixed in the
same problem.

In this chapter only a very simple integer model will be considered, for example:

(3.50) Maximise 80 x1 + 90 x2 + 40 x3
Subject to
10 x1 + 7 x2 + 8 x3 ≤ 70
5 x1 + 10 x2 + 2 x3 ≤ 50
x1, x2, x3 ≥ 0 and integer

The solution to this problem, after defining the variables x1 to x3 as integer, was obtained
in MS Excel as x1 = 3, x2 = 3 and x3 = 2 with an objective function value of 590. The
calculated value of the first constraint was 67 and that of the second constraint was 49 (that is,
insert the solution values into the constraints). Thus, neither constraint was binding or
equal to its right-hand-side value.
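As an alternative to Solver, the same small integer programme can be solved with a mixed-integer routine. The sketch below uses SciPy's milp function (assuming SciPy 1.9 or later is available); the objective coefficients are negated because milp minimises.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-80.0, -90.0, -40.0])              # negated: milp minimises
A = np.array([[10.0, 7.0, 8.0],
              [5.0, 10.0, 2.0]])
constraints = LinearConstraint(A, ub=[70.0, 50.0])

result = milp(
    c,
    constraints=constraints,
    integrality=np.ones(3),                      # all three variables integer
    bounds=Bounds(lb=0),                         # non-negativity
)

print(result.x, -result.fun)   # an optimum with objective value 590
```

Because the problem has alternative optima (for example, x1 = 4, x2 = 3, x3 = 0 also gives 590), a different solver may report a different solution vector with the same objective value.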
In subsequent chapters, details of the use of integer constraints will be given in some of
the model formulations.

A Nonlinear Optimisation Problem

An illustration of the application of the Karush-Kuhn-Tucker conditions to a quadratic
programming problem is given in this section.

(3.51) Maximise 200 y1 – 0.2 y1² + 200 y2 – 0.1 y2²

Subject to y1 + y2 ≤ 100
y1, y2 ≥ 0

The solution to this problem using MS Excel Solver was y1 = 33.33 and y2 = 66.67, with
the Lagrange multiplier, or shadow value, λ = 186.67. The interpretation of the
Lagrange multiplier is that if the right-hand-side value of 100 were increased by 1 unit,
the value of the objective function would rise by approximately 186.67.

The Lagrangian function and the first-order conditions for this problem are:

(3.52) L = 200 y1 – 0.2 y1² + 200 y2 – 0.1 y2² + λ (100 – y1 – y2)

(3.53) ∂L/∂y1 = 200 – 0.4 y1 – λ = 0

(3.54) ∂L/∂y2 = 200 – 0.2 y2 – λ = 0

(3.55) ∂L/∂λ = 100 – y1 – y2 = 0

(3.56) y1, y2 ≥ 0

By inserting the solution values into (3.53), 200 – 0.4 × 33.33 – 186.67 = –0.002, a value of
zero is obtained if sufficient digits for the solution values are used. Inserting values into
(3.54) and (3.55) also gives values close to zero (within rounding error limits). As each
of the first-order conditions is zero, the complementary conditions must also be zero.
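This check can also be automated: because the constraint binds at the optimum, conditions (3.53) to (3.55) form a linear system in y1, y2 and λ. The sketch below (a minimal example assuming NumPy, with helper names chosen here for illustration) solves that system for a right-hand side of 100 and again for 101, confirming the solution values and that the objective rises by approximately the multiplier.

```python
import numpy as np

def solve_foc(rhs):
    """Solve the binding first-order conditions (3.53)-(3.55) for y1, y2, lambda."""
    # 0.4 y1          + lam = 200
    #          0.2 y2 + lam = 200
    #     y1 +     y2       = rhs
    A = np.array([[0.4, 0.0, 1.0],
                  [0.0, 0.2, 1.0],
                  [1.0, 1.0, 0.0]])
    b = np.array([200.0, 200.0, rhs])
    return np.linalg.solve(A, b)

def objective(y1, y2):
    return 200 * y1 - 0.2 * y1 ** 2 + 200 * y2 - 0.1 * y2 ** 2

y1, y2, lam = solve_foc(100.0)
print(y1, y2, lam)                              # approximately 33.33, 66.67, 186.67

z1, z2, _ = solve_foc(101.0)
print(objective(z1, z2) - objective(y1, y2))    # approximately 186.6, close to lam
```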

Concluding Comments

The main mathematical tools for developing spatial equilibrium models have been
covered in this chapter. The idea of finding the maximum or minimum of a function and
the criteria with which such a point can be calculated have been developed. The use of
the Lagrangian function for formulating a constrained optimisation problem has been
outlined, as has the use of the Karush-Kuhn-Tucker conditions for defining the optimum
of problems constrained to the non-negative space and with inequality constraints.
Illustrative examples have also been provided.

The Karush-Kuhn-Tucker conditions are central to understanding how spatial equilibrium


models can be formulated as mathematical representations of trading systems with
transactions costs. This will be demonstrated in the subsequent chapters.

Exercises

1. Find the stationary points of the function and plot it over a suitable range.
y = 5 – 2x – 6x² + 3x³

2. Use MS Excel Solver to find the solution to the problem

Maximise 200 y1 – 0.1 y1² + 200 y2 – 0.1 y2²


Subject to y1 + y2 ≤ 200
y1, y2 ≥ 0

References

Chiang, A.C. (1984), Fundamental Methods of Mathematical Economics, 3rd edn,


McGraw-Hill International Book Company, Auckland.

Intriligator, M.D. (1971), Mathematical Optimization and Economic Theory, Prentice-


Hall, Inc, Englewood Cliffs, New Jersey.

Karush, W. (1939), Minima of functions of several variables with inequalities as side


constraints, M.Sc. Dissertation, Department of Mathematics, University of
Chicago, Chicago, Illinois.

Kjeldsen, T.H. (2000), “A contextualized historical analysis of the Kuhn-Tucker theorem


in nonlinear programming: the impact of World War II”, Historia Mathematica,
27(4), 331-361. Accessed on 17 May 2008 at http://www.ideallibrary.com.

Kuhn, H.W. and Tucker, A.W. (1951), “Nonlinear programming”, Proceedings of the
Second Berkeley Symposium on Mathematical Statistics and Probability (July 31
- August 12, 1950), University of California Press, Berkeley, California.

Lee, S.M., Moore, L.J. and Taylor, B.W. (1990), Management Science, 3rd edn, Wm. C.
Brown Publishers, Dubuque, Iowa.

Simon, C.P. and Blume, L. (1994), Mathematics for Economists, Norton, New York.

Wikipedia (2007), Joseph Louis Lagrange, Wikipedia, The Free Encyclopedia. [Online.]
Accessed on 23 June 2007 at
http://en.wikipedia.org/wiki/Joseph_Louis_Lagrange.
