
CHAPTER 11:

NONLINEAR PROGRAMMING
to accompany
Operations Research: Applications & Algorithms,
4th edition, by Wayne L. Winston

Copyright 2004 Brooks/Cole, a division of Thomson Learning.

Chapter 12 - Learning Objectives
1. Learn the differences between the LP and the nonlinear program (NLP).
2. Study solution schemes or approaches for NLPs.
3. Understand the wide range of real applications for which NLPs are used.
4. Learn about the available software to solve NLPs.


What is an NLP?
NLPs are closer to general and realistic (and possibly unsolvable) models than LPs. Some LPs are linearized versions of NLPs, created out of necessity. NLPs have non-proportional and non-additive relationships.


The most general mathematical model is likely to have nonlinear terms with random (and possibly dependent) coefficients. These difficulties are some of the reasons why a deterministic LP is often used as an approximation to a stochastic NLP.


Why the term Nonlinear?
Up until this chapter, decision variables, anywhere in the model, always appeared in additive (hence linear) form: 3x1 + 4x2, etc. Other algebraic operators were never seen. In NLP, no such limitations exist. The LP is actually a subset of the NLP.


What causes nonlinearity?
Common operations such as multiplication (x1x2), powers (x^2), and the others in Table 2 make a model nonlinear, even if only one of them appears just once anywhere in the model.


A Real and Simple NLP:
There used to be a time in a foreign student's life when he or she needed a wooden crate to ship books, belongings, and a stereo (no PCs then) home. Shipping companies (by sea) had all sorts of limits on the dimensions of the crate for various price categories.


The problem often came to this: what should the dimensions of the crate (a box, often a rectangular prism) be so that the volume is maximum, or at least adequate? Weight did not matter much. A cube will maximize the volume, but a cube may not always be feasible.


Maximum Volume of a Box
Let the height, width, and length be a, b, and c.
Volume = abc (the product is not linear).
The model is:
Max Volume
subject to something? (constraints)


Without the constraints, optimizers such as LINGO (later in this chapter) or LINDO (in the linear case) would set each dimension to infinity to obtain an infinite volume. Obviously, the dimensions are the decision variables, and they must be positive. The shipper may require that the height be no more than Y feet and that the total surface area be limited to X square feet.


The Model:
Maximize Volume = abc
s.t.:
2(ab + ac + bc) <= X
a <= Y
a, b, c > 0.
This problem was a real one. A carpenter often built a special crate.
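As a sketch, this box model can be solved numerically. The limits X = 100 square feet and Y = 4 feet below are made-up values for illustration, and scipy's SLSQP solver stands in for LINGO:

import numpy as np
from scipy.optimize import minimize

X, Y = 100.0, 4.0  # hypothetical surface-area and height limits

def neg_volume(d):
    a, b, c = d
    return -(a * b * c)  # maximize the volume by minimizing its negative

cons = [
    {"type": "ineq", "fun": lambda d: X - 2 * (d[0]*d[1] + d[0]*d[2] + d[1]*d[2])},  # 2(ab+ac+bc) <= X
    {"type": "ineq", "fun": lambda d: Y - d[0]},                                     # a <= Y
]
res = minimize(neg_volume, x0=[1.0, 1.0, 1.0], method="SLSQP",
               bounds=[(1e-6, None)] * 3, constraints=cons)  # a, b, c > 0
print("dimensions:", res.x.round(3), "volume:", round(-res.fun, 3))

Because the solver sees only the algebra, it happily trades height for width and length once the height cap binds.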

A Real Class Exercise to Illustrate NLP:
An instructor decides to illustrate the concept of optimization using a fun NLP example rather than an LP case first. The instructor buys poster paper, cuts it into 11 x 13.75 inch pieces, and gives one piece to each student. The challenge is to construct a cylinder with no lid such that the volume is maximum.


A Real Class Exercise to Illustrate NLP, contd.:
Many students do a good job via trial and error, and some use calculus too. The problem is another case of NLP modeling. Let the cylinder have a radius of r and a height of h, in inches.


The volume is V = pi*r^2*h. The constraint is that the area used cannot exceed the available area of 151.25 in^2. The decision variables are r and h. The main constraint is:
pi*r^2 + 2*pi*r*h <= 151.25
Both r and h are positive.


THE ANSWER
The radius r should be 4.08 inches and the height should be 3.86 inches to obtain an open cylinder (no lid) with maximum volume using the available 11 x 13.75 sheet. Notice that the dimensions do matter, although only their product was used in the constraint. An odd-shaped sheet may not be feasible even if the model gives a solution.
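As a quick sanity check (a sketch that only verifies feasibility, not optimality), the reported dimensions can be plugged back into the area constraint:

import math

r, h = 4.08, 3.86  # dimensions reported above
area = math.pi * r**2 + 2 * math.pi * r * h  # base plus lateral surface
volume = math.pi * r**2 * h
print(f"area used = {area:.2f} in^2 (limit 151.25), volume = {volume:.2f} in^3")

The area used comes out to essentially 151.25 in^2, so the solution sits exactly on the constraint boundary, as expected for a volume-maximizing design.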

NLP vs. LP Applications
If possible, the analyst should strive to model a decision process as an LP. Many management and production type problems have long been solved as LPs. A different set of problems (engineering design and stock selection, for example) must contain nonlinear terms that cannot be avoided.


Unlike with LPs, one is not always sure whether a given NLP solution is optimal. In NLP, decision variables are not automatically non-negative. This allows certain physical quantities, such as temperature, to assume negative values if necessary.


Concepts of Limits & Derivatives
It appears that geometry and algebra were sufficient until this chapter. That ends with NLPs. Examples 1, 2, and 3 refresh our memory of the calculus needed in NLP.


In Example 3, f(p) is similar to objective functions seen in previous chapters, but it is unconstrained. The derivative f'(p) is the rate of change of f(p), or the slope of the revenue function f(p). This is a common application in econometric analysis. If the price p is already more than $1, additional price increases will result in a revenue loss.


Example 5 is the same as the others, but it has two variables. Example 6 illustrates the role of second and partial derivatives in NLPs.
This chapter has much calculus. Why? Calculus is the backbone of NLP, much like matrix algebra was for LP earlier in the text.


It is possible to use software (LINGO and EXCEL) to solve NLPs, just as LINDO is used for LPs, without worrying much about the underlying mathematics.


LINGO is a great tool for NLP!
Previously, LINGO was used to solve some special LPs (e.g., TSP, assignment, etc.) with unique formulations.
LINGO also allows the user to include unusual operators such as absolute value, logarithm, and exponentiation in the modeling process.


LINGO is a great tool for NLP!
LINGO is not limited to special models. LINGO can be used to solve (or attempt to solve) NLPs of any form. It is also possible to have negative and/or integer (even binary) decision variables with LINGO.


CONVEXITY and CONCAVITY in NLP
Classic calculus facts (Definitions 3 and 4 and Figures 9 and 10) are very important in NLP. In general, the sign of the second derivative of a function (the objective function in NLP) tells whether the function is convex, concave, or neither.


Concave vs. Convex NLPs
Both have so-called convex constraint sets. A concave NLP has a concave objective function and is a maximizing model. A convex NLP has a convex objective function and is a minimizing model. To tell which kind we have, apply the classic second-derivative test, as sketched below.
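A minimal sympy sketch of that test (the two sample objective functions are invented for illustration):

import sympy as sp

x = sp.symbols("x")
for f in (x**2 + 3*x, -2*x**2 + 5*x):  # sample objectives with constant f''
    f2 = sp.diff(f, x, 2)              # second derivatives: 2 and -4
    kind = "convex" if f2 >= 0 else "concave"
    print(f, "-> f'' =", f2, "->", kind)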


USING EXCEL TO SOLVE NLP
Figure 8 shows how to solve Example 8 (previously solved using LINGO) with EXCEL. It is also shown how the EXCEL SOLVER fails to find the optimum in another problem, shown below:
Max Z = (x-1)(x-2)(x-3)(x-4)(x-5)
where x ranges from 1 to 5.
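A rough grid scan (a sketch, not the book's spreadsheet) shows why a local search can stall here: the polynomial has two separate local maxima inside [1, 5], and a solver that climbs to the wrong one stops there.

import numpy as np

f = lambda x: (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5)
xs = np.linspace(1, 5, 4001)
ys = f(xs)
# interior grid points that beat both neighbors approximate local maxima
for i in range(1, len(xs) - 1):
    if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
        print(f"local max near x = {xs[i]:.3f}, Z = {ys[i]:.3f}")

The run reports a maximum near x = 1.36 and a much smaller one near x = 3.54; a hill climber started near the middle of the range converges to the inferior point.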


Example 9: The Oil Mix Problem, a Real (and Useful) Case of NLP
This is one of those problems that clearly explain why NLP (and OR in general) is important. Tables 3 and 4 and Figure 6 show the problem and its solution using LINGO. This problem (which saves $30 million/year) has to be nonlinear in part.


The objective function (revenue minus cost) is linear, along with most of the constraints (except Nos. 8, 9, and 16-21) in Figure 6. The decision variables are R, U, and P. Constraints 8 and 9 calculate chemical contents, causing the nonlinearity.


Example 9, Continued
Ideally, we would have little or no nonlinearity, but there is no way to express certain chemical ratios linearly. Notice rows 22-29 in Figure 6: decision variables must be declared positive if they have to be positive.


Example 10: Facility Location Problem
This example has a linear objective function and mildly nonlinear constraints. Figure 7 shows how the LINGO software is used to determine the optimal location (in x, y coordinates) of a new warehouse.
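A sketch of a model with this flavor (the customer coordinates are fabricated, not the book's data): the objective stays linear by summing auxiliary distance variables d_i, while the constraints d_i >= distance(warehouse, customer i) carry the nonlinearity.

import numpy as np
from scipy.optimize import minimize

pts = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0], [7.0, 7.0]])  # hypothetical customers
n = len(pts)

def total_distance(z):
    return z[2:].sum()  # linear objective: sum of the d_i

def feasibility(z):
    x, y, d = z[0], z[1], z[2:]
    dist = np.sqrt((pts[:, 0] - x)**2 + (pts[:, 1] - y)**2)
    return d - dist  # nonlinear constraints: d_i >= straight-line distance

z0 = np.ones(2 + n)  # variables are [x, y, d_1..d_n]
res = minimize(total_distance, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": feasibility}])
print("warehouse at", res.x[:2].round(3), "total distance", round(res.fun, 3))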


Example 11: Rubber Production Problem
Figure 8 shows how common formulas for strength, elasticity, and hardness are used as constraints, in a physics-like manner. This is quite typical in NLPs.


Multivariate Functions vs. Convexity and Concavity Concepts
Similar to the second-derivative tests for single-variable functions, the Hessian matrix and ith principal minor tests are performed. The definitions are on page 812. Theorems 3 and 3', and Examples 17, 18, 19, and 20, illustrate these concepts.
NOTE: While important, these mathematical details are not critical for most practitioners.
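For readers who want to see the mechanics anyway, here is a small sympy sketch (the sample function is made up): build the Hessian and check its leading principal minors; all positive suggests the function is convex everywhere.

import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + x*y + 2*y**2                      # sample function, not from the text
H = sp.hessian(f, (x, y))                    # [[2, 1], [1, 4]]
minors = [H[:k, :k].det() for k in (1, 2)]   # leading principal minors: 2 and 7
print("Hessian:", H, "minors:", minors)      # both positive, so f is convex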


Section 12.4: Solving One-Variable NLPs Manually
This section provides a detailed treatment of the fundamentals involved in solving constrained NLPs that have just one decision variable.


Example 21: Production Level Selection
This example is all about supply and demand. Notice that the sales price is 10 - x; it goes down as we are able to supply/sell more. Profit is found by subtracting the cost from the revenue. The problem becomes Max P(x) = 5x - x^2, where x ranges from 0 to 10.
A Case 1 check tells us x = 2.5 is a local optimal solution, as verified below.
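A quick sympy check of that claim (a sketch): set P'(x) = 0 and confirm that P''(x) < 0 at the stationary point.

import sympy as sp

x = sp.symbols("x")
P = 5*x - x**2
stationary = sp.solve(sp.diff(P, x), x)  # P'(x) = 5 - 2x = 0  ->  x = 5/2
print(stationary, sp.diff(P, x, 2))      # [5/2] and -2 (negative, so a local max)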


Embellishing Example 21
The answer was x = 2.5, a continuous value. This means the product is a divisible type that can be sold in fractional quantities. What if x has to be an integer? You can pose an integer NLP:
MODEL:
! Integer version of Example 21; in LINGO, > and < mean >= and <=;
MAX = 5*x - x^2;
x > 0;
x < 10;
@GIN(x); ! declare x a general integer variable;
END
Objective value: 6.000000
Variable X: 2.000000

Golden Section Search, Section 12.5
So far, we have dealt with nice objective functions that were differentiable. If this is not the case, or the roots of the derivative cannot be found easily, then the general NLP schemes described so far do not work. The Golden Section Method can be used if the function is unimodal.


Example 23: On the Golden Section Method
Max -x^2 - 1
s.t. x ranging from -1 to 0.75.
The first and second derivatives are -2x and -2. x = 0 is the answer using calculus. This problem does not actually need the Golden Section Method, but it is solved with it for illustration.


The Golden Section Method is only able to tell that the answer lies in the interval from -0.072 to 0.0815, when the correct answer is known to be zero.
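That interval can be reproduced in a few lines (a sketch, assuming the reconstructed objective f(x) = -x^2 - 1 on [-1, 0.75]):

import math

def golden_section_max(f, a, b, iters):
    # shrink [a, b] around the maximizer of a unimodal f
    r = (math.sqrt(5) - 1) / 2           # golden ratio, about 0.618
    x1, x2 = b - r * (b - a), a + r * (b - a)
    for _ in range(iters):
        if f(x1) < f(x2):                # maximizer lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + r * (b - a)
        else:                            # maximizer lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - r * (b - a)
    return a, b

a, b = golden_section_max(lambda x: -x**2 - 1, -1.0, 0.75, 5)
print(f"bracket after 5 iterations: [{a:.4f}, {b:.4f}]")  # about [-0.076, 0.082]

For simplicity this version evaluates f twice per pass; the classic method reuses one evaluation per iteration.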


Unconstrained Optimization, Section 12.6
Theorems 6, 7, 7', and 7'' provide the basics of unconstrained NLPs that may have two or more decision variables. Example 24 illustrates these types of problems.
If the problem is constrained, it is important to know that the solution (obtained from LINGO) may not always be the true optimum; it might be a local optimum. This is not the case if the problem is unconstrained.


Section 12.7: The Method of Steepest Ascent
This is the prime method used in realistic NLPs that are unconstrained. Example 27 shows the steps needed to implement this method in a small problem. Note that this example has no constraints; it simply says the x's must belong to the set of real numbers.

Section 12.8: Lagrange Multipliers
We use this concept if the NLP comes with all equality constraints, as shown in equation (12). Example 28 shows how to perform the mathematics of the Lagrange multipliers. This may be a tough task at times.
Good news: LINGO will bypass all the math. You just type the problem and run it.

Shadow Prices in Operations Research
Do you remember this concept from the LPs? Lagrange multipliers are the NLP equivalent of shadow prices. They give the rate of change of the optimal value with respect to changes in the RHS values of the NLP model.
Example 28 illustrates this concept. The LINGO output in Figure 28 gives the so-called Lagrange multipliers under the Price column.


QUADRATIC PROGRAMMING (QP), Section 12.10
This is a very special and highly realistic form of NLP. The constraints are linear, and the objective function has a unique and mildly nonlinear form: each term is either the square of one variable or the product of two variables.


QP Contd.
The QP's application in portfolio optimization is so important that LINGO has a special structure for it. The goal is to find how to allocate our funds to several securities while minimizing the portfolio variance and achieving a minimum return of 12%. Example 33 shows how EXCEL and LINGO can be used to solve QPs. A sketch of the model follows.
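In the sketch below, the covariance matrix and expected returns are fabricated; only the 12% floor comes from the slide. The model minimizes the portfolio variance x'Sx subject to full investment, the return floor, and no short sales.

import numpy as np
from scipy.optimize import minimize

S = np.array([[0.09, 0.01, 0.00],   # hypothetical covariance matrix
              [0.01, 0.04, 0.01],
              [0.00, 0.01, 0.16]])
mu = np.array([0.10, 0.12, 0.18])   # hypothetical expected returns

cons = [
    {"type": "eq",   "fun": lambda x: x.sum() - 1.0},   # invest all funds
    {"type": "ineq", "fun": lambda x: mu @ x - 0.12},   # expected return >= 12%
]
res = minimize(lambda x: x @ S @ x, x0=np.ones(3) / 3, method="SLSQP",
               bounds=[(0, 1)] * 3, constraints=cons)   # no short sales
print("weights:", res.x.round(4), "variance:", round(res.fun, 5))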


The QP is in the same class of important problems as the transportation, assignment, and, of course, traveling salesperson problems presented earlier. LINGO has special, ready-to-use structures for all of these problems. This portfolio QP application is used daily by many investment firms.


Section 12.11: Separable Programming
This concept is all about linearizing mildly nonlinear terms encountered in objective functions and/or constraints. Figures 35, 36, and 37 illustrate the concept using geometry. There is no dedicated software for the linearization itself, but many common packages can be adapted to the task. Separable programming is an advanced topic in O.R.
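The core trick can be shown in a few lines (a sketch with arbitrarily chosen breakpoints): replace a separable term such as x^2 by a piecewise-linear interpolation through a handful of breakpoints, which LP machinery can then handle.

import numpy as np

bp = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # breakpoints on [0, 4]
fb = bp**2                                # f evaluated at the breakpoints

def f_approx(x):
    # piecewise-linear surrogate of x**2, the object separable programming optimizes
    return np.interp(x, bp, fb)

for x in (0.5, 1.7, 3.2):
    print(x, "exact:", x**2, "approx:", float(f_approx(x)))

Adding more breakpoints where the curvature is large tightens the approximation.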


Section 12.12: The Method of Feasible Directions
This method extends the steepest ascent method of Section 12.7 to the case where the NLP has constraints. Example 35 illustrates this advanced concept, which is employed by many optimizers. This example is less nonlinear than even the quadratic form; LINGO can easily solve this problem.
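A compact sketch of one classical feasible-directions scheme, the Frank-Wolfe variant (the concave objective and the linear constraints are invented, not Example 35's data): at each iterate an LP picks the best feasible direction, then a line search stays inside the feasible region.

import numpy as np
from scipy.optimize import linprog, minimize_scalar

def f(z):  # sample concave objective to maximize
    x, y = z
    return -(x - 2)**2 - (y - 2)**2

def grad(z):
    return np.array([-2 * (z[0] - 2), -2 * (z[1] - 2)])

A, b = np.array([[1.0, 1.0]]), np.array([3.0])  # feasible region: x + y <= 3, 0 <= x, y <= 3
z = np.zeros(2)                                 # feasible starting point
for _ in range(50):
    g = grad(z)
    # LP subproblem: the feasible point s maximizing g . s (linprog minimizes, so negate)
    s = linprog(-g, A_ub=A, b_ub=b, bounds=[(0, 3)] * 2).x
    # line search along the feasible segment from z toward s
    t = minimize_scalar(lambda t: -f(z + t * (s - z)), bounds=(0, 1), method="bounded").x
    z = z + t * (s - z)
print(z.round(4), round(f(z), 4))               # approaches (1.5, 1.5) on x + y = 3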

