
Solution Techniques

Exact Optimization Methods

We will consider exact optimization methods for solving transport
optimization problems in this part of our course.
What are the advantages and disadvantages of exact
optimization methods?
Branch and Cut
Branch and Price
Travelling Salesman Problem
Vehicle Routing Problem and Variants

...

Linear Program Definition


Linear Program

Introduction to the Simplex Algorithm
The simplex algorithm was developed in 1947 by George Dantzig.
Today it is possible to solve linear programs involving several million variables and constraints.
There exist worst-case examples that cannot be solved by the simplex algorithm in polynomial time.
However, the simplex algorithm is the most commonly used method in practice.
There exist polynomial-time approaches for solving linear programs, e.g. the ellipsoid method.

Introduction to the Simplex Algorithm
[Figure: feasible region of a linear program in the (x1, x2)-plane, bounded by the lines x1 + x2 = 8, x1 + 3x2 = 12, 2x1 − x2 = 10, and 3x1 + 2x2 = 22; the simplex algorithm moves from vertex to vertex of this region.]

Introduction to the Simplex Algorithm

More details: Algorithmics, Mathematical Programming
R. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer, 1998.

Integer Program

Total Enumeration
Class exercise: C1
Exponential growth!
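A minimal sketch of total enumeration on a made-up 0/1 instance (the objective and constraint below are invented for illustration); the 2**n loop is the whole point:

```python
from itertools import product

def total_enumeration(n, objective, feasible):
    """Try every 0/1 assignment -- 2**n candidates in total."""
    best_x, best_val = None, float("inf")
    for x in product((0, 1), repeat=n):
        if feasible(x) and objective(x) < best_val:
            best_x, best_val = x, objective(x)
    return best_x, best_val

# Made-up toy: minimise x1 + 2*x2 + 3*x3 subject to x1 + x2 + x3 >= 2.
print(total_enumeration(3,
                        objective=lambda x: x[0] + 2 * x[1] + 3 * x[2],
                        feasible=lambda x: sum(x) >= 2))   # ((1, 1, 0), 3)
print(2 ** 20)   # already over a million candidates at n = 20
```
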

LP Relaxation
Relaxation (omission) of the integrality constraints
Linear programs can be solved efficiently
Lower bound for integer programs
Solution values of the relaxation can hint at solutions of the integer problem
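The relationship can be illustrated on a hypothetical 0/1 covering instance (all numbers made up); its single-constraint LP relaxation happens to be solvable greedily, and the relaxation value bounds the integer optimum from below:

```python
from itertools import product

# Made-up covering instance: min c.x  s.t.  a.x >= b,  x_i in {0, 1}.
c = [4, 3, 5]
a = [3, 2, 4]
b = 5

def lp_relaxation_bound(c, a, b):
    """Solve the LP relaxation (0 <= x_i <= 1) greedily: fill the requirement
    with the cheapest cost-per-unit items, splitting the last one."""
    need, cost = b, 0.0
    for i in sorted(range(len(c)), key=lambda i: c[i] / a[i]):
        take = min(1.0, need / a[i])
        cost += take * c[i]
        need -= take * a[i]
        if need <= 0:
            break
    return cost

def integer_optimum(c, a, b):
    """Exact optimum by total enumeration, for comparison."""
    return min(sum(ci * xi for ci, xi in zip(c, x))
               for x in product((0, 1), repeat=len(c))
               if sum(ai * xi for ai, xi in zip(a, x)) >= b)

print(lp_relaxation_bound(c, a, b))   # 6.33... (= 19/3), a lower bound
print(integer_optimum(c, a, b))       # 7, the true integer optimum
```
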

Branch and Bound

Enumeration method: the search tree is generated by depth-first search
Lower and upper bounds are computed in order to cut off parts of the search tree.
For a minimization problem: lower bound from the LP relaxation, upper bound from any feasible solution.

Branch and Bound


Solve the LP relaxation in the root node.
Let xi be a variable with a fractional value x̄i in the relaxation optimum.
The search tree is then branched as follows:
xi ≤ ⌊x̄i⌋   and   xi ≥ ⌈x̄i⌉
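As a sketch of this rule on a hypothetical 0/1 min-cost covering instance (all numbers made up; the LP relaxation with fixed variables happens to be solvable greedily here), depth-first branch and bound looks like:

```python
import math

# Hypothetical 0/1 min-cost covering instance (all numbers made up):
#   min 4*x1 + 3*x2 + 5*x3   s.t.   3*x1 + 2*x2 + 4*x3 >= 5,   x_i in {0, 1}
c = [4, 3, 5]
a = [3, 2, 4]
b = 5

def relax(fixed):
    """LP relaxation with some variables fixed: here a greedy continuous
    cover, taking cheapest cost-per-unit items first and splitting the last."""
    x = dict(fixed)
    need = b - sum(a[i] * v for i, v in x.items())
    cost = float(sum(c[i] * v for i, v in x.items()))
    for i in sorted((i for i in range(len(c)) if i not in x),
                    key=lambda i: c[i] / a[i]):
        take = max(0.0, min(1.0, need / a[i]))
        x[i] = take
        cost += take * c[i]
        need -= take * a[i]
    if need > 1e-9:
        return float("inf"), None          # relaxation infeasible
    return cost, [x[i] for i in range(len(c))]

def branch_and_bound():
    best_val, best_x = float("inf"), None
    stack = [{}]                           # depth-first search, as on the slide
    while stack:
        fixed = stack.pop()
        bound, x = relax(fixed)
        if bound >= best_val:              # prune: bound cannot beat incumbent
            continue
        frac = next((i for i, v in enumerate(x)
                     if abs(v - round(v)) > 1e-9), None)
        if frac is None:                   # integral solution: new incumbent
            best_val, best_x = bound, x
            continue
        # branch on the fractional variable: x_frac <= floor(v) and >= ceil(v)
        stack.append({**fixed, frac: math.floor(x[frac])})
        stack.append({**fixed, frac: math.ceil(x[frac])})
    return best_val, best_x

print(branch_and_bound())   # (7.0, [1, 1.0, 0])
```
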

Branch and Bound for Integer Optimization
Example Branch-and-Bound
We consider the following integer program:

max −7x1 − 3x2 − 4x3
s.t.  x1 + 2x2 + 3x3 − x4 = 8
      3x1 + x2 + x3 − x5 = 5
      x1, x2, x3, x4, x5 ≥ 0
      x1, x2, x3, x4, x5 ∈ Z

Branch and Bound for Integer Optimization
Example Branch-and-Bound
The LP relaxation provides the solution:
x3 = x4 = x5 = 0,  x1 = 2/5,  x2 = 19/5
with objective value c̄ = −71/5 (= −14.2).
We obtain the upper bound −15.

Branch and Bound for Integer Optimization

[Search tree: root node P0 with incumbent U = −∞ and bound c̄ = −15.]

Branch and Bound for Integer Optimization
Example Branch-and-Bound
Branch on variable x2:
P1 = P0 ∩ {x | x2 ≤ 3}
P2 = P0 ∩ {x | x2 ≥ 4}
P1: subsequent problem.
An optimal solution for the LP relaxation LP1 is
x4 = x5 = 0,  x1 = 1/2,  x2 = 3,  x3 = 1/2
and c̄ = −29/2 (with upper bound −15).

Branch and Bound for Integer Optimization

[Search tree: P0 (c̄ = −15) branches via x2 ≤ 3 into P1 (U = −∞, c̄ = −15) and via x2 ≥ 4 into P2.]

Branch and Bound for Integer Optimization
Example Branch-and-Bound
P1 has to be further decomposed:
P3 = P1 ∩ {x | x1 ≤ 0}
P4 = P1 ∩ {x | x1 ≥ 1}

The active problems are: K = {P2, P3, P4}.

Solving LP3 gives
x1 = x5 = 0,  x2 = 3,  x3 = 2,  x4 = 4
and c̄ = −17.
P3 is solved; the currently best solution has value −17.

Branch and Bound for Integer Optimization
[Search tree: P0 branches via x2 ≤ 3 into P1 (c̄ = −15) and via x2 ≥ 4 into P2 (c̄ = −15); P1 branches via x1 ≤ 0 into P3 (solved, U = −17) and via x1 ≥ 1 into P4.]

Branch and Bound for Integer Optimization
Example Branch-and-Bound
Solving LP4 gives:
x4 = 0,  x1 = 1,  x2 = 3,  x3 = 1/3,  x5 = 4/3
and c̄ = −52/3 = −17 1/3.
The upper bound (−18) is worse than the best solution, so P4 is solved.

Branch and Bound for Integer Optimization
[Search tree: as before, now with P4 (c̄ = −18) pruned against the incumbent U = −17.]

Branch and Bound for Integer Optimization
Example Branch-and-Bound

Solving P2 gives c̄ = −43/3.
P2 is not yet solved; we have to branch:
P5 = P2 ∩ {x | x1 ≤ 0}
P6 = P2 ∩ {x | x1 ≥ 1}

Branch and Bound for Integer Optimization
[Search tree: P2 (c̄ = −15) branches via x1 ≤ 0 into P5 and via x1 ≥ 1 into P6; incumbent U = −17.]

Branch and Bound for Integer Optimization
Example Branch-and-Bound
Solving LP5 gives
x1 = x3 = x5 = 0,  x2 = 5,  x4 = 2
and c̄ = −15.
This is a new best solution with value −15; P5 is thereby solved.
P6 does not need to be considered anymore, since LP0 shows that no solution better than −15 is possible.
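Assuming the example reads max −7x1 − 3x2 − 4x3 subject to x1 + 2x2 + 3x3 ≥ 8 and 3x1 + x2 + x3 ≥ 5 over nonnegative integers (signs reconstructed from the fractional solutions in the worked steps, with the surplus variables x4, x5 eliminated), the outcome of the whole search can be double-checked by brute force:

```python
from itertools import product

# Brute-force check of the example program:
#   max -7*x1 - 3*x2 - 4*x3
#   s.t.  x1 + 2*x2 + 3*x3 >= 8
#         3*x1 +  x2 +  x3 >= 5
#         x1, x2, x3 >= 0 and integer
best = max(
    (-7 * x1 - 3 * x2 - 4 * x3, (x1, x2, x3))
    for x1, x2, x3 in product(range(9), repeat=3)
    if x1 + 2 * x2 + 3 * x3 >= 8 and 3 * x1 + x2 + x3 >= 5
)
print(best)   # (-15, (0, 5, 0)) -- matching the incumbent found at P5
```

The box range(9) is large enough here: any point outside it has an objective value far below −15, so the true optimum lies inside.
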

Branch and Bound for Integer Optimization
[Search tree: final state. P3 gave the incumbent U = −17, P4 (c̄ = −18) was pruned, P5 is solved with U = c̄ = −15, and P6 is pruned.]

Branch and Bound


Multiple variants:
Node selection
Variable selection
Branching decisions
Possible problems: long run-times, large memory requirements
More details: Algorithmics, Mathematical Programming
L. Wolsey. Integer Programming. Wiley, 1998.

Relaxations
1. Constraint relaxations: new feasible solutions may be allowed, but none should be lost!
C2 exercises! Are the following valid constraint relaxations?
2. Continuous relaxations (LP relaxations)

Relaxations
If an optimal solution to a relaxation is also feasible in the model it relaxes, the solution is optimal in that original model.
C3 exercises!
Compute by inspection an optimal solution to each of the following relaxations and determine whether we can conclude that the relaxation optimum is optimal in the original model.

Relaxations

If an optimal solution to a relaxation is also feasible in the model it relaxes, the solution is
optimal in that original model.

More commonly, things are not that simple. BUT:
1) We have the bound value (the optimal value of any relaxation of a maximization model yields an upper bound on the optimal value of the full model; the optimal value of any relaxation of a minimization model yields a lower bound).
2) It may be a good starting point for constructing a good heuristic solution.
3) Proving infeasibility: if a constraint relaxation is infeasible, so is the full model it relaxes.

Rounded solutions from relaxations


Many relaxations produce optimal solutions that are easily rounded to good feasible solutions.
C4 exercises: round the LP relaxation optimum to an approximate solution for the original model. State the best lower and upper bounds on the optimal integer solution.
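As a toy illustration (all numbers made up): for the covering model min 4x1 + 3x2 + 5x3 subject to 3x1 + 2x2 + 4x3 ≥ 5 with binary variables, the LP relaxation optimum is x = (1/3, 0, 1) with value 19/3, and rounding up yields a feasible point and a pair of bounds:

```python
import math

# Made-up covering model: min 4*x1 + 3*x2 + 5*x3
# s.t. 3*x1 + 2*x2 + 4*x3 >= 5, x binary.
lp_opt = [1 / 3, 0.0, 1.0]        # optimal solution of the LP relaxation
lp_value = 4 * lp_opt[0] + 3 * lp_opt[1] + 5 * lp_opt[2]   # 19/3

rounded = [math.ceil(v) for v in lp_opt]   # rounding up keeps the >= constraint satisfied
upper = 4 * rounded[0] + 3 * rounded[1] + 5 * rounded[2]

lower = math.ceil(lp_value)       # costs are integral, so the bound can be rounded up
print(lower, upper)               # 7 9 -- the integer optimum lies in [7, 9]
```
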

Stronger LP relaxations
HOW?
1. Different formulations of the same problem. Example: E1
2. Choosing the smallest big-M values. C5 exercise
3. Adding new valid inequality constraints.

Valid inequalities
A linear inequality is a valid inequality if it holds for all integer feasible solutions to the model.
To strengthen the relaxation, it must cut off some feasible solutions to the current LP relaxation that are not feasible for the full ILP model.
Also called cutting planes.
Class exercise: C6

Cutting Plane Method


Exponentially many constraints!
In mathematical optimization, the cutting-plane method is an umbrella
term for optimization methods which iteratively refine a feasible set or
objective function by means of linear inequalities, termed cuts.

Cutting Plane Method: Basic Idea

- Start with a small subset of constraints and solve the resulting LP
- Check whether the obtained solution is feasible for all constraints
- If yes: optimal solution found
- Else: find a violated constraint and add it to the LP
- Iterate until no more constraints need to be added
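The loop above can be sketched with a stand-in "LP solver" (brute force over a small integer grid) and a made-up constraint list; the mechanics are the same: solve a relaxation, look for a violated cut, add it, re-solve:

```python
from itertools import product

# Made-up instance: minimise x + y over the integer grid [0, 6]^2, where the
# grid search stands in for the LP solver and the constraint list plays the
# role of a (conceptually exponential) family of cuts.
constraints = [          # each triple (a1, a2, r) encodes a1*x + a2*y >= r
    (1, 2, 6),
    (2, 1, 6),
    (1, 1, 4),
]

def solve_relaxation(active):
    """Stand-in for the LP solver: brute-force the relaxed problem."""
    return min((p for p in product(range(7), repeat=2)
                if all(a1 * p[0] + a2 * p[1] >= r for a1, a2, r in active)),
               key=lambda p: p[0] + p[1])

active = []                                # start with a small (here: empty) subset
while True:
    x = solve_relaxation(active)
    violated = next((c for c in constraints
                     if c[0] * x[0] + c[1] * x[1] < c[2]), None)
    if violated is None:                   # feasible for every constraint: optimal
        break
    active.append(violated)                # add the violated cut and re-solve

print(x, len(active))   # (2, 2) found after adding only 2 of the 3 constraints
```
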


Branch-and-Cut

A truncated cutting plane method returns a solution for an LP relaxation.
Branch-and-bound is combined with the cutting plane method:
The subproblems are solved using a truncated cutting plane method.
If no further cuts are found and the solution is not integer feasible, branching is performed.
Every new subproblem is again solved using the truncated cutting plane method.

Branch-and-Cut

Compared to classical branch-and-bound, the search tree in a branch-and-cut approach is usually significantly smaller.
The success of branch-and-cut approaches relies on:
1. the use of strong LP relaxations
2. fast separation algorithms (generating cutting planes)
3. a multitude of algorithmic tricks

VRPTW - Bounds
Lower Bounds
The network lower bound can be obtained by removing the capacity and time window constraints.
LP relaxation
Better bounds can be obtained using more complex algorithms such as column generation.

Upper bounds
Route construction
Route improvement
Metaheuristics

Column Generation
The basic idea of column generation originates in the simplex
algorithm.

Only variables with negative reduced cost enter the basis.
It is sufficient to start with a small number of columns.
As long as variables with negative reduced cost can be determined, they are added to the problem, which is subsequently solved again.

Column Generation
Subproblem: search for variables with negative reduced cost (pricing problem).
If such a variable/column has been found, it can be added to the master problem, which is re-solved.
This process is repeated as long as new columns with negative reduced cost are found; if no such column exists, an optimal LP solution has been determined.

Reduced Cost
The reduced costs are the costs of the variables in the current simplex tableau.
Only variables with negative reduced cost can therefore improve the objective value.
The reduced cost of a variable can be computed from the values of the dual variables.
Given a linear program {min cx | Ax ≥ b, x ≥ 0}, the reduced costs are c − yA, with y the dual variables of the linear program.
More details can be found in: R. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer, 1998.
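With made-up numbers (the matrix loosely mirrors the earlier example program; the dual values y are invented), the computation c − yA and the pricing test "is there a column with negative reduced cost?" look as follows:

```python
# Made-up numbers: reduced costs c - y*A for a linear program
# {min c*x | A*x >= b, x >= 0} with dual values y.
c = [7.0, 3.0, 4.0, 0.0]
A = [
    [1.0, 2.0, 3.0, -1.0],   # one row per constraint
    [3.0, 1.0, 1.0,  0.0],
]
y = [1.5, 1.0]               # hypothetical dual values

reduced = [c[j] - sum(y[i] * A[i][j] for i in range(len(A)))
           for j in range(len(c))]
entering = [j for j, r in enumerate(reduced) if r < 0]

print(reduced)    # [2.5, -1.0, -1.5, 1.5]
print(entering)   # columns 1 and 2 price out and could enter the basis
```
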

Branch-and-Price

LPs with exponentially many variables can be solved this way. However, this does not yet give us a solution for the integer problem.
An optimal solution for integer problems is obtained by combining branch and bound with column generation: Branch-and-Price.

Branch-and-Price

Columns are generated until an optimal LP solution is reached.
If this solution is not integral, the problem is divided (branching).
For each subproblem, columns are generated again.
This process is repeated until an optimal solution is reached.

Lagrangian Relaxation
Lagrangian relaxation is a technique well suited for problems where the constraints can be divided into two sets:
good constraints, with which the problem is solvable very easily
bad constraints that make it very hard to solve.
The main idea is to relax the problem by removing the bad constraints and putting them into the objective function, weighted by the Lagrangian multipliers. Each weight represents a penalty which is added to a solution that does not satisfy the particular constraint.

Lagrangian Relaxation

We assume that optimizing over the set X can be done very easily, whereas adding the bad constraints Ax ≥ b makes the problem intractable.

Lagrangian Relaxation
Therefore, we introduce a dual variable for every constraint of Ax ≥ b. The vector λ ≥ 0 is the vector of dual variables (the Lagrangian multipliers) that has the same dimension as the vector b. For a fixed λ ≥ 0, consider the relaxed problem
Z(λ) = min { cx + λ(b − Ax) | x ∈ X }

Lagrangian Relaxation

By assumption, we can efficiently compute the optimal value of the relaxed problem for a fixed vector λ.
Lemma (weak duality): Z(λ) provides a lower bound on the optimal value of the original minimization problem.
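A minimal numeric check of the lemma, with made-up data: X is a small grid we can enumerate, the complicating constraint 2x1 + 3x2 ≥ 7 is priced into the objective with multiplier λ ≥ 0, and every Z(λ) indeed stays below the constrained optimum:

```python
from itertools import product

# Made-up data: minimise 4*x1 + 3*x2 over the easy set X = {0,1,2}^2,
# subject to the complicating constraint 2*x1 + 3*x2 >= 7.
X = list(product(range(3), repeat=2))

def cost(x):
    return 4 * x[0] + 3 * x[1]

def slack(x):
    return 7 - (2 * x[0] + 3 * x[1])   # > 0 exactly when the constraint is violated

def Z(lam):
    """Lagrangian relaxation: an easy minimisation over X for fixed lam >= 0."""
    return min(cost(x) + lam * slack(x) for x in X)

z_star = min(cost(x) for x in X if slack(x) <= 0)   # true constrained optimum: 10

for lam in (0.0, 0.5, 1.0, 1.5, 2.0):
    assert Z(lam) <= z_star    # weak duality: every Z(lam) is a lower bound
    print(lam, Z(lam))
```
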


Solving the Lagrangian dual

Mostly used techniques:
- Subgradient optimization
- Multiplier adjustment
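A sketch of subgradient optimization on toy data (everything below is made up): the subproblem minimizer yields a subgradient b − Ax of Z, and λ is moved along it with diminishing step sizes 1/k:

```python
from itertools import product

# Made-up data: minimise 4*x1 + 3*x2 over the easy set X = {0,1,2}^2
# with the complicating constraint 2*x1 + 3*x2 >= 7 relaxed.
X = list(product(range(3), repeat=2))

def cost(x):
    return 4 * x[0] + 3 * x[1]

def slack(x):
    return 7 - (2 * x[0] + 3 * x[1])   # subgradient of Z at the subproblem minimiser

lam, best = 0.0, float("-inf")
for k in range(1, 51):
    xk = min(X, key=lambda x: cost(x) + lam * slack(x))  # easy subproblem
    best = max(best, cost(xk) + lam * slack(xk))         # best lower bound so far
    lam = max(0.0, lam + (1.0 / k) * slack(xk))          # diminishing-step update
print(best, lam)   # best approaches the dual optimum Z(2) = 8
```
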

Solving the Lagrangian dual

Sign restrictions on multipliers:
- If the relaxed constraint has the form Ax ≤ b, the multiplier is
  ≥ 0 for a maximization model
  ≤ 0 for a minimization model
- If the relaxed constraint has the form Ax ≥ b, the multiplier is
  ≤ 0 for a maximization model
  ≥ 0 for a minimization model
- Class exercise: C7

Wrap-up
Suppose that we have some problem instance
of a combinatorial optimisation problem and
further suppose that it is a minimisation
problem.
We draw a vertical line representing value
(the higher up this line the higher the value)
then somewhere on this line is the optimal
solution to the problem we are considering.

Wrap-up
Exactly where on this line this optimal solution
lies we do not know, but it must be somewhere!
Conceptually therefore this optimal solution value
divides our value line into two:
above the optimal solution value are upper
bounds, values which are above the (unknown)
optimal solution value
below the optimal solution value are lower
bounds, values which are below the (unknown)
optimal solution value.

Wrap-up
In order to discover the optimal solution value then
any algorithm that we develop must address both
these issues i.e. it must concern itself both with
upper bounds and with lower bounds.
In particular the quality of these bounds is important
to the computational success of any algorithm:
we like upper bounds that are as close as possible to
the optimal solution, i.e. as small as possible
we like lower bounds that are as close as possible to
the optimal solution, i.e. as large as possible.

Wrap-up

Upper bounds

Typically upper bounds are found by searching for feasible solutions to the problem, that is, solutions which satisfy the constraints of the problem.
A number of well-known general techniques are available to find
feasible solutions to combinatorial optimisation problems, for
example:
interchange
metaheuristics:
tabu search
simulated annealing
variable neighbourhood search
genetic algorithms (population heuristics).
In addition, for any particular problem, we may well have
techniques which are specific to the problem being solved.

Wrap-up Lower bounds


One well-known general technique which is available to find lower bounds is linear programming relaxation. In linear programming (LP) relaxation we take an integer (or mixed-integer) programming formulation of the problem and relax the integrality requirement on the variables.
This gives a linear program which can be solved optimally using a standard algorithm (simplex or interior point).
The solution value obtained for this linear program gives a lower bound on the optimal solution to the original problem.
Another well-known (and well-used) technique which is available to find lower bounds is Lagrangian relaxation.
