
The Complexity of Integer Programming

Chris Johnson

1 Background
1.1 Integer Programming
Given an objective function of n variables x1, . . . , xn of the form

$$f(x_1, \ldots, x_n) = \sum_{i=1}^{n} a_i x_i + c$$

and a set of constraints that can be written in the form Ax ≤ b (where x is the column
vector of the variables x1, . . . , xn), the problem of searching for a minimum (or maximum)
of f(x1, . . . , xn) is known as a linear program, and can be solved efficiently in practice using
algorithms such as Dantzig's simplex method. However, if we also impose that each xi must
take an integer value, we stumble into the world of more difficult problems known as integer
programs [4].
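To make the definition concrete, the following toy sketch (in Python; the problem data is
invented purely for illustration) solves a tiny integer program by brute force over a bounded
box of candidates. Brute force is no substitute for a real solver, but it makes the objective,
constraints, and integrality requirement explicit.

```python
from itertools import product

# Maximize f(x1, x2) = 3*x1 + 2*x2 + 1 subject to
#   x1 + x2 <= 4,  2*x1 + x2 <= 6,  x1, x2 integers with 0 <= x1, x2 <= 4.
a, c = [3, 2], 1          # objective coefficients a_i and constant c
A = [[1, 1], [2, 1]]      # constraint matrix A
b = [4, 6]                # right-hand side b

def feasible(x):
    """Check Ax <= b componentwise."""
    return all(sum(row[i] * x[i] for i in range(len(x))) <= bi
               for row, bi in zip(A, b))

# Enumerating every integer point in the box is only possible because the
# box is tiny; the number of candidates grows exponentially with n.
best = max((x for x in product(range(5), repeat=2) if feasible(x)),
           key=lambda x: sum(ai * xi for ai, xi in zip(a, x)))
print(best, sum(ai * xi for ai, xi in zip(a, best)) + c)   # (2, 2) 11
```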

1.2 Computational Complexity


The relative efficiency of an algorithm is of general interest to any user of that algorithm; we
all want to have some idea of how long it will take our programs to run. Computer scientists,
computational complexity theorists in particular, have devised formal ways of measuring the
amount of time required by an algorithm as a function of the size of the algorithm’s input.
For example, searching for a particular value in a set of sorted data can be done in O(log n)
time, meaning the number of steps has an upper-bound on the order of the logarithm of the
number of items in the set.
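For instance, the classic binary search algorithm achieves this bound; here is a minimal
Python version.

```python
def binary_search(data, target):
    """Return the index of target in the sorted list data, or None if absent.

    Each comparison halves the remaining interval, so at most on the order
    of log2(n) comparisons are made for n items.
    """
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if data[mid] == target:
            return mid
        elif data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
```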
Though the efficiency of an algorithm on a “realistic” model of computation is of primary
interest to most people, computer scientists are also often concerned with how efficiently an
algorithm may execute on other types of computers. For example, computers that can
perform some constant number of operations at each time step are referred to as deterministic,
while those that can perform an arbitrary number of operations at each step are referred to
as non-deterministic. All real computers are deterministic, though non-deterministic models
of computation are still of interest, mainly because of the improvement in running time an
algorithm can achieve on such a model.
The set of all decision problems (problems with “yes or no” answers) that can be solved
in polynomial time on a deterministic machine is referred to as the class P. The class of
decision problems that can be solved in polynomial time on a non-deterministic machine is
known as N P. Obviously P ⊆ N P (any problem that can be solved in polynomial time on a

deterministic machine can be solved in polynomial time on a non-deterministic machine),
though whether P ⊂ N P or P = N P is still an open question.
Given that it is not yet clear if P = N P or not, complexity theorists have formulated
another class of the “hardest” problems in N P known as N P-complete. These are problems
in N P that every other problem in N P (and thus every problem in P) can be transformed
into in (deterministic) polynomial time [2]. Classic examples of such problems include
the satisfiability problem and (the decision problem version of) the travelling salesman
problem, while more modern examples include the decision problem versions of games such
as minesweeper and Tetris.
Of course, there are problems that are even harder than the problems in N P. Generalized
games such as chess and go, for instance, are known to be complete for EXPTIME, meaning
they genuinely require exponential time on a deterministic machine. There is, however, a
special class of problems that are at least as hard as any problem in N P.
Suppose we have a problem Π that may or may not be in N P. If every problem in N P
can be transformed into Π in polynomial time, then we say that Π is N P-hard. It is in this
sense we mean that Π is at least as hard as any problem in N P [2].

2 Why is Integer Programming N P-Hard?


2.1 The Satisfiability Problem as an Integer Program
The original N P-complete problem described by Stephen Cook is known as the satisfiability
problem, which asks whether there is any assignment of boolean variables under which a
given boolean formula is true [1]. The following conventions allow us to describe the
satisfiability problem as an integer program [4].
The value one will stand for true, and the value zero will stand for false. Binary AND
is represented by multiplication, and binary OR is represented by addition. Negation of x,
which we will denote x̄, is represented by 1 − x. Our boolean formula will be placed in
conjunctive normal form (clauses of ORs that are ANDed together), which means that, using
our representation, the problem has a solution if and only if there is a set of values such that
the formula evaluates to a number greater than or equal to one. Furthermore, this requires
that each clause evaluate to a value greater than or equal to one, which we can use as our
constraints. Consider the following formula from [4].

$$(x_1 \vee x_2 \vee x_3) \wedge (x_1 \vee \bar{x}_2) \wedge (x_2 \vee \bar{x}_3) \wedge (x_3 \vee \bar{x}_1) \wedge (\bar{x}_1 \vee \bar{x}_2 \vee \bar{x}_3)$$
Using the convention described above, this boolean formula becomes the following arithmetic
formula.

$$(x_1 + x_2 + x_3)(x_1 + 1 - x_2)(x_2 + 1 - x_3)(x_3 + 1 - x_1)((1 - x_1) + (1 - x_2) + (1 - x_3))$$

We may now easily convert this to an integer program.

$$\begin{aligned}
\text{maximize} \quad & y \\
\text{subject to} \quad & x_1 + x_2 + x_3 \ge y \\
& x_1 + 1 - x_2 \ge y \\
& x_2 + 1 - x_3 \ge y \\
& x_3 + 1 - x_1 \ge y \\
& (1 - x_1) + (1 - x_2) + (1 - x_3) \ge y \\
& 0 \le x_1, x_2, x_3 \le 1 \\
& x_1, x_2, x_3 \in \mathbb{Z}
\end{aligned}$$

Here we require each clause to evaluate to at least y, so the associated boolean formula is
satisfiable if and only if the maximum value of y is at least one.
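To see the correspondence concretely, the following sketch (the encoding convention and
helper names are ours, not from [4]) brute-forces all 0-1 assignments for the formula above
and computes the best achievable y, that is, the largest value every clause can meet
simultaneously.

```python
from itertools import product

# Each clause is a list of literals: +(i+1) for x_i, -(i+1) for its negation.
clauses = [[1, 2, 3], [1, -2], [2, -3], [3, -1], [-1, -2, -3]]

def clause_value(clause, x):
    """Arithmetic value of a clause: OR becomes +, negation becomes 1 - x."""
    return sum(x[abs(lit) - 1] if lit > 0 else 1 - x[abs(lit) - 1]
               for lit in clause)

# The maximum of y equals the best, over all 0-1 assignments, of the
# smallest clause value under that assignment.
best_y = max(min(clause_value(cl, x) for cl in clauses)
             for x in product([0, 1], repeat=3))
print(best_y >= 1)   # False: this particular formula is unsatisfiable
```

(Indeed, the middle three clauses force x1 = x2 = x3, while the first and last clauses
forbid the all-zero and all-one assignments.)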


This special case of integer programming, where our variables are limited to the values
zero and one, is known as 0-1 programming. Since we have just shown that an N P-complete
problem can be reduced to a 0-1 programming problem, we have shown that 0-1 programming
is N P-hard.

2.2 Equivalence of General Integer Programming and 0-1 Programming
We shall now show that the “general” integer programming problem (that is, where our vari-
ables may be any integer) is polynomially equivalent to 0-1 programming. By polynomially
equivalent, we mean a problem of one type may be transformed into an equivalent problem
of the other type in polynomial time [2]. Knowing that these two classes of problems are
polynomially equivalent establishes the fact that if one class of problems is N P-hard, then
so is the other class. The technique we describe appears in [4].
Note that 0-1 programming is a special case of general integer programming (any 0-1
program is already an integer program), so we only need to show the other direction: that
any integer program can be rewritten as a 0-1 program. We now describe how to do so.
Suppose we have an integer program whose feasible region is bounded, with 0 ≤ xj ≤ M
for each variable xj, and let $l = \lceil \log_2 M \rceil$. We can then rewrite each xj in the original
problem as a set of variables xj,i, where each xj,i ∈ {0, 1}, since each nonnegative integer
has a unique binary representation. Because we are only concerned with integers between
0 and M, at most l + 1 bits are needed. We set each xj,i to the ith bit of the binary
representation of xj, so
$$x_j = \sum_{i=0}^{l} x_{j,i} \, 2^i.$$
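As a quick sanity check of this encoding (a toy sketch; the helper names are ours):

```python
import math

M = 100                           # assumed upper bound on each variable
l = math.ceil(math.log2(M))       # l = 7; bits 0..l cover 0..2**(l+1) - 1 >= M

def to_bits(value, l):
    """The 0-1 variables x_{j,0}, ..., x_{j,l} encoding x_j = value."""
    return [(value >> i) & 1 for i in range(l + 1)]

def from_bits(bits):
    """Recover x_j as the sum of x_{j,i} * 2**i."""
    return sum(bit << i for i, bit in enumerate(bits))

x_j = 83
bits = to_bits(x_j, l)
assert from_bits(bits) == x_j     # the encoding is exact and unique
print(bits)                       # [1, 1, 0, 0, 1, 0, 1, 0]
```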

Thus every integer program can be written as a 0-1 program and vice versa. Since
the satisfiability problem can be written as a 0-1 program, 0-1 programming is N P-hard,
but since 0-1 programming and integer programming are polynomially equivalent, integer
programming is also N P-hard.

3 Algorithms for Integer Programming
Given that integer programming is N P-hard, how can we efficiently tackle integer
programming problems? There are two approaches we will now consider, both of which
attack the problem in the same basic way: we solve the relaxed version of the problem
(that is, the linear program with the same constraints, but allowing non-integer solutions),
and continue to add constraints until we arrive at an integer solution.
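To see why we cannot simply solve the relaxation and round, consider the following small
example, which assumes SciPy's linprog is available (any LP solver would do).

```python
from scipy.optimize import linprog

# Maximize x1 + x2 subject to 4*x1 + x2 <= 4, x1 + 4*x2 <= 4, x1, x2 >= 0.
# linprog minimizes, so we negate the objective coefficients.
c = [-1, -1]
A_ub = [[4, 1], [1, 4]]
b_ub = [4, 4]
bounds = [(0, None), (0, None)]

relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(relaxed.x, -relaxed.fun)   # [0.8 0.8] 1.6: a fractional optimum

# Rounding (0.8, 0.8) up to (1, 1) violates both constraints, while rounding
# down to (0, 0) gives value 0; the true integer optimum, (1, 0) or (0, 1),
# has value 1. Hence the need for the more careful methods below.
```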

3.1 The Cutting Plane Algorithm


The cutting plane algorithm works by first finding the solution to the relaxed version of the
problem. If that solution has non-integer components, we impose a new constraint (a "cut")
that eliminates the solution just found but does not eliminate any feasible integer solutions,
and solve again, repeating until we arrive at a solution with all integer components [5].
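The following sketch, again assuming SciPy's linprog, performs one such iteration on a
tiny problem. The cut below is derived by hand for this specific instance; a real
implementation would generate cuts automatically, for example Gomory cuts read off the
simplex tableau [5].

```python
from scipy.optimize import linprog

# Maximize x1 + x2 subject to 2*x1 + 2*x2 <= 3 with x1, x2 >= 0 and integer.
c = [-1, -1]                        # negated: linprog minimizes
A_ub = [[2, 2]]
b_ub = [3]
bounds = [(0, None), (0, None)]

relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(relaxed.x, -relaxed.fun)      # fractional optimum with x1 + x2 = 1.5

# Every integer point satisfies x1 + x2 <= floor(1.5) = 1, so the cut
# x1 + x2 <= 1 removes the fractional optimum but no feasible integer point.
A_ub.append([1, 1])
b_ub.append(1)

cut = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(cut.x, -cut.fun)              # an integer vertex, e.g. (1, 0), value 1
```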

3.2 Branch and Bound


The branch and bound algorithm treats the problem by considering an enumeration tree of
possible integer solutions. Of course, this tree is likely to be far too large to consider each
possible enumeration explicitly. Thus, we solve the relaxed version of the problem, each time
pruning possible solutions that are either infeasible or non-optimal, based on the solution of
the relaxed problem [3].
Branch and bound builds this enumeration tree by considering subregions of the feasible
region (branching), and then finding appropriate upper and lower bounds for the optimal
solution within a subregion (bounding). One way of doing this is to find the solution to the
original relaxed problem, say $x^0$. If $x^0$ has a non-integer component $x^0_i$, then we break our
original problem into two subproblems. In the first subproblem we impose the additional
constraint that $x_i$ must be less than or equal to $\lfloor x^0_i \rfloor$. In the second subproblem, our
constraint is that $x_i$ must be greater than or equal to $\lfloor x^0_i \rfloor + 1$. We then attempt to
solve the relaxed versions of these problems, branching again if need be.
again if need be.
If while solving any of our subproblems we arrive at an integer solution, however, we know
that we will not find any better solutions to that subproblem by breaking it up into even
more subproblems. We can then kill the nodes of the enumeration tree that are descendants
of the current node [6].
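As an illustration (not a production solver), here is a minimal branch and bound sketch
along these lines. It assumes SciPy's linprog for the relaxations, and that every relaxation
is bounded.

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Minimize c @ x subject to A_ub @ x <= b_ub with x integer.

    Solve each node's LP relaxation, prune nodes that are infeasible or
    cannot beat the incumbent, and branch on a fractional component.
    """
    best_x, best_val = None, math.inf
    stack = [bounds]                       # each node is a list of (lo, hi) pairs
    while stack:
        node_bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds)
        if not res.success or res.fun >= best_val:
            continue                       # prune: infeasible or bounded out
        frac = next((i for i, v in enumerate(res.x)
                     if abs(v - round(v)) > 1e-6), None)
        if frac is None:                   # integer solution: new incumbent
            best_x, best_val = [round(v) for v in res.x], res.fun
            continue
        lo, hi = node_bounds[frac]
        v = res.x[frac]
        left, right = list(node_bounds), list(node_bounds)
        left[frac] = (lo, math.floor(v))          # branch: x_i <= floor(v)
        right[frac] = (math.floor(v) + 1, hi)     # branch: x_i >= floor(v) + 1
        stack.extend([left, right])
    return best_x, best_val

# Maximize x1 + x2 (i.e. minimize its negation) on the example from above.
x, val = branch_and_bound([-1, -1], [[4, 1], [1, 4]], [4, 4],
                          [(0, None), (0, None)])
print(x, -val)   # an optimal integer point, e.g. [1, 0], with value 1
```

Note how the bounding step uses the relaxed optimum: the relaxation is less constrained,
so its value bounds anything achievable in the subregion, and a node whose relaxation
cannot beat the incumbent may be killed immediately.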

4 Conclusion
Given that linear programming problems can often be solved quite efficiently, it comes as
a bit of a shock that they become considerably more difficult once we impose the constraint
that we only consider integer solutions. Nevertheless, there is no avoiding the fact that
there are applications where non-integer solutions are unacceptable, and so we must
consider how best to tackle the problem of integer programming.

In this paper we have explained why integer programming is N P-hard, and thus likely
to remain intractable for the foreseeable future. We have also briefly discussed two
algorithms for solving integer programming problems, both of which work by repeatedly
solving relaxed linear programs, modifying the problem each time until we arrive at an
integer solution.
Either algorithm will eventually find the solution to the integer program, but there is no
guarantee on the amount of time it will take to do so. With branch and bound, if we do not
"luck out" and find integer solutions after the first few iterations (allowing us to kill off
parts of the tree), we could end up building essentially the entire enumeration tree. Likewise,
depending on the "shape" of our feasibility region, the cutting plane algorithm may require
numerous iterations before we have imposed enough constraints (cut off enough of the plane)
to arrive at an integer solution.

References
[1] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the
third annual ACM symposium on Theory of Computing, pages 151–158, 1971.

[2] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the
Theory of NP-Completeness, chapter 2. W. H. Freeman and Company, 1979.

[3] George L. Nemhauser and Laurence A. Wolsey. Integer and Combinatorial Optimization,
chapter 14. John Wiley and Sons, 1988.

[4] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity, chapter 13. Dover Publications, 1998.

[5] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity, chapter 14. Dover Publications, 1998.

[6] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity, chapter 18. Dover Publications, 1998.
