
Dynamic Programming 2/24/2005 1:46 AM

Dynamic Programming

Matrix Chain-Products

Matrix Chain-Product:
- Compute A = A0*A1*…*An-1
- Ai is di × di+1
- Problem: How to parenthesize?

Example:
- B is 3 × 100
- C is 100 × 5
- D is 5 × 5
- (B*C)*D takes 1500 + 75 = 1575 ops
- B*(C*D) takes 1500 + 2500 = 4000 ops
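The two operation counts above can be recomputed directly from the matrix dimensions. A minimal sketch (the helper name mult_ops is ours):

```python
def mult_ops(d, e, f):
    # Multiplying a d x e matrix by an e x f matrix takes d*e*f scalar multiplications.
    return d * e * f

# B is 3 x 100, C is 100 x 5, D is 5 x 5.
# (B*C)*D: first B*C (3x100 times 100x5), then the 3x5 result times D (5x5).
bc_first = mult_ops(3, 100, 5) + mult_ops(3, 5, 5)    # 1500 + 75 = 1575
# B*(C*D): first C*D (100x5 times 5x5), then B (3x100) times the 100x5 result.
cd_first = mult_ops(100, 5, 5) + mult_ops(3, 100, 5)  # 2500 + 1500 = 4000
print(bc_first, cd_first)  # 1575 4000
```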

Outline and Reading

- Matrix Chain-Product (§5.3.1)
- The General Technique (§5.3.2)
- 0-1 Knapsack Problem (§5.3.3)

Enumeration Approach

Matrix Chain-Product algorithm:
- Try all possible ways to parenthesize A = A0*A1*…*An-1
- Calculate the number of ops for each one
- Pick the one that is best

Running time:
- The number of parenthesizations is equal to the number of binary trees with n nodes
- This is exponential!
- It is called the Catalan number, and it is almost 4^n.
- This is a terrible algorithm!
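The count of parenthesizations can be checked with a short recursion (a sketch; the name count_parens is ours): the final multiply splits a chain of n matrices into a left part of k matrices and a right part of n − k matrices, which gives the Catalan-number recurrence.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_parens(n):
    # Number of ways to fully parenthesize a chain of n matrices.
    # The final multiply splits the chain into k and n - k matrices, k = 1 .. n-1.
    if n == 1:
        return 1
    return sum(count_parens(k) * count_parens(n - k) for k in range(1, n))

# count_parens(n) is the Catalan number C(n-1): 1, 1, 2, 5, 14, 42, ...
print([count_parens(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```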

Matrix Chain-Products

Dynamic Programming is a general algorithm design paradigm.
- Rather than give the general structure, let us first give a motivating example: Matrix Chain-Products

Review: Matrix Multiplication
- C = A*B
- A is d × e and B is e × f, so C is d × f
- C[i,j] = Σ (k = 0 to e−1) A[i,k]*B[k,j]
- O(d·e·f) time

Greedy Approach

Idea #1: repeatedly select the product that uses up the most operations.

Counter-example:
- A is 10 × 5
- B is 5 × 10
- C is 10 × 5
- D is 5 × 10
- Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 1000 + 500 = 2000 ops
- A*((B*C)*D) takes 500 + 250 + 250 = 1000 ops
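The triple-loop product reviewed above can be sketched directly from the formula for C[i,j] (the function name mat_mult is ours):

```python
def mat_mult(A, B):
    # Standard triple-loop product: A is d x e, B is e x f, C is d x f.
    d, e, f = len(A), len(B), len(B[0])
    assert all(len(row) == e for row in A), "inner dimensions must agree"
    C = [[0] * f for _ in range(d)]
    for i in range(d):
        for j in range(f):
            # C[i][j] = sum over k = 0 .. e-1 of A[i][k] * B[k][j]; O(d*e*f) total.
            for k in range(e):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```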


Another Greedy Approach

Idea #2: repeatedly select the product that uses the fewest operations.

Counter-example:
- A is 101 × 11
- B is 11 × 9
- C is 9 × 100
- D is 100 × 99
- Greedy idea #2 gives A*((B*C)*D), which takes 109989 + 9900 + 108900 = 228789 ops
- (A*B)*(C*D) takes 9999 + 89991 + 89100 = 189090 ops

The greedy approach is not giving us the optimal value.

Dynamic Programming Algorithm Visualization

  Ni,j = min over i ≤ k < j of { Ni,k + Nk+1,j + di·dk+1·dj+1 }

- The bottom-up construction fills in the N array by diagonals
- Ni,j gets values from previous entries in the i-th row and j-th column
- Filling in each entry in the N table takes O(n) time.
- Total run time: O(n^3)
- The actual parenthesization can be recovered by remembering the "k" for each N entry

[Figure: the n × n table N, filled diagonal by diagonal; the answer is the entry N0,n−1.]
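Greedy idea #2 can be simulated directly (a sketch; greedy_fewest_ops is our name): repeatedly merge the adjacent pair whose product is cheapest and accumulate the cost. On the counter-example above it reproduces the suboptimal 228789.

```python
def greedy_fewest_ops(dims):
    # dims[i], dims[i+1] are the dimensions of matrix i; len(dims) - 1 matrices.
    dims = list(dims)
    total = 0
    while len(dims) > 2:
        # Cost of multiplying the adjacent pair starting at position i.
        costs = [dims[i] * dims[i + 1] * dims[i + 2] for i in range(len(dims) - 2)]
        i = costs.index(min(costs))   # greedy: cheapest product first
        total += costs[i]
        del dims[i + 1]               # merge the pair into one matrix
    return total

# A is 101x11, B is 11x9, C is 9x100, D is 100x99.
print(greedy_fewest_ops([101, 11, 9, 100, 99]))  # 228789, but the optimum is 189090
```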

"Recursive" Approach

Define subproblems:
- Find the best parenthesization of Ai*Ai+1*…*Aj.
- Let Ni,j denote the number of operations done by this subproblem.
- The optimal solution for the whole problem is N0,n−1.

Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems.
- There has to be a final multiplication (root of the expression tree) for the optimal solution.
- Say the final multiply is at index i: (A0*…*Ai)*(Ai+1*…*An−1).
- Then the optimal solution N0,n−1 is the sum of two optimal subproblems, N0,i and Ni+1,n−1, plus the time for the last multiply.
- If the global optimum did not have these optimal subproblems, we could define an even better "optimal" solution.

Dynamic Programming Algorithm

Since subproblems overlap, we don't use recursion. Instead, we construct optimal subproblems "bottom-up." The Ni,i's are easy, so start with them; then do problems of "length" 2, 3, and so on. Running time: O(n^3).

Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 0 to n − 1 do
    Ni,i ← 0
  for b ← 1 to n − 1 do        { b = j − i is the length of the problem }
    for i ← 0 to n − b − 1 do
      j ← i + b
      Ni,j ← +∞
      for k ← i to j − 1 do
        Ni,j ← min{Ni,j, Ni,k + Nk+1,j + di·dk+1·dj+1}
  return N0,n−1
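The matrixChain pseudocode translates almost line for line into Python (a sketch; the name matrix_chain is ours). The dims list encodes the dimensions: matrix Ai is dims[i] × dims[i+1].

```python
def matrix_chain(dims):
    # dims has length n + 1: matrix Ai is dims[i] x dims[i+1].
    n = len(dims) - 1
    # N[i][j] = min ops to compute Ai * ... * Aj; the diagonal N[i][i] stays 0.
    N = [[0] * n for _ in range(n)]
    for b in range(1, n):                 # b = j - i, the length of the problem
        for i in range(n - b):
            j = i + b
            # Try every place k for the final multiply.
            N[i][j] = min(
                N[i][k] + N[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return N[0][n - 1]

# B is 3x100, C is 100x5, D is 5x5: the optimum is (B*C)*D with 1575 ops.
print(matrix_chain([3, 100, 5, 5]))      # 1575
# The greedy counter-example: the optimum A*((B*C)*D) takes 1000 ops.
print(matrix_chain([10, 5, 10, 5, 10]))  # 1000
```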

Characterizing Equation

- The global optimum has to be defined in terms of optimal subproblems, depending on where the final multiply is.
- Let us consider all possible places for that final multiply.
- Recall that Ai is a di × di+1 dimensional matrix.
- So a characterizing equation for Ni,j is the following:

  Ni,j = min over i ≤ k < j of { Ni,k + Nk+1,j + di·dk+1·dj+1 }

- Note that subproblems are not independent; the subproblems overlap.

The General Dynamic Programming Technique

Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
- Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on.
- Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems.
- Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, should be constructed bottom-up).
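Because subproblems overlap, a naive recursion on the characterizing equation would recompute the same Ni,j many times; caching each value gives a top-down alternative to the bottom-up table. A sketch (the function name matrix_chain_memo is ours):

```python
from functools import lru_cache

def matrix_chain_memo(dims):
    # dims[i], dims[i+1] are the dimensions of matrix Ai.
    @lru_cache(maxsize=None)
    def N(i, j):
        if i == j:
            return 0
        # The characterizing equation: try every place k for the final multiply.
        return min(N(i, k) + N(k + 1, j) + dims[i] * dims[k + 1] * dims[j + 1]
                   for k in range(i, j))
    return N(0, len(dims) - 2)

print(matrix_chain_memo([3, 100, 5, 5]))  # 1575, matching the bottom-up answer
```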


The 0/1 Knapsack Problem

Given: a set S of n items, with each item i having
- wi, a positive weight
- bi, a positive benefit

Goal: choose items with maximum total benefit but with weight at most W.

If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem.
- In this case, we let T denote the set of items we take
- Objective: maximize the sum of bi over i in T
- Constraint: the sum of wi over i in T is at most W

A 0/1 Knapsack Algorithm, Second Attempt

- Sk: set of items numbered 1 to k.
- Define B[k,w] to be the best selection from Sk with weight at most w.
- Good news: this does have subproblem optimality.

  B[k,w] = B[k−1,w]                             if wk > w
  B[k,w] = max{ B[k−1,w], B[k−1,w−wk] + bk }    otherwise

I.e., the best subset of Sk with weight at most w is either
- the best subset of Sk−1 with weight at most w, or
- the best subset of Sk−1 with weight at most w−wk, plus item k.
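The B[k,w] recurrence fills a table row by row (a sketch; the name knapsack_table is ours). Each entry either ignores item k or takes it on top of the best selection with the remaining capacity.

```python
def knapsack_table(items, W):
    # items: list of (benefit, weight) pairs; W: maximum total weight.
    n = len(items)
    # B[k][w] = best benefit using items 1..k with weight at most w; row 0 is all 0.
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        bk, wk = items[k - 1]
        for w in range(W + 1):
            if wk > w:
                B[k][w] = B[k - 1][w]                 # item k cannot fit
            else:
                B[k][w] = max(B[k - 1][w],            # skip item k
                              B[k - 1][w - wk] + bk)  # take item k
    return B[n][W]

S = [(3, 2), (5, 4), (8, 5), (4, 3), (10, 9)]  # (benefit, weight) pairs
print(knapsack_table(S, 20))  # 26
```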

Example

Given: a set S of n items, with each item i having
- bi, a positive "benefit"
- wi, a positive "weight"

Goal: choose items with maximum total benefit but with weight at most W (the "knapsack," here a box of width 9 in).

Items:
  Item:    1      2      3      4      5
  Weight:  4 in   2 in   2 in   6 in   2 in
  Benefit: $20    $3     $6     $25    $80

Solution:
- item 5 ($80, 2 in)
- item 3 ($6, 2 in)
- item 1 ($20, 4 in)

0/1 Knapsack Algorithm

  B[k,w] = B[k−1,w]                             if wk > w
  B[k,w] = max{ B[k−1,w], B[k−1,w−wk] + bk }    otherwise

- Recall the definition of B[k,w].
- Since B[k,w] is defined in terms of B[k−1,*], we can use two arrays instead of a matrix.
- Running time: O(nW). Not a polynomial-time algorithm, since W may be large.
- This is a pseudo-polynomial time algorithm.

Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit bi and weight wi; maximum weight W
  Output: benefit of best subset of S with weight at most W
  let A and B be arrays of length W + 1
  for w ← 0 to W do
    B[w] ← 0
  for k ← 1 to n do
    copy array B into array A
    for w ← wk to W do
      if A[w − wk] + bk > A[w] then
        B[w] ← A[w − wk] + bk
  return B[W]
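The two-array pseudocode carries over directly (a sketch; the name knapsack_01 is ours). Run on the box example above, it finds the $106 solution.

```python
def knapsack_01(items, W):
    # items: list of (benefit, weight) pairs; W: maximum total weight.
    B = [0] * (W + 1)   # B[w] = best benefit so far with weight at most w
    for bk, wk in items:
        A = B[:]        # A holds the previous row B[k-1, *]
        for w in range(wk, W + 1):
            if A[w - wk] + bk > A[w]:
                B[w] = A[w - wk] + bk
    return B[W]

# Box example: benefits $20, $3, $6, $25, $80; weights 4, 2, 2, 6, 2 in; W = 9 in.
items = [(20, 4), (3, 2), (6, 2), (25, 6), (80, 2)]
print(knapsack_01(items, 9))  # 106 = $80 + $6 + $20 (items 5, 3, 1)
```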

A 0/1 Knapsack Algorithm, First Attempt

- Sk: set of items numbered 1 to k.
- Define B[k] = best selection from Sk.
- Problem: this does not have subproblem optimality.
- Consider the set S = {(3,2), (5,4), (8,5), (4,3), (10,9)} of (benefit, weight) pairs and total weight W = 20.
- Best for S4: {(3,2), (5,4), (8,5), (4,3)}, with benefit 20.
- Best for S5: {(3,2), (5,4), (8,5), (10,9)}, with benefit 26; it drops item 4, so it is not an extension of the best selection for S4.
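A brute-force check (our own sketch; best_subset is our name) makes the failure concrete: the optimal set for S5 does not contain the optimal set for S4, so knowing B[k] alone is not enough to build B[k+1].

```python
from itertools import combinations

def best_subset(items, W):
    # Brute force: try every subset, keep the max-benefit one within weight W.
    best = ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= W:
                if sum(b for b, _ in subset) > sum(b for b, _ in best):
                    best = subset
    return best

S = [(3, 2), (5, 4), (8, 5), (4, 3), (10, 9)]  # (benefit, weight)
print(best_subset(S[:4], 20))  # ((3, 2), (5, 4), (8, 5), (4, 3)) -- benefit 20
print(best_subset(S, 20))      # ((3, 2), (5, 4), (8, 5), (10, 9)) -- benefit 26
```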
