Chapter 4 Dynamic Programming
Dynamic Programming is a method for solving a complex problem by breaking it down into a
collection of simpler sub-problems, solving each of those sub-problems just once, and storing
their solutions using a memory-based data structure (array, map, etc).
Each of the sub-problem solutions is indexed in some way, typically based on the values of its
input parameters, so as to facilitate its lookup. So the next time the same sub-problem occurs,
instead of re-computing its solution, one simply looks up the previously computed solution,
thereby saving computation time. This technique of storing solutions to sub-problems instead
of re-computing them is called memoization.
Dynamic programming is used for optimization problems:
- Find a solution with the optimal value.
- Minimization or maximization. (We’ll see both.)
Fibonacci Sequence
Solving the Fibonacci sequence with a plain recursive function takes O(2^n)
(exponential) time:
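As a sketch, the naive recursive version (assuming the usual convention F(0) = 0, F(1) = 1) looks like this:

```python
def fib(n):
    """Naive recursive Fibonacci: exponential O(2^n) time."""
    if n <= 1:                       # base cases: F(0) = 0, F(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)   # each call spawns two more calls
```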
Suppose n = 5; we can represent the computation as the following tree of recursive calls:
As you can see in the tree diagram, the number 4 is computed 1 time, the number 3 is
computed 2 times, the number 2 is computed 3 times, and the number 1 is computed 5 times.
These repeat counts grow as n gets larger. How can we stop doing that? This is where
dynamic programming comes into play: it reduces the running time to O(n).
a. Top-Down Approach
The first dynamic programming approach we’ll use is the top-down approach. The idea here
is similar to the recursive approach, but the difference is that we’ll save the solutions to the
subproblems we encounter.
When the recursion does a lot of unnecessary calculation, like the one above, an easy way to
fix it is to cache the results. Whenever we try to compute a number, say n, we first check
whether we have computed it before by looking in our cache. If we have, we simply return
what is in the cache. Otherwise, we compute the number, and once we have it, we put the
result into the cache for future use.
In the top-down approach, we need to set up an array to save the solutions to subproblems.
Here, we create it in a helper function, and then we call our main function:
Now, let’s look at the main top-down function. We always check whether we can return a
solution stored in our array before computing the solution to the subproblem as we did in the
recursive approach:
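A minimal sketch of both pieces in Python (the function names are illustrative): the helper sets up the array, and the main function checks the array before recursing:

```python
def fib_top_down(n):
    """Helper: sets up the memo array, then calls the main function."""
    memo = [None] * (n + 1)   # memo[i] will hold F(i) once computed
    return fib_memo(n, memo)

def fib_memo(n, memo):
    """Main top-down function: check the cache before recursing."""
    if memo[n] is not None:   # solution already stored? just return it
        return memo[n]
    if n <= 1:                # base cases: F(0) = 0, F(1) = 1
        result = n
    else:
        result = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result          # store the solution for future lookups
    return result
```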
b. Bottom-Up Approach
In the bottom-up dynamic programming approach, we’ll reorganize the order in which
we solve the subproblems. We will compute F(0), then F(1), then F(2), and so on:
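A sketch of this bottom-up order, storing all intermediate results in a table of size N+1 (illustrative names):

```python
def fib_bottom_up(n):
    """Bottom-up: compute F(0), F(1), ..., F(n) in order."""
    if n <= 1:
        return n
    table = [0] * (n + 1)                   # table[i] holds F(i)
    table[1] = 1
    for i in range(2, n + 1):               # each entry uses the two before it
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```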
In the bottom-up approach, we calculate the Fibonacci numbers in order until we reach F(N).
Since we calculate them in this order, we don’t need to keep an array of size N+1 to store the
intermediate results.
Instead, we use variables A and B to save the two most recently calculated Fibonacci numbers.
This is sufficient to calculate the next number in the series:
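A sketch using only the two variables A and B:

```python
def fib_two_vars(n):
    """Bottom-up Fibonacci keeping only the last two values: O(n) time, O(1) space."""
    if n <= 1:
        return n
    a, b = 0, 1               # a = F(i-2), b = F(i-1)
    for _ in range(2, n + 1):
        a, b = b, a + b       # slide the pair forward one position
    return b                  # b now holds F(n)
```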
Knapsack Problem
Problem: Given a set of items, each with a weight and a value (profit) associated with it,
find a subset of items such that the total weight is less than or equal to the capacity of the
knapsack and the total value earned is as large as possible.
The knapsack problem is useful in solving resource allocation problems. Let X = ⟨x1, x2, x3,
. . . , xn⟩ be the set of n items. W = ⟨w1, w2, w3, . . . , wn⟩ and V = ⟨v1, v2, v3, . . . , vn⟩
are the weights and values associated with each item in X. The knapsack capacity is M units.
The knapsack problem is to find the set of items which maximizes the profit such that collective
weight of selected items does not cross the knapsack capacity.
Select items from X and fill the knapsack so as to maximize the profit. The knapsack
problem has two variations. 0/1 knapsack does not allow breaking of items: either add an
entire item or reject it. It is also known as the binary knapsack. Fractional knapsack allows
breaking of items, and profit is earned proportionally.
Approach for Knapsack Problem using Dynamic Programming
If the weight of the item is larger than the remaining knapsack capacity, we skip the item, and
the solution of the previous step remains as it is. Otherwise, we should add the item to the
solution set and the problem size will be reduced by the weight of that item. Corresponding
profit will be added for the selected item.
Dynamic programming divides the problem into small sub-problems. Let V be a table of
solutions to sub-problems, where V[i, j] represents the solution for capacity j using the first i items.
The mathematical notation of the knapsack problem is given as:
V[i, j] = V[i – 1, j]                                  if wi > j
V[i, j] = max( V[i – 1, j], V[i – 1, j – wi] + vi )    otherwise
where n = number of items, 1 ≤ i ≤ n, and 0 ≤ j ≤ M.
The proposed algorithm for 0/1 knapsack using dynamic programming is described below:
Algorithm TRACE_KNAPSACK(w, v, M)
// w is the array of weights of n items
// v is the array of values of n items
// M is the knapsack capacity
// V is the filled dynamic programming table
SW ← { }    // weights of selected items
SP ← { }    // values (profits) of selected items
i ← n
j ← M
while ( i > 0 and j > 0 ) do
    if (V[i, j] == V[i – 1, j]) then
        i ← i – 1               // item i was not selected
    else
        SW ← SW ∪ {w[i]}        // item i was selected
        SP ← SP ∪ {v[i]}
        j ← j – w[i]
        i ← i – 1
    end
end
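The table-filling recurrence and the trace-back can be sketched together in Python as follows (function and variable names are illustrative; Python lists are 0-indexed, so item i lives at w[i - 1] and v[i - 1]):

```python
def knapsack_01(w, v, M):
    """0/1 knapsack by dynamic programming.

    Returns (best profit, selected item numbers, table V), where
    V[i][j] = best profit using the first i items with capacity j.
    """
    n = len(w)
    V = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(M + 1):
            if w[i - 1] > j:                  # item too heavy: skip it
                V[i][j] = V[i - 1][j]
            else:                             # best of skipping or taking it
                V[i][j] = max(V[i - 1][j],
                              V[i - 1][j - w[i - 1]] + v[i - 1])
    # Trace back the selected items, as in TRACE_KNAPSACK
    selected, i, j = [], n, M
    while i > 0 and j > 0:
        if V[i][j] != V[i - 1][j]:            # item i was taken
            selected.append(i)
            j -= w[i - 1]
        i -= 1
    return V[n][M], sorted(selected), V
```

On the worked example in the next section (weights (2, 3, 4, 5), profits (3, 4, 5, 6), M = 5), this sketch selects items 1 and 2 for a total profit of 7.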
Complexity analysis
With n items, there exist 2^n subsets; the brute force approach examines all subsets to find the
optimal solution. Hence, the running time of the brute force approach is O(2^n). This is
unacceptable for large n.
Dynamic programming finds an optimal solution by constructing a table of size n × M, where
n is the number of items and M is the capacity of the knapsack. This table can be filled in
O(nM) time, and the space complexity is the same.
Find an optimal solution for following 0/1 Knapsack problem using dynamic
programming: Number of objects n = 4, Knapsack Capacity M = 5, Weights (W1, W2,
W3, W4) = (2, 3, 4, 5) and profits (P1, P2, P3, P4) = (3, 4, 5, 6).
Solution:
The solution of the knapsack problem is defined by the recurrence given earlier.
Boundary conditions are V[0, j] = V[i, 0] = 0. The initial configuration of the table looks like:
Item   Detail        j=0  j=1  j=2  j=3  j=4  j=5
i=0    —             0    0    0    0    0    0
i=1    w1=2, v1=3    0
i=2    w2=3, v2=4    0
i=3    w3=4, v3=5    0
i=4    w4=5, v4=6    0
Final table would be:
Item   Detail        j=0  j=1  j=2  j=3  j=4  j=5
i=0    —             0    0    0    0    0    0
i=1    w1=2, v1=3    0    0    3    3    3    3
i=2    w2=3, v2=4    0    0    3    4    4    7
i=3    w3=4, v3=5    0    0    3    4    5    7
i=4    w4=5, v4=6    0    0    3    4    5    7
Unique Paths in a Grid
Count the number of paths from the top-left to the bottom-right corner of an n × m grid,
moving only right or down:
▪ Initialize an array memo with the matrix’s dimensions n × m. This array will give us the
final count once we reach the bottom-right corner.
▪ Fill memo[0][0] = 1, since there is exactly one way to be at the start.
▪ Fill the rest of the first row with 1, as each cell there can be reached from only one
direction (by going right).
▪ Fill the rest of the first column with 1 as well, as each cell there can be reached from only
one direction (by going down).
▪ Iterate over all other rows and columns and use the formula memo[i][j] = memo[i-1][j] +
memo[i][j-1], as each cell can be reached from two directions (from above or from the
left).
▪ Return memo[n-1][m-1].
       j=0  j=1  j=2  j=3
i=0    1    1    1    1
i=1    1    2    3    4
i=2    1    3    6    10
i=3    1    4    10   20
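The steps above can be sketched as follows (illustrative names; memo[0][0] is 1 because there is exactly one way to be at the start):

```python
def unique_paths(n, m):
    """Count paths from the top-left to the bottom-right corner of an
    n x m grid, moving only right or down."""
    memo = [[0] * m for _ in range(n)]
    for j in range(m):            # first row: reachable one way (going right)
        memo[0][j] = 1
    for i in range(n):            # first column: reachable one way (going down)
        memo[i][0] = 1
    for i in range(1, n):
        for j in range(1, m):     # sum of the two ways of arriving here
            memo[i][j] = memo[i - 1][j] + memo[i][j - 1]
    return memo[n - 1][m - 1]
```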
Longest Common Subsequence (LCS)
Dynamic Programming: Bottom-Up Approach
Theorem: Let Z = ⟨ z1, ..., zk ⟩ be any LCS of X = ⟨ x1, ..., xm ⟩ and Y = ⟨ y1, ..., yn ⟩. Then
1. if xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1;
2. if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y;
3. if xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Yn-1.
A recursive algorithm based on this formulation would have lots of repeated subproblems, for
example, on strings of length 4 and 3:
- Lots of repeated subproblems.
- Instead of recomputing, store the solutions in a table.
Dynamic programming avoids the redundant computations by storing the results in a table.
We use c[i, j] for the length of an LCS of the prefixes Xi and Yj (hence i and j start at 0):
c[i, j] = 0                                 if i = 0 or j = 0
c[i, j] = c[i – 1, j – 1] + 1               if i, j > 0 and xi = yj
c[i, j] = max( c[i – 1, j], c[i, j – 1] )   if i, j > 0 and xi ≠ yj
This is a bottom-up solution: Indices i and j increase through the loops, and references
to c always involve either i-1 or j-1, so the needed subproblems have already been computed.
In the process of computing the value of the optimal solution we can also record
the choices that led to this solution. Step 4 is to add this latter record of choices and a way of
recovering the optimal solution at the end.
Table b[i, j] is updated above to remember whether each entry comes from
• a common subsequence of Xi-1 and Yj-1 (diagonal arrow ↖), in which case the common
character xi = yj is included in the LCS;
• a common subsequence of Xi-1 and Y (↑); or
• a common subsequence of X and Yj-1 (←).
We reconstruct the LCS by calling Print-LCS(b, X, m, n) and following the arrows, printing
the characters of X that correspond to the diagonal arrows (a Θ(m + n) traversal from the
lower right of the matrix to the origin):
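A sketch of the bottom-up computation together with the arrow-following reconstruction (Python, names illustrative; the two tables are built exactly as described above):

```python
def lcs(X, Y):
    """Bottom-up LCS: build c (lengths) and b (arrows), then
    reconstruct one LCS by following the arrows from (m, n)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = 'diag'          # x_i = y_j belongs to the LCS
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = 'up'
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = 'left'
    # Follow the arrows back from the lower right (the Print-LCS step)
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if b[i][j] == 'diag':
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif b[i][j] == 'up':
            i -= 1
        else:
            j -= 1
    return c[m][n], ''.join(reversed(out))
```

On the example below, lcs("spanking", "amputation") gives a length of 4 and the string "pain".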
Example of LCS:
What do spanking and amputation have in common? [Show only c[i,j] ]
Answer : pain