Chapter 4: Dynamic Programming

Dynamic Programming (DP) is a technique for solving complex problems by breaking them into simpler sub-problems and storing their solutions to avoid redundant calculations; it is particularly useful for optimization tasks. The document discusses various applications of DP, including the Fibonacci sequence, the Knapsack problem, counting paths in a matrix, and finding the longest common subsequence, detailing both top-down and bottom-up approaches. Each problem is illustrated with algorithms and complexity analysis, highlighting the efficiency of DP compared to brute force methods.


Dynamic Programming

Dynamic Programming is a method for solving a complex problem by breaking it down into a
collection of simpler sub-problems, solving each of those sub-problems just once, and storing
their solutions using a memory-based data structure (array, map, etc).
Each of the sub-problem solutions is indexed in some way, typically based on the values of its
input parameters, so as to facilitate its lookup. So the next time the same sub-problem occurs,
instead of re-computing its solution, one simply looks up the previously computed solution,
thereby saving computation time. This technique of storing solutions to sub-problems instead
of re-computing them is called memoization.
Dynamic programming is used for optimization problems:
- Find a solution with the optimal value.
- Minimization or maximization. (We’ll see both.)

Dynamic programming is a four-step method:


1- Characterize the structure of an optimal solution.
2- Recursively define the value of an optimal solution.
3- Compute the value of an optimal solution, typically in a bottom-up fashion.
4- Construct an optimal solution from computed information.

Fibonacci Sequence

The Fibonacci Sequence is the series of numbers:


0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The next number is found by adding up the two numbers before it.
• The 2 is found by adding the two numbers before it (1+1)
• The 3 is found by adding the two numbers before it (1+2),
• and the 5 is (2+3),
• and so on!

Solving the Fibonacci sequence with a plain recursive function takes O(2^n) (exponential)
time.
Suppose n = 5; we can represent the computation as a recursion tree:

As you can see in the tree diagram, F(4) is computed 1 time, F(3) is repeatedly computed
2 times, F(2) is repeatedly computed 3 times, and F(1) is repeatedly computed 5 times. The
amount of repeated work grows as n gets larger. How can we stop doing that? This is where
dynamic programming comes into play: it reduces the running time to O(n).
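A sketch of this plain recursive solution (the function name fib is our choice):

```c
#include <assert.h>

/* Plain recursive Fibonacci: the same subproblems are recomputed
   again and again, so the running time is O(2^n). */
int fib(int n) {
    if (n <= 1) return n;            /* base cases: F(0) = 0, F(1) = 1 */
    return fib(n - 1) + fib(n - 2);  /* each call spawns two more calls */
}
```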

Dynamic Programming to solve this problem:


a. Top-Down Memoization

The first dynamic programming approach we’ll use is the top-down approach. The idea here
is similar to the recursive approach, but the difference is that we’ll save the solutions to
subproblems we encounter.
When the recursion does a lot of unnecessary calculation, just like the one above, an easy way to
solve this is to cache the results. Whenever we are trying to compute a number, say n, we first
check whether we have done that before by looking in our cache. If we have, we simply return what
is in the cache. Otherwise, we compute the number, and once we have it, we make sure to put the
result into the cache for use in the future.
In the top-down approach, we need to set up an array to save the solutions to subproblems.
Here, we create it in a helper function, and then we call our main function:

Now, let’s look at the main top-down function. We always check if we can return a solution
stored in our array before computing the solution to the subproblem, like we did in the recursive
approach:
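A sketch of the helper and the top-down function (the names fib_helper and fib_td, and the fixed-size memo array, are our choices):

```c
#include <assert.h>
#include <string.h>

#define MAXN 92            /* F(92) is the largest Fibonacci number fitting in 64 bits */

static long long memo[MAXN + 1];

/* Main top-down function: check the cache before computing. */
long long fib_td(int n) {
    if (n <= 1) return n;                     /* base cases */
    if (memo[n] != -1) return memo[n];        /* solved before: just look it up */
    memo[n] = fib_td(n - 1) + fib_td(n - 2);  /* solve once, then cache */
    return memo[n];
}

/* Helper: set up the memo array (-1 means "not computed yet"),
   then call the main function. */
long long fib_helper(int n) {
    memset(memo, -1, sizeof memo);
    return fib_td(n);
}
```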

b. Bottom-Up Approach (Tabulation)
In the bottom-up dynamic programming approach, we’ll reorganize the order in which
we solve the subproblems. We will compute F(0), then F(1), then F(2), and so on:
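A sketch of this order of computation, filling a table from F(0) upward (the array bound is our choice):

```c
#include <assert.h>

/* Bottom-up Fibonacci: compute F(0), F(1), ..., F(n) in order,
   so every subproblem is ready before it is needed. */
long long fib_bottom_up(int n) {
    long long f[93];                 /* table of solutions F(0..92) */
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];  /* each value is the sum of the two before it */
    return f[n];
}
```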

In the bottom-up approach, we calculate the Fibonacci numbers in order until we reach F(N).
Since we calculate them in this order, we don’t need to keep an array of size N+1 to store the
intermediate results.
Instead, we use variables A and B to save the two most recently calculated Fibonacci numbers.
This is sufficient to calculate the next number in the series:
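A sketch of this constant-space version, with the two variables written here as a and b:

```c
#include <assert.h>

/* Space-optimized bottom-up Fibonacci: keep only the two most
   recently calculated numbers instead of a whole array. */
long long fib_two_vars(int n) {
    long long a = 0, b = 1;          /* a = F(0), b = F(1) */
    for (int i = 0; i < n; i++) {
        long long next = a + b;      /* next number in the series */
        a = b;                       /* slide the pair forward */
        b = next;
    }
    return a;                        /* after n steps, a holds F(n) */
}
```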

Knapsack Problem
Problem: Given a set of items, each having a different weight and a value or profit associated
with it, find the set of items such that the total weight is less than or equal to the capacity of the
knapsack and the total value earned is as large as possible.

The knapsack problem is useful in solving resource allocation problems. Let X = <x1, x2, x3,
. . . , xn> be the set of n items. Sets W = <w1, w2, w3, . . . , wn> and V = <v1, v2, v3, . . . , vn>
are the weights and values associated with each item in X. The knapsack capacity is M units.
The knapsack problem is to find the set of items which maximizes the profit such that the collective
weight of the selected items does not exceed the knapsack capacity.
Select items from X and fill the knapsack such that it maximizes the profit. The knapsack
problem has two variations. 0/1 knapsack does not allow breaking of items: either add an
entire item or reject it. It is also known as binary knapsack. Fractional knapsack allows
breaking of items, with profit earned proportionally.
Approach for Knapsack Problem using Dynamic Programming

If the weight of the item is larger than the remaining knapsack capacity, we skip the item, and
the solution of the previous step remains as it is. Otherwise, we consider adding the item to the
solution set: the problem size is reduced by the weight of that item, the corresponding profit is
added, and we keep whichever choice (taking or skipping the item) gives the larger profit.

Dynamic programming divides the problem into small sub-problems. Let V be a table of
sub-problem solutions, where V[i, j] is the maximum profit achievable with knapsack capacity j
using only the first i items.
The mathematical notion of the knapsack problem is given as:

V[i, j] = V[i – 1, j]                                      if w[i] > j
V[i, j] = max{ V[i – 1, j], v[i] + V[i – 1, j – w[i]] }    if w[i] ≤ j

V[0 … n, 0 … M] : size of the table

V(n, M) = solution

n = number of items

The proposed algorithm for 0/1 knapsack using dynamic programming is described below:

Algorithm DP_BINARY_KNAPSACK (V, W, M)


// Description: Solve binary knapsack problem using dynamic programming
// Input: Set of n items, set of weights W, profits of items v and knapsack capacity M
// Output: Table V, where V[n, M] holds the maximum profit
for i ← 0 to n do
V[i, 0] ← 0
end
for j ← 0 to M do
V[0, j] ← 0
end
for i ← 1 to n do
for j ← 1 to M do
if w[i] ≤ j then
V[i, j] ← max{V[i-1, j], v[i] + V[i – 1, j – w[i]]}
else
V[i, j] ← V[i – 1, j] // w[i] > j
end
end
end
The above algorithm will just tell us the maximum value we can earn with dynamic
programming. It does not say anything about which items should be selected. We can find
the items that give the optimum result using the following algorithm:

Algorithm TRACE_KNAPSACK(w, v, M)
// w is array of weights of n items
// v is array of values of n items
// M is the knapsack capacity
SW ← { } // weights of selected items
SP ← { } // profits of selected items
i ← n
j ← M
while ( i > 0 and j > 0 ) do
if (V[i, j] == V[i – 1, j]) then
i ← i – 1 // item i was not selected
else
SW ← SW + w[i] // item i was selected
SP ← SP + v[i]
j ← j – w[i]
i ← i – 1
end
end
Complexity analysis

With n items, there exist 2^n subsets; the brute force approach examines all subsets to find the
optimal solution. Hence, the running time of the brute force approach is O(2^n). This is
unacceptable for large n.
Dynamic programming finds an optimal solution by constructing a table of size n × M, where
n is the number of items and M is the capacity of the knapsack. This table can be filled in
O(nM) time, and the space complexity is the same.

▪ Running time of the brute force approach is O(2^n).


▪ Running time using dynamic programming with memoization is O(n × M).
Example

Find an optimal solution for the following 0/1 knapsack problem using dynamic
programming: number of objects n = 4, knapsack capacity M = 5, weights (w1, w2,
w3, w4) = (2, 3, 4, 5) and profits (v1, v2, v3, v4) = (3, 4, 5, 6).
Solution:
The solution of the knapsack problem is V(n, M) = V(4, 5).

We have the following data for the problem:

Item   Weight (wi)   Value (vi)
I1     2             3
I2     3             4
I3     4             5
I4     5             6

Boundary conditions would be V[0, j] = V[i, 0] = 0. The initial configuration of the table looks like:

Item    Detail        0  1  2  3  4  5
i = 0                 0  0  0  0  0  0
i = 1   w1=2, v1=3    0
i = 2   w2=3, v2=4    0
i = 3   w3=4, v3=5    0
i = 4   w4=5, v4=6    0

The final table would be:

Item    Detail        0  1  2  3  4  5
i = 0                 0  0  0  0  0  0
i = 1   w1=2, v1=3    0  0  3  3  3  3
i = 2   w2=3, v2=4    0  0  3  4  4  7
i = 3   w3=4, v3=5    0  0  3  4  5  7
i = 4   w4=5, v4=6    0  0  3  4  5  7
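As a check on the worked example, here is a sketch that fills the same table and then traces back the selected items, following the two algorithms above (the function names knapsack and trace, and the bitmask return value, are our choices):

```c
#include <assert.h>

#define N 4                                    /* number of items in the example */
#define M 5                                    /* knapsack capacity */

static const int w[N + 1] = {0, 2, 3, 4, 5};   /* weights, 1-indexed */
static const int v[N + 1] = {0, 3, 4, 5, 6};   /* profits, 1-indexed */
static int V[N + 1][M + 1];                    /* V[i][j]: best profit, first i items, capacity j */

/* Fill the table bottom-up and return the maximum profit V[N][M]. */
int knapsack(void) {
    for (int i = 0; i <= N; i++)
        for (int j = 0; j <= M; j++) {
            if (i == 0 || j == 0)
                V[i][j] = 0;                   /* boundary conditions */
            else if (w[i] > j)
                V[i][j] = V[i - 1][j];         /* item i does not fit */
            else {
                int with    = v[i] + V[i - 1][j - w[i]];
                int without = V[i - 1][j];
                V[i][j] = with > without ? with : without;
            }
        }
    return V[N][M];
}

/* Trace back the chosen items; returns a bitmask with bit i set
   if item i was selected. Must be called after knapsack(). */
int trace(void) {
    int chosen = 0, i = N, j = M;
    while (i > 0 && j > 0) {
        if (V[i][j] == V[i - 1][j]) {
            i--;                               /* item i was not taken */
        } else {
            chosen |= 1 << i;                  /* item i was taken */
            j -= w[i];
            i--;
        }
    }
    return chosen;
}
```

For this instance the maximum profit is 7, achieved by items I1 and I2 (weights 2 + 3 = 5, profits 3 + 4 = 7), matching the final table.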

Count Paths Problem


The problem is to count all the possible paths from the top left to the bottom right of an n × m
matrix, with the constraint that from each cell you can only move right or down.
Given a 2-D matrix of order n × m, find the number of ways to reach the cell with
coordinates (i, j) from the starting cell (0, 0), given that one can only move right or down
one step at a time.
Example: For a matrix of order 3 × 3, the number of ways to reach the bottom-right
corner is 6.
Example:
Suppose we have an n × m grid of towns; count all the possible paths from the top-left town to
the bottom-right town, where n = 4 and m = 4.
Solution:
Here is a simple algorithm for the dynamic programming solution.

▪ Initialize an array memo with the matrix’s dimensions n × m. This array will give us the final
count once we reach the bottom right.
▪ Fill memo[0][0] = 0.
▪ Fill the first row with 1, as each of its cells can be reached from only one direction (by going right).
▪ Fill the first column with 1 as well, as each of its cells can be reached from only one direction (by going
down).
▪ Iterate over all other rows and columns and use the formula memo[i][j] = memo[i-1][j] +
memo[i][j-1], as each cell can be reached from two directions (by going right or by going
down).
▪ Return memo[n-1][m-1].
        j=0  j=1  j=2  j=3   (m)
i=0      0    1    1    1
i=1      1    2    3    4
i=2      1    3    6   10
i=3      1    4   10   20
(n)

Dynamic Programming:
a. Bottom-Up Approach (Tabulation):

int count(int n, int m)
{
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            if (i == 0 && j == 0) memo[i][j] = 0;            /* starting cell */
            else if (i == 0 || j == 0) memo[i][j] = 1;       /* first row or column */
            else memo[i][j] = memo[i-1][j] + memo[i][j-1];   /* from above or from the left */
    return memo[n-1][m-1];
}
b. Top-Down Memoization:

/* memo must be initialized to -1 before the first call */
int count(int n, int m)
{
    if (n == 0 && m == 0) return 0;
    if (n == 0 || m == 0) return 1;
    if (memo[n][m] != -1) return memo[n][m];      /* already computed: look it up */
    memo[n][m] = count(n-1, m) + count(n, m-1);   /* solve once and cache */
    return memo[n][m];
}
Time Complexity:
Since we are building the memo matrix and iterating over all of its cells, the solution has
O(n × m) time complexity, which is far better than the plain recursive solution’s exponential
time complexity.
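The bottom-up version above, in self-contained form with the same memo[0][0] = 0 convention (the local memo array and the bound MAXD are our choices):

```c
#include <assert.h>

#define MAXD 16                      /* maximum grid dimension for this sketch */

/* Bottom-up path counting with the convention used above:
   memo[0][0] = 0, and the first row and column are filled with 1. */
int count_paths(int n, int m) {
    int memo[MAXD][MAXD];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            if (i == 0 && j == 0)
                memo[i][j] = 0;                           /* starting cell */
            else if (i == 0 || j == 0)
                memo[i][j] = 1;                           /* reachable from one direction only */
            else
                memo[i][j] = memo[i - 1][j] + memo[i][j - 1];
        }
    return memo[n - 1][m - 1];
}
```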

Longest common subsequence


Problem: Given 2 sequences, X = <x1………..xm> and Y = <y1………..yn>, find a
subsequence common to both whose length is longest. A subsequence doesn’t have to be
consecutive, but it has to be in order.
Brute-force algorithm:

For every subsequence of X = ⟨ x1, ..., xm ⟩, check whether it is a subsequence of Y = ⟨ y1,
..., yn ⟩, and record it if it is longer than the longest previously found.

• There are 2^m subsequences of X to check.
• For each subsequence, scan Y for the first letter; from there scan for the second letter,
etc., up to the n letters of Y.
• Therefore, Θ(n·2^m).

Step 1: Optimal substructure Notation:

• Xi = prefix ⟨ x1, ..., xi ⟩


• Yj = prefix ⟨ y1, ..., yj ⟩

Theorem: Let Z = ⟨ z1, ..., zk ⟩ be any LCS of X = ⟨ x1, ..., xm ⟩ and Y = ⟨ y1, ..., yn ⟩. Then

1. If xm = yn, then zk = xm = yn, and Zk-1 is an LCS of Xm-1 and Yn-1.


2. If xm ≠ yn, then zk ≠ xm ⇒ Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn ⇒ Z is an LCS of X and Yn-1.

Step 2: Recursive formulation

With c[i, j] denoting the length of an LCS of Xi and Yj, the recurrence is:

c[i, j] = 0                             if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1               if i, j > 0 and xi = yj
c[i, j] = max( c[i-1, j], c[i, j-1] )   if i, j > 0 and xi ≠ yj

Step 3: Compute Value of Optimal Solution to LCS

A recursive algorithm based on this formulation would have lots of repeated subproblems; for
example, on strings of length 4 and 3, the same prefix pairs are solved many times:
- Lots of repeated subproblems.
- Instead of recomputing them, store their solutions.

Dynamic programming avoids the redundant computations by storing the results in a table.
We use c[i, j] for the length of the LCS of prefixes Xi and Yj (so c[i, j] = 0 whenever i = 0 or j = 0).

This is a bottom-up solution: Indices i and j increase through the loops, and references
to c always involve either i-1 or j-1, so the needed subproblems have already been computed.

It is clearly Θ(mn); much better than Θ(n·2^m)!

Step 4: Construct an Optimal Solution to LCS

In the process of computing the value of the optimal solution we can also record
the choices that led to this solution. Step 4 is to add this latter record of choices and a way of
recovering the optimal solution at the end.
Table b[i, j] is updated above to remember whether each entry comes from

• an LCS of Xi-1 and Yj-1 (diagonal arrow), in which case the common
character xi = yj is included in the LCS;
• an LCS of Xi-1 and Yj (↑); or
• an LCS of Xi and Yj-1 (←).

We reconstruct the path by calling Print-LCS(b, X, n, m) and following the arrows, printing
out characters of X that correspond to the diagonal arrows (a Θ(n + m) traversal from the
lower right of the matrix to the origin):
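A sketch combining steps 3 and 4, computing the c and b tables and then following the arrows back from the lower right (the function name lcs and the 'd'/'u'/'l' arrow encoding are our choices):

```c
#include <assert.h>
#include <string.h>

#define MAXL 32

static int  c[MAXL][MAXL];   /* c[i][j] = LCS length of X[0..i) and Y[0..j) */
static char b[MAXL][MAXL];   /* arrow table: 'd' diagonal, 'u' up, 'l' left */

/* Compute the LCS of X and Y; write it into out and return its length. */
int lcs(const char *X, const char *Y, char *out) {
    int m = strlen(X), n = strlen(Y);
    for (int i = 0; i <= m; i++) c[i][0] = 0;   /* boundary: empty prefix of Y */
    for (int j = 0; j <= n; j++) c[0][j] = 0;   /* boundary: empty prefix of X */
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++) {
            if (X[i - 1] == Y[j - 1]) {
                c[i][j] = c[i - 1][j - 1] + 1;  /* extend the LCS by one character */
                b[i][j] = 'd';
            } else if (c[i - 1][j] >= c[i][j - 1]) {
                c[i][j] = c[i - 1][j];
                b[i][j] = 'u';
            } else {
                c[i][j] = c[i][j - 1];
                b[i][j] = 'l';
            }
        }
    /* Follow the arrows from the lower right back to the origin,
       collecting the characters on the diagonal steps. */
    int len = c[m][n], k = len, i = m, j = n;
    out[len] = '\0';
    while (i > 0 && j > 0) {
        if (b[i][j] == 'd') { out[--k] = X[i - 1]; i--; j--; }
        else if (b[i][j] == 'u') i--;
        else j--;
    }
    return len;
}
```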

Example of LCS:
What do "spanking" and "amputation" have in common?
Answer: "pain"
