Dynamic Programming
Dynamic Programming Algorithm
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up or top-down fashion
4. Construct an optimal solution from the computed information.
Knapsack Problem using Dynamic Programming Approach
Given n items with
weights: w1, w2, …, wn
values: v1, v2, …, vn
and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack.
Step-01: Draw a table T with (n+1) rows and (W+1) columns, and fill all cells of the 0th row and the 0th column with zeroes.
Step-02: Start filling the table row wise top to bottom from left to right.
T(i, j) = maximum value of the selected items if we can take items 1 to i under weight restriction j, computed as
T(i, j) = max { T(i-1, j), vi + T(i-1, j-wi) }   (the second option only if wi ≤ j).
Step-03:
After filling the table completely, the value in the last cell represents the maximum
possible value that can be put in the knapsack.
Step-04:
To identify the items that must be put in the knapsack to obtain the maximum
profit, consider the last column of the table and start scanning the entries from
bottom to top.
Problem-
For the given set of items and knapsack capacity W = 5 kg, find the optimal
solution for the 0/1 knapsack problem using the dynamic programming approach.

Item   Weight (kg)   Value
1      2             3
2      3             4
3      4             5
4      5             6
Step-02:
Finding T(1,1)-
T(1,1) = max { T(1-1 , 1) , 3 + T(1-1 , 1-2) }
T(1,1) = max { T(0,1) , 3 + T(0,-1) }
T(1,1) = T(0,1) { Ignore T(0,-1) }
T(1,1) = 0
Finding T(1,2)-
T(1,2) = max { T(1-1 , 2) , 3 + T(1-1 , 2-2) }
T(1,2) = max { T(0,2) , 3 + T(0,0) }
T(1,2) = max {0 , 3+0}
T(1,2) = 3
Considering the last column, start scanning the entries from bottom to top.
If an entry is encountered whose value is not same as the value which is
stored in the entry immediately above it, then mark the label of row of that
entry.
Following this, we mark the rows labelled "1" and "2".
Thus, the items that must be put in the knapsack to obtain the maximum value
7 are Item-1 and Item-2.
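The tabulation and trace-back described above can be sketched in Python (the function name `knapsack` is ours, not from the notes):

```python
# 0/1 knapsack by dynamic programming, using the example items
# (weights 2,3,4,5; values 3,4,5,6) and capacity W = 5.
def knapsack(weights, values, W):
    n = len(weights)
    # T[i][j]: best value using items 1..i under weight limit j
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            T[i][j] = T[i - 1][j]                  # skip item i
            if weights[i - 1] <= j:                # or take item i
                T[i][j] = max(T[i][j],
                              values[i - 1] + T[i - 1][j - weights[i - 1]])
    # Trace back: an entry that differs from the one above it means item i was taken
    chosen, j = [], W
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:
            chosen.append(i)
            j -= weights[i - 1]
    return T[n][W], sorted(chosen)

best, items = knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(best, items)   # 7 [1, 2]
```

The trace-back mirrors Step-04: scanning the last column from bottom to top and marking rows whose value differs from the entry immediately above.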
All-Pairs Shortest Path Problem (Floyd-Warshall Algorithm)
The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a
weighted graph. It produces a matrix giving the minimum distance from every node to
every other node in the graph.
If (i, j) is an edge in E, M[i][j] = weight(i, j)   // direct edge: value = weight of the edge
Else M[i][j] = infinity                             // no direct edge: value = ∞

for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if M[i][j] > M[i][k] + M[k][j]
                M[i][j] = M[i][k] + M[k][j]
Step-01: Remove all self-loops and parallel edges (keeping the edge with the lowest weight) from the
graph, if any.
Step-02:
Write the initial distance matrix representing the distance between every pair of vertices, as follows:
For diagonal elements (representing self-loops), value = 0
For vertices having a direct edge between them, value = weight of that edge
For vertices having no direct edges between them, value = ∞
Step-03: Update the distance matrix using the triple loop above, taking each vertex k = 1 to |V| in turn as an intermediate vertex; the final matrix gives the shortest distance between every pair of vertices.
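The pseudocode above translates almost line for line into Python; the sample graph below is an assumed illustration, not taken from the notes:

```python
# Floyd-Warshall: a direct translation of the pseudocode above (0-indexed vertices).
INF = float('inf')

def floyd_warshall(M):
    n = len(M)
    dist = [row[:] for row in M]       # work on a copy of the initial matrix
    for k in range(n):                 # intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Assumed 4-vertex example: 0 on the diagonal, INF where no direct edge exists
M = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
D = floyd_warshall(M)
print(D[0][2])   # 5  (path 0 -> 1 -> 2)
```

Note that the initial matrix already encodes Step-02: zero diagonal, edge weights for direct edges, and ∞ elsewhere.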
Optimal Binary Search Tree
A binary search tree is a special kind of binary tree. In a binary search tree, the
elements in the left and right subtrees of each node are respectively smaller
and greater than the element at that node.
Total number of possible Binary Search Trees with n different keys (countBST(n)) =
Catalan number Cn = (2n)! / ((n + 1)! * n!)
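As a quick check of the counting formula, the Catalan numbers can be computed directly:

```python
# Number of distinct BSTs on n different keys = Catalan number
# Cn = (2n)! / ((n + 1)! * n!)
from math import factorial

def count_bst(n):
    return factorial(2 * n) // (factorial(n + 1) * factorial(n))

print([count_bst(n) for n in range(1, 6)])   # [1, 2, 5, 14, 42]
```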
[Figure: for n = 2 keys a1 < a2 there are two possible BSTs, one rooted at a1 with a2 as its right child, and one rooted at a2 with a1 as its left child.]
Dynamic programming Approach
The idea is that one of the keys a1, …, an, say ak (1 ≤ k ≤ n), must be the root.
By the binary search rule, the left subtree of ak contains a1, …, ak-1 and the
right subtree of ak contains ak+1, …, an.
The table C[i, j] (rows 1 to n+1, columns 0 to n) is filled diagonally: C[i, i-1] = 0
and C[i, i] = pi for each i. The goal entry C[1, n] (top-right corner) holds the
minimum expected search cost, computed by the recurrence
C[i, j] = min { C[i, k-1] + C[k+1, j] : i ≤ k ≤ j } + (pi + … + pj).
Example: key A B C D
probability 0.1 0.2 0.4 0.3
The right table stores the tree roots, i.e., the values of k that give the minimum.
Main table C[i, j]:
       j=0   j=1   j=2   j=3   j=4
i=1     0    .1    .4    1.1   1.7
i=2           0    .2    .8    1.4
i=3                 0    .4    1.0
i=4                       0    .3
i=5                             0

Root table R[i, j]:
       j=1   j=2   j=3   j=4
i=1     1     2     3     3
i=2           2     3     3
i=3                 3     3
i=4                       4

Optimal BST: the root is C (k = 3), with B as its left child and D as its right
child, and A as the left child of B; the minimum expected search cost is C[1, 4] = 1.7.
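A short dynamic program generates both tables; this is a minimal sketch (the function name `optimal_bst` is ours) of the recurrence C[i, j] = min over i ≤ k ≤ j of (C[i, k-1] + C[k+1, j]) plus the probability sum p[i..j]:

```python
# Optimal BST cost table C and root table R for probabilities p (1-based indices).
def optimal_bst(p):
    n = len(p)
    # Tables sized n+2 so that C[i][i-1] = 0 and C[j+1][j] = 0 are valid lookups
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]
        R[i][i] = i
    for d in range(1, n):                  # interval length minus one
        for i in range(1, n - d + 1):
            j = i + d
            total = sum(p[i - 1:j])        # p_i + ... + p_j
            best, root = min((C[i][k - 1] + C[k + 1][j], k)
                             for k in range(i, j + 1))
            C[i][j] = best + total
            R[i][j] = root
    return C, R

C, R = optimal_bst([0.1, 0.2, 0.4, 0.3])
print(round(C[1][4], 2), R[1][4])   # 1.7 3  -> root is the 3rd key, C
```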
Practice examples:
• Example 1: key A B C D E, probability 0.25 0.2 0.05 0.2 0.3
• Example 2: keys k1 k2 k3, probability p1 = 0.3, p2 = 0.1, p3 = 0.6
Travelling Salesman Problem
Example: distance matrix d for 4 cities:

       1    2    3    4
1      0   10   15   20
2      5    0    9   10
3      6   13    0   12
4      8    8    9    0
When S = Φ:
Cost(2,Φ,1)=d(2,1)=5
Cost(3,Φ,1)=d(3,1)=6
Cost(4,Φ,1)=d(4,1)=8
When |S| = 1, using Cost(i, S, 1) = min over j in S { d[i, j] + Cost(j, S - {j}, 1) }:
Cost(2,{3},1)=d[2,3]+Cost(3,Φ,1)=9+6=15
Cost(2,{4},1)=d[2,4]+Cost(4,Φ,1)=10+8=18
Cost(3,{2},1)=d[3,2]+Cost(2,Φ,1)=13+5=18
Cost(3,{4},1)=d[3,4]+Cost(4,Φ,1)=12+8=20
Cost(4,{2},1)=d[4,2]+Cost(2,Φ,1)=8+5=13
Cost(4,{3},1)=d[4,3]+Cost(3,Φ,1)=9+6=15
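The calculations above can be carried to completion by the Held-Karp dynamic program; a minimal sketch for the 4-city matrix:

```python
# Held-Karp DP for the 4-city TSP example (0-indexed cities; city 0 is city 1).
from itertools import combinations

d = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]

n = len(d)
# Cost[(i, S)]: min cost of travelling from city i through every city in S,
# ending back at city 0 (corresponds to Cost(i, S, 1) in the notes)
Cost = {(i, frozenset()): d[i][0] for i in range(1, n)}
for size in range(1, n - 1):
    for S in combinations(range(1, n), size):
        S = frozenset(S)
        for i in range(1, n):
            if i in S:
                continue
            Cost[(i, S)] = min(d[i][j] + Cost[(j, S - {j})] for j in S)

full = frozenset(range(1, n))
best = min(d[0][j] + Cost[(j, full - {j})] for j in full)
print(best)   # 35
```

For this matrix the minimum tour cost is 35, achieved by the tour 1 - 2 - 4 - 3 - 1.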
Algorithm for the Travelling Salesman Problem
Step 1: Let d[i, j] denote the distance between cities i and j. The function
C[x, V - {x}] is the cost of the path starting from city x, where V is the set of
cities/vertices in the given graph. The aim of TSP is to minimize this cost
function.
Step 2: Assume the graph contains n vertices V1, V2, ..., Vn. TSP
finds a path covering all vertices exactly once while minimizing the overall
travelling distance.
Solution:
Let us start our tour from city 1. (This second example uses a different 5-city
graph; its distances d[i, j] appear in the calculations below.)
Step 1: Initially, we find the distance between city 1 and each of the cities {2, 3, 4, 5}
without visiting any intermediate city.
Cost(x, S, z) represents the minimum cost of travelling from city x to city z while
visiting every city in the set S exactly once.
Cost(2, Φ, 1) = d[2, 1] = 24
Cost(3, Φ, 1) = d[3, 1] = 11
Cost(4, Φ , 1) = d[4, 1] = 10
Cost(5, Φ , 1) = d[5, 1] = 9
Steps 2 and 3: Next, we find the minimum distances obtained by visiting one and then
two intermediate cities. The two-intermediate-city results are shown below; the
one-intermediate values they use follow directly from Step 1 (for example,
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 8 + 10 = 18).
Cost(2, {3, 4}, 1) = min { d[2, 3] + Cost(3, {4}, 1), d[2, 4] + Cost(4, {3}, 1)]}
= min { [2 + 18], [5 + 35] } = min{20, 40} = 20
Cost(2, {4, 5}, 1) = min { d[2, 4] + Cost(4, {5}, 1), d[2, 5] + Cost(5, {4}, 1)]}
= min { [5 + 15], [11 + 21] } = min{20, 32} = 20
Cost(2, {3, 5}, 1) = min { d[2, 3] + Cost(3, {5}, 1), d[2, 5] + Cost(5, {3}, 1) }
= min { [2 + 16], [11 + 19] } = min{18, 30} = 18
Cost(3, {2, 4}, 1) = min { d[3, 2] + Cost(2, {4}, 1), d[3, 4] + Cost(4, {2}, 1)]}
= min { [12 + 15], [8 + 47] } = min{27, 55} = 27
Cost(3, {4, 5}, 1) = min { d[3, 4] + Cost(4, {5}, 1), d[3, 5] + Cost(5, {4}, 1)]}
= min { [8 + 15], [7 + 21] } = min{23, 28} = 23
Cost(3, {2, 5}, 1) = min { d[3, 2] + Cost(2, {5}, 1), d[3, 5] + Cost(5, {2}, 1)]}
= min { [12 + 20], [7 + 28] } = min{32, 35} = 32
Cost(4, {2, 3}, 1) = min{ d[4, 2] + Cost(2, {3}, 1), d[4, 3] + Cost(3, {2}, 1)]}
= min { [23 + 13], [24 + 36] } = min{36, 60} = 36
Cost(4, {3, 5}, 1) = min{ d[4, 3] + Cost(3, {5}, 1), d[4, 5] + Cost(5, {3}, 1)]}
= min { [24 + 16], [6 + 19] } = min{40, 25} = 25
Cost(4, {2, 5}, 1) = min{ d[4, 2] + Cost(2, {5}, 1), d[4, 5] + Cost(5, {2}, 1)]}
= min { [23 + 20], [6 + 28] } = min{43, 34} = 34
Cost(5, {2, 3}, 1) = min{ d[5, 2] + Cost(2, {3}, 1), d[5, 3] + Cost(3, {2}, 1)]}
= min { [4 + 13], [8 + 36] } = min{17, 44} = 17
Cost(5, {3, 4}, 1) = min{ d[5, 3] + Cost(3, {4}, 1), d[5, 4] + Cost(4, {3}, 1)]}
= min { [8 + 18], [11 + 35] } = min{26, 46} = 26
Cost(5, {2, 4}, 1) = min{ d[5, 2] + Cost(2, {4}, 1), d[5, 4] + Cost(4, {2}, 1)]}
= min { [4 + 15], [11 + 47] } = min{19, 58} = 19
Step 4: In this step, we find the minimum distance by visiting 3 cities as
intermediate cities.
Cost(2, {3, 4, 5}, 1) = min { d[2, 3] + Cost(3, {4, 5}, 1),
                              d[2, 4] + Cost(4, {3, 5}, 1),
                              d[2, 5] + Cost(5, {3, 4}, 1) }
= min { 2 + 23, 5 + 25, 11 + 26 }
= min{25, 30, 37} = 25
Since all cities have been visited, we return to city 1. Hence the optimum tour is
1 - 4 - 5 - 2 - 3 - 1.
Greedy Algorithms vs Dynamic Programming