DAA Unit-3 Dynamic Programming R20
Dynamic Programming is a design principle used to solve problems with overlapping
sub-problems.
➢ It is used when the solution to a problem can be viewed as the result of a sequence of
decisions. It avoids duplicate calculation in many cases by keeping a table of known
results, which is filled up as sub-instances are solved.
➢ Dynamic Programming follows a bottom-up technique: we usually start with the
smallest, and hence simplest, sub-instances. By combining their solutions, we obtain
the answers to sub-instances of increasing size, until finally we arrive at the solution
of the original instance.
➢ Dynamic Programming differs from the Greedy method in that the Greedy method
generates only one decision sequence, whereas Dynamic Programming may examine
many. However, sequences containing sub-optimal sub-sequences cannot be optimal
and so are not generated.
➢ Divide and conquer is a top-down method.
➢ When a problem is solved by divide and conquer, we immediately attack the complete
instance, which we then divide into smaller and smaller sub-instances as the algorithm
progresses.
➢ The difference between Dynamic Programming and Divide and Conquer is that the sub-
problems in Divide and Conquer are considered to be disjoint and distinct, whereas in
Dynamic Programming they overlap.
Principle of Optimality
➢ An optimal sequence of decisions has the property that whatever the initial state and
decisions are, the remaining decisions must constitute an optimal decision sequence
with regard to the state resulting from the first decision.
➢ i.e., the principle of optimality is satisfied when, whenever an optimal solution is
found for a problem, optimal solutions are also found for its sub-problems.
General Method
ACET 1 Ch Murty
All pairs Shortest Path
❑ Let G = (V, E) be a directed graph with n vertices. Then cost(i, i) = 0, cost(i, j) is the
length / cost of edge <i, j> if <i, j> ∈ E(G), and cost(i, j) = ∞ if i ≠ j and <i, j> ∉ E(G).
The graph may contain edges with negative cost, but cycles of negative total cost are not
allowed. The all-pairs shortest path problem is to find a matrix A such that A[i][j] is the
length of the shortest path from i to j.
Consider a shortest path from i to j, i ≠ j. The path originates at i, goes through possibly
many vertices, and terminates at j. Assume the path contains no cycles; if there is a cycle, we
can remove it without increasing the cost, because there are no cycles of negative cost.
The algorithm makes n passes over A. Let A0, A1, ..., An represent the matrix on each pass.
Let Ak-1[i, j] represent the length of a shortest path from i to j passing through no
intermediate vertex greater than k−1; this is the result after k−1 iterations. Hence the kth
iteration explores whether vertex k lies on an optimal path. A shortest path from i to j that
passes through no vertex greater than k either goes through k or does not.
If it does not, Ak[i, j] = Ak-1[i, j].
If it does, Ak[i, j] = Ak-1[i, k] + Ak-1[k, j].
Combining the two cases:
Ak[i, j] = min { Ak-1[i, j], Ak-1[i, k] + Ak-1[k, j] }, k ≥ 1, with A0[i, j] = cost[i, j].
Consider the directed graph given below.
Algorithm for All Pairs Shortest path
Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A[i, j] is the cost of a shortest path from vertex
// i to vertex j. cost[i, i] = 0.0 for 1 ≤ i ≤ n.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j]; // Copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
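The algorithm translates directly into Python as follows. The 3-vertex cost matrix below is an assumed example (the notes' figure is not reproduced here), with float('inf') marking absent edges:

```python
INF = float('inf')

def all_paths(cost):
    """Floyd-Warshall all-pairs shortest paths.
    cost: n x n matrix with cost[i][i] == 0 and no negative cycles.
    Returns matrix A where A[i][j] is the shortest i-to-j distance."""
    n = len(cost)
    A = [row[:] for row in cost]          # copy cost into A
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Assumed 3-vertex directed graph: edge 3->2 is absent (INF)
cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
print(all_paths(cost))  # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

Note that A can safely be updated in place: on pass k, A[i][k] and A[k][j] already equal their Ak values, since a shortest path through k cannot use k as an interior vertex twice.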
0/1 KNAPSACK
The 0/1 knapsack problem is similar to the knapsack problem solved by the Greedy
method, except that the xi's are restricted to the values 0 or 1. The decisions on the xi are
made in the order xn, xn-1, ..., x1. Following a decision on xn, we may be in one of two
possible states: the capacity remaining in the knapsack is m and no profit has accrued, or
the capacity remaining is m − wn and a profit of pn has accrued. Let fi(y) be the value of an
optimal solution to KNAP(1, i, y); the 0/1 knapsack problem is then KNAP(1, n, m).
The principle of optimality holds for the 0/1 knapsack problem: the remaining decisions
xn-1, ..., x1 must be optimal with respect to the problem state resulting from the decision on
xn, for otherwise the overall decision sequence cannot be optimal.
The dynamic programming solution can be represented as sets of tuples. Each tuple is a
pair (p, w), where p is the profit earned and w is the weight used by some subset of the
first i objects. A solution to the knapsack problem is obtained by making a sequence of
decisions on the variables x1, x2, ..., xn.
S0 = {(0, 0)}
Si1 = {(p, w) | (p − pi, w − wi) ∈ Si−1}
i.e., Si1 is obtained by adding (pi, wi) to every pair in Si−1. Si is the set of all pairs (p, w)
for fi, including (0, 0), and fi is completely defined by these pairs. Si is obtained by merging
Si−1 and Si1. This merge corresponds to taking the maximum of the two functions fi−1(x)
and fi−1(x − wi) + pi in the objective function of the 0/1 knapsack problem.
If one of Si−1 and Si1 has a pair (pj, wj) and the other has a pair (pk, wk) with pj ≤ pk
while wj ≥ wk, then the pair (pj, wj) is discarded; this rule is called the purging or dominance
rule. When generating Si, all the pairs (p, w) with w > m may also be purged.
To trace the solution, we work backwards: if the chosen pair belongs to Si but not to Si−1,
then xi = 1, and we subtract (pi, wi) from the pair (the result belongs to Si−1) and continue;
otherwise xi = 0.
Example: n = 3, m = 6, (p1, w1) = (1, 2), (p2, w2) = (2, 3), (p3, w3) = (4, 3).
S0 = {(0, 0)}
Merging operation: Si = Si−1 ∪ Si1
S11 = {(1, 2)}
S1 = S0 ∪ S11 = {(0, 0)} ∪ {(1, 2)} = {(0, 0), (1, 2)}
S21 = {(2, 3), (3, 5)}
S2 = S1 ∪ S21 = {(0, 0), (1, 2)} ∪ {(2, 3), (3, 5)}
= {(0, 0), (1, 2), (2, 3), (3, 5)} (no deletion)
S31 = {(4, 3), (5, 5), (6, 6), (7, 8)}
S3 = S2 ∪ S31 = {(0, 0), (1, 2), (2, 3), (3, 5)} ∪ {(4, 3), (5, 5), (6, 6), (7, 8)}
= {(0, 0), (1, 2), (2, 3), (3, 5), (4, 3), (5, 5), (6, 6), (7, 8)}
Purging (7, 8), since its weight 8 > m = 6:
S3 = {(0, 0), (1, 2), (2, 3), (3, 5), (4, 3), (5, 5), (6, 6)}
Purging (3, 5), which is dominated by (4, 3):
S3 = {(0, 0), (1, 2), (2, 3), (4, 3), (5, 5), (6, 6)}
Purging (2, 3), which is dominated by (4, 3):
S3 = {(0, 0), (1, 2), (4, 3), (5, 5), (6, 6)}
To find the optimal solution
The searching process is done on the last tuple of Sn. In our problem the capacity of the
knapsack is 6, so we take the last tuple (6, 6) of S3.
(6, 6) ∈ S3 but (6, 6) ∉ S2, hence x3 = 1. Subtract (p3, w3): (6, 6) − (4, 3) = (2, 3).
(2, 3) ∈ S2 but (2, 3) ∉ S1, hence x2 = 1. Subtract (p2, w2): (2, 3) − (2, 3) = (0, 0).
(0, 0) ∈ S1 and (0, 0) ∈ S0, hence x1 = 0.
The optimal solution is (x1, x2, x3) = (0, 1, 1) with profit 6 and weight 6.
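The whole merge-and-purge construction, including the dominance rule and the capacity purge, can be sketched in Python using the data of the worked example (items (1, 2), (2, 3), (4, 3) and m = 6):

```python
def knapsack_sets(items, m):
    """items: list of (p_i, w_i) pairs; m: knapsack capacity.
    Returns [S0, S1, ..., Sn] built by the merge-and-purge method."""
    history = [[(0, 0)]]                  # S0
    for p, w in items:
        S = history[-1]
        # Si1: add (p_i, w_i) to every pair; pairs over capacity are purged
        S1 = [(sp + p, sw + w) for sp, sw in S if sw + w <= m]
        # sort by weight, highest profit first on ties
        merged = sorted(set(S) | set(S1), key=lambda t: (t[1], -t[0]))
        purged, best = [], -1
        for profit, weight in merged:
            if profit > best:             # dominance (purging) rule
                purged.append((profit, weight))
                best = profit
        history.append(purged)
    return history

# Worked example from the notes: (p, w) = (1, 2), (2, 3), (4, 3), m = 6
sets = knapsack_sets([(1, 2), (2, 3), (4, 3)], 6)
print(sets[3])  # [(0, 0), (1, 2), (4, 3), (5, 5), (6, 6)]
```

After sorting by weight, a pair survives only if its profit exceeds the profit of every lighter pair, which is exactly the dominance rule.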
Multistage Graph
• A Dynamic Programming formulation for a k-stage graph problem is obtained by first
noticing that every s to t path is the result of the sequence of k-2 decisions.
• The ith decision involves determining which vertex in Vi+1 , 1 <= i <= k-2, is to be on
the path.
• Let p(i, j) be a minimum cost path from vertex j in Vi to vertex t. Let cost(i, j) be the
cost of this path.
• Then cost(i, j) = min { c(j, l) + cost(i+1, l) }, where the minimum is taken over all
l ∈ Vi+1 such that <j, l> ∈ E.
Stage 4: In stage 4 the nodes are 9, 10, 11. The costs for these nodes (the initial cases)
are the direct costs from these nodes (the stage next to the destination) to the
destination node. They are
cost(4, 9) = 3    cost(4, 10) = 1    cost(4, 11) = 4
Stage 3: In stage 3 the nodes are 6, 7, 8. The costs from stage 3 to stage 4 are
cost(3, 6) = min { 3 + cost(4, 9) = 3 + 3 = 6, 4 + cost(4, 10) = 4 + 1 = 5 } = 5
cost(3, 7) = min { 4 + cost(4, 9) = 4 + 3 = 7, 7 + cost(4, 10) = 7 + 1 = 8,
                   6 + cost(4, 11) = 6 + 4 = 10 } = 7
cost(3, 8) = min { 8 + cost(4, 10) = 8 + 1 = 9, 7 + cost(4, 11) = 7 + 4 = 11 } = 9
Stage 2: In stage 2 the nodes are 2, 3, 4, 5. The costs from stage 2 to stage 3 are
cost(2, 2) = min { 4 + cost(3, 6) = 4 + 5 = 9, 3 + cost(3, 7) = 3 + 7 = 10 } = 9
cost(2, 3) = min { 5 + cost(3, 6) = 5 + 5 = 10, 4 + cost(3, 7) = 4 + 7 = 11 } = 10
cost(2, 4) = min { 3 + cost(3, 7) = 3 + 7 = 10, 4 + cost(3, 8) = 4 + 9 = 13 } = 10
cost(2, 5) = min { 2 + cost(3, 7) = 2 + 7 = 9, 2 + cost(3, 8) = 2 + 9 = 11 } = 9
Stage 1: The optimal choice at stage 1 is given as follows:
cost(1, 1) = min { 3 + cost(2, 2) = 3 + 9 = 12, 8 + cost(2, 3) = 8 + 10 = 18,
                   4 + cost(2, 4) = 4 + 10 = 14, 5 + cost(2, 5) = 5 + 9 = 14 } = 12
The minimum cost from source to destination is therefore 12, obtained by following the
minimizing choice at each stage: 1 → 2 → 6 → 10 → destination.
The formal algorithm is given as follows:
Algorithm FGraph(G, n)
// Vertex 1 is the source and vertex n the destination;
// c[j, r] is the cost of edge <j, r>; d[j] records the decision at j.
Begin
    cost[n] = 0
    for j = n-1 down to 1 do
        Choose a vertex r such that <j, r> is an edge and c[j, r] + cost[r] is minimum
        cost[j] = c[j, r] + cost[r]
        d[j] = r        // record the decision made at vertex j
    end for
    return cost[1]
End
The path recovery is done as follows:
Algorithm Path(d, k, n)
begin
    path[1] = 1
    path[k] = n
    for j = 2 to k-1 do
        path[j] = d[path[j-1]]
    end for
end
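The backward computation and path recovery above can be sketched together in Python. The edge costs below are those read off the stage-by-stage computation; since the figure itself is not reproduced in the notes, the vertex numbering (destination = 12) is an assumption consistent with those computations:

```python
import math

def fgraph(n, edges):
    """Multistage-graph DP, computed backwards from the sink.
    n: number of vertices (1 = source, n = sink); edges: {(j, r): cost}.
    Returns (minimum cost, one minimum-cost path from 1 to n)."""
    cost = [math.inf] * (n + 1)
    d = [0] * (n + 1)                     # decision recorded at each vertex
    cost[n] = 0
    for j in range(n - 1, 0, -1):         # vertices in decreasing order
        for (a, r), c in edges.items():
            if a == j and c + cost[r] < cost[j]:
                cost[j] = c + cost[r]
                d[j] = r
    path = [1]                            # recover the path from the decisions
    while path[-1] != n:
        path.append(d[path[-1]])
    return cost[1], path

# Edge costs from the worked example (5-stage graph, 12 vertices assumed)
edges = {(1, 2): 3, (1, 3): 8, (1, 4): 4, (1, 5): 5,
         (2, 6): 4, (2, 7): 3, (3, 6): 5, (3, 7): 4,
         (4, 7): 3, (4, 8): 4, (5, 7): 2, (5, 8): 2,
         (6, 9): 3, (6, 10): 4, (7, 9): 4, (7, 10): 7, (7, 11): 6,
         (8, 10): 8, (8, 11): 7,
         (9, 12): 3, (10, 12): 1, (11, 12): 4}
print(fgraph(12, edges))  # (12, [1, 2, 6, 10, 12])
```

The scan over all edges for each vertex keeps the sketch short; a real implementation would store an adjacency list so each edge is examined once.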
TRAVELLING SALESMAN PROBLEM
➢ Let G = (V, E) be a directed graph with edge costs cij defined such that cij > 0 for all i
and j, and cij = ∞ if <i, j> ∉ E. Let |V| = n and assume n > 1.
➢ The Travelling salesman problem is to find a tour of minimum cost.
➢ A tour of G is a directed simple cycle that includes every vertex in V.
➢ The cost of the tour is the sum of the costs of the edges on the tour.
➢ The tour is the shortest cycle that starts and ends at the same vertex, taken to be vertex 1.
➢ We know that the tour of the graph starts and ends at vertex 1. Every tour
consists of an edge <1, k> for some k ∈ V − {1} and a path from k to 1. The path from
k to 1 goes through each vertex in V − {1, k} exactly once. If the tour is optimal, the
path from k to 1 must be a shortest k-to-1 path going through all vertices in V − {1, k}.
Hence the principle of optimality holds.
Let g(i, S) be the length of a shortest path starting at i, going through all vertices in S,
and ending at 1. The function g(1, V − {1}) is the length of an optimal salesman tour.
From the principle of optimality,
g(1, V − {1}) = min { c1k + g(k, V − {1, k}) }, 2 ≤ k ≤ n
and, in general,
g(i, S) = min { cij + g(j, S − {j}) }, j ∈ S        ... (1)
with the base case g(i, ∅) = ci1.
Use eq. (1) to get g(i, S) for |S| = 1. Then find g(i, S) with |S| = 2, and so on.
APPLICATION:
1. Suppose we have to route a postal van to pick up mail from mail boxes located at
n different sites.
2. An (n+1)-vertex graph can be used to represent the situation.
3. One vertex represents the post office from which the postal van starts and to which it
returns.
4. Edge <i, j> is assigned a cost equal to the distance from site i to site j.
5. The route taken by the postal van is a tour, and we are to find a tour of minimum
length.
6. Every tour consists of an edge <1, k> for some k ∈ V − {1} and a path from vertex k to
vertex 1.
7. The path from vertex k to vertex 1 goes through each vertex in V − {1, k} exactly once.
8. The function used to find the tour length is g(1, V − {1}) = min { c1k + g(k, V − {1, k}) }.
9. g(i, S) is the length of a shortest path starting at vertex i, going through all vertices in
S, and terminating at vertex 1.
Example:
The tour is 1 → 4 → 3 → 2 → 1, with cost 3 + 5 + 6 + 5 = 19.
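Recurrence (1) can be evaluated bottom-up over subset sizes (the Held-Karp method). The sketch below uses 0-based vertices, so vertex 0 plays the role of vertex 1; the 4-vertex cost matrix is an assumed example, not the one from the notes' (missing) figure, so its optimal tour length of 35 belongs to this matrix, not to the 19-cost tour above:

```python
from itertools import combinations

def tsp(c):
    """Held-Karp DP: g(i, S) = min over j in S of c[i][j] + g(j, S - {j}),
    with g(i, {}) = c[i][0]. The tour starts and ends at vertex 0."""
    n = len(c)
    g = {}
    for i in range(1, n):
        g[(i, frozenset())] = c[i][0]          # base case: return to vertex 0
    for size in range(1, n - 1):               # subsets of increasing size
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for i in range(1, n):
                if i in fs:
                    continue
                g[(i, fs)] = min(c[i][j] + g[(j, fs - {j})] for j in fs)
    full = frozenset(range(1, n))
    return min(c[0][k] + g[(k, full - {k})] for k in full)

# Assumed 4-vertex cost matrix (0 on the diagonal)
c = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(tsp(c))  # 35
```

This runs in O(n^2 * 2^n) time, which is far better than checking all (n−1)! tours, though still exponential.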
Optimal Binary Search Tree
Consider a fixed set of words and their probabilities. The problem is to arrange these words
in a binary search tree so that the expected search cost is minimized. Let S be the set of
words S = {if, for, int, while, do}, and consider two binary search trees (BSTs) for the given
set S, under the assumptions that each word has the same probability and that there are no
unsuccessful searches.
We require 4 comparisons to find an identifier in the worst case in fig (a), but in fig (b)
only 3 comparisons. On average the two trees need 12/5 and 11/5 comparisons, respectively.
In general, we can consider different words with different frequencies (probabilities),
and unsuccessful searches as well.
Let the given set of words be {a1, a2, a3, ..., an} with a1 < a2 < ... < an.
Let p(i) be the probability of a search for ai, and q(i) the probability of an unsuccessful
search ending between ai and ai+1, so clearly
Σ p(i) + Σ q(i) = 1, where the first sum runs over 1 ≤ i ≤ n and the second over 0 ≤ i ≤ n.
So let us construct an optimal binary search tree.
To obtain a cost function for the BST, add a node in the place of every empty subtree.
These nodes are called external nodes.
If the BST represents n identifiers, then there will be exactly n internal nodes and n + 1
external nodes. Every internal node represents a point where a successful search may
terminate, and every external node represents a point where an unsuccessful search may
terminate. If a successful search terminates at an internal node at level L, then L
comparisons are required, so the expected cost contributed by the internal node for ai is
p(i) · level(ai). Unsuccessful searches terminate at external nodes: the words not in the BST
can be partitioned into n + 1 equivalence classes E0, ..., En, where E0 contains the words
less than all identifiers in the BST and En contains the words greater than all identifiers in
the BST; a search for a word in Ei terminates at external node Ei, after level(Ei) − 1
comparisons. The expected cost of the tree is therefore
Σ_{i=1}^{n} p(i) · level(ai) + Σ_{i=0}^{n} q(i) · (level(Ei) − 1)
n identifiers: a1 < a2 < a3 < ... < an; pi, 1 ≤ i ≤ n, is the probability that ai is searched
(successful search). n + 1 external nodes: E0, E1, E2, ..., En; qi, 0 ≤ i ≤ n, is the probability
that Ei is searched (unsuccessful search), where ai < Ei < ai+1 (a0 = −∞, an+1 = +∞).
C(0, n) = min_{1≤k≤n} { p_k + [q_0 + Σ_{i=1}^{k−1} (p_i + q_i) + C(0, k−1)]
                              + [q_k + Σ_{i=k+1}^{n} (p_i + q_i) + C(k, n)] }
In general, for 0 ≤ i < j ≤ n:
C(i, j) = min_{i+1≤k≤j} { p_k + [q_i + Σ_{l=i+1}^{k−1} (p_l + q_l) + C(i, k−1)]
                                + [q_k + Σ_{l=k+1}^{j} (p_l + q_l) + C(k, j)] }
Writing w(i, j) = q_i + Σ_{l=i+1}^{j} (p_l + q_l), the recurrence simplifies to
C(i, j) = min_{i+1≤k≤j} { C(i, k−1) + C(k, j) } + w(i, j), with C(i, i) = 0 and w(i, i) = q_i.
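The C(i, j) recurrence can be evaluated bottom-up over increasing interval lengths. The sketch below uses the w(i, j) formulation; the probability values (integer-scaled) are an assumed 4-key instance, not one taken from these notes:

```python
def obst(p, q):
    """Optimal BST cost.
    p[1..n]: success probabilities (p[0] unused); q[0..n]: failure probabilities.
    C(i, j) = min over i < k <= j of { C(i, k-1) + C(k, j) } + w(i, j),
    where w(i, j) = q[i] + sum of (p[l] + q[l]) for l = i+1 .. j.
    Returns C(0, n)."""
    n = len(q) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                     # w(i, i) = q_i, C(i, i) = 0
    for length in range(1, n + 1):         # intervals of increasing size
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            C[i][j] = w[i][j] + min(C[i][k - 1] + C[k][j]
                                    for k in range(i + 1, j + 1))
    return C[0][n]

# Assumed instance: n = 4 keys, probabilities scaled by 16
p = [0, 3, 3, 1, 1]
q = [2, 3, 1, 1, 1]
print(obst(p, q))  # 32
```

Tracking the minimizing k for each (i, j) in a root table r[i][j] would additionally let the tree itself be reconstructed; Knuth's observation that r[i][j-1] ≤ r[i][j] ≤ r[i+1][j] reduces the total work from O(n^3) to O(n^2).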