DAA Unit-3 Dynamic Programming R20


Dynamic Programming (DP)

Dynamic Programming is a design principle used to solve problems with overlapping
sub-problems.

➢ It is used when the solution to a problem can be viewed as the result of a sequence of
decisions. It avoids duplicate calculation in many cases by keeping a table of known
results, which fills up as sub-instances are solved.
➢ In Dynamic Programming we usually start with the smallest, and hence the simplest,
sub-instances. By combining their solutions, we obtain the answers to sub-instances of
increasing size, until finally we arrive at the solution of the original instance.
➢ It thus follows a bottom-up technique: start with the smallest and hence simplest sub-
instances, then combine their solutions to obtain answers to sub-instances of bigger
size, until we arrive at the solution for the original instance.
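As a minimal illustration of this bottom-up idea (an illustrative example, not from the notes), Fibonacci numbers have overlapping sub-problems; a table of known results is seeded with the smallest sub-instances and filled upward:

```python
# Hypothetical illustration: bottom-up DP for Fibonacci numbers.
def fib(n):
    # Table of known results, seeded with the smallest sub-instances.
    table = [0, 1]
    for i in range(2, n + 1):
        # Combine solutions of smaller sub-instances into a bigger one.
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Each value is computed exactly once, whereas the naive recursive version recomputes the same sub-instances many times.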

How DP differs from Greedy and Divide & Conquer

➢ Dynamic programming differs from the Greedy method because the Greedy method
generates only one decision sequence, whereas dynamic programming may generate
many. However, sequences containing sub-optimal sub-sequences cannot be optimal
and so are not generated.
➢ Divide and conquer is a top-down method: when a problem is solved by divide and
conquer, we immediately attack the complete instance, which we then divide into
smaller and smaller sub-instances as the algorithm progresses.
➢ The difference between Dynamic Programming and Divide and Conquer is that the
sub-problems in Divide and Conquer are considered to be disjoint and distinct,
whereas in Dynamic Programming they overlap.

Principle of Optimality

➢ An optimal sequence of decisions has the property that whatever the initial state and
decisions are, the remaining decisions must constitute an optimal decision sequence
with regard to the state resulting from the first decision.
➢ i.e. the principle of optimality is satisfied when, in an optimal solution found for a
problem, optimal solutions are also found for its sub-problems.

General Method

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution in terms of optimal solutions to
sub-problems.
3. Compute the value of an optimal solution, typically bottom-up, storing the results.
4. Construct the final optimal solution from the computed information.

ACET 1 Ch Murty
All pairs Shortest Path

Let G = (V, E) be a directed graph with n vertices

Let cost be a cost adjacency matrix for G such that

❑ cost(i, i) = 0, 1 ≤ i ≤ n,

❑ cost(i, j) is the length / cost of edge <i, j> if <i, j> ∈ E(G), and

❑ cost(i, j) = ∞ if i ≠ j and <i, j> ∉ E(G)

The graph allows edges with negative cost value, but negative valued cycles are not
allowed. All pairs shortest path problem is to find a matrix A such that A[i][j] is the length of
the shortest path from i to j.

Consider a shortest path from i to j, i ≠ j. The path originates at i, goes through possibly many
vertices, and terminates at j. Assume that there are no cycles on the path; if there is a cycle
we can remove it without increasing the cost, because there are no cycles of negative cost.

Initially we set A[i][j] = c[i][j].

The algorithm makes n passes over A. Let A0, A1, ..., An represent the matrix on each pass.

Let Ak-1[i, j] represent the length of the shortest path from i to j passing through no
intermediate vertex greater than k-1. This is the result after k-1 iterations. Hence the kth
iteration explores whether vertex k lies on an optimal path. A shortest path from i to j
passing through no vertex greater than k either goes through k or does not.

If it does,

Ak[i, j] = Ak-1[i, k] + Ak-1[k, j]

If it does not, then no intermediate vertex has index greater than k-1, and

Ak[i, j] = Ak-1[i, j]

Combining these two conditions we get

Ak[i, j] = min {Ak-1[i, j], Ak-1[i, k] + Ak-1[k, j]}

Consider the directed graph given below.

Cost adjacency matrix for the graph is as given below

Copy the cost values to the matrix A. So we have A0 as

Matrix A after each iteration is as given below.

Algorithm for All Pairs Shortest path
Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A[i, j] is the cost of a shortest path from vertex
// i to vertex j. cost[i, i] = 0.0 for 1 ≤ i ≤ n.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j]; // Copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
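The algorithm above translates directly to Python. This is a sketch; since the figure for the example graph is not reproduced here, a small 3-vertex cost matrix is assumed for the usage example.

```python
INF = float('inf')

def all_paths(cost):
    """Floyd-Warshall: A[i][j] becomes the shortest i-to-j path length."""
    n = len(cost)
    A = [row[:] for row in cost]          # copy cost into A
    for k in range(n):                    # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Assumed 3-vertex example (0-indexed vertices):
cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
```

Here all_paths(cost) yields [[0, 4, 6], [5, 0, 2], [3, 7, 0]]: for instance, the 0-to-2 path improves from 11 to 6 by going through vertex 1.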

0/1 KNAPSACK
The 0/1 knapsack problem is similar to the knapsack problem solved by the Greedy method,
except that the xi's are restricted to the values 0 or 1. The decisions on the xi are made in
the order xn, xn-1, ..., x1. Following a decision on xn, we may be in one of two possible
states: the capacity remaining in the knapsack is m and no profit has accrued, or the capacity
remaining is m - wn and a profit of pn has accrued. Let fi(y) be the value of an optimal
solution to KNAP(1, i, y). We can state the 0/1 knapsack problem as:

maximize Σ pi xi over 1 ≤ i ≤ n

subject to the condition that Σ wi xi ≤ m (capacity) and xi = 0 or 1 for 1 ≤ i ≤ n

The 0/1 knapsack problem is then KNAP(1, n, m). The principle of optimality holds in the
dynamic programming solution of the 0/1 knapsack problem. The dynamic programming
technique represents the solution as sets of tuples. Each tuple (pi, wi) records the profit
earned on object i and the weight of object i. A solution to the knapsack problem is
obtained by making a sequence of decisions on the variables x1, x2, ..., xn.

A decision on variable xi involves deciding which of the values 0 or 1 is to be assigned to
it. After the decision on xn, one of two possible states is arrived at: the capacity of the
knapsack remains m and no profit is gained, or the capacity remaining is m - wn and a profit
of pn has accrued. The remaining decisions on xn-1, ..., x1 must be optimal with respect to
the problem state resulting from the decision on xn; otherwise the overall sequence cannot
be optimal.

The sets are defined by the following recurrence:

S0 = {(0, 0)}
Si1 = {(p, w) | (p - pi, w - wi) ∈ Si-1}

where Si is the set of all pairs (p, w) for fi, including (0, 0), and fi is completely defined
by these pairs. Si is obtained by merging Si-1 and Si1. This merge corresponds to taking the
maximum of the two functions fi-1(x) and fi-1(x - wi) + pi in the objective function of the
0/1 knapsack problem. If one of Si-1 and Si1 has a pair (pj, wj) and the other has a pair
(pk, wk) with pj ≤ pk and wj ≥ wk, then the pair (pj, wj) is discarded; this rule is called the
purging or dominance rule. When generating Si, all pairs (p, w) with w > m may also be
purged.

If we remove the ith item, the profit of the ith item is subtracted from the total profit and
the weight of the ith item is subtracted from the total weight; the resulting pair belongs to
the set for the (i-1)th item. In summary:

S0 = {(0, 0)}

Addition: Si1 = Si-1 + (Pi, Wi)

Merging or union: Si = Si-1 ∪ Si1

Purging (Dominance) Rule

Take any two pairs (Pj, Wj) and (Pk, Wk) in Si. The purging rule states that if Pj ≤ Pk
and Wj ≥ Wk then (Pj, Wj) will be deleted.
Finding the optimal solution

The optimal solution is found by a searching process over the sets, starting from the last
tuple (P, W) of Sn:

(i) if (P, W) ∈ Sn-1 then xn = 0;

(ii) if (P, W) ∉ Sn-1 then xn = 1 and the search continues with (P - Pn, W - Wn) in Sn-1.

The same test is then repeated for xn-1, ..., x1.

Problem : m=6 n=3 (W1,W2,W3)= (2,3,3), and (P1,P2,P3)=(1,2,4)

Solution: Initially take S0 = {0, 0}

From the given data (P1, W1) = (1, 2)

(P2, W2) = (2, 3)

(P3, W3) = (4, 3)

Addition: S11 = S0 + (P1, W1)

S11 = {(0, 0)} + {(1, 2)} = {(1, 2)}

Merging operation: Si = Si-1 ∪ Si1

S1 = S0 ∪ S11

therefore S1 = {(0,0)} ∪ {(1,2)} = {(0,0), (1,2)}

Applying the purging rule to S0 and S11:

(Pj, Wj) = (0, 0); (Pk, Wk) = (1, 2)

Here 0 ≤ 1 but 0 ≥ 2 is false, hence no tuple is deleted.

Calculation of S2 (= S1 U S21): S21 = S1 + (P2 , W2)

= {(0, 0), (1, 2)} + {(2, 3)}

S21 = {(2,3),(3,5)}

S2 = S1 U S21 = {(0,0)(1,2)}U{(2,3),(3,5)}

S2 = {(0, 0), (1, 2), (2, 3), (3, 5)}

No deletion

Calculation of S3 (= S2 ∪ S31):

S31 = S2 + (P3, W3)

= {(0,0),(1,2),(2,3),(3,5)}+{(4,3)}

S31 = {(4,3),(5,5),(6,6),(7,8)}

S3 = S2 U S31

= {(0,0),(1,2),(2,3),(3,5)} ∪ {(4,3),(5,5),(6,6),(7,8)}

= {(0,0),(1,2),(2,3),(3,5),(4,3),(5,5),(6,6),(7,8)}

Tuple (7, 8) is discarded because its weight exceeds the maximum capacity of the knapsack
(8 > m = 6).

So,

S3 ={(0,0),(1,2),(2,3),(3,5),(4,3),(5,5),(6,6)}

(3, 5) will be deleted because 3 ≤ 4 and 5 ≥ 3 (it is dominated by (4, 3)).

S3 = {(0,0),(1,2),(2,3),(4,3),(5,5),(6,6)}

(2, 3) will be deleted because 2 ≤ 4 and 3 ≥ 3 (it is dominated by (4, 3)).

S3 = {(0,0),(1,2),(4,3),(5,5),(6,6)}

To find the optimal solution

The searching process is done on the last tuple of Sn. In our problem the capacity of the
knapsack is 6, so the last tuple is (6, 6). Applying the conditions above to (6, 6):

(6, 6) ∈ S3 but (6, 6) ∉ S2, so x3 = 1 and we continue with

(6, 6) - (4, 3) = (2, 3)

(2, 3) ∈ S2 but (2, 3) ∉ S1, so x2 = 1 and we continue with

(2, 3) - (2, 3) = (0, 0)

(0, 0) ∈ S1 and (0, 0) ∈ S0, so x1 = 0.

Σ pi xi = 2 + 4 = 6 gives the maximum profit.

Therefore the optimal solution is (x1, x2, x3) = (0, 1, 1).

Informal knapsack algorithm

Algorithm DKP(p, w, n, m)
{
    S0 := {(0, 0)};
    for i := 1 to n-1 do
    {
        Si1 := {(P, W) | (P - pi, W - wi) ∈ Si-1 and W ≤ m};
        Si := MergePurge(Si-1, Si1);
    }
    (PX, WX) := last pair in Sn-1;
    (PY, WY) := (P' + pn, W' + wn) where W' is the largest W in
                any pair in Sn-1 such that W + wn ≤ m;
    // Trace back for xn, xn-1, ..., x1.
    if (PX > PY) then xn := 0;
    else xn := 1;
    TraceBackFor(xn-1, ..., x1);
}
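The DKP procedure can be sketched in Python as follows; MergePurge is implemented here by sorting the merged pairs by weight and keeping only pairs whose profit strictly increases, which is exactly the dominance rule. The function and variable names are mine, not from the notes.

```python
def knapsack_pairs(p, w, m):
    """0/1 knapsack via sets of (profit, weight) pairs; returns (profit, x)."""
    n = len(p)
    S = [[(0, 0)]]                                   # S^0
    for i in range(n):
        # Addition step: S1 = S^i + (p_i, w_i), purging pairs over capacity m.
        S1 = [(pp + p[i], ww + w[i]) for pp, ww in S[i] if ww + w[i] <= m]
        # MergePurge: sort by weight, keep pairs with strictly rising profit.
        merged = sorted(set(S[i]) | set(S1), key=lambda t: (t[1], -t[0]))
        kept, best = [], -1
        for pp, ww in merged:
            if pp > best:
                kept.append((pp, ww))
                best = pp
        S.append(kept)
    # Trace back for x_n, ..., x_1 from the most profitable pair in S^n.
    P, W = S[n][-1]
    x = [0] * n
    for i in range(n, 0, -1):
        if (P, W) not in S[i - 1]:
            x[i - 1] = 1
            P, W = P - p[i - 1], W - w[i - 1]
    return S[n][-1][0], x
```

On the worked example, knapsack_pairs([1, 2, 4], [2, 3, 3], 6) returns (6, [0, 1, 1]), matching the trace above.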

Multistage Graph
• A Dynamic Programming formulation for a k-stage graph problem is obtained by first
noticing that every s-to-t path is the result of a sequence of k-2 decisions.
• The ith decision involves determining which vertex in Vi+1, 1 <= i <= k-2, is to be on
the path.
• Let p(i, j) be a minimum-cost path from vertex j in Vi to vertex t, and let cost(i, j) be
the cost of this path.
• Then cost(i, j) = min { c(j, l) + cost(i+1, l) }, the minimum taken over l ∈ Vi+1 with
<j, l> ∈ E.

Another Example of multistage graph is shown in Fig.

Stage 4: In stage 4 the nodes are 9, 10, 11. The costs for these nodes (the initial cases)
are the direct costs from these (k-1)th-stage nodes to the destination node:
cost(4, 9) = 3    cost(4, 10) = 1    cost(4, 11) = 4
Stage 3: In stage 3 the nodes are 6, 7, 8. The costs from stage 3 onward are:
cost(3, 6) = min { 3 + cost(4, 9) = 3 + 3 = 6,  4 + cost(4, 10) = 4 + 1 = 5 } = 5
cost(3, 7) = min { 4 + cost(4, 9) = 4 + 3 = 7,  7 + cost(4, 10) = 7 + 1 = 8,
                   6 + cost(4, 11) = 6 + 4 = 10 } = 7
cost(3, 8) = min { 8 + cost(4, 10) = 8 + 1 = 9,  7 + cost(4, 11) = 7 + 4 = 11 } = 9

ACET 9 Ch Murty
Stage 2: In stage 2 the nodes are 2, 3, 4, 5. The costs from stage 2 onward are:
cost(2, 2) = min { 4 + cost(3, 6) = 4 + 5 = 9,  3 + cost(3, 7) = 3 + 7 = 10 } = 9
cost(2, 3) = min { 5 + cost(3, 6) = 5 + 5 = 10,  4 + cost(3, 7) = 4 + 7 = 11 } = 10
cost(2, 4) = min { 3 + cost(3, 7) = 3 + 7 = 10,  4 + cost(3, 8) = 4 + 9 = 13 } = 10
cost(2, 5) = min { 2 + cost(3, 7) = 2 + 7 = 9,  2 + cost(3, 8) = 2 + 9 = 11 } = 9
Stage 1: The optimal choice at stage 1 is:
cost(1, 1) = min { 3 + cost(2, 2) = 3 + 9 = 12,  8 + cost(2, 3) = 8 + 10 = 18,
                   4 + cost(2, 4) = 4 + 10 = 14,  5 + cost(2, 5) = 5 + 9 = 14 } = 12
The formal algorithm is given as follows:
Algorithm FGraph(G)
begin
    n = |V|
    cost[n] = 0
    for j = n-1 down to 1 do
        Choose a vertex r such that <j, r> is an edge and c[j, r] + cost[r] is minimum.
        cost[j] = c[j, r] + cost[r]
        d[j] = r      // record the decision taken at vertex j
    end for
    return cost[1]
end
The path recovery is done as follows:
Algorithm Path(G, d, n, k)
begin
    path[1] = 1
    for j = 2 to k-1 do
        path[j] = d[path[j-1]]
    end for
    path[k] = n
end
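The forward approach can be sketched in Python. The figure for this example is not reproduced above, so the edge list below is reconstructed from the cost computations in the worked example (an assumption on my part):

```python
INF = float('inf')

def fgraph(n, edges):
    """Forward approach: edges[j] lists (r, c) pairs for edges <j, r>;
    vertex 1 is the source s and vertex n is the destination t."""
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)
    cost[n] = 0
    for j in range(n - 1, 0, -1):            # vertices in reverse order
        for r, c in edges.get(j, []):
            if c + cost[r] < cost[j]:        # cheapest decision at j
                cost[j], d[j] = c + cost[r], r
    path, v = [1], 1                         # recover path from decisions d[]
    while v != n:
        v = d[v]
        path.append(v)
    return cost[1], path

# Edge costs reconstructed from the worked example (assumed):
edges = {
    1: [(2, 3), (3, 8), (4, 4), (5, 5)],
    2: [(6, 4), (7, 3)],  3: [(6, 5), (7, 4)],
    4: [(7, 3), (8, 4)],  5: [(7, 2), (8, 2)],
    6: [(9, 3), (10, 4)], 7: [(9, 4), (10, 7), (11, 6)],
    8: [(10, 8), (11, 7)],
    9: [(12, 3)], 10: [(12, 1)], 11: [(12, 4)],
}
```

With these edges, fgraph(12, edges) returns (12, [1, 2, 6, 10, 12]), matching cost(1, 1) = 12 from the worked example.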

TRAVELLING SALESMAN PROBLEM
➢ Let G = (V, E) be a directed graph with edge costs cij, defined such that cij > 0 for all
i and j, and cij = ∞ if <i, j> ∉ E. Let |V| = n and assume n > 1.
➢ The travelling salesman problem is to find a tour of minimum cost.
➢ A tour of G is a directed simple cycle that includes every vertex in V.
➢ The cost of the tour is the sum of the costs of the edges on the tour.
➢ The tour is a shortest path that starts and ends at the same vertex, i.e. vertex 1.
➢ We know that the tour starts and ends at vertex 1. Every tour consists of an edge
<1, k> for some k ∈ V - {1} and a path from k to 1. The path from k to 1 goes through
each vertex in V - {1, k} exactly once. If the tour is optimal, the path from k to 1 must
be a shortest k-to-1 path going through all vertices in V - {1, k}. Hence the principle
of optimality holds.

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices
in S, and ending at vertex 1. The function g(1, V - {1}) is then the length of an optimal
salesman tour. From the principle of optimality:

g(i, S) = min over j ∈ S of { cij + g(j, S - {j}) }    ... (1)

with the base case g(i, ∅) = ci1. Use eq. (1) to get g(i, S) for |S| = 1, then find g(i, S)
with |S| = 2, and so on, until g(1, V - {1}) is obtained.

APPLICATION:
1. Suppose we have to route a postal van to pick up mail from the mail boxes located at
‘n’ different sites.
2. An n+1 vertex graph can be used to represent the situation.
3. One vertex represents the post office from which the postal van starts and returns
4. Edge <i,j> is assigned a cost equal to the distance from site ‘i’ to site ‘j’.
5. The route taken by the postal van is a tour and we are finding a tour of minimum
length.
6. Every tour consists of an edge <1, k> for some k ∈ V - {1} and a path from vertex k to
vertex 1.
7. The path from vertex k to vertex 1 goes through each vertex in V - {1, k} exactly once.
8. Let g(i, S) be the length of a shortest path starting at vertex i, going through all
vertices in S, and terminating at vertex 1.
9. The tour length is found from g(1, V - {1}) = min over k { c1k + g(k, V - {1, k}) },
and g(1, V - {1}) is the length of an optimal tour.

Example:

The starting point is vertex 1. The cost matrix C is:

    0  10   9   3
    5   0   6   2
    9   6   0   7
    7   3   5   0

|S| = 0:   G(1, ∅) = C11 = 0
           G(2, ∅) = C21 = 5
           G(3, ∅) = C31 = 9
           G(4, ∅) = C41 = 7

|S| = 1:   G(2, {3}) = C23 + G(3, ∅) = 6 + 9 = 15
           G(2, {4}) = C24 + G(4, ∅) = 2 + 7 = 9
           G(3, {2}) = C32 + G(2, ∅) = 6 + 5 = 11
           G(3, {4}) = C34 + G(4, ∅) = 7 + 7 = 14
           G(4, {2}) = C42 + G(2, ∅) = 3 + 5 = 8
           G(4, {3}) = C43 + G(3, ∅) = 5 + 9 = 14

|S| = 2:   G(2, {3,4}) = min(C23 + G(3, {4}), C24 + G(4, {3}))
                       = min(6 + 14, 2 + 14) = 16
           G(3, {2,4}) = min(C32 + G(2, {4}), C34 + G(4, {2}))
                       = min(6 + 9, 7 + 8) = 15
           G(4, {2,3}) = min(C42 + G(2, {3}), C43 + G(3, {2}))
                       = min(3 + 15, 5 + 11) = 16

|S| = 3:   G(1, {2,3,4}) = min(C12 + G(2, {3,4}),
                               C13 + G(3, {2,4}),
                               C14 + G(4, {2,3}))
                         = min(10 + 16, 9 + 15, 3 + 16) = 19

The minimum tour cost is 19.

The Tour is

1 → 4 → 3 → 2→ 1

3 + 5 + 6 + 5 = 19
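The g(i, S) tables above can be computed in Python with a Held-Karp style sketch; the 0-based indexing and the function names are mine, not from the notes.

```python
from itertools import combinations

C = [[0, 10, 9, 3],
     [5, 0, 6, 2],
     [9, 6, 0, 7],
     [7, 3, 5, 0]]        # C[i][j] is the cost of edge <i+1, j+1>

def tsp(C):
    """Return (optimal tour cost, tour as 1-based vertex list)."""
    n = len(C)
    rest = range(1, n)                       # 0-based vertices 2..n
    g, choice = {}, {}
    for i in rest:
        g[(i, frozenset())] = C[i][0]        # base case g(i, empty) = c_i1
    for size in range(1, n):
        for S in map(frozenset, combinations(rest, size)):
            starts = rest if size < n - 1 else [0]
            for i in starts:
                if i in S:
                    continue
                # g(i, S) = min over j in S of { c_ij + g(j, S - {j}) }
                best = min((C[i][j] + g[(j, S - {j})], j) for j in S)
                g[(i, S)], choice[(i, S)] = best
    full = frozenset(rest)
    tour, S, v = [1], full, 0                # rebuild tour from choices
    while S:
        v = choice[(v, S)]
        tour.append(v + 1)
        S = S - {v}
    tour.append(1)
    return g[(0, full)], tour
```

On the matrix above this returns a tour cost of 19 with tour 1 → 4 → 3 → 2 → 1, matching the hand computation.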

Optimal Binary Search Tree
Consider a fixed set of words and their probabilities. The problem is to arrange these words
in a binary search tree. Let S = {if, for, int, while, do} be the set of words, and let fig(a)
and fig(b) be two binary search trees (BSTs) for the set S. Assume that each word has the
same probability and that there are no unsuccessful searches.

In fig(a) we require 4 comparisons to find an identifier in the worst case, but in fig(b)
only 3. On average the two trees need 12/5 and 11/5 comparisons respectively.

In general, we can consider different words with different frequencies (probabilities), and
unsuccessful searches as well.

Let the given set of words be {a1, a2, a3, ..., an} with a1 < a2 < ... < an.
Let p(i) be the probability of searching for ai, and q(i) the probability of an unsuccessful
search, so clearly

Σ p(i) + Σ q(i) = 1,  with 1 ≤ i ≤ n in the first sum and 0 ≤ i ≤ n in the second.

Let us construct an optimal binary search tree. To obtain a cost function for the BST, add
external nodes in the place of every empty subtree.

If the BST represents n identifiers then there will be exactly n internal nodes and n+1
external nodes. Every internal node represents a point where a successful search may
terminate, and every external node represents a point where an unsuccessful search may
terminate. If a successful search terminates at an internal node at level L, then L iterations
are required, so the expected cost contribution of the internal node for ai is P(i) · level(ai).
An unsuccessful search terminates at an external node; the words not in the BST can be
partitioned into n+1 equivalence classes. Equivalence class E0 holds the words less than
every identifier in the BST, and En holds the words greater than every identifier in the BST.
For an unsuccessful search terminating at external node Ei, only level(Ei) - 1 iterations are
needed.

The expected cost of a binary search tree:

Σ (i = 1 to n) Pi · level(ai) + Σ (i = 0 to n) Qi · (level(Ei) − 1)

The level of the root is 1.

n identifiers: a1 < a2 < a3 < ... < an. Pi, 1 ≤ i ≤ n, is the probability that ai is searched
(successful search). n+1 external nodes: E0, E1, E2, ..., En. Qi, 0 ≤ i ≤ n, is the probability
that the search falls in Ei (unsuccessful search), where ai < Ei < ai+1 (a0 = −∞, an+1 = +∞).

C(0, n) = min (1 ≤ k ≤ n) { pk + [q0 + Σ (i=1 to k−1) (pi + qi) + C(0, k−1)]
                               + [qk + Σ (i=k+1 to n) (pi + qi) + C(k, n)] }

and in general, for the subtree containing ai+1, ..., aj:

C(i, j) = min (i+1 ≤ k ≤ j) { pk + [qi + Σ (l=i+1 to k−1) (pl + ql) + C(i, k−1)]
                                 + [qk + Σ (l=k+1 to j) (pl + ql) + C(k, j)] }

Writing w(i, j) = qi + Σ (l=i+1 to j) (pl + ql), the bracketed sums become w(i, k−1) and
w(k, j), so

C(i, j) = min (i+1 ≤ k ≤ j) { C(i, k−1) + C(k, j) + pk + w(i, k−1) + w(k, j) }

Since pk + w(i, k−1) + w(k, j) = w(i, j),

C(i, j) = min (i+1 ≤ k ≤ j) { C(i, k−1) + C(k, j) } + w(i, j)
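The recurrence C(i, j) = min over i < k ≤ j of {C(i, k−1) + C(k, j)} + w(i, j) can be sketched as a straightforward O(n³) dynamic program in Python. As a check, the weights below are the integer frequencies p = (3, 3, 1, 1), q = (2, 3, 1, 1, 1) of a standard textbook instance (an assumed example, since this document's own figures are not reproduced here):

```python
def obst_cost(p, q):
    """p[1..n] success weights, q[0..n] failure weights (p[0] is unused)."""
    n = len(p) - 1
    # w[i][j] = q_i + sum over l = i+1..j of (p_l + q_l)
    w = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                       # and C(i, i) = 0
        for j in range(i + 1, n + 1):
            w[i][j] = w[i][j - 1] + p[j] + q[j]
    for length in range(1, n + 1):           # solve smaller ranges (i, j) first
        for i in range(n - length + 1):
            j = i + length
            # C(i, j) = min over i < k <= j of {C(i, k-1) + C(k, j)} + w(i, j)
            C[i][j] = min(C[i][k - 1] + C[k][j]
                          for k in range(i + 1, j + 1)) + w[i][j]
    return C[0][n]
```

For the assumed instance, obst_cost([0, 3, 3, 1, 1], [2, 3, 1, 1, 1]) gives a total weighted cost of 32.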

