Daa R20 Unit 3


Dynamic Programming (DP)

Dynamic Programming is a design principle used to solve problems with overlapping subproblems.

It is used when the solution to a problem can be viewed as the result of a sequence of decisions. It avoids duplicate calculation in many cases by keeping a table of known results, which is filled up as sub-instances are solved.

In Dynamic Programming we usually start with the smallest, and hence simplest, sub-instances. By combining their solutions, we obtain the answers to sub-instances of increasing size, until finally we arrive at the solution of the original instance.

In other words, it follows a bottom-up technique: start with the smallest and simplest sub-instances, and combine their solutions to get answers to sub-instances of bigger size, until we arrive at the solution for the original instance.
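This bottom-up idea can be sketched with a small example that is not part of the original notes: computing Fibonacci numbers, a classic problem with overlapping subproblems.

```python
# Bottom-up DP sketch (illustrative, not from the notes): Fibonacci.
# fib(n) re-uses fib(n-1) and fib(n-2), so a table of known results
# avoids recomputing the same sub-instance many times.
def fib(n: int) -> int:
    if n < 2:
        return n
    table = [0] * (n + 1)          # table[i] holds the answer for sub-instance i
    table[1] = 1
    for i in range(2, n + 1):      # smallest sub-instances first
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Each table entry is computed once from already-solved smaller sub-instances, which is exactly the bottom-up order described above.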

How DP differs from Greedy and Divide & Conquer

Dynamic programming differs from the Greedy method because the Greedy method generates only one decision sequence, whereas dynamic programming considers many decision sequences. However, sequences containing sub-optimal sub-sequences cannot be optimal and so are never generated.

Divide and conquer is a top-down method. When a problem is solved by divide and conquer, we immediately attack the complete instance, which we then divide into smaller and smaller sub-instances as the algorithm progresses. The difference between Dynamic Programming and Divide and Conquer is that the subproblems in Divide and Conquer are considered to be disjoint and distinct, whereas in Dynamic Programming they are overlapping.

Principle of Optimality

An optimal sequence of decisions has the property that whatever the initial
state and decisions are, the remaining decisions must constitute an optimal
decision sequence with regard to the state resulting from the first decision.
In other words, the principle of optimality is satisfied when, given an optimal solution for a problem, the solutions it induces on the sub-problems are also optimal.

General Method
The Idea of Developing a DP Algorithm

Step 1: Structure: Characterize the structure of an optimal solution.
- Decompose the problem into smaller problems, and find a relation between the structure of the optimal solution of the original problem and the solutions of the smaller problems.

Step 2: Principle of Optimality: Recursively define the value of an optimal solution.
- Express the solution of the original problem in terms of optimal solutions for smaller problems.

Step 3: Bottom-up computation: Compute the value of an optimal solution in a bottom-up fashion by using a table structure.

Step 4: Construction of optimal solution: Construct an optimal solution from computed information.

Steps 3 and 4 may often be combined.
Remarks on the Dynamic Programming Approach
Steps 1-3 form the basis of a dynamic-programming solution to a problem. Step
4 can be omitted if only the value of an optimal solution is required.

All pairs Shortest Paths

Let G = (V, E) be a directed graph with n vertices. The edge costs are represented by an adjacency matrix cost such that, for 1 ≤ i ≤ n and 1 ≤ j ≤ n,

cost(i, i) = 0,
cost(i, j) is the length of the edge <i, j> if <i, j> ∈ E, and
cost(i, j) = ∞ if <i, j> ∉ E.

The graph allows edges with negative cost, but negative-cost cycles are not allowed. The all-pairs shortest path problem is to find a matrix A such that A[i][j] is the length of the shortest path from i to j.

Consider a shortest path from i to j, i ≠ j. The path originates at i, goes through possibly many vertices, and terminates at j. Assume the path contains no cycles: if it did, the cycles could be removed without increasing the cost, because there is no cycle with negative cost.

Initially we set A[i][j] = cost(i, j).

The algorithm makes n passes over A. Let A^0, A^1, ..., A^n represent the matrix after each pass.

Let A^(k-1)[i, j] represent the length of the shortest path from i to j passing through no intermediate vertex of index greater than k - 1. This is the result after k - 1 iterations. Hence the k-th iteration explores whether vertex k lies on an optimal path. A shortest path from i to j passing through no vertex of index greater than k either goes through k or does not.

If it does, A^k[i, j] = A^(k-1)[i, k] + A^(k-1)[k, j].

If it does not, then no intermediate vertex has index greater than k - 1, so A^k[i, j] = A^(k-1)[i, j].

Combining these two cases we get

A^k[i, j] = min{ A^(k-1)[i, j], A^(k-1)[i, k] + A^(k-1)[k, j] }, k ≥ 1, with A^0[i, j] = cost(i, j).

Consider the directed graph given below.


Cost adjacency matrix for the graph is as given below

Copy the cost values to the matrix A. So we have A0 as

Matrix A after each iteration is as given below.

The recurrence relation is:

A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) }, k ≥ 1

Algorithm
void AllPaths(float cost[][], float A[][], int n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A[i, j] is the cost of a shortest path from vertex
// i to vertex j. cost[i, i] = 0.0, for 1 <= i <= n.
{
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            A[i][j] = cost[i][j];      // copy cost into A (A^0)
    for (int k = 1; k <= n; k++)       // allow vertex k as an intermediate
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                A[i][j] = min(A[i][j], A[i][k] + A[k][j]);
}

The above is the algorithm to compute lengths of shortest paths.


Time complexity of the algorithm is O(n³).
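A direct translation of AllPaths into runnable Python is sketched below. The sample cost matrix is a made-up 3-vertex example (the notes' own graph figure is not reproduced here), with float('inf') standing in for ∞.

```python
INF = float('inf')

def all_paths(cost):
    """Lengths of shortest paths between all pairs (Floyd-Warshall).
    cost[i][i] == 0 and cost[i][j] == INF when edge <i, j> is absent."""
    n = len(cost)
    A = [row[:] for row in cost]            # A^0 = cost
    for k in range(n):                      # k-th pass: vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Hypothetical 3-vertex graph (0-indexed), not the one from the notes:
cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
```

Running all_paths(cost) on this matrix yields [[0, 4, 6], [5, 0, 2], [3, 7, 0]]; the triple loop makes the O(n³) time bound explicit.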

0/1 KNAPSACK PROBLEM (DYNAMIC KNAPSACK)

The 0/1 knapsack problem is similar to the knapsack problem as in the Greedy
method except that the xi’s are restricted to have a value of either 0 or 1.

The 0/1 knapsack problem can thus be stated as:

Maximize ∑ pi xi

subject to ∑ wi xi ≤ m (the capacity) and xi = 0 or 1, where 1 ≤ i ≤ n.

The decisions on the xi are made in the order xn, xn-1, ..., x1. Following a decision on xn, we may be in one of two possible states: the capacity remaining in the knapsack is m and no profit has accrued, or the capacity remaining is m - wn and a profit of pn has accrued. It is clear that the remaining decisions xn-1, ..., x1 must be optimal with respect to the problem state resulting from the decision on xn; otherwise xn, ..., x1 will not be optimal.

Let fn(m) be the value of an optimal solution to KNAP(1, n, m).

Since the principle of optimality holds, we obtain

fn(m) = max{ fn-1(m), fn-1(m - wn) + pn }

and, in general,

fi(y) = max{ fi-1(y), fi-1(y - wi) + pi }

A decision on variable xi involves deciding which of the values 0 or 1 is to be assigned to it.

The function fi is completely defined by the set Si of all pairs (P, W) it can produce, where P is the accumulated profit and W the accumulated weight after the first i decisions; Si always includes the pair (0, 0).

Si is obtained by merging Si-1 and S1i, where S1i = {(P + pi, W + wi) | (P, W) ∈ Si-1}. This merge corresponds to taking the maximum of the two functions fi-1(x) and fi-1(x - wi) + pi in the objective function of the 0/1 knapsack problem.

If one of the pairs in the merged set is (Pj, Wj) and another is (Pk, Wk) with Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) is discarded; this rule is called the purging or dominance rule. When generating Si, all pairs (P, W) with W > m may also be purged.

Equivalently, removing the i-th item from a pair subtracts the profit pi from its total profit and the weight wi from its total weight, giving a pair that belongs to Si-1.

Outline to solve the 0/1 problem

Initially we start with S0 = {(0, 0)}, i.e. profit = 0 and weight = 0 (the bag is empty).

Addition

S1i is obtained by adding (Pi, Wi) to all pairs in Si-1. The following recurrence relation defines S1i:

S1i = {(P, W) | (P - Pi, W - Wi) ∈ Si-1}

Merging (union): Si = Si-1 ∪ S1i

Purging (Dominance) Rule

Take any two pairs (Pj, Wj) and (Pk, Wk) from the merged set. The purging rule states that if Pj ≤ Pk and Wj ≥ Wk then (Pj, Wj) is deleted. In other words, remove those pairs that achieve less profit while using up more weight.

Problem: m = 6, n = 3, (W1, W2, W3) = (2, 3, 3) and (P1, P2, P3) = (1, 2, 4)

In other words, the given (profit, weight) pairs are: (1, 2), (2, 3) and (4, 3).

Solution: Initially take S0 = {(0, 0)}.

Addition:

S11 = add (P1, W1) to every pair in S0

S11 = {(1, 2)} because S0 = {(0, 0)}

Merging operation: Si = Si-1 ∪ S1i

S1 = S0 ∪ S11 = {(0, 0)} ∪ {(1, 2)} = {(0, 0), (1, 2)}

S12 = S1 + (P2, W2)

= {(0, 0), (1, 2)} + {(2, 3)}

S12 = {(2, 3), (3, 5)}

S2 = S1 ∪ S12 = {(0, 0), (1, 2)} ∪ {(2, 3), (3, 5)}

S2 = {(0, 0), (1, 2), (2, 3), (3, 5)}

S13 = S2 + (P3, W3)

= {(0, 0), (1, 2), (2, 3), (3, 5)} + {(4, 3)}

S13 = {(4, 3), (5, 5), (6, 6), (7, 8)}

S3 = S2 ∪ S13

= {(0, 0), (1, 2), (2, 3), (3, 5)} ∪ {(4, 3), (5, 5), (6, 6), (7, 8)}

= {(0, 0), (1, 2), (2, 3), (3, 5), (4, 3), (5, 5), (6, 6), (7, 8)}

The tuple (7, 8) is discarded because its weight (8) exceeds the maximum capacity m = 6 of the knapsack, so

S3 = {(0, 0), (1, 2), (2, 3), (3, 5), (4, 3), (5, 5), (6, 6)}

Applying the purging rule:

(Pj, Wj) = (3, 5); (Pk, Wk) = (4, 3)

Here (3, 5) is deleted because 3 ≤ 4 and 5 ≥ 3.

The purging rule has to be applied for every Si; in this example, no purging was necessary in S1 and S2.

S3 = {(0, 0), (1, 2), (2, 3), (4, 3), (5, 5), (6, 6)}

(2, 3) is also deleted because 2 ≤ 4 and 3 ≥ 3.

S3 = {(0, 0), (1, 2), (4, 3), (5, 5), (6, 6)}

Finding the optimal solution

The search starts with the last tuple (P', W') of Sn. We want a set of 0/1 values for the xi such that ∑ pi xi = P' and ∑ wi xi = W'; this can be determined by searching back through the sets Si.

If (P', W') ∈ Sn-1, we can set xn = 0. Otherwise (if (P', W') ∉ Sn-1), we set xn = 1. We then continue this search recursively through Sn-1, Sn-2, ..., S0, with (P', W') in the case xn = 0, or (P' - pn, W' - wn) in the case xn = 1.

In the above example, the last tuple in S3 is (6, 6), but (6, 6) ∉ S2. Hence x3 = 1.

Now we search for (6 - P3, 6 - W3), which is (2, 3).

(2, 3) ∈ S2 but (2, 3) ∉ S1, so x2 = 1.

Now we search for (2 - P2, 3 - W2), which is (0, 0).

(0, 0) ∈ S1 and (0, 0) ∈ S0. Hence x1 = 0.

Therefore the optimal solution is {x1 = 0, x2 = 1, x3 = 1}, meaning items 2 and 3 (weights 3 and 3) are taken and item 1 (weight 2) is ignored.

Informal knapsack algorithm

Algorithm DKP(p, w, n, m)
{
    S0 := {(0, 0)};
    for i := 1 to n - 1 do
    {
        S1i := {(P, W) | (P - pi, W - wi) ∈ Si-1 and W ≤ m};
        Si := MergePurge(Si-1, S1i);
    }
    (PX, WX) := last pair in Sn-1;
    (PY, WY) := (P' + pn, W' + wn) where W' is the largest W in
                any pair in Sn-1 such that W + wn ≤ m;
    // Trace back for xn, xn-1, ..., x1.
    if (PX > PY) then xn := 0;
    else xn := 1;
    TraceBackFor(xn-1, ..., x1);
}
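A runnable Python sketch of this scheme (an illustrative implementation, not the notes' own code) is below; it builds the Si sets with merging and the dominance rule, then traces back the xi values.

```python
def dkp(p, w, m):
    """0/1 knapsack via pair sets: S[i] is the purged list of
    (profit, weight) pairs reachable using items 1..i, sorted by weight."""
    n = len(p)
    S = [[(0, 0)]]                                    # S0 = {(0, 0)}
    for i in range(n):
        # S1^(i+1): add (p[i], w[i]) to every pair, dropping weights > m
        S1 = [(P + p[i], W + w[i]) for (P, W) in S[i] if W + w[i] <= m]
        merged = sorted(S[i] + S1, key=lambda t: (t[1], -t[0]))
        purged = []
        for P, W in merged:                           # dominance (purging) rule
            if not purged or P > purged[-1][0]:
                purged.append((P, W))
        S.append(purged)
    # trace back the x_i values starting from the last pair of S[n]
    P, W = S[n][-1]
    x = [0] * n
    for i in range(n, 0, -1):
        if (P, W) not in S[i - 1]:
            x[i - 1] = 1
            P, W = P - p[i - 1], W - w[i - 1]
    return S[n], x
```

For the example above, dkp([1, 2, 4], [2, 3, 3], 6) returns the purged S3 = [(0, 0), (1, 2), (4, 3), (5, 5), (6, 6)] and x = [0, 1, 1].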

TRAVELLING SALESMAN PROBLEM

Let G = (V, E) be a directed graph with edge costs cij, defined such that cij > 0 for all i and j, and cij = ∞ if (i, j) ∉ E. Let |V| = n and assume n > 1.

The travelling salesman problem is to find a tour of minimum cost.

A tour of G is a directed simple cycle that includes every vertex in V. The cost of the tour is the sum of the costs of the edges on the tour. Without loss of generality, we take the tour to start and end at vertex 1.

Every tour consists of an edge <1, k> for some k ∈ V - {1} and a path from k to 1. The path from k to 1 goes through each vertex in V - {1, k} exactly once. If the tour is optimal, the path from k to 1 must be a shortest k-to-1 path going through all vertices in V - {1, k}. Hence the principle of optimality holds.

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and ending at vertex 1. Then g(1, V - {1}) is the length of an optimal salesman tour. From the principle of optimality,

g(1, V - {1}) = min{ c1k + g(k, V - {1, k}) }, over 2 ≤ k ≤ n ... (1)

and, in general,

g(i, S) = min{ cij + g(j, S - {j}) }, over j ∈ S ... (2)

Use equation (2) to get g(i, S) for |S| = 1; then find g(i, S) with |S| = 2, and so on.

APPLICATION:

1. Suppose we have to route a postal van to pick up mail from the mail boxes located at n different sites.
2. An (n + 1)-vertex graph can be used to represent the situation.
3. One vertex represents the post office from which the postal van starts and to which it returns.
4. Edge <i, j> is assigned a cost equal to the distance from site i to site j.
5. The route taken by the postal van is a tour, and we are finding a tour of minimum length.
6. Every tour consists of an edge <1, k> for some k ∈ V - {1} and a path from vertex k to vertex 1.
7. The path from vertex k to vertex 1 goes through each vertex in V - {1, k} exactly once.
8. g(i, S) is the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.
9. The function g(1, V - {1}) is the length of an optimal tour, found using g(1, V - {1}) = min{ c1j + g(j, V - {1, j}) }.

STEPS TO FIND THE PATH:

1. Find g(i, Φ) = ci1, 1 ≤ i ≤ n; then use equation (2) to obtain g(i, S) for all S of size 1.
2. That is, we start with |S| = 1, i.e. there is only one vertex in the set S.
3. Then |S| = 2, and we proceed until |S| = n - 1.
4. For example, consider the graph below. In g(i, S), i is the starting position and S is the set of vertices still to be visited.

From the equation g(i, S) = min{ cij + g(j, S - {j}) } we obtain g(i, S) for all S of size 1, 2, ...

When |S| < n - 1, the values of i and S for which g(i, S) is needed are those with i ≠ 1, 1 ∉ S and i ∉ S.

|S| = 0:

For i = 1 to n, g(i, Φ) = ci1:

g(1, Φ) = c11 = 0; g(2, Φ) = c21 = 5; g(3, Φ) = c31 = 6; g(4, Φ) = c41 = 8
|S| = 1:

i = 2 to 4:

g(2,{3}) = c23 + g(3,Φ) = 9+6 =15

g(2,{4}) = c24 + g(4,Φ) = 10+8 =18


g(3,{2}) = c32 + g(2,Φ) = 13+5 =18
g(3,{4}) = c34 + g(4,Φ) = 12+8 =20
g(4,{2}) = c42 + g(2,Φ) = 8+5 =13
g(4,{3}) = c43 + g(3,Φ) = 9+6 =15
|S| = 2 (i ≠ 1, 1 ∉ S and i ∉ S):

g(2,{3,4}) = min{c23+g(3{4}),c24+g(4,{3})}

= min{9+20,10+15} = min{29,25} =25

g(3,{2,4}) =min{c32+g(2{4}),c34+g(4,{2})}

= min{13+18,12+13} = min{31,25} = 25

g(4,{2,3}) = min{c42+g(2{3}),c43+g(3,{2})}

= min{8+15,9+18} = min{23,27} =23

|S| = 3:

From equation (1) we obtain

g(1, {2,3,4}) = min{ c12 + g(2, {3,4}), c13 + g(3, {2,4}), c14 + g(4, {2,3}) }

= min{10 + 25, 15 + 25, 20 + 23} = min{35, 40, 43} = 35

The optimal cost is 35.

The shortest path is found by tracing back:

g(1, {2,3,4}) = c12 + g(2, {3,4}) => 1 -> 2

g(2, {3,4}) = c24 + g(4, {3}) => 1 -> 2 -> 4

g(4, {3}) = c43 + g(3, Φ) => 1 -> 2 -> 4 -> 3 -> 1

So the optimal tour is 1 -> 2 -> 4 -> 3 -> 1.
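The g(i, S) computation can be sketched in Python (vertices renumbered 0..n-1, tour anchored at vertex 0). The cost matrix C below is reconstructed from the cij values used in the worked computations above; this is an illustrative implementation, not the notes' own code.

```python
from itertools import combinations

def tsp(c):
    """Dynamic-programming TSP: g(i, S) = min over j in S of
    c[i][j] + g(j, S - {j}), with the tour starting and ending at vertex 0."""
    n = len(c)
    g = {}                                    # (i, S) -> (cost, next vertex)
    for i in range(1, n):
        g[(i, frozenset())] = (c[i][0], 0)    # g(i, {}) = c[i][0]
    for size in range(1, n - 1):              # |S| = 1, 2, ..., n - 2
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                g[(i, S)] = min((c[i][j] + g[(j, S - {j})][0], j) for j in S)
    full = frozenset(range(1, n))             # g(1, V - {1}) in the notes
    cost, first = min((c[0][j] + g[(j, full - {j})][0], j) for j in full)
    tour, S, j = [0], full, first             # reconstruct the tour
    while j != 0:
        tour.append(j)
        S = S - {j}
        j = g[(j, S)][1]
    tour.append(0)
    return cost, tour

# Cost matrix reconstructed from the worked example
# (1-indexed vertices 1..4 become 0..3 here):
C = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
```

tsp(C) returns (35, [0, 1, 3, 2, 0]), i.e. the tour 1 -> 2 -> 4 -> 3 -> 1 of cost 35 found above.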

Optimal Binary Search Tree

Consider a fixed set of words and their probabilities. The problem is to arrange these words in a binary search tree so that the expected search cost is minimum.

Let S be the set of words

S = {if, for, int, while, do}

The original notes show two binary search trees (a) and (b) for the given set S, under the assumption that each word has the same probability and there are no unsuccessful searches.

We require 4 comparisons to find an identifier in the worst case in tree (a), but only 3 in tree (b); on average the two trees need 12/5 and 11/5 comparisons respectively.

In general, we consider different words with different frequencies (probabilities), and unsuccessful searches as well.

Let the given set of words be {a1, a2, a3, ..., an} with a1 < a2 < ... < an.

Let p(i) be the probability of searching for ai, and q(i) the probability of an unsuccessful search, so clearly

∑ p(i) + ∑ q(i) = 1, where 1 ≤ i ≤ n in the first sum and 0 ≤ i ≤ n in the second.

Let us construct an optimal binary search tree.

To obtain a cost function for the BST, add external nodes in the place of every empty subtree. (The original notes show a tree redrawn with its external nodes added.)

If the BST holds n identifiers, there will be exactly n internal nodes and n + 1 external nodes. Every internal node represents a point where a successful search may terminate, and every external node represents a point where an unsuccessful search may terminate.

If a successful search terminates at an internal node at level L, then L iterations are required, so the expected cost contribution of the internal node for ai is p(i) * level(ai).

Unsuccessful searches terminate at external nodes. The words not in the BST can be partitioned into n + 1 equivalence classes E0, E1, ..., En: class Ei contains the identifiers lying between ai and ai+1, with E0 holding those less than a1 (the least element of the BST) and En those greater than an (the greatest). An unsuccessful search in class Ei terminates at the corresponding external node after level(Ei) - 1 iterations.

The total expected cost is therefore

∑ p(i) * level(ai) + ∑ q(i) * (level(Ei) - 1), with 1 ≤ i ≤ n and 0 ≤ i ≤ n respectively.

An optimal BST is one for which this value is minimum.


Problem:

Let p(i) = q(i) = 1/7 for all i, and let the word set be (a1, a2, a3) = (do, if, while).

The original notes show the five possible binary search trees (a) through (e) for these three words; tree (b) is the balanced one with "if" at the root.

With equal probability p(i) = q(i) = 1/7 for all i, we have

cost(a) = 1/7*(1 + 2 + 3) + 1/7*(1 + 2 + 3 + 3) = 15/7

cost(b) = 1/7*1 + 1/7*2 + 1/7*2 + 1/7*2 + 1/7*2 + 1/7*2 + 1/7*2 = 13/7

cost(c) = cost(d) = cost(e) = 15/7

Tree (b) is optimal.

With p(1) = 0.5, p(2) = 0.1, p(3) = 0.05, q(0) = 0.15, q(1) = 0.1, q(2) = 0.05, q(3) = 0.05:

cost(tree a) = 2.05

cost(tree c) = 1*0.5 + 2*0.1 + 3*0.05 + 3*0.15 + 3*0.1 + 2*0.05 + 1*0.05

= 0.5 + 0.2 + 0.15 + 0.45 + 0.3 + 0.10 + 0.05

= 1.75

To apply dynamic programming to the problem of obtaining an optimal BST, we need to construct the tree as the result of a sequence of decisions and observe the principle of optimality.

One possible approach is to make a decision as to which of the ai's should be assigned to the root node of the tree. If we choose ak, then the internal nodes for a1, a2, ..., ak-1 will lie in the left subtree l of the root, together with the external nodes E0, ..., Ek-1. The remaining nodes ak+1, ..., an and Ek, ..., En will be in the right subtree r.

cost(l) = ∑ p(i) * level(ai) + ∑ q(i) * (level(Ei) - 1), with 1 ≤ i ≤ k-1 and 0 ≤ i ≤ k-1 respectively

cost(r) = ∑ p(i) * level(ai) + ∑ q(i) * (level(Ei) - 1), with k+1 ≤ i ≤ n and k ≤ i ≤ n respectively

Using the formula

w(i, j) = q(i) + ∑ (q(l) + p(l)), summed over l = i+1, ..., j,

we obtain the following as the expected cost of the search tree:

p(k) + cost(l) + cost(r) + w(0, k-1) + w(k, n)

For the left subtree the cost is c(0, k-1), using w(0, k-1); for the right subtree the cost is c(k, n), using w(k, n). The cost of the whole tree is therefore

c(0, n) = min{ c(0, k-1) + c(k, n) + w(0, k-1) + w(k, n) + p(k) }, over 1 ≤ k ≤ n

= min{ c(0, k-1) + c(k, n) } + w(0, n), over 1 ≤ k ≤ n

since w(0, n) = w(0, k-1) + w(k, n) + p(k) is the total tree weight.

In general,

c(i, j) = min{ c(i, k-1) + c(k, j) } + w(i, j), over i < k ≤ j

The above equation can be solved for c(0, n) by first computing all c(i, j) such that j - i = 1. Next we compute all c(i, j) such that j - i = 2, then c(i, j) with j - i = 3, and so on. We also record the root r(i, j) chosen for each subtree; the optimal BST can then be constructed from these r(i, j).

Note: the initial values are

c(i, i) = 0
w(i, i) = q(i)
r(i, i) = 0, for all 0 ≤ i ≤ n

From observation, w(i, j) = p(j) + q(j) + w(i, j-1).
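The computation order described above (all c(i, j) with j - i = 1, then j - i = 2, and so on) can be sketched in Python; the driver values re-use the earlier three-word example with p(i) = q(i) = 1/7. This is an illustrative implementation, not the notes' own code.

```python
def obst(p, q):
    """Tables c, w, r for an optimal BST on keys a_1..a_n.
    p[1..n] are success probabilities (p[0] is unused); q[0..n] are
    failure probabilities. Uses w(i,j) = w(i,j-1) + p(j) + q(j) and
    c(i,j) = min over i < k <= j of { c(i,k-1) + c(k,j) } + w(i,j)."""
    n = len(p) - 1
    w = [[0.0] * (n + 1) for _ in range(n + 1)]
    c = [[0.0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                        # c(i,i) = 0, r(i,i) = 0
    for d in range(1, n + 1):                 # all (i, j) with j - i = d
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            k = min(range(i + 1, j + 1),
                    key=lambda k: c[i][k - 1] + c[k][j])
            r[i][j] = k                       # root of this optimal subtree
            c[i][j] = c[i][k - 1] + c[k][j] + w[i][j]
    return c, w, r

# Three-word example (do, if, while) with p(i) = q(i) = 1/7:
c, w, r = obst([0, 1/7, 1/7, 1/7], [1/7, 1/7, 1/7, 1/7])
```

Here c[0][3] comes out as 13/7 and r[0][3] = 2, i.e. a2 = "if" at the root, matching the optimal tree (b) found earlier.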

(Problem and Solution discussed in the classroom)

MULTI STAGE GRAPHS

(The multistage graph for this example is not reproduced here; the table below gives, for each vertex, the minimum cost to the destination vertex 12 and the next vertex on that minimum-cost path.)

VERTEX    1    2   3   4   5   6   7   8   9  10  11  12
COST     16    7   9  18  15   7   5   7   4   2   5   0
DISTANCE 2/3   7   6   8   8  10  10  10  12  12  12  12
