4 Dynamic Programming


Dynamic programming

1
Divide & Conquer vs. Dynamic Programming
•Both techniques split their input into parts, find sub-solutions to the parts, and combine the sub-solutions into a solution for the whole problem.
•In divide and conquer, the solution to one sub-problem does not affect the solutions to other sub-problems of the same problem.
–In dynamic programming, sub-problems are dependent: sub-problems may share sub-sub-problems.

2
Greedy vs. Dynamic Programming
•Both are algorithm design techniques for optimization problems (minimizing or maximizing), and both build solutions from a collection of choices of individual elements.
–The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
–Dynamic programming computes its solution forward or backward by synthesizing it from smaller sub-solutions, and by trying many possibilities and choices before it arrives at the optimal set of choices.
•There is no a priori test by which one can tell whether the greedy method will lead to an optimal solution.
–By contrast, there is such a test for dynamic programming, called the Principle of Optimality.

3
The Principle of Optimality
•In DP, an optimal sequence of decisions is obtained by making explicit appeal to the principle of optimality.
•Definition: A problem is said to satisfy the Principle of Optimality if the sub-solutions of an optimal solution of the problem are themselves optimal solutions for their sub-problems.
–In solving a problem, we make a sequence of decisions D1, D2, ..., Dn. If this sequence is optimal, then each subsequence of decisions must also be optimal for the sub-problem it solves.
•Example: The shortest path problem satisfies the principle of optimality.
–This is because if a, x1, x2, ..., xn, b is a shortest path from node a to node b in a graph, then the portion from xi to xj on that path is a shortest path from xi to xj.
•DP reduces computation by (see the sketch after this list):
–Storing the solution to a sub-problem the first time it is solved.
–Looking up the stored solution when the sub-problem is encountered again.
–Solving sub-problems in a bottom-up or top-down fashion.
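
As an illustration of storing and re-using sub-problem solutions, here is a minimal top-down (memoized) sketch in Python. The Fibonacci recurrence is only a stand-in example and is not taken from these slides.

# Minimal memoization sketch (hypothetical example, not from the slides):
# store each sub-problem's answer the first time it is solved,
# then look it up whenever the same sub-problem recurs.

def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:              # sub-problem already solved: look it up
        return memo[n]
    if n <= 1:                 # base cases
        result = n
    else:                      # top-down: solve smaller sub-problems first
        result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result           # store the solution for later reuse
    return result

print(fib(40))   # runs in O(n) calls instead of exponentially many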
4
Dynamic programming (DP)
•DP is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions.
–Example: The solution to the knapsack problem can be viewed as the result of a sequence of decisions. We have to decide the values of xi, 0 or 1. First we make a decision on x1, then on x2, and so on.
•For some problems, an optimal sequence of decisions can be found by making the decisions one at a time using the greedy method.
•For other problems, it is not possible to make step-wise decisions based only on local information.
–One way to solve such problems is to try all possible decision sequences. However, the time and space requirements are prohibitive.
–DP prunes those decision sequences that cannot lead to an optimal decision.
5
Dynamic programming approaches
• To solve a problem by using dynamic programming:
–Find out the recurrence relations.
• Dynamic programming is a technique for efficiently computing recurrences by storing partial results.
–Represent the problem by a multistage graph.
–In summary, if a problem can be described by a multistage graph, then it can be solved by dynamic programming.
• Forward approach and backward approach:
–If the recurrence relations are formulated using the forward approach, then the relations are solved beginning with the last decision.
–If the recurrence relations are formulated using the backward approach, then the relations are solved starting from the beginning until we reach the final decision.
Example: 0-1 knapsack problem
6
The shortest path
• Given a multistage graph, how can we find a shortest path? (A small code sketch follows below.)
–Forward approach: Let p(i, j) denote a minimum-cost path from vertex j in stage Vi to the terminal vertex T, and let COST(i, j) denote the cost of p(i, j). Solving from the last stage backwards, we obtain:
COST(i, j) = min over vertices k in Vi+1 with (j, k) in E of { c(j, k) + COST(i+1, k) }
–Backward approach: Let p(i, j) be a minimum-cost path from the source vertex S to a vertex j in Vi, and let BCOST(i, j) be the cost of p(i, j). Then:
BCOST(i, j) = min over vertices k in Vi-1 with (k, j) in E of { BCOST(i-1, k) + c(k, j) }

NB: If (i, j) is not an element of E, then c(i, j) = +∞.
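
The sketch below implements the forward recurrence in Python. The stage structure, the example graph and its edge costs are hypothetical placeholders (the slides' figure is not reproduced here); only the recurrence itself comes from the slides.

from math import inf

def multistage_shortest_path(stages, cost):
    """Forward approach: COST(j) = min over edges (j, k) of c(j, k) + COST(k).
    stages: list of lists of vertices, stages[0] = [source], stages[-1] = [sink].
    cost: dict mapping edge (j, k) -> edge cost."""
    sink = stages[-1][0]
    COST = {sink: 0}          # cost from the terminal vertex to itself is 0
    choice = {}               # remembers the best next vertex for path recovery
    # Solve the relations beginning with the last decision (back to front).
    for stage in reversed(stages[:-1]):
        for j in stage:
            best_k, best = None, inf
            for (u, k), c in cost.items():
                if u == j and COST.get(k, inf) + c < best:
                    best_k, best = k, c + COST[k]
            COST[j], choice[j] = best, best_k
    # Recover the actual path from the source.
    path, v = [stages[0][0]], stages[0][0]
    while v != sink:
        v = choice[v]
        path.append(v)
    return COST[stages[0][0]], path

# Hypothetical 4-stage example (vertex 'S' -> ... -> 'T'), not from the slides:
stages = [['S'], ['A', 'B'], ['C', 'D'], ['T']]
cost = {('S', 'A'): 2, ('S', 'B'): 5, ('A', 'C'): 4, ('A', 'D'): 1,
        ('B', 'C'): 1, ('B', 'D'): 3, ('C', 'T'): 2, ('D', 'T'): 4}
print(multistage_shortest_path(stages, cost))   # -> (7, ['S', 'A', 'D', 'T'])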


7
The shortest path in multistage graphs

8
Cont..
• Find the shortest path in the multistage graph of the following example (figure not reproduced here).

• By greedy method: shortest path is ???

• The real shortest path is: ???

9
Algorithm
procedure shortest_path(COST[], A[], n)
//COST[i,j] is the cost of edge (i,j); A[i,j] will hold the shortest path cost from i to j
//COST[i,i] is 0.0
  for i = 1 to n do
    for j = 1 to n do
      A(i, j) := COST(i, j)          //copy COST into A
  for k = 1 to n do
    for i = 1 to n do
      for j = 1 to n do
        A(i, j) := min(A(i, j), A(i, k) + A(k, j))
      end for
    end for
  end for
  return A(1..n, 1..n)
end shortest_path

This algorithm runs in O(n³) time.
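
A runnable Python sketch of the same all-pairs procedure (this is the Floyd Warshall scheme discussed later); the 3-vertex cost matrix used to exercise it is a hypothetical example, not from the slides.

from math import inf

def all_pairs_shortest_path(COST):
    """Return matrix A with A[i][j] = shortest path cost from i to j.
    COST[i][j] is the edge cost (inf if no edge), COST[i][i] = 0."""
    n = len(COST)
    A = [row[:] for row in COST]                 # copy COST into A
    for k in range(n):                           # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

# Hypothetical 3-vertex example (not from the slides):
COST = [[0, 4, 11],
        [6, 0, 2],
        [3, inf, 0]]
for row in all_pairs_shortest_path(COST):
    print(row)       # [0, 4, 6], [5, 0, 2], [3, 7, 0]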
10
String editing
• The problem: given two sequences of symbols, X = x1 x2 … xn and Y = y1 y2 … ym, transform X into Y using a sequence of three operations: delete, insert and change, where each operation incurs a cost.
• The objective of string editing is to identify a minimum-cost sequence of edit operations that will transform X into Y.
Example: consider the sequences
X = {a a b a b} and Y = {b a b b}
Identify a minimum-cost sequence of edit operations that transforms X into Y. Assume change costs 2 units, delete 1 unit and insert 1 unit.
(a) apply the brute force approach
(b) apply dynamic programming

11
Dynamic programming
•The minimum cost of any edit sequence that transforms x1 x2 … xi into y1 y2 … yj (for i > 0 and j > 0) is the minimum of the three costs obtained by ending with a delete, a change, or an insert operation.
•The following recurrence equation is used for COST(i, j):

COST(i, j) = 0                           if i = 0 and j = 0
COST(i, j) = COST(i-1, 0) + D(xi)        if i > 0 and j = 0
COST(i, j) = COST(0, j-1) + I(yj)        if i = 0 and j > 0
COST(i, j) = COST'(i, j)                 if i > 0 and j > 0

where COST'(i, j) = min { COST(i-1, j) + D(xi),
                          COST(i-1, j-1) + C(xi, yj),
                          COST(i, j-1) + I(yj) }

Here D(xi), I(yj) and C(xi, yj) are the delete, insert and change costs, with C(xi, yj) = 0 when xi = yj (no change is needed).
Computing the table takes O(nm) time.
12
Example
Transform the sequence X = {a a b a b} into Y = {b a b b} with a minimum-cost sequence of edit operations using the dynamic programming approach. Assume that change costs 2 units, and delete and insert cost 1 unit each.

COST table (rows i = 0..5 index X, columns j = 0..4 index Y):

      j=0  j=1  j=2  j=3  j=4
i=0    0    1    2    3    4
i=1    1    2    1    2    3
i=2    2    3    2    3    4
i=3    3    2    3    2    3
i=4    4    3    2    3    4
i=5    5    4    3    2    3

The value 3 at (5, 4) is the optimal solution. By tracing back, one can determine which operations lead to an optimal solution:
• Delete x1, delete x2 and insert y4; or
• Change x1 to y1 and delete x4.
A runnable sketch of this computation follows.
13
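
A minimal Python sketch of the recurrence above; the cost values (change 2, delete 1, insert 1, and change 0 when symbols match) follow the example.

def edit_cost(X, Y, D=1, I=1, Ch=2):
    """Fill COST[i][j] = min cost to transform X[:i] into Y[:j]."""
    n, m = len(X), len(Y)
    COST = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                    # delete-only column
        COST[i][0] = COST[i - 1][0] + D
    for j in range(1, m + 1):                    # insert-only row
        COST[0][j] = COST[0][j - 1] + I
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            change = 0 if X[i - 1] == Y[j - 1] else Ch
            COST[i][j] = min(COST[i - 1][j] + D,            # delete x_i
                             COST[i - 1][j - 1] + change,   # change x_i to y_j
                             COST[i][j - 1] + I)            # insert y_j
    return COST

table = edit_cost("aabab", "babb")
print(table[5][4])    # -> 3, matching the value at (5, 4) in the table above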
Graph Traversals
In DFS, as each vertex v is visited, all of its unvisited children are added to the front for immediate processing.
• Use of a stack leads to a depth-first visit order. The stack keeps track of the nodes to be visited next.

In BFS, as each vertex v is visited, all of its unvisited children are kept in a waiting list.
• Use of a queue leads to a breadth-first visit order. The queue keeps track of the nodes to be visited next.
Depth-First Traversal
Strategy: Go as far as you can (as long as there is an unvisited node depth-wise); otherwise, go back and try another way.

The depth-first traversal visits the nodes in the order: c, a, b, d

Remark:
• A depth-first traversal only follows edges that lead to unvisited vertices.
• If we omit the edges that are not followed, the remaining edges form a tree.
DFS: Algorithm
procedure DFS(v)
  visited(v) = 1                     //mark v as visited
  for each vertex u adjacent to v do
    if (visited(u) = 0) then         //if vertex u is unvisited
      DFS(u)
end DFS

procedure GraphTraversal()
  for i = 1 to n do                  //n is the number of vertices
    visited(vi) = 0                  //mark every vertex as unvisited
  for i = 1 to n do
    if (visited(vi) = 0) then
      DFS(vi)
end GraphTraversal
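
A runnable Python version of the same recursive DFS. The adjacency list below is a hypothetical graph (chosen to be consistent with the first exercise's answer), not taken from a slide figure.

def dfs(v, adj, visited, order):
    visited.add(v)                     # mark v as visited
    order.append(v)
    for u in adj.get(v, []):           # each vertex u adjacent to v
        if u not in visited:           # visit unvisited neighbours immediately
            dfs(u, adj, visited, order)

def graph_traversal_dfs(adj):
    visited, order = set(), []
    for v in adj:                      # restart DFS from every unvisited vertex
        if v not in visited:
            dfs(v, adj, visited, order)
    return order

# Hypothetical graph (not from the slides):
adj = {1: [2, 3, 4], 2: [5, 6], 3: [7], 4: [], 5: [], 6: [], 7: []}
print(graph_traversal_dfs(adj))        # -> [1, 2, 5, 6, 3, 7, 4]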
Breadth-First Search (BFS)
• In DFS, we choose the most recently visited vertex to expand, whereas BFS explores the vertices in order of their distance from the start vertex, level by level.
–BFS examines every path of length i before going on to paths of length i+1.

BFS visits the nodes in the order: a, b, c, d
BFS: Algorithm
procedure BFS(v)
  visited(v) = 1                     //mark v as visited
  enqueue(v)
  while queue is not empty do
    v = dequeue()
    for all vertices u adjacent to v do
      if (visited(u) = 0) then
        visited(u) = 1
        enqueue(u)
end BFS

procedure GraphTraversal()
  for i = 1 to n do                  //n is the number of vertices
    visited(vi) = 0                  //mark every vertex as unvisited
  for i = 1 to n do
    if (visited(vi) = 0) then
      BFS(vi)
end GraphTraversal
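
The corresponding Python sketch uses collections.deque as the queue; the example graph is the same hypothetical adjacency list used in the DFS sketch above.

from collections import deque

def bfs(start, adj, visited, order):
    visited.add(start)                 # mark the start vertex as visited
    queue = deque([start])
    while queue:
        v = queue.popleft()            # dequeue the next vertex
        order.append(v)
        for u in adj.get(v, []):       # enqueue unvisited neighbours
            if u not in visited:
                visited.add(u)
                queue.append(u)

def graph_traversal_bfs(adj):
    visited, order = set(), []
    for v in adj:                      # restart BFS from every unvisited vertex
        if v not in visited:
            bfs(v, adj, visited, order)
    return order

adj = {1: [2, 3, 4], 2: [5, 6], 3: [7], 4: [], 5: [], 6: [], 7: []}
print(graph_traversal_bfs(adj))        # -> [1, 2, 3, 4, 5, 6, 7]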
Exercise
• Show the order of traversing the following graphs using DFS and BFS, starting from node 1.

Graph 1 (levels: 1; 2 3 4; 5 6 7 — figure omitted):
DFS: 1→2→5→6→3→7→4
BFS: 1→2→3→4→5→6→7

Graph 2 (levels: 1; 2 3; 4 5 6 7; 8 — figure omitted):
DFS: 1→2→4→8→5→6→3→7
BFS: 1→2→3→4→5→6→7→8
Exercise
Find the order of traversing the following graphs using DFS and BFS.

[Figures omitted: one graph with numbered nodes, starting at node 0, and one graph with nodes A through L, starting at node B.]
Floyd Warshall Algorithm
• Floyd Warshall Algorithm is a famous algorithm.
• It is used to solve the All-Pairs Shortest Path problem.
• It computes the shortest path between every pair of vertices of the given graph.
• Floyd Warshall Algorithm is an example of the dynamic programming approach.
Advantages
• It is extremely simple.
• It is easy to implement.
Time Complexity
• Floyd Warshall Algorithm consists of three nested loops over all the nodes.
• The innermost loop consists of only constant-time operations.
• Hence, the asymptotic complexity of the Floyd Warshall algorithm is O(n³).
• Here, n is the number of nodes in the given graph.
Cont..
• Consider the following directed weighted graph (shown as a figure, not reproduced here).
• Using the Floyd Warshall algorithm, find the shortest path distance between every pair of vertices.
Solution
Step-01:
• Remove all the self loops and parallel edges (keeping the lowest
weight edge) from the graph.
• In the given graph, there are neither self loops nor parallel edges.
Cont..
Step-02:
• Write the initial distance matrix.
• It represents the distance between every pair of vertices in the form of the given weights.
• For diagonal elements (representing self-loops), distance value = 0.
• For vertices having a direct edge between them, distance value = weight of that edge.
• For vertices having no direct edge between them, distance value = ∞.
• The initial distance matrix for the given graph is shown as a figure (not reproduced here).
Cont..
Step-03:
• Using the Floyd Warshall algorithm, compute the following 4 matrices, D1 through D4 (shown as figures in the slides; a code sketch that produces them follows below).
Cont..
• The last matrix, D4, represents the shortest path distance between every pair of vertices.
Remember-
• In the above problem, there are 4 vertices in the given graph.
• So, there will be a total of 4 matrices of order 4 x 4 in the solution, excluding the initial distance matrix.
• The diagonal elements of each matrix will always be 0.
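
Since the slides' graph and the matrices D1–D4 appear only as figures, the sketch below runs the same update on a hypothetical 4-vertex directed weighted graph and prints the matrix after each intermediate vertex k; the weights are illustrative only, not the slides' graph.

from math import inf

def floyd_warshall_with_snapshots(D0):
    """Print D1..Dn, where Dk allows vertices 1..k as intermediates."""
    n = len(D0)
    D = [row[:] for row in D0]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
        print(f"D{k + 1}:")
        for row in D:
            print(row)
    return D

# Hypothetical initial distance matrix for a 4-vertex digraph (not the slides' graph):
D0 = [[0,   3,   inf, 7],
      [8,   0,   2,   inf],
      [5,   inf, 0,   1],
      [2,   inf, inf, 0]]
floyd_warshall_with_snapshots(D0)   # the final matrix D4 holds all-pairs shortest distances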
Knapsack Problem
We are given the following:
• A knapsack (a kind of shoulder bag) with a limited weight capacity.
• A few items, each having some weight and value.
The problem asks:
• Which items should be placed into the knapsack such that the value or profit obtained by putting the items into the knapsack is maximum,
• and the weight limit of the knapsack is not exceeded.
Cont..
Knapsack Problem Variants-
1. Fractional Knapsack Problem
2. 0/1 Knapsack Problem
0/1 Knapsack Problem-
• As the name suggests, items are indivisible here.
• We cannot take a fraction of any item.
• We have to either take an item completely or leave it completely.
• It is solved using the dynamic programming approach.
0/1 Knapsack Problem Using Dynamic Programming-
Consider-
• Knapsack weight capacity = w
• Number of items each having some weight and value = n
• 0/1 knapsack problem is solved using dynamic programming in
the following steps-
Cont..
Step-01:
• Draw a table say ‘T’ with (n+1) number of rows and (w+1)
number of columns.
• Fill all the boxes of 0th row and 0th column with zeroes as
shown-
Cont..
Step-02:
• Start filling the table row wise top to bottom from left to
right.
• Use the following formula-
T (i , j) = max { T ( i-1 , j ) , valuei + T( i-1 , j – weighti ) }
(the second term is considered only when weighti ≤ j; otherwise T(i, j) = T(i-1, j))
• Here, T(i , j) = maximum value of the selected items if we may take items 1 to i under a weight limit of j.
• This step leads to completely filling the table.
• Then, the value of the last box represents the maximum possible value that can be put into the knapsack.
Cont..
Step-03:
• To identify the items that must be put into the knapsack to obtain the maximum profit:
• Consider the last column of the table.
• Start scanning the entries from bottom to top.
• On encountering an entry whose value is not the same as the value stored in the entry immediately above it, mark the row label of that entry.
• After all the entries are scanned, the marked labels represent the items that must be put into the knapsack.
Cont..
Time Complexity-
• Each entry of the table requires constant time Θ(1) for its computation.
• It takes Θ(nw) time to fill the (n+1)(w+1) table entries.
• It takes Θ(n) time to trace the solution, since the tracing process visits the n rows.
• Thus, overall Θ(nw) time is taken to solve the 0/1 knapsack problem using dynamic programming.
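
A minimal Python sketch of Steps 01 to 03: fill the (n+1) x (w+1) table with the formula above, then trace back to recover the chosen items (the traceback here also reduces the remaining capacity as it scans upwards, a standard refinement of Step-03). The usage line uses only items 1 and 2, whose values and weights (3, 2 and 4, 3) appear in the worked example that follows; items 3 and 4 are not reproduced in these notes.

def knapsack_01(values, weights, w):
    """Fill T[i][j] = max value using items 1..i with weight limit j,
    then trace back to find which items were taken."""
    n = len(values)
    T = [[0] * (w + 1) for _ in range(n + 1)]     # 0th row and 0th column are 0
    for i in range(1, n + 1):
        for j in range(1, w + 1):
            T[i][j] = T[i - 1][j]                 # leave item i
            if weights[i - 1] <= j:               # take item i, if it fits
                T[i][j] = max(T[i][j], values[i - 1] + T[i - 1][j - weights[i - 1]])
    # Traceback (Step-03): scan upwards, marking rows where the value changes.
    chosen, j = [], w
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:                # value changed => item i was taken
            chosen.append(i)
            j -= weights[i - 1]
    return T[n][w], sorted(chosen)

# Items 1 and 2 from the worked example (value 3, weight 2 and value 4, weight 3):
print(knapsack_01([3, 4], [2, 3], 5))   # -> (7, [1, 2])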
Cont..
Problem
• For the given set of items and knapsack capacity = 5 kg, find the optimal solution for the 0/1 knapsack problem making use of the dynamic programming approach (the item table is given as a figure, not reproduced here).

• A thief enters a house to rob it. He can carry a maximal weight of 5 kg in his bag. There are 4 items in the house with the given weights and values. Which items should the thief take if he must either take an item completely or leave it completely?
Cont..
Solution-
Given-
• Knapsack capacity (w) = 5 kg
• Number of items (n) = 4
Step-01:
• Draw a table say ‘T’ with (n+1) = 4 + 1 = 5 number of rows and
(w+1) = 5 + 1 = 6 number of columns.
• Fill all the boxes of 0th row and 0th column with 0.
Cont..
Step-02:
• Start filling the table row wise top to bottom from left to right
using the formula-
T (i , j) = max { T ( i-1 , j ) , valuei + T( i-1 , j – weighti ) }
Finding T(1,1)-
• We have,
i=1
j=1
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2
• Substituting the values, we get-
T(1,1) = max { T(1-1 , 1) , 3 + T(1-1 , 1-2) }
T(1,1) = max { T(0,1) , 3 + T(0,-1) }
T(1,1) = T(0,1) { Ignore T(0,-1) }
T(1,1) = 0
Cont..
Finding T(1,2)-
We have,
• i=1
• j=2
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2
• Substituting the values, we get-
T(1,2) = max { T(1-1 , 2) , 3 + T(1-1 , 2-2) }
T(1,2) = max { T(0,2) , 3 + T(0,0) }
T(1,2) = max {0 , 3+0}
T(1,2) = 3
Cont..
Finding T(1,3)-
We have,
• i=1
• j=3
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2
• Substituting the values, we get-
T(1,3) = max { T(1-1 , 3) , 3 + T(1-1 , 3-2) }
T(1,3) = max { T(0,3) , 3 + T(0,1) }
T(1,3) = max {0 , 3+0}
T(1,3) = 3
Cont..
Finding T(1,4)-
We have,
• i=1
• j=4
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2
• Substituting the values, we get-
T(1,4) = max { T(1-1 , 4) , 3 + T(1-1 , 4-2) }
T(1,4) = max { T(0,4) , 3 + T(0,2) }
T(1,4) = max {0 , 3+0}
T(1,4) = 3
Cont..
Finding T(1,5)-
We have,
• i=1
• j=5
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2
• Substituting the values, we get-
T(1,5) = max { T(1-1 , 5) , 3 + T(1-1 , 5-2) }
T(1,5) = max { T(0,5) , 3 + T(0,3) }
T(1,5) = max {0 , 3+0}
T(1,5) = 3
Cont..
Finding T(2,1)-
We have,
• i=2
• j=1
(value)i = (value)2 = 4
(weight)i = (weight)2 = 3
• Substituting the values, we get-
T(2,1) = max { T(2-1 , 1) , 4 + T(2-1 , 1-3) }
T(2,1) = max { T(1,1) , 4 + T(1,-2) }
T(2,1) = T(1,1) { Ignore T(1,-2) }
T(2,1) = 0
Cont..
Finding T(2,2)-
We have,
• i=2
• j=2
(value)i = (value)2 = 4
(weight)i = (weight)2 = 3
• Substituting the values, we get-
T(2,2) = max { T(2-1 , 2) , 4 + T(2-1 , 2-3) }
T(2,2) = max { T(1,2) , 4 + T(1,-1) }
T(2,2) = T(1,2) { Ignore T(1,-1) }
T(2,2) = 3
Cont..
Finding T(2,3)-
We have,
• i=2
• j=3
(value)i = (value)2 = 4
(weight)i = (weight)2 = 3
• Substituting the values, we get-
T(2,3) = max { T(2-1 , 3) , 4 + T(2-1 , 3-3) }
T(2,3) = max { T(1,3) , 4 + T(1,0) }
T(2,3) = max { 3 , 4+0 }
T(2,3) = 4
Cont..
Finding T(2,4)-
We have,
• i=2
• j=4
(value)i = (value)2 = 4
(weight)i = (weight)2 = 3
• Substituting the values, we get-
T(2,4) = max { T(2-1 , 4) , 4 + T(2-1 , 4-3) }
T(2,4) = max { T(1,4) , 4 + T(1,1) }
T(2,4) = max { 3 , 4+0 }
T(2,4) = 4
Cont..
Finding T(2,5)-
We have,
• i=2
• j=5
(value)i = (value)2 = 4
(weight)i = (weight)2 = 3
• Substituting the values, we get-
T(2,5) = max { T(2-1 , 5) , 4 + T(2-1 , 5-3) }
T(2,5) = max { T(1,5) , 4 + T(1,2) }
T(2,5) = max { 3 , 4+3 }
T(2,5) = 7
Cont..
• Similarly, compute all the entries.
• After all the entries are computed and filled in, the completed table is obtained (shown as a figure, not reproduced here).
• The last entry represents the maximum possible value that can be put into the knapsack.
• So, the maximum possible value that can be put into the knapsack = 7.
Cont..
Identifying Items To Be Put Into the Knapsack-
• Following Step-03, we scan the last column from bottom to top and mark the rows labelled "1" and "2".
• Thus, the items that must be put into the knapsack to obtain the maximum value 7 are Item-1 and Item-2.
