4 Dynamic Programming
Divide & Conquer vs. Dynamic Programming
•Both techniques split their input into parts, find sub-solutions
to the parts, and combine the sub-solutions into a solution for
the whole problem.
•In divide and conquer, the sub-problems are independent: the
solution to one sub-problem does not affect the solutions to
other sub-problems of the same problem.
–In dynamic programming, the sub-problems are dependent:
sub-problems may share sub-sub-problems.
Greedy vs. Dynamic Programming
•Both are algorithm design techniques for optimization problems
(minimizing or maximizing), and both build solutions from a
collection of choices of individual elements.
–The greedy method computes its solution by making its choices
in a serial forward fashion, never looking back or revising
previous choices.
–Dynamic programming computes its solution forward or backward
by synthesizing it from smaller sub-solutions, and by trying
many possibilities and choices before it arrives at the optimal set
of choices.
•There is no a priori test by which one can tell whether the greedy
method will lead to an optimal solution.
–By contrast, there is such a test for dynamic programming, called
the Principle of Optimality.
The Principle of Optimality
•In DP, an optimal sequence of decisions is obtained by making explicit
appeal to the principle of optimality.
•Definition: A problem is said to satisfy the Principle of Optimality if the
sub-solutions of an optimal solution of the problem are themselves
optimal solutions for their sub-problems.
–In solving a problem, we make a sequence of decisions D1, D2, ..., Dn.
If this sequence is optimal, then the sub-sequence of the first k
decisions must also be optimal for the corresponding sub-problem.
•Example: The shortest path problem satisfies the principle of
optimality.
–This is because if a, x1, x2, ..., xn, b is a shortest path from node a to
node b in a graph, then the portion from xi to xj on that path is a
shortest path from xi to xj.
•DP reduces computation by
–Storing the solution to a sub-problem the first time it is solved.
–Looking up the solution when the sub-problem is encountered again.
–Solving sub-problems in a bottom-up or top-down fashion.
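As an illustration of the store-and-look-up idea above, here is a minimal
memoization sketch in Python. It is not from the original slides;
Fibonacci is used only as a stand-in for a problem with shared
sub-problems, shown both top-down and bottom-up:

    from functools import lru_cache

    # Top-down DP: each sub-problem fib(k) is solved once, stored,
    # and looked up on every later encounter.
    @lru_cache(maxsize=None)
    def fib(n: int) -> int:
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    # Bottom-up DP: solve sub-problems in increasing order of size.
    def fib_bottom_up(n: int) -> int:
        table = [0, 1] + [0] * max(0, n - 1)
        for k in range(2, n + 1):
            table[k] = table[k - 1] + table[k - 2]
        return table[n]

    print(fib(30), fib_bottom_up(30))  # both print 832040

Without the stored table, the top-down recursion would re-solve the same
sub-problems exponentially many times; with it, each is solved once.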
Dynamic programming (DP)
•DP is an algorithm design method that can be used when the
solution to a problem can be viewed as the result of a
sequence of decisions.
–Example: The solution to the knapsack problem can be
viewed as the result of a sequence of decisions. We have to
decide the values of xi, 0 or 1. First we make a decision on
x1, then on x2, and so on.
•For some problems, an optimal sequence of decisions can be
found by making the decisions one at a time using the greedy
method.
•For other problems, it is not possible to make step-wise
decisions based on only local information.
–One way to solve such problems is to try all possible
decision sequences. However, the time and space requirements
are prohibitive.
–DP prunes those decision sequences that cannot lead to an
optimal solution.
Dynamic programming approaches
• To solve a problem by using dynamic programming:
–Find the recurrence relations.
• Dynamic programming is a technique for efficiently
computing recurrences by storing partial results.
–Represent the problem by a multistage graph.
–In summary, if a problem can be described by a multistage
graph, then it can be solved by dynamic programming.
• Forward approach and backward approach:
–If the recurrence relations are formulated using the forward
approach, then the relations are solved beginning with the last
decision.
–If the recurrence relations are formulated using the backward
approach, then the relations are solved starting from the
beginning until we reach the final decision.
Example: the 0/1 knapsack problem (see the sketch below).
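A minimal bottom-up sketch of the 0/1 knapsack recurrence in Python
(illustrative only; the function and variable names are our own, not
from the slides):

    def knapsack_01(weights, profits, capacity):
        """Bottom-up DP for 0/1 knapsack.

        table[i][c] = best profit using the first i items
        with remaining capacity c.
        """
        n = len(weights)
        table = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for c in range(capacity + 1):
                # Decision on item i: x_i = 0 (skip) or x_i = 1 (take).
                table[i][c] = table[i - 1][c]
                if weights[i - 1] <= c:
                    take = table[i - 1][c - weights[i - 1]] + profits[i - 1]
                    table[i][c] = max(table[i][c], take)
        return table[n][capacity]

    print(knapsack_01([2, 3, 4], [3, 4, 5], 5))  # 7: take items 1 and 2

Each table entry records the best outcome of the decisions on items
1..i, so no decision sequence is ever re-examined.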
The shortest path
• Given a multistage graph, how can we find a shortest path?
–Forward approach: Let p(i,j) denote the minimum-cost
path from a vertex j in Vi to the terminal vertex T, and let
COST(i,j) denote the cost of p(i,j). Using the forward
approach, we obtain:
COST(i,j) = min { c(j,l) + COST(i+1,l) }, minimized over edges (j,l) with l in Vi+1
–Backward approach: Let p(i,j) be a minimum-cost
path from the source vertex S to a vertex j in Vi, and let
BCOST(i,j) be the cost of p(i,j). Then:
BCOST(i,j) = min { BCOST(i-1,l) + c(l,j) }, minimized over edges (l,j) with l in Vi-1
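A small sketch of the forward approach in Python. The encoding is our
own (not from the slides): vertices are numbered so that every edge goes
from a lower-numbered to a higher-numbered vertex, and the graph is an
adjacency dict of edge costs:

    def multistage_shortest_path(n, edges, source=0, terminal=None):
        """Forward approach: COST(j) = cheapest cost from j to the
        terminal, filled in reverse order of vertex number.
        edges[j] maps each successor l of j to the edge cost c(j, l).
        """
        if terminal is None:
            terminal = n - 1
        INF = float("inf")
        cost = [INF] * n
        choice = [None] * n          # best successor, for path recovery
        cost[terminal] = 0
        for j in range(terminal - 1, source - 1, -1):
            for l, c in edges.get(j, {}).items():
                if c + cost[l] < cost[j]:
                    cost[j] = c + cost[l]
                    choice[j] = l
        # Recover the path from source to terminal.
        path, v = [source], source
        while v != terminal and choice[v] is not None:
            v = choice[v]
            path.append(v)
        return cost[source], path

    # 5-vertex multistage graph: stage {0} -> {1,2} -> {3} -> {4}
    edges = {0: {1: 2, 2: 1}, 1: {3: 2}, 2: {3: 4}, 3: {4: 3}}
    print(multistage_shortest_path(5, edges))  # (7, [0, 1, 3, 4])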
Cont..
• Exercise: Find the shortest path in the multistage graph shown in
the following example. [Figure omitted.]
Algorithm
procedure shortest_path(COST[], A[], n)
// COST[i,j] is the cost of edge (i,j); COST[i,i] = 0.
// On return, A[i,j] is the cost of a shortest path from i to j.
for i := 1 to n do
    for j := 1 to n do
        A(i,j) := COST(i,j)  // copy COST into A
    end for
end for
for k := 1 to n do           // allow vertex k as an intermediate vertex
    for i := 1 to n do
        for j := 1 to n do
            A(i,j) := min(A(i,j), A(i,k) + A(k,j))
        end for
    end for
end for
return A(1..n, 1..n)
end shortest_path
This algorithm runs in O(n³) time; it is, in fact, the Floyd-Warshall
all-pairs shortest path algorithm discussed later in this section.
String editing
• The problem: given two sequences of symbols, X = x1 x2 ... xn
and Y = y1 y2 ... ym, transform X into Y using a sequence of
three operations: Delete, Insert, and Change, where every
operation incurs a cost.
• The objective of string editing is to identify a minimum-cost
sequence of edit operations that will transform X into Y.
Example: consider the sequences
X = {a a b a b} and Y = {b a b b}
Identify a minimum-cost sequence of edit operations that
transforms X into Y. Assume a change costs 2 units, a delete 1
unit, and an insert 1 unit.
(a) apply the brute force approach
(b) apply dynamic programming
Dynamic programming
•The minimum cost of any edit sequence that transforms x1 x2 ... xi
into y1 y2 ... yj (for i>0 and j>0) is the minimum of the three
costs: a delete, a change, or an insert operation.
•The following recurrence equation is used for COST(i,j), where D,
I, and C denote the delete, insert, and change costs (with
C(xi,yj) = 0 when xi = yj):

COST(i,j) = 0                        if i=0 and j=0
COST(i,j) = COST(i-1,0) + D(xi)      if i>0 and j=0
COST(i,j) = COST(0,j-1) + I(yj)      if i=0 and j>0
COST(i,j) = COST'(i,j)               if i>0 and j>0

where
COST'(i,j) = min { COST(i-1,j) + D(xi),
                   COST(i-1,j-1) + C(xi,yj),
                   COST(i,j-1) + I(yj) }
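A bottom-up sketch of this recurrence in Python (illustrative; it uses
the unit costs of the example above: delete 1, insert 1, change 2):

    def edit_cost(X, Y, D=1, I=1, CH=2):
        """COST[i][j] = min cost to transform x1..xi into y1..yj."""
        n, m = len(X), len(Y)
        COST = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            COST[i][0] = COST[i - 1][0] + D          # delete x_i
        for j in range(1, m + 1):
            COST[0][j] = COST[0][j - 1] + I          # insert y_j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                change = 0 if X[i - 1] == Y[j - 1] else CH
                COST[i][j] = min(COST[i - 1][j] + D,          # delete
                                 COST[i - 1][j - 1] + change, # change/match
                                 COST[i][j - 1] + I)          # insert
        return COST[n][m]

    print(edit_cost("aabab", "babb"))  # 3 (e.g., delete a, delete a, insert b)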
[Figure: an example 8-node graph, with traversal orders
DFS: 1→2→4→8→5→6→3→7 and BFS: 1→2→3→4→5→6→7→8.]
Exercise
Find the order of traversing the following graphs using
DFS and BFS
[Two graph figures omitted: one traversal starts at node 0, the
other at node B; the second graph has nodes labeled A through L.]
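The exercise figures are not reproduced here, but the following Python
sketch (with a small stand-in graph of our own) shows how DFS and BFS
traversal orders are computed:

    from collections import deque

    def dfs(graph, start):
        """Preorder DFS; neighbors are visited in list order."""
        order, seen = [], set()
        def visit(u):
            seen.add(u)
            order.append(u)
            for v in graph[u]:
                if v not in seen:
                    visit(v)
        visit(start)
        return order

    def bfs(graph, start):
        """Level-by-level BFS from start."""
        order, seen = [], {start}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return order

    g = {0: [1, 2], 1: [3], 2: [3], 3: []}
    print(dfs(g, 0))  # [0, 1, 3, 2]
    print(bfs(g, 0))  # [0, 1, 2, 3]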
Floyd-Warshall Algorithm
• The Floyd-Warshall algorithm is a classic shortest-path algorithm.
• It is used to solve the All-Pairs Shortest Path problem.
• It computes the shortest path between every pair of vertices of the
given graph.
• The Floyd-Warshall algorithm is an example of the dynamic
programming approach.
Advantages
• It is extremely simple.
• It is easy to implement.
Time Complexity
• The Floyd-Warshall algorithm consists of three nested loops over
all the nodes.
• The innermost loop consists of only constant-complexity
operations.
• Hence, the asymptotic complexity of the Floyd-Warshall algorithm
is O(n³).
• Here, n is the number of nodes in the given graph.
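A compact runnable version of the pseudocode from the earlier Algorithm
slide (a sketch in Python; representing missing edges as float("inf")
in the adjacency matrix is an encoding choice of ours):

    INF = float("inf")

    def floyd_warshall(cost):
        """All-pairs shortest path costs. cost[i][j] is the edge
        cost (INF if no edge, 0 on the diagonal)."""
        n = len(cost)
        A = [row[:] for row in cost]    # copy COST into A
        for k in range(n):              # allow vertex k as intermediate
            for i in range(n):
                for j in range(n):
                    if A[i][k] + A[k][j] < A[i][j]:
                        A[i][j] = A[i][k] + A[k][j]
        return A

    cost = [[0,   3,   INF, 7],
            [8,   0,   2,   INF],
            [5,   INF, 0,   1],
            [2,   INF, INF, 0]]
    for row in floyd_warshall(cost):
        print(row)
    # e.g. the shortest 1->0 cost becomes 5 (path 1 -> 2 -> 3 -> 0)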
Cont..
• Consider the following directed weighted graph. [Figure omitted.]