
Dynamic programming

Syllabus Content
• General strategy
• Characteristics of Dynamic Programming
• Applications of Dynamic Programming
• Multistage graphs
• All-pairs shortest path algorithm: Floyd-Warshall's Algorithm
• The 0/1 Knapsack Problem
• Traveling Salesperson
• Longest common subsequence
Dynamic Programming
Dynamic Programming is a general algorithm design technique
for solving problems defined by recurrences with overlapping
subproblems.

• Invented by American mathematician Richard Bellman in the
1950s to solve optimization problems.

• “Programming” here means “planning”.

• Main idea:
- set up a recurrence relating a solution to a larger instance to
solutions of some smaller instances
- solve each smaller instance once
- record solutions in a table
- extract the solution to the initial instance from that table
Dynamic programming
• Dynamic programming is an algorithm design method that can
be used when the solution to a problem can be viewed as the
result of a sequence of decisions.

• Dynamic programming is a technique that breaks a problem
into sub-problems and saves their results for future use, so
that we do not need to compute the same result again.

• In dynamic programming an optimal sequence of decisions is
obtained by making explicit appeal to the principle of
optimality.
Dynamic programming
• The essential difference between the greedy method and
dynamic programming is that in the greedy method only one
decision sequence is ever generated, whereas in dynamic
programming many decision sequences may be generated.
However, sequences containing suboptimal subsequences
cannot be optimal and so will not be generated.

• Dynamic Programming is a general approach to solving
problems, much like “divide-and-conquer” is a general
method, except that unlike divide-and-conquer, the
subproblems will typically overlap.
Dynamic programming design involves 4 major
steps:
1) Characterize the structure of an optimal solution.
2) Recursively define the value of an optimal
solution.
3) Compute the value of an optimal solution in a
bottom-up fashion.
4) Construct an optimal solution from the computed
information.
General Characteristics of
Dynamic Programming
The general characteristics of Dynamic Programming are:
1) The problem can be divided into stages, with a policy
decision required at each stage.
2) Each stage has a number of states associated with it.
3) Given the current stage, an optimal policy for the remaining
stages is independent of the policy adopted in previous stages.
4) The solution procedure begins by finding the optimal policy
for each state of the last stage.
5) A recursive relation is available which identifies the optimal
policy for each state with n stages remaining, given the
optimal policy for each state with (n-1) stages remaining.
APPLICATIONS OF DYNAMIC
PROGRAMMING
1) Matrix Chain Multiplication
2) Optimal Binary Search Trees
3) 0/1 Knapsack Problem
4) Multistage Graph
5) Traveling Salesperson Problem
6) Reliability Design
Approaches of dynamic programming

There are two approaches to dynamic programming:

• Top-down approach
• Bottom-up approach
Top-down approach
• The top-down approach follows the
memoization technique, while the bottom-up
approach follows the tabulation method.

• Here memoization is equal to the sum of
recursion and caching.

• Recursion means calling the function itself,
while caching means storing the intermediate
results.
Bottom-Up approach
• The bottom-up approach is also one of the techniques which
can be used to implement dynamic programming.

• It uses the tabulation technique to implement the dynamic
programming approach.

• It solves the same kind of problems, but it removes the
recursion.

• If we remove the recursion, there is no stack overflow issue
and no overhead of recursive function calls.

• In this tabulation technique, we solve the problems and store
the results in a matrix.
Ex 1: Fibonacci Top-Down Approach
• Recall the definition of Fibonacci numbers:

F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1

• Computing the nth Fibonacci number recursively (top-down)
expands a tree of calls in which the same subproblems recur:

                    F(n)
          F(n-1)          F(n-2)
      F(n-2) F(n-3)   F(n-3) F(n-4)
                    ...

int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 2) + fib(n - 1);
}

Memoization (storing results): once computed values are cached,
fib(n) requires only n+1 distinct calls (e.g., 6 calls for n = 5),
so the running time is O(n).
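To make the recursion-plus-caching idea concrete, here is a minimal
memoized variant of fib in C. It is a sketch under assumptions: the
MAXN bound, the memo array, and the name fib_memo are illustrative
and not part of the original slides.

#include <stdio.h>
#include <string.h>

#define MAXN 1000              /* assumed upper bound on n */
long long memo[MAXN];          /* memo[i] == -1 means "not yet computed" */

long long fib_memo(int n)
{
    if (n <= 1)
        return n;              /* base cases F(0)=0, F(1)=1 */
    if (memo[n] != -1)
        return memo[n];        /* cache hit: no recomputation */
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}

int main(void)
{
    memset(memo, -1, sizeof memo);      /* mark every entry unknown */
    printf("%lld\n", fib_memo(40));     /* 102334155, in O(n) calls */
    return 0;
}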
Ex: Fibonacci numbers bottom-up
Computing the nth Fibonacci number using bottom-up iteration and
recording results:

F(0) = 0
F(1) = 1
F(2) = 1 + 0 = 1
...
F(n-2) = F(n-3) + F(n-4)
F(n-1) = F(n-2) + F(n-3)
F(n) = F(n-1) + F(n-2)

int fibo(int n)
{
    if (n <= 1)
        return n;
    int A[n + 1];
    A[0] = 0; A[1] = 1;
    for (int i = 2; i <= n; i++)
    {
        A[i] = A[i - 1] + A[i - 2];
    }
    return A[n];
}

The table is filled left to right: 0 1 1 . . . F(n-2) F(n-1) F(n)

Efficiency:
- time: O(n)
- space: O(n)
Feature          | Top-Down (Memoization)            | Bottom-Up (Tabulation)
-----------------+-----------------------------------+----------------------------------
Definition       | Solves a problem by breaking it   | Builds solutions to subproblems
                 | into smaller subproblems and      | iteratively and stores them in a
                 | solving them recursively while    | table to avoid recursion.
                 | storing results to avoid          |
                 | recomputation.                    |
Implementation   | Uses recursion with memoization   | Uses iteration with a table
                 | (storing already computed         | (array/matrix) to store results
                 | values).                          | of subproblems.
Recursion        | Yes, recursive approach.          | No, iterative approach.
Space Complexity | Higher (due to recursive stack    | Lower (only requires a table to
                 | and memoization table).           | store results).
Time Complexity  | O(n), but recursion adds          | O(n), usually more optimized
                 | overhead.                         | than recursion.
Example Use Case | Fibonacci sequence using          | Fibonacci sequence using a
                 | recursion + memoization.          | loop + array.
Dynamic Programming (DP) vs Greedy Algorithm
vs Divide and Conquer

Feature    | Dynamic Programming (DP)     | Greedy Algorithm            | Divide and Conquer
-----------+------------------------------+-----------------------------+----------------------------
Approach   | Solves problems by breaking  | Makes a locally optimal     | Divides a problem into
           | them into overlapping        | choice at each step, hoping | independent subproblems,
           | subproblems and solving them | for a globally optimal      | solves them recursively,
           | optimally using memoization  | solution.                   | and combines solutions.
           | or tabulation.               |                             |
Subproblem | Yes, subproblems overlap and | No overlapping subproblems; | No overlapping subproblems;
Overlap    | are solved multiple times.   | decisions are made in       | each subproblem is solved
           |                              | sequence.                   | independently.
Dynamic Programming (DP) vs Greedy Algorithm
vs Divide and Conquer (continued)

Feature          | Dynamic Programming (DP)     | Greedy Algorithm           | Divide and Conquer
-----------------+------------------------------+----------------------------+----------------------------
Recursion Usage  | May use recursion (Top-Down) | Usually iterative.         | Uses recursion extensively.
                 | or iteration (Bottom-Up).    |                            |
Time Complexity  | Higher than Greedy but       | Usually faster than DP but | Can be efficient (e.g.,
                 | optimized using              | might be incorrect for     | O(n log n) for merge sort)
                 | memoization/tabulation.      | some problems.             | but has recursive overhead.
Space Complexity | Higher due to memoization    | Generally low space        | Can be high due to
                 | or tabulation.               | complexity.                | recursive calls.
Examples         | Fibonacci, 0/1 Knapsack,     | Huffman Coding, Kruskal's  | Merge Sort, Quick Sort,
                 | Multistage graph, all-pairs  | Algorithm, Prim's          | Binary Search.
                 | shortest path (Floyd-        | Algorithm, Activity        |
                 | Warshall).                   | Selection.                 |
MULTISTAGE GRAPH
• A multistage graph G=(V,E) is a directed graph in which the vertices are
partitioned into k>=2 disjoint sets Vi, where 1<=i<=k.
• If <u,v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some 1<=i<k.
• The sets V1 and Vk are such that |V1| = |Vk| = 1.
• Let S and T be the two vertices in V1 and Vk respectively; the vertex S is called
the “Source” and T is called the “Sink” or “Destination”.
• Let c(i,j) be the cost of the edge <i,j>.
• The cost of a path from S to T is the sum of the costs of the edges on the path.
• The multistage graph problem is to find a minimum-cost path from S to T.
• Each set Vi defines a stage in the graph.
• Because of the constraints on E, every path from S to T starts in stage 1,
goes to stage 2, then stage 3, then stage 4, and so on,
and eventually terminates in stage k.
• The multistage graph problem can be solved in 2 ways:
• Forward method (a sketch follows below)
• Backward method
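As an illustration of the forward method, here is a minimal C sketch.
The names min_cost_path, c, d, and the vertex count N are illustrative
assumptions; vertices are assumed numbered 0..N-1 in stage order, so
every edge <u,v> has u < v, and c[u][v] is INF when there is no edge.

#include <stdio.h>

#define N   8         /* illustrative number of vertices */
#define INF 1000000   /* stands in for "no edge" */

/* Forward method: cost[j] = min over edges <j,l> of c[j][l] + cost[l],
   computed for j = N-2 down to 0; d[j] records the next vertex on a
   minimum-cost path from j to the sink. */
int min_cost_path(int c[N][N], int d[N])
{
    int cost[N];
    cost[N - 1] = 0;                       /* the sink costs nothing */
    for (int j = N - 2; j >= 0; j--) {
        cost[j] = INF;
        for (int l = j + 1; l < N; l++)
            if (c[j][l] != INF && c[j][l] + cost[l] < cost[j]) {
                cost[j] = c[j][l] + cost[l];
                d[j] = l;                  /* best successor of j */
            }
    }
    return cost[0];                        /* min cost from source to sink */
}

The minimum-cost path itself is recovered by following d from vertex 0
(the source) until vertex N-1 (the sink) is reached.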
0/1 KNAPSACK PROBLEM
• In this problem, ‘n’ objects are given,
with each object ‘i’ having a weight ‘wi’, and a knapsack of
capacity ‘m’ is given.
• If an object ‘i’ is placed in the knapsack, a profit of pi·xi is earned.
• A solution to this knapsack problem can be obtained by making a
sequence of decisions on the variables x1, x2, x3, ..., xn.
• A decision on variable xi involves determining which of the values
0 or 1 is to be assigned to it.
• Following a decision on any variable xi,
we may be in one of 2 possible states:
1) the capacity remaining in the knapsack is m and no profit has been
earned (xi = 0), or
2) the capacity remaining is m-wi and a profit of pi has been earned (xi = 1).
• It is clear that the remaining decisions xi+1, xi+2, ..., xn must be
optimal with respect to the problem state resulting from the decision on xi.
• Hence the principle of optimality holds.
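A minimal tabular sketch of this formulation in C is given below; the
names knapsack, dp, w, p and the sample data are illustrative
assumptions. dp[i][c] records the best profit obtainable from the
first i objects with remaining capacity c, which is exactly the state
described above.

#include <stdio.h>

/* dp[i][c] = best profit using the first i objects with capacity c */
int knapsack(int n, int m, const int w[], const int p[])
{
    int dp[n + 1][m + 1];
    for (int c = 0; c <= m; c++)
        dp[0][c] = 0;                            /* no objects: no profit */
    for (int i = 1; i <= n; i++)
        for (int c = 0; c <= m; c++) {
            dp[i][c] = dp[i - 1][c];             /* decision x_i = 0 */
            if (w[i - 1] <= c &&
                dp[i - 1][c - w[i - 1]] + p[i - 1] > dp[i][c])
                dp[i][c] = dp[i - 1][c - w[i - 1]] + p[i - 1];  /* x_i = 1 */
        }
    return dp[n][m];
}

int main(void)
{
    int w[] = {2, 3, 4}, p[] = {1, 2, 5};
    printf("%d\n", knapsack(3, 6, w, p));   /* prints 6: objects 1 and 3 */
    return 0;
}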
Set method
Longest common subsequence
• A subsequence of a string S is a set of characters that
appear in left-to-right order, but not necessarily
consecutively.
• A common subsequence of two strings is a
subsequence that appears in both strings.
• A longest common subsequence is a common
subsequence of maximal length.
String 1: a b c d e f g h i j
String 2: c d g i

• For the string ACTTGCG:
• ACT, ATTC, T, ACTTGC are all
subsequences.
• TTA is not a subsequence.
• If S1 and S2 are the two given sequences, then Z is a
common subsequence of S1 and S2 if Z is a subsequence of
both S1 and S2. Furthermore, Z must be a strictly increasing
sequence of the indices of both S1 and S2.
• If S1 = {B, C, D, A, A, C, D},
• then {A, D, B} cannot be a subsequence of S1, as the order
of the elements is not the same (i.e. not a strictly increasing
sequence of indices).
• S1 = {B, C, D, A, A, C, D}
• S2 = {A, C, D, B, A, C}
• Then common subsequences are {B, C}, {C, D, A, C}, {D, A,
C}, {A, A, C}, {A, C}, {C, D};
among these subsequences, {C, D, A, C} is the longest common subsequence.
• S1 = abaaba
• S2 = babbab
• Here the longest common subsequences have length 4 (baab, baba).
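The standard DP for LCS fills a table L where L[i][j] is the LCS
length of the first i characters of S1 and the first j characters of
S2. Below is a minimal C sketch; the name lcs_length is an
illustrative assumption.

#include <stdio.h>
#include <string.h>

/* L[i][j] = length of an LCS of the first i chars of s1
   and the first j chars of s2 */
int lcs_length(const char *s1, const char *s2)
{
    int n = strlen(s1), m = strlen(s2);
    int L[n + 1][m + 1];
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;                      /* empty prefix */
            else if (s1[i - 1] == s2[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;    /* last characters match */
            else                                  /* drop one character */
                L[i][j] = L[i - 1][j] > L[i][j - 1]
                        ? L[i - 1][j] : L[i][j - 1];
        }
    return L[n][m];
}

int main(void)
{
    printf("%d\n", lcs_length("abaaba", "babbab"));  /* prints 4, e.g. baab */
    return 0;
}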
Floyd-Warshall Algorithm (All-Pairs
Shortest Path)
• Create a matrix A0 of dimension n*n, where n
is the number of vertices.
• The rows and the columns are indexed
as i and j respectively.
• i and j are the vertices of the graph.
• Each cell A[i][j] is filled with the distance from
the ith vertex to the jth vertex.
• If there is no edge from the ith vertex to the jth vertex,
the cell is left as infinity.
• Now, create a matrix A1 using matrix A0.
• The elements in the first column and the first row
are left as they are.
• The remaining cells are filled in the following way:
• Let k be the intermediate vertex on the shortest
path from source to destination.
• In this step, k is the first vertex. A[i][j] is filled
with A[i][k] + A[k][j] if A[i][j] > A[i][k] + A[k][j].
• That is, if the direct distance from the source
to the destination is greater than the path
through vertex k, then the cell is filled
with A[i][k] + A[k][j].
• Repeating the step with k as the second vertex gives A2, and so
on; after the nth repetition, An holds the shortest distances
between all pairs of vertices.
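The whole procedure is just a triple loop. Here is a minimal C sketch
of the description above; the names floyd_warshall, V, INF and the
sample graph are illustrative assumptions. The successive values of k
produce the matrices A1, A2, ..., An in place.

#include <stdio.h>

#define V   4
#define INF 1000000   /* stands in for "no edge" / infinity */

/* After iteration k, A[i][j] is the shortest i -> j distance using
   only intermediate vertices 0..k. */
void floyd_warshall(int A[V][V])
{
    for (int k = 0; k < V; k++)           /* allow k as an intermediate */
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (A[i][k] + A[k][j] < A[i][j])
                    A[i][j] = A[i][k] + A[k][j];
}

int main(void)
{
    int A[V][V] = {
        {0,   3,   INF, 7},
        {8,   0,   2,   INF},
        {5,   INF, 0,   1},
        {2,   INF, INF, 0}
    };
    floyd_warshall(A);
    printf("A[0][2] = %d\n", A[0][2]);    /* 5, via vertex 1 (3 + 2) */
    return 0;
}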
