Adsa U4,1
DYNAMIC PROGRAMMING
Dynamic Programming is one of the most powerful design techniques for solving
optimization problems. Divide-and-conquer algorithms partition a problem into disjoint
subproblems, solve the subproblems recursively, and then combine their solutions to solve
the original problem.
Dynamic Programming is used when the subproblems are not independent, i.e.
when subproblems share common subsubproblems. In this case, divide and conquer may do more
work than necessary, because it solves the same subproblems multiple times.
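To see this repeated work concretely, consider the following minimal sketch (the Fibonacci numbers are used purely as a stand-in problem, and the call counter is ours, not part of these notes):

def fib_naive(n, counter):
    """Plain divide-and-conquer recursion: the same subproblems
    are solved over and over again."""
    counter[n] = counter.get(n, 0) + 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

counter = {}
fib_naive(20, counter)
print(counter[5])   # subproblem 5 is recomputed 987 times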
Dynamic Programming solves each subproblem just once and stores the result in a
table, so that the answer can simply be retrieved whenever the subproblem arises again.
Dynamic Programming is typically a bottom-up approach: we solve all possible small
problems first and then combine their solutions to obtain solutions to bigger problems.
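A minimal bottom-up sketch, using the same stand-in problem as above: solve the smallest subproblems first and combine them into larger ones.

def fib_bottom_up(n):
    """Fill a table from the smallest subproblems upward; each entry
    is obtained by combining two already-solved smaller subproblems."""
    if n < 2:
        return n
    table = [0] * (n + 1)   # table[i] holds the answer to subproblem i
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(20))    # 6765, with each subproblem solved exactly once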
Dynamic Programming is a paradigm of algorithm design in which an optimization
problem is solved by combining solutions to subproblems while appealing to
the "principle of optimality".
Characteristics of Dynamic Programming:
Dynamic Programming works when a problem has the following two features:
o Optimal substructure: a problem exhibits optimal substructure if an optimal
solution to the problem contains optimal solutions to its subproblems.
o Overlapping subproblems: a problem has overlapping subproblems when a recursive
algorithm would visit the same subproblems repeatedly.
If a problem has optimal substructure, then we can recursively define an optimal
solution. If a problem has overlapping subproblems, then we can improve on a recursive
implementation by computing each subproblem only once.
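The top-down variant of this idea keeps the recursive structure but adds a memo table, so that each subproblem is computed only once. A sketch, again on the Fibonacci stand-in; the memo argument is ours:

def fib_memo(n, memo=None):
    """Recursive solution with memoization: a subproblem is computed
    on its first visit and retrieved from the table on every revisit."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    memo[n] = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(30))   # 832040, computed with only 31 distinct subproblems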
If a problem doesn't have optimal substructure, there is no basis for defining a
recursive algorithm that finds an optimal solution. If a problem doesn't have overlapping
subproblems, we have nothing to gain by using dynamic programming.
If the space of subproblems is small enough (i.e. polynomial in the size of the input),
dynamic programming can be much more efficient than naive recursion.
MATRIX CHAIN MULTIPLICATION
Given a chain A1, A2, ..., An of n matrices, the matrix-chain multiplication problem
asks for a parenthesization of the product A1 A2 ... An that minimizes the total number
of scalar multiplications. Checking every parenthesization by brute force is hopeless,
because the number of parenthesizations grows exponentially with n. It can also be
noticed that there exist only O(n²) different subchains Ai..j, so a dynamic-programming
algorithm that solves each subchain once avoids the exponential growth.
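As a quick check of this counting claim, the sketch below (the code and names are ours, not from the notes) compares P(n), the number of full parenthesizations of a chain of n matrices, given by P(1) = 1 and P(n) = sum over splits k of P(k) * P(n - k), with the n(n+1)/2 = O(n²) distinct subchains Ai..j:

def num_parenthesizations(n):
    """P(n): the number of ways to fully parenthesize a chain of
    n matrices; it grows exponentially in n."""
    P = [0] * (n + 1)
    P[1] = 1
    for size in range(2, n + 1):
        P[size] = sum(P[k] * P[size - k] for k in range(1, size))
    return P[n]

for n in (4, 10, 20):
    subchains = n * (n + 1) // 2          # distinct subproblems Ai..j
    print(n, num_parenthesizations(n), subchains)
    # prints: 4 5 10 / 10 4862 55 / 20 1767263190 210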
Step 1: Structure of an optimal parenthesization: Our first step in the dynamic-programming
paradigm is to characterize the optimal substructure and then use it to construct an optimal
solution to the problem from optimal solutions to subproblems.
Let Ai..j, where i ≤ j, denote the matrix that results from evaluating the product Ai
Ai+1 ... Aj.
If i < j, then any parenthesization of the product Ai Ai+1 ... Aj must split the
product between Ak and Ak+1 for some integer k in the range i ≤ k < j. That is, for some
value of k, we first compute the matrices Ai..k and Ak+1..j and then multiply them
together to produce the final product Ai..j. The cost of this parenthesization is the cost
of computing Ai..k, plus the cost of computing Ak+1..j, plus the cost of multiplying them
together.
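A hypothetical enumerator (the name is ours) may make this split structure concrete: every parenthesization of Ai..j arises from some split point k with i ≤ k < j, with the left and right halves parenthesized recursively.

def parenthesizations(i, j):
    """All parenthesizations of the product Ai Ai+1 ... Aj."""
    if i == j:
        return ["A%d" % i]
    results = []
    for k in range(i, j):                     # split between Ak and Ak+1
        for left in parenthesizations(i, k):
            for right in parenthesizations(k + 1, j):
                results.append("(%s %s)" % (left, right))
    return results

for p in parenthesizations(1, 4):
    print(p)          # prints the 5 ways to parenthesize A1 A2 A3 A4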
Step 2: A recursive solution: Let m[i, j] be the minimum number of scalar
multiplications needed to compute the matrix Ai..j.
If i = j, the chain consists of just one matrix, Ai..i = Ai, so no scalar multiplications
are necessary to compute the product. Thus m[i, i] = 0 for i = 1, 2, ..., n.
If i < j, suppose that an optimal parenthesization splits the product between Ak
and Ak+1, where i ≤ k < j. Then m[i, j] equals the minimum cost of computing the
subproducts Ai..k and Ak+1..j, plus the cost of multiplying them together. Since each
matrix Ai has dimensions pi-1 x pi, the subproduct Ai..k has dimensions pi-1 x pk and
Ak+1..j has dimensions pk x pj, so multiplying them together takes pi-1 pk pj scalar
multiplications. We obtain
m[i, j] = m[i, k] + m[k + 1, j] + pi-1 pk pj
This equation assumes that we know the optimal value of k, which we do not; there are
only j - i candidates, however, so we can check them all and keep the best, giving the
full recurrence
m[i, j] = 0, if i = j
m[i, j] = min over i ≤ k < j of ( m[i, k] + m[k + 1, j] + pi-1 pk pj ), if i < j
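A bottom-up Python sketch of this recurrence (the function name and the example dimensions are illustrative assumptions, not from the notes):

def matrix_chain_order(p):
    """Matrix Ai has dimensions p[i-1] x p[i], so a chain of n matrices
    is described by a list p of n + 1 dimensions. Returns the table m,
    where m[i][j] is the minimum number of scalar multiplications needed
    to compute Ai..j, and the table s of optimal split points."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # try every split Ai..k, Ak+1..j
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

# Example with made-up dimensions: A1 is 10x30, A2 is 30x5, A3 is 5x60.
m, s = matrix_chain_order([10, 30, 5, 60])
print(m[1][3])   # 4500, achieved by ((A1 A2) A3); (A1 (A2 A3)) costs 27000

Solving the subchains in order of increasing length guarantees that m[i][k] and m[k+1][j] are already available when m[i][j] is computed, which is exactly the bottom-up strategy described at the start of these notes.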