DAA Lecture CSE
Introduction
Dynamic Programming (DP) is one of the most powerful design techniques for solving optimization
problems. It was invented by the mathematician Richard Bellman in the 1950s. DP is closely
related to the divide-and-conquer technique, in which the problem is divided into smaller sub-problems and
each sub-problem is solved recursively. DP differs from divide and conquer in that, instead of
solving sub-problems recursively, it solves each sub-problem only once and stores its solution
in a table. The solution to the main problem is then obtained from the solutions of these sub-
problems.
The steps of the Dynamic Programming technique are:
Dividing the problem into sub-problems: The main problem is divided into smaller sub-problems,
and the solution of the main problem is expressed in terms of the solutions of these smaller
sub-problems.
Storing the sub-solutions in a table: The solution of each sub-problem is stored in a table so
that it can be looked up whenever it is required again.
Bottom-up computation: The DP technique starts with the smallest problem instances,
builds up the solutions of larger and larger sub-instances, and finally obtains the solution of the
original problem instance, as the sketch after this list illustrates.
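As a small illustration of these three steps, the following is a minimal Python sketch (not part of the matrix-chain discussion that follows) that computes Fibonacci numbers bottom-up: the problem is divided into the sub-problems F(0), F(1), ……, F(n), each sub-solution is stored in a table, and the table is filled from the smallest instance upward. The function name fib_bottom_up and the choice of Fibonacci as the example are illustrative only.

def fib_bottom_up(n):
    # table[i] stores the solution of the sub-problem F(i)
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # each sub-problem is solved once, from the stored smaller sub-solutions
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))   # prints 55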
The strategy can be used when the process of obtaining a solution of a problem can be viewed as a
sequence of decisions. The problems of this type can be solved by taking an optimal sequence of
decisions. An optimal sequence of decisions is found by taking one decision at a time and never making
an erroneous decision. In Dynamic Programming, an optimal sequence of decisions is arrived at by using
the principle of optimality. The principle of optimality states that whatever be the initial state and
decision, the remaining decisions must constitute an optimal decision sequence with regard to the state
resulting from the first decision.
A fundamental difference between the greedy strategy and dynamic programming is that in the
greedy strategy only one decision sequence is generated, whereas in dynamic programming a
number of decision sequences may be generated. The dynamic programming technique guarantees an optimal solution
for a problem, whereas the greedy method gives no such guarantee.
Matrix Chain Multiplication
Consider, as an example, three matrices A1 , A2 and A3 of dimensions 10 × 100, 100 × 5 and 5 × 50 respectively. The product A1 A2 A3 can be computed in two different ways:
(i) First, multiplying A2 and A3 , then multiplying A1 with the resultant matrix, i.e. A1 (A2 A3 ).
(ii) First, multiplying A1 and A2 , and then multiplying the resultant matrix with A3 , i.e. (A1 A2 ) A3 .
The number of scalar multiplications required in case (i) is 100 * 5 * 50 + 10 * 100 * 50 = 25,000 + 50,000
= 75,000, whereas the number of scalar multiplications required in case (ii) is 10 * 100 * 5 + 10 * 5 * 50 = 5,000
+ 2,500 = 7,500. The order in which the matrices are multiplied therefore matters greatly.
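The counts above use the fact that multiplying a p × q matrix by a q × r matrix takes p x q x r scalar multiplications. As a quick check, a short Python sketch of the two orders (the dimensions are those of the example above):

cost = lambda p, q, r: p * q * r                 # scalar multiplications for (p x q) times (q x r)
print(cost(100, 5, 50) + cost(10, 100, 50))      # order A1 (A2 A3): 75000
print(cost(10, 100, 5) + cost(10, 5, 50))        # order (A1 A2) A3: 7500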
To find the best possible way to calculate the product, we could simply parenthesize the expression
in every possible fashion and count each time how many scalar multiplications are required. Thus the
matrix chain multiplication problem can be stated as “find the optimal parenthesization of a chain of
matrices to be multiplied such that the number of scalar multiplications is minimized”.
If the chain is split after the k-th matrix, then
cost(A1 …… An) = cost(A1 …… Ak) + cost(Ak+1 …… An) + cost of multiplying the two resultant matrices together.
Here, the cost represents the number of scalar multiplications. The sub-chain (A1 …… Ak) yields a matrix of dimension
P[0] x P[k] and the sub-chain (Ak+1 …… An) yields a matrix of dimension P[k] x P[n], where matrix Ai has dimension P[i-1] x P[i]. The number of scalar
multiplications required to multiply the two resultant matrices is therefore P[0] x P[k] x P[n].
Let m[i, j] be the minimum number of scalar multiplications required to multiply the matrix chain
(Ai ......... Aj). Then
(i) m[i, j] = 0, if i = j
(ii) m[i, j] = (minimum number of scalar multiplications required to multiply (Ai …… Ak)) + (minimum
number of scalar multiplications required to multiply (Ak+1 …… Aj)) + cost of
multiplying the two resultant matrices, i.e.
m[i, j] = m[i, k] + m[k+1, j] + P[i-1] x P[k] x P[j]
However, we do not know the value of k for which m[i, j] is minimum. Therefore, we have to try all j – i
possibilities:

m[i, j] = 0,                                                            if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + P[i-1] x P[k] x P[j] },   otherwise

Therefore, the minimum number of scalar multiplications required to multiply the n matrices A1 A2 …… An is m[1, n].
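This recurrence can also be implemented directly as a memoized (top-down) routine; the following is a minimal Python sketch of that idea (the name mcm_memo is mine), distinct from the bottom-up algorithm given next:

def mcm_memo(P):
    # minimum scalar multiplications for the chain A1 .. An, where Ai is P[i-1] x P[i]
    n = len(P) - 1
    memo = {}
    def m(i, j):
        if i == j:                       # a single matrix needs no multiplication
            return 0
        if (i, j) not in memo:
            # try every split point i <= k < j, exactly as in the recurrence
            memo[(i, j)] = min(m(i, k) + m(k + 1, j) + P[i - 1] * P[k] * P[j]
                               for k in range(i, j))
        return memo[(i, j)]
    return m(1, n)

print(mcm_memo([10, 100, 5, 50]))        # 7500 for the three-matrix example above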
The dynamic programming approach for matrix chain multiplication is presented in Algorithm 7.2.
Algorithm MATRIX-CHAIN-MULTIPLICATION (P)
// P is an array of length n+1, i.e. from P[0] to P[n]. It is assumed that the matrix Ai has the dimension P[i-1] × P[i].
1. n = length(P) - 1
2. for i = 1 to n do
3.     m[i, i] = 0;
4. for l = 2 to n do                  // l is the length of the matrix chain
5.     for i = 1 to n - l + 1 do
6.         j = i + (l - 1);
7.         m[i, j] = ∞;
8.         for k = i to j - 1 do
9.             q = m[i, k] + m[k+1, j] + P[i-1] x P[k] x P[j];
10.            if q < m[i, j] then
11.                m[i, j] = q;
12.                s[i, j] = k;
13. return m and s.
Now let us discuss the procedure and pseudocode of matrix chain multiplication. Suppose we
are given a chain of n matrices A1 , A2 , ……, An , where the dimension of matrix Ai is P[i-1] × P[i].
The input to the algorithm is the sequence P[0 .. n] = {P[0], P[1], ……, P[n]}. The
algorithm first computes m[i, i] = 0 for i = 1, 2, ……, n in lines 2–3. Then the algorithm computes m[i, j],
starting with j – i = 1 in the first pass and ending with j – i = n – 1 in the last pass. In lines 4–12, the value
of m[i, j] is calculated for j – i = 1 up to j – i = n – 1. Each calculation of m[i, j] requires the
values m[i, k] and m[k+1, j] for i ≤ k < j, which have already been computed in previous
passes.
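For concreteness, the pseudocode can be rendered in Python as follows; this is a sketch that mirrors the pseudocode line by line (float('inf') plays the role of ∞, and the tables are indexed from 1 to match the notation):

def matrix_chain_order(P):
    # bottom-up computation of the tables m and s for the chain A1 .. An
    n = len(P) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]    # m[i][j]: minimum scalar multiplications
    s = [[0] * (n + 1) for _ in range(n + 1)]    # s[i][j]: best split point k
    for l in range(2, n + 1):                    # l is the length of the chain
        for i in range(1, n - l + 2):
            j = i + (l - 1)
            m[i][j] = float('inf')
            for k in range(i, j):                # try every split i <= k < j
                q = m[i][k] + m[k + 1][j] + P[i - 1] * P[k] * P[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s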
To find the optimal placement of parentheses for the matrix chain Ai , Ai+1 , ……, Aj , we look up
the value of k, i ≤ k < j, for which m[i, j] is minimum (this value is stored in s[i, j]). The matrix chain is then divided into (Ai …… Ak )
and (Ak+1 …… Aj ).
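Applying this splitting rule recursively to the table s yields the full parenthesization; a small helper along the following lines (the name print_parens is illustrative) does this:

def print_parens(s, i, j):
    # return the optimal parenthesization of Ai .. Aj as a string, using the split table s
    if i == j:
        return "A" + str(i)
    k = s[i][j]                                  # best place to split Ai .. Aj
    return "(" + print_parens(s, i, k) + " " + print_parens(s, k + 1, j) + ")"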
Consider, for example, a chain of five matrices A1 , A2 , ……, A5 with the dimension sequence P = {5, 10, 3, 12, 5, 50}, i.e. A1 is 5 × 10, A2 is 10 × 3, A3 is 3 × 12, A4 is 12 × 5 and A5 is 5 × 50.
The solution can be obtained by using a bottom-up approach: first we calculate mii
for 1 ≤ i ≤ 5, and then mij is calculated for j – i = 1 to j – i = 4. We can fill the table shown in Fig. 7.4 to find the
solution.
Fig. 7.4 Table to store the partial solutions of the matrix chain multiplication problem
The value of mii for 1 ≤ i ≤ 5 is 0, which means that the elements in the first row of the table can be assigned 0.
Then
For j – i = 1
m12 = P0 P1 P2 = 5 x 10 x 3 = 150
m23 = P1 P2 P3 = 10 x 3 x 12 = 360
m34 = P2 P3 P4 = 3 x 12 x 5 = 180
m45 = P3 P4 P5 = 12 x 5 x 50 = 3000
For j – i = 2
m13 = min{m11 + m23 + P0 P1 P3 , m12 + m33 + P0 P2 P3 } = min{0 + 360 + 5 x 10 x 12, 150 + 0 + 5 x 3 x 12} = min{960, 330} = 330
m24 = min{m22 + m34 + P1 P2 P4 , m23 + m44 + P1 P3 P4 } = min{0 + 180 + 10 x 3 x 5, 360 + 0 + 10 x 12 x 5} = min{330, 960} = 330
m35 = min{m33 + m45 + P2 P3 P5 , m34 + m55 + P2 P4 P5 } = min{0 + 3000 + 3 x 12 x 50, 180 + 0 + 3 x 5 x 50} = min{4800, 930} = 930
For j – i = 3
m14 = min{m11 + m24 + P0 P1 P4 , m12 + m34 + P0 P2 P4 , m13 + m44 + P0 P3 P4 } = min{580, 405, 630} = 405
m25 = min{m22 + m35 + P1 P2 P5 , m23 + m45 + P1 P3 P5 , m24 + m55 + P1 P4 P5 } = min{2430, 9360, 2830} = 2430
For j - i = 4
m15 = min{m11 + m25 + P0 P1 P5 , m12 +m35 + P0 P2 P5 , m13 + m45 +P0 P3 P5 , m14 +m55 +P0 P4 P5 }
    = min{0 + 2430 + 5 x 10 x 50, 150 + 930 + 5 x 3 x 50, 330 + 3000 + 5 x 12 x 50, 405 + 0 + 5 x 5 x 50}
    = min{4930, 1830, 6330, 1655} = 1655
To find the optimal parenthesization of A1 ........ A5 , we note that m15 is
minimum for k = 4. So the chain is split into (A1 ….A4 ) (A5 ). Similarly, (A1 ... A4 ) is split into (A1 A2 ) (A3
A4 ) because m14 is minimum for k = 2. No further splitting is required, as the sub-chains (A1 A2 ) and (A3 A4 )
each contain only two matrices. So the optimal parenthesization of A1 ...... A5 is ( (A1 A2 ) (A3 A4 ) ) (A5 ).
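Running the matrix_chain_order and print_parens sketches given earlier on this instance reproduces the same result (the dimension sequence is the one used in the example above):

P = [5, 10, 3, 12, 5, 50]            # A1: 5x10, A2: 10x3, A3: 3x12, A4: 12x5, A5: 5x50
m, s = matrix_chain_order(P)
print(m[1][5])                       # 1655 scalar multiplications
print(print_parens(s, 1, 5))         # (((A1 A2) (A3 A4)) A5)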
In contrast, if the recurrence for m[i, j] is evaluated by a plain recursive algorithm that does not store the sub-solutions, the running time T(n) for a chain of n matrices satisfies

T(n) ≥ 1,                                                            if n = 1
T(n) ≥ 1 + Σ (k = 1 to n - 1) [ T(k) + T(n - k) + 1 ],               if n > 1

which can be shown to be at least 2^(n-1), i.e. exponential in n. The dynamic programming algorithm computes each m[i, j] only once and runs in O(n^3) time.