Daa Lecture CSE

Dynamic Programming

Introduction
Dynamic Programming (DP) is one of the most powerful design techniques for solving optimization
problems. It was invented by the mathematician Richard Bellman in the 1950s. DP is closely
related to the divide-and-conquer technique, where the problem is divided into smaller sub-problems and
each sub-problem is solved recursively. DP differs from divide and conquer in that, instead of
solving sub-problems recursively, it solves each sub-problem only once and stores the solution
in a table. The solution to the main problem is then obtained by combining the solutions of these sub-
problems.
The steps of Dynamic Programming technique are:
 Dividing the problem into sub-problems: The main problem is divided into smaller sub-
problems. The solution of the main problem is expressed in terms of the solution for the smaller
sub-problems.
 Storing the sub-problem solutions in a table: The solution of each sub-problem is stored in a table so
that it can be referred to whenever required.
 Bottom-up computation: The DP technique starts with the smallest problem instance,
develops solutions to sub-instances of increasing size, and finally obtains the solution of the
original problem instance.
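As a minimal sketch of these three steps on a toy problem (Fibonacci numbers, an illustrative example not taken from the text): the problem is divided into sub-problems fib(0)...fib(n), each solution is stored in a table, and the table is filled bottom-up.

```python
def fib(n):
    """Bottom-up DP: sub-problems fib(0)..fib(n), each solved once and stored."""
    table = [0] * (n + 1)          # table of sub-problem solutions
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):      # smallest instances first
        table[i] = table[i - 1] + table[i - 2]   # reuse stored solutions
    return table[n]

print(fib(10))  # 55
```

Each table entry is computed exactly once, so the whole run takes O(n) additions, whereas the naive recursion recomputes the same sub-problems exponentially often.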
The strategy can be used when the process of obtaining a solution of a problem can be viewed as a
sequence of decisions. The problems of this type can be solved by taking an optimal sequence of
decisions. An optimal sequence of decisions is found by taking one decision at a time and never making
an erroneous decision. In dynamic programming, an optimal sequence of decisions is arrived at by using
the principle of optimality. The principle of optimality states that whatever the initial state and
decision, the remaining decisions must constitute an optimal decision sequence with regard to the state
resulting from the first decision.
A fundamental difference between the greedy strategy and dynamic programming is that in the
greedy strategy only one decision sequence is generated, whereas in dynamic programming a
number of them may be generated. Dynamic programming guarantees an optimal solution
for a problem, whereas the greedy method gives no such guarantee.

Matrix Chain Multiplication


Suppose we have three matrices A1, A2 and A3, of order (10 x 100), (100 x 5) and (5 x 50) respectively.
Then the three matrices can be multiplied in two ways.

(i) First multiply A2 and A3, then multiply A1 by the resultant matrix, i.e. A1(A2 A3).
(ii) First multiply A1 and A2, then multiply the resultant matrix by A3, i.e. (A1 A2) A3.

The number of scalar multiplications required in case (i) is 100 * 5 * 50 + 10 * 100 * 50 = 25,000 + 50,000
= 75,000, and the number of scalar multiplications required in case (ii) is 10 * 100 * 5 + 10 * 5 * 50 = 5,000
+ 2,500 = 7,500.

To find the best possible way to calculate the product, we could simply parenthesize the expression
in every possible fashion and count each time how many scalar multiplications are required. Thus the
matrix chain multiplication problem can be stated as “find the optimal parenthesisation of a chain of
matrices to be multiplied such that the number of scalar multiplications is minimized”.
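To make the brute-force idea concrete, a small sketch (illustrative code, not from the lecture) that recursively tries every split point and counts scalar multiplications for the three-matrix example above:

```python
def min_cost_brute(P, i, j):
    """Minimum scalar multiplications to multiply A_i..A_j,
    where A_k has dimensions P[k-1] x P[k].
    Tries every parenthesisation; exponential time."""
    if i == j:
        return 0                       # a single matrix needs no multiplication
    return min(min_cost_brute(P, i, k) + min_cost_brute(P, k + 1, j)
               + P[i - 1] * P[k] * P[j]     # cost of the final multiplication
               for k in range(i, j))

# A1 (10 x 100), A2 (100 x 5), A3 (5 x 50) from the example above
print(min_cost_brute([10, 100, 5, 50], 1, 3))  # 7500
```

This enumeration works but repeats the same sub-chains over and over, which is exactly the redundancy dynamic programming removes.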

Dynamic Programming Approach for Matrix Chain Multiplication


Let us consider a chain of n matrices A1, A2, ..., An, where matrix Ai has dimensions P[i-1] x P[i].
Let the parenthesisation at k result in two sub-chains A1...Ak and Ak+1...An. These two sub-chains must
each be optimal for A1...An to be optimal. The cost of the matrix chain (A1...An) is calculated as

cost(A1...An) = cost(A1...Ak) + cost(Ak+1...An) + cost of multiplying the two resultant matrices together.

Here, the cost represents the number of scalar multiplications. The sub-chain (A1...Ak) has dimension
P[0] x P[k] and the sub-chain (Ak+1...An) has dimension P[k] x P[n]. The number of scalar
multiplications required to multiply the two resultant matrices is P[0] x P[k] x P[n].
Let m[i, j] be the minimum number of scalar multiplications required to multiply the matrix chain
(Ai ......... Aj). Then

(i) m[i, j] = 0 if i = j
(ii) m[i, j] = minimum number of scalar multiplications required to multiply (Ai...Ak) + minimum
number of scalar multiplications required to multiply (Ak+1...Aj) + cost of
multiplying the two resultant matrices, i.e.

m[i, j] = m[i, k] + m[k+1, j] + P[i-1] * P[k] * P[j]

However, we don’t know the value of k, for which m[i, j] is minimum. Therefore, we have to try all j – i
possibilities.

 0 if i  j
m i, j  
min m[i, k]  m[k, j]  P[i 1] P[k] P[ j] Otherwise
ik  j

Therefore, the minimum number of scalar multiplications required to multiply the n matrices A1 A2 ... An is

m[1, n] = min over 1 ≤ k < n of { m[1, k] + m[k+1, n] + P[0] * P[k] * P[n] }
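This recurrence can also be evaluated top-down with memoisation, computing each m[i, j] once and caching it (a sketch under the same notation; the bottom-up version is the one given as Algorithm 7.2):

```python
from functools import lru_cache

def matrix_chain_memo(P):
    """Top-down evaluation of the matrix-chain recurrence.
    P[0..n] holds the dimensions: A_i is P[i-1] x P[i]."""
    n = len(P) - 1                      # number of matrices

    @lru_cache(maxsize=None)            # cache plays the role of the DP table
    def m(i, j):
        if i == j:                      # a single matrix: zero multiplications
            return 0
        return min(m(i, k) + m(k + 1, j) + P[i - 1] * P[k] * P[j]
                   for k in range(i, j))

    return m(1, n)

print(matrix_chain_memo([10, 100, 5, 50]))  # 7500
```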

The dynamic programming approach for matrix chain multiplication is presented in Algorithm 7.2.

Algorithm MATRIX-CHAIN-MULTIPLICATION (P)

// P is an array of length n+1, i.e. from P[0] to P[n]. It is assumed that matrix Ai has
// dimension P[i-1] x P[i].

for (i = 1; i <= n; i++)
    m[i, i] = 0;
for (l = 2; l <= n; l++) {
    for (i = 1; i <= n - (l - 1); i++) {
        j = i + (l - 1);
        m[i, j] = ∞;
        for (k = i; k <= j - 1; k++) {
            q = m[i, k] + m[k+1, j] + P[i-1] * P[k] * P[j];
            if (q < m[i, j]) {
                m[i, j] = q;
                s[i, j] = k;
            }
        }
    }
}
return m and s;

Algorithm 7.2 Matrix Chain multiplication algorithm.
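A direct Python translation of Algorithm 7.2 might look as follows (tables are indexed from 1 to match the pseudocode, with row and column 0 unused):

```python
def matrix_chain_order(P):
    """Bottom-up matrix-chain order. P[0..n] holds the dimensions;
    returns (m, s): m[i][j] = min scalar multiplications for A_i..A_j,
    s[i][j] = split point k achieving that minimum."""
    n = len(P) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length l
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):             # try every split point
                q = m[i][k] + m[k + 1][j] + P[i - 1] * P[k] * P[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3])  # 7500
```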

Now let us discuss the procedure and pseudocode of matrix chain multiplication. Suppose we
are given n matrices in the chain, i.e. A1, A2, ..., An, and the dimension of matrix Ai is P[i-1] x P[i].
The input to the algorithm is the sequence P[0..n] = {P[0], P[1], ..., P[n]}. The
algorithm first computes m[i, i] = 0 for i = 1, 2, ..., n. Then it computes m[i, j] for
j - i = 1 in the first step up to j - i = n - 1 in the last step. At each step of the calculation of m[i, j],
the values m[i, k] and m[k+1, j] for i ≤ k < j are required, and these have already been calculated in the
previous steps.

To find the optimal placement of parentheses for the matrix chain Ai, Ai+1, ..., Aj, we find
the value of k with i ≤ k < j for which m[i, j] is minimum. Then the matrix chain is divided into (Ai...Ak)
and (Ak+1...Aj).

Let us consider matrices A1, A2, ..., A5 to illustrate the MATRIX-CHAIN-MULTIPLICATION algorithm. The matrix
chain order is P = {P0, P1, P2, P3, P4, P5} = {5, 10, 3, 12, 5, 50}. The objective is to find the minimum number
of scalar multiplications required to multiply the 5 matrices and also to find the optimal sequence of
multiplications.

The solution can be obtained using a bottom-up approach: first we calculate mii
for 1 ≤ i ≤ 5, then mij is calculated for j - i = 1 up to j - i = 4. We can fill the table shown in Fig. 7.4 to find the
solution.

Fig. 7.4 Table to store the partial solutions of the matrix chain multiplication problem

The values mii for 1 ≤ i ≤ 5 are all 0, which means the elements in the first row of the table are assigned 0.
Then

For j – i = 1
m12 = P0 P1 P2 = 5 x 10 x 3 = 150

m23 = P1 P2 P3 = 10 x 3 x 12 = 360

m34 = P2 P3 P4 = 3 x 12 x 5 = 180

m45 = P3 P4 P5 = 12 x 5 x 50 = 3000

For j – i = 2

m13 = min {m11 + m23 + P0 P1 P3 , m12 + m33 + P0 P2 P3 }

= min {0 + 360 + 5 * 10 * 12, 150 + 0 + 5*3*12}

= min {360 + 600, 150 + 180} = min {960, 330} = 330

m24 = min {m22 + m34 + P1 P2 P4 , m23 + m44 + P1 P3 P4 }

= min {0 + 180 + 10*3*5, 360 + 0 +10*12*5}

= min {180 + 150, 360 + 600} = min {330, 960} = 330

m35 = min {m33 + m45 + P2 P3 P5 , m34 + m55 + P2 P4 P5 }

= min {0 + 3000 + 3*12*50, 180 + 0 + 3*5*50}

= min {3000 + 1800, 180 + 750} = min {4800, 930} = 930

For j – i = 3

m14 = min {m11 + m24 + P0 P1 P4 , m12 + m34 + P0 P2 P4 , m13 +m44 +P0 P3 P4 }

= min {0 + 330 + 5*10*5, 150 + 180 + 5*3*5, 330+0+5*12*5}

= min {330 + 250, 150 + 180 + 75, 330 +300}

= min {580, 405, 630} = 405

m25 = min {m22 + m35 + P1 P2 P5 , m23 + m45 + P1 P3 P5 , m24 +m55 +P1 P4 P5 }

= min {0 + 930 +10*3*50, 360+3000+10*12*50, 330+0+10*5*50}

= min {930 + 1500, 360 +3000+6000, 330+2500}

= min {2430, 9360, 2830} = 2430

For j - i = 4

m15 = min{m11 + m25 + P0 P1 P5 , m12 +m35 + P0 P2 P5 , m13 + m45 +P0 P3 P5 , m14 +m55 +P0 P4 P5 }

= min{0+2430+5*10*50, 150+930+5*3*50, 330+3000+5*12*50,

405+0+5*5*50}

= min {2430+2500, 150+930+750, 330+3000+3000, 405+1250}

= min {4930, 1830, 6330, 1655} = 1655


Hence, the minimum number of scalar multiplications required to multiply the given five matrices is
1655.

To find the optimal parenthesization of A1...A5, we observe that m15 is minimum for k = 4.
So the chain is split into (A1...A4)(A5). Similarly, (A1...A4) is split into (A1 A2)(A3 A4),
because m14 is minimum for k = 2. No further splitting is required, as the sub-chains (A1 A2) and (A3 A4)
each contain only two matrices. So the optimal parenthesization of A1...A5 is ((A1 A2)(A3 A4))(A5).
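This splitting can be mechanised: using the s table produced by the bottom-up routine (restated here so the sketch is self-contained), the optimal parenthesisation is rebuilt recursively.

```python
def matrix_chain_order(P):
    """Bottom-up matrix-chain order (Algorithm 7.2); returns cost table m and split table s."""
    n = len(P) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + P[i - 1] * P[k] * P[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def parenthesise(s, i, j):
    """Rebuild the optimal parenthesisation of A_i..A_j from the split table s."""
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesise(s, i, k) + " " + parenthesise(s, k + 1, j) + ")"

m, s = matrix_chain_order([5, 10, 3, 12, 5, 50])
print(m[1][5], parenthesise(s, 1, 5))  # 1655 (((A1 A2) (A3 A4)) A5)
```

The printed split agrees with the hand calculation above: k = 4 at the top level, then k = 2 inside (A1...A4).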

Time complexity of multiplying a chain of n matrices


Let T(n) be the time complexity of multiplying a chain of n matrices by the naive recursive
method, which tries every split point. Then

T(n) = 1                                              if n = 1
T(n) = 1 + Σ (k = 1 to n-1) [ T(k) + T(n-k) + 1 ]     if n > 1

For n > 1,

T(n) = 1 + (n - 1) + Σ (k = 1 to n-1) [ T(k) + T(n-k) ]
     = n + 2 [ T(1) + T(2) + ... + T(n-1) ]                ... (7.1)

Replacing n by n - 1, we get

T(n-1) = (n - 1) + 2 [ T(1) + T(2) + ... + T(n-2) ]        ... (7.2)

Subtracting equation (7.2) from equation (7.1), we have

T(n) - T(n-1) = n - (n - 1) + 2 T(n-1)
T(n) = 1 + 3 T(n-1)
     = 1 + 3 [ 1 + 3 T(n-2) ] = 1 + 3 + 3^2 T(n-2)
     = ...
     = 1 + 3 + 3^2 + ... + 3^(n-2) + 3^(n-1) T(1)
     = 1 + 3 + 3^2 + ... + 3^(n-1)
     = (3^n - 1) / 2
     = O(3^n)

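The closed form can be sanity-checked by evaluating the recurrence T(n) = 1 + 3 T(n-1), T(1) = 1 directly and comparing it with (3^n - 1)/2 (a small check of the derivation above):

```python
def T(n):
    """T(1) = 1; T(n) = 1 + 3*T(n-1) for n > 1."""
    return 1 if n == 1 else 1 + 3 * T(n - 1)

# the recurrence and the closed form agree for small n
for n in range(1, 8):
    assert T(n) == (3 ** n - 1) // 2
print(T(7))  # 1093
```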