
DYNAMIC PROGRAMMING:

Matrix Chain Multiplication & Optimal Triangulation

CSC 252
Algorithms
Haniya Aslam
Presentation Overview

• Understanding dynamic programming
• Dynamic programming vs. recursion and divide & conquer
• Matrix chain multiplication
• Optimal polygon triangulation
• Acknowledgements
Dynamic Programming

• A generalization of iteration and recursion.
• “Dynamic Programming is recursion’s somewhat neglected cousin. … (It) is the basis of comparison and alignment routines.”
• Bottom-up design:
  - Start at the bottom.
  - Solve small sub-problems.
  - Store their solutions.
  - Reuse previous results to solve larger sub-problems.
Dynamic Programming cont.

Fibonacci:
function Fibonacci(n: integer): integer;
var
  i: integer;
  sum, interm1, interm2: integer;
begin
  interm1 := 0;        { F0 }
  interm2 := 1;        { F1 }
  sum := n;            { covers the base cases n = 0 and n = 1 }
  for i := 2 to n do
  begin
    sum := interm1 + interm2;   { Fi = Fi-2 + Fi-1 }
    interm1 := interm2;
    interm2 := sum
  end; { for }
  Fibonacci := sum
end; { Fibonacci }
Dynamic Programming vs. Recursion
and Divide & Conquer

• In a recursive program, a problem of size n is solved by first solving a sub-problem of size n-1.
• In a divide & conquer program, a problem of size n is solved by first solving a sub-problem of size k and another of size n-k, where 1 < k < n.
• In dynamic programming, a problem of size n is solved by first solving all sub-problems of all sizes k, where k < n. (For contrast, a plain recursive Fibonacci is sketched below.)
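
For contrast with the bottom-up Fibonacci on the previous slide, here is a minimal sketch of the purely recursive version in the same Pascal-style pseudocode (the name FibonacciRec is illustrative, not from the slides). It re-solves the same sub-problems again and again and therefore takes exponential time:

function FibonacciRec(n: integer): integer;
begin
  { base cases: F0 = 0, F1 = 1 }
  if n <= 1 then
    FibonacciRec := n
  else
    { each call re-derives both predecessors from scratch }
    FibonacciRec := FibonacciRec(n - 1) + FibonacciRec(n - 2)
end; { FibonacciRec }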
Matrix Chain Multiplication

• Given: a chain of matrices {A1, A2, …, An}.
• Goal: fully parenthesize the product A1A2…An so that it can be evaluated with as few scalar multiplications as possible.
• Once all pairs of matrices are parenthesized, they can be multiplied by using the standard algorithm as a sub-routine.
• A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. [Note: since matrix multiplication is associative, all parenthesizations yield the same product.]
Matrix Chain Multiplication
cont.

• For example, if the chain of matrices is {A, B, C, D}, the product ABCD can be fully parenthesized in 5 distinct ways:
  (A ( B ( C D ))),
  (A (( B C ) D )),
  ((A B ) ( C D )),
  ((A ( B C )) D),
  ((( A B ) C ) D ).
• The way the chain is parenthesized can have a dramatic impact on the cost of evaluating the product.
Matrix Chain Multiplication
Optimal Parenthesization

• Example: A is 30×35, B is 35×15, C is 15×5.
  Find the cheapest way to compute A*B*C (a quick arithmetic check follows below):
  A*(B*C) = 35*15*5 + 30*35*5 = 2,625 + 5,250 = 7,875
  (A*B)*C = 30*35*15 + 30*15*5 = 15,750 + 2,250 = 18,000
• How to optimize:
  - Brute force – examine every possible way to parenthesize: there are Ω(4^n / n^(3/2)) of them.
  - Dynamic programming – O(n^3) time and Θ(n^2) space.
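
As a quick arithmetic check of the two totals above (assuming the 30×35, 35×15 and 15×5 dimensions from the example; the program name is illustrative):

program CheckChainCosts;
begin
  { A*(B*C): first B*C (35x15 by 15x5), then A by the resulting 35x5 matrix }
  writeln('A*(B*C) = ', 35*15*5 + 30*35*5);    { 2625 + 5250 = 7875 }
  { (A*B)*C: first A*B (30x35 by 35x15), then the resulting 30x15 matrix by C }
  writeln('(A*B)*C = ', 30*35*15 + 30*15*5);   { 15750 + 2250 = 18000 }
end.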
Matrix Chain Multiplication
Structure of Optimal Parenthesization

• For n matrices, let Ai..j denote the result of evaluating AiAi+1…Aj.
• An optimal parenthesization of A1A2…An splits the product between Ak and Ak+1 for some k, where 1 ≤ k < n.
• Example: k = 4 gives (A1A2A3A4)(A5A6).
  The total cost of A1..6 is the cost of computing A1..4, plus the cost of computing A5..6, plus the cost of multiplying these two resulting matrices together.
Matrix Chain Multiplication
Overlapping Sub-Problems

• Exploiting the overlapping sub-problems reduces the running time considerably:
  - Create a table M of minimum costs.
  - Create a table S that records the split index k for each optimal sub-problem.
• Fill table M in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length:
  - Compute the costs for chains of length 1 (these are 0).
  - Compute the costs for chains of length 2: A1..2, A2..3, A3..4, …, An-1..n.
  - Continue until the cost for the full chain of length n, A1..n, has been computed.
• Each level relies on the solutions for shorter sub-chains (a sketch of this bottom-up fill follows below).
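
A sketch of this bottom-up fill, written in the same Pascal style as the Fibonacci example. It assumes the matrix dimensions are supplied in an array p[0..n] so that matrix Ai is p[i-1] x p[i]; the names MatrixChainOrder, DimArray, CostTable and the bound MaxN are illustrative, not from the slides.

const
  MaxN = 50;                                   { assumed upper bound on n }
type
  DimArray  = array[0..MaxN] of longint;       { p[0..n]; Ai is p[i-1] x p[i] }
  CostTable = array[1..MaxN, 1..MaxN] of longint;

procedure MatrixChainOrder(n: integer; var p: DimArray;
                           var M: CostTable; var S: CostTable);
var
  len, i, j, k: integer;
  q: longint;
begin
  for i := 1 to n do
    M[i, i] := 0;                              { chains of length 1 cost nothing }
  for len := 2 to n do                         { chains of increasing length }
    for i := 1 to n - len + 1 do
    begin
      j := i + len - 1;
      { start with the split after Ai, then try the remaining split points }
      M[i, j] := M[i, i] + M[i + 1, j] + p[i - 1] * p[i] * p[j];
      S[i, j] := i;
      for k := i + 1 to j - 1 do
      begin
        q := M[i, k] + M[k + 1, j] + p[i - 1] * p[k] * p[j];
        if q < M[i, j] then
        begin
          M[i, j] := q;                        { cheaper cost found }
          S[i, j] := k                         { remember the best split index }
        end
      end
    end
end; { MatrixChainOrder }

M[1, n] then holds the minimum number of scalar multiplications, and the S table can be read back from S[1, n] to recover the optimal parenthesization.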


Optimal Polygon Triangulation

• A triangulation of a polygon is a set T of chords of the polygon that divide the polygon into disjoint triangles. In a triangulation, no chords intersect (except at end-points) and the set T of chords is maximal: every chord not in T intersects some chord in T.
• The sides of the triangles produced by the triangulation are either chords in the triangulation or sides of the polygon.
• Every triangulation of an n-vertex convex polygon has n-3 chords and divides the polygon into n-2 triangles.
Optimal Polygon Triangulation
cont.

• In the optimal polygon triangulation problem, we are given a convex polygon P = {v0, v1, v2, …, vn-1} and a weight function w defined on triangles formed by sides and chords of P.
• The problem is to find a triangulation that minimizes the sum of the weights of the triangles in the triangulation.
• This problem, like matrix chain multiplication, relies on parenthesization.
• A full parenthesization corresponds to a full binary tree, also called a parse tree.
Parenthesization in Triangulation

[Figure: parse tree for the triangulation of a polygon {v0, …, v6}.]
The internal nodes of the parse tree are the chords of the triangulation, plus the side v0v6, which is the root.
Triangulation and Matrix Chain
Multiplication

• Since a fully parenthesized product of n matrices corresponds to a parse tree with n leaves, it also corresponds to a triangulation of an (n+1)-vertex polygon.
• Each matrix Ai in the product A1A2…An corresponds to the side vi-1vi of an (n+1)-vertex polygon.
• Matrix chain multiplication is in fact a special case of the optimal triangulation problem.
Triangulation and Matrix Chain
Multiplication cont.

• Given a matrix chain product A1A2…An, we define an (n+1)-vertex convex polygon P = {v0, v1, …, vn}. If matrix Ai has dimensions pi-1 x pi, for i = 1, 2, …, n, then the weight function for the triangulation is defined as
  w(Δvivjvk) = pi pj pk
• An optimal triangulation of P with respect to this weight function gives the parse tree for an optimal parenthesization of A1A2…An.
Substructure of an optimal
triangulation

• Given: an optimal triangulation T of an (n+1)-vertex polygon P that includes the triangle Δv0vkvn, where 1 ≤ k ≤ n-1.
• The weight of T is the sum of the weight of Δv0vkvn and the weights of the triangles in the triangulations of the two sub-polygons {v0, v1, …, vk} and {vk, vk+1, …, vn}.
• The triangulations of the sub-polygons determined by T must therefore be optimal, since a lesser-weight triangulation of either sub-polygon would contradict the minimality of the weight of T.
A Recursive Solution

• Given, for 1 ≤ i ≤ j ≤ n: t[i, j] is the weight of an optimal triangulation of the polygon {vi-1, vi, …, vj}. [Compare m[i, j], the minimum cost of computing the matrix-chain sub-product AiAi+1…Aj.]
• For a degenerate 2-vertex polygon: t[i, i] = 0 for i = 1, 2, …, n.
• For i < j, minimize over all vertices vk, k = i, i+1, …, j-1, the weight of Δvi-1vkvj plus the weights of the optimal triangulations of the polygons {vi-1, vi, …, vk} and {vk, vk+1, …, vj}.
• The recursive solution is therefore (a sketch of the corresponding bottom-up fill follows below):
  t[i, i] = 0
  t[i, j] = min over i ≤ k ≤ j-1 of { t[i, k] + t[k+1, j] + w(Δvi-1vkvj) },  for i < j
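
A sketch of the corresponding bottom-up fill, reusing the DimArray and CostTable declarations from the matrix-chain sketch above. The weight routine below implements the matrix-chain special case w(Δvi-1vkvj) = pi-1 pk pj; for the general problem it would be replaced by whatever weight function w is given. The names TriangleWeight and OptimalTriangulation are illustrative, not from the slides.

{ weight of triangle (v[a], v[b], v[c]) when vertex v[i] carries dimension p[i] }
function TriangleWeight(a, b, c: integer; var p: DimArray): longint;
begin
  TriangleWeight := p[a] * p[b] * p[c]
end;

procedure OptimalTriangulation(n: integer; var p: DimArray;
                               var T: CostTable; var S: CostTable);
var
  len, i, j, k: integer;
  q: longint;
begin
  for i := 1 to n do
    T[i, i] := 0;                         { degenerate 2-vertex polygon }
  for len := 2 to n do                    { sub-polygons with more and more sides }
    for i := 1 to n - len + 1 do
    begin
      j := i + len - 1;
      { start with k = i, then try the remaining choices of vertex v[k] }
      T[i, j] := T[i, i] + T[i + 1, j] + TriangleWeight(i - 1, i, j, p);
      S[i, j] := i;
      for k := i + 1 to j - 1 do
      begin
        q := T[i, k] + T[k + 1, j] + TriangleWeight(i - 1, k, j, p);
        if q < T[i, j] then
        begin
          T[i, j] := q;
          S[i, j] := k                    { best vertex v[k] for sub-polygon (i, j) }
        end
      end
    end
end; { OptimalTriangulation }

T[1, n] is then the weight of an optimal triangulation of {v0, …, vn}; with this particular weight function it equals the matrix-chain cost M[1, n].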
Acknowledgments

• http://www-cse.uta.edu/~holder/courses/cse5311/lectures/18/node18.html
• http://www.middlebury.edu/~dickerso/ccsc/ugcg.html
• http://www.eecs.harvard.edu/~nr/cs152/readings/dynamic.html
• http://www.catalase.com/dprog.htm
• http://mail.informs.org/classes/dynamic/node1.html
• http://cse.hanyang.ac.kr/~jmchoi/c…6-2/algorithm/classnote/node6.html
• http://people.bu.edu/rlynch/cs566/sld002.htm
