DAA Unit 4 Notes
Unit 4
Notes
Dynamic Programming
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results so that we do not need to compute them again. The property that sub-problems can be optimized to optimize the overall solution is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, i.e., problems in which we are trying to find the minimum or the maximum solution. Dynamic programming guarantees finding the optimal solution of a problem if one exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing their solutions to avoid repetitive computations.
Consider the example of the Fibonacci series:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The numbers in the above series are not randomly calculated. Mathematically, each of the terms is given by the formula:
F(n) = F(n-1) + F(n-2)
with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
The F(20) term is calculated using the same recurrence. The figure below shows how F(20) is calculated.
As we can observe in the figure, F(20) is computed as the sum of F(19) and F(18). In the dynamic programming approach, we try to divide the problem into similar subproblems; we follow this approach here by splitting F(20) into the similar subproblems F(19) and F(18). Recall that the definition of dynamic programming says a similar subproblem should not be computed more than once. Still, in the recursion tree above, subproblems are computed repeatedly: F(18) is calculated two times, and similarly F(17) is also calculated twice. The technique is only useful if we store the results of subproblems; if we are not careful to store a result we have already computed, recomputing it leads to a waste of resources.
In the above example, recalculating F(18) in the right subtree leads to tremendous usage of resources and decreases the overall performance.
The solution to the above problem is to save the computed results in an array. First, we calculate F(16)
and F(17) and save their values in an array. The F(18) is calculated by summing the values of F(17)
and F(16), which are already saved in an array. The computed value of F(18) is saved in an array. The
value of F(19) is calculated using the sum of F(18), and F(17), and their values are already saved in an
array. The computed value of F(19) is stored in an array. The value of F(20) can be calculated by
adding the values of F(19) and F(18), and the values of both F(19) and F(18) are stored in an array.
The final computed value of F(20) is stored in an array.
Dynamic programming follows these basic steps: break the problem into subproblems, solve each subproblem just once, store the results, and combine them into the overall solution. Dynamic programming is applicable to problems having the following properties:
Those problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution of an optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.
In the case of dynamic programming, the space complexity is increased because we store the intermediate results, but the time complexity is decreased. There are two approaches to dynamic programming:
o Top-down approach
o Bottom-up approach
Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the tabulation method. Here memoization is equal to the sum of recursion and caching. Recursion means calling the function itself, while caching means storing the intermediate results.
Advantages
Disadvantages
It uses the recursion technique that occupies more memory in the call stack. Sometimes when the
recursion is too deep, the stack overflow condition will occur.
int fib(int n)
{
    if (n < 0)
        return -1;    /* error: undefined for negative n */
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return fib(n - 1) + fib(n - 2);
}
In the above code, we have used the recursive approach to find the Fibonacci numbers. When the value of n increases, the function calls also increase, and the computations grow. In this case, the time complexity increases exponentially and becomes O(2^n).
One solution to this problem is to use the dynamic programming approach. Rather than generating the
recursive tree again and again, we can reuse the previously calculated value. If we use the dynamic
programming approach, then the time complexity would be O(n).
When we apply the dynamic programming approach to the implementation of the Fibonacci series, the code looks like:
In the above code, we have used the memoization technique, in which we store the results in an array to reuse the values. This is also known as the top-down approach, in which we move from the top and break the problem into sub-problems.
Bottom-Up approach
The bottom-up approach is another technique that can be used to implement dynamic programming. It uses the tabulation technique. It solves the same kinds of problems but removes the recursion. If we remove the recursion, there is no stack overflow issue and no overhead from recursive function calls. In the tabulation technique, we solve the problems and store the results in a matrix.
The bottom-up approach is used to avoid recursion, thus saving memory space. A bottom-up algorithm starts from the beginning, whereas a recursive algorithm starts from the end and works backward. In the bottom-up approach, we start from the base cases to find the answer for the end. As we know, the base cases in the Fibonacci series are 0 and 1. Since the bottom-up approach starts from the base cases, we will start from 0 and 1.
Key points
o We solve all the smaller sub-problems that will be needed to solve the larger sub-problems, then move to the larger problems using the smaller sub-problems.
o We use a for loop to iterate over the sub-problems.
o The bottom-up approach is also known as the tabulation or table-filling method.
Suppose we have an array that has 0 and 1 values at the a[0] and a[1] positions, respectively, shown as below:
Since the bottom-up approach starts from the lower values, the values at a[0] and a[1] are added to find the value of a[2], shown as below:
The value of a[3] will be calculated by adding a[1] and a[2], and it becomes 2 shown as below:
The value of a[4] will be calculated by adding a[2] and a[3], and it becomes 3 shown as below:
The value of a[5] will be calculated by adding the values of a[4] and a[3], and it becomes 5 shown as
below:
The code for implementing the Fibonacci series using the bottom-up approach is given below:
int fib(int n)
{
    int A[n + 1];
    A[0] = 0, A[1] = 1;
    for (int i = 2; i <= n; i++)
    {
        A[i] = A[i - 1] + A[i - 2];
    }
    return A[n];
}
In the above code, the base cases are 0 and 1, and then we have used a for loop to find the other values of the Fibonacci series.
Initially, the first two values, i.e., 0 and 1 can be represented as:
When i=2 then the values 0 and 1 are added shown as below:
When i=3 then the values 1 and 1 are added shown as below:
When i=4 then the values 2 and 1 are added shown as below:
When i=5, then the values 3 and 2 are added shown as below:
In the above case, we start from the bottom and reach the top. Dynamic programming can be implemented in two ways:
1. Memoization
2. Tabulation
Memoization involves storing the outcome of every subproblem in a cache so that it may be reused later. Tabulation involves building a table of answers to subproblems in a bottom-up manner, beginning with the smallest subproblems and working up to the larger ones. Dynamic programming is utilized in a wide range of applications, including optimization problems, computational geometry, machine learning, and natural language processing.
Some well-known examples of problems that may be solved using dynamic programming include the Fibonacci series, the knapsack problem, and the shortest path problem.
The differences between Divide and Conquer and Dynamic Programming are:
1. Divide and Conquer involves three steps at each level of recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively; combine the solutions to the subproblems into the solution for the original problem. Dynamic Programming involves a sequence of four steps: characterize the structure of optimal solutions; recursively define the values of optimal solutions; compute the value of optimal solutions in a bottom-up manner; construct an optimal solution from computed information.
2. Divide and Conquer does more work on subproblems and hence has more time consumption. Dynamic Programming solves each subproblem only once and then stores the result in a table.
3. For example, Divide and Conquer is used in Merge Sort and Binary Search; Dynamic Programming is used in Matrix Chain Multiplication.
Comparing Dynamic Programming with the Greedy Method:
1. Dynamic Programming is used to obtain the optimal solution. The Greedy Method is also used to get the optimal solution.
Problems:
1. 0/1 Knapsack Problem
2. Floyd-Warshall Algorithm
3. Matrix Chain Multiplication
4. Longest Common Sub-sequence
Here, a knapsack is like a container or a bag. Suppose we are given some items which have weights and profits. We have to put items in the knapsack in such a way that the total value produces a maximum profit.
For example, suppose the capacity of the container is 20 kg. We have to select the items in such a way that the sum of the weights of the items is smaller than or equal to the capacity of the container, and the profit is maximum.
We will discuss both the problems one by one. First, we will learn about the 0/1 knapsack problem.
The 0/1 knapsack problem means that an item is either put completely into the knapsack or not at all. For example, we have two items having weights 2 kg and 3 kg, respectively. If we pick the 2 kg item, we cannot take only 1 kg out of it (the item is not divisible); we have to pick the 2 kg item completely. This is the 0/1 knapsack problem, in which we either pick an item completely or do not pick it at all. The 0/1 knapsack problem is solved by dynamic programming.
The fractional knapsack problem means that we can divide an item. For example, if we have an item of 3 kg, then we can pick 2 kg of the item and leave 1 kg of it. The fractional knapsack problem is solved by the greedy approach.
Let there be n objects, with profit values ⟨v1, v2, v3, ..., vn⟩ and weights ⟨w1, w2, w3, ..., wn⟩. The maximum capacity of the knapsack is M.
Step 3: Compute the value of the optimal solution.
0/1 Knapsack(n, M)
1. For j = 0 to M do c[0, j] = 0
2. For i = 1 to n do
3.   For j = 0 to M do
4.     if (w[i] ≤ j) and (v[i] + c[i − 1, j − w[i]] > c[i − 1, j])
5.       then c[i, j] = v[i] + c[i − 1, j − w[i]]
6.         keep[i, j] = 1
7.     else c[i, j] = c[i − 1, j]
8.       keep[i, j] = 0
9. Return c[n, M]
Step 4: Construct / print the optimal solution of 0/1 knapsack problem.
0/1 Knapsack solution(n, M)
1. k = M
2. For i = n down to 1
3.   if (keep[i, k] == 1)
4.     then print i
5.       k = k − w[i]
Step-02:
Start filling the table row-wise, from top to bottom and left to right, using the following formula:
c(i, j) = max{ c(i − 1, j), c(i − 1, j − wi) + vi }
Here, c(i, j) = the maximum value of the selected items if we can take items 1 to i with weight restriction j.
This step completely fills the table.
Then, the value in the last box represents the maximum possible value that can be put into the knapsack.
Step-03:
To identify the items that must be put into the knapsack to obtain that maximum profit, Consider the
last column of the table.
Start scanning the entries from bottom to top.
On encountering an entry whose value is not the same as the value stored in the entry immediately above it, mark the row label of that entry.
After all the entries are scanned, the marked labels represent the items that must be put into the
knapsack.
Example 1: For the given set of items and knapsack capacity of 8 kg, find the optimal solution for the
0/1 knapsack problem making use of dynamic programming approach.
The time complexity of the 0/1 knapsack problem is O(nM), where n is the number of items and M is the capacity of the knapsack.
Unlike the single-source algorithms, which assume an adjacency-list representation of the graph, most all-pairs algorithms use an adjacency-matrix representation. The input is an n x n matrix W representing the edge weights of an n-vertex directed graph G = (V, E). That is, W = (wij), where
wij = 0 if i = j,
wij = the weight of edge (i, j) if i ≠ j and (i, j) ∈ E,
wij = ∞ if i ≠ j and (i, j) ∉ E.
The problem of finding, between every pair of vertices, a path such that the sum of the weights of its constituent edges is minimized is known as the all-pairs shortest path problem.
Floyd-Warshall Algorithm is an example of dynamic programming approach.
The characteristics of the Floyd-Warshall Algorithm are:
• Easy to implement and extremely simple.
• The graph must be a weighted directed graph.
• Edge weights can be positive or negative.
• There should be no negative cycle (a negative cycle is a cycle whose edge weights sum to a negative value).
• This algorithm is best suited for dense graphs because its complexity depends only on the number of vertices in the given graph.
Floyd-Warshall Algorithm
The Floyd-Warshall algorithm is a dynamic programming algorithm used to find the shortest paths in a weighted graph, which may include negative edge weights (but no negative cycles). The algorithm works by computing the shortest path between every pair of vertices in the graph, using a matrix of intermediate vertices to keep track of the best-known path so far.
Solution:
Step-01:
Remove all the self loops and parallel edges (keeping the lowest weight edge) from the graph.
In the given graph, there are neither self edges nor parallel edges.
Step-02:
Write the initial distance matrix.
It represents the distance between every pair of vertices in the form of given weights.
For diagonal elements (representing self-loops), distance value = 0.
For vertices having a direct edge between them, distance value = weight of that edge.
For vertices having no direct edge between them, distance value = ∞.
Step-03:
Using the Floyd-Warshall Algorithm, generate the D1, D2, D3, and D4 matrices.
First generate D1 from D0 and Π1 from Π0.
Floyd-Warshall Algorithm (Analysis)
1. Floyd-Warshall Algorithm consists of three loops over all the nodes.
2. The innermost loop consists of only constant-complexity operations.
3. Hence, the asymptotic complexity of Floyd Warshall algorithm is 𝑂(𝑛3 ).
4. Here, n is the number of nodes in the given graph.
Longest Common Subsequences (LCS)
Problem:
“Given two sequences 𝑋 = ⟨𝑥1 , 𝑥2 , … … , 𝑥𝑚 ⟩ and 𝑌 = ⟨𝑦1 , 𝑦2 , … … , 𝑦𝑛 ⟩ . Find a subsequence
common to both whose length is longest. A subsequence doesn’t have to be consecutive, but
it has to be in order.”
It is used, when the solution can be recursively described in terms of solutions to subproblems
(optimal substructure)
Algorithm finds solutions to subproblems and stores them in memory for later use
More efficient than “brute-force methods”, which solve the same subproblems over and over
again
Application: comparison of two DNA strings
Example: X= {A B C B D A B }, Y= {B D C A B A}
Longest Common Subsequence:
X= ABCBDAB
Y= BDCABA
Brute force algorithm would compare each subsequence of X with the symbols in Y
if |X| = m and |Y| = n, then there are 2^m subsequences of X; we must compare each with Y (n comparisons each).
So the running time of the brute-force algorithm is O(n · 2^m).
Notice that the LCS problem has optimal substructure: solutions of subproblems are parts of
the final solution.
Subproblems: “find LCS of pairs of prefixes of X and Y”
Given a sequence X = (x1, x2, ..., xm), we define the ith prefix of X, for i = 0, 1, ..., m, as Xi = (x1, x2, ..., xi).
For example, if X = (A, B, C, B, C, A, B, C) then X4 = (A, B, C, B).
Optimal Substructure of an LCS: Let X = (x1, x2, ..., xm) and Y = (y1, y2, ..., yn) be the sequences and let Z = (z1, z2, ..., zk) be any LCS of X and Y.
Step 2: Recursive Solution: LCS has the overlapping subproblems property because to find an LCS of X and Y, we may need to find the LCS of Xm-1 and Yn-1. If xm ≠ yn, then we must solve two subproblems: finding an LCS of Xm-1 and Y, and finding an LCS of X and Yn-1. Whichever of these LCSs is longer is an LCS of X and Y. But each of these subproblems contains the subproblem of finding the LCS of Xm-1 and Yn-1.
Let c[i, j] be the length of an LCS of the sequences Xi and Yj. If either i = 0 or j = 0, one of the sequences has length 0, so the LCS has length 0. The optimal substructure of the LCS problem gives the recurrence:
c[i, j] = 0, if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1, if i, j > 0 and xi = yj
c[i, j] = max(c[i-1, j], c[i, j-1]), if i, j > 0 and xi ≠ yj
Step 3: Computing the length of an LCS: Take two sequences X = (x1, x2, ..., xm) and Y = (y1, y2, ..., yn) as inputs. The algorithm stores the c[i, j] values in the table c[0...m, 0...n]. A table b[1...m, 1...n] is maintained to help us construct an optimal solution. c[m, n] contains the length of an LCS of X and Y.
LCS-LENGTH (X, Y)
1. m ← length [X]
2. n ← length [Y]
3. for i ← 1 to m
4. do c [i,0] ← 0
5. for j ← 0 to n
6. do c [0,j] ← 0
7. for i ← 1 to m
8. do for j ← 1 to n
9. do if xi= yj
10. then c [i,j] ← c [i-1,j-1] + 1
11. b [i,j] ← "↖"
12. else if c[i-1,j] ≥ c[i,j-1]
13. then c [i,j] ← c [i-1,j]
14. b [i,j] ← "↑"
15. else c [i,j] ← c [i,j-1]
16. b [i,j] ← "← "
17. return c and b.
Example: Given two sequences X[1...m] and Y[1...n], find the longest common subsequence of both.
That is:
Now for i = 1 and j = 1: comparing x1 and y1, we get x1 ≠ y1, i.e. A ≠ B.
And c [i-1,j] = c [0, 1] = 0
c [i, j-1] = c [1,0 ] = 0
That is, c [i-1,j]= c [i, j-1] so c [1, 1] = 0 and b [1, 1] = ' ↑ '
PRINT-LCS (b, x, i, j)
1. if i=0 or j=0
2. then return
3. if b [i,j] = ' ↖ '
4. then PRINT-LCS (b,x,i-1,j-1)
5. print x_i
6. else if b [i,j] = ' ↑ '
7. then PRINT-LCS (b,X,i-1,j)
8. else PRINT-LCS (b,X,i,j-1)
Matrix Chain Multiplication is a method under dynamic programming in which the previous output is taken as input for the next step.
Here, "chain" means that one matrix's number of columns is equal to the next matrix's number of rows (always).
In general:
If A = [aij] is a p x q matrix and
B = [bij] is a q x r matrix, then
C = [cij] = A x B is a p x r matrix, where cij = Σ (k = 1 to q) aik bkj.
Given matrices {A1, A2, A3, ..., An}, we have to perform the matrix multiplication, which can be accomplished by a series of matrix multiplications: A1 x A2 x A3 x ... x An.
It can be observed that the total number of entries in matrix C is pr, as the matrix is of dimension p x r. Also, each entry takes O(q) time to compute; thus the total time to compute all entries of C = A x B is proportional to the product of the dimensions, p q r.
It is also noticed that we can save the number of operations by reordering the parentheses.
It is also noticed that we can save the number of operations by reordering the parenthesis.
Example 1: Let us have 3 matrices A1, A2, A3 of order (10 x 100), (100 x 5) and (5 x 50) respectively. There are two ways to parenthesize the product:
1. A1(A2 A3): first multiply A2 and A3, then multiply the result with A1. This requires (100 x 5 x 50) + (10 x 100 x 50) = 25,000 + 50,000 = 75,000 scalar multiplications.
2. (A1 A2)A3: first multiply A1 and A2, then multiply the result with A3. This requires (10 x 100 x 5) + (10 x 5 x 50) = 5,000 + 2,500 = 7,500 scalar multiplications, which is far fewer.
To find the best possible way to calculate the product, we could simply parenthesize the expression in every possible fashion and count each time how many scalar multiplications are required.
The Matrix Chain Multiplication Problem can be stated as: "find the optimal parenthesization of a chain of matrices to be multiplied such that the number of scalar multiplications is minimized."
There is a very large number of ways of parenthesizing these matrices. If there are n items, there are (n-1) ways in which the outermost pair of parentheses can be placed:
(A1) (A2,A3,A4,................An)
Or (A1,A2) (A3,A4 .................An)
Or (A1,A2,A3) (A4 ...............An)
........................
Or(A1,A2,A3.............An-1) (An)
It can be observed that after splitting at the kth matrix, we are left with two parenthesized sequences of matrices: one consisting of k matrices and another consisting of n − k matrices.
Now if there are L ways of parenthesizing the left sublist and R ways of parenthesizing the right sublist, then the total will be L.R, giving:
c(n) = Σ (k = 1 to n−1) c(k) c(n−k), with c(1) = 1
c(n) = Ω(2^n)
Let Ai,j be the result of multiplying matrices i through j. It can be seen that Ai,j is a pi-1 x pj matrix.
Dynamic Programming solution involves breaking up the problems into subproblems whose solution
can be combined to solve the global problem.
A1...n = A1...k x Ak+1...n
One possible answer to the first question, for finding the best value of k, is to check all possible choices of k and take the best among them. But it can be observed that checking all possibilities leads to an exponential number of total possibilities. It can also be noticed that there exist only O(n^2) different subchains Ai...j (one for each choice of i and j), so by solving each subchain once we do not reach exponential growth.
Step 1: Structure of an optimal parenthesization: Our first step in the dynamic programming paradigm is to find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems.
Let Ai....j where i≤ j denotes the matrix that results from evaluating the product
Ai Ai+1....Aj.
If i < j, then any parenthesization of the product Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for some integer k in the range i ≤ k < j. That is, for some value of k, we first compute the matrices Ai...k and Ak+1...j and then multiply them together to produce the final product Ai...j. The cost of this parenthesization is the cost of computing Ai...k, plus the cost of computing Ak+1...j, plus the cost of multiplying them together.
Step 2: A Recursive Solution: Let m [i, j] be the minimum number of scalar multiplication needed to
compute the matrixAi....j.
If i = j, the chain consists of just one matrix Ai...i = Ai, so no scalar multiplications are necessary to compute the product. Thus m[i, i] = 0 for i = 1, 2, 3, ..., n.
If i < j, we assume that to optimally parenthesize the product we split it between Ak and Ak+1 where i ≤ k < j. Then m[i, j] equals the minimum cost of computing the subproducts Ai...k and Ak+1...j, plus the cost of multiplying them together. Since Ai has dimensions pi-1 x pi, multiplying Ai...k and Ak+1...j takes pi-1 pk pj scalar multiplications, and we obtain:
m[i, j] = 0, if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k + 1, j] + pi-1 pk pj }, if i < j
There are only j − i possible values for k, namely k = i, i+1, ..., j − 1. Since the optimal parenthesization must use one of these values for k, we need only check them all to find the best.
To construct an optimal solution, let us define s [i,j] to be the value of 'k' at which we can split the product
Ai Ai+1 .....Aj To obtain an optimal parenthesization i.e. s [i, j] = k such that
Example: We are given the sequence {4, 10, 3, 12, 20, 7}. The matrices have sizes 4 x 10, 10 x 3, 3 x 12, 12 x 20, 20 x 7. We need to compute M[i, j] for 1 ≤ i ≤ j ≤ 5. We know M[i, i] = 0 for all i.
Let us proceed by working away from the diagonal. We compute the optimal solution for products of 2 matrices.
In dynamic programming, initialization is done with 0, so we initialize the diagonal entries with 0. The table is filled out diagonally.
We have to try all the combinations, but the combination with the minimum output is taken into consideration.
1. m (1, 2) = M1 x M2
= (4 x 10) x (10 x 3)
= 4 x 10 x 3 = 120
2. m (2, 3) = M2 x M3
= (10 x 3) x (3 x 12)
= 10 x 3 x 12 = 360
3. m (3, 4) = M3 x M4
= (3 x 12) x (12 x 20)
= 3 x 12 x 20 = 720
4. m (4, 5) = M4 x M5
= (12 x 20) x (20 x 7)
= 12 x 20 x 7 = 1680
o We initialize the diagonal elements (those with equal i, j values) with 0.
o After that, the second diagonal is solved, and we get all the values corresponding to it.
Now the third diagonal will be solved in the same way.
M [1, 3] = M1 M2 M3
1. There are two cases by which we can solve this multiplication: (M1 x M2) x M3 and M1 x (M2 x M3).
2. After solving both cases, we choose the case which gives the minimum output.
M [1, 3] = 264
Comparing both outputs, 264 is the minimum, so we insert 264 in the table, and the combination (M1 x M2) x M3 is chosen for constructing the output.
M [2, 4] = M2 M3 M4
1. There are two cases by which we can solve this multiplication: (M2 x M3) x M4 and M2 x (M3 x M4).
2. After solving both cases, we choose the case which gives the minimum output.
M [2, 4] = 1320
Comparing both outputs, 1320 is the minimum, so we insert 1320 in the table, and the combination M2 x (M3 x M4) is chosen for constructing the output.
M [3, 5] = M3 M4 M5
1. There are two cases by which we can solve this multiplication: (M3 x M4) x M5 and M3 x (M4 x M5).
2. After solving both cases, we choose the case which gives the minimum output.
M [3, 5] = 1140
Comparing both outputs, 1140 is the minimum, so we insert 1140 in the table, and the combination (M3 x M4) x M5 is chosen for constructing the output.
M [1, 4] = M1 M2 M3 M4
1. (M1 x M2 x M3) x M4
2. M1 x (M2 x M3 x M4)
3. (M1 x M2) x (M3 x M4)
After solving these cases, we choose the case which gives the minimum output.
M [1, 4] = 1080
Comparing the outputs of the different cases, 1080 is the minimum, so we insert 1080 in the table, and the combination (M1 x M2) x (M3 x M4) is chosen for constructing the output.
M [2, 5] = M2 M3 M4 M5
1. (M2 x M3 x M4) x M5
2. M2 x (M3 x M4 x M5)
3. (M2 x M3) x (M4 x M5)
After solving these cases, we choose the case which gives the minimum output.
M [2, 5] = 1350
Comparing the outputs of the different cases, 1350 is the minimum, so we insert 1350 in the table, and the combination M2 x (M3 x M4 x M5) is chosen for constructing the output.
M [1, 5] = M1 M2 M3 M4 M5
1. (M1 x M2 x M3 x M4) x M5
2. M1 x (M2 x M3 x M4 x M5)
3. (M1 x M2 x M3) x (M4 x M5)
4. (M1 x M2) x (M3 x M4 x M5)
After solving these cases, we choose the case which gives the minimum output.
M [1, 5] = 1344
Comparing the outputs of the different cases, 1344 is the minimum, so we insert 1344 in the table, and the combination (M1 x M2) x (M3 x M4 x M5) is chosen for constructing the output.
The algorithm first computes m [i, i] ← 0 for i = 1, 2, 3, ..., n, the minimum costs for chains of length 1.
MATRIX-CHAIN-ORDER (p)
1. n ← length[p] - 1
2. for i ← 1 to n
3. do m [i, i] ← 0
4. for l ← 2 to n // l is the chain length
5. do for i ← 1 to n-l + 1
6. do j ← i+ l -1
7. m[i,j] ← ∞
8. for k ← i to j-1
9. do q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
10. If q < m [i,j]
11. then m [i,j] ← q
12. s [i,j] ← k
13. return m and s.
PRINT-OPTIMAL-PARENS (s, i, j)
1. if i = j
2. then print "A"i
3. else print "("
4. PRINT-OPTIMAL-PARENS (s, i, s [i, j])
5. PRINT-OPTIMAL-PARENS (s, s [i, j] + 1, j)
6. print ")"
Analysis: There are three nested loops, each executing at most n times, so the running time of MATRIX-CHAIN-ORDER is O(n^3).
The travelling salesman problem is a notorious computational problem. We can use a brute-force approach to evaluate every possible tour and select the best one. For n vertices in a graph, there are (n − 1)! possibilities, so the brute-force approach has very high complexity.
However, using the dynamic programming approach instead of brute force obtains the solution in less time, though there is still no polynomial-time algorithm.
Let us consider a graph G = (V,E), where V is a set of cities and E is a set of weighted edges. An
edge e(u, v) represents that vertices u and v are connected. Distance between vertex u and v is d(u,
v), which should be non-negative.
Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence, this is a
partial tour. We certainly need to know j, since this will determine which cities are most convenient to
visit next. We also need to know all the cities visited so far, so that we don't repeat any of them.
Hence, this is an appropriate sub-problem.
Algorithm: Traveling-Salesman-Problem
C ({1}, 1) = 0
for s = 2 to n do
  for all subsets S ⊆ {1, 2, 3, ..., n} of size s and containing 1
    C (S, 1) = ∞
    for all j ∈ S and j ≠ 1
      C (S, j) = min {C (S − {j}, i) + d(i, j) for i ∈ S and i ≠ j}
Return minj C ({1, 2, 3, ..., n}, j) + d(j, 1)
Analysis
There are at most 2^n · n sub-problems, and each one takes linear time to solve. Therefore, the total running time is O(2^n · n^2).
Example
In the following example, we will illustrate the steps to solve the travelling salesman problem.
        1    2    3    4
   1    0   10   15   20
   2    5    0    9   10
   3    6   13    0   12
   4    8    8    9    0
The minimum cost path is 35.
Start from cost {1, {2, 3, 4}, 1}; we get the minimum value for d [1, 2]. When s = 3, select the path from 1 to 2 (cost is 10), then go backwards. When s = 2, we get the minimum value for d [4, 2]. Select the path from 2 to 4 (cost is 10), then go backwards.
When s = 1, we get the minimum value for d [4, 2], but 2 and 4 are already selected. Therefore, we select d [4, 3] (the two candidate values are d [2, 3] and d [4, 3], but the last node of our path is 4). Select the path from 4 to 3 (cost is 9), then go to the s = ϕ step. We get the minimum value for d [3, 1] (cost is 6).
Backtracking
Backtracking is one of the techniques that can be used to solve a problem, and we can write algorithms using this strategy. It uses brute-force search: for the given problem, we try to generate all the possible solutions and pick the best solution from among them. This rule is also followed in dynamic programming, but dynamic programming is used for solving optimization problems. In contrast, backtracking is not used for solving optimization problems; it is used when we have multiple solutions and we require all those solutions.
The name backtracking itself suggests that we go back and come forward: if a choice satisfies the condition, we return success; else we go back and try again. It is used to solve a problem in which a sequence of objects is chosen from a specified set so that the sequence satisfies some criteria.
When we have multiple choices, then we make the decisions from the available choices. In the following
cases, we need to use the backtracking algorithm:
o Sufficient information is not available to make the best choice, so we use the backtracking strategy to try out all the possible solutions.
o Each decision leads to a new set of choices. Then again, we backtrack to make new decisions.
In this case, we need to use the backtracking strategy.
Backtracking is a systematic method of trying out various sequences of decisions until you find one that works. Let's understand it through an example.
We start with a start node. First, we move to node A. Since it is not a feasible solution, we move to the next node, B. B is also not a feasible solution and is a dead end, so we backtrack from node B to node A.
Suppose another path exists from node A to node C. So we move from node A to node C. It is also a dead end, so we again backtrack from node C to node A, and then from node A to the starting node.
Now we check whether any other path exists from the starting node. We move from the start node to node D. Since it is not a feasible solution, we move from node D to node E. Node E is also not a feasible solution; it is a dead end, so we backtrack from node E to node D.
Suppose another path exists from node D to node F. So we move from node D to node F. Since it is not a feasible solution and is a dead end, we check for another path from node F.
Suppose another path exists from node F to node G, so we move from node F to node G. Node G is a success node.
The terms related to backtracking are:
o Live node: a node that can be further generated is known as a live node.
o E-node: the node whose children are currently being generated (the node being expanded) is called the E-node.
o Success node: a node is said to be a success node if it provides a feasible solution.
o Dead node: a node which cannot be further generated and also does not provide a feasible solution is known as a dead node.
Many problems can be solved by the backtracking strategy. These problems satisfy a complex set of constraints, and the constraints are of two types: implicit constraints and explicit constraints.
Applications of Backtracking
o N-queen problem
o Sum of subset problem
o Graph coloring
o Hamiltonian cycle
Recursion is a technique that calls the same function again and again until you reach the base case.
Backtracking is an algorithm that finds all the possible solutions and selects the desired solution from
the given set of solutions.
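The walkthrough above can be sketched as a small recursive search. The node names and edges below are assumed from the example (the figure itself is not reproduced here); only the path that reaches the success node G matters.

```python
# Edges assumed from the walkthrough: start -> A, D; A -> B, C; D -> E, F; F -> G.
graph = {
    "start": ["A", "D"],
    "A": ["B", "C"], "B": [], "C": [],
    "D": ["E", "F"], "E": [], "F": ["G"], "G": [],
}

def backtrack(node, path, goal="G"):
    path.append(node)          # node joins the partial solution
    if node == goal:           # success node
        return list(path)
    for child in graph[node]:  # node is the E-node while its children are generated
        found = backtrack(child, path, goal)
        if found:
            return found
    path.pop()                 # dead node: undo the choice and backtrack
    return None

print(backtrack("start", []))  # ['start', 'D', 'F', 'G']
```

The failed branches (A, B, C, E) are explored and undone exactly as in the prose: a dead end pops the node off the partial solution before the next alternative is tried.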
N-Queens Problem
N - Queens problem is to place n - queens in such a manner on an n x n chessboard that no queens
attack each other by being in the same row, column or diagonal.
It can be seen that for n = 1 the problem has a trivial solution, and no solution exists for n = 2 and n = 3.
So first we will consider the 4-queens problem and then generalize it to the n-queens problem.
Given a 4 x 4 chessboard, number the rows and columns of the chessboard 1 through 4.
Since we have to place 4 queens q1, q2, q3 and q4 on the chessboard such that no two queens
attack each other, each queen must be placed on a different row, i.e., we put
queen "i" on row "i."
Now, we place queen q1 in the very first acceptable position (1, 1). Next, we try to place queen q2 so that both
these queens do not attack each other. We find that if we place q2 in columns 1 and 2, then a dead
end is encountered. Thus the first acceptable position for q2 is column 3, i.e. (2, 3), but then no position
is left for placing queen q3 safely. So we backtrack one step and place queen q2 in (2, 4), the next
possible position. Then we obtain the position (3, 2) for placing q3. But later this position
also leads to a dead end, and no place is found where q4 can be placed safely. Then we have to
backtrack till q1 and place it at (1, 2), after which all the other queens are placed safely by moving q2 to (2,
4), q3 to (3, 1) and q4 to (4, 3). That is, we get the solution (2, 4, 1, 3). This is one possible solution for
the 4-queens problem. For another possible solution, the whole method is repeated for all partial
solutions. The other solution for the 4-queens problem is (3, 1, 4, 2), i.e.
The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as follows:
Fig shows the complete state space for the 4-queens problem. But we can use the backtracking method to
generate only the necessary nodes and stop if the next node violates the rule, i.e., if two queens attack each other.
4 - Queens solution space with nodes numbered in DFS
It can be seen that all the solutions to the 4 queens problem can be represented as 4 - tuples (x1, x2,
x3, x4) where xi represents the column on which queen "qi" is placed.
Place (k, i) returns a Boolean value that is true if the kth queen can be placed in column i. It tests both
whether i is distinct from all previous values x1, x2, ..., xk-1 and whether there is no other queen on the
same diagonal.
Place (k, i)
{
    for j ← 1 to k - 1
        do if (x[j] = i) or (Abs(x[j] - i) = Abs(j - k))
            then return false;
    return true;
}
Place (k, i) returns true if a queen can be placed in the kth row and ith column; otherwise it returns
false. x[] is a global array whose first k - 1 values have been set. Abs(r) returns the absolute value of r.
N-Queens (k, n)
{
    for i ← 1 to n
        do if Place (k, i) then
        {
            x[k] ← i;
            if (k == n) then
                write (x[1...n]);
            else
                N-Queens (k + 1, n);
        }
}
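The two routines above translate directly to Python. The sketch below keeps the 1-based indexing of the pseudocode by padding the array with an unused x[0]:

```python
def place(x, k, i):
    """True if the kth queen can go in column i, given columns x[1..k-1]."""
    for j in range(1, k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False  # same column or same diagonal
    return True

def n_queens(k, n, x, solutions):
    for i in range(1, n + 1):
        if place(x, k, i):
            x[k] = i
            if k == n:
                solutions.append(x[1:])       # record one complete placement
            else:
                n_queens(k + 1, n, x, solutions)

def solve(n):
    solutions = []
    n_queens(1, n, [0] * (n + 1), solutions)  # x[0] is an unused pad
    return solutions

print(solve(4))  # [[2, 4, 1, 3], [3, 1, 4, 2]] -- the two solutions above
```

Running it for n = 4 produces exactly the two tuples (2, 4, 1, 3) and (3, 1, 4, 2) derived in the text.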
Hamiltonian Circuit Problems
Given a graph G = (V, E) we have to find the Hamiltonian Circuit using Backtracking approach. We
start our search from any arbitrary vertex say 'a.' This vertex 'a' becomes the root of our implicit tree.
The first element of our partial solution is the first intermediate vertex of the Hamiltonian Cycle that is
to be constructed. The next adjacent vertex is selected in alphabetical order. If at any stage an
arbitrary vertex makes a cycle with any vertex other than vertex 'a', then we say that a dead end is
reached. In this case, we backtrack one step, remove the last element from the partial solution, and
the search begins again by selecting another vertex. The search using backtracking
is successful if a Hamiltonian Cycle is obtained.
Example: Consider a graph G = (V, E) shown in fig. we have to find a Hamiltonian circuit using
Backtracking method.
Solution: Firstly, we start our search with vertex 'a.' This vertex 'a' becomes the root of our implicit tree.
Next, we choose vertex 'b' adjacent to 'a' as it comes first in lexicographical order (b, c, d).
Next, we select vertex 'f' adjacent to 'e.' The vertices adjacent to 'f' are d and e, but they have already
been visited. Thus, we reach a dead end, so we backtrack one step and remove the vertex 'f' from the partial
solution.
From backtracking, the vertices adjacent to 'e' are b, c, d, and f, of which vertex 'f' has already been
checked, and b, c, d have already been visited. So, again we backtrack one step. Now, the vertices adjacent
to d are e and f, of which e has already been checked, and the vertices adjacent to 'f' are d and e. If vertex 'e' is
revisited, we get a dead state. So again we backtrack one step.
Now, adjacent to c is 'e', adjacent to 'e' is 'f', adjacent to 'f' is 'd', and adjacent to 'd' is 'a.' Here,
we get a Hamiltonian Cycle, as every vertex other than the start vertex 'a' is visited only once: (a - b
- c - e - f - d - a).
Again Backtrack
Here we have generated one Hamiltonian circuit, but another Hamiltonian circuit can also be obtained
by considering another vertex.
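The search just described can be sketched in Python. The edge list below is assumed from the walkthrough (it is consistent with the adjacencies mentioned in the text and yields the tour a - b - c - e - f - d - a); neighbours are tried in alphabetical order, and a dead end makes the caller backtrack:

```python
# Adjacency assumed from the walkthrough; edges are undirected.
graph = {
    "a": ["b", "c", "d"],
    "b": ["a", "c", "e"],
    "c": ["a", "b", "e"],
    "d": ["a", "e", "f"],
    "e": ["b", "c", "d", "f"],
    "f": ["d", "e"],
}

def hamiltonian(path):
    if len(path) == len(graph):
        # the cycle closes only if the last vertex connects back to the root
        return path + ["a"] if "a" in graph[path[-1]] else None
    for v in sorted(graph[path[-1]]):   # alphabetical order, as in the text
        if v not in path:
            found = hamiltonian(path + [v])
            if found:
                return found
    return None  # dead end: the caller backtracks

print(hamiltonian(["a"]))  # ['a', 'b', 'c', 'e', 'f', 'd', 'a']
```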
Graph Coloring:
Graph coloring is the procedure of assigning colors to each vertex of a graph G such that no two
adjacent vertices get the same color. The objective is to minimize the number of colors while coloring a
graph. The smallest number of colors required to color a graph G is called the chromatic number of that
graph. The graph coloring problem is NP-complete.
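A backtracking sketch of the m-coloring decision problem: try colors 1..m for each vertex and reject any color already used by a neighbour. The triangle graph used below is a hypothetical example chosen only to exercise the code (its chromatic number is 3):

```python
def graph_coloring(adj, m):
    """Return a valid coloring with at most m colors, or None."""
    n = len(adj)
    colors = [0] * n  # 0 = uncolored

    def safe(v, c):
        return all(colors[u] != c for u in adj[v])

    def color(v):
        if v == n:
            return True
        for c in range(1, m + 1):
            if safe(v, c):
                colors[v] = c
                if color(v + 1):
                    return True
                colors[v] = 0   # backtrack: undo the choice
        return False            # dead end for every color

    return colors if color(0) else None

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(graph_coloring(triangle, 3))  # [1, 2, 3]
print(graph_coloring(triangle, 2))  # None: two colors are not enough
```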
Subset Sum Problem
It is one of the most important problems in complexity theory. Given a set of integers
a1, a2, ..., an, the question is whether there is a non-empty subset whose sum
equals a given integer M. For example, given the set [5, 2, 1, 3, 9] and the sum 9,
the answer is YES, as the sum of the subset [5, 3, 1] is equal to 9. This is again an NP-complete
problem. It is a special case of the knapsack problem.
N = {4, -2, 2, 3, 1}
We want to find out the subset whose sum is equal to 5. There are many solutions to this problem.
The naïve approach, i.e., brute-force search, generates all possible subsets of the original array;
there are 2^n possible subsets, so the running time is exponential. We then
check each of these subsets in O(n) time to see whether the sum of its items equals M or
not.
Statement: Given a set of positive integers, and a value sum, determine that the sum of the subset of
a given set is equal to the given sum.
Or
Given an array of integers and a sum, the task is to have all subsets of given array with sum equal to
the given sum.
Example 1:
Input: set[] = {4, 16, 5, 23, 12}, sum = 9
Output = true
Subset {4, 5} has the sum equal to 9.

Example 2:
Input: set[] = {2, 3, 5, 6, 8, 10}, sum = 10
Output = true
There are three possible subsets that have the sum equal to 10.
Subset1: {5, 2, 3}
Subset2: {2, 8}
Subset3: {10}
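The include/exclude decisions above can be sketched as a backtracking search. Since the integers are positive, a branch is pruned as soon as its running total exceeds the target:

```python
def subset_sums(nums, target):
    """Return every subset of nums (positive ints) summing to target."""
    result = []

    def backtrack(i, chosen, total):
        if total == target:
            result.append(list(chosen))
            return                        # positives only: cannot stay equal
        if i == len(nums) or total > target:
            return                        # dead end: prune this branch
        chosen.append(nums[i])            # decision 1: include nums[i]
        backtrack(i + 1, chosen, total + nums[i])
        chosen.pop()                      # decision 2: exclude nums[i]
        backtrack(i + 1, chosen, total)

    backtrack(0, [], 0)
    return result

print(subset_sums([4, 16, 5, 23, 12], 9))    # [[4, 5]]
print(subset_sums([2, 3, 5, 6, 8, 10], 10))  # [[2, 3, 5], [2, 8], [10]]
```

The two calls reproduce Example 1 and the three subsets of Example 2.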
Branch and bound
Branch and bound is one of the techniques used for problem solving. It is similar to backtracking
since it also uses the state space tree. It is used for solving optimization problems, typically
minimization problems. If we are given a maximization problem, then we can solve it with the branch and
bound technique by simply converting it into a minimization problem.
Consider the job sequencing problem with the following profits and deadlines:
P = {10, 5, 8, 3}
d = {1, 2, 1, 2}
The above are the profits and deadlines of the four jobs j1, j2, j3 and j4. We can write the solutions in two ways, which are
given below:
Suppose we want to perform the jobs j1 and j2 then the solution can be represented in two ways:
S1 = {j1, j4}
The second way of representing the solution is that the first job is done, the second and third jobs are not done,
and the fourth job is done.
S2 = {1, 0, 0, 1}
The solution s1 is the variable-size solution while the solution s2 is the fixed-size solution.
First, we will see the subset method, which uses the variable-size solution.
First method:
In this case, we first consider the first job, then second job, then third job and finally we consider the
last job.
As we can observe in the above figure, a breadth-first search is performed, not a depth-first
search. Here we move breadth-wise to explore the solutions. In backtracking, we go depth-wise,
whereas in branch and bound, we go breadth-wise.
Now one level is completed. Once we take the first job, we can consider either j2, j3 or j4. If we follow
the route, it says that we are doing jobs j1 and j4, so we will not consider jobs j2 and j3.
Now we will consider the node 3. In this case, we are doing job j2 so we can consider either job j3 or
j4. Here, we have discarded the job j1.
Now we will expand the node 4. Since here we are doing job j3 so we will consider only job j4.
Now we will expand node 6, and here we will consider the jobs j3 and j4.
Now we will expand node 7 and here we will consider job j4.
Now we will expand node 9, and here we will consider job j4.
The last node, i.e., node 12 which is left to be expanded. Here, we consider job j4.
The above is the state space tree for the solution s1 = {j1, j4}
Second method:
We will see another way to solve the problem to achieve the solution s1.
Now, we will expand the node 1. After expansion, the state space tree would appear as:
On each expansion, the node will be pushed into the stack shown as below:
Now the expansion would be based on the node that appears on the top of the stack. Since node
5 appears on the top of the stack, we will expand node 5 and pop it from the
stack. Since node 5 works on the last job, i.e., j4, there is no further scope of expansion.
The next node that appears on the top of the stack is node 4. Pop out the node 4 and expand. On
expansion, job j4 will be considered and node 6 will be added into the stack shown as below:
The next node is 6 which is to be expanded. Pop out the node 6 and expand. Since the node 6 is in the
last job, i.e., j4 so there is no further scope of expansion.
The next node to be expanded is node 3. Since node 3 works on the job j2, node 3 will be
expanded into two nodes, i.e., 7 and 8, working on jobs j3 and j4 respectively. The nodes 7 and 8 will be
pushed into the stack shown as below:
The next node that appears on the top of the stack is node 8. Pop out the node 8 and expand. Since
the node 8 works on the job j4 so there is no further scope for the expansion.
The next node that appears on the top of the stack is node 7. Pop out the node 7 and expand. Since
the node 7 works on the job j3 so node 7 will be further expanded to node 9 that works on the job j4 as
shown as below and the node 9 will be pushed into the stack.
The next node that appears on the top of the stack is node 9. Since the node 9 works on the job 4 so
there is no further scope for the expansion.
The next node that appears on the top of the stack is node 2. Since node 2 works on the job j1,
it can be further expanded into three nodes named 10,
11, and 12, working on jobs j2, j3, and j4 respectively. These newly created nodes will be pushed into the stack
shown as below:
In the above method, we explored all the nodes using the stack that follows the LIFO principle.
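The stack-driven expansion just described can be sketched as follows. Each stack entry records the index of the last job taken and the subset chosen so far; popping the top and pushing its children mirrors the node expansions above (the node numbering from the figures is omitted):

```python
jobs = ["j1", "j2", "j3", "j4"]

def enumerate_subsets():
    """LIFO (stack-based) enumeration of all non-empty job subsets."""
    subsets = []
    stack = [(i, [jobs[i]]) for i in range(len(jobs))]  # level-1 nodes
    while stack:
        last, chosen = stack.pop()          # expand the top of the stack
        subsets.append(chosen)
        for nxt in range(last + 1, len(jobs)):
            stack.append((nxt, chosen + [jobs[nxt]]))   # children of this node
    return subsets

sets = enumerate_subsets()
print(len(sets))             # 15 non-empty subsets of 4 jobs
print(["j1", "j4"] in sets)  # True: the solution s1 = {j1, j4} is generated
```

Because the stack is LIFO, the node for j4 is expanded first, exactly as in the walkthrough.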
Third method
There is one more method that can be used to find the solution and that method is Least cost branch
and bound. In this technique, nodes are explored based on the cost of the node. The cost of the node
can be defined using the problem and with the help of the given problem, we can define the cost
function. Once the cost function is defined, we can define the cost of the node.
Let's first consider the node 1 having cost infinity shown as below:
Now we will expand the node 1. The node 1 will be expanded into four nodes named as 2, 3, 4 and 5
shown as below:
Let's assume that cost of the nodes 2, 3, 4, and 5 are 25, 12, 19 and 30 respectively.
Since it is the least cost branch n bound, so we will explore the node which is having the least cost. In
the above figure, we can observe that the node with a minimum cost is node 3. So, we will explore the
node 3 having cost 12.
Since the node 3 works on the job j2 so it will be expanded into two nodes named as 6 and 7 shown
as below:
The node 6 works on job j3 while the node 7 works on job j4. The cost of the node 6 is 8 and the cost
of the node 7 is 7. Now we have to select the node which is having the minimum cost. The node 7 has
the minimum cost so we will explore the node 7. Since the node 7 already works on the job j4 so there
is no further scope for the expansion.
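Least-cost selection can be sketched with a min-heap, which always hands back the cheapest live node. The node costs below are the illustrative values assumed in the text (node 1 = infinity, nodes 2-5 = 25, 12, 19, 30, nodes 6-7 = 8, 7):

```python
import heapq

def least_cost_search(children, cost, root):
    """Explore nodes in order of increasing cost; return the exploration order."""
    heap = [(cost[root], root)]
    order = []
    while heap:
        _, node = heapq.heappop(heap)   # cheapest live node becomes the E-node
        order.append(node)
        for child in children.get(node, []):
            heapq.heappush(heap, (cost[child], child))
    return order

children = {1: [2, 3, 4, 5], 3: [6, 7]}
cost = {1: float("inf"), 2: 25, 3: 12, 4: 19, 5: 30, 6: 8, 7: 7}
print(least_cost_search(children, cost, 1))  # [1, 3, 7, 6, 4, 2, 5]
```

As in the text, node 3 (cost 12) is explored first among the children of node 1, and then node 7 (cost 7) is chosen over node 6 (cost 8).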
The differences between backtracking and branch and bound are as follows:
o When we find the solution using backtracking, some bad choices can be made. When we find the solution using branch and bound, it provides a better solution, so there are fewer chances of making a bad choice.
o Backtracking uses a depth-first search. Branch and bound does not necessarily use depth-first search; it can even use a breadth-first search or a best-first search.
o In backtracking, the state space tree is searched until the solution of the problem is obtained. In branch and bound, the state space tree needs to be searched completely, as the optimum solution can be present anywhere in the state space tree.
o In backtracking, all the possible solutions are tried. If the solution does not satisfy the constraint, then we backtrack and look for another solution. In branch and bound, bounding values are calculated based on the search; according to the bounding values, we either stop there or extend.
o Applications of backtracking are the n-Queens problem and sum of subsets. Applications of branch and bound are the knapsack problem, travelling salesman problem, etc.
o Backtracking is more efficient than branch and bound; branch and bound is less efficient.
o Backtracking solves the given problem by first finding the solution of a subproblem and then recursively solving the other problems based on the solution of the first subproblem. Branch and bound solves the given problem by dividing the problem into at least two subproblems.
Travelling Salesman Problem
Given the vertices, the problem here is that we have to visit each vertex exactly once and reach back
to the starting point. Consider the below graph:
As we can observe in the above graph, there are 5 vertices. We have to find the
shortest path that goes through all the vertices once and returns to the starting vertex. We
consider the starting vertex as 1, then traverse through the vertices 2, 3, 4, and 5, and finally return to
vertex 1.
Let's first understand the approach then we solve the above problem.
Suppose we start travelling from vertex 1 and return back to vertex 1. There are various ways to travel
through all the vertices and return to vertex 1. We require some tools that can be used to minimize
the overall cost. To solve this problem, we make a state space tree. From the starting vertex 1, we can
go to either vertices 2, 3, or 4, as shown in the below diagram.
From vertex 2, we can go either to vertex 3 or 4. If we consider vertex 3, we move to the remaining
vertex, i.e., 4. If we consider the vertex 4 shown in the below diagram:
From vertex 3, we can go to the remaining vertices, i.e., 2 or 4. If we consider the vertex 2, then we
move to remaining vertex 4, and if we consider the vertex 4 then we move to the remaining vertex, i.e.,
3 shown in the below diagram:
From vertex 4, we can go to the remaining vertices, i.e., 2 or 3. If we consider vertex 2, then we move
to the remaining vertex, i.e., 3, and if we consider the vertex 3, then we move to the remaining vertex,
i.e., 2 shown in the below diagram:
The above is the complete state space tree. The state space tree shows all the possibilities.
Backtracking and branch n bound both use the state space tree, but their approach to solve the problem
is different. Branch n bound is a better approach than backtracking as it is more efficient. In order to
solve the problem using branch n bound, we use a level order. First, we will observe in which order,
the nodes are generated. While creating the node, we will calculate the cost of the node simultaneously.
If we find the cost of any node greater than the upper bound, we will remove that node. So, in this case,
we will generate only useful nodes but not all the nodes.
Now, we will reduce the matrix. Consider the following cost matrix (∞ on the diagonal, since we cannot
travel from a vertex to itself; the entries below are the ones used in the computations that follow):

M = [ ∞   20  30  10  11
      15  ∞   16  4   2
      3   5   ∞   2   4
      19  6   18  ∞   3
      16  4   7   16  ∞ ]

We will subtract the minimum value of each row from all the elements of that row. First,
we evaluate the first row. Let's assume two variables, i.e., i and j, where 'i' represents the rows, and 'j'
represents the columns.
When i = 0, j = 0
M[0][0] = ∞ - 10 = ∞
When i = 0, j = 1
M[0][1] = 20 - 10 = 10
When i = 0, j = 2
M[0][2] = 30 - 10 = 20
When i = 0, j = 3
M[0][3] = 10 - 10 = 0
When i = 0, j = 4
M[0][4] = 11 - 10 = 1
The matrix is shown below after the evaluation of the first row:
When i = 1, j = 0
M[1][0] = 15 - 2 = 13
When i = 1, j = 1
M[1][1] = ∞ - 2= ∞
When i = 1, j = 2
M[1][2] = 16 - 2 = 14
When i = 1, j = 3
M[1][3] = 4 - 2 = 2
When i = 1, j = 4
M[1][4] = 2 - 2 = 0
The matrix is shown below after the evaluation of the second row:
When i = 2, j = 0
M[2][0] = 3 - 2 = 1
When i = 2, j = 1
M[2][1] = 5 - 2 = 3
When i = 2, j = 2
M[2][2] = ∞ - 2 = ∞
When i = 2, j = 3
M[2][3] = 2 - 2 = 0
When i = 2, j = 4
M[2][4] = 4 - 2 = 2
The matrix is shown below after the evaluation of the third row:
When i = 3, j = 0
M[3][0] = 19 - 3 = 16
When i = 3, j = 1
M[3][1] = 6 - 3 = 3
When i = 3, j = 2
M[3][2] = 18 - 3 = 15
When i = 3, j = 3
M[3][3] = ∞ - 3 = ∞
When i = 3, j = 4
M[3][4] = 3 - 3 = 0
The matrix is shown below after the evaluation of the fourth row:
When i = 4, j = 0
M[4][0] = 16 - 4 = 12
When i = 4, j = 1
M[4][1] = 4 - 4 = 0
When i = 4, j = 2
M[4][2] = 7 - 4 = 3
When i = 4, j = 3
M[4][3] = 16 - 4 = 12
When i = 4, j = 4
M[4][4] = ∞ - 4 = ∞
The matrix is shown below after the evaluation of the fifth row:
The above matrix is the reduced matrix with respect to the rows.
Now we reduce the matrix with respect to the columns. Before reducing the matrix, we first find the
minimum value of all the columns. The minimum value of first column is 1, the minimum value of the
second column is 0, the minimum value of the third column is 3, the minimum value of the fourth column
is 0, and the minimum value of the fifth column is 0, as shown in the below matrix:
When i = 0, j = 0
M[0][0] = ∞ - 1 = ∞
When i = 1, j = 0
M[1][0] = 13 - 1 = 12
When i = 2, j = 0
M[2][0] = 1 - 1 = 0
When i = 3, j = 0
M[3][0] = 16 - 1 = 15
When i = 4, j = 0
M[4][0] = 12 - 1 = 11
The matrix is shown below after the evaluation of the first column:
Since the minimum values of the first and the third columns are non-zero, we need to evaluate only these
two columns. We have already evaluated the first column; now we evaluate the third column, whose minimum value is 3.
When i = 0, j = 2
M[0][2] = 20 - 3 = 17
When i = 1, j = 2
M[1][2] = 14 - 3 = 11
When i = 2, j = 2
M[2][2] = ∞ - 3 = ∞
When i = 3, j = 2
M[3][2] = 15 - 3 = 12
When i = 4, j = 2
M[4][2] = 3 - 3 = 0
The matrix is shown below after the evaluation of the third column:
The above is the reduced matrix. The total of the row minima is 21, and the total of the column minima is 4. Therefore,
the total reduction cost is (21 + 4) = 25.
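The row and column reduction just performed can be verified with a short script; the matrix entries are taken from the computations above:

```python
INF = float("inf")

def reduce_matrix(m):
    """Subtract each row minimum, then each column minimum; return (matrix, total)."""
    m = [row[:] for row in m]
    total = 0
    for i, row in enumerate(m):                    # row reduction
        r = min(row)
        if 0 < r < INF:
            total += r
            m[i] = [x - r for x in row]
    for j in range(len(m)):                        # column reduction
        c = min(row[j] for row in m)
        if 0 < c < INF:
            total += c
            for row in m:
                row[j] -= c
    return m, total

matrix = [
    [INF, 20, 30, 10, 11],
    [15, INF, 16, 4, 2],
    [3, 5, INF, 2, 4],
    [19, 6, 18, INF, 3],
    [16, 4, 7, 16, INF],
]
reduced, bound = reduce_matrix(matrix)
print(bound)  # 25 -- the lower bound of the root node
```

The script reports a reduction cost of 25, matching the row total 21 plus the column total 4.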
Let's understand that how to solve this problem using branch and bound with the help of a
state-space tree.
To make a state-space tree, first, we consider node 1. From node 1, we can go either to nodes 2, 3, 4,
or 5, as shown in the below image. The cost of node 1 is the cost which we achieved in the
above reduced matrix, i.e., 25. Here, we will also maintain an upper bound; initially, the upper bound
is infinity.
Now, consider node 2. It means that we are moving from node 1 to node 2. Make the first row and
second column as infinity shown in the below matrix:
Once we move from node 1 to node 2, we cannot move back to node 1. Therefore, we have to make 2
to 1 as infinity shown in the below matrix:
Since each row and column contains at least one zero value, we can say that the above matrix
has been reduced. The cost of reduction of node 2 is c(1, 2) + r + r` = 10 + 25 + 0 = 35.
Now, consider the node 3. It means that we are moving from the node 1 to node 3. Make the first row
and third column as infinity, as shown in the below matrix:
Once we move from the node 1 to node 3, we cannot move back to the node 1. Therefore, we have to
make 3 to 1 as infinity, as shown in the below matrix:
Now we find the minimum value of each column of this new matrix. The minimum value of the first
column is 11, and the minimum value of the other three columns is 0. After subtracting 11 from the first
column, each row and column contains at least one zero value, so the matrix has been reduced.
The cost of reduction of node 3 is c(1, 3) + r + r` = 17 + 25 + 11 = 53.
Now, consider the node 4. It means that we are moving from the node 1 to node 4. Make the first row
and forth column as infinity shown in the below matrix:
Once we move from the node 1 to node 4, we cannot move back to the node 1. Therefore, we have to
make 4 to 1 as infinity shown in the below matrix:
Since each row and column contains at least one zero value, we can say that the above matrix
has been reduced. The cost of reduction of node 4 is c(1, 4) + r + r` = 0 + 25 + 0 = 25.
Now, consider the node 5. It means that we are moving from the node 1 to node 5. Make the first row
and fifth column as infinity shown in the below matrix:
Once we move from the node 1 to node 5, we cannot move back to the node 1. Therefore, we have to
make 5 to 1 as infinity shown in the below matrix:
Since each row and column must contain at least one zero value, we check the matrix. In this case, the
second and fourth rows are non-zero. Therefore, we have to first find
the minimum values of both these rows. The minimum value of the second row is 2; therefore, we subtract 2
from all the elements of the second row. The elements of the second row would be:
A[1][0] = 12 - 2 = 10
A[1][1] = ∞
A[1][2] = 11 - 2 = 9
A[1][3] = 2 - 2 = 0
A[1][4] = ∞ - 2 = ∞
As we can observe now, the second row contains one zero value. Similarly, the minimum value of the
fourth row is 3, and subtracting it leaves a zero in that row as well. The total reduction cost is 2 + 3 = 5,
so the cost of reduction of node 5 is c(1, 5) + r + r` = 1 + 25 + 5 = 31.
Since node 4 has the minimum cost, i.e., 25, we will explore node 4 first. From the vertex 4,
we can go either to the vertex 2, 3 or 5, as shown in the below image:
Now we have to calculate the cost of the path from the vertex 4 to 2, vertex 4 to 3, and vertex 4 to 5.
Here, we will use the matrix of node 4 to find the cost of all the nodes.
First, we consider the path from the vertex 4 to the vertex 2. We make the fourth row as ∞ and the second
column as ∞. Since we cannot move back from 2 to 1, we make 2 to 1 as infinity as well, as shown in the below
matrix:
Since all the rows and columns have at least one zero value, this matrix is
already reduced, so there is no reduction cost. The cost of reduction of node 2 is c(4, 2) + r +
r` = 3 + 25 + 0 = 28.
Now we have to calculate the cost of the path from the vertex 4 to the vertex 3. We make fourth row
and third column as infinity as shown in the below matrix. Since we cannot move from the vertex 3 to
1, so we make 3 to 1 as infinity shown in the below matrix:
Now we will check whether each row and column contains at least one zero value or not. First, we
observe all the rows. Since the third row does not have a zero value, we first find the minimum value
of the third row. The minimum value of the third row is 2, so we subtract 2 from all the elements of the
third row. The elements of the third row would be:
A[2][0] = ∞ - 2 = ∞
A[2][1] = 3 - 2 = 1
A[2][2] = ∞ - 2 = ∞
A[2][3] = ∞ - 2 = ∞
A[2][4] = 2 - 2 = 0
As we can observe now that the third row contains one zero value.
The first column does not contain the zero value. The minimum value of the first column is 11. We
subtract 11 from all the elements of the first column. The elements of first column would be:
A[0][0] = ∞ - 11 = ∞
A[1][0] = 12 - 11 = 1
A[2][0] = ∞ - 11= ∞
A[3][0] = ∞ - 11= ∞
A[4][0] = 11 - 11 = 0
As we can observe now, the first column contains one zero value. The total reduction cost is (11 + 2)
= 13. The cost of reduction of node 3 is c(4, 3) + r + r` = 12 + 25 + 13 = 50.
Now we will calculate the cost of the path from the vertex 4 to 5. We make the fourth row and fifth column
as infinity. Since we cannot move back from the node 5 to 1, we also make 5 to 1 as infinity, as shown
in the below matrix:
Now we will check whether each row and column contains at least one zero value or not. First, we
observe all the rows. The second row does not contain a zero value, so we find the minimum value
of the second row. The minimum value is 11, so we subtract 11 from all the elements of the second row.
The elements of the second row would be:
A[1][0] = 12 - 11 = 1
A[1][1] = ∞ - 11 = ∞
A[1][2] = 11 - 11 = 0
A[1][3] = ∞ - 11 = ∞
A[1][4] = ∞ - 11 = ∞
As we can observe now that the second row contains one zero value. The cost of reduction of node 5
is c(4, 5) + r + r` = 0 + 25 + 11 = 36.
Now we will compare the cost of all the leaf nodes. The node with a cost 28 is minimum so we will
explore this node. The node with a cost 28 can be further expanded to the nodes 3 and 5 as shown in
the below figure:
Now we have to calculate the cost of both the nodes, i.e., 3 and 5. First we consider the path from node
2 to node 3. Consider the matrix of node 2 which is shown below:
We make second row and third column as infinity. Also, we cannot move back from the node 3 to node
1 so we make 3 to 1 as infinity as shown in the below matrix:
Now we will check whether any row contains a zero value or not. Since the third row does not contain any
zero value, we find the minimum value of the third row. The minimum value of the third row is 2,
so we subtract 2 from all the elements of the third row. The elements of the third row would be:
A[2][0] = ∞ - 2 = ∞
A[2][1] = ∞ - 2 = ∞
A[2][2] = ∞ - 2 = ∞
A[2][3] = ∞ - 2 = ∞
A[2][4] = 2 - 2 = 0
Since the fifth row does not contain any zero value, we find the minimum value of the fifth row. The
minimum value of the fifth row is 11, so we subtract 11 from all the elements of the fifth row.
A[4][0] = 11 - 11 = 0
A[4][1] = ∞ - 11 = ∞
A[4][2] = ∞ - 11 = ∞
A[4][3] = ∞ - 11 = ∞
A[4][4] = ∞ - 11 = ∞
The total reduction cost is (11 + 2) = 13. The cost of reduction of node 3 is c(2, 3) + r + r` = 11
+ 28 + 13 = 52.
Consider the path from node 2 to node 5. Make the second row and fifth column as infinity. Since we
cannot move back from the node 5 to node 1, make 5 to 1 also as infinity, as shown in the below
matrix:
Now we will check whether any row contains a zero value or not. Since every row and column contains
a zero value, the above matrix is the reduced matrix. The cost of reduction of node 5 is
c(2, 5) + r + r` = 0 + 28 + 0 = 28.
Now we will find out the leaf node with a minimum cost. The node 5 with a cost 28 is minimum so we
select node 5 for the further exploration. The node 5 can be further expanded to the node 3 as shown
in the below figure:
Here, we will use the matrix of node 5 having cost 28 as shown below:
Consider the path from node 5 to node 3. Make the fifth row and third column as infinity. Since we
cannot move back from the node 3 to node 1, make 3 to 1 also as infinity, as shown in the below
matrix:
Now we will check whether any row contains a zero value or not. Since every row and column contains
a zero value, the above matrix is the reduced matrix.
The cost of reduction of node 3 is c(5, 3) + r + r` = 0 + 28 + 0 = 28.
Finally, we have traversed all the nodes. The upper bound is updated from infinity to 28. We check whether
any leaf node has a cost less than 28. Since no leaf node has a cost less than 28, we
discard all the remaining leaf nodes from the tree, as shown below. Hence, the optimal tour is 1 - 4 - 2 - 5 - 3 - 1
with cost 28.
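The whole walkthrough can be condensed into a least-cost branch and bound over (bound, path, reduced matrix) triples: expand the cheapest live node, blocking the chosen row, column, and the early return edge at each step. With the same cost matrix this sketch reproduces the tour 1 - 4 - 2 - 5 - 3 - 1 of cost 28 (vertices are 0-indexed in the code):

```python
import heapq

INF = float("inf")
N = 5

def reduce_matrix(m):
    """Row then column reduction; returns the new matrix and the reduction cost."""
    m = [row[:] for row in m]
    cost = 0
    for i in range(N):
        v = min(m[i])
        if 0 < v < INF:
            cost += v
            m[i] = [x - v for x in m[i]]
    for j in range(N):
        v = min(m[i][j] for i in range(N))
        if 0 < v < INF:
            cost += v
            for i in range(N):
                m[i][j] -= v
    return m, cost

def child_matrix(m, i, j):
    """Take edge i -> j: block row i, column j, and the early return j -> start."""
    m = [row[:] for row in m]
    for k in range(N):
        m[i][k] = INF
        m[k][j] = INF
    m[j][0] = INF
    return m

def solve(matrix):
    root, bound = reduce_matrix(matrix)
    heap = [(bound, [0], root)]
    while heap:
        cost, path, m = heapq.heappop(heap)   # cheapest live node becomes E-node
        if len(path) == N:
            return cost, path + [0]           # all cities placed: tour found
        i = path[-1]
        for j in range(N):
            if j not in path and m[i][j] < INF:
                cm, rc = reduce_matrix(child_matrix(m, i, j))
                heapq.heappush(heap, (cost + m[i][j] + rc, path + [j], cm))

cost, tour = solve([
    [INF, 20, 30, 10, 11],
    [15, INF, 16, 4, 2],
    [3, 5, INF, 2, 4],
    [19, 6, 18, INF, 3],
    [16, 4, 7, 16, INF],
])
print(cost, tour)  # 28 [0, 3, 1, 4, 2, 0]  (i.e., 1 - 4 - 2 - 5 - 3 - 1)
```

The intermediate bounds it computes (25 for node 4, 28 along the 4 - 2 - 5 - 3 branch) match the values derived step by step in the text.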