DAA Material
The reduced cost matrix method proceeds as follows:
a. Set up the cost matrix: Cij = cost of the edge, if there is a direct path from city i to
city j; Cij = ∞, if there is no direct path from city i to city j.
b. Convert the cost matrix to a reduced matrix by subtracting the minimum values from
the appropriate rows and columns, so that each row and each column contains at
least one zero entry.
c. Find the cost of the reduced matrix. The cost is the sum of the amounts subtracted
from the cost matrix to convert it into the reduced matrix.
d. Prepare the state space tree for the reduced matrix.
e. Find the least cost valued node A (i.e. the E-node) by computing the reduced cost node
matrix with every remaining node.
f. If edge <i, j> is to be included, then do the following:
(i) Set all values in row i and all values in column j of A to ∞.
(ii) Set A[j, 1] = ∞.
(iii) Reduce A again, except for rows and columns having all ∞ entries.
g. Compute the cost of the newly created reduced matrix as
Cost = L + Cost(i, j) + r
where L is the cost of the parent's reduced matrix, Cost(i, j) is the entry for edge <i, j>
in that matrix, and r is the reduction cost of the new matrix.
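The following sketch shows one way the reduction steps and the child-node cost formula above can be computed. The function names (reduce_matrix, child_node_cost), the 0-indexed cities, and the sample 4-city cost matrix are illustrative assumptions, not part of the original material.

import math

INF = math.inf

def reduce_matrix(m):
    # Subtract each row's and each column's minimum so that every row and column
    # (other than fully crossed-out ones) contains a zero entry.
    # Returns the reduced matrix and the total amount subtracted (the reduction cost).
    m = [row[:] for row in m]
    n = len(m)
    reduction = 0
    for i in range(n):                      # row reduction
        row_min = min(m[i])
        if row_min != INF and row_min > 0:
            reduction += row_min
            m[i] = [v - row_min if v != INF else INF for v in m[i]]
    for j in range(n):                      # column reduction
        col_min = min(m[i][j] for i in range(n))
        if col_min != INF and col_min > 0:
            reduction += col_min
            for i in range(n):
                if m[i][j] != INF:
                    m[i][j] -= col_min
    return m, reduction

def child_node_cost(parent, L, i, j, start=0):
    # Cost of the child node obtained by including edge <i, j> (steps f and g above):
    # Cost = L + Cost(i, j) + r
    cij = parent[i][j]
    m = [row[:] for row in parent]
    n = len(m)
    for k in range(n):                      # step f(i): cross out row i and column j
        m[i][k] = INF
        m[k][j] = INF
    m[j][start] = INF                       # step f(ii): forbid returning to the start city
    reduced, r = reduce_matrix(m)           # step f(iii): reduce the remaining matrix
    return reduced, L + cij + r

# Illustrative 4-city cost matrix (INF on the diagonal).
cost = [[INF, 10, 15, 20],
        [10, INF, 35, 25],
        [15, 35, INF, 30],
        [20, 25, 30, INF]]
root, L = reduce_matrix(cost)               # L = lower bound for the root node
child, c = child_node_cost(root, L, 0, 1)   # bound after including edge <0, 1>
print("root bound:", L, "  child <0,1> bound:", c)   # 70 and 80 for this matrix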
General Strategy
▪ Dynamic programming is a powerful design technique for optimization problems.
Here the word “programming” refers to planning or constructing a solution; it has
no connection with computer programming.
▪ Divide and conquer splits a problem into small subproblems, which are solved
recursively. Unlike divide and conquer, the subproblems in dynamic programming
are not independent; they overlap with each other. The solutions of the
subproblems are combined to obtain the solution of the original, larger problem.
▪ Because the subproblems in divide and conquer are treated as independent, the
same subproblem may be solved multiple times. Dynamic programming saves each
solution in a table, so when the same subproblem is encountered again, its solution
is retrieved from the table. It is a bottom-up approach: it starts by solving the
smallest possible subproblems and uses their solutions to build solutions of larger
problems, as illustrated in the sketch below.
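As a concrete illustration of this contrast, the short sketch below (an example added here, not taken from the material) compares a plain divide and conquer recursion, which re-solves the same overlapping subproblems many times, with a bottom-up dynamic programming table that solves each subproblem exactly once. Fibonacci numbers are used only because the overlap is easy to see.

def fib_divide_and_conquer(n):
    # Overlapping subproblems are recomputed again and again: exponential time.
    if n < 2:
        return n
    return fib_divide_and_conquer(n - 1) + fib_divide_and_conquer(n - 2)

def fib_dynamic_programming(n):
    # Bottom-up: solve the smallest problems first and build larger ones
    # from the stored solutions in the table. Linear time.
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_divide_and_conquer(20), fib_dynamic_programming(20))  # both print 6765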
Limitations
▪ The method is applicable only to problems that satisfy the principle of optimality.
▪ We must keep track of partial solutions.
▪ Dynamic programming solutions are more complex to design and can be
time-consuming.
For the all pairs shortest path problem discussed below, the distance matrix L of the
graph G = (V, E) is initialised as:
L[i, j] = 0, if i = j
L[i, j] = w(i, j), if i ≠ j and (i, j) ∈ E // w(i, j) is the weight of the edge (i, j)
L[i, j] = ∞, if i ≠ j and (i, j) ∉ E
Principle of Optimality
If k is a node on the shortest path from i to j, then the paths from i to k and from k to j
must also be shortest.
In the following figure, the optimal path from i to j is either the direct path p or the
combination of p1 (from i to k) and p2 (from k to j).
Algorithm for All Pairs Shortest Path
This approach is also known as the Floyd-Warshall shortest path algorithm. The
algorithm for the all pairs shortest path (APSP) problem is described below.
Problem: Apply Floyd’s method to find the shortest paths between all pairs of vertices
of the graph given below.
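The sketch below is a minimal version of the Floyd-Warshall computation, assuming the graph is supplied as an n x n matrix initialised exactly as in the L[i, j] definition above. The 4-vertex sample graph is illustrative and is not the graph referred to in the problem statement.

import math

INF = math.inf

def floyd_warshall(L):
    n = len(L)
    D = [row[:] for row in L]
    # Allow each vertex k in turn to act as an intermediate vertex.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Principle of optimality: a shortest i-j path through k is a
                # shortest i-k path followed by a shortest k-j path.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Illustrative 4-vertex directed graph.
L = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
for row in floyd_warshall(L):
    print(row)          # each row holds the shortest distances from one vertex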
Bellman–Ford Algorithm for Shortest Paths
The Bellman-Ford algorithm is used to find the minimum distance from the source vertex
to every other vertex. The main difference from Dijkstra’s algorithm is that Dijkstra’s
algorithm cannot handle negative edge weights, whereas Bellman-Ford handles them
easily.
The Bellman-Ford algorithm finds the distances in a bottom-up manner. First, it finds the
distances of paths that use only one edge; it then increases the allowed path length step
by step until the shortest paths to all vertices are found.
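Below is a minimal sketch of the Bellman-Ford relaxation described above: every edge is relaxed |V| - 1 times, so after pass k all shortest paths using at most k edges are final. The edge list and vertex numbering are illustrative assumptions; detection of negative cycles (one extra pass) is not shown.

import math

def bellman_ford(n, edges, source):
    # n vertices numbered 0..n-1, edges given as (u, v, weight) triples.
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):              # pass k finalises paths of at most k edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
    return dist

# Example with a negative edge weight, which Dijkstra's algorithm could not handle.
edges = [(0, 1, 6), (0, 2, 7), (1, 3, 5), (1, 2, 8),
         (2, 4, 9), (1, 4, -4), (3, 1, -2), (4, 3, 7)]
print(bellman_ford(5, edges, 0))        # distances from vertex 0: [0, 6, 7, 9, 2]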