MODULE 2
Many such problems are optimization problems: they ask to find an element that maximizes
or minimizes some desired characteristic such as a path length or an assignment cost.
Exhaustive search
• It is simply a brute-force approach to combinatorial problems.
• It suggests generating each and every element of the problem domain, selecting those of
them that satisfy all the constraints, and then finding a desired element (e.g., the one that
optimizes some objective function).
We illustrate exhaustive search by applying it to two important problems:
1. The traveling salesman problem
2. The knapsack problem
1. Traveling Salesman Problem
• The problem asks for the shortest tour that visits every city in a given set exactly once before returning to the starting city. It can be modelled by a weighted graph, with the graph’s vertices representing the cities and the edge weights specifying the distances.
• Then the problem can be stated as the problem of finding the shortest Hamiltonian circuit
of the graph.
(A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph
exactly once. It is named after the Irish mathematician Sir William Rowan Hamilton (1805–
1865), who became interested in such cycles as an application of his algebraic discoveries.)
• An inspection of Figure 3.7 reveals three pairs of tours that differ only by their direction.
Hence, we could cut the number of vertex permutations by half.
• We could, for example, choose any two intermediate vertices, say, b and c, and then
consider only permutations in which b precedes c.
Analysis:
1. The size of the input is the number of cities n.
2. The basic operation is computing the total length (cost) of a candidate tour.
3. The total number of tours to be examined for n cities gives T(n) ∈ Θ((n − 1)!).
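To make the exhaustive-search count concrete, here is a minimal brute-force sketch in Python (not from the text; the 4-city distance matrix and the choice of city 0 as the fixed starting point are illustrative assumptions):

from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search for the shortest Hamiltonian circuit.

    dist is an n x n matrix of intercity distances (assumed symmetric here).
    Fixing city 0 as the start leaves (n-1)! tours to examine.
    """
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):          # (n-1)! permutations
        tour = (0,) + perm + (0,)                   # return to the start
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Tiny illustrative instance (4 cities, symmetric distances)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist))   # -> ((0, 1, 3, 2, 0), 18)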
2. Knapsack Problem
Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack. Figure 3.8a
presents a small instance of the knapsack problem.
Analysis
1. The number of items n is the measure of input size.
2. The basic operation is computing the total weight and total value of a subset and checking whether it is feasible (fits into the knapsack).
3. Since an n-element set has 2^n subsets, the time complexity of this algorithm is T(n) ∈ Θ(2^n).
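A minimal Python sketch of the exhaustive-search idea (the item weights, values, and capacity below are an illustrative instance, not necessarily the one in Figure 3.8a): all 2^n subsets are generated and checked for feasibility.

from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    """Try all 2^n subsets; keep the most valuable feasible one."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            total_weight = sum(weights[i] for i in subset)
            total_value = sum(values[i] for i in subset)
            if total_weight <= capacity and total_value > best_value:
                best_value, best_subset = total_value, subset
    return best_subset, best_value

# Illustrative instance: 4 items, knapsack capacity 10
print(knapsack_brute_force([7, 3, 4, 5], [42, 12, 40, 25], 10))
# -> ((2, 3), 65)  i.e. the third and fourth items, total value 65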
DECREASE-AND-CONQUER
1. Discuss decrease and conquer algorithmic technique. Explain its variations 06M
• The decrease-and-conquer technique is based on exploiting the relationship between a solution
to a given instance of a problem and a solution to its smaller instance.
• Once such a relationship is established, it can be exploited either
1. top down or
2. bottom up.
There are three major variations of decrease-and-conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease
1. DECREASE BY A CONSTANT
In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on
each iteration of the algorithm. Typically, this constant is equal to one (Figure 4.1), although other
constant size reductions do happen occasionally.
2. THE DECREASE-BY-A-CONSTANT-FACTOR
• This technique suggests reducing a problem instance by the same constant factor on each
iteration of the algorithm. In most applications, this constant factor is equal to two. The
decrease-by-half idea is illustrated in Figure 4.2.
• If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))^2.
• But since we consider here instances with integer exponents only, the former does not work for odd n. If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then multiply the result by a. To summarize, we have the following formula:
a^n = (a^(n/2))^2 if n is even and positive,
a^n = (a^((n−1)/2))^2 · a if n is odd,
a^n = a if n = 1.
If we compute a^n recursively according to formula (4.2) and measure the algorithm’s efficiency by the number of multiplications, we should expect the algorithm to be in Θ(log n) because, on each iteration, the size is reduced by about a half at the expense of one or two multiplications.
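A small recursive sketch of this decrease-by-half exponentiation in Python, following formula (4.2) (the function name is ours):

def power(a, n):
    """Compute a**n for a non-negative integer n by decrease-by-half.

    Each call halves the exponent at the cost of one or two
    multiplications, so the number of multiplications is in Theta(log n).
    """
    if n == 0:
        return 1
    if n % 2 == 0:                     # even exponent: a^n = (a^(n/2))^2
        half = power(a, n // 2)
        return half * half
    half = power(a, (n - 1) // 2)      # odd exponent: a^n = (a^((n-1)/2))^2 * a
    return half * half * a

print(power(2, 10))   # 1024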
Disadvantages
1. Once the reduction constant (or constant factor) is fixed, the same value is carried forward on every iteration until the end of the algorithm.
3. VARIABLE-SIZE-DECREASE
a. Finally, in the variable-size-decrease variety of decrease-and-conquer, the size-
reduction pattern varies from one iteration of an algorithm to another.
b. Euclid’s algorithm for computing the greatest common divisor provides a good
example of such a situation.
c. Recall that this algorithm is based on the formula gcd(m, n) = gcd(n, m mod n).
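A brief Python sketch of Euclid's algorithm based on that formula (an iterative version, assuming non-negative integer inputs):

def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), gcd(m, 0) = m.

    The size reduction varies from one iteration to the next, which is
    why this is a variable-size-decrease algorithm.
    """
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))   # 12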
Analysis (insertion sort, the basic decrease-by-one sorting algorithm):
1. The size of the input is the number of elements n.
2. The basic operation is the key comparison A[j] > v.
3. The number of key comparisons in this algorithm obviously depends on the nature of the input.
Best case: the comparison A[j] > v is executed only once on every iteration of the outer loop. It happens if and only if A[i − 1] ≤ A[i] for every i = 1, . . . , n − 1, i.e., if the input array is already sorted in nondecreasing order. Thus, for sorted arrays, the number of key comparisons is
Cbest(n) = n − 1 ∈ Θ(n).
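For reference, a minimal Python version of the straightforward insertion sort whose key comparison A[j] > v is counted above (a sketch, not the textbook pseudocode verbatim):

def insertion_sort(A):
    """Decrease-by-one sorting: insert A[i] into the already sorted A[0..i-1]."""
    for i in range(1, len(A)):
        v = A[i]
        j = i - 1
        while j >= 0 and A[j] > v:   # the key comparison counted above
            A[j + 1] = A[j]          # shift larger elements to the right
            j -= 1
        A[j + 1] = v
    return A

print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))
# -> [17, 29, 34, 45, 68, 89, 90]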
Topological Sorting
(2. Obtain Topological sorting for the graph using (a) DFS method
(b) source removal method 10M)
3. Apply Topological sorting for the graph and find the topological
sequence. 06M
• For the digraph of Figure 4.5a, the depth-first search forest (Figure 4.5b) exhibits all four types of edges possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back edges (ba) from vertices to their ancestors, forward edges (ac) from vertices to their descendants in the tree other than their children, and cross edges (dc), which are none of the aforementioned types.
Example:
• Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to take
in some degree program. The courses can be taken in any order as long as the following
course prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1 and C2, C4
requires C3, and C5 requires C3 and C4. The student can take only one course per term. In
which order should the student take the courses?
• The situation can be modeled by a digraph in which vertices represent courses and directed
edges indicate prerequisite requirements (Figure 4.6).
• In terms of this digraph, the question is whether we can list its vertices in such an order that
for every edge in the graph, the vertex where the edge starts is listed before the vertex where
the edge ends. (Can you find such an ordering of this digraph’s vertices?) This problem is
called topological sorting.
Thus, for topological sorting to be possible, a digraph in question must be a DAG. There are two
efficient algorithms that both verify whether a digraph is a DAG and, if it is, produce an ordering
of vertices that solves the topological sorting problem.
a) Topological sorting using DFS based method
b) Topological sorting using Source removal method.
Note: that the solution obtained by the source-removal algorithm is different from the one
obtained by the DFS-based algorithm.
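A short Python sketch of the DFS-based method (the adjacency list encodes the five-course example above; cycle detection is omitted for brevity):

def topological_sort_dfs(graph):
    """DFS-based topological sort of a DAG given as an adjacency list.

    Vertices are appended to `order` when their DFS call finishes;
    reversing that finishing order yields a topological ordering.
    """
    visited, order = set(), []

    def dfs(v):
        visited.add(v)
        for w in graph.get(v, []):
            if w not in visited:
                dfs(w)
        order.append(v)              # v is finished (popped off the stack)

    for v in graph:
        if v not in visited:
            dfs(v)
    return order[::-1]               # reverse of the finishing order

# The five-course digraph from the example above
courses = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"],
           "C4": ["C5"], "C5": []}
print(topological_sort_dfs(courses))   # -> ['C2', 'C1', 'C3', 'C4', 'C5']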
Divide-and-Conquer
(1. Explain divide and conquer technique with general algorithm 06M
2. Discuss the general method of divide and conquer along with control
abstraction 06M
3. What are the disadvantages of Divide and conquer approach 04M)
The divide-and-conquer technique is diagrammed in Figure 5.1, which depicts the case of dividing
a problem into two smaller subproblems.
• In the most typical case of divide-and-conquer a problem’s instance of size n is divided into two
instances of size n/2. More generally, an instance of size n can be divided into b instances of
size n/b, with a of them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.)
• Assuming that size n is a power of b to simplify our analysis, we get the following recurrence
for the running time T (n): T(n) = aT (n/b) + f (n)
where f (n) is a function that accounts for the time spent on dividing an instance into smaller ones and on combining their solutions.
• The above Recurrence is called the general divide-and-conquer recurrence. Obviously, the
order of growth of its solution T (n) depends on the values of the constants a and b and the
order of growth of the function f(n).
• The efficiency analysis of many divide-and-conquer algorithms is greatly simplified by the
following theorem (the Master Theorem): if f(n) ∈ Θ(n^d), where d ≥ 0, in the general divide-and-conquer recurrence, then
T(n) ∈ Θ(n^d) if a < b^d,
T(n) ∈ Θ(n^d log n) if a = b^d,
T(n) ∈ Θ(n^(log_b a)) if a > b^d.
• For example, the recurrence for the number of additions A(n) made by the divide-and-conquer
sum-computation algorithm on inputs of size n = 2^k is A(n) = 2A(n/2) + 1. Here a = 2, b = 2, and d = 0; since a > b^d, A(n) ∈ Θ(n^(log_2 2)) = Θ(n).
a) Merge-sort:
(1. Explain the concept of divide and conquer. Design an algorithm for merge
sort and derive its time complexity 10M)
2. Write merge sort algorithm for sorting using divide and conquer 06M)
3. Write merge sort algorithm with examples also calculate efficiency 12M)
4. Write an algorithm for merge sort. Also demonstrate the applicability of
Master’s theorem to compute time complexity 06M
5. Design an algorithm for performing merge sort. Analyze its time efficiency.
Apply the same to sort the following set of numbers 4,9,0,-1,6,8,9,2,3,12. 10M
• Merge-sort is a perfect example of a successful application of the divide- and-conquer
technique.
• It sorts a given array A[0..n − 1] by dividing it into two halves A[0..n/2 − 1] and
A[n/2..n − 1], sorting each of them recursively, and then merging the two smaller sorted
arrays into a single sorted one.
Example 2: Let us apply the merge-sort algorithm to sort the array elements:
8, 3, 2, 9, 7, 1, 5, 4
Analysis of Merge-sort:
• n is the measure of input size.
• Basic operation is comparison.
• The number of key comparisons C(n) satisfies the recurrence
C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0,
where Cmerge(n) is the number of key comparisons made during the merging stage.
• Worst case: neither of the two arrays becomes empty before the other one contains just one element, so exactly one comparison is made for each element copied except the last; hence Cmerge(n) = n − 1, and
Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0,
which, by the Master Theorem (a = 2, b = 2, d = 1), is in Θ(n log n).
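A compact Python sketch of merge sort, using the same divide-sort-merge scheme as above (not the textbook pseudocode verbatim):

def merge_sort(A):
    """Sort A by splitting it in half, sorting each half recursively,
    and merging the two sorted halves."""
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])
    right = merge_sort(A[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists; at most len(left)+len(right)-1 comparisons."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])          # one of these tails is empty
    result.extend(right[j:])
    return result

print(merge_sort([8, 3, 2, 9, 7, 1, 5, 4]))
# -> [1, 2, 3, 4, 5, 7, 8, 9]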
c) Quick sort
1. What is divide and conquer? Develop the quick sort algorithm and write its
bestcase. Make use of this algorithm to sort the list of characters: E, X, A,M,P,L,E
10M)
2. Design an algorithm for Quick sort algorithm. Apply quick sort on these elements
25,75, 40, 10, 20, 05, 15 10M)
3. Apply merge sort and quick sort algorithm to sort the characters VTUBELAGAVI
10M)
4. Sort the following keyword “ALGORITHM” by applying quick sort method 06M)
5. Write Quick sort algorithm with example. Also calculate efficiency. 12M)
6. Sort the below given array of elements using quick sort. Mention time complexity
08M
7. Design an algorithm for performing quick sort, apply the same to sort the following
set of numbers 5, 3, 1, 9,8,2,4,7. 10M
o Quicksort is the other important sorting algorithm that is based on the divide-and conquer
approach. Unlike merge-sort, which divides its input elements according to their position in the
array, quicksort divides them according to their value.
o A partition is an arrangement of the array’s elements so that all the elements to the left of some
element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than
or equal to it:
Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and we
can continue sorting the two subarrays to the left and to the right of A[s] independently.
ALGORITHM Partition(A[l..r])
The recursive calls to Quicksort with input values l and r of subarray bounds and split position s of
a partition obtained can be represented by Recursive-Tree as shown below:
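Since the body of Partition is not reproduced here, the following is a hedged Python sketch of quicksort with a Hoare-style partition that uses the first element of the subarray as the pivot (variable names are ours):

def quicksort(A, l=0, r=None):
    """Sort A[l..r] in place by partitioning around a pivot and
    sorting the two subarrays recursively."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)       # A[s] is now in its final position
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)
    return A

def partition(A, l, r):
    """Hoare-style partition with the first element A[l] as the pivot."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < p:   # scan right for an element >= pivot
            i += 1
        j -= 1
        while A[j] > p:              # scan left for an element <= pivot
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]          # put the pivot into its final slot
    return j

print(quicksort(list("EXAMPLE")))    # -> ['A', 'E', 'E', 'L', 'M', 'P', 'X']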
Analysis:
• Worst case: occurs when, at each invocation of the procedure, the current array is
partitioned into two subarrays with one of them being empty. This happens, for instance,
when all the elements are already arranged in ascending or descending order, giving
Cworst(n) ∈ Θ(n^2).
Ex: 20, 30, 40, 50
[20], [30, 40, 50]
20, [30], [40, 50]
20, 30, [40], [50]
20, 30, 40, [50]
• Average case: The pivot element may be placed at any arbitrary position in the
array ranging from 0 to n-1 with probability 1/n.
Time complexity is T(n) ∈ Θ(n log n).
Advantage:
• Quicksort is in-place
• Time complexity is T(n) ∈ Θ(n log n) on average
Disadvantage:
1. It is not stable.
2. Its worst-case time complexity is O(n^2).
• A sorting algorithm is called stable if it preserves the relative order of any two equal
elements in its input. In other words, if an input list contains two equal elements in
positions i and j where i < j, then in the sorted list they must end up in positions i' and j',
respectively, such that i' < j'.
Analysis (computing the height of a binary tree):
• Measure of input size is the number of nodes n(T ) in a given binary tree T
• The number of comparisons made to compute the maximum of two numbers and the
number of additions A(n(T )) made by the algorithm are the same.
• We have the following recurrence relation for A(n(T)):
A(n(T)) = A(n(T_left)) + A(n(T_right)) + 1 for n(T) > 0, A(0) = 0.
• Basic operations are addition and comparison.
• For the empty tree, the comparison T = ∅ is executed once but there are no additions, and
for a single-node tree, the comparison and addition numbers are 3 and 1, respectively.
• In general, the number of comparisons made to check whether a (sub)tree is empty is
C(n) = n + x = n + (n + 1) = 2n + 1,
where x = n + 1 is the number of empty (external) subtrees encountered, and the number of additions is A(n) = n.
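The recurrence above corresponds to the recursive computation of a binary tree's height; a minimal Python sketch (the Node class and the convention that the empty tree has height −1 are our assumptions):

class Node:
    """Binary tree node with left and right subtrees (None = empty tree)."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def height(tree):
    """Height of a binary tree: -1 for the empty tree, otherwise
    one more than the larger of the two subtree heights."""
    if tree is None:                       # the comparison T = empty
        return -1
    return max(height(tree.left), height(tree.right)) + 1   # one addition

# A small tree with 4 nodes and height 2
root = Node(Node(Node()), Node())
print(height(root))   # -> 2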
The most important divide-and-conquer algorithms for binary trees are the three classic
traversals: preorder, inorder, and postorder.
• All three traversals visit nodes of a binary tree recursively, i.e., by visiting the tree’s root and its
left and right subtrees. They differ only by the timing of the root’s visit:
➢ In the preorder traversal, the root is visited before the left and right subtrees are visited (in
that order).
➢ In the inorder traversal, the root is visited after visiting its left subtree but before visiting the
right subtree.
➢ In the postorder traversal, the root is visited after visiting the left and right subtrees (in that
order).
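A compact Python sketch of the three traversals (the Node class with a key field is an illustrative assumption):

class Node:
    """Binary tree node with a key and optional left/right children."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(t, out):
    if t is None:
        return
    out.append(t.key)                # root first
    preorder(t.left, out)
    preorder(t.right, out)

def inorder(t, out):
    if t is None:
        return
    inorder(t.left, out)
    out.append(t.key)                # root between the two subtrees
    inorder(t.right, out)

def postorder(t, out):
    if t is None:
        return
    postorder(t.left, out)
    postorder(t.right, out)
    out.append(t.key)                # root last

# Example tree with root "b", left child "a", right child "c"
root = Node("b", Node("a"), Node("c"))
for traversal in (preorder, inorder, postorder):
    out = []
    traversal(root, out)
    print(traversal.__name__, out)
# preorder  ['b', 'a', 'c']
# inorder   ['a', 'b', 'c']
# postorder ['a', 'c', 'b']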
The divide-and-conquer strategy suggests another way to compute the product of two n × n matrices.
If we assume that n is a power of 2, i.e., n = 2^k, where k is a non-negative integer, A and B can be partitioned into four square submatrices, each of dimension n/2 × n/2.
For n = 2, the product A·B can be computed using the formula
C11 = A11B11 + A12B21, C12 = A11B12 + A12B22,
C21 = A21B11 + A22B21, C22 = A21B12 + A22B22.
Since matrix multiplication (O(n^3)) is more expensive than matrix addition (O(n^2)), we can attempt to reformulate the equations for the Cij to have fewer multiplications, possibly at the cost of more additions.
• Volker Strassen has discovered a way to compute the Cij’s using only 7 multiplications and 18
additions or subtractions.
• His method first computes the seven n/2 × n/2 matrices P, Q, R, S, T, U, and V as follows:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 − B22)
S = A22(B21 − B11)
T = (A11 + A12)B22
U = (A21 − A11)(B11 + B12)
V = (A12 − A22)(B21 + B22)
• The quadrants of the product C = A·B are then obtained with additions and subtractions only:
C11 = P + S − T + V, C12 = R + T, C21 = Q + S, C22 = P + R − Q + U.
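A hedged Python/NumPy sketch of Strassen's scheme for n a power of 2 (the recursion bottoms out at 1 × 1 matrices; the function name and the example matrices are ours):

import numpy as np

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) with Strassen's
    seven-product scheme; fall back to scalar multiplication for n = 1."""
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]

    P = strassen(A11 + A22, B11 + B22)
    Q = strassen(A21 + A22, B11)
    R = strassen(A11, B12 - B22)
    S = strassen(A22, B21 - B11)
    T = strassen(A11 + A12, B22)
    U = strassen(A21 - A11, B11 + B12)
    V = strassen(A12 - A22, B21 + B22)

    C11 = P + S - T + V
    C12 = R + T
    C21 = Q + S
    C22 = P + R - Q + U
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen(A, B))        # same result as A @ B: [[19 22] [43 50]]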
**********************************