DAA notes Module 2
RV Educational Institutions®
RV Institute of Technology and Management
(Affiliated to VTU, Belagavi)
IV Semester
2022 Scheme
Module -2
Divide and Conquer
2.1 Divide And Conquer Algorithm
In this approach, we solve a problem recursively by applying three steps, as shown in Fig 2.1:
1. DIVIDE - break the problem into several subproblems of smaller size.
2. CONQUER - solve each subproblem recursively.
3. COMBINE - combine these solutions to create a solution to the original problem.
Fig 2.2: Divide and conquer method for the Counterfeit Coin problem
Max-Min Problem
Problem Statement: The Max-Min Problem in algorithm analysis is finding the maximum and minimum
value in an array.
Solution
• To find the maximum and minimum numbers in a given array numbers[] of size n, the following
algorithms can be used. First we present the naïve method, and then the divide and conquer approach.
Naïve Method
• The naïve method is a basic way to solve the problem: the maximum and minimum
numbers are found separately. To find them, the following straightforward algorithm can be used.
mid := ⌊(i+j)/2⌋;
Max-Min(i, mid, max, min);
Max-Min(mid+1, j, max1, min1);
if (max1 > max) then max := max1;
if (min1 < min) then min := min1;
}
}
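The recursive scheme above can be sketched in Python. The pseudocode returns max and min through reference parameters; since Python has none, this sketch returns the pair instead (an illustrative adaptation, not the notes' exact interface).

```python
def max_min(a, i, j):
    """Return (max, min) of a[i..j] by divide and conquer."""
    if i == j:                        # one element: it is both max and min
        return a[i], a[i]
    if j == i + 1:                    # two elements: a single comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2                # DIVIDE at the midpoint
    max1, min1 = max_min(a, i, mid)   # CONQUER left half
    max2, min2 = max_min(a, mid + 1, j)  # CONQUER right half
    # COMBINE: two comparisons merge the two results
    return max(max1, max2), min(min1, min2)

a = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(a, 0, len(a) - 1))   # (60, -8)
```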
Example
index: [1] [2] [3] [4] [5] [6] [7] [8] [9]
a:      22  13  -5  -8  15  60  17  31  47
A good way of keeping track of recursive calls is to build a tree by adding a node each time a new
call is made. On the array a[ ] above, the following tree is produced as shown in Fig 2.3.
Analysis
• Let T(n) be the number of comparisons made by Max-Min().
• The recurrence relation for T(n) is:
  T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2 for n > 2, with T(2) = 1 and T(1) = 0.
• Let us assume that n is a power of 2, i.e. n = 2^k, where k is the height of the recursion tree. Solving the recurrence then gives T(n) = 3n/2 - 2.
• Compared to the naïve method, which needs about 2n comparisons, the divide and conquer approach makes fewer comparisons.
However, in asymptotic notation both approaches are O(n).
Merge Sort
Features:
• It is a comparison-based algorithm
• It is a stable algorithm
• It is a perfect example of the divide & conquer algorithm design strategy
• It was invented by John von Neumann
Algorithm:
ALGORITHM Mergesort ( A[0… n-1] )
//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order
if n > 1
copy A[0… (n/2 -1)] to B[0… (n/2 -1)]
copy A[n/2… (n-1)] to C[0… (n/2 -1)]
Mergesort ( B[0… (n/2 -1)] )
Mergesort ( C[0… (n/2 -1)] )
Merge ( B, C, A )
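A Python sketch of the Mergesort pseudocode above. The Merge routine is not shown in the notes, so the standard two-pointer merge is filled in here; for simplicity the sketch returns a new sorted list rather than sorting A in place.

```python
def merge(b, c):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:               # <= keeps the sort stable
            out.append(b[i]); i += 1
        else:
            out.append(c[j]); j += 1
    out.extend(b[i:])                  # append whichever half remains
    out.extend(c[j:])
    return out

def mergesort(a):
    """Sort list a in ascending order by recursive mergesort."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = mergesort(a[:mid])             # left half, B in the pseudocode
    c = mergesort(a[mid:])             # right half, C in the pseudocode
    return merge(b, c)

print(mergesort([6, 3, 7, 8, 2, 4, 5, 1]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```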
Example:
Apply merge sort for the following list of elements: 6, 3, 7, 8, 2, 4, 5, 1
Solution: Merge sort illustration is shown in Fig 2.4.
Now we add equations (3) through (8): the sum of their left-hand sides
will be equal to the sum of their right-hand sides:
T(N)/N + T(N/2)/(N/2) + T(N/4)/(N/4) + … + T(2)/2 =
T(N/2)/(N/2) + T(N/4)/(N/4) + … + T(2)/2 + T(1)/1 + log N
(log N is the sum of the 1s in the right-hand sides.)
Cancelling the common terms on both sides leaves T(N)/N = T(1)/1 + log N, so T(N) = N log N + N = O(N log N).
Advantages:
• Number of comparisons performed is nearly optimal.
• Mergesort will never degrade to O(n²)
• It can be applied to files of any size
Limitations:
• Uses O(n) additional memory.
Algorithm
ALGORITHM Quicksort (A[ l …r ])
//sorts by quick sort
//i/p: A sub-array A[l..r] of A[0..n-1],defined by its left and right indices l and r
//o/p: The sub-array A[l..r], sorted in ascending order
if l < r
s <-- Partition (A[l..r]) // s is a split position
Quicksort(A[l..s-1])
Quicksort(A[s+1..r])
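The pseudocode above leaves the Partition routine unspecified. As an illustrative sketch, here is a Python version using the Lomuto partition scheme with the last element as pivot (one of several valid choices, not necessarily the one the notes intend):

```python
def partition(a, l, r):
    """Lomuto partition of a[l..r]; returns the split position s."""
    pivot = a[r]                       # last element as pivot (a choice)
    s = l
    for i in range(l, r):
        if a[i] < pivot:               # move smaller elements left of s
            a[i], a[s] = a[s], a[i]
            s += 1
    a[s], a[r] = a[r], a[s]            # place pivot at its final position
    return s

def quicksort(a, l, r):
    """Sort a[l..r] in place by quicksort."""
    if l < r:
        s = partition(a, l, r)         # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums, 0, len(nums) - 1)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]
```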
Example: Sort by quick sort the following list: 5, 3, 1, 9, 8, 2, 4, 7, show recursion tree.
Illustration of quick sort is shown in Fig 2.5.
1. the for loop stops when the indexes cross, hence there are N iterations
2. swap is one operation – disregarded
3. Two recursive calls:
a. Best case: each call is on half the array, hence time is 2T(N/2)
b. Worst case: one array is empty, the other is N-1 elements, hence time is T(N-1)
Best-case analysis:
The pivot is in the middle
T(N) = 2T(N/2) + cN
Divide by N:
T(N) / N = T(N/2) / (N/2) + c
Telescoping:
T(N/2) / (N/2) = T(N/4) / (N/4) + c
T(N/4) / (N/4) = T(N/8) / (N/8) + c
……
T(2) / 2 = T(1) / 1 + c
Adding all the equations and cancelling the common terms: T(N)/N = T(1)/1 + c log N, so the best case is T(N) = O(N log N).
Average-case analysis:
The average value of T(i) is (1/N) times the sum of T(0) through T(N-1):
(1/N) Σ T(j), j = 0 … N-1
T(N) = (2/N) Σ T(j) + cN
Multiply by N:
N T(N) = 2 Σ T(j) + cN²
Write the same equation for N-1 and subtract:
N T(N) - (N-1) T(N-1) = 2 T(N-1) + 2cN - c
Drop the insignificant -c, rearrange, divide through by N(N+1), and telescope:
T(N)/(N+1) = T(N-1)/N + 2c/(N+1)
T(N-1)/N = T(N-2)/(N-1) + 2c/N
T(N-2)/(N-1) = T(N-3)/(N-2) + 2c/(N-1)
….
T(2)/3 = T(1)/2 + 2c/3
Adding all the equations gives T(N)/(N+1) = T(1)/2 + 2c Σ 1/k for k = 3 … N+1, which is O(log N); hence the average case is T(N) = O(N log N).
Solution
In this algorithm, we want to find whether an element x belongs to a set of numbers stored in an array
numbers[], where l and r represent the left and right indices of the sub-array in which the searching
operation should be performed.
Algorithm: Binary-Search(numbers[], x, l, r)
if l = r then
return l
else
m := ⌊(l + r) / 2⌋
if x ≤ numbers[m] then
return Binary-Search(numbers[], x, l, m)
else
return Binary-Search(numbers[], x, m+1, r)
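The pseudocode translates almost directly to Python. The sample array below is illustrative; note this variant narrows to a single index and returns it, so the caller should verify that the element at the returned index really equals x.

```python
def binary_search(numbers, x, l, r):
    """Recursive binary search on sorted numbers[l..r], as in the
    pseudocode above: returns the candidate index for x."""
    if l == r:
        return l
    m = (l + r) // 2                  # floor of the midpoint
    if x <= numbers[m]:
        return binary_search(numbers, x, l, m)      # left half, keep m
    return binary_search(numbers, x, m + 1, r)      # right half

a = [2, 5, 8, 12, 16, 23, 38, 56]
i = binary_search(a, 23, 0, len(a) - 1)
print(i, a[i] == 23)   # 5 True
```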
Analysis
Linear search runs in O(n) time, whereas binary search produces the result in O(log n) time. Let T(n) be
the number of comparisons in the worst case in an array of n elements.
Hence, T(n) = T(n/2) + 1 for n > 1, with T(1) = 1, which solves to T(n) = O(log n).
The difference between O(log(N)) and O(N) is extremely significant when N is large: for any practical problem
it is crucial that we avoid O(N) searches. For example, suppose your array contains 2 billion (2 * 10**9) values.
Linear search would involve about a billion comparisons; binary search would require only 32 comparisons!
The space requirements for the recursive and iterative versions of binary search are different. Iterative Binary
Search requires only a constant amount of space, while Recursive Binary Search requires space proportional to
the number of comparisons to maintain the recursion stack.
Naïve Method
First, we will discuss the naïve method and its complexity. Here, we are calculating Z = X × Y. Using the
naïve method, two matrices X and Y can be multiplied if their orders are p × q and q × r.
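A standard triple-loop sketch of the naïve multiplication, assuming matrices are stored as nested lists (an illustrative representation, not fixed by the notes):

```python
def naive_multiply(X, Y):
    """Naive matrix multiplication Z = X * Y for a p*q matrix X and a
    q*r matrix Y: q multiplications per entry of Z, i.e. p*q*r in all."""
    p, q, r = len(X), len(Y), len(Y[0])
    Z = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                Z[i][j] += X[i][k] * Y[k][j]
    return Z

print(naive_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

For square n × n matrices (p = q = r = n) this performs n³ multiplications, the baseline that Strassen's method improves on.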
Where:
M1 = (A00 + A11) * (B00 + B11)
M2 = (A10 + A11) * B00
M3 = A00 * (B01 – B11)
M4 = A11 * (B10 – B00)
M5 = (A00 + A01) * B11
M6 = (A10 – A00) * (B00 + B01)
M7 = (A01 – A11) * (B10 + B11)
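The seven products can be checked on a single 2×2 case. The combination formulas for the C blocks below are the standard Strassen ones (the notes list only the M products):

```python
def strassen_2x2(A, B):
    """One Strassen step on 2x2 matrices, using M1..M7 as defined above."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    # Standard recombination of the seven products into C = A * B
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Only 7 scalar multiplications are used instead of the 8 a naïve 2×2 product needs; applied recursively to matrix blocks, this saving is what produces the 7M(n/2) recurrence below.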
Analysis:
• Input size: n – order of square matrix.
• Basic operation:
o Multiplication (7)
o Addition (18)
o Subtraction (4)
• No best, worst, average case
• Let M(n) be the number of multiplications made by the algorithm. Therefore we have:
M (n) = 7 M(n/2) for n > 1
M (1) = 1
Assume n = 2^k. Then
M(2^k) = 7 M(2^(k-1))
       = 7 [7 M(2^(k-2))]
       = 7² M(2^(k-2))
       …
       = 7^i M(2^(k-i))
When i = k:
M(2^k) = 7^k M(2^0) = 7^k
Since k = log₂ n, we get M(n) = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.807, fewer than the n³ multiplications of the naïve method.
Following Fig 2.6 shows the major variations of decrease & conquer approach.
Decrease by a constant (usually by 1):
Insertion Sort
Insertion sort works similarly to the way playing cards are sorted in the hand. It is assumed that the first
card is already sorted, and then we select an unsorted card. If the selected card is greater than the first
card, it is placed to the right; otherwise, to the left. Similarly, all unsorted cards are taken in turn and put
in their exact place.
The same approach is applied in insertion sort: take one element at a time and insert it at its correct
position within the already-sorted part of the array. Although it is simple to implement, it is not
appropriate for large data sets, as the time complexity of insertion sort in the average and worst cases is
O(n²), where n is the number of items. Insertion sort is generally less efficient than algorithms such as
heap sort, quick sort, and merge sort.
Advantages:
o Simple implementation
o Efficient for small data sets
o Adaptive, i.e., appropriate for data sets that are already substantially sorted
Algorithm
The simple steps of achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted.
Step 2 - Pick the next element and store it separately as the key.
Step 3 - Now, compare the key with all elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the current element, then move to the next
element. Else, shift greater elements in the array towards the right.
Step 5 - Insert the key at the position left vacant.
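The steps above can be sketched in Python as an in-place sort:

```python
def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(a)):         # a[0] alone is already "sorted"
        key = a[i]                     # next unsorted element
        j = i - 1
        while j >= 0 and a[j] > key:   # shift greater elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # insert key at the vacant position
    return a

print(insertion_sort([12, 31, 25, 8, 32]))  # [8, 12, 25, 31, 32]
```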
To understand the working of the insertion sort algorithm, let's take an unsorted array, say
(12, 31, 25, 8, 32). It will be easier to understand insertion sort via this example.
Here, 31 is greater than 12. That means both elements are already in ascending order. So, for now, 12 is
stored in a sorted sub-array.
Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check the key against all elements in the sorted sub-array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted array
remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are 31 and
8.
Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
1. Time Complexity
Case            Time Complexity
Best Case       O(n)
Average Case    O(n²)
Worst Case      O(n²)
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-
case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average-case time complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements must be sorted in reverse order, i.e. you
have to sort the elements in ascending order, but they are given in descending order. The worst-case
time complexity of insertion sort is O(n²).
2. Space Complexity
Space Complexity    O(1)
Stable              Yes
o The space complexity of insertion sort is O(1), because insertion sort needs only a single extra variable
(the key) while shifting elements.
Algorithm:
ALGORITHM DFS (G)
//implements DFS traversal of a given graph
//i/p: Graph G = { V, E }
//o/p: DFS tree
Mark each vertex in V with 0 as a mark of being “unvisited”
count <-- 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
count <-- count + 1
mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
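A runnable sketch of the DFS pseudocode above, assuming the graph is given as a dict of adjacency lists (a representation choice not fixed by the notes); mark[v] records the order in which each vertex is first visited:

```python
def dfs_traversal(G):
    """DFS of graph G = {vertex: [neighbours]}; returns visit-order marks."""
    mark = {v: 0 for v in G}     # 0 = unvisited
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        mark[v] = count          # mark v with its visit number
        for w in G[v]:
            if mark[w] == 0:
                dfs(w)

    for v in G:                  # restart from every unvisited vertex,
        if mark[v] == 0:         # so disconnected graphs are covered too
            dfs(v)
    return mark

G = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a'], 'd': ['b'], 'e': []}
print(dfs_traversal(G))  # {'a': 1, 'b': 2, 'c': 4, 'd': 3, 'e': 5}
```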
Topological Sorting
NOTE:
There is no solution for topological sorting if there is a cycle in the digraph
[the digraph MUST be a DAG].
The topological sorting problem can be solved by using
• the DFS method
• the source removal method
int topologicalOrderTraversal( ) {
    int numVisitedVertices = 0;
    while (there are more vertices to be visited) {
        if (there is no vertex with in-degree 0)
            break;
        else {
            select a vertex v that has in-degree 0;
            visit v;
            numVisitedVertices++;
            delete v and all its emanating edges;
        }
    }
    return numVisitedVertices;
}
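The source-removal pseudocode above can be sketched in Python (this is Kahn's algorithm); the dict-of-adjacency-lists representation and vertex names are illustrative assumptions:

```python
def topological_order(G):
    """Source-removal topological sort of digraph G = {vertex: [successors]}.
    Returns a valid order, or [] if G contains a cycle."""
    indeg = {v: 0 for v in G}
    for v in G:
        for w in G[v]:
            indeg[w] += 1            # count incoming edges per vertex
    sources = [v for v in G if indeg[v] == 0]
    order = []
    while sources:
        v = sources.pop()            # select a vertex with in-degree 0
        order.append(v)              # visit v
        for w in G[v]:               # delete v's emanating edges
            indeg[w] -= 1
            if indeg[w] == 0:
                sources.append(w)
    # fewer visited vertices than |V| means a cycle blocked the removal
    return order if len(order) == len(G) else []

G = {'c1': ['c3'], 'c2': ['c3'], 'c3': ['c4'], 'c4': []}
print(topological_order(G))  # one valid order of c1..c4, e.g. ['c2', 'c1', 'c3', 'c4']
```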
*****