Session – 9
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm analysis
- Strassen's Matrix multiplication - Finding Maximum
and minimum - Algorithm for finding closest pair -
Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
BINARY SEARCH
ALGORITHM
Binarysearch(a[n], key)
    start = 0; end = n - 1;
    while (start <= end)
    {
        mid = (start + end) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            start = mid + 1;
        else
            end = mid - 1;
    }
    return -1;
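⚫ As a sketch, the same iterative search can be written as a runnable C function; the sample array and key in main are illustrative values.

#include <stdio.h>

/* Iterative binary search on a sorted array; returns the index of key, or -1 if absent. */
int binary_search(const int a[], int n, int key)
{
    int start = 0, end = n - 1;
    while (start <= end) {
        int mid = start + (end - start) / 2;   /* same as (start+end)/2 but avoids overflow */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            start = mid + 1;                   /* key can only be in the right half */
        else
            end = mid - 1;                     /* key can only be in the left half */
    }
    return -1;                                 /* key not present */
}

int main(void)
{
    int a[] = {3, 8, 12, 23, 34, 56, 76};
    printf("%d\n", binary_search(a, 7, 23));   /* prints 3 */
    return 0;
}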
ANALYSIS
Using the Master theorem
T(n) = T(n/2) + 1
General form: T(n) = aT(n/b) + f(n), where a >= 1 and b > 1
Here a = 1, b = 2, f(n) = 1 = n^0
1. If f(n) < n^(log_b a) (polynomially smaller), then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
3. If f(n) > n^(log_b a) (polynomially larger), and f(n) satisfies the regularity condition,
then T(n) = Θ(f(n)).
Calculate n^(log_b a) = n^(log_2 1) = n^0 = 1.
Compare with f(n). Since f(n) = Θ(n^(log_b a)),
i.e. n^0 = n^0,
case 2 is satisfied, hence the complexity is T(n) = Θ(n^(log_b a) log n) =
Θ(n^0 log n) = Θ(log n).
Worksheet No. 9
UNIT II
DIVIDE AND CONQUER
Session – 10
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm
analysis - Strassen's Matrix multiplication - Finding
Maximum and minimum - Algorithm for finding closest
pair - Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
Merge Sort algorithm
⚫ Merge sort is based on the Divide and Conquer method.
⚫ It takes the list to be sorted and divides it in half to create two unsorted lists.
⚫ The two unsorted lists are then sorted and merged to get a sorted list.
⚫ The two unsorted lists are sorted by continually calling the merge-sort algorithm; we
eventually get lists of size 1, which are already sorted. The lists of size 1 are then
merged.
Steps using Divide and Conquer strategy
⚫ Step 1 − If there is only one element in the list, it is already sorted; return.
⚫ Step 2 − Divide the list recursively into two halves until it can be divided no further.
⚫ Step 3 − Merge the smaller lists into a new list in sorted order.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
       middle m = (l+r)/2
2. Call mergeSort(arr, l, m)
3. Call mergeSort(arr, m+1, r)
4. Call merge(arr, l, m, r)
Mergesort(A,p,r)
Mergesort() function
MERGE-SORT (A, p, r)
1. IF p < r // Check for base case
2.     q = FLOOR[(p + r)/2] // Divide step
3.     MERGE-SORT (A, p, q) // Conquer step
4.     MERGE-SORT (A, q + 1, r) // Conquer step
5.     MERGE (A, p, q, r) // Combine step
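⚫ The MERGE procedure is not shown above. Below is a minimal runnable C sketch of merge sort, assuming the usual merge with temporary arrays; the helper names and the sample array in main are illustrative.

#include <stdio.h>
#include <string.h>

/* Merge the two sorted runs A[p..q] and A[q+1..r] into A[p..r]. */
void merge(int A[], int p, int q, int r)
{
    int nl = q - p + 1, nr = r - q;
    int L[nl], R[nr];                       /* temporary copies of the two runs */
    memcpy(L, A + p, nl * sizeof(int));
    memcpy(R, A + q + 1, nr * sizeof(int));

    int i = 0, j = 0, k = p;
    while (i < nl && j < nr)                /* take the smaller head element each time */
        A[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < nl) A[k++] = L[i++];         /* copy any leftovers */
    while (j < nr) A[k++] = R[j++];
}

void merge_sort(int A[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;                /* divide */
        merge_sort(A, p, q);                /* conquer left half */
        merge_sort(A, q + 1, r);            /* conquer right half */
        merge(A, p, q, r);                  /* combine */
    }
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    merge_sort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}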
Session – 11
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm
analysis - Strassen's Matrix multiplication - Finding
Maximum and minimum - Algorithm for finding closest
pair - Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
Quick Sort algorithm
⚫ Quick sort works by partitioning a given array A[p . . r] into two non-empty
subarrays A[p . . q] and A[q+1 . . r] such that every key in A[p . . q] is less than or equal
to every key in A[q+1 . . r].
⚫ Then the two subarrays are sorted by recursive calls to Quick sort. The exact
position of the partition depends on the given array, and the index q is computed as
part of the partitioning procedure.
⚫ As a first step, Quick Sort chooses one of the items in the array to be
sorted as the pivot. The array is then partitioned on either side of
the pivot.
⚫ Elements that are less than or equal to pivot will move toward the
left and elements that are greater than or equal to pivot will move
toward the right.
Partitioning procedure
⚫ PARTITION (A, p, r)
1. x ← A[p]
2. i ← p-1
3. j ← r+1
4. while TRUE do
5. Repeat j ← j-1
6. until A[j] ≤ x
7. Repeat i ← i+1
8. until A[i] ≥ x
9. if i < j
10. then exchange A[i] ↔ A[j]
11. else return j
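⚫ The QUICKSORT driver that calls PARTITION is not shown above. Here is a minimal runnable C sketch of quicksort with the Hoare-style partition above; the function names and the sample array are illustrative.

#include <stdio.h>

/* Hoare partition: pivot x = A[p]; returns an index j such that every element
   of A[p..j] is <= every element of A[j+1..r]. */
int partition(int A[], int p, int r)
{
    int x = A[p];
    int i = p - 1, j = r + 1;
    for (;;) {
        do { j--; } while (A[j] > x);
        do { i++; } while (A[i] < x);
        if (i < j) { int t = A[i]; A[i] = A[j]; A[j] = t; }
        else return j;
    }
}

void quicksort(int A[], int p, int r)
{
    if (p < r) {
        int q = partition(A, p, r);
        quicksort(A, p, q);         /* note: q, not q-1, with the Hoare partition */
        quicksort(A, q + 1, r);
    }
}

int main(void)
{
    int a[] = {54, 26, 93, 17, 77, 31, 44, 55, 20};
    quicksort(a, 0, 8);
    for (int i = 0; i < 9; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}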
Example
⚫ Partitioning begins by locating two position markers (let's call
them leftmark and rightmark) at the beginning and end of the
remaining items in the list (positions 1 and 8). The goal of the
partition process is to move items that are on the wrong side with
respect to the pivot value while also converging on the split point.
This process continues as we locate the final position of the pivot value 54.
Analysis of Quick Sort
Best Case
The best thing that could happen in Quick sort would be that
each partitioning stage divides the array exactly in half. In
other words, the best case occurs when the pivot is the median of the keys in A[p . .
r] every time the procedure PARTITION is called, so that
PARTITION always splits the array to be sorted into two equal-sized
subarrays.
⚫ If the procedure PARTITION produces two regions of size
n/2, the recurrence relation is then
⚫ T(n) = T(n/2) + T(n/2) + O(n)
⚫      = 2T(n/2) + O(n)
⚫ and from case 2 of the Master theorem,
T(n) = Θ(n lg n)
⚫ Worst-case
When quicksort always produces the most
unbalanced partitions possible, the
original call takes cn time for some
constant c, the recursive call on n-1
elements takes c(n-1) time, the recursive
call on n-2 elements takes c(n-2) time, and so
on.
⚫ When we total up the partitioning times
for each level, we get
⚫ c(n) + c(n-1) + c(n-2) + ... + 2c
= c(n + (n-1) + (n-2) + ... + 2)
= c(1 + 2 + 3 + ... + n) - c(1)
= c(n(n+1)/2) - c
= Θ(n^2)
Using the Master theorem
Best Case
T(n) = 2T(n/2) + Θ(n)
General form: T(n) = aT(n/b) + f(n), where a >= 1 and b > 1
Here a = 2, b = 2, f(n) = n
1. If f(n) < n^(log_b a) (polynomially smaller), then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a) log^k n) with k ≥ 0, then T(n) = Θ(n^(log_b a) log^(k+1) n).
3. If f(n) > n^(log_b a) (polynomially larger), and f(n) satisfies the regularity condition, then T(n)
= Θ(f(n)).
Calculate n^(log_b a) = n^(log_2 2) = n
Compare with f(n). Since f(n) = Θ(n^(log_b a)),
i.e. n = n,
case 2 is satisfied (with k = 0), hence the complexity is given as
T(n) = Θ(f(n) log n) = Θ(n log n)
Worksheet No. 11
UNIT II
DIVIDE AND CONQUER
Session – 12
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm analysis
- Strassen's Matrix multiplication - Finding
Maximum and minimum - Algorithm for finding closest
pair - Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
Strassen’s Multiplication –
Naive Method
⚫ Given two square matrices A and B of size
n x n each, find their product matrix C = A x B.
Naive Method
void multiply(int A[][N], int B[][N], int C[][N])
{
for (int i = 0; i < N; i++)
{
for (int j = 0; j < N; j++)
{
C[i][j] = 0;
for (int k = 0; k < N; k++)
{
C[i][j] += A[i][k]*B[k][j];
}
}
}
} // Time complexity of the above method is O(N^3).
Strassen’s Multiplication –
Divide and Conquer
⚫ A simple Divide and Conquer method to multiply two square matrices is to split
each matrix into four n/2 x n/2 submatrices and compute the product block by block.
This needs 8 recursive multiplications of n/2 x n/2 matrices, so T(n) = 8T(n/2) + O(n^2)
= O(n^3), which is no better than the naive method.
⚫ Strassen's method computes the same product with only 7 recursive multiplications
(plus a constant number of matrix additions), giving T(n) = 7T(n/2) + O(n^2) =
O(n^(log_2 7)) ≈ O(n^2.81).
⚫ Strassen's method is generally not preferred in practice because:
1. The constants used in Strassen's method are high, and for a typical
application the Naive method works better.
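⚫ As a sketch of the idea, the code below applies Strassen's seven products to a single 2 x 2 case; in the full algorithm each scalar below would be an n/2 x n/2 submatrix handled recursively. The function name and sample matrices are illustrative.

#include <stdio.h>

/* Strassen's seven products for a 2x2 multiplication (scalars stand in for blocks). */
void strassen2x2(const int A[2][2], const int B[2][2], int C[2][2])
{
    int m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    int m2 = (A[1][0] + A[1][1]) * B[0][0];
    int m3 = A[0][0] * (B[0][1] - B[1][1]);
    int m4 = A[1][1] * (B[1][0] - B[0][0]);
    int m5 = (A[0][0] + A[0][1]) * B[1][1];
    int m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    int m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);

    /* Only 7 multiplications instead of 8; the rest is additions. */
    C[0][0] = m1 + m4 - m5 + m7;
    C[0][1] = m3 + m5;
    C[1][0] = m2 + m4;
    C[1][1] = m1 - m2 + m3 + m6;
}

int main(void)
{
    int A[2][2] = {{1, 2}, {3, 4}};
    int B[2][2] = {{5, 6}, {7, 8}};
    int C[2][2];
    strassen2x2(A, B, C);
    printf("%d %d\n%d %d\n", C[0][0], C[0][1], C[1][0], C[1][1]);  /* 19 22 / 43 50 */
    return 0;
}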
Session – 13
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm analysis
- Strassen's Matrix multiplication - Finding Maximum
and minimum - Algorithm for finding closest pair -
Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
Finding Maximum and Minimum
⚫ Straightforward algorithm
Problem: find the maximum and minimum elements in a set of (n) elements.
Algorithm straightforward(a, n, max, min)
    max = min = a(1)
    for i = 2 to n do
        if (a(i) > max) then max = a(i)
        if (a(i) < min) then min = a(i)
    end
Complexity
best = average = worst = 2(n-1) comparisons
Modify it for betterment
⚫ If we change the body of the loop as follows:
max=min=a(1)
for i=2 to n do
begin
if (a(i)>max) then max=a(i)
else if (a(i)<min) then min=a(i)
end
⚫ Complexity
Best case: elements in increasing order, No. of comparisons = (n-1)
Worst case: elements in decreasing order, No. of comparisons = 2(n-1)
Average case: a(i) is greater than max half the time,
No. of comparisons = (1/2)((n-1) + (2n-2)) = 3n/2 - 3/2
Divide and Conquer approach
Problem: is to find the maximum and minimum items in a set of (n) elements.
⚫ Algorithm MaxMin(i, j, max, min)
input: array of N elements, i lower bound, j upper bound
output: max: largest value; min: smallest value.
if (i=j) then max=min=a(i)
else if (i=j-1) then
if (a(i)<a(j)) then max= a(j) and min= a(i)
else max= a(i) and min= a(j)
else
mid=(i+j)/2
maxmin(i, mid, max, min)
maxmin(mid+1, j, max1, min1)
if (max<max1) then max = max1
if (min>min1) then min = min1
end
Example: for the array 56 34 12 1 76 34 23 8 16, the two halves are processed
recursively and their results are combined, giving Min = 1 and Max = 76.
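⚫ A minimal runnable C sketch of the recursive MaxMin above, applied to the example array; the function name and the output-pointer interface are illustrative choices.

#include <stdio.h>

/* Divide-and-conquer MaxMin; results are returned through the max/min pointers. */
void maxmin(const int a[], int i, int j, int *max, int *min)
{
    if (i == j) {                            /* one element */
        *max = *min = a[i];
    } else if (i == j - 1) {                 /* two elements: one comparison */
        if (a[i] < a[j]) { *max = a[j]; *min = a[i]; }
        else             { *max = a[i]; *min = a[j]; }
    } else {
        int max1, min1, mid = (i + j) / 2;
        maxmin(a, i, mid, max, min);         /* left half */
        maxmin(a, mid + 1, j, &max1, &min1); /* right half */
        if (max1 > *max) *max = max1;        /* combine: two comparisons */
        if (min1 < *min) *min = min1;
    }
}

int main(void)
{
    int a[] = {56, 34, 12, 1, 76, 34, 23, 8, 16};
    int max, min;
    maxmin(a, 0, 8, &max, &min);
    printf("max = %d, min = %d\n", max, min);   /* max = 76, min = 1 */
    return 0;
}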
Complexity Analysis
1. If size = 1, return the current element as both max and min. // base condition
2. Else if size = 2, one comparison determines max and min. // base condition
3. Otherwise, recur for the max and min of the left half, recur for the max and min of
the right half, and combine the results with two more comparisons.
Solving the resulting recurrence gives T(n) = (3n/2) - 2, which is the exact number of
comparisons; still, the worst-case time complexity is T(n) = O(n), and the best-case
time complexity is O(1), when the array has only one element, which is then the
candidate for both max and min.
Worksheet No. 13
UNIT II
DIVIDE AND CONQUER
Session – 14
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm analysis
- Strassen's Matrix multiplication - Finding Maximum
and minimum - Algorithm for finding closest pair -
Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
Algorithm for finding closest pair
⚫ We are given an array of n points in the plane,
and the problem is to find out the closest pair of
points in the array.
⚫ For example, in air-traffic control, you may want to
monitor planes that come too close together, since this
may indicate a possible collision.
⚫ Recall the formula for the distance between two
points p and q:
dist(p, q) = sqrt((p.x - q.x)^2 + (p.y - q.y)^2)
⚫ The brute-force solution is O(n^2):
compute the distance between each pair
and return the smallest (see the sketch below).
⚫ We can calculate the smallest distance in
O(n log n) time using the Divide and Conquer
strategy.
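⚫ A minimal runnable C sketch of the brute-force approach, using the distance formula above; the Point type, function names and sample points are illustrative.

#include <stdio.h>
#include <math.h>
#include <float.h>

typedef struct { double x, y; } Point;

/* Euclidean distance between points p and q. */
double dist(Point p, Point q)
{
    return sqrt((p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y));
}

/* Brute-force closest pair: compare every pair, O(n^2). */
double closest_brute_force(const Point pts[], int n)
{
    double best = DBL_MAX;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            double d = dist(pts[i], pts[j]);
            if (d < best) best = d;
        }
    return best;
}

int main(void)
{
    Point pts[] = {{2, 3}, {12, 30}, {40, 50}, {5, 1}, {12, 10}, {3, 4}};
    printf("smallest distance = %f\n", closest_brute_force(pts, 6));  /* pair (2,3)-(3,4) */
    return 0;
}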
Algorithm
Input: An array of n points P[]
Output: The smallest distance between two points in the given array.
As a pre-processing step, the input array is sorted by x coordinate.
1) Find the middle point in the sorted array, we can take P[n/2] as middle point.
2) Divide the given array in two halves. The first subarray contains points from
P[0] to P[n/2]. The second subarray contains points from P[n/2+1] to P[n-1].
3) Recursively find the smallest distances in both subarrays. Let the distances
be dl and dr. Find the minimum of dl and dr. Let the minimum be d.
4) From above 3 steps, we have an upper bound d of minimum distance.
Now we need to consider the pairs in which one point is from the left
half and the other is from the right half. Consider the vertical line passing through
P[n/2] and find all points whose x coordinate is closer than d to
this middle vertical line. Build an array strip[] of all such points.
5) Sort the array strip[] according to y coordinates. This step is O(n log n). It
can be optimized to O(n) by recursively sorting and merging.
6) Find the smallest distance in strip[]. At first glance this looks like an O(n^2)
step, but it is actually O(n): it can be proved geometrically that for
every point in the strip, we only need to check at most 7 points after it (note that the
strip is sorted by y coordinate); see the sketch below.
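⚫ A minimal C sketch of the strip scan of step 6 only (not the full recursion), assuming strip[] is already sorted by y coordinate and d is the best distance found in the two halves; the Point type and function name are illustrative.

#include <math.h>

typedef struct { double x, y; } Point;

/* The inner loop stops as soon as the y gap exceeds the current best distance,
   so each point is compared with only a constant number of successors. */
double strip_closest(const Point strip[], int m, double d)
{
    double best = d;
    for (int i = 0; i < m; i++)
        for (int j = i + 1; j < m && (strip[j].y - strip[i].y) < best; j++) {
            double dx = strip[i].x - strip[j].x;
            double dy = strip[i].y - strip[j].y;
            double t = sqrt(dx * dx + dy * dy);
            if (t < best) best = t;
        }
    return best;
}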
⚫ After dividing, the algorithm finds the strip in O(n) time, sorts the strip in O(n log n)
time, and finally finds the closest points in the strip in O(n) time, so the recurrence is
T(n) = 2T(n/2) + O(n log n).
⚫ Here f(n) = n log n = n^(log_b a) log^1 n, so case 2 of the extended Master theorem applies
and T(n) = Θ(n log^2 n). With the O(n) strip sort of step 5, the recurrence becomes
T(n) = 2T(n/2) + O(n), which gives Θ(n log n).
Session – 15
Syllabus
⚫ Introduction, Binary Search - Merge sort and its
algorithm analysis - Quick sort and its algorithm analysis
- Strassen's Matrix multiplication - Finding Maximum
and minimum - Algorithm for finding closest pair -
Convex Hull Problem
Introduction
⚫ Divide / Break
⚫ Conquer / Solve
⚫ Merge / Combine
CONVEX HULL PROBLEM
⚫ A polygon is convex if any line segment joining
two points on the boundary stays within the
polygon.
⚫ The convex hull of a set of points in the plane is
the smallest convex polygon for which each
point is either on the boundary or in the interior of
the polygon.
⚫ A vertex is a corner of a polygon. For example,
the highest, lowest, leftmost and rightmost
points are all vertices of the convex hull.
Different algorithms
⚫ Graham Scan,
⚫ Jarvis March
⚫ Divide & Conquer
⚫ Graham Scan gives an O(n) time algorithm, apart from the initial sort
which takes O(n log n) time.
Jarvis March
⚫ This is also called the wrapping algorithm.
⚫ This algorithm finds the points on the convex hull in the
order in which they appear.
⚫ It is quick if there are only a few points on the convex hull, but slow if
there are many.
⚫ Let x0 be the leftmost point. Let x1 be the first point counter-
clockwise when viewed from x0. Then x2 is the first point
counter-clockwise when viewed from x1, and so on.
Jarvis March Algorithm
i = 0
while not done do          // done when the next point is x0 again
    x(i+1) = first point counter-clockwise from x(i)
    i = i + 1
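⚫ A minimal runnable C sketch of Jarvis March (gift wrapping), assuming at least three non-collinear points; the Point type, function names and sample points are illustrative.

#include <stdio.h>

typedef struct { double x, y; } Point;

/* cross(a, b, c) > 0 when the turn a -> b -> c is counter-clockwise. */
static double cross(Point a, Point b, Point c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

/* Writes the hull vertices in counter-clockwise order into hull[] and returns their count. */
int jarvis_march(const Point pts[], int n, Point hull[])
{
    int leftmost = 0, h = 0;
    for (int i = 1; i < n; i++)
        if (pts[i].x < pts[leftmost].x) leftmost = i;

    int p = leftmost;
    do {
        hull[h++] = pts[p];
        int q = (p + 1) % n;                           /* candidate for the next vertex */
        for (int i = 0; i < n; i++)
            if (cross(pts[p], pts[i], pts[q]) > 0)     /* pts[i] is more counter-clockwise */
                q = i;
        p = q;                                         /* wrap around to the chosen vertex */
    } while (p != leftmost);
    return h;
}

int main(void)
{
    Point pts[] = {{0, 0}, {4, 0}, {2, 2}, {2, 1}, {1, 4}};
    Point hull[5];
    int h = jarvis_march(pts, 5, hull);
    for (int i = 0; i < h; i++)
        printf("(%g, %g)\n", hull[i].x, hull[i].y);    /* (0,0) (4,0) (1,4) */
    return 0;
}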
Divide & Conquer approach
1. It helps to work with convex hulls that do not overlap. To ensure this, all the points
are presorted from left to right, so we have a left half and a right half, and hence a left
hull and a right hull.
2. Define a bridge as any line segment joining a vertex on the left and a
vertex on the right that does not cross the side of either polygon. To merge the two
hulls, what we need are the top and bottom bridges.
Finding a bridge:
1. Start with any bridge. For example, a bridge is guaranteed if you join the
rightmost vertex on the left to the leftmost vertex on the right.
2. Keeping the left end of the bridge fixed, see if the right end can be raised. That is,
look at the next vertex on the right polygon going clockwise, and see whether
that would be a (better) bridge. Otherwise, see if the left end can be raised while
the right end remains fixed.
3. If no progress was made in (2) (neither side can be raised), then stop; else repeat (2).
Running time
⚫ The key is to perform step (2) in constant time. For
this it is sufficient that each vertex has a pointer to the
next vertex going clockwise and going counter clockwise.
⚫ Hence the choice of data structure: we store each hull in
a circular doubly linked list.
⚫ It follows that the total work done in a merge is
proportional to the number of vertices. This means that the
overall algorithm takes time O(nlog n).
Worksheet No. 15