DAA notes Module 2

The document outlines the Divide and Conquer algorithm, detailing its three main steps: Divide, Conquer, and Combine, along with its advantages and limitations. It also discusses applications of this approach, including detecting counterfeit coins, finding maximum and minimum values in an array, and sorting algorithms like Merge Sort and Quick Sort. Additionally, it provides algorithmic implementations and complexity analyses for these sorting methods.

RV Educational Institutions®
RV Institute of Technology and Management
(Affiliated to VTU, Belagavi)

JP Nagar 8th Phase, Bengaluru - 560076


Department of Computer Science and Engineering

Course Name: Analysis and Design of Algorithms

Course Code: BCS401

IV Semester

2022 Scheme

Module -2
Divide and Conquer
2.1 Divide And Conquer Algorithm
In this approach, we solve a problem recursively by applying three steps, as shown in Fig 2.1:
1. DIVIDE - break the problem into several sub-problems of smaller size.
2. CONQUER - solve each sub-problem recursively.
3. COMBINE - combine these solutions to create a solution to the original problem.

Fig 2.1: The general divide & conquer plan

CONTROL ABSTRACTION FOR DIVIDE AND CONQUER ALGORITHM


Algorithm D-and-C(P)
{
    if Small(P) then
        return S(P)
    else
    {
        divide P into smaller instances P1, P2, ..., Pk
        apply D-and-C to each sub-problem
        return Combine(D-and-C(P1), D-and-C(P2), ..., D-and-C(Pk))
    }
}
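As a sketch of the control abstraction in Python, the problem P is hypothetically taken to be summing a list, so Small(P), S(P), and Combine map to the base case, the direct solution, and `+` respectively (these stand-ins are assumptions for illustration, not part of the abstraction itself):

```python
def divide_and_conquer(P):
    """Control-abstraction sketch: here P is a list and the (hypothetical)
    problem is to sum its elements."""
    if len(P) <= 1:                      # Small(P)
        return P[0] if P else 0          # S(P): solve directly
    mid = len(P) // 2
    left, right = P[:mid], P[mid:]       # divide P into P1, P2
    # conquer each sub-problem recursively, then combine with +
    return divide_and_conquer(left) + divide_and_conquer(right)
```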

Let the recurrence relation be expressed as

T(n) = Θ(1)                          if n ≤ c
T(n) = aT(n/b) + D(n) + C(n)         otherwise

where n = input size, a = number of sub-problems, n/b = input size of each sub-problem, D(n) = cost of dividing the problem, and C(n) = cost of combining the solutions.

Advantages of Divide & Conquer technique:


• For solving conceptually difficult problems like the Tower of Hanoi, divide & conquer is a powerful tool
• Results in efficient algorithms
• Divide & conquer algorithms are adapted for execution in multi-processor machines
• Results in algorithms that use memory cache efficiently
Limitations of divide & conquer technique:
• Recursion is slow
• For a very simple problem, it may be more complicated than an iterative approach
• Stack usage is high, since function states need to be stored
• Memory management is needed

General divide & conquer recurrence:


An instance of size n can be divided into b instances of size n/b, with “a” of them needing to be solved [a ≥ 1, b > 1]. Assume size n is a power of b.
The recurrence for the running time T(n) is as follows:
T(n) = T(1) for n=1
T(n) = aT(n/b) + f(n) for n>1
where:
f(n) – a function that accounts for the time spent on dividing the problem into smaller ones and
on combining their solutions. Therefore, the order of growth of T(n) depends on the values of the constants
a and b and the order of growth of the function f(n).
Application of Divide and Conquer Approach
Following are some problems, which are solved using divide and conquer approach.
• Detecting a counterfeit coin
• Finding the maximum and minimum of a sequence of numbers
• Strassen’s matrix multiplication
• Merge sort
• Binary search

Detecting a Counterfeit Coin:


Given a two-pan fair balance and N identical-looking coins, out of which exactly one coin is lighter (or
heavier). To figure out the odd coin, what is the minimum number of weighings required in the worst
case?
Harder version: Given a two-pan fair balance and N identical-looking coins out of which at most one coin may be
defective. How can we find which coin, if any, is the odd one, and also determine whether it is lighter or heavier,
in the minimum number of trials in the worst case?
Approach 1: Linear method
In this method, weighing the coins two at a time, repeatedly, takes (n-1) weighings in the worst case.

Approach 2: Divide and Conquer method


Algorithms (assume n is a power of 2, say n = 16)
• Left-to-right: Compare (1,2), then (3,4), and so forth until you find the counterfeit one. (≤ 8
weighings, i.e., n/2)
• Divide-and-Conquer: Split into two sets of eight; the lighter set is then split into two sets of four, and so on. (= 4
weighings, i.e., log₂ n). The illustration is shown in Fig 2.2.

Fig 2.2: Divide and conquer method for the Counterfeit Coin problem
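The halving strategy can be sketched in Python. This is a simulation sketch under stated assumptions: exactly one coin is strictly lighter, the number of coins is a power of two, and `sum` over a half stands in for one weighing on the balance (the function name and representation are hypothetical):

```python
def find_lighter(coins, lo=0, hi=None):
    """Locate the single lighter coin in coins[lo:hi] by repeatedly
    weighing the left half against the right half (one 'weighing' per level)."""
    if hi is None:
        hi = len(coins)
    if hi - lo == 1:                      # one coin left: it is the odd one
        return lo
    mid = (lo + hi) // 2
    # one weighing: left half vs right half; recurse into the lighter side
    if sum(coins[lo:mid]) < sum(coins[mid:hi]):
        return find_lighter(coins, lo, mid)
    return find_lighter(coins, mid, hi)
```

With 16 coins this performs log₂ 16 = 4 weighings, matching the figure.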

Max-Min Problem
Problem Statement: The Max-Min Problem in algorithm analysis is finding the maximum and minimum
value in an array.

Solution
• To find the maximum and minimum numbers in a given array numbers[] of size n, the following
algorithms can be used. First we present the naïve method, and then the divide and
conquer approach.

• Naïve Method
• The naïve method is a basic method to solve any problem. In this method, the maximum and minimum
numbers are found separately, using the following straightforward
algorithm.

Algorithm: Max-Min-Element (numbers[])


max := numbers[1]
min := numbers[1]
for i = 2 to n do
{
if numbers[i] > max then max := numbers[i] ;
if numbers[i] < min then min := numbers[i] ;
}
return (max, min) ;
Analysis
• The number of comparisons in the naïve method is 2n − 2.
• The number of comparisons can be reduced using the divide and conquer approach.
Following is the technique: Divide and Conquer Approach
• In this approach, the array is divided into two halves. Using recursion, the maximum and
minimum of each half are found. Then return the larger of the two maxima and
the smaller of the two minima.
• In this given problem, the number of elements in the array is y−x+1, where y is greater than or equal
to x.
• Max−Min(x,y) will return the maximum and minimum values of the array
numbers[x...y].
Algorithm: Max-Min(i, j, max, min)
if (i = j) then max := min := a[i];        // small(P): one element
else if (i = j-1) then                     // small(P): two elements
{
    if (a[i] < a[j]) then
    {
        max := a[j]; min := a[i];
    }
    else
    {
        max := a[i]; min := a[j];
    }
}
else
{
    mid := ⌊(i+j)/2⌋;
    Max-Min(i, mid, max, min);
    Max-Min(mid+1, j, max1, min1);
    if (max < max1) then max := max1;
    if (min > min1) then min := min1;
}

Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem,
where n is the number of elements in the list a[i], ..., a[j].
Let small(P) be true when n ≤ 2.
• If n = 1, the maximum and minimum are both a[i].
• If n = 2, the problem can be solved by making one comparison.
• If the list has more than 2 elements, P has to be divided into smaller instances,
P1 = (n/2, a[1], ..., a[n/2]) and
P2 = (n − n/2, a[n/2 + 1], ..., a[n]).
After this, the problem can be solved by recursively invoking the same algorithm on P1 and P2.
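The recursive Max-Min above can be sketched in Python; the returned comparison count is an addition for checking the 3n/2 − 2 bound, not part of the original pseudocode:

```python
def max_min(a, i, j):
    """Divide-and-conquer Max-Min on a[i..j] (inclusive).
    Returns (maximum, minimum, number_of_element_comparisons)."""
    if i == j:                            # small(P): one element
        return a[i], a[i], 0
    if i == j - 1:                        # small(P): two elements, one comparison
        return (a[j], a[i], 1) if a[i] < a[j] else (a[i], a[j], 1)
    mid = (i + j) // 2
    mx1, mn1, c1 = max_min(a, i, mid)     # left half
    mx2, mn2, c2 = max_min(a, mid + 1, j) # right half
    # two more comparisons to combine the two halves
    return max(mx1, mx2), min(mn1, mn2), c1 + c2 + 2
```

On the example array below, this finds max = 60 and min = −8; for n = 8 (a power of two) the count is 3·8/2 − 2 = 10.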

Example
a: [1] [2] [3] [4] [5] [6] [7] [8] [9]
22 13 -5 -8 15 60 17 31 47
A good way of keeping track of recursive calls is to build a tree by adding a node each time a new
call is made. On the array a[ ] above, the following tree is produced as shown in Fig 2.3.

Fig 2.3: Tree produced by algorithm Max-Min

Analysis
• Let T(n) be the number of comparisons made by Max−Min().
• The recurrence relation is
T(n) = 2T(n/2) + 2 for n > 2, T(2) = 1, T(1) = 0.
• Assume that n is a power of two, n = 2^k, where k is the height of the recursion tree. Then
T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     .
     .
     .
     = 2^(k-1) T(2) + Σ(1 ≤ i ≤ k-1) 2^i
     = 2^(k-1) + 2^k − 2
     = 3n/2 − 2 = O(n)
Note that 3n/2 − 2 is the best-, average-, and worst-case number of comparisons when n is a power of two.

• Compared to the naïve method, the divide and conquer approach makes fewer comparisons (3n/2 − 2 versus 2n − 2).
However, in asymptotic notation both approaches are O(n).

2.2 Merge sort and its complexity.


Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.

Features:
• Is a comparison based algorithm
• Is a stable algorithm
• Is a perfect example of divide & conquer algorithm design strategy
• It was invented by John Von Neumann

Algorithm:
ALGORITHM Mergesort ( A[0… n-1] )
//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order
if n > 1
    copy A[0… (n/2 - 1)] to B[0… (n/2 - 1)]
    copy A[n/2… (n - 1)] to C[0… (n/2 - 1)]
    Mergesort ( B[0… (n/2 - 1)] )
    Mergesort ( C[0… (n/2 - 1)] )
    Merge ( B, C, A )

ALGORITHM Merge ( B[0… p-1], C[0… q-1], A[0… p+q-1] )


//merges two sorted arrays into one sorted array
//i/p: arrays B, C, both sorted
//o/p: Sorted array A of elements from B & C
i <-- 0
j<-- 0
k<--0
while i < p and j < q do
if B[i]<= C[j]
A[k] <-- B[i]
i<-- i + 1
else
A[k]<-- C[j]
j<--j + 1
k<-- k + 1
if i == p
copy C [ j… q-1 ] to A [ k… (p+q-1) ]
else
copy B [ i… p-1 ] to A [ k… (p+q-1) ]
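The two procedures above can be sketched together in Python. Returning a new list rather than sorting A in place is a simplification of the pseudocode's copy-into-B-and-C scheme:

```python
def mergesort(a):
    """Recursive mergesort; returns a new sorted list."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = mergesort(a[:mid])          # sort left half  (array B)
    c = mergesort(a[mid:])          # sort right half (array C)
    # Merge(B, C, A): repeatedly take the smaller front element
    out, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:            # <= keeps the sort stable
            out.append(b[i]); i += 1
        else:
            out.append(c[j]); j += 1
    out.extend(b[i:])               # copy whichever half remains
    out.extend(c[j:])
    return out
```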

Example:
Apply merge sort for the following list of elements: 6, 3, 7, 8, 2, 4, 5, 1
Solution: Merge sort illustration is shown in Fig 2.4.

Fig 2.4: Merge Sort illustration


Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Best, worst, and average cases exist.
Worst case: during merging, neither of the two arrays becomes empty before the other one
contains just one element.
• Let T(n) denotes the number of times basic operation is executed. Then
T(n) = 2T(n/2) + Cmerge(n) for n > 1
T(1) = 0
where, Cmerge(n) is the number of key comparison made during the merging stage.
In the worst case Cmerge(n) = n − 1, giving
Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.
To solve this, consider the slightly simplified recurrence:
(1) T(1) = 1
(2) T(N) = 2T(N/2) + N
Next we will solve this recurrence relation. First we divide (2) by N:
(3) T(N) / N = T(N/2) / (N/2) + 1

N is a power of two, so we can write


(4) T(N/2) / (N/2) = T(N/4) / (N/4) +1
(5) T(N/4) / (N/4) = T(N/8) / (N/8) +1
(6) T(N/8) / (N/8) = T(N/16) / (N/16) +1
(7) ……
(8) T(2) / 2 = T(1) / 1 + 1

Now we add equations (3) through (8) : the sum of their left-hand sides
will be equal to the sum of their right-hand sides:
T(N) / N + T(N/2) / (N/2) + T(N/4) / (N/4) + … + T(2)/2 =
T(N/2) / (N/2) + T(N/4) / (N/4) + ….+ T(2) / 2 + T(1) / 1 + LogN
(LogN is the sum of 1s in the right-hand sides)

After cancelling the equal terms, we get


(9) T(N)/N = T(1)/1 + LogN
T(1) is 1, hence we obtain
(10) T(N) = N + NlogN = O(NlogN)
Hence the complexity of the MergeSort algorithm is O(NlogN).

Advantages:
• Number of comparisons performed is nearly optimal.
• Mergesort will never degrade to O(n²)
• It can be applied to files of any size

Limitations:
• Uses O(n) additional memory.

2.3 Quick Sort (Also known as “partition-exchange sort”)


Definition:
Quick sort is a well-known sorting algorithm based on the divide & conquer approach. The steps are:
1. Pick an element called pivot from the list
2. Reorder the list so that all elements which are less than the pivot come before the
pivot and all elements greater than pivot come after it. After this partitioning, the
pivot is in its final position. This is called the partition operation
3. Recursively sort the sub-list of lesser elements and sub-list of greater elements.
Features:
• Developed by C.A.R. Hoare
• Efficient algorithm

• NOT stable sort


• Significantly faster in practice than other algorithms

Algorithm
ALGORITHM Quicksort (A[ l …r ])
//sorts by quick sort
//i/p: A sub-array A[l..r] of A[0..n-1],defined by its left and right indices l and r
//o/p: The sub-array A[l..r], sorted in ascending order
if l < r
    s <-- Partition (A[l..r]) // s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])

ALGORITHM Partition (A[l ..r])


//Partitions a sub-array by using its first element as a pivot
//i/p: A sub-array A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//o/p: A partition of A[l..r], with the split position returned as this function’s value
p <-- A[l]
i <-- l; j <-- r + 1
repeat
    repeat i <-- i + 1 until A[i] >= p //left-to-right scan
    repeat j <-- j – 1 until A[j] <= p //right-to-left scan
    if (i < j) //need to continue with the scan
        swap(A[i], A[j])
until i >= j //scans have crossed: no need to continue
swap(A[l], A[j]) //place the pivot in its final position
return j
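A Python sketch of quicksort with the Hoare-style partition above, using the first element as pivot. The bounds guard on the left scan (`i < r`) is an added safety detail replacing the pseudocode's implicit sentinel assumption:

```python
def partition(a, l, r):
    """Partition a[l..r] around pivot a[l]; return the pivot's final index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:   # left-to-right scan: stop at a[i] >= p
            i += 1
        j -= 1
        while a[j] > p:             # right-to-left scan: stop at a[j] <= p
            j -= 1
        if i >= j:                  # scans have crossed
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]         # put pivot into its final position
    return j

def quicksort(a, l=0, r=None):
    """In-place quicksort of a[l..r]."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)      # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
```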

Example: Sort by quick sort the following list: 5, 3, 1, 9, 8, 2, 4, 7, show recursion tree.
Illustration of quick sort is shown in Fig 2.5.

Fig 2.5: Quick Sort Illustration

Recurrence relation based on the code

1. The partitioning scans stop when the indexes cross, hence each call does about N iterations
2. A swap is one operation – disregarded
3. Two recursive calls:
a. Best case: each call is on half the array, hence the time is 2T(N/2)
b. Worst case: one part is empty, the other has N-1 elements, hence the time is T(N-1)

T(N) = T(i) + T(N - i - 1) + cN, where i is the number of elements in the left partition

The time to sort the file is equal to


o the time to sort the left partition with i elements, plus
o the time to sort the right partition with N-i-1 elements, plus
o the time to build the partitions

Worst case analysis:


The pivot is the smallest element
T(N) = T(N-1) + cN, N > 1
Telescoping:
T(N-1) = T(N-2) + c(N-1)
T(N-2) = T(N-3) + c(N-2)
T(N-3) = T(N-4) + c(N-3)
…
T(2) = T(1) + c·2

Add all equations:


T(N) + T(N-1) + T(N-2) + … + T(2) =
= T(N-1) + T(N-2) + … + T(2) + T(1) + c·N + c(N-1) + c(N-2) + … + c·2
T(N) = T(1) + c·(2 + 3 + … + N) = T(1) + c(N(N+1)/2 − 1) = O(N²)

Best-case analysis:
The pivot is in the middle
T(N) = 2T(N/2) + cN
Divide by N:
T(N) / N = T(N/2) / (N/2) + c
Telescoping:
T(N/2) / (N/2) = T(N/4) / (N/4) + c
T(N/4) / (N/4) = T(N/8) / (N/8) + c
……
T(2) / 2 = T(1) / (1) + c

Add all equations:


T(N) / N + T(N/2) / (N/2) + T(N/4) / (N/4) + … + T(2) / 2 =
= T(N/2) / (N/2) + T(N/4) / (N/4) + … + T(1) / (1) + c·logN

After cancelling the equal terms: T(N)/N = T(1) + c·logN


T(N) = N + NcLogN = O(NlogN)

Average case analysis


Similar computations, resulting in T(N) = O(NlogN)

The average value of T(i) is 1/N times the sum of T(0) through T(N-1):
(1/N) Σ T(j), j = 0 thru N-1
T(N) = (2/N)(Σ T(j)) + cN
Multiply by N:
N·T(N) = 2(Σ T(j)) + cN²

To remove the summation, we rewrite the equation for N-1:


(N-1)T(N-1) = 2(Σ T(j)) + c(N-1)², j = 0 thru N-2

and subtract:
NT(N) - (N-1)T(N-1) = 2T(N-1) + 2cN -c

Prepare for telescoping. Rearrange terms, drop the insignificant c:


NT(N) = (N+1)T(N-1) + 2cN
Divide by N(N+1):
T(N)/(N+1) = T(N-1)/N + 2c/(N+1)

Telescope:
T(N)/(N+1) = T(N-1)/N + 2c/(N+1)
T(N-1)/(N) = T(N-2)/(N-1)+ 2c/(N)
T(N-2)/(N-1) = T(N-3)/(N-2) + 2c/(N-1)
….
T(2)/3 = T(1)/2 + 2c /3

Add the equations and cancel the equal terms:


T(N)/(N+1) = T(1)/2 + 2c Σ (1/j), j = 3 to N+1
The sum Σ (1/j), j = 3 to N+1, is about ln N
Thus T(N) = O(NlogN)

2.4 Binary search


Binary search can be performed on a sorted array. In this approach, the index of an element x is determined
if the element belongs to the list of elements. If the array is unsorted, linear search is used to determine the
position.

Solution

In this algorithm, we want to find whether element x belongs to a set of numbers stored in an array
numbers[], where l and r represent the left and right indices of the sub-array in which the searching operation
should be performed.

Algorithm: Binary-Search(numbers[], x, l, r)
if l = r then
return l
else
m := ⌊(l + r) / 2⌋
if x ≤ numbers[m] then
return Binary-Search(numbers[], x, l, m)
else
return Binary-Search(numbers[], x, m+1, r)
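A Python sketch of this variant: it always narrows down to a single position and returns it, so the caller then checks whether the element at that index actually equals x:

```python
def binary_search(numbers, x, l, r):
    """Recursive binary search on the sorted list numbers[l..r].
    Returns the index where x is, if x occurs in the list."""
    if l == r:                      # range narrowed to one position
        return l
    m = (l + r) // 2
    if x <= numbers[m]:             # x can only be in the left part
        return binary_search(numbers, x, l, m)
    return binary_search(numbers, x, m + 1, r)   # otherwise the right part
```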
Analysis
Linear search runs in O(n) time, whereas binary search produces the result in O(log n) time. Let T(n) be
the number of comparisons in the worst case on an array of n elements.
Hence,

T(1) = 1
T(n) = T(n/2) + 1 for n > 1

Solving this recurrence relation gives T(n) ≈ log n.
Therefore, binary search uses O(log n) time.
Example
In this example, we are going to search element 63.

Best case - O (1) comparisons


In the best case, the item X is the middle element of the array A. A constant number of comparisons (actually just 1) is
required.

Worst case - O (log n) comparisons


In the worst case, the item X does not exist in the array A at all. Through each recursion or iteration of Binary
Search, the size of the admissible range is halved. This halving can be done ⌈lg n⌉ times. Thus, ⌈lg n⌉
comparisons are required.
Average case - O (log n) comparisons
To find the average case, take the sum over all elements of the product of number of comparisons required to
find each element and the probability of searching for that element. To simplify the analysis, assume that no item
which is not in A will be searched for, and that the probabilities of searching for each element are uniform.

The difference between O(log(N)) and O(N) is extremely significant when N is large: for any practical problem
it is crucial that we avoid O(N) searches. For example, suppose your array contains 2 billion (2 * 10**9) values.
Linear search would involve about a billion comparisons; binary search would require only 32 comparisons!

The space requirements for the recursive and iterative versions of binary search are different. Iterative Binary
Search requires only a constant amount of space, while Recursive Binary Search requires space proportional to
the number of comparisons to maintain the recursion stack.

Applications of binary search:


• Number guessing games
• Word lists / dictionary search, etc.
Advantages:
• Efficient on very large lists
• Can be implemented iteratively or recursively
Limitations:
• Interacts poorly with the memory hierarchy
• Requires the given list to be sorted
• Due to random access of list elements, needs arrays instead of linked lists

2.5 Matrix multiplication


We first discuss the general method of matrix multiplication, and later we will discuss Strassen's matrix multiplication
algorithm.
Problem Statement
Let us consider two matrices X and Y. We want to calculate the resultant matrix Z by multiplying X and Y.

Naïve Method
First, we will discuss the naïve method and its complexity. Here, we are calculating Z = X × Y. Using the naïve
method, two matrices X and Y can be multiplied if their orders are p × q and q × r; the resulting matrix Z has order p × r. Following
is the algorithm.

Algorithm: Matrix-Multiplication (X, Y, Z)


for i = 1 to p do
for j = 1 to r do
Z[i,j] := 0
for k = 1 to q do
Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]
Complexity
Here, we assume that integer operations take O(1) time. There are three forloops in this algorithm and one
is nested in other. Hence, the algorithm takes O(n3) time to execute.
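A direct Python sketch of the triple loop, using nested lists for the matrices:

```python
def matmul(X, Y):
    """Naïve multiplication of a p×q matrix X by a q×r matrix Y."""
    p, q, r = len(X), len(Y), len(Y[0])
    Z = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):          # inner product of row i and column j
                Z[i][j] += X[i][k] * Y[k][j]
    return Z
```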

Strassen’s Matrix Multiplication Algorithm


Description :
Strassen’s algorithm is used for matrix multiplication. It is asymptotically faster than the standard matrix
multiplication algorithm.

ALGORITHM using Divide & Conquer method:


Let A and B be two square matrices, and C = A * B.
Divide each of A, B, and C into four equally sized sub-matrices (blocks A00, A01, A10, A11, and similarly for B and C). Then the blocks of C are computed as
C00 = M1 + M4 − M5 + M7      C01 = M3 + M5
C10 = M2 + M4                C11 = M1 − M2 + M3 + M6
Where:
M1 = (A00 + A11) * (B00 + B11)
M2 = (A10 + A11) * B00
M3 = A00 * (B01 – B11)
M4 = A11 * (B10 – B00)
M5 = (A00 + A01) * B11
M6 = (A10 – A00) * (B00 + B01)
M7 = (A01 – A11) * (B10 + B11)
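One level of the recursion can be sketched in Python with scalars standing in for the sub-matrix blocks; the C-block combinations used here are the standard ones (C00 = M1+M4−M5+M7, C01 = M3+M5, C10 = M2+M4, C11 = M1−M2+M3+M6):

```python
def strassen_2x2(A, B):
    """One level of Strassen's recursion on 2×2 matrices: seven products
    M1..M7 instead of the usual eight."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    # combine the seven products into the four blocks of C
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

For full matrices the same formulas are applied recursively, with matrix addition and multiplication replacing the scalar operations.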

Analysis:
• Input size: n – order of square matrix.
• Basic operations per recursive step:
o Multiplications: 7
o Additions/subtractions: 18
• No best, worst, average case
• Let M(n) be the number of multiplications made by the algorithm. Therefore we have:
M(n) = 7M(n/2) for n > 1
M(1) = 1
Assume n = 2^k:
M(2^k) = 7M(2^(k-1))
       = 7[7M(2^(k-2))]
       = 7² M(2^(k-2))
       ...
       = 7^i M(2^(k-i))
When i = k:
       = 7^k M(2^(k-k))
       = 7^k = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.807

2.6 Decrease & Conquer


Introduction:
Decrease & conquer is a general algorithm design strategy based on exploiting the relationship between a
solution to a given instance of a problem and a solution to a smaller instance of the same problem. The
exploitation can be either top-down (recursive) or bottom-up (non-recursive).
The major variations of decrease and conquer are:
1. Decrease by a constant :(usually by 1):
a. insertion sort
b. graph traversal algorithms (DFS and BFS)
c. topological sorting
d. algorithms for generating permutations, subsets
2. Decrease by a constant factor (usually by half)
a. binary search and bisection method
3. Variable size decrease
a. Euclid’s algorithm

Fig 2.6 below illustrates the first variation, decrease by a constant (usually by one).

Fig 2.6: Decrease by a Constant

Decrease by a constant factor (usually by half) is shown in Fig 2.7.

Fig 2.7 : Decrease by a constant factor

2.7 Insertion sort

Insertion sort works similarly to the way playing cards are sorted in one's hand. It is assumed that the first card is already
sorted; then we select an unsorted card. If the selected unsorted card is greater than
the first card, it is placed to the right; otherwise, to the left. Similarly, all
unsorted cards are taken and put in their exact place.

The same approach is applied in insertion sort: take one element at a time and insert it into its correct
position within the already-sorted part of the array. Although it is simple to use, it is not appropriate for large data
sets, as the time complexity of insertion sort in the average case and worst case is O(n²), where n is the
number of items. Insertion sort is less efficient than other sorting algorithms like heap sort, quick sort,
merge sort, etc.

Insertion sort has various advantages such as -

o Simple implementation
o Efficient for small data sets
o Adaptive, i.e., it is appropriate for data sets that are already substantially sorted.

Now, let's see the algorithm of insertion sort.

Algorithm
The simple steps of achieving the insertion sort are listed as follows -

Step 1 - If the element is the first element, assume that it is already sorted.

Step 2 - Pick the next element, and store it separately in a key.

Step 3 - Now, compare the key with all elements in the sorted array.

Step 4 - If the element in the sorted array is smaller than the key, then move to the next
element. Else, shift greater elements in the array towards the right.

Step 5 - Insert the key at its correct position.

Step 6 - Repeat until the array is sorted.
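The steps above can be sketched as:

```python
def insertion_sort(a):
    """In-place insertion sort; returns the same (now sorted) list."""
    for i in range(1, len(a)):      # a[0] is trivially "already sorted"
        key = a[i]                  # pick the next element, store it in key
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]         # shift greater elements to the right
            j -= 1
        a[j + 1] = key              # insert the key at its correct position
    return a
```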

Working of Insertion sort Algorithm


Now, let's see the working of the insertion sort Algorithm.

To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be easier to
understand the insertion sort via an example.

Let the elements of the array be: 12, 31, 25, 8, 32, 17.

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for now, 12 is
stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with swapping,
insertion sort will also check the new element against all elements in the sorted sub-array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted array
remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are 31 and
8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Insertion sort complexity


Now, let's see the time complexity of insertion sort in best case, average case, and in worst case. We will
also see the space complexity of insertion sort.

1. Time Complexity
Case            Time Complexity
Best Case       O(n)
Average Case    O(n²)
Worst Case      O(n²)

o Best Case Complexity - It occurs when no sorting is required, i.e., the array is already sorted. The best-case
time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average-case time complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That
means suppose you have to sort the array elements in ascending order, but its elements are in descending
order. The worst-case time complexity of insertion sort is O(n²).

2. Space Complexity
Space Complexity O(1)
Stable YES

o The space complexity of insertion sort is O(1), because only a constant amount of extra space (one
variable, for the key) is required.

2.8 Graph searching algorithms - Depth-first search (DFS) and Breadth-first search (BFS)

DFS and BFS are two graph traversal algorithms; both follow the decrease-and-conquer approach (decrease-by-one
variation) to traverse the graph.
Some useful definitions:
• Tree edges: edges used by the DFS traversal to reach previously unvisited vertices
• Back edges: edges connecting vertices to previously visited vertices other than their
immediate predecessor in the traversal
• Cross edges: edges that connect vertices to previously visited vertices other than their
ancestors or descendants (in BFS they connect siblings)
• DAG: directed acyclic graph

Depth-first search (DFS)


Description:
• DFS starts visiting vertices of a graph at an arbitrary vertex by marking it as visited.
• It visits the graph's vertices by always moving away from the last visited vertex to an
unvisited one, and backtracks if no adjacent unvisited vertex is available.


• Is a recursive algorithm; it implicitly uses a stack
• A vertex is pushed onto the stack when it’s reached for the first time
• A vertex is popped off the stack when it becomes a dead end, i.e., when there is no
adjacent unvisited vertex
• “Redraws” graph in tree-like fashion (with tree edges and back edges for undirected
graph)

Algorithm:
ALGORITHM DFS (G)
//implements DFS traversal of a given graph
//i/p: Graph G = { V, E}
//o/p: DFS tree
Mark each vertex in V with 0 as a mark of being “unvisited”
count <--0
for each vertex v in V do
if v is marked with 0
dfs(v)
dfs(v) //definition of dfs(v): recursively visits all unvisited vertices connected to v
count <--count + 1
mark v with count
for each vertex w in V adjacent to v do
if w is marked with 0
dfs(w)
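A Python sketch of the traversal, with the graph given as an adjacency dict (a representation choice for illustration, not prescribed by the pseudocode):

```python
def dfs_all(graph):
    """DFS traversal mirroring the pseudocode: marks each vertex with the
    order (1, 2, ...) in which it is first visited; 0 means unvisited."""
    mark = {v: 0 for v in graph}
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        mark[v] = count                 # mark v with count
        for w in graph[v]:              # vertices adjacent to v
            if mark[w] == 0:
                dfs(w)

    for v in graph:                     # covers disconnected graphs too
        if mark[v] == 0:
            dfs(v)
    return mark
```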

The DFS tree is shown in the Fig 2.8 below.

Fig 2.8: DFS tree

Breadth-first search (BFS)


Description:
• BFS starts visiting vertices of a graph at an arbitrary vertex by marking it as visited.
• It visits the graph's vertices by moving across to all the neighbors of the last visited vertex
• Instead of a stack, BFS uses a queue
• Similar to level-by-level tree traversal
• “Redraws” graph in tree-like fashion (with tree edges and cross edges for undirected
graph)

Algorithm:

ALGORITHM BFS (G)


//implements BFS traversal of a given graph
//i/p: Graph G = { V, E}
//o/p: BFS tree/forest

Mark each vertex in V with 0 as a mark of being “unvisited”


count <--0
for each vertex v in V do
if v is marked with 0
bfs(v)
bfs(v) //definition of bfs(v): visits, level by level, all unvisited vertices connected to v
count <-- count + 1
mark v with count and initialize a queue with v
while the queue is NOT empty do
for each vertex w in V adjacent to front’s vertex v do
if w is marked with 0
count<-- count + 1
mark w with count
add w to the queue
remove vertex v from the front of the queue
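A matching Python sketch using `collections.deque` as the queue (again with an adjacency-dict graph, a representation choice for illustration):

```python
from collections import deque

def bfs_all(graph):
    """BFS traversal mirroring the pseudocode: queue-based, marks vertices
    with the order (1, 2, ...) in which they are first reached."""
    mark = {v: 0 for v in graph}        # 0 means unvisited
    count = 0
    for v in graph:                     # covers disconnected graphs too
        if mark[v] == 0:
            count += 1
            mark[v] = count             # mark v and initialize the queue
            queue = deque([v])
            while queue:
                u = queue[0]            # front's vertex
                for w in graph[u]:
                    if mark[w] == 0:
                        count += 1
                        mark[w] = count
                        queue.append(w)
                queue.popleft()         # remove u from the front
    return mark
```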

2.9 Topological Sorting


Description:
Topological sorting is a sorting method to list the vertices of the graph in such an order that for every edge
in the graph, the vertex where the edge starts is listed before the vertex where the edge ends.

NOTE:
There is no solution for topological sorting if there is a cycle in the digraph
[the graph MUST be a DAG].
The topological sorting problem can be solved by using:
1. The DFS method
2. The source removal method

Topological Sort Algorithms: DFS based algorithm


Topological-Sort(G)
{
1. Call dfsAllVertices on G to compute f[v] for each vertex v
2. If G contains a back edge (v, w) (i.e., if f[w] > f[v]) , report error ;
3. else, as each vertex is finished prepend it to a list; // or push in stack
4. Return the list; // list is a valid topological sort
}

• Running time is O(V+E), which is the running time for DFS.

Topological Sort Algorithms: Source Removal Algorithm


• The Source Removal Topological sort algorithm is:
– Pick a source u [vertex with in-degree zero], output it.
– Remove u and all edges out of u.
– Repeat until graph is empty.

int topologicalOrderTraversal( ){
int numVisitedVertices = 0;
while(there are more vertices to be visited){
if(there is no vertex with in-degree 0)
break;
else{
select a vertex v that has in-degree 0;
visit v;
numVisitedVertices++;
delete v and all its emanating edges;
}
}
return numVisitedVertices;
}
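The source removal algorithm above can be sketched in Python using in-degree counts (adjacency-dict representation assumed for illustration):

```python
from collections import deque

def topological_sort(graph):
    """Source-removal topological sort of a digraph given as an adjacency
    dict. Returns a valid order, or None if the graph contains a cycle
    (i.e., at some point no vertex with in-degree 0 remains)."""
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indeg[w] += 1
    sources = deque(v for v in graph if indeg[v] == 0)
    order = []
    while sources:
        u = sources.popleft()           # pick a source (in-degree 0), output it
        order.append(u)
        for w in graph[u]:              # remove u and all edges out of u
            indeg[w] -= 1
            if indeg[w] == 0:
                sources.append(w)
    return order if len(order) == len(graph) else None
```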

*****
