daa-ct1-set-c-answer-key
School of Computing
Course Code & Title: 21CSC204J Design and Analysis of Algorithms    Duration: 1 hr 40 min
Course Outcome (CO) Mapping:
CO1 2 1 2 1 - - - - 3 - 3 3 1 -
CO2 2 1 2 1 - - - - 3 - 3 3 1 -
CO3 2 1 2 1 - - - - 3 - 3 3 1 -
CO4 2 1 2 1 - - - - 3 - 3 3 1 -
CO5 2 1 2 1 - - - - 3 - 3 3 1 -
Part - A
(1 x 10 = 10 Marks)
Instructions: Answer all the questions
1 Number of comparisons required to search an element x = 18 in the list 1 2 1 1,4 4.4.2
A = [5, 44, 89, 22, 18, 9, 3, 15, 8] using linear search
a) 3
b) 1
c) 4
d) 5
Ans: d) 5
2 What is the time complexity of the following code? Assume that n > 0. 1 4 1 2,3 2.8.2
int segment(int n) {
    if (n == 1)
        return 1;                       /* base case */
    else
        return (n + segment(n - 1));    /* one recursive call, so n levels in total */
}
a) O(n)
b) O(log n)
c) O(n^2)
d) O(n!)
Ans: a
3 Which of the following functions provides the maximum asymptotic complexity? 1 2 1 2 2.8.2
a) f1(n) = n^(3/2)
b) f2(n) = n^(log n)
c) f3(n) = n log n
d) f4(n) = 2^n
Ans: d.
printf("%d %d", i, j);
}}
}
a) Θ(n√n)
b) Θ(n^2)
c) Θ(n log n)
d) Θ(n^2 log n)
Ans: c
Ans: C
6 What is the worst case time complexity of a quick sort algorithm? 1 1 2 2 2.8.2
a) O(N)
b) O(N log N)
c) O(N^2)
d) O(log N)
Ans: C
7 Which of the following sorting algorithms provides the best time complexity in the 1 3 2 2,4 4.4.3
worst-case scenario?
a) Merge Sort
b) Quick Sort
c) Bubble Sort
d) Selection Sort
Ans: a
9 Apply Quick sort on the given sequence 7 11 14 6 9 4 3 12. What is the sequence after the 1 3 2 2 2.5.2
first phase, if the pivot is the first element?
a) 6 4 3 7 11 9 14 12
b) 6 3 4 7 9 14 11 12
c) 7 6 14 11 9 4 3 12
d) 7 6 4 3 9 14 11 12
Ans: b
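A brief trace (assuming the Hoare-style partition usually taught, with the first element as pivot and pointers i, j scanning inwards) shows how option b arises:
Pivot = 7:  7 11 14 6 9 4 3 12   → swap 11 and 3 →  7 3 14 6 9 4 11 12
                                 → swap 14 and 4 →  7 3 4 6 9 14 11 12
i and j cross (at 9 and 6); swap the pivot 7 with 6 → 6 3 4 7 9 14 11 12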
10 What is the time complexity of the largest subarray sum problem using the naïve approach? 1 1 2 2 2.8.2
a) T(n) = O(n)
b) T(n) = O(log n)
c) T(n) = O(n^2 log n)
d) T(n) = O(n^2)
Ans: d
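For reference, a minimal C sketch of the naïve approach (the function name and sample data are illustrative, not from the question paper); the two nested loops account for the O(n^2) bound:

#include <stdio.h>
#include <limits.h>

/* Naïve largest-subarray-sum: try every start index and extend the
   running sum to every end index, keeping the best sum seen so far. */
int maxSubarraySumNaive(const int a[], int n) {
    int best = INT_MIN;
    for (int i = 0; i < n; i++) {          /* n choices of start index   */
        int sum = 0;
        for (int j = i; j < n; j++) {      /* up to n choices of end     */
            sum += a[j];                   /* O(1) work per pair (i, j)  */
            if (sum > best) best = sum;
        }
    }
    return best;                           /* total work: Θ(n^2)         */
}

int main(void) {
    int a[] = {-2, 1, -3, 4, -1, 2, 1, -5, 4};   /* illustrative data */
    printf("%d\n", maxSubarraySumNaive(a, 9));   /* prints 6 */
    return 0;
}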
Part – B
(5 x 4 = 20 Marks)
Instructions: Answer All the Questions
11 Express the function f(n) = n^3/1000 − 100n^2 − 100n + 3 in terms of Theta notation. 5 2 1 1 1.2.1
Ans:
n^3/1000 − 100n^2 − 100n + 3 = Θ(n^3)
For c1 = 1/2000, c2 = 1, f(n) = n^3/1000 − 100n^2 − 100n + 3 and g(n) = n^3.
(Or)
The highest-order term in the function is n^3/1000. As n grows larger, the influence of the
other terms diminishes relative to n^3/1000, so for sufficiently large n the function is
dominated by this term. Dropping the lower-order terms and the constant coefficient, we write
the function as Θ(n^3).
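A short verification of these constants (a sketch; the particular n0 below is one convenient choice, not the only one):
c_1 g(n) \le f(n) \le c_2 g(n), \quad c_1 = \tfrac{1}{2000},\ c_2 = 1,\ g(n) = n^3
Upper bound: f(n) = \frac{n^3}{1000} - 100n^2 - 100n + 3 \le \frac{n^3}{1000} + 3 \le n^3 \text{ for } n \ge 2
Lower bound: f(n) \ge \frac{n^3}{2000} \iff \frac{n^3}{2000} \ge 100n^2 + 100n - 3, \text{ which holds for all } n \ge 3 \times 10^5
Hence f(n) = \Theta(n^3).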
12 State the objective of Strassen Matrix Multiplication and list the steps 5 3 1 1 1.2.1
involved in the process
Analyze the order of growth:
(i) f(n) = 2n^2 + 5 and g(n) = 7n. Use the Ω(g(n)) notation.
Ans:
f(n) = 2n^2 + 5 and g(n) = 7n.
We need to find the constant c such that
f(n)≥ c∗ g(n).
Let n = 0, then
f(n) = 2n^2 + 5 = 2(0)^2 + 5 = 5
g(n) = 7n = 7(0) = 0
Here, f(n) > g(n)
Let n = 1, then
f(n) = 2n^2 + 5 = 2(1)^2 + 5 = 7
g(n) = 7n = 7(1) = 7
Here, f(n) = g(n)
Let n = 2, then
f(n) = 2n^2 + 5 = 2(2)^2 + 5 = 13
g(n) = 7n = 7(2) = 14
Here, f(n) < g(n)
Let n = 3, then f(n) = 2(3)^2 + 5 = 23 and g(n) = 7(3) = 21, so f(n) > g(n).
Thus, for c = 1 and n0 = 3, f(n) ≥ c ∗ g(n) for all n ≥ n0, i.e. f(n) = Ω(g(n)).
This concludes that Omega helps to determine the "lower bound" of the algorithm's running time.
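The crossover can also be checked algebraically (a sketch of the same bound):
2n^2 + 5 \ge 7n \iff 2n^2 - 7n + 5 \ge 0 \iff (2n - 5)(n - 1) \ge 0,
which holds for every n \ge 2.5 (and for n \le 1), so c = 1 and n_0 = 3 witness f(n) = \Omega(g(n)).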
13 Develop the algorithmic steps to find the maximum and minimum 5 2 2 4 4.5.1
element in the given list.
Illustrate the operation of merge sort on the array
A = {3, 41, 52, 26, 38, 57, 9, 49}
Ans: Explain the divide operation and then the merge operation (2 marks); diagram (3 marks), as traced below.
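One possible level-by-level trace for the diagram (standard top-down merge sort, halving each subarray and then merging pairs):
Divide: {3, 41, 52, 26, 38, 57, 9, 49}
        {3, 41, 52, 26}          {38, 57, 9, 49}
        {3, 41} {52, 26}         {38, 57} {9, 49}
        {3} {41} {52} {26}       {38} {57} {9} {49}
Merge:  {3, 41} {26, 52}         {38, 57} {9, 49}
        {3, 26, 41, 52}          {9, 38, 49, 57}
        {3, 9, 26, 38, 41, 49, 52, 57}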
14 i) Write the recurrence relation of Matrix Multiplication using divide and conquer 5 4 2 4 4.6.2
approach and solve it to find time complexity (2 marks)
ii) How Strassen Matrix Multiplication algorithms reduces time complexity of matrix
multiplication (1 mark)
iii) Write the recurrence relation of Strassen Matrix Multiplication and solve it to find its
time complexity (2 marks)
Ans:
i) T(n) = 8T(n/2) + Θ(n^2), since the straightforward divide-and-conquer method performs 8 recursive
multiplications of n/2 × n/2 matrices plus Θ(n^2) work for the additions. This solves to T(n) = Θ(n^3).
ii) Strassen's method performs only 7 recursive multiplications instead of 8, at the cost of several new
additions of n/2 × n/2 matrices, but still only a constant number of additions.
iii) T(n) = 7T(n/2) + Θ(n^2), which solves to T(n) = Θ(n^lg 7). Since lg 7 lies between 2.80 and 2.81,
Strassen's algorithm runs in O(n^2.81) time.
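The master-theorem step, written out (a sketch using the standard case-1 form):
T(n) = 8T(n/2) + \Theta(n^2): \quad a = 8,\ b = 2,\ n^{\log_b a} = n^3,\ f(n) = \Theta(n^2) = O(n^{3-\epsilon}) \Rightarrow T(n) = \Theta(n^3)
T(n) = 7T(n/2) + \Theta(n^2): \quad a = 7,\ b = 2,\ n^{\log_b a} = n^{\lg 7} \approx n^{2.807},\ f(n) = O(n^{\lg 7 - \epsilon}) \Rightarrow T(n) = \Theta(n^{\lg 7})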
Part – C
(2 x 10 = 20 Marks)
Instructions: Answer All the Questions
15.A Find the time complexity of the below recurrence relations 10 1 1 1 1.2.1
i) T(n) = 2T(n−1) if n > 0,
          1        otherwise
ii) T(n) = 2T(n/2) + 1 if n > 1,
           1            otherwise
Ans:
i) T(n) = 2T(n−1)
        = 2[2T(n−2)] = 2^2 T(n−2)
        = 2^2[2T(n−3)] = 2^3 T(n−3)
        . . .
        = 2^k T(n−k)
Setting n − k = 0, i.e. k = n, with T(0) = 1:
T(n) = 2^n T(0) = 2^n, so T(n) = O(2^n)
ii) T(n) = 2T(n/2) + 1, so a = 2, b = 2 and f(n) = 1.
n^(log_b a) = n^(log_2 2) = n^1, and f(n) = 1 = O(n^(1−ε)) for ε = 1,
which means that it falls in the first case of the master theorem, and therefore the complexity is Θ(n).
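The same result by direct expansion (a sketch, assuming n is a power of 2):
T(n) = 2T(n/2) + 1 = 4T(n/4) + 2 + 1 = \dots = 2^k T(n/2^k) + (2^k - 1)
Taking k = \lg n, so that n/2^k = 1 and T(1) = 1:
T(n) = n \cdot 1 + (n - 1) = 2n - 1 = \Theta(n)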
OR
15.B Illustrate briefly Big-Oh notation, Omega notation and Theta notation. Depict the same 10 1 1 1 1.6.1
graphically and explain.
Big-O
The asymptotic notation O(g(n)) represents the upper bound of an algorithm's running time.
O(g(n)) = { f(n) | there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all
n ≥ n0 }
For example, if an algorithm has a time complexity of O(n), it means that the algorithm’s
running time will not exceed the linear growth rate, even if the input size increases
Big-Ω
The Omega asymptotic notation Ω(g(n)) represents the lower bound of an algorithm's
running time.
Ω(g(n)) = { f(n) | there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all
n ≥ n0 }
For example, if an algorithm has a time complexity of Ω(n), it means that the algorithm's
running time will never grow more slowly than the linear rate, even on the most favourable inputs.
Big-Θ
The Theta asymptotic notation Θ(g(n)) represents both the lower bound and the upper bound
of an algorithm's running time.
Θ(g(n)) = { f(n) | there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤
c2·g(n) for all n ≥ n0 }
For example, if an algorithm has a time complexity of Θ(n), it means that the algorithm's
running time is proportional to the linear growth rate for all sufficiently large input sizes.
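A concrete instance of all three notations (an illustrative example added for clarity, not part of the original key):
For f(n) = 3n + 2 and g(n) = n:
3n \le 3n + 2 \le 4n \quad \text{for all } n \ge 2,
so with c_1 = 3, c_2 = 4 and n_0 = 2 we have f(n) = \Theta(n), and hence also f(n) = O(n) and f(n) = \Omega(n).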
16.A i) What is the closest-pair problem? Explain the brute-force approach to solve closest-pair with 10 3 2 4 4.4.1
an example. ii) Derive its time complexity.
The closest-pair problem calls for finding the two closest points in a set of n points. It is the
simplest of a variety of problems in computational geometry that deal with the proximity of
points in the plane or in higher-dimensional spaces.
We assume that the points in question are specified in a standard fashion by their (x, y)
Cartesian coordinates and that the distance between two points pi(xi, yi) and pj(xj, yj) is the
standard Euclidean distance.
The brute-force approach computes the distance between each pair of distinct points and finds
a pair with the smallest distance. We do not want to compute the distance between the same
pair of points twice; to avoid doing so, we consider only the pairs of points (pi, pj) for
which i < j.
ALGORITHM BruteForceClosestPair(P)
//Finds the distance between the two closest points in the plane by brute force
//Input: A list P of n (n ≥ 2) points p1(x1, y1), . . . , pn(xn, yn)
//Output: The distance between the closest pair of points
d ← ∞
for i ← 1 to n − 1 do
    for j ← i + 1 to n do
        d ← min(d, sqrt((xi − xj)^2 + (yi − yj)^2))    //sqrt is the square root
return d
The basic operation is computing the squared differences inside sqrt. The number of times it
is executed can be computed as follows:
C(n) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} 2 = 2[(n−1) + (n−2) + . . . + 1] = (n−1)n ∈ Θ(n^2),
so the brute-force algorithm runs in Θ(n^2) time.
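A minimal C sketch of the same algorithm (the names and the sample points are illustrative assumptions, not from the key; compile with -lm):

#include <stdio.h>
#include <math.h>
#include <float.h>

typedef struct { double x, y; } Point;

/* Brute-force closest pair: examine every pair (i, j) with i < j and
   keep the smallest Euclidean distance seen so far. Θ(n^2) pairs. */
double bruteForceClosestPair(const Point p[], int n) {
    double d = DBL_MAX;
    for (int i = 0; i < n - 1; i++) {
        for (int j = i + 1; j < n; j++) {
            double dx = p[i].x - p[j].x;
            double dy = p[i].y - p[j].y;
            double dist = sqrt(dx * dx + dy * dy);
            if (dist < d) d = dist;
        }
    }
    return d;
}

int main(void) {
    Point p[] = { {2, 3}, {12, 30}, {40, 50}, {5, 1}, {12, 10}, {3, 4} };  /* sample points */
    printf("%.4f\n", bruteForceClosestPair(p, 6));   /* closest pair: (2,3) and (3,4) */
    return 0;
}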
OR
16.B Devise an algorithm for Quick sort and derive its time complexity. For the above algorithm 10 3 2 4 4.4.3
find the time complexity if all the elements are arranged in ascending order. Illustrate with
the help of a recurrence tree.
Quicksort is based on the three-step process of divide-and-conquer.
• To sort the subarray A[p . . r]:
Divide: Partition A[p . . r] into two subarrays A[p . . q − 1] and A[q + 1 . . r], such that
each element in the first subarray A[p . . q − 1] is ≤ A[q] and A[q] is ≤ each element in the
second subarray A[q + 1 . . r].
Conquer: Sort the two subarrays by recursive calls to QUICKSORT.
Combine: No work is needed to combine the subarrays, because they are sorted in place.
• Perform the divide step by a procedure PARTITION, which returns the index q that
marks the position separating the subarrays.
QUICKSORT(A, p, r)
    if p < r
        then q ← PARTITION(A, p, r)
             QUICKSORT(A, p, q − 1)
             QUICKSORT(A, q + 1, r)
Initial call: QUICKSORT(A, 1, n)
Partitioning
Partition subarray A[p . . r] by the following procedure:
PARTITION(A, p, r)
    x ← A[r]
    i ← p − 1
    for j ← p to r − 1
        do if A[j] ≤ x
            then i ← i + 1
                 swap(A[i] ↔ A[j])
    swap(A[i + 1] ↔ A[r])
    return i + 1
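An equivalent C sketch of the two procedures (0-based indexing; a minimal illustration, not necessarily the exact code expected in the answer):

#include <stdio.h>

/* Lomuto-style partition: last element as pivot, returns its final index. */
static int partition(int A[], int p, int r) {
    int x = A[r];              /* pivot */
    int i = p - 1;
    for (int j = p; j < r; j++) {
        if (A[j] <= x) {
            i++;
            int t = A[i]; A[i] = A[j]; A[j] = t;
        }
    }
    int t = A[i + 1]; A[i + 1] = A[r]; A[r] = t;
    return i + 1;
}

static void quicksort(int A[], int p, int r) {
    if (p < r) {
        int q = partition(A, p, r);
        quicksort(A, p, q - 1);    /* sort the left part  */
        quicksort(A, q + 1, r);    /* sort the right part */
    }
}

int main(void) {
    int A[] = {7, 11, 14, 6, 9, 4, 3, 12};             /* sample data */
    quicksort(A, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", A[i]);   /* 3 4 6 7 9 11 12 14 */
    printf("\n");
    return 0;
}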
Complexity Analysis:
Best/average case: T(n) = O(n log n)
If all the elements are arranged in ascending order, the worst case occurs.
If N is the length of the array, the pivot (the last element, which is then also the largest) is
placed at its final position only after traversing the array from start to end. After fixing the
pivot, the split produces sub-arrays of lengths (N − 1) and 0.
The (N − 1)-element sub-array again finds its pivot at the last index, producing partitions of
lengths (N − 2) and 0.
This process repeats until the remaining sub-arrays have length 1, so each level of the
recurrence tree does work proportional to the current sub-array length, giving
T(N) = T(N − 1) + Θ(N), i.e. O(N^2).
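The recurrence-tree sum for this worst case, written out (a sketch of the expected derivation):
T(N) = T(N-1) + cN = T(N-2) + c(N-1) + cN = \dots = c\sum_{k=1}^{N} k = \frac{cN(N+1)}{2} = \Theta(N^2)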
Course Outcome (CO) and Bloom’s Level (BL) Coverage in the Questions
[Charts: Bloom's Level coverage pie chart — BL1 1%, BL2 23%, BL3 33%, BL4 43%; Course Outcome coverage bar chart — CO1 and CO2 (percentage axis, approx. 48–52).]