ADA Q Bank Ans


Question bank

Unit 1
Q1) What is an algorithm? What do you mean by correct algorithm? What do you mean by
instance of
a problem? List out the criteria that all algorithms must satisfy. On what bases will you consider
algorithm A is better than algorithm B?(can see this answer in notebook)
Ans. An algorithm is defined as a collection of unambiguous instructions occurring in some
specific sequence, and such an algorithm should produce output for a given set of inputs in a finite
amount of time.
An algorithm is correct only if it produces the correct result for all input instances.
A specific selection of values for the parameters is called an instance of the problem. For example, the
input parameters to a sorting function might be an array of integers; a particular array of integers,
with a given size and specific values at each position, would be an instance of the sorting problem.
All algorithms must satisfy the following criteria:
1 An algorithm must take zero or more values as input.
2 An algorithm must give one or more values as output.
3 An algorithm must have clear and unambiguous instructions.
4 An algorithm must not have any infinite sequence of steps.
5 It should be feasible with specific computational devices.
We can say algorithm A is better than algorithm B on the basis of:
1 Time complexity:- If A takes less time than B to run on the same input, then A is the better
algorithm.
2 Space complexity:- If A takes less space compared to B, then A is more space efficient than B,
so A is better than B.

Q2 What is an Algorithm? What do you mean by linear inequalities and linear equations?
Explain
asymptotic notation with the help of example.
Ans see Q1 for first part
Linear inequality:-
In mathematics, a linear inequality is a statement containing <, >, <= or >=
For example:- x+y > 14
The form of a linear inequality depends on the number of variables:
In one variable: 3x+7 <= 10
In two variables: 3x+7y > 10
And so on
Linear equation:-
In mathematics, a linear equation is a statement asserting equality (=) between linear expressions
Eg: ax+by = c

Asymptotic notations are the mathematical notations used to describe the running time of an
algorithm when the input tends towards a particular value or a limiting value.

For example: In bubble sort, when the input array is already sorted, the time taken by the
algorithm is linear i.e. the best case.

But when the input array is in reverse order, the algorithm takes the maximum time
(quadratic) to sort the elements, i.e. the worst case.

When the input array is neither sorted nor in reverse order, then it takes average time. These
durations are denoted using asymptotic notations.

There are mainly three asymptotic notations:

 Big-O notation
 Omega notation
 Theta notation

Big-O notation(O):-
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the
worst-case complexity of an algorithm.

Omega notation(Ω-notation):-

Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.

Theta Notation (Θ-notation):-

Theta notation encloses the function from above and below. Since it represents the upper and the
lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.
Q3 Why do we use asymptotic notations in the study of Algorithms? Briefly describe the
commonly used asymptotic Notations?

Ans see Q2 and internet

Q4 Explain master theorem and solve the following recurrence equation with master method
1. T(n)= 9T(n/3) + n
2. T(n)= 3T(n/4) + nlgn
3.T(n) = T(2n/3)+1

Ans. The master theorem is used for efficiency analysis of divide-and-conquer recurrences of the form:
T(n) = aT(n/b) + f(n)

If f(n) is Θ(n^d) where d >= 0, then:

1. T(n) = Θ(n^d) if a < b^d
2. T(n) = Θ(n^d log n) if a = b^d
3. T(n) = Θ(n^(log_b a)) if a > b^d

T(n) = 9T(n/3) + n :-
a=9, b=3 and d=1
so a > b^d, i.e. 9 > 3
T(n) = Θ(n^(log_b a))
     = Θ(n^(log_3 9))
     = Θ(n^2)

T(n) = 3T(n/4) + n log n :-

a=3, b=4. Here f(n) = n log n is not of the form Θ(n^d), so the simplified statement above
does not apply directly and we use the general master theorem. n^(log_4 3) ≈ n^0.79, and
f(n) = n log n grows polynomially faster than this. The regularity condition also holds:
a·f(n/b) = 3(n/4) log(n/4) <= (3/4) n log n. So case 3 of the general theorem gives:
T(n) = Θ(n log n)

T(n) = T(2n/3) + 1 :-
a=1, b=3/2, d=0
Since a = b^d (1 = (3/2)^0), case 2 applies:
T(n) = Θ(n^0 log n) = Θ(log n)
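The three cases above can be captured in a small case checker. This is my own illustrative sketch (the function name `master_case` and its numeric return values are my choices, not from the text):

```python
def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the simplified master theorem.

    Returns 1, 2, or 3 for the corresponding case:
      case 1: T(n) = Theta(n^d)          when a < b^d
      case 2: T(n) = Theta(n^d * log n)  when a = b^d
      case 3: T(n) = Theta(n^(log_b a))  when a > b^d
    """
    if a < b ** d:
        return 1
    elif a == b ** d:
        return 2
    else:
        return 3

# The recurrences from this question:
# T(n) = 9T(n/3) + n    -> case 3, Theta(n^2), since log_3 9 = 2
# T(n) = T(2n/3) + 1    -> case 2, Theta(log n)
```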
Q5 What is an amortized analysis? Explain accounting method and aggregate analysis with
suitable example.(see in notebook)
Ans
1. Amortized analysis considers not just one operation, but a sequence of operations on a
given data structure.
2. The time required to perform the sequence of data structure operations is
averaged over all operations performed.
3. Amortized analysis can be used to show that the average cost of an operation is small
even though a single operation in the sequence might be expensive.
There are 3 most common methods of amortized analysis: -
 Aggregate method
 Accounting method
 Potential method
1. Aggregate method:-
 A sequence of n operations takes worst-case time T(n) in total.
 The amortized cost per operation is T(n)/n.
 For example:
To implement push(item) we need :
S[top] = item
top++
2. Accounting method:-
 Assign each type of operation a (possibly different) amortized cost.
 Overcharge some operations.
 Store the overcharge as credit on specific objects when amortized cost > actual cost.
 Then use the credit to compensate some later operation when actual
cost > amortized cost.
 Here amortized cost = actual cost + credit (deposited or used).
3. The potential method:-
 Same idea as the accounting method,
 but the credit is stored as "potential energy" of the data structure as a whole.

Q6 Explain following terms with example.


1. Set 2. Relation 3. Function

Ans Sets are represented as a collection of well-defined objects or elements and it does not change
from person to person.
For example: A={set of all airline companies}
A relation in mathematics defines the relationship between two different sets of
information. If two sets are considered, the relation between them will be established if there
is a connection between the elements of two or more non-empty sets.
For example: A={set of son} and B={set of father} relation between A and B is a relation
A function is a relation between a set of inputs and a set of permissible outputs with the property that
each input is related to exactly one output. Let A and B be any two non-empty sets; a mapping from A to
B is a function only when every element in set A has one, and only one, image in set B.

Q7 Do as directed.
Calculate computation time for the statement t3 in following code fragment?
for i = 1 to n
{
for j = 1 to i
{
c = c + 1 …..…………… t3
}
}
2. Prove that T (n) = 1+2+3+…. +n = Θ (n2).
Ans
The statement t3 lies inside the inner loop, which runs i times for each value of i taken by the
outer loop. So the number of times t3 executes is:
t3 = 1 + 2 + 3 + ... + n = n(n+1)/2, which is Θ(n^2).

2. T(n) = 1+2+3+...+n = n(n+1)/2 = (n^2+n)/2
For n >= 1 we have n^2/2 <= (n^2+n)/2 <= n^2, so taking c1 = 1/2 and c2 = 1 in the
definition of Θ gives T(n) = Θ(n^2).
Hence proved

Q8 Define an amortized analysis. Briefly explain its different Techniques. Carry out aggregate
analysis for the problem of implementing a k-bit binary counter that counts upward from 0.
Ans see Q5
The aggregate analysis of a k-bit binary counter:-
n Counter value No of flips
0 0000 0
1 0001 1
2 0010 2
3 0011 1
4 0100 3
5 0101 1
6 0110 2
7 0111 1
8 1000 4

The total number of flips for n = 8 increments is 1+2+1+3+1+2+1+4 = 15 < 2n, so the
amortized time per operation is O(1).

The accounting method (banker's method) for a k-bit binary counter charges each increment an
amortized cost of 2: one unit to set a bit to 1, plus one unit stored as credit on that bit to pay
for clearing it later:-

n Counter value Amortized cost
0 0000 0
1 0001 2
2 0010 2
3 0011 2
4 0100 2
5 0101 2
6 0110 2
7 0111 2
8 1000 2
The amortized cost per operation is 2, so the amortized time complexity is O(1).
Total amortized cost = total actual cost + remaining credit: 16 = 15 + 1.
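The flip counts in the aggregate table can be reproduced with a short simulation. This is an illustrative sketch; the function name `count_flips` is mine:

```python
def count_flips(k, n):
    """Total bit flips while a k-bit binary counter counts up from 0 through n increments."""
    bits = [0] * k          # bits[0] is the least significant bit
    total = 0
    for _ in range(n):
        i = 0
        # INCREMENT: flip trailing 1s to 0, then flip the first 0 to 1
        while i < k and bits[i] == 1:
            bits[i] = 0
            total += 1
            i += 1
        if i < k:
            bits[i] = 1
            total += 1
    return total
```

For n = 8 increments this gives 1+2+1+3+1+2+1+4 = 15 flips in total, which is less than 2n, matching the O(1) amortized cost per operation.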

Q9 Define following terms


(i) Quantifier
(ii) Algorithm
(iii) Big ‘Oh’ Notation
(iv) Big ‘Omega’ Notation
(v) ‘Theta’ Notation

Ans 1. Quantifier:- A quantifier specifies how many elements of a domain satisfy a predicate.
There are two main quantifiers: the universal quantifier ∀ ("for all") and the existential
quantifier ∃ ("there exists"). For example, ∀x (x^2 >= 0) states that the square of every
real number is non-negative.

2. Algorithm:- see Q1

3. Big ‘Oh’ notation:- Big-O notation represents the upper bound of the running time of an
algorithm. Thus, it gives the worst-case complexity of an algorithm.
4. Big ‘omega’ notation:- Omega notation represents the lower bound of the running time
of an algorithm. Thus, it provides the best case complexity of an algorithm.
5. ‘Theta’ Notation:- Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an algorithm, it is used
for analyzing the average-case complexity of an algorithm.

Q10 SHORT QUESTIONS


(i) what is an Algorithm?
(ii) what is worst case time complexity?
(iii) Big Oh notation
Ans (i) see Q1
(ii) It gives an upper bound on the resources required by the algorithm. In the case of running time, the
worst-case time complexity indicates the longest running time performed by an algorithm given any
input of size n, and thus guarantees that the algorithm will finish in the indicated period of time.
(iii) see above question

Q11 Define Algorithm, Time Complexity and Space Complexity.


Ans see Q1
Time complexity is the time taken by an algorithm to execute its instructions as a function of
the input size. When a problem can be solved by different methods, it is always better to select
the most time-efficient algorithm.
Space complexity is usually referred to as the amount of memory consumed by the algorithm. It
is composed of two different spaces; Auxiliary space and Input space.

Q12 Solve the recurrence T(n) = 7T(n/2) + n^3


Ans a=7, b=2, d=3
So a < b^d
(7 < 8)
T(n) = Θ(n^d) = Θ(n^3)

Unit 2

Q1 What is an algorithm? Explain various properties of an algorithm.


Ans See Q1 of L-1
Properties of algorithm are:-
1. Non-ambiguity:- Each step in an algorithm should be non-ambiguous. That means each
instruction should be clear and precise, and no instruction should denote
a conflicting meaning.
2. Range of input :- The range of input should be specified, because an algorithm is generally
input driven and it should not take infinite input.
3. Multiplicity:- The same algorithm can be represented in several ways. That means we can
write the sequence of instructions in simple English or in the form of pseudo
code.
4. Speed:- Algorithms are written using some specific idea (which is known as the logic of the
algorithm). But such an algorithm should be efficient and should produce the output
quickly.
5. Finiteness:- The algorithm should be finite. That means after performing the required
operations it should terminate.
Q2 What do you mean by asymptotic notations? Explain.
Ans see L-1 Q3

Q3 Write a program/algorithm of selection sort methods. What is complexity of the method?(see


notebook)
Ans
 It works by repeatedly selecting elements.
 The algorithm first finds the smallest element in the array and exchanges it with the element
in the 1st position.
 Then it finds the second smallest element and exchanges it with the element in the 2nd
position, and the algorithm continues in this way. For example:
1,9,6,3,2
1,2,6,3,9
1,2,3,6,9

algorithm Cost Time


for i <- 1 to n-1 do C1 n
minj = i, minx = T[i] C2 n-1
for j <- i+1 to n do: C3 n(n-1)
if T[j] < T[minj] do C4 (n-1)^2
minj = j, minx = T[j] C5 (n-1)^2
T[minj] = T[i] C6 n-1
T[i] = minx C7 n-1

Analysis:-
T(n) = c1·n + c2(n-1) + c3·n(n-1) + c4(n-1)^2 + c5(n-1)^2 + c6(n-1) + c7(n-1)
T(n) = an^2 + bn + c
T(n) = Θ(n^2)
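The steps above can be sketched as a runnable Python version (my own illustrative implementation of the same method; the list name `T` follows the table above):

```python
def selection_sort(T):
    """Sort list T in place by repeatedly selecting the minimum of the unsorted suffix."""
    n = len(T)
    for i in range(n - 1):
        minj = i                       # index of the smallest element seen so far
        for j in range(i + 1, n):
            if T[j] < T[minj]:
                minj = j
        T[i], T[minj] = T[minj], T[i]  # exchange it into position i
    return T
```

For example, `selection_sort([1, 9, 6, 3, 2])` produces the passes traced above and returns `[1, 2, 3, 6, 9]`.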

Q4 Explain different asymptotic notations in brief.


Ans see L-1 Q5

Q5 What is an amortized analysis? Explain aggregate method of amortized analysis using simple
example.
Ans see L-1 Q5

Q6 Explain why analysis of algorithm is important? Explain: Worst case, Best case, Average
case complexity.
Ans analysis of algorithm is important because:-
 To predict the behavior of an algorithm without implementing it on a specific computer.
 It is much more convenient to have simple measures for the efficiency of an algorithm than to
implement the algorithm and test the efficiency every time a certain parameter in the underlying
computer system changes.
 More importantly, by analyzing different algorithms, we can compare them to determine the best
one for our purpose.
Best case: Defined by the input for which the algorithm takes the least time. In the best case we
calculate the lower bound of the algorithm's running time. Example: in linear search, the best case
occurs when the searched value is at the first location of the data.

Worst case: Defined by the input for which the algorithm takes the longest time. In the worst case we
calculate the upper bound of the algorithm's running time. Example: in linear search, the worst case
occurs when the searched value is not present at all.

Average case: In the average case take all random inputs and calculate the computation time for all
inputs.

Q7 Define: Big Oh, Omega and Big Theta notation.


Ans see L-1 Q9

Q8 What is Recursion? Give the implementation of Tower of Hanoi Problem using recursion.
Ans The process in which a function calls itself directly or indirectly is called recursion and the
corresponding function is called a recursive function

def TowerOfHanoi(n, from_rod, to_rod, aux_rod):
    if n == 0:
        return
    TowerOfHanoi(n-1, from_rod, aux_rod, to_rod)
    print("Move disk", n, "from rod", from_rod, "to rod", to_rod)
    TowerOfHanoi(n-1, aux_rod, to_rod, from_rod)

# Driver code
N = 3

# A, C, B are the names of the rods
TowerOfHanoi(N, 'A', 'C', 'B')

Q9 Explain why analysis of algorithm is important?


Ans analysis of algorithms is important because: -
1. It predicts the behavior of an algorithm without running it on any computer system.
2. It is easier and more convenient to calculate the complexity of an algorithm than to
measure it on a machine in each different environment, thus saving a lot of time.
3. It easily tells us which algorithm is best for our use.

Q10 Explain bubble sort algorithm. Derive the algorithmic complexity in best case, worst case
and average case analysis. (see in notebook)
Ans see in notebook

Best case:- when all elements are already sorted; with an early-exit flag only one pass is
needed, so the best case is Ω(n).
Worst case :- see table in book; the count has the form an^2+bn+c, so it is Θ(n^2).
Average case:- the same order as the worst case, Θ(n^2).

Q11 Explain the heap sort in detail. Give its complexity.


Ans Heap sort is based on a heap, a binary tree with the following properties:
Each level of the tree is completely filled, except possibly the bottom level, which is
filled from left to right.
The data item stored in each node is greater than or equal to the data items stored in its
children (the max-heap property). It can be implemented using an array.

Heap sort has three phases:
1. Creating the heap
2. Heapifying the heap
3. Sorting the heap

Transform into max heap: first, construct a max heap from the unsorted array.

Remove the maximum element in each step: delete the root element from the max heap by swapping it
with the last node, shrink the heap by one, and heapify again to restore the max heap. Repeating
this until the heap is empty yields the sorted array. The overall complexity is O(n log n).
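The three phases above can be sketched in Python (an illustrative implementation; the function names `heapify` and `heap_sort` are my own). The test data is the list from Q15 below:

```python
def heapify(A, n, i):
    """Sift A[i] down so the subtree rooted at i satisfies the max-heap property."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and A[left] > A[largest]:
        largest = left
    if right < n and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        heapify(A, n, largest)

def heap_sort(A):
    n = len(A)
    # Phase 1: build a max heap from the unsorted array
    for i in range(n // 2 - 1, -1, -1):
        heapify(A, n, i)
    # Phases 2-3: repeatedly swap the root (maximum) to the end and re-heapify the rest
    for end in range(n - 1, 0, -1):
        A[0], A[end] = A[end], A[0]
        heapify(A, end, 0)
    return A
```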
Q12 Sort the letters of word “DESIGN” in alphabetical order using bubble sort.
Ans the word is sorted in the following way (each line shows the array after one swap):-
1st iteration:
1. DESIGN
2. DEISGN (swap S, I)
3. DEIGSN (swap S, G)
4. DEIGNS (swap S, N)
2nd iteration:-
1. DEIGNS
2. DEGINS (swap I, G)
3rd iteration: no swaps occur, so the array is sorted.
Final word:- DEGINS
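The passes above can be reproduced with a short Python bubble sort (my own illustrative sketch; the early-exit flag explains why the 3rd pass stops the algorithm). The second test covers the data of Q19 below:

```python
def bubble_sort(items):
    """Bubble sort with early exit; each pass floats the largest remaining item to the end."""
    a = list(items)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # no swap in a full pass: already sorted
            break
    return a
```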
Q13 Write an algorithm for insertion sort. Analyze insertion sort algorithm for best case and
worst case.
Ans Algorithm:-
Insertion_sort(A[0..n-1]) {
    for (i = 1; i < n; i++) {
        temp = A[i];
        j = i - 1;
        while (j >= 0 and A[j] > temp) {
            A[j+1] = A[j];
            j = j - 1;
        }
        A[j+1] = temp;
    }
}
Analysis:
When the array is already sorted, we get the best case: the best-case time complexity of
insertion sort is O(n).
If the array is randomly distributed, we get the average-case time complexity, which is O(n^2).
If the array is in decreasing order, we get the worst case; the time complexity in this case is O(n^2).
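The pseudocode above translates directly into Python (an illustrative version; the test uses the word from Q18 below):

```python
def insertion_sort(A):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(A)):
        temp = A[i]
        j = i - 1
        while j >= 0 and A[j] > temp:   # shift larger elements one slot right
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = temp                 # insert the saved element in its place
    return A
```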

Q14Explain Counting sort with example.


Ans In counting sort each element is placed directly at its proper position, where the position is
determined by counting, for values in a known range 0 to k, how many elements are less than or
equal to it. In other words, if element p belongs at some location, say 5, then all elements less
than p are arranged to its left at positions 1, 2, 3, 4 and all elements greater than p are arranged
after it at positions 6, 7, and so on.
(see tb pg 2-109)
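The counting idea above can be sketched as follows (my own illustrative implementation of the standard stable version; `A` holds integers in the range 0..k):

```python
def counting_sort(A, k):
    """Stable counting sort for a list A of integers in the range 0..k."""
    count = [0] * (k + 1)
    for x in A:                   # count occurrences of each value
        count[x] += 1
    for m in range(1, k + 1):     # prefix sums: count[m] = number of elements <= m
        count[m] += count[m - 1]
    out = [0] * len(A)
    for x in reversed(A):         # place each element at its final position (stable)
        count[x] -= 1
        out[count[x]] = x
    return out
```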

Q15 Sort the following data with Heap Sort Method: 20, 50, 30, 75, 90, 60, 25, 10, and 40. And
explain it.
Ans see assignment Q3

Q16 What is an amortized analysis? Explain aggregate method of amortized analysis using
suitable example.
Ans see L-1 Q5

Q17 Explain Selection Sort Algorithm and give its best case, worst case and average case
complexity with example.
Ans see Q3

Q18Sort the letters of word “EDUCATION” in alphabetical order using insertion sort.
Ans arr1=[E,D,U,C,A,T,I,O,N]
arr2=[]
Step1: arr2=[E],arr1=[D,U,C,A,T,I,O,N]
Step2: arr2=[D,E], arr1=[ U,C,A,T,I,O,N]
Step3: arr2=[D,E,U],arr1=[ C,A,T,I,O,N]
Step4: arr2=[C,D,E,U], arr1=[ A,T,I,O,N]
Step5: arr2=[A,C,D,E,U], arr1=[ T,I,O,N]
Step6: arr2=[ A,C,D,E,T,U],arr1=[I,O,N]
Step7: arr2=[ A,C,D,E,I,T,U], arr1=[O,N]
Step8: arr2=[ A,C,D,E,I,O,T,U],arr1=[N]
Step9:arr2=[A,C,D,E,I,N,O,T,U]
Hence array is sorted

Q19 Apply the bubble sort algorithm for sorting {U,N,I,V,E,R,S}


Ans see Q13 same
Q20Let f(n) and g(n) be asymptotically nonnegative functions.
Using the basic definition of Θ-notation, prove that
max(f(n), g(n)) = Θ (f(n) + g(n)).
Ans For all n, max(f(n), g(n)) <= f(n) + g(n), and since both functions are asymptotically
nonnegative, f(n) + g(n) <= 2 max(f(n), g(n)). Hence
(1/2)(f(n) + g(n)) <= max(f(n), g(n)) <= f(n) + g(n),
so taking c1 = 1/2 and c2 = 1 in the definition of Θ gives max(f(n), g(n)) = Θ(f(n) + g(n)).

Q21 What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster
than an algorithm whose running time is 2^n on the same machine?
Ans

n=1 ⇒ 100×1^2 = 100 > 2^1

n=2 ⇒ 100×2^2 = 400 > 2^2

n=8 ⇒ 100×8^2 = 6400 > 2^8

n=14 ⇒ 100×14^2 = 19600 > 2^14 = 16384

n=15 ⇒ 100×15^2 = 22500 < 2^15 = 32768

so the smallest such n is 15 (checking only powers of two would wrongly suggest 16).
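The search can be automated by checking every value of n rather than only powers of two (an illustrative sketch; the crossover occurs at n = 15, since 100·14^2 = 19600 > 2^14 = 16384 while 100·15^2 = 22500 < 2^15 = 32768):

```python
def smallest_n():
    """Smallest n for which 100*n^2 runs faster than 2^n, i.e. 100*n^2 < 2^n."""
    n = 1
    while 100 * n * n >= 2 ** n:
        n += 1
    return n
```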

Q22 Explain Tower of Hanoi Problem, Derive its recursion equation and computer it’s time
complexity.
Ans Tower of Hanoi is a mathematical puzzle where we have three rods and n disks. The objective of the
puzzle is to move the entire stack to another rod, obeying the following simple rules:
1) Only one disk can be moved at a time.
2) Each move consists of taking the upper disk from one of the stacks and placing it on top of another
stack, i.e. a disk can only be moved if it is the uppermost disk on a stack.
3) No disk may be placed on top of a smaller disk.

To move n disks we first move the top n-1 disks to the auxiliary rod, then move the largest disk
to the destination, and finally move the n-1 disks on top of it. This gives the recurrence
T(n) = 2T(n-1) + 1, with T(1) = 1.
Expanding: T(n) = 2^k T(n-k) + 2^k - 1, and at k = n-1 this becomes
T(n) = 2^(n-1)·1 + 2^(n-1) - 1 = 2^n - 1.
So the time complexity is Θ(2^n).

L-3
Q1 Prove that Greedy Algorithms does not always give optimal solution. What are the general
characteristics of Greedy Algorithms? Also compare GreedyAlgorithms with Dynamic
Programming and Divide and Conquer methods to find out major difference between them.
Ans
A greedy algorithm does not always give an optimal solution, because it chooses the best option
at the current step without considering future consequences. For example, when making change
for amount 6 from coin denominations {1, 3, 4}, the greedy choice takes a 4 first and then two
1s (three coins), while the optimal solution uses two 3s (two coins).

Greedy algorithm:-
A greedy algorithm is an algorithmic paradigm that builds up a solution piece by piece, always
choosing the next piece that offers the most obvious and immediate benefit. Problems
where a locally optimal choice also leads to a globally optimal solution are the best fit for greedy.

Example: In the Fractional Knapsack Problem the locally optimal strategy is to choose the item that
has the maximum value-to-weight ratio. This strategy also leads to a globally optimal solution
because we are allowed to take fractions of an item.

A problem that can be solved using the Greedy approach follows the below-mentioned
properties:

 Optimal substructure property.


 Minimization or Maximization of quantity is required.
 Ordered data is available such as data on increasing profit, decreasing cost, etc.
 Non-overlapping subproblems.

Dynamic Programming:-

Dynamic programming is mainly an optimization over plain recursion. Wherever we see a recursive
solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The
idea is to simply store the results of subproblems so that we do not have to re-compute them when needed
later. This simple optimization reduces time complexities from exponential to polynomial.
Example:- If we write a simple recursive solution for Fibonacci Numbers, we get exponential time
complexity and to optimize it by storing solutions of subproblems, time complexity reduces to linear this
can be achieved by Tabulation or Memoization method of of Dynamic programming.

A problem that can be solved using Dynamic Programming must follow the below mentioned
properties:

 Optimal substructure property.


 Overlapping subproblems.

Divide and conquer vs Greedy algorithm:

1. Divide and conquer is used to find a solution; it does not aim for the optimal solution.
   A greedy algorithm is an optimization technique; it tries to find an optimal solution from the
   set of feasible solutions.

2. The D&C approach divides the problem into small subproblems, solves each subproblem
   independently, and combines the solutions of the smaller problems to find the solution of the
   large problem.
   In the greedy approach, the optimal solution is obtained from a set of feasible solutions.

3. Subproblems are independent, so D&C might solve the same subproblem multiple times.
   A greedy algorithm does not consider a previously solved instance again, thus it avoids
   re-computation.

4. The D&C approach is recursive in nature, so it can be slower and less efficient.
   Greedy algorithms are iterative in nature and hence faster.

5. Divide and conquer algorithms mostly run in polynomial time.
   Greedy algorithms also run in polynomial time but take less time than divide and conquer.

Examples (divide and conquer): merge sort, quick sort, binary search, Strassen's matrix
multiplication, convex hull problem, large integer multiplication, exponentiation problem.
Examples (greedy): knapsack problem, activity selection problem, job scheduling problem,
Huffman codes, optimal storage on tapes, optimal merge pattern.
Q2 Justify the general statement that “if a problem can be split using Divide and Conquer
strategy in almost equal portions at each stage, then it is a good candidate for recursive
implementation, but if it cannot be easily be so divided in equal portions, then it better be
implemented iteratively”. Explain with an example

Ans A divide & conquer algorithm consists of three parts:

1. Decide how to decompose your problem into smaller parts, and separate the
problem information according to those parts.
2. Solve each resulting smaller problem separately.
3. Combine the solutions of the smaller problems to get a solution to the larger
problem.

The divide & conquer approach can fail in various ways, primarily at steps 1 & 3. You may not be able to
decompose the problem, or you may need too many smaller problems. For example, consider the task of
solving a maze. How do we decompose this into smaller problems? If we slice a maze in half, the final
solution may go back and forth between the two halves several times. How could a "solution" of the maze
on one half ever be expected to lead to a solution of the full maze? Or consider the problem of finding
the closest pair of points in a 2-dimensional square. We can decompose it into 4 squares and find the
closest pair in each of those squares, but what if the closest pair of points lie on opposite sides of
one of the dividing lines? You could try to put squares of the same size covering each of those 4
internal edges, and another one centered at the intersection of the dividing lines. But then you would
have to solve 9 smaller problems, and that is so many that your algorithm would be slower than the
brute-force one.

Q3 Write an algorithm for binary search. Calculate the time complexity for each case.
Ans
// Input: sorted array A and the key element to be searched
// Output: index of the key if found, else -1
low = 0
high = n-1
while (low <= high) {
    m = (low + high) / 2
    if (KEY == A[m])
        return m
    else if (KEY < A[m])
        high = m - 1
    else
        low = m + 1
}
return -1

Best case:
If the key is at the center of the list, it is found in one comparison, so the best-case
time complexity is Θ(1).

Worst case (the key is absent, or found only when the range has shrunk to one element):
1st iteration:- length of array = n
2nd iteration:- length of array = n/2
3rd iteration:- length of array = n/2^2
kth iteration:- length of array = n/2^k
We know that after k iterations the length of the array becomes 1:
n/2^k = 1
so n = 2^k
applying log:
log2 n = k log2 2
k = log2 n
So the worst-case time complexity of binary search is Θ(log n).

Average case:
The time complexity of the average case is the same as the worst case, Θ(log n).
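The pseudocode above can be written as a short runnable Python version (an illustrative sketch; the second test reuses the array from Q22 of L-3):

```python
def binary_search(A, key):
    """Iterative binary search on a sorted list; returns the index of key, or -1."""
    low, high = 0, len(A) - 1
    while low <= high:
        m = (low + high) // 2
        if A[m] == key:
            return m
        elif key < A[m]:
            high = m - 1   # key must be in the left half
        else:
            low = m + 1    # key must be in the right half
    return -1
```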

Q4Write an algorithm for merge sort with divide and conquer strategy.Analyze each case.List
best case worst case and average case complexity
Ans
Q5 Write an algorithm for quick sort with divide and conquer strategy.Analyze each case. In
which case it performs similar to selection sort?(text book see)
Ans Algorithm for Quick:-
Quick(A, low, high):
    if (low < high) then
        m = Partition(A, low, high)
        Quick(A, low, m-1)
        Quick(A, m+1, high)

Algorithm for Partition (Hoare scheme, pivot = first element):

Partition(A, low, high):
    pivot = A[low]
    i = low
    j = high + 1
    repeat
        repeat i = i + 1 until A[i] >= pivot or i >= high
        repeat j = j - 1 until A[j] <= pivot
        if (i < j) then swap A[i] and A[j]
    until i >= j
    swap A[low] and A[j]
    return j

Quick sort performs like selection sort, taking Θ(n^2), in its worst case: when the pivot is
always the smallest or largest element, e.g. when the array is already sorted or reverse sorted.
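A runnable sketch of the same divide-and-conquer idea (note this uses the simpler Lomuto partition with the last element as pivot, rather than the Hoare-style scheme of the pseudocode; the tests reuse the data from Q10 and Q17 below):

```python
def partition(A, low, high):
    """Lomuto partition: place A[high] (the pivot) at its final position."""
    pivot = A[high]
    i = low - 1
    for j in range(low, high):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[high] = A[high], A[i + 1]
    return i + 1

def quick_sort(A, low=0, high=None):
    """In-place quick sort over A[low..high]."""
    if high is None:
        high = len(A) - 1
    if low < high:
        m = partition(A, low, high)
        quick_sort(A, low, m - 1)    # sort elements <= pivot
        quick_sort(A, m + 1, high)   # sort elements > pivot
    return A
```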

Q6 Differentiate divide and conquer with dynamic programming. Write recurrence for
calculation for binominal coefficient.
Ans see Q1 for the differences. The recurrence for the binomial coefficient is
C(n, k) = C(n-1, k-1) + C(n-1, k) for 0 < k < n, with C(n, 0) = C(n, n) = 1.
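The binomial-coefficient recurrence C(n, k) = C(n-1, k-1) + C(n-1, k) can be evaluated bottom-up with dynamic programming (an illustrative sketch; the function name `binomial` is mine):

```python
def binomial(n, k):
    """C(n, k) via the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k),
    with base cases C(n, 0) = C(n, n) = 1, filled bottom-up."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                          # base cases
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```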

Q7 Explain how to apply the divide and conquer strategy for sorting the elements using merge
sort.
Ans merge sort employ a common algorithmic paradigm based on recursion. This paradigm,
divide-and-conquer, breaks a problem into subproblems that are similar to the original problem,
recursively solves the subproblems, and finally combines the solutions to the subproblems to
solve the original problem. Because divide-and-conquer solves subproblems recursively, each
subproblem must be smaller than the original problem, and there must be a base case for
subproblems. You should think of a divide-and-conquer algorithm as having three parts:

1. Divide the problem into a number of subproblems that are smaller instances of the same
problem.
2. Conquer the subproblems by solving them recursively. If they are small enough, solve
the subproblems as base cases.
3. Combine the solutions to the subproblems into the solution for the original problem.
Q8 Differentiate the following:
1. Divide and conquer & Dynamic Programming
2. Greedy Algorithm & Dynamic Programming
Ans See Q1 answer

Q9 Show how divide and conquer technique is used to compute product of two n digit no with
example.
Ans see notebook

Q10 Sort the following list using quick sort algorithm:


<50, 40, 20, 60, 80, 100, 45, 70, 105, 30, 90, 75>
Also discuss worst and best case of quick sort algorithm
Ans see Example 3.8.2 in T.B.
Q11 Explain Binary search algorithm with divide and conquer strategy and use the recurrence
tree to show that the solution to the binary search recurrence T (n) = T(n/2) + Ѳ(1) is T(n) =
Ѳ(lgn).
Ans

Q12 Explain how to apply the divide and conquer strategy for sorting the elements using quick
sort with example. Write algorithm for quick sort method.
Ans see Q5
Quick sort is an algorithm that uses the divide and conquer strategy. In this method it:
1. Divide : splits the array into two parts around a pivot element, so that each element in the
left sub-array is less than or equal to the pivot and all elements greater than the pivot are in
the right sub-array.
2. Conquer : recursively sorts the 2 sub-arrays.
3. Combine : since the partition is done in place, the sub-arrays joined around the pivot already
form the sorted list.

Q13 Discuss matrix multiplication problem using divide and Conquer technique.
Ans The divide and conquer approach is used for implementing Strassen's matrix
multiplication:-
Divide : divide each n×n matrix into four n/2 × n/2 submatrices.
Conquer : recursively compute seven products of these submatrices using Strassen's equations.
Combine : obtain the four quadrants of the result from those seven products after
performing the required additions and subtractions.

Q18 Multiply 981 by 1234 by divide and conquer method.


Ans see in notebook
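The notebook answer is not reproduced here, but the divide-and-conquer multiplication of Q9 (Karatsuba's method, which replaces 4 recursive products with 3) can be sketched and checked on exactly this input. This is my own illustrative implementation:

```python
def karatsuba(x, y):
    """Divide-and-conquer multiplication: 3 recursive products instead of 4."""
    if x < 10 or y < 10:                 # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)                  # x = a*10^m + b
    c, d = divmod(y, p)                  # y = c*10^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd   # = ad + bc, using one product
    return ac * p * p + mid * p + bd
```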

Q14 Explain Strasson’s algorithm for matrix multiplication


Ans see book 3-66

Q15 Explain the use of Divide and Conquer Technique for Binary Search Method. What is the
complexity of Binary Search Method? Explain it with example.
Ans see Q3

Q16 Write a program/algorithm of Quick Sort Method and analyze it with example.
Ans see Q5

Q17 Write an algorithm for quick sort and derive best case, worst case using divide and conquer
technique also trace given data (3,1,4,5,9,2,6,5)
Ans see in notebook solved

Q19What do you mean by Divide & Conquer approach? List


advantages and disadvantages of it
Ans A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-
problems of the same or related type, until these become simple enough to be solved directly.
The solutions to the sub-problems are then combined to give a solution to the original problem.

Advantages of divide and conquer are:-

 Divide and Conquer tend to successfully solve one of the biggest problems, such as the
Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems
for which you have no basic idea, but with the help of the divide and conquer approach, it
has lessened the effort as it works on dividing the main problem into two halves and then
solve them recursively. This algorithm is much faster than other algorithms.
 It efficiently uses cache memory without occupying much space because it solves simple
subproblems within the cache memory instead of accessing the slower main memory.
 It is more proficient than that of its counterpart Brute Force technique.
 Since these algorithms inhibit parallelism, it does not involve any modification and is
handled by systems incorporating parallel processing.

Disadvantages of divide and conquer are:-

 Since most of its algorithms are designed by incorporating recursion, so it necessitates


high memory management.
 An explicit stack may overuse the space.
 It may even crash the system if the recursion is performed rigorously greater than the
stack present in the CPU.

Q20Solve the following recurrence relation using iteration


method. T(n) = 8T(n/2) + n2. Here T(1) = 1.
Ans T(n) = 8T(n/2) + n^2
= 8[8T(n/4) + (n/2)^2] + n^2 = 8^2 T(n/4) + 2n^2 + n^2
= 8^3 T(n/8) + 4n^2 + 2n^2 + n^2
After k expansions: T(n) = 8^k T(n/2^k) + n^2(2^k - 1)
Stopping when n/2^k = 1, i.e. k = log2 n:
T(n) = 8^(log2 n)·T(1) + n^2(n - 1) = n^3 + n^3 - n^2 = 2n^3 - n^2
So T(n) = Θ(n^3).

Q21 Write Merge sort algorithm and compute its worst case and best-case time complexity. Sort
the List G,U,J,A,R,A,T in alphabetical order using merge sort
Ans The Merge Sort algorithm is a sorting algorithm that is based on the Divide and Conquer paradigm.
In this algorithm, the array is initially divided into two equal halves and then they are combined in a
sorted manner
Step 1 Starts

step 2: declare array and left, right, mid variable

step 3: perform merge function.


if left >= right
return
mid = (left + right)/2
mergesort(array, left, mid)
mergesort(array, mid+1, right)
merge(array, left, mid, right)

step 4: Stop
def mergeSort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2     # finding the mid of the array
        L = arr[:mid]           # dividing the array elements
        R = arr[mid:]           # into 2 halves

        mergeSort(L)            # sorting the first half
        mergeSort(R)            # sorting the second half

        i = j = k = 0
        # Merge the temp arrays L[] and R[] back into arr
        while i < len(L) and j < len(R):
            if L[i] <= R[j]:
                arr[k] = L[i]
                i += 1
            else:
                arr[k] = R[j]
                j += 1
            k += 1

        # Copy any elements left over in L[]
        while i < len(L):
            arr[k] = L[i]
            i += 1
            k += 1

        # Copy any elements left over in R[]
        while j < len(R):
            arr[k] = R[j]
            j += 1
            k += 1

Merge sort runs in O(N log(N)) time on any input. It is a recursive algorithm whose time
complexity can be expressed as the following recurrence relation:

T(n) = 2T(n/2) + θ(n)

This falls in case 2 of the master method, and the solution of the recurrence is θ(N log(N)).

Sorting G,U,J,A,R,A,T (can represent in tree form also do tree in paper):-


Step 1 divide array

leftarray=[G,U,J] rightarray=[A,R,A,T]

Step 2:divide left array:-

L=[G] R=[U,J]

R is further divided:- l=[U] r=[J]

Step 3 sort and combine left side:-

L=[G], R=[J,U]

Further sort :-

Leftarray=[G,J,U]

Step 4:-

Repeat step 2 and 3 with right array we get :-

Rightarray=[A,A,R,T]

Step5 :- merge rightarray and left array after sorting:-

=[A,A,G,J,R,T,U]

Q22 Demonstrate Binary Search method to search Key = 14, form the array
A=<2,4,7,8,10,13,14,60>

Ans

Step 1:- take the middle element and compare it with the key.
The middle of [2,4,7,8,10,13,14,60] is 8.
8 == 14? false.

Step 2:- 14 is greater than 8, so take the right half [10,13,14,60] and repeat step 1.
The middle is 13.
13 == 14? false.

Step 3:- 14 is greater than 13, so take the right half [14,60] and repeat step 1.
The middle is 14.
14 == 14? true.

So a match is found.

Q23 Write an algorithm for insertion sort. Analyze insertion sort algorithm for best case and
worst case.
Ans see in notebook

L-4

Q1Given two sequences of characters, P=<ABCDABE>, Q=<CABE > Obtain the longest
common subsequence
Ans
    C    A    B    E
A  ↑0  ⬉1  ←1  ←1
B  ↑0  ↑1  ⬉2  ←2
C  ⬉1  ↑1  ↑2  ↑2
D  ↑1  ↑1  ↑2  ↑2
A  ↑1  ⬉2  ↑2  ↑2
B  ↑1  ↑2  ⬉3  ←3
E  ↑1  ↑2  ↑3  ⬉4

Following the diagonal arrows back from the bottom-right cell, the longest common
subsequence is :- CABE (length 4)
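The table can be built and the subsequence recovered with a short dynamic program (my own illustrative sketch of the standard LCS algorithm):

```python
def lcs(P, Q):
    """Longest common subsequence via the standard DP table, then backtracking."""
    m, n = len(P), len(Q)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if P[i - 1] == Q[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1          # diagonal arrow
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])  # up / left arrow
    # Walk back from c[m][n] to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if P[i - 1] == Q[j - 1]:
            out.append(P[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```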


Q2 Given four matrices with dimension vector D=<15,5,10,20,25>, find the optimal sequence for multiplication
Ans
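The answer can be computed with the standard matrix-chain DP (a sketch; with D = <15, 5, 10, 20, 25>, matrix Ai has dimensions p[i-1] x p[i], and the minimum cost works out to 5375 scalar multiplications, with parenthesization A1((A2·A3)·A4)):

```python
import sys

def matrix_chain(p):
    # p is the dimension vector; matrix Ai has dimensions p[i-1] x p[i]
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

best = matrix_chain([15, 5, 10, 20, 25])   # -> 5375
```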
Q3 Given coins of denominations 1, 3 and 4 with the amount to be paid being 7, find the optimal number of
coins and the sequence of coins used to pay the given amount using dynamic programming
Ans see in textbook 4-12
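Alongside the textbook working, a bottom-up sketch confirms the optimum is 2 coins (3 + 4):

```python
def min_coins(coins, amount):
    INF = float("inf")
    dp = [0] + [INF] * amount          # dp[a] = fewest coins needed to pay a
    choice = [0] * (amount + 1)        # coin used at each amount (for traceback)
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
                choice[a] = c
    used, a = [], amount
    while a > 0:                       # recover the actual coin sequence
        used.append(choice[a])
        a -= choice[a]
    return dp[amount], sorted(used)

count, coins_used = min_coins([1, 3, 4], 7)   # -> 2 coins, [3, 4]
```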

L-4
Q1 Give and explain Kruskal’s Algorithm for Minimum Spanning Tree and compare it with
Prim’s algorithm with an example.
Ans Given a connected and undirected graph, a spanning tree of that graph is a subgraph that is a tree and
connects all the vertices together. A single graph can have many different spanning trees. A minimum
spanning tree (MST) or minimum weight spanning tree for a weighted, connected, undirected graph is a
spanning tree with a weight less than or equal to the weight of every other spanning tree.
https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/
or see book 5-25

Q2 Is Selection sorting a greedy algorithm? If so, what are the functions involved.
Ans In every iteration of selection sort, the minimum element (considering ascending order) from the
unsorted subarray is picked and moved to the sorted subarray. Clearly, it is a greedy approach to sort
the array.
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering
ascending order) from the unsorted part and putting it at the beginning.

Q3 Write down the general characteristics of greedy algorithm

Ans A problem that can be solved using the greedy approach has the following
properties:

 Optimal substructure property.
 Minimization or maximization of a quantity is required.
 Ordered data is available, such as data on increasing profit, decreasing cost, etc.
 Non-overlapping subproblems.

Q4 Explain Kruskal’s algorithm to find minimum spanning tree with an example. What is its
time complexity?

Ans see link of Q1


O(E log E) or O(E log V). Sorting the edges takes O(E log E) time. After sorting, we iterate through all edges
and apply the find-union algorithm; the find and union operations take at most O(log V) time each. So the
overall complexity is O(E log E + E log V). The value of E can be at most O(V^2), so O(log V) and
O(log E) are the same. Therefore, the overall time complexity is O(E log E) or O(E log V).
Q5 What do you mean by minimum spanning tree? Explain single source shortest path with the
help of example
Ans A minimum spanning tree is a spanning tree with the smallest (minimum) total weight.
For example:

Out of the 3 spanning trees shown (figure in the question paper), (b) is the minimum spanning tree.


Q6 Give and explain Prim’s Algorithm for Minimum Spanning Tree and Compare it with
Kruskal’s algorithm with an example.
Ans In this algorithm we first consider all the vertices, then select an edge with
minimum weight. The algorithm proceeds by repeatedly selecting a minimum-weight edge
adjacent to the vertices already in the tree.

Prim’s algorithm: obtains the minimum spanning tree by selecting edges adjacent to the
already-selected vertices.
Kruskal’s algorithm: also obtains the minimum spanning tree, but it is not necessary to
choose edges adjacent to the already-selected vertices; edges are picked globally in
increasing order of weight.
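The selection rule above can be sketched with a priority queue (the small graph below is a hypothetical example for illustration, not the one from the question paper):

```python
import heapq

def prim(graph, start):
    # graph: {vertex: [(weight, neighbour), ...]}, undirected
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    mst_edges, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)      # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst_edges.append((u, v, w))
        total += w
        for w2, x in graph[v]:             # new candidate edges
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst_edges, total

# Hypothetical 4-vertex example graph
g = {
    'a': [(1, 'b'), (4, 'c')],
    'b': [(1, 'a'), (2, 'c'), (6, 'd')],
    'c': [(4, 'a'), (2, 'b'), (3, 'd')],
    'd': [(6, 'b'), (3, 'c')],
}
tree, weight = prim(g, 'a')   # -> edges a-b, b-c, c-d with total weight 6
```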

Q7 Explain Greedy method in detail with example and differentiate it with dynamic method.
Ans A Greedy algorithm is an algorithmic paradigm that builds up a solution piece by piece,
always choosing the next piece that offers the most obvious and immediate benefit. So the
problems where choosing locally optimal also leads to a global solution is the best fit for Greedy.

Example: In Fractional Knapsack Problem the local optimal strategy is to choose the item that has
maximum value vs weight ratio. This strategy also leads to global optimal solution because we allowed
taking fractions of an item.
Feature: Feasibility
  Greedy method: we make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution.
  Dynamic programming: we make a decision at each step considering the current problem and the solutions to previously solved subproblems, to compute the optimal solution.

Feature: Optimality
  Greedy method: sometimes there is no guarantee of getting an optimal solution.
  Dynamic programming: it is guaranteed to generate an optimal solution, as it generally considers all possible cases and then chooses the best.

Feature: Recursion
  Greedy method: follows the problem-solving heuristic of making the locally optimal choice at each stage.
  Dynamic programming: an algorithmic technique usually based on a recurrence formula that uses previously calculated states.

Feature: Memoization
  Greedy method: more efficient in terms of memory, as it never looks back or revises previous choices.
  Dynamic programming: requires a table for memoization, which increases its memory complexity.

Feature: Time complexity
  Greedy method: generally faster; for example, Dijkstra’s shortest path algorithm takes O(E log V + V log V) time.
  Dynamic programming: generally slower; for example, the Bellman-Ford algorithm takes O(VE) time.

Feature: Fashion
  Greedy method: computes its solution by making choices in a serial forward fashion, never looking back or revising previous choices.
  Dynamic programming: computes its solution bottom-up or top-down by synthesizing it from smaller optimal sub-solutions.

Q8 Find the Minimum Spanning Tree for the given graph using Prim’s Algorithm (initialization from
node A).
Ans A -> D -> F -> E -> C, E -> B, E -> G
Q9 Explain Dijkstra’s shortest path algorithm with an example. If we want to display the intermediate
nodes, what change should we make in the algorithm?
Ans

Ans Given a graph and a source vertex in the graph, find the shortest paths from the source to
all vertices in the given graph.

Examples:

Input: src = 0, the graph is shown below.


Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8
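These distances come from the standard 9-vertex example graph; a heap-based Dijkstra sketch (the edge list below is that commonly used example, reconstructed here as an assumption) reproduces them:

```python
import heapq

def dijkstra(n, edges, src):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:               # build undirected adjacency lists
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                    # stale heap entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w         # relax edge u-v
                heapq.heappush(heap, (dist[v], v))
    return dist

edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]
dist = dijkstra(9, edges, 0)   # -> [0, 4, 12, 19, 21, 11, 9, 8, 14]
```

To also display the intermediate nodes, store a predecessor for each vertex when an edge is relaxed and walk the predecessors back from the destination.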

Q10 Mention applications of minimum spanning tree. Generate minimum spanning tree from the
following graph using Prim’s algorithm. (Start at vertex a).
see pic of graph from pdf
Ans Applications of minimum spanning tree are:-

Network design

 telephone, electrical, hydraulic, TV cable, computer, road

The standard application is to a problem like phone network design. You have a business with
several offices; you want to lease phone lines to connect them up with each other, and the phone
company charges different amounts of money to connect different pairs of cities. You want a set
of lines that connects all your offices with a minimum total cost. It should be a spanning tree,
since if a network isn’t a tree you can always remove some edges and save money.

 traveling salesperson problem, Steiner tree

A less obvious application is that the minimum spanning tree can be used to approximately solve
the traveling salesman problem. A convenient formal way of defining this problem is to find the
shortest path that visits each point at least once.
Note that if you have a path visiting all points exactly once, it’s a special kind of tree. For
instance in the example above, twelve of sixteen spanning trees are actually paths. If you have a
path visiting some vertices more than once, you can always drop some edges to get a tree. So in
general the MST weight is less than the TSP weight, because it’s a minimization over a strictly
larger set.

See https://www.geeksforgeeks.org/applications-of-minimum-spanning-tree/

Solve tree in pdf

Q11 Following are the details of various jobs to be scheduled on multiple processors such that no
two processes execute at the same time on the same processor.

Show schedule of these jobs on minimum number of processors using greedy approach. Derive
an algorithm for the same. What is the time complexity of this algorithm?
Ans see textbook page 5-55

Q12 Define MST. Explain Kruskal’s algorithm with example for construction of MST.
Ans see Q5 for MST (minimum spanning tree) definition
Kruskal’s algorithm :-
 Sort all the edges in non-decreasing order of their weight.
 Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If cycle
is not formed, include this edge. Else, discard it.
 Repeat step#2 until there are (V-1) edges in the spanning tree
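The three steps can be sketched with a simple union-find structure (the 4-vertex edge list below is an assumed example for illustration):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v); vertices are 0..n-1
    parent = list(range(n))

    def find(x):                       # find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # step 1: sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # step 2: keep edge only if no cycle
            parent[ru] = rv            # union the two components
            mst.append((u, v, w))
            total += w
        if len(mst) == n - 1:          # step 3: stop at V-1 edges
            break
    return mst, total

edges = [(10, 0, 1), (6, 0, 2), (5, 0, 3), (15, 1, 3), (4, 2, 3)]
mst, total = kruskal(4, edges)   # -> 3 edges with total weight 19
```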

Q13 Explain in brief characteristics of greedy algorithms. Compare Greedy Method with
Dynamic Programming Method.
Ans See Q3 and Q6

Q14 Write the Prim’s Algorithm to find out Minimum Spanning Tree. Apply the same and find
MST for the graph given below.

Ans see Q6
See notebook
Q15 What is a recurrence? Solve the recurrence equation T(n) = T(n-1) + n using the forward substitution
and backward substitution methods. https://www.javatpoint.com/daa-recurrence-relation?
Ans A recurrence is an equation or inequality that describes a function in terms of its values on smaller
inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that
satisfies the recurrence.
Backward substitution: T(n) = T(n-1) + n = T(n-2) + (n-1) + n = ... = T(1) + 2 + 3 + ... + n = n(n+1)/2 (taking T(1) = 1).
Forward substitution gives the same pattern: T(2) = 3, T(3) = 6, T(4) = 10, ..., which matches n(n+1)/2.

Q16 Using greedy algorithm find an optimal schedule for following jobs with n=6.
Profits: (P1,P2,P3,P4,P5,P6) = (20, 15, 10, 7, 5, 3)
Deadline: (d1,d2,d3,d4,d5,d6) =(3, 1, 1, 3, 1, 3)
Ans
Step 1:- arrange the jobs in descending order of profit. Since they are already in this order, no
rearrangement is needed:-
jobs J1 J2 J3 J4 J5 J6
profit 20 15 10 7 5 3
Deadline 3 1 1 3 1 3

Step 2:- create an array J[] of slots, one per time unit:

0 0 0 0 0 0

Step 3:- place each job, in profit order, at the slot given by its deadline. J1 (deadline 3) goes in slot 3:

0 0 J1 0 0 0

Step 4:- J2 (deadline 1) goes in slot 1:

J2 0 J1 0 0 0

Step 5:- next is J3 at position 1, which is already occupied, so J3 is rejected.

Step 6:- J4 (deadline 3) finds slot 3 occupied, so it takes the nearest free earlier slot, slot 2:

J2 J4 J1 0 0 0

Step 7:- J5 (deadline 1) is rejected (slot 1 occupied), and J6 (deadline 3) is rejected (slots 3, 2 and 1 all occupied).

Hence we obtain the job sequence J2, J4, J1 with total profit 15 + 7 + 20 = 42.
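The slot-filling procedure can be sketched as follows (assuming jobs are already sorted by profit, as here):

```python
def job_sequencing(jobs):
    # jobs: list of (name, profit, deadline), sorted by profit descending
    max_d = max(d for _, _, d in jobs)
    slot = [None] * (max_d + 1)            # slot[t] = job run at time t (slot 0 unused)
    for name, profit, d in jobs:
        t = d
        while t > 0 and slot[t] is not None:
            t -= 1                         # try the nearest free earlier slot
        if t > 0:
            slot[t] = (name, profit)       # job scheduled; otherwise rejected
    schedule = [s for s in slot[1:] if s is not None]
    total = sum(p for _, p in schedule)
    return [n for n, _ in schedule], total

jobs = [("J1", 20, 3), ("J2", 15, 1), ("J3", 10, 1),
        ("J4", 7, 3), ("J5", 5, 1), ("J6", 3, 3)]
order, profit = job_sequencing(jobs)   # -> ['J2', 'J4', 'J1'], profit 42
```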

Q17 Write Huffman code algorithm and Generate Huffman code for following

Ans Step 1 :- arrange in ascending order:-


E-8,D-8,C-10,B-12,A-24
See in notebook from step 2:
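The notebook construction can be checked with a heap-based sketch that computes code lengths only (`tick` is just a tie-breaker so heap entries never compare dicts): for these frequencies A gets a 1-bit code and B, C, D, E each get 3-bit codes, 138 bits in total.

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    # freqs: {symbol: frequency}; returns {symbol: code length in bits}
    tick = count()
    heap = [(f, next(tick), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, d2 = heapq.heappop(heap)
        # Merging pushes every symbol in both subtrees one level deeper
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

lengths = huffman_code_lengths({"A": 24, "B": 12, "C": 10, "D": 8, "E": 8})
# -> A: 1 bit; B, C, D, E: 3 bits each; total = 24*1 + (12+10+8+8)*3 = 138 bits
```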

Q18 Compute MST using PRIM’s Algorithm.


Ans like Q14 (Prim’s algorithm)

Q19 Find an optimal Huffman code for the following set of frequencies: a: 50, b: 20, c: 15, d: 30.
Ans see in notebook

Q20 Solve the following recurrence relation using the substitution
method: T(n) = 2T(n/2) + n. Here T(1) = 1.
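Substitution with the guess T(n) = n log2(n) + n satisfies the recurrence for n a power of two (with T(1) = 1): 2((n/2)log2(n/2) + n/2) + n = n(log2(n) - 1) + n + n = n log2(n) + n. A quick numeric check of that candidate closed form (`T` and `closed_form` are illustrative names):

```python
import math

def T(n):
    # Direct evaluation of the recurrence T(n) = 2T(n/2) + n, T(1) = 1
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

def closed_form(n):
    return n * math.log2(n) + n        # candidate solution from substitution

# Compare the recurrence against the closed form for powers of two
checks = [T(2**k) == closed_form(2**k) for k in range(6)]
```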
