
Soundarya Institute of Management and Science

Department of Computer Science

Analysis and Design of Algorithm – Solution Bank


SECTION – A (2 Mark Questions)
1. List any two important characteristics of Algorithm.
 The non-ambiguity requirement for each step of an algorithm cannot be compromised.
 The range of inputs for which an algorithm works has to be specified carefully.
 The same algorithm can be represented in several different ways.
 There may exist several algorithms for solving the same problem

2. Define Brute force approach.


Brute force is a straightforward approach to solving a problem, usually directly based on the
problem statement and definitions of the concepts involved.
It tries out all the possibilities until a satisfactory solution is found.

3. Write the worst-case analysis of Merge sort.


The worst case for merge sort occurs when the merge steps require the maximum number of
comparisons, which happens when the elements of the two halves alternate during every merge.
Even then, the worst-case time complexity of merge sort is O(n log n).

4. Why is the time efficiency of the Optimal BST algorithm cubic?


There are O(n^2) subproblems to solve, and each subproblem requires examining O(n)
possibilities.
Therefore, the overall time complexity is O(n^3), as the algorithm iterates through the cost
matrix of size n x n and performs computations for each entry, resulting in a cubic time
complexity.

5. Define Huffman codes.


Huffman Coding is a greedy technique to obtain an optimal solution to a problem. The Huffman
coding is generally used for lossless data compression mechanisms.
Sometimes, it is also called data compression encoding. It makes sure that there is no ambiguity
while decoding the output bitstream.

6. Differentiate between P, NP & NP-Complete.


 P: Class of problems that can be efficiently solved by deterministic algorithms in polynomial
time.
 NP: Class of problems for which a potential solution can be efficiently verified in polynomial
time, but finding the solution itself may be computationally expensive.
 NP-Complete: Subset of NP problems that are believed to be the most difficult and have the
property that any problem in NP can be polynomially reduced to them. Solving an NP-Complete
problem would imply solving all other problems in NP efficiently.
7. Define An algorithm design technique
An algorithm design technique means a unique approach or mathematical method for
creating algorithm and solving problems.
While multiple algorithms can solve a problem, not all of them can solve it efficiently.
There are several commonly used algorithm design techniques, including:
Brute Force, Divide and Conquer, Greedy algorithm etc.,

8. With diagrammatic representation, define Divide and Conquer technique


The divide and conquer technique is commonly used to solve problems by breaking a bigger problem
down into smaller subproblems (by a constant value, a constant factor, or a variable size), solving
them, and combining their solutions.

9. Differentiate between DFS & BFS.

10. Define Warshall’s algorithm.


Warshall's algorithm is used to determine the transitive closure of a directed graph or all paths in
a directed graph by using the adjacency matrix. For this, it generates a sequence of n matrices.
Where, n is used to describe the number of vertices:
R(0), ..., R(k-1), R(k), ... , R(n)

11. State any two difference between greedy algorithm and dynamic programming
12. Define Branch-and-bound technique.
Branch and bound is one of the techniques used for problem solving. It is similar to the
backtracking since it also uses the state space tree.
It is used for solving the optimization problems and minimization problems.

13. Define Divide and Conquer technique


The divide and conquer technique is commonly used to solve problems by breaking a bigger problem
down into smaller subproblems by a constant value, a constant factor, or a variable size.
There are three major variations:
 Decrease by Constant
 Decrease by constant factor
 Variable size decrease

14. What is Minimum Cost Spanning Tree ? Give an example.


A minimum spanning tree is a special kind of tree that minimizes the lengths (or “weights”)
of the edges of the tree.
An example is a cable company wanting to lay line to multiple
neighborhoods; by minimizing the amount of cable laid, the cable company will save money

15. Define Hashing, Hash Function & Hash table.


Hashing refers to the process of generating a fixed-size output from an input of variable size
using the mathematical formulas known as hash functions. This technique determines an index
or location for the storage of an item in a data structure.
A Hash Function is a function that converts a given numeric or alphanumeric key to a small
practical integer value.
Hash Table is a data structure which stores data in an associative manner. In a hash table, data is
stored in an array format, where each data value has its own unique index value.

16. Define Backtracking and Branch-and-bound technique.


Backtracking is an algorithmic technique whose goal is to use brute force to find all solutions to
a problem. It entails gradually compiling a set of all possible solutions. Because a problem will
have constraints, solutions that do not meet them will be removed.
Branch and bound is one of the techniques used for problem solving. It is similar to the
backtracking since it also uses the state space tree.
It is used for solving the optimization problems and minimization problems.

17. List the methods of computing the time efficiency of algorithms


 Operation Counts
 Step Counts
 Asymptotic Notations
18. Write the diagrammatic representation of Decrease-by-a-constant

19. Define Interpolation search.


The Interpolation Search is an improvement over Binary Search for instances, where the values
in a sorted array are uniformly distributed. Interpolation constructs new data points within the
range of a discrete set of known data points.
Interpolation search may go to different locations according to the value of the key being
searched.

20. Define Dynamic Programming


Dynamic programming is a technique that breaks the problems into sub-problems, and saves the
result for future purposes so that we do not need to compute the result again.
Optimizing the subproblems in order to optimize the overall solution is known as the optimal
substructure property. The main use of dynamic programming is to solve optimization problems.

21. Define Principle of Optimality


The principle of optimality is a fundamental aspect of dynamic programming, which states that
the optimal solution to a dynamic optimization problem can be found by combining the optimal
solutions to its sub-problems. While this principle is generally applicable, it is often only taught
for problems with finite or countable state spaces in order to sidestep measure-theoretic
complexities.

22. Define assignment problem


An assignment problem is a particular case of transportation problem. The objective is to assign
a number of resources to an equal number of activities . So as to minimize total cost or maximize
total profit of allocation.

23. Define Space and Time trade-offs.


Time-Space tradeoff is a situation where one thing increases and another thing decreases. It is a
way to solve a problem in:
 Either in less time and by using more space, or
 In very little space by spending a long amount of time.
24. Define Greedy technique.
The greedy method is a simple and straightforward way to solve optimization problems. It
involves making the locally optimal choice at each stage with the hope of finding the global
optimum. The main advantage of the greedy method is that it is easy to implement and
understand.

25. What is Hamilton circuit problem?


A Hamiltonian circuit is a circuit that visits every vertex once with no repeats. Being a circuit, it
must start and end at the same vertex. A Hamiltonian path also visits every vertex once with no
repeats, but does not have to start and end at the same vertex.

26. Mention the best case and worst case time complexities of Linear Search Algorithm
Best Case Time Complexity: The best case scenario occurs when the target element is found at
the very beginning of the list. In this case, the linear search algorithm would require only one
comparison to find the target. Therefore, the best case time complexity is O(1), which denotes
constant time
Worst Case Time Complexity: The worst case scenario happens when the target element is either
not present in the list or is located at the very end. In this case, the linear search algorithm would
need to compare the target element with each element in the list, resulting in n comparisons,
where n is the number of elements in the list. Therefore, the worst case time complexity is O(n),
which denotes linear time.
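
A minimal Python sketch illustrating both cases; the sample list and search key are assumptions made
purely for illustration.

```python
def linear_search(arr, key):
    for i, value in enumerate(arr):
        if value == key:          # best case: found at index 0 -> O(1)
            return i
    return -1                     # worst case: n comparisons -> O(n)

print(linear_search([7, 3, 9, 1], 9))   # 2
```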

27. What is Knapsack problem?


The Knapsack problem is a classic optimization problem in computer science and mathematics.
It involves selecting a subset of items from a given set, each with its own value and weight, to
maximize the total value while keeping the total weight within a predefined capacity.

28. Mention the different types of sorting techniques.


 Insertion sort
 Selection sort
 Merge sort
 Quick sort
 Bubble sort
 Heap sort

29. Define Directed Graph and Cycle.


A directed graph, also known as a digraph, is a type of graph in which the edges have a specific
direction associated with them. In a directed graph, each edge has a starting vertex and an ending
vertex, and the direction indicates the flow or relationship between the vertices.

30. Write the time complexity of (A) Merge sort (b) Binary search
Time Complexity of Merge Sort: O(n log n)
Time Complexity of Binary Search: O(log n)

31. What is cost adjacency matrix?


A cost adjacency matrix, also known as a weighted adjacency matrix, is a square matrix that
represents the costs or weights associated with the connections between vertices in a graph.
It is commonly used in graph theory and network analysis to represent the relationships and costs
between various nodes or vertices in a network.
32. State fractional knapsack problem.
The fractional knapsack problem is a classic problem in combinatorial optimization. If a set of
items are given, each with a weight and a value, the goal is to select a subset of the items that
maximises the value while keeping the total weight below or equal to a given limit.

33. What is backtracking?


Backtracking is an algorithmic technique for solving problems recursively by trying to build a
solution incrementally, one piece at a time, removing those solutions that fail to satisfy the
constraints of the problem at any point in time.
The idea behind the backtracking technique is that it searches for a solution to a problem among
all the available options.
34. Mention three tree traversal methods.
 In-order Traversal.
 Pre-order Traversal.
 Post-order Traversal.

35. What is sum of subsets problems?


The Subset Sum problem is a classic computational problem in computer science and
mathematics. Given a set of positive integers and a target sum, the problem is to determine
whether there exists a subset of the given set whose elements add up to the target sum.
For example, consider the set S = {3, 5, 2, 8} and the target sum T = 10. The Subset Sum
problem would ask whether there is a subset of S that adds up to 10. In this case, the subset {2,
8} satisfies the condition, as the sum of its elements is 2 + 8 = 10.
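
A minimal brute-force Python sketch of this check, using itertools.combinations to enumerate all
subsets; the instance is the one from the example above.

```python
from itertools import combinations

def subset_sum(S, T):
    # Brute-force check of every subset of S (2^n of them).
    for r in range(len(S) + 1):
        for subset in combinations(S, r):
            if sum(subset) == T:
                return subset
    return None

print(subset_sum([3, 5, 2, 8], 10))   # (2, 8)
```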

36. Write any two difference between analysis and profiling.


ANALYSIS:
- Examination and evaluation of data or information to gain insights and draw conclusions.
- Focuses on understanding the data and drawing meaningful conclusions.
- The aim is to gain insights, identify patterns, and make informed decisions based on data analysis.

PROFILING:
- Identification and creation of a profile or characteristics of an individual, group, or entity
based on collected data or behavior patterns.
- Focuses on creating a comprehensive profile that represents the characteristics or attributes of a
person or entity.
- The aim is to create a detailed profile that can be used for various purposes, such as targeted
marketing, law enforcement, or behavioral analysis.

37. Write two possible solutions of 4-Queen problem.


The 4-Queen Problem is a classic puzzle that involves placing four queens on a 4x4 chessboard
in such a way that no two queens threaten each other. In other words, no two queens should share
the same row, column, or diagonal.
Solution 1:
- Queen 1: Placed at (1, 2) on the chessboard.
- Queen 2: Placed at (2, 4) on the chessboard.
- Queen 3: Placed at (3, 1) on the chessboard.
- Queen 4: Placed at (4, 3) on the chessboard.
The chessboard would look like this:

| |Q| | |
| | | |Q|
|Q| | | |
| | |Q| |

Solution 2:
- Queen 1: Placed at (1, 3) on the chessboard.
- Queen 2: Placed at (2, 1) on the chessboard.
- Queen 3: Placed at (3, 4) on the chessboard.
- Queen 4: Placed at (4, 2) on the chessboard.

The chessboard would look like this:

| | |Q| |
|Q| | | |
| | | |Q|
| |Q| | |

In both solutions, the queens are placed in such a way that they do not threaten each other. No
two queens share the same row, column, or diagonal, satisfying the requirements of the 4-Queen
Problem. It's important to note that these are just two possible solutions, and there can be
additional valid arrangements of the queens on the chessboard.

38. What are lower bound arguments ? Give an example.


Lower bound arguments are used in computer science and algorithm analysis to establish a lower
limit or bound on the resources (such as time or space) required to solve a given problem. These
arguments provide a way to prove that no algorithm can perform better than a certain level of
efficiency.
For example, let's consider sorting the elements [5, 2, 3, 1] using a comparison-based sorting
algorithm. The decision tree for this sorting problem would look like:

?
/ \
? ?
/\ /\
? ?? ?
/\/\/\
5 23 1

In the worst case, the decision tree must have enough leaf nodes to represent all possible
permutations of the input elements. Since there are n! (n factorial) possible permutations for a
list of n elements, the height of the decision tree must be at least log(n!) = Ω(n log n).
39. What is Decrease by a Constant? Give an example.
In this variation, the size of an instance is reduced by the same constant on each iteration or
recursive step of the algorithm. Typically, this constant is equal to one, although other
constant-size reductions can happen. This variation is used in many algorithms, such as:
 Graph search algorithms: DFS, BFS
 Topological sorting
 Algorithms for generating permutations, or subsets
 Insertion sort.

40. What is Exhaustive search?


Exhaustive Search is a brute-force algorithm that systematically enumerates all possible
solutions to a problem and checks each one to see if it is a valid solution.
This algorithm is typically used for problems that have a small and well-defined search space
where it is feasible to check all possible solutions.

41. Define string matching.


A String Matching Algorithm is also called a "String Searching Algorithm." This is a vital class of
string algorithms: methods for finding a place where one or several strings (patterns) occur within
a larger string (the text).

42. What is Abstract Data Type(ADT)


Abstract Data type (ADT) is a type (or class) for objects whose behaviour is defined by a set of
value and a set of operations.
The definition of ADT only mentions what operations are to be performed but not how these
operations will be implemented.

43. What is Empirical Analysis of Algorithms?


Empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to
study the behaviour of algorithms. The practice combines algorithm development and
experimentation: algorithms are not just designed, but also implemented and tested in a variety
of situations. In this process, an initial design of an algorithm is analysed so that the algorithm
may be developed in a stepwise manner.

44. What is Strassen’s Matrix multiplication?


Strassen's Matrix Multiplication Algorithm is an algorithm used to multiply two matrices,
resulting in a third matrix that contains the product of the two input matrices.
The algorithm is used in a variety of scientific and engineering applications, including computer
vision, machine learning, and numerical
simulations.

45. What is hash collection? Give one example.


Hashing is designed to solve the problem of needing to efficiently find or store an item in a
collection.
For example, if we have a list of 10,000 words of English and we want to check if a given word
is in the list, it would be inefficient to successively compare the word with all 10,000 items until
we find a match

46. What is 0/1 or 0-1 in Knapsack problem?


The "0/1" or "0-1" constraint means that each item can either be included in the knapsack
(assigned a value of 1) or not included (assigned a value of 0). This constraint makes the
problem more challenging because you cannot take fractional quantities of items. It's a binary
decision for each item: either include it entirely or exclude it completely.

47. What is kruskal’s algorithm?


The Kruskal's algorithm is a greedy algorithm used to find the minimum spanning tree (MST) of
a connected, weighted undirected graph. The MST is a subgraph that connects all vertices of the
original graph with the minimum total edge weight, without forming any cycles.

48. What is fake coin problem?


The fake coin problem is a classic example of a combinatorial problem that involves identifying
a counterfeit (fake) coin among a set of otherwise identical coins using a balance scale.
There can be two ways of solving this problem: Divide and Conquer Method, Brute Force
Method.

49. Discuss complexity of topological sorting.


Time complexity: every vertex and every edge is visited at least once during the execution of the
algorithm; for each vertex, the algorithm identifies the incoming edges and reduces the in-degrees.
Hence the time complexity is O(V + E), where V is the number of vertices and E the number of edges.
Space complexity: the space complexity of the topological sorting algorithm can be described as
O(V), where V represents the number of vertices in the graph.

50. Given f(n) = 120n + 20, prove that f(n) = O(n2)


We have to find C and k such that f(n) ≤ C * n^2 for all n ≥ k.
f(n) = 120n + 20
for n ≥ 1, we have:
f(n) = 120n + 20 ≤ 120n^2 + 20n^2 (since n ≤ n^2 for n ≥ 1)
f(n) = 120n + 20 ≤ 140n^2
f(n) ≤ C * n^2, where C = 140 and k = 1.
SECTION – B (5 Mark Questions)

1. Write a note on Combinatorial problem


 A combinatorial problem refers to a type of problem in mathematics and computer science that
involves counting or generating combinations of elements from a given set. Combinations are
unordered selections of items, where the order of selection does not matter.
 Combinatorial problems often involve questions such as:
How many different combinations can be formed from a set of elements?
What is the probability of obtaining a specific combination?
How many ways can a set of elements be partitioned into subsets?
To solve combinatorial problems, several techniques and principles are commonly used,
including:
 The Multiplication Principle: This principle states that if one event can occur in m ways and a
second independent event can occur in n ways, then the two events can occur together in m × n
ways.
 Pigeonhole Principle: This principle states that if there are more pigeons than pigeonholes, at
least
one pigeonhole must contain more than one pigeon. It is often used to prove the existence of
certain combinatorial patterns or constraints.
 Combinatorial problems can range from relatively simple counting problems to highly complex
optimization and graph theory problems. They find applications in various areas, such as
cryptography, network analysis, scheduling, genetics, and game theory.
 Solving combinatorial problems requires logical reasoning, mathematical skills, and familiarity
with combinatorial techniques and principles. Various algorithms and strategies have been
developed to tackle combinatorial problems efficiently, such as backtracking, dynamic
programming, and branch and bound algorithms.
 Overall, combinatorial problems play a crucial role in mathematics and computer science, and
their study and solutions contribute to advancements in various fields of research and practical
applications.

2. Write a note on Fake coin problem.


Fake coin problem is a classic problem in computer science to identify the fake coin using a
balanced scale. The method uses decrease by constant factor technique.

There are two approaches to solving this problem:


a) Dividing the coins into 2 groups, where W(n) = W(n/2) -> W(n/4) -> ...
Therefore, time complexity = O(log2 n)
b) Dividing the coins into 3 groups, where W(n) = W(n/3) -> W(n/9) -> ...
Therefore, time complexity = O(log3 n)

Algorithm:
STEP-1: Divide all the coins into 3 equal groups. If the total number of coins is not divisible by
3, place the extra coin(s) aside and check them later.
STEP-2: Weigh the first two groups against each other using the balance scale.
STEP-3: There are two possibilities:
a) The scale will be balanced. Hence, an assumption is made that the fake coin might be
in the 3rd group.
b) The scale is not balanced, that means that the fake coin is in the lighter group.
STEP-4: Repeat step -1,2 and 3 until the lighter group is identified.
STEP-5: Stop the process.

EX: (C1, C2, C3, C4, C5, C6, C7, C8, C9) Let C8 be the fake coin.
STEP-1: Group-1: (C1, C2, C3)
Group-2: (C4, C5, C6)
Group-3: (C7, C8, C9)
STEP-2: Weigh Group-1 and Group-2. Assume that they weigh the same on the balance scale.
This implies that the fake coin is in Group-3.
STEP-3: Divide Group-3 into 3 sub-groups:
Group-3(1): C(7)
Group-3(2): C(8)
Group-3(3): C(9)
STEP-4: Weigh subgroups 3(1) and 3(2); subgroup 3(2) weighs lighter than subgroup 3(1).
Hence C8 is the fake coin.

3. Write a program to find an using brute-force based algorithm.


# Brute force method
# A simple solution to calculate pow(a, n) multiplies a exactly n times,
# using a simple for loop.
def bpower(a, n):
    pow = 1
    for i in range(n):
        pow = pow * a
    return pow

# Divide and Conquer method
# The problem can be recursively defined by:
#   dpower(x, n) = dpower(x, n/2) * dpower(x, n/2)       if n is even
#   dpower(x, n) = x * dpower(x, n/2) * dpower(x, n/2)   if n is odd
def dpower(x, y):
    if y == 0:
        return 1
    elif y % 2 == 0:
        return dpower(x, y // 2) * dpower(x, y // 2)
    else:
        return x * dpower(x, y // 2) * dpower(x, y // 2)

# Main block
a = int(input("Enter a :"))
n = int(input("Enter n :"))
print("Brute Force method a^n : ", bpower(a, n))
print("Divide and Conquer a^n : ", dpower(a, n))

OUTPUT:
Enter a :2
Enter n :3
Brute Force method a^n : 8
Divide and Conquer a^n : 8
4. Write DIJIKSTRA’s algorithm to find the shortest path from a given vertex to all other vertices
in a graph
Algorithm Dijkstra(V, C, D, n)
// Input: V = set of vertices, C = cost adjacency matrix of the directed graph G(V, E),
//        n = number of vertices in the given graph.
// D[i] contains the length of the current shortest path to vertex i.
// C[i][j] is the cost of going from vertex i to j. If there is no edge, we assume
// C[i][j] = ∞ and C[i][i] = 0.
{
    S = {1}                          // assume '1' as the source vertex
    for i = 2 to n do
        D[i] = C[1, i]
    for i = 2 to n do
    {
        Choose a vertex W in (V - S) such that D[W] is minimum
        S = S U {W}                  // add W to S
        for each vertex V in (V - S) do
            D[V] = min(D[V], D[W] + C[W][V])
    }
}
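
A minimal Python sketch of the same idea, using a min-priority queue (heapq) instead of the linear
scan for the closest vertex; the dictionary-based graph representation and the sample graph g are
assumptions made purely for illustration.

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> {neighbour: edge cost}
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                 # (distance, vertex) pairs
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                # stale queue entry, skip it
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:        # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Example usage (hypothetical graph):
g = {'A': {'B': 4, 'C': 1}, 'B': {'D': 1}, 'C': {'B': 2, 'D': 5}, 'D': {}}
print(dijkstra(g, 'A'))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```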

5. Write Huffman’s algorithm to construct an optimal prefix code


Huffman Algorithm to Generate Huffman Tree and Codes
STEP-1: Create a Priority Queue Q consisting of each unique character:
This step involves creating a priority queue, which is a data structure that stores elements based
on their priority, In this case, the priority is determined by the frequencies of the characters.

STEP-2: Sort frequencies in ascending order and store in priority queue.


The frequencies of the characters are sorted in ascending order and stored in the priority queue
Q. This ensures that the characters with the lowest frequencies will have higher priority in the
queue.

STEP-3: Loop through all the unique characters in the queue:


(a) Create a newNode. This new node will eventually become a parent node.
(b) Extract the minimum value from Q and assign it to the leftChild of newNode.
We take the node with the smallest frequency from the front of the queue and make it the left
child of the new node.
(c) Extract the minimum value from Q and assign it to the rightChild of newNode. We take the next
smallest frequency node from the queue and make it the right child of the new node.
(d) Calculate the sum of these two minimum values and assign it to the value of newNode.
The frequency value of the new node is set to be the sum of its children's frequencies.
(e) Insert this newNode into the queue.
Repeat these steps until only one node remains in the queue - this is the root of the Huffman tree.

STEP-4: Create Huffman Codes:


Starting from the root, create the codes by traversing the tree. Moving to the left child adds a
'0' to the code, and moving to the right child adds a '1'. When we reach a leaf node (a symbol),
assign the code accumulated during the traversal to this symbol. In the end, the most frequent
symbols will be represented by the shortest codes, while less frequent symbols will have longer codes.
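
A minimal Python sketch of these steps, using heapq as the priority queue; the sample string and the
tuple-based tree representation are illustrative assumptions, not part of the original solution.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Priority queue of (frequency, tie_breaker, subtree) entries.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # two smallest frequencies
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, code):
        if isinstance(node, str):             # leaf: a single character
            codes[node] = code or "0"
        else:
            walk(node[0], code + "0")         # left edge adds '0'
            walk(node[1], code + "1")         # right edge adds '1'
    walk(heap[0][2], "")
    return codes

print(huffman_codes("BCAADDDCCACACAC"))
```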

6. Construct a state-space for a set S={11,13,24,7} and M=31


The state space represents all possible combinations of assets that can be selected from the set S
to achieve the target value M. Each state in the state space corresponds to a subset of assets. The
state space consists of the following states:

1. Start State: An empty set, representing no assets selected. State: {}.

2. Individual Asset States: States representing the inclusion of a single asset from the given set:
{11}, {13}, {24}, {7}.

3. Combined Asset States: States representing the inclusion of a combination of assets from the
given set: {11, 13}, {11, 24}, {11, 7}, {13, 24}, {13, 7}, {24, 7}.

4. Complete Asset State: A state representing the inclusion of all assets from the given set: {11,
13, 24, 7}.

5. Goal States: States where the sum of the elements in the subset equals the target value M=31.
Possible goal states: {7, 24} and {11, 13, 7}.
The state space encompasses various states, ranging from no assets selected to all assets
included, and includes the goal states where the sum of assets equals the target value. Each state
represents a specific combination of assets from the given set.

7. With a neat diagram, discuss the sequence of steps in designing and analysing an algorithm.

a) UNDERSTANDING THE PROBLEM: Ensure the problem definition is unambiguous and the input
section is correct, so that the correctness of the algorithm can be judged.
Identify the input, output, and any constraints or special cases.
Determine the problem's complexity, such as its time and space requirements.

b) DECIDE ON: the storage capacity, in particular the RAM capacity, to be identified, and whether
sequential or parallel processing will be used for solving the problem.

c) DESIGN AN ALGORITHM: A general approach to solving the problem algorithmically that is
applicable to a variety of problems from different areas of computing.

d) DATA STRUCTURE: Identify the tools required in designing the solution, such as a flowchart or
pseudocode, and the data structures such as stack, queue, array, tree, graph etc. used in
designing the algorithm.

e) PROVING ALGORITHMIC CORRECTNESS: The correctness of the algorithm should be established, for
example by mathematical induction, and there should be an ending point where the algorithm stops
by returning the solution to the problem.

f) ANALYZE THE ALGORITHM: The efficiency of the algorithm is measured in terms of time
efficiency; this kind of analysis is called performance analysis. The algorithm is also generally
measured in terms of simplicity.

g) CODING THE ALGORITHM: Converting the algorithm's steps into equivalent programming
statements, then testing and debugging and arriving at a computerised solution.

8. Write a program to solve the string matching problem using KMP algorithm
def build_prefix_table(pattern):
    prefix_table = [0] * len(pattern)
    length = 0  # Length of the previous longest prefix suffix

    for i in range(1, len(pattern)):
        while length > 0 and pattern[i] != pattern[length]:
            length = prefix_table[length - 1]

        if pattern[i] == pattern[length]:
            length += 1
        prefix_table[i] = length

    return prefix_table


def kmp_search(text, pattern):
    prefix_table = build_prefix_table(pattern)
    matches = []
    n = len(text)
    m = len(pattern)
    i, j = 0, 0  # Pointers for text and pattern

    while i < n:
        if pattern[j] == text[i]:
            i += 1
            j += 1

            if j == m:
                matches.append(i - j)
                j = prefix_table[j - 1]
        else:
            if j != 0:
                j = prefix_table[j - 1]
            else:
                i += 1

    return matches


# Example usage:
text = "ABCABCDABABCDABCDABDE"
pattern = "ABCDABD"
matches = kmp_search(text, pattern)

if len(matches) > 0:
    print("Pattern found at positions:")
    print(matches)
else:
    print("Pattern not found in the text.")

9. List Boyer-Moore algorithm outline.


Algorithm or General Procedure or Steps involved in Boyer-Moore String Matching

Step 1:
First, we need to do preprocessing. We do this by creating two shift tables: the bad-character
shift table and the good-suffix shift table. These tables are made based on the given pattern and
the alphabet used in both the pattern and the text.

Step 2:
We start the search by aligning the pattern with the beginning of the text. We then enter a loop
where we keep on comparing the characters in the pattern with the corresponding characters in
the text. We start the comparison from the last character of the pattern and move towards the
beginning of the pattern.

a. If all the characters in the pattern match the corresponding characters in the text, we have
found a match and the search stops.
b. If a mismatch occurs after matching k characters from the right, we consult our shift tables to
decide how much to shift the pattern to the right.

(i) If k = 0, i.e. the mismatch occurs at the last character of the pattern, we look up the
mismatched text character in the bad-character shift table and shift the pattern to the right by
the amount indicated in the table.
(ii) If k > 0, i.e. there has been a partial match, we also look up the shift value from the
good-suffix shift table. The pattern is then shifted to the right by the larger of the two shift
values (from the bad-character shift table and the good-suffix shift table), or by 1 if the shift
value is not found in either table.

Step 3:
The loop in step 2 is repeated until either a match is found, or the pattern has moved past the end
of the text, indicating no match exists. If a match is found, the algorithm returns the position of
the match. If no match is found, the algorithm returns -1.

10. Write the Warshall’s algorithm to compute the transitive closure of a graph.
Algorithm: Warshall(C)

// Input: The adjacency matrix C of a given digraph G(V, E)
// Output: The transitive closure of the digraph
D(0) <- C
for k <- 1 to n do
    for i <- 1 to n do
        for j <- 1 to n do
            D(k)[i, j] <- D(k-1)[i, j] or (D(k-1)[i, k] and D(k-1)[k, j])
        end for
    end for
end for
return D(n)
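
A minimal Python sketch of the algorithm above; the 0/1 matrix A is an assumed adjacency matrix of
the digraph 1 -> 2 -> 3 -> 4, chosen here only for illustration.

```python
def warshall(adj):
    # adj: n x n 0/1 adjacency matrix of the digraph
    n = len(adj)
    R = [row[:] for row in adj]               # R(0) = adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # A path i -> j exists if it already existed, or if it can
                # go through the intermediate vertex k.
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
for row in warshall(A):
    print(row)    # [0,1,1,1] / [0,0,1,1] / [0,0,0,1] / [0,0,0,0]
```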

11. Explain decision trees for searching a sorted array with an example.
A decision tree is a data structure used to represent a sequence of decisions and their potential
outcomes. It's often used in various algorithms, including searching in a sorted array. A decision
tree breaks down the search process into a series of binary decisions, leading to the final
outcome.

Let's explain decision trees for searching a sorted array with an example:

Suppose you have a sorted array of integers: [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]. You want to
search for the value 23 within this array using a decision tree approach.

1. Start at the middle element of the array: 16.

2. Is 16 equal to 23? No.
- If 23 is greater than 16, the value must be in the right half of the array.
- If 23 is less than 16, the value must be in the left half of the array.

Since 23 > 16, we follow the right branch. The decision tree so far:

```
        16
       /  \
value < 16  value > 16
```

3. Move to the middle element of the right half [23, 38, 56, 72, 91]: 56.

4. Is 56 equal to 23? No. Since 23 < 56, the value must be in the left sub-array [23, 38].

Now we have two more branches in the decision tree:

```
        16
       /  \
value < 16  56
           /  \
  value < 56   value > 56
```

5. Move to the middle element of the left sub-array [23, 38]: 23.

6. Is 23 equal to 23? Yes. The search stops successfully.

```
        16
       /  \
value < 16  56
           /  \
          23   value > 56
         /  \
value < 23   38
```

This decision tree illustrates the process of searching for the value 23 in the sorted array using
binary decisions at each step. The decision tree helps visualize the binary search algorithm,
which is an efficient way to search in a sorted array by repeatedly halving the search space.
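
A minimal Python sketch of the binary search whose comparisons the decision tree above describes;
the array and key follow the example in this answer.

```python
def binary_search(arr, key):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # each comparison is a node of the decision tree
        if arr[mid] == key:
            return mid
        elif key > arr[mid]:
            low = mid + 1            # follow the "value > arr[mid]" branch
        else:
            high = mid - 1           # follow the "value < arr[mid]" branch
    return -1

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(data, 23))   # 5
```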

12. With an example, explain Step counts w.r.t to computing the time efficiency of algorithm.
Step counts are a way to analyze and compute the time efficiency of an algorithm by counting
the number of elementary operations or steps performed during its execution. By determining
the number of steps required for an algorithm, we can estimate its time complexity and make
comparisons between different algorithms.
In this example, we can count the number of steps performed in the algorithm:

EX: Counting the steps of a function that computes the sum of n array elements:

int array_sum(int a[], int n)
{
    int sum = 0;                   // 1 step
    for (int i = 0; i < n; i++)    // n + 1 steps
        sum += a[i];               // n steps
    return sum;                    // 1 step
}
Total step count = 1 + (n + 1) + n + 1 = 2n + 3

In this case, the step count is directly proportional to the size of the input array, denoted by n.
This indicates that the time complexity of the algorithm is linear, or O(n), as the number of
steps grows linearly with the input size.

13. Write Johnson-Trotter algorithm for generating permutations


The Johnson-Trotter algorithm is a method for generating all permutations of a given set. It
utilizes the concept of "mobile elements" and their movements to generate the permutations.
Algorithm: Johnson-Trotter:
STEP-1: Initialize the permutation as the starting arrangement of elements (e.g., in lexicographic
order).
STEP-2: Initialize the direction array, which tracks the direction of each element's movement (1
for right, -1 for left). Initially, set all directions to -1.
STEP-3: While there is at least one mobile element:
a. Find the largest mobile element (an element that is greater than its neighbouring element in the
direction it is facing).
b. Swap the mobile element with its neighbour in the direction it is facing.
c. Reverse the direction of all elements greater than the mobile element.
STEP-4: Output the current permutation.
STEP-5: Repeat steps 3-4 until there are no more mobile elements.
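
A minimal Python sketch of these steps; the helper function names and the use of -1/+1 for the two
directions are illustrative choices, not part of the original algorithm statement.

```python
def johnson_trotter(n):
    LEFT, RIGHT = -1, 1
    perm = list(range(1, n + 1))
    direction = [LEFT] * n                   # every element initially points left
    result = [perm[:]]

    def largest_mobile():
        # An element is mobile if it is greater than the adjacent element
        # it is pointing to; return the index of the largest mobile element.
        best = -1
        for i in range(n):
            j = i + direction[i]
            if 0 <= j < n and perm[i] > perm[j]:
                if best == -1 or perm[i] > perm[best]:
                    best = i
        return best

    while True:
        i = largest_mobile()
        if i == -1:
            break
        j = i + direction[i]
        perm[i], perm[j] = perm[j], perm[i]            # move the mobile element
        direction[i], direction[j] = direction[j], direction[i]
        for k in range(n):                             # reverse directions of larger elements
            if perm[k] > perm[j]:
                direction[k] = -direction[k]
        result.append(perm[:])
    return result

print(johnson_trotter(3))
# [[1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]]
```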

14. Write a program to solve towers of Hanoi problem for different number of disks.

# Recursive Python function to solve the Tower of Hanoi
def TowerOfHanoi(n, source, destination, auxiliary):
    if n == 1:
        print("Move disk 1 from source", source, "to destination", destination)
        return
    TowerOfHanoi(n - 1, source, auxiliary, destination)
    print("Move disk", n, "from source", source, "to destination", destination)
    TowerOfHanoi(n - 1, auxiliary, destination, source)

# Main Block
n = int(input("Enter number of disk : "))
TowerOfHanoi(n, 'A', 'B', 'C')
# A, B, C are the names of the rods
OUTPUT:
Move disk 1 from source A to destination C
Move disk 2 from source A to destination B
Move disk 1 from source C to destination B
Move disk 3 from source A to destination C
Move disk 1 from source B to destination A
Move disk 2 from source B to destination C
Move disk 1 from source A to destination C
Move disk 4 from source A to destination B
Move disk 1 from source C to destination B
Move disk 2 from source C to destination A
Move disk 1 from source B to destination A
Move disk 3 from source C to destination B
Move disk 1 from source A to destination C
Move disk 2 from source A to destination B
Move disk 1 from source C to destination B

15. Write an algorithm to solve Knapsack problem using memory function


Algorithm MFKnapsack(i, j)
// Input: A non-negative integer i indicating the number of the first items being considered and a
// non-negative integer j indicating the knapsack's capacity.
// Output: The value of an optimal feasible subset of the first i items.
// NOTE: Uses as global variables the weights w[1..n], the values v[1..n], and the table
// V[0..n, 0..W], whose entries are initialized with -1 except for row 0 and column 0,
// which are initialized with 0.
if V[i, j] < 0
    if j < w[i]
        value <- MFKnapsack(i - 1, j)
    else
        value <- max(MFKnapsack(i - 1, j), v[i] + MFKnapsack(i - 1, j - w[i]))
    V[i, j] <- value
return V[i, j]
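
A minimal Python sketch of the memory-function idea, assuming 0-indexed weight and value lists; the
small instance in the usage line is an illustrative assumption.

```python
def mf_knapsack(weights, values, capacity):
    n = len(weights)
    # V[i][j] = best value using the first i items with capacity j;
    # -1 marks entries that have not been computed yet (row 0 and column 0 are 0).
    V = [[0] * (capacity + 1) if i == 0 else [0] + [-1] * capacity
         for i in range(n + 1)]

    def solve(i, j):
        if V[i][j] < 0:
            if j < weights[i - 1]:                  # item i does not fit
                value = solve(i - 1, j)
            else:
                value = max(solve(i - 1, j),
                            values[i - 1] + solve(i - 1, j - weights[i - 1]))
            V[i][j] = value
        return V[i][j]

    return solve(n, capacity)

# Example usage (hypothetical instance):
print(mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # 37
```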

16. Write a note on Russian peasant multiplication


The algorithm works as follows:
STEP-1: Start with the two numbers to be multiplied.
STEP-2: Create two columns: one for the first number and one for the second number.
STEP-3: In the first column, continuously divide the number by 2 (halve it) until reaching 1.
STEP-4: In the second column, continuously multiply the number by 2 (double it) until
reaching the same number of rows as the first column.
STEP-5: Cross out all the rows in the first column that have an even number.
STEP-6: Sum up the remaining numbers in the second column. This is the result of the
multiplication.
EX: 18 by 7
Col-1 Col-2
18 7
9 14
4 28
2 56
1 112
14+112=126
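
A minimal Python sketch of the method; keeping only the rows whose first column is odd corresponds
to the `if a % 2 == 1` test below.

```python
def russian_peasant(a, b):
    # Halve a and double b; add b to the result whenever a is odd
    # (i.e. keep only the rows whose first column holds an odd number).
    result = 0
    while a >= 1:
        if a % 2 == 1:
            result += b
        a //= 2
        b *= 2
    return result

print(russian_peasant(18, 7))   # 126
```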

17. Write a note on travel salesman problem


As the definition for greedy approach states, we need to find the best optimal solution locally to
figure out the global optimal solution. The inputs taken by the algorithm are the graph G {V, E},
where V is the set of vertices and E is the set of edges. The shortest path of graph G starting from
one vertex returning to the same vertex is obtained as the output.
Examples
Consider the following graph with six cities and the distances between them −

From the given graph, since the origin is already mentioned, the solution must always start from
that node. Among the edges leading from A, A → B has the shortest distance.
Then, B → C has the shortest and only edge between, therefore it is included in the output graph.

There’s only one edge between C → D, therefore it is added to the output graph.

There are two outward edges from D. Even though D → B has a lower distance than D → E, B has
already been visited and adding it would form a cycle in the output graph. Therefore, D → E is
added to the output graph.

There is only one edge from E, that is E → F. Therefore, it is added to the output graph.

Again, even though F → C has a lower distance than F → A, F → A is added to the output graph,
since C has already been visited and choosing it would form a cycle.

The shortest path that originates and ends at A is A → B → C → D → E → F → A


The cost of the path is: 16 + 21 + 12 + 15 + 16 + 34 = 114.

The cost of the path could be lower if it originated from a different node, but the problem fixes
the origin at A.

18. Explain Decision Trees for Insertion sort algorithm.


Insertion sort works similar to the sorting of playing cards in hands. It is assumed that the first
card is already sorted in the card game, and then we select an unsorted card. If the selected
unsorted card is greater than the first card, it will be placed at the right side; otherwise, it will be
placed at the left side. Similarly, all unsorted cards are taken and put in their exact place.
Algorithm
The simple steps of achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted. Return 1.
Step2 - Pick the next element, and store it separately in a key.
Step3 - Now, compare the key with all elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the current element, then move to the
next element. Else, shift greater elements in the array towards the right.
Step 5 - Insert the value.
Step 6 - Repeat until the array is sorted.

EX: Let the elements of the array be 12, 31, 25, 8, 32, 17.

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check it with all elements in the sorted array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that
are 31 and 8.
Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31
and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.


17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.
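
A minimal Python sketch of the steps above, run on the same example array; variable names are
illustrative.

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]                       # pick the next unsorted element
        j = i - 1
        while j >= 0 and arr[j] > key:     # shift larger elements to the right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                   # insert the key in its place
    return arr

print(insertion_sort([12, 31, 25, 8, 32, 17]))   # [8, 12, 17, 25, 31, 32]
```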

19. Write Topological Sorting Algorithm and Explain with examples.


Topological sorting is an algorithm used to order the vertices of a directed acyclic graph (DAG)
in such a way that for every directed edge (u, v), vertex u comes before vertex v in the ordering.
This ordering is often referred to as a topological order. Topological sorting is used in various
applications such as task scheduling, dependency resolution, and determining the order of
events.
Algorithm: Topological Sort (G)
1. Find the in-degree INDEG(N) of each node N of G.
2. Put all the nodes with zero in-degree in a queue Q
3. Repeat Step 4 and 5 until Queue become empty.
4. Remove the front node N of the queue Q and add it to T.
(Set Front = Front + 1)
5. Repeat the following for each neighbor M of the node N.
a. Set INDEG(M) = INDEG(M)- 1
[delete the edges from N to M]
b. If INDEG(M) = 0 then Add M to the rear end of the Q
6. Exit

EX: Consider a DAG with the vertices A, B, C, D and E, whose edges are indicated by the
in-degrees below (the original graph figure is not reproduced).

STEP-1: Write in-degree of each vertex-


INDEG (A)=0
INDEG (B)=1
INDEG (C)=2
INDEG (D)=2
INDEG (E)=2

STEP-2: Add A into the queue Q


A
Front=1, Rear=0

STEP-3: Remove A from Q and Add it to T. Q=>

T=
A

Decrease the indegree of A’s neighbouring nodes by 1


INDEG (B)=2-1=1
INDEG (C)=2-1=1

STEP-4: Add C into Q


C

STEP-5: Remove C from Q and Add it to T. Q=>

T=
A C
Decrease the indegree of C’s neighbouring nodes by 1
INDEG (B)=1-1=0
INDEG (D)=1-1=0

STEP-6: Add both B and D into Q=


B D
T=
A C B
Therefore, now Q=
D
Decrease the indegree of D’s neighbouring nodes by 1
INDEG (E)=1-1=0

STEP-7: Add E into Q=


D E
T=
A C B D
Therefore, now Q=
E

STEP-8: Add E into T


A C B D E
Therefore, now Q=

Since all the nodes in the DAG had been visited and the queue is null, stop the process.
Therefore, the topological sort: A->C->B->D->E
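
A minimal Python sketch of the queue-based algorithm described above; the dictionary g is an assumed
example DAG whose topological order matches the trace A -> C -> B -> D -> E.

```python
from collections import deque

def topological_sort(graph):
    # graph: dict mapping vertex -> list of vertices it points to
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indeg[w] += 1
    q = deque(v for v in graph if indeg[v] == 0)   # all zero in-degree vertices
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for w in graph[v]:                         # "delete" the edges from v
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    return order if len(order) == len(graph) else None   # None => graph has a cycle

g = {'A': ['B', 'C'], 'C': ['B', 'D'], 'B': ['E'], 'D': ['E'], 'E': []}
print(topological_sort(g))   # ['A', 'C', 'B', 'D', 'E']
```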

20. Discuss Asymptotic notations.


Asymptotic notations are mathematical notations that allow you to analyze an algorithm's
running time by describing its behaviour as its input size grows.
This is also referred to as an algorithm's growth rate.
There are mainly three asymptotic notations:
Big-O Notation (O-notation)
Omega Notation (Ω-notation)
Theta Notation (Θ-notation)

1. Theta Notation (Θ-Notation):


Theta notation encloses the function from above and below. Since it represents the upper and
the lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.

EX: f(n)=2n+5
Theta notation defines tight bound curve. So, let c1=2 and c2=3
2(n)<=2n+5<=3n
If n=1, 2(1)<=2(1)+5<=3(1) => 2<=7<=3 -> False
If n=2, 2(2)<=2(2)+5<=3(2) => 4<=9<=6 -> False
If n=3, 2(3)<=2(3)+5<=3(3) => 6<=11<=9 -> False
If n=4, 2(4)<=2(4)+5<=3(4) => 8<=13<=12 -> False
If n=5, 2(5)<=2(5)+5<=3(5) => 10<=15<=15 -> True
Condition is satisfied.
Therefore, c1.g(n)<= f(n)<=c2.g(n)

2. Big-O Notation (O-notation):


Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it
gives the worst-case complexity of an algorithm.

EX: f(n)=2n+5
Big O defines upper bound curve. So, let c=3
2n+5<=3n
If n=1, 2(1)+5<=3(1) => 7<=3 -> False
If n=2, 2(2)+5<=3(2) => 9<=6 -> False
If n=3, 2(3)+5<=3(3) => 11<=9 -> False
If n=4, 2(4)+5<=3(4) => 13<=12 -> False
If n=5, 2(5)+5<=3(5) => 15<=15 -> True
Condition is satisfied.
Therefore, f(n)<=c.g(n)

3. Omega Notation (Ω-Notation):


Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.

EX: f(n)=2n+5
Omega Notation defines lower bound curve. So, let c=2
2n+5>=2n
If n=1, 2(1)+5>=2(1) => 7>=2 -> True
Condition is satisfied.
Therefore, f(n)>=c.g(n)

21. Write the advantages and disadvantages of divide and conquer technique
Advantages of Divide and Conquer Algorithm:
 The difficult problem can be solved easily.
 It divides the entire problem into subproblems thus it can be solved parallelly ensuring
multiprocessing
 Efficiently uses cache memory without occupying much space
 Reduces time complexity of the problem
 Solving difficult problems: Divide and conquer technique is a tool for solving difficult
problems conceptually. e.g. Tower of Hanoi puzzle.
 Algorithm efficiency: The divide-and-conquer paradigm often helps in the discovery of efficient
algorithms.

Disadvantages of Divide and Conquer Algorithm:


 It involves recursion which is sometimes slow
 Efficiency depends on the implementation of logic
 It may crash the system if the recursion is performed rigorously.
 Overhead: The process of dividing the problem into subproblems and then combining the
solutions can require additional time and resources.
 Complexity: Dividing a problem into smaller subproblems can increase the complexity of the
overall solution.
 Difficulty of implementation: Some problems are difficult to divide into smaller subproblems or
require a complex algorithm to do so. In these cases, it can be challenging to implement a
divide and conquer solution.

22. Explain merge sort algorithm with an example


Merge sort keeps on dividing the list into equal halves until it can no more be divided. By
definition, if it is only one element in the list, it is considered sorted. Then, merge sort combines
the smaller sorted lists keeping the new list sorted too.

Step 1 − if it is only one element in the list, consider it already sorted, so return.

Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into new list in sorted order.

Example
In the following example, we have shown Merge-Sort algorithm step by step. First, every
iteration array is divided into two sub-arrays, until the sub-array contains only one element.
When these sub-arrays cannot be divided further, then merge operations are performed.
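
A minimal Python sketch of the divide-and-merge steps described above; the sample array is an
illustrative assumption.

```python
def merge_sort(arr):
    if len(arr) <= 1:                    # a single element is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # divide and sort the two halves
    right = merge_sort(arr[mid:])
    merged = []                          # merge the sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```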

23. Compare depth first search and breadth first search.


DFS:
 DFS stands for Depth First Search.
 DFS uses a Stack data structure.
 DFS is a traversal approach in which the traverse begins at the root node and proceeds through
the nodes as far as possible until we reach a node with no unvisited nearby nodes.
 In DFS, we might traverse through more edges to reach a destination vertex from a source.
 DFS builds the tree sub-tree by sub-tree.
 It works on the concept of LIFO (Last In First Out).
 DFS is more suitable when there are solutions away from the source.

BFS:
 BFS stands for Breadth First Search.
 BFS uses a Queue data structure for finding the shortest path.
 BFS is a traversal approach in which we first walk through all nodes on the same level before
moving on to the next level.
 BFS can be used to find a single-source shortest path in an unweighted graph because, in BFS, we
reach a vertex with a minimum number of edges from a source vertex.
 BFS builds the tree level by level.
 It works on the concept of FIFO (First In First Out).
 BFS is more suitable for searching vertices closer to the given source.

24. Discuss order of Growth


The order of growth, also known as the time complexity or asymptotic complexity, is a way to
describe how the runtime or resource usage of an algorithm grows relative to the input size. It
provides an estimate of how the algorithm's performance scales as the input size increases.
The order of growth is typically expressed using big O notation, denoted as O(f(n)), where f(n)
represents a function that characterizes the algorithm's behavior. The order of growth provides
an upper bound on the algorithm's runtime.

Here are some commonly encountered orders of growth, listed in increasing order of
performance:

1. O(1) - The algorithm's runtime does not depend on the input size. It executes a constant
number of operations, regardless of the input.

2. O(log n) - The algorithm's runtime grows logarithmically with the input size. Each step
reduces the problem size by a constant fraction, resulting in efficient performance for large
inputs.

3. O(n) - The algorithm's runtime grows linearly with the input size. Each input element is
processed exactly once, leading to a proportional increase in runtime.

4. O(n log n) - The algorithm's runtime grows in proportion to n multiplied by the logarithm of
n. It arises in efficient sorting algorithms like Merge Sort and Quick Sort.

5. O(n^2) - The algorithm's runtime grows quadratically with the input size. It commonly occurs
in nested loops, where each element needs to be compared with every other element.

6. O(2^n) - The algorithm's runtime grows exponentially with the input size. It is often
associated with brute-force algorithms that explore all possible combinations, making it
inefficient for larger inputs.

7. O(n!) - The algorithm's runtime grows factorially with the input size. It arises in algorithms
that involve generating permutations or combinations.

25. Explain what are the basic steps that are to be followed to analyze recursive and non-recursive
algorithm.
In analysing the efficiency of any recursive algorithm, there are some basic steps to be
followed:
1. Deciding the input parameters size.
2. Identifying the basic operations required.
3. Finding the reasons if the basic operation is to be executed more than once.
4. Setting up a recurrence relation, with an appropriate initial condition, for expressing the
number of times the basic operation is executed.
5. Solving the recurrence relation for finding the complex function and order of growth

In analysing the efficiency of any non-recursive algorithm, there are some basic steps to be
followed:
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the inner most loop.)
3. Check whether the number of times the basic operation is executed depends only on the size
of an input. If it also depends on some additional property, the worst-case, average-case, and,
if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula
for the count or, at the very least, establish its order of growth.

26. Find the minimum cost spanning tree by Prim's algorithm.


27. Explain 4-Queen's problem.
4 Queens Problem
4 – Queens’ problem is to place 4 – queens on a 4 x 4 chessboard in such a manner that no queens
attack each other by being in the same row, column, or diagonal.

We will look for the solution for n=4 on a 4 x 4 chessboard.

Here we have to place 4 queens say Q1, Q2, Q3, Q4 on the 4 x 4 chessboard such that no 2
queens attack each other.

 Let’s suppose we’re putting our first queen Q1 at position (1, 1). Now for Q2 we can’t put it in
row 1 (because the queens would conflict).
 So for Q2 we will have to consider row 2. In row 2 we can place it in column 3, i.e. at (2, 3),
but then there will be no option for placing Q3 in row 3.
 So we backtrack one step and place Q2 at (2, 4); then we find the position for placing Q3 is
(3, 2), but by this, no option will be left for placing Q4.
 Then we have to backtrack till Q1 and put it at (1, 2) instead of (1, 1), and then all other
queens can be placed safely by moving Q2 to the position (2, 4), Q3 to (3, 1), and Q4 to (4, 3).
Hence we got our solution as (2, 4, 1, 3), this is the one possible solution for the 4-Queen
Problem. For another solution, we will have to backtrack to all possible partial solutions

The other possible solution for the 4 Queen problem is (3, 1, 4, 2)
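
A minimal Python backtracking sketch for the n-queens problem; for n = 4 it produces exactly the two
solutions discussed above. The representation (one column index per row) is an illustrative choice.

```python
def solve_n_queens(n):
    solutions = []
    cols = []                        # cols[r] = column of the queen placed in row r

    def safe(row, col):
        for r, c in enumerate(cols):
            # same column or same diagonal means an attack
            if c == col or abs(row - r) == abs(col - c):
                return False
        return True

    def place(row):
        if row == n:
            solutions.append(tuple(c + 1 for c in cols))   # report 1-based columns
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                place(row + 1)       # try the next row
                cols.pop()           # backtrack

    place(0)
    return solutions

print(solve_n_queens(4))   # [(2, 4, 1, 3), (3, 1, 4, 2)]
```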


SECTION –C (8 Mark Questions)

1. Explain in detail the mathematical analysis of algorithm for Matrix multiplication


In analysing the efficiency of any non-recursive algorithm, there are some basic steps to be
followed:
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of
an input. If it also depends on some additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for
the count or, at the very least, establish its order of growth.
Algorithm MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
// Multiplication of two n*n matrices A and B
// Input : Two n*n matrices A and B
// Output: Another n*n matrix C = AB
{
    for i = 0 to n - 1 do
        for j = 0 to n - 1 do
            C[i, j] = 0
            for k = 0 to n - 1 do
                C[i, j] = C[i, j] + A[i, k] * B[k, j]
}

The Steps Involved:

1. Input: Parameter1 size: n^2


Parameter2 size: n^2

2. Basic operation required: C[i,j]=C[i,j]+A[i,k] * B[k, j]

3. Reason: As each row of one matrix is to be multiplied with each column of the other
with respective elements, three for loops: one for traversing the input matrices, and the
remaining two for Initializing the output matrix and traversing all the three matrices are to
be used.

4. Sum expressing the number of times the basic operation is executed:
T(n) = sum over i = 0 to n-1 of (sum over j = 0 to n-1 of (sum over k = 0 to n-1 of 1))

5. Basic formula involved in solving the relation: each innermost sum equals n, so
T(n) = n * n * n

Therefore, T(n) = n^3.
The time complexity of finding the product of two n x n matrices is O(n^3).

2. Write an Lomuto portioning algorithm. With neat diagram explain its working.
The Lomuto Partitioning algorithm is a partitioning technique used in the QuickSort
algorithm to divide an array into two parts, such that all elements less than or equal to the
pivot are on the left side, and all elements greater than the pivot are on the right side. The
Lomuto partitioning algorithm is easier to understand and implement compared to other
partitioning methods.
Here's the Lomuto Partitioning algorithm in Python:

```python
def lomuto_partition(arr, low, high):
    pivot = arr[high]                    # choose the last element as the pivot
    i = low - 1                          # boundary of the "<= pivot" region

    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]

    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # put the pivot in its final place
    return i + 1
```

**Working of Lomuto Partitioning Algorithm**:

Let's understand the working of the Lomuto partitioning algorithm with the help of a step-
by-step diagram:

Consider the array: `[8, 4, 6, 2, 7, 3, 1, 5]`

Step 1: Choose a pivot (usually the last element of the array). In this case, the pivot is 5
(last element).

```
[8, 4, 6, 2, 7, 3, 1, (5)]
```

Step 2: Initialize the variables `i` and `j`. `i` marks the right end of the region of elements less than or equal to the pivot and starts at `low - 1 = -1` (just before the array). `j` scans the array from index 0 up to the second-to-last index.

```
i = -1, j = 0
[8, 4, 6, 2, 7, 3, 1, (5)]
```

Step 3: Compare the element at index `j` with the pivot (5). If it is less than or equal to the pivot, increment `i` and swap it with the element at index `i`. At j = 0, 8 > 5, so nothing happens; at j = 1, 4 <= 5, so `i` becomes 0 and 4 is swapped with 8.

```
i = 0, j = 1
[4, 8, 6, 2, 7, 3, 1, (5)]
```

Step 4: Repeat the process until `j` has passed the second-to-last element. The elements 2, 3 and 1 are swapped into the growing "<= pivot" region, while 6, 7 and 8 remain to its right.

```
i = 3 (end of loop)
[4, 2, 3, 1, 7, 6, 8, (5)]
```

Step 5: At the end of the loop, place the pivot (5) in its correct position by swapping it with the element at index `i + 1` (index 4).

```
[4, 2, 3, 1, (5), 6, 8, 7]
```

Step 6: The pivot is now in its correct position (index 4). All elements to its left are less than or equal to it, and all elements to its right are greater than it.

Step 7: Return the index of the pivot, `i + 1 = 4`. The array is now partitioned, and QuickSort can be applied recursively to the left subarray [4, 2, 3, 1] and the right subarray [6, 8, 7].

The Lomuto Partitioning algorithm efficiently divides the array into two parts around the
pivot, facilitating the QuickSort process. The process continues recursively on both
subarrays until the entire array is sorted.
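To show how the partition routine is used, here is a minimal QuickSort driver (an illustrative sketch, not part of the original answer) that calls the `lomuto_partition` function defined above.

```python
def quicksort(arr, low=0, high=None):
    # Sorts arr in place using the Lomuto partition scheme defined above.
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = lomuto_partition(arr, low, high)   # pivot ends up at index p
        quicksort(arr, low, p - 1)             # sort the elements before the pivot
        quicksort(arr, p + 1, high)            # sort the elements after the pivot

data = [8, 4, 6, 2, 7, 3, 1, 5]
quicksort(data)
print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]
```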

3. Apply Horspool’s algorithm to search for the pattern GREAT in the text
SAURAVISREALLYGREAT
Character    G  R  E  A  T
Shift Value  4  3  2  1  5

SV(G)= 5-0-1=4
SV(R)= 5-1-1=3
SV(E)= 5-2-1=2
SV(A)= 5-3-1=1
SV(T)= 5 (T occurs only as the last character of the pattern, so the default shift m = 5 applies)

Alignment 1: the text character aligned with the last pattern character is A. T != A; A occurs among the first four pattern characters, so shift by SV(A) = 1.

Alignment 2: the aligned character is V. T != V; V does not occur in the pattern, so shift by 5.

Alignment 3: the aligned character is A. T != A; shift by SV(A) = 1.

Alignment 4: the aligned character is L. T != L; L does not occur in the pattern, so shift by 5.

Alignment 5: the aligned character is E. T != E; E occurs in the pattern, so shift by SV(E) = 2.

Alignment 6: the aligned character is T. T = T, and comparing right to left, A, E, R and G all match as well. The pattern GREAT is found at the end of the text (starting at position 15, 1-based).
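For completeness, the whole search can be sketched in Python (an illustrative implementation, not a prescribed one). It builds the same shift table and performs the same six alignments as the trace above.

```python
def horspool(text, pattern):
    m, n = len(pattern), len(text)
    # Default shift is m; each of the first m-1 pattern characters gets
    # shift m - 1 - (index of its rightmost occurrence among those characters).
    shift = {ch: m for ch in set(text)}
    for k in range(m - 1):
        shift[pattern[k]] = m - 1 - k

    i = m - 1                          # text index aligned with the last pattern character
    while i <= n - 1:
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1                     # compare right to left
        if k == m:
            return i - m + 1           # 0-based index where the match starts
        i += shift[text[i]]            # slide the pattern by the shift of the aligned character
    return -1

print(horspool("SAURAVISREALLYGREAT", "GREAT"))   # 14 (GREAT starts at 0-based index 14)
```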

4. Apply Warshall’s algorithm to find the transitive closure of the digraph defined by the
following adjacency matrix:

To find the transitive closure of a directed graph using Warshall's algorithm, we perform a
matrix operation to determine if there is a path between any two vertices in the graph. The
transitive closure matrix will show the existence of a path from vertex i to vertex j, where a
value of 1 represents a path and 0 represents no path.

The adjacency matrix of the given directed graph is as follows:

```
0100
0010
0001
0000
```

Let's apply Warshall's algorithm to find the transitive closure:

**Step 1: Initialize the Transitive Closure Matrix (T)**

The transitive closure matrix T is initialised to the adjacency matrix itself, since every edge is already a path of length one.

```
T= 0100
0010
0001
0000
```

**Step 2: Update Transitive Closure Matrix (T)**

Using Warshall's algorithm, we update the transitive closure matrix T as follows:

- For each intermediate vertex k (taken in turn) and for every pair of vertices (i, j), update T[i][j] as T[i][j] OR (T[i][k] AND T[k][j]).

```
k = 1: column 1 of T is all zeros (no edge enters vertex 1), so nothing changes.

T= 0100
   0010
   0001
   0000

k = 2: the path 1 -> 2 -> 3 sets T[1][3] = 1.

T= 0110
   0010
   0001
   0000

k = 3: the paths 1 -> 3 -> 4 and 2 -> 3 -> 4 set T[1][4] = 1 and T[2][4] = 1.

T= 0111
   0011
   0001
   0000

k = 4: row 4 of T is all zeros (vertex 4 has no outgoing edges), so nothing changes.
```

**Step 3: Final Transitive Closure Matrix (T)**


After updating the transitive closure matrix for all possible intermediate vertices (k), the
final transitive closure matrix T is:

```
T= 0111
0011
0001
0000
```

**Result:**

The transitive closure of the directed graph represented by the given adjacency matrix is:

```
T= 0111
0011
0001
0000
```

The value T[i][j] = 1 indicates that there is a path from vertex i to vertex j in the graph.
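The same computation can be expressed in a few lines of Python (an illustrative sketch using 0-based indices).

```python
def warshall(adj):
    n = len(adj)
    T = [row[:] for row in adj]                 # start from the adjacency matrix
    for k in range(n):                          # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                T[i][j] = T[i][j] or (T[i][k] and T[k][j])
    return T

adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
for row in warshall(adj):
    print(row)
# [0, 1, 1, 1]
# [0, 0, 1, 1]
# [0, 0, 0, 1]
# [0, 0, 0, 0]
```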

5. Apply PRIM’s algorithm for the following graph

Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.

Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two
edges from vertex B that are B to C with weight 10 and edge B to D with weight 4. Among the
edges, the edge BD has the minimum weight. So, add it to the MST.
Step 3 - Now, again choose the minimum-weight edge among all edges leaving the tree built so far. Here the candidate edges are DE and CD; select the edge DE and add it to the MST.

Step 4 - Now, select the edge CD, and add it to the MST.

Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a
cycle to the graph. So, choose the edge CA and add it to the MST.

So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of
the MST is given below -
Cost of MST = 4 + 2 + 1 + 3 = 10 units.
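Since the figure for this question is not reproduced here, the following is a generic Python sketch of Prim's algorithm using a priority queue; the `graph` variable is only a placeholder adjacency list, not the graph from the question.

```python
import heapq

def prim(graph, start):
    # graph: dict mapping vertex -> list of (neighbour, weight) pairs
    mst, visited = [], {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)          # cheapest edge leaving the current tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst, total

# Placeholder graph (illustrative only):
graph = {
    'A': [('B', 2), ('C', 3)],
    'B': [('A', 2), ('C', 1), ('D', 4)],
    'C': [('A', 3), ('B', 1), ('D', 5)],
    'D': [('B', 4), ('C', 5)],
}
print(prim(graph, 'B'))   # MST edges chosen and their total weight
```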

6. Define the classes P and NP and derive the relationships between them.
7. Using the sieve of Eratosthenes method, generate prime numbers between 2 and 50.
STEP-1: List the numbers from 2 to 50:
2 3 4 5 6 7 8 9 10
11 12 13 14 15 16 17 18 19 20
21 22 23 24 25 26 27 28 29 30
31 32 33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48 49 50
Mark all multiples of 2 that are greater than or equal to its square (2^2 = 4): 4, 6, 8, 10, ..., 50.

STEP-2: Move to the next unmarked number, 3, and mark all multiples of 3 that are greater than or equal to its square (3^2 = 9): 9, 15, 21, 27, 33, 39, 45 (the even multiples are already marked).

STEP-3: Move to the next unmarked number, 5, and mark all multiples of 5 that are greater than or equal to its square (5^2 = 25): 25, 35, 45.

STEP-4: The next unmarked number is 7; mark its multiples from 7^2 = 49 onwards, i.e., 49. The next unmarked number, 11, has a square (121) greater than 50, so the marking process stops.

So the prime numbers are the unmarked ones: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47.
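The same procedure can be sketched directly in Python (an illustrative implementation).

```python
def sieve_of_eratosthenes(limit):
    marked = [False] * (limit + 1)         # marked[i] == True means i is composite
    primes = []
    for p in range(2, limit + 1):
        if not marked[p]:
            primes.append(p)               # p is unmarked, hence prime
            for multiple in range(p * p, limit + 1, p):
                marked[multiple] = True    # mark multiples starting from p^2
    return primes

print(sieve_of_eratosthenes(50))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```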

8. Write the DFS traversal for the given graph G

STEP-1: Start at vertex A


Push A onto the stack and mark A as visited
Stack: [A]
Visited: [A]
Now explore the neighbours of A i.e., B and D

STEP-2: Push B and D onto the stack


Stack: [B, D]
Mark B as visited
Visited: [A, B]
Now explore the neighbours of B i.e., C and F

STEP-3: Stack: [D, C, F]


Now explore the neighbours of D i.e., F
Ignore it since it is already in the stack
Mark D as visited
Visited: [A, B, D]
Now explore the neighbours of C i.e., H, G and E

STEP-4: Push H, G and E onto the stack


Stack: [C, F, H, G, E]
Mark C as visited
Visited: [A, B, D, C]
Now explore the neighbours of F i.e., A
Since A is already visited ignore it.
Mark F as visited
Visited: [A, B, D, C, F]
STEP-5: Visit H
Stack: [H, G, E]
Visited: [A, B, D, C, F]
Now explore the neighbours of H i.e., A
Since A is already visited ignore it.
Mark H as visited
Visited: [A, B, D, C, F, H]

STEP-6: Visit G
Stack: [G, E]
Visited: [A, B, D, C, F, H]
Now explore the neighbours of G i.e., H and E
Since H is already visited ignore it.
Since E is already in the stack ignore it.
Mark G as visited
Visited: [A, B, D, C, F, H, G]

STEP-7: Visit E
Stack: [E]
Visited: [A, B, D, C, F, H, G, E]
Hence DFS traversal is finished.
DFS traversal A> B> D> C> F> H> G> E
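Because the figure for graph G is not reproduced here, the following is a generic recursive DFS sketch (illustrative); the adjacency list is only a placeholder loosely reconstructed from the trace, and a different neighbour ordering would produce a different visiting sequence.

```python
def dfs(graph, start):
    visited, order = set(), []

    def explore(v):
        visited.add(v)
        order.append(v)                    # record the order in which vertices are visited
        for u in graph.get(v, []):
            if u not in visited:
                explore(u)

    explore(start)
    return order

# Placeholder adjacency list (illustrative only):
graph = {
    'A': ['B', 'D'],
    'B': ['C', 'F'],
    'C': ['H', 'G', 'E'],
    'D': ['F'],
}
print(dfs(graph, 'A'))   # ['A', 'B', 'C', 'H', 'G', 'E', 'F', 'D']
```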

9. Apply Floyd’s algorithm to the following graph.


10. Using dynamic programming, solve the following Knapsack problem.
n=5, { w1, w2, w3, w4, w5 } = { 2, 1, 5, 2, 5 }
{ p1, p2, p3, p4, p5 } = { 20, 05, 15, 10, 12 }
W = maximum Knapsack capacity = 8.
11. Explain Branch-and-Bound technique comparing with Backtracking. What are the methods
involved in Branch and bound.
BRANCH AND BOUND vs BACKTRACKING:

1. Branch-and-Bound is a systematic, best-first search strategy used to find the optimal solution to a problem, whereas Backtracking is a systematic, depth-first search strategy used to find all possible solutions to a problem by exploring the entire solution space.

2. Branch-and-Bound explores the solution space by dividing it into smaller subproblems (branches) and then intelligently pruning branches that cannot lead to better solutions than the current best solution found so far, whereas Backtracking traverses the solution space recursively, trying out each potential solution and backtracking when a dead end or an invalid solution is encountered.

3. Branch-and-Bound aims to minimize the cost (or maximize the value) of the solution and to narrow down the search space quickly by eliminating unpromising branches, whereas Backtracking explores all possible solutions, even suboptimal ones, and continues searching until all possible paths have been explored.

4. Branch-and-Bound is suitable for optimization problems, where the goal is to find the best solution from a large set of possible solutions, whereas Backtracking is mainly used for problems where multiple solutions exist and we need to find all of them or one of them.
Methods involved in Branch and Bound:

Bounding function (Upper Bound): A bounding function estimates the maximum (for
maximization problems) or minimum (for minimization problems) possible value of the
optimal solution from a given node. It helps in determining if a node can be pruned
(discarded) or not.
Relaxation technique (Lower Bound): A relaxation technique is used to estimate the
minimum possible value of the optimal solution from a given node. It provides a lower
bound on the optimal solution.

12. Apply KRUSKAL’s algorithm for the following graph.


13. Explain in detail the mathematical analysis of algorithm in Checking the uniqueness in an
array of n elements
The mathematical analysis of an algorithm involves evaluating its performance in terms of
time complexity and space complexity. In the case of checking the uniqueness of an array
of n elements, we want to determine the time and space required by the algorithm as the
size of the input (n) grows. Let's go through the mathematical analysis step by step:

**Algorithm to Check Uniqueness in an Array**:


The problem is to determine whether all elements in an array of size n are unique or if
there are any duplicate elements.

Here's a simple algorithm to check the uniqueness of an array:

```python
def has_duplicates(arr):
    seen = set()                  # elements encountered so far
    for num in arr:
        if num in seen:           # num was seen before, so a duplicate exists
            return True
        seen.add(num)
    return False
```

**Mathematical Analysis**:

**Time Complexity**:

1. Initializing the set `seen` takes constant time, which we can represent as O(1).
2. The for loop iterates through the entire array of size n. For each element, the lookup in
the set `seen` takes constant time on average (O(1)).
3. If the element is not in the set, it is inserted, which also takes constant time on average
(O(1)).

Therefore, the overall time complexity of the algorithm is O(n) since the dominant factor
is the linear iteration through the array.

**Space Complexity**:

The space complexity of the algorithm is O(n) because we use a set to store unique
elements. In the worst case, when all elements are unique, the set will contain all n
elements.

14. With an example explain Strassen’s Matrix Multiplication


Strassen's matrix multiplication is an algorithm used to efficiently multiply two matrices. It
reduces the number of standard scalar multiplications required in the conventional matrix
multiplication, which leads to improved performance for large matrices. The algorithm uses
a recursive approach and is based on divide and conquer strategy.

**Strassen's Matrix Multiplication Algorithm**:

Given two square matrices A and B of size n x n, the goal is to compute their product C = A
* B.

The algorithm works as follows:

1. Divide both matrices A and B into four equally-sized submatrices each: A11, A12, A21,
A22, and B11, B12, B21, B22.

```
A11 | A12 B11 | B12
----|---- * ----|----
A21 | A22 B21 | B22
```

2. Compute seven products recursively:


```
P1 = A11 * (B12 - B22)
P2 = (A11 + A12) * B22
P3 = (A21 + A22) * B11
P4 = A22 * (B21 - B11)
P5 = (A11 + A22) * (B11 + B22)
P6 = (A12 - A22) * (B21 + B22)
P7 = (A11 - A21) * (B11 + B12)
```

3. Compute the four submatrices of the result matrix C:


```
C11 = P5 + P4 - P2 + P6
C12 = P1 + P2
C21 = P3 + P4
C22 = P5 + P1 - P3 - P7
```

4. Combine the four submatrices to form the final result matrix C.

**Example**:
Let's demonstrate Strassen's matrix multiplication algorithm with a simple example of 2x2
matrices.

Consider two matrices:

```
A = | 1  2 |
    | 3  4 |

B = | 5  6 |
    | 7  8 |
```

Step 1: Divide the matrices into submatrices (A11, A12, A21, A22, B11, B12, B21, B22):

```
For 2 x 2 matrices each submatrix is a single element:

A11 = 1    A12 = 2    B11 = 5    B12 = 6
A21 = 3    A22 = 4    B21 = 7    B22 = 8
```

Step 2: Compute seven products recursively:


```
P1 = A11 * (B12 - B22) = (1) * (6 - 8) = -2
P2 = (A11 + A12) * B22 = (1 + 2) * 8 = 24
P3 = (A21 + A22) * B11 = (3 + 4) * 5 = 35
P4 = A22 * (B21 - B11) = (4) * (7 - 5) = 8
P5 = (A11 + A22) * (B11 + B22) = (1 + 4) * (5 + 8) = 65
P6 = (A12 - A22) * (B21 + B22) = (2 - 4) * (7 + 8) = -30
P7 = (A11 - A21) * (B11 + B12) = (1 - 3) * (5 + 6) = -22
```

Step 3: Compute the four submatrices of the result matrix C:


```
C11 = P5 + P4 - P2 + P6 = 65 + 8 - 24 - 30 = 19
C12 = P1 + P2 = -2 + 24 = 22
C21 = P3 + P4 = 35 + 8 = 43
C22 = P5 + P1 - P3 - P7 = 65 - 2 - 35 + 22 = 50
```

Step 4: Combine the four submatrices to form the final result matrix C:
```
C = | 19  22 |
    | 43  50 |
```

So, the result of matrix multiplication C = A * B is:

```
C = | 19  22 |
    | 43  50 |
```

The final matrix C is the product of matrices A and B using Strassen's algorithm. The standard matrix multiplication would require 8 scalar multiplications to compute this result, whereas Strassen's algorithm uses only 7. Applied recursively to large matrices, this saving reduces the running time from O(n^3) to O(n^log2 7) ≈ O(n^2.81), which is why the method pays off for large inputs.
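The arithmetic above can be verified with a few lines of Python (a sketch limited to the 2 x 2 base case, using the same seven products).

```python
def strassen_2x2(A, B):
    a11, a12, a21, a22 = A[0][0], A[0][1], A[1][0], A[1][1]
    b11, b12, b21, b22 = B[0][0], B[0][1], B[1][0], B[1][1]
    p1 = a11 * (b12 - b22)
    p2 = (a11 + a12) * b22
    p3 = (a21 + a22) * b11
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a22) * (b11 + b22)
    p6 = (a12 - a22) * (b21 + b22)
    p7 = (a11 - a21) * (b11 + b12)
    # Combine the seven products into the four entries of C.
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p5 + p1 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```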

15. With an example explain the topological sorting (include both the method (a) DFS & (b)
Source removal)
Topological sorting is a linear ordering of the vertices of a directed acyclic graph (DAG) in
such a way that for every directed edge (u, v), vertex u comes before vertex v in the
ordering. Topological sorting is applicable only to DAGs, as cyclic graphs do not have a
valid topological order.

Let's go through both methods of topological sorting using an example graph:

**Example Graph**:
Consider the following directed acyclic graph (DAG):

```
1 --> 2 --> 4
| ^
v |
3 --> 5 ---+
```

**Method (a) DFS (Depth-First Search)**:


DFS-based topological sorting involves traversing the graph using Depth-First Search and
visiting each vertex recursively. The ordering of the vertices is based on the order of
finishing times of the vertices (time they are marked as visited).

**Step 1**: Start from any unvisited vertex and perform DFS on the graph. For each
vertex, after visiting all its adjacent vertices, mark it as visited and add it to the front of the
topological order list.

**Step 2**: Continue this process until all vertices are visited.

**DFS-based Topological Sorting**:


Start from vertex 1 and perform DFS:

1. Visit vertex 1, then visit vertex 2, then visit vertex 4.
2. Vertex 4 has no unvisited neighbours, so mark it as finished and add it to the front of the topological order list.
3. Go back to vertex 2; it has no other unvisited neighbours, so mark it as finished and add it to the front of the list.
4. Go back to vertex 1 and visit vertex 3, then visit vertex 5.
5. The only neighbour of vertex 5 (vertex 4) is already visited, so mark vertex 5 as finished and add it to the front of the list.
6. Go back to vertex 3; mark it as finished and add it to the front of the list.
7. Finally, mark vertex 1 as finished and add it to the front of the list.

The topological order obtained using DFS is: [1, 3, 5, 2, 4]

**Method (b) Source Removal**:


The source removal method iteratively removes vertices with no incoming edges (in-degree
0) from the graph until all vertices are removed. The order in which the vertices are
removed forms the topological ordering.

**Step 1**: Find all vertices with in-degree 0 and add them to the set of sources.

**Step 2**: While the set of sources is not empty:


a. Remove a vertex v from the set of sources and add it to the topological order list.
b. For each neighbor u of v, decrement the in-degree of u.
c. If the in-degree of u becomes 0, add u to the set of sources.

**Source Removal-based Topological Sorting**:


1. Only vertex 1 has in-degree 0, so the set of sources is initially {1}.
2. Remove vertex 1 from the set of sources and add it to the topological order list.
   - Decrement the in-degrees of vertices 2 and 3; both become 0, so they are added to the set of sources.
3. Remove vertex 3 from the set of sources and add it to the topological order list.
   - Decrement the in-degree of vertex 5; it becomes 0, so it is added to the set of sources.
4. Remove vertex 2 from the set of sources and add it to the topological order list.
   - Decrement the in-degree of vertex 4; it still has the incoming edge from vertex 5, so it is not yet added to the set of sources.
5. Remove vertex 5 from the set of sources and add it to the topological order list.
   - Decrement the in-degree of vertex 4; it becomes 0, so it is added to the set of sources.
6. Remove vertex 4 from the set of sources and add it to the topological order list.

The topological order obtained using source removal is: [1, 3, 2, 5, 4]

The DFS method produced [1, 3, 5, 2, 4] and the source removal method produced [1, 3, 2, 5, 4]; both are valid topological orders of the given DAG (a DAG may have several valid orderings). Topological sorting provides an ordering that satisfies the dependencies between vertices, and it is widely used in applications such as task scheduling, dependency resolution, and compiler optimization.
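The source-removal method (Kahn's algorithm) can also be sketched in Python for this example (an illustrative sketch). The edge list below is read from the figure above, and the tie-breaking order among available sources determines which of the valid orderings is produced.

```python
from collections import deque

def topological_sort(vertices, edges):
    indegree = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1

    sources = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while sources:
        u = sources.popleft()              # remove a vertex with in-degree 0
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1               # "delete" the outgoing edges of u
            if indegree[v] == 0:
                sources.append(v)
    return order

edges = [(1, 2), (1, 3), (2, 4), (3, 5), (5, 4)]
print(topological_sort([1, 2, 3, 4, 5], edges))   # [1, 2, 3, 5, 4]
```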

16. Apply Warshall’s algorithm to the following graph


17. Apply DIJIKSTRA’s algorithm for the following graph
18. Find the optimal solution for a Knapsack problem with M=40, N=4,
{w1, w2, w3, w4} = {20, 25, 10, 15} and {p1, p2, p3, p4} = {20, 40, 35, 45}
19. Explain N-Queens Problem. Solve 4-Queens problem and write the solution space tree.
The N-Queens Problem is a classic puzzle and computational problem that involves
placing N queens on an NxN chessboard in such a way that no two queens attack each other. In
chess, a queen can attack in any direction (horizontally, vertically, and diagonally). Therefore,
the challenge is to find a placement of N queens on the chessboard such that no two queens share
the same row, column, or diagonal.
For example, the 4-Queens Problem requires placing four queens on a 4x4 chessboard in a
configuration where no two queens threaten each other.

Solution Space Tree for the 4-Queens Problem:

A solution space tree represents the exploration of different possibilities and choices made
during the backtracking process to solve the problem. Each node in the tree represents a partial
configuration of the chessboard, and the edges represent the placement of the next queen. The
goal is to find a complete configuration (all queens placed) that satisfies the constraints (no two
queens attacking each other).

Let's represent the chessboard by numbers (1 to 4), where each number indicates the row in
which the queen is placed in that column.

```
ROOT
| \
Q1 Q2
| \
Q2 Q1
| \
... ...
```

Note: The above tree representation does not show the entire search space, as it can be extensive
and difficult to display entirely. Instead, it shows the initial branching at the first two levels of
the tree.

Explanation of the Solution Space Tree:

- The root node represents the initial configuration, where no queen is placed on the chessboard.
- At the first level, we place the first queen (Q1) in the first column (column 1).
- At the second level, we place the second queen (Q2) in the second column (column 2). This
creates the first possible partial configuration.
- From here, the algorithm would continue exploring all possible configurations by placing the
next queens in the subsequent columns, considering the constraints of the problem (no two
queens on the same row, column, or diagonal).

The backtracking algorithm will explore all possible combinations of queen placements on the
chessboard until a valid solution is found or all possibilities are exhausted.

20. Explain how to generate Optimal BST using Dynamic Programming.


Generating an Optimal Binary Search Tree (BST) using Dynamic Programming involves
finding the arrangement of keys in a BST that minimizes the expected search cost. In an
optimal BST, keys are arranged such that frequently searched keys are placed closer to the
root, reducing the average search time.
To achieve this, we use a Dynamic Programming approach to build the optimal BST
incrementally. The key idea is to compute the optimal BST for subtrees and then combine
them to form the final optimal BST.

Here are the steps to generate an Optimal BST using Dynamic Programming:

1. Input: We need a sorted list of keys and their corresponding probabilities (frequencies) of
being searched.

2. Define a DP Table: Create a 2D DP table, say `dp`, with one extra row and column (i.e., (n+2) x (n+2) entries, where n is the number of keys) so that the empty-subtree costs dp[i][i-1] = 0 and dp[j+1][j] = 0 are well defined. `dp[i][j]` will represent the cost of the optimal BST containing keys from the ith to the jth element of the sorted list.

3. Fill Base Cases: For each individual key (i.e., i == j), the cost of the optimal BST is
simply its own probability, i.e., `dp[i][i] = frequency[i]`.

4. Calculate Optimal BST for Subtrees: For each sub-array length l (l = 2 to n), fill the `dp`
table using the formula:
```
dp[i][j] = min over i <= k <= j of { dp[i][k-1] + dp[k+1][j] + sum(frequency[i..j]) }
```
where `k` is the key chosen as the root of the subtree, and sum(frequency[i..j]) is the sum of probabilities from i to j (inclusive).

This formula represents the cost of choosing k as the root. It includes the cost of the left
subtree (dp[i][k-1]), the cost of the right subtree (dp[k+1][j]), and the cost of searching the
current subtree's root (sum of probabilities from i to j).

5. Backtrack to Build the Optimal BST: To construct the optimal BST, you need to keep
track of the roots chosen at each step (k) and recursively build the left and right subtrees
until the entire tree is formed.

6. Final Result: The `dp[1][n]` will hold the cost of the optimal BST containing all keys
from the sorted list.

Here's some pseudo-code to illustrate the dynamic programming approach:

```python
def generate_optimal_bst(keys, frequency):
    n = len(keys)
    # dp[i][j] = minimum expected search cost of an optimal BST built from
    # keys i..j (1-based); the extra row/column handle the empty subtrees
    # dp[i][i-1] = 0 and dp[j+1][j] = 0.
    dp = [[0.0 for _ in range(n + 2)] for _ in range(n + 2)]

    for i in range(1, n + 1):
        dp[i][i] = frequency[i - 1]              # a single key costs its own probability

    for l in range(2, n + 1):                    # subtree sizes 2..n
        for i in range(1, n - l + 2):
            j = i + l - 1
            dp[i][j] = float('inf')
            freq_sum = sum(frequency[i - 1:j])   # total probability of keys i..j
            for k in range(i, j + 1):            # try every key k as the root
                cost = dp[i][k - 1] + dp[k + 1][j] + freq_sum
                dp[i][j] = min(dp[i][j], cost)
    return dp[1][n]

# Example usage:
keys = [1, 2, 3, 4]
frequency = [0.1, 0.4, 0.3, 0.2]
print(round(generate_optimal_bst(keys, frequency), 2))   # 1.8 (minimum expected search cost)
```

The above algorithm finds the minimum expected search cost of an optimal BST containing
the given keys and their probabilities. To construct the actual optimal BST, you would need
to perform additional steps to keep track of the root choices and build the tree recursively.

21. Explain the Mathematical Analysis for finding the duplicate elements in an Array.

```python
def has_duplicates(arr):
    n = len(arr)
    for i in range(n):                    # pick each element in turn
        for j in range(i + 1, n):         # compare it with every later element
            if arr[i] == arr[j]:
                return True
    return False
```

**Mathematical Analysis**:

The brute-force code above compares every pair of elements, so in the worst case (no duplicates) it performs (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, giving O(N^2) time and O(1) extra space. A more efficient approach uses a hash set, exactly as in the uniqueness-checking algorithm of question 13; its analysis is as follows.

- **Step 1**: Initializing the hash set takes O(1) time.


- **Step 2**: Iterating through the array of size N takes O(N) time.
- **Step 3**: For each element in the array, checking if it exists in the hash set takes O(1)
time on average.
- **Step 4**: If an element is not in the hash set, we insert it, which also takes O(1) time on
average.

As a result, the overall time complexity of the efficient approach is O(N) since the
dominant factor is the linear iteration through the array.

On the other hand, the naive approach has a time complexity of O(N^2), which is
significantly slower for larger arrays.

22. What are the different methods of obtaining the Lower Bound of an algorithm.
Obtaining a lower bound for an algorithm involves determining the minimum number of
operations or comparisons required to solve a particular problem optimally. It provides a
baseline to assess the efficiency of different algorithms for the same problem. There are
several methods to obtain the lower bound of an algorithm:

1. Adversary Argument: In this method, an adversary is assumed to be controlling the input


data to force the algorithm to perform a minimum number of operations. The adversary
strategically chooses the worst-case inputs for the algorithm, thereby setting the lower
bound.
2. Decision Tree Method: This method is commonly used to find the lower bound for
sorting and searching algorithms. A decision tree is constructed, where each internal node
represents a comparison, and each leaf node represents a possible outcome. The height of
the decision tree gives the number of comparisons required for the algorithm, and this
serves as the lower bound.

3. Information Theory: Lower bounds can be obtained using concepts from information
theory, such as entropy. Shannon's entropy measures the amount of uncertainty or
randomness in a probability distribution. The minimum number of bits required to represent
the information lower bounds the algorithm's performance.

4. Reduction from Another Problem: Sometimes, the lower bound of a problem can be
obtained by reducing it to another well-studied problem with a known lower bound. If we
can prove that the problem is at least as hard as the known problem, then the known lower
bound applies to the original problem as well.

5. Pigeonhole Principle: This method involves using the pigeonhole principle to show that a
certain number of distinct inputs will necessarily lead to the same output. This establishes a
lower bound on the number of distinct outputs the algorithm must produce.

6. Omega Notation: The lower bound of an algorithm can be expressed using the Omega
notation (Ω). If a problem requires at least Ω(f(n)) time or space to be solved, it represents a
lower bound for the problem.

7. Best Possible Case: In some cases, the best possible case of an algorithm represents the
lower bound. If the best case scenario is Ω(f(n)), it means the algorithm cannot perform
better than that for any input.

8. Lower Bound of Subproblems: For algorithms that use divide and conquer or dynamic
programming, analyzing the lower bound of subproblems can sometimes provide insights
into the overall lower bound of the algorithm.

23. Explain in detail the Asymptotic notations used to describe the running time of an algorithm.
Asymptotic notations are mathematical notations used to describe an algorithm's running time by characterising its behaviour as the input size grows.
This behaviour is also referred to as the algorithm's growth rate.
There are mainly three asymptotic notations:
Big-O Notation (O-notation)
Omega Notation (Ω-notation)
Theta Notation (Θ-notation)

1. Theta Notation (Θ-Notation):


Theta notation encloses the function from above and below. Since it represents the upper
and the lower bound of the running time of an algorithm, it is used for analyzing the
average-case complexity of an algorithm.
EX: f(n)=2n+5
Theta notation defines tight bound curve. So, let c1=2 and c2=3
2(n)<=2n+5<=3n
If n=1, 2(1)<=2(1)+5<=3(1) => 2<=7<=3 -> False
If n=2, 2(2)<=2(2)+5<=3(2) => 4<=9<=6 -> False
If n=3, 2(3)<=2(3)+5<=3(3) => 6<=11<=9 -> False
If n=4, 2(4)<=2(4)+5<=3(4) => 8<=13<=12 -> False
If n=5, 2(5)<=2(5)+5<=3(5) => 10<=15<=15 -> True
The condition is satisfied for all n >= n0 = 5.
Therefore, c1.g(n) <= f(n) <= c2.g(n), i.e., f(n) = Θ(n).

2. Big-O Notation (O-notation):


Big-O notation represents the upper bound of the running time of an algorithm. Therefore,
it gives the worst-case complexity of an algorithm.

EX: f(n)=2n+5
Big O defines upper bound curve. So, let c=3
2n+5<=3n
If n=1, 2(1)+5<=3(1) => 7<=3 -> False
If n=2, 2(2)+5<=3(2) => 9<=6 -> False
If n=3, 2(3)+5<=3(3) => 11<=9 -> False
If n=4, 2(4)+5<=3(4) => 13<=12 -> False
If n=5, 2(5)+5<=3(5) => 15<=15 -> True
The condition is satisfied for all n >= n0 = 5.
Therefore, f(n) <= c.g(n), i.e., f(n) = O(n).

3. Omega Notation (Ω-Notation):


Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.

EX: f(n)=2n+5
Omega Notation defines lower bound curve. So, let c=2
2n+5>=2n
If n=1, 2(1)+5>=2(1) => 7>=2 -> True
The condition is satisfied for all n >= n0 = 1.
Therefore, f(n) >= c.g(n), i.e., f(n) = Ω(n).
24. Write the algorithm for merge sort and trace the data 38,27,43,3,9,82,10
Merge sort keeps dividing the list into (roughly) equal halves until it can be divided no further. By definition, a list with only one element is already sorted. Merge sort then combines the smaller sorted lists, keeping the combined list sorted as well.

Step 1 − if there is only one element in the list, consider it already sorted and return.

Step 2 − otherwise, divide the list recursively into two halves until it can be divided no further.

Step 3 − merge the smaller sorted lists into a new list in sorted order.
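A minimal Python sketch of merge sort (illustrative, splitting at the middle index len(arr)//2) applied to the given data:

```python
def merge_sort(arr):
    if len(arr) <= 1:                      # Step 1: a single element is already sorted
        return arr
    mid = len(arr) // 2                    # Step 2: divide the list into two halves
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)              # Step 3: merge the sorted halves

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])                # append whatever remains in either half
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```

With this splitting rule the trace is: [38, 27, 43, 3, 9, 82, 10] is divided into [38, 27, 43] and [3, 9, 82, 10]; these halves are divided down to single elements and merged back as [27, 38, 43] and [3, 9, 10, 82]; the final merge produces [3, 9, 10, 27, 38, 43, 82].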

25. Write control abstraction for Backtracking. Draw the state space tree for the graph with n=3
vertices and m=3 colors (Red, Blue, Green)
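A general control abstraction for backtracking is a recursive procedure that extends a partial solution one component at a time and undoes a choice as soon as it cannot lead to a complete solution. The Python sketch below is illustrative (the function and parameter names are chosen for this sketch, not taken from a prescribed source); it is then specialised to the m-colouring instance used in this question (edges 1-2 and 1-3, written 0-based as (0, 1) and (0, 2)).

```python
def backtrack(state, k, n, candidates, is_valid, report):
    # Generic control abstraction: state[0..k-1] is the current partial solution.
    if k == n:
        report(list(state))                    # a complete, feasible solution
        return
    for value in candidates(k):
        if is_valid(state, k, value):          # feasibility (bounding) check
            state[k] = value
            backtrack(state, k + 1, n, candidates, is_valid, report)
            state[k] = None                    # undo the choice (backtrack)

# Specialisation to graph colouring: vertices 1..3, edges 1-2 and 1-3.
edges = [(0, 1), (0, 2)]
colors = ['Red', 'Blue', 'Green']

def valid_coloring(state, k, color):
    # A colour is valid if no already-coloured neighbour of vertex k uses it.
    return all(not ((u == k and state[v] == color) or (v == k and state[u] == color))
               for u, v in edges)

backtrack([None] * 3, 0, 3, lambda k: colors, valid_coloring, print)
```

With colours tried in the order Red, Blue, Green, the first complete solution printed is ['Red', 'Blue', 'Blue'], i.e., 1(R), 2(B), 3(B), matching the walkthrough below (the sketch then goes on to print every other valid colouring as well).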

Given graph with n=3 vertices and m=3 colors (Red, Blue, Green):
1
/ \
2 3

Step 1: Initial Configuration


- Initialize the graph and available colors: 1(R/B/G), 2(R/B/G), 3(R/B/G).
- Start with an empty coloring status for each vertex.

Step 2: Choose Vertex


- Choose vertex 1 since it's the first uncolored vertex.

Step 3: Choose Color for Vertex 1


- Try coloring vertex 1 with Red (R).
- Check the valid color condition: No neighbors yet, so it's valid.

Step 4: Move to Vertex 2


- Choose vertex 2 since it's the next uncolored vertex.

Step 5: Choose Color for Vertex 2


- Try coloring vertex 2 with Red (R): its neighbour, vertex 1, is already Red, so this fails. Try Blue (B) next.
- Check the valid color condition: vertex 2's only neighbour, vertex 1, is Red, so Blue is valid.

Step 6: Move to Vertex 3


- Choose vertex 3 since it's the next uncolored vertex.

Step 7: Choose Color for Vertex 3


- Try coloring vertex 3 with Red (R): its neighbour, vertex 1, is Red, so this fails. Try Blue (B) next.
- Check the valid color condition: vertex 3's only neighbour, vertex 1, is Red, so Blue is valid.

Step 8: Solution Found


- All vertices are colored without conflicts: 1(R), 2(B), 3(B).
- Valid coloring solution found!

If a valid solution wasn't found, the backtracking process would start:


- If vertex 3 had to be colored differently (e.g., Red or Green) due to conflicts, we
would backtrack to vertex 2 and try a different color for it.
- If there's no valid color for vertex 2, we would backtrack further to vertex 1 and
change its color as well.

The process would continue like this, exploring different color combinations and
backtracking when conflicts arise, until either a valid solution is found or all
possibilities are exhausted. The tree in the previous response visualizes this process,
showing the branching of possibilities and where conflicts occur.
                1 = R
               /     \
        2 = R (X)    2 = B
                    /      \
             3 = R (X)     3 = B  (solution)

In this (partial) state space tree:
- Each node shows the colour assigned to a vertex (1, 2, 3); the branches for colouring vertex 1 Blue or Green are not drawn because a solution is found before they are explored.
- Each edge represents the selection of a colour for the next vertex, with colours tried in the order R, B, G.
- The 'X' indicates that a conflict with an already coloured neighbour occurred, so that branch is pruned and the algorithm backtracks.
