ADA SolBank Final
11. State any two differences between a greedy algorithm and dynamic programming
12. Define Branch-and-bound technique.
Branch and bound is one of the techniques used for problem solving. It is similar to
backtracking since it also uses a state space tree.
It is used for solving optimization problems, i.e., maximization and minimization problems.
26. Mention the best case and worst case time complexities of Linear Search Algorithm
Best Case Time Complexity: The best case scenario occurs when the target element is found at
the very beginning of the list. In this case, the linear search algorithm would require only one
comparison to find the target. Therefore, the best case time complexity is O(1), which denotes
constant time.
Worst Case Time Complexity: The worst case scenario happens when the target element is either
not present in the list or is located at the very end. In this case, the linear search algorithm would
need to compare the target element with each element in the list, resulting in n comparisons,
where n is the number of elements in the list. Therefore, the worst case time complexity is O(n),
which denotes linear time.
30. Write the time complexity of (A) Merge sort (b) Binary search
Time Complexity of Merge Sort: O(n log n)
Time Complexity of Binary Search: O(log n)
| |Q| | |
| | | |Q|
|Q| | | |
| | |Q| |
Solution 2:
- Queen 1: Placed at (1, 3) on the chessboard.
- Queen 2: Placed at (2, 1) on the chessboard.
- Queen 3: Placed at (3, 4) on the chessboard.
- Queen 4: Placed at (4, 2) on the chessboard.
| | |Q| |
|Q| | | |
| | | |Q|
| |Q| | |
In both solutions, the queens are placed in such a way that they do not threaten each other. No
two queens share the same row, column, or diagonal, satisfying the requirements of the 4-Queen
Problem. It's important to note that these are just two possible solutions, and there can be
additional valid arrangements of the queens on the chessboard.
(Decision tree for comparison-based sorting of the three elements 5, 23, 1: each internal node is a comparison between two elements, and each of the 3! = 6 leaves is one possible sorted order.)
In the worst case, the decision tree must have enough leaf nodes to represent all possible
permutations of the input elements. Since there are n! (n factorial) possible permutations for a
list of n elements, the height of the decision tree must be at least log(n!) = Ω(n log n).
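One standard way to see this bound is to note that n! contains at least n/2 factors that are each at least n/2, so

\[
\log_2(n!) \;\ge\; \log_2\!\left(\left(\frac{n}{2}\right)^{n/2}\right) \;=\; \frac{n}{2}\log_2\frac{n}{2} \;\in\; \Omega(n \log n).
\]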
39. What is Decrease by a Constant? Give an example.
In this variation, the size of an instance is reduced by the same constant on each iteration or
recursive step of the algorithm. Typically, this constant is equal to one, although other
constant-size reductions can occur. This variation is used in many algorithms, such as:
Graph search algorithms: DFS, BFS
Topological sorting
Algorithms for generating permutations, or subsets
Insertion sort.
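For instance, insertion sort is a decrease-by-one algorithm: sorting n elements reduces to sorting the first n-1 elements and then inserting the nth. A minimal Python sketch:

```python
def insertion_sort(a):
    # Decrease-by-one: a[0..i-1] is already sorted; insert a[i] into it.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]      # shift larger elements one position to the right
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([31, 12, 25, 8, 32]))  # [8, 12, 25, 31, 32]
```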
Algorithm:
STEP-1: Divide all the coins into 3 equal groups. If the total number of coins is not divisible by 3,
place the extra coin(s) aside and check them later.
STEP-2: Weigh the first two groups against each other using the balance scale.
STEP-3: There are two possibilities:
a) The scale will be balanced. Hence, an assumption is made that the fake coin might be
in the 3rd group.
b) The scale is not balanced, that means that the fake coin is in the lighter group.
STEP-4: Repeat steps 1, 2 and 3 until the lighter group is identified.
STEP-5: Stop the process.
EX: (C1, C2, C3, C4, C5, C6, C7, C8, C9) Let C8 be the fake coin.
STEP-1: Group-1: (C1, C2, C3)
Group-2: (C4, C5, C6)
Group-3: (C7, C8, C9)
STEP-2: Weigh Group-1 against Group-2. Assume that they weigh the same on the balance scale.
This implies that the fake coin is in Group-3
STEP-3: Divide Group-3 into 3 sub-groups:
Group-3(1): C(7)
Group-3(2): C(8)
Group-3(3): C(9)
STEP-4: Weigh sub-groups 3(1) and 3(2); the scale shows that sub-group 3(2) is lighter than
sub-group 3(1). Hence C8 is the fake coin.
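A small recursive sketch of this idea (a decrease-by-a-constant-factor algorithm); for simplicity it assumes the number of coins in each call is a power of 3 and that the fake coin is lighter:

```python
def find_fake(weights, lo, hi):
    # Exactly one coin in weights[lo:hi] is lighter than the rest.
    if hi - lo == 1:
        return lo                                   # only one candidate left
    third = (hi - lo) // 3
    g1 = sum(weights[lo:lo + third])                # "weigh" group 1
    g2 = sum(weights[lo + third:lo + 2 * third])    # against group 2
    if g1 == g2:                                    # fake coin is in group 3
        return find_fake(weights, lo + 2 * third, hi)
    if g1 < g2:                                     # fake coin is in the lighter group
        return find_fake(weights, lo, lo + third)
    return find_fake(weights, lo + third, lo + 2 * third)

coins = [10] * 9
coins[7] = 9                      # C8 (index 7) is the lighter fake coin
print(find_fake(coins, 0, 9))     # 7
```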
OUTPUT:
Enter a :2
Enter n :3
Brute Force method a^n : 8
Divide and Conquer a^n : 8
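A minimal sketch of two routines that would produce this output (the function names are illustrative):

```python
def power_brute(a, n):
    result = 1
    for _ in range(n):            # n multiplications in total
        result *= a
    return result

def power_dc(a, n):
    # Divide and conquer: a^n = a^(n//2) * a^(n//2), times a if n is odd.
    if n == 0:
        return 1
    half = power_dc(a, n // 2)
    return half * half * (a if n % 2 else 1)

a, n = 2, 3
print("Brute Force method a^n :", power_brute(a, n))   # 8
print("Divide and Conquer a^n :", power_dc(a, n))      # 8
```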
4. Write DIJKSTRA’s algorithm to find the shortest path from a given vertex to all other vertices
in a graph
Algorithm Dijkstra (V, C, D, n)
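The remaining pseudocode is abbreviated above; a minimal Python sketch of Dijkstra's algorithm using an adjacency list and a priority queue (the example graph is illustrative):

```python
import heapq

def dijkstra(graph, source):
    # graph: {vertex: [(neighbour, weight), ...]}
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                          # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:               # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'A': [('B', 3), ('C', 1)], 'B': [('D', 2)], 'C': [('B', 1), ('D', 5)], 'D': []}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 2, 'C': 1, 'D': 4}
```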
2. Individual Asset States: States representing the inclusion of a single asset from the given set:
{11}, {13}, {24}, {7}.
3. Combined Asset States: States representing the inclusion of a combination of assets from the
given set: {11, 13}, {11, 24}, {11, 7}, {13, 24}, {13, 7}, {24, 7}.
4. Complete Asset State: A state representing the inclusion of all assets from the given set: {11,
13, 24, 7}.
5. Goal States: States where the sum of the assets in the set equals the target value M=31. Possible
goal states: {11, 13, 7} and {7, 24}.
The state space encompasses various states, ranging from no assets selected to all assets
included, and includes the goal states where the sum of assets equals the target value. Each state
represents a specific combination of assets from the given set.
7. With a neat diagram, discuss the sequence of steps in designing and analysing an algorithm.
8. Write a program to solve the string matching problem using KMP algorithm
def build_prefix_table(pattern):
    prefix_table = [0] * len(pattern)
    length = 0  # Length of the previous longest prefix suffix
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            prefix_table[i] = length
            i += 1
        elif length != 0:
            length = prefix_table[length - 1]  # fall back to a shorter prefix
        else:
            prefix_table[i] = 0
            i += 1
    return prefix_table

def kmp_search(text, pattern):
    n, m = len(text), len(pattern)
    prefix_table = build_prefix_table(pattern)
    matches = []
    i = j = 0  # i indexes the text, j indexes the pattern
    while i < n:
        if pattern[j] == text[i]:
            i += 1
            j += 1
            if j == m:
                matches.append(i - j)
                j = prefix_table[j - 1]
        else:
            if j != 0:
                j = prefix_table[j - 1]
            else:
                i += 1
    return matches
# Example usage:
text = "ABCABCDABABCDABCDABDE"
pattern = "ABCDABD"
matches = kmp_search(text, pattern)
if len(matches) > 0:
print("Pattern found at positions:")
print(matches)
else:
print("Pattern not found in the text.")
Step 1:
First, we need to do preprocessing. We do this by creating two shift tables: the bad-character shift
table and the good-suffix shift table. These tables are built from the given pattern and the
alphabet used in both the pattern and the text.
Step 2:
We start the search by aligning the pattern with the beginning of the text. We then enter a loop
where we keep on comparing the characters in the pattern with the corresponding characters in
the text. We start the comparison from the last character of the pattern and move towards the
beginning of the pattern.
a. If all the characters in the pattern match with the corresponding characters in the text, we have
found a match and the search stops.
b. If a mismatch occurs after matching k characters from the right, we consult our shift tables to
decide how much to shift the pattern to the right.
(i) If k = 0, i.e., the mismatch occurs at the last character of the pattern, we look up the
mismatched text character in the bad-character shift table and shift the pattern to the right by
the amount indicated there.
(ii) If k > 0, i.e., there has been a partial match, we also look up the shift value from the good-
suffix shift table. The pattern is then shifted to the right by the larger of the two shift values
(from the bad-character shift table and the good-suffix shift table), or by 1 if no larger shift is
available.
Step 3:
The loop in step 2 is repeated until either a match is found, or the pattern has moved past the end
of the text, indicating no match exists. If a match is found, the algorithm returns the position of
the match. If no match is found, the algorithm returns -1.
10. Write the Warshall’s algorithm to compute the transitive closure of a graph.
Algorithm: Warshall's (C)
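The pseudocode body is abbreviated above; a minimal Python sketch of the same idea:

```python
def warshall(adj):
    n = len(adj)
    t = [row[:] for row in adj]     # start from the adjacency matrix
    for k in range(n):              # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t

adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
for row in warshall(adj):
    print(row)                      # transitive closure, row by row
```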
11. Explain decision trees for searching a sorted array with an example.
A decision tree is a data structure used to represent a sequence of decisions and their potential
outcomes. It's often used in various algorithms, including searching in a sorted array. A decision
tree breaks down the search process into a series of binary decisions, leading to the final
outcome.
Let's explain decision trees for searching a sorted array with an example:
Suppose you have a sorted array of integers: [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]. You want to
search for the value 23 within this array using a decision tree approach.
```
16
/ \
/ \
value < 16 value > 16
```
3. Move to the middle element of the right half of the array: 38.
4. Is 38 equal to 23? No.
- If 23 is greater than 38, then the value must be in the right sub-array.
- If 23 is less than 38, then the value must be in the left sub-array.
```
16
/ \
/ \
value < 16 38
/ \
/ \
value < 38 value > 38
```
```
        16
       /  \
      /    \
value < 16   38
            /  \
           /    \
          23     value > 38
```
This decision tree illustrates the process of searching for the value 23 in the sorted array using
binary decisions at each step. The decision tree helps visualize the binary search algorithm,
which is an efficient way to search in a sorted array by repeatedly halving the search space.
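A minimal binary search sketch corresponding to the decision tree above:

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid               # found: return its index
        elif target < arr[mid]:
            high = mid - 1           # continue in the left half
        else:
            low = mid + 1            # continue in the right half
    return -1                        # target is not present

arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(arr, 23))        # 5
```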
12. With an example, explain Step counts w.r.t to computing the time efficiency of algorithm.
Step counts are a way to analyze and compute the time efficiency of an algorithm by counting
the number of elementary operations or steps performed during its execution. By determining
the number of steps required for an algorithm, we can estimate its time complexity and make
comparisons between different algorithms.
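Consider, for instance, a simple loop that sums the elements of an array (an illustrative algorithm):

```python
def sum_array(arr):
    total = 0            # 1 step (initialization)
    for x in arr:        # the loop body runs n times
        total += x       # 1 step per iteration -> n steps
    return total         # 1 step
```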
In this example, we can count the number of steps performed in the algorithm:
In this case, the step count is directly proportional to the size of the input array, denoted by n.
This indicates that the time complexity of the algorithm is linear, or O(n), as the number of
steps grows linearly with the input size.
14. Write a program to solve towers of Hanoi problem for different number of disks.
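A minimal recursive sketch (one common way to write it):

```python
def hanoi(n, source, target, auxiliary):
    # Move n disks from the source peg to the target peg.
    if n == 0:
        return
    hanoi(n - 1, source, auxiliary, target)   # move n-1 disks out of the way
    print(f"Move disk {n} from {source} to {target}")
    hanoi(n - 1, auxiliary, target, source)   # move them onto the target

n = int(input("Enter number of disks: "))
hanoi(n, 'A', 'C', 'B')
```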
From the given graph, since the origin is already mentioned, the solution must always start from
that node. Among the edges leading from A, A → B has the shortest distance.
Then, B → C is the only (and therefore shortest) edge from B, so it is included in the output graph.
There is only one edge from C, namely C → D, so it is added to the output graph.
There are two outward edges from D. Even though D → B has a lower distance than D → E, B has
already been visited, and adding D → B would form a cycle. Therefore, D → E is added to the
output graph.
There is only one edge from E, that is E → F. Therefore, it is added to the output graph.
Again, even though F → C has a lower distance than F → A, C has already been visited and adding
F → C would form a cycle, so F → A is added to the output graph.
The cost of the path could be lower if it started from a different node, but the question asks only
for the path starting at the given origin.
Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.
Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with
the swap, insertion sort also compares 25 with all the elements already in the sorted sub-array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that
are 31 and 8.
Both 31 and 8 are not sorted. So, swap them.
Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31
and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
EX: A DAG with vertices A, B, C, D and E.
T = [A]
T = [A, C]
Decrease the indegree of C’s neighbouring nodes by 1
INDEG (B)=1-1=0
INDEG (D)=1-1=0
Since all the nodes in the DAG have been visited and the queue is empty, stop the process.
Therefore, the topological sort: A->C->B->D->E
EX: f(n)=2n+5
Theta notation defines a tight bound. So, let c1=2 and c2=3
2(n)<=2n+5<=3n
If n=1, 2(1)<=2(1)+5<=3(1) => 2<=7<=3 -> False
If n=2, 2(2)<=2(2)+5<=3(2) => 4<=9<=6 -> False
If n=3, 2(3)<=2(3)+5<=3(3) => 6<=11<=9 -> False
If n=4, 2(4)<=2(4)+5<=3(4) => 8<=13<=12 -> False
If n=5, 2(5)<=2(5)+5<=3(5) => 10<=15<=15 -> True
Condition is satisfied.
Therefore, c1.g(n)<= f(n)<=c2.g(n)
EX: f(n)=2n+5
Big O notation defines an upper bound. So, let c=3
2n+5<=3n
If n=1, 2(1)+5<=3(1) => 7<=3 -> False
If n=2, 2(2)+5<=3(2) => 9<=6 -> False
If n=3, 2(3)+5<=3(3) => 11<=9 -> False
If n=4, 2(4)+5<=3(4) => 13<=12 -> False
If n=5, 2(5)+5<=3(5) => 15<=15 -> True
Condition is satisfied.
Therefore, f(n)<=c.g(n)
EX: f(n)=2n+5
Omega notation defines a lower bound. So, let c=2
2n+5>=2n
If n=1, 2(1)+5>=2(1) => 7>=2 -> True
Condition is satisfied.
Therefore, f(n)>=c.g(n)
21. Write the advantages and disadvantages of divide and conquer technique
Advantages of Divide and Conquer Algorithm:
Difficult problems can be broken down and solved more easily.
It divides the entire problem into subproblems, so the subproblems can be solved in parallel on
multiple processors.
It uses cache memory efficiently, since the small subproblems fit in the cache.
It often reduces the time complexity of the problem.
Solving difficult problems: Divide and conquer technique is a tool for solving difficult
problems conceptually. e.g. Tower of Hanoi puzzle.
Algorithm efficiency: The divide-and-conquer paradigm often helps in the discovery of efficient
algorithms.
Step 1 − if it is only one element in the list, consider it already sorted, so return.
Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into a new list in sorted order.
Example
In the following example, we have shown Merge-Sort algorithm step by step. First, every
iteration array is divided into two sub-arrays, until the sub-array contains only one element.
When these sub-arrays cannot be divided further, then merge operations are performed.
Here are some commonly encountered orders of growth, listed from the slowest-growing (best
performance) to the fastest-growing (worst performance):
1. O(1) - The algorithm's runtime does not depend on the input size. It executes a constant
number of operations, regardless of the input.
2. O(log n) - The algorithm's runtime grows logarithmically with the input size. Each step
reduces the problem size by a constant fraction, resulting in efficient performance for large
inputs.
3. O(n) - The algorithm's runtime grows linearly with the input size. Each input element is
processed exactly once, leading to a proportional increase in runtime.
4. O(n log n) - The algorithm's runtime grows in proportion to n multiplied by the logarithm of
n. It arises in efficient sorting algorithms like Merge Sort and Quick Sort.
5. O(n^2) - The algorithm's runtime grows quadratically with the input size. It commonly occurs
in nested loops, where each element needs to be compared with every other element.
6. O(2^n) - The algorithm's runtime grows exponentially with the input size. It is often
associated with brute-force algorithms that explore all possible combinations, making it
inefficient for larger inputs.
7. O(n!) - The algorithm's runtime grows factorially with the input size. It arises in algorithms
that involve generating permutations or combinations.
25. Explain what are the basic steps that are to be followed to analyze recursive and non-recursive
algorithm.
In analysing the efficiency of any recursive algorithm, there are some basic steps to be
followed:
1. Deciding the input parameters size.
2. Identifying the basic operations required.
3. Finding the reasons if the basic operation is to be executed more than once.
4. Setting up a recurrence relation, with an appropriate initial condition, for expressing the
number of times the basic operation is executed.
5. Solving the recurrence relation to find the operation count, or at least establishing its order of growth.
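For example, for computing n! recursively, the basic operation is the multiplication, and the recurrence for the number of multiplications is the standard textbook one:

\[
M(n) = M(n-1) + 1, \qquad M(0) = 0 \;\Rightarrow\; M(n) = n \in \Theta(n).
\]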
In analysing the efficiency of any non-recursive algorithm, there are some basic steps to be
followed:
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size
of an input. If it also depends on some additional property, the worst-case, average-case, and,
if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula
for the count or, at the very least, establish its order of growth.
Here we have to place 4 queens say Q1, Q2, Q3, Q4 on the 4 x 4 chessboard such that no 2
queens attack each other.
Let’s suppose we place our first queen Q1 at position (1, 1); now Q2 cannot be placed in row 1
(because the two queens would conflict).
So for Q2 we have to consider row 2. In row 2 we can place it in column 3, i.e., at (2, 3), but
then there is no safe position left for placing Q3 in row 3.
So we backtrack one step and place Q2 at (2, 4); then we find the position for placing Q3 is (3,
2), but with this, no option is left for placing Q4.
Then we have to backtrack all the way to Q1 and move it to (1, 2) instead of (1, 1); after that all
the other queens can be placed safely by moving Q2 to (2, 4), Q3 to (3, 1), and Q4 to (4, 3).
Hence we get the solution (2, 4, 1, 3); this is one possible solution for the 4-Queen
Problem. For another solution, we would have to backtrack through all possible partial solutions.
3. Reason: Each element of the product requires a row of one matrix to be multiplied, element
by element, with a column of the other. This needs three nested for loops: one over the rows of
the first matrix, one over the columns of the second matrix, and one over the inner dimension
that accumulates the products into the (pre-initialized) output matrix.
Therefore, T(n) = n^3
The time complexity of multiplying two matrices is O(n^3)
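A minimal sketch of the three nested loops (brute-force matrix multiplication):

```python
def matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]       # initialize the output matrix
    for i in range(n):                    # rows of A
        for j in range(n):                # columns of B
            for k in range(n):            # inner dimension
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matrix_multiply(A, B))   # [[19, 22], [43, 50]]
```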
2. Write the Lomuto partitioning algorithm. With a neat diagram explain its working.
The Lomuto Partitioning algorithm is a partitioning technique used in the QuickSort
algorithm to divide an array into two parts, such that all elements less than or equal to the
pivot are on the left side, and all elements greater than the pivot are on the right side. The
Lomuto partitioning algorithm is easier to understand and implement compared to other
partitioning methods.
Here's the Lomuto Partitioning algorithm in Python:
```python
def lomuto_partition(arr, low, high):
    pivot = arr[high]          # choose the last element as the pivot
    i = low - 1                # boundary of the "<= pivot" region
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # put the pivot in place
    return i + 1
```
Let's understand the working of the Lomuto partitioning algorithm with the help of a step-
by-step diagram:
Step 1: Choose a pivot (usually the last element of the array). In this case, the pivot is 5
(last element).
```
[8, 4, 6, 2, 7, 3, 1, (5)]
```
Step 2: Initialize the variables `i` and `j`. `i` will keep track of the index where the elements
less than or equal to the pivot will be placed. `j` will be used to iterate through the array.
```
i j
↓ ↓
[8, 4, 6, 2, 7, 3, 1, (5)]
```
Step 3: Compare the element at index `j` with the pivot (5). If the element is less than or
equal to the pivot, increment `i` and then swap it with the element at index `i`.
```
i j
↓ ↓
[4, 8, 6, 2, 7, 3, 1, (5)]
```
Step 4: Repeat the process until `j` reaches the second-to-last element. Each element less than or
equal to the pivot is swapped into the left region and `i` is advanced.
```
[4, 2, 3, 1, 7, 6, 8, (5)]    (i now points at 1, the last element <= pivot)
```
Step 5: At the end of the loop, place the pivot (5) in its correct position by swapping it with
the element at index `i+1`.
```
[4, 2, 3, 1, (5), 6, 8, 7]
```
Step 6: The pivot is now in its correct position (index 4). All elements to the left of the pivot
are less than or equal to it, and all elements to the right are greater than it.
Step 7: Return the index of the pivot (i + 1), which is 4. The array is now partitioned, and
we can recursively apply QuickSort on the left and right subarrays.
The Lomuto Partitioning algorithm efficiently divides the array into two parts around the
pivot, facilitating the QuickSort process. The process continues recursively on both
subarrays until the entire array is sorted.
3. Apply Horspool’s algorithm to search for the pattern GREAT in the text
SAURAVISREALLYGREAT
Character G R E A T
Shift Value 4 3 2 1 5
SV(G)= 5-0-1=4
SV(R)= 5-1-1=3
SV(E)= 5-2-1=2
SV(A)= 5-3-1=1
SV(T)= 5
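A minimal Horspool sketch that builds this shift table and slides the pattern over the text (one possible implementation):

```python
def horspool_search(text, pattern):
    m, n = len(pattern), len(text)
    # Shift table: for each of the first m-1 pattern characters,
    # shift = m - 1 - (index of its last occurrence); every other character shifts by m.
    shift = {c: m for c in set(text) | set(pattern)}
    for k in range(m - 1):
        shift[pattern[k]] = m - 1 - k
    i = m - 1                      # text index aligned with the pattern's last character
    while i < n:
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1                 # compare right to left
        if k == m:
            return i - m + 1       # full match found
        i += shift[text[i]]
    return -1

print(horspool_search("SAURAVISREALLYGREAT", "GREAT"))  # 14
```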
4. Apply Warshall’s algorithm to find the transitive closure of the digraph defined by the
following adjacency matrix:
To find the transitive closure of a directed graph using Warshall's algorithm, we perform a
matrix operation to determine if there is a path between any two vertices in the graph. The
transitive closure matrix will show the existence of a path from vertex i to vertex j, where a
value of 1 represents a path and 0 represents no path.
The given adjacency matrix for the directed graph is as follows:
```
0100
0010
0001
0000
```
We start by taking the transitive closure matrix T to be the adjacency matrix itself, since the
adjacency matrix records all the direct (length-1) paths between vertices.
```
T= 0100
0010
0001
0000
```
- For each vertex i, check all possible intermediate vertices (k) and update the transitive
closure matrix T[i][j] as T[i][j] OR (T[i][k] AND T[k][j]).
```
k = 1: no change
T= 0100
   0010
   0001
   0000
k = 2: new path 1 -> 3 (via vertex 2)
T= 0110
   0010
   0001
   0000
k = 3: new paths 1 -> 4 and 2 -> 4 (via vertex 3)
T= 0111
   0011
   0001
   0000
k = 4: no change
T= 0111
   0011
   0001
   0000
```
**Result:**
The transitive closure of the directed graph represented by the given adjacency matrix is:
```
T= 0111
0011
0001
0000
```
The value T[i][j] = 1 indicates that there is a path from vertex i to vertex j in the graph.
Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two
edges from vertex B that are B to C with weight 10 and edge B to D with weight 4. Among the
edges, the edge BD has the minimum weight. So, add it to the MST.
Step 3 - Now, again, choose the edge with the minimum weight among all the remaining edges. In
this case, the edges DE and CD are such edges. Select the edge DE and add it to the MST, and
explore the vertices adjacent to C, i.e., E and A.
Step 4 - Now, select the edge CD, and add it to the MST.
Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a
cycle in the graph. So, choose the edge CA and add it to the MST.
So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of
the MST is given below -
Cost of MST = 4 + 2 + 1 + 3 = 10 units.
6. Define the classes P and NP and derive the relationships between them.
7. Using the sieve of Eratosthenes method, generate the prime numbers between 2 and 50
STEP-1 Mark all the numbers which are divisible by 2 and are greater than or equal to its
square (4).
(In the table of numbers 2-50, all multiples of 2 from 4 up to 50 are now marked.)
STEP-2 Move to next unmarked number 3 and mark all the numbers which are the
multiples of 3 and are greater than or equal to the square of it.
(All multiples of 3 from 9 up to 50 are now marked.)
STEP-3 Move to the next unmarked number 5 and mark all the numbers which are multiples
of 5 and are greater than or equal to its square (25).
(All multiples of 5 from 25 up to 50 are now marked.)
We continue this process; after marking the multiples of 7 (starting from 49), every composite
number up to 50 is marked and only the primes remain unmarked.
So the prime numbers are the unmarked ones: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47
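A minimal sieve sketch for the same range:

```python
def sieve(limit):
    marked = [False] * (limit + 1)
    primes = []
    for p in range(2, limit + 1):
        if not marked[p]:
            primes.append(p)                               # p is prime
            for multiple in range(p * p, limit + 1, p):    # start marking at p^2
                marked[multiple] = True
    return primes

print(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```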
STEP-5: Visit G
Stack: [G, E]
Visited: [A, B, D, C, F, H]
Now explore the neighbours of G i.e., H and E
Since H is already visited ignore it.
Since E is already in the stack ignore it.
Mark G as visited
Visited: [A, B, D, C, F, H, G]
STEP-6: Visit E
Stack: [E]
Visited: [A, B, D, C, F, H, G, E]
Hence DFS traversal is finished.
DFS traversal: A → B → D → C → F → H → G → E
Bounding function (Upper Bound): A bounding function estimates the maximum (for
maximization problems) or minimum (for minimization problems) possible value of the
optimal solution from a given node. It helps in determining if a node can be pruned
(discarded) or not.
Relaxation technique (Lower Bound): A relaxation technique is used to estimate the
minimum possible value of the optimal solution from a given node. It provides a lower
bound on the optimal solution.
```python
def has_duplicates(arr):
seen = set()
for num in arr:
if num in seen:
return True
seen.add(num)
return False
```
**Mathematical Analysis**:
**Time Complexity**:
1. Initializing the set `seen` takes constant time, which we can represent as O(1).
2. The for loop iterates through the entire array of size n. For each element, the lookup in
the set `seen` takes constant time on average (O(1)).
3. If the element is not in the set, it is inserted, which also takes constant time on average
(O(1)).
Therefore, the overall time complexity of the algorithm is O(n) since the dominant factor
is the linear iteration through the array.
**Space Complexity**:
The space complexity of the algorithm is O(n) because we use a set to store unique
elements. In the worst case, when all elements are unique, the set will contain all n
elements.
Given two square matrices A and B of size n x n, the goal is to compute their product C = A
* B.
1. Divide both matrices A and B into four equally-sized submatrices each: A11, A12, A21,
A22, and B11, B12, B21, B22.
```
A11 | A12 B11 | B12
----|---- * ----|----
A21 | A22 B21 | B22
```
**Example**:
Let's demonstrate Strassen's matrix multiplication algorithm with a simple example of 2x2
matrices.
```
A = | 1 2 |      B = | 5 6 |
    | 3 4 |          | 7 8 |
```
Step 1: Divide the matrices into submatrices (A11, A12, A21, A22, B11, B12, B21, B22):
```
A11 = | 1 |    A12 = | 2 |    A21 = | 3 |    A22 = | 4 |
B11 = | 5 |    B12 = | 6 |    B21 = | 7 |    B22 = | 8 |
```
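For this example, Strassen's seven products and the quadrant combinations (steps 2 and 3, using the standard Strassen formulas) work out as follows; a small runnable check:

```python
# Submatrices of the 2x2 example (each block is 1x1, i.e. a scalar).
A11, A12, A21, A22 = 1, 2, 3, 4
B11, B12, B21, B22 = 5, 6, 7, 8

# Strassen's seven products
M1 = (A11 + A22) * (B11 + B22)   # (1+4)*(5+8) = 65
M2 = (A21 + A22) * B11           # (3+4)*5     = 35
M3 = A11 * (B12 - B22)           # 1*(6-8)     = -2
M4 = A22 * (B21 - B11)           # 4*(7-5)     =  8
M5 = (A11 + A12) * B22           # (1+2)*8     = 24
M6 = (A21 - A11) * (B11 + B12)   # (3-1)*(5+6) = 22
M7 = (A12 - A22) * (B21 + B22)   # (2-4)*(7+8) = -30

# Quadrants of the product matrix C
C11 = M1 + M4 - M5 + M7          # 19
C12 = M3 + M5                    # 22
C21 = M2 + M4                    # 43
C22 = M1 - M2 + M3 + M6          # 50
print(C11, C12, C21, C22)        # 19 22 43 50
```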
Step 4: Combine the four submatrices to form the final result matrix C:
```
C = | 19 22 |
    | 43 50 |
```
The final matrix C is the product of matrices A and B using Strassen's algorithm. The
standard matrix multiplication would require 8 scalar multiplications to compute the result,
whereas Strassen's algorithm only uses 7 scalar multiplications, which demonstrates its
efficiency for larger matrices.
15. With an example explain the topological sorting (include both the method (a) DFS & (b)
Source removal)
Topological sorting is a linear ordering of the vertices of a directed acyclic graph (DAG) in
such a way that for every directed edge (u, v), vertex u comes before vertex v in the
ordering. Topological sorting is applicable only to DAGs, as cyclic graphs do not have a
valid topological order.
**Example Graph**:
Consider the following directed acyclic graph (DAG):
```
1 --> 2 --> 4
|           ^
v           |
3 --> 5 ----+
```
**Step 1**: Start from any unvisited vertex and perform DFS on the graph. For each
vertex, after visiting all its adjacent vertices, mark it as visited and add it to the front of the
topological order list.
**Step 2**: Continue this process until all vertices are visited.
**Step 1**: Find all vertices with in-degree 0 and add them to the set of sources.
Both methods have resulted in the same topological order for the given DAG: [1, 3, 2, 5, 4].
Topological sorting provides an ordering that satisfies the dependencies between vertices,
and it is widely used in various applications like task scheduling, dependency resolution,
and compiler optimization.
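A minimal sketch of the source-removal (Kahn's) method for this example DAG (edges 1→2, 1→3, 2→4, 3→5, 5→4; the neighbour lists are ordered so the trace matches the order given above):

```python
from collections import deque

def topological_sort(graph):
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)   # current sources
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:            # "remove" v together with its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return order

graph = {1: [3, 2], 2: [4], 3: [5], 4: [], 5: [4]}
print(topological_sort(graph))        # [1, 3, 2, 5, 4]
```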
A solution space tree represents the exploration of different possibilities and choices made
during the backtracking process to solve the problem. Each node in the tree represents a partial
configuration of the chessboard, and the edges represent the placement of the next queen. The
goal is to find a complete configuration (all queens placed) that satisfies the constraints (no two
queens attacking each other).
Let's represent the chessboard by numbers (1 to 4), where each number indicates the row in
which the queen is placed in that column.
```
ROOT
| \
Q1 Q2
| \
Q2 Q1
| \
... ...
```
Note: The above tree representation does not show the entire search space, as it can be extensive
and difficult to display entirely. Instead, it shows the initial branching at the first two levels of
the tree.
- The root node represents the initial configuration, where no queen is placed on the chessboard.
- At the first level, we place the first queen (Q1) in the first column (column 1).
- At the second level, we place the second queen (Q2) in the second column (column 2). This
creates the first possible partial configuration.
- From here, the algorithm would continue exploring all possible configurations by placing the
next queens in the subsequent columns, considering the constraints of the problem (no two
queens on the same row, column, or diagonal).
The backtracking algorithm will explore all possible combinations of queen placements on the
chessboard until a valid solution is found or all possibilities are exhausted.
Here are the steps to generate an Optimal BST using Dynamic Programming:
1. Input: We need a sorted list of keys and their corresponding probabilities (frequencies) of
being searched.
2. Define a DP Table: Create a 2D DP table, say `dp`, with dimensions (n+1) x (n+1), where
n is the number of keys. `dp[i][j]` will represent the cost of the optimal BST containing
keys from the ith to the jth element of the sorted list.
3. Fill Base Cases: For each individual key (i.e., i == j), the cost of the optimal BST is
simply its own probability, i.e., `dp[i][i] = frequency[i]`.
4. Calculate Optimal BST for Subtrees: For each sub-array length l (l = 2 to n), fill the `dp`
table using the formula:
```
dp[i][j] = min { dp[i][k-1] + dp[k+1][j] + sum(frequency[i..j]) }
```
where `k` is the root of the subtree, and `sum(frequency[i:j])` is the sum of probabilities
from i to j (inclusive).
This formula represents the cost of choosing k as the root. It includes the cost of the left
subtree (dp[i][k-1]), the cost of the right subtree (dp[k+1][j]), and the cost of searching the
current subtree's root (sum of probabilities from i to j).
5. Backtrack to Build the Optimal BST: To construct the optimal BST, you need to keep
track of the roots chosen at each step (k) and recursively build the left and right subtrees
until the entire tree is formed.
6. Final Result: The `dp[1][n]` will hold the cost of the optimal BST containing all keys
from the sorted list.
```python
def generate_optimal_bst(keys, frequency):
    n = len(keys)
    # dp[i][j] = minimum expected search cost of an optimal BST on keys i..j (1-indexed)
    dp = [[0.0 for _ in range(n + 2)] for _ in range(n + 2)]
    freq = [0.0] + list(frequency)       # shift frequencies to 1-based indexing
    for i in range(1, n + 1):
        dp[i][i] = freq[i]               # base case: a single key
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            total = sum(freq[i:j + 1])   # every key in the subtree gets one level deeper
            dp[i][j] = total + min(dp[i][k - 1] + dp[k + 1][j] for k in range(i, j + 1))
    return dp[1][n]

# Example usage:
keys = [1, 2, 3, 4]
frequency = [0.1, 0.4, 0.3, 0.2]
print(generate_optimal_bst(keys, frequency))  # ~1.8 (minimum expected search cost)
```
The above algorithm finds the minimum expected search cost of an optimal BST containing
the given keys and their probabilities. To construct the actual optimal BST, you would need
to perform additional steps to keep track of the root choices and build the tree recursively.
21. Explain the Mathematical Analysis for finding the duplicate elements in an Array.
```python
def has_duplicates(arr):
n = len(arr)
for i in range(n):
for j in range(i+1, n):
if arr[i] == arr[j]:
return True
return False
```
**Mathematical Analysis**:
Let's analyze the time complexity of the efficient approach using a hash set.
As a result, the overall time complexity of the efficient approach is O(N) since the
dominant factor is the linear iteration through the array.
On the other hand, the naive approach has a time complexity of O(N^2), which is
significantly slower for larger arrays.
22. What are the different methods of obtaining the Lower Bound of an algorithm.
Obtaining a lower bound for an algorithm involves determining the minimum number of
operations or comparisons required to solve a particular problem optimally. It provides a
baseline to assess the efficiency of different algorithms for the same problem. There are
several methods to obtain the lower bound of an algorithm:
3. Information Theory: Lower bounds can be obtained using concepts from information
theory, such as entropy. Shannon's entropy measures the amount of uncertainty or
randomness in a probability distribution. The minimum number of bits required to represent
the information lower bounds the algorithm's performance.
4. Reduction from Another Problem: Sometimes, the lower bound of a problem can be
obtained by reducing it to another well-studied problem with a known lower bound. If we
can prove that the problem is at least as hard as the known problem, then the known lower
bound applies to the original problem as well.
5. Pigeonhole Principle: This method involves using the pigeonhole principle to show that a
certain number of distinct inputs will necessarily lead to the same output. This establishes a
lower bound on the number of distinct outputs the algorithm must produce.
6. Omega Notation: The lower bound of an algorithm can be expressed using the Omega
notation (Ω). If a problem requires at least Ω(f(n)) time or space to be solved, it represents a
lower bound for the problem.
7. Best Possible Case: In some cases, the best possible case of an algorithm represents the
lower bound. If the best case scenario is Ω(f(n)), it means the algorithm cannot perform
better than that for any input.
8. Lower Bound of Subproblems: For algorithms that use divide and conquer or dynamic
programming, analyzing the lower bound of subproblems can sometimes provide insights
into the overall lower bound of the algorithm.
23. Explain in detail the Asympotic notations used to describe he running time of an algorithm.
Asymptotic notations are mathematical notations that allow you to analyze an algorithm’s
running time by describing how it behaves as the input size grows.
This is also referred to as an algorithm’s growth rate.
There are mainly three asymptotic notations:
Big-O Notation (O-notation)
Omega Notation (Ω-notation)
Theta Notation (Θ-notation)
EX: f(n)=2n+5
Big O notation defines an upper bound. So, let c=3
2n+5<=3n
If n=1, 2(1)+5<=3(1) => 7<=3 -> False
If n=2, 2(2)+5<=3(2) => 9<=6 -> False
If n=3, 2(3)+5<=3(3) => 11<=9 -> False
If n=4, 2(4)+5<=3(4) => 13<=12 -> False
If n=5, 2(5)+5<=3(5) => 15<=15 -> True
Condition is satisfied.
Therefore, f(n)<=c.g(n)
EX: f(n)=2n+5
Omega notation defines a lower bound. So, let c=2
2n+5>=2n
If n=1, 2(1)+5>=2(1) => 7>=2 -> True
Condition is satisfied.
Therefore, f(n)>=c.g(n)
24. Write the algorithm for merge sort and trace the data 38,27,43,3,9,82,10
Merge sort keeps on dividing the list into equal halves until it can no more be divided. By
definition, if it is only one element in the list, it is considered sorted. Then, merge sort
combines the smaller sorted lists keeping the new list sorted too.
Step 1 − if it is only one element in the list, consider it already sorted, so return.
Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into a new list in sorted order.
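A minimal merge sort sketch, run on the given data:

```python
def merge_sort(arr):
    if len(arr) <= 1:                      # Step 1: a single element is already sorted
        return arr
    mid = len(arr) // 2                    # Step 2: divide into two halves
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged = []                            # Step 3: merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```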
25. Write control abstraction for Backtracking. Draw the state space tree for the graph with n=3
vertices and m=3 colors (Red, Blue, Green)
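One way to express the backtracking control abstraction, instantiated here for the m-coloring problem (the edges assumed for the example are 1-2 and 1-3, as drawn below):

```python
def graph_coloring(graph, colors, assignment, vertex, solutions):
    # Backtracking template: extend the partial solution one component (vertex)
    # at a time, and undo any choice that cannot be completed.
    if vertex == len(graph):                            # all vertices colored
        solutions.append(dict(assignment))
        return
    for c in colors:                                    # try every candidate choice
        if all(assignment.get(u) != c for u in graph[vertex]):   # feasibility check
            assignment[vertex] = c                      # make the choice
            graph_coloring(graph, colors, assignment, vertex + 1, solutions)
            del assignment[vertex]                      # backtrack: undo the choice

# Vertices 0, 1, 2 stand for vertices 1, 2, 3 of the example graph.
graph = {0: [1, 2], 1: [0], 2: [0]}
solutions = []
graph_coloring(graph, ["Red", "Blue", "Green"], {}, 0, solutions)
print(len(solutions), "valid colorings")    # 12 with these edges
```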
Given graph with n=3 vertices and m=3 colors (Red, Blue, Green):
1
/ \
2 3
The process would continue like this, exploring different color combinations and
backtracking when conflicts arise, until either a valid solution is found or all
possibilities are exhausted. The tree below visualizes this process,
showing the branching of possibilities and where conflicts occur.
1 (R/B/G)
/ | \
(B/G) 2 (G/B) 3 (B/R)
| | |
X(R) X(R) X(G)
In this tree:
- Each vertex is labeled with its number (1, 2, 3), along with the available colors in
parentheses.
- The edges represent the selection of a color for the corresponding vertex.
- The 'X' indicates that a conflict occurred and the current branch of the tree is invalid.