AAD1 Pyq Solutions
Question 1:
a) Recursive Algorithm:
def multiply_digits(n):
    # Example: multiply_digits(345) returns 3 * 4 * 5 = 60.
    # Base case: a single-digit number is its own digit product.
    if n < 10:
        return n
    else:
        last_digit = n % 10
        remaining_digits = n // 10
        # Multiply the last digit into the product of the remaining digits.
        return last_digit * multiply_digits(remaining_digits)
b) Recurrence Relation:
T(n) = T(n/10) + c
where:
* T(n) represents the time complexity of the algorithm for an input number n.
* T(n/10) represents the time complexity of the recursive call on the remaining digits after removing
the last digit.
* c represents the constant time taken for basic operations like modulo, division, and multiplication.
c) Time Complexity:
The recurrence T(n) = T(n/10) + c unrolls to about c * log_10(n) plus a constant, since each recursive call removes one digit, i.e., divides the input value by 10. The time complexity of the algorithm is therefore O(log n).
Question 2:
To draw the recurrence tree, we start with a root node representing T(n), which does cn work. At each level, every node for a subproblem of size m splits into two children of sizes m/5 and 4m/5, each doing work proportional to its own size, and we continue until the subproblems become small enough to be solved directly (base case). The shallowest leaves occur at depth log_5 n and the deepest at depth log_{5/4} n, so the height of the tree is Θ(log n).
* Lower Bound: Every level of the tree down to depth log_5 n is complete, and the costs on each such level sum to exactly cn, so the total work is at least cn * log_5 n. Hence T(n) = Ω(n log n).
* Upper Bound: No level of the tree sums to more than cn, and there are at most log_{5/4} n levels, so the total work is at most cn * log_{5/4} n. Hence T(n) = O(n log n). (The standard Master Theorem does not apply here because the two subproblems have different sizes; the recursion-tree argument above is used instead.)
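As a rough numerical check on these bounds (not part of the original solution), the recurrence can be evaluated directly; the sketch below assumes c = 1 and a constant-cost base case for n < 5, and prints the ratio T(n) / (n log n), which stays within a constant band as n grows.

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n: int) -> float:
    # Evaluate T(n) = T(n/5) + T(4n/5) + n with an assumed constant base case.
    if n < 5:                              # assumed base case: constant work for tiny inputs
        return 1.0
    return T(n // 5) + T(4 * n // 5) + n   # c = 1 for the +cn term

for n in (10**3, 10**4, 10**5):
    print(n, T(n) / (n * log2(n)))         # ratio stays roughly constant, i.e. Theta(n log n)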
c) Comparing f(n) and g(n):
f(n) = O(g(n)) does hold. Both f(n) = 3n^2 + 2n + 2 and g(n) = 4n^2 + 6 are quadratic, and 3n^2 + 2n + 2 <= 4n^2 + 6 for every n >= 1 (since n^2 - 2n + 4 >= 0 always), so the definition of big-O is satisfied with c = 1 and n_0 = 1; neither function grows faster than the other by more than a constant factor.
Question 3:
To construct a MAX-HEAP, we need to ensure that every parent node is greater than or equal to its children. Given the numbers 6, 7, 8, 9, 11, 12, 14, 15, 16, more than one valid MAX-HEAP can be constructed from the same set, since the heap property does not fix the relative order of siblings; in particular, the missing number (10) can be placed at more than one valid position in the heap.
To construct a MIN-HEAP, we need to ensure that every parent node is smaller than or equal to its children. We can use the BUILD-MIN-HEAP procedure, which calls MIN-HEAPIFY once on each non-leaf node, working from the last non-leaf node up to the root (⌊n/2⌋ top-level calls for n elements). The number of additional recursive MIN-HEAPIFY calls and the number of swap operations depend on the initial arrangement of the numbers.
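A minimal sketch of this procedure is given below, assuming a 0-indexed Python list as the heap; the counters record every MIN-HEAPIFY invocation (including recursive ones) and every swap, so both quantities can be read off for any initial arrangement of the nine numbers.

def min_heapify(a, i, n, stats):
    # Sink a[i] until the subtree rooted at i satisfies the min-heap property.
    stats["calls"] += 1
    smallest, left, right = i, 2 * i + 1, 2 * i + 2
    if left < n and a[left] < a[smallest]:
        smallest = left
    if right < n and a[right] < a[smallest]:
        smallest = right
    if smallest != i:
        a[i], a[smallest] = a[smallest], a[i]
        stats["swaps"] += 1
        min_heapify(a, smallest, n, stats)

def build_min_heap(a):
    # Call min_heapify on every non-leaf node, from the last one up to the root.
    stats = {"calls": 0, "swaps": 0}
    for i in range(len(a) // 2 - 1, -1, -1):
        min_heapify(a, i, len(a), stats)
    return stats

nums = [16, 15, 14, 12, 11, 9, 8, 7, 6]     # one possible initial arrangement of the given numbers
print(build_min_heap(nums), nums)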
c) Increase_Key(Q, x, k) Operation:
The Increase_Key(Q, x, k) operation raises the key of element x in a Min-Priority Queue to the new value k (where k is at least the current key of x). The algorithm for this operation involves:
* Locating the index of element x in the heap array and setting its key to k.
* Restoring the min-heap property by sifting the element down: while it is larger than its smaller child, swap it with that child (equivalently, call MIN-HEAPIFY at that index). The operation takes O(log n) time.
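A minimal sketch under the same array-based min-heap assumption is shown below; the list-based queue and the linear search for x are illustrative choices, not a prescribed interface.

def increase_key(q, x, k):
    # Raise the key of element x in the min-heap q to k, then restore the heap property.
    i = q.index(x)                 # locate the element (O(n); a position map would make this O(1))
    if k < q[i]:
        raise ValueError("new key is smaller than current key")
    q[i] = k
    n = len(q)
    while True:                    # sift down: swap with the smaller child while out of order
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and q[left] < q[smallest]:
            smallest = left
        if right < n and q[right] < q[smallest]:
            smallest = right
        if smallest == i:
            break
        q[i], q[smallest] = q[smallest], q[i]
        i = smallest

q = [2, 5, 3, 9, 6, 8, 4]          # a valid min-heap
increase_key(q, 2, 7)              # the root's key rises from 2 to 7 and sinks down
print(q)                           # [3, 5, 4, 9, 6, 8, 7]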
Question 4:
To verify whether an undirected graph is connected, we can use Depth-First Search (DFS) or Breadth-First Search (BFS). If a single DFS or BFS started from an arbitrary vertex visits all vertices of the graph, then the graph is connected; otherwise it is not.
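A minimal DFS-based sketch of this connectivity check, assuming the graph is given as an adjacency-list dictionary:

def is_connected(graph):
    # Return True if the undirected graph (adjacency-list dict) is connected.
    if not graph:
        return True
    start = next(iter(graph))          # any vertex can serve as the start
    visited = set()
    stack = [start]
    while stack:                       # iterative DFS
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        stack.extend(v for v in graph[u] if v not in visited)
    return len(visited) == len(graph)

g = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"], "E": []}
print(is_connected(g))                 # False: E is isolated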
To generate a BFS tree, we start at the given node (E) and explore its neighbors level by level, making each newly discovered vertex a child of the vertex from which it was first reached. (The BFS tree figure for starting node E is omitted here.)
The size of the queue required to construct the BFS tree depends on the maximum number of nodes
at any level of the tree. In the given graph, the maximum number of nodes at any level is 3 (at level
2). Therefore, a queue of size 3 will be sufficient to construct the BFS tree.
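The sketch below builds the BFS tree as a parent map and records the largest number of vertices held in the queue at any moment; the small adjacency-list graph used here is only an illustration, since the exam graph is not reproduced in these notes.

from collections import deque

def bfs_tree(graph, source):
    # Return (parent map of the BFS tree, peak queue size) for the given source.
    parent = {source: None}
    queue = deque([source])
    peak = 1
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in parent:        # first discovery: v becomes a child of u
                parent[v] = u
                queue.append(v)
        peak = max(peak, len(queue))
    return parent, peak

g = {"E": ["A", "B"], "A": ["E", "C"], "B": ["E", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(bfs_tree(g, "E"))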
Question 5:
a) Finding Vertices Reachable from Given Node:
To find the vertices reachable from a given node u in a directed graph, we can use Depth-First Search
(DFS). DFS starts at node u and explores all its neighbors recursively. The time complexity of DFS is
O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges in the graph.
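A minimal sketch of the reachability computation, again assuming an adjacency-list dictionary; it returns exactly the set of vertices visited by a DFS started at u and runs in O(|V| + |E|) time.

def reachable_from(graph, u):
    # Return the set of vertices reachable from u in a directed graph.
    visited = set()

    def dfs(v):
        visited.add(v)
        for w in graph.get(v, []):     # follow outgoing edges only
            if w not in visited:
                dfs(w)

    dfs(u)
    return visited

g = {"u": ["a", "b"], "a": ["c"], "b": [], "c": ["a"], "d": ["u"]}
print(reachable_from(g, "u"))          # {'u', 'a', 'b', 'c'}; 'd' is not reachable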
Question 6:
Algorithm:
* Create a Graph: Represent the locations and connections as a weighted graph, where nodes are
locations and edges are connections with weights representing the length of the insulated wire.
* Minimum Spanning Tree: Use Kruskal's or Prim's algorithm to find the minimum spanning tree of
the graph. This tree will connect all locations using the minimum total wire length.
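A minimal Kruskal-style sketch is given below, assuming the connections are supplied as (length, location1, location2) edges with locations numbered 0 to n-1; union-find keeps the cycle check cheap, and the returned edges give the minimum total wire length.

def minimum_wiring(n_locations, edges):
    # Kruskal's algorithm: edges is a list of (length, u, v) tuples.
    parent = list(range(n_locations))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen, total = [], 0
    for length, u, v in sorted(edges): # consider shortest wires first
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding this wire does not create a cycle
            parent[ru] = rv
            chosen.append((u, v, length))
            total += length
    return chosen, total

edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]   # hypothetical wire lengths
print(minimum_wiring(4, edges))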
Caching Trace:
| Request | Cache Contents | Evicted |
|---|---|---|
| 3 | 3 | - |
| 5 | 3, 5 | - |
| 2 | 3, 5, 2 | - |
| 4 | 3, 5, 2, 4 | - |
| 8 | 5, 2, 4, 8 | 3 |
| 0 | 5, 2, 4, 8, 0 | 5 |
| 6 | 2, 4, 8, 0, 6 | 2 |
| 3 | 4, 8, 0, 6, 3 | 4 |
| 9 | 8, 0, 6, 3, 9 | 8 |
| 6 | 0, 6, 3, 9, 6 | 0 |
| 0 | 6, 3, 9, 6, 0 | 6 |
| 1 | 3, 9, 6, 0, 1 | 3 |
| 2 | 9, 6, 0, 1, 2 | 9 |
| 1 | 6, 0, 1, 2, 1 | 6 |
| 3 | 0, 1, 2, 1, 3 | 0 |
| 2 | 1, 2, 1, 3, 2 | 1 |
| 2 | 2, 1, 3, 2, 2 | 2 |
| 3 | 1, 3, 2, 2, 3 | 1 |
| 5 | 3, 2, 2, 3, 5 | 3 |
| 8 | 2, 2, 3, 5, 8 | 2 |
| 1 | 2, 3, 5, 8, 1 | 2 |
| 4 | 3, 5, 8, 1, 4 | 3 |
Optimal Caching:
| Request | Cache Contents | Evicted |
|---|---|---|
| 3 | 3 | - |
| 5 | 3, 5 | - |
| 2 | 3, 5, 2 | - |
| 4 | 3, 5, 2, 4 | - |
| 8 | 3, 5, 2, 4, 8 | 3 |
| 0 | 3, 5, 2, 4, 0 | 3 |
| 6 | 3, 5, 2, 4, 6 | 3 |
| 3 | 3, 5, 2, 4, 6 | - |
| 9 | 3, 5, 2, 4, 9 | 3 |
| 6 | 3, 5, 2, 4, 6 | - |
| 0 | 3, 5, 2, 4, 0 | - |
| 1 | 3, 5, 2, 4, 1 | 3 |
| 2 | 3, 5, 2, 4, 1 | - |
| 1 | 3, 5, 2, 4, 1 | - |
| 3 | 3, 5, 2, 4, 1 | - |
| 2 | 3, 5, 2, 4, 1 | - |
| 2 | 3, 5, 2, 4, 1 | - |
| 3 | 3, 5, 2, 4, 1 | - |
| 5 | 3, 5, 2, 4, 1 | - |
| 8 | 3, 5, 2, 4, 1 | - |
| 1 | 3, 5, 2, 4, 1 | - |
| 4 | 3, 5, 2, 4, 1 | - |
c) Comparison:
The optimal (farthest-in-future) strategy incurs noticeably fewer evictions than the first trace because it anticipates future references and always evicts the block that will not be needed for the longest time.
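The optimal policy can also be simulated mechanically. The sketch below replays the request sequence read off the tables above and, on each miss with a full cache, evicts the cached block whose next use lies farthest in the future; the cache size of 5 is an assumption taken from the optimal trace.

def farthest_in_future(requests, cache_size):
    # Belady's optimal policy: on a miss with a full cache, evict the block
    # whose next use is farthest in the future (or never used again).
    cache, evictions = set(), []
    for i, block in enumerate(requests):
        if block in cache:                       # hit: nothing to do
            continue
        if len(cache) == cache_size:             # miss with a full cache: choose a victim
            def next_use(b):
                future = requests[i + 1:]
                return future.index(b) if b in future else float("inf")
            victim = max(cache, key=next_use)
            cache.discard(victim)
            evictions.append(victim)
        cache.add(block)
    return evictions

requests = [3, 5, 2, 4, 8, 0, 6, 3, 9, 6, 0, 1, 2, 1, 3, 2, 2, 3, 5, 8, 1, 4]
print(farthest_in_future(requests, 5))           # cache size 5 is an assumption read off the trace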
Question 7:
Algorithm:
* Merge-Sort: Guarantees O(n log n) time complexity in all cases, but requires additional space for
merging.
* Quick-Sort: On average, has O(n log n) time complexity, but can degenerate to O(n^2) in the worst
case (e.g., sorted or reverse-sorted input).
Interpreting the distance from ascending order as the number of inversions (pairs of elements that appear in the wrong relative order), the given list {12, 3, 20, 7, 5, 16, 4, 10, 8} contains 19 inversions; this count can be obtained in O(n log n) time by piggybacking on Merge-Sort.
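A minimal Merge-Sort based sketch of the inversion count follows; whenever an element from the right half is merged before remaining elements of the left half, each of those remaining left elements forms one inversion with it.

def count_inversions(a):
    # Return (sorted copy of a, number of inversions), counted during the merge step.
    if len(a) <= 1:
        return a[:], 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged = []
    i = j = 0
    inversions = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inversions += len(left) - i    # right[j] precedes every remaining left element
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions

print(count_inversions([12, 3, 20, 7, 5, 16, 4, 10, 8])[1])   # prints 19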
Question 8:
* Base Case: If the string is empty, return 0. If the string is a single bit, return 1 if it is 0, otherwise return 0.
* Recursive Step: Split the string into two halves, recursively count the 0-bits in each half, and add the two counts.
Note that this divide and conquer approach still examines every bit, so it takes O(n) time, the same as the naive single pass over the string; it becomes asymptotically faster (O(log n), via binary search for the boundary between 0s and 1s) only in the special case where the bits are known to be sorted.
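A minimal sketch of the divide and conquer count of 0-bits described above; it illustrates the recursion rather than any asymptotic saving.

def count_zeros(s):
    # Count the '0' characters in bit string s by divide and conquer.
    if len(s) == 0:
        return 0
    if len(s) == 1:
        return 1 if s == "0" else 0
    mid = len(s) // 2
    return count_zeros(s[:mid]) + count_zeros(s[mid:])   # combine the two half-counts

print(count_zeros("1001011000"))   # 6 zeros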
Question 9
Dynamic Programming Paradigm
* Approach:
* Breaks the problem into overlapping subproblems, solves each subproblem once, and stores (memoizes) its result for reuse.
* Key Characteristics:
* Overlapping subproblems
* Optimal substructure (optimal solution to the problem can be constructed from optimal solutions
to its subproblems)
* Examples:
* Fibonacci sequence
* Knapsack problem
Greedy Paradigm
* Approach:
* Makes a locally optimal choice at each step in the hope that it will lead to a globally optimal
solution.
* Key Characteristics:
* Never reconsiders a choice once it has been made
* Yields an optimal solution only when the greedy-choice property and optimal substructure both hold
* Examples:
* Huffman coding
Comparison Table:
| Aspect | Dynamic Programming | Greedy |
|---|---|---|
| Strategy | Solves all overlapping subproblems and combines their stored results | Commits to a locally optimal choice at each step |
| Optimality | Guaranteed when the problem has optimal substructure | Guaranteed only when the greedy-choice property also holds |
| Examples | Fibonacci sequence, Knapsack problem | Huffman coding |
Given:
* Edges and weights: A->B (2), B->D (3), A->C (2), C->D (6), C->E (4), E->D (-7), D->C (2)
Objective:
* Check if Bellman-Ford can estimate the shortest path from A to all other nodes.
Analysis:
Bellman-Ford can detect negative weight cycles in a graph. In this graph there is a negative weight cycle: C->E (4), E->D (-7), D->C (2), which together form a cycle of total weight 4 - 7 + 2 = -1.
Conclusion:
Since a negative weight cycle exists and is reachable from A, shortest path distances to the vertices on that cycle (and to any vertex reachable from it) are not well defined, so Bellman-Ford cannot produce correct shortest path estimates for those nodes. The algorithm does, however, detect the situation: if any edge can still be relaxed after |V| - 1 passes, it reports that a negative cycle is present.
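A minimal Bellman-Ford sketch over the given edge list is shown below; the extra pass at the end flags any edge that can still be relaxed, which is how the negative cycle C->E->D->C is detected.

def bellman_ford(vertices, edges, source):
    # Return (distances, has_negative_cycle) for a directed graph given as (u, v, w) edges.
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):            # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a reachable negative cycle.
    has_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_cycle

edges = [("A", "B", 2), ("B", "D", 3), ("A", "C", 2), ("C", "D", 6),
         ("C", "E", 4), ("E", "D", -7), ("D", "C", 2)]
print(bellman_ford("ABCDE", edges, "A"))          # has_negative_cycle is True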
Question 10
Given:
* Set of intervals with start time, finish time, and weight: S = {(1,3,4), (3,6,5), (6,9,4), (6,7,2), (1,4,3),
(2,7,5)}
Objective:
* Select a set of mutually non-conflicting intervals whose total weight is maximum.
Recursive Approach:
* Sort the intervals by finish time and let p(i) denote the last interval that finishes no later than interval i starts.
* Recursive Step: OPT(i) = max(w_i + OPT(p(i)), OPT(i-1)), i.e., either take interval i together with the best schedule among intervals compatible with it, or skip interval i.
With memoization, each of the n subproblems is solved once, and computing p(i) for an interval by scanning the remaining intervals takes O(n) time, so this recursive solution runs in O(n^2) time overall. (Without memoization the plain recursion is exponential; with sorting and binary search for p(i), the bound improves to O(n log n).)
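A minimal memoized sketch of this recurrence, applied to the given set S and assuming that an interval may begin exactly when another one finishes:

from functools import lru_cache

def max_weight_schedule(intervals):
    # Weighted interval scheduling: intervals are (start, finish, weight) tuples.
    items = sorted(intervals, key=lambda t: t[1])           # sort by finish time

    def p(i):                                               # last interval compatible with item i
        for j in range(i - 1, -1, -1):
            if items[j][1] <= items[i][0]:
                return j
        return -1

    @lru_cache(maxsize=None)
    def opt(i):
        if i < 0:
            return 0
        start, finish, weight = items[i]
        return max(weight + opt(p(i)), opt(i - 1))          # take item i, or skip it

    return opt(len(items) - 1)

S = [(1, 3, 4), (3, 6, 5), (6, 9, 4), (6, 7, 2), (1, 4, 3), (2, 7, 5)]
print(max_weight_schedule(S))   # 13, from intervals (1,3,4), (3,6,5) and (6,9,4)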
Given:
* Knapsack capacity: W = 9
Objective:
* Find an optimal selection of items to fill the knapsack so that the total profit of the selected items
is maximum.
* If the weight of the current item is less than or equal to the current capacity:
* dp[i][j] = max(dp[i-1][j], dp[i-1][j - w_i] + p_i)
* Otherwise:
* dp[i][j] = dp[i-1][j]
Example:
For the given input, the optimal selection of items would be {i2, i4} with a total profit of 10.
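A minimal bottom-up sketch of this recurrence is shown below; the item weights and profits are not reproduced in these notes, so the example data is purely hypothetical and only the capacity W = 9 comes from the question.

def knapsack(weights, profits, W):
    # Bottom-up 0/1 knapsack: dp[i][j] = best profit using the first i items with capacity j.
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            dp[i][j] = dp[i - 1][j]                          # skip item i
            if weights[i - 1] <= j:                          # take item i if it fits
                dp[i][j] = max(dp[i][j], dp[i - 1][j - weights[i - 1]] + profits[i - 1])
    return dp[n][W]

# Hypothetical items i1..i4; only the capacity W = 9 is taken from the question.
print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 9))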