AAD1 Pyq Solutions

Question 1:

a) Recursive Algorithm for Multiplication of Digits:

def multiply_digits(n):
    if n < 10:
        return n
    else:
        last_digit = n % 10
        remaining_digits = n // 10
        return last_digit * multiply_digits(remaining_digits)

b) Recurrence Relation:

The recurrence relation for the algorithm is:

T(n) = T(n/10) + c

where:

* T(n) represents the time complexity of the algorithm for an input number n.

* T(n/10) represents the time complexity of the recursive call on the remaining digits after removing
the last digit.

* c represents the constant time taken for basic operations like modulo, division, and multiplication.

c) Time Complexity:

Unrolling the recurrence T(n) = T(n/10) + c gives one constant-cost term per level of recursion, and since n is divided by 10 at each call there are about log10(n) levels. The time complexity is therefore O(log n), which is linear in the number of digits of n.

Question 2:

a) Recurrence Tree for T(n) = T(n/5) + T(4n/5) + cn:

To draw the recurrence tree, start with a root node of cost cn representing T(n). Each node of size m has two children of sizes m/5 and 4m/5, whose costs cm/5 and 4cm/5 again sum to cm, so every full level of the tree contributes at most cn. Leaves occur at different depths: the shortest root-to-leaf path (always taking the n/5 branch) has length log_5 n, while the longest path (always taking the 4n/5 branch) has length log_{5/4} n, so the height of the tree is O(log n).

b) Lower and Upper Bounding Functions:

* Lower Bound: Every level of the recurrence tree down to depth log_5 n is complete and contributes cn work, so T(n) >= cn log_5 n, giving a lower bound of Ω(n log n).

* Upper Bound: No level of the tree contributes more than cn and the tree has at most log_{5/4} n levels, so T(n) <= cn log_{5/4} n, giving an upper bound of O(n log n). (The Master Theorem does not apply directly here because the two subproblems have different sizes; the recursion-tree argument is used instead.)
c) Comparing f(n) and g(n):

We can say that f(n) = O(g(n)). Both f(n) = 3n^2 + 2n + 2 and g(n) = 4n^2 + 6 are quadratic, and 3n^2 + 2n + 2 <= 4n^2 + 6 for all n >= 1 (since 4n^2 + 6 - (3n^2 + 2n + 2) = (n - 1)^2 + 3 > 0), so the Big-O definition is satisfied with c = 1 and n0 = 1. In fact f(n) = Θ(g(n)), since both functions grow at the same quadratic rate.

Question 3:

a) Constructing MAX-HEAPS from Given Numbers:

To construct a MAX-HEAP, every parent node must be greater than or equal to its children. For the numbers 6, 7, 8, 9, 11, 12, 14, 15, 16, the largest value 16 must sit at the root, but beyond that the heap property only constrains each parent-child pair, not the order of siblings, so several distinct MAX-HEAPS can be built from the same nine numbers (for example, 15 and 14 can be exchanged as the two children of the root, with their subtrees rearranged accordingly).

b) Constructing MIN-HEAP and Counting Operations:

To construct a MIN-HEAP, every parent node must be smaller than or equal to its children. Using BUILD-MIN-HEAP, we place the numbers in an array and call MIN-HEAPIFY on every non-leaf node from the last one up to the root, so for n elements there are floor(n/2) top-level MIN-HEAPIFY calls, plus the recursive calls they trigger. The exact number of recursive calls and swaps depends on the initial arrangement of the numbers; a sketch for counting them is given below.
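
Below is a minimal sketch of how these counts could be obtained, assuming the numbers are placed in an array in some initial order and BUILD-MIN-HEAP is run over it; the array order used in the example is an assumption, not an arrangement fixed by the question.

def min_heapify(a, i, n, counts):
    # Restore the min-heap property at index i, assuming both subtrees are heaps.
    counts["calls"] += 1
    left, right = 2 * i + 1, 2 * i + 2
    smallest = i
    if left < n and a[left] < a[smallest]:
        smallest = left
    if right < n and a[right] < a[smallest]:
        smallest = right
    if smallest != i:
        a[i], a[smallest] = a[smallest], a[i]
        counts["swaps"] += 1
        min_heapify(a, smallest, n, counts)

def build_min_heap(a):
    # Call MIN-HEAPIFY on every non-leaf node, from the last one up to the root.
    counts = {"calls": 0, "swaps": 0}
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        min_heapify(a, i, n, counts)
    return counts

# Example with an assumed initial order: prints the counters and the heapified array.
nums = [16, 15, 14, 12, 11, 9, 8, 7, 6]
print(build_min_heap(nums), nums)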

c) Increase_Key(Q, x, k) Operation:

The Increase_Key(Q, x, k) operation increases the key of element x in a Min-Priority Queue to the new value k (with k no smaller than the current key). The algorithm for this operation involves:

* Locating the index of element x in the heap array.

* Updating the key of x to k.

* Since the key only became larger, the min-heap property can now be violated only between x and its children: while x is larger than its smaller child, swap x with that child (this sift-down is exactly a call to MIN-HEAPIFY at x's position).

* Stop when x is smaller than or equal to both children or becomes a leaf; the operation takes O(log n) time.
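
A minimal sketch under the assumptions above (heap stored as a Python list, x given as an index into it, and k no smaller than the current key); the function and variable names are illustrative, not taken from the question.

def increase_key(heap, x, k):
    # Raise the key at index x of a min-heap to k, then sift down to restore order.
    if k < heap[x]:
        raise ValueError("new key is smaller than current key")
    heap[x] = k
    n = len(heap)
    while True:
        left, right = 2 * x + 1, 2 * x + 2
        smallest = x
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == x:        # heap property restored
            break
        heap[x], heap[smallest] = heap[smallest], heap[x]
        x = smallest

# Example: increase the key at index 0 of a small min-heap.
q = [2, 5, 3, 9, 6]
increase_key(q, 0, 8)
print(q)   # [3, 5, 8, 9, 6]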

Question 4:

a) Verifying Graph Connectivity:

To verify whether an undirected graph is connected, run Depth-First Search (DFS) or Breadth-First Search (BFS) once from any single vertex. If the traversal visits all |V| vertices, the graph is connected; otherwise it is not. The check takes O(|V| + |E|) time.
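
A minimal sketch of this check, assuming the graph is given as an adjacency list (a dictionary mapping each vertex to its list of neighbours); the representation is an assumption made for illustration.

from collections import deque

def is_connected(adj):
    # BFS from an arbitrary start vertex; connected iff every vertex is reached.
    vertices = list(adj)
    if not vertices:
        return True
    start = vertices[0]
    visited = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return len(visited) == len(vertices)

# Example: a path A-B-C is connected, a graph with two separate components is not.
print(is_connected({"A": ["B"], "B": ["A", "C"], "C": ["B"]}))          # True
print(is_connected({"A": ["B"], "B": ["A"], "C": ["D"], "D": ["C"]}))   # False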

b) BFS Tree for Given Graph:

To generate a BFS tree, we start at the given node (E) and explore its neighbors level by level. For the given graph, starting from E, the vertices are discovered in the order:

E -> B -> A -> C -> D -> F -> H -> G

The BFS tree consists of the edges along which each vertex is first discovered.

c) Size of Queue for BFS Tree:

The queue size required to construct the BFS tree is the maximum number of vertices held in the queue at any point during the traversal (vertices discovered but not yet processed). For the given graph this maximum is 3, reached around level 2 of the tree, so a queue of size 3 is sufficient.

Question 5:
a) Finding Vertices Reachable from Given Node:

To find the vertices reachable from a given node u in a directed graph, we can use Depth-First Search
(DFS). DFS starts at node u and explores all its neighbors recursively. The time complexity of DFS is
O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges in the graph.
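
A minimal sketch of this idea, again assuming an adjacency-list representation; the sample graph is made up for illustration.

def reachable_from(adj, u):
    # Iterative DFS from u; returns the set of all vertices reachable from u.
    visited = set()
    stack = [u]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        for w in adj.get(v, []):
            if w not in visited:
                stack.append(w)
    return visited

# Example on a small directed graph.
g = {"u": ["a", "b"], "a": ["c"], "b": [], "c": ["u"], "d": ["u"]}
print(reachable_from(g, "u"))   # the set {'u', 'a', 'b', 'c'}; d is not reachable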


Question 6:

a) Minimum Wire Installation:

Algorithm:

* Create a Graph: Represent the locations and connections as a weighted graph, where nodes are
locations and edges are connections with weights representing the length of the insulated wire.

* Minimum Spanning Tree: Use Kruskal's or Prim's algorithm to find the minimum spanning tree of
the graph. This tree will connect all locations using the minimum total wire length.
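
A minimal sketch of the MST step using Kruskal's algorithm with a small union-find; the edge-list format and the sample wire lengths are assumptions for illustration, not the figures from the original question.

def kruskal(n, edges):
    # edges: list of (weight, u, v) with locations numbered 0..n-1.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Example: 4 locations with assumed wire lengths on the possible connections.
edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # ([(0, 2, 1), (1, 3, 2), (1, 2, 3)], 6)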

b) Cache Eviction Schedule:

Policy: "The block that was brought in last must be evicted first."

| Reference | Cache Contents | Eviction |
|---|---|---|
| 3 | 3 | - |
| 5 | 3, 5 | - |
| 2 | 3, 5, 2 | - |
| 4 | 3, 5, 2, 4 | - |
| 8 | 5, 2, 4, 8 | 3 |
| 0 | 5, 2, 4, 8, 0 | 5 |
| 6 | 2, 4, 8, 0, 6 | 2 |
| 3 | 4, 8, 0, 6, 3 | 4 |
| 9 | 8, 0, 6, 3, 9 | 8 |
| 6 | 0, 6, 3, 9, 6 | 0 |
| 0 | 6, 3, 9, 6, 0 | 6 |
| 1 | 3, 9, 6, 0, 1 | 3 |
| 2 | 9, 6, 0, 1, 2 | 9 |
| 1 | 6, 0, 1, 2, 1 | 6 |
| 3 | 0, 1, 2, 1, 3 | 0 |
| 2 | 1, 2, 1, 3, 2 | 1 |
| 2 | 2, 1, 3, 2, 2 | 2 |
| 3 | 1, 3, 2, 2, 3 | 1 |
| 5 | 3, 2, 2, 3, 5 | 3 |
| 8 | 2, 2, 3, 5, 8 | 2 |
| 1 | 2, 3, 5, 8, 1 | 2 |
| 4 | 3, 5, 8, 1, 4 | 3 |

Optimal Caching:

| Reference | Cache Contents | Eviction |
|---|---|---|
| 3 | 3 | - |
| 5 | 3, 5 | - |
| 2 | 3, 5, 2 | - |
| 4 | 3, 5, 2, 4 | - |
| 8 | 3, 5, 2, 4, 8 | 3 |
| 0 | 3, 5, 2, 4, 0 | 3 |
| 6 | 3, 5, 2, 4, 6 | 3 |
| 3 | 3, 5, 2, 4, 6 | - |
| 9 | 3, 5, 2, 4, 9 | 3 |
| 6 | 3, 5, 2, 4, 6 | - |
| 0 | 3, 5, 2, 4, 0 | - |
| 1 | 3, 5, 2, 4, 1 | 3 |
| 2 | 3, 5, 2, 4, 1 | - |
| 1 | 3, 5, 2, 4, 1 | - |
| 3 | 3, 5, 2, 4, 1 | - |
| 2 | 3, 5, 2, 4, 1 | - |
| 2 | 3, 5, 2, 4, 1 | - |
| 3 | 3, 5, 2, 4, 1 | - |
| 5 | 3, 5, 2, 4, 1 | - |
| 8 | 3, 5, 2, 4, 1 | - |
| 1 | 3, 5, 2, 4, 1 | - |
| 4 | 3, 5, 2, 4, 1 | - |

c) Comparison:

The optimal caching strategy results in far fewer evictions than the given policy because it anticipates future references and always evicts the block whose next use lies farthest in the future (Belady's rule), whereas the given policy decides using only the order in which blocks were brought in.
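
A minimal sketch of the farthest-in-future (Belady) rule that underlies optimal caching, assuming a cache capacity of 5; it illustrates the rule in general, and its tie-breaking may differ from the hand-drawn schedule above.

def optimal_evictions(refs, capacity):
    # Belady's rule: on a miss with a full cache, evict the block whose
    # next reference is farthest in the future (or never referenced again).
    cache, evictions = set(), []
    for i, block in enumerate(refs):
        if block in cache:
            continue                          # a hit needs no eviction
        if len(cache) == capacity:
            future = refs[i + 1:]
            def next_use(b):
                return future.index(b) if b in future else float("inf")
            victim = max(cache, key=next_use)
            cache.remove(victim)
            evictions.append(victim)
        cache.add(block)
    return evictions

refs = [3, 5, 2, 4, 8, 0, 6, 3, 9, 6, 0, 1, 2, 1, 3, 2, 2, 3, 5, 8, 1, 4]
print(optimal_evictions(refs, 5))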

Question 7:

a) Merging and Sorting:

Algorithm:

* Create an empty list C.

* Initialize two pointers, i and j, to the beginning of lists A and B, respectively.

* While both lists A and B have elements:

* If A[i] >= B[j], append B[j] to C and increment j.

* Otherwise, append A[i] to C and increment i.

* Append any remaining elements from A or B to C.
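
A minimal sketch of this merge, assuming A and B are already sorted in ascending order (as the merge step requires); the names follow the description above.

def merge(A, B):
    # Merge two sorted lists A and B into a single sorted list C.
    C = []
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] >= B[j]:
            C.append(B[j])
            j += 1
        else:
            C.append(A[i])
            i += 1
    C.extend(A[i:])      # at most one of these two extends adds anything
    C.extend(B[j:])
    return C

print(merge([1, 4, 7], [2, 3, 9]))   # [1, 2, 3, 4, 7, 9]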

b) Merge-Sort vs. Quick-Sort:

* Merge-Sort: Guarantees O(n log n) time complexity in all cases, but requires O(n) additional space for merging.

* Quick-Sort: Sorts in place and has O(n log n) time complexity on average, but can degenerate to O(n^2) in the worst case, e.g. on sorted or reverse-sorted input when the pivot is always chosen as the first or last element.

c) Distance from Ascending Order:

Measuring the distance from ascending order as the number of inversions (pairs of elements that appear in the wrong relative order), the given list {12, 3, 20, 7, 5, 16, 4, 10, 8} has 19 inversions: 12 contributes 6 (it precedes 3, 7, 5, 4, 10, 8), 20 contributes 6, 7 contributes 2, 5 contributes 1, 16 contributes 3, and 10 contributes 1.
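
A minimal sketch that counts inversions directly by checking every pair, which is enough to verify the figure above; an O(n log n) count based on merge sort is also possible.

def count_inversions(a):
    # Count pairs (i, j) with i < j and a[i] > a[j].
    count = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if a[i] > a[j]:
                count += 1
    return count

print(count_inversions([12, 3, 20, 7, 5, 16, 4, 10, 8]))   # 19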

Question 8:

a) Divide and Conquer for Counting 0s:

* Base Case: If the string is empty, return 0. If the string is a single bit, return 1 if it's 0, otherwise 0.

* Divide: Divide the string into two halves.

* Conquer: Recursively count the number of 0s in each half.

* Combine: Add the number of 0s in each half.
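
A minimal sketch of this divide-and-conquer scheme, assuming the input is a Python string of '0' and '1' characters.

def count_zeros(s):
    # Base cases: empty string, or a single bit.
    if len(s) == 0:
        return 0
    if len(s) == 1:
        return 1 if s == "0" else 0
    # Divide the string into two halves, conquer each, combine by adding.
    mid = len(s) // 2
    return count_zeros(s[:mid]) + count_zeros(s[mid:])

print(count_zeros("1001010"))   # 4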

b) Recurrence Relation and Time Complexity:

Recurrence relation: T(n) = 2T(n/2) + O(1)

Time complexity: O(n), by case 1 of the Master Theorem (a = 2, b = 2, f(n) = O(1), and n^(log_2 2) = n dominates the constant combine cost).

c) Comparison with Naive Algorithm:

The divide-and-conquer approach is not asymptotically faster than the naive algorithm: the naive algorithm scans the string once in O(n) time, and the divide-and-conquer version is also O(n), since every bit must be examined at least once. The naive single pass is simpler and has smaller constant overhead, so divide and conquer offers no practical advantage here.


Question 9

(a) Compare Dynamic Programming with Greedy Paradigm:

Dynamic Programming (DP)

* Approach:

* Breaks down a problem into smaller overlapping subproblems.

* Solves each subproblem once and stores the results in a table.

* Uses the solutions of the subproblems to solve the larger problem.

* Key Characteristics:

* Overlapping subproblems

* Optimal substructure (optimal solution to the problem can be constructed from optimal solutions
to its subproblems)

* Examples:

* Fibonacci sequence

* Knapsack problem

* Longest common subsequence

Greedy Paradigm

* Approach:

* Makes a locally optimal choice at each step in the hope that it will lead to a globally optimal
solution.

* Key Characteristics:

* Does not always guarantee the optimal solution.

* Often simpler to implement than dynamic programming.

* Examples:

* Activity selection problem

* Dijkstra's algorithm for shortest paths

* Huffman coding

Comparison Table:

| Feature | Dynamic Programming | Greedy Paradigm |
|---|---|---|
| Overlapping Subproblems | Yes | No |
| Optimal Substructure | Yes | Not always |
| Solution Guarantee | Always optimal | Not always optimal |
| Complexity | Often higher | Often lower |
| Implementation | More complex | Simpler |

(b) Bellman-Ford Algorithm and Shortest Paths:

Given:

* Directed graph with vertices: A, B, C, D, E

* Edges and weights: A->B (2), B->D (3), A->C (2), C->D (6), C->E (4), E->D (-7), D->C (2)

Objective:

* Check if Bellman-Ford can estimate the shortest path from A to all other nodes.

* If yes, find the shortest path estimates.

* If no, explain the reason.

Analysis:

Bellman-Ford can detect negative-weight cycles in a graph. In this graph there is a negative-weight cycle C -> E -> D -> C, with total weight 4 + (-7) + 2 = -1, and it is reachable from the source A via the edge A->C.

Conclusion:

Since a negative-weight cycle is reachable from A, shortest paths to the vertices on it (C, D, E) are not well-defined: they can be made arbitrarily small by going around the cycle. Bellman-Ford detects this because some edge can still be relaxed after |V| - 1 = 4 passes, and it then reports that no valid shortest-path estimates exist rather than returning distances for those vertices. (Only the estimate for B, namely A->B = 2, is unaffected by the cycle.)
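
A minimal sketch of Bellman-Ford with the final negative-cycle check, run on the edge list from the question; the dictionary-based representation is an implementation choice, not part of the original answer.

def bellman_ford(vertices, edges, source):
    # edges: list of (u, v, w) directed edges; returns (distances, has_negative_cycle).
    INF = float("inf")
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):          # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Final check: if any edge can still be relaxed, a reachable negative cycle exists.
    has_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_cycle

vertices = ["A", "B", "C", "D", "E"]
edges = [("A", "B", 2), ("B", "D", 3), ("A", "C", 2), ("C", "D", 6),
         ("C", "E", 4), ("E", "D", -7), ("D", "C", 2)]
print(bellman_ford(vertices, edges, "A"))   # has_cycle is True for this graph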

Question 10

(a) Finding Non-Conflicting Intervals with Maximum Weight:

Given:

* Set of intervals with start time, finish time, and weight: S = {(1,3,4), (3,6,5), (6,9,4), (6,7,2), (1,4,3),
(2,7,5)}

Objective:

* Find a set of non-conflicting intervals with the maximum weight.

Recursive Approach:

* Base Case: If the set of intervals is empty, return 0.

* Recursive Step:

* Sort the intervals by their finish time.

* Consider the first interval i.

* Calculate the maximum weight including interval i:


* Find the maximum weight of non-conflicting intervals in the remaining intervals starting from
the interval after the finish time of i.

* Add the weight of interval i to the above result.

* Calculate the maximum weight excluding interval i:

* Find the maximum weight of non-conflicting intervals in the remaining intervals.

* Return the maximum of the two values calculated above.
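
A minimal sketch of this recursion on the intervals from the question, with memoization added so that each subproblem is solved only once; two intervals are treated as compatible when one starts no earlier than the other finishes, which is an assumption about how "non-conflicting" is defined here.

from functools import lru_cache

# (start, finish, weight) triples from the question, sorted by finish time.
intervals = sorted(
    [(1, 3, 4), (3, 6, 5), (6, 9, 4), (6, 7, 2), (1, 4, 3), (2, 7, 5)],
    key=lambda t: t[1],
)

def last_compatible(j):
    # Largest i < j whose interval finishes no later than interval j starts.
    start_j = intervals[j - 1][0]
    for i in range(j - 1, 0, -1):
        if intervals[i - 1][1] <= start_j:
            return i
    return 0

@lru_cache(maxsize=None)
def best(j):
    # Maximum total weight choosable from the first j intervals (by finish time).
    if j == 0:
        return 0
    start, finish, weight = intervals[j - 1]
    include = weight + best(last_compatible(j))   # take interval j
    exclude = best(j - 1)                         # skip interval j
    return max(include, exclude)

print(best(len(intervals)))   # 13 under the compatibility convention above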

(b) Time Complexity Estimation:

If each subproblem is memoized (solved only once), there are n subproblems and each performs a linear scan over earlier intervals to find the last compatible interval, so the time complexity is O(n^2) after the initial O(n log n) sort. Without memoization, the plain include/exclude recursion can take exponential time because the same subproblems are recomputed many times.

(c) Knapsack Problem with Dynamic Programming:

Given:

* Set of items: S = {i1, i2, i3, i4}

* Weights: w[] = {2, 3, 4, 5}

* Profits: p[] = {3, 4, 5, 6}

* Knapsack capacity: W = 9

Objective:

* Find an optimal selection of items to fill the knapsack so that the total profit of the selected items
is maximum.

Dynamic Programming Approach:

* Create a 2D array dp of size (n+1) x (W+1), where n is the number of items.

* Initialize the first row and first column of dp to 0.

* Iterate through the items and the knapsack capacity:

* If the weight of the current item is less than or equal to the current capacity:

* dp[i][j] = max(dp[i-1][j], dp[i-1][j-w[i-1]] + p[i-1])

* Otherwise:

* dp[i][j] = dp[i-1][j]

* The maximum profit is stored in dp[n][W].

Example:

For the given input, the optimal selection is {i1, i2, i3}, with total weight 2 + 3 + 4 = 9 <= W and total profit 3 + 4 + 5 = 12; the selection {i2, i4} yields only 10, and dp[4][9] = 12.
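
A minimal sketch of the table-filling described above, run on the given data to confirm the selection; the traceback step is an addition for illustration.

def knapsack(weights, profits, W):
    # dp[i][j]: best profit using the first i items with capacity j.
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            dp[i][j] = dp[i - 1][j]                      # skip item i
            if weights[i - 1] <= j:
                dp[i][j] = max(dp[i][j],
                               dp[i - 1][j - weights[i - 1]] + profits[i - 1])
    # Trace back which items were taken.
    chosen, j = [], W
    for i in range(n, 0, -1):
        if dp[i][j] != dp[i - 1][j]:
            chosen.append(i)                             # item i (1-based) is selected
            j -= weights[i - 1]
    return dp[n][W], sorted(chosen)

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 9))   # (12, [1, 2, 3])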

