Jugal 1976 - Ge Assign
Section A
In contrast, a Binary Search Tree (BST) is a special type of binary tree where the nodes are
arranged in a specific order: for every node, every value in its left subtree must be less than the
node's value, and every value in its right subtree must be greater. This property allows for
efficient searching, insertion, and deletion, ideally in O(log n) time if the tree is balanced.
So, while a BST is a type of binary tree with a strict order rule, a general binary tree has no such
constraint. Not all binary trees are BSTs, but all BSTs are binary trees.
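A minimal sketch of the ordering rule (the class name BSTNode and the recursive insert are illustrative, not a fixed API):
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(root, value):
    # Smaller values go left, larger (or equal) values go right.
    if root is None:
        return BSTNode(value)
    if value < root.value:
        root.left = bst_insert(root.left, value)
    else:
        root.right = bst_insert(root.right, value)
    return root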
Storing all the zero elements in a sparse matrix wastes memory and slows down computations.
So, instead of storing the entire matrix, we use special data structures that store only the non-
zero elements and their positions.
Example:
sparse_matrix = [(0, 0, 5), (1, 2, 6)]  # (row, col, value) triples for the non-zero entries only
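As a quick illustration, here is one possible helper that builds such a triple list from a dense matrix (the function name to_sparse is just for this sketch):
def to_sparse(matrix):
    # Keep only (row, col, value) triples for the non-zero entries.
    return [(i, j, v)
            for i, row in enumerate(matrix)
            for j, v in enumerate(row)
            if v != 0]

print(to_sparse([[5, 0, 0],
                 [0, 0, 6]]))  # [(0, 0, 5), (1, 2, 6)]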
Linear Search
Definition:
Linear Search checks each element of the list one by one until the target is found or the end of the list is reached.
How it Works:
• Start from the first element.
• Compare each element with the target.
• If a match is found, return the index.
• If not, move to the next element.
• If the end is reached without finding the target, return -1.
Requirements:
• The data does not need to be sorted.
Example in Python:
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i    # target found at index i
    return -1           # target not in the list
Binary Search
Definition:
Binary Search is a more efficient algorithm that works on sorted lists by repeatedly dividing the
search space in half until the target is found or determined to be missing.
How it Works:
• Find the middle element of the list.
• Compare the middle element with the target.
o If equal, return the index.
o If the target is less, repeat the search on the left half.
o If the target is more, search the right half.
• Continue until the search space is empty.
Requirements:
• The list must be sorted in ascending or descending order.
Time Complexity:
• Best Case: O(1) (if the target is the middle element)
• Average Case: O(log n)
• Worst Case: O(log n)
Example in Python:
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2     # middle of the current search space
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1           # discard the left half
        else:
            high = mid - 1          # discard the right half
    return -1
For example, in a Postfix expression (Reverse Polish Notation) like 23*54*+9-, a stack makes it
possible to compute the result without using parentheses: operands and intermediate results are
stored in a last-in, first-out (LIFO) manner, as the sketch below shows.
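A minimal evaluator sketch for such expressions (it assumes single-digit operands, which holds for 23*54*+9-):
def evaluate_postfix(expr):
    stack = []
    for ch in expr:
        if ch.isdigit():
            stack.append(int(ch))      # operands are pushed
        else:
            b = stack.pop()            # right operand is popped first
            a = stack.pop()
            if ch == '+':
                stack.append(a + b)
            elif ch == '-':
                stack.append(a - b)
            elif ch == '*':
                stack.append(a * b)
            elif ch == '/':
                stack.append(a // b)
    return stack.pop()

print(evaluate_postfix("23*54*+9-"))   # 2*3 + 5*4 - 9 = 17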
2. Function Call Management (Recursion)
When a program calls a function, especially recursively, the call stack keeps track of:
• Function calls
• Local variables
• Return addresses
Each time a function is called, a stack frame is pushed onto the stack. When the function
completes, its frame is popped off. This mechanism is essential for recursion to work correctly.
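A tiny example of this, using nothing beyond plain Python:
def countdown(n):
    # Each call pushes a new frame (holding its own n) onto the call stack.
    if n == 0:
        return
    countdown(n - 1)
    # This line runs only after the deeper frame has been popped.
    print(n, end=" ")

countdown(3)   # prints: 1 2 3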
(e) Insertion Sort works better than QuickSort in the following situations:
1. Small Input Sizes
For small datasets (typically n < 10–20), Insertion Sort can outperform QuickSort because of its
low overhead: no recursive calls, no partitioning step, and very small constant factors.
That's why some hybrid algorithms (like TimSort or Introsort) switch to Insertion Sort for small
subarrays.
2. Nearly Sorted or Sorted Arrays
On input that is already sorted or nearly sorted, Insertion Sort runs in O(n), while QuickSort still
performs its full partitioning work.
Example:
Sorted array: [1, 2, 3, 4, 5] → Insertion Sort makes just one comparison per element, as the sketch below shows.
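A standard insertion sort sketch; the comment marks why sorted input is the best case:
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # On an already sorted array this while loop never iterates,
        # so each element costs a single comparison: O(n) overall.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr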
If we try to remove a node from an empty linked list, the program may encounter an error
because there is no node to delete. Specifically:
• In Python, if the head is None, accessing head.next or trying to delete it will raise an
AttributeError.
• This can cause the program to crash or behave unexpectedly.
Before attempting to remove a node, you should always check if the linked list is empty (i.e., if
head is None). If it is, handle it gracefully.
Example in Python:
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def delete_front(self):
        if self.head is None:
            print("Linked list is empty. Cannot delete.")
        else:
            print(f"Deleting: {self.head.data}")
            self.head = self.head.next
This approach prevents runtime errors and ensures safe deletion.
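A quick (hypothetical) usage check of the guard:
ll = LinkedList()
ll.head = Node(10)
ll.delete_front()   # Deleting: 10
ll.delete_front()   # Linked list is empty. Cannot delete.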
(g) Differences between Arrays and Linked Lists
In Python, multi-dimensional arrays are typically represented using nested lists or libraries like
NumPy. NumPy stores arrays in row-major (C) order by default, which means the elements of an
array are laid out in memory row by row, just as in languages like C.
Row-major Mapping in Python:
When you define a 2D array in Python, the elements are stored in memory row by row.
Example:
Consider the following 2D array in Python:
arr = [
[a11, a12, a13],
[a21, a22, a23],
[a31, a32, a33]
]
In row-major order, the elements are stored as:
a11, a12, a13, a21, a22, a23, a31, a32, a33
This means Python stores the elements in memory starting from the first row, then the second
row, and so on.
Column-major Mapping:
Although Python uses row-major order by default, you can simulate column-major order by
treating the array in a transposed form. The transposed version of the 2D array would have the
columns stored as rows.
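Assuming NumPy is available, the two layouts can be compared directly; the offset formulas in the comments are the standard mappings:
import numpy as np

a = np.array([[11, 12, 13],
              [21, 22, 23],
              [31, 32, 33]])

print(a.ravel(order='C'))   # row-major:    11 12 13 21 22 23 31 32 33
print(a.ravel(order='F'))   # column-major: 11 21 31 12 22 32 13 23 33

# For an m x n array, element (i, j) sits at offset i*n + j in row-major
# order and at offset j*m + i in column-major order.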
Advantages of Recursion:
1. Simpler Code: Recursion can make code easier to understand and implement, especially
for problems that naturally fit recursive solutions (e.g., tree traversal, factorial, Fibonacci
series).
2. Ease of Problem Solving: It allows problems to be broken down into smaller sub-
problems, making them easier to solve.
Disadvantages of Recursion:
1. Memory Overhead: Each recursive call adds a new frame to the stack, which can lead to a
stack overflow if the recursion is too deep or if there is insufficient stack memory.
2. Performance Issues: Recursive solutions can be inefficient if not optimized (e.g., using
memoization), leading to redundant calculations and slower performance compared to
iterative solutions.
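As a sketch of the memoization fix mentioned above, Python's functools.lru_cache removes the redundant calls from the naive recursive Fibonacci:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once and then served from the cache,
    # turning the exponential-time recursion into linear time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))   # 832040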
Section B
2(a) 1. Finding Minimum Value:
• Sorted Data:
o Time Complexity: O(1)
o The minimum value is always at the first position of a sorted data structure
(ascending order), so it can be retrieved directly.
• Unsorted Data:
o Time Complexity: O(n)
o You must traverse the entire dataset to find the minimum value, as there is no
guarantee about the order of the elements.
2. Finding Median:
• Sorted Data:
o Time Complexity: O(1)
o For sorted data, the median can be directly accessed if the number of elements is
known. If the dataset has an odd number of elements, the median is the middle
element, while for an even number of elements, it is the average of the two
middle elements.
• Unsorted Data:
o Time Complexity: O(n log n) (for sorting the data) + O(1) (for finding the median
after sorting)
o To find the median in unsorted data, you must first sort the data, which takes O(n
log n) time, and then find the median by accessing the appropriate position(s) in
the sorted list.
3. Computing Average:
• Sorted Data:
o Time Complexity: O(n)
o To compute the average, you must sum all the elements and then divide by the
number of elements. Sorting does not improve this operation.
• Unsorted Data:
o Time Complexity: O(n)
o Similarly, for unsorted data, you simply sum all the elements and divide by the
number of elements. Sorting is not required for calculating the average.
Summary:
• Minimum: Easier to find in sorted data (O(1)), requires full traversal in unsorted data
(O(n)).
• Median: Quick in sorted data (O(1)), requires sorting in unsorted data (O(n log n)).
• Average: Same time complexity for both sorted and unsorted data (O(n)).
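A short sketch of the three operations on a sorted list (the sample data is illustrative):
data = [3, 7, 9, 12, 15]          # sorted, odd length

minimum = data[0]                 # O(1): first element of sorted data
median = data[len(data) // 2]     # O(1): middle element (odd length)
average = sum(data) / len(data)   # O(n): every element must be summed

print(minimum, median, average)   # 3 9 9.2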
• Merge Sort: Guaranteed O(n log n) in all cases and stable, but requires O(n) extra space for
merging.
• Quick Sort: Often faster in practice (O(n log n) on average), but can degrade to O(n²) in
the worst case and is not stable.
• Heap Sort: Also O(n log n) but slower in practice and not stable; it has O(1) space
complexity.
• Insertion Sort: Efficient for small or nearly sorted datasets (O(n) for nearly sorted), but
slower for larger datasets (O(n²)).
• Bubble Sort: Rarely used in practice due to inefficiency (O(n²)).
In conclusion, Merge Sort is a strong choice for large or stable sorts but may not always be the
most efficient for smaller datasets or in-memory sorting.
Operations:
1. push(k) -> [a, d, e, f, g, k]
2. pop() -> [a, d, e, f, g]
3. push(l) -> [a, d, e, f, g, l]
4. push(s) -> Overflow (no space)
5. pop() -> [a, d, e, f, g]
6. push(t) -> [a, d, e, f, g, t]
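A minimal fixed-capacity stack sketch that reproduces this behaviour (the capacity of 6 is inferred from the overflow at push(s)):
class BoundedStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, x):
        if len(self.items) == self.capacity:
            print("Overflow (no space)")
        else:
            self.items.append(x)

    def pop(self):
        if not self.items:
            print("Underflow (stack empty)")
            return None
        return self.items.pop()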
Initial: Empty
Sequence:
enqueue(C) -> [C]
enqueue(O) -> [C, O]
dequeue() -> [O]
enqueue(M) -> [O, M]
enqueue(P) -> [O, M, P]
dequeue() -> [M, P]
enqueue(U) -> [M, P, U]
dequeue() -> [P, U]
dequeue() -> [U]
enqueue(T) -> [U, T]
enqueue(E) -> [U, T, E]
enqueue(R) -> [U, T, E, R]
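The same sequence can be replayed with Python's collections.deque:
from collections import deque

q = deque()
for op, ch in [("enq", "C"), ("enq", "O"), ("deq", None), ("enq", "M"),
               ("enq", "P"), ("deq", None), ("enq", "U"), ("deq", None),
               ("deq", None), ("enq", "T"), ("enq", "E"), ("enq", "R")]:
    if op == "enq":
        q.append(ch)       # add at the rear
    else:
        q.popleft()        # remove from the front
print(list(q))             # final state: ['U', 'T', 'E', 'R']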
2. dequeue():
Queue: [_, _, _, _]
Front: -1, Rear: -1 (Queue is empty after this operation)
4. enqueue(3):
Queue: [3, _, _, _]
Front: 0, Rear: 0
5. enqueue(7):
Queue: [3, 7, _, _]
Front: 0, Rear: 1
6. enqueue(9):
Queue: [3, 7, 9, _]
Front: 0, Rear: 2
7. enqueue(0):
Queue: [3, 7, 9, 0]
Front: 0, Rear: 3
Final Positions:
Front: 0
Rear: 3
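A sketch of a linear (non-circular) fixed-size queue that matches these front/rear updates, including the reset to -1 when the queue becomes empty:
class ArrayQueue:
    def __init__(self, size):
        self.items = [None] * size
        self.front = -1
        self.rear = -1

    def enqueue(self, x):
        if self.rear == len(self.items) - 1:
            print("Overflow")
            return
        if self.front == -1:      # first element: front moves to 0
            self.front = 0
        self.rear += 1
        self.items[self.rear] = x

    def dequeue(self):
        if self.front == -1:
            print("Underflow")
            return None
        x = self.items[self.front]
        if self.front == self.rear:   # last element removed: reset both
            self.front = self.rear = -1
        else:
            self.front += 1
        return x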
def func(head):
    if head is None:
        return
    func(head.next)             # recurse to the end of the list first
    print(head.data, end=" ")   # print while unwinding, so the list comes out reversed
Step-by-Step Traversal (for the list 1 → 2 → 3 → 4 → 5, which produces the output below):
Function Call    Action
func(1)          calls func(2)
func(2)          calls func(3)
func(3)          calls func(4)
func(4)          calls func(5)
func(5)          calls func(None)
func(None)       returns
func(5)          prints 5
func(4)          prints 4
func(3)          prints 3
func(2)          prints 2
func(1)          prints 1
Output: 5 4 3 2 1
• Visit left subtree
• Visit root
• Visit right subtree
Inorder sequence:
13 → 3 → 4 → 17 → 15 → 11 → 6 → 5 → 21 → 29
• Visit root
• Visit left subtree
• Visit right subtree
Preorder sequence:
6 → 11 → 3 → 13 → 4 → 17 → 15 → 5 → 21 → 29
• Visit left subtree
• Visit right subtree
• Visit root
Postorder sequence:
13 → 17 → 4 → 3 → 15 → 11 → 29 → 21 → 5 → 6
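Generic sketches of the three traversals (the TreeNode class is illustrative):
class TreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def inorder(node):
    if node:
        inorder(node.left)
        print(node.data, end=" ")
        inorder(node.right)

def preorder(node):
    if node:
        print(node.data, end=" ")
        preorder(node.left)
        preorder(node.right)

def postorder(node):
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.data, end=" ")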
4. Height of the Tree
• Height of a binary tree = Number of edges on the longest path from root to leaf.
Path (longest):
• 6 → 5 → 21 → 29 (4 nodes, 3 edges)
Thus, Height = 3 by the edge-counting definition above (it would be 4 if nodes were counted instead).
6(b) Difference between Breadth First (BFS) and Depth First (DFS) Traversals:
• Breadth First Search (BFS): visits nodes level by level, starting from the root; uses a queue
(FIFO) to hold the nodes waiting to be visited; memory use is proportional to the widest level;
in an unweighted graph it finds shortest paths (fewest edges).
• Depth First Search (DFS): explores as far as possible down one branch before backtracking;
uses a stack (or the recursion call stack); memory use is proportional to the depth of the
tree/graph; suited to tasks such as topological sorting and cycle detection.
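Minimal sketches of both traversals, reusing the TreeNode class from the earlier sketch:
from collections import deque

def bfs(root):
    if root is None:
        return
    q = deque([root])              # the queue drives level-by-level order
    while q:
        node = q.popleft()
        print(node.data, end=" ")
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)

def dfs(root):                     # preorder variant of DFS
    if root is None:
        return
    print(root.data, end=" ")      # the recursion uses the call stack
    dfs(root.left)
    dfs(root.right)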
def fun(i):
    if i == 2:
        return 1
    else:
        return (i - 1) * fun(i - 1)
(i) fun(6) → works fine
print(fun(6)) # Output: 120
(ii) fun(1) → leads to infinite recursion
print(fun(1))
# This keeps calling fun(0), fun(-1), ... and never reaches the base case
# i == 2, so Python eventually raises a RecursionError (maximum recursion
# depth exceeded).