Jugal 1976 - Ge Assign

The document provides an overview of various data structures and algorithms, including binary trees, sparse matrices, search algorithms, stacks, sorting methods, and linked lists. It discusses their properties, applications, and performance characteristics, along with examples in Python. Additionally, it covers concepts like priority queues, recursion, and the differences between arrays and linked lists.

Uploaded by sonamdeswal.11

Assignment – Data Structures

Name – Jugal Khanchi


Roll no – 1976
Course: English Honours

Section A

(a) Difference between Binary Tree and Binary Search Tree


A Binary Tree is a data structure where each node can have at most two children—commonly
referred to as the left and right child. It doesn't follow any specific ordering of the elements,
meaning the left child could be smaller or greater than the parent, and the same goes for the
right child. Binary trees are used in a variety of applications like expression parsing, hierarchical
representations, and more.

In contrast, a Binary Search Tree (BST) is a special type of binary tree where the nodes are
arranged in a specific order: for every node, all values in its left subtree are less than the
node's value, and all values in its right subtree are greater. This property allows for
efficient searching, insertion, and deletion, ideally in O(log n) time if the tree is balanced.

So, while a BST is a type of binary tree with a strict order rule, a general binary tree has no such
constraint. Not all binary trees are BSTs, but all BSTs are binary trees.
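To make the ordering rule concrete, here is a minimal BST insert and search sketch in Python (function names like bst_insert are illustrative, not from any standard library):

```python
class Node:
    """A node with at most two children, shared by both structures."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(root, value):
    """Insert while preserving the BST ordering rule."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = bst_insert(root.left, value)
    else:
        root.right = bst_insert(root.right, value)
    return root

def bst_search(root, value):
    """Each comparison discards one whole subtree: O(log n) when balanced."""
    if root is None:
        return False
    if value == root.value:
        return True
    if value < root.value:
        return bst_search(root.left, value)
    return bst_search(root.right, value)
```

A general binary tree has no such insert rule, which is why this pruning search only works on a BST.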

(b) Explain Sparse Matrix


Sparse Matrix:
A sparse matrix is a matrix in which most of the elements are zero. These matrices are common
in areas like machine learning, image processing, graph algorithms, and scientific computing,
where data often comes in a form with many zero values.

Storing all the zero elements in a sparse matrix wastes memory and slows down computations.
So, instead of storing the entire matrix, we use special data structures that store only the non-
zero elements and their positions.
Example (triplet representation: each non-zero element is stored as (row, column, value)):
sparse_matrix = [(0, 2, 5), (2, 2, 6)]
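As a sketch, a dense matrix can be converted to this triplet form with a short helper (the name to_triplets is illustrative):

```python
def to_triplets(matrix):
    """Keep a (row, col, value) entry only for the non-zero cells."""
    return [(i, j, v)
            for i, row in enumerate(matrix)
            for j, v in enumerate(row)
            if v != 0]

dense = [
    [0, 0, 5],
    [0, 0, 0],
    [0, 0, 6],
]
print(to_triplets(dense))  # [(0, 2, 5), (2, 2, 6)]
```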

(c) Linear Search vs Binary Search


Linear Search
Definition:
Linear Search is a simple searching algorithm that checks each element of the array or list one by
one until the target value is found or the end of the list is reached.

How it Works:
• Start from the first element.
• Compare each element with the target.
• If a match is found, return the index.
• If not, move to the next element.
• If the end is reached without finding the target, return -1.
Requirements:
• The data does not need to be sorted.

• Works on both sorted and unsorted lists.


Time Complexity:
• Best Case: O(1) (if the target is the first element)
• Average Case: O(n)
• Worst Case: O(n) (if the target is the last element or not present)

Example in Python:
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1
Binary Search
Definition:
Binary Search is a more efficient algorithm that works on sorted lists by repeatedly dividing the
search space in half until the target is found or determined to be missing.
How it Works:
• Find the middle element of the list.
• Compare the middle element with the target.
o If equal, return the index.
o If the target is less, repeat the search on the left half.
o If the target is more, search the right half.
• Continue until the search space is empty.
Requirements:
• The list must be sorted in ascending or descending order.

Time Complexity:
• Best Case: O(1) (if the target is the middle element)
• Average Case: O(log n)
• Worst Case: O(log n)

Example in Python:
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

(d) Two Applications of Stack


1. Expression Evaluation and Conversion
Stacks are widely used in evaluating and converting expressions, especially in:
• Infix to Postfix/Prefix conversion
• Postfix/Prefix expression evaluation

For example, in a Postfix expression (Reverse Polish Notation) like 23*54*+9-, the stack helps
compute the result without using parentheses. It stores intermediate results and operators in a
last-in, first-out (LIFO) manner.
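A sketch of such an evaluator for single-digit operands (the name eval_postfix is illustrative):

```python
def eval_postfix(expr):
    """Evaluate a postfix string of single-digit operands using a stack."""
    stack = []
    for ch in expr:
        if ch.isdigit():
            stack.append(int(ch))
        else:
            right = stack.pop()  # most recently pushed operand (LIFO)
            left = stack.pop()
            if ch == '+':
                stack.append(left + right)
            elif ch == '-':
                stack.append(left - right)
            elif ch == '*':
                stack.append(left * right)
            elif ch == '/':
                stack.append(left // right)  # integer division for simplicity
    return stack.pop()

print(eval_postfix("23*54*+9-"))  # 2*3 + 5*4 - 9 = 17
```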
2. Function Call Management (Recursion)

When a program calls a function, especially recursively, the call stack keeps track of:

• Function calls
• Local variables
• Return addresses

Each time a function is called, a stack frame is pushed onto the stack. When the function
completes, its frame is popped off. This mechanism is essential for recursion to work correctly.

(e) Insertion Sort works better than QuickSort in the following situations:
1. Small Input Sizes
For small datasets (typically n < 10–20), Insertion Sort can outperform QuickSort due to:

• Lower overhead (no recursive function calls)


• Simpler logic

That's why some hybrid algorithms (like TimSort or Introsort) switch to Insertion Sort for small
subarrays.
2. Nearly Sorted or Sorted Arrays

• Insertion Sort runs in O(n) time for nearly sorted data.


• It only needs to make a few comparisons and shifts.
QuickSort, in contrast, still performs multiple recursive calls and partitioning steps regardless of
how sorted the data already is.

3. When Stability Is Required and Memory Is Limited

• Insertion Sort is a stable sort and in-place (uses no extra memory).


• QuickSort is not stable by default and may use more memory depending on the
implementation.

Example:
Sorted array: [1, 2, 3, 4, 5] → Insertion sort just compares once per element.
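A sketch of insertion sort; on an already sorted input the inner while loop never executes, which gives the O(n) behaviour described above:

```python
def insertion_sort(arr):
    """In-place, stable sort; roughly n-1 comparisons on sorted input."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]  # shift larger elements one slot right
            j -= 1
        arr[j + 1] = key
    return arr
```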

(f) Removing node from empty linked list

If we try to remove a node from an empty linked list, the program may encounter an error
because there is no node to delete. Specifically:
• In Python, if the head is None, accessing head.next or trying to delete it will raise an
AttributeError.
• This can cause the program to crash or behave unexpectedly.
Before attempting to remove a node, you should always check if the linked list is empty (i.e., if
head is None). If it is, handle it gracefully.
Example in Python:
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def delete_front(self):
        if self.head is None:
            print("Linked list is empty. Cannot delete.")
        else:
            print(f"Deleting: {self.head.data}")
            self.head = self.head.next
This approach prevents runtime errors and ensures safe deletion.
(g) Differences between Arrays and Linked Lists

Feature            | Array                                          | Linked List
Memory allocation  | Fixed size; contiguous memory                  | Dynamic size; non-contiguous memory
Insertion/Deletion | Expensive at middle or start (shifting needed) | Efficient at any position (pointer updates)
(h) Row Major vs Column Major Mapping

In Python, multi-dimensional arrays are typically represented using lists or libraries like NumPy.
Python uses row-major order for storing arrays in memory, which means the elements of an
array are stored row by row, just like in languages like C.
Row-major Mapping in Python:

When you define a 2D array in Python, the elements are stored in memory row by row.
Example:
Consider the following 2D array in Python:
arr = [
[a11, a12, a13],
[a21, a22, a23],
[a31, a32, a33]
]
In row-major order, the elements are stored as:
a11, a12, a13, a21, a22, a23, a31, a32, a33
This means Python stores the elements in memory starting from the first row, then the second
row, and so on.

Column-major Mapping:
Although Python uses row-major order by default, you can simulate column-major order by
treating the array in a transposed form. The transposed version of the 2D array would have the
columns stored as rows.

For the same 2D array, the transposition would look like:


arr_transposed = [
[a11, a21, a31],
[a12, a22, a32],
[a13, a23, a33]
]
In column-major order, the elements would be stored as:

a11, a21, a31, a12, a22, a32, a13, a23, a33


This is conceptually similar to column-major mapping, but Python itself uses row-major order for
its native lists. To achieve column-major behavior in practice, you'd typically use a library like
NumPy, which offers more efficient ways of handling such operations and provides a .T attribute
for transposing arrays.
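A small pure-Python sketch of the idea: reading the transposed matrix in row-major order yields the original matrix's column-major order (helper names are illustrative):

```python
def transpose(matrix):
    """Rows of the result are the columns of the input."""
    return [list(col) for col in zip(*matrix)]

def row_major_flat(matrix):
    """List the elements row by row, as Python lists store them."""
    return [x for row in matrix for x in row]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(row_major_flat(m))             # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(row_major_flat(transpose(m)))  # [1, 4, 7, 2, 5, 8, 3, 6, 9]
```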

(i) Priority Queue using Linked List


A priority queue is a type of data structure in which each element has a priority level. Elements
with higher priority are dequeued before elements with lower priority, regardless of the order
they were inserted. It is commonly used in scenarios like scheduling tasks, simulations, and
network traffic management.
• Operations: The two primary operations of a priority queue are:
1. Insert: Adding an element with a specified priority.
2. Extract/Remove: Removing the element with the highest priority.
Implementing Priority Queue using Linked List:
A priority queue can be implemented using a linked list by maintaining the elements in order
according to their priority. This way, when an element is added to the queue, it is inserted at the
appropriate position based on its priority, ensuring that the highest-priority element is always at
the front.
• Steps:
1. Insert Operation:
▪ Traverse the linked list to find the appropriate position for the new
element based on its priority.
▪ Insert the element at that position.
2. Delete Operation:
▪ Remove the element at the front of the list (this will be the element with
the highest priority).
• Example: If we have a priority queue with elements (value, priority): (5, 2), (10, 1), (3, 3),
and we insert (8, 2), the list will be ordered as: (3, 3), (5, 2), (8, 2), (10, 1).
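A minimal linked-list sketch of these steps (class names are illustrative; a higher number means higher priority, matching the example above):

```python
class PQNode:
    def __init__(self, value, priority):
        self.value = value
        self.priority = priority
        self.next = None

class PriorityQueue:
    """Sorted singly linked list: the highest priority stays at the front."""
    def __init__(self):
        self.head = None

    def insert(self, value, priority):
        node = PQNode(value, priority)
        # Insert before the first node with strictly lower priority,
        # so equal priorities keep their insertion order.
        if self.head is None or priority > self.head.priority:
            node.next = self.head
            self.head = node
            return
        temp = self.head
        while temp.next and temp.next.priority >= priority:
            temp = temp.next
        node.next = temp.next
        temp.next = node

    def extract(self):
        """Remove and return the highest-priority value in O(1)."""
        if self.head is None:
            return None
        value = self.head.value
        self.head = self.head.next
        return value
```

Insertion is O(n) because of the traversal, while extraction is O(1), which is the trade-off of this representation.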

(j) Advantages and Disadvantages of Recursion

Advantages of Recursion:
1. Simpler Code: Recursion can make code easier to understand and implement, especially
for problems that naturally fit recursive solutions (e.g., tree traversal, factorial, Fibonacci
series).
2. Ease of Problem Solving: It allows problems to be broken down into smaller sub-
problems, making them easier to solve.
Disadvantages of Recursion:
1. Memory Overhead: Each recursive call adds a new frame to the stack, which can lead to a
stack overflow if the recursion is too deep or if there is insufficient stack memory.

2. Performance Issues: Recursive solutions can be inefficient if not optimized (e.g., using
memoization), leading to redundant calculations and slower performance compared to
iterative solutions.
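As an example of the memoization mentioned above, caching results turns the naive exponential Fibonacci recursion into a linear-time one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Without the cache this recursion is O(2^n); with it, O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```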

Section B
2(a) 1. Finding Minimum Value:

• Sorted Data:
o Time Complexity: O(1)
o The minimum value is always at the first position of a sorted data structure
(ascending order), so it can be retrieved directly.
• Unsorted Data:
o Time Complexity: O(n)
o You must traverse the entire dataset to find the minimum value, as there is no
guarantee about the order of the elements.

2. Finding Median:

• Sorted Data:
o Time Complexity: O(1)
o For sorted data, the median can be directly accessed if the number of elements is
known. If the dataset has an odd number of elements, the median is the middle
element, while for an even number of elements, it is the average of the two
middle elements.
• Unsorted Data:
o Time Complexity: O(n log n) (for sorting the data) + O(1) (for finding the median
after sorting)
o To find the median in unsorted data, you must first sort the data, which takes O(n
log n) time, and then find the median by accessing the appropriate position(s) in
the sorted list.

3. Computing Average:

• Sorted Data:
o Time Complexity: O(n)
o To compute the average, you must sum all the elements and then divide by the
number of elements. Sorting does not improve this operation.
• Unsorted Data:
o Time Complexity: O(n)
o Similarly, for unsorted data, you simply sum all the elements and divide by the
number of elements. Sorting is not required for calculating the average.

Summary:

• Minimum: Easier to find in sorted data (O(1)), requires full traversal in unsorted data
(O(n)).
• Median: Quick in sorted data (O(1)), requires sorting in unsorted data (O(n log n)).
• Average: Same time complexity for both sorted and unsorted data (O(n)).

2(b) Is Merge Sort the best sorting algorithm?


Merge Sort is an efficient sorting algorithm with O(n log n) time complexity, making it ideal for
large datasets or external sorting. It is stable but requires O(n) extra space, which can be a
disadvantage compared to in-place algorithms like Quick Sort and Heap Sort.

Comparison with Other Algorithms:

• Quick Sort: Often faster in practice (O(n log n) on average), but can degrade to O(n²) in
the worst case and is not stable.
• Heap Sort: Also O(n log n) but slower in practice and not stable; it has O(1) space
complexity.
• Insertion Sort: Efficient for small or nearly sorted datasets (O(n) for nearly sorted), but
slower for larger datasets (O(n²)).
• Bubble Sort: Rarely used in practice due to inefficiency (O(n²)).

Best Use Cases for Merge Sort:


• Large datasets, external memory sorting, or when stability is important.

In conclusion, Merge Sort is a strong choice for large or stable sorts but may not always be the
most efficient for smaller datasets or in-memory sorting.
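A sketch of merge sort, showing the O(n) auxiliary lists created by the merge step and the <= comparison that keeps the sort stable:

```python
def merge_sort(arr):
    """Stable O(n log n) sort; the merge step uses O(n) extra space."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```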

3(a) Stack Operations

Initial Stack: [a, d, e, f, g, _] (n=6)

Operations:
1. push(k) -> [a, d, e, f, g, k]
2. pop() -> [a, d, e, f, g]
3. push(l) -> [a, d, e, f, g, l]
4. push(s) -> Overflow (stack is full, no space)
5. pop() -> [a, d, e, f, g]
6. push(t) -> [a, d, e, f, g, t]

3(b) Convert Infix to Postfix

Infix Expression: A * (B + D) / E - F * (G + H/K)


Read 'A'
• Postfix Expression: A
• Stack: (empty)
Read '*'
• Postfix Expression: A
• Stack: *
Read '('
• Postfix Expression: A
• Stack: * (
Read 'B'
• Postfix Expression: A B
• Stack: * (
Read '+'
• Postfix Expression: A B
• Stack: * ( +
Read 'D'
• Postfix Expression: A B D
• Stack: * ( +
Read ')'
• Postfix Expression: A B D +
• Stack: *
Read '/'
• Postfix Expression: A B D + *
• Stack: /
Read 'E'
• Postfix Expression: A B D + * E
• Stack: /
Read '-'
• Postfix Expression: A B D + * E /
• Stack: -
Read 'F'
• Postfix Expression: A B D + * E / F
• Stack: -
Read '*'
• Postfix Expression: A B D + * E / F
• Stack: - *
Read '('
• Postfix Expression: A B D + * E / F
• Stack: - * (
Read 'G'
• Postfix Expression: A B D + * E / F G
• Stack: - * (
Read '+'
• Postfix Expression: A B D + * E / F G
• Stack: - * ( +
Read 'H'
• Postfix Expression: A B D + * E / F G H
• Stack: - * ( +
Read '/'
• Postfix Expression: A B D + * E / F G H
• Stack: - * ( + /
Read 'K'
• Postfix Expression: A B D + * E / F G H K
• Stack: - * ( + /
Read ')'
• Postfix Expression: A B D + * E / F G H K / +
• Stack: - *
End of Expression
• Postfix Expression: A B D + * E / F G H K / + * -
• Stack: (empty)
Postfix Expression: ABD+*E/FGHK/+*-
(Maximum operator-stack depth reached during conversion = 5, when the stack holds - * ( + /)
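The trace above follows the standard shunting-yard method; a compact sketch for single-letter operands (the function name is illustrative):

```python
def infix_to_postfix(expr):
    """Shunting-yard sketch for operands and the operators + - * / ( )."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, stack = [], []
    for ch in expr:
        if ch.isalnum():
            out.append(ch)          # operands go straight to the output
        elif ch == '(':
            stack.append(ch)
        elif ch == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()             # discard the '('
        elif ch in prec:
            # Pop operators of equal or higher precedence first.
            while stack and stack[-1] != '(' and prec[stack[-1]] >= prec[ch]:
                out.append(stack.pop())
            stack.append(ch)
    while stack:
        out.append(stack.pop())
    return ''.join(out)

print(infix_to_postfix("A*(B+D)/E-F*(G+H/K)"))  # ABD+*E/FGHK/+*-
```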

4(a) Queue Operations (Enqueue/Dequeue)

Initial: Empty

Sequence:
enqueue(C) -> [C]
enqueue(O) -> [C, O]
dequeue() -> [O]
enqueue(M) -> [O, M]
enqueue(P) -> [O, M, P]
dequeue() -> [M, P]
enqueue(U) -> [M, P, U]
dequeue() -> [P, U]
dequeue() -> [U]
enqueue(T) -> [U, T]
enqueue(E) -> [U, T, E]
enqueue(R) -> [U, T, E, R]

4(b) Circular Queue Operations


Queue size = 4
Initial Queue: [_, _, _, _]
Operations:
1. enqueue(14):
Queue: [14, _, _, _]
Front: 0, Rear: 0

2. dequeue():
Queue: [_, _, _, _]
Front: -1, Rear: -1 (Queue is empty after this operation)

3. dequeue() (queue empty):


Queue: [_, _, _, _]
Front: -1, Rear: -1 (Queue is still empty)

4. enqueue(3):
Queue: [3, _, _, _]
Front: 0, Rear: 0

5. enqueue(7):
Queue: [3, 7, _, _]
Front: 0, Rear: 1

6. enqueue(9):
Queue: [3, 7, 9, _]
Front: 0, Rear: 2

7. enqueue(0):
Queue: [3, 7, 9, 0]
Front: 0, Rear: 3

8. enqueue(2) (Queue full):


Queue: [3, 7, 9, 0] (No change because the queue is full)
Front: 0, Rear: 3 (Queue remains full, no change in front/rear)

Final Positions:
Front: 0
Rear: 3
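A sketch of a circular queue that follows the same front/rear conventions used in the trace above (front and rear are -1 when the queue is empty, and a full queue rejects the enqueue):

```python
class CircularQueue:
    """Fixed-size circular queue; front/rear are -1 when empty."""
    def __init__(self, size):
        self.size = size
        self.data = [None] * size
        self.front = self.rear = -1

    def enqueue(self, item):
        if (self.rear + 1) % self.size == self.front:
            return False  # full: reject, as in step 8 above
        if self.front == -1:
            self.front = 0
        self.rear = (self.rear + 1) % self.size
        self.data[self.rear] = item
        return True

    def dequeue(self):
        if self.front == -1:
            return None  # empty, as in step 3 above
        item = self.data[self.front]
        self.data[self.front] = None
        if self.front == self.rear:
            self.front = self.rear = -1  # reset when the last item leaves
        else:
            self.front = (self.front + 1) % self.size
        return item
```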

5(a) Singly Linked List Operations


i.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_after_n(self, n, value):
        temp = self.head
        for _ in range(n - 1):
            if temp is None:
                return
            temp = temp.next
        if temp is None:  # list has fewer than n nodes
            return
        new_node = Node(value)
        new_node.next = temp.next
        temp.next = new_node

ii.
    def delete_end(self):
        if self.head is None:
            return
        if self.head.next is None:
            self.head = None
            return
        temp = self.head
        while temp.next.next:
            temp = temp.next
        temp.next = None
iii.
    def search(self, value):
        temp = self.head
        while temp:
            if temp.value == value:
                return True
            temp = temp.next
        return False

5(b) Given Linked List (Diagram):


Head -> 1 -> 2 -> 3 -> 4 -> 5 -> None

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def func(head):
    if head is None:
        return
    func(head.next)
    print(head.data, end=" ")

• The function func is recursive.


• It first recursively traverses to the end of the linked list without printing anything.
• When head becomes None (base case), the recursion starts returning back.
• During this return phase, each node's data is printed.
• Thus, it prints the linked list in reverse order.

Step-by-Step Traversal:
Function Call Action

func(1) calls func(2)

func(2) calls func(3)

func(3) calls func(4)

func(4) calls func(5)

func(5) calls func(None)

func(None) returns

During the returning phase:


• Print 5
• Print 4
• Print 3
• Print 2
• Print 1

Final Printed Output:

5 4 3 2 1

6(a) Binary Tree Traversals

1. Inorder Traversal (Left, Root, Right)


Steps:

• Visit left subtree


• Visit root
• Visit right subtree

Inorder sequence:

13 → 3 → 4 → 17 → 15 → 11 → 6 → 5 → 21 → 29

2. Preorder Traversal (Root, Left, Right)


Steps:

• Visit root
• Visit left subtree
• Visit right subtree

Preorder sequence:

6 → 11 → 3 → 13 → 4 → 17 → 15 → 5 → 21 → 29

3. Postorder Traversal (Left, Right, Root)


Steps:

• Visit left subtree


• Visit right subtree
• Visit root

Postorder sequence:

13 → 17 → 4 → 3 → 15 → 11 → 29 → 21 → 5 → 6
4. Height of the Tree
• Height of a binary tree = number of edges on the longest path from root to leaf.

Path (longest):

• 6 → 5 → 21 → 29

This path has 3 edges, i.e., 4 nodes (4 levels).

Thus, Height = 3 when counting edges (or 4 when counting levels).
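A recursive height function using the edge-counting definition. Since the original tree diagram is not reproduced here, the check below builds only the longest path 6 → 5 → 21 → 29 as a right chain (an assumption for illustration):

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def height(node):
    """Edges on the longest root-to-leaf path; -1 for an empty tree."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

# Hypothetical chain reproducing only the longest path of the tree above.
root = TreeNode(6)
root.right = TreeNode(5)
root.right.right = TreeNode(21)
root.right.right.right = TreeNode(29)
print(height(root))  # 3
```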

5. Is it a Complete Binary Tree?


A Complete Binary Tree is a binary tree where:

• Every level, except possibly the last, is completely filled.


• All nodes are as far left as possible.

In this tree:

• Level 2 (nodes 11 and 5) is filled.


• But level 3 and 4 are not filled (nodes missing on the left side in some branches).

Thus, this tree is NOT a complete binary tree.

6(b) Difference between Breadth First (BFS) and Depth First (DFS) Traversals:
Breadth First Search (BFS)                       | Depth First Search (DFS)
Traverses level by level.                        | Traverses as deep as possible before backtracking.
Implemented using a Queue.                       | Implemented using a Stack (or recursion).
Example (for the above tree):                    | Example:
6 → 4 → 11 → 3 → 5 → 9 → 15 → 13 → 17 → 21 → 29 | 6 → 4 → 3 → 5 → 11 → 9 → 15 → 13 → 17 → 21 → 29
More memory intensive (stores a whole level).    | Less memory intensive if tree depth is small.
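A sketch of both traversals on a small hypothetical tree (not the assignment's tree), showing the queue-vs-stack difference:

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bfs(root):
    """Level order: a FIFO queue visits each level before the next."""
    order, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        order.append(node.value)
        queue.extend(c for c in (node.left, node.right) if c)
    return order

def dfs(root):
    """Preorder depth-first: a LIFO stack goes deep before backtracking."""
    order, stack = [], [root] if root else []
    while stack:
        node = stack.pop()
        order.append(node.value)
        # Push right first so the left child is processed first.
        for c in (node.right, node.left):
            if c:
                stack.append(c)
    return order

root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(bfs(root))  # [1, 2, 3, 4, 5]
print(dfs(root))  # [1, 2, 4, 5, 3]
```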


7(a) Recursive Function Output

def fun(i):
    if i == 2:
        return 1
    else:
        return (i - 1) * fun(i - 1)

(i) fun(6) works fine:
print(fun(6))  # Output: 120
(ii) fun(1) leads to infinite recursion:
print(fun(1))
# fun(1) calls fun(0), then fun(-1), and so on; the base case i == 2
# is never reached, so Python raises a RecursionError.

7(b) Recursive Sum of Array


def sum_array(arr, n):
    if n <= 0:
        return 0
    else:
        return sum_array(arr, n - 1) + arr[n - 1]

8(a) Tower of Hanoi Recursive Solution


def tower_of_hanoi(n, source, auxiliary, destination):
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return
    tower_of_hanoi(n - 1, source, destination, auxiliary)
    print(f"Move disk {n} from {source} to {destination}")
    tower_of_hanoi(n - 1, auxiliary, source, destination)

tower_of_hanoi(3, 'A', 'B', 'C')  # move 3 disks from peg A to C using B

8(b) Address Calculation of 2D Array


Array A[5][5], find A[3][2]
- Row Major Address = (3*5 + 2)*4 = 68 bytes
- Column Major Address = (2*5 + 3)*4 = 52 bytes
(Assuming Base address 0 and size 4 bytes)
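The two formulas can be checked with a short sketch (function names are illustrative):

```python
def row_major_address(base, i, j, cols, size):
    """Address of A[i][j] when rows are stored consecutively."""
    return base + (i * cols + j) * size

def col_major_address(base, i, j, rows, size):
    """Address of A[i][j] when columns are stored consecutively."""
    return base + (j * rows + i) * size

# A[5][5], element A[3][2], base address 0, element size 4 bytes:
print(row_major_address(0, 3, 2, 5, 4))  # 68
print(col_major_address(0, 3, 2, 5, 4))  # 52
```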
