Data Structures and Algorithm 2&6
2 MARKS: (Part A)
ANSWERS:
3).Justification for the given binary tree being a binary search tree:
To determine if the given binary tree is a binary search tree, we need to check if the property of the
binary search tree is satisfied, i.e., the left subtree of a node contains only nodes with values less than
the node's value, and the right subtree contains only nodes with values greater than the node's value.
The given binary tree does not satisfy the binary search tree property because the right subtree of the
root node contains a node (7) with a value less than the root node (10). Therefore, the given binary tree
is not a binary search tree.
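A small C sketch of this check (the node fields data, left, and right are assumed names) passes the allowed value range down to each subtree:
#include <limits.h>
#include <stdbool.h>
struct TreeNode {
    int data;
    struct TreeNode *left, *right;
};
/* Every key in the subtree must lie strictly between min and max
   (assumes node values avoid INT_MIN and INT_MAX themselves). */
bool isBSTInRange(struct TreeNode *node, int min, int max) {
    if (node == NULL)
        return true;                              /* an empty subtree is valid */
    if (node->data <= min || node->data >= max)
        return false;                             /* violates the BST property */
    return isBSTInRange(node->left, min, node->data) &&
           isBSTInRange(node->right, node->data, max);
}
bool isBST(struct TreeNode *root) {
    return isBSTInRange(root, INT_MIN, INT_MAX);
}
For the tree in the question, the node 7 inside the right subtree of 10 fails the range check, so isBST returns false.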
16).Properties of heaps:
Complete binary tree: A heap is a complete binary tree, meaning all levels of the tree are completely
filled, except possibly the last level, which is filled from left to right.
Heap property: In a max heap, for any given node, the value of the node is greater than or equal to the
values of its children. In a min heap, the value of the node is smaller than or equal to the values of its
children.
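For example, stored level by level as an array, [50, 30, 40, 10, 20] is a max heap (50 >= 30 and 40, 30 >= 10 and 20), while [10, 20, 40, 30, 50] satisfies the min heap property (10 <= 20 and 40, 20 <= 30 and 50).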
ANSWERS:
1).Enqueue and Dequeue Operations on Double Ended Queue:
Initial Queue:
Front -> 10 -> 20 -> 30 -> Rear
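As an illustration (the operation names below are generic, since the exact operations to perform are not listed here): enqueueRear(40) gives Front -> 10 -> 20 -> 30 -> 40 -> Rear; enqueueFront(5) gives Front -> 5 -> 10 -> 20 -> 30 -> 40 -> Rear; dequeueFront() removes 5 and dequeueRear() removes 40, leaving Front -> 10 -> 20 -> 30 -> Rear. In a double ended queue, insertion and deletion are allowed at both the front and the rear.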
A singly linked list is a data structure that consists of a sequence of nodes, where each node contains
data and a reference (or link) to the next node in the list. The last node's link points to null, indicating
the end of the list.
Insertion in a singly linked list involves creating a new node and adjusting the links to insert the node
at a specific position. Here's a diagrammatic representation of inserting a new node with data 25 after
the node with data 20:
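Assuming the list initially holds 10 -> 20 -> 30, the insertion looks like this:
Before: Head -> 10 -> 20 -> 30 -> NULL
After: Head -> 10 -> 20 -> 25 -> 30 -> NULL
A minimal C sketch of the same step (insertAfter is an illustrative helper, not part of the question):
#include <stdlib.h>
struct Node { int data; struct Node* next; };
/* Insert a node carrying newData immediately after the first node whose
   data equals afterData; does nothing if that node is not found. */
void insertAfter(struct Node* head, int afterData, int newData) {
    struct Node* temp = head;
    while (temp != NULL && temp->data != afterData)
        temp = temp->next;                  /* locate the node with afterData */
    if (temp == NULL) return;
    struct Node* newNode = malloc(sizeof(struct Node));
    newNode->data = newData;
    newNode->next = temp->next;             /* new node points to the old successor */
    temp->next = newNode;                   /* predecessor now points to the new node */
}
Calling insertAfter(head, 20, 25) links the new node between 20 and 30 without touching the rest of the list.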
Disjoint sets are a data structure that represents a collection of disjoint (non-overlapping) sets. Each
set is represented by a root element, and elements within the same set have the same root. The disjoint
set data structure supports two main operations: Union and Find.
Union: Merges two sets into a single set by connecting the root of one set to the root of the other set.
Find: Determines the root of the set to which a particular element belongs.
Example:
Let's consider a set of elements: {1, 2, 3, 4, 5}
Initially, each element is in its own set: {1}, {2}, {3}, {4}, {5}
Union operation:
Union(2, 4) -> merges the sets containing 2 and 4, making 2 the root of the combined set
Find operation:
Find(4) -> Returns the root of the set containing element 4
Root of 4: 2
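A compact array-based C sketch of these two operations (parent[i] = i initially; elements are assumed to be numbered 1..n):
#include <stdio.h>
#define MAXN 6
int parent[MAXN];                    /* parent[i] == i means i is a root */
void makeSets(int n) {
    for (int i = 1; i <= n; i++)
        parent[i] = i;               /* every element starts in its own set */
}
int find(int x) {
    while (parent[x] != x)
        x = parent[x];               /* follow parent links up to the root */
    return x;
}
void unionSets(int a, int b) {
    int ra = find(a), rb = find(b);
    if (ra != rb)
        parent[rb] = ra;             /* attach one root under the other */
}
int main(void) {
    makeSets(5);
    unionSets(2, 4);                 /* merge the sets containing 2 and 4 */
    printf("Root of 4: %d\n", find(4));   /* prints 2 */
    return 0;
}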
4).Tower of Hanoi:
The Tower of Hanoi is a classic puzzle that involves three rods and a number of disks of different sizes.
The objective is to move all the disks from the source rod to the destination rod, obeying the following rules:
1. Only one disk may be moved at a time.
2. Each move takes the top disk from one rod and places it on top of another rod.
3. No disk may ever be placed on top of a smaller disk.
Initially:
Source Rod (A): Disk 3 Disk 2 Disk 1
Auxiliary Rod (B):
Destination Rod (C):
The Tower of Hanoi problem is solved when all the disks are successfully moved to the destination rod.
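A recursive C sketch for n disks (rod names are passed as characters):
#include <stdio.h>
/* Move n disks from rod 'from' to rod 'to', using rod 'aux' as a helper. */
void hanoi(int n, char from, char to, char aux) {
    if (n == 0)
        return;
    hanoi(n - 1, from, aux, to);     /* move n-1 disks out of the way */
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, aux, to, from);     /* move them onto the largest disk */
}
int main(void) {
    hanoi(3, 'A', 'C', 'B');         /* 3 disks: source A, destination C */
    return 0;
}
For 3 disks this prints the familiar 7 moves (2^3 - 1 in general): disk 1 A to C, disk 2 A to B, disk 1 C to B, disk 3 A to C, disk 1 B to A, disk 2 B to C, disk 1 A to C.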
i) Preorder expression:
The preorder expression is obtained by visiting the nodes in the following order: root, left subtree,
right subtree.
Preorder expression: + + a * b c / + + d e f g
ii) Postfix expression (for the same tree): a b c * + d e + f + g / +
7).Strassen's Matrix Multiplication is an algorithm used to efficiently multiply two matrices. It is based
on the divide-and-conquer strategy and is faster than the traditional matrix multiplication algorithm.
The main idea behind Strassen's algorithm is to divide each matrix into four equally sized submatrices, compute seven products of sums and differences of these submatrices instead of the usual eight, and combine those products to form the four quadrants of the result. The algorithm works as follows:
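For reference, writing A and B as four blocks each, A = [A11 A12; A21 A22] and B = [B11 B12; B21 B22], the seven Strassen products and the quadrants of the product C are usually given as:
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22) B11
M3 = A11 (B12 - B22)
M4 = A22 (B21 - B11)
M5 = (A11 + A12) B22
M6 = (A21 - A11)(B11 + B12)
M7 = (A12 - A22)(B21 + B22)
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 - M2 + M3 + M6
Seven recursive multiplications instead of eight give the recurrence T(n) = 7T(n/2) + O(n^2), which solves to O(n^log2 7), roughly O(n^2.81), compared with O(n^3) for the traditional method.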
8).Constructing a binary search tree (BST) for the given numbers: 100, 20, 500, 10, 30
Insert 100 as the root, 20 to its left, 500 to its right, 10 as the left child of 20, and 30 as the right child of 20:
        100
       /   \
      20    500
     /  \
   10    30
i) Insert a node with value 40 in this tree:
40 is less than 100, greater than 20, and greater than 30, so it becomes the right child of 30:
        100
       /   \
      20    500
     /  \
   10    30
            \
             40
ii) Delete the parent node 100:
Deleting 100 from the original tree and replacing it with its in-order predecessor 30 gives:
        30
       /  \
      20   500
     /
   10
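A recursive C sketch of the insertion used to build this tree (deleting a node with two children, as in part ii, replaces it with its in-order predecessor or successor):
#include <stdio.h>
#include <stdlib.h>
struct BSTNode {
    int key;
    struct BSTNode *left, *right;
};
/* Insert key into the BST rooted at node and return the (possibly new) root. */
struct BSTNode* insert(struct BSTNode* node, int key) {
    if (node == NULL) {
        struct BSTNode* fresh = malloc(sizeof(struct BSTNode));
        fresh->key = key;
        fresh->left = fresh->right = NULL;
        return fresh;
    }
    if (key < node->key)
        node->left = insert(node->left, key);    /* smaller keys go left */
    else
        node->right = insert(node->right, key);  /* larger keys go right */
    return node;
}
int main(void) {
    int keys[] = {100, 20, 500, 10, 30, 40};
    struct BSTNode* root = NULL;
    for (int i = 0; i < 6; i++)
        root = insert(root, keys[i]);            /* builds the tree shown above */
    printf("Root: %d\n", root->key);             /* prints 100 */
    return 0;
}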
9).A Minimum Spanning Tree (MST) is a tree that spans all the vertices of a connected, weighted graph
with the minimum total weight. In other words, it is a tree that connects all the vertices of the graph
with the minimum possible sum of edge weights.
There are several algorithms to find the minimum spanning tree, such as Prim's algorithm and
Kruskal's algorithm. Here's an example of finding the minimum spanning tree using Kruskal's
algorithm:
1. Sort the edges in ascending order of their weights: (D, A), (A, B), (C, D), (C, E), (C, F), (B, C).
2. Start with an empty graph.
3. Take the edge with the smallest weight (D, A) and add it to the graph.
4. Take the next smallest edge (A, B) and check if it forms a cycle with the edges already in the graph. If it doesn't form a cycle, add it to the graph.
5. Repeat step 4 for the remaining edges until all vertices are connected.
The resulting graph is the minimum spanning tree.
In this example, the minimum spanning tree is:
(A)---1---(B)
|
4
|
(C)---5---(E)
10). Depth-First Search (DFS) is a graph traversal algorithm that explores as far as possible along each
branch before backtracking. It uses a stack to keep track of the visited vertices.
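A recursive C sketch of DFS (the recursion call stack plays the role of the explicit stack; the 4-vertex graph below is an assumed example):
#include <stdio.h>
#define V 4
int adj[V][V] = {                    /* example graph: edges 0-1, 0-2, 1-3, 2-3 */
    {0, 1, 1, 0},
    {1, 0, 0, 1},
    {1, 0, 0, 1},
    {0, 1, 1, 0}
};
int visited[V];
void dfs(int v) {
    visited[v] = 1;
    printf("%d ", v);                /* process the vertex */
    for (int u = 0; u < V; u++)
        if (adj[v][u] && !visited[u])
            dfs(u);                  /* go as deep as possible before backtracking */
}
int main(void) {
    dfs(0);                          /* prints 0 1 3 2 */
    printf("\n");
    return 0;
}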
11).Here's a C program using a singly linked list for insertion and deletion operations:
#include <stdio.h>
#include <stdlib.h>
struct Node {
    int data;
    struct Node* next;
};
/* Insert a new node at the end of the list. */
void insertNode(struct Node** head, int value) {
    struct Node* newNode = malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->next = NULL;
    if (*head == NULL) { *head = newNode; return; }
    struct Node* temp = *head;
    while (temp->next != NULL)
        temp = temp->next;
    temp->next = newNode;
}
/* Delete the first node that contains the given value. */
void deleteNode(struct Node** head, int value) {
    struct Node* temp = *head, *prev = NULL;
    while (temp != NULL && temp->data != value) {
        prev = temp;
        temp = temp->next;
    }
    if (temp == NULL) {
        printf("Node with value %d not found.\n", value);
        return;
    }
    if (prev == NULL)
        *head = temp->next;      /* deleting the head node */
    else
        prev->next = temp->next;
    free(temp);
    printf("Node with value %d deleted.\n", value);
}
int main() {
    struct Node* head = NULL;
    // Insert nodes
    insertNode(&head, 5);
    insertNode(&head, 10);
    insertNode(&head, 15);
    // Delete nodes
    deleteNode(&head, 10);
    deleteNode(&head, 20);
    return 0;
}
Conversion of the infix expression (a + b - c) * d - e + f to prefix form.
The expression is scanned from right to left: operands are added to the output, ')' is pushed onto the stack, '(' pops operators until the matching ')' is removed, and an operator first pops any higher-precedence operators from the stack and is then pushed. The output built this way is reversed at the end to give the prefix expression.
f: add to output. Stack: empty. Output: f
+: push. Stack: +. Output: f
e: add to output. Stack: +. Output: f e
-: push (same precedence as +). Stack: + -. Output: f e
d: add to output. Stack: + -. Output: f e d
*: push (higher precedence than -). Stack: + - *. Output: f e d
): push. Stack: + - * ). Output: f e d
c: add to output. Stack: + - * ). Output: f e d c
-: push. Stack: + - * ) -. Output: f e d c
b: add to output. Stack: + - * ) -. Output: f e d c b
+: push (same precedence as -). Stack: + - * ) - +. Output: f e d c b
a: add to output. Stack: + - * ) - +. Output: f e d c b a
(: pop operators until ')' is removed. Stack: + - *. Output: f e d c b a + -
End of expression: pop the remaining operators. Stack: empty. Output: f e d c b a + - * - +
Reversing the output gives the prefix expression: + - * - + a b c d e f
        A
       / \
      B   C
     / \ / \
    D  E F  G
In-order traversal: D -> B -> E -> A -> F -> C -> G
Pre-order traversal: A -> B -> D -> E -> C -> F -> G
Post-order traversal: D -> E -> B -> F -> G -> C -> A
#include <stdio.h>
/* Iterative binary search: returns the index of key in arr, or -1 if absent. */
int binarySearch(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;    /* key not present */
}
int main() {
    int arr[] = {5, 9, 12, 14, 15, 32, 35, 40, 45, 95};
    int n = sizeof(arr) / sizeof(arr[0]);
    int key = 32;
    int index = binarySearch(arr, n, key);
    if (index != -1)
        printf("Element found at index %d\n", index);
    else
        printf("Element not found\n");
    return 0;
}
The program defines a binarySearch function that takes an array, its size, and the key to be searched. It
performs the binary search algorithm by dividing the array in half and comparing the key with the
middle element. If the key matches, it returns the index. If the key is smaller, it continues searching in
the lower half; otherwise, it searches in the upper half. The process repeats until the key is found or the
search space is exhausted.
In the main function, an example array is provided, and the binarySearch function is called to search for
the key value of 32. The index of the key is printed if found, or a message is displayed if the key is not
present in the array.
Step 1:
12, 14, 9, 15, 35, 32, 45, 40, 5, 95
Step 2:
12, 9, 14, 15, 32, 35, 40, 5, 45, 95
Step 3:
9, 12, 14, 15, 32, 35, 5, 40, 45, 95
Step 4:
9, 12, 14, 15, 32, 5, 35, 40, 45, 95
Step 5:
9, 12, 14, 15, 5, 32, 35, 40, 45, 95
Step 6:
9, 12, 14, 5, 15, 32, 35, 40, 45, 95
Step 7:
9, 12, 5, 14, 15, 32, 35, 40, 45, 95
Step 8:
9, 5, 12, 14, 15, 32, 35, 40, 45, 95
Step 9:
5, 9, 12, 14, 15, 32, 35, 40, 45, 95
The intermediate steps show the progression of the bubble sort algorithm where the largest elements
"bubble" towards the end of the array in each iteration until the entire array is sorted in ascending
order.
Given sequence: 3, 1, 4, 1, 5, 9, 2, 6, 5
Insertion sort works by dividing the array into a sorted and an unsorted region. It iterates through the
unsorted region, taking each element and inserting it into the correct position in the sorted region.
Here are the steps of the insertion sort algorithm applied to the given sequence:
Step 1:
1, 3, 4, 1, 5, 9, 2, 6, 5
Step 2:
1, 3, 4, 1, 5, 9, 2, 6, 5
Step 3:
1, 1, 3, 4, 5, 9, 2, 6, 5
Step 4:
1, 1, 3, 4, 5, 9, 2, 6, 5
Step 5:
1, 1, 3, 4, 5, 9, 2, 6, 5
Step 6:
1, 1, 2, 3, 4, 5, 9, 6, 5
Step 7:
1, 1, 2, 3, 4, 5, 6, 9, 5
Step 8:
1, 1, 2, 3, 4, 5, 5, 6, 9
The intermediate steps show how the insertion sort algorithm gradually builds the sorted region by
inserting each element into its correct position. The sorted region grows from left to right until the
entire sequence is sorted in ascending order.
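A C sketch of the insertion sort applied above:
#include <stdio.h>
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];            /* next element of the unsorted region */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];     /* shift larger elements one place right */
            j--;
        }
        arr[j + 1] = key;            /* drop key into its correct position */
    }
}
int main(void) {
    int a[] = {3, 1, 4, 1, 5, 9, 2, 6, 5};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);         /* prints 1 1 2 3 4 5 5 6 9 */
    printf("\n");
    return 0;
}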
Adjacency Matrix: A 2D matrix where each cell represents the presence or absence of an edge between
two vertices. If there is an edge between vertices i and j, the matrix cell (i, j) or (j, i) will contain a non-
zero value. Otherwise, it will contain zero. The matrix can be implemented using a 2D array or a
dynamic data structure like a nested list.
Example:
Consider a graph with 4 vertices (A, B, C, D) and the following edges: (A, B), (A, C), (B, D), (C, D).
The adjacency matrix representation would be:
    A B C D
A   0 1 1 0
B   1 0 0 1
C   1 0 0 1
D   0 1 1 0
Adjacency List: Each vertex in the graph has a list of its adjacent vertices. This representation can be
implemented using an array of linked lists, where each index corresponds to a vertex, and the linked
list contains the adjacent vertices.
Example:
Using the same graph as above, the adjacency list representation would be:
A: B -> C
B: A -> D
C: A -> D
D: B -> C
Edge List: A list of all the edges in the graph, where each edge is represented by a pair of vertices. This
representation is simple and efficient for sparse graphs.
Example:
Using the same graph as above, the edge list representation would be:
(A, B), (A, C), (B, D), (C, D)
18).The Traveling Salesman Problem (TSP) is a classic optimization problem that seeks to find the
shortest possible route that visits a given set of cities and returns to the starting city, visiting each city
exactly once. The problem can be defined as follows:
Given a complete weighted graph, find the Hamiltonian cycle (a cycle that visits each vertex exactly
once) with the minimum total weight.
Example:
Consider a graph with 4 cities (A, B, C, D) where the distance from A to B is 10, from B to C is 35, from C to D is 30, and from D to A is 20.
For example, one possible permutation is A → B → C → D → A, which corresponds to the total distance
of 10 + 35 + 30 + 20 = 95. We would need to calculate the total distances for all possible permutations
and select the one with the minimum value.
The TSP is an NP-hard problem, meaning that there is no known efficient algorithm to solve it for large
instances. However, there are approximation algorithms and heuristics that can provide reasonably
good solutions in practice.
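A brute-force C sketch for the 4-city case. Only the distances quoted above (A-B = 10, B-C = 35, C-D = 30, D-A = 20) come from the example; the remaining two distances (A-C = 15, B-D = 25) are assumed for illustration:
#include <stdio.h>
#include <limits.h>
#define N 4   /* cities A, B, C, D mapped to 0..3 */
int dist[N][N] = {                   /* A-C = 15 and B-D = 25 are assumed values */
    { 0, 10, 15, 20},
    {10,  0, 35, 25},
    {15, 35,  0, 30},
    {20, 25, 30,  0}
};
int best = INT_MAX;
/* Try every permutation of the remaining cities, starting and ending at city 0. */
void tsp(int visited[], int pos, int count, int cost) {
    if (count == N) {                       /* all cities visited */
        int total = cost + dist[pos][0];    /* return to the start */
        if (total < best) best = total;
        return;
    }
    for (int next = 1; next < N; next++) {
        if (!visited[next]) {
            visited[next] = 1;
            tsp(visited, next, count + 1, cost + dist[pos][next]);
            visited[next] = 0;
        }
    }
}
int main(void) {
    int visited[N] = {1, 0, 0, 0};   /* start at city A */
    tsp(visited, 0, 1, 0);
    printf("Shortest tour length: %d\n", best);
    return 0;
}
With these assumed distances the shortest tour is A -> B -> D -> C -> A with total length 80.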
19).Breadth First Search (BFS) is a graph traversal algorithm that explores all the vertices of a graph in
breadth-first order, i.e., it visits all the vertices at the same level before moving to the next level. It uses
a queue data structure to keep track of the vertices to be visited next.
BFS Algorithm:
1. Mark the starting vertex as visited and add it to the queue.
2. Remove a vertex from the front of the queue and process it.
3. Mark all of its unvisited adjacent vertices as visited and add them to the queue.
4. Repeat steps 2 and 3 until the queue is empty.
Example graph (adjacency lists):
A: B -> C
B: A -> C -> D
C: A -> B -> D
D: B -> C
Starting with vertex A, the BFS traversal would proceed as follows:
A (visited)
B (visited)
C (visited)
D (visited)
The traversal visits each vertex in breadth-first order, exploring all the vertices at each level before
moving to the next level.
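A C sketch of BFS on the same four-vertex graph, with A, B, C, D mapped to indices 0..3:
#include <stdio.h>
#define V 4
int adj[V][V] = {                     /* edges A-B, A-C, B-C, B-D, C-D */
    {0, 1, 1, 0},
    {1, 0, 1, 1},
    {1, 1, 0, 1},
    {0, 1, 1, 0}
};
void bfs(int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;
    visited[start] = 1;
    queue[rear++] = start;            /* enqueue the starting vertex */
    while (front < rear) {
        int v = queue[front++];       /* dequeue the next vertex */
        printf("%c ", 'A' + v);
        for (int u = 0; u < V; u++)
            if (adj[v][u] && !visited[u]) {
                visited[u] = 1;       /* mark before enqueuing to avoid duplicates */
                queue[rear++] = u;
            }
    }
}
int main(void) {
    bfs(0);                           /* prints A B C D */
    printf("\n");
    return 0;
}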
20).Minimum Spanning Tree (MST) is a subset of edges in a connected, undirected graph that connects
all the vertices together with the minimum possible total edge weight. MSTs are useful in network
design, clustering, and various optimization problems.
One of the algorithms to find the minimum spanning tree is Prim's algorithm. Here's a sketch of how
Prim's algorithm works:
Start with an arbitrary vertex (let's say vertex A) and add it to the MST.
Repeat the following steps until all vertices are included in the MST:
Find the minimum-weight edge that connects a vertex in the MST to a vertex outside the MST.
Add the vertex connected by the minimum-weight edge to the MST.
Update the MST with the newly added vertex and edge.
The process continues until all vertices are included in the MST.
Example:
Consider a connected graph with the following edge weights:
A-B = 4, C-D = 1, D-E = 6, and three further edges of weights 5, 2, and 3 among A, B, C, and D.
Starting with vertex A, Prim's algorithm repeatedly adds the cheapest edge joining a vertex already in the tree to a vertex outside it, until all five vertices are included.
The resulting minimum spanning tree connects all the vertices (A, B, C, D, E) with the minimum total
edge weight.
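A C sketch of Prim's algorithm on a small 5-vertex graph; the weight matrix below is an assumed example (0 means no edge), not the exact figure from the question:
#include <stdio.h>
#define V 5                            /* vertices A..E mapped to 0..4 */
#define INF 1000000
int w[V][V] = {                        /* assumed weights; 0 means no edge */
    {0, 4, 5, 2, 0},
    {4, 0, 0, 3, 0},
    {5, 0, 0, 1, 0},
    {2, 3, 1, 0, 6},
    {0, 0, 0, 6, 0}
};
int main(void) {
    int inMST[V] = {1, 0, 0, 0, 0};    /* start the tree at vertex A */
    int total = 0;
    for (int added = 1; added < V; added++) {
        int bestU = -1, bestV = -1, best = INF;
        for (int u = 0; u < V; u++)        /* u already in the tree */
            for (int v = 0; v < V; v++)    /* v not yet in the tree */
                if (inMST[u] && !inMST[v] && w[u][v] && w[u][v] < best) {
                    best = w[u][v];
                    bestU = u;
                    bestV = v;
                }
        inMST[bestV] = 1;                  /* add the cheapest crossing edge */
        printf("Add edge %c-%c (weight %d)\n", 'A' + bestU, 'A' + bestV, best);
        total += best;
    }
    printf("Total MST weight: %d\n", total);
    return 0;
}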
21).The push and pop operations are fundamental operations performed on a stack data structure.
Stack is a Last-In-First-Out (LIFO) data structure that allows insertion and deletion of elements from
one end called the "top" of the stack.
Push Operation:
Push(5):
Stack: [5]
Push(8):
Stack: [5, 8]
Push(3):
Stack: [5, 8, 3]
Pop():
Removed Element: 3
Stack: [5, 8]
Pop():
Removed Element: 8
Stack: [5]
Push(2):
Stack: [5, 2]
After performing the push and pop operations, the final stack contains the elements [5, 2]. The push
operation inserts elements at the top, while the pop operation removes elements from the top of the
stack.
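A minimal array-based C sketch of push and pop:
#include <stdio.h>
#define MAX 100
int stack[MAX];
int top = -1;                       /* -1 means the stack is empty */
void push(int value) {
    if (top < MAX - 1)
        stack[++top] = value;       /* insert at the top */
}
int pop(void) {
    if (top < 0)
        return -1;                  /* underflow indicator (assumed convention) */
    return stack[top--];            /* remove and return the top element */
}
int main(void) {
    push(5); push(8); push(3);
    printf("Popped: %d\n", pop());  /* 3 */
    printf("Popped: %d\n", pop());  /* 8 */
    push(2);                        /* final stack: [5, 2] */
    return 0;
}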
22).Asymptotic notations are used to describe the running time complexity of an algorithm in terms of
its input size. The most commonly used asymptotic notations are Big O notation (O), Omega notation
(Ω), and Theta notation (Θ).
Big O notation (O): It represents the upper bound of the running time complexity. It gives an estimate of
the worst-case scenario of an algorithm's performance. For example, if an algorithm has a time
complexity of O(n), it means that the running time grows at most linearly with the input size.
Omega notation (Ω): It represents the lower bound of the running time complexity. It gives an estimate
of the best-case scenario of an algorithm's performance. For example, if an algorithm has a time
complexity of Ω(n^2), it means that the running time grows at least quadratically with the input size.
Theta notation (Θ): It represents both the upper and lower bounds of the running time complexity. It
provides a tight estimate of the running time. For example, if an algorithm has a time complexity of
Θ(n), it means that the running time grows linearly with the input size.
These notations help in comparing and analyzing the efficiency of algorithms. They allow us to focus on
the growth rate of the algorithm's performance as the input size increases, rather than exact values or
constants.
Diagram: [A diagram plotting the input size n on the x-axis and running time on the y-axis, with curves for common complexities such as O(log n), O(n), and O(n^2), showing how the different functions grow at different rates as the input size increases.]
23).For the given tree, the post-order and in-order traversals can be determined as follows:
Tree:
        1
       / \
      2   3
     / \ / \
    4  5 6  7
i) Post-order traversal (left subtree, right subtree, root):
4, 5, 2, 6, 7, 3, 1
Post-order traversal visits the nodes in the order: 4, 5, 2, 6, 7, 3, 1, where each node is visited after its
left and right subtrees have been visited.
ii) In-order traversal (left subtree, root, right subtree):
4, 2, 5, 1, 6, 3, 7
In-order traversal visits the nodes in the order: 4, 2, 5, 1, 6, 3, 7, where each node is visited between the
visits to its left and right subtrees.
Both traversals provide different sequences of visiting the nodes in the tree, allowing us to examine the
structure and relationships between the nodes in different ways.
24).The maximum number of children that a binary tree node can have is 2. This property
distinguishes a binary tree from other types of trees where a node can have more than two children.
A binary tree is a hierarchical data structure where each node has at most two children, referred to as
the left child and the right child. The left child represents the left subtree, and the right child represents
the right subtree.
Diagrammatic representation:
        1
       / \
      2   3
     / \ / \
    4  5 6  7
In the above binary tree, each node has either 0, 1, or 2 children. For example, node 1 has two children
(2 and 3), node 2 has two children (4 and 5), and leaf nodes (4, 5, 6, 7) have no children.
If a node has more than two children, it would be considered a different type of tree, such as a ternary
tree (three children per node), quaternary tree (four children per node), etc.
Quick sort picks a pivot, partitions the array around it, and then recursively sorts the resulting sub-arrays. Continuing with the sub-arrays produced by the first partition:
Left sub-array:
Pivot: 9
After partitioning: 1, 2, 3, 9, 10
Right sub-array:
Pivot: 15
After partitioning: 15, 22, 30, 45, 64
Recursively apply the steps to the remaining sub-arrays.
Left sub-array:
Pivot: 3
After partitioning: 1, 2, 3
Right sub-array:
Pivot: 64
After partitioning: 22, 30, 45, 64
Continue the process until each sub-array contains only one element.
The final sorted array is obtained by concatenating the sorted sub-arrays.
Sorted Array: 1, 2, 3, 9, 10, 15, 22, 30, 45, 64
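A C sketch of quick sort using the last element of each range as the pivot (Lomuto partitioning); the unsorted input below is an assumed arrangement of the same ten values, and the pivots chosen may differ from the trace above:
#include <stdio.h>
void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }
/* Place the pivot (last element) in its final position and return that index. */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (arr[j] <= pivot)
            swap(&arr[++i], &arr[j]);   /* move smaller elements to the left */
    swap(&arr[i + 1], &arr[high]);
    return i + 1;
}
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);     /* sort the left sub-array */
        quickSort(arr, p + 1, high);    /* sort the right sub-array */
    }
}
int main(void) {
    int a[] = {10, 1, 22, 9, 3, 64, 2, 45, 15, 30};
    int n = sizeof(a) / sizeof(a[0]);
    quickSort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);            /* prints 1 2 3 9 10 15 22 30 45 64 */
    printf("\n");
    return 0;
}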
Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent
elements, and swaps them if they are in the wrong order. This process is repeated until the entire list is
sorted.
Numbers: 3, 6, 1, 8, 4, 5
Pass 1:
Compare 3 and 6: No swap (3, 6, 1, 8, 4, 5)
Compare 6 and 1: Swap (3, 1, 6, 8, 4, 5)
Compare 6 and 8: No swap (3, 1, 6, 8, 4, 5)
Compare 8 and 4: Swap (3, 1, 6, 4, 8, 5)
Compare 8 and 5: Swap (3, 1, 6, 4, 5, 8)
Pass 2:
Compare 3 and 1: Swap (1, 3, 6, 4, 5, 8)
Compare 3 and 6: No swap (1, 3, 6, 4, 5, 8)
Compare 6 and 4: Swap (1, 3, 4, 6, 5, 8)
Compare 6 and 5: Swap (1, 3, 4, 5, 6, 8)
Pass 3:
Compare 1 and 3: No swap (1, 3, 4, 5, 6, 8)
Compare 3 and 4: No swap (1, 3, 4, 5, 6, 8)
Compare 4 and 5: No swap (1, 3, 4, 5, 6, 8)
Pass 4:
Compare 1 and 3: No swap (1, 3, 4, 5, 6, 8)
Compare 3 and 4: No swap (1, 3, 4, 5, 6, 8)
Pass 5:
Compare 1 and 3: No swap (1, 3, 4, 5, 6, 8)
The sorted array is: 1, 3, 4, 5, 6, 8
The time complexity of bubble sort is O(n^2) in the worst and average case, where n is the number of
elements in the array.
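A C sketch of the bubble sort passes shown above:
#include <stdio.h>
void bubbleSort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++)
        for (int i = 0; i < n - 1 - pass; i++)   /* largest items settle at the end */
            if (arr[i] > arr[i + 1]) {
                int t = arr[i];                  /* swap an adjacent out-of-order pair */
                arr[i] = arr[i + 1];
                arr[i + 1] = t;
            }
}
int main(void) {
    int a[] = {3, 6, 1, 8, 4, 5};
    int n = sizeof(a) / sizeof(a[0]);
    bubbleSort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);                     /* prints 1 3 4 5 6 8 */
    printf("\n");
    return 0;
}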
For the same list [3, 6, 1, 8, 4, 5], the bubble sort algorithm would perform the following comparisons and swaps:
Pass 1:
3, 6, 1, 8, 4, 5 (compare 3 and 6, no swap)
3, 1, 6, 8, 4, 5 (swap 6 and 1)
3, 1, 6, 8, 4, 5 (compare 6 and 8, no swap)
3, 1, 6, 4, 8, 5 (swap 8 and 4)
3, 1, 6, 4, 5, 8 (swap 8 and 5)
Pass 2:
1, 3, 6, 4, 5, 8 (swap 3 and 1)
1, 3, 6, 4, 5, 8 (compare 3 and 6, no swap)
1, 3, 4, 6, 5, 8 (swap 6 and 4)
1, 3, 4, 5, 6, 8 (swap 6 and 5)
Pass 3:
1, 3, 4, 5, 6, 8 (compare 1 and 3, no swap)
1, 3, 4, 5, 6, 8 (compare 3 and 4, no swap)
1, 3, 4, 5, 6, 8 (compare 4 and 5, no swap)
No swaps occur in pass 3, so the list is sorted.
The sorted list is [1, 3, 4, 5, 6, 8].
Bubble sort has a worst-case time complexity of O(n^2), where n is the number of elements in the list. For this list of 6 elements, that means on the order of 6^2 = 36 comparisons in the worst case.
27).Topological Sort:
Topological sort is an algorithm used to order the nodes of a directed acyclic graph (DAG) in such a way
that for every directed edge (u, v), node u comes before node v in the ordering. It is commonly used in
tasks that have dependencies, such as scheduling or compiling.
Example:
Let's consider a simple directed graph with six nodes and the following directed edges:
1 -> 3, 2 -> 4, 3 -> 5, 4 -> 6, 5 -> 6
The topological sort for this graph would be [1, 2, 3, 5, 4, 6]. Repeatedly choose a node with no incoming edges, add it to the result, and remove it together with its outgoing edges:
Choose node 1 (no incoming edges) and add it to the result: [1].
Remove node 1 and its outgoing edge to node 3.
Choose node 2 (no incoming edges) and add it to the result: [1, 2].
Remove node 2 and its outgoing edge to node 4.
Choose node 3 (no incoming edges) and add it to the result: [1, 2, 3].
Remove node 3 and its outgoing edge to node 5.
Choose node 5 (no incoming edges) and add it to the result: [1, 2, 3, 5].
Remove node 5 and its outgoing edge to node 6.
Choose node 4 (no incoming edges) and add it to the result: [1, 2, 3, 5, 4].
Remove node 4 and its outgoing edge to node 6.
Choose node 6 (no incoming edges) and add it to the result: [1, 2, 3, 5, 4, 6].
The resulting topological sort is [1, 2, 3, 5, 4, 6].
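A C sketch of this procedure (Kahn's algorithm) for the edges listed above, with nodes numbered 1 to 6:
#include <stdio.h>
#define N 6
int adj[N + 1][N + 1];               /* adj[u][v] = 1 for a directed edge u -> v */
int indegree[N + 1];
int main(void) {
    int edges[][2] = {{1, 3}, {2, 4}, {3, 5}, {4, 6}, {5, 6}};
    for (int i = 0; i < 5; i++) {
        adj[edges[i][0]][edges[i][1]] = 1;
        indegree[edges[i][1]]++;     /* count incoming edges */
    }
    int queue[N], front = 0, rear = 0;
    for (int v = 1; v <= N; v++)
        if (indegree[v] == 0)
            queue[rear++] = v;       /* nodes with no incoming edges go first */
    printf("Topological order: ");
    while (front < rear) {
        int u = queue[front++];
        printf("%d ", u);
        for (int v = 1; v <= N; v++)
            if (adj[u][v] && --indegree[v] == 0)
                queue[rear++] = v;   /* v becomes free once u is removed */
    }
    printf("\n");
    return 0;
}
Because this version takes pending nodes in first-in-first-out order, it prints 1 2 3 4 5 6; the hand-worked order [1, 2, 3, 5, 4, 6] above is an equally valid topological sort of the same graph.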
Kruskal's algorithm is a greedy algorithm used to find a minimum-cost spanning tree in a connected
weighted graph. It works by selecting edges in ascending order of their weights and adding them to the
spanning tree if they do not create a cycle.
Example:
Let's consider a graph with the following edges and their corresponding weights:
Edges: (A, B) - 2
(B, C) - 3
(A, C) - 4
(B, D) - 1
(C, D) - 5