
Data Structure - Model Exam

2 Marks
1. Define Stack and its operations
• A Stack is a linear data structure that follows the Last In First Out (LIFO) principle,
where the last element inserted is the first to be removed.
• Operations include push (to insert an element), pop (to remove the top element),
and peek (to view the top element without removing it).
Example:
If we push 10, 20, 30 into a stack, popping once will remove 30.

2. Define Time Complexity


• Time Complexity refers to the computational complexity that describes the amount
of time an algorithm takes to run as a function of the input size.
• It helps to compare algorithms and determine which algorithm is more efficient in
terms of execution time.
Example:
A linear search has a time complexity of O(n) where n is the number of elements.

3. List out the applications of Stack


• Function Call Management: Stacks are used to manage function calls and recursive
programming by storing return addresses.
• Expression Evaluation: Stacks are used to evaluate and convert expressions like infix
to postfix or prefix notations.
Example:
The undo feature in editors like Word uses a stack.

4. Define Non-Linear Data Structure


• A Non-linear data structure is one in which data elements are not arranged
sequentially or linearly; elements can be connected in multiple ways.
• Examples include trees and graphs, where hierarchical or network-based
relationships exist between data elements.
Example:
A binary tree is a non-linear data structure.

5. Differentiate Inorder and Preorder


• Inorder Traversal: Visit left subtree, root node, then right subtree (Left → Root →
Right).
• Preorder Traversal: Visit root node first, then left subtree, and then right subtree
(Root → Left → Right).
Example:
For a tree with root 1 and left child 2:
• Inorder: 2, 1
• Preorder: 1, 2
6. List out the various Applications of AVL Trees
• Databases: AVL trees are used in databases to maintain balanced and sorted records
for fast search operations.
• Memory Management: AVL trees are used in memory management to keep track of
free memory areas efficiently.
Example:
Operating systems use AVL trees for scheduling and memory allocation.

7. Define Sorting and Types of Sorting


• Sorting is the process of arranging data in a specified order, usually ascending or
descending.
• Types of sorting include Bubble Sort, Insertion Sort, Selection Sort, Merge Sort, and
Quick Sort.
Example:
Sorting [5, 2, 9] using bubble sort results in [2, 5, 9].

8. Write any two differences between Linear Search and Binary Search
• Linear Search: Scans each element one by one until the target is found; it works on
both sorted and unsorted data.
• Binary Search: Repeatedly divides the sorted list in half to search for the target; it
only works on sorted data.
Example:
To find 3 in [1, 2, 3, 4],
• Linear search checks 1→2→3;
• Binary search checks middle elements directly.

9. Define Graph and its types


• A Graph is a non-linear data structure consisting of nodes (vertices) connected by
edges.
• Types of graphs include Directed Graphs, Undirected Graphs, Weighted Graphs, and
Unweighted Graphs.
Example:
Social networks are modeled using graphs where users are vertices and connections are
edges.

10. Define BFS and DFS


• BFS (Breadth First Search) explores all neighbor nodes at the present depth before
moving on to nodes at the next depth level.
• DFS (Depth First Search) explores as far as possible along each branch before
backtracking.
Example:
In a maze, DFS would explore one full path first, whereas BFS would explore all paths layer
by layer.
16 Marks
1. Asymptotic Notation
Asymptotic notation is a mathematical notation used in computer science to describe the
efficiency or complexity of algorithms as the input size approaches infinity. It provides a
concise way to express the upper or lower bounds of an algorithm's running time or space
requirements.
The most commonly used asymptotic notations are Big O, Omega, and Theta.

1. Big O Notation (O-notation)


Definition:
• Big O Notation is used to describe the upper limit of an algorithm's running time or
space requirement as a function of input size.
• It focuses on the worst-case scenario, ensuring that the algorithm does not exceed a
certain growth rate as input size increases.
• It helps programmers understand the maximum amount of time an algorithm could
possibly take, which is critical in applications where performance and scalability are
important.
Advantage:
• Helps to guarantee performance even in the worst-case scenarios.
• Useful for analysing maximum resources needed (time or space).
Disadvantage:
• Does not give information about best or average performance.
• May sometimes overestimate actual running time.
Equation:
• f(n) = O(g(n))
if there exist constants c > 0 and n₀ ≥ 0 such that
f(n) ≤ c × g(n) for all n ≥ n₀.


Example:
• In linear search, the time complexity is O(n) because in the worst case we might have
to check all n elements.
2. Omega Notation (Ω-notation)
Definition:
• Omega Notation provides a lower bound for an algorithm's running time or resource
consumption based on input size.
• It describes the best-case scenario performance, meaning that the algorithm will
take at least a certain amount of time or space as the input size grows.
• Omega notation is particularly useful to understand the minimum work an algorithm
must perform, regardless of any optimizations.
Advantage:
• Helps to know the minimum time an algorithm will take under the best conditions.
• Useful in optimistic analysis where best performance matters.
Disadvantage:
• Tells only about best-case, so it might hide poor performance in average or worst
cases.
• Not sufficient alone for critical applications where worst-case matters.
Equation:
• f(n) = Ω(g(n))
if there exist constants c > 0 and n₀ ≥ 0 such that
f(n) ≥ c × g(n) for all n ≥ n₀.


Example:
• In bubble sort, the best case (already sorted array) has time complexity Ω(n) — we
just scan once.

3. Theta Notation (Θ-notation)


Definition:
• Theta Notation gives a tight bound on the running time or space of an algorithm,
meaning it defines both the upper and lower limits simultaneously.
• It ensures that the running time grows at the same rate for both the best-case and
worst-case, giving an accurate, exact measurement of an algorithm's efficiency.
• Theta notation is extremely useful when the algorithm has consistent behavior
across all input conditions.
Advantage:
• Provides a precise measure of an algorithm's growth — both upper and lower
bounds.
• Most useful for accurate analysis of algorithms.
Disadvantage:
• Hard to prove in complex algorithms because it needs both upper and lower bounds
simultaneously.
• Cannot be applied easily when best and worst cases are very different.
Equation:
• f(n) = Θ(g(n))
if there exist constants c₁, c₂ > 0 and n₀ ≥ 0 such that
c₁ × g(n) ≤ f(n) ≤ c₂ × g(n) for all n ≥ n₀.


Example:
• In insertion sort for random input, the time complexity is Θ(n²) — both upper and
lower bounds grow like n².

2. Insertion and Deletion in Linked list

Definition of Linked List


A Linked List is a linear data structure in which elements are not stored at contiguous
memory locations. Instead, each element (called a node) contains two parts:
1. Data: The actual value stored.
2. Pointer (or Link): Address/reference to the next node in the sequence.
Because of this structure, linked lists allow efficient insertion and deletion of elements at any
position compared to arrays, which require shifting of elements.
Unlike arrays, linked lists do not have a fixed size; they can grow or shrink dynamically
during runtime, making them more flexible when dealing with unpredictable data sizes.

Types of Linked List
• Singly Linked List: Each node points to the next node, and the last node points to NULL. (One-way traversal)
• Doubly Linked List: Each node contains two pointers: one pointing to the next node and one pointing to the previous node. (Two-way traversal)
• Circular Linked List: The last node points back to the first node, forming a circle.
Insertion in Linked List
Definition
Insertion in a linked list means adding a new node to the list. It can be done at different
locations depending on the requirement:
• At the beginning
• At the end
• At a specific position
Insertion in a linked list is more efficient than arrays because it does not require shifting
elements.

Types of Insertion
1. Insertion at the Beginning:
o Create a new node.
o Set its next pointer to the current head.
o Update the head to point to the new node.
2. Insertion at the End:
o Traverse the list until the last node (whose next is NULL).
o Set its next to point to the new node.
3. Insertion at a Specific Position:
o Traverse to the node after which the new node should be inserted.
o Adjust the pointers accordingly to link the new node.

Advantages of Insertion in Linked List


• No need to shift elements as in arrays.
• Dynamic memory allocation; size can grow as needed.
• Easy to insert elements at any position.
Disadvantages of Insertion in Linked List
• Extra memory is required for storing pointers.
• Random access is not possible (need to traverse nodes sequentially).

Simple C++ Program for Insertion


#include<iostream>
using namespace std;

class Node {
public:
    int data;
    Node* next;
};

// Function to insert node at the beginning
void insertAtBeginning(Node*& head, int newData) {
    Node* newNode = new Node();
    newNode->data = newData;
    newNode->next = head;
    head = newNode;
}

// Function to display linked list
void displayList(Node* node) {
    while (node != NULL) {
        cout << node->data << " -> ";
        node = node->next;
    }
    cout << "NULL" << endl;
}

int main() {
    Node* head = NULL;

    insertAtBeginning(head, 30);
    insertAtBeginning(head, 20);
    insertAtBeginning(head, 10);

    cout << "Linked List after insertion at beginning: ";
    displayList(head);

    return 0;
}

Output:
Linked List after insertion at beginning: 10 -> 20 -> 30 -> NULL

Deletion in Linked List


Definition
Deletion in a linked list means removing an existing node from the list. Like insertion,
deletion also requires pointer adjustments so that the linked structure remains intact after a
node is removed.
Deletion must also carefully free the memory occupied by the removed node to avoid
memory leaks.
Types of Deletion
1. Deletion at the Beginning:
o Point the head to the second node.
o Delete the original first node.
2. Deletion at the End:
o Traverse to the second last node.
o Set its next to NULL.
o Delete the last node.
3. Deletion at a Specific Position:
o Traverse to the node just before the node to be deleted.
o Adjust pointers to skip the node to be deleted.
o Delete the unwanted node.

Advantages of Deletion in Linked List


• Efficient removal of nodes without shifting.
• Deletion at any position is easier compared to arrays.
Disadvantages of Deletion in Linked List
• Traversal is needed to find the node, which may take O(n) time.
• If proper care is not taken, it can cause dangling pointers or memory leaks.

Simple C++ Program for Deletion


#include<iostream>
using namespace std;

class Node {
public:
    int data;
    Node* next;
};

// Function to delete node at the beginning
void deleteAtBeginning(Node*& head) {
    if (head == NULL) {
        cout << "List is empty!" << endl;
        return;
    }
    Node* temp = head;
    head = head->next;
    delete temp;
}

// Function to display linked list
void displayList(Node* node) {
    while (node != NULL) {
        cout << node->data << " -> ";
        node = node->next;
    }
    cout << "NULL" << endl;
}

int main() {
    Node* head = new Node();
    Node* second = new Node();
    Node* third = new Node();

    head->data = 10;
    second->data = 20;
    third->data = 30;

    head->next = second;
    second->next = third;
    third->next = NULL;

    cout << "Original Linked List: ";
    displayList(head);

    deleteAtBeginning(head);

    cout << "Linked List after deletion at beginning: ";
    displayList(head);

    return 0;
}

Output:
Original Linked List: 10 -> 20 -> 30 -> NULL
Linked List after deletion at beginning: 20 -> 30 -> NULL

3. Stack Operations Using Array

A stack is a linear data structure that follows the Last In, First Out (LIFO) principle. This
means the last element inserted into the stack will be the first one to be removed.
In simpler terms, think of a stack like a pile of plates: you add (push) plates on top, and
remove (pop) plates from the top only.
When implemented using an array, the stack operations are done using an array and a top
variable:
• The array stores the stack elements.
• The top variable keeps track of the index of the last inserted element (topmost
element).
If the top is -1, it means the stack is empty. If top == size - 1, it means the stack is full.

Types of Stack Operations


• Push: Adding (inserting) an element onto the top of the stack.
• Pop: Removing (deleting) the topmost element from the stack.
• Peek: Viewing the topmost element without removing it from the stack.
Other supporting operations include:
• isEmpty(): Check if the stack is empty.
• isFull(): Check if the stack is full (when implemented using arrays).

Definitions of Stack Operations


1. Push Operation
Push means inserting a new element at the top of the stack.
Steps:
• Check if the stack is full (overflow condition).
• If not full, increment top and insert the element at stack[top].
If push is done on a full stack, it causes stack overflow.
Applications of Push
• Function calls: In recursion, each function call is pushed onto the stack until the base
case is met.
• Undo operations: In applications like text editors, each action (like typing) is pushed
onto a stack, so you can undo the last operation.
• Expression evaluation: Used in converting infix expressions to postfix and evaluating
postfix expressions.

2. Pop Operation
Pop means removing the topmost element from the stack.
Steps:
• Check if the stack is empty (underflow condition).
• If not empty, remove and return the element at stack[top], then decrement top.
If pop is done on an empty stack, it causes stack underflow.
Applications of Pop
• Function calls: When a function returns, the corresponding call is popped from the
stack, and control goes back to the previous function.
• Undo operations: In applications like text editors, you can pop the last action to undo
it.
• Expression evaluation: When evaluating postfix expressions, operands are popped
from the stack for operations.

3. Peek Operation
Peek (or Top) means viewing the topmost element without removing it from the stack.
Steps:
• Check if the stack is empty.
• If not empty, return the value at stack[top].
Applications of Peek
• Inspect top element: In situations where you want to see the most recent item
added to the stack without modifying it (e.g., checking the top of a function call
stack).
• Expression evaluation: Peek can be used to view the top element in an expression
evaluation algorithm to decide the next step (like checking the operator precedence).
• Browser History: In browsers, peek is used to see the last visited URL without
navigating away.

Advantages and Disadvantages of Stack Operations


Advantages
• Simple and easy to implement.
• Fast access to the topmost element.
• Memory efficient for known, limited size stacks (when using arrays).
• Useful in function call management, undo operations, expression evaluation.
Disadvantages
• Fixed size when using arrays — not dynamic (may cause overflow).
• Difficult to access elements other than the top.
• If not properly managed, overflow and underflow conditions can occur.

C++ Program for Stack Operations Using Array


#include<iostream>
using namespace std;

#define SIZE 5

class Stack {
private:
    int arr[SIZE];
    int top;

public:
    Stack() {
        top = -1;  // Initialize stack as empty
    }

    // Push operation
    void push(int value) {
        if (top == SIZE - 1) {
            cout << "Stack Overflow! Cannot push " << value << endl;
            return;
        }
        top++;
        arr[top] = value;
        cout << value << " pushed into stack." << endl;
    }

    // Pop operation
    void pop() {
        if (top == -1) {
            cout << "Stack Underflow! Cannot pop." << endl;
            return;
        }
        cout << arr[top] << " popped from stack." << endl;
        top--;
    }

    // Peek operation
    void peek() {
        if (top == -1) {
            cout << "Stack is empty!" << endl;
        } else {
            cout << "Top element is: " << arr[top] << endl;
        }
    }

    // Display all elements
    void display() {
        if (top == -1) {
            cout << "Stack is empty!" << endl;
            return;
        }
        cout << "Stack elements: ";
        for (int i = top; i >= 0; i--) {
            cout << arr[i] << " ";
        }
        cout << endl;
    }
};

int main() {
    Stack s;

    s.push(10);
    s.push(20);
    s.push(30);
    s.display();

    s.peek();

    s.pop();
    s.display();

    return 0;
}

Output
10 pushed into stack.
20 pushed into stack.
30 pushed into stack.
Stack elements: 30 20 10
Top element is: 30
30 popped from stack.
Stack elements: 20 10

4 in ct 2 pdf

5. Min heap and Max heap

A Heap is a special type of complete binary tree that satisfies a specific ordering property
called the heap property.
It means:
• Every level of the tree is completely filled except possibly the last.
• The last level has all keys as far left as possible.
• Depending on the type of heap, the value of a parent node is either greater (in Max
Heap) or smaller (in Min Heap) than its children.
In a heap, elements are organized such that the highest-priority element can be accessed
efficiently.
Heaps are usually implemented using arrays because:
• In a complete binary tree, for a node at index i,
o Left child is at 2*i + 1
o Right child is at 2*i + 2
o Parent is at (i-1)/2 (integer division)
Thus, heaps provide a very efficient structure for managing ordered data.

Definition of Min Heap


A Min Heap is a special kind of binary heap where the value of the parent node is always
smaller than or equal to the values of its child nodes.
• The smallest element is always at the root of the tree.
• When any subtree is considered, the root is the smallest among all its descendants.
Thus, Min Heap is mainly used when we want to quickly retrieve the minimum element.

Definition of Max Heap


A Max Heap is a special kind of binary heap where the value of the parent node is always
greater than or equal to the values of its child nodes.
• The largest element is always at the root of the tree.
• When any subtree is considered, the root is the largest among all its descendants.
Thus, Max Heap is mainly used when we want to quickly retrieve the maximum element.

Types of Heaps
• Min Heap: The value of each node is smaller than or equal to the values of its children. The minimum element is at the root.
• Max Heap: The value of each node is greater than or equal to the values of its children. The maximum element is at the root.

Applications of Heap
• Priority queues (heap is the standard way to implement them).
• Heap Sort algorithm (based on heap structure).
• Finding the kth largest/smallest elements in a dataset.
• Scheduling CPU jobs, printer jobs, network packets (based on priorities).
• Graph algorithms like Dijkstra's shortest path, Prim’s Minimum Spanning Tree.

C++ Program to Construct a Min Heap


#include<iostream>
#include<vector>
using namespace std;

// Function to heapify a subtree rooted at index i
void minHeapify(vector<int>& heap, int n, int i) {
    int smallest = i;       // Initialize smallest as root
    int left = 2 * i + 1;   // left child index
    int right = 2 * i + 2;  // right child index

    if (left < n && heap[left] < heap[smallest])
        smallest = left;

    if (right < n && heap[right] < heap[smallest])
        smallest = right;

    if (smallest != i) {
        swap(heap[i], heap[smallest]);
        minHeapify(heap, n, smallest);  // Recursively heapify the affected subtree
    }
}

// Function to build a Min Heap
void buildMinHeap(vector<int>& heap) {
    int n = heap.size();
    for (int i = n / 2 - 1; i >= 0; i--) {
        minHeapify(heap, n, i);
    }
}

// Function to display the heap
void display(vector<int>& heap) {
    for (int val : heap) {
        cout << val << " ";
    }
    cout << endl;
}

int main() {
    vector<int> heap = {40, 20, 30, 10, 50, 60, 15};

    cout << "Original Array: ";
    display(heap);

    buildMinHeap(heap);

    cout << "Min Heap: ";
    display(heap);

    return 0;
}
Output
Original Array: 40 20 30 10 50 60 15
Min Heap: 10 20 15 40 50 60 30

How Min Heap Looks (Tree Structure):


          10
        /    \
      20      15
     /  \    /  \
    40   50 60   30

C++ Program to Construct a Max Heap

#include<iostream>
#include<vector>
using namespace std;

// Function to heapify a subtree rooted at index i
void maxHeapify(vector<int>& heap, int n, int i) {
    int largest = i;        // Initialize largest as root
    int left = 2 * i + 1;   // left child index
    int right = 2 * i + 2;  // right child index

    if (left < n && heap[left] > heap[largest])
        largest = left;

    if (right < n && heap[right] > heap[largest])
        largest = right;

    if (largest != i) {
        swap(heap[i], heap[largest]);
        maxHeapify(heap, n, largest);  // Recursively heapify the affected subtree
    }
}

// Function to build a Max Heap
void buildMaxHeap(vector<int>& heap) {
    int n = heap.size();
    for (int i = n / 2 - 1; i >= 0; i--) {
        maxHeapify(heap, n, i);
    }
}

// Function to display the heap
void display(vector<int>& heap) {
    for (int val : heap) {
        cout << val << " ";
    }
    cout << endl;
}

int main() {
    vector<int> heap = {40, 20, 30, 10, 50, 60, 15};

    cout << "Original Array: ";
    display(heap);

    buildMaxHeap(heap);

    cout << "Max Heap: ";
    display(heap);

    return 0;
}

Output
Original Array: 40 20 30 10 50 60 15
Max Heap: 60 50 40 10 20 30 15

How Max Heap Looks (Tree Structure):


          60
        /    \
      50      40
     /  \    /  \
    10   20 30   15
6 in ct 2 pdf

7 and 8 in ct 2 pdf

9 Kruskal’s and Prim’s algorithm

Definition of Kruskal’s Algorithm


Kruskal’s Algorithm is a greedy algorithm used for finding the Minimum Spanning Tree
(MST) of a connected, weighted, and undirected graph. It finds the subset of edges that
forms a tree including all the vertices with the minimum total edge weight and without
cycles.
Characteristics of Kruskal’s Algorithm
• Uses Greedy Approach (always selects the smallest available edge).
• Uses Disjoint Set (Union-Find) data structure to detect cycles.
• Works efficiently on sparse graphs (where edges ≪ vertices²).
• Ensures that the resultant tree has minimum total edge weight.

2. Working of Kruskal’s Algorithm


Step-by-Step Explanation
1. Sort all edges in increasing order of their weights.
2. Pick the smallest edge and check if it forms a cycle using the Disjoint Set (Union-
Find).
3. If no cycle is formed, add the edge to the Minimum Spanning Tree (MST).
4. Repeat until we have (V-1) edges in the MST (where V is the number of vertices).

4. Advantages and Disadvantages of Kruskal’s Algorithm


4.1. Advantages
✔ Works well for sparse graphs.
✔ Always produces the optimal MST.
✔ Easy to implement using sorting and Union-Find.
4.2. Disadvantages
✘ Sorting takes O(E log E), making it slower for dense graphs.
✘ Cycle detection adds overhead in large graphs.
✘ Not suitable for dynamic graphs where edges are frequently added or removed.

5. Applications of Kruskal’s Algorithm


✔ Network Design – Optimizing road, electrical, and communication networks.
✔ Computer Vision – Image segmentation using MST.
✔ Cluster Analysis – Finding groups in data science.
✔ Airline Routing – Optimizing flight paths for minimal costs.
Kruskal’s Algorithm Implementation in C++
#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

class Edge {
public:
    int src, dest, weight;
    Edge(int s, int d, int w) : src(s), dest(d), weight(w) {}
};

// Function to compare edges based on weight (for sorting)
bool compareEdges(Edge a, Edge b) {
    return a.weight < b.weight;
}

// Disjoint Set (Union-Find)
class DisjointSet {
public:
    vector<int> parent, rank;

    DisjointSet(int n) {
        parent.resize(n);
        rank.resize(n, 0);
        for (int i = 0; i < n; i++)
            parent[i] = i;
    }

    int find(int node) {
        if (parent[node] != node)
            parent[node] = find(parent[node]);  // path compression
        return parent[node];
    }

    void unionSet(int u, int v) {
        int rootU = find(u);
        int rootV = find(v);

        if (rootU != rootV) {
            if (rank[rootU] < rank[rootV])
                parent[rootU] = rootV;
            else if (rank[rootU] > rank[rootV])
                parent[rootV] = rootU;
            else {
                parent[rootV] = rootU;
                rank[rootU]++;
            }
        }
    }
};

// Kruskal's Algorithm function
void kruskalMST(vector<Edge> &edges, int V) {
    sort(edges.begin(), edges.end(), compareEdges);
    DisjointSet ds(V);

    vector<Edge> mst;
    int totalWeight = 0;

    for (Edge &edge : edges) {
        if (ds.find(edge.src) != ds.find(edge.dest)) {
            ds.unionSet(edge.src, edge.dest);
            mst.push_back(edge);
            totalWeight += edge.weight;
        }
        if ((int)mst.size() == V - 1) break;
    }

    cout << "Minimum Spanning Tree Edges:\n";
    for (Edge &edge : mst)
        cout << edge.src << " - " << edge.dest << " : " << edge.weight << endl;

    cout << "Total Weight of MST: " << totalWeight << endl;
}

int main() {
    int V = 6;
    vector<Edge> edges = {
        {0, 1, 4}, {0, 2, 4}, {1, 2, 2}, {1, 3, 6},
        {2, 3, 8}, {2, 4, 5}, {3, 4, 9}, {3, 5, 10},
        {4, 5, 3}
    };

    kruskalMST(edges, V);
    return 0;
}
Output
Minimum Spanning Tree Edges:
1 - 2 : 2
4 - 5 : 3
0 - 1 : 4
2 - 4 : 5
1 - 3 : 6
Total Weight of MST: 20

Definition of Prim's Algorithm

Prim's Algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) of a
connected, weighted, and undirected graph. It constructs the MST by starting from an
arbitrary node and iteratively adding the smallest edge that connects to an unvisited node.
Characteristics of Prim's Algorithm

• Starts from any vertex and expands the MST.


• Uses a priority queue (min-heap) to efficiently find the smallest edge.
• Expands in a vertex-by-vertex manner instead of an edge-by-edge manner like
Kruskal’s Algorithm.
• Ensures that no cycles are formed in the MST.

2. Working of Prim's Algorithm


Step-by-Step Explanation
1. Start with any vertex and mark it as part of the MST.
2. Push all its edges into a min-heap (priority queue).
3. Pick the smallest edge that connects a visited node to an unvisited node.
4. Add this edge to the MST and mark the new vertex as visited.
5. Repeat the process until all vertices are included in the MST.

4. Advantages and Disadvantages of Prim's Algorithm
4.1. Advantages
✔ Works well for dense graphs (where edges ≈ vertices²).
✔ Ensures optimal solution for MST.
✔ Uses fewer edge comparisons than Kruskal’s Algorithm.
4.2. Disadvantages
✘ Slower for sparse graphs compared to Kruskal’s Algorithm.
✘ Implementation complexity is higher due to priority queues.
✘ Requires extra data structures (priority queue, adjacency list) for efficiency.
5. Applications of Prim's Algorithm
✔ Network Design – Laying out electrical grids, communication networks, and pipelines.
✔ Graph Theory Problems – Finding an MST in weighted graphs.
✔ Cluster Analysis – Identifying relationships in datasets.
✔ Image Processing – Used in segmentation and pattern recognition.
✔ Flight Route Optimization – Finding the cheapest flight connections.

Prim's Algorithm Implementation in C++ (Using Min-Heap)


#include <iostream>
#include <vector>
#include <queue>
#include <climits>

using namespace std;

typedef pair<int, int> pii;

// Prim's Algorithm function
void primMST(vector<vector<pii>> &graph, int V) {
    vector<int> key(V, INT_MAX);
    vector<bool> inMST(V, false);
    vector<int> parent(V, -1);

    priority_queue<pii, vector<pii>, greater<pii>> pq;

    key[0] = 0;
    pq.push({0, 0});

    while (!pq.empty()) {
        int u = pq.top().second;
        pq.pop();

        if (inMST[u]) continue;  // skip stale entries left in the heap
        inMST[u] = true;

        for (auto &[v, weight] : graph[u]) {
            if (!inMST[v] && weight < key[v]) {
                key[v] = weight;
                pq.push({key[v], v});
                parent[v] = u;
            }
        }
    }

    cout << "Minimum Spanning Tree Edges:\n";
    for (int i = 1; i < V; i++) {
        cout << parent[i] << " - " << i << " : " << key[i] << endl;
    }
}

// The graph is undirected, so each edge must be stored in both directions
void addEdge(vector<vector<pii>> &graph, int u, int v, int w) {
    graph[u].push_back({v, w});
    graph[v].push_back({u, w});
}

int main() {
    int V = 6;
    vector<vector<pii>> graph(V);

    addEdge(graph, 0, 1, 4);
    addEdge(graph, 0, 2, 4);
    addEdge(graph, 1, 2, 2);
    addEdge(graph, 1, 3, 6);
    addEdge(graph, 2, 3, 8);
    addEdge(graph, 2, 4, 5);
    addEdge(graph, 3, 4, 9);
    addEdge(graph, 3, 5, 10);
    addEdge(graph, 4, 5, 3);

    primMST(graph, V);
    return 0;
}
Output
Minimum Spanning Tree Edges:
0 - 1 : 4
1 - 2 : 2
1 - 3 : 6
2 - 4 : 5
4 - 5 : 3

10 Greedy Algorithm and its example

A Greedy Algorithm is a problem-solving technique where we make a sequence of choices,


each of which looks best at the moment (locally optimal), without worrying about the
global consequences.
The idea is:
• At every step, pick the best immediate option without reconsidering previous
choices.
• This process builds a solution step-by-step, aiming for a globally optimal solution.
Key Principle: "Take what looks best now!"
However, Greedy Algorithms do not always guarantee the globally optimal solution for
every problem. They work only if the problem has the Greedy Choice Property and Optimal
Substructure.
• Greedy Choice Property: A global optimum can be arrived at by choosing a local
optimum.
• Optimal Substructure: An optimal solution to the problem contains optimal solutions
to its subproblems.
Thus, greedy algorithms are simple, fast, and efficient — but they are not always correct for
every problem.

Applications of Greedy Algorithm


• Job Scheduling Problems (Minimize total completion time)
• Activity Selection Problem (Choose maximum number of activities)
• Minimum Spanning Tree (Prim’s and Kruskal’s Algorithms)
• Huffman Coding (Lossless data compression)
• Fractional Knapsack Problem (Maximize profit with weight limit)
• Dijkstra’s Algorithm (Shortest path in graphs)

Advantages of Greedy Algorithm


• Simple and easy to implement
• Fast execution (lower time complexity compared to dynamic programming)
• Often gives correct results for optimization problems like MST, scheduling, Huffman
coding.
• Memory efficient (no need to store large tables like in dynamic programming).

Disadvantages of Greedy Algorithm


• Does not always guarantee an optimal solution for all problems.
• Cannot backtrack or correct a wrong decision made earlier.
• Requires careful analysis to check if a greedy approach will work.
• Works only when the problem has Greedy Choice Property and Optimal
Substructure.

C++ Program Example: Activity Selection Problem using Greedy Algorithm


Problem Statement:
Given n activities with their start and finish times, select the maximum number of activities
that can be performed by a single person, assuming that a person can only work on a single
activity at a time.

#include<iostream>
#include<algorithm>
#include<vector>
using namespace std;

// Activity structure
struct Activity {
    int start, finish;
};

// Compare activities based on finish time
bool activityCompare(Activity a, Activity b) {
    return a.finish < b.finish;
}

// Function to perform activity selection
void activitySelection(vector<Activity>& activities) {
    int n = activities.size();

    // Sort activities based on finish time
    sort(activities.begin(), activities.end(), activityCompare);

    cout << "Selected activities are:\n";

    // The first activity always gets selected
    cout << "(" << activities[0].start << ", " << activities[0].finish << ")\n";
    int lastFinishTime = activities[0].finish;

    // Consider the rest of the activities
    for (int i = 1; i < n; i++) {
        if (activities[i].start >= lastFinishTime) {
            cout << "(" << activities[i].start << ", " << activities[i].finish << ")\n";
            lastFinishTime = activities[i].finish;
        }
    }
}

int main() {
    vector<Activity> activities = { {5, 9}, {1, 2}, {3, 4}, {0, 6}, {5, 7}, {8, 9} };

    activitySelection(activities);

    return 0;
}

Output
Selected activities are:
(1, 2)
(3, 4)
(5, 7)
(8, 9)

How the Greedy Algorithm Works Here:


1. Sort all activities by finish time.
2. Pick the activity that finishes earliest.
3. Skip all activities that overlap with the picked activity.
4. Repeat the process.
