Data Structures
The four common linear data structures are:
1. Array.
2. Linked list.
3. Stack.
4. Queue.
In this article, we will learn about Abstract Data Types (ADTs).
But before understanding what an ADT is, let us consider
different built-in data types provided by programming
languages. Data types such as int, float, double, and long are
built-in types that allow us to perform basic operations like
addition, subtraction, division, and multiplication. However,
there are scenarios where we need custom operations for
different data types. These operations are defined based on
specific requirements and are tailored as needed. To address
such needs, we can create data structures along with their
operations, which are known as Abstract Data Types (ADTs).
For example, we use primitive values like int, float, and char with
the understanding that operations can be performed on these data
types without any knowledge of their implementation details. ADTs
work similarly: they define what operations are possible without
detailing how those operations are implemented.
Defining ADTs: Examples
Now, let's understand three common ADTs: the List ADT, the Stack
ADT, and the Queue ADT.
1. List ADT
View of list
The List ADT needs to store the required data in sequence and
should support the following operations:
get(): Return an element from the list at any given position.
insert(): Insert an element at any position in the list.
remove(): Remove the first occurrence of any element from a
non-empty list.
removeAt(): Remove the element at a specified location from
a non-empty list.
replace(): Replace an element at any position with another
element.
size(): Return the number of elements in the list.
isEmpty(): Return true if the list is empty; otherwise, return
false.
isFull(): Return true if the list is full; otherwise, return false.
2. Stack ADT
View of stack
The Stack ADT stores elements in LIFO (Last-In-First-Out) order:
insertions and deletions happen only at one end, called the top. It
should support the following operations:
push(): Insert an element at the top of the stack.
pop(): Remove and return the top element, if the stack is not empty.
peek(): Return the top element without removing it, if the stack is
not empty.
size(): Return the number of elements in the stack.
isEmpty(): Return true if the stack is empty; otherwise, return false.
3. Queue ADT
The Queue ADT follows a design similar to the Stack ADT, but the
order of insertion and deletion changes to FIFO. Elements are
inserted at one end (called the rear) and removed from the other
end (called the front). It should support the following operations:
enqueue(): Insert an element at the end of the queue.
dequeue(): Remove and return the first element of the queue,
if the queue is not empty.
peek(): Return the element of the queue without removing it,
if the queue is not empty.
size(): Return the number of elements in the queue.
isEmpty(): Return true if the queue is empty; otherwise,
return false.
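The Queue ADT operations above can be sketched in C as follows. This is a minimal array-backed sketch for illustration: the fixed capacity MAX, the circular-index scheme, and the function names (q_enqueue, q_dequeue, and so on) are assumptions of this example, not part of the ADT definition itself.

```c
#include <stdbool.h>

#define MAX 100                /* illustrative fixed capacity */

static int queue_data[MAX];
static int queue_front = 0;    /* index of the first element */
static int queue_count = 0;    /* number of stored elements  */

bool q_isEmpty(void) { return queue_count == 0; }
bool q_isFull(void)  { return queue_count == MAX; }
int  q_size(void)    { return queue_count; }

/* enqueue(): insert an element at the rear of the queue */
bool q_enqueue(int item) {
    if (q_isFull()) return false;
    queue_data[(queue_front + queue_count) % MAX] = item;
    queue_count++;
    return true;
}

/* dequeue(): remove and return the front element; the caller must
   check q_isEmpty() first, mirroring the ADT precondition */
int q_dequeue(void) {
    int item = queue_data[queue_front];
    queue_front = (queue_front + 1) % MAX;
    queue_count--;
    return item;
}

/* peek(): return the front element without removing it */
int q_peek(void) { return queue_data[queue_front]; }
```

Note that the ADT only specifies *what* these functions do; an implementation backed by a linked list would expose the same interface.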
1.3) Overview of Time and Space Complexity Analysis for Linear Data Structures
Time and space complexity analysis for linear data structures measures
how much time and memory an algorithm takes to run. It is important for
designing software, building websites, and analyzing large datasets.
Time complexity
The amount of time it takes an algorithm to run.
The number of operations, such as comparisons, required to complete
the algorithm.
The worst-case time complexity is the maximum time taken over all
inputs of a given size.
The average-case time complexity is the expected time over all inputs
of a given size.
The best-case time complexity is the minimum time taken over all
inputs of a given size.
Space complexity
The amount of memory an algorithm uses.
This includes the fixed amount of space required by the algorithm and
the variable amount of space that depends on the input size.
Calculating time and space complexity
1. Identify the basic operation in the algorithm.
2. Count how many times the basic operation is performed.
3. Express the count as a function of the input size.
4. Simplify the expression and identify the dominant term.
5. Express the time complexity using Big-O notation.
In Linear Search, we iterate over all the elements of the array and
check whether the current element is equal to the target element. If
an element equals the target, we return its index. Otherwise, if no
element matches, we return -1 to indicate that the element was not
found. Linear search is also known as sequential search.
For example: Consider the array arr[] = {10, 50, 30, 70, 80,
20, 90, 40} and key = 30
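The scan described above can be written as a short C function; this is a straightforward sketch of the technique, with the function name linear_search chosen for the example.

```c
/* Linear search: scan the array from left to right and return the
   index of the first element equal to key, or -1 if key is absent. */
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;   /* found: report the position */
    }
    return -1;          /* key not present in the array */
}
```

For the example above, searching for 30 in {10, 50, 30, 70, 80, 20, 90, 40} returns index 2.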
Binary Search
1. Initial Setup: Binary search begins by looking at the middle element of the
sorted array.
2. Comparison:
o If the middle element matches the target value, the search is complete.
o If the middle element is greater than the target, the target must be in
the left half of the array (since the array is sorted). So, the search
continues in the left half.
o If the middle element is less than the target, the target must be in the
right half of the array, and the search continues there.
3. Repeat: This process repeats, halving the search range each time, until the
element is found or the search range is empty (indicating the element is not
in the array).
Example:
Given a sorted array: [2, 5, 8, 12, 15, 19, 25, 32, 37, 40] and
the target element 15.
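The three steps above translate directly into an iterative C function; this is a standard sketch, with the name binary_search chosen for the example.

```c
/* Iterative binary search on a sorted array: halve the search range
   each step. Returns the index of target, or -1 if not present. */
int binary_search(const int arr[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of low + high */
        if (arr[mid] == target)
            return mid;        /* match: search complete */
        else if (arr[mid] > target)
            high = mid - 1;    /* target must be in the left half */
        else
            low = mid + 1;     /* target must be in the right half */
    }
    return -1;                 /* search range empty: not found */
}
```

For the sorted array above, searching for 15 returns index 4.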
1. Time Complexity:
o Best case: O(1) — The target is found in the first
comparison (if it's the middle element).
o Average case: O(log n) — With each iteration, the
search space is halved, so the time complexity grows logarithmically
with the size of the array.
o Worst case: O(log n) — The algorithm may need to
make log n comparisons to exhaust the search space.
2. Space Complexity:
o Space Complexity: O(1) — Binary search operates in
constant space, as it only requires a few variables to track the low,
high, and mid indices.
Sorted Array/List: The array must be sorted before applying binary search.
If the array is not sorted, you would need to sort it first, which would take
O(n log n) time.
Efficient Search Space Reduction: Binary search reduces the search space
by half each time, which is why it is much more efficient than linear search
for large datasets.
Advantages:
Efficient for Large Datasets: With a time complexity of O(log n),
binary search is very efficient compared to linear search,
especially for large datasets.
Constant Space: It uses O(1) space, making it very memory-efficient.
Disadvantages:
Requires Sorted Data: Binary search works only on sorted arrays, so
sorting the data first (or keeping it sorted) adds overhead.
Bubble Sort
Bubble Sort is a simple comparison-based sorting algorithm that
repeatedly compares adjacent elements and swaps them if they are
out of order. It works as follows:
1. Iterate through the list: Starting at the first element, compare the current
element with the next element.
2. Swap if necessary: If the current element is greater than the next one (for
ascending order), swap the two elements.
3. Repeat: Continue this process for the entire list. After each pass, the largest
element "bubbles" to the correct position at the end of the list.
4. Optimization: If during a pass, no swaps are made, the list is already sorted,
and the algorithm can terminate early.
Example: Consider the array [5, 3, 8, 4, 2].
1. First Pass:
o Compare 5 and 3, swap → [3, 5, 8, 4, 2]
o Compare 5 and 8, no swap → [3, 5, 8, 4, 2]
o Compare 8 and 4, swap → [3, 5, 4, 8, 2]
o Compare 8 and 2, swap → [3, 5, 4, 2, 8]
o After the first pass, the largest element (8) is in its correct position at
the end of the list.
2. Second Pass:
o Compare 3 and 5, no swap → [3, 5, 4, 2, 8]
o Compare 5 and 4, swap → [3, 4, 5, 2, 8]
o Compare 5 and 2, swap → [3, 4, 2, 5, 8]
o After the second pass, the second-largest element (5) is in its correct
position.
3. Third Pass:
o Compare 3 and 4, no swap → [3, 4, 2, 5, 8]
o Compare 4 and 2, swap → [3, 2, 4, 5, 8]
o After the third pass, the third-largest element (4) is in its correct
position.
4. Fourth Pass:
o Compare 3 and 2, swap → [2, 3, 4, 5, 8]
o The list is now sorted.
Time Complexity:
Best case: O(n) — the array is already sorted, and the early-exit
optimization stops after a single pass.
Average and worst case: O(n^2) — every pair of adjacent elements may
need to be compared and swapped.
Space Complexity:
Space Complexity: O(1) — Bubble Sort is an in-place sorting
algorithm, meaning it doesn't require additional storage proportional to the
input size.
Advantages:
Simple to understand and implement; in-place and stable; detects an
already-sorted list early.
Disadvantages:
Its O(n^2) running time makes it impractical for large datasets.
Selection Sort is a simple and intuitive sorting algorithm. It repeatedly selects the
smallest (or largest) element from the unsorted portion of the array and swaps it
with the element at the beginning of the unsorted portion. This process is repeated
until the entire array is sorted.
Example:
Step-by-step Process:
Consider the array [25, 12, 22, 64].
First Pass:
The unsorted portion is the entire array: [25, 12, 22, 64].
Find the smallest element in this subarray:
o Compare 25 with 12; 12 is smaller.
o Compare 12 with 22; 12 is smaller.
o Compare 12 with 64; 12 is still smaller.
o The smallest element is 12. Swap it with 25, giving [12, 25, 22, 64].
1. Time Complexity:
o Best, Average, and Worst Case: O(n^2) — Selection Sort
always compares every element with every other element in the
unsorted portion, resulting in a quadratic number of comparisons.
Best Case: Even if the array is already sorted, Selection Sort
still performs O(n^2) comparisons.
Worst Case: The worst case happens when the array is sorted
in reverse order, which still requires O(n^2) comparisons.
2. Space Complexity:
o Space Complexity: O(1) — Selection Sort is an in-place
sorting algorithm, meaning it requires only a constant amount of
additional space for the temporary variable used in swapping.
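The select-and-swap process described above can be sketched in C as follows (the function name selection_sort is chosen for the example).

```c
/* Selection sort: repeatedly select the smallest element of the
   unsorted portion and swap it to the front of that portion. */
void selection_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_idx])
                min_idx = j;   /* remember the smallest seen so far */
        }
        if (min_idx != i) {    /* swap it into position i */
            int tmp = arr[i];
            arr[i] = arr[min_idx];
            arr[min_idx] = tmp;
        }
    }
}
```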
Insertion Sort is a simple comparison-based sorting algorithm that builds the final
sorted array one element at a time. It is much like sorting playing cards in your
hands, where you take one card at a time and place it in the correct position relative
to the cards already sorted.
1. Start with the second element (since a single element is trivially sorted).
2. Compare the current element with the elements in the sorted portion of
the array (to its left).
3. Shift all elements that are greater than the current element one position to
the right to make space for the current element.
4. Insert the current element into its correct position.
5. Repeat this process for all the elements in the array.
Example:
Step-by-step Process:
Initial Array:
[5, 2, 9, 1, 5, 6]
We will start with the second element (index 1), because the first element is
trivially considered sorted.
First Pass (i = 1):
Current element: 2
Compare 2 with 5 (the element to its left).
o 2 < 5, so shift 5 to the right.
o The array now looks like: [5, 5, 9, 1, 5, 6].
Insert 2 into its correct position: Place 2 at the start.
The array becomes: [2, 5, 9, 1, 5, 6].
Second Pass (i = 2):
Current element: 9
Compare 9 with 5 (the element to its left).
o 9 > 5, no shift needed.
Insert 9: It's already in the correct position.
The array remains: [2, 5, 9, 1, 5, 6].
Third Pass (i = 3):
Current element: 1
Compare 1 with 9 (shift 9 right).
o The array becomes: [2, 5, 9, 9, 5, 6].
Compare 1 with 5 (shift 5 right).
o The array becomes: [2, 5, 5, 9, 5, 6].
Compare 1 with 2 (shift 2 right).
o The array becomes: [2, 2, 5, 9, 5, 6].
Insert 1 at the start.
The array becomes: [1, 2, 5, 9, 5, 6].
Fourth Pass (i = 4):
Current element: 5
Compare 5 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 9, 9, 6].
Compare 5 with 5 (no shift needed).
Insert 5 after the first 5.
The array becomes: [1, 2, 5, 5, 9, 6].
Fifth Pass (i = 5):
Current element: 6
Compare 6 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 5, 9, 9].
Compare 6 with 5 (no shift needed).
Insert 6 after the second 5.
The array becomes: [1, 2, 5, 5, 6, 9].
Final Sorted Array: [1, 2, 5, 5, 6, 9]
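The compare-shift-insert steps walked through above can be sketched in C as follows (the function name insertion_sort is chosen for the example).

```c
/* Insertion sort: grow a sorted prefix one element at a time,
   shifting larger elements one place right to make room. */
void insertion_sort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int current = arr[i];
        int j = i - 1;
        /* shift every element greater than current one place right */
        while (j >= 0 && arr[j] > current) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = current;  /* drop current into its position */
    }
}
```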
1. Time Complexity:
o Best case: O(n) — This happens when the array is already
sorted. In this case, only one comparison is made for each element,
and no shifting occurs.
o Worst case: O(n^2) — This occurs when the array is
sorted in reverse order. Each element needs to be compared and
shifted to the beginning.
o Average case: O(n^2) — On average, the algorithm will
need to perform about n^2/2 comparisons and shifts.
2. Space Complexity:
o Space Complexity: O(1) — Insertion sort is an in-place
sorting algorithm, meaning it doesn't require any additional space
besides the input array.
When to use Insertion Sort:
Small datasets: It is efficient for small arrays where the overhead of more
complex algorithms isn't necessary.
Partially sorted arrays: If the array is already partially sorted, Insertion Sort
can be much faster than other algorithms.
Memory-constrained environments: It uses only O(1) extra space,
so it is useful when memory is limited.
Chapter 2: Linked Lists
A linked list starts with a head node which points to the first
node. Every node consists of data which holds the actual data
(value) associated with the node and a next pointer which holds
the memory address of the next node in the linked list. The last
node is called the tail node in the list which points
to null indicating the end of the list.
Singly linked lists contain two "buckets" in one node; one bucket
holds the data and the other bucket holds the address of the next
node of the list. Traversals can be done in one direction only as
there is only a single link between two nodes of the same list.
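The two "buckets" of a singly linked list node map directly onto a C struct. The helper names make_node and list_length below are illustrative choices for this sketch.

```c
#include <stdlib.h>

/* A singly linked list node: one "bucket" for the data and one
   for the address of the next node. */
struct Node {
    int data;
    struct Node *next;
};

/* Illustrative helper: allocate a node holding the given value. */
struct Node *make_node(int value) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = value;
    n->next = NULL;
    return n;
}

/* Traverse in one direction, following next pointers until NULL
   (the tail), and count the nodes along the way. */
int list_length(const struct Node *head) {
    int count = 0;
    for (const struct Node *p = head; p != NULL; p = p->next)
        count++;
    return count;
}
```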
Doubly Linked Lists
Doubly linked lists add a third "bucket" to each node: along with the
data and the next pointer, every node stores a pointer to the
previous node, so traversal can be done in both directions.
Circular Linked Lists
Circular linked lists can exist in both singly linked list and doubly
linked list form.
Since the last node and the first node of the circular linked list are
connected, the traversal in this linked list will go on forever until it
is broken.
Adding a new node to a linked list is a more-than-one-step activity.
We shall learn this with diagrams here. First, create a node using
the same structure and find the location where it has to be
inserted.
The new node should first point to the node on its right −
NewNode.next -> RightNode;
Then the node on the left should point to the new node −
LeftNode.next -> NewNode;
This will put the new node in the middle of the two. The new list
should look like this −
Insertion at Beginning
Algorithm
1. START
2. Create a node to store the data
3. Check if the list is empty
4. If the list is empty, add the data to the node and
assign the head pointer to it.
5. If the list is not empty, add the data to a node and link to the
current head. Assign the head to the newly added node.
6. END
Insertion at Ending
1. START
2. Create a new node and assign the data
3. Find the last node
4. Point the last node to new node
5. END
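The two algorithms above can be sketched in C. Both functions return the (possibly new) head pointer; the names insert_begin and insert_end are illustrative choices for this sketch.

```c
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

/* Insertion at beginning: the new node links to the current head,
   then the head pointer moves to the new node. */
struct Node *insert_begin(struct Node *head, int value) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = value;
    n->next = head;   /* works for empty and non-empty lists alike */
    return n;         /* new head */
}

/* Insertion at end: walk to the last node and point it to the
   new node. */
struct Node *insert_end(struct Node *head, int value) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = value;
    n->next = NULL;
    if (head == NULL)
        return n;                 /* empty list: new node is the head */
    struct Node *p = head;
    while (p->next != NULL)       /* find the last node */
        p = p->next;
    p->next = n;
    return head;
}
```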
Insertion at a Given Position
Algorithm
1. START
2. Create a new node and assign data to it
3. Iterate until the node at the desired position is found
4. Point the new node to the next node, and the previous node to
the new node
5. END
Deletion is also a more than one step process. We shall learn with
pictorial representation. First, locate the target node to be
removed, by using searching algorithms.
The left (previous) node of the target node now should point to
the next node of the target node −
Deletion at Beginning
Algorithm
1. START
2. Assign the head pointer to the next node in the list
3. END
Deletion at Ending
Algorithm
1. START
2. Iterate until you find the second-to-last node in the list.
3. Assign NULL to the second-to-last node's next pointer.
4. END
Deletion at a Given Position
Algorithm
1. START
2. Iterate until you find the node at the given position in the list.
3. Assign the node following the current node to the previous
node's next pointer, unlinking the current node.
4. END
Reversing a Linked List
We have to make sure that the last node is not lost. So
we'll keep a temp node, which acts like the head node,
pointing to the last node. Now, we shall make all the nodes on
the left point to their previous nodes one by one.
Except the node (first node) pointed by the head node, all nodes
should point to their predecessor, making them their new
successor. The first node will point to NULL.
We'll make the head node point to the new first node by using the
temp node.
Algorithm
1. START
2. We use three pointers to perform the reversing:
prev, current, and next.
3. Save the current node's next pointer, point the current node's
next to prev, then move prev and current one step forward.
4. Iteratively repeat step 3 for all the nodes in the list.
5. Assign head to the prev node.
6. END
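The pointer dance described above fits in a few lines of C; the name reverse_list is an illustrative choice for this sketch.

```c
#include <stddef.h>

struct Node {
    int data;
    struct Node *next;
};

/* Iterative reversal with the three pointers named above:
   prev, the current node (head), and next. */
struct Node *reverse_list(struct Node *head) {
    struct Node *prev = NULL;
    while (head != NULL) {
        struct Node *next = head->next;  /* save the rest of the list */
        head->next = prev;               /* point current node backwards */
        prev = head;                     /* advance prev */
        head = next;                     /* advance current */
    }
    return prev;   /* prev is the new head */
}
```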
Search Operation
Algorithm
1. START
2. If the list is not empty, iteratively check if the list
contains the key
3. If the key element is found, return its position; otherwise,
report an unsuccessful search
4. END
The traversal operation walks through all the elements of the list
in an order and displays the elements in that order.
Algorithm
1. START
2. While the list is not empty and did not reach the end of the list,
print the data in each node
3. END
Linked List: Linked lists are less rigid in their storage structure,
and elements are usually not stored in contiguous locations;
hence each element must carry an explicit reference (pointer)
to the next element.
Linked-List representation
1. Dynamic Memory Allocation
Linked lists are often used in dynamic memory allocation and management,
where the size of data structures needs to change at runtime.
2. Implementation of Other Data Structures
Linked lists are used to implement stacks, queues, and sparse matrices.
3. Undo Mechanism in Applications
Many applications (e.g., text editors or Photoshop) use linked lists to store
states for the "Undo" and "Redo" operations.
Each node represents a state of the document, and traversal allows going
back and forth between changes.
4. Dynamic Data Storage
Linked lists are suitable for storing data where the size is unknown or
changes frequently.
Example:
File systems like FAT (File Allocation Table) use linked lists to represent file
blocks. Each block contains a pointer to the next block, enabling sequential
storage of file data.
7. Networking
Doubly and circular linked lists are useful in memory-critical systems where
efficient memory utilization is a priority.
A doubly linked list is used to manage the history of visited web pages,
allowing users to navigate forward and backward efficiently.
11. Music and Media Players
Circular linked lists are used to implement playlists in media players, where
the last song is connected to the first, allowing continuous playback.
Linked lists (particularly adjacency lists) are used in graph algorithms for
efficient representation and traversal of nodes.
A stack in data structures is a linear data structure that follows the
LIFO (Last-In-First-Out) principle and allows insertion and deletion
operations from only one end of the stack, called the top.
A stack can be implemented using contiguous memory (an array) or
non-contiguous memory (a linked list). Stacks play a vital role in
many applications.
You can only see the top, i.e., the top-most book, namely 40, which is kept
top of the stack.
If you want to insert a new book, namely 50, you must first update
the top and then insert the new book.
And if you want to access any other book other than the topmost book that
is 40, you first remove the topmost book from the stack, and then the top
will point to the next topmost book.
The following are some operations that are implemented on the stack.
Push Operation
The push operation inserts a new element into the stack. Since
elements can only enter at one end, the new element is always
inserted at the top of the stack.
Pop Operation
The pop operation refers to removing an element from the stack.
Since elements can only leave from one end, removing the topmost
element of the stack is termed the pop operation.
Peek Operation
Peek operation refers to retrieving the topmost element in the stack without
removing it from the collections of data elements.
isFull()
isEmpty()
isFull()
The following is the algorithm of the isFull() function:
begin
   if top equals maxsize - 1
      return true
   else
      return false
   end if
end

bool isFull() {
   if (top == maxsize - 1)
      return true;
   else
      return false;
}
isEmpty()
The following is the algorithm of the isEmpty() function:
begin
   if top equals -1
      return true
   else
      return false
   end if
end

bool isEmpty() {
   if (top == -1)
      return true;
   else
      return false;
}
Push Operation
Algorithm:
Step 1: Check if the stack is full.
Step 2: If the stack is full, produce an error and exit.
Step 3: If the stack is not full, increment top to point to the next
empty space.
Step 4: Add the data element to the stack location where top is
pointing.
Step 5: Success

void push(int item) {
   if (!isFull()) {
      top = top + 1;
      stack[top] = item;
   } else {
      printf("stack is full");
   }
}
Pop Operation
Algorithm:
Step 1: Check if the stack is empty.
Step 2: If the stack is empty, produce an error and exit.
Step 3: If the stack is not empty, access the data element at which
top is pointing.
Step 4: Decrease the value of top by 1.
Step 5: Success

int pop() {
   int item;
   if (!isEmpty()) {
      item = stack[top];
      top = top - 1;
      return item;
   } else {
      printf("stack is empty");
      return -1;
   }
}
Peek Operation
Algorithm:
begin
   return stack[top]
end

int peek() {
   return stack[top];
}
You can perform the implementation of stacks in data structures using two
data structures that are an array and a linked list.
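Putting the operations above together, here is a compact array-based sketch. The capacity MAXSIZE of 100 is an arbitrary choice for the example.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAXSIZE 100        /* illustrative fixed capacity */

static int stack[MAXSIZE];
static int top = -1;       /* -1 means the stack is empty */

bool isEmpty(void) { return top == -1; }
bool isFull(void)  { return top == MAXSIZE - 1; }

void push(int item) {
    if (isFull()) { printf("stack is full\n"); return; }
    stack[++top] = item;   /* move top up, then store the item */
}

int pop(void) {
    if (isEmpty()) { printf("stack is empty\n"); return -1; }
    return stack[top--];   /* return the item, then move top down */
}

int peek(void) { return stack[top]; }
```

A linked-list implementation would expose the same five operations, trading the fixed capacity for per-node allocation.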
1. Expression Evaluation:
Infix to Postfix Conversion: The stack helps convert infix expressions (like
A + B * C) to postfix notation (A B C * +) by handling operator
precedence and associativity.
Postfix Expression Evaluation: After conversion to postfix, the expression
can be evaluated using a stack. Operands are pushed onto the stack, and
when an operator is encountered, operands are popped off, the operation is
performed, and the result is pushed back onto the stack.
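The postfix-evaluation procedure just described can be sketched with a small array-backed stack. To keep the sketch short it assumes single-digit operands separated by spaces; the name eval_postfix is an illustrative choice.

```c
#include <ctype.h>

/* Evaluate a postfix expression: push operands; on an operator,
   pop two operands, apply it, and push the result back.
   Handles single-digit operands only (a simplifying assumption). */
int eval_postfix(const char *expr) {
    int st[64];
    int sp = 0;                      /* stack pointer */
    for (const char *p = expr; *p; p++) {
        if (isdigit((unsigned char)*p)) {
            st[sp++] = *p - '0';     /* operand: push its value */
        } else if (*p == ' ') {
            continue;                /* skip separators */
        } else {
            int b = st[--sp];        /* right operand */
            int a = st[--sp];        /* left operand  */
            switch (*p) {
                case '+': st[sp++] = a + b; break;
                case '-': st[sp++] = a - b; break;
                case '*': st[sp++] = a * b; break;
                case '/': st[sp++] = a / b; break;
            }
        }
    }
    return st[sp - 1];               /* final result is on top */
}
```

For instance, the infix expression 2 + 3 * 4 becomes the postfix string "2 3 4 * +", which evaluates to 14.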
2. Backtracking:
Example: In a maze, you might explore paths by moving forward (pushing each
decision onto the stack). If you hit a dead end, you backtrack by popping the stack
to find the previous decision point.
3. Reversing a List:
A stack can be used to reverse a list because stacks operate on a Last In, First Out
(LIFO) principle, meaning the last element added will be the first one to be
removed.
Algorithm: Push each element of the list onto a stack, and then pop all
elements from the stack, which will result in the reversed order.
UNIT-4
Queues:-
Properties of a Queue:-
3. Two Ends: Elements are inserted at one end (the rear) and
removed from the other end (the front).
4. Dynamic Size: The size of a queue may grow or shrink as elements are
added or removed.
Operations on a Queue:-
1. Enqueue (Insertion)
2. Dequeue (Deletion)
3. Peek (Front)
4. isEmpty()
5. isFull()
Types of Queues:-
1. Simple Queue – Insertion at the rear and deletion from the front.
2. Circular Queue – The rear wraps around when it reaches the end of the array.
3. Priority Queue – Elements are served according to priority rather than
arrival order.
4. Deque (Double-Ended Queue) – Insertion and deletion can be done from both
ends.
A queue is a linear data structure that follows the First In, First Out (FIFO)
principle. It is characterized by two primary operations: enqueue adds an
element to the rear end of the queue, and dequeue removes an element
from the front end. This section shows how to implement a queue using a
linked list.
What is a Queue?
A queue is a linear data structure with both ends open for operations and
follows the principle of First In, First Out (FIFO).
The FIFO principle states that the first element getting inside a queue (i.e.,
enqueued) has to be the first element that gets out of the queue (i.e.,
dequeued). To better understand a queue, think of a line to board a bus. The
first person in the line will be the first person to board the bus.
Take a look at the image below for reference.
What is a Linked List?
A linked list is a node-based linear data structure used to store data
elements. Each node in a linked list is made up of two key components,
namely data and next, which store the data and the address of the next
node in the linked list, respectively.
A single node in a linked list appears in the image below.
A linked list with multiple nodes typically looks like in the image below.
In the above image, 'HEAD' refers to the first node of the linked list, and
'Null' in Node 3's 'next' pointer/reference indicates that there is no additional
node following it, meaning the linked list ends at the third node.
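A queue built on such nodes enqueues at the rear node and dequeues at the front node, each in O(1) time. This is a sketch of that idea; the names QNode, ll_enqueue, and ll_dequeue are illustrative choices.

```c
#include <stdlib.h>

/* Queue on a singly linked list: enqueue at the rear, dequeue at
   the front, both in constant time. */
struct QNode {
    int data;
    struct QNode *next;
};

static struct QNode *q_front = NULL, *q_rear = NULL;

void ll_enqueue(int value) {
    struct QNode *n = malloc(sizeof(struct QNode));
    n->data = value;
    n->next = NULL;
    if (q_rear == NULL) {       /* empty queue: node is both ends */
        q_front = q_rear = n;
    } else {
        q_rear->next = n;       /* link behind the current rear */
        q_rear = n;
    }
}

/* Caller must ensure the queue is non-empty before calling. */
int ll_dequeue(void) {
    struct QNode *n = q_front;
    int value = n->data;
    q_front = n->next;
    if (q_front == NULL)        /* queue became empty */
        q_rear = NULL;
    free(n);
    return value;
}
```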
The above figure shows a queue of characters forming the English
word "HELLO". Since no deletion has been performed in the queue so
far, the value of front remains -1. However, the value of rear
increases by one every time an insertion is performed. After
inserting an element into the queue shown in the above figure, the
queue will look as follows: the value of rear becomes 5 while the
value of front remains the same.
After deleting an element, the value of front will increase from -1
to 0, and the queue will look as follows.
If the item is to be inserted as the first element in the list, in that case set
the value of front and rear to 0 and insert the element at the rear end.
Otherwise keep increasing the value of rear and insert each element one
by one having rear as the index.
C Function (front and rear are passed by address so that the
updates are visible to the caller):

void insert(int queue[], int max, int *front, int *rear, int item)
{
    if (*rear + 1 == max)
    {
        printf("overflow");
    }
    else
    {
        if (*front == -1 && *rear == -1)
        {
            *front = 0;
            *rear = 0;
        }
        else
        {
            *rear = *rear + 1;
        }
        queue[*rear] = item;
    }
}
Algorithm to delete an element from the queue
If the value of front is -1, or the value of front is greater than rear,
write an underflow message and exit.
Otherwise, return the item stored at the front index of the queue and
increase the value of front by one.
C Function

int delete(int queue[], int max, int *front, int *rear)
{
    int y;
    if (*front == -1 || *front > *rear)
    {
        printf("underflow");
        return -1;
    }
    else
    {
        y = queue[*front];
        if (*front == *rear)
        {
            *front = *rear = -1;
        }
        else
        {
            *front = *front + 1;
        }
        return y;
    }
}
Menu driven program to implement queue using array

#include<stdio.h>
#include<stdlib.h>
#define maxsize 5
void insert();
void delete();
void display();
int front = -1, rear = -1;
int queue[maxsize];
void main()
{
    int choice = 0;   /* initialized so the loop condition is defined */
    while (choice != 4)
    {
        printf("\n*************************Main Menu*****************************\n");
        printf("\n=================================================================\n");
        printf("\n1.insert an element\n2.Delete an element\n3.Display the queue\n4.Exit\n");
        printf("\nEnter your choice ?");
        scanf("%d", &choice);
        switch (choice)
        {
            case 1:
                insert();
                break;
            case 2:
                delete();
                break;
            case 3:
                display();
                break;
            case 4:
                exit(0);
                break;
            default:
                printf("\nEnter valid choice??\n");
        }
    }
}
void insert()
{
    int item;
    printf("\nEnter the element\n");
    scanf("%d", &item);
    if (rear == maxsize - 1)
    {
        printf("\nOVERFLOW\n");
        return;
    }
    if (front == -1 && rear == -1)
    {
        front = 0;
        rear = 0;
    }
    else
    {
        rear = rear + 1;
    }
    queue[rear] = item;
    printf("\nValue inserted ");
}
void delete()
{
    int item;
    if (front == -1 || front > rear)
    {
        printf("\nUNDERFLOW\n");
        return;
    }
    else
    {
        item = queue[front];
        if (front == rear)
        {
            front = -1;
            rear = -1;
        }
        else
        {
            front = front + 1;
        }
        printf("\nvalue deleted ");
    }
}
void display()
{
    int i;
    if (rear == -1)
    {
        printf("\nEmpty queue\n");
    }
    else
    {
        printf("\nprinting values .....\n");
        for (i = front; i <= rear; i++)
        {
            printf("\n%d\n", queue[i]);
        }
    }
}
Output:

*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Value inserted

*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Value inserted

*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
value deleted

*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
90

*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
o Memory wastage: The figure shows how memory space is wasted
in the array representation of a queue: a queue of size 10
holding only 3 elements.
Queues play a critical role in both Breadth-First Search (BFS) and Scheduling
Algorithms, thanks to their FIFO (First-In, First-Out) property.
BFS is a graph traversal algorithm that explores all nodes at the current level
before moving to the next level. A queue helps maintain the correct order of
traversal.
Graph Traversal: BFS is used to explore all reachable nodes from a source node.
Shortest Path in Unweighted Graphs: Since BFS explores layer by layer, it finds
the shortest path (in terms of edges) from the source node.
Social Networking (Friend Suggestions): BFS helps find the shortest connection
path between users.
Web Crawling: A queue is used to visit web pages level by level.
Maze Solving: BFS finds the shortest path from start to finish in a maze.
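The level-by-level traversal described above can be sketched with an array-backed FIFO queue. The graph size N and the adjacency-matrix representation are assumptions made for this example.

```c
#define N 6   /* illustrative number of nodes */

/* BFS from `start`: writes the visit order into `order` and
   returns the number of reachable nodes. */
int bfs_order(int adj[N][N], int start, int order[N]) {
    int visited[N] = {0};
    int queue[N], front = 0, rear = 0;
    int count = 0;
    queue[rear++] = start;           /* enqueue the source node */
    visited[start] = 1;
    while (front < rear) {           /* while the queue is not empty */
        int u = queue[front++];      /* dequeue the oldest node */
        order[count++] = u;
        for (int v = 0; v < N; v++) {
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;   /* enqueue newly found neighbor */
            }
        }
    }
    return count;
}
```

Because nodes leave the queue in the order they were discovered, all nodes at distance k are visited before any node at distance k+1, which is exactly why BFS finds shortest paths in unweighted graphs.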
Queues are widely used in CPU scheduling, process scheduling, and task
scheduling to manage jobs in an orderly manner.
CPU Scheduling:
Round-Robin Scheduling: Each process is given a time slice and placed back into
the queue if not finished.
First Come, First Served (FCFS): The process that arrives first is executed first.
Job Scheduling in Operating Systems: Jobs are placed in a queue and processed
sequentially.
Conclusion:-
In BFS, queues help explore nodes level by level.
The FIFO principle makes queues ideal for both graph traversal (BFS) and
task scheduling in operating systems and networks.
4.4) Deques (Double-Ended Queues):-
What is a queue?
A queue is a data structure in which whatever comes first will go out first,
and it follows the FIFO (First-In-First-Out) policy. Insertion in the queue is
done from one end known as the rear end or the tail, whereas the deletion
is done from another end known as the front end or the head of the
queue.
The deque stands for Double Ended Queue. Deque is a linear data
structure where the insertion and deletion operations are performed from
both ends. We can say that deque is a generalized version of the queue.
o Insertion at front
o Insertion at rear
o Deletion at front
o Deletion at rear
We can also perform peek operations in the deque along with the
operations listed above. Through the peek operation, we can get the
front and rear elements of the deque without removing them. So, in
addition to the above operations, the following operations are also
supported in the deque -
Insertion at the front end
In this operation, the element is inserted from the front end of the queue.
Before implementing the operation, we first have to check whether the
queue is full or not. If the queue is not full, then the element can be inserted
from the front end by using the below conditions -
o If the queue is empty, both rear and front are initialized with 0. Now,
both will point to the first element.
o Otherwise, check the position of the front if the front is less than 1
(front < 1), then reinitialize it by front = n - 1, i.e., the last index of the
array.
Insertion at the rear end
In this operation, the element is inserted from the rear end of the queue.
Before implementing the operation, we first have to check again whether
the queue is full or not. If the queue is not full, then the element can be
inserted from the rear end by using the below conditions -
o If the queue is empty, both rear and front are initialized with 0. Now,
both will point to the first element.
o Otherwise, increment the rear by 1. If the rear is at last index (or size
- 1), then instead of increasing it by 1, we have to make it equal to 0.
Deletion at the front end
In this operation, the element is deleted from the front end of the queue.
Before implementing the operation, we first have to check whether the
queue is empty or not.
If the queue is empty, i.e., front = -1, it is the underflow condition, and we
cannot perform the deletion. If the queue is not empty, then the element can
be deleted from the front end by using the below conditions -
If the deque has only one element, set rear = -1 and front = -1.
Else if front is at the end (that means front = size - 1), set front = 0.
Else, increment front by 1.
Deletion at the rear end
In this operation, the element is deleted from the rear end of the queue.
Before implementing the operation, we first have to check whether the
queue is empty or not.
If the queue is empty, i.e., front = -1, it is the underflow condition, and we
cannot perform the deletion.
If the deque has only one element, set rear = -1 and front = -1.
Else if rear is at 0, set rear = size - 1; otherwise, decrement rear by 1.
Check full
The time complexity of all of the above operations of the deque is O(1), i.e.,
constant.
Applications of deque
Implementation of deque
Now, let's see the implementation of deque in C programming language.
#include <stdio.h>
#define size 5
int deque[size];
int f = -1, r = -1;

// insert_front function will insert the value from the front
void insert_front(int x)
{
    if((f==0 && r==size-1) || (f==r+1))
    {
        printf("Overflow");
    }
    else if((f==-1) && (r==-1))
    {
        f=r=0;
        deque[f]=x;
    }
    else if(f==0)
    {
        f=size-1;
        deque[f]=x;
    }
    else
    {
        f=f-1;
        deque[f]=x;
    }
}

// insert_rear function will insert the value from the rear
void insert_rear(int x)
{
    if((f==0 && r==size-1) || (f==r+1))
    {
        printf("Overflow");
    }
    else if((f==-1) && (r==-1))
    {
        f=r=0;   // both pointers must be initialized on the first insertion
        deque[r]=x;
    }
    else if(r==size-1)
    {
        r=0;
        deque[r]=x;
    }
    else
    {
        r++;
        deque[r]=x;
    }
}

// display function prints all the values of the deque.
void display()
{
    int i=f;
    printf("\nElements in a deque are: ");
    while(i!=r)
    {
        printf("%d ",deque[i]);
        i=(i+1)%size;
    }
    printf("%d",deque[r]);
}

// getfront function retrieves the first value of the deque.
void getfront()
{
    if((f==-1) && (r==-1))
        printf("Deque is empty");
    else
        printf("\nThe value of the element at front is: %d", deque[f]);
}

// getrear function retrieves the last value of the deque.
void getrear()
{
    if((f==-1) && (r==-1))
        printf("Deque is empty");
    else
        printf("\nThe value of the element at rear is %d", deque[r]);
}

// delete_front() function deletes the element from the front
void delete_front()
{
    if((f==-1) && (r==-1))
    {
        printf("Deque is empty");
    }
    else if(f==r)
    {
        printf("\nThe deleted element is %d", deque[f]);
        f=-1;
        r=-1;
    }
    else if(f==(size-1))
    {
        printf("\nThe deleted element is %d", deque[f]);
        f=0;
    }
    else
    {
        printf("\nThe deleted element is %d", deque[f]);
        f=f+1;
    }
}

// delete_rear() function deletes the element from the rear
void delete_rear()
{
    if((f==-1) && (r==-1))
    {
        printf("Deque is empty");
    }
    else if(f==r)
    {
        printf("\nThe deleted element is %d", deque[r]);
        f=-1;
        r=-1;
    }
    else if(r==0)
    {
        printf("\nThe deleted element is %d", deque[r]);
        r=size-1;
    }
    else
    {
        printf("\nThe deleted element is %d", deque[r]);
        r=r-1;
    }
}

int main()
{
    insert_front(20);
    insert_front(10);
    insert_rear(30);
    insert_rear(50);
    insert_rear(80);
    display();      // calling the display function to retrieve the values of the deque
    getfront();     // retrieve the value at the front end
    getrear();      // retrieve the value at the rear end
    delete_front();
    delete_rear();
    display();      // calling display function to retrieve values after deletion
    return 0;
}
Output:
Elements in a deque are: 10 20 30 50 80
The value of the element at front is: 10
The value of the element at rear is 80
The deleted element is 10
The deleted element is 80
Elements in a deque are: 20 30 50
UNIT-5
5.1)Introduction to trees:-
In data structures, trees are hierarchical structures that consist of nodes connected
by edges. Trees are widely used to represent relationships or structures where there
is a clear parent-child relationship. Each node in a tree contains a value or data, and
may have zero or more children. Here’s an introduction to trees and their key
concepts:
Basic Terminology:
Node - the basic unit of a tree; it holds a value and links to its children.
Root - the topmost node of the tree; it has no parent.
Parent/Child - the node directly above another node is its parent; the nodes directly below it are its children.
Leaf - a node with no children.
Edge - the link connecting a parent node to a child node.
Subtree - a node together with all of its descendants.
Height/Depth - the height of a tree is the length of the longest path from the root to a leaf; the depth of a node is the length of the path from the root to that node.
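As noted above, each node contains a value and may have zero or more children. A minimal sketch of such a node in code (the class and field names here are illustrative, not from any particular library):

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.children = []   # zero or more child nodes

# A tiny tree: root "A" with two children "B" and "C"
root = TreeNode("A")
root.children.append(TreeNode("B"))
root.children.append(TreeNode("C"))
print(root.value, [c.value for c in root.children])  # A ['B', 'C']
```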
Similarly, the left child of the root node is greater than its own left child and smaller than its own right child, so it also satisfies the property of a binary search tree. Therefore, the tree in the above image is a binary search tree.
In the second tree above, the value of the root node is 40, which is greater than its left child 30; however, 30's right child is 55, which is greater than 40 but still lies in the left subtree of the root. So, the above tree does not satisfy the property of a binary search tree and therefore is not a binary search tree.
Now, let's see the creation of binary search tree using an example.
Suppose the data elements are - 45, 15, 79, 90, 10, 55, 12, 20, 50
o First, we have to insert 45 into the tree as the root of the tree.
o Then, read the next element; if it is smaller than the root node, insert
it as the root of the left subtree, and move to the next element.
o Otherwise, if the element is larger than the root node, then insert it as
the root of the right subtree.
Now, let's see the process of creating the Binary search tree using the
given data element. The process of creating the BST is shown below -
Step 1 - Insert 45 as the root node.
Step 2 - Insert 15. As 15 is smaller than 45, insert it as the root node of the left subtree.
Step 3 - Insert 79. As 79 is greater than 45, insert it as the root node of the right subtree.
Step 4 - Insert 90. 90 is greater than 45 and 79, so it will be inserted as the right subtree of 79.
Step 5 - Insert 10. 10 is smaller than 45 and 15, so it will be inserted as the left subtree of 15.
Step 6 - Insert 55. 55 is larger than 45 and smaller than 79, so it will be inserted as the left subtree of 79.
Step 7 - Insert 12. 12 is smaller than 45 and 15 but greater than 10, so it will be inserted as the right subtree of 10.
Step 8 - Insert 20. 20 is smaller than 45 but greater than 15, so it will be inserted as the right subtree of 15.
Step 9 - Insert 50. 50 is greater than 45 but smaller than 79 and 55, so it will be inserted as the left subtree of 55.
Now, the creation of binary search tree is completed. After that, let's move
towards the operations that can be performed on Binary search tree.
We can perform insert, delete and search operations on the binary search
tree.
Now, let's see the algorithm to search an element in the Binary search tree.
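A sketch of that search algorithm in Python (the Node class and helper names here are illustrative): starting from the root, move left when the key is smaller than the current node and right when it is larger, until the key is found or a NULL link is reached.

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def insert(root, data):
    # Standard BST insertion, used here only to build a tree to search
    if root is None:
        return Node(data)
    if data < root.data:
        root.left = insert(root.left, data)
    else:
        root.right = insert(root.right, data)
    return root

def search(root, key):
    # Walk down the tree, choosing one side at each step
    while root is not None and root.data != key:
        root = root.left if key < root.data else root.right
    return root is not None

root = None
for x in [45, 15, 79, 90, 10, 55, 12, 20, 50]:
    root = insert(root, x)

print(search(root, 55))  # True
print(search(root, 60))  # False
```

Because one subtree is discarded at every step, the search visits at most one node per level, i.e., O(h) time for a tree of height h.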
In a binary search tree, we must delete a node from the tree while keeping in mind that the property of the BST is not violated. When deleting a node from a BST, three possible situations can occur - the node to be deleted is a leaf node, it has only one child, or it has two children.
We can see the process to delete a leaf node from a BST in the below image. In the below image, suppose we have to delete node 90. As the node to be deleted is a leaf node, it will simply be replaced with NULL, and the allocated space will be freed.
In this case, we have to replace the target node with its child and then delete the child node. After the target node takes the value of its child node, the child node holds a duplicate of that value, which must be removed. So, we simply replace the child node with NULL and free up the allocated space.
We can see the process of deleting a node with one child from BST in the
below image. In the below image, suppose we have to delete the node 79,
as the node to be deleted has only one child, so it will be replaced with its
child 55.
So, the replaced node 79 will now be a leaf node that can be easily
deleted.
This case of deleting a node in a BST is the most complex of the three cases. In such a case, the steps to be followed are listed as follows -
o Replace the node to be deleted with its inorder successor (the smallest node in its right subtree) or its inorder predecessor (the largest node in its left subtree).
o Then delete the successor (or predecessor) from its original position; this deletion now falls under one of the two simpler cases.
We can see the process of deleting a node with two children from BST in
the below image. In the below image, suppose we have to delete node 45
that is the root node, as the node to be deleted has two children, so it will
be replaced with its inorder successor. Now, node 45 will be at the leaf of
the tree so that it can be deleted easily.
Now, let's see the process of inserting a node into BST using an example.
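Insertion follows the same comparisons used during search: go left for smaller keys, right for larger ones, and place the new node at the first empty spot. A minimal recursive sketch (names are illustrative), using the same data elements as the creation example above:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None           # smaller keys go here
        self.right = None          # larger keys go here

def insert(root, data):
    if root is None:               # empty spot found: place the new node here
        return Node(data)
    if data < root.data:
        root.left = insert(root.left, data)
    else:
        root.right = insert(root.right, data)
    return root

def inorder(root):
    return inorder(root.left) + [root.data] + inorder(root.right) if root else []

root = None
for x in [45, 15, 79, 90, 10, 55, 12, 20, 50]:
    root = insert(root, x)
print(inorder(root))  # [10, 12, 15, 20, 45, 50, 55, 79, 90]
```

The inorder traversal comes out sorted, which is a quick way to verify that the insertions preserved the BST property.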
Binary Search Tree (BST) Traversals – Inorder, Preorder, Post Order
Output:
Inorder Traversal: 10 20 30 100 150 200 300
Preorder Traversal: 100 20 10 30 200 150 300
Postorder Traversal: 10 30 20 150 300 200 100
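A runnable sketch that reproduces those traversals (the tree shape is reconstructed from the output above: root 100, left subtree 20 with children 10 and 30, right subtree 200 with children 150 and 300):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def inorder(n):
    return inorder(n.left) + [n.data] + inorder(n.right) if n else []

def preorder(n):
    return [n.data] + preorder(n.left) + preorder(n.right) if n else []

def postorder(n):
    return postorder(n.left) + postorder(n.right) + [n.data] if n else []

root = Node(100,
            Node(20, Node(10), Node(30)),
            Node(200, Node(150), Node(300)))

print("Inorder Traversal:", *inorder(root))      # 10 20 30 100 150 200 300
print("Preorder Traversal:", *preorder(root))    # 100 20 10 30 200 150 300
print("Postorder Traversal:", *postorder(root))  # 10 30 20 150 300 200 100
```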
5.3)Introduction to Hashing
For example, in a school, every student is assigned a unique roll number that is used to retrieve information about them; in a library, every book is assigned a unique number that is used to locate it on the shelves. In both these examples, the students and books were hashed to a unique number.
Assume that you have an object and you want to assign a key to it to make
searching easy. To store the key/value pair, you can use a simple array like
a data structure where keys (integers) can be used directly as an index to
store values. However, in cases where the keys are large and cannot be
used directly as an index, you should use hashing.
In hashing, large keys are converted into small keys by using hash
functions. The values are then stored in a data structure called hash
table. The idea of hashing is to distribute entries (key/value pairs) uniformly
across an array. Each element is assigned a key (converted key). By using
that key you can access the element in O(1) time. Using the key, the
algorithm (hash function) computes an index that suggests where an entry
can be found or inserted.
hash = hashfunc(key)
index = hash % array_size
In this method, the hash is independent of the array size and it is then
reduced to an index (a number between 0 and array_size − 1) by using the
modulo operator (%).
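As a small illustration of those two lines (the key, hash function, and table size below are all made up for the example):

```python
def hashfunc(key):
    # A toy multiplicative hash, for illustration only
    return key * 2654435761 % 2**32

array_size = 10
key = 123456
hash_value = hashfunc(key)          # hash = hashfunc(key)
index = hash_value % array_size     # index = hash % array_size
print(index)                        # a number between 0 and array_size - 1
```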
Hash function
A hash function is any function that can be used to map a data set of an
arbitrary size to a data set of a fixed size, which falls into the hash table.
The values returned by a hash function are called hash values, hash
codes, hash sums, or simply hashes.
Let us understand the need for a good hash function. Assume that you
have to store strings in the hash table by using the hashing technique
{“abcdef”, “bcdefa”, “cdefab” , “defabc” }.
To compute the index for storing the strings, use a hash function that states
the following:
The index for a specific string will be equal to the sum of the ASCII values
of the characters modulo 599.
Since all four strings are anagrams of each other, they have the same sum of ASCII values, so the hash function will compute the same index for all of them, and the strings will be stored in the hash table in the following format. As the index of all the strings is the same, you can create a list at that index and insert all the strings into that list.
Here, it will take O(n) time (where n is the number of strings) to access a
specific string. This shows that the hash function is not a good hash
function.
Let’s try a different hash function. The index for a specific string will be equal to the sum of the ASCII values of its characters, each multiplied by the character's position in the string, taken modulo 2069 (a prime number).
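The difference between the two hash functions can be checked directly (a small sketch; the moduli 599 and 2069 are the table sizes from the text):

```python
strings = ["abcdef", "bcdefa", "cdefab", "defabc"]

def bad_hash(s):
    # Sum of ASCII values modulo 599: all anagrams collide
    return sum(ord(c) for c in s) % 599

def good_hash(s):
    # ASCII values weighted by 1-based position, modulo 2069
    return sum(ord(c) * (i + 1) for i, c in enumerate(s)) % 2069

print([bad_hash(s) for s in strings])   # one repeated index for all four strings
print([good_hash(s) for s in strings])  # four distinct indices
```

Weighting each character by its position breaks the symmetry between anagrams, so the strings spread across different slots instead of piling into one list.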
Let us consider string S. You are required to count the frequency of all the
characters in this string.
string S = “ababcd”
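Counting the character frequencies is itself a hashing problem: each character is a key and its count is the value. Using Python's dict (a hash table) as a sketch:

```python
S = "ababcd"
freq = {}
for ch in S:
    # dict lookup and insert are O(1) on average
    freq[ch] = freq.get(ch, 0) + 1
print(freq)  # {'a': 2, 'b': 2, 'c': 1, 'd': 1}
```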
Collision resolution techniques
Separate chaining (open hashing)
In separate chaining, each slot of the hash table holds a linked list of all the entries whose keys hash to that index; colliding entries are simply appended to the list at their index.
The cost of a lookup is that of scanning the entries of the selected linked
list for the required key. If the distribution of the keys is sufficiently uniform,
then the average cost of a lookup depends only on the average number of
keys per linked list. For this reason, chained hash tables remain effective
even when the number of table entries (N) is much higher than the number
of slots.
For separate chaining, the worst-case scenario is when all the entries are
inserted into the same linked list. The lookup procedure may have to scan
all its entries, so the worst-case cost is proportional to the number (N) of
entries in the table.
In the following image, CodeMonk and Hashing both hash to the value 2.
The linked list at the index 2 can hold only one entry, therefore, the next
entry (in this case Hashing) is linked (attached) to the entry of CodeMonk.
Assumption
The hash function returns an index from 0 to 19 (the table has 20 slots), and each slot holds a list of colliding entries:
vector<string> hashTable[20];
int hashTableSize = 20;
Insert
void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Insert the element in the linked list at that particular index
    hashTable[index].push_back(s);
}
Search
void search(string s)
{
    // Compute the index by using the hash function
    int index = hashFunc(s);
    // Search the linked list at that specific index
    for(int i = 0; i < hashTable[index].size(); i++)
    {
        if(hashTable[index][i] == s)
        {
            cout << s << " is found!" << endl;
            return;
        }
    }
    cout << s << " is not found!" << endl;
}
Open addressing
In open addressing, instead of in linked lists, all entry records are stored in the array itself. When a new entry has to be inserted, the hash index of the hashed value is computed and then the array is examined (starting with the hashed index). If the slot at the hashed index is unoccupied, then the entry record is inserted in the slot at the hashed index; else it proceeds in some probe sequence until it finds an unoccupied slot.
Linear probing
In linear probing, the interval between successive probes is fixed at 1. Let us assume that the hashed index for an entry is index. The probe sequence will be as follows:
index = index % hashTableSize
index = (index + 1) % hashTableSize
index = (index + 2) % hashTableSize
index = (index + 3) % hashTableSize
and so on…
string hashTable[21];
int hashTableSize = 21;
Insert
void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Search for an unused slot; if the index exceeds the hashTableSize, roll back
    while(hashTable[index] != "")
        index = (index + 1) % hashTableSize;
    hashTable[index] = s;
}
Search
void search(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Probe for the element; if the index exceeds the hashTableSize, roll back
    while(hashTable[index] != s && hashTable[index] != "")
        index = (index + 1) % hashTableSize;
    // Check if the element is present in the hash table
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
Quadratic Probing
Quadratic probing is similar to linear probing and the only difference is the
interval between successive probes or entry slots. Here, when the slot at a
hashed index for an entry record is already occupied, you must start
traversing until you find an unoccupied slot. The interval between slots is
computed by adding the successive value of an arbitrary polynomial in the
original hashed index.
Let us assume that the hashed index for an entry is index and at index there is an occupied slot. The probe sequence will be as follows:
index = (index + 1 * 1) % hashTableSize
index = (index + 2 * 2) % hashTableSize
index = (index + 3 * 3) % hashTableSize
and so on…
Assumption
string hashTable[21];
int hashTableSize = 21;
Insert
void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Search for an unused slot; if the index exceeds the hashTableSize, roll back
    int h = 1;
    while(hashTable[index] != "")
    {
        index = (index + h*h) % hashTableSize;
        h++;
    }
    hashTable[index] = s;
}
Search
void search(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Probe for the element; if the index exceeds the hashTableSize, roll back
    int h = 1;
    while(hashTable[index] != s && hashTable[index] != "")
    {
        index = (index + h*h) % hashTableSize;
        h++;
    }
    // Check if the element is present in the hash table
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
Double hashing
Double hashing is similar to linear probing and the only difference is the
interval between successive probes. Here, the interval between probes is
computed by using two hash functions.
Let us say that the hashed index for an entry record is an index that is
computed by one hashing function and the slot at that index is already
occupied. You must start traversing in a specific probing sequence to look
for an unoccupied slot. The probing sequence will be:
index = (index + 1 * indexH) % hashTableSize
index = (index + 2 * indexH) % hashTableSize
index = (index + 3 * indexH) % hashTableSize
and so on…
Here, indexH is the hash value that is computed by another hash function.
Assumption
string hashTable[21];
int hashTableSize = 21;
Insert
void insert(string s)
{
    // Compute the index using hash function 1
    int index = hashFunc1(s);
    // Compute the probe interval using hash function 2
    int indexH = hashFunc2(s);
    // Search for an unused slot; if the index exceeds the hashTableSize, roll back
    while(hashTable[index] != "")
        index = (index + indexH) % hashTableSize;
    hashTable[index] = s;
}
Search
void search(string s)
{
    // Compute the index using hash function 1
    int index = hashFunc1(s);
    // Compute the probe interval using hash function 2
    int indexH = hashFunc2(s);
    // Probe for the element; if the index exceeds the hashTableSize, roll back
    while(hashTable[index] != s && hashTable[index] != "")
        index = (index + indexH) % hashTableSize;
    // Check if the element is present in the hash table
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
In many systems, we need to generate unique identifiers for various data objects.
Hashing can provide a way to generate identifiers that are compact, fast to
compute, and minimize collisions (though not eliminate them).
When storing files, you may need a way to quickly generate a unique identifier for
a file based on its contents. Instead of using file names directly, you can hash the
contents of the file to generate a unique identifier (hash value) for the file.
Example: File Hashing (MD5 or SHA-256)
Let’s assume we want to generate a unique identifier for a file. We can hash its
contents using a cryptographic hash function like MD5 or SHA-256.
import hashlib

def generate_file_hash(file_path):
    # Create a hash object
    hash_object = hashlib.sha256()
    # Read the file in chunks to avoid large memory consumption
    with open(file_path, 'rb') as file:
        while chunk := file.read(4096):
            hash_object.update(chunk)  # Update the hash object with file content
    # Return the hexadecimal digest (unique identifier)
    return hash_object.hexdigest()

# Example usage
file_path = 'example.txt'
file_hash = generate_file_hash(file_path)
print(f"Unique Identifier for the file: {file_hash}")
Explanation: the file is read in 4096-byte chunks so that even very large files can be hashed without loading them entirely into memory, and the resulting SHA-256 digest serves as a compact, content-based identifier.
Other common uses of hashing for unique identifiers:
User IDs: Hashing can be used to generate unique user IDs based on user
data (email, username, etc.).
URL Shortening: Services like URL shorteners (e.g., bit.ly) generate a
unique, short URL based on hashing the original long URL.
Distributed Systems: In distributed systems, hashing is used for generating
unique keys for partitioning data across nodes (e.g., in consistent hashing).
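For example, a URL shortener might derive a short key from a hash of the long URL. This is only a sketch with illustrative choices (key length, truncation scheme); real services also handle collisions and persist the mapping:

```python
import hashlib

def short_key(url, length=7):
    # Take the first `length` hex digits of the URL's SHA-256 digest
    return hashlib.sha256(url.encode()).hexdigest()[:length]

key = short_key("https://example.com/some/very/long/path?with=params")
print(key)  # a 7-character identifier, stable for the same URL
```

The same URL always hashes to the same key, so repeated requests map to one short link; truncating the digest raises the collision probability, which is why the key length is a tunable trade-off.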
Caching is a technique used to store data temporarily to reduce access time for
frequently requested data. Hashing is often employed in cache management to map
data or results to a specific cache location. Hashing makes it easier to quickly look
up cached data by its key.
When a user visits a website, the page content can be cached for future visits.
Rather than regenerating the same page each time, we can cache it and use a hash
of the URL as the key to retrieve the cached content.
Here’s a simple implementation of caching using a hash table for caching web
pages based on their URLs:
class WebCache:
    def __init__(self):
        self.cache = {}

    def get_page(self, url, generate_page):
        key = hash(url)  # hash of the URL serves as the cache key
        if key not in self.cache:
            self.cache[key] = generate_page(url)  # cache miss: build and store
        return self.cache[key]

# Example usage
cache = WebCache()
One of the most common types of caching is LRU (Least Recently Used)
caching, which removes the least recently used items when the cache reaches its
limit. Hashing is often combined with doubly linked lists or other structures to
implement efficient LRU caches.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.cache = OrderedDict()  # ordered dictionary keeps the insertion order
        self.capacity = capacity

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark the key as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry

# Example usage
lru_cache = LRUCache(2)
lru_cache.put(1, 1)
lru_cache.put(2, 2)
print(lru_cache.get(1))  # Returns 1
lru_cache.put(3, 3)      # Evicts key 2
print(lru_cache.get(2))  # Returns -1 (not found)
Conclusion:
Hashing plays a vital role in both unique identifier generation and caching.
Both applications are critical in real-world systems, enabling faster data access,
improved performance, and scalability.