Unit 3 and Unit 4 DSA QB For ETE


Unit-3

1. Explain the following terms :


i. Tree ii. Vertex of Tree iii. Depth
iv. Degree of an element v. Degree of Tree vi. Leaf

i. Tree: In computer science, a tree is a widely used data structure that represents a hierarchical
structure. It is composed of nodes connected by edges, where each node can have zero or more
child nodes, except for the root node which has no parent. The tree structure is often visualized as
an upside-down tree, with the root at the top and the leaf nodes at the bottom.
ii. Vertex of a Tree: In the context of a tree, a vertex refers to a single node or element within the
tree. Each vertex contains some data or value, and it may have child vertices connected to it,
forming a hierarchical relationship.
iii. Depth: The depth of a node in a tree represents its level or distance from the root node. The root
node is considered to be at depth 0, and each subsequent level incrementally increases the depth.
For example, if a node is at depth 2, it means it is two levels below the root node.
iv. Degree of an element: In the context of a tree, the degree of an element refers to the number of
immediate children or sub-nodes it has. For instance, if a node has three child nodes, its degree
would be 3. In a tree, the degree of an element can vary depending on the specific node being
considered.
v. Degree of a Tree: The degree of a tree is the maximum degree among all elements or nodes in
the tree. It represents the highest number of immediate children any node in the tree has. By
determining the degree of a tree, we can understand the maximum branching factor of the tree
structure.
vi. Leaf: In the context of a tree, a leaf (also known as a terminal or external node) refers to a node that does
not have any child nodes. It is located at the end of a branch and represents the bottommost level of
that branch. Like every other node, a leaf stores a value, but it has no descendants.
2. Show that the maximum number of nodes in a binary tree of height h is 2^(h+1) – 1.

If a binary tree has height h, the minimum number of nodes is h + 1 (the case of a left-skewed or
right-skewed tree, with exactly one node per level).
The maximum number of nodes occurs when all levels are completely full. Level i can hold at most 2^i
nodes, so the total number of nodes is 2^0 + 2^1 + … + 2^h = 2^(h+1) – 1 (the sum of a geometric series).
For example, for h = 3 the maximum is 1 + 2 + 4 + 8 = 15 = 2^4 – 1.

3. Define extended binary tree, full binary tree, strictly binary tree and complete binary tree.

Here are the definitions of the following types of binary trees:

1. Extended Binary Tree: An extended binary tree (also called a 2-tree) is a binary tree in which every
node has either 0 or 2 children. It is obtained from an ordinary binary tree by replacing every empty
subtree (NULL link) with a special dummy node called an external node; the original nodes are called
internal nodes. Extended binary trees are used, for example, to represent expressions and binary
operations where the operands may not be present at the same level.

2. Full Binary Tree: A full binary tree, also known as a proper binary tree or a 2-tree, is a
binary tree in which every node other than the leaves has two children. In other words, every
node has either 0 or 2 children

3. Strictly Binary Tree: A binary tree is said to be a strictly binary tree if every non-leaf node has
non-empty left and right subtrees, i.e., every node has either 0 or 2 children. A strictly binary tree
with n leaves always contains 2n – 1 nodes.

4. Complete Binary Tree: A complete binary tree is a binary tree in which all levels are
completely filled except possibly the last level, which is filled from left to right. In other
words, all nodes are as far left as possible on each level
4. Construct a binary tree for the following :
Inorder : 9, 5, 1, 7, 2, 12, 8, 4, 3, 11
Postorder: 9, 1, 2, 12, 7, 5, 3, 11, 4, 8
Find the preorder of the tree.

            8
          /   \
         5     4
        / \     \
       9   7     11
          / \    /
         1   12 3
             /
            2

Pre-order: 8, 5, 9, 7, 1, 12, 2, 4, 11, 3

5. Write algorithm for various traversing techniques of binary tree with neat example.
Let's assume we have the following binary tree as an example:
A
/ \
B C
/ \ \
D E F

Inorder Traversal Algorithm:

• Traverse the left subtree recursively.


• Visit the root node.
• Traverse the right subtree recursively.
Using the example tree, the inorder traversal will be: D -> B -> E -> A -> C -> F.
Here's the algorithm in pseudocode:

• inorder(node):
if node is not null:
inorder(node.left)
visit(node)
inorder(node.right)

Preorder Traversal Algorithm:

• Visit the root node.


• Traverse the left subtree recursively.
• Traverse the right subtree recursively.
Using the example tree, the preorder traversal will be: A -> B -> D -> E -> C -> F.
Here's the algorithm in pseudocode:
• preorder(node):
if node is not null:
visit(node)
preorder(node.left)
preorder(node.right)

Postorder Traversal Algorithm:

• Traverse the left subtree recursively.


• Traverse the right subtree recursively.
• Visit the root node.
Using the example tree, the postorder traversal will be: D -> E -> B -> F -> C -> A.
Here's the algorithm in pseudocode:
• postorder(node):
  if node is not null:
      postorder(node.left)
      postorder(node.right)
      visit(node)
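
For illustration, here is a short runnable Python sketch (not part of the original question bank) that builds the example tree A–F used above and prints all three traversals; the Node class and function names are just illustrative choices.

class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def inorder(node):
    if node:
        inorder(node.left); print(node.data, end=" "); inorder(node.right)

def preorder(node):
    if node:
        print(node.data, end=" "); preorder(node.left); preorder(node.right)

def postorder(node):
    if node:
        postorder(node.left); postorder(node.right); print(node.data, end=" ")

# Example tree:   A
#                / \
#               B   C
#              / \   \
#             D   E   F
root = Node("A", Node("B", Node("D"), Node("E")), Node("C", None, Node("F")))
inorder(root); print()    # D B E A C F
preorder(root); print()   # A B D E C F
postorder(root); print()  # D E B F C A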

6. Explain binary search tree and its operations. Make a binary search tree for the following
sequence of numbers, show all steps : 45, 32, 90, 34, 68, 72, 15, 24, 30, 66, 11, 50, 10.
Binary Search Tree:
A Binary Search Tree (BST) is a binary tree which is either empty or satisfies the following
properties :
1. Every node has a value and no two nodes have the same value (i.e., all the values
are unique).
2. If a node has a left child or left subtree, then its value is less than the value of that node.
3. If a node has a right child or right subtree, then its value is greater than the value of that node.

Operations of BST:
1.Inserting a node
Inserting a node into a tree is achieved by performing two separate operations.
1. The tree must be searched to determine where the node is to be inserted.
2. Then the node is inserted into the tree.
2. Searching a node
Searching a node was part of the operation performed during insertion.
3. Deleting a node
Delete function is used to delete the specified node from a binary search tree. However, we must
delete a node from a binary search tree in such a way, that the property of binary search tree doesn't
violate.
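
A minimal Python sketch of these three operations (insert, search, delete), assuming a simple Node class; the names are illustrative, not taken from the question bank:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)                      # empty spot found: insert here
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                               # duplicates are ignored

def search(root, key):
    if root is None or root.key == key:
        return root
    return search(root.left, key) if key < root.key else search(root.right, key)

def delete(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:                                     # found the node to delete
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        succ = root.right                     # two children: use the inorder successor
        while succ.left:
            succ = succ.left
        root.key = succ.key
        root.right = delete(root.right, succ.key)
    return root

# Building the BST for the given sequence:
root = None
for k in [45, 32, 90, 34, 68, 72, 15, 24, 30, 66, 11, 50, 10]:
    root = insert(root, k)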

7. Define binary search tree. Create BST for the following data, show all steps : 20, 10, 25, 5,
15, 22, 30, 3, 14, 13.
A binary search tree (BST) is a rooted binary tree data structure in which the nodes are arranged so
that the value of each node is greater than all the values in its left subtree and less than all the
values in its right subtree. This property is known as the binary search property.
BSTs are a very efficient way to store and search for data. The time complexity of searching for a
particular value in a balanced BST is O(log n), where n is the number of nodes in the tree, because each
comparison lets the search algorithm discard roughly half of the remaining search space. (For a
completely skewed BST, searching degrades to O(n).)
Properties :
1. Every node has a value and no two nodes have the same value (i.e., all the values
are unique).
2. If a node has a left child or left subtree, then its value is less than the value of that node.
3. If a node has a right child or right subtree, then its value is greater than the value of that node.
Create BST for the following data, show all steps : 20, 10, 25, 5, 15, 22, 30, 3, 14, 13.

8. Construct a binary tree for the following :


Inorder : Q, B, K, C, F, A, G, P, E, D, H, R

Preorder : G, B, Q, A, C, K, F, P, D, E, R, H
Find the postorder of the tree.

G
/ \
B P
/ \ \
Q A D
/ / \
C E R
/ \ /
K F H

Post-Order: Q, K, F, C, A, B, E, H, R, D, P, G

9. Illustrate the importance of various traversing techniques in graph along with its
applications.
Here are some of the most important graph traversal techniques and their applications:
• Depth-first search (DFS): DFS is a recursive algorithm that starts at a node and explores as
far as possible down one path before backtracking. DFS is often used to find connected
components in a graph, to detect cycles, or to visit all the nodes in a graph in a specific
order.

several applications, including:


1. Finding connected components: DFS can be used to determine the connected
components of an undirected graph. Starting from a given vertex, DFS explores all
reachable vertices, marking them as visited.
2. Detecting cycles: By keeping track of visited vertices during DFS, it is possible to detect
cycles in a graph. If a visited vertex is encountered again during the traversal (excluding
the parent of the current vertex), a cycle is present.
3. Topological sorting: DFS can be used to perform a topological sort of a directed acyclic
graph (DAG). The ordering of vertices obtained through DFS reflects a valid ordering in
which each vertex comes before its dependencies.

• Breadth-first search (BFS): BFS is an iterative algorithm that starts at a node and explores
all of its neighbors before moving on to the neighbors of those neighbors. BFS is often used
to find the shortest path between two nodes in an unweighted graph, to find all the nodes in a
graph that are connected to a given node, or to visit all the nodes in a graph level by level.

It has various applications, including:


1. Shortest path and distance calculation: BFS can be used to find the shortest path and
distance between two vertices in an unweighted graph. By exploring vertices level by
level, the shortest path from the source vertex to any other vertex can be determined.

2. Finding connected components: BFS can also be used to find connected components in
an undirected graph, similar to DFS. By exploring all reachable vertices from a given
source vertex, BFS identifies a connected component.
3. Web crawling and social network analysis: BFS is often employed in web crawling
algorithms and social network analysis to explore and discover connections between
web pages or individuals within a network.

10. What is height balanced tree ? Why height balancing of tree is required ? Create an AVL
tree for the following elements : a, z, b, y, c, x, d, w, e, v, f.
A height-balanced tree is a binary tree in which the height of the left and right subtree of any node
differ by not more than 1. This means that the tree is always "short and bushy," with no long,
unbalanced branches.
Height balancing is important for a number of reasons. First, it ensures that the tree can be searched
efficiently. A balanced tree can be searched in O(log n) time, where n is the number of nodes in the
tree. This is because the search algorithm can quickly narrow down the search space by comparing
the value to be searched for with the values of the nodes in the tree.
Second, height balancing can improve the performance of other operations on the tree, such as
insertion and deletion. When a node is inserted or deleted from a balanced tree, the height of the
tree may change. However, the tree will still be balanced, which ensures that the other operations on
the tree will continue to be efficient.

11. Create an AVL tree using the following data and show the balance factor in the resulting
tree: 14, 23, 7, 10, 33, 56, 80, 75, 90

12. Differentiate between Binary tree and Binary search tree. Also Draw a binary tree for the
expression : A * B – (C + D) * (P/Q)

Basis of comparison: Binary Tree vs. Binary Search Tree

Definition: A Binary Tree is a nonlinear data structure in which each node can have a maximum of two child nodes. A BST is a binary tree whose nodes are ordered and whose left and right subtrees are themselves binary search trees.

Types: Binary Tree: Complete Binary Tree, Full Binary Tree, Extended Binary Tree. Binary Search Tree: AVL Trees, Splay Trees, Tango Trees, T-Trees.

Operations: Since Binary Trees are not ordered, inserting, deleting and finding elements take significantly more time. Due to their ordered characteristics, insertion, deletion and searching of an element are faster in a BST than in a Binary Tree.

Structure: There is no ordering in a Binary Tree in terms of how the nodes are arranged. In a BST, the left subtree contains elements that are less than the node's element, while the right subtree contains elements that are greater than the node's element.

Data Representation: In a Binary Tree, the representation of data is performed in a purely hierarchical structure. In a BST, data representation is done in an ordered format.

Duplicate Values: Duplicate values are permitted in binary trees. Duplicate values are not permitted in the Binary Search Tree.

Speed: Because it is unordered, deletion, insertion and searching in a Binary Tree are slower than in a Binary Search Tree. Because of its ordered properties, the Binary Search Tree performs these operations more quickly.

Complexity: For a Binary Tree the time complexity is typically O(n); for a Binary Search Tree it is typically O(log n).

binary tree for the expression : A * B – (C + D) * (P/Q)
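
The corresponding expression tree (operators as internal nodes, operands as leaves; the subtraction is the root because it is applied last) can be drawn as:

                 –
              /     \
             *       *
            / \     /  \
           A   B   +    /
                  / \  / \
                 C   D P  Q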

13. Number of nodes in a complete tree is 100000. Find its depth of complete tree.
Since the given tree is a complete binary tree with 100000 nodes, we can use the formula for the depth
of a complete binary tree. For a complete binary tree with N nodes, the depth (with the root at depth 0)
is floor(log2 N).

Therefore, the depth of the complete binary tree with 100000 nodes can be calculated as follows:
Depth = floor(log2 100000)
      = floor(16.6096)
      = 16

So the tree has 17 levels (levels 0 to 16) and its depth is 16.

14. What is a threaded binary tree ? Explain the advantages of using a threaded binary tree.
A Threaded Binary Tree is a binary tree in which every node that does not have a right child has a
THREAD (in actual sense, a link) to its INORDER successor. By doing this threading we avoid the
recursive method of traversing a Tree, which makes use of stacks and consumes a lot of memory
and time.
Traversing a binary tree is a common operation, so it is helpful to find a more efficient method of
implementing the traversal. Moreover, about half of the entries in the left-child and right-child fields
of an ordinary binary tree contain NULL pointers. These fields may be used more efficiently by replacing
the NULL entries with special pointers that point to nodes higher in the tree.
Such special pointers are called threads, and a binary tree with such pointers is called a
threaded binary tree.

Advantages of using a Threaded Binary Tree:

1. Threading avoids the recursive method of traversing a tree, which makes use of a stack and
consumes a lot of memory and time.
2. A node can reach its in-order successor (or predecessor) directly through its thread, without
needing a parent pointer or an auxiliary stack.
3. Threaded trees make in-order tree traversal a little faster, because the next node can be reached
in guaranteed O(1) time. This is opposed to a regular binary tree, where it may take O(log n) time,
because you have to "climb" up the tree and then back down.

15. Define AVL trees. Explain its rotation operations with example. Construct an AVL tree
with the values 10 to 1 numbers into an initially empty tree.
AVL Trees:
AVL tree is a self-balancing Binary Search Tree (BST) where the difference between the heights of the
left and right subtrees cannot be more than one for any node.
Rotations:
Left Rotation:
If a tree becomes unbalanced, when a node is inserted into the right subtree of the right subtree,
then we perform a single left rotation −

Right Rotation:
AVL tree may become unbalanced, if a node is inserted in the left subtree of the left subtree. The
tree then needs a right rotation.

Left-Right Rotation:
A left-right rotation is a double rotation, used when a node is inserted into the right subtree of the
left subtree. It is performed as a left rotation on the left child followed by a right rotation at the
unbalanced node.

Right-Left Rotation:
The second type of double rotation is the right-left rotation, used when a node is inserted into the
left subtree of the right subtree. It is performed as a right rotation on the right child followed by a
left rotation at the unbalanced node.
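
As an illustrative sketch (not from the question bank), the two single rotations can be written in Python as follows, assuming each node stores a height field; the double rotations are simply compositions of these:

class AVLNode:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(y):               # used when the left subtree is too tall (LL case)
    x = y.left
    y.left = x.right
    x.right = y
    update(y); update(x)
    return x                        # x becomes the new subtree root

def rotate_left(x):                 # used when the right subtree is too tall (RR case)
    y = x.right
    x.right = y.left
    y.left = x
    update(x); update(y)
    return y

# Left-Right (LR) case: node.left = rotate_left(node.left); node = rotate_right(node)
# Right-Left (RL) case: node.right = rotate_right(node.right); node = rotate_left(node)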

16. What is a graph ? Describe various types of graph. Briefly explain few applications of
graph.
A graph is a data structure consisting of a set of vertices or nodes connected by edges. It is a
mathematical representation of relationships or connections between objects. Graphs are widely
used in computer science, mathematics, and various other fields to model and analyze complex
systems and networks.
There are several types of graphs, each with its own characteristics and applications. Here are some
commonly encountered types:
1. Undirected Graph:
• In an undirected graph, edges have no direction. The relationship between nodes is
symmetric.
• Applications: Social networks, where nodes represent individuals, and edges
represent friendships or connections.
2. Directed Graph (Digraph):
• In a directed graph, edges have a specific direction from one node to another.
• Applications: Web pages and hyperlinks, where nodes represent web pages, and
edges represent links between them.
3. Weighted Graph:
• In a weighted graph, each edge is assigned a numerical value or weight.
• Applications: Road networks, where nodes represent locations, and weighted edges
represent distances between them.
4. Bipartite Graph:
• A bipartite graph is divided into two disjoint sets of nodes, with edges only
connecting nodes from different sets.
• Applications: Matching problems, such as assigning tasks to workers or matching
students to projects.
5. Complete Graph:
• In a complete graph, every pair of distinct nodes is connected by an edge.
• Applications: Modeling communication networks, where nodes represent devices,
and edges represent direct connections.
Graphs have numerous applications across various domains. Here are a few examples:
1. Social Networks Analysis:
• Graphs are used to study social relationships, analyze patterns, and identify key
influencers or communities.
2. Network Routing and Optimization:
• Graph algorithms are applied to find the most efficient paths for data or resource
allocation in networks.
3. Recommendation Systems:

• Graph-based algorithms are employed to generate personalized recommendations by


analyzing connections between users and items.
4. Web Page Ranking:
• Graph algorithms, such as Google's PageRank, analyze the link structure of web
pages to determine their relevance and importance.
5. Transport and Logistics Planning:
• Graphs are used to model transportation networks, optimizing routes for delivery
services or public transportation.
6. Bioinformatics:
• Graphs help represent and analyze genetic relationships, protein interactions, and
biological pathways.

17. What is graph ? Discuss various terminologies used in graph.


A graph is a data structure that consists of a set of vertices (also called nodes or points) and a set of
edges (also called arcs or lines) that connect pairs of vertices. It is a mathematical representation of
relationships or connections between objects. Graphs can be used to model and analyze various
systems, networks, and relationships in computer science, mathematics, and other fields.
Now, let's discuss some key terminologies commonly used in graph theory:
1. Vertex (Node):
• A vertex, also known as a node or point, represents an entity or an element in a
graph. It can represent any object, such as a person, location, or concept.
2. Edge (Arc):
• An edge, also known as an arc or line, represents a connection or relationship
between two vertices in a graph. It can be directed or undirected, depending on
whether the relationship has a specific direction or not.
3. Degree:
• The degree of a vertex is the number of edges incident to that vertex. In a directed
graph, the degree is further classified into in-degree (number of incoming edges) and
out-degree (number of outgoing edges) of a vertex.
4. Path:
• A path is a sequence of vertices connected by edges. It represents a route or a series
of connections between vertices in a graph.
5. Cycle:
• A cycle is a path that starts and ends at the same vertex, forming a closed loop.
6. Connected Graph:
• A connected graph is a graph in which there is a path between every pair of vertices.
In other words, there are no isolated or unreachable vertices.
7. Disconnected Graph:

• A disconnected graph is a graph in which there are one or more pairs of vertices
without a path between them. It consists of two or more connected components.
8. Weighted Graph:
• In a weighted graph, each edge is assigned a numerical value or weight. It represents
the cost, distance, or any other quantitative measure associated with the relationship
between vertices.
9. Directed Graph (Digraph):
• A directed graph is a graph in which edges have a specific direction from one vertex
(the source) to another vertex (the target). The edges are represented by arrows
indicating the direction.
10.Undirected Graph:
• An undirected graph is a graph in which edges have no specific direction. The
relationship between vertices is symmetric, and edges can be traversed in both
directions.

18. Illustrate the importance of various traversing techniques in graph along with its
applications.
Graph traversal techniques are important in graph theory and computer science as they allow us to
visit all the vertices and edges of a graph. Here are some applications of graph traversal techniques:

1. Breadth-First Search (BFS):


• BFS explores a graph level by level, starting from a given source node and moving to
its neighbors before visiting deeper nodes.
• Applications:
• Shortest path finding: BFS can be used to find the shortest path between two
nodes in an unweighted graph.
• Connectivity analysis: BFS can determine if a graph is connected and find all
connected components.
• Web crawling: BFS is used to explore web pages by visiting the links on a
page before moving to deeper levels.
2. Depth-First Search (DFS):
• DFS explores a graph by going as deep as possible along each branch before
backtracking.
• Applications:
• Topological sorting: DFS can generate a linear ordering of nodes in a directed
acyclic graph (DAG).
• Detecting cycles: DFS can identify cycles in a graph, which is useful in
various applications like deadlock detection.
• Maze solving: DFS can be used to solve mazes by exploring all possible
paths until a solution is found.
3. Dijkstra's Algorithm:

• Dijkstra's algorithm is used to find the shortest path between a source node and all
other nodes in a weighted graph.
• Applications:
• GPS navigation: Dijkstra's algorithm is used to find the shortest route
between two locations, considering road distances.
• Network routing: It helps in finding the optimal path for data packets to travel
through a network.
• Resource allocation: Dijkstra's algorithm can be used to allocate resources
efficiently based on distance or cost.
4. Minimum Spanning Tree (MST) Algorithms:
• MST algorithms (such as Prim's and Kruskal's algorithms) find the minimum weight
spanning tree in a connected, weighted graph.
• Applications:
• Network design: MST algorithms help in designing cost-effective network
connections.
• Cluster analysis: MST algorithms can be used for grouping similar data points
based on distances or similarities.
5. A* Search Algorithm:
• A* search algorithm combines elements of both BFS and Dijkstra's algorithm by
considering both distance traveled and estimated distance to the goal.
• Applications:
• Pathfinding in games: A* algorithm is commonly used to find the shortest
path between two points in video games.
• Robotics and motion planning: A* algorithm helps robots plan efficient paths
in complex environments.
19. Write a short note on graph traversal. And its algorithm and difference.
Graph traversal refers to the process of visiting or exploring all the nodes or vertices of a graph in a
systematic manner. It involves traversing through the edges of the graph to access and process the
nodes. Graph traversal is an essential operation in graph theory and is used in various applications,
such as searching, pathfinding, and network analysis.
There are two main algorithms commonly used for graph traversal: Breadth-First Search (BFS) and
Depth-First Search (DFS).
1. Breadth-First Search (BFS):
• BFS explores a graph level by level, starting from a given source node and visiting
all its neighbors before moving on to deeper nodes.
• It uses a queue data structure to keep track of the nodes to visit.
• BFS guarantees that all nodes reachable from the source node are visited before
moving to nodes at a deeper level.
• BFS is typically used to find the shortest path between two nodes in an unweighted
graph and to perform connectivity analysis.
2. Depth-First Search (DFS):

• DFS explores a graph by going as deep as possible along each branch before
backtracking.
• It uses a stack data structure or recursion to keep track of nodes to visit.
• DFS visits the first unvisited neighbor of a node and continues in that direction until
it reaches a dead end.
• It backtracks and explores other unvisited branches until all nodes are visited.
• DFS is commonly used for problems such as topological sorting, cycle detection, and
maze solving.
Difference between BFS and DFS:
• BFS explores the graph level by level, while DFS explores it depth by depth.
• BFS uses a queue data structure for traversal, while DFS uses a stack or recursion.
• BFS guarantees finding the shortest path between two nodes in an unweighted graph,
whereas DFS does not provide such guarantee.
• BFS tends to consume more memory than DFS, as it needs to store all the nodes at a given
level in the queue.
• If visited nodes are not tracked properly, DFS can keep going deeper and deeper (or loop forever
on a cyclic or infinite graph), while BFS, which expands level by level, is less prone to getting
stuck down one branch.
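
A small Python sketch of both traversals on an adjacency-list graph (the graph and function names are illustrative, not from the question bank):

from collections import deque

graph = {                            # adjacency list of an example undirected graph
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E'],
}

def bfs(start):
    visited, order = {start}, []
    queue = deque([start])           # BFS uses a FIFO queue
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(node, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(node)                # DFS goes as deep as possible, then backtracks
    order.append(node)
    for nbr in graph[node]:
        if nbr not in visited:
            dfs(nbr, visited, order)
    return order

print(bfs('A'))   # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs('A'))   # ['A', 'B', 'D', 'E', 'F', 'C']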
20. Apply DFS algorithm for the graph given in Fig. by considering node 1 as starting node.

21. Implement BFS algorithm to find the shortest path from node A to J.

Unit – 4

1. Write down algorithm for linear/sequential search technique. Give its analysis.
The linear search algorithm, also known as the sequential search algorithm, is a simple searching
technique that checks each element in a list or array sequentially until a match is found or the entire
list is traversed. Here's the algorithm for linear/sequential search:
Algorithm:
1. Start at the beginning of the list/array.
2. Compare the target element with the current element.
3. If the current element matches the target element, return its index.
4. If the end of the list/array is reached without finding a match, return "not found."
Analysis:
• Best Case: The best case occurs when the target element is found at the first position. In this
case, the algorithm performs only one comparison. The time complexity is O(1).
• Worst Case: The worst case occurs when the target element is either at the last position or
not present in the list/array. In this case, the algorithm performs n comparisons, where n is
the number of elements in the list/array. The time complexity is O(n).
• Average Case: In the average case, assuming the target element is equally likely to be
present at any position, the linear search performs approximately n/2 comparisons on
average. The time complexity is O(n).
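
A direct Python version of the steps above (an illustrative sketch):

def linear_search(arr, target):
    # Check each element in turn until the target is found.
    for i, value in enumerate(arr):
        if value == target:
            return i          # index of the first match
    return -1                 # "not found"

print(linear_search([7, 3, 9, 5], 9))   # 2
print(linear_search([7, 3, 9, 5], 4))   # -1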
2. Write down the algorithm of binary search technique.Write down the complexity of
algorithm.

Algorithm:
1. Let min = 0 and max = length(list) - 1.
2. Repeat while min is less than or equal to max:
   a. Set mid = (min + max) / 2 (integer division).
   b. If list[mid] equals the target, return mid.
   c. If list[mid] is greater than the target, set max = mid - 1.
   d. If list[mid] is less than the target, set min = mid + 1.
3. If the loop ends without finding the target, return "not found."
Complexity:
• The binary search algorithm has a time complexity of O(log n), where n is the number of
elements in the sorted list/array. This complexity arises from the fact that the search space is
halved in each iteration, leading to a logarithmic growth rate.
• In each iteration, the search space is divided in half, resulting in the elimination of a
significant portion of the remaining elements. This makes binary search much more efficient
than linear search, especially for large lists or arrays.
• The space complexity of the binary search algorithm is O(1) since it requires a constant
amount of extra space to store the variables used in the algorithm.
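
The same algorithm written as a small Python sketch (it assumes the list is sorted in ascending order):

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2        # middle index of the current search space
        if arr[mid] == target:
            return mid
        elif arr[mid] > target:
            high = mid - 1             # discard the right half
        else:
            low = mid + 1              # discard the left half
    return -1                           # "not found"

print(binary_search([11, 22, 30, 33, 40, 44, 55], 40))   # 4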

3. What is difference between sequential (linear) search and binary search technique ?

Sequential (Linear) Search vs. Binary Search:

Overview: Linear search checks each element in a list/array sequentially until a match is found or the entire list is traversed. Binary search divides the sorted list/array in half repeatedly to narrow down the search space.

Requirement: Linear search works on both sorted and unsorted lists/arrays. Binary search requires the list/array to be sorted in ascending or descending order.

Time Complexity: Linear search has a worst case of O(n), linear time, as it may need to traverse all n elements. Binary search has a worst case of O(log n), logarithmic time, as the search space is halved in each iteration.

Efficiency: Linear search is less efficient for large lists/arrays. Binary search is highly efficient for large sorted lists/arrays.

Comparison Approach: Linear search compares the target element with each element in the list sequentially. Binary search compares the target element with the middle element to determine the search direction.

Search Space Elimination: Linear search does not eliminate a significant portion of the search space with each comparison. Binary search eliminates half of the search space with each comparison.

Use Case: Linear search is suitable for small lists/arrays or unsorted collections. Binary search is ideal for large sorted lists/arrays where efficient searching is required.

Space Complexity: Both require O(1) space, as each only uses a few variables.

4. Apply binary search to find item 40 in the sorted array: 11, 22, 30, 33, 40, 44, 55, 60, 66, 77,
80, 88, 99. Also discuss the complexity of binary search.
1. Set the minimum index (min) to 0 and the maximum index (max) to the length of the array
minus 1.
2. Calculate the middle index (mid) as (min + max) / 2.
3. Compare the middle element (arr[mid]) with the target element (40):
4. If arr[mid] is equal to the target, return mid (the index of the target element).
5. If arr[mid] is greater than the target, set max to mid - 1 and go to step 2.
6. If arr[mid] is less than the target, set min to mid + 1 and go to step 2.
7. Repeat steps 2 and 3 until the target element is found or the min index becomes greater than
the max index.
8. If the target element is not found, return "not found".
Now, let's apply the binary search algorithm to find the item 40 in the given sorted array:
1. min = 0, max = 12 (length of the array) - 1.
2. Calculate mid = (0 + 12) / 2 = 6.
3. Compare arr[6] = 55 with the target element 40. Since 55 is greater than 40, set max = 6 - 1
= 5.
4. Repeat steps 2 and 3:
• min = 0, max = 5, mid = (0 + 5) / 2 = 2, arr[2] = 30 (less than the target).
• min = 3, max = 5, mid = (3 + 5) / 2 = 4, arr[4] = 40 (equal to the target).

• Return mid = 4.
The item 40 is found at index 4 in the sorted array.
Complexity of Binary Search:
• The binary search algorithm has a time complexity of O(log n), where n is the number of
elements in the sorted array.
• In each iteration, the search space is halved, leading to a logarithmic growth rate.
• As a result, binary search is highly efficient for large sorted arrays, as the number of
elements to be searched reduces significantly with each comparison.
• The space complexity of binary search is O(1), as it only requires a few variables to perform
the search.
5. Classify the hashing functions based on the various methods by which the key value is
found.

Hashing functions can be classified based on the various methods by which the key value is found.
Here are some common methods:
1. Direct method: The key value is used directly as the index of the hash table.
2. Subtraction method: The key value is subtracted from a prime number that is less than the
size of the hash table, and the result is used as the index of the hash table.
3. Modulo-Division method: The key value is divided by the size of the hash table, and the
remainder is used as the index of the hash table.
4. Digit-Extraction method: The key value is divided into digits, and the digits are combined in
a specific way to form the index of the hash table.
5. Mid-Square method: The key value is squared, and the middle digits are used as the index of
the hash table.
6. Folding method: The key value is divided into equal-sized pieces, and the pieces are added
together to form the index of the hash table.
7. Pseudo-random method: A pseudo-random number generator is used to generate the index of
the hash table.
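
For illustration, a few of these methods can be sketched in Python as follows (the table size and example keys are arbitrary assumptions):

TABLE_SIZE = 10

def modulo_division(key):
    return key % TABLE_SIZE                  # the remainder becomes the index

def mid_square(key):
    squared = str(key * key)
    mid = len(squared) // 2
    return int(squared[mid - 1:mid + 1]) % TABLE_SIZE   # take the middle digits

def folding(key, piece_len=2):
    digits = str(key)
    pieces = [int(digits[i:i + piece_len]) for i in range(0, len(digits), piece_len)]
    return sum(pieces) % TABLE_SIZE          # add the equal-sized pieces together

print(modulo_division(12345))   # 5
print(mid_square(123))          # 123*123 = 15129 -> middle digits "51" -> 1
print(folding(123456))          # 12 + 34 + 56 = 102 -> 2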
6. What is collision ? Discuss collision resolution techniques.
Collision in hashing occurs when two or more keys are mapped to the same index of the hash table.
Collision resolution techniques are used to handle these collisions.
Here are some common collision resolution techniques:
1. Separate Chaining (Open Hashing): This technique involves creating a linked list of
elements to store objects with the same key together. If a collision occurs, the new element
is added to the linked list at the corresponding index of the hash table.
2. Open Addressing (Closed Hashing): This technique involves storing the records directly
within the array. If a collision occurs, the algorithm probes alternate locations in the array
until an empty slot is found.
3. Linear Probing: This involves probing alternate locations in the array with a fixed interval
between probes, usually 1 (i.e., the next slot).
4. Quadratic Probing: This involves probing alternate locations in the array with a quadratic
interval between probes.

5. Double Hashing: This involves using a second hash function to calculate the interval
between probes.
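
A minimal sketch of the two main approaches, separate chaining and linear probing (illustrative, assuming a small fixed-size table):

TABLE_SIZE = 7

# Separate chaining: each slot holds a list (chain) of keys.
chained = [[] for _ in range(TABLE_SIZE)]

def chain_insert(key):
    chained[key % TABLE_SIZE].append(key)

# Linear probing: on a collision, try the next slot (wrapping around).
probed = [None] * TABLE_SIZE

def probe_insert(key):
    idx = key % TABLE_SIZE
    while probed[idx] is not None:        # slot occupied, probe the next one
        idx = (idx + 1) % TABLE_SIZE
    probed[idx] = key

for k in [10, 17, 24]:                    # all three keys hash to index 3
    chain_insert(k)
    probe_insert(k)

print(chained)   # index 3 holds the chain [10, 17, 24]
print(probed)    # 10, 17, 24 occupy indices 3, 4, 5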
7. Write a short note on insertion sort. Also write its algorithm.
Insertion sort is a simple and efficient sorting algorithm that works by comparing each element with
the previous elements and then moving the element to its correct position by shifting the larger
elements to the right. It is called an in-place comparison sorting algorithm because it sorts the input
list in place without requiring any extra memory. Insertion sort is useful for small input sizes or for
partially sorted data.

Algorithm:
1. Start with the second element (index 1) of the array.
2. Compare the second element with the first element (index 0) and swap them if necessary to
ensure the first two elements are in ascending order.
3. Consider the next unsorted element (index i) and compare it with the elements in the sorted
portion (indices 0 to i-1).
4. Move the elements greater than the current element one position to the right until a proper
position is found for the current element.
5. Insert the current element into its correct position within the sorted portion.
6. Repeat steps 3-5 for the remaining unsorted elements until the entire array is sorted.
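
A compact Python sketch of the algorithm above (illustrative):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]                  # next unsorted element
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]       # shift larger elements one position right
            j -= 1
        arr[j + 1] = key              # insert the key at its correct position
    return arr

print(insertion_sort([5, 4, 3, 2, 1]))   # [1, 2, 3, 4, 5]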

8. Write a short note on Selection sort. Also write its algorithm.


Selection sort is a simple comparison-based sorting algorithm that works by dividing the array into
two portions: a sorted portion and an unsorted portion. In each iteration, it finds the minimum (or
maximum) element from the unsorted portion and swaps it with the first unsorted element,
gradually building the sorted portion from left to right.
Here is the algorithm for selection sort:

1. Divide the list into two parts: a sorted part and an unsorted part.
2. Initially, the sorted part is empty, and the unsorted part contains the entire list.
3. Find the minimum element from the unsorted part of the list.
4. Swap the minimum element with the first element of the unsorted part.
5. Move the boundary between the sorted and unsorted parts one element to the right.
6. Repeat steps 3-5 until the entire list is sorted.
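
An illustrative Python sketch of selection sort:

def selection_sort(arr):
    for i in range(len(arr) - 1):
        min_idx = i
        for j in range(i + 1, len(arr)):              # find the minimum of the unsorted part
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]   # move it to the sorted boundary
    return arr

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]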
9. Write a short note on bubble sort. Also write its algorithm.

Bubble sort is a simple sorting algorithm that works by repeatedly comparing adjacent elements in an
array and swapping them when they are in the wrong order. It is called bubble sort because elements tend
to move up into the correct order like bubbles rising to the surface. Bubble sort is the simplest sorting
algorithm and is often used to teach the concept of sorting algorithms. However, it is not suitable for
large data sets, as its average and worst-case time complexity is quite high (O(n^2)).

Algorithm:
1. Start at the beginning of the array.
2. Compare the first and second elements. If they are in the wrong order, swap them.
3. Move to the next pair of elements and compare them. Continue this process until reaching
the end of the array.
4. At this point, the largest element will be at the end of the array.
5. Repeat steps 1-4 for the remaining unsorted portion of the array, excluding the last sorted
element from the previous iteration.
6. Repeat the process until the entire array is sorted.
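
An illustrative Python sketch of bubble sort (with the common early-exit optimisation):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):            # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                       # no swaps: array already sorted (best case O(n))
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]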
10. Write the steps of an insertion sort algorithm and consider an array of elements arr[5]=
{5,4,3,2,1}, what are the steps of insertions done while doing insertion sort in the array.

Insertion Sort Steps for Array Arr[5] = {5, 4, 3, 2, 1}

Pass 1 (key = 4): Compare 4 with 5. As 4 is smaller, shift 5 one position right and insert 4 before it.
Arr[5] = {4, 5, 3, 2, 1}

Pass 2 (key = 3): Compare 3 with 5 and shift 5 right; compare 3 with 4 and shift 4 right; insert 3 at the front.
Arr[5] = {3, 4, 5, 2, 1}

Pass 3 (key = 2): Compare 2 with 5, 4 and 3 in turn, shifting each one position right; insert 2 at the front.
Arr[5] = {2, 3, 4, 5, 1}

Pass 4 (key = 1): Compare 1 with 5, 4, 3 and 2 in turn, shifting each one position right; insert 1 at the front.
Arr[5] = {1, 2, 3, 4, 5}

Final Arrangement: The final sorted array is {1, 2, 3, 4, 5}. Since the input is in reverse order, this is
the worst case for insertion sort: every key is compared with, and shifted past, all the elements before
it (1 + 2 + 3 + 4 = 10 shifts in total).

11. Write algorithm for quick sort. Trace your algorithm on the following data to sort the list:
2, 13, 4, 21, 7, 56, 51, 85, 59, 1, 9, 10. How the choice of pivot elements affects the efficiency of
algorithm.

The Quick Sort algorithm is a widely used sorting algorithm that follows the divide-and-conquer
strategy. It works by selecting a pivot element from the list, partitioning the other elements around
the pivot, and recursively applying the same process to the sub-arrays on either side of the pivot
until the entire list is sorted. Here's the algorithm for Quick Sort:
Algorithm: Quick Sort
Inputs:
• A list of elements to be sorted, list[]
• Starting index of the list, low
• Ending index of the list, high
Procedure:
1. If low < high:
   a. Choose a pivot element (usually the last element) from the list.
   b. Set the pivot index as the result of the partition function: pivot_index = partition(list, low, high).
   c. Recursively call Quick Sort for the sub-array before the pivot: QuickSort(list, low, pivot_index - 1).
   d. Recursively call Quick Sort for the sub-array after the pivot: QuickSort(list, pivot_index + 1, high).

Partition function:
Inputs:
• The list of elements, list[]
• Starting index of the list, low
• Ending index of the list, high
Procedure:
1. Set the pivot element as the last element of the list. pivot = list[high]
2. Initialize the partition index as low - 1. partition_index = low - 1
3. Iterate from low to high - 1: a. If the current element is smaller than or equal to the pivot: -
Increment the partition index. - Swap the current element with the element at the partition
index.
4. Swap the pivot element with the element at the partition index + 1.
5. Return the partition index + 1.
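
The algorithm and partition procedure above can be sketched in Python as follows (last element as pivot, Lomuto partition scheme):

def partition(lst, low, high):
    pivot = lst[high]                    # last element as the pivot
    p = low - 1                          # partition index
    for j in range(low, high):
        if lst[j] <= pivot:
            p += 1
            lst[p], lst[j] = lst[j], lst[p]
    lst[p + 1], lst[high] = lst[high], lst[p + 1]   # put the pivot in its final place
    return p + 1

def quick_sort(lst, low, high):
    if low < high:
        pi = partition(lst, low, high)
        quick_sort(lst, low, pi - 1)     # sort the part before the pivot
        quick_sort(lst, pi + 1, high)    # sort the part after the pivot

data = [2, 13, 4, 21, 7, 56, 51, 85, 59, 1, 9, 10]
quick_sort(data, 0, len(data) - 1)
print(data)   # [1, 2, 4, 7, 9, 10, 13, 21, 51, 56, 59, 85]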
Now let's trace the Quick Sort algorithm (last element as pivot, Lomuto partition) on the given data: [2, 13, 4, 21, 7, 56, 51, 85, 59, 1, 9, 10].

1. Initial call: QuickSort(list, 0, 11)
   • Pivot = 10 (last element).
   • After partitioning: [2, 4, 7, 1, 9, 10, 51, 85, 59, 21, 13, 56]; the pivot index is 5.
2. QuickSort(list, 0, 4) on [2, 4, 7, 1, 9]
   • Pivot = 9. Every other element is smaller, so the sub-array is unchanged; the pivot index is 4.
3. QuickSort(list, 0, 3) on [2, 4, 7, 1]
   • Pivot = 1. No element is smaller, so 1 moves to the front: [1, 4, 7, 2]; the pivot index is 0.
4. QuickSort(list, 1, 3) on [4, 7, 2]
   • Pivot = 2. No element is smaller, so 2 moves to the front of the sub-array: [2, 7, 4]; the pivot index is 1.
5. QuickSort(list, 2, 3) on [7, 4]
   • Pivot = 4: [4, 7]; the pivot index is 2. All remaining sub-arrays have at most one element, so the left half is now sorted: [1, 2, 4, 7, 9].
6. QuickSort(list, 6, 11) on [51, 85, 59, 21, 13, 56]
   • Pivot = 56. After partitioning: [51, 21, 13, 56, 59, 85]; the pivot index is 9.
7. QuickSort(list, 6, 8) on [51, 21, 13]
   • Pivot = 13. No element is smaller, so 13 moves to the front: [13, 21, 51]; the pivot index is 6.
8. QuickSort(list, 7, 8) on [21, 51]
   • Pivot = 51: already in order; the pivot index is 8.
9. QuickSort(list, 10, 11) on [59, 85]
   • Pivot = 85: already in order; the pivot index is 11.

The final sorted list is [1, 2, 4, 7, 9, 10, 13, 21, 51, 56, 59, 85].

How the choice of pivot affects efficiency: if the pivot is close to the median, the two sub-arrays are of roughly equal size and the algorithm runs in O(n log n) time. If the pivot is repeatedly the smallest or largest element (for example, when the first or last element is chosen on an already sorted list), one sub-array is empty and the other contains n - 1 elements, and the running time degrades to O(n^2). Choosing a random pivot or the median-of-three helps avoid this worst case.
12. Which is the correct order of the following algorithms with respect to their time complexity in the
best case? Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, and Quick Sort.
The correct order of the given algorithms with respect to their best-case time complexity is as follows:

1. Bubble Sort: Best case time complexity is O(n). In the best case the list is already sorted, so the
algorithm (with an early-exit check) performs only a single pass without any swaps.
2. Insertion Sort: Best case time complexity is O(n). In the best case, each element is compared only
with the element before it and no shifts are required.
3. Merge Sort: Best case time complexity is O(n log n). Merge sort always divides the list and merges
the halves, so even an already sorted list costs O(n log n).
4. Quick Sort: Best case time complexity is O(n log n). Quick Sort's best case occurs when the pivot
element chosen is consistently the median or close to the median, splitting the list into two equal
halves.
5. Selection Sort: Best case time complexity is O(n^2). Even when the list is already sorted, the
algorithm still performs the same number of comparisons as in the worst case.

So, in increasing order of best-case complexity: Bubble Sort and Insertion Sort (O(n)) < Merge Sort and
Quick Sort (O(n log n)) < Selection Sort (O(n^2)).
13. Consider the array A[]= {6,4,8,1,3} apply the insertion sort to sort the array . Consider the
cost associated with each sort is 25 rupees , what is the total cost of the insertion sort when
element 1 reaches the first position of the array ?

Steps to perform Insertion Sort:


1. Iterate from the second element of the array to n, where n is the size of the array.
2. Compare the current element with the elements before it, and keep on swapping the elements
until the current element is in the correct position.
3. Repeat the above steps for all the elements in the array.

Given Array: A[]= {6,4,8,1,3}


After applying insertion sort, the array becomes:
A[]= {1,3,4,6,8}

Total Cost:
We have to calculate the total cost of the insertion sort at the point when element 1 reaches the first
position of the array, taking the cost of each swap as 25 rupees (a comparison that needs no swap costs
nothing).

The steps to perform the insertion sort are:

1. 6 and 4 are compared and swapped. Array becomes {4, 6, 8, 1, 3}. Cost = 25.
2. 6 and 8 are compared; no swap is required. Array remains {4, 6, 8, 1, 3}. Cost = 0.
3. 8 and 1 are compared and swapped. Array becomes {4, 6, 1, 8, 3}. Cost = 25.
4. 6 and 1 are compared and swapped. Array becomes {4, 1, 6, 8, 3}. Cost = 25.
5. 4 and 1 are compared and swapped. Array becomes {1, 4, 6, 8, 3}. Cost = 25.

At this point element 1 has reached the first position, so the required total cost is
25 + 0 + 25 + 25 + 25 = 100 rupees.

(If the sort is carried on to completion, element 3 needs three more swaps, giving {1, 4, 6, 3, 8},
{1, 4, 3, 6, 8} and finally {1, 3, 4, 6, 8}, so the cost of the full sort is 100 + 75 = 175 rupees.)

14. Let P be a QuickSort algorithm program to sort numbers in ascending with the first
element as a pivot. Let t1 & t2 be the number of comparison operations made by P for the
inputs A ={1, 2, 3, 4, 5} & B = {4, 1, 5, 3, 2} respectively. Which one will holds?

It would be t1 > t2, because the first case is the worst case of quicksort: the minimum element is always
chosen as the pivot, so the number of comparisons is highest.

First case [1 2 3 4 5]

1 [2 3 4 5] -> 4 comparisons
2 [3 4 5] -> 3 comparisons
3 [4 5] -> 2 comparisons
4 [5] -> 1 comparison

Second case [4 1 5 3 2]
4 [1 3 2] [5] -> 4 comparisons
1 [3 2] -> 2 comparisons
3 [2] -> 1 comparison

So t1 = 4 + 3 + 2 + 1 = 10 comparisons and t2 = 4 + 2 + 1 = 7 comparisons. The number of recursive calls
is similar, but in the second case each recursive call receives fewer elements, and hence the number of
comparisons is also less.

Hence, in the second case the number of comparisons is less, i.e. t1 > t2.

15. Prove that the number of comparisons to sort array A (t1) is greater than the number of comparisons
to sort array B (t2), i.e., t1 > t2.

The number of comparisons made by the quicksort algorithm depends on how evenly each pivot splits the
array. With the first element as the pivot, an already sorted array is the worst case, because the pivot
is always the smallest remaining element and every partition is completely one-sided.

Let A = {1, 2, 3, 4, 5} and B = {4, 1, 5, 3, 2}.

For array A (already sorted), every partition compares the pivot with all remaining elements and produces
an empty left part:

1 [2 3 4 5] -> 4 comparisons
2 [3 4 5] -> 3 comparisons
3 [4 5] -> 2 comparisons
4 [5] -> 1 comparison

t1 = 4 + 3 + 2 + 1 = 10 comparisons.

For array B, the first pivot (4) splits the array into two smaller parts, so the later partitions work on
fewer elements:

4 [1 3 2] [5] -> 4 comparisons
1 [3 2] -> 2 comparisons
3 [2] -> 1 comparison

t2 = 4 + 2 + 1 = 7 comparisons.

Since 10 > 7, we have t1 > t2: sorting the already sorted array A needs more comparisons than sorting the
unsorted array B.

16. Describe two way merge sort method. Explain the complexity of merge sort method in
Best, Worst and in Average case.

Two-way merge sort is a sorting algorithm that uses a divide-and-conquer approach to sort a given
list of elements. It divides the list into smaller sub-arrays, recursively sorts them, and then merges
the sorted sub-arrays to obtain the final sorted list.
Divide and Conquer
If we can break a single big problem into smaller sub-problems, solve the smaller sub-problems and
combine their solutions to find the solution for the original big problem, it becomes easier to solve
the whole problem.
Let's take an example, Divide and Rule.
When Britishers came to India, they saw a country with different religions living in harmony, hard
working but naive citizens, unity in diversity, and found it difficult to establish their empire. So,
they adopted the policy of Divide and Rule. Where the population of India was collectively a one
big problem for them, they divided the problem into smaller problems, by instigating rivalries
between local kings, making them stand against each other, and this worked very well for them.
Well that was history, and a socio-political policy (Divide and Rule), but the idea here is, if we can
somehow divide a problem into smaller sub-problems, it becomes easier to eventually solve the
whole problem.
In Merge Sort, the given unsorted array with n elements, is divided into n subarrays, each having
one element, because a single element is always sorted in itself. Then, it repeatedly merges these
subarrays, to produce new sorted subarrays, and in the end, one complete sorted array is produced.
The concept of Divide and Conquer involves three steps:
1. Divide the problem into multiple small problems.
2. Conquer the subproblems by solving them. The idea is to break down the problem into
atomic subproblems, where they are actually solved.
3. Combine the solutions of the subproblems to find the solution of the actual problem.

Let's consider an array with values {14, 7, 3, 12, 9, 11, 6, 12}

In merge sort we follow the following steps:


1. We take a variable p and store the starting index of our array in this. And we take another
variable r and store the last index of array in it.
2. Then we find the middle of the array using the formula (p + r)/2 and mark the middle
index as q, and break the array into two subarrays, from p to q and from q + 1 to r index.
3. Then we divide these 2 subarrays again, just like we divided our main array and this
continues.
4. Once we have divided the main array into subarrays with single elements, then we start
merging the subarrays.
Time Complexity:
Best Case Complexity: O(n*log n)
Worst Case Complexity: O(n*log n)
Average Case Complexity: O(n*log n)
Space Complexity:

The space complexity of merge sort is O(n).
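
An illustrative Python sketch of two-way merge sort:

def merge_sort(arr):
    if len(arr) <= 1:
        return arr                         # a single element is already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])           # divide
    right = merge_sort(arr[mid:])
    return merge(left, right)              # combine the two sorted halves

def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # pick the smaller front element
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])                # append whatever remains
    result.extend(right[j:])
    return result

print(merge_sort([14, 7, 3, 12, 9, 11, 6, 12]))   # [3, 6, 7, 9, 11, 12, 12, 14]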



17. Using the merge sort method, sort the given elements in ascending order:10, 25, 16, 5, 35,
48, 8.

18. Define Heap Sort. Build a Max Heap tree and Min Heap tree from the following list of
numbers: 44, 30, 50, 22, 60, 55, 77, 55. After build Max Heap tree and Min Heap tree then
show the steps of how to delete a root node of Max Heap and Min heap Tree

Heap sort is a sorting algorithm that works by building a max heap or min heap from the input
array. A max heap is a complete binary tree where the value of each node is greater than or equal to
the values of its children. A min heap is a complete binary tree where the value of each node is less
than or equal to the values of its children.

Once the heap is built, the root node of the heap is removed and the heap is rebuilt. This process is
repeated until the heap is empty.

The following is the list of numbers:


44, 30, 50, 22, 60, 55, 77, 55

The following is the max heap tree and min heap tree built from the list of numbers:
Max Heap (obtained by heapifying the array {44, 30, 50, 22, 60, 55, 77, 55} from the bottom up):

          77
        /    \
      60      55
     /  \    /  \
    55   30 44   50
   /
  22

Array form: {77, 60, 55, 55, 30, 44, 50, 22}

Min Heap:

          22
        /    \
      30      50
     /  \    /  \
    44   60 55   77
   /
  55

Array form: {22, 30, 50, 44, 60, 55, 77, 55}

The following are the steps to delete the root node of a max heap or a min heap:

1. Replace the root node with the last element of the heap and remove that last element (the heap now
has one node fewer).
2. Re-heapify (sift down) the new root: repeatedly swap it with its larger child in a max heap, or with
its smaller child in a min heap, until the heap property is restored.

In the case of the max heap above, the root 77 is replaced by the last element 22, giving
{22, 60, 55, 55, 30, 44, 50}. Sifting 22 down (swap with 60, then with 55) restores the max-heap
property: {60, 55, 55, 22, 30, 44, 50}.

In the case of the min heap above, the root 22 is replaced by the last element 55, giving
{55, 30, 50, 44, 60, 55, 77}. Sifting 55 down (swap with 30, then with 44) restores the min-heap
property: {30, 44, 50, 55, 60, 55, 77}.
19. Use Heap sort algorithm to sort 15, 22, 30, 10, 15, 64, 1, 3, 9, 2.
Heap Sort:
A heap sort algorithm works by first organizing the data to be sorted into a special type
of binary tree called a heap. Any kind of data can be sorted either in ascending order or
in descending order using heap tree. It does this with the following steps:
1. Build a heap tree with the given set of data.
2. a. Remove the topmost item (the largest) and replace it with the last element in the heap.
   b. Re-heapify the complete binary tree.
   c. Place the deleted node in the output.
3. Continue step 2 until the heap tree is empty.
Algorithm:
This algorithm sorts the elements a[n]. Heap sort rearranges them in-place in non-
decreasing order. First transform the elements into a heap.
Build a Max Heap:
• Start with the given list: {15, 22, 30, 10, 15, 64, 1, 3, 9, 2}.
• Convert the list into a Max Heap by heapifying it from the bottom up, swapping parents with their
larger children so that every parent node is greater than or equal to its children.
• After heapifying the entire list, the Max Heap is formed.
Max Heap: {64, 22, 30, 10, 15, 15, 1, 3, 9, 2}
2. Sorting:
• Swap the root node (maximum value) with the last element in the heap.
• Reduce the size of the heap by one.
• Heapify the remaining elements to maintain the Max Heap property.
• Repeat the above steps until all elements are extracted from the heap; the extracted elements collect
at the end of the array in ascending order.
Steps:
• Step 1: extract 64. Remaining heap: {30, 22, 15, 10, 15, 2, 1, 3, 9}. Sorted part: 64
• Step 2: extract 30. Remaining heap: {22, 15, 15, 10, 9, 2, 1, 3}. Sorted part: 30, 64
• Step 3: extract 22. Remaining heap: {15, 10, 15, 3, 9, 2, 1}. Sorted part: 22, 30, 64
• Step 4: extract 15. Remaining heap: {15, 10, 2, 3, 9, 1}. Sorted part: 15, 22, 30, 64
• Step 5: extract 15. Remaining heap: {10, 9, 2, 3, 1}. Sorted part: 15, 15, 22, 30, 64
• Step 6: extract 10. Remaining heap: {9, 3, 2, 1}. Sorted part: 10, 15, 15, 22, 30, 64
• Step 7: extract 9. Remaining heap: {3, 1, 2}. Sorted part: 9, 10, 15, 15, 22, 30, 64
• Step 8: extract 3. Remaining heap: {2, 1}. Sorted part: 3, 9, 10, 15, 15, 22, 30, 64
• Step 9: extract 2. Remaining heap: {1}. Sorted part: 2, 3, 9, 10, 15, 15, 22, 30, 64
The sorted list (ascending) is {1, 2, 3, 9, 10, 15, 15, 22, 30, 64}.
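
The whole procedure can be sketched in Python as follows (an illustrative implementation using 0-based array indices):

def heapify(arr, n, i):
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)              # keep sifting the swapped value down

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):       # build the max heap bottom-up
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]   # move the current maximum to the end
        heapify(arr, end, 0)                  # re-heapify the reduced heap
    return arr

print(heap_sort([15, 22, 30, 10, 15, 64, 1, 3, 9, 2]))
# [1, 2, 3, 9, 10, 15, 15, 22, 30, 64]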
