CIE III


1. Define Graph.

Show the adjacency matrix and adjacency list representation for the
below graph.
A graph, G, consists of two sets: a finite, nonempty set of vertices, and a finite, possibly
empty set of edges. V(G) and E(G) represent the sets of vertices and edges of G, respectively.
Alternately, we may write G = (V, E) to represent a graph.

Graphs are represented in 3 different ways: adjacency matrices, adjacency lists, and
adjacency multilists.

i. Adjacency Matrix
Let G = (V, E) be a graph with n vertices, n ≥ 1. The adjacency matrix of G is a two-
dimensional n × n array, say adj_mat. If the edge (vi, vj) is in E(G), adj_mat[i][j] = 1.
If there is no such edge in E(G), adj_mat[i][j] = 0.

ii. Adjacency lists


An array of lists is used to store the edges between vertices. The size of the array is
equal to the number of vertices (i.e., n). Each index in this array represents a specific
vertex in the graph: the entry at index i of the array contains a linked list of the
vertices that are adjacent to vertex i.
Assume there are n vertices in the graph, so create an array of lists of
size n, say adjList[n].
 adjList[0] will have all the nodes which are connected (neighbours) to vertex 0.
 adjList[1] will have all the nodes which are connected (neighbours) to vertex 1, and so
on.
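Since the graph figure referenced in the question is not reproduced here, both representations can be sketched in Python for a hypothetical 4-vertex undirected graph with edges (0,1), (0,2), (1,2) and (2,3); the function name and the example graph are assumptions for illustration:

```python
def build_representations(n, edges):
    """Return (adjacency matrix, adjacency list) for an undirected graph."""
    adj_mat = [[0] * n for _ in range(n)]
    adj_list = [[] for _ in range(n)]
    for u, v in edges:
        adj_mat[u][v] = adj_mat[v][u] = 1   # symmetric for an undirected graph
        adj_list[u].append(v)
        adj_list[v].append(u)
    return adj_mat, adj_list

# Hypothetical 4-vertex graph with edges (0,1), (0,2), (1,2), (2,3)
mat, lst = build_representations(4, [(0, 1), (0, 2), (1, 2), (2, 3)])
```

Note that the matrix needs O(n²) space regardless of how many edges exist, while the list representation grows only with the number of edges.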
2. Explain Elementary Graph Operations with Suitable examples.
Given a graph G = (V, E) and a vertex v in V(G), we wish to visit all vertices in G that
are reachable from v (i.e., all vertices that are connected to v). We shall look at two ways of
doing this: depth-first search and breadth-first search. Although these methods work on both
directed and undirected graphs, the following discussion assumes that the graphs are
undirected.

i. Depth-First Search
Begin the search by visiting the start vertex v: if v has an unvisited neighbour,
traverse it recursively; otherwise, backtrack. That is, we begin by visiting the start vertex v.
Next, an unvisited vertex w adjacent to v is selected, and a depth-first search from w is
initiated. When a vertex u is reached such that all its adjacent vertices have been visited,
we back up to the last visited vertex that has an unvisited vertex w adjacent to it and
initiate a depth-first search from w.
The search terminates when no unvisited vertex can be reached from any of the
visited vertices. A DFS traversal of a graph produces a spanning tree as its final result.
A spanning tree is a subgraph that contains all the vertices but no cycles. We use a stack,
with a maximum size equal to the total number of vertices in the graph, to implement DFS.

We use the following steps to implement DFS traversal...


Step 1: Define a stack whose size is the total number of vertices in the graph.
Step 2: Select any vertex as the starting point for the traversal. Visit that vertex and push it
onto the stack.
Step 3: Visit any one unvisited vertex adjacent to the vertex on top of the stack, and push it
onto the stack.
Step 4: Repeat step 3 until there is no new vertex to visit from the vertex on top of the
stack.
Step 5: When there is no new vertex to visit, backtrack and pop one vertex from the stack.
Step 6: Repeat steps 3, 4 and 5 until the stack becomes empty.
Step 7: When the stack becomes empty, produce the final spanning tree by removing the
unused edges from the graph.
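The stack-based steps above can be sketched in Python. This is a minimal illustration, not the only way to write DFS; the adjacency-list input format and the function name are assumptions:

```python
def dfs(adj_list, start):
    """Iterative DFS following the steps above: push the start vertex,
    repeatedly visit one unvisited neighbour of the stack top, and
    backtrack (pop) when the top has no unvisited neighbour."""
    visited = [start]
    stack = [start]
    while stack:
        top = stack[-1]
        unvisited = [w for w in adj_list[top] if w not in visited]
        if unvisited:
            w = unvisited[0]       # Step 3: visit one unvisited neighbour
            visited.append(w)
            stack.append(w)
        else:
            stack.pop()            # Step 5: backtrack
    return visited

# Square graph 0-1, 0-2, 1-3, 2-3: DFS from 0 goes deep before branching
order = dfs([[1, 2], [0, 3], [0, 3], [1, 2]], 0)   # [0, 1, 3, 2]
```

The membership test `w not in visited` is O(n) per lookup; a real implementation would keep a separate visited set, but a list keeps the sketch close to the steps above.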

ii. Breadth-First Search


In a breadth-first search, we begin by visiting the start vertex v. Next, all unvisited vertices
adjacent to v are visited. Unvisited vertices adjacent to these newly visited vertices are then
visited, and so on.
A BFS traversal of a graph produces a spanning tree as its final result. A spanning tree is a
subgraph that contains all the vertices but no cycles. We use a queue, with a maximum size
equal to the total number of vertices in the graph, to implement BFS traversal of a graph.

We use the following steps to implement BFS traversal...


Step 1: Define a queue whose size is the total number of vertices in the graph.
Step 2: Select any vertex as the starting point for the traversal. Visit that vertex and insert it
into the queue.
Step 3: Visit all the unvisited vertices adjacent to the vertex at the front of the queue, and
insert them into the queue.
Step 4: When there is no new vertex to visit from the vertex at the front of the queue,
delete that vertex from the queue.
Step 5: Repeat steps 3 and 4 until the queue becomes empty.
Step 6: When the queue becomes empty, produce the final spanning tree by removing the
unused edges from the graph.
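The queue-based steps above can be sketched in Python (a minimal illustration; the adjacency-list input format and the function name are assumed):

```python
from collections import deque

def bfs(adj_list, start):
    """BFS per the steps above: visit the start vertex, then repeatedly
    visit all unvisited neighbours of the vertex at the front of the
    queue before dequeuing it."""
    visited = [start]
    queue = deque([start])
    while queue:
        front = queue[0]
        for w in adj_list[front]:      # Step 3: visit all unvisited neighbours
            if w not in visited:
                visited.append(w)
                queue.append(w)
        queue.popleft()                # Step 4: front has no new vertex left
    return visited

# Same square graph as the DFS sketch: BFS visits level by level
order = bfs([[1, 2], [0, 3], [0, 3], [1, 2]], 0)   # [0, 1, 2, 3]
```

Compare with DFS on the same graph, which produces [0, 1, 3, 2]: BFS finishes all of vertex 0's neighbours before moving deeper.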

3. What is Hashing? Explain different types of hash functions with example.


-Hashing enables us to perform the dictionary operations such as search, insert and delete
in constant expected time.
-In a mathematical sense, a map is a relation between two sets. We can define a map M as a
set of pairs, where each pair is of the form (key, value): given a key, we can find its value
using some kind of a “function” that maps keys to values.
-The hashing technique is designed around a special function, called the hash function,
which maps a given key to a particular location for faster access of elements.
-The types of hash function are:
1. Division Method: This is the simplest method of hashing an integer x. The
method divides x by M and uses the remainder so obtained. In this case, the
hash function can be given as h(x) = x % M. It is best to choose M to be a prime
number, because a prime M increases the likelihood that the keys are mapped
uniformly over the output range of values.
2. Mid-Square Method:
 Here, the key k is squared. A number l in the middle of k² is selected by
removing the digits from both ends: h(k) = l.
 Example 1:
Solution: Let key = 2345. Its square is k² = 5499025.
Taking the two middle digits and discarding those at both ends,
h(2345) = 99.

3. Folding Method:
Step 1: Divide the key value into a number of parts. That is, divide k into parts k1,
k2, ..., kn, where each part has the same number of digits except the last part, which
may have fewer digits than the other parts.
Step 2: Add the individual parts. That is, obtain the sum k1 + k2 + ... + kn. The
hash value is produced by ignoring the last carry, if any. For example, with
two-digit parts, k = 2345 splits into 23 and 45, so h(2345) = 23 + 45 = 68.
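The three hash functions can be sketched in Python. This is a minimal illustration; the parameter choices (M = 97 for division, two middle digits for mid-square, two-digit parts for folding) are assumptions:

```python
def division_hash(x, m=97):
    """Division method: h(x) = x mod m, with m ideally a prime (97 assumed)."""
    return x % m

def mid_square_hash(k, digits=2):
    """Mid-square method: square the key and keep `digits` middle digits."""
    s = str(k * k)
    start = len(s) // 2 - digits // 2
    return int(s[start:start + digits])

def folding_hash(k, part_size=2, table_size=100):
    """Folding method: split the key's digits into parts of `part_size`,
    sum the parts, and ignore any final carry (taken mod table_size)."""
    s = str(k)
    parts = [int(s[i:i + part_size]) for i in range(0, len(s), part_size)]
    return sum(parts) % table_size
```

For the key 2345 these give division_hash(2345) = 17 (mod 97), mid_square_hash(2345) = 99 (from 2345² = 5499025), and folding_hash(2345) = 68 (23 + 45), matching the worked examples above.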

4. What is Collision? What are the methods to resolve collision? Explain linear probing
with an example.
The figure shows a hash table in which each key from the set K is mapped to a location
generated by using a hash function. Note that keys k2 and k6 point to the same memory
location. This is known as a collision: when two or more keys map to the same
memory location, a collision occurs.
A method used to solve the problem of collision, also called a collision resolution
technique, is then applied. The most popular methods of resolving collisions are:
1. Collision Resolution by Linear Probing (open addressing)
2. Quadratic Probing
3. Double Hashing
4. Rehashing
5. Chaining

1. Collision Resolution by Linear Probing (open addressing)


Suppose a new record R with key k is to be added to the memory table T, but the
location with address H(k) = h is already filled. One way of resolving the collision is to
assign R to the first available location following T[h]. We assume that T, with m locations,
is circular, so that T[1] comes after T[m]. According to this procedure, we search for record
R in table T by linearly searching the locations T[h], T[h+1], ... until we meet an empty
location or find R.

Example: Consider a hash table of size 10. Using linear probing, insert the keys 72, 27,
36, 24, 63, 81, 92, and 101 into the table.
Solution: Let H(k) = k mod m, with m = 10. Initially the hash table will be
0 1 2 3 4 5 6 7 8 9
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1
H(72) = 72 mod 10 = 2
0 1 2 3 4 5 6 7 8 9
-1 -1 72 -1 -1 -1 -1 -1 -1 -1

H(27) = 27 mod 10 = 7
0 1 2 3 4 5 6 7 8 9
-1 -1 72 -1 -1 -1 -1 27 -1 -1
H(36) = 36 mod 10 = 6
0 1 2 3 4 5 6 7 8 9
-1 -1 72 -1 -1 -1 36 27 -1 -1
H(24) = 24 mod 10 = 4
0 1 2 3 4 5 6 7 8 9
-1 -1 72 -1 24 -1 36 27 -1 -1
H(63) = 63 mod 10 =3
0 1 2 3 4 5 6 7 8 9
-1 -1 72 63 24 -1 36 27 -1 -1
H(81) = 81 mod 10 =1
0 1 2 3 4 5 6 7 8 9
-1 81 72 63 24 -1 36 27 -1 -1

H(92) = 92 mod 10 = 2
A collision occurs since position 2 is already filled. So go to the next position, 3, which is
also filled, then to position 4, which is also filled. Position 5 is not filled, so insert the key
92 at position 5.
0 1 2 3 4 5 6 7 8 9
-1 81 72 63 24 92 36 27 -1 -1
H(101) = 101 mod 10 = 1
A collision occurs since position 1 is already filled. Linear probing finds that the next free
position is 8, so insert the key 101 at position 8.
0 1 2 3 4 5 6 7 8 9
-1 81 72 63 24 92 36 27 101 -1

Key   Home address   Actual address   Search length
 81        1               1                1
 72        2               2                1
 63        3               3                1
 24        4               4                1
 92        2               5                4
 36        6               6                1
 27        7               7                1
101        1               8                8

Average search length = (1+1+1+1+4+1+1+8)/ 8 = 2.25
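The worked example above can be reproduced with a short Python sketch; the function name and the use of -1 to mark empty slots (as in the tables above) are assumptions:

```python
def linear_probe_insert(table, key):
    """Insert `key` into `table` (a list with -1 marking empty slots) using
    h(k) = k mod len(table) and linear probing on collision."""
    m = len(table)
    pos = key % m
    while table[pos] != -1:        # collision: try the next slot, wrapping around
        pos = (pos + 1) % m
    table[pos] = key
    return pos                     # actual address where the key landed

table = [-1] * 10
for k in [72, 27, 36, 24, 63, 81, 92, 101]:
    linear_probe_insert(table, k)
# table is now [-1, 81, 72, 63, 24, 92, 36, 27, 101, -1], matching the final table above
```

Note the sketch assumes the table never becomes completely full; production code would track the load factor and stop or resize before the probe loop could cycle forever.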

5. What is Dynamic Hashing? Explain the following operations with example


i) Dynamic Hashing using directories
ii)Directory less Dynamic Hashing

Dynamic hashing is a data structure technique that allows for efficient management of
hash tables with a changing number of records: the size of the hash table is increased
dynamically as collisions occur.

There are two types:


1) Dynamic hashing using a directory (extendible hashing): uses a directory that
grows or shrinks depending on the data distribution; there are no overflow buckets.
2) Directoryless dynamic hashing (linear hashing): there is no directory; buckets are
split in linear order, and overflow buckets are used.
i. Dynamic hashing using a directory
- Uses a directory of pointers to buckets/bins, which are collections of records.
- The number of buckets is doubled by doubling the directory and splitting just the bin
that overflowed.
- The directory is much smaller than the file, so doubling it is much cheaper.

ii. Dynamic hashing without using directory


If we assume that we have a contiguous address space which is large enough to
hold all the records, we can eliminate the directory. In effect, this leaves it to the operating
system to break the address space into pages, and to manage moving them into and out of
memory. This scheme is referred to as directoryless hashing or linear hashing.

Here, the two-bit addresses are the actual addresses of the pages (actually offsets
from some base address). Thus, the hash function delivers the actual address of the page
containing the key. Moreover, every value produced by the hash function must point to an
actual page. In contrast to the directory scheme, where a single page might be pointed at by
several directory entries, in the directoryless scheme there must exist a unique page for every
possible address. The figure shows a simple trie and its mapping to contiguous memory without a
directory.

Now when a page overflows, we could double the size of the address space, but this is
wasteful. Instead, whenever an overflow occurs, we add a new page to the end of the file, and
divide the identifiers in one of the pages between its original page and the new page.

6. What is a Priority Queue? Write the functions to implement a Maximum Priority Queue
with an example.
i) Insert into Max Priority Queue.
ii) Delete from Max Priority Queue.

A priority queue is a type of queue that arranges elements based on their priority
values. Elements with higher priority values are typically retrieved or removed before
elements with lower priority values. Each element has a priority value associated with it.
When we add an item, it is inserted in a position based on its priority value.

There are two types of priority queues based on the priority of elements.
 If the element with the smallest value has the highest priority, then that priority queue
is called a min priority queue.
 If the element with the largest value has the highest priority, then that priority queue is
known as a max priority queue.

i). Insert into Max Priority Queue


 When a new element is inserted, it is placed in the first empty slot, filling the tree
from top to bottom and left to right. The element is then compared with its parent
node; if the two are out of order, they are swapped. This swapping process continues
up the tree until all the elements in the queue are in their correct positions.
ii). Delete from Max Priority Queue
 In a max heap, the maximum element is the root node, and the element with the
maximum priority is removed first. Thus, you remove the root node from the
queue. This removal creates an empty slot, which is filled by the last element of
the heap. That element is then sifted down, compared with its children and
swapped with the larger one, until the heap property is restored.
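The insert and delete operations described above can be sketched with an array-based max heap in Python; the function names are assumptions:

```python
def insert_max_heap(heap, item):
    """Insert: place the item in the next free slot (top to bottom, left to
    right), then bubble it up while it is larger than its parent."""
    heap.append(item)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] < heap[i]:
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2

def delete_max(heap):
    """Delete: remove the root (the maximum), move the last element into
    the root slot, then sift it down, swapping with its larger child."""
    root = heap[0]
    heap[0] = heap[-1]
    heap.pop()
    i, n = 0, len(heap)
    while True:
        largest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and heap[child] > heap[largest]:
                largest = child
        if largest == i:
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest
    return root
```

For example, inserting 3, 9, 5, 1, 7 and then calling delete_max twice returns 9 and then 7, leaving 5 at the root.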

7. Explain Leftist Tree with example.


A leftist tree, also known as a leftist heap, is a type of binary heap data structure used
for implementing priority queues. Unlike an ordinary array-based heap, it is not kept as a
complete binary tree; instead, it is deliberately allowed to become unbalanced, with the
tree heavier on the left.
In a leftist tree, the priority of a node is determined by its key value, and the node with
the smallest key value is designated as the root node. The left subtree of a node in a leftist
tree is always at least as deep as the right subtree, measured by the shortest distance to a
null child. This is known as the “leftist property.”

The main operations performed on a leftist tree include insert, extract-min and merge.
The insert operation simply adds a new node to the tree, while the extract-min operation
removes the root node and updates the tree structure to maintain the leftist property. The
merge operation combines two leftist trees into a single leftist tree by linking the root nodes
and maintaining the leftist property.
A leftist tree is a binary tree with properties:
1. Normal Min Heap Property : key(i) >= key(parent(i))
2. Heavier on left side : dist(right(i)) <= dist(left(i)). Here, dist(i) is the number of edges
on the shortest path from node i to a leaf node in extended binary tree representation
(In this representation, a null child is considered as external or leaf node). The shortest
path to a descendant external node is through the right child. Every subtree is also a
leftist tree and dist( i ) = 1 + dist( right( i ) ).

Example: The leftist tree below is presented with the distance calculated for each node by
the procedure mentioned above. The rightmost node has a dist of 0, as the right subtree of
this node is null, and its parent has a distance of 1 by dist( i ) = 1 + dist( right( i )). The same
is followed for each node, and its s-value (or rank) is calculated.
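Since the figure is not reproduced here, the merge operation that underlies a (min) leftist heap can instead be sketched in Python. The class and function names are assumptions, and the dist of a null child is taken as -1 so that a leaf has dist 0, matching the rank convention above:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.dist = key, None, None, 0

def merge(a, b):
    """Merge two min leftist heaps: recurse down the right spines, then
    swap children wherever dist(left) < dist(right) would violate the
    leftist property."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                       # keep the smaller root on top
    a.right = merge(a.right, b)
    ld = a.left.dist if a.left else -1    # dist of a null child is -1
    rd = a.right.dist if a.right else -1
    if ld < rd:
        a.left, a.right = a.right, a.left
    a.dist = 1 + (a.right.dist if a.right else -1)
    return a

def insert(root, key):
    """Insert is just a merge with a single-node heap."""
    return merge(root, Node(key))
```

Extract-min then falls out for free: remove the root and merge its two subtrees, all in O(log n) because merge only walks the short right spines.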

8. Discuss AVL Tree with an example. Write a Function for insert into an AVL tree.
An AVL tree is defined as a self-balancing binary search tree (BST) in which the
difference between the heights of the left and right subtrees of any node cannot be more
than one. The difference between the heights of the left subtree and the right subtree of a
node is known as the balance factor of that node.

Example:
Operations on an AVL Tree:
 Insertion
 Deletion
 Searching

Insertion in AVL Tree:


To make sure that the given tree remains AVL after every insertion, we must augment the
standard BST insert operation to perform some re-balancing.

Following are two basic operations that can be performed to balance a BST without
violating the BST property (keys(left) < key(root) < keys(right)).
 Left Rotation
 Right Rotation

Steps to follow for insertion:


Let the newly inserted node be w
 Perform standard BST insert for w.
 Starting from w, travel up and find the first unbalanced node. Let z be the first
unbalanced node, y be the child of z that comes on the path from w to z and x be
the grandchild of z that comes on the path from w to z.
 Re-balance the tree by performing appropriate rotations on the subtree rooted
with z. There can be 4 possible cases that need to be handled as x, y and z can be
arranged in 4 ways.
 Following are the possible 4 arrangements:
o y is the left child of z and x is the left child of y (Left Left Case)
o y is the left child of z and x is the right child of y (Left Right Case)
o y is the right child of z and x is the right child of y (Right Right Case)
o y is the right child of z and x is the left child of y (Right Left Case)

4 cases are implemented as follows:


1. LL rotation: the inserted node is in the left subtree of the left subtree of the unbalanced node z.
2. RR rotation: the inserted node is in the right subtree of the right subtree of the unbalanced node z.
3. LR rotation: the inserted node is in the right subtree of the left subtree of the unbalanced node z.
4. RL rotation: the inserted node is in the left subtree of the right subtree of the unbalanced node z.
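A function for insertion into an AVL tree, as the question asks, can be sketched in Python: it performs the standard BST insert and then applies the four rotation cases above. The class and function names are assumptions:

```python
class AVLNode:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def balance(n):
    return height(n.left) - height(n.right) if n else 0

def right_rotate(z):
    y = z.left
    z.left, y.right = y.right, z
    z.height = 1 + max(height(z.left), height(z.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

def left_rotate(z):
    y = z.right
    z.right, y.left = y.left, z
    z.height = 1 + max(height(z.left), height(z.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

def avl_insert(root, key):
    # Standard BST insert
    if root is None:
        return AVLNode(key)
    if key < root.key:
        root.left = avl_insert(root.left, key)
    else:
        root.right = avl_insert(root.right, key)
    # Update height and re-balance on the way back up
    root.height = 1 + max(height(root.left), height(root.right))
    b = balance(root)
    if b > 1 and key < root.left.key:       # Left Left
        return right_rotate(root)
    if b < -1 and key > root.right.key:     # Right Right
        return left_rotate(root)
    if b > 1 and key > root.left.key:       # Left Right
        root.left = left_rotate(root.left)
        return right_rotate(root)
    if b < -1 and key < root.right.key:     # Right Left
        root.right = right_rotate(root.right)
        return left_rotate(root)
    return root
```

For example, inserting 10, 20, 30 in order triggers the RR case at the root, and a single left rotation makes 20 the new root with 10 and 30 as children.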

9. Write a note on optimal binary search tree.


An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search
Tree, is a binary search tree that minimizes the expected search cost. In a binary search tree,
the search cost is the number of comparisons required to search for a given key.
In an OBST, each node is assigned a weight that represents the probability of the key
being searched for. The sum of all the weights in the tree is 1.0. The expected search cost of
a node is the sum of the product of its depth and weight, and the expected search cost of its
children.
To construct an OBST, we start with a sorted list of keys and their probabilities. We
then build a table that contains the expected search cost for all possible sub-trees of the
original list. We can use dynamic programming to fill in this table efficiently. Finally, we
use this table to construct the OBST.

Given a sorted array key [0.. n-1] of search keys and an array freq[0.. n-1] of
frequency counts, where freq[i] is the number of searches for keys[i]. Construct a binary
search tree of all keys such that the total cost of all the searches is as small as possible.
Let us first define the cost of a BST. The cost of a BST node is the level of that node
multiplied by its frequency. The level of the root is 1.

Examples:
Input: keys[] = {10, 12}, freq[] = {34, 50}
There are two possible BSTs:

Tree I:  10        Tree II:   12
           \                  /
            12              10

The frequencies of searches for 10 and 12 are 34 and 50 respectively.

The cost of tree I is 34*1 + 50*2 = 134
The cost of tree II is 50*1 + 34*2 = 118
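The dynamic-programming construction described above can be sketched in Python. It computes only the minimal total cost, not the tree itself, and the function name is an assumption:

```python
def optimal_bst_cost(freq):
    """Minimum total search cost over all BSTs of the (sorted) keys whose
    frequencies are `freq`, where a node at level d contributes d * freq."""
    n = len(freq)
    # cost[i][j]: minimal cost of a BST built from keys i..j
    # fsum[i][j]: freq[i] + ... + freq[j], the extra cost of deepening i..j by one level
    cost = [[0] * n for _ in range(n)]
    fsum = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = fsum[i][i] = freq[i]
        for j in range(i + 1, n):
            fsum[i][j] = fsum[i][j - 1] + freq[j]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            best = float('inf')
            for r in range(i, j + 1):        # try each key in i..j as the root
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                best = min(best, left + right + fsum[i][j])
            cost[i][j] = best
    return cost[0][n - 1]
```

On the example above, optimal_bst_cost([34, 50]) returns 118, the cost of tree II. This formulation runs in O(n³) time; Knuth's optimization can reduce it to O(n²).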

10. Explain static hashing


Static hashing is a hashing technique that enables users to look up a definite data set:
the data directory does not change, so it is "static" or fixed. In this hashing technique, the
resulting number of data buckets in memory remains constant.

Operations Provided by Static Hashing


Static hashing provides the following operations −
 Delete − Search for a record's address and delete the record at that address, or delete a
chunk of records stored at that address in memory.
 Insertion − While entering a new record using static hashing, the hash function h
calculates the bucket address h(K) for the search key K, where the record is going to
be stored.
 Search − A record can be obtained by using the hash function to locate the address of
the bucket where the data is stored.
 Update − A record can be updated once it is traced to its data bucket.

Advantages of Static Hashing


Static hashing is advantageous in the following ways −
It offers very good performance for small databases.
It allows a primary key value to be used as the hash key.

Disadvantages of Static Hashing


Static hashing comes with the following disadvantages −
It cannot work efficiently with databases that need to scale.
It is not a good option for large databases.
Bucket overflow occurs if there is more data than available memory.
