Lab Manual-3
B.TECH.
(III YEAR – ODD SEM)
(2022-23)
COLLEGE OF ENGINEERING
ROORKEE
(Affiliated to Veer Madho Singh Bhandari Uttarakhand Technical University,
Dehradun)
Approved by AICTE & MHRD, Govt. of India; Accredited by NAAC
Roorkee-Haridwar Road (NH-58),
Post Box No. 27, Vardhmanpuram,
Roorkee-247667, District Haridwar (Uttarakhand)
www.coer.ac.in
COLLEGE OF ENGINEERING ROORKEE
Vision
"To impart education in Engineering with training, skill upgradation and research in
futuristic technologies and niche areas."
Mission
M1: To develop professionals with basic and advanced competencies so that they can
serve society and industry, and face global challenges.
M2: To impart education based on latest knowledge, with analytical and experimental skills,
through advanced methods of training, research and strong Institute-Industry interface.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Vision
To promote innovation-centric education and perform research in Computer Science and
Engineering in pace with industrial development.
Mission
M1: To provide a learning environment that helps students to enhance problem solving skills
at par with global standards.
M2: To establish Industry-Institute Interaction to make students ready for the industrial
environment.
M3: To provide exposure to students to the latest tools and technologies in the area of
computer hardware and software.
M5: To foster the science of creativity and to instil a sense of ownership for sustainable and
scalable ventures.
M6: To impart the requisite moral values and instil discipline in students so that they attain
consistent growth in their professional lives.
PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)
PEO1: To equip the students with skills and the latest updates so that they can work and
contribute to the continuously changing landscape of the IT industry.
PEO3: To inculcate a culture of professionalism, ethical conduct and teamwork, with good
communication skills, to enable the students to be successful in their careers and to
launch start-ups in their chosen field.
PSO1: Students will have the ability to apply software engineering principles to design,
build, test, and deliver solutions for the software industry.
PSO2: The students will be able to use programming, database, networking and web
development concepts for developing solutions to real-life problems.
DEPARTMENT OF INFORMATION TECHNOLOGY
Vision
To produce professional graduates trained in the latest tools and technologies of Information
Technology, and to build a strong teaching & research environment that tracks and responds
to the challenges of modern times
Mission
1. To impart quality engineering education that enables the students to become competent IT
professionals through professional technical education and training.
2. To imbibe experiential learning blended with critical thinking and a strong industry
connect in the students, and to ignite them to pursue research in emerging areas.
4. To create disciplined graduates with strong moral values who continuously strive for
higher echelons in their personal and professional lives.
DEPARTMENT OF INFORMATION TECHNOLOGY
PEO1 To equip the students with skills and latest updates so that they can work and
contribute to the continuously changing landscape of IT Industry.
PEO3 To inculcate a culture of professionalism, ethical conduct and teamwork, with good
communication skills, to enable the students to be successful in their careers and to
launch start-ups in their chosen field.
PSO1 The ability to understand, analyse and develop programming skills with the latest
tools and technologies in computing, and to apply standard practices and strategies in
software development to deliver quality products and/or to pursue entrepreneurship.
PSO2 To induce a continuous learning aptitude and an inclination for lifelong learning, and
the intent to act as good citizens, by inculcating moral values and ethics.
Program Outcomes
Engineering Graduates will be able to:
GENERAL LABORATORY INSTRUCTIONS
1. Students are advised to come to the laboratory at least 5 minutes before the starting
time; those who arrive more than 5 minutes late will not be allowed into the lab.
2. Plan your task properly well before the commencement of the session and come
prepared to the lab with the synopsis / program / experiment details.
3. Students should enter the laboratory with:
Laboratory observation notes with all the details (Problem statement, Aim,
Algorithm, Procedure, Program, Expected Output, etc.) filled in for the lab
session.
Laboratory Record updated up to the last session's experiments, and any other
materials needed in the lab.
Proper dress code and Identity card.
4. Sign in the laboratory login register, write the TIME-IN, and occupy the
computer system allotted to you by the faculty.
5. Execute your task in the laboratory, record the results / output in the lab
observation notebook, and get it certified by the concerned faculty.
6. All students should be polite and cooperative with the laboratory staff, and must
maintain discipline and decency in the laboratory.
7. Computer labs are established with sophisticated and high end branded
systems, which should be utilized properly.
8. Students / Faculty must keep their mobile phones in SWITCHED OFF mode
during the lab sessions. Misuse of the equipment or misbehaviour with the staff
or systems will attract severe punishment.
9. Students must take the permission of the faculty in case of any urgency to go
out; anybody found loitering outside the lab / class without permission during
working hours will be dealt with seriously and punished appropriately.
10. Students should LOG OFF / SHUT DOWN the computer system before leaving
the lab after completing the task (experiment) in all aspects, and must ensure
that the system / seat is left in proper condition.
Sub Code: BCSP 503
COURSE OUTCOMES                                                              Bloom's Level
CO2   Compare the performance of various sorting and searching techniques.  K2, K4
CO-PO Matrix
Course
PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
Outcome
CO1 1
CO2 2 2 2
CO3 2 2 2
CO4 2 2 2
CO5 2 2 2 2
Avg
CO1 1
CO2 2
CO3 2
CO4 2
CO5 2
Avg
Study and Evaluation Scheme
Course Code: BCSP 503
Course Name: Design and Analysis of Algorithm Lab (50 Marks)
Teaching Scheme:   Theory --   Practical 02   Tutorial --
Credits Assigned:  Theory --   Practical 01   Tutorial --   Total 01
LIST OF PROGRAMS
Experiment No. 1
Begin
   for i = 0 to (n - 1) by 1 do
      if (a[i] = item) then
         set loc = i
         Exit
      endif
   endfor
   set loc = -1
End
Program in C:
#include <stdio.h>
int main()
{
   int array[100], search, c, n, loc = -1;
   scanf("%d", &n);                 /* number of elements */
   for (c = 0; c < n; c++)          /* read the array */
      scanf("%d", &array[c]);
   scanf("%d", &search);            /* value to look for */
   for (c = 0; c < n; c++)          /* scan from the first element */
      if (array[c] == search) { loc = c; break; }
   if (loc != -1)
      printf("%d is present at location %d.\n", search, loc + 1);
   else
      printf("%d is not present in the array.\n", search);
   return 0;
}
Average Case-
When the element to be searched is somewhere in the middle of the array, the linear search
algorithm performs about n/2 comparisons, which is O(n).
Worst Case-
In the worst possible case,
The element being searched may be present at the last position or not present in the
array at all.
In the former case, the search terminates in success with n comparisons.
In the latter case, the search terminates in failure with n comparisons.
Thus, in the worst case, the linear search algorithm takes O(n) operations.
Thus, we have-
Time Complexity of the Linear Search Algorithm is O(n).
Here, n is the number of elements in the linear array.
Applications:
Linear search can be applied to both single-dimensional and multi-dimensional arrays. Linear
search is easy to implement and effective when the array contains only a few elements.
Linear search is also efficient when a single lookup is performed in an unordered list.
Experiment No. 2
Binary Search-
Binary Search is one of the fastest searching algorithms.
It is used for finding the location of an element in a sorted linear array.
It works on the principle of the divide-and-conquer technique.
Consider-
There is a sorted linear array 'a' of size 'n'.
The binary search algorithm is used to search for an element 'item' in this array.
If the search ends in success, it sets loc to the index of the element; otherwise it sets loc to
-1.
Variables beg and end keep track of the indexes of the first and last elements of the
array or sub-array in which the element is being searched at that instant.
Variable mid keeps track of the index of the middle element of that array or sub-array
in which the element is being searched at that instant.
Program in C:
#include <stdio.h>
int main()
{
   int c, first, last, middle, n, search, array[100];
   scanf("%d", &n);                 /* number of elements */
   for (c = 0; c < n; c++)          /* elements must be entered in ascending order */
      scanf("%d", &array[c]);
   scanf("%d", &search);            /* value to find */
   first = 0; last = n - 1;
   while (first <= last) {          /* repeatedly halve the search range */
      middle = (first + last) / 2;
      if (array[middle] == search) { printf("%d found at location %d.\n", search, middle + 1); break; }
      else if (array[middle] < search) first = middle + 1;
      else last = middle - 1;
   }
   if (first > last) printf("%d is not present in the list.\n", search);
   return 0;
}
Complexity
Best case time complexity: O(1)
Worst case time complexity: O(log n)
Average case time complexity: O(log n)
Applications:
1. Find the frequency of a given target value in an array of integers
2. Find the peak of an array which increases and then decreases
3. A sorted array is rotated n times. Search for a target value in the array.
4. Dictionary
5. Debugging a linear piece of code
Experiment No. 3
Pseudo-code:
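A standard insertion-sort sketch, using the same names (arr, n, key) as the C program that
follows and sorting in ascending order:
for i = 1 to n - 1 do
   key = arr[i]
   j = i - 1
   while j >= 0 and arr[j] > key do
      arr[j + 1] = arr[j]      // shift larger elements one position to the right
      j = j - 1
   endwhile
   arr[j + 1] = key            // insert key into its correct position
endfor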
Program in C:
#include<stdio.h>
#include<conio.h>
void main()
{
int i, j,n,key,arr[20];
printf("Enter number of elements to use:");
scanf("%d",&n);
printf("Enter %d elements: ",n);
for(i=0;i<n;i++)
scanf("%d",&arr[i]);
/* insertion sort: insert arr[i] into the already-sorted part arr[0..i-1] */
for(i=1;i<n;i++)
{
key=arr[i];
for(j=i-1; j>=0 && arr[j]>key; j--)
arr[j+1]=arr[j];
arr[j+1]=key;
}
printf("Sorted elements: ");
for(i=0;i<n;i++)
printf(" %d",arr[i]);
getch();
}
Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for an already sorted array, because only the outer loop runs n times and the
inner loop never has to shift any element.
o Average Case Complexity: The average-case time complexity for the insertion sort
algorithm is O(n2), which occurs when the existing elements are in jumbled order,
i.e., neither in ascending order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which
occurs when the array is ordered opposite to the required order (e.g., a descending
array sorted into ascending order). In this case every element is compared with all
the elements before it, so about n-1 comparisons are made for the nth element.
Applications:
Since the time complexity of insertion sort can go up to O(N2), it is only useful
when the number of elements to sort in an array is small.
Insertion sort is an in-place algorithm, meaning it requires no extra space.
Maintains relative order of the input data in case of two equal values (stable).
Experiment No. 4
Pseudo-code:
procedure bubbleSort(list)
loop = list.count;
for i = 0 to loop - 2 do
for j = 0 to loop - 2 - i do
if list[j] > list[j+1] then
swap(list[j], list[j+1])
end if
end for
end for
end procedure
Program in C:
#include <stdio.h>
int main()
{
int array[100], n, c, d, swap;
scanf("%d", &array[c]);
return 0;
}
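The listing above stops before the sorting loops, so a minimal complete bubble sort in C is
sketched below for reference; it reuses the variable names array, n, c, d and swap from the
listing and sorts in ascending order:

#include <stdio.h>

int main()
{
    int array[100], n, c, d, swap;

    printf("Enter number of elements\n");
    scanf("%d", &n);
    printf("Enter %d integers\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    /* after every pass the largest remaining element bubbles to the end,
       so the inner loop can shrink by one each time */
    for (c = 0; c < n - 1; c++)
    {
        for (d = 0; d < n - c - 1; d++)
        {
            if (array[d] > array[d + 1])
            {
                swap = array[d];
                array[d] = array[d + 1];
                array[d + 1] = swap;
            }
        }
    }

    printf("Sorted list in ascending order:\n");
    for (c = 0; c < n; c++)
        printf("%d\n", array[c]);

    return 0;
}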
Time Complexity
Worst Case
In the worst-case scenario, the outer loop runs O(n) times and each pass makes O(n)
comparisons. As a result, the worst-case time complexity of bubble sort is O(n × n) = O(n2).
Best Case
In the best-case scenario, the array is already sorted, but just in case, bubble sort performs
O(n) comparisons.As a result, the time complexity of bubble sort in the best-case scenario
is O(n).
Average Case
Bubble sort may require (n/2) passes and O(n) comparisons for each pass in the average
case. As a result, the average-case time complexity of bubble sort is O(n/2 × n) = O(n2).
Applications:
Bubble sort is (provably) the fastest sort available under a very specific circumstance. It
originally became well known primarily because it was one of the first algorithms (of any
kind) that was rigorously analyzed, and the proof was found that it was optimal under its
limited circumstance.
Consider a file stored on a tape drive, and so little random access memory (or such large
keys) that you can only load two records into memory at any given time. Rewinding the
tape is slow enough that doing random access within the file is generally impractical -- if
possible, you want to process records sequentially, no more than two at a time.
Back when tape drives were common, and machines with only a few thousand
words or bytes of RAM (of whatever sort) were common, that was sufficiently realistic to
be worth studying. That circumstance is now rare, so studying bubble sort makes little
sense at all -- but even worse, the circumstance when it's optimal isn't taught anyway, so
even when/if the right situation arose, almost nobody would realize it.
Experiment No. 5
Pseudo-code:
procedure Selection_sort(int Arr):
for i = 0 to length(Arr) - 1:
Minimum_element = Arr[i]
for each unsorted element Arr[j], j > i:
if Arr[j] < Minimum_element
Minimum_element = Arr[j]
swap (Minimum_element, first unsorted position)
end Selection_sort
Program in C:
#include <stdio.h>
int main()
{
int array[100], n, c, d, position, swap;
printf("Enter number of elements\n");
scanf("%d", &n);
printf("Enter %d integers\n", n);
for ( c = 0 ; c < n ; c++ )
scanf("%d", &array[c]);
for ( c = 0 ; c < ( n - 1 ) ; c++ )
{
position = c;
for ( d = c + 1 ; d < n ; d++ )
{
if ( array[position] > array[d] )
position = d;
}
if ( position != c )
{
swap = array[c];
array[c] = array[position];
array[position] = swap;
}
}
printf("Sorted list in ascending order:\n");
for ( c = 0 ; c < n ; c++ )
printf("%d\n", array[c]);
return 0;
}
Time Complexities:
o Best Case Complexity: The selection sort algorithm has a best-case time complexity
of O(n2) for the already sorted array.
o Average Case Complexity: The average-case time complexity for the selection sort
algorithm is O(n2), which occurs when the existing elements are in jumbled order, i.e., neither
in the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which
occurs when we sort the descending order of an array into the ascending order.
Application:
Selection sort almost always outperforms bubble sort and gnome sort.
It can be useful when writing to memory is a costly operation.
Selection sort is preferable to insertion sort in terms of the number of
writes (Θ(n) swaps versus O(n^2) swaps).
However, it almost always makes far more writes than cycle sort, as cycle sort
is theoretically optimal in the number of writes.
This can be important if writes are significantly more expensive than reads, such as
with EEPROM or Flash memory, where every write lessens the lifespan of the
memory.
Experiment No. 6
In heap sort, the given array is first converted into a max heap; then the root (largest)
element of the heap gets deleted and stored into the sorted portion of the array, and the heap
will again be heapified. This is repeated until the whole array is sorted.
Pseudo-code:
Heapify(A as array, n as int, i as int)
{
max = i
leftchild = 2i + 1
rightchild = 2i + 2
if (leftchild < n) and (A[leftchild] > A[max])
max = leftchild
if (rightchild < n) and (A[rightchild] > A[max])
max = rightchild
if (max != i)
swap(A[i], A[max])
Heapify(A, n, max)
}
Heapsort(A as array)
{
n = length(A)
// build a max heap (0-based indexing, as in the program below)
for i = n/2 - 1 downto 0
Heapify(A, n, i)
// repeatedly move the current maximum to the end of the unsorted part
for i = n - 1 downto 1
exchange A[0] with A[i]
Heapify(A, i, 0)
}
Program in C:
#include<stdio.h>
#include<conio.h>
int temp;
void heapify(int arr[], int size, int i)
{
int largest = i;
int left = 2*i + 1;
int right = 2*i + 2;
if (left < size && arr[left] >arr[largest])
largest = left;
if (right < size && arr[right] > arr[largest])
largest = right;
if (largest != i)
{
temp = arr[i];
arr[i]= arr[largest];
arr[largest] = temp;
heapify(arr, size, largest);
}
}
void heapSort(int arr[], int size)
{
int i;
for (i = size / 2 - 1; i >= 0; i--)
heapify(arr, size, i);
for (i=size-1; i>=0; i--)
{
temp = arr[0];
arr[0]= arr[i];
arr[i] = temp;
heapify(arr, i, 0);
}
}
void main()
{
clrscr();
int arr[] = {1, 10, 2, 3, 4, 1, 2, 100,23, 2};
int i;
int size = sizeof(arr)/sizeof(arr[0]);
heapSort(arr, size);
printf("Sorted Elements:");
for (i=0; i<size; ++i)
printf(" %d",arr[i]);
getch();
}
Applications:
Implementation of priority queues
Security systems
Embedded systems (for example, Linux Kernel)
Experiment No. 7
The Merge Sort algorithm is a sorting algorithm that is considered as an example of the
divide and conquer strategy. So, in this algorithm, the array is initially divided into two
equal halves and then they are combined in a sorted manner. We can think of it as a
recursive algorithm that continuously splits the array in half until it cannot be further
divided. This means that if the array becomes empty or has only one element left, the
dividing will stop, i.e. it is the base case to stop the recursion. If the array has multiple
elements, we split the array into halves and recursively invoke the merge sort on each of
the halves. Finally, when both the halves are sorted, the merge operation is applied. Merge
operation is the process of taking two smaller sorted arrays and combining them to
eventually make a larger one.
Pseudo code:
mergesort(int a[], int low, int high)
{
int mid;
if(low<high)
{
mid=(low+high)/2;
mergesort(a,low,mid);
mergesort(a,mid+1,high);
merge(a,low,high,mid);
}
return(0);
}
merge(int a[], int low, int high, int mid)
{
int i, j, k, c[50];
i=low;
j=mid+1;
k=low;
while(i<=mid && j<=high)
{
if(a[i]<a[j])
c[k++]=a[i++];
else
c[k++]=a[j++];
}
while(i<=mid)
c[k++]=a[i++];
while(j<=high)
c[k++]=a[j++];
// copy the merged result back into a[]
for(i=low;i<=high;i++)
a[i]=c[i];
}
Program in C:
#include<stdio.h>
#include<conio.h>
void mergesort(int a[],int i,int j);
void merge(int a[],int i1,int j1,int i2,int j2);
void main()
{
int a[30],n,i;
printf("Enter no of elements:");
scanf("%d",&n);
printf("Enter array elements:");
for(i=0;i<n;i++)
scanf("%d",&a[i]);
mergesort(a,0,n-1);
printf("\nSorted array is :");
for(i=0;i<n;i++)
printf("%d ",a[i]);
getch();
}
void mergesort(int a[],int i,int j)
{
int mid;
if(i<j)
{
mid=(i+j)/2;
//left recursion
mergesort(a,i,mid);
//right recursion
mergesort(a,mid+1,j);
merge(a,i,mid,mid+1,j);
}
}
void merge(int a[],int i1,int j1,int i2,int j2)
{
int temp[50];    //array used for merging
int i,j,k;
i=i1;    //beginning of the first list
j=i2;    //beginning of the second list
k=0;
while(i<=j1 && j<=j2)    //while elements remain in both lists
{
if(a[i]<a[j])
temp[k++]=a[i++];
else
temp[k++]=a[j++];
}
while(i<=j1)    //copy remaining elements of the first list
temp[k++]=a[i++];
while(j<=j2)    //copy remaining elements of the second list
temp[k++]=a[j++];
//transfer elements from temp[] back to a[]
for(i=i1,j=0;i<=j2;i++,j++)
a[i]=temp[j];
}
Experiment No. 8
Pseudo code:
Quicksort(A,p,r) {
if (p < r) {
q <- Partition(A,p,r)
Quicksort(A,p,q)
Quicksort(A,q+1,r)
}
}
Partition(A,p,r)
x <- A[p]
i <- p-1
j <- r+1
while (True) {
repeat
j <- j-1
until (A[j] <= x)
repeat
i <- i+1
until (A[i] >= x)
if (i < j)
exchange A[i] with A[j]
else
return(j)
}
}
Program in C:
#include<stdio.h>
void swap(int *first, int *second)   /* swaps the values pointed to by first and second */
{
int temp = *first;
*first = *second;
*second = temp;
}
printf("\n");
printf("After Sorting the array: ");
//Indexing starts from 0(zero) in array (that is why length-1)
quick_sort(array, 0, length-1);
print_Array(array, length);
return 0;
}
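For completeness, a self-contained quicksort sketch in C is given below. It keeps the helper
names swap, quick_sort and print_Array used by the fragments above, uses the first element
as the pivot with a Hoare-style partition (matching the pseudo code), and shows main with a
fixed sample array purely for illustration:

#include <stdio.h>

/* swap two integers through pointers */
void swap(int *first, int *second)
{
    int temp = *first;
    *first = *second;
    *second = temp;
}

/* Hoare-style partition: returns an index j such that every element in
   array[low..j] is <= every element in array[j+1..high] */
int partition(int array[], int low, int high)
{
    int pivot = array[low];
    int i = low - 1, j = high + 1;
    while (1)
    {
        do { i++; } while (array[i] < pivot);
        do { j--; } while (array[j] > pivot);
        if (i >= j)
            return j;
        swap(&array[i], &array[j]);
    }
}

void quick_sort(int array[], int low, int high)
{
    if (low < high)
    {
        int p = partition(array, low, high);
        quick_sort(array, low, p);
        quick_sort(array, p + 1, high);
    }
}

void print_Array(int array[], int length)
{
    int i;
    for (i = 0; i < length; i++)
        printf("%d ", array[i]);
    printf("\n");
}

int main()
{
    int array[] = {9, 4, 7, 3, 10, 1, 2};
    int length = sizeof(array) / sizeof(array[0]);
    printf("Before Sorting the array: ");
    print_Array(array, length);
    quick_sort(array, 0, length - 1);
    printf("After Sorting the array: ");
    print_Array(array, length);
    return 0;
}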
Applications:
Commercial applications use Quicksort - it generally runs fast and needs no additional
memory, which compensates for the rare occasions when it runs in O(N2). However, never use
it in applications which require a guaranteed response time:
1. Life-critical (medical monitoring, life support in aircraft and spacecraft).
2. Mission-critical (monitoring and control in industrial and research plants handling
dangerous materials, control for aircraft, defence, etc.), unless you assume the worst-case
response time.
Experiment No. 9
Pseudo code:
CountingSort(A)
//A[]-- Initial Array to Sort
//Complexity: O(k)
for i = 0 to k do
c[i] = 0
//Storing Count of each element
//Complexity: O(n)
for j = 0 to n-1 do
c[A[j]] = c[A[j]] + 1
// Change C[i] such that it contains the actual
// position of these elements in the output array
//Complexity: O(k)
for i = 1 to k do
c[i] = c[i] + c[i-1]
//Build Output array from C[i]
//Complexity: O(n)
for j = n-1 downto 0 do
B[ c[A[j]]-1 ] = A[j]
c[A[j]] = c[A[j]] - 1
end func
Program in C:
#include <stdio.h>
/* Counting sort function */
void counting_sort(int A[], int k, int n)
{
int i, j;
int B[n], C[k+1];
//Initializing counting array C[i] to 0
for (i=0; i<=k; i++)
C[i] = 0;
//Store count of each element in array C
for (j=0; j<n; j++)
C[A[j]] = C[A[j]] + 1;
/* Change C[i] such that it contains actual
position of these elements in output array*/
for (i=1; i<k+1; i++)
C[i] = C[i] + C[i-1];
//Creating Output array from C[i]
//and decrementing value of C[i].
for (j=n-1; j>=0; j--)
{
B[C[A[j]]-1] = A[j];
C[A[j]] = C[A[j]] - 1;
}
//Printing sorted array
printf("The Sorted array is : ");
for (i=0; i<n; i++)
printf("%d ", B[i]);
}
/* The main() begins */
int main()
{
int n, max = 0,i;
printf("Enter the number of input : ");
scanf("%d", &n);
int A[n];
printf("\nEnter the elements to be sorted :\n");
/*Storing element in a array.
And finding max of elements to set range
of counting array C[]*/
for (i=0; i<n; i++)
{
scanf("%d", &A[i]);
if (A[i] > max) {
max= A[i];
}
}
//calling counting-sort function
counting_sort(A, max, n);
printf("\n");
return 0;
}
Time Complexity:
Worst Case Time complexity: O (n+k)
Average Case Time complexity: O(n+k)
Best Case Time complexity: O(n+k)
Applications:
It is a linear-time sorting algorithm which works faster by not making comparisons. It
assumes that the numbers to be sorted are in the range 0 to k, where k is small. The basic idea
is to determine the "rank" of each number in the final sorted array.
Experiment No. 10
BFS (G, s)                  //where G is the graph and s is the source node
let Q be a queue
Q.enqueue( s )              //insert the source node into the queue
mark s as visited
while ( Q is not empty)
v = Q.dequeue( )
for all neighbors w of v in Graph G
if w is not visited
Q.enqueue( w )
mark w as visited
Program in C:
#include<stdio.h>
#include<conio.h>
int a[20][20], q[20], visited[20], n, i, j, f = 0, r = -1;
void bfs(int v) {
for(i = 1; i <= n; i++)
if(a[v][i] && !visited[i])
q[++r] = i;
if(f <= r) {
visited[q[f]] = 1;
bfs(q[f++]);
}
}
void main() {
clrscr();
int v;
printf("Enter the number of vertices: ");
scanf("%d",&n);
for(i=1; i <= n; i++) {
q[i] = 0;
visited[i] = 0;
}
printf("\nEnter graph data in matrix form:\n");
for(i=1; i<=n; i++) {
for(j=1;j<=n;j++) {
scanf("%d", &a[i][j]);
}
}
printf("Enter the starting vertex: ");
scanf("%d", &v);
bfs(v);
printf("\nThe node which are reachable are:");
for(i=1; i <= n; i++) {
if(visited[i])
printf(" %d", i);
else {
printf("\nBFS is not possible. All nodes are not reachable!");
break;
}
}
getch();
}
Time Complexity
O(V+E) where V is vertices and E is edges.
Applications:
Breadth-first Search is a simple graph traversal method that has a surprising range of
applications. Here are a few interesting ways in which Breadth-First Search is being used:
Crawlers in Search Engines: Breadth-First Search is one of the main algorithms used for
indexing web pages. The algorithm starts traversing from the source page and follows all
the links associated with the page. Here each web page will be considered as a node in a
graph.
GPS Navigation systems: Breadth-First Search is one of the best algorithms used to find
neighboring locations by using the GPS system.
Depth-first search (DFS) is an algorithm for traversing or searching a tree, tree structure,
or graph. One starts at the root (selecting some node as the root in the graph case) and
explores as far as possible along each branch before backtracking.
Pseudo code:
DFS(G,v) ( v is the vertex where the search starts )
Stack S := {}; ( start with an empty stack )
for each vertex u, set visited[u] := false;
push S, v;
while (S is not empty) do
u := pop S;
if (not visited[u]) then
visited[u] := true;
for each unvisited neighbour w of u
push S, w;
end if
end while
END DFS()
Program in C:
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
typedef struct node {
int value;
struct node *right;
struct node *left;
} mynode;
mynode *root;
void add_node(int value);
void levelOrderTraversal(mynode *root);
int main(int argc, char* argv[]) {
root = NULL;
add_node(5);
add_node(1);
add_node(-20);
add_node(100);
add_node(23);
add_node(67);
add_node(13);
printf("\n\n\nLEVEL ORDER TRAVERSAL\n\n");
levelOrderTraversal(root);
getch();
}
// Function to add a new node...
void add_node(int value) {
mynode *prev, *cur, *temp;
temp = malloc(sizeof(mynode));
temp->value = value;
temp->right = NULL;
temp->left = NULL;
if(root == NULL) {
printf("\nCreating the root..\n");
root = temp;
return;
}
prev = NULL;
cur = root;
while(cur != NULL) {
prev = cur;
//cur = (value < cur->value) ? cur->left:cur->right;
if(value < cur->value) {
cur = cur->left;
} else {
cur = cur->right;
}
}
if(value < prev->value) {
prev->left = temp;
} else {
prev->right = temp;
}
}
// Level order traversal..
void levelOrderTraversal(mynode *root) {
mynode *queue[100] = {(mynode *)0}; // Important to initialize!
int size = 0;
int queue_pointer = 0;
while(root) {
printf("[%d] ", root->value);
if(root->left) {
queue[size++] = root->left;
}
if(root->right) {
queue[size++] = root->right;
}
root = queue[queue_pointer++];
}
}
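The listing above performs a level-order traversal of a binary search tree. Since this part of
the experiment is about depth-first search, a minimal recursive DFS over an adjacency
matrix, in the same style as the BFS program earlier in this experiment, is sketched below
(the array names adj and visited are chosen here for illustration):

#include <stdio.h>

int adj[20][20], visited[20], n;   /* adjacency matrix, visited flags, vertex count */

/* recursive depth-first search from vertex v (vertices numbered 1..n) */
void dfs(int v) {
    int i;
    visited[v] = 1;
    printf(" %d", v);
    for (i = 1; i <= n; i++)
        if (adj[v][i] && !visited[i])
            dfs(i);                /* go as deep as possible before backtracking */
}

int main() {
    int i, j, start;
    printf("Enter the number of vertices: ");
    scanf("%d", &n);
    printf("Enter graph data in matrix form:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &adj[i][j]);
    printf("Enter the starting vertex: ");
    scanf("%d", &start);
    printf("DFS traversal:");
    dfs(start);
    printf("\n");
    return 0;
}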
Time Complexity:
O(V + E), where V is the number of vertices and E is the number of edges.
Applications:
Algorithms that use depth-first search as a building block include:
1. Finding connected components.
2. Topological sorting.
3. Finding 2-(edge or vertex)-connected components.
4. Finding 3-(edge or vertex)-connected components.
5. Finding the bridges of a graph.
6. Finding strongly connected components.
7. Planarity Testing[4][5]
8. Solving puzzles with only one solution, such as mazes. (DFS can be adapted to find
all solutions to a maze by only including nodes on the current path in the visited set.)
9. Maze generation may use a randomized depth-first search.
Finding biconnectivity in graphs
Experiment No. 11
Pseudo code:
KRUSKAL(G):
A=∅
For each vertex v ∈ G.V:
MAKE-SET(v)
For each edge (u, v) ∈ G.E ordered by increasing order by weight(u, v):
if FIND-SET(u) ≠ FIND-SET(v):
A = A ∪ {(u, v)}
UNION(u, v)
return A
Program in C:
// Kruskal's algorithm in C
#include <stdio.h>
#define MAX 30
edge_list elist;
int Graph[MAX][MAX], n;
edge_list spanlist;
void kruskalAlgo();
int find(int belongs[], int vertexno);
void applyUnion(int belongs[], int c1, int c2);
void sort();
void print();
}
sort();
spanlist.n = 0;
if (cno1 != cno2) {
spanlist.data[spanlist.n] = elist.data[i];
spanlist.n = spanlist.n + 1;
applyUnion(belongs, cno1, cno2);
}
}
}
// Sorting algo
void sort() {
int i, j;
edge temp;
for (i = 0; i < spanlist.n; i++) {
printf("\n%d - %d : %d", spanlist.data[i].u, spanlist.data[i].v, spanlist.data[i].w);
cost = cost + spanlist.data[i].w;
}
int main() {
int i, j, total_cost;
n = 6;
Graph[0][0] = 0;
Graph[0][1] = 4;
Graph[0][2] = 4;
Graph[0][3] = 0;
Graph[0][4] = 0;
Graph[0][5] = 0;
Graph[0][6] = 0;
Graph[1][0] = 4;
Graph[1][1] = 0;
Graph[1][2] = 2;
Graph[1][3] = 0;
Graph[1][4] = 0;
Graph[1][5] = 0;
Graph[1][6] = 0;
Graph[2][0] = 4;
Graph[2][1] = 2;
Graph[2][2] = 0;
Graph[2][3] = 3;
Graph[2][4] = 4;
Graph[2][5] = 0;
Graph[2][6] = 0;
Graph[3][0] = 0;
Graph[3][1] = 0;
Graph[3][2] = 3;
Graph[3][3] = 0;
Graph[3][4] = 3;
Graph[3][5] = 0;
Graph[3][6] = 0;
Graph[4][0] = 0;
Graph[4][1] = 0;
Graph[4][2] = 4;
Graph[4][3] = 3;
Graph[4][4] = 0;
Graph[4][5] = 0;
Graph[4][6] = 0;
Graph[5][0] = 0;
Graph[5][1] = 0;
Graph[5][2] = 2;
Graph[5][3] = 0;
Graph[5][4] = 3;
Graph[5][5] = 0;
Graph[5][6] = 0;
kruskalAlgo();
print();
}
Applications:
Experiment No. 12
Bellman Ford algorithm helps us find the shortest path from a vertex to all other vertices
of a weighted graph.
It is similar to Dijkstra's algorithm but it can work with graphs in which edges can have
negative weights.
Pseudo Code:
function bellmanFord(G, S)
for each vertex V in G
distance[V] <- infinite
previous[V] <- NULL
distance[S] <- 0

for each vertex V in G
for each edge (U, V) in G
tempDistance <- distance[U] + edge_weight(U, V)
if tempDistance < distance[V]
distance[V] <- tempDistance
previous[V] <- U

for each edge (U, V) in G
if distance[U] + edge_weight(U, V) < distance[V]
Error: Negative Cycle Exists

return distance[], previous[]
Program in C
#include <stdio.h>
#include <stdlib.h>
int main(void) {
//create graph
struct Graph *g = (struct Graph *)malloc(sizeof(struct Graph));
g->V = 4; //total vertices
g->E = 5; //total edges
//edge 0 --> 1
g->edge[0].u = 0;
g->edge[0].v = 1;
g->edge[0].w = 5;
//edge 0 --> 2
g->edge[1].u = 0;
g->edge[1].v = 2;
g->edge[1].w = 4;
//edge 1 --> 3
g->edge[2].u = 1;
g->edge[2].v = 3;
g->edge[2].w = 3;
//edge 2 --> 1
g->edge[3].u = 2;
g->edge[3].v = 1;
g->edge[3].w = 6;
//edge 3 --> 2
g->edge[4].u = 3;
g->edge[4].v = 2;
g->edge[4].w = 2;
return 0;
}
//distance array
//size equal to the number of vertices of the graph g
int d[tV];
//predecessor array
//size equal to the number of vertices of the graph g
int p[tV];
//step 1: fill the distance array and predecessor array
for (i = 0; i < tV; i++) {
d[i] = INFINITY;
p[i] = 0;
}
}
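A compact, self-contained Bellman-Ford sketch in C is given below. It follows the edge-list
layout suggested by the fragment above (struct Graph with members V, E and edge[]); the
function name bellmanford and the distance array d[] are illustrative choices, and the sample
edges are the same as in the listing:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

struct Edge {
    int u, v, w;               /* edge from u to v with weight w */
};

struct Graph {
    int V, E;                  /* number of vertices and edges */
    struct Edge edge[20];
};

/* relax all edges V-1 times, then check once more for a negative cycle */
void bellmanford(struct Graph *g, int source) {
    int i, j, u, v, w;
    int d[20];                 /* distance array */

    for (i = 0; i < g->V; i++)
        d[i] = INT_MAX;
    d[source] = 0;

    for (i = 1; i <= g->V - 1; i++)
        for (j = 0; j < g->E; j++) {
            u = g->edge[j].u;
            v = g->edge[j].v;
            w = g->edge[j].w;
            if (d[u] != INT_MAX && d[u] + w < d[v])
                d[v] = d[u] + w;
        }

    for (j = 0; j < g->E; j++) {
        u = g->edge[j].u;
        v = g->edge[j].v;
        w = g->edge[j].w;
        if (d[u] != INT_MAX && d[u] + w < d[v]) {
            printf("Negative weight cycle detected!\n");
            return;
        }
    }

    printf("Vertex  Distance from source %d\n", source);
    for (i = 0; i < g->V; i++)
        printf("%d\t%d\n", i, d[i]);
}

int main(void) {
    struct Graph *g = (struct Graph *)malloc(sizeof(struct Graph));
    g->V = 4;
    g->E = 5;
    /* same edges as in the listing above */
    g->edge[0].u = 0; g->edge[0].v = 1; g->edge[0].w = 5;
    g->edge[1].u = 0; g->edge[1].v = 2; g->edge[1].w = 4;
    g->edge[2].u = 1; g->edge[2].v = 3; g->edge[2].w = 3;
    g->edge[3].u = 2; g->edge[3].v = 1; g->edge[3].w = 6;
    g->edge[4].u = 3; g->edge[4].v = 2; g->edge[4].w = 2;
    bellmanford(g, 0);
    free(g);
    return 0;
}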
Time Complexity
Best Case Complexity O(E)
Average Case Complexity O(VE)
Worst Case Complexity O(VE)
Application:
1. For calculating shortest paths in routing algorithms
2. For finding shortest paths in graphs where edge weights may be negative
Experiment No. 13
Using Recursion
1) A function which calls itself until some condition is met is called a recursive function.
2) In this program, we have two recursive functions: one is minimum() and
another one is maximum(). Both of these functions call themselves.
3) The main() function calls minimum() by passing the array, the array size and 1 as arguments.
Then the function minimum()
a) Checks the condition i < n; if it is true,
b) it compares a[min] > a[i]; if this is also true,
c) min is set to i. The function then calls itself with the next index until the whole
array has been examined, and returns min to the main() function.
main() prints the a[min] value of the array.
4) The main() function calls the maximum() function by passing the array, the array size and 1 as
arguments.
Then the function maximum()
a) Checks the condition i < n; if it is true,
b) it compares a[max] < a[i]; if this is true,
c) max is set to i. The function then calls itself with the next index until the whole
array has been examined, and returns max to the main() function.
main() prints the a[max] value of the array.
Pseudocode :
If there is a single element, return it.
Else return the minimum of the following:
a) Last Element
b) Value returned by recursive call
for n-1 elements
Program in C:
#include <stdio.h>
#include <conio.h>
int minimum(int a[],int n,int i)
{
static int min=0;
if(i<n)
{
if(a[min]>a[i])
min=i;
minimum(a,n,i+1);   /* continue with the next element */
}
return min;
}
int maximum(int a[],int n,int i)
{
static int max=0;
if(i<n)
{
if(a[max]<a[i])
max=i;
maximum(a,n,i+1);   /* continue with the next element */
}
return max;
}
int main()
{
int a[1000],i,n;
printf("Enter the size of the array: ");
scanf("%d",&n);
printf("Enter %d elements: ",n);
for(i=0;i<n;i++)
scanf("%d",&a[i]);
printf("Minimum element: %d\n",a[minimum(a,n,1)]);
printf("Maximum element: %d\n",a[maximum(a,n,1)]);
getch();
return 0;
}
Time complexity
O(n)
Experiment No. 14
Time Complexity:
O(nlogn) + O(1) = O(nlogn)
Experiment No. 15
Title: Pseudo code and Program to solve the fractional Knapsack problem.
Algorithm
Sort the given array of items according to their value / weight (V/W) ratio in descending order.
Start adding the item with the maximum V/W ratio.
Add the whole item if its weight is not more than the remaining capacity; otherwise, add only
the fraction of the item that fits into the knapsack.
Stop when all the items have been considered or the total weight becomes equal to the
capacity of the given knapsack.
Program in C:
#include <stdio.h>
void simple_fill() {
int cur_w;
float tot_v;
int i, maxi;
int used[10];
cur_w = W;
while (cur_w > 0) { /* while there's still room*/
/* Find the best object */
maxi = -1;
for (i = 0; i < n; ++i)
if ((used[i] == 0) &&
((maxi == -1) || ((float)v[i]/c[i] > (float)v[maxi]/c[maxi])))
maxi = i;
return 0;
}
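A complete greedy sketch in C is given below. It keeps the names suggested by the fragment
above (simple_fill, cur_w, tot_v, maxi, used[], with W as the capacity, v[] as item values,
c[] as item weights and n as the number of items); the sample data is chosen here only for
illustration:

#include <stdio.h>

/* sample data: capacity W, values v[], weights c[], item count n */
int W = 15;
int v[10] = {60, 100, 120};
int c[10] = {10, 20, 30};
int n = 3;

void simple_fill() {
    int cur_w;                 /* remaining capacity */
    float tot_v;               /* total value collected so far */
    int i, maxi;
    int used[10];

    for (i = 0; i < n; ++i)
        used[i] = 0;

    cur_w = W;
    tot_v = 0;
    while (cur_w > 0) {        /* while there's still room */
        /* find the unused object with the best value/weight ratio */
        maxi = -1;
        for (i = 0; i < n; ++i)
            if ((used[i] == 0) &&
                ((maxi == -1) || ((float)v[i] / c[i] > (float)v[maxi] / c[maxi])))
                maxi = i;
        if (maxi == -1)
            break;             /* no items left */
        used[maxi] = 1;
        if (cur_w >= c[maxi]) {
            /* the whole item fits */
            cur_w -= c[maxi];
            tot_v += v[maxi];
        } else {
            /* only a fraction of the item fits */
            tot_v += v[maxi] * ((float)cur_w / c[maxi]);
            cur_w = 0;
        }
    }
    printf("Maximum value obtainable = %f\n", tot_v);
}

int main() {
    simple_fill();
    return 0;
}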
Time Complexity:
O(N *log N) where N is the size of the array.
Application:
In many cases of resource allocation with constraints, the problem can be modelled in a
similar way to the knapsack problem. The following are some examples:
Finding the least wasteful way to cut raw materials
portfolio optimization
Cutting stock problems
Experiment No. 16
Title: Pseudo code and Program to implement find Minimum Spanning Tree using
Kruskal’s Algorithm
Kruskal's algorithm is a concept introduced in the graph theory of discrete
mathematics. It is used to find a minimum spanning tree of a connected weighted graph.
This algorithm converts the given graph into a forest, considering each node as a separate
tree. These trees are linked to each other only if the edge connecting them has the lowest
available weight and does not generate a cycle in the MST structure.
A minimum spanning tree is a subset of a graph with the same number of vertices as the
graph and edges equal to the number of vertices -1. It also has a minimal cost for the sum
of all edge weights in a spanning tree.
Kruskal’s algorithm sorts all the edges in increasing order of their edge weights and keeps
adding nodes to the tree only if the chosen edge does not form any cycle. Also, it picks the
edge with a minimum cost at first and the edge with a maximum cost at last. Hence, you
can say that the Kruskal algorithm makes a locally optimal choice, intending to find the
global optimal solution. That is why it is called a Greedy Algorithm.
Pseudo code:
KRUSKAL(G):
A=∅
For each vertex v ∈ G.V:
MAKE-SET(v)
For each edge (u, v) ∈ G.E ordered by increasing order by weight(u, v):
if FIND-SET(u) ≠ FIND-SET(v):
A = A ∪ {(u, v)}
UNION(u, v)
return A
Kruskal Algorithm
Step 1: Sort all the edges in increasing order of their edge weights.
Step 2: Pick the smallest edge.
Step 3: Check whether the new edge creates a cycle in the spanning tree formed so far.
Step 4: If it doesn't form a cycle, include the edge in the MST; otherwise, discard it.
Step 5: Repeat from step 2 until it includes |V| - 1 edges in MST.
Program in C:
// Kruskal's algorithm in C
#include <stdio.h>
#define MAX 30
edge_list elist;
int Graph[MAX][MAX], n;
edge_list spanlist;
void kruskalAlgo();
int find(int belongs[], int vertexno);
void applyUnion(int belongs[], int c1, int c2);
void sort();
void print();
sort();
spanlist.n = 0;
if (cno1 != cno2) {
spanlist.data[spanlist.n] = elist.data[i];
spanlist.n = spanlist.n + 1;
applyUnion(belongs, cno1, cno2);
}
}
}
// Sorting algo
void sort() {
int i, j;
edge temp;
printf("\nSpanning tree cost: %d", cost);
}
int main() {
int i, j, total_cost;
n = 6;
Graph[0][0] = 0;
Graph[0][1] = 4;
Graph[0][2] = 4;
Graph[0][3] = 0;
Graph[0][4] = 0;
Graph[0][5] = 0;
Graph[0][6] = 0;
Graph[1][0] = 4;
Graph[1][1] = 0;
Graph[1][2] = 2;
Graph[1][3] = 0;
Graph[1][4] = 0;
Graph[1][5] = 0;
Graph[1][6] = 0;
Graph[2][0] = 4;
Graph[2][1] = 2;
Graph[2][2] = 0;
Graph[2][3] = 3;
Graph[2][4] = 4;
Graph[2][5] = 0;
Graph[2][6] = 0;
Graph[3][0] = 0;
Graph[3][1] = 0;
Graph[3][2] = 3;
Graph[3][3] = 0;
Graph[3][4] = 3;
Graph[3][5] = 0;
Graph[3][6] = 0;
Graph[4][0] = 0;
Graph[4][1] = 0;
Graph[4][2] = 4;
Graph[4][3] = 3;
Graph[4][4] = 0;
Graph[4][5] = 0;
Graph[4][6] = 0;
Graph[5][0] = 0;
Graph[5][1] = 0;
Graph[5][2] = 2;
Graph[5][3] = 0;
Graph[5][4] = 3;
Graph[5][5] = 0;
Graph[5][6] = 0;
kruskalAlgo();
print();
}
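Since the listing above omits the structure definitions, a compact, self-contained Kruskal
sketch in C is given below. It keeps the names used in the listing (Graph, n, elist, spanlist,
kruskalAlgo, find, applyUnion, sort, print); the struct layout and the sample weight matrix in
main are reconstructions for illustration:

#include <stdio.h>
#define MAX 30

typedef struct { int u, v, w; } edge;
typedef struct { edge data[MAX]; int n; } edge_list;

edge_list elist, spanlist;
int Graph[MAX][MAX], n;

/* union-find kept as a plain "component id" array */
int find(int belongs[], int vertexno) { return belongs[vertexno]; }

void applyUnion(int belongs[], int c1, int c2) {
    int i;
    for (i = 0; i < n; i++)
        if (belongs[i] == c2)
            belongs[i] = c1;
}

/* simple bubble sort of the edge list by weight */
void sort() {
    int i, j;
    edge temp;
    for (i = 1; i < elist.n; i++)
        for (j = 0; j < elist.n - 1; j++)
            if (elist.data[j].w > elist.data[j + 1].w) {
                temp = elist.data[j];
                elist.data[j] = elist.data[j + 1];
                elist.data[j + 1] = temp;
            }
}

void kruskalAlgo() {
    int belongs[MAX], i, j, cno1, cno2;

    /* build the edge list from the lower triangle of the adjacency matrix */
    elist.n = 0;
    for (i = 1; i < n; i++)
        for (j = 0; j < i; j++)
            if (Graph[i][j] != 0) {
                elist.data[elist.n].u = i;
                elist.data[elist.n].v = j;
                elist.data[elist.n].w = Graph[i][j];
                elist.n++;
            }

    sort();                          /* edges in increasing order of weight */

    for (i = 0; i < n; i++)
        belongs[i] = i;              /* every vertex starts in its own component */

    spanlist.n = 0;
    for (i = 0; i < elist.n; i++) {
        cno1 = find(belongs, elist.data[i].u);
        cno2 = find(belongs, elist.data[i].v);
        if (cno1 != cno2) {          /* the edge joins two different components */
            spanlist.data[spanlist.n++] = elist.data[i];
            applyUnion(belongs, cno1, cno2);
        }
    }
}

void print() {
    int i, cost = 0;
    for (i = 0; i < spanlist.n; i++) {
        printf("\n%d - %d : %d", spanlist.data[i].u, spanlist.data[i].v, spanlist.data[i].w);
        cost += spanlist.data[i].w;
    }
    printf("\nSpanning tree cost: %d\n", cost);
}

int main() {
    int i, j;
    int w[6][6] = {                  /* sample symmetric weight matrix */
        {0, 4, 4, 0, 0, 0},
        {4, 0, 2, 0, 0, 0},
        {4, 2, 0, 3, 4, 2},
        {0, 0, 3, 0, 3, 0},
        {0, 0, 4, 3, 0, 3},
        {0, 0, 2, 0, 3, 0}};
    n = 6;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            Graph[i][j] = w[i][j];
    kruskalAlgo();
    print();
    return 0;
}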
Time Complexity :
O(E log E)
Application:
Experiment No. 17
Backtracking is finding the solution of a problem whereby the solution depends on the
previous steps taken. For example, in a maze problem, the solution depends on all the
steps you take one-by-one. If any of those steps is wrong, then it will not lead us to the
solution. In a maze problem, we first choose a path and continue moving along it. But once
we understand that the particular path is incorrect, then we just come back and change it.
This is what backtracking basically is.
In backtracking, we first take a step and then we see if this step taken is correct or not i.e.,
whether it will give a correct answer or not. And if it doesn’t, then we just come back and
change our first step. In general, this is accomplished by recursion. Thus, in backtracking,
we first start with a partial sub-solution of the problem (which may or may not lead us to
the solution) and then check if we can proceed further with this sub-solution or not. If not,
then we just come back and change it.
Thus, the general steps of backtracking are:
start with a sub-solution
check if this sub-solution will lead to the solution or not
If not, then come back and change the sub-solution and continue again
N Queen Problem : N Queens Problem is a famous puzzle in which n-queens are to be
placed on an n×n chess board such that no two queens are in the same row, column or
diagonal. The N Queen problem is that of placing N chess queens on an N×N chessboard so
that no two queens attack each other.
Algorithm
1)Start in the leftmost column
2) If all queens are placed
return true
3) Try all rows in the current column.
Do following for every tried row.
a) If the queen can be placed safely in this row
then mark this [row, column] as part of the
solution and recursively check if placing
queen here leads to a solution.
b) If placing the queen in [row, column] leads to
a solution then return true.
c) If placing queen doesn't lead to a solution then
unmark this [row, column] (Backtrack) and go to
step (a) to try other rows.
4) If all rows have been tried and nothing worked,
return false to trigger backtracking.
Program in C:
#include<stdio.h>
#include<math.h>
int board[20],count;
int main(){
int n,i,j;
void queen(int row,int n);
printf(" - N Queens Problem Using Backtracking -");
printf("\n\nEnter number of Queens:");
scanf("%d",&n);
queen(1,n);
return 0;
}
//function for printing the solution
void print(int n)
{
int i,j;
printf("\n\nSolution %d:\n\n",++count);
for(i=1;i<=n;++i)
printf("\t%d",i);
for(i=1;i<=n;++i)
{
printf("\n\n%d",i);
for(j=1;j<=n;++j) //for nxn board
{
if(board[i]==j)
printf("\tQ"); //queen at i,j position
else
printf("\t-"); //empty slot
}
}
}
/* function to check conflicts: returns 1 if there is no conflict for the desired position,
otherwise returns 0 */
int place(int row,int column)
{
int i;
for(i=1;i<=row-1;++i) {
//checking column and diagonal conflicts
if(board[i]==column)
return 0;
else
if(abs(board[i]-column)==abs(i-row))
return 0;
}
return 1; //no conflicts
}
//function to check for proper positioning of queen
void queen(int row,int n){
int column;
for(column=1;column<=n;++column) {
if(place(row,column)) {
board[row]=column; //no conflicts so place queen
if(row==n) //dead end
print(n); //printing the board configuration
else //try queen with next position
queen(row+1,n); } }}
Time complexity:
O(N!): The first queen has N possible placements; the second queen must not be in the same
column as the first, nor on the same diagonal, so the second queen has at most N-1
possibilities, and so on, giving a time complexity of O(N!).
Application:
The N-queen problem is used in many practical solutions like parallel memory storage
schemes, VLSI testing, traffic control and deadlock prevention. This problem is also used
to find solutions to more practical problems that require permutations, like the travelling
salesman problem.
Experiment No. 18
Suppose a salesman has to visit a set of cities, and the cost of travelling from
city i to city j is known. The goal is to find a tour of minimum cost. We assume that every two cities
are connected. Such problems are called the Traveling-Salesman Problem (TSP).
Algorithm:
1.Identify a hub vertex h
2. VH = V - {h}
3. for each i,j != h
4. compute savings(i,j)
5. endfor
6. sortlist = Sort vertex pairs in decreasing order of savings
7. while |VH| > 2
8. try vertex pair (i,j) in sortlist order
9. if (i,j) shortcut does not create a cycle
and degree(v) ≤ 2 for all v
10. add (i,j) segment to partial tour
11. if degree(i) = 2
12. VH = VH - {i}
13. endif
14. if degree(j) = 2
15. VH = VH - {j}
16. endif
17. endif
18. endwhile
19. Stitch together remaining two vertices and hub into final tour
Program in C:
#include <stdio.h>
#include<conio.h>
int ary[10][10],completed[10],n,cost=0;
void takeInput()
{
int i,j;
printf("Enter the number of villages: ");
scanf("%d",&n);
printf("\nEnter the Cost Matrix\n");
for(i=0;i < n;i++)
{
printf("\nEnter Elements of Row: %d\n",i+1);
for( j=0;j < n;j++)
scanf("%d",&ary[i][j]);
completed[i]=0;
}
printf("\n\nThe cost list is:");
for( i=0;i < n;i++)
{
printf("\n");
for(j=0;j < n;j++)
printf("\t%d",ary[i][j]);
}
}
void mincost(int city)
{
int i,ncity;
completed[city]=1;
printf("%d--->",city+1);
ncity=least(city);
if(ncity==999)
{
ncity=0;
printf("%d",ncity+1);
cost+=ary[city][ncity];
return;
}
mincost(ncity);
}
int least(int c)
{
int i,nc=999;
int min=999,kmin;
for(i=0;i < n;i++)
{
if((ary[c][i]!=0)&&(completed[i]==0))
if(ary[c][i]+ary[i][c] < min)
{
min=ary[i][0]+ary[c][i];
kmin=ary[c][i];
nc=i;
}
}
if(min!=999)
cost+=kmin;
return nc;
}
int main()
{
takeInput();
printf("\n\nThe Path is:\n");
mincost(0); //passing 0 because starting vertex
printf("\n\nMinimum cost is %d\n ",cost);
return 0;
}
Time Complexity:
O(n2*2n).
There are at most O(n*2n) subproblems, and each one takes linear time to solve. The total
running time is therefore O(n2*2n). The time complexity is much less than O(n!) but still
exponential.
Applications:
1) Drilling of printed circuit boards: A direct application of the TSP is in the drilling
problem of printed circuit boards (PCBs). To connect a conductor on one layer with a
conductor on another layer, or to position the pins of integrated circuits, holes have to be
drilled through the board.
2) Overhauling gas turbine engines: This application occurs when gas turbine engines of
aircraft have to be overhauled. To guarantee a uniform gas flow through the turbines, there
are nozzle-guide vane assemblies located at each turbine stage. Such an assembly basically
consists of a number of nozzle guide vanes affixed about its circumference. All these vanes
have individual characteristics, and the correct placement of the vanes can result in
substantial benefits (reducing vibration, increasing uniformity of flow, reducing fuel
consumption). The problem of placing the vanes in the best possible way can be modelled as
a TSP with a special objective function.
Experiment No. 19
Algorithm
X and Y be two given sequences
Initialize a table LCS of dimension X.length * Y.length
X.label = X
Y.label = Y
LCS[0][] = 0
LCS[][0] = 0
Start from LCS[1][1]
Compare X[i] and Y[j]
If X[i] = Y[j]
LCS[i][j] = 1 + LCS[i-1, j-1]
Point an arrow to LCS[i][j]
Else
LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
Point an arrow to max(LCS[i-1][j], LCS[i][j-1])
Program in C:
// The longest common subsequence in C
#include <stdio.h>
#include <string.h>
int i, j, m, n, LCS_table[20][20];
char S1[20] = "ACADB", S2[20] = "CBDA", b[20][20];
void lcsAlgo() {
m = strlen(S1);
n = strlen(S2);
for (i = 0; i <= m; i++)
LCS_table[i][0] = 0;
for (i = 0; i <= n; i++)
LCS_table[0][i] = 0;
int i = m, j = n;
while (i > 0 && j > 0) {
if (S1[i - 1] == S2[j - 1]) {
lcsAlgo[index - 1] = S1[i - 1];
i--;
j--;
index--;
}
int main() {
lcsAlgo();
printf("\n");
}
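A complete dynamic-programming sketch of LCS in C is given below. It keeps the names
S1, S2 and LCS_table from the listing above; the buffer lcs[] used to print the recovered
subsequence is introduced here for illustration:

#include <stdio.h>
#include <string.h>

int LCS_table[20][20];
char S1[20] = "ACADB", S2[20] = "CBDA";

void lcsAlgo() {
    int i, j, m, n, index;
    char lcs[20];

    m = strlen(S1);
    n = strlen(S2);

    /* fill the DP table: LCS_table[i][j] = length of LCS of S1[0..i-1] and S2[0..j-1] */
    for (i = 0; i <= m; i++)
        for (j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                LCS_table[i][j] = 0;
            else if (S1[i - 1] == S2[j - 1])
                LCS_table[i][j] = LCS_table[i - 1][j - 1] + 1;
            else
                LCS_table[i][j] = (LCS_table[i - 1][j] > LCS_table[i][j - 1])
                                      ? LCS_table[i - 1][j]
                                      : LCS_table[i][j - 1];
        }

    /* walk back through the table to recover one LCS */
    index = LCS_table[m][n];
    lcs[index] = '\0';
    i = m;
    j = n;
    while (i > 0 && j > 0) {
        if (S1[i - 1] == S2[j - 1]) {
            lcs[index - 1] = S1[i - 1];
            i--;
            j--;
            index--;
        } else if (LCS_table[i - 1][j] > LCS_table[i][j - 1])
            i--;
        else
            j--;
    }

    printf("S1: %s\nS2: %s\n", S1, S2);
    printf("LCS: %s\n", lcs);
}

int main() {
    lcsAlgo();
    return 0;
}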
Time Complexity:
O(m × n) for the dynamic-programming approach used above, where m and n are the lengths
of the two sequences; a naive recursive solution without memoization would take O(2^n).
Applications:
1. In compressing genome resequencing data
2. To authenticate users within their mobile phone through in-air signatures
Experiment No. 20
Algorithm:
1. create a priority queue Q consisting of each unique character.
2. sort them in ascending order of their frequencies.
3. for all the unique characters:
create a newNode
extract minimum value from Q and assign it to leftChild of newNode
extract minimum value from Q and assign it to rightChild of newNode
calculate the sum of these two minimum values and assign it to the value of
newNode
insert this newNode into the tree
4.return rootNode
Program in C:
// Huffman Coding in C
#include <stdio.h>
#include <stdlib.h>
#define MAX_TREE_HT 50
struct MinHNode {
char item;
unsigned freq;
struct MinHNode *left, *right;
};
struct MinHeap {
unsigned size;
unsigned capacity;
struct MinHNode **array;
};
// Create nodes
struct MinHNode *newNode(char item, unsigned freq) {
struct MinHNode *temp = (struct MinHNode *)malloc(sizeof(struct MinHNode));
temp->left = temp->right = NULL;
temp->item = item;
temp->freq = freq;
return temp;
}
minHeap->size = 0;
minHeap->capacity = capacity;
// Function to swap
void swapMinHNode(struct MinHNode **a, struct MinHNode **b) {
struct MinHNode *t = *a;
*a = *b;
*b = t;
}
// Heapify
void minHeapify(struct MinHeap *minHeap, int idx) {
int smallest = idx;
int left = 2 * idx + 1;
int right = 2 * idx + 2;
if (left < minHeap->size && minHeap->array[left]->freq < minHeap->array[smallest]->freq)
smallest = left;
if (right < minHeap->size && minHeap->array[right]->freq < minHeap->array[smallest]->freq)
smallest = right;
if (smallest != idx) {
swapMinHNode(&minHeap->array[smallest], &minHeap->array[idx]);
minHeapify(minHeap, smallest);
}
}
// Check if size is 1
int checkSizeOne(struct MinHeap *minHeap) {
return (minHeap->size == 1);
}
// Extract min
struct MinHNode *extractMin(struct MinHeap *minHeap) {
struct MinHNode *temp = minHeap->array[0];
minHeap->array[0] = minHeap->array[minHeap->size - 1];
--minHeap->size;
minHeapify(minHeap, 0);
return temp;
}
// Insertion function
void insertMinHeap(struct MinHeap *minHeap, struct MinHNode *minHeapNode) {
++minHeap->size;
int i = minHeap->size - 1;
}
minHeap->size = size;
buildMinHeap(minHeap);
return minHeap;
}
while (!checkSizeOne(minHeap)) {
left = extractMin(minHeap);
right = extractMin(minHeap);
top->left = left;
top->right = right;
insertMinHeap(minHeap, top);
}
return extractMin(minHeap);
}
// Wrapper function
void HuffmanCodes(char item[], int freq[], int size) {
struct MinHNode *root = buildHuffmanTree(item, freq, size);
printf("\n");
}
int main() {
char arr[] = {'A', 'B', 'C', 'D'};
int freq[] = {5, 1, 6, 3};
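Because the heap-based listing above is lengthy, a compact alternative sketch is given below:
it builds the Huffman tree by repeatedly picking the two lowest-frequency nodes from a plain
array (a simple O(n2) selection instead of a min-heap) and then prints each character's code;
the helper names pickMin and printCodes are illustrative:

#include <stdio.h>
#include <stdlib.h>

#define MAX_TREE_HT 50

struct MinHNode {
    char item;
    unsigned freq;
    struct MinHNode *left, *right;
};

struct MinHNode *newNode(char item, unsigned freq) {
    struct MinHNode *t = (struct MinHNode *)malloc(sizeof(struct MinHNode));
    t->item = item;
    t->freq = freq;
    t->left = t->right = NULL;
    return t;
}

/* return the index of the unused node with the smallest frequency (skip one index) */
int pickMin(struct MinHNode *nodes[], int n, int skip) {
    int i, best = -1;
    for (i = 0; i < n; i++)
        if (nodes[i] != NULL && i != skip &&
            (best == -1 || nodes[i]->freq < nodes[best]->freq))
            best = i;
    return best;
}

/* print the code of every leaf: left edge = 0, right edge = 1 */
void printCodes(struct MinHNode *root, int code[], int depth) {
    if (root->left) {
        code[depth] = 0;
        printCodes(root->left, code, depth + 1);
    }
    if (root->right) {
        code[depth] = 1;
        printCodes(root->right, code, depth + 1);
    }
    if (!root->left && !root->right) {
        int i;
        printf("%c: ", root->item);
        for (i = 0; i < depth; i++)
            printf("%d", code[i]);
        printf("\n");
    }
}

void HuffmanCodes(char item[], int freq[], int size) {
    struct MinHNode *nodes[50];
    struct MinHNode *top;
    int code[MAX_TREE_HT];
    int i, a, b, remaining = size;

    for (i = 0; i < size; i++)
        nodes[i] = newNode(item[i], freq[i]);

    /* repeatedly merge the two least-frequent nodes into an internal '$' node */
    while (remaining > 1) {
        a = pickMin(nodes, size, -1);
        b = pickMin(nodes, size, a);
        top = newNode('$', nodes[a]->freq + nodes[b]->freq);
        top->left = nodes[a];
        top->right = nodes[b];
        nodes[a] = top;          /* merged node replaces one child ... */
        nodes[b] = NULL;         /* ... and the other slot is freed */
        remaining--;
    }
    printCodes(nodes[pickMin(nodes, size, -1)], code, 0);
}

int main() {
    char arr[] = {'A', 'B', 'C', 'D'};
    int freq[] = {5, 1, 6, 3};
    int size = sizeof(arr) / sizeof(arr[0]);
    HuffmanCodes(arr, freq, size);
    return 0;
}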
Time Complexity:
O(nlog n)
Extracting minimum frequency from the priority queue takes place 2*(n-1) times and its
complexity is O(log n). Thus the overall complexity is O(nlog n).
Applications:
Huffman coding is used in conventional compression formats like GZIP, BZIP2,
PKZIP, etc.
For text and fax transmissions.
Experiment No. 21
Matrix chain multiplication is an optimization problem: given a sequence of matrices, the
goal is to find the most efficient way to multiply them. Since matrix multiplication is
associative, however the product is parenthesized, the result obtained will remain the same.
For example, for four matrices A, B, C, and D, we would have:
((AB)C)D = ((A(BC))D) = (AB)(CD) = A((BC)D) = A(B(CD))
Algorithm:
MATRIX-CHAIN-ORDER (p)
1. n <-length[p]-1
2. for i ← 1 to n
3. do m [i, i] ← 0
4. for l ← 2 to n // l is the chain length
5. do for i ← 1 to n-l + 1
6. do j ← i+ l -1
7. m[i,j] ← ∞
8. for k ← i to j-1
9. do q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
10. If q < m [i,j]
11. then m [i,j] ← q
12. s [i,j] ← k
13. return m and s.
PRINT-OPTIMAL-PARENS (s, i, j)
1. if i=j
2. then print "A"
3. else print "("
4. PRINT-OPTIMAL-PARENS (s, i, s [i, j])
5. PRINT-OPTIMAL-PARENS (s, s [i, j] + 1, j)
6. print ")"
Program in C:
#include <stdio.h>
int MatrixChainMultuplication(int arr[], int n) {
int minMul[n][n];
int j, q;
for (int i = 1; i < n; i++)
minMul[i][i] = 0;
for (int L = 2; L < n; L++) {
for (int i = 1; i < n - L + 1; i++) {
j = i + L - 1;
minMul[i][j] = 99999999;
for (int k = i; k <= j - 1; k++) {
q = minMul[i][k] + minMul[k + 1][j] + arr[i - 1] * arr[k] * arr[j];
if (q < minMul[i][j])
minMul[i][j] = q;
}
}
}
return minMul[1][n - 1];
}
int main(){
int arr[] = {3, 4, 5, 6, 7, 8};
int size = sizeof(arr) / sizeof(arr[0]);
printf("Minimum number of multiplications required for the matrices multiplication is
%d ", MatrixChainMultuplication(arr, size));
getchar();
return 0;
}
Time Complexity:
O (n3)
If there are n matrices, we build a table containing [(n)(n+1)] / 2 cells; in the worst case this
is about n × n = n2 cells, so the table has O(n2) entries to calculate.
For each entry we need to find the minimum number of multiplications over O(n) possible
split points (the worst case occurs for the last cell in the table, Table[1, n]).
Finally, O(n2) × O(n) = O(n3) is the time complexity.
Applications:
Matrix Chain Multiplication is an optimization problem which is widely used
in graph algorithms, signal processing and the networking industry.