
220130107065 – A3

A Laboratory Manual for

Analysis and Design of


Algorithms
(3150703)

B.E. Semester 5
(Computer Engineering)

Directorate of Technical Education, Gandhinagar,


Gujarat

Government Engineering College, Gandhinagar

Certificate
This is to certify that Mr./Ms. PADMANI MIRAL P. Enrollment No.
220130107065 of B.E. Semester 5th Computer Engineering of this Institute
(GTU Code:3150703) has satisfactorily completed the Practical / Tutorial work
for the subject Analysis and Design of Algorithms(3150703) for the academic
year 2022-23.

Place: __________
Date: __________

Name and Sign of Faculty member

Head of the Department

Analysis and Design of Algorithms (3150703)

Preface

The main purpose of any laboratory/practical/field work is to enhance the required skills and to create in students the ability to solve real-world problems by developing the relevant competencies in the psychomotor domain. Keeping this in view, GTU has designed a competency-focused, outcome-based curriculum for its engineering degree programs in which sufficient weightage is given to practical work. This reflects the importance of skill enhancement amongst students, and it encourages students, instructors and faculty members to utilize every second of the time allotted for practicals to achieve the relevant outcomes by performing experiments, rather than conducting merely study-type experiments. For the effective implementation of a competency-focused, outcome-based curriculum, every practical must be carefully designed to serve as a tool to develop and enhance, in every student, the relevant competencies required by industry. These psychomotor skills are very difficult to develop through the traditional chalk-and-board content delivery method in the classroom. Accordingly, this lab manual is designed to focus on industry-defined, relevant outcomes rather than the old practice of conducting practicals merely to prove concepts and theory.

By using this lab manual, students can go through the relevant theory and procedure in advance of the actual performance, which creates interest and gives them a basic idea before the practical. This in turn enhances the pre-determined outcomes amongst students. Each experiment in this manual begins with the competency, industry relevant skills, course outcomes, and practical outcomes (objectives). The students will also learn the safety measures and necessary precautions to be taken while performing the practical.

This manual also provides guidelines to faculty members to facilitate student-centric lab activities through each experiment by arranging and managing the necessary resources, so that the students follow the procedures with the required safety and necessary precautions to achieve the outcomes. It also gives an idea, through rubrics, of how students will be assessed.

Algorithms are an integral part of computer science and play a vital role in solving complex problems efficiently. The goal of this subject is to equip students with the knowledge and skills required to design and analyze algorithms for various applications. Designing an algorithm is important before implementing any program or solving any problem. Analysis and design of algorithms is essential for efficient problem-solving, optimizing resource utilization, developing new technologies, and gaining a competitive advantage. This lab manual is designed to help you learn algorithms by doing. Each experiment is structured to provide you with step-by-step instructions on how to analyze and design a particular algorithm for a specific problem. You will learn how to analyze various algorithms and decide which is the most efficient in terms of time complexity. By the end of this lab, you will have a solid understanding of algorithm design and analysis.

Utmost care has been taken while preparing this lab manual; however, there is always room for improvement. We therefore welcome constructive suggestions for improvement and reports of any errors.

Practical – Course Outcome matrix

Course Outcomes (Cos):

1. Analyze the asymptotic performance of algorithms.


2. Derive and solve recurrences describing the performance of divide-and-conquer algorithms.
3. Find optimal solution by applying various methods.
4. Apply pattern matching algorithms to find particular pattern.
5. Differentiate polynomial and nonpolynomial problems.
6. Explain the major graph algorithms and their analyses. Employ graphs to model engineering
problems, when appropriate.
Sr. No. | Objective(s) of Experiment | CO1 | CO2 | CO3 | CO4 | CO5 | CO6
1. Implement a function for each of the following problems and count the number of steps executed/time taken by each function on various inputs (100 to 500) and write the time complexity of each function. Also draw a comparative chart of number of inputs versus steps executed/time taken. In each of the following functions, N will be passed by the user.
1. To calculate sum of 1 to N numbers using loop.
2. To calculate sum of 1 to N numbers using equation.
3. To calculate sum of 1 to N numbers using recursion.
2. Write user defined functions for the following sorting methods and compare their performance by steps executed/time taken for execution on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending). √ √
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
3. Implement a function of sequential search and count the steps executed by the function on various inputs (1000 to 5000) for best case, average case and worst case. Also, write the time complexity in each case and draw a comparative chart of number of inputs versus steps executed by sequential search for each case. √
4. Compare the performance of linear search and binary search for Best case, Average case and Worst case inputs. √ √

5. Implement functions to print the nth Fibonacci number using iterative and recursive methods. Compare the performance of the two methods by counting the number of steps executed on various inputs. Also, draw a comparative chart. (Fibonacci series 1, 1, 2, 3, 5, 8… Here 8 is the 6th Fibonacci number.) √ √
6. Implement a program for the randomized version of quick sort and compare its performance with the normal version of quick sort using step counts on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending). √ √
7. Implement a program to solve the problem of making a change using dynamic programming. √
8. Implement a program for chain matrix multiplication using dynamic programming. √
9. Implement a program to solve the LCS problem using dynamic programming. √
10. Implement a program to solve the Knapsack problem using dynamic programming. √
11. Implement a program for the solution of the fractional Knapsack problem using the greedy design technique. √
12. Implement a program for the solution of the Making Change problem using the greedy design technique. √
13. Implement a program for Kruskal's algorithm to find minimum spanning tree. √ √
14. Implement a program for Prim's algorithm to find minimum spanning tree. √ √
15. Implement DFS and BFS graph traversal techniques and write their time complexities. √

16. Implement Rabin-Karp string matching algorithm. √



Industry Relevant Skills

The following industry relevant competencies are expected to be developed in the student by
undertaking the practical work of this laboratory.
1. Expertise in algorithm analysis
2. Judge best algorithm among various algorithms
3. Ability to solve complex problems
4. Ability to design efficient algorithm for some problems

Guidelines for Faculty members


1. Teacher should provide the guideline with demonstration of practical to the students with
all features.
2. Teacher shall explain basic concepts/theory related to the experiment to the students before starting each practical.
3. Involve all the students in performance of each experiment.
4. Teacher is expected to share the skills and competencies to be developed in the students and
ensure that the respective skills and competencies are developed in the students after the
completion of the experimentation.
5. Teachers should give opportunity to students for hands-on experience after the
demonstration.
6. Teacher may provide additional knowledge and skills to the students even though not
covered in the manual but are expected from the students by concerned industry.
7. Give practical assignment and assess the performance of students based on task assigned to
check whether it is as per the instructions or not.
8. Teacher is expected to refer complete curriculum of the course and follow the guidelines for
implementation.

Instructions for Students


1. Students are expected to carefully listen to all the theory classes delivered by the faculty
members and understand the COs, content of the course, teaching and examination scheme,
skill set to be developed etc.
2. Students shall organize the work in the group and make record of all observations.
3. Students shall develop maintenance skill as expected by industries.
4. Student shall attempt to develop related hands-on skills and build confidence.
5. Student shall develop the habits of evolving more ideas, innovations, skills etc. apart from
those included in scope of manual.
6. Student shall refer technical magazines and data books.
7. Student should develop a habit of submitting the experimentation work as per the schedule
and s/he should be well prepared for the same.

Common Safety Instructions


1. Switch on the PC carefully (do not use wet hands)
2. Shutdown the PC properly at the end of your Lab
3. Carefully handle the peripherals (Mouse, Keyboard, Network cable, etc.).
4. Use Laptop in lab after getting permission from Teacher

Index
(Progressive Assessment Sheet)

Sr. No. | Objective(s) of Experiment | Page No. | Date of performance | Date of submission | Assessment Marks | Sign. of Teacher with date | Remarks
1. Implement a function for each of the following problems and count the number of steps executed/time taken by each function on various inputs (100 to 500) and write the time complexity of each function. Also draw a comparative chart of number of inputs versus steps executed/time taken. In each of the following functions, N will be passed by the user.
1. To calculate sum of 1 to N numbers using loop.
2. To calculate sum of 1 to N numbers using equation.
3. To calculate sum of 1 to N numbers using recursion.
2. Write user defined functions for the following sorting methods and compare their performance by steps executed/time taken for execution on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending).
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
3. Implement a function of sequential search and count the steps executed by the function on various inputs (1000 to 5000) for best case, average case and worst case. Also, write the time complexity in each case and draw a comparative chart of number of inputs versus steps executed by sequential search for each case.
4. Compare the performance of linear search and binary search for Best case, Average case and Worst case inputs.
5. Implement functions to print the nth Fibonacci number using iterative and recursive methods. Compare the performance of the two methods by counting the number of steps executed on various inputs. Also, draw a comparative chart.

(Fibonacci series 1, 1, 2, 3, 5, 8… Here 8 is the 6th Fibonacci number.)
6. Implement a program for the randomized version of quick sort and compare its performance with the normal version of quick sort using step counts on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also, draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending).
7. Implement a program to solve the problem of making a change using dynamic programming.
8. Implement a program for chain matrix multiplication using dynamic programming.
9. Implement a program to solve the LCS problem using dynamic programming.
10. Implement a program to solve the Knapsack problem using dynamic programming.
11. Implement a program for the solution of the fractional Knapsack problem using the greedy design technique.
12. Implement a program for the solution of the Making Change problem using the greedy design technique.
13. Implement a program for Kruskal's algorithm to find minimum spanning tree.
14. Implement a program for Prim's algorithm to find minimum spanning tree.
15. Implement DFS and BFS graph traversal techniques and write their time complexities.
16. Implement Rabin-Karp string matching algorithm.
Total
230133107013
Experiment No: 1
Implement a function for each of following problems and count the number of steps executed/time
taken by each function on various inputs (100 to 500) and write equation for the growth rate of each
function. Also draw a comparative chart of number of input versus steps executed/time taken. In
each of the following function N will be passed by user.
1. To calculate sum of 1 to N numbers using loop.
2. To calculate sum of 1 to N numbers using equation.
3. To calculate sum of 1 to N numbers using recursion.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1

Objectives: (a) Compare performance of various algorithms


(b) Judge best algorithm in terms of growth rate or steps executed

Equipment/Instruments: Computer System, any C/Java language editor

Theory:

1. Below are the steps to calculate sum of 1 to N numbers using loop

1. Take an input value for N.


2. Initialize a variable sum to zero.
3. Start a loop that iterates from i=1 to i=N.
4. In each iteration, add the value of i to the sum variable.
5. After the loop completes, output the value of sum.

2. Below are the steps to calculate sum of 1 to N numbers using equation

1. Take an input value for N.


2. Calculate sum as N*(N+1)/2.
3. Output the value of sum.

3. Below are the steps to calculate sum of 1 to N numbers using recursion

1. Take an input value for N.


2. Define a recursive function sum(n) that takes an integer argument n and returns the sum
of 1 to n.
3. In the sum(n) function, check if n equals 1. If it does, return 1 (the base case).
4. Otherwise, return the sum of n and the result of calling sum(n-1).
5. Output the result.

Implement three functions based on the above steps and calculate the number of steps executed by each function on various inputs ranging from 100 to 500. Take a counter variable to count the steps and increment it for each statement in the function.

Code:
Using loop
public class prac_1 {
public static int sumUsingLoop(int N) {
int sum = 0;
int counter = 0;
counter++;
for (int i = 1; i <= N; i++) {
counter++;
sum += i;
counter++;
}
counter++;
System.out.println("Steps in sumUsingLoop: " + counter);
return sum;
}
Using equation
public static int sumUsingEquation(int N) {
int counter = 0;
int sum = N * (N + 1) / 2;
counter += 3;
counter++;
System.out.println("Steps in sumUsingEquation: " + counter);
return sum;
}
Using recursion
public static int sumUsingRecursion(int n, int counter) {
counter++; // For function call
if (n == 1) {
counter++;
System.out.println("Steps in sumUsingRecursion at base case: " + counter);
return 1;
} else {
counter += 2;
return n + sumUsingRecursion(n - 1, counter);
}
}

public static void main(String[] args) {


int N = 500;

int sumLoop = sumUsingLoop(N);


System.out.println("Sum using loop: " + sumLoop);

int sumEquation = sumUsingEquation(N);


System.out.println("Sum using equation: " + sumEquation);

int counter = 0;
int sumRecursion = sumUsingRecursion(N, counter);
System.out.println("Sum using recursion: " + sumRecursion);
}
}
Output:

Observations:
The Equation method is the most efficient, executing a constant 4 steps regardless of input size, making it ideal for large inputs. The Loop method scales linearly with input size, with steps increasing as 2N + 2, making it less efficient than the Equation method but more predictable. The Recursion method is the least efficient, scaling linearly as 3N − 1 with additional overhead from function calls, making it slower for large inputs.

Result: Complete the below table based on your implementation of functions and steps executed
by each function.

Number of Steps Executed

Inputs | Loop method | Equation | Recursion
100 | 202 | 4 | 299
200 | 402 | 4 | 599
300 | 602 | 4 | 899
400 | 802 | 4 | 1199
500 | 1002 | 4 | 1499
Equation→ | T(N) = 2N + 2 | T(N) = 4 | T(N) = 3N − 1

Chart:
<Draw Comparative Chart of inputs versus number of steps executed by each algorithm>

Conclusion:
In conclusion, the Equation method is the most optimal for calculating the sum from 1 to N, as it consistently executes a minimal number of steps, regardless of input size. While the Loop and Recursion methods both scale linearly, the Loop method is more efficient than Recursion but still less efficient than the Equation approach. For large inputs, the Equation method is clearly the best choice due to its constant time complexity.

Quiz:
1. What is the meaning of constant growth rate of an algorithm?
Answer:
A constant growth rate in an algorithm refers to the situation where the execution time or the number of steps required by the algorithm does not change regardless of the size of the input data. This means that no matter how large the input size N is, the algorithm always performs the same number of operations.
Key Characteristics:
• Time Complexity: The time complexity of such an algorithm is written as O(1), where "1" indicates that the time required is constant.
• Performance: Algorithms with constant growth rates are extremely efficient because their performance is unaffected by the size of the input. They execute in the same time whether the input size is small or very large.
• Example: A simple example is accessing an element in an array by its index. No matter how large the array is, accessing a specific element takes the same amount of time.

2. If one algorithm has a growth rate of n² and a second algorithm has a growth rate of n, then which algorithm executes faster? Why?
Answer:
If one algorithm has a growth rate of O(n²) and another algorithm has a growth rate of O(n), the algorithm with the growth rate of O(n) will generally execute faster.
Reason:
• Growth Rate Comparison: The growth rate O(n) means that the number of operations or the time required increases linearly with the size of the input n. In contrast, O(n²) means that the number of operations increases quadratically as the input size grows.
• Impact on Performance: As the input size n increases, the O(n²) algorithm will perform significantly more operations than the O(n) algorithm. For example, if n = 100, an O(n) algorithm would perform approximately 100 operations, whereas an O(n²) algorithm would perform 10,000 operations.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:


GeeksforGeeks

Rubric wise marks obtained:

Rubrics | Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale | Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0) |
Marks | | | |

Experiment No: 2
Write user defined functions for the following sorting methods and compare their performance by steps executed/time taken for execution on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also, draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending).
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1, CO2

Objectives: (a) Compare performance of various algorithms.


(b) Judge best sorting algorithm on sorted and random inputs in terms of growth
rate/time complexity.
(c) Derive time complexity from steps count on various inputs.

Equipment/Instruments: Computer System, any C/Java language editor

Theory:

1. Selection Sort Function in C

void SelectionSort(int a[], int n)
// Here 'a' is the array having 'n' number of data
{
    int MinIndex, temp;
    for (int i = 0; i < n - 1; i++)
    {
        MinIndex = i;
        for (int j = i + 1; j < n; j++)
        {
            if (a[MinIndex] > a[j])
                MinIndex = j;
        }
        if (MinIndex != i)
        {
            temp = a[MinIndex];
            a[MinIndex] = a[i];
            a[i] = temp;
        }
    }
}

Code:
import java.util.Arrays;
import java.util.Random;

public class SelectionSort {


static int steps = 0;
public static void selectionSort(int[] arr) {
int n = arr.length;
steps = 0;
steps++;
for (int i = 0; i < n-1; i++) {
steps++;
int minIndex = i;
steps++;
for (int j = i+1; j < n; j++) {
steps++;
steps++;
if (arr[j] < arr[minIndex]) {
minIndex = j;
steps++;
}
}
steps++;
int temp = arr[minIndex];
arr[minIndex] = arr[i];
arr[i] = temp;
steps+=3;
}
}

public static void main(String[] args) {


int[] arrSizes = {1000, 2000, 3000, 4000, 5000};
for (int size : arrSizes) {
runAndPrintResults(size, "Random");
runAndPrintResults(size, "Ascending");
runAndPrintResults(size, "Descending");
}
}

public static void runAndPrintResults(int size, String orderType) {


int[] arr = generateArray(size, orderType);
steps = 0;
long startTime = System.nanoTime();
selectionSort(arr);
long endTime = System.nanoTime();
System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}

public static int[] generateArray(int size, String orderType) {


int[] arr = new int[size];
Random rand = new Random();
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size);
}
if (orderType.equals("Ascending")) {
Arrays.sort(arr);
} else if (orderType.equals("Descending")) {
Arrays.sort(arr);
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}
Output:

2. Bubble Sort Function in C

void BubbleSort(int a[], int n)
// Here 'a' is the array having 'n' number of data
{
    int swap_flag, temp;
    for (int i = 0; i < n - 1; i++)
    {
        swap_flag = 0;
        for (int j = 0; j < n - i - 1; j++)
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
                swap_flag = 1;
            }
        }
        if (swap_flag == 0)
            return;
    }
}


Code:
import java.util.Arrays;
import java.util.Random;

public class Bubble_sort {


static int steps = 0;

public static void bubbleSort(int[] arr) {


int n = arr.length;
steps = 0;
steps++;
for (int i = 0; i < n-1; i++) {
steps++;
for (int j = 0; j < n-i-1; j++) {
steps++;
steps++;
if (arr[j] > arr[j+1]) {
int temp = arr[j];
arr[j] = arr[j+1];
arr[j+1] = temp;
steps+=3;
}
}
steps++;
}
steps++;
}
public static void main(String[] args) {
int[] arrSizes = {1000, 2000, 3000, 4000, 5000};
for (int size : arrSizes) {
runAndPrintResults(size, "Random");
runAndPrintResults(size, "Ascending");
runAndPrintResults(size, "Descending");
}
}
public static void runAndPrintResults(int size, String orderType) {
int[] arr = generateArray(size, orderType);
steps = 0;
long startTime = System.nanoTime();
bubbleSort(arr);
long endTime = System.nanoTime();
System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}
public static int[] generateArray(int size, String orderType) {
int[] arr = new int[size];
Random rand = new Random();
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size);
}
if (orderType.equals("Ascending")) {
Arrays.sort(arr);
} else if (orderType.equals("Descending")) {
Arrays.sort(arr);
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}
Output:

3. Insertion Sort Function in C

void Insertion_Sort(int a[], int n)


//Here ‘a’ is the array having ‘n’ number of data
{
int i,j,key;
for(j=1;j<n;j++)
{
key=a[j];
i=j-1;
while(i>=0 && a[i]>key)
{
a[i+1] = a[i];
i= i-1;
}
a[i+1] =key;
}
}

Code:
import java.util.Arrays;
import java.util.Random;

public class InsertionSort {


static int steps = 0;


public static void insertionSort(int[] arr) {


int n = arr.length;
steps = 0;
steps++;
for (int i = 1; i < n; i++) {
steps++;
int key = arr[i];
int j = i - 1;
steps++;
while (j >= 0 && arr[j] > key) {
steps++;
arr[j + 1] = arr[j];
j = j - 1;
steps+=2;
}
steps++;
arr[j + 1] = key;
steps++;
}
steps++;
}

public static void main(String[] args) {


int[] arrSizes = {1000, 2000, 3000, 4000, 5000};
for (int size : arrSizes) {
runAndPrintResults(size, "Random");
runAndPrintResults(size, "Ascending");
runAndPrintResults(size, "Descending");
}
}

public static void runAndPrintResults(int size, String orderType) {


int[] arr = generateArray(size, orderType);
steps = 0;
long startTime = System.nanoTime();
insertionSort(arr);
long endTime = System.nanoTime();
System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}

public static int[] generateArray(int size, String orderType) {


int[] arr = new int[size];
Random rand = new Random();
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size);
}
if (orderType.equals("Ascending")) {
Arrays.sort(arr);
} else if (orderType.equals("Descending")) {
Arrays.sort(arr);
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}

Output:

4. Merge Sort Function in C

void merge(int a[],int low, int mid,int high)


{
int temp[5000];
int i, j, k;
k = low;
j = mid +1;
i = low;
while((k<=mid) && (j<=high))
{
if(a[k]<=a[j])
{ temp[i] = a[k] ;
k++;
}
else
{ temp[i] = a[j] ;
j++;
}
i++;
}
if(k<=mid)
for(;k<=mid;k++)
temp[i++] = a[k];
else
for(;j<=high;j++)
temp[i++]=a[j];
for (i = low; i <= high; i++)
    a[i] = temp[i];
}
void merge_sort(int a[], long int low, long int high) // call this function from main
{
if(low!=high)
{
int mid = (low+high)/2;
merge_sort(a,low,mid);
merge_sort(a,mid+1,high);
merge(a,low,mid,high);
}
}

Code:
import java.util.Arrays;
import java.util.Random;

public class MergeSort {


static int steps = 0;

public static void mergeSort(int[] arr, int l, int r) {


if (l < r) {
int m = (l + r) / 2;
mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);
merge(arr, l, m, r);
}
}

public static void merge(int[] arr, int l, int m, int r) {


int n1 = m - l + 1;
int n2 = r - m;
steps++;

int[] L = new int[n1];


int[] R = new int[n2];
steps++;

for (int i = 0; i < n1; i++) {


steps++;
L[i] = arr[l + i];
steps++;
}
steps++;
for (int j = 0; j < n2; j++) {
steps++;
R[j] = arr[m + 1 + j];
steps++;
}
steps++;

int i = 0, j = 0;
int k = l;
steps++;
while (i < n1 && j < n2) {
steps++;
steps++;
if (L[i] <= R[j]) {
arr[k] = L[i];
i++;
steps+=2;
} else {
arr[k] = R[j];
j++;
steps+=2;
}
k++;
steps++;
}
steps++;

while (i < n1) {


steps++;
arr[k] = L[i];
i++;
k++;
steps+=3;
}
steps++;

while (j < n2) {


steps++;
arr[k] = R[j];
j++;
k++;
steps+=3;
}
steps++;
}

public static void main(String[] args) {


int[] arrSizes = {1000, 2000, 3000, 4000, 5000};
for (int size : arrSizes) {
runAndPrintResults(size, "Random");
runAndPrintResults(size, "Ascending");
runAndPrintResults(size, "Descending");
}
}

public static void runAndPrintResults(int size, String orderType) {


int[] arr = generateArray(size, orderType);
steps = 0;
long startTime = System.nanoTime();
mergeSort(arr, 0, arr.length - 1);
long endTime = System.nanoTime();
System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}

public static int[] generateArray(int size, String orderType) {


int[] arr = new int[size];
Random rand = new Random();
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size);
}
if (orderType.equals("Ascending")) {
Arrays.sort(arr);
} else if (orderType.equals("Descending")) {
Arrays.sort(arr);
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}

Output:

5. Quick Sort Function in C

void quicksort(int q[], int lb, int ub)
{
int flag = 1;
int i = 0, j, key, t1, t2;
if(lb<ub)
{
i=lb;
j=ub+1;
key=q[lb];

while(flag==1)
{
i++;
while(q[i]<key)
{
i++;
}
j--;
while(q[j]>key)
j--;
if(i<j)
{
t1=q[i];
q[i]=q[j];
q[j]=t1;
}
else{
flag=0;
}
}
t2=q[lb];
q[lb]=q[j];
q[j]=t2;

quicksort(q,lb,j-1);
quicksort(q,j+1,ub);
}
return;
}

Code:
import java.util.Arrays;
import java.util.Random;

public class QuickSort {


static int steps = 0;

public static void quickSort(int[] arr, int low, int high) {


if (low < high) {
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}

public static int partition(int[] arr, int low, int high) {


int pivot = arr[high];
int i = (low - 1);
steps++;
for (int j = low; j < high; j++) {
steps++;
steps++;
if (arr[j] < pivot) {
i++;
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
steps+=4;
}
}
steps++;
int temp = arr[i + 1];
arr[i + 1] = arr[high];
arr[high] = temp;
steps+=4;
return i + 1;
}

public static void main(String[] args) {


int[] arrSizes = {1000, 2000, 3000, 4000, 5000};
for (int size : arrSizes) {
runAndPrintResults(size, "Random");
runAndPrintResults(size, "Ascending");
runAndPrintResults(size, "Descending");
}
}

public static void runAndPrintResults(int size, String orderType) {


int[] arr = generateArray(size, orderType);
steps = 0;
long startTime = System.nanoTime();
quickSort(arr, 0, arr.length - 1);
long endTime = System.nanoTime();
System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}

public static int[] generateArray(int size, String orderType) {


int[] arr = new int[size];
Random rand = new Random();
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size);
}
if (orderType.equals("Ascending")) {
Arrays.sort(arr);
} else if (orderType.equals("Descending")) {
Arrays.sort(arr);
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}

Output:

Write a main function to use the above sorting functions and calculate the number of steps executed by each function on various inputs ranging from 1000 to 5000. Take a counter variable to count the steps and increment it for each statement in the function.

Below is the sample function that counts the number of steps:


void bubble_sort(int a[], int n)
{
int swap_flag, i, j, temp;

steps++; //for int, it is a global variable

for(i=0;i<n-1;i++)
{
steps++; //for loop
steps++; //for swap
swap_flag = 0;
for(j=0;j<n-i-1;j++)
{
steps++; //for inner loop
steps++; //for if
if(a[j]>a[j+1])
{
temp = a[j];
a[j] = a[j+1];
a[j+1] = temp;
swap_flag = 1;
steps++; //
steps++; //
steps++; //for four steps
steps++; //
}
}
steps++; //for inner loop

steps++; //for if
if(swap_flag==0)
{
steps++; //for return
return;
}
}
steps++; //for outer loop false
}

Observations:
Selection, Bubble, and Insertion Sort perform poorly with O(n^2) time complexity on random and
descending data, but Insertion Sort performs best on ascending data with O(n) complexity. Merge
Sort consistently shows O(nlogn) time complexity across all data types, making it efficient
regardless of input order. Quick Sort performs well on random data with O(nlogn) but degrades to
O(n^2) in the worst case (ascending/descending data) due to poor pivot choices.

Result: Complete the below table based on your implementation of functions and steps executed
by each function. Also, prepare similar tables for ascending order sorted data and descending
order sorted data.

Number of Steps Executed (Random Data)

Inputs        Selection    Bubble       Insertion    Merge      Quick
1000          1010152      1762925      754682       76527      46880
2000          4021976      7007931      2984301      167118     102562
3000          9034613      15740562     6746902      264321     174922
4000          16046080     28276592     12136847     362272     228350
5000          25058954     44010339     18703761     466081     327620
Time
Complexity→   O(n^2)       O(n^2)       O(n^2)       O(nlogn)   O(nlogn)

Number of Steps Executed (Ascending Data)

Inputs        Selection    Bubble       Insertion    Merge      Quick
1000          1004995      1001000      3998         72892      1902648
2000          4009995      4002000      7998         158792     7541914
3000          9014995      9003000      11998        251492     16835228
4000          16019995     16004000     15998        343592     29715912
5000          25024995     25005000     19998        442844     47567794
Time
Complexity→   O(n^2)       O(n^2)       O(n)         O(nlogn)   O(n^2)

Number of Steps Executed (Descending Data)

Inputs        Selection    Bubble       Insertion    Merge      Quick
1000          1158621      2498045      1501175      72963      1552306
2000          4647417      9995976      6001929      158999     6106040
3000          10449616     22494123     13502971     251121     13868850
4000          18545972     39991982     24004193     343988     24910104
5000          28919356     62490108     37504713     441948     38633818
Time
Complexity→   O(n^2)       O(n^2)       O(n^2)       O(nlogn)   O(n^2)

Chart:
<Draw Comparative Charts of inputs versus number of steps executed on various data (Random, ascending
order sorted and descending order sorted data) by each algorithm.>
Sample Chart is as below:

<Sorting Functions Analysis on Random Data — inputs (100–500) on the x-axis versus steps executed (0–300000) on the y-axis; series: Selection, Insertion, Bubble, Quick, Merge>

Chart:


Conclusion:
For smaller or nearly sorted datasets, Insertion Sort is the most efficient among quadratic
algorithms. However, for larger datasets, Merge Sort is the most reliable due to its consistent
O(nlogn) performance across all cases. Quick Sort is highly efficient on random data but can
degrade to O(n^2) in the worst case without optimized pivot selection. Selection and Bubble Sort
consistently perform poorly on larger, unsorted data. For general use, Merge Sort or optimized
Quick Sort are recommended.

Quiz:
1. Which sorting function executes faster (has the smallest step count) in case of ascending order sorted
data?
In the case of ascending order sorted data, Insertion Sort executes the fastest with the smallest step
count. It efficiently detects that the array is already sorted, resulting in a time complexity of
O(n), making it the best choice for this scenario.

2. Which sorting function executes faster (has the smallest step count) in case of descending order sorted
data?
In the case of descending order sorted data, Merge Sort generally executes the fastest with a time
complexity of O(nlogn) because its performance is unaffected by the initial data order. Quick Sort
can also be efficient, but its performance may degrade to O(n^2) with poor pivot selection. Selection
Sort and Insertion Sort both have a time complexity of O(n^2), but Insertion Sort tends to be slower

with descending data due to more shifts. Thus, Merge Sort is usually the most efficient in this
scenario.

3. Which sorting function executes faster (has the smallest step count) in case of random data?
Merge Sort and Quick Sort are faster with random data, with Merge Sort having consistently reliable
performance. Quick Sort is also efficient on average but can be less predictable.

4. On what kind of data, the best case of Bubble sort occurs?


The best case for Bubble Sort occurs when the data is already sorted in ascending order. In this
scenario, Bubble Sort performs only one pass through the data with no swaps needed, resulting in
a best-case time complexity of O(n). This is because, during this single pass, the algorithm detects
that no swaps are necessary and terminates early.

5. On what kind of data, the worst case of Bubble sort occurs?


The worst case for Bubble Sort occurs when the data is sorted in descending order. In this scenario,
Bubble Sort has to perform the maximum number of comparisons and swaps. Each pass through
the data will result in swapping adjacent elements, and the algorithm needs O(n^2) operations to
sort the data completely. This is because Bubble Sort must make multiple passes through the entire
array, comparing and swapping elements until no more swaps are needed.

6. On what kind of data, the best case of Quick sort occurs?


The best case for Quick Sort occurs when the pivot selection results in evenly balanced partitions.
Specifically, this happens when the pivot divides the data into two nearly equal halves each time.
In this ideal situation, Quick Sort performs with a time complexity of O(nlogn) because each
recursive call processes a smaller portion of the data, leading to a balanced recursion tree and
efficient sorting.

7. On what kind of data, the worst case of Quick sort occurs?


The worst case for Quick Sort occurs when the pivot selection consistently results in highly
unbalanced partitions. This typically happens when the pivot is the smallest or largest element in
the partition, leading to partitions where one side is empty or nearly empty, and the other side
contains most of the data. This scenario occurs with already sorted or reverse-sorted data if the pivot
selection strategy is poor, such as always picking the first or last element. In this case, Quick Sort
degrades to O(n^2) time complexity due to the deep recursion and inefficient partitioning.

8. Which sorting algorithms are in-place sorting algorithms?


In-place sorting algorithms include Selection Sort, Bubble Sort, Insertion Sort, and Quick Sort.
These algorithms sort the data directly within the array using only a small, constant amount of extra
memory. In contrast, Merge Sort is not in-place because it requires additional memory for merging
subarrays.

9. Which sorting algorithms are stable sorting algorithms?


Stable sorting algorithms include Insertion Sort, Bubble Sort, and Merge Sort. These maintain the
relative order of equal elements. Selection Sort and Quick Sort are typically not stable.
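Stability can be checked directly in Java: `Arrays.sort` on object arrays uses a stable merge sort (TimSort), so equal keys keep their input order. The sketch below is illustrative only (the `records` data is made up, not from the experiments above):

```java
import java.util.Arrays;
import java.util.Comparator;

public class StabilityDemo {
    public static void main(String[] args) {
        // Pairs of {key, originalIndex}; two records share the key 5.
        int[][] records = { {5, 0}, {3, 1}, {5, 2}, {1, 3} };

        // Arrays.sort on objects is a stable merge sort (TimSort),
        // so records with equal keys keep their relative order.
        Arrays.sort(records, Comparator.comparingInt(r -> r[0]));

        for (int[] r : records) {
            System.out.println(r[0] + " (was index " + r[1] + ")");
        }
        // The two records with key 5 still appear as index 0 before index 2.
    }
}
```

Running the same data through Selection Sort or a basic Quick Sort would not guarantee that ordering for equal keys.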

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:
“Fundamentals of Algorithms” by E. Horowitz et al.

Rubric wise marks obtained:

Rubrics   Understanding of     Program               Documentation &        Total
          problem (3)          Implementation (5)    Timely Submission (2)  (10)
          Good  Avg.  Poor     Good  Avg.  Poor      Good  Avg.  Poor
          (3)   (2-1) (1-0)    (5-4) (3-2) (1-0)     (2)   (1)   (0)
Marks

Experiment No: 3
Implement a function of sequential search and count the steps executed by function on various
inputs (1000 to 5000) for best case, average case and worst case. Also, write time complexity in
each case and draw a comparative chart of number of input versus steps executed by sequential
search for each case.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1

Objectives: (a) Identify Best, Worst and Average cases of given problem.
(b) Derive time complexity from steps count on various inputs.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Steps to implement sequential search is as below:

1. Take an input array A of n elements and a key value K.


2. Define a variable pos, initially set to -1.
3. Iterate through the array A, starting from the first element and continuing until either the key
value is found or the end of the array is reached.
4. For each element, compare its value to the key value K.
5. If the values match, set pos to the index of the current element and exit the loop.
6. If the end of the array is reached and the key value has not been found, pos remains equal to -1.
7. Output the value of pos.

The algorithm works by sequentially iterating through the elements of the array and comparing each
element to the target value. If a match is found, the algorithm exits the loop.

Implement the above function and calculate the number of steps it executes on various inputs
ranging from 1000 to 5000. Take a counter variable to count the steps and increment it for each
statement. Based on the algorithm's logic, decide best, worst and average case inputs for the
algorithm and prepare a table of step counts.

Code:
import java.util.*;

public class searching {

public static int sequentialSearch(int arr[], int key) {


int count = 0;
for (int i = 0; i < arr.length; i++) {
count++;
if (arr[i] == key) {
return count;
}
}
return count;
}

public static void main(String args[]) {


int[] sizes = { 1000, 2000, 3000, 4000, 5000 };
Random rand = new Random();

for (int size : sizes) {


int[] array = new int[size];

// Fill array with random values


for (int i = 0; i < size; i++) {
array[i] = rand.nextInt(10000);
}

// Best Case: Target is the first element


int bestCaseTarget = array[0];
long bestCaseStartTime = System.nanoTime();
int bestCaseSteps = sequentialSearch(array, bestCaseTarget);
long bestCaseEndTime = System.nanoTime();
long bestCaseTime = bestCaseEndTime - bestCaseStartTime;

// Average Case: Target is somewhere in the middle


int averageCaseTarget = array[size / 2];
long averageCaseStartTime = System.nanoTime();
int averageCaseSteps = sequentialSearch(array, averageCaseTarget);
long averageCaseEndTime = System.nanoTime();
long averageCaseTime = averageCaseEndTime - averageCaseStartTime;

// Worst Case: Target is the last element


int worstCaseTarget = array[size - 1];
long worstCaseStartTime = System.nanoTime();
int worstCaseSteps = sequentialSearch(array, worstCaseTarget);
long worstCaseEndTime = System.nanoTime();
long worstCaseTime = worstCaseEndTime - worstCaseStartTime;

// Print the results


System.out.println("Array Size: " + size);
System.out.println("Best Case Steps: " + bestCaseSteps + ", Time (ns): " + bestCaseTime);
System.out.println("Average Case Steps: " + averageCaseSteps + ", Time (ns): " +
averageCaseTime);
System.out.println("Worst Case Steps: " + worstCaseSteps + ", Time (ns): " +
worstCaseTime);
System.out.println(" ");
}
}

Output:

Observations:
Best Case: The best case occurs when the target element is found at the first position. The number of steps
is always 1.
Average Case: The average case occurs when the target element is found in the middle of the array. For an
array of size n, the number of steps is approximately n/2.
Worst Case: The worst case occurs when the target element is found at the last position or not found at all.
The number of steps is equal to the size of the array, i.e., n.
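The n/2 figure in the average case follows from averaging over all equally likely key positions: (1 + 2 + … + n)/n = (n + 1)/2. A quick numeric check of that derivation:

```java
public class AverageSteps {
    public static void main(String[] args) {
        int n = 1000;
        long total = 0;
        // If the key is equally likely to sit at each of the n positions,
        // finding it at position p costs p comparisons.
        for (int p = 1; p <= n; p++) {
            total += p;
        }
        // Average = (n + 1) / 2 = 500.5 comparisons for n = 1000,
        // which matches the ~501 steps recorded in the table below.
        System.out.println((double) total / n);
    }
}
```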

Result: Complete the below table based on your implementation of sequential search algorithm
and steps executed by the function.

Number of Steps Executed

Inputs        Best Case    Average Case    Worst Case
1000          1            501             1000
2000          1            1001            2000
3000          1            1501            3000
4000          1            2001            4000
5000          1            2501            5000
Time
Complexity→   O(1)         O(n)            O(n)

Chart:

Conclusion:
• Sequential search is a simple algorithm with a linear growth in the number of steps as the
input size increases.
• The best case occurs when the element is found at the first position, while the worst case
occurs when the element is at the last position or not present at all.
• Time complexity for the worst and average cases is O(n), whereas the best case is O(1).

Quiz:
1. Which is the best case of an algorithm?
The best case occurs when the key is found at the first position, requiring the minimum number of
comparisons (i.e., 1 comparison).

2. Which is the worst case of an algorithm?


The worst case occurs when the key is not present or found at the last position, requiring the
maximum number of comparisons (i.e., the size of the array, n comparisons).

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:


“Fundamentals of Algorithms” by E. Horowitz et al.

Rubric wise marks obtained:

Rubrics   Understanding of     Program               Documentation &        Total
          problem (3)          Implementation (5)    Timely Submission (2)  (10)
          Good  Avg.  Poor     Good  Avg.  Poor      Good  Avg.  Poor
          (3)   (2-1) (1-0)    (5-4) (3-2) (1-0)     (2)   (1)   (0)
Marks

Experiment No: 4

Compare the performances of linear search and binary search for Best case, Average case and Worst
case inputs.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1, CO2

Objectives: (a) Identify Best, Worst and Average cases of given problem.
(b) Derive time complexity from steps count for different inputs.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Steps to implement binary search are as below:

1. Take an input sorted array A of n elements and a target value T.


2. Define variables start and end to represent the start and end indices of the search range, initially
set to 0 and n-1 respectively.
3. Repeat the following steps while start <= end:
a. Calculate the midpoint index mid as (start + end) / 2.
b. If the value of the midpoint element A[mid] is equal to the target value T, return the
value of mid.
c. If the value of the midpoint element A[mid] is greater than the target value T, set end
to mid-1.
d. If the value of the midpoint element A[mid] is less than the target value T, set start
to mid+1.
4. If the target value T is not found in the array, return -1.
5. Output the value returned in Step 3, representing the position of the target value T in the array.
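One caution about step 3a: computing (start + end) / 2 with int indices can overflow when both indices are very large. A sketch of the overflow-safe form (the Java code later in this practical uses the same idea):

```java
public class Midpoint {
    // start + (end - start) / 2 never exceeds end, so it cannot overflow,
    // unlike (start + end) / 2 when start + end > Integer.MAX_VALUE.
    static int mid(int start, int end) {
        return start + (end - start) / 2;
    }

    public static void main(String[] args) {
        System.out.println(mid(0, 10)); // 5
        // Near Integer.MAX_VALUE the naive form would wrap negative;
        // this form still returns Integer.MAX_VALUE - 1 here.
        System.out.println(mid(Integer.MAX_VALUE - 2, Integer.MAX_VALUE));
    }
}
```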

Implement function of binary search algorithm and use linear search function implemented
in previous practical. Compare the steps count of both the functions on various inputs ranging
from 100 to 500 for each case (Best, Average, and Worst).

Code:
import java.util.Arrays;
import java.util.Random;

public class BinarySearchComparison{


public static void main(String[] args) {
int[] sizes = {100, 200, 300, 400, 500};
Random rand = new Random();
for (int size : sizes) {
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size * 2);
}
Arrays.sort(arr);


int bestCase = arr[0];


int averageCase = arr[size / 2];
int worstCase = size * 2 + 1;
System.out.println("Array Size: " + size);
System.out.print("bestcase ");
binarySearch(arr, bestCase);
System.out.print("averagecase ");
binarySearch(arr, averageCase);
System.out.print("worstcase ");
binarySearch(arr, worstCase);
System.out.print("bestcase ");
linearSearch(arr, bestCase);
System.out.print("averagecase ");
linearSearch(arr, averageCase);
System.out.print("worstcase ");
linearSearch(arr, worstCase);
System.out.println();
}
}
public static int linearSearch(int[] arr, int x) {
int steps = 0;
steps++;
for (int i = 0; i < arr.length; i++) {
steps++;
steps++;
if (arr[i] == x) {
steps+=2;
System.out.println("Linear Search Steps: " + steps);
return i;
}
}
steps+=3;
System.out.println("Linear Search Steps: " + steps);
return -1;
}
public static int binarySearch(int[] arr, int x) {
int left = 0, right = arr.length - 1;
int steps = 0;
steps++;
while (left <= right) {
steps++;
int mid = left + (right - left) / 2;
steps+=2;
if (arr[mid] == x) {
steps+=2;
System.out.println("Binary Search Steps: " + steps);
return mid;
}
steps++;
if (arr[mid] < x) {
left = mid + 1;
steps++;
} else {
right = mid - 1;
steps++;
}
}
steps+=3;
System.out.println("Binary Search Steps: " + steps);
return -1;
}
}

Output:

Observations:
In the best case, both Binary Search and Linear Search execute minimal steps, making them equally
efficient. However, in the average and worst cases, Binary Search significantly outperforms Linear
Search, as its steps grow logarithmically, while Linear Search's steps grow linearly with input size.
As input size increases, Binary Search remains much faster, especially for large datasets.

Result: Complete the below table based on your implementation of the linear and binary search
algorithms and the steps executed by each function.

Number of Steps Executed (Best Case)

Inputs        Linear Search    Binary Search
100           5                31
200           5                36
300           5                41
400           5                41
500           5                41
Time
Complexity→   O(1)             O(log n)

Number of Steps Executed (Average Case)

Inputs        Linear Search    Binary Search
100           105              31
200           205              36
300           305              6
400           405              41
500           505              41
Time
Complexity→   O(n)             O(log n)

Number of Steps Executed (Worst Case)

Inputs        Linear Search    Binary Search
100           204              39
200           404              44
300           604              49
400           804              49
500           1004             49
Time
Complexity→   O(n)             O(log n)

Chart:
<Draw Comparative Charts of inputs versus number of steps executed by both algorithms in various cases>


Conclusion: Comparing all cases of linear and binary search shows that linear search is O(1) in the best case but grows linearly — O(n) — in the average and worst cases, while binary search grows only logarithmically — O(log n) — making it far more efficient on large sorted data.

Quiz:
1. Which element should be searched for the best case of binary search algorithm?
For the best case of the binary search algorithm, the element to be searched should be the middle
element. This is because binary search starts by checking the middle element of the array first. If
the target element matches this middle element on the first attempt, the algorithm finishes in a
single step, making it the best-case scenario.

2. Which element should be searched for the worst case of binary search algorithm?
For the worst case of the binary search algorithm, the element to be searched should be one that is
not present in the array or an element that forces the algorithm to repeatedly divide the search range
until there is only one element left to check.

3. Which algorithm executes faster in worst case?


In the worst case, binary search executes faster than linear search.
• Binary Search has a time complexity of O(log n) in the worst case, meaning the number of
steps grows logarithmically with the size of the input. This makes it much faster, especially
for large datasets.

• Linear Search has a time complexity of O(n) in the worst case, meaning it checks every
element in the array, resulting in the number of steps growing linearly with the size of the
input.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:


“Fundamentals of Algorithms” by E. Horowitz et al.

Rubric wise marks obtained:

Rubrics   Understanding of     Program               Documentation &        Total
          problem (3)          Implementation (5)    Timely Submission (2)  (10)
          Good  Avg.  Poor     Good  Avg.  Poor      Good  Avg.  Poor
          (3)   (2-1) (1-0)    (5-4) (3-2) (1-0)     (2)   (1)   (0)
Marks

Experiment No: 5

Implement functions to print nth Fibonacci number using iteration and recursive method. Compare
the performance of two methods by counting number of steps executed on various inputs. Also draw
a comparative chart. (Fibonacci series 1, 1, 2, 3, 5, 8….. Here 8 is the 6th Fibonacci number).

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis

Relevant CO: CO1, CO5

Objectives: (a) Compare the performances of two different versions of same problem.
(b) Find the time complexity of algorithms.
(C) Understand the polynomial and non-polynomial problems

Equipment/Instruments: Computer System, Any C language editor

Theory:

The Fibonacci series is the sequence of numbers (also called Fibonacci numbers), where every
number is the sum of the preceding two numbers, such that the first two terms are '0' and '1'. In
some older versions of the series, the term '0' might be omitted. A Fibonacci series can thus be given
as, 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . . It can thus be observed that every term can be calculated by
adding the two terms before it. We are ignoring initial zero in the series.

To represent any (n+1)th term in this series, we can give the expression Fn = Fn-1 + Fn-2.

Iterative version to print nth Fibonacci number is as below:


Input: An integer n, where n >= 1.
Output: The nth Fibonacci number.
Steps:
Initialize variables f0 = 1, f1 = 1, and i = 2.
If n is 1 or 2 then
Print 1
While i < n, do
a. Set f2 = f0 + f1.
b. Set f0 = f1.
c. Set f1 = f2.
d. Increment i by 1.
Print f1.

Recursive version to print nth Fibonacci number is as below:

Input: An integer n, where n >= 1.


Output: The nth Fibonacci number.

If n is 1 or 2 then
return 1.
else recursively compute next number using the (n-1)th and (n-2)th Fibonacci numbers, and return
their sum.
Print the result.

Implement the above two versions of the Fibonacci function and compare their step counts on
inputs ranging from 10 to 50 (if memory permits for the recursive version).
Code:
import java.util.Arrays;
public class FibonacciComparison {

public static int fibIterative(int n) {


int steps = 0;
if (n <= 1) {
steps++;
System.out.println("Steps in fibIterative (n=" + n + "): " + steps);
return n;
}
int a = 0, b = 1;
steps++;
for (int i = 2; i <= n; i++) {
steps++;
int temp = a + b;
a = b;
b = temp;
steps += 2;
}
steps++;
System.out.println("Steps in fibIterative (n=" + n + "): " + steps);
return b;
}

public static int fibRecursive(int n, int[] steps) {


steps[0]++;
if (n <= 1) {
return n;
}
return fibRecursive(n - 1, steps) + fibRecursive(n - 2, steps);
}

public static void main(String[] args) {


int[] nValues = {10, 15, 20, 25, 30};
int[] iterativeSteps = new int[nValues.length];
int[] recursiveSteps = new int[nValues.length];
System.out.println("Comparing Fibonacci (Iterative vs Recursive) Steps:");

for (int i = 0; i < nValues.length; i++) {


int n = nValues[i];
fibIterative(n);
iterativeSteps[i] = countIterativeSteps(n);
int[] steps = {0};
fibRecursive(n, steps);
recursiveSteps[i] = steps[0];
System.out.println("Steps in fibRecursive (n=" + n + "): " + steps[0]);
}

System.out.println("\nComparative Table of Steps:");


System.out.println("n\tIterative Steps\tRecursive Steps");
for (int i = 0; i < nValues.length; i++) {
System.out.println(nValues[i] + "\t" + iterativeSteps[i] + "\t\t" + recursiveSteps[i]);
}
}
private static int countIterativeSteps(int n) {
int steps = 0;
if (n <= 1) {
steps++;
return steps;
}
int a = 0, b = 1;
steps++;
for (int i = 2; i <= n; i++) {
steps++;
int temp = a + b;
a = b;
b = temp;
steps += 2;
}
steps++;
return steps;
}
}
Output:

Observations:
Write observation based on number of steps executed by both algorithms.
The iterative approach is more efficient for computing Fibonacci numbers, especially for large values
of n, as it avoids the overhead of function calls and redundant calculations. The recursive approach
becomes impractical for large values of n due to the exponential increase in the number of function
calls and redundant calculations.

Result: Complete the below table based on your implementation of the Fibonacci functions and the
steps executed by each.

Number of Steps Executed

Inputs   Iterative Fibonacci   Recursive Fibonacci
10
20
30
40
50
(if memory doesn’t permit for the recursive version, reduce the range)
Time Complexity→

Conclusion: In this practical, we implemented and analyzed two approaches for calculating the nth
Fibonacci number: iterative and recursive. By comparing the number of steps executed by each
algorithm, we observed clear differences in their performance.

Quiz:
1. What is the time complexity of iterative version of Fibonacci function?
The time complexity of the iterative version of the Fibonacci function is O(n).

2. What is the time complexity of recursive version of Fibonacci function?


The time complexity of the recursive version of the Fibonacci function is O(2^n).

3. Can you execute recursive version of Fibonacci function for more inputs?
The recursive version of the Fibonacci function, due to its exponential time complexity (O(2^n)),
becomes inefficient and slow for larger inputs, such as n > 30. As n increases, the number of
recursive calls grows rapidly, consuming significant memory and processing time, which may lead
to stack overflow errors or long execution times.
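One common fix, sketched below as an extension (not part of the experiment above), is memoization: caching each computed value so every subproblem is solved only once, which reduces the recursion to O(n) distinct calls and makes much larger inputs feasible:

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    static final Map<Integer, Long> cache = new HashMap<>();

    // Recursive Fibonacci with memoization: each n is computed once,
    // so the number of calls grows linearly instead of exponentially.
    static long fib(int n) {
        if (n <= 2) return 1; // F1 = F2 = 1, as in this manual's series
        Long cached = cache.get(n);
        if (cached != null) return cached;
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        // n = 50 returns instantly, far beyond plain recursion's practical reach.
        System.out.println(fib(50)); // 12586269025
    }
}
```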

4. What do you mean by polynomial time algorithms and exponential time algorithms?

A polynomial time algorithm is one whose time complexity can be expressed as a polynomial
function of the input size n, i.e., the number of steps grows proportionally to n^k for some constant
k (such as O(n) or O(n^2)). An exponential time algorithm's step count grows as c^n for some
constant c > 1 (such as O(2^n)), which becomes infeasible even for moderate values of n.
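The gap is easiest to see numerically; this small sketch prints n^2 (polynomial) next to 2^n (exponential) for a few values of n:

```java
public class GrowthComparison {
    public static void main(String[] args) {
        // Polynomial growth (n^2) versus exponential growth (2^n):
        // by n = 30 the exponential term is over a million times larger.
        for (int n = 10; n <= 30; n += 10) {
            long poly = (long) n * n;
            long expo = 1L << n; // 2^n as a bit shift
            System.out.println("n=" + n + "  n^2=" + poly + "  2^n=" + expo);
        }
    }
}
```

This mirrors why the recursive Fibonacci function, with its O(2^n) call count, stalls around n = 30–40 while the O(n) iterative version does not.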

References used by the students:


“Fundamentals of Algorithms” by E. Horowitz et al.

Rubric wise marks obtained:

Rubrics   Understanding of     Program               Documentation &        Total
          problem (3)          Implementation (5)    Timely Submission (2)  (10)
          Good  Avg.  Poor     Good  Avg.  Poor      Good  Avg.  Poor
          (3)   (2-1) (1-0)    (5-4) (3-2) (1-0)     (2)   (1)   (0)
Marks

Experiment No: 6

Implement a program for randomized version of quick sort and compare its performance with the
normal version of quick sort using steps count on various inputs (1000 to 5000) of random nature,
ascending order and descending order sorted data. Also, draw a comparative chart of number of
input versus steps executed/time taken for each cases (random, ascending, and descending).

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis

Relevant CO: CO1, CO2

Objectives: (a) Improve the performance of quick sort in worst case.


(b) Compare the performance of both versions of quick sort on various inputs

Equipment/Instruments: Computer System, Any C language editor

Theory:

Steps to implement randomized version of quick sort are as below:

RANDOMIZED-QUICKSORT(A, low, high)


if (low< high) {
pivot= RANDOMIZED_PARTITION(A, low, high);
RANDOMIZED-QUICKSORT(A, low, pivot);
RANDOMIZED-QUICKSORT(A, pivot+1,high);
}
RANDOMIZED_PARTITION (A,low,high) {
pos = Random(low, high)
pivot = A[pos]
swap(pivot, a[low])
left = low
right = high
while ( left <right ) {
/* Move left while item < pivot */
while( A[left] <= pivot ) left++;
/* Move right while item > pivot */
while( A[right] > pivot ) right--;
if ( left < right )
swap(A[left],A[right]);}
/* right is final position for the pivot */
swap(A[right], pivot);
return right; }

Implement a function of randomized version of quick sort as per above instructions and use
basic version of quick sort (that selects first element as pivot element). Compare the steps
count of both the functions on various inputs ranging from 1000 to 5000 for each case
(random, ascending, and descending).

Code :-

import java.util.Random;

public class QuickSortComparison {


public static int basicQuickSortSteps = 0;
public static void basicQuickSort(int[] arr, int low, int high) {
if (low < high) {
basicQuickSortSteps++;
int pi = partition(arr, low, high);
basicQuickSort(arr, low, pi - 1);
basicQuickSort(arr, pi + 1, high);
}
}

public static int partition(int[] arr, int low, int high) {


int pivot = arr[low];
int i = low - 1;
for (int j = low; j <= high - 1; j++) {
basicQuickSortSteps++; // Comparison step
if (arr[j] < pivot) {
i++;
swap(arr, i, j);
}
}
swap(arr, i + 1, high);
return i + 1;
}

public static int randomizedQuickSortSteps = 0;


public static void randomizedQuickSort(int[] arr, int low, int high) {
if (low < high) {
randomizedQuickSortSteps++;
int pi = randomizedPartition(arr, low, high);
randomizedQuickSort(arr, low, pi - 1);
randomizedQuickSort(arr, pi + 1, high);
}
}

public static int randomizedPartition(int[] arr, int low, int high) {


int randomPivotIndex = new Random().nextInt(high - low + 1) + low;
swap(arr, randomPivotIndex, high);
return partition(arr, low, high);
}

public static void swap(int[] arr, int i, int j) {


int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
public static int[] generateRandomArray(int size) {
Random rand = new Random();
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size * 2);
}
return arr;
}
public static int[] generateAscendingArray(int size) {
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = i + 1;
}
return arr;
}
public static int[] generateDescendingArray(int size) {
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = size - i;
}
return arr;
}

public static void main(String[] args) {


int[] sizes = {1000, 2000, 3000, 4000, 5000};
for (int size : sizes) {
int[] randomArr = generateRandomArray(size);
int[] randomArrCopy = randomArr.clone();

System.out.println("Array Size: " + size);


basicQuickSortSteps = 0;
basicQuickSort(randomArr, 0, size - 1);
System.out.println("Basic Quick Sort Steps (Random Data): " + basicQuickSortSteps);
randomizedQuickSortSteps = 0;
randomizedQuickSort(randomArrCopy, 0, size - 1);
System.out.println("Randomized Quick Sort Steps (Random Data):"+
randomizedQuickSortSteps);
int[] ascArr = generateAscendingArray(size);
int[] ascArrCopy = ascArr.clone();

basicQuickSortSteps = 0;
basicQuickSort(ascArr, 0, size - 1);
System.out.println("Basic Quick Sort Steps (Ascending Data): " + basicQuickSortSteps);
randomizedQuickSortSteps = 0;
randomizedQuickSort(ascArrCopy, 0, size - 1);
System.out.println("Randomized Quick Sort Steps (Ascending Data): " +
randomizedQuickSortSteps);

int[] descArr = generateDescendingArray(size);


int[] descArrCopy = descArr.clone();
basicQuickSortSteps = 0;
basicQuickSort(descArr, 0, size - 1);
System.out.println("Basic Quick Sort Steps (Descending Data): " + basicQuickSortSteps);
randomizedQuickSortSteps = 0;
randomizedQuickSort(descArrCopy, 0, size - 1);
System.out.println("Randomized Quick Sort Steps (Descending Data): " +
randomizedQuickSortSteps); }}}
Observations:
Write observation based on number of steps executed by both algorithms.
1. Basic Quick Sort tends to perform poorly on sorted (ascending or descending) data
because it always selects the first element as the pivot, leading to unbalanced partitions and,
consequently, higher step counts in such cases.
2. Randomized Quick Sort, by selecting a random pivot, significantly improves performance
in the worst-case scenarios (sorted data), making it more efficient in general.

Result: Complete the below table based on your implementation of both versions of quick sort and
the steps executed by each function.

Number of Steps Executed (Random data)

Inputs        Randomized Quick Sort    Basic Quick Sort
1000          17079                    15831
2000          37835                    41290
3000          64831                    62784
4000          89777                    79513
5000          117847                   109373

Time           Best Case       O(n log n)    O(n log n)
Complexity→    Average Case    O(n log n)    O(n log n)
               Worst Case      O(n log n)    O(n^2)

Number of Steps Executed (Sorted data)

Inputs   Ascending Order   Descending Order
1000     749999            999999
2000     2999999           3999999
3000     6749999           8999999
4000     11999999          15999999
5000     18749999          24999999

Time Complexity →
Best Case      O(n log n)   O(n log n)
Average Case   O(n²)        O(n²)
Worst Case     O(n²)        O(n²)

Prepare similar tables for descending order and ascending order sorted data.

Chart:

[Bar chart "Practical 6": number of steps executed (0 to 30,000,000) vs. input size (1000 to 5000) for the series Randomized Quick Sort, Basic Quick Sort, Descending Order, and Ascending Order.]

Conclusion: Overall, Randomized Quick Sort is the preferred algorithm for sorting large datasets,
as it mitigates the risks of worst-case performance and achieves better average-case efficiency
compared to its basic counterpart.

Quiz:
5. What is the time complexity of Randomized Quick Sort in worst case?
The worst-case time complexity of Randomized Quick Sort is O(n^2). However, due to random
pivot selection, the worst-case scenario (where the pivot divides the array very unevenly) happens
rarely, making it much less likely than with the basic version.

6. What is the time complexity of basic version of Quick Sort on sorted data? Give reason for your answer.
The time complexity of the basic version of Quick Sort on sorted data is O(n^2). This happens
because, in each recursive call, the first element is selected as the pivot, leading to highly
unbalanced partitions where one side has no elements, and the other has n-1 elements. This leads
to a recursion depth of n, resulting in quadratic time complexity.

7. Can we always ensure O(n*lg n) time complexity for Randomized Quick Sort?
No, we cannot always ensure O(n log n) time complexity for Randomized Quick Sort. While
random pivot selection greatly reduces the likelihood of encountering the worst case, it is still
theoretically possible for the algorithm to end up with unbalanced partitions, resulting in O(n^2)
time complexity.

8. Which algorithm executes faster on ascending order sorted data?


Randomized Quick Sort executes faster on ascending order sorted data compared to Basic Quick
Sort. This is because random pivot selection avoids the worst-case behavior seen in the basic
version, where the first element is always chosen as the pivot, leading to highly unbalanced
partitions.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:
"Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein

Rubric wise marks obtained:

Rubrics Understanding of Program Documentation Total


problem Implementation &Timely Submission (10)
(3) (5) (2)
Good Avg. Poor Good Avg. Poor Good Avg. Poor
(3) (2-1) (1-0) (5-4) (3-2) (1-0) (2) (1) (0)
Marks

Experiment No: 7

Implement program to solve problem of making a change using dynamic programming.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …,dn} and if
we want to change for some amount N, many combinations are possible. Suppose {d1, d2, d5, d8},
{d0, d2, d4}, {d0, d5, d7} all are feasible solutions but the solution which selects the minimum
number of coins is considered to be an optimal solution. The aim of making a change is to find a
solution with a minimum number of coins / denominations. Clearly, this is an optimization problem.

General assumption is that infinite coins are available for each denomination. We can select any
denomination any number of times.

Solution steps are as follow:

Sort all the denominations and start scanning from smallest to largest denomination. In every
iteration i, if the current denomination di is acceptable, then 1 coin is added to the solution and the
total amount is reduced by di. Hence,
C[i, j] = 1 + C[i, j – di]

C[i, j] is the minimum number of coins required to make change for the amount j using the first i
denominations. The figure below shows the content of matrix C.

Figure: Content of matrix C

If the current denomination is larger than the current problem size, then we have to skip the
denomination and stick with the previously calculated solution. Hence,
C[i, j] = C[i – 1, j]

If neither of the above cases applies, then we stick with the choice that returns the minimum number
of coins. Mathematically, we formulate the problem as,
C[i, j] = min {C[i – 1, j], 1 + C[i, j – di]}

Steps to solve making change problem are as below:

Algorithm MAKE_A_CHANGE(d,N)
// d[1…n] = [d1,d2,…,dn] is array of n denominations
// C[1…n, 0…N] is n x N array to hold the solution of sub problems
// N is the problem size, i.e. amount for which change is required

for i ← 1 to n do
C[i, 0] ← 0
end
for i ← 1 to n do
for j ← 1 to N do
if i == 1 and j < d[i] then
C[i, j] ← ∞
else if i == 1 then
C[i, j] ← 1 + C[1, j – d[1]]
else if j < d[i] then
C[i, j] ← C[i – 1, j]
else
C[i, j] ← min (C[i – 1, j], 1 + C[i, j – d[i]])
end
end
end
return C[n, N]

Implement above algorithm and print the matrix C. Your program should return the number
of coins required and its denominations.
Code :-
import java.util.Arrays;
public class CoinChange {
public static void makeChange(int[] denominations, int n, int N) {
int[][] C = new int[n + 1][N + 1];
for (int i = 0; i <= n; i++) {
C[i][0] = 0;
}
for (int j = 1; j <= N; j++) {
C[0][j] = Integer.MAX_VALUE;
}
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= N; j++) {
if (denominations[i - 1] > j) {
C[i][j] = C[i - 1][j];
} else {
C[i][j] = Math.min(C[i - 1][j], (C[i][j - denominations[i - 1]] == Integer.MAX_VALUE
? Integer.MAX_VALUE : 1 + C[i][j - denominations[i - 1]]));
}
}
}
if (C[n][N] == Integer.MAX_VALUE) {
System.out.println("No solution exists to make change for the given amount.");
} else {
System.out.println("Minimum coins required: " + C[n][N]);
}

System.out.println("Matrix C:");
for (int i = 0; i <= n; i++) {
for (int j = 0; j <= N; j++) {
if (C[i][j] == Integer.MAX_VALUE)
System.out.print(" INF ");
else
System.out.printf("%3d ", C[i][j]);
}
System.out.println();
}
}

public static void main(String[] args) {


int[] denominations = {1, 3, 4};
int N = 6;
int n = denominations.length;
makeChange(denominations, n, N);
}
}
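The class above reports only the minimum count, while the assignment also asks for the denominations used. One way to recover them is to walk the filled table backwards; the sketch below recomputes the same table and traces it (`traceCoins` is an illustrative helper name, not part of the lab algorithm):

```java
import java.util.ArrayList;
import java.util.List;

public class CoinTrace {
    // Returns one optimal multiset of coins summing to N, or an empty list if impossible.
    static List<Integer> traceCoins(int[] d, int N) {
        int n = d.length;
        int[][] C = new int[n + 1][N + 1];
        for (int j = 1; j <= N; j++) C[0][j] = Integer.MAX_VALUE;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= N; j++)
                C[i][j] = (d[i - 1] > j || C[i][j - d[i - 1]] == Integer.MAX_VALUE)
                        ? C[i - 1][j]
                        : Math.min(C[i - 1][j], 1 + C[i][j - d[i - 1]]);

        List<Integer> coins = new ArrayList<>();
        if (C[n][N] == Integer.MAX_VALUE) return coins; // no solution
        int i = n, j = N;
        while (j > 0) {
            if (i > 1 && C[i][j] == C[i - 1][j]) i--;  // coin d[i-1] not needed here
            else { coins.add(d[i - 1]); j -= d[i - 1]; } // take one coin d[i-1]
        }
        return coins;
    }

    public static void main(String[] args) {
        // For denominations {1, 3, 4} and amount 6 this prints the two-coin solution [3, 3].
        System.out.println(CoinTrace.traceCoins(new int[]{1, 3, 4}, 6));
    }
}
```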
Observations:
Write observation based on whether this algorithm returns optimal answer or not on various inputs.
1. The algorithm correctly computes the minimum number of coins needed to make the specified
amount N from the available denominations. For the given example with denominations of {1, 3, 4}
to make an amount of 6, the optimal solution is 2 coins (3 + 3).
2. The matrix C represents the minimum coins required for each sub-problem, allowing for
efficient lookup and optimization.

Result

Conclusion: The dynamic programming approach to the Making Change problem effectively finds
the optimal solution by breaking the problem into smaller sub-problems and storing intermediate
results. This algorithm not only yields the correct number of coins needed but also demonstrates the
power of dynamic programming in solving optimization problems efficiently.

Quiz:
1. What is the time complexity of above algorithm?

The time complexity of the above algorithm is O(n × N), where n is the number of denominations
and N is the amount for which change is needed. This is because the algorithm fills a matrix of
size n × N.
2. Does above algorithm always return optimal answer?
Yes, the above algorithm always returns an optimal answer for the Making Change problem when
the denominations are non-negative integers. It effectively finds the minimum number of coins
required for the given amount using dynamic programming techniques.
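To see why a DP table is needed at all, compare it with a naive greedy rule (always take the largest coin that fits), which is not optimal for arbitrary denominations. With the example denominations {1, 3, 4} and amount 6, greedy spends 3 coins (4 + 1 + 1) while the DP answer is 2 (3 + 3). A small illustration (hypothetical helper name):

```java
import java.util.Arrays;

public class GreedyVsDp {
    // Greedy rule: repeatedly take as many of the largest remaining coin as fit.
    static int greedyCount(int[] d, int amount) {
        int[] asc = d.clone();
        Arrays.sort(asc); // ascending; we scan it from the end (largest first)
        int count = 0;
        for (int i = asc.length - 1; i >= 0 && amount > 0; i--) {
            count += amount / asc[i];
            amount %= asc[i];
        }
        return amount == 0 ? count : -1; // -1 means greedy could not finish
    }

    public static void main(String[] args) {
        // Greedy uses 3 coins (4 + 1 + 1); the DP algorithm of this experiment finds 2 (3 + 3).
        System.out.println("Greedy coins for 6: " + greedyCount(new int[]{1, 3, 4}, 6));
    }
}
```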

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
3. https://codecrucks.com/making-change-problem-using-dynamic-programming/

References used by the students:


https://codecrucks.com/making-change-problem-using-dynamic-programming/

Rubric wise marks obtained:

Rubrics Understanding of Program Documentation Total


problem Implementation &Timely Submission (10)
(3) (5) (2)
Good Avg. Poor Good Avg. Poor Good Avg. Poor
(3) (2-1) (1-0) (5-4) (3-2) (1-0) (2) (1) (0)
Marks

Experiment No: 8

Implement program of chain matrix multiplication using dynamic programming.

Date:

Competency and Practical Skills:Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Given a sequence of matrices A1, A2,...,An and dimensions p0, p1,...,pn where Ai is of dimension pi−1
× pi, determine the order of multiplication (represented, say, as a binary tree) that minimizes the
number of operations.
This algorithm does not perform the multiplications; it just determines the best order in which to
perform the multiplications

Two matrices are called compatible only if the number of columns in the first matrix and the number
of rows in the second matrix are the same. Matrix multiplication is possible only if they are
compatible. Let A and B be two compatible matrices of dimensions p x q and q x r
Suppose dimension of three matrices are :

A1 = 5 x 4
A2 = 4 x 6
A3 = 6 x 2
Matrix multiplication is associative. So

(A1 A2 ) A3 = {(5 x 4 ) x (4 x 6) } x (6 x 2)
= (5 x 4 x 6) + (5 x 6 x 2)

= 180

A1 (A2 A3) = (5 x 4) x {(4 x 6) x (6 x 2) }


= (5 x 4 x 2) + (4 x 6 x 2)

= 88

The answer of both multiplication sequences would be the same, but the numbers of multiplications
are different. This leads to the question, what order should be selected for a chain of matrices to
minimize the number of multiplications?
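The two operation counts above can be checked with a few lines of arithmetic (a quick verification sketch, not part of the assignment):

```java
public class ChainCostCheck {
    // Scalar multiplications needed to multiply a (p x q) matrix by a (q x r) matrix.
    static int cost(int p, int q, int r) {
        return p * q * r;
    }

    public static void main(String[] args) {
        // A1 = 5x4, A2 = 4x6, A3 = 6x2
        int left = cost(5, 4, 6) + cost(5, 6, 2);  // (A1 A2) A3 -> 180
        int right = cost(4, 6, 2) + cost(5, 4, 2); // A1 (A2 A3) -> 88
        System.out.println("(A1 A2) A3 : " + left);
        System.out.println("A1 (A2 A3) : " + right);
    }
}
```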

Let us denote the number of alternative parenthesizations of a sequence of n matrices by p(n). When
n = 1, there is only one matrix and therefore only one way to parenthesize the matrix. When n ≥ 2,
a fully parenthesized matrix product is the product of two fully parenthesized matrix sub-products,
and the split between the two sub-products may occur between the kth and (k + 1)st matrices for any
k = 1, 2, 3, …, n – 1. Thus we obtain the recurrence

p(n) = 1, if n = 1
p(n) = Σ (k = 1 to n – 1) p(k) · p(n – k), if n ≥ 2

The solution to the recurrence is the sequence of Catalan numbers, which grows as Ω(4^n / n^(3/2)),
and is therefore at least Ω(2^n). Thus, the number of solutions is exponential in n, and a brute-force
attempt is infeasible.

Any parenthesization of the product Ai Ai+1 … Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is, for some value of k, we first compute the matrices
Ai…k and Ak+1…j and then multiply them together to produce the final product Ai…j. The cost of
this parenthesization is the cost of computing Ai…k, plus the cost of computing Ak+1…j, plus the
cost of multiplying them together.

We can define m[i, j] recursively as follows. If i == j, the problem is trivial; the chain consists of
only one matrix Ai…i = Ai. No scalar multiplications are required. Thus m[i, i] = 0 for i = 1, 2, …, n.
To compute m[i, j] when i < j, we take advantage of the structure of an optimal solution of the first
step. Let us assume that the optimal parenthesization splits the product Ai Ai+1 … Aj between Ak and
Ak+1, where i ≤ k < j. Then m[i, j] is equal to the minimum cost for computing the subproducts
Ai…k and Ak+1…j plus the cost of multiplying these two matrices together.

m[i, j] = 0, if i = j
m[i, j] = min over i ≤ k < j of {m[i, k] + m[k + 1, j] + d[i – 1] · d[k] · d[j]}, if i < j

where d = {d0, d1, d2, …, dn} is the vector of matrix dimensions.


m[i, j] = Least number of multiplications required to multiply matrix sequence Ai….Aj .

Steps to solve chain matrix multiplication problem are as below:


Algorithm MATRIX_CHAIN_ORDER(p)
// p is sequence of n matrices
n ← length(p) - 1
for i ← 1 to n do
m[i,i] ← 0
end
for l ← 2 to n do
for i ← 1 to n – l + 1 do
j←i+l-1
m[i, j] ← ∞
for k ← i to j – 1 do
q ← m[i, k] + m[k + 1, j] + d[i – 1] * d[k] * d[j]
if q < m[i, j] then
m[i,j ] ← q
s[i, j] ← k
end

end
end
end
return m and s

Implement above algorithm and print the matrices m and s.


Code :-
import java.util.Arrays;
public class MatrixChainMultiplication {

public static void matrixChainOrder(int[] p, int n, int[][] m, int[][] s) {


for (int i = 1; i <= n; i++) {
m[i][i] = 0;
}
for (int l = 2; l <= n; l++) {
for (int i = 1; i <= n - l + 1; i++) {
int j = i + l - 1;
m[i][j] = Integer.MAX_VALUE;
for (int k = i; k < j; k++) {
int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
if (q < m[i][j]) {
m[i][j] = q;
s[i][j] = k;
}
}
}
}
}

public static void printOptimalParenthesization(int[][] s, int i, int j) {


if (i == j) {
System.out.print("A" + i);
} else {
System.out.print("(");
printOptimalParenthesization(s, i, s[i][j]);
printOptimalParenthesization(s, s[i][j] + 1, j);
System.out.print(")");
}
}

public static void main(String[] args) {


int[] p = {5, 4, 6, 2};
int n = p.length - 1;

int[][] m = new int[n + 1][n + 1];


int[][] s = new int[n + 1][n + 1];

matrixChainOrder(p, n, m, s);
System.out.println("Minimum number of multiplications: " + m[1][n]);
System.out.print("Optimal Parenthesization: ");
printOptimalParenthesization(s, 1, n);
System.out.println();
System.out.println("Matrix m:");

for (int i = 1; i <= n; i++) {
for (int j = 1; j <= n; j++) {
System.out.printf("%4d ", m[i][j]);
}
System.out.println();
}
}
}

Result

Conclusion: The dynamic programming approach to the Chain Matrix Multiplication problem
effectively determines the optimal order of multiplication, resulting in a significant reduction in the
number of multiplications. This implementation showcases the power of dynamic programming in
solving optimization problems.

Quiz:
1. What is the time complexity of above algorithm?
The time complexity of the above algorithm is O(n³), where n is the number of matrices. This is
due to the three nested loops: one for the chain length, one for the starting matrix, and one for the
split point.

2. Does above algorithm always return optimal answer?


Yes, the above algorithm always returns an optimal answer for the Chain Matrix Multiplication
problem. It systematically explores all possible parenthesizations and computes the minimum cost
for each possible order, ensuring that the solution is optimal.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:
“Fundamentals of Algorithms” by E. Horowitz et al.

Rubric wise marks obtained:

Rubrics Understanding of Program Documentation Total


problem Implementation &Timely Submission (10)
(3) (5) (2)
Good Avg. Poor Good Avg. Poor Good Avg. Poor
(3) (2-1) (1-0) (5-4) (3-2) (1-0) (2) (1) (0)
Marks

Experiment No: 9
Implement program to solve LCS (Longest Common Subsequence) problem using dynamic
programing.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

The Longest Common Subsequence (LCS) problem is a classic computer science problem that
involves finding the longest subsequence that is common to two given sequences.

A subsequence is a sequence that can be derived from another sequence by deleting some or no
elements without changing the order of the remaining elements. For example, given the sequence
"ABCDE", "ACE" is a subsequence of "ABCDE", but "AEC" is not a subsequence.

Given two sequences X and Y, the LCS problem involves finding the longest common subsequence
(LCS) of X and Y. The LCS need not be contiguous in the original sequences, but it must be in the
same order. For example, given the sequences "ABCDGH" and "AEDFHR", the LCS is "ADH"
with length 3.

Naïve Method:

Let X be a sequence of length m and Y a sequence of length n. Check for every subsequence
of X whether it is a subsequence of Y, and return the longest common subsequence found. There
are 2^m subsequences of X. Testing whether a sequence is a subsequence of Y takes O(n)
time. Thus, the naïve algorithm would take O(n · 2^m) time.

longest common subsequence (LCS) using Dynamic Programming:


Let X = <x1, x2, x3, …, xm> and Y = <y1, y2, y3, …, yn> be the sequences. To compute the length
of the LCS the following algorithm is used.
Step 1 − Construct an empty adjacency table with the size, n × m, where n = size of sequence X and
m = size of sequence Y. The rows in the table represent the elements in sequence X and columns
represent the elements in sequence Y.
Step 2 − The zeroth rows and columns must be filled with zeroes. And the remaining values are
filled in based on different cases, by maintaining a counter value.
• Case 1 − If the counter encounters common element in both X and Y sequences, increment
the counter by 1.
• Case 2 − If the counter does not encounter common elements in X and Y sequences at T[i,
j], find the maximum value between T[i-1, j] and T[i, j-1] to fill it in T[i, j].

Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here is
done by tracing the path where the counter incremented first.
Step 4 − The longest common subsequence is obtained by noting the elements in the traced path.
Consider the example: we have two strings X = BDCB and Y = BACDB, and we want to find the
longest common subsequence. The following table shows the construction of the LCS table.

Once the values are filled, the path is traced back from the last value in the table at T[5, 4].

Algorithm is as below:

Algorithm: LCS-Length-Table-Formulation (X, Y)


m := length(X)
n := length(Y)
for i = 1 to m do
C[i, 0] := 0
for j = 1 to n do
C[0, j] := 0
for i = 1 to m do
for j = 1 to n do
if xi = yj
C[i, j] := C[i - 1, j - 1] + 1
B[i, j] := ‘D’
else
if C[i - 1, j] ≥ C[i, j - 1]
C[i, j] := C[i - 1, j]
B[i, j] := ‘U’
else
C[i, j] := C[i, j - 1]

B[i, j] := ‘L’
return C and B

Algorithm: Print-LCS (B, X, i, j)


if i = 0 or j = 0
return
if B[i, j] = ‘D’
Print-LCS(B, X, i-1, j-1)
Print(xi)
else if B[i, j] = ‘U’
Print-LCS(B, X, i-1, j)
else
Print-LCS(B, X, i, j-1)
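A possible Java implementation of the two algorithms above (a sketch that follows the pseudocode, using a direction table B of 'D'/'U'/'L' marks; the example strings are the ones from the theory section):

```java
public class LCS {
    static int[][] C;   // C[i][j]: LCS length of X[0..i) and Y[0..j)
    static char[][] B;  // direction table: 'D' diagonal, 'U' up, 'L' left

    // Fill the length table C and the direction table B.
    static void lcsLength(String x, String y) {
        int m = x.length(), n = y.length();
        C = new int[m + 1][n + 1];
        B = new char[m + 1][n + 1];
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (x.charAt(i - 1) == y.charAt(j - 1)) {
                    C[i][j] = C[i - 1][j - 1] + 1;
                    B[i][j] = 'D';
                } else if (C[i - 1][j] >= C[i][j - 1]) {
                    C[i][j] = C[i - 1][j];
                    B[i][j] = 'U';
                } else {
                    C[i][j] = C[i][j - 1];
                    B[i][j] = 'L';
                }
            }
        }
    }

    // Backtrack through B to rebuild the subsequence itself.
    static String printLCS(String x, int i, int j) {
        if (i == 0 || j == 0) return "";
        if (B[i][j] == 'D') return printLCS(x, i - 1, j - 1) + x.charAt(i - 1);
        if (B[i][j] == 'U') return printLCS(x, i - 1, j);
        return printLCS(x, i, j - 1);
    }

    public static void main(String[] args) {
        String x = "ABCDGH", y = "AEDFHR";
        lcsLength(x, y);
        System.out.println("Length of LCS: " + C[x.length()][y.length()]); // 3
        System.out.println("Longest Common Subsequence: " + printLCS(x, x.length(), y.length())); // ADH
    }
}
```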

Observations:
1. The algorithm constructs a 2D table C where each cell C[i][j] represents the length of the
longest common subsequence of the first i characters of sequence X and the first j characters of
sequence Y.
2. The algorithm fills the table based on whether characters match and tracks the direction from
which each value was derived, allowing the reconstruction of the LCS.
3. The output shows both the length of the LCS and the LCS itself, confirming the algorithm's
effectiveness in finding the optimal solution.

Result
Length of LCS: 3
Longest Common Subsequence: ADH
LCS Table (C):
0 0 0 0 0 0 0
0 0 0 0 1 1 1
0 1 1 1 1 1 1
0 1 1 2 2 2 2
0 1 1 2 2 2 3
0 1 1 2 2 3 3
0 1 1 2 2 3 3

Conclusion: The dynamic programming approach to solving the Longest Common Subsequence
(LCS) problem efficiently finds the longest subsequence shared between two sequences. The
implementation clearly demonstrates how to construct the solution step-by-step while also allowing
for the backtracking necessary to find the actual subsequence.

Quiz:
1. What is the time complexity of above algorithm?
The time complexity of the above algorithm is O(m × n), where m and n are the lengths of the two
sequences X and Y. This complexity arises from the nested loops used to fill the LCS table.

2. Does above algorithm always return optimal answer?


Yes, the above algorithm always returns an optimal answer for the Longest Common Subsequence
problem. It systematically constructs the LCS table based on matching characters, ensuring that the
longest subsequence is found.

3. Does Dynamic programming approach to find LCS perform well compare to naïve approach?
Yes, the dynamic programming approach performs significantly better than the naïve approach. The
naïve method has a time complexity of O(n · 2^m), making it impractical for larger sequences,
whereas dynamic programming efficiently solves the problem in polynomial time, making it
suitable for longer sequences.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:


“Fundamentals of Algorithms” by E. Horowitz et al.

Rubric wise marks obtained:

Rubrics Understanding of Program Documentation Total


problem Implementation &Timely Submission (10)
(3) (5) (2)
Good Avg. Poor Good Avg. Poor Good Avg. Poor
(3) (2-1) (1-0) (5-4) (3-2) (1-0) (2) (1) (0)
Marks

Experiment No: 10
Implement program to solve Knapsack problem using dynamic programming.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Knapsack problem is as stated below:


Given a set of items, each having different weight and value or profit associated with it. Find the
set of items such that the total weight is less than or equal to a capacity of the knapsack and the total
value earned is as large as possible.

The knapsack problem is useful in solving resource allocation problem. Let X = < x1, x2, x3, . . . . .
, xn> be the set of n items. Sets W = <w1, w2, w3, . . . , wn> and V = < v1, v2, v3, . . . , vn> are weight
and value associated with each item in X. Knapsack capacity is M unit.
The knapsack problem is to find the set of items which maximizes the profit such that collective
weight of selected items does not cross the knapsack capacity. Select items from X and fill the
knapsack such that it would maximize the profit.
Knapsack problem has two variations. 0/1 knapsack, that does not allow breaking of items. Either
add an entire item or reject it. It is also known as a binary knapsack. Fractional knapsack allows
breaking of items. Profit will be earned proportionally.
Following are the steps to implement binary knapsack using dynamic programming.

Algorithm DP_BINARY_KNAPSACK (V, W, M)


// Description: Solve binary knapsack problem using dynamic programming
// Input: Set of items X, set of weight W, profit of items V and knapsack capacity M
// Output: Array V, which holds the solution of problem
for i ← 1 to n do
V[i, 0] ← 0
end
for i ← 1 to M do
V[0, i] ← 0
end
for i ← 1 to n do
for j ← 0 to M do
if w[i] ≤ j then
V[i, j] ← max{V[i-1, j], v[i] + V[i – 1, j – w[i]]}
else
V[i, j] ← V[i – 1, j] // w[i]>j
end
end
end

The above algorithm will just tell us the maximum value we can earn with dynamic programming.
It does not speak anything about which items should be selected. We can find the items that give
optimum result using the following algorithm.

Algorithm TRACE_KNAPSACK(w, v, M)
// w is array of weight of n items
// v is array of value of n items
// M is the knapsack capacity
SW ← { }
SP ← { }
i←n
j←M
while ( j> 0 ) do
if (V[i, j] == V[i – 1, j]) then
i←i–1
else
V[i, j] ← V[i, j] – v[i]
j ← j – w[i]
SW ← SW + w[i]
SP ← SP + v[i]
end
end
Implement the above algorithms for the solution of binary knapsack problem.

Code:-
public class Knapsack {
public static void knapsack(int n, int W, int[] weights, int[] values) {
int[][] V = new int[n + 1][W + 1];
for (int i = 0; i <= n; i++) {
for (int w = 0; w <= W; w++) {
if (i == 0 || w == 0) {
V[i][w] = 0;
} else if (weights[i - 1] <= w) {
V[i][w] = Math.max(values[i - 1] + V[i - 1][w - weights[i - 1]], V[i - 1][w]);
} else {
V[i][w] = V[i - 1][w];
}
}
}
System.out.println("Maximum value: " + V[n][W]);
System.out.println("Selected items (index, weight, value):");
int w = W;
for (int i = n; i > 0 && w > 0; i--) {
if (V[i][w] != V[i - 1][w]) {
System.out.println("Item " + i + ": Weight = " + weights[i - 1] + ", Value = " + values[i
- 1]);
w -= weights[i - 1];
}
}
}

public static void main(String[] args) {


int n = 4;
int W = 7;
int[] weights = {1, 3, 4, 5};
int[] values = {1, 4, 5, 7};

knapsack(n, W, weights, values);


}
}
Observations:
1. The program builds a 2D dynamic programming table V[i][j], where i represents the number of
items considered and j represents the weight capacity of the knapsack.
2. By comparing the inclusion and exclusion of each item, the algorithm computes the maximum
value that can be obtained without exceeding the knapsack's capacity.
3. The algorithm also allows tracing back the selected items by checking where changes in the DP
table values occurred.

Result

Conclusion:
The dynamic programming approach to solving the 0/1 knapsack problem efficiently computes the
maximum possible value while staying within the given weight limit. The program also helps in
identifying the set of items that achieve this optimal result.

Quiz:
1. What is the time complexity of above binary knapsack algorithm?
The time complexity of the binary knapsack algorithm using dynamic programming is O(n × W),
where n is the number of items and W is the capacity of the knapsack. This complexity arises from
the two nested loops used to fill the DP table.

2. Does above algorithm always return optimal answer?


Yes, the dynamic programming approach for the binary knapsack problem always returns an
optimal answer. It systematically explores all possible combinations of items by filling the DP table,
ensuring the optimal value is found for the given constraints.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:


Rubric wise marks obtained:

Rubrics Understanding of Program Documentation Total


problem Implementation &Timely Submission (10)
(3) (5) (2)
Good Avg. Poor Good Avg. Poor Good Avg. Poor
(3) (2-1) (1-0) (5-4) (3-2) (1-0) (2) (1) (0)
Marks

Experiment No: 11
Implement program for solution of fractional Knapsack problem using greedy design technique.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand greedy algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:
Knapsack problem is as stated below:
Given a set of items, each having different weight and value or profit associated with it. Find the
set of items such that the total weight is less than or equal to a capacity of the knapsack and the total
value earned is as large as possible.
Brute-force approach: The brute-force approach tries all the possible solutions with all the different
fractions but it is a time-consuming approach.
Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and accordingly, we
will select the item. The item with the highest ratio would be selected first.
Following are the steps to implement fractional knapsack using greedy design strategy.

1. Compute the value-to-weight ratio for each item in the knapsack.


2. Sort the items in decreasing order of value-to-weight ratio.
3. Initialize the total weight and total value to 0.
4. For each item in the sorted list:
a. If the entire item can be added to the knapsack without exceeding the weight
capacity, add it and update the total weight and total value.
b. If the item cannot be added entirely, add a fraction of the item that fits into the
knapsack and update the total weight and total value accordingly.
c. If the knapsack is full, stop the algorithm.
5. Return the total value and the set of items in the knapsack.

Implement the program based on above logic for the solution of fractional knapsack problem.

Code :-
import java.util.Arrays;
import java.util.Comparator;

class Item {
int value, weight;
public Item(int value, int weight) {
this.value = value;
this.weight = weight;
}
}

public class FractionalKnapsack {


public static void sortItemsByRatio(Item[] items) {
Arrays.sort(items, new Comparator<Item>() {
public int compare(Item o1, Item o2) {
double ratio1 = (double) o1.value / o1.weight;
double ratio2 = (double) o2.value / o2.weight;
if (ratio1 > ratio2) {
return -1;
} else if (ratio1 < ratio2) {
return 1;
}
return 0;
}
});
}
public static double fractionalKnapsack(Item[] items, int n, int W) {
sortItemsByRatio(items);
int currentWeight = 0;
double finalValue = 0.0;
for (int i = 0; i < n; i++) {
if (currentWeight + items[i].weight <= W) {
currentWeight += items[i].weight;
finalValue += items[i].value;
} else {
int remainingWeight = W - currentWeight;
finalValue += items[i].value * ((double) remainingWeight / items[i].weight);
break;
}
}
return finalValue;
}

public static void main(String[] args) {


int n = 3;
int W = 50;
Item[] items = { new Item(60, 10), new Item(100, 20), new Item(120, 30) };
double maxValue = fractionalKnapsack(items, n, W);
System.out.printf("Maximum value in the knapsack = %.2f\n", maxValue);
}
}


Observations:
1. The greedy approach efficiently selects items based on the value-to-weight ratio, ensuring that
the maximum value is obtained.
2. The program gives optimal results for various inputs.
3. This approach only works optimally for the fractional knapsack problem but not for the 0/1
knapsack problem.

Result

Conclusion: The fractional knapsack problem can be solved optimally using the greedy approach.
By selecting items with the highest value-to-weight ratio, the algorithm ensures that the maximum
value is achieved within the knapsack's weight limit. The algorithm performs efficiently and returns
the optimal solution.

Quiz:
1. What is the time complexity of above knapsack algorithm?
The time complexity of the fractional knapsack algorithm is O(n log n) due to the sorting step,
where n is the number of items. The rest of the algorithm runs in linear time O(n).

2. Does above algorithm always return optimal answer?


Yes, the above algorithm always returns an optimal answer for the fractional knapsack problem
because selecting items based on the highest value-to-weight ratio guarantees the maximum
possible value.

3. What is the time complexity solving knapsack problem using brute-force method?
The time complexity of solving the knapsack problem using the brute-force method is exponential, i.e.,
O(2^n), where n is the number of items. This is because the brute-force approach examines all
possible subsets of the items.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:


Rubric wise marks obtained:

Rubrics: Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale: Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0)
Marks:

Experiment No: 12
Implement program for solution of Making Change problem using greedy design technique.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand greedy algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …, dn} and if
we want to change for some amount N, many combinations are possible. {d1, d2, d5, d8}, {d0, d2,
d4}, {d0, d5, d7} can be considered as all feasible solutions if sum of their denomination is N. The
aim of making a change is to find a solution with a minimum number of coins / denominations.
Following are the steps to solve coin change problem using greedy design technique

1. Initialize a list of coin denominations in descending order.


2. Initialize a list of coin counts, where each count is initially 0.
3. While the remaining amount is greater than 0:
a. For each coin denomination in the list:
i. If the denomination is less than or equal to the remaining amount, add one
coin to the count and subtract the denomination from the remaining
amount.
ii. If the denomination is greater than the remaining amount, move on to the
next denomination.
4. Return the list of coin counts.

Implement the program based on the above steps for the solution of the making change problem.

Code :-
public class CoinChange {
public static void findMinCoins(int[] denominations, int amount) {
int[] coinCount = new int[denominations.length];
for (int i = 0; i < denominations.length - 1; i++) {
for (int j = 0; j < denominations.length - i - 1; j++) {
if (denominations[j] < denominations[j + 1]) {
int temp = denominations[j];
denominations[j] = denominations[j + 1];
denominations[j + 1] = temp;
}
}
}
for (int i = 0; i < denominations.length; i++) {
while (amount >= denominations[i]) {
amount -= denominations[i];
coinCount[i]++;
}
}
System.out.println("Minimum number of coins needed:");
for (int i = 0; i < denominations.length; i++) {
if (coinCount[i] != 0) {
System.out.printf("%d coin(s) of denomination %d\n", coinCount[i], denominations[i]);
}
}
}
public static void main(String[] args) {
int[] denominations = {25, 10, 5, 1};
int amount = 87;
findMinCoins(denominations, amount);
}
}
Observations:
The greedy approach works optimally when the denominations follow a pattern like U.S. coins,
where larger denominations can be broken down into smaller ones without losing efficiency. For
certain sets of denominations, however, the greedy algorithm may not give the optimal solution.

Result

Conclusion: The greedy algorithm efficiently solves the making change problem for typical coin systems
like U.S. currency. However, it may not always produce the optimal solution for arbitrary sets of
denominations.

Quiz:
1. What is the time complexity of the above making change algorithm?
The greedy selection runs in O(n) time, where n is the number of coin denominations, if each inner
while loop is replaced by a single division (amount / denomination). The bubble sort used in the
program adds O(n²) for sorting the denominations.

2. Does above algorithm always return optimal answer?


No, the greedy algorithm does not always return an optimal answer. It works for certain sets of
denominations (such as U.S. coins) but may fail for arbitrary denominations. For example, with
denominations {1, 3, 4}, the greedy approach does not yield the optimal solution for making change
for 6.
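That counterexample is easy to verify in code; a small standalone sketch (a hypothetical helper, applying the same greedy rule as the program above):

```java
public class GreedyCounterexample {
    // Greedy rule: repeatedly take the largest coin that still fits.
    // descDenoms must be sorted in descending order.
    static int greedyCoins(int[] descDenoms, int amount) {
        int coins = 0;
        for (int d : descDenoms) {
            coins += amount / d;   // take as many of this coin as possible
            amount %= d;
        }
        return coins;
    }

    public static void main(String[] args) {
        int[] denoms = {4, 3, 1};
        // Greedy picks 4 + 1 + 1 = 3 coins, but 3 + 3 needs only 2 coins.
        System.out.println("Greedy uses " + greedyCoins(denoms, 6)
                + " coins; the optimal answer is 2");
    }
}
```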

3.What are some variations of the Making Change problem?
Unbounded Coin Change Problem: You can use an unlimited number of each denomination.
Bounded Coin Change Problem: Each denomination has a limited supply.
Exact Change Problem: Find the exact combination of coins to make the given amount without
exceeding it.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:


"Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and
Clifford Stein

Rubric wise marks obtained:

Rubrics: Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale: Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0)
Marks:

Experiment No: 13
Implement program for Kruskal's algorithm to find minimum spanning tree.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3, CO6

Objectives: (a) Understand how to use Kruskal's algorithm to find the minimum spanning tree.
(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other words,
an MST is a subset of the edges of the graph that form a tree and have the smallest sum of weights.

1. Sort all the edges in non-decreasing order of their weight.


2. Initialize an empty set of edges for the minimum spanning tree.
3. For each edge in the sorted order, add the edge to the minimum spanning tree if it does not
create a cycle in the tree. To check if adding the edge creates a cycle, you can use the Union-
Find algorithm or a similar method to keep track of the connected components of the graph.
4. Continue adding edges until there are V-1 edges in the minimum spanning tree, where V is the
number of vertices in the graph.
5. Return the set of edges in the minimum spanning tree.
6. Implement the program based on the above steps to find the minimum spanning tree using Kruskal's algorithm.

Kruskal’s Algorithm is as follow:


import java.util.Arrays;
class Edge implements Comparable<Edge> {
int source, destination, weight;

public Edge(int source, int destination, int weight) {


this.source = source;
this.destination = destination;
this.weight = weight;
}
public int compareTo(Edge other) {
return this.weight - other.weight;
}
}
class Graph {
int vertices;
Edge[] edges;

public Graph(int vertices, int edgesCount) {
this.vertices = vertices;
edges = new Edge[edgesCount];
}
private int find(int[] parent, int vertex) {
if (parent[vertex] == -1) {
return vertex;
}
return find(parent, parent[vertex]);
}
private void union(int[] parent, int x, int y) {
int xroot = find(parent, x);
int yroot = find(parent, y);
parent[xroot] = yroot;
}
public void kruskalMST() {
Edge[] result = new Edge[vertices - 1];
int[] parent = new int[vertices];
Arrays.fill(parent, -1);
Arrays.sort(edges);

int e = 0;
int i = 0;
while (e < vertices - 1) {
Edge nextEdge = edges[i++];
int x = find(parent, nextEdge.source);
int y = find(parent, nextEdge.destination);
if (x != y) {
result[e++] = nextEdge;
union(parent, x, y);
}
if (i >= edges.length) {
break;
}
}
System.out.println("Minimum Spanning Tree:");
int totalWeight = 0;
for (Edge edge : result) {
if (edge != null) {
System.out.println(edge.source + " -- " + edge.destination + " == " + edge.weight);
totalWeight += edge.weight;
}
}
System.out.println("Total weight of MST: " + totalWeight);
}
}

public class KruskalAlgorithm {
public static void main(String[] args) {
int vertices = 4;
int edgesCount = 5;
Graph graph = new Graph(vertices, edgesCount);
graph.edges[0] = new Edge(0, 1, 10);
graph.edges[1] = new Edge(0, 2, 6);
graph.edges[2] = new Edge(0, 3, 5);
graph.edges[3] = new Edge(1, 3, 15);
graph.edges[4] = new Edge(2, 3, 4);
graph.kruskalMST();
}

}
Observations:
Kruskal's algorithm returned the minimum spanning tree on the inputs tested. For a connected,
weighted graph it always produces an MST; when edge weights are tied, different but equally
minimal trees may be reported.

Result

Conclusion: Kruskal's Algorithm is an efficient method for finding the Minimum Spanning Tree
(MST) of a connected, weighted graph. It operates by considering edges in order of their weight
and progressively adding them to the MST, provided they do not form a cycle.

Quiz:
1. What is the time complexity of Kruskal's algorithm?
O(E log E), where E is the number of edges; since E ≤ V², this is equivalent to O(E log V).

2. Does the above Kruskal's algorithm always return an optimal answer?


Yes, Kruskal's Algorithm always returns an optimal Minimum Spanning Tree for a connected, weighted,
undirected graph due to its greedy approach and adherence to the cut property.

3. What data structure is typically used to keep track of the connected components in Kruskal's
algorithm?
The data structure typically used to track connected components in Kruskal's Algorithm is the
Union-Find data structure.
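The plain recursive find used in the program above can degrade to O(V) per call on a long chain of parents; path compression and union by rank bring the amortized cost per operation to near-constant. A minimal standalone sketch (not the lab's exact class):

```java
public class DisjointSet {
    private final int[] parent, rank;

    public DisjointSet(int n) {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i; // each vertex starts alone
    }

    public int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]); // path compression
        return parent[x];
    }

    // Returns false when x and y are already connected (adding the
    // edge would form a cycle), which is exactly Kruskal's test.
    public boolean union(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        if (rank[rx] < rank[ry]) { int t = rx; rx = ry; ry = t; }
        parent[ry] = rx;                 // attach smaller tree under larger
        if (rank[rx] == rank[ry]) rank[rx]++;
        return true;
    }

    public static void main(String[] args) {
        DisjointSet ds = new DisjointSet(4);
        System.out.println(ds.union(0, 1)); // true: components merged
        System.out.println(ds.union(1, 0)); // false: cycle detected
    }
}
```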

4. When does Kruskal's algorithm stop adding edges to the minimum spanning tree?
Kruskal's Algorithm stops adding edges to the minimum spanning tree when it has added V − 1
edges, where V is the number of vertices in the graph.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:

Rubric wise marks obtained:

Rubrics: Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale: Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0)
Marks:

Experiment No: 14
Implement program for Prim's algorithm to find minimum spanning tree.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3, CO6

Objectives: (a) Understand how to use Prim's algorithm to find the minimum spanning tree.
(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other words,
an MST is a subset of the edges of the graph that form a tree and have the smallest sum of weights.

Prim’s Algorithm is as follow:


import java.util.Arrays;

public class PrimsAlgorithm {


static int minKey(int key[], boolean mstSet[], int V) {
int min = Integer.MAX_VALUE, minIndex = -1;

for (int v = 0; v < V; v++) {


if (!mstSet[v] && key[v] < min) {
min = key[v];
minIndex = v;
}
}
return minIndex;
}
static void primMST(int graph[][], int V) {
int parent[] = new int[V];
int key[] = new int[V];
boolean mstSet[] = new boolean[V];
Arrays.fill(key, Integer.MAX_VALUE);
Arrays.fill(mstSet, false);
key[0] = 0;
parent[0] = -1;
for (int count = 0; count < V - 1; count++) {
int u = minKey(key, mstSet, V);
mstSet[u] = true;
for (int v = 0; v < V; v++) {
if (graph[u][v] != 0 && !mstSet[v] && graph[u][v] < key[v]) {
parent[v] = u;
key[v] = graph[u][v];
}
}
}
System.out.println("Edge \tWeight");
for (int i = 1; i < V; i++) {
System.out.println(parent[i] + " - " + i + "\t" + graph[i][parent[i]]);
}
}

public static void main(String[] args) {


int graph[][] = new int[][] {
{0, 2, 0, 6, 0},
{2, 0, 3, 8, 5},
{0, 3, 0, 0, 7},
{6, 8, 0, 0, 9},
{0, 5, 7, 9, 0}
};
int V = graph.length;
primMST(graph, V);
}
}

Observations:
• Prim's algorithm efficiently finds the MST for a given connected graph. It grows the tree from a
start vertex, always adding the cheapest edge that connects a new vertex.
• The key and mstSet arrays are critical for efficiently tracking the cheapest known connection of
each vertex to the growing tree.

Result

Conclusion: Prim's algorithm is a greedy approach that efficiently finds the minimum spanning tree of
a graph by growing the tree one vertex at a time, always choosing the minimum-weight edge crossing
the cut. The adjacency-matrix version suits dense graphs and guarantees an optimal solution.

Quiz:
1. What is the time complexity of Prim’s algorithm?
The adjacency-matrix implementation above runs in O(V²), where V is the number of vertices,
because minKey scans all vertices on each of the V − 1 iterations. With a binary heap and
adjacency lists, Prim's algorithm runs in O(E log V), where E is the number of edges.
2. Does above Prim’s algorithm always return optimal answer?
Yes, Prim's algorithm always returns an optimal minimum spanning tree for a connected,
undirected, weighted graph, by the cut property of MSTs.

3. When does Prim's algorithm stop adding edges to the minimum spanning tree?
Prim's algorithm stops once every vertex has been included in the tree, i.e., after V − 1 edges
have been added, where V is the number of vertices.
4. What data structure is typically used to keep track of the vertices in Prim's algorithm?
A priority queue (min-heap) keyed on the cheapest edge to each vertex is typically used; the
simple implementation above instead scans a key array together with the boolean mstSet array.
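For sparse graphs, the linear scan over the key array can be replaced by a min-heap of candidate edges, giving O(E log V). A minimal sketch using java.util.PriorityQueue on the same 5-vertex graph as the program above (an illustrative variant, not the lab's required code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class PrimWithHeap {
    // adj.get(u) holds {v, w} pairs; assumes a connected undirected graph.
    static int mstWeight(List<List<int[]>> adj) {
        int n = adj.size();
        boolean[] inMst = new boolean[n];
        // Heap of {weight, vertex} pairs, smallest weight first.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        pq.add(new int[]{0, 0});          // start from vertex 0 at cost 0
        int total = 0;
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            if (inMst[top[1]]) continue;  // stale entry: vertex already taken
            inMst[top[1]] = true;
            total += top[0];
            for (int[] e : adj.get(top[1]))
                if (!inMst[e[0]]) pq.add(new int[]{e[1], e[0]});
        }
        return total;
    }

    public static void main(String[] args) {
        // Same 5-vertex graph as the adjacency-matrix program above.
        int[][] edges = {{0,1,2},{0,3,6},{1,2,3},{1,3,8},{1,4,5},{2,4,7},{3,4,9}};
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(new int[]{e[1], e[2]});
            adj.get(e[1]).add(new int[]{e[0], e[2]});
        }
        System.out.println("Total MST weight: " + mstWeight(adj)); // 2+3+5+6 = 16
    }
}
```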

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:

Rubric wise marks obtained:

Rubrics: Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale: Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0)
Marks:

Experiment No: 15
Implement DFS and BFS graph traversal techniques and write its time complexities.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO6

Objectives: (a) Understand Graph traversal techniques.


(b) Visit all nodes of the graph.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Depth First Search is a graph traversal algorithm that explores as far as possible along each branch
before backtracking. It is used to search for a node or a path in a graph, and is implemented
recursively or iteratively.
The algorithm starts at a specified node and visits all the nodes in the graph by exploring each
branch as far as possible before backtracking to the previous node. When a node is visited, it is
marked as visited to prevent loops.

Consider the following steps for the implementation of DFS algorithm:


1. Create an empty stack and push the starting node onto it.
2. Mark the starting node as visited.
3. While the stack is not empty, pop a node from the stack and process it.
4. For each adjacent node to the current node, if the adjacent node has not been visited, mark it as
visited and push it onto the stack.
5. After processing all the adjacent nodes, you can do something with the current node, such as
printing it or storing it.
6. Repeat steps 3 to 5 until the stack is empty.

Consider the following steps for the implementation of BFS algorithm:

1. Create a queue Q and a set visited.


2. Add the starting node to the queue Q and mark it as visited.
3. While the queue is not empty:
a. Dequeue a node from the queue Q and process it.
b. For each adjacent node of the dequeued node:
i. If the adjacent node has not been visited, mark it as visited and enqueue it
into the queue Q.
Code :-
import java.util.LinkedList;
import java.util.Stack;

class Graph {
private int vertices;
private int[][] adjMatrix;

public Graph(int vertices) {


this.vertices = vertices;
adjMatrix = new int[vertices][vertices];
}
public void addEdge(int src, int dest) {
adjMatrix[src][dest] = 1;
adjMatrix[dest][src] = 1;
}
public void DFS(int startVertex) {
boolean[] visited = new boolean[vertices];
Stack<Integer> stack = new Stack<>();
stack.push(startVertex);
visited[startVertex] = true;

System.out.println("DFS traversal starting from vertex " + startVertex + ":");


while (!stack.isEmpty()) {
int currentVertex = stack.pop();
System.out.println("Visited " + currentVertex);

for (int i = 0; i < vertices; i++) {


if (adjMatrix[currentVertex][i] == 1 && !visited[i]) {
stack.push(i);
visited[i] = true;
}
}
}
}
public void BFS(int startVertex) {
boolean[] visited = new boolean[vertices];
LinkedList<Integer> queue = new LinkedList<>();
queue.add(startVertex);
visited[startVertex] = true;

System.out.println("BFS traversal starting from vertex " + startVertex + ":");


while (!queue.isEmpty()) {
int currentVertex = queue.poll();
System.out.println("Visited " + currentVertex);

for (int i = 0; i < vertices; i++) {


if (adjMatrix[currentVertex][i] == 1 && !visited[i]) {
queue.add(i);
visited[i] = true;
}
}
}
}
}
public class GraphTraversal {
public static void main(String[] args) {
int vertices = 5;
Graph graph = new Graph(vertices);
graph.addEdge(0, 1);
graph.addEdge(0, 4);
graph.addEdge(1, 2);
graph.addEdge(1, 3);
graph.addEdge(1, 4);
graph.addEdge(2, 3);
graph.addEdge(3, 4);
graph.DFS(0);
graph.BFS(0);
}
}
Observations:
1. In DFS, the traversal goes deep into the graph by visiting as far as possible from the starting
node before backtracking.
2. In BFS, the traversal explores nodes level by level from the starting node.

Result

Conclusion: DFS and BFS are both essential traversal techniques for exploring graphs. DFS explores as
deep as possible before backtracking, while BFS explores level by level. These traversal techniques have
different applications depending on the problem's requirements.

Quiz:
1. What data structure is typically used in the iterative implementation of DFS and BFS?
DFS typically uses a stack, and BFS uses a queue.
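DFS can also be written recursively, with the call stack taking the place of the explicit stack; a minimal sketch on the same 5-vertex graph used in the program above:

```java
public class RecursiveDFS {
    // Adjacency matrix as in the program above; the call stack replaces
    // the explicit Stack used in the iterative version.
    static void dfs(int[][] adj, boolean[] visited, int v, StringBuilder order) {
        visited[v] = true;
        order.append(v).append(' ');
        for (int i = 0; i < adj.length; i++)
            if (adj[v][i] == 1 && !visited[i])
                dfs(adj, visited, i, order);
    }

    public static void main(String[] args) {
        int[][] adj = new int[5][5];
        int[][] edges = {{0,1},{0,4},{1,2},{1,3},{1,4},{2,3},{3,4}};
        for (int[] e : edges) { adj[e[0]][e[1]] = 1; adj[e[1]][e[0]] = 1; }
        StringBuilder order = new StringBuilder();
        dfs(adj, new boolean[5], 0, order);
        System.out.println("DFS order: " + order.toString().trim()); // 0 1 2 3 4
    }
}
```

Note the visiting order differs from the iterative version, since the stack pops the most recently pushed neighbor first while the recursion descends into the lowest-numbered unvisited neighbor first; both are valid DFS orders.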

2.What is the time complexity of DFS on a graph with V vertices and E edges?
The time complexity of DFS is O(V + E), where V is the number of vertices and E is the number
of edges.

3.What is the time complexity of BFS on a graph with V vertices and E edges?
The time complexity of BFS is also O(V + E), where V is the number of vertices and E is the
number of edges.

4.In which order are nodes visited in a typical implementation of BFS?


In BFS, nodes are visited in level-order, meaning all nodes at the current level are visited before
moving on to nodes at the next level.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:

Rubric wise marks obtained:

Rubrics: Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale: Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0)
Marks:

Experiment No: 16
Implement Rabin-Karp string matching algorithm.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO4

Objectives: (a) Find all occurrences of a pattern in a given text.


(b) Improve the performance of the brute force algorithm.
(c) Find a pattern in a given text with less time complexity in the average case.

Equipment/Instruments: Computer System, Any C language editor

Theory:
It is a string searching algorithm named after its authors Richard M. Karp and Michael O.
Rabin. This algorithm is used to find all the occurrences of a given pattern 'P' in a given string 'S'
in O(Ns + Np) time in the average case, where 'Ns' and 'Np' are the lengths of 'S' and 'P', respectively.

Let’s take an example to make it more clear.


Assume the given string S = “cxyzghxyzvjkxyz” and pattern P = “xyz” and we have to find all the
occurrences of ‘P’ in ‘S’.

We can see that “xyz” is occurring in “cxyzghxyzvjkxyz” at three positions. So, we have to print
that pattern ‘P’ is occurring in string ‘S’ at indices 1, 6, and 12.

The naive pattern searching (brute force) algorithm slides the pattern over the text one position at
a time and checks for a match at each position. This approach has a worst-case time complexity of
O(Np * (Ns - Np + 1)).

The Rabin-Karp algorithm starts by computing, at each index of the text, the hash value of the string
starting at that particular index with the same length as the pattern. If the hash value of that equals
to the hash value of the given pattern, then it does a full match at that particular index.

Rabin Karp algorithm first computes the hash value of pattern P and first Np characters from S. If
hash values are same, we check the equality of actual strings. If the pattern is found, then it is called
hit. Otherwise, it is called a spurious hit. If hash values are not same, no need to compare actual
strings.
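The constant-time hash update at the heart of the algorithm can be illustrated in isolation. The sketch below (base d = 256 and prime q = 101 are common textbook choices, assumed here) rolls a window across a string and checks each rolled hash against a fresh computation:

```java
public class RollingHashDemo {
    static final int d = 256, q = 101; // base and modulus (assumed textbook values)

    // Hash of s[from .. from+m-1] computed from scratch in O(m).
    static int hashOf(String s, int from, int m) {
        int h = 0;
        for (int i = from; i < from + m; i++) h = (d * h + s.charAt(i)) % q;
        return h;
    }

    // Given the hash of the window starting at i, return the hash of the
    // window starting at i+1 in O(1): drop s[i], append s[i+m].
    // hPow must be d^(m-1) mod q, the weight of the leading character.
    static int roll(String s, int i, int m, int hash, int hPow) {
        int next = (d * (hash - s.charAt(i) * hPow) + s.charAt(i + m)) % q;
        return next < 0 ? next + q : next; // fix Java's signed remainder
    }

    public static void main(String[] args) {
        String text = "cxyzghxyz";
        int m = 3;
        int hPow = 1;
        for (int i = 0; i < m - 1; i++) hPow = (hPow * d) % q;

        int hash = hashOf(text, 0, m);
        for (int i = 0; i + m < text.length(); i++) {
            hash = roll(text, i, m, hash, hPow);
            // Each rolled hash matches a fresh O(m) computation.
            System.out.println(hash == hashOf(text, i + 1, m)); // prints true
        }
    }
}
```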

Steps of Rabin-Karp algorithm are as below:

1. Calculate the hash value of the pattern: The hash value of the pattern is calculated using a hash
function, which takes the pattern as input and produces a hash value as output.

2. Calculate the hash values of all the possible substrings of the same length in the text: The hash
values of all the possible substrings of the same length as the pattern are calculated using the
same hash function.
3. Compare the hash value of the pattern with the hash values of all the possible substrings: If a
match is found, the algorithm checks the characters of the pattern and the substring to verify
that they are indeed equal.
4. Move on to the next possible substring: If the characters do not match, the algorithm moves on
to the next possible substring and repeats the process until all possible substrings have been
compared.

Implement the Rabin-Karp algorithm based on above steps and give different input text and pattern
to check its correctness. Also, find the time complexity of your implemented algorithm.

Code :-
public class RabinKarp {
static final int d = 256;
static final int q = 101;
static void rabinKarp(String pattern, String text) {
int Np = pattern.length();
int Ns = text.length();
int i, j;
int patternHash = 0;
int textHash = 0;
int h = 1;
int matches = 0;
for (i = 0; i < Np - 1; i++) {
h = (h * d) % q;
}
for (i = 0; i < Np; i++) {
patternHash = (d * patternHash + pattern.charAt(i)) % q;
textHash = (d * textHash + text.charAt(i)) % q;
}

for (i = 0; i <= Ns - Np; i++) {


if (patternHash == textHash) {
for (j = 0; j < Np; j++) {
if (text.charAt(i + j) != pattern.charAt(j)) {
break;
}
}
if (j == Np) {
System.out.println("Pattern found at index " + i);
matches++;
}
}
if (i < Ns - Np) {

textHash = (d * (textHash - text.charAt(i) * h) + text.charAt(i + Np)) % q;
if (textHash < 0) {
textHash = (textHash + q);
}
}
}
if (matches == 0) {
System.out.println("Pattern not found in the given text.");
}
}
public static void main(String[] args) {
String text = "cxyzghxyzvjkxyz";
String pattern = "xyz";
rabinKarp(pattern, text);
}
}

Observations: The Rabin-Karp algorithm effectively finds the pattern in the given text by
comparing the hash values of the pattern and the substrings of the text. If a match is found, it ensures
correctness by doing a character-by-character comparison.
Result:-

Conclusion:- The Rabin-Karp algorithm is an efficient algorithm for string matching, especially
when the hash values can be used to quickly eliminate unnecessary character comparisons. It
performs well in the average case with time complexity O(Ns + Np), but can degrade to O(Ns *
Np) in the worst case due to hash collisions.

Quiz:
1. What is the Rabin-Karp algorithm used for?
The Rabin-Karp algorithm is used for finding all occurrences of a pattern in a given text by using
hash values to speed up the comparison process.

2. What is the time complexity of the Rabin-Karp algorithm in the average case?
The average case time complexity of the Rabin-Karp algorithm is O(Ns + Np), where Ns is the
length of the text and Np is the length of the pattern.

3.What is the main advantage of the Rabin-Karp algorithm over the brute force algorithm for string
matching?
The main advantage of the Rabin-Karp algorithm is that it uses hash values to avoid unnecessary
character comparisons, making it faster than the brute force algorithm in the average case.

4.What is the worst-case time complexity of the Rabin-Karp algorithm?

The worst-case time complexity of the Rabin-Karp algorithm is O(Ns * Np) due to hash collisions,
which may require character-by-character comparisons.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:


Rubric wise marks obtained:

Rubrics: Understanding of problem (3) | Program Implementation (5) | Documentation & Timely Submission (2) | Total (10)
Scale: Good (3) / Avg. (2-1) / Poor (1-0) | Good (5-4) / Avg. (3-2) / Poor (1-0) | Good (2) / Avg. (1) / Poor (0)
Marks:
