Algorithms (DSA)
Sorting Algorithms
Searching Algorithms
Graph Algorithms
Dynamic Programming Algorithms
Greedy Algorithms
Backtracking Algorithms
Mathematical Algorithms
Tree Algorithms
Sorting Algorithms
1. Bubble Sort
Theory:
Bubble Sort is a simple comparison-based sorting algorithm. It repeatedly steps through the
list, compares adjacent elements, and swaps them if they are in the wrong order. This
process is repeated until the list is sorted.
It is called Bubble Sort because smaller elements “bubble” to the top of the list, and larger
elements sink to the bottom.
Algorithm:
1. Start from the first element, compare it with the next element.
2. If the current element is greater than the next element, swap them.
3. Move to the next element and repeat the process for all elements.
4. Continue the process for all elements until no swaps are needed, meaning the list is
sorted.
Code (Java):
java
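// Illustrative sketch of the steps above: bubble sort with an early exit
// when a full pass makes no swaps (class and method names are assumptions).
public class BubbleSort {
    public static void bubbleSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            // After each pass the largest remaining element settles at the end
            for (int j = 0; j < n - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps in this pass: the array is sorted
        }
    }

    public static void main(String[] args) {
        int[] arr = {64, 34, 25, 12, 22, 11, 90};
        bubbleSort(arr);
        System.out.println(java.util.Arrays.toString(arr));
    }
}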
2. Binary Search
Theory:
Binary Search is an efficient search algorithm that works on sorted arrays. It repeatedly
divides the search interval in half. If the value of the search key is less than the item in the
middle of the interval, narrow the interval to the lower half; otherwise, narrow it to the
upper half.
Algorithm:
1. Start with two pointers, low and high, representing the current search range.
2. Find the middle element: mid = (low + high) / 2.
3. If the middle element equals the target, return the index.
4. If the target is less than the middle element, adjust the high pointer to mid - 1.
5. If the target is greater than the middle element, adjust the low pointer to mid + 1.
6. Repeat the process until low > high.
Code (Java):
java
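// Illustrative sketch: iterative binary search over a sorted array
public class BinarySearch {
    public static int binarySearch(int[] arr, int target) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;     // middle of the current range
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) low = mid + 1; // search the upper half
            else high = mid - 1;                  // search the lower half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] arr = {2, 3, 4, 10, 40};
        int result = binarySearch(arr, 10);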
if (result == -1) {
System.out.println("Element not found.");
} else {
System.out.println("Element found at index: " + result);
}
}
}
Time Complexity:
• Best Case: O(1) (if the element is found at the middle)
• Average/Worst Case: O(log n)
3. Merge Sort
Theory:
Merge Sort is a Divide and Conquer algorithm. It divides the input array into two halves,
recursively sorts the two halves, and then merges them to produce the sorted array.
Algorithm:
1. Divide the unsorted array into two halves.
2. Recursively sort both halves.
3. Merge the two halves back together.
Code (Java):
java
public static void merge(int[] arr, int left, int mid, int right) {
int n1 = mid - left + 1;
int n2 = right - mid;
// ... copy the two halves into temporary arrays, then merge them back into arr ...
}
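A complete, self-contained version of the algorithm might look like this (class and variable names are illustrative):
import java.util.Arrays;

public class MergeSort {
    public static void mergeSort(int[] arr, int left, int right) {
        if (left >= right) return;          // a single element is already sorted
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);          // sort the left half
        mergeSort(arr, mid + 1, right);     // sort the right half
        merge(arr, left, mid, right);       // merge the two sorted halves
    }

    private static void merge(int[] arr, int left, int mid, int right) {
        int[] leftHalf = Arrays.copyOfRange(arr, left, mid + 1);
        int[] rightHalf = Arrays.copyOfRange(arr, mid + 1, right + 1);
        int i = 0, j = 0, k = left;
        while (i < leftHalf.length && j < rightHalf.length) {
            arr[k++] = (leftHalf[i] <= rightHalf[j]) ? leftHalf[i++] : rightHalf[j++];
        }
        while (i < leftHalf.length) arr[k++] = leftHalf[i++];   // copy any leftovers
        while (j < rightHalf.length) arr[k++] = rightHalf[j++];
    }

    public static void main(String[] args) {
        int[] arr = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(arr, 0, arr.length - 1);
        System.out.println(Arrays.toString(arr));
    }
}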
5. Selection Sort
Theory:
Selection Sort is a simple comparison-based sorting algorithm. It repeatedly selects the
smallest element from the unsorted portion of the array and swaps it with the first unsorted
element. This process continues until the entire array is sorted.
Algorithm:
1. Start with the first element and search the array to find the minimum element.
2. Swap the minimum element with the first element.
3. Move to the next position and repeat the process until the array is sorted.
Code (Java):
java
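// Illustrative sketch of the steps above (class and method names are assumptions).
public class SelectionSort {
    public static void selectionSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int minIndex = i;
            // Find the smallest element in the unsorted portion arr[i..n-1]
            for (int j = i + 1; j < n; j++) {
                if (arr[j] < arr[minIndex]) {
                    minIndex = j;
                }
            }
            // Swap it into position i
            int temp = arr[minIndex];
            arr[minIndex] = arr[i];
            arr[i] = temp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {64, 25, 12, 22, 11};
        selectionSort(arr);
        System.out.println(java.util.Arrays.toString(arr));
    }
}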
6. Insertion Sort
Theory:
Insertion Sort is a comparison-based sorting algorithm that builds the final sorted array one
element at a time. It is much less efficient on large lists than more advanced algorithms like
Quick Sort, Heap Sort, or Merge Sort, but it is efficient for small data sets.
Algorithm:
1. Start with the second element, compare it with the previous elements, and insert it
into its correct position.
2. Repeat the process for all elements, inserting each one into its correct position in the
sorted portion of the array.
Code (Java):
java
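// Illustrative sketch of the surrounding method: insertion sort builds the
// sorted prefix one element at a time.
public static void insertionSort(int[] arr) {
    for (int i = 1; i < arr.length; i++) {
        int key = arr[i];
        int j = i - 1;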
// Move elements of arr[0..i-1], that are greater than key, to one position ahead
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}
}
7. Quick Sort
Theory:
Quick Sort is an efficient, comparison-based, divide-and-conquer sorting algorithm. It works
by selecting a pivot element from the array and partitioning the other elements into two
sub-arrays, according to whether they are less than or greater than the pivot. The sub-arrays
are then sorted recursively.
Algorithm:
1. Choose a pivot element (can be the first, last, or a random element).
2. Partition the array such that elements smaller than the pivot are on the left, and
elements greater than the pivot are on the right.
3. Recursively apply the same process to the sub-arrays on the left and right of the
pivot.
Code (Java):
java
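// Illustrative sketch: quick sort with the last element as the pivot (Lomuto partition)
public static void quickSort(int[] arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);   // pivot's final index
        quickSort(arr, low, pi - 1);          // sort elements left of the pivot
        quickSort(arr, pi + 1, high);         // sort elements right of the pivot
    }
}

public static int partition(int[] arr, int low, int high) {
    int pivot = arr[high];
    int i = low - 1;   // boundary of the "smaller than pivot" region
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp;
        }
    }
    int temp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = temp;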
return i + 1;
}
8. Heap Sort
Theory:
Heap Sort is a comparison-based sorting algorithm that uses a binary heap data structure. It
is similar to selection sort, where the maximum element is selected and placed at the end of
the array. The key idea is to use a heap to find the maximum efficiently.
Algorithm:
1. Build a max-heap from the input data.
2. At each step, remove the largest element from the heap (which is the root) and
rebuild the heap.
3. Repeat the process until the heap is empty.
Code (Java):
java
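// Illustrative sketch of the steps above (class and method names are assumptions).
public class HeapSort {
    public static void heapSort(int[] arr) {
        int n = arr.length;
        // Build a max-heap, starting from the last non-leaf node
        for (int i = n / 2 - 1; i >= 0; i--) {
            heapify(arr, n, i);
        }
        // Repeatedly move the root (maximum) to the end and re-heapify the rest
        for (int end = n - 1; end > 0; end--) {
            int temp = arr[0]; arr[0] = arr[end]; arr[end] = temp;
            heapify(arr, end, 0);
        }
    }

    // Sift the element at index i down so the subtree rooted at i is a max-heap
    private static void heapify(int[] arr, int heapSize, int i) {
        int largest = i, left = 2 * i + 1, right = 2 * i + 2;
        if (left < heapSize && arr[left] > arr[largest]) largest = left;
        if (right < heapSize && arr[right] > arr[largest]) largest = right;
        if (largest != i) {
            int temp = arr[i]; arr[i] = arr[largest]; arr[largest] = temp;
            heapify(arr, heapSize, largest);
        }
    }

    public static void main(String[] args) {
        int[] arr = {12, 11, 13, 5, 6, 7};
        heapSort(arr);
        System.out.println(java.util.Arrays.toString(arr));
    }
}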
9. Counting Sort
Theory:
Counting Sort is a non-comparison-based sorting algorithm. It counts the occurrences of
each unique element in the input array. The counts are then used to place the elements in
the correct position in the sorted array.
Algorithm:
1. Find the maximum and minimum values in the array.
2. Create a count array to store the count of each unique value.
3. Calculate the cumulative count to determine the position of each element.
4. Build the sorted array using the count array.
Code (Java):
java
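// Illustrative sketch of the steps above (names are assumptions); the backwards
// pass in step 4 keeps the sort stable.
public class CountingSort {
    public static void countingSort(int[] arr) {
        if (arr.length == 0) return;
        // 1. Find the minimum and maximum values
        int min = arr[0], max = arr[0];
        for (int v : arr) { min = Math.min(min, v); max = Math.max(max, v); }
        // 2. Count occurrences of each value
        int[] count = new int[max - min + 1];
        for (int v : arr) count[v - min]++;
        // 3. Cumulative counts give each value's final position in the output
        for (int i = 1; i < count.length; i++) count[i] += count[i - 1];
        // 4. Build the sorted array
        int[] output = new int[arr.length];
        for (int i = arr.length - 1; i >= 0; i--) {
            output[--count[arr[i] - min]] = arr[i];
        }
        System.arraycopy(output, 0, arr, 0, arr.length);
    }

    public static void main(String[] args) {
        int[] arr = {4, 2, 2, 8, 3, 3, 1};
        countingSort(arr);
        System.out.println(java.util.Arrays.toString(arr));
    }
}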
Searching Algorithms
1. Linear Search
Theory:
Linear search is a simple search algorithm that checks each element in the array one by one
until the target element is found or the list is exhausted.
Algorithm:
1. Start from the first element and compare it with the target.
2. If the element matches the target, return the index.
3. If the end of the array is reached without finding the target, return -1 (not found).
Code (Java):
java
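// Illustrative sketch of the steps above (class and method names are assumptions).
public class LinearSearch {
    public static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i; // found: return the index
            }
        }
        return -1; // reached the end without finding the target
    }

    public static void main(String[] args) {
        int[] arr = {10, 23, 45, 70, 11, 15};
        System.out.println(linearSearch(arr, 70)); // prints 3
    }
}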
2. Binary Search
Theory:
Binary search is an efficient search algorithm that works on sorted arrays. It repeatedly
divides the array in half, checking whether the middle element is the target or whether the
target is in the left or right half.
Algorithm:
1. Compare the target with the middle element of the array.
2. If the target is equal to the middle element, return the index.
3. If the target is smaller than the middle element, repeat the search in the left half.
4. If the target is larger than the middle element, repeat the search in the right half.
Code (Java):
java
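// Illustrative recursive sketch (an iterative version appears earlier in these notes).
public class BinarySearchRecursive {
    public static int binarySearch(int[] arr, int low, int high, int target) {
        if (low > high) return -1;             // empty range: not found
        int mid = low + (high - low) / 2;
        if (arr[mid] == target) return mid;
        if (target < arr[mid]) {
            return binarySearch(arr, low, mid - 1, target);  // search the left half
        }
        return binarySearch(arr, mid + 1, high, target);     // search the right half
    }

    public static void main(String[] args) {
        int[] arr = {2, 3, 4, 10, 40};
        System.out.println(binarySearch(arr, 0, arr.length - 1, 10)); // prints 3
    }
}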
3. Jump Search
Theory:
Jump Search is a search algorithm for sorted arrays. It works by jumping ahead by a fixed
number of steps (block size), and once it finds a block where the target might be, it performs
a linear search within that block.
Algorithm:
1. Jump ahead by a fixed block size (sqrt(n)).
2. When the block where the target might be is found, perform a linear search within
that block.
3. If the target is found, return the index; otherwise, return -1.
Code (Java):
java
public class JumpSearch {
public static int jumpSearch(int[] arr, int target) {
int n = arr.length;
int step = (int) Math.sqrt(n); // Block size
int prev = 0;
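        // Illustrative continuation: jump ahead block by block, then scan the block
        while (arr[Math.min(step, n) - 1] < target) {
            prev = step;
            step += (int) Math.sqrt(n);
            if (prev >= n) return -1;          // ran past the end of the array
        }
        // Linear search within the block that may contain the target
        while (arr[prev] < target) {
            prev++;
            if (prev == Math.min(step, n)) return -1;
        }
        return (arr[prev] == target) ? prev : -1;
    }

    public static void main(String[] args) {
        int[] arr = {0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89};
        System.out.println(jumpSearch(arr, 55)); // prints 9
    }
}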
4. Interpolation Search
Theory:
Interpolation Search is an improved variant of binary search for uniformly distributed data. It
estimates the position of the target value based on the value's relation to the endpoints. The
idea is to use interpolation rather than dividing the array in half as in binary search.
Algorithm:
1. Calculate the position using the formula:
pos = low + ((target - arr[low]) * (high - low)) / (arr[high] - arr[low])
2. If the target matches the element at the calculated position, return the index.
3. If the target is smaller, search the left half; otherwise, search the right half.
4. Repeat the process until the target is found or the range becomes invalid.
Code (Java):
java
while (low <= high && target >= arr[low] && target <= arr[high]) {
if (low == high) {
if (arr[low] == target) {
return low;
}
return -1;
}
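A self-contained version built around the loop above might look like this (class and method names are illustrative):
public class InterpolationSearch {
    public static int interpolationSearch(int[] arr, int target) {
        int low = 0, high = arr.length - 1;
        while (low <= high && target >= arr[low] && target <= arr[high]) {
            if (low == high || arr[low] == arr[high]) {
                return (arr[low] == target) ? low : -1;
            }
            // Estimate the probe position from the value's distance to the endpoints
            int pos = low + (int) ((long) (target - arr[low]) * (high - low)
                                   / (arr[high] - arr[low]));
            if (arr[pos] == target) return pos;
            if (arr[pos] < target) low = pos + 1;   // target lies above the probe
            else high = pos - 1;                    // target lies below the probe
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {10, 20, 30, 40, 50, 60, 70, 80};
        System.out.println(interpolationSearch(arr, 60)); // prints 5
    }
}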
5. Exponential Search
Theory:
Exponential Search works on sorted arrays and is useful when the size of the array is not
known in advance. It first finds a range where the target might be and then uses binary
search within that range.
Algorithm:
1. Start with an index of 1 and double the index in each step until you find an index
such that the value at that index is greater than the target.
2. Perform a binary search in the range [previous_index, current_index].
Code (Java):
java
public class ExponentialSearch {
public static int exponentialSearch(int[] arr, int target) {
if (arr[0] == target) {
return 0; // If target is at the first position
}
public static int binarySearch(int[] arr, int low, int high, int target) {
while (low <= high) {
int mid = low + (high - low) / 2;
if (arr[mid] == target) {
return mid;
}
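Putting the two pieces above together, a complete sketch might look like this (illustrative):
public class ExponentialSearch {
    public static int exponentialSearch(int[] arr, int target) {
        if (arr.length == 0) return -1;
        if (arr[0] == target) return 0;           // target at the first position
        // Double the index until we pass the target (or the end of the array)
        int i = 1;
        while (i < arr.length && arr[i] <= target) {
            i *= 2;
        }
        // The target, if present, lies in [i/2, min(i, n-1)]
        return binarySearch(arr, i / 2, Math.min(i, arr.length - 1), target);
    }

    private static int binarySearch(int[] arr, int low, int high, int target) {
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) low = mid + 1;
            else high = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {2, 3, 4, 10, 40, 45, 60};
        System.out.println(exponentialSearch(arr, 45)); // prints 5
    }
}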
6. Ternary Search
Theory:
Ternary Search is similar to binary search, but instead of dividing the array into two halves, it
divides the array into three parts. It is performed on sorted arrays.
Algorithm:
1. Divide the array into three parts.
2. Compare the target with the first and second midpoints.
3. If the target is equal to any of the midpoints, return the index.
4. Depending on the comparison, continue searching in one of the three parts.
5. Repeat the process until the target is found or the range becomes invalid.
Code (Java):
java
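// Illustrative sketch of the steps above (class and method names are assumptions).
public class TernarySearch {
    public static int ternarySearch(int[] arr, int low, int high, int target) {
        if (low > high) return -1;
        // Split the range into three parts with two midpoints
        int mid1 = low + (high - low) / 3;
        int mid2 = high - (high - low) / 3;
        if (arr[mid1] == target) return mid1;
        if (arr[mid2] == target) return mid2;
        if (target < arr[mid1]) {
            return ternarySearch(arr, low, mid1 - 1, target);      // first third
        } else if (target > arr[mid2]) {
            return ternarySearch(arr, mid2 + 1, high, target);     // last third
        } else {
            return ternarySearch(arr, mid1 + 1, mid2 - 1, target); // middle third
        }
    }

    public static void main(String[] args) {
        int[] arr = {1, 3, 5, 7, 9, 11, 13};
        System.out.println(ternarySearch(arr, 0, arr.length - 1, 11)); // prints 5
    }
}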
Dynamic Programming Algorithms
1. Fibonacci Sequence
Theory:
The Fibonacci sequence is a series where each number is the sum of the two preceding
ones, starting from 0 and 1. Using dynamic programming, we store intermediate results to
avoid redundant calculations.
Algorithm:
1. Create an array to store Fibonacci numbers.
2. Initialize the first two Fibonacci numbers.
3. Iterate through the remaining numbers and fill in the array using the relation:
F(n) = F(n-1) + F(n-2)
Code (Java):
java
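// Illustrative sketch: bottom-up Fibonacci with intermediate values stored in an array
public static int fibonacci(int n) {
    if (n <= 1) return n;                   // F(0) = 0, F(1) = 1
    int[] fib = new int[n + 1];
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; i++) {
        fib[i] = fib[i - 1] + fib[i - 2];   // F(n) = F(n-1) + F(n-2)
    }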
return fib[n];
}
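// Fragment from a Matrix Chain Multiplication DP: p[] holds the matrix dimensions
// and dp[i][j] is the minimum number of scalar multiplications needed to compute
// the product of matrices i..j.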
for (int len = 2; len < n; len++) { // len is the chain length
for (int i = 1; i < n - len + 1; i++) {
int j = i + len - 1;
dp[i][j] = Integer.MAX_VALUE;
for (int k = i; k < j; k++) {
int cost = dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j];
if (cost < dp[i][j]) {
dp[i][j] = cost;
}
}
}
}
return dp[m][n];
}
Graph Algorithms
import java.util.*;
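// Build an undirected adjacency list with edges 0-1, 0-2, 1-3, 2-3, 3-4, 4-5: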
adj.get(0).add(1);
adj.get(0).add(2);
adj.get(1).add(0);
adj.get(1).add(3);
adj.get(2).add(0);
adj.get(2).add(3);
adj.get(3).add(1);
adj.get(3).add(2);
adj.get(3).add(4);
adj.get(4).add(3);
adj.get(4).add(5);
adj.get(5).add(4);
2. Breadth-First Search (BFS)
Code (Java):
java
import java.util.*;
public class BFS {
    public static void bfs(int start, List<List<Integer>> adj) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new LinkedList<>();
        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            System.out.print(node + " ");
            for (int next : adj.get(node)) {   // enqueue unvisited neighbours
                if (!visited[next]) {
                    visited[next] = true;
                    queue.add(next);
                }
            }
        }
    }
    // In main: build an adjacency list (e.g., as in the snippet above), then call
    //   System.out.println("BFS traversal:");
    //   bfs(0, adj);
}
Time Complexity:
• Time Complexity: O(V + E), where V is the number of vertices and E is the number of
edges.
• Space Complexity: O(V)
3. Dijkstra’s Algorithm
Theory:
Dijkstra’s algorithm is used to find the shortest path between nodes in a graph with non-
negative edge weights. It uses a priority queue to explore the closest unvisited node.
Algorithm:
1. Initialize the distance to the source node as 0 and all other nodes as infinity.
2. Use a priority queue to select the unvisited node with the smallest known distance.
3. For each unvisited neighbor, update the shortest known distance.
4. Repeat until all nodes have been visited.
Code (Java):
java
import java.util.*;
// ... dist[] is initialised (0 for the source, infinity elsewhere) and the source is pushed onto pq ...
while (!pq.isEmpty()) {
    Node node = pq.poll();
    int u = node.vertex;
    // ... mark u as settled and relax each edge (u, v), pushing improved distances onto pq ...
}
// In main: dijkstra(0, V, adj);
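A self-contained sketch of the approach described above, using an adjacency list of (vertex, weight) pairs and a priority queue (class and field names are illustrative):
import java.util.*;

public class Dijkstra {
    // Each adjacency entry is {neighbour, weight}
    public static int[] dijkstra(int source, int V, List<List<int[]>> adj) {
        int[] dist = new int[V];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Priority queue of {vertex, distance}, ordered by distance
        PriorityQueue<int[]> pq = new PriorityQueue<int[]>((a, b) -> Integer.compare(a[1], b[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0], d = top[1];
            if (d > dist[u]) continue;          // stale entry, skip it
            for (int[] edge : adj.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {    // relax edge (u, v)
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        int V = 4;
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < V; i++) adj.add(new ArrayList<>());
        // Undirected weighted edges: 0-1 (4), 0-2 (1), 2-1 (2), 1-3 (5)
        int[][] edges = { {0, 1, 4}, {0, 2, 1}, {2, 1, 2}, {1, 3, 5} };
        for (int[] e : edges) {
            adj.get(e[0]).add(new int[]{e[1], e[2]});
            adj.get(e[1]).add(new int[]{e[0], e[2]});
        }
        System.out.println(Arrays.toString(dijkstra(0, V, adj))); // [0, 3, 1, 8]
    }
}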
Time Complexity:
• Time Complexity: O((V + E) log V), where V is the number of vertices and E is the
number of edges.
• Space Complexity: O(V)
4. Bellman-Ford Algorithm
Theory:
The Bellman-Ford algorithm is used to find the shortest path from a single source to all other
vertices in a graph, even when edge weights are negative. Unlike Dijkstra’s algorithm, it
works on graphs with negative weights but is slower.
Algorithm:
1. Initialize the distance to the source node as 0 and all other nodes as infinity.
2. For each edge, attempt to relax it by updating the distance to its destination.
3. Repeat the process V-1 times (where V is the number of vertices).
4. Check for negative-weight cycles by running the relaxation process once more.
Code (Java):
java
import java.util.Arrays;
bellmanFord(edges, V, 0);
}
}
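A compact sketch of the procedure described above, using a simple edge list (class and field names are illustrative):
import java.util.Arrays;

public class BellmanFord {
    // Each edge is {source, destination, weight}
    public static int[] bellmanFord(int[][] edges, int V, int source) {
        int[] dist = new int[V];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Relax every edge V-1 times
        for (int i = 1; i < V; i++) {
            for (int[] e : edges) {
                int u = e[0], v = e[1], w = e[2];
                if (dist[u] != Integer.MAX_VALUE && dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                }
            }
        }
        // One more pass: any further improvement means a negative-weight cycle
        for (int[] e : edges) {
            if (dist[e[0]] != Integer.MAX_VALUE && dist[e[0]] + e[2] < dist[e[1]]) {
                System.out.println("Graph contains a negative-weight cycle");
                break;
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        int V = 5;
        int[][] edges = { {0, 1, -1}, {0, 2, 4}, {1, 2, 3}, {1, 3, 2}, {1, 4, 2},
                          {3, 2, 5}, {3, 1, 1}, {4, 3, -3} };
        System.out.println(Arrays.toString(bellmanFord(edges, V, 0))); // [0, -1, 2, -2, 1]
    }
}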
Time Complexity:
• Time Complexity: O(V * E), where V is the number of vertices and E is the number of
edges.
• Space Complexity: O(V)
A* Search Algorithm
Theory:
The A* Search Algorithm is a popular pathfinding and graph traversal algorithm, widely used
in AI, especially in games. It combines the advantages of Dijkstra's Algorithm and Greedy
Best-First Search by using a heuristic to guide its search.
Algorithm:
1. Open list: Nodes to be evaluated.
2. Closed list: Nodes already evaluated.
3. For each node, calculate f(n) = g(n) + h(n):
o g(n): The cost to reach the node.
o h(n): Heuristic estimate of the cost to reach the goal (e.g., Euclidean
distance).
4. At each step, choose the node from the open list with the lowest f(n) value and
evaluate it.
5. Repeat until the goal is reached.
Code (Java):
java
import java.util.*;
class AStar {
static class Node implements Comparable<Node> {
int x, y;
int gCost, hCost, fCost;
Node parent;
Node(int x, int y) {
this.x = x;
this.y = y;
}
@Override
public int compareTo(Node o) {
return Integer.compare(this.fCost, o.fCost);
}
}
static int[][] directions = { {0, 1}, {1, 0}, {0, -1}, {-1, 0} };
start.gCost = 0;
start.hCost = heuristic(start, goal);
start.fCost = start.gCost + start.hCost;
openList.add(start);
while (!openList.isEmpty()) {
Node current = openList.poll();
if (current.equals(goal)) {
return reconstructPath(current);
}
closedList.add(current);
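// For each neighbour of the current node (generated from the `directions` offsets):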
if (closedList.contains(neighbor)) {
continue;
}
if (!openList.contains(neighbor)) {
openList.add(neighbor);
}
}
}
}
}
return null; // No path found
}
import java.util.*;
@Override
public int compareTo(Edge compareEdge) {
return this.weight - compareEdge.weight;
}
}
int V, E;
Edge[] edges;
Kruskal(int V, int E) {
this.V = V;
this.E = E;
edges = new Edge[E];
}
void kruskalMST() {
Edge[] result = new Edge[V];
int e = 0;
int i = 0;
for (i = 0; i < V; ++i) {
result[i] = new Edge(0, 0, 0);
}
Arrays.sort(edges);
i = 0;
while (e < V - 1) {
Edge nextEdge = edges[i++];
int x = find(subsets, nextEdge.src);
int y = find(subsets, nextEdge.dest);
if (x != y) {
result[e++] = nextEdge;
union(subsets, x, y);
}
}
graph.kruskalMST();
}
}
Time Complexity:
• Time Complexity: O(E log E + V log V), where E is the number of edges and V is the
number of vertices.
• Space Complexity: O(V) for the union-find structure.
import java.util.*;
Arrays.fill(key, Integer.MAX_VALUE);
key[0] = 0;
parent[0] = -1;
pq.add(new Edge(0, key[0]));
while (!pq.isEmpty()) {
Edge node = pq.poll();
int u = node.vertex;
inMST[u] = true;
printMST(parent, graph);
}
import java.util.*;
int w = -1;
if (low[u] == disc[u]) {
while (w != u) {
w = stack.pop();
System.out.print(w + " ");
stackMember[w] = false;
}
System.out.println();
}
}
public void SCC(List<List<Integer>> adj, int V) {
disc = new int[V];
low = new int[V];
stackMember = new boolean[V];
stack = new Stack<>();
Arrays.fill(disc, -1);
Arrays.fill(low, -1);
adj.get(0).add(2);
adj.get(2).add(1);
adj.get(1).add(0);
adj.get(0).add(3);
adj.get(3).add(4);
TarjanSCC tarjan = new TarjanSCC();
System.out.println("Strongly Connected Components in the graph:");
tarjan.SCC(adj, V);
}
}
Time Complexity:
• Time Complexity: O(V + E), where V is the number of vertices and E is the number of
edges.
• Space Complexity: O(V) for storing discovery times and low-link values.
import java.util.*;
Arrays.fill(visited, false);
while (!stack.isEmpty()) {
int v = stack.pop();
if (!visited[v]) {
reverseDfs(v, visited, revAdj);
System.out.println();
}
}
}
adj.get(0).add(2);
adj.get(2).add(1);
adj.get(1).add(0);
adj.get(0).add(3);
adj.get(3).add(4);
Topological Sorting
Theory:
Topological Sorting is an ordering of vertices in a directed acyclic graph (DAG) such that for
every directed edge u -> v, vertex u comes before vertex v in the ordering.
Algorithm:
1. Perform DFS on the graph; when a vertex finishes (all of its outgoing neighbours have been explored), push it onto a stack.
2. After DFS has visited every vertex, pop the stack to obtain the topological order, which is the reverse of the finishing order.
Code (Java):
java
import java.util.*;
while (!stack.isEmpty()) {
System.out.print(stack.pop() + " ");
}
}
adj.get(5).add(2);
adj.get(5).add(0);
adj.get(4).add(0);
adj.get(4).add(1);
adj.get(2).add(3);
adj.get(3).add(1);
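A self-contained DFS-based version using the same example edges as above (5→2, 5→0, 4→0, 4→1, 2→3, 3→1) might look like this (class and method names are illustrative):
import java.util.*;

public class TopologicalSort {
    private static void dfs(int v, boolean[] visited, List<List<Integer>> adj, Deque<Integer> stack) {
        visited[v] = true;
        for (int next : adj.get(v)) {
            if (!visited[next]) {
                dfs(next, visited, adj, stack);
            }
        }
        stack.push(v); // push v once all of its descendants are finished
    }

    public static void topologicalSort(int V, List<List<Integer>> adj) {
        boolean[] visited = new boolean[V];
        Deque<Integer> stack = new ArrayDeque<>();
        for (int v = 0; v < V; v++) {
            if (!visited[v]) dfs(v, visited, adj, stack);
        }
        while (!stack.isEmpty()) {
            System.out.print(stack.pop() + " "); // reverse of the finishing order
        }
    }

    public static void main(String[] args) {
        int V = 6;
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < V; i++) adj.add(new ArrayList<>());
        int[][] edges = { {5, 2}, {5, 0}, {4, 0}, {4, 1}, {2, 3}, {3, 1} };
        for (int[] e : edges) adj.get(e[0]).add(e[1]);
        topologicalSort(V, adj); // one valid order: 5 4 2 3 1 0
    }
}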
uf.union(0, 2);
uf.union(1, 3);
uf.union(1, 4);
System.out.println("Set of element 3: " + uf.find(3));
System.out.println("Set of element 2: " + uf.find(2));
}
}
Time Complexity:
• Time Complexity: O(α(n)) for each operation, where α is the inverse Ackermann
function, which is nearly constant for practical purposes.
• Space Complexity: O(n).
import java.util.*;
public class CycleDetectionUndirected {
public boolean isCyclicUtil(int v, boolean[] visited, int parent, List<List<Integer>> adj) {
visited[v] = true;
for (int neighbor : adj.get(v)) {
if (!visited[neighbor]) {
if (isCyclicUtil(neighbor, visited, v, adj)) {
return true;
}
} else if (neighbor != parent) {
return true;
}
}
return false;
}
adj.get(0).add(1);
adj.get(1).add(0);
adj.get(1).add(2);
adj.get(2).add(1);
adj.get(2).add(0);
adj.get(0).add(2);
adj.get(3).add(4);
adj.get(4).add(3);
Greedy Algorithms
1. Activity Selection Problem
Theory:
The Activity Selection Problem involves selecting the maximum number of activities that
don't overlap. Greedy algorithms are used to select activities based on their finish times.
Algorithm:
1. Sort activities by their finish times.
2. Select the first activity and then repeatedly select the next activity that starts after
the last selected activity finishes.
Code (Java):
java
import java.util.Arrays;
import java.util.Comparator;
System.out.println("Selected activities:");
int lastEnd = -1;
for (int i : indices) {
if (start[i] >= lastEnd) {
System.out.println("Activity: " + i + " (" + start[i] + ", " + end[i] + ")");
lastEnd = end[i];
}
}
}
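A complete version of the idea, sorting activity indices by finish time as the loop above assumes (class and method names are illustrative):
import java.util.Arrays;
import java.util.Comparator;

public class ActivitySelection {
    public static void selectActivities(int[] start, int[] end) {
        int n = start.length;
        // Sort activity indices by finish time
        Integer[] indices = new Integer[n];
        for (int i = 0; i < n; i++) indices[i] = i;
        Arrays.sort(indices, Comparator.comparingInt(i -> end[i]));

        System.out.println("Selected activities:");
        int lastEnd = -1;
        for (int i : indices) {
            if (start[i] >= lastEnd) {      // starts after the last selected one finishes
                System.out.println("Activity: " + i + " (" + start[i] + ", " + end[i] + ")");
                lastEnd = end[i];
            }
        }
    }

    public static void main(String[] args) {
        int[] start = {1, 3, 0, 5, 8, 5};
        int[] end = {2, 4, 6, 7, 9, 9};
        selectActivities(start, end); // selects activities 0, 1, 3, 4
    }
}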
import java.util.*;
DisjointSet(int n) {
parent = new int[n];
rank = new int[n];
for (int i = 0; i < n; i++) {
parent[i] = i;
rank[i] = 0;
}
}
int find(int u) {
if (u != parent[u]) {
parent[u] = find(parent[u]);
}
return parent[u];
}
void union(int u, int v) {
int rootU = find(u);
int rootV = find(v);
if (rootU != rootV) {
if (rank[rootU] < rank[rootV]) {
parent[rootU] = rootV;
} else if (rank[rootU] > rank[rootV]) {
parent[rootV] = rootU;
} else {
parent[rootV] = rootU;
rank[rootU]++;
}
}
}
}
System.out.println("Edges in MST:");
for (Edge edge : mst) {
System.out.println(edge.src + " - " + edge.dest + ": " + edge.weight);
}
}
kruskal(V, edges);
}
}
Time Complexity:
• Time Complexity: O(E log E), due to sorting and union-find operations.
• Space Complexity: O(V + E)
import java.util.*;
Arrays.fill(key, Integer.MAX_VALUE);
key[0] = 0;
pq.add(new Edge(0, 0));
parent[0] = -1;
while (!pq.isEmpty()) {
int u = pq.poll().dest;
inMST[u] = true;
System.out.println("Edges in MST:");
for (int i = 1; i < V; i++) {
System.out.println(parent[i] + " - " + i + ": " + key[i]);
}
}
prim(V, adj);
}
}
Time Complexity:
• Time Complexity: O(E log V), due to priority queue operations.
• Space Complexity: O(V + E)
Backtracking Algorithms
1. N-Queens Problem
Theory:
The N-Queens problem involves placing N queens on an N x N chessboard so that no two
queens threaten each other. Backtracking helps to explore all possible placements.
Algorithm:
1. Place queens one by one in different columns.
2. Check if the current placement is safe.
3. If safe, recursively place the next queen.
4. Backtrack if placing further queens is not possible.
Code (Java):
java
private static boolean isSafe(int[] board, int row, int col, int n) {
for (int i = 0; i < row; i++) {
if (board[i] == col || board[i] - i == col - row || board[i] + i == col + row) {
return false;
}
}
return true;
}
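A complete solver built around the isSafe check above, where the board array stores the chosen column for each row (illustrative sketch):
public class NQueens {
    public static boolean solve(int[] board, int row, int n) {
        if (row == n) {                           // all queens placed
            printBoard(board, n);
            return true;
        }
        for (int col = 0; col < n; col++) {
            if (isSafe(board, row, col, n)) {
                board[row] = col;                 // place a queen at (row, col)
                if (solve(board, row + 1, n)) return true;
                // otherwise backtrack and try the next column
            }
        }
        return false;
    }

    private static boolean isSafe(int[] board, int row, int col, int n) {
        for (int i = 0; i < row; i++) {
            if (board[i] == col || board[i] - i == col - row || board[i] + i == col + row) {
                return false;
            }
        }
        return true;
    }

    private static void printBoard(int[] board, int n) {
        for (int r = 0; r < n; r++) {
            StringBuilder line = new StringBuilder();
            for (int c = 0; c < n; c++) line.append(board[r] == c ? "Q " : ". ");
            System.out.println(line);
        }
    }

    public static void main(String[] args) {
        int n = 4;
        if (!solve(new int[n], 0, n)) System.out.println("No solution exists.");
    }
}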
2. Sudoku Solver
Theory:
The Sudoku Solver fills a 9x9 Sudoku grid using backtracking to ensure that each number 1-9
appears only once per row, column, and 3x3 subgrid.
Algorithm:
1. Find an empty cell.
2. Try placing digits 1 through 9.
3. Check if the placement is valid.
4. Recursively solve the next cell.
5. Backtrack if needed.
Code (Java):
java
private static boolean isValid(int[][] board, int row, int col, int num) {
for (int i = 0; i < 9; i++) {
if (board[row][i] == num || board[i][col] == num ||
board[row - row % 3 + i / 3][col - col % 3 + i % 3] == num) {
return false;
}
}
return true;
}
solveSudoku(board);
printBoard(board);
}
}
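A complete backtracking solver built around the isValid check above (illustrative; 0 marks an empty cell):
public class SudokuSolver {
    public static boolean solveSudoku(int[][] board) {
        for (int row = 0; row < 9; row++) {
            for (int col = 0; col < 9; col++) {
                if (board[row][col] == 0) {             // find an empty cell
                    for (int num = 1; num <= 9; num++) {
                        if (isValid(board, row, col, num)) {
                            board[row][col] = num;      // tentatively place num
                            if (solveSudoku(board)) return true;
                            board[row][col] = 0;        // backtrack
                        }
                    }
                    return false; // no digit fits here
                }
            }
        }
        return true; // no empty cells left: solved
    }

    private static boolean isValid(int[][] board, int row, int col, int num) {
        for (int i = 0; i < 9; i++) {
            if (board[row][i] == num || board[i][col] == num ||
                board[row - row % 3 + i / 3][col - col % 3 + i % 3] == num) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        int[][] board = new int[9][9];   // start from an empty grid
        if (solveSudoku(board)) {
            for (int[] row : board) System.out.println(java.util.Arrays.toString(row));
        }
    }
}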
Time Complexity:
• Time Complexity: O(9^(N^2)), where N is the size of the grid (9).
• Space Complexity: O(1)
Tree Algorithms
class Node {
int data;
Node left, right;
System.out.println("Inorder traversal:");
tree.inorder(tree.root);
System.out.println("\nPreorder traversal:");
tree.preorder(tree.root);
System.out.println("\nPostorder traversal:");
tree.postorder(tree.root);
}
}
Time Complexity:
• Time Complexity: O(n), where n is the number of nodes in the tree.
• Space Complexity: O(h), where h is the height of the tree.
class BST {
class Node {
int key;
Node left, right;
Node root;
BST() {
root = null;
}
// Insertion
void insert(int key) {
root = insertRec(root, key);
}
return root;
}
// Search
boolean search(int key) {
return searchRec(root, key);
}
// Deletion
void delete(int key) {
root = deleteRec(root, key);
}
root.key = minValue(root.right);
root.right = deleteRec(root.right, root.key);
}
return root;
}
class LCA {
static class Node {
int data;
Node left, right;
Node(int value) {
data = value;
left = right = null;
}
}
Node root;
class AVLTree {
class Node {
int key, height;
Node left, right;
Node(int d) {
key = d;
height = 1;
}
}
Node root;
int height(Node N) {
if (N == null)
return 0;
return N.height;
}
int getBalance(Node N) {
if (N == null)
return 0;
return height(N.left) - height(N.right);
}
Node rightRotate(Node y) {
Node x = y.left;
Node T2 = x.right;
x.right = y;
y.left = T2;
y.height = Math.max(height(y.left), height(y.right)) + 1;
x.height = Math.max(height(x.left), height(x.right)) + 1;
return x;
}
Node leftRotate(Node x) {
Node y = x.right;
Node T2 = y.left;
y.left = x;
x.right = T2;
x.height = Math.max(height(x.left), height(x.right)) + 1;
y.height = Math.max(height(y.left), height(y.right)) + 1;
return y;
}
return node;
}
Mathematical Algorithms
Modular Exponentiation
Theory:
Modular Exponentiation efficiently computes (base^exponent) % mod using the property of
exponentiation by squaring. This method is efficient for very large numbers.
Algorithm:
1. If exponent == 0, return 1.
2. Recursively compute base^(exponent/2) % mod and square the result.
3. If the exponent is odd, multiply by the base once more.
Code (Java):
java
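// Illustrative sketch of exponentiation by squaring (class and method names are assumptions).
public class ModularExponentiation {
    // Computes (base^exponent) % mod in O(log exponent) steps
    public static long power(long base, long exponent, long mod) {
        if (exponent == 0) return 1 % mod;
        long half = power(base, exponent / 2, mod);   // base^(exponent/2) % mod
        long result = (half * half) % mod;            // square it
        if (exponent % 2 == 1) {
            result = (result * (base % mod)) % mod;   // odd exponent: one extra factor of base
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(2, 10, 1000));       // prints 24 (1024 % 1000)
    }
}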
Other Algorithms
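// From a Floyd–Warshall all-pairs shortest-path example (prints the distance matrix, then runs the algorithm):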
printSolution(dist);
}
floydWarshall(graph);
}
}
Time Complexity:
• Time Complexity: O(V^3), where V is the number of vertices.
• Space Complexity: O(V^2)