DAA
Master Theorem: for recurrences of the form T(n) = a·T(n/b) + f(n)
Where:
o a ≥ 1 = number of subproblems
o b > 1 = factor by which the input shrinks (each subproblem has size n/b)
o f(n) = work done outside the recursion (e.g., dividing and merging)
📊 Three Cases:
1. Case 1:
If f(n) = O(n^(log_b a − ε)) for some ε > 0,
→ Then T(n) = Θ(n^(log_b a))
2. Case 2:
If f(n) = Θ(n^(log_b a)),
→ Then T(n) = Θ(n^(log_b a) · log n)
3. Case 3:
If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and the regularity condition a·f(n/b) ≤ c·f(n) holds for some c < 1,
→ Then T(n) = Θ(f(n))
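The three cases can be checked mechanically when f(n) is a plain polynomial. A minimal sketch (my own illustration, assuming f(n) = Θ(n^k) with no log factors, so the Case 3 regularity condition holds automatically):

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the Master Theorem.

    Assumes f(n) is a plain polynomial n^k (no log factors).
    """
    crit = math.log(a, b)  # critical exponent log_b(a)
    if k < crit:
        return f"Case 1: Theta(n^{crit:g})"
    elif k == crit:
        return f"Case 2: Theta(n^{crit:g} * log n)"
    else:
        return f"Case 3: Theta(n^{k})"

# Merge sort: T(n) = 2T(n/2) + Theta(n) -> Case 2, Theta(n log n)
print(master_theorem(2, 2, 1))
# T(n) = 8T(n/2) + Theta(n^2) -> Case 1, Theta(n^3)
print(master_theorem(8, 2, 2))
```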
🧪 Example:
Solve: T(n) = 2T(n/2) + n
Step 1: Guess
Guess: T(n) = O(n log n)
• Assume T(k) ≤ c·k log k for all k < n.
• Substitute into the recurrence:
T(n) ≤ 2·c·(n/2)·log(n/2) + n
• Simplify:
= c·n(log n − 1) + n = c·n log n − c·n + n ≤ c·n log n (for c ≥ 1)
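The inductive bound can be sanity-checked numerically. A small sketch (assuming the base case T(1) = 1 and n restricted to powers of two):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2T(n/2) + n with T(1) = 1, for n a power of two."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# Check the guessed bound T(n) <= c*n*log2(n) with c = 2
for p in range(1, 15):
    n = 2 ** p
    assert T(n) <= 2 * n * math.log2(n)
print("bound T(n) <= 2*n*log2(n) holds for n = 2 .. 16384")
```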
• Time complexity measures how the time taken by an algorithm grows as the input
size increases.
• It tells us how efficient or slow an algorithm is.
Key Points:
• A loop running once for each element has time complexity O(n).
• Nested loops running through all pairs have time complexity O(n²).
Understanding time complexity helps write faster and more efficient programs.
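The two bullet points above can be made concrete with a short sketch (my own toy example):

```python
def linear_scan(items):
    """One pass over n elements -> O(n) time."""
    total = 0
    for x in items:  # runs n times
        total += x
    return total

def all_pairs(items):
    """Nested loops over all pairs -> O(n^2) time."""
    pairs = []
    for i in range(len(items)):
        for j in range(len(items)):  # n iterations per outer iteration
            pairs.append((items[i], items[j]))
    return pairs

data = [1, 2, 3]
print(linear_scan(data))     # 6
print(len(all_pairs(data)))  # 9 = n^2
```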
4. What do you mean by sorting? Explain heap sort with an algorithm in detail. (8 marks)
What is Sorting?
Step-by-step Example:
Selection Sort is simple but inefficient for large data due to its quadratic time.
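The question above asks for heap sort, which avoids that quadratic cost. A minimal sketch of the standard max-heap version (my own example array):

```python
def heapify(a, n, i):
    """Sift a[i] down so the subtree rooted at i is a max-heap."""
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # build max-heap, O(n)
        heapify(a, n, i)
    for i in range(n - 1, 0, -1):        # move max to the end, re-heapify
        a[0], a[i] = a[i], a[0]
        heapify(a, i, 0)
    return a

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```

Both phases together give O(n log n) time in every case, unlike selection sort's O(n²).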
5. What is a recursion tree? Using a recursion tree, find the asymptotic bound for the given equation. (8 marks)
Summary:
6. What do you mean by an algorithm? Write the characteristics of an algorithm in brief. (4 marks)
Unit 2
1. Type:
a. Kruskal: Greedy algorithm based on edges.
b. Prim: Greedy algorithm based on vertices.
2. Starting Point:
a. Kruskal: Starts with sorted edges, no starting vertex.
b. Prim: Starts from a specific vertex.
3. Approach:
a. Kruskal: Adds the smallest edge that doesn't form a cycle.
b. Prim: Adds the nearest vertex to the growing tree.
4. Cycle Detection:
a. Kruskal: Uses disjoint sets (Union-Find).
b. Prim: No special cycle detection needed.
5. Best For:
a. Kruskal: Sparse graphs.
b. Prim: Dense graphs.
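The Kruskal side of the comparison can be sketched briefly; the union-find structure is what does the cycle detection (graph below is my own example):

```python
def kruskal(n, edges):
    """Kruskal's MST: sort edges, add each edge that joins two components.

    edges: list of (weight, u, v); vertices are 0..n-1.
    Returns (total_weight, chosen_edges).
    """
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:        # different components -> adding w forms no cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen

# 4-vertex graph: MST picks weights 1 + 2 + 3 = 6
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
```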
2. What is the detailed difference between quick sort and merge sort? (8 marks)
1. Basic Idea
a. Quick Sort: Uses a pivot to divide the array into parts (smaller and larger).
b. Merge Sort: Divides the array into halves, then merges sorted halves.
2. Approach
a. Quick Sort: Divide-and-conquer, works in-place.
b. Merge Sort: Divide-and-conquer, uses extra space for merging.
3. Time Complexity
a. Quick Sort:
i. Best/Average: O(n log n)
ii. Worst: O(n²) (bad pivot choice)
b. Merge Sort:
i. Always: O(n log n)
4. Space Complexity
a. Quick Sort: O(log n) (for recursion)
b. Merge Sort: O(n) (needs extra array)
5. Stability
a. Quick Sort: Not stable by default
b. Merge Sort: Stable
6. Use Cases
a. Quick Sort: Faster in practice for large datasets.
b. Merge Sort: Better when stable sort or linked lists are needed.
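Quick sort's in-place behaviour and its sensitivity to pivot choice can be seen in a short sketch (a Hoare-style partition with a random pivot to guard against the O(n²) worst case; example data is my own):

```python
import random

def quick_sort(a):
    """In-place quick sort with a random pivot."""
    def sort(lo, hi):
        if lo >= hi:
            return
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                 # partition around the pivot value
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        sort(lo, j)                   # recurse on both sides
        sort(i, hi)
    sort(0, len(a) - 1)
    return a

print(quick_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```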
3. What is a minimum spanning tree? Explain point-wise in detail. (8 marks)
1. Definition:
A Minimum Spanning Tree is a subgraph of a connected, undirected, weighted
graph that connects all vertices with the minimum possible total edge weight and
no cycles.
2. Spanning Tree:
A spanning tree includes all the vertices and has exactly V - 1 edges (V = number of
vertices).
3. Minimum:
Among all possible spanning trees, MST has the least total weight.
4. Cycle-Free:
MST never contains cycles (loops).
5. Unique MST:
If all edge weights are different, the MST is unique.
6. Network Design:
Used in designing least-cost networks (e.g., internet, roads, electrical grids).
7. Clustering in AI:
Helps in grouping data points in machine learning.
8. Kruskal’s Algorithm:
Sorts edges by weight and picks the smallest non-cycling edges.
9. Prim’s Algorithm:
Starts from a node and grows the tree by adding the smallest connecting edge.
4. What is the divide and conquer strategy? (4 marks)
1. Divide – Break the problem into smaller parts (usually of the same type).
2. Conquer – Solve each subproblem recursively.
3. Combine – Merge the results of subproblems to get the final answer.
✅ Examples:
• Merge Sort
• Quick Sort
• Binary Search
• Strassen’s Matrix Multiplication
Your goal is to select items so the total weight is within the limit, and the total value is as
high as possible.
✅ Example:
Knapsack capacity = 10 kg
Items:
• Total weight = 7 kg
• Total value = ₹120
📚 Types:
• 0/1 Knapsack: Take the whole item or leave it. (Solved with Dynamic
Programming)
• Fractional Knapsack: Take part of an item. (Solved with Greedy Algorithm)
The Knapsack Problem is used in budgeting, resource allocation, and load balancing.
6. Find the optimal solution for the fractional knapsack problem by making use of the greedy approach. (8 marks)
The Fractional Knapsack Problem allows taking fractions of items to maximize value
within a weight limit. The Greedy approach is ideal here and guarantees an optimal
solution.
🧮 Example:
Knapsack capacity = 50 kg
Items:
Item | Weight | Value | Value/Weight
A    | 10     | 60    | 6.0
B    | 20     | 100   | 5.0
C    | 30     | 120   | 4.0
🏁 Final Answer:
Take all of A (10 kg) and all of B (20 kg), then 20 kg (two-thirds) of C: total weight = 50 kg, total value = 60 + 100 + 80 = 240.
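The greedy computation for the instance above can be sketched as:

```python
def fractional_knapsack(capacity, items):
    """Greedy fractional knapsack: take items in decreasing value/weight
    order, splitting the last item if it does not fit entirely."""
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for weight, value in items:
        if capacity == 0:
            break
        take = min(weight, capacity)   # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total

# Items from the table: A(10 kg, 60), B(20 kg, 100), C(30 kg, 120); capacity 50 kg
print(fractional_knapsack(50, [(10, 60), (20, 100), (30, 120)]))  # 240.0
```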
1. Definition:
The Greedy Approach is a problem-solving technique that makes the best choice
at each step to find an overall optimal solution.
2. Key Idea:
It builds a solution piece by piece, always picking the locally optimal choice
without worrying about future consequences.
3. How It Works:
a. Start with an empty solution.
b. At each step, choose the option that looks best right now.
c. Repeat until the problem is solved or no more choices remain.
4. When to Use:
Works well for problems where local optimum choices lead to global optimum,
like in:
a. Fractional Knapsack
b. Activity Selection
c. Huffman Coding
d. Prim’s and Kruskal’s MST algorithms
5. Advantages:
a. Simple and intuitive
b. Usually fast and efficient
c. Easy to implement
6. Disadvantages:
a. Doesn’t always guarantee the optimal solution (not suitable for every
problem)
b. Can get stuck in local optima without reaching the best global solution
7. Example:
In the Fractional Knapsack problem, the greedy method picks items with the
highest value-to-weight ratio first to maximize total value.
Unit 3
1. Purpose:
Finds the position of a pattern P in a text T. Very efficient for large texts and long
patterns.
2. Main Idea:
Compares pattern characters from right to left. On a mismatch, uses two
heuristics to skip ahead intelligently.
3. Heuristics Used:
a. Bad Character Heuristic: On mismatch, shifts pattern so that mismatched
character in text aligns with its last occurrence in the pattern (or skips
entirely if not found).
b. Good Suffix Heuristic: If suffix matched before a mismatch, aligns next
occurrence of the suffix in pattern or skips if none.
4. Preprocessing Time:
O(m + σ) where m is pattern length, σ is alphabet size.
5. Search Time:
Best case O(n/m), worst case O(n + m), where n is text length.
📌 Example
The algorithm compares the pattern to the text from right to left. On a mismatch, it uses
two preprocessed rules to decide how far to shift the pattern:
1. Bad Character Rule: If a mismatch occurs, the pattern is shifted so that the
mismatched text character aligns with its last occurrence in the pattern. If the
character doesn't exist in the pattern, the pattern is shifted past it entirely.
2. Good Suffix Rule: If part of the pattern matches the text before a mismatch, the
pattern shifts to align the matched suffix with its next occurrence in the pattern. If
the suffix doesn't appear elsewhere, it shifts to align a prefix that matches the suffix.
The preprocessing step builds tables for both rules in O(m + σ) time (m = pattern length, σ
= alphabet size). The search itself takes O(n) time in the worst case but is often faster in
practice.
Boyer-Moore is widely used in tools like text editors and search functions for its speed and
efficiency.
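The bad character rule on its own already gives useful skips. A minimal sketch using only that heuristic (omitting the good suffix rule for brevity; the example strings are my own):

```python
def bad_character_table(pattern):
    """Last index of each character in the pattern (bad-character rule)."""
    return {ch: i for i, ch in enumerate(pattern)}

def boyer_moore_search(text, pattern):
    """Boyer-Moore search using only the bad-character heuristic.
    Returns the start indices of all occurrences of pattern in text."""
    last = bad_character_table(pattern)
    m, n = len(pattern), len(text)
    hits, s = [], 0
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:  # compare right to left
            j -= 1
        if j < 0:
            hits.append(s)
            s += 1
        else:
            # align the mismatched text char with its last occurrence in pattern
            s += max(1, j - last.get(text[s + j], -1))
    return hits

print(boyer_moore_search("ABAAABCD", "ABC"))  # [4]
```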
3. Classify the steps used for the development of dynamic programming. (8 marks)
1. Definition:
Dynamic Programming is a technique for solving problems by breaking them down
into simpler subproblems and storing their solutions to avoid redundant work.
2. Key Features:
a. Overlapping Subproblems: The problem can be broken into subproblems
which repeat.
b. Optimal Substructure: The optimal solution of the main problem depends
on optimal solutions of its subproblems.
3. Approaches:
a. Top-Down (Memoization): Use recursion and cache results of subproblems.
b. Bottom-Up (Tabulation): Build a table iteratively from smallest
subproblems.
4. Steps to Develop a DP Solution:
a. Identify subproblems.
b. Define a recurrence relation.
c. Initialize base cases.
d. Fill the DP table using the relation.
5. Advantages:
a. Avoids redundant calculations.
b. Reduces time complexity from exponential to polynomial in many cases.
6. Common Applications:
a. Fibonacci sequence, Knapsack problem, Longest Common Subsequence,
Matrix Chain Multiplication.
1. Identify if DP is Applicable
Check for optimal substructure (solution to the problem can be built from
solutions to subproblems) and overlapping subproblems (same subproblems are
solved multiple times).
2. Define the Subproblem
Break the problem into smaller, manageable subproblems. Clearly state what each
subproblem represents in the context of the overall problem.
3. Formulate the Recurrence Relation
Establish a formula or rule (recurrence) that expresses the solution of a larger
subproblem using solutions to smaller ones.
4. Choose a DP Approach
a. Top-down (Memoization): Solve recursively and store the results of
subproblems to avoid recomputation.
b. Bottom-up (Tabulation): Solve subproblems iteratively and build up the final
solution using a table.
5. Initialize Base Cases
Provide starting values for the smallest subproblems. These serve as the
foundation for solving larger subproblems.
6. Compute and Store Results
Use the recurrence relation and base cases to fill the DP table or cache. Solve each
subproblem only once.
7. Reconstruct the Solution (if needed)
Backtrack through the DP table to find the choices or path that led to the optimal
solution.
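The seven steps above can be sketched on the simplest DP example, Fibonacci, using the bottom-up (tabulation) approach:

```python
def fib_tabulation(n):
    """Bottom-up DP: subproblem dp[i] = F(i);
    recurrence dp[i] = dp[i-1] + dp[i-2]; base cases dp[0]=0, dp[1]=1."""
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1                      # base cases
    for i in range(2, n + 1):      # fill the table from smallest subproblems
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_tabulation(10))  # 55
```

Each subproblem is solved once, so the exponential naive recursion becomes O(n).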
6. Short notes:
1) Longest Common Subsequence problem
2) Matrix Chain Multiplication
🔍 Example:
For strings "ABCBDAB" and "BDCAB", the LCS is "BCAB" (length 4).
✅ Key Characteristics:
• Substructure: If last characters match, LCS includes it; else, exclude one
character and take max LCS of resulting pairs.
• Overlapping Subproblems: Same subproblems (substrings) are solved multiple
times.
📐 DP Formula:
• If X[i-1] == Y[j-1]:
dp[i][j] = 1 + dp[i-1][j-1]
• Else:
dp[i][j] = max(dp[i-1][j], dp[i][j-1])
⏱ Time Complexity:
O(m · n) time and space for strings of lengths m and n.
LCS is widely used in diff tools, DNA sequence analysis, and version control systems.
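The DP formula above can be sketched directly, using the example strings from this section:

```python
def lcs_length(x, y):
    """Bottom-up LCS: dp[i][j] = LCS length of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1          # last chars match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCAB"))  # 4
```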
• Problem:
Given a sequence of matrices A1, A2, ..., An, find the most efficient way to multiply them by fully parenthesizing the product to minimize the total number of scalar multiplications.
• Key Point:
Matrix multiplication is associative, but the order of multiplication affects the computation cost.
• Goal:
Determine the optimal parenthesization to minimize computation, not to perform the multiplication itself.
• Dynamic Programming Approach:
o Define dp[i][j] as the minimum number of multiplications needed to multiply matrices Ai through Aj.
o Recurrence: dp[i][j] = min over i ≤ k < j of dp[i][k] + dp[k+1][j] + p(i−1)·p(k)·p(j), where matrix Ai has dimensions p(i−1) × p(i).
7. Discuss the basic divide & conquer approach for matrix multiplication. (8 marks)
3. Step 1 – Divide:
Split the n×n matrices A and B into four (n/2)×(n/2) blocks:
A = | A11 A12 | , B = | B11 B12 |
    | A21 A22 |       | B21 B22 |
4. Step 2 – Conquer:
Compute the sub-blocks of the result matrix C recursively using these formulas:
C11 = A11×B11 + A12×B21
C12 = A11×B12 + A12×B22
C21 = A21×B11 + A22×B21
C22 = A21×B12 + A22×B22
5. Step 3 – Combine:
Combine the four submatrices C11, C12, C21, C22 to get the full result matrix C.
6. Time Complexity:
Recurrence: T(n) = 8T(n/2) + O(n^2)
Solving gives T(n) = O(n^3), same as naive multiplication.
7. Significance:
Forms the foundation for faster algorithms like Strassen’s Algorithm, which
reduces the number of recursive multiplications.
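The three steps above can be sketched recursively (assuming n is a power of two, with a tiny 2×2 usage example of my own):

```python
def mat_mult(A, B):
    """Divide-and-conquer multiplication of n x n matrices, n a power of two.
    Eight recursive products per level: T(n) = 8T(n/2) + O(n^2) = O(n^3)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):  # extract an h x h block
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    C11 = add(mat_mult(A11, B11), mat_mult(A12, B21))
    C12 = add(mat_mult(A11, B12), mat_mult(A12, B22))
    C21 = add(mat_mult(A21, B11), mat_mult(A22, B21))
    C22 = add(mat_mult(A21, B12), mat_mult(A22, B22))
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]  # combine the four blocks
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Strassen's algorithm improves on this by using 7 recursive products instead of 8.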
Purpose:
KMP efficiently finds occurrences of a pattern P in a text T by avoiding unnecessary re-
comparisons after a mismatch.
Key Idea:
Steps:
Example:
• Text: "ABABDABACDABABCABAB"
• Pattern: "ABABCABAB"
When mismatch occurs after matching part of the pattern, the algorithm uses the LPS
array to slide the pattern over without rechecking characters.
Complexity:
In detail
Objective:
Efficiently find occurrences of a pattern P in a text T by minimizing redundant
comparisons.
How KMP Works
Example
• Text T: "ABABDABACDABABCABAB"
• Pattern P: "ABABCABAB"
During search, after some matches, if a mismatch occurs at P[4] vs T[...], use LPS to
shift pattern by 2 instead of 4, reducing comparisons.
Time Complexity
KMP avoids backtracking in the text, making it faster than naive search for large inputs.
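The LPS (failure function) table driving those shifts can be computed as follows, using the pattern from the example above:

```python
def compute_lps(pattern):
    """LPS[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it (the KMP failure function)."""
    lps = [0] * len(pattern)
    length = 0                          # length of the current matched prefix
    for i in range(1, len(pattern)):
        while length > 0 and pattern[i] != pattern[length]:
            length = lps[length - 1]    # fall back to a shorter prefix
        if pattern[i] == pattern[length]:
            length += 1
        lps[i] = length
    return lps

print(compute_lps("ABABCABAB"))  # [0, 0, 1, 2, 0, 1, 2, 3, 4]
```

On a mismatch at position j, the pattern is shifted by j − lps[j−1] without re-reading any text character, giving O(n + m) total time.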
Unit 4
🔙 Backtracking in Algorithms
1. Definition:
Backtracking is a systematic method for solving problems by trying out partial
solutions and abandoning them if they fail to satisfy the problem constraints.
2. Key Idea:
Build solutions incrementally, one piece at a time, and backtrack (undo) steps
when the current path leads to no solution.
3. Approach:
a. Explore all possible options (search space).
b. When a choice violates the constraints, backtrack to the previous step and
try a different option.
4. Problem Types:
Suitable for problems involving combinatorial search, such as puzzles,
permutations, combinations, graph coloring, and constraint satisfaction problems.
5. Algorithmic Structure:
a. Recursive function that tries options.
b. If an option is invalid, undo (backtrack) and try the next one.
6. Advantages:
a. Simple to implement.
b. Finds all solutions if they exist.
c. Prunes the search space by eliminating invalid paths early.
7. Limitations:
a. Can be slow (exponential time) if the search space is large.
b. Often improved with heuristics or pruning techniques (like branch and
bound).
The 8-Queens problem involves placing 8 queens on an 8×8 chessboard so that no two
queens threaten each other — meaning no two queens share the same row, column, or
diagonal.
Step-by-step process:
1. Place queens column-wise: Start from the leftmost column and place a queen in
the first safe row.
2. Check safety: For each attempted position (row, col), ensure:
a. No other queen is in the same row.
b. No other queen is on the upper-left diagonal.
c. No other queen is on the lower-left diagonal.
3. Recursive placement: If a safe position is found, place the queen and recursively
try to place the next queen in the next column.
4. Backtrack: If no safe position is found in the current column, backtrack to the
previous column and move the queen to the next possible safe row.
5. Continue this process until all 8 queens are placed safely.
6. Result: Once all queens are placed, record or print the solution.
8-Queens Problem
Problem Explanation:
• Each queen can attack any piece in the same row, column, or diagonal.
• The challenge is to place all 8 queens so none threaten each other.
Approach:
Example:
Suppose a queen is placed at (row 1, column 1). The algorithm then tries to place the second queen in column 2. Row 1 and the attacked diagonals are ruled out, so it places the queen at (row 3, column 2). Continuing this way, the algorithm places queens at (row 5, column 3), (row 7, column 4), and so forth, backtracking whenever no valid placement is found.
Result:
. Q . . . . . .
. . . Q . . . .
Q . . . . . . .
. . . . . Q . .
. . Q . . . . .
. . . . . . . Q
. . . . Q . . .
. . . . . . Q .
This ensures all queens are safe and demonstrates the power of backtracking in solving
complex problems.
4. Write the rules of the 8-queens problem and find one solution of the 8-queens problem using backtracking. (8 marks)
• Column 1 → Row 1
• Column 2 → Row 5
• Column 3 → Row 8
• Column 4 → Row 6
• Column 5 → Row 3
• Column 6 → Row 7
• Column 7 → Row 2
• Column 8 → Row 4
Board Representation:
Q . . . . . . .
. . . . Q . . .
. . . . . . . Q
. . . . . Q . .
. Q . . . . . .
. . . Q . . . .
. . Q . . . . .
. . . . . . Q .
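The column-by-column backtracking described above can be sketched compactly; trying rows in ascending order produces exactly the solution listed (column → row: 1, 5, 8, 6, 3, 7, 2, 4):

```python
def solve_n_queens(n=8):
    """Place queens column by column; backtrack on conflicts.
    Returns the first solution: rows[c] = row of the queen in column c (1-based)."""
    rows = []

    def safe(row, col):
        for c, r in enumerate(rows):
            if r == row or abs(r - row) == abs(c - col):  # same row or diagonal
                return False
        return True

    def place(col):
        if col == n:
            return True
        for row in range(n):
            if safe(row, col):
                rows.append(row)
                if place(col + 1):
                    return True
                rows.pop()              # backtrack: undo and try the next row
        return False

    place(0)
    return [r + 1 for r in rows]

print(solve_n_queens())  # [1, 5, 8, 6, 3, 7, 2, 4]
```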
Backtracking
Recursive Backtracking
Summary
Examples include the N-Queens problem, Sudoku solver, and graph coloring problems.
Algorithm Steps:
Key Points:
This method systematically explores all possible vertex permutations to find a Hamiltonian
cycle if one exists.
A Hamiltonian Cycle in a graph is a cycle that visits every vertex exactly once and returns
to the starting vertex. Unlike the Eulerian path that visits every edge, the Hamiltonian Cycle
focuses on visiting every vertex without repetition.
Key Points:
Example:
• 0—1
• 0—3
• 1—2
• 1—3
• 2—3
The cycle 0 → 1 → 2 → 3 → 0 visits every vertex once and returns to 0, forming a Hamiltonian cycle.
Application & Complexity:
Summary:
The Hamiltonian Cycle checks if a path exists covering all vertices exactly once in a cycle.
It is fundamental in graph theory with important real-world uses but is computationally
challenging for large graphs.
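A backtracking search for such a cycle can be sketched on the example graph above (edges 0-1, 0-3, 1-2, 1-3, 2-3):

```python
def hamiltonian_cycle(adj):
    """Backtracking search for a Hamiltonian cycle; adj is an adjacency matrix.
    Returns the cycle as a vertex list, or None if no cycle exists."""
    n = len(adj)
    path = [0]                           # fix vertex 0 as the start

    def extend():
        if len(path) == n:
            return bool(adj[path[-1]][0])  # must be able to return to the start
        for v in range(n):
            if v not in path and adj[path[-1]][v]:
                path.append(v)
                if extend():
                    return True
                path.pop()               # backtrack
        return False

    return path + [0] if extend() else None

adj = [[0, 1, 0, 1],
       [1, 0, 1, 1],
       [0, 1, 0, 1],
       [1, 1, 1, 0]]
print(hamiltonian_cycle(adj))  # [0, 1, 2, 3, 0]
```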
8. What is the chromatic number? Give the state space tree for the 4-colouring problem and explain the graph colouring problem. (8 marks)
Definition:
Graph colouring is the process of assigning colours to the vertices of a graph such that no
two adjacent vertices share the same colour.
• Example:
For a triangle graph (3-cycle), the chromatic number is 3 because all three vertices
are connected to each other.
Graph Colouring Problem Statement
Given:
A graph G = (V, E) and an integer m (number of colours).
Objective:
Determine whether it is possible to colour the vertices using at most m colours so that no
two adjacent vertices share the same colour.
This is a classic constraint satisfaction problem and can be solved using backtracking.
The state space tree is used in backtracking to represent partial colourings of vertices.
Each level of the tree corresponds to one vertex, and each branch represents a possible
colour assignment.
Let vertices be V = {v1, v2, v3, v4} and colours = {1, 2, 3, 4}.
At each level:
The root of the tree is an empty assignment. Each child adds a colour for the next vertex.
Conclusion
• The chromatic number gives the smallest number of colours needed for proper
colouring.
• The graph colouring problem can be solved using backtracking with state space
trees.
• It has practical applications in register allocation, map colouring, and scheduling
problems.
Graph colouring is a method of assigning labels, commonly called colours, to the vertices
of a graph such that no two adjacent (connected) vertices share the same colour.
✅ Purpose:
The goal is to colour a graph using the minimum number of colours while ensuring that
adjacent vertices have different colours. This minimum number is called the chromatic
number of the graph.
🎯 Applications:
• Scheduling: Assigning time slots to classes so that no two conflicting classes occur
at the same time.
• Map colouring: Ensuring adjacent countries or regions are shaded differently.
• Register allocation: Assigning variables to registers in compilers.
📘 Example:
A
/ \
B---C
\ /
D
Edges:
• A–B
• A–C
• B–C
• B–D
• C–D
🔵 Step-by-Step Colouring:
We want to colour this graph so that no two connected vertices share the same colour.
• A → Colour 1
• B → Colour 2
• C → Colour 3
• D → Colour 1
🧠 Key Points:
• The graph was coloured using 3 colours → So, the chromatic number χ(G) = 3.
• The graph colouring method prevents conflicts and optimizes resource usage in
practical problems.
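The backtracking colouring of the example graph (edges A–B, A–C, B–C, B–D, C–D) can be sketched as:

```python
def can_colour(adj, m):
    """Backtracking m-colouring: assign colours vertex by vertex,
    rejecting any colour already used by an adjacent vertex."""
    n = len(adj)
    colour = [0] * n                    # 0 = uncoloured

    def assign(v):
        if v == n:
            return True
        for c in range(1, m + 1):
            if all(not adj[v][u] or colour[u] != c for u in range(n)):
                colour[v] = c
                if assign(v + 1):
                    return True
                colour[v] = 0           # backtrack
        return False

    return assign(0)

# Example graph: A-B, A-C, B-C, B-D, C-D  (vertices 0=A, 1=B, 2=C, 3=D)
adj = [[0, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 1],
       [0, 1, 1, 0]]
print(can_colour(adj, 3), can_colour(adj, 2))  # True False -> chromatic number 3
```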
🧩 Definition:
Objective:
Select a subset of items such that:
• Total weight ≤ W
• Total value is maximized
• You cannot split items (either take it or leave it)
🔧 Example:
Capacity W = 50; items: 1 (weight 10, value 60), 2 (weight 20, value 100), 3 (weight 30, value 120).
Optimal solution: Take items 2 and 3 → total value = 220, total weight = 50
💡 Solution Methods:
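The standard bottom-up DP solution can be sketched briefly (using the classic instance W = 50 with item values 60, 100, 120 and weights 10, 20, 30, consistent with the stated optimum of 220):

```python
def knapsack_01(capacity, items):
    """Bottom-up 0/1 knapsack: dp[w] = best value achievable with weight limit w."""
    dp = [0] * (capacity + 1)
    for weight, value in items:
        # iterate weights in reverse so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

print(knapsack_01(50, [(10, 60), (20, 100), (30, 120)]))  # 220
```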
🧠 Applications:
• Resource allocation
• Budget optimization
• Cargo loading
✅ Conclusion:
Unit 5
This means:
🔷 5. Implication:
If any NP-complete problem (like SAT) can be solved in polynomial time, then all
problems in NP can be solved in polynomial time, i.e., P = NP.
🔷 6. Conclusion:
"The Boolean satisfiability problem (SAT) is NP-complete, meaning every problem in the
class NP can be reduced to SAT in polynomial time."
🔑 In Other Words:
• SAT is in NP.
• Every other problem in NP can be transformed into an instance of SAT using a
polynomial-time algorithm.
🧠 Importance:
This means:
• SAT ∈ NP
• Every problem in NP can be polynomial-time reduced to SAT
1. SAT is in NP:
Given a Boolean formula and a truth assignment, we can evaluate the formula in
polynomial time to check satisfiability.
2. Any NP problem reduces to SAT:
Let L be any language in NP. By definition, there exists a nondeterministic Turing Machine (NTM) M that accepts L in polynomial time.
3. Construction:
For an input x, construct a Boolean formula φ that simulates the computation of M on x within polynomial time.
a. Encode the machine's tape, state transitions, and head movements as variables.
b. The formula φ is satisfiable iff M accepts x.
4. Conclusion:
If x ∈ L, then φ is satisfiable.
Thus, any NP problem can be reduced to SAT in polynomial time.
✅ Final Result:
SAT is NP-complete, and Cook’s Theorem proves the first NP-complete problem,
forming the basis of NP-completeness theory.
2. Define FIFO and LC (least-cost) search. (4 marks)
FIFO Search:
FIFO search, also known as Breadth-First Search (BFS), explores all nodes at the current
depth before moving to the next level. It uses a queue data structure, where the first node
added is the first to be expanded. It guarantees the shortest path in terms of the number of
steps if all edge costs are equal.
LC (Least-Cost) Search:
LC search, also known as Uniform Cost Search, expands the node with the lowest total
cost from the start node, regardless of depth. It uses a priority queue where nodes are
ordered by their path cost. It is optimal when all costs are non-negative and finds the least-
cost path to the goal.
FIFO Branch and Bound is a search strategy used to solve optimization problems,
combining the branch and bound method with a First-In, First-Out (FIFO) queue for node
exploration.
How it works:
Advantages:
Limitations:
In summary, FIFO Branch and Bound uses a queue to explore nodes level by level while
pruning unpromising paths, balancing simplicity and pruning efficiency.
4. Explain FIFO branch and bound and LC branch and bound with examples. (8 marks)
Explanation:
Example:
Consider a shortest path problem in a graph where you want to find the minimum cost
from a start node to goal.
Explanation:
Example:
5. Define the branch and bound method with the different types of nodes used. (8 marks)
Definition:
The Branch and Bound method is an algorithmic technique used to solve optimization
problems (like combinatorial problems) by systematically exploring candidate solutions.
This approach ensures that the optimal solution is found efficiently without exploring all
possible solutions.
Types of Nodes Used in Branch and Bound:
1. Live Node:
a. A node that is generated but not yet expanded.
b. It represents a subproblem that may contain the optimal solution.
c. Stored in a data structure (queue, stack, or priority queue) for future
expansion.
2. Dead Node:
a. A node that is pruned or fully explored.
b. Pruned because its bound shows it cannot lead to a better solution than the
current best.
c. Or fully expanded, so it no longer needs exploration.
3. Active Node:
a. Sometimes used to refer to nodes currently under consideration for
expansion.
b. Can be synonymous with live nodes depending on context.
Summary:
• The method navigates through live nodes, expanding them and pruning those that
are dead nodes based on bounds.
• It ensures efficient search by discarding non-promising paths early.
• Data structures and bounding functions define how nodes are selected and pruned.
Algorithmic Techniques
7. FIFO Branch and Bound
• Purpose: Solve optimization problems (e.g., TSP, Knapsack) by exploring state-
space trees.
• Mechanism:
o Uses a queue (FIFO) for node exploration.
o Bounds prune non-promising branches (e.g., cost > current best).
• Example: 0/1 Knapsack with profit-based bounding.
Summary Table
Class   | Solvable in P? | Verifiable in P? | Example
NP      | ?              | Yes              | SAT
NP-Hard | No             | Maybe not        | Halting Problem
Key Takeaway:
• P ⊆ NP; NP-Complete problems are exactly those in NP that are also NP-Hard (NP-Complete = NP ∩ NP-Hard).
• NP-Hard problems may not even be decidable!
Aspect                | NP-Complete                                        | NP-Hard
Examples              | 3-SAT, Knapsack (decision version), Hamiltonian Path | Traveling Salesman (optimization), Halting Problem
Practical Implication | Represents the hardest problems in NP              | Includes unsolvable problems (e.g., undecidable)
7. What do you mean by an NP-Hard problem? Explain at least one NP-Hard problem in detail. (8 marks)
1. Definition:
An NP-Hard problem is at least as hard as the hardest problems in NP
(Nondeterministic Polynomial time).
2. No Efficient Solution Known:
There is no known polynomial-time algorithm to solve all NP-Hard problems.
3. Not Necessarily in NP:
NP-Hard problems may not even be verifiable in polynomial time (unlike NP
problems).
4. Key Property:
If you could solve an NP-Hard problem in polynomial time, you could solve all NP
problems in polynomial time.
1. Problem Statement:
Given a list of cities and distances, find the shortest possible route that visits
each city once and returns to the start.
2. Input:
A set of n cities and a distance matrix giving the distance between each pair of cities.
3. Output:
A tour (cycle) that visits every city once and returns to the start, with the minimum
total distance.
4. Combinatorial Explosion:
Number of possible tours = (n − 1)!/2.
For n = 20, that's roughly 6 × 10^16 routes!
5. Why NP-Hard?
a. The decision version ("Is there a tour shorter than k?") is NP-Complete.
b. The optimization version (find the shortest route) is NP-Hard.
6. Real-World Applications:
a. Logistics and delivery routing
b. Circuit board layout
c. DNA sequencing
d. Path planning in robotics
7. Approach in Practice:
a. Exact algorithms: Only practical for small n.
b. Heuristics: Nearest Neighbor, Christofides Algorithm.
c. Metaheuristics: Genetic Algorithms, Simulated Annealing.
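The combinatorial explosion is easy to see in code: an exact brute-force solver must try every tour, which is only feasible for tiny n (the 4-city distance matrix below is my own example):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by trying all (n-1)! tours starting at city 0.
    Feasible only for very small n."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))  # (80, (0, 1, 3, 2, 0))
```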
🧠 Key Takeaways
🔍 NP vs NP-Complete vs NP-Hard
✅ 2. NP-Complete Problems
✅ 3. NP-Hard Problems
🧠 Conclusion
• NP-Complete ⊆ NP-Hard.
• All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-
Complete.
• NP-Hard problems may be even more difficult than NP problems.
Least-cost search is a search strategy that expands the node with the lowest total path
cost from the start node, not just the depth or number of steps. It is often implemented
using a priority queue, where nodes are ordered by their cumulative cost.
This method guarantees finding the optimal solution (i.e., the lowest-cost path), assuming
all step costs are non-negative.
It is also called Uniform Cost Search (UCS) and is a variant of Dijkstra’s algorithm when
used in graphs.
✅ Key Points:
Least-cost search, also known as Uniform Cost Search (UCS), is a search algorithm that
always expands the least costly path from the starting node. It uses a priority queue to
keep track of nodes, ordered by their cumulative path cost (not depth or heuristic).
✅ How It Works:
A --2--> B --2--> D
\ |
\--1--> C --5--> D
UCS will expand C first (cost 1), then B (cost 2); it reaches D via A → B → D at total cost 4, which beats A → C → D at cost 6.
🧠 Conclusion:
Least-cost search is complete, optimal, and ideal when step costs vary.
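The priority-queue mechanics can be sketched on the graph above (edge costs A→B = 2, A→C = 1, B→D = 2, C→D = 5):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always expand the frontier node with the lowest cumulative cost.
    graph: {node: [(neighbour, step_cost), ...]}. Returns (cost, path)."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest node first
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 2), ("C", 1)],
         "B": [("D", 2)],
         "C": [("D", 5)]}
print(uniform_cost_search(graph, "A", "D"))  # (4, ['A', 'B', 'D'])
```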
Given a list of cities and the distances between each pair of them, what is the shortest
possible route that visits each city exactly once and returns to the starting city?
TSP is formally defined on a complete weighted graph, where:
The goal is to find a Hamiltonian cycle (a path visiting every node once) with the minimum
total weight.
TSP is known to be NP-Hard, meaning no efficient algorithm is known to solve all instances
quickly. The number of possible tours is (n−1)!/2 for n cities, making brute-force
approaches infeasible for large inputs.