
Design and Analysis of Algorithms
UNIT 5

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Limitations of Algorithm Power



Lower-Bound Arguments

1. Introduction to Lower-Bound Arguments:
   1. Lower-bound arguments assess the efficiency of algorithms by comparing them to other algorithms solving the same problem.
   2. It is essential to understand both the efficiency class of an algorithm and the minimum number of operations required to solve a problem.



2. Trivial Lower Bounds:
   1. The simplest method for obtaining a lower-bound class is counting the number of input and output items that any algorithm must process.
   2. For example, generating all permutations of n items requires at least Ω(n!) operations, because there are n! permutations to output.
   3. Similarly, evaluating a polynomial of degree n requires processing all n + 1 coefficients, giving a trivial lower bound of Ω(n).

3. Information-Theoretic Arguments:
   1. These arguments establish lower bounds based on the amount of information an algorithm has to produce.
   2. For example, in a number-guessing game, any algorithm needs at least ⌈log2 n⌉ questions to identify a number between 1 and n.
   3. Each yes/no question yields at most 1 bit of information, hence the lower bound of ⌈log2 n⌉ steps.
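The guessing-game bound can be made concrete. A minimal Python sketch (the function name is my own) that asks binary "is it greater than mid?" questions and counts them; it always finishes within ⌈log2 n⌉ questions, matching the information-theoretic lower bound:

```python
import math

def guess_number(secret, n):
    """Binary-search guessing: each yes/no question ("is it > mid?")
    yields at most one bit of information, so ceil(log2 n) questions
    always suffice, and by the information-theoretic bound are also
    necessary in the worst case."""
    lo, hi, questions = 1, n, 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if secret > mid:
            lo = mid + 1
        else:
            hi = mid
    return lo, questions

print(guess_number(13, 16))  # (13, 4): exactly ceil(log2 16) = 4 questions
```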



4. Adversary Arguments:
   1. Adversary arguments involve playing the role of a hostile adversary to prove lower bounds.
   2. For example, in merging two sorted lists of size n, an adversary can force any correct algorithm to make at least 2n - 1 key comparisons.
   3. By answering comparisons according to specific rules, the adversary ensures that the algorithm follows the most time-consuming path.
   4. Adversary arguments establish lower bounds by showing that no algorithm can beat the level of difficulty set by the adversary.

5. Problem Reduction:
   1. This approach compares the complexity of one problem to another by reducing one problem to the other.
   2. If a problem P with a known lower bound can be reduced to problem Q, then that lower bound applies to Q as well, since Q is at least as hard as P.
   3. For example, the element uniqueness problem, with its known Ω(n log n) lower bound, reduces to the Euclidean minimum spanning tree problem, establishing an Ω(n log n) lower bound for the latter.

By employing these different methods, we can establish lower bounds for various problems, providing insights into the inherent complexity of algorithmic tasks.
Decision Trees

Decision Trees Overview:
1. Decision trees are a visual representation of how an algorithm makes decisions based on comparisons of input elements.
2. Each internal node in the tree represents a comparison operation, usually denoted as a condition (e.g., "a < b").
3. The branches from each node represent the possible outcomes of the comparison (e.g., "yes" or "no").
4. The leaves of the tree represent the possible outcomes or final states of the algorithm for a given input.



Analyzing the Decision Tree:
• The height of the decision tree corresponds to the maximum number of comparisons needed to reach a final state.
• The worst-case scenario for the algorithm occurs when it follows the longest path from the root to a leaf node.
• The number of comparisons made by the algorithm in the worst case is therefore equal to the height of the decision tree.
• The height of a binary tree with l leaves is at least ⌈log2 l⌉, as determined by the inequality below.

Lower Bound on Decision Tree Height:
1. The inequality h ≥ ⌈log2 l⌉ provides a lower bound on the height (or depth) of binary decision trees.
2. Here, h represents the height of the tree, and l represents the number of leaves (final outcomes) in the tree.
3. This inequality implies that the height of a decision tree must be at least log2 of the number of its leaves.
4. In other words, it provides a benchmark for assessing the performance of comparison-based algorithms: they cannot be more efficient than this lower bound.
Application to Sorting and Searching Algorithms:
1. Decision trees can be used to analyze the performance of sorting and searching algorithms by considering the number of comparisons they make.
2. By constructing decision trees for these algorithms, we can determine their worst-case time complexity from the height of the trees and the number of leaves.
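For sorting, the decision tree must distinguish all n! orderings of the input, so it needs at least n! leaves and hence height at least ⌈log2 n!⌉. A small sketch (function name mine) computing this floor on worst-case comparisons:

```python
import math

def sort_lower_bound(n):
    """Minimum worst-case comparisons for any comparison-based sort:
    the decision tree needs at least n! leaves, so its height is at
    least ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (3, 4, 5):
    print(n, sort_lower_bound(n))  # 3 -> 3, 4 -> 5, 5 -> 7
```

For n = 5 the bound is 7, which is in fact achievable: five elements can always be sorted with 7 comparisons.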



P, NP, and NP-Complete Problems

● Can a computer solve all computational problems? The answer is no; some problems are unsolvable by any algorithm. For the solvable problems, the question becomes how much time a solution takes.

● In computer science, problems are grouped by the resources required to solve them into classes known as complexity classes.

● P, NP, NP-Hard, and NP-Complete are the complexity classes/sets discussed here. Any solvable computational problem falls into at least one of these categories.



P (Polynomial Time):
• P refers to the class of decision problems (problems with yes/no answers) that can be solved by algorithms running in polynomial time.
• In simpler terms, these are problems where the time it takes to solve them grows at most polynomially with the size of the input.
• For example, sorting a list of numbers or finding the shortest path in a graph belongs to P, because algorithms like quicksort and Dijkstra's algorithm solve them in polynomial time.

NP (Nondeterministic Polynomial Time):
• NP refers to the class of decision problems where a proposed solution can be verified in polynomial time.
• In other words, if someone gives you a solution, you can quickly check whether it is correct or not.
• However, finding the solution itself may not be easy, and may require exponential time.
• An example of an NP problem is the traveling salesman problem: given a list of cities and the distances between them, it is easy to verify that a proposed route visits each city exactly once and returns to the starting city, but finding the shortest route is computationally difficult.
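The "easy to verify" half of the TSP example can be sketched directly: checking a proposed route takes polynomial time even though finding the best route is hard. The function name and the four-city distance dictionary below are hypothetical illustrations:

```python
def verify_tour(dist, route, start):
    """Polynomial-time verifier: check that `route` starts and ends at
    `start` and visits every city exactly once; return its length, or
    None if the route is invalid."""
    cities = set(dist)
    if route[0] != start or route[-1] != start:
        return None
    visited = route[:-1]
    if set(visited) != cities or len(visited) != len(cities):
        return None
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Hypothetical 4-city instance.
dist = {
    'A': {'A': 0, 'B': 1, 'C': 3, 'D': 6},
    'B': {'A': 1, 'B': 0, 'C': 2, 'D': 3},
    'C': {'A': 3, 'B': 2, 'C': 0, 'D': 1},
    'D': {'A': 6, 'B': 3, 'C': 1, 'D': 0},
}
print(verify_tour(dist, ['A', 'B', 'C', 'D', 'A'], 'A'))  # 10
print(verify_tour(dist, ['A', 'B', 'C', 'A'], 'A'))       # None (D missing)
```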



NP-Complete Problems:
• NP-complete (NPC) problems are a special class of NP problems that are the hardest in NP.
• If you can find a polynomial-time algorithm for any one of these problems, you can solve all NP problems in polynomial time.
• An example of an NP-complete problem is the Boolean satisfiability problem (SAT): given a Boolean formula, is there an assignment of truth values to its variables that makes the whole formula true? If you can solve SAT in polynomial time, you can solve any NP problem in polynomial time.

NP-Hard:
• NP-Hard problems are at least as hard as the hardest problems in NP, but may not necessarily be in NP themselves.
• Essentially, an NP-Hard problem is one for which there is no known polynomial-time algorithm and which is at least as hard as any problem in NP.
• An example of an NP-Hard problem that is not in NP is the halting problem: does a given program halt or run forever on a given input? It is undecidable, meaning no algorithm can solve it for all possible inputs.



Class | Meaning | Example problems | Difficulty
P | Deterministic polynomial time | Basic math, string operations, sorting, shortest-path algorithms, linear and binary search | Easy
NP | Nondeterministic polynomial time (verifiable in polynomial time) | Integer factorization, graph isomorphism | Medium
NP-Complete | NP-Hard and in NP | Traveling salesman (decision version), graph coloring | Hard
NP-Hard | At least as hard as every problem in NP; not necessarily in NP | k-means clustering, traveling salesman (optimization version), graph coloring | Hardest



(Figure: P lies inside NP; NP-Complete is the intersection of NP and NP-Hard, assuming P ≠ NP.)



Challenges of Numerical Algorithms

1. Approximation Instead of Exact Solutions:
   1. Most numerical analysis problems cannot be solved exactly.
   2. They require approximate solutions because they involve continuous mathematics, where infinite precision is not feasible.
   3. Examples: computing the value of e^x, or evaluating definite integrals.

2. Truncation Errors:
   1. When we approximate infinite processes with finite ones, errors occur due to truncation.
   2. For instance, using Taylor polynomials to approximate functions, or numerical integration methods like the trapezoidal rule.
3. Round-off Errors:
   1. Computers represent real numbers with finite precision due to limited storage.
   2. Round-off errors arise from the discrepancy between the true value of a number and its representation in the computer's memory, particularly in floating-point arithmetic.
   3. These errors occur during arithmetic operations and can lead to inaccuracies in computations, especially for very large or very small numbers.
   4. Overflow and underflow are specific issues related to round-off errors, occurring when numbers exceed the range of representable values or become too small to be accurately represented, respectively.

4. Instability and Ill-conditioning:
   1. Instability occurs when round-off errors propagate and amplify throughout the computation, leading to inaccurate results.
   2. Ill-conditioned problems are highly sensitive to small changes in input, making it challenging to design stable algorithms that produce reliable solutions.

Overall, the challenges in numerical algorithms stem from the need to balance approximation accuracy with computational efficiency while mitigating the effects of truncation and round-off errors to obtain reliable solutions to mathematical problems.
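These round-off effects are easy to demonstrate; a minimal Python illustration (standard IEEE-754 double arithmetic, no other assumptions):

```python
import math

# Round-off error: 0.1 has no exact binary floating-point
# representation, so summing it ten times does not give exactly 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False
print(abs(total - 1.0))          # a tiny nonzero residue

# Comparing within a tolerance is the usual remedy:
print(math.isclose(total, 1.0))  # True
```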
Coping with Limitations of Algorithm
Power



Backtracking ● N-Queen Problem
● Hamiltonian Cycle
● Sum of Subset



● Initialization: Begin with an initial state before searching for a solution. This state represents the starting point of the problem.

● State-Space Tree: Construct a tree of the choices being made, called the state-space tree. Each node in the tree represents a partially constructed solution.

● Promising Nodes: Nodes in the state-space tree are classified as promising or non-promising. A promising node indicates that the partial solution it represents could potentially lead to a complete solution.

● Depth-First Search (DFS): Traverse the state-space tree using depth-first search, exploring as far as possible along each branch before backtracking.

● Generate Child Nodes: If the current node is promising, generate its child by adding the first remaining legitimate option for the next component of the solution. Move the processing to this child node.

● Backtracking: If the current node turns out to be non-promising, backtrack to the node's parent to consider alternative options for its last component. If no alternatives exist, backtrack one level up the tree, and so on.

● Stop Condition: If the algorithm reaches a complete solution to the problem, it can either stop (if only one solution is required) or continue searching for additional solutions.
N-Queens problem

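The backtracking scheme above can be sketched for N-Queens in a few lines (function and variable names are my own): queens[i] holds the column of the queen in row i, and a partial placement is promising when no two queens share a column or diagonal.

```python
def solve_n_queens(n):
    """Backtracking over the state-space tree for the N-Queens problem."""
    solutions = []

    def promising(queens, col):
        # New queen goes in row len(queens); check columns and diagonals.
        row = len(queens)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(queens))

    def backtrack(queens):
        if len(queens) == n:
            solutions.append(tuple(queens))
            return
        for col in range(n):
            if promising(queens, col):
                queens.append(col)
                backtrack(queens)
                queens.pop()  # non-promising below: undo and try next column

    backtrack([])
    return solutions

print(len(solve_n_queens(4)))  # 2 solutions on a 4x4 board
```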


Hamiltonian Cycle



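A minimal backtracking sketch for the Hamiltonian cycle problem: extend the path with an adjacent unvisited vertex, and backtrack when no extension is promising. The adjacency list below is a small hypothetical instance:

```python
def hamiltonian_cycle(graph, start):
    """Backtracking search: returns one Hamiltonian cycle as a vertex
    list ending back at `start`, or None if no cycle exists."""
    n = len(graph)
    path = [start]

    def backtrack():
        if len(path) == n:
            return start in graph[path[-1]]  # can we close the cycle?
        for v in graph[path[-1]]:
            if v not in path:
                path.append(v)
                if backtrack():
                    return True
                path.pop()  # dead end: undo and try the next neighbour
        return False

    return path + [start] if backtrack() else None

# Hypothetical graph: a 4-cycle A-B-C-D plus the diagonal B-D.
graph = {'A': ['B', 'D'], 'B': ['A', 'C', 'D'],
         'C': ['B', 'D'], 'D': ['A', 'B', 'C']}
print(hamiltonian_cycle(graph, 'A'))  # ['A', 'B', 'C', 'D', 'A']
```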
Sum of Subsets



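A backtracking sketch for the Sum of Subsets problem, using the common pruning rules (weights sorted in increasing order; a node is non-promising when the running sum overshoots the target or when even all remaining weights cannot reach it). The instance is the classic {3, 5, 6, 7} with target 15:

```python
def subset_sums(weights, target):
    """Backtracking for Sum of Subsets: explore include/exclude choices
    with pruning; returns all subsets summing to target."""
    weights = sorted(weights)
    solutions = []

    def backtrack(i, chosen, s, remaining):
        if s == target:
            solutions.append(list(chosen))
            return
        # Prune: no items left, the smallest remaining item overshoots,
        # or even taking everything left cannot reach the target.
        if i == len(weights) or s + weights[i] > target or s + remaining < target:
            return
        chosen.append(weights[i])                                  # include
        backtrack(i + 1, chosen, s + weights[i], remaining - weights[i])
        chosen.pop()                                               # exclude
        backtrack(i + 1, chosen, s, remaining - weights[i])

    backtrack(0, [], 0, sum(weights))
    return solutions

print(subset_sums([3, 5, 6, 7], 15))  # [[3, 5, 7]]
```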
Branch & Bound: Assignment Problem

● The Assignment Problem involves assigning tasks to agents in such a way that the total cost or time is minimized, with each task being assigned to exactly one agent and each agent handling exactly one task.

● The Branch and Bound method is an efficient algorithmic approach to solve this problem.

● Here are the steps involved in solving the Assignment Problem using the Branch and Bound method:


1. Initialization:
   1. Define the cost matrix C of size n×n, where C[i][j] represents the cost of assigning task i to agent j.
   2. Initialize the lower bound (LB) for the root node, which is typically set to 0.
   3. Initialize the solution path and the minimum cost found so far.

2. Node Representation:
   1. Each node in the search tree represents a partial assignment of tasks to agents.
   2. A node is represented by the current partial assignment, the current cost, and the reduced cost matrix.

3. Calculate Reduced Cost Matrix:
   1. For the initial node, reduce the cost matrix by subtracting the smallest element in each row from all elements in that row, and then subtracting the smallest element in each column from all elements in that column.
   2. The sum of all subtracted elements gives the lower bound for the node.

4. Branching:
   1. Select the most promising node (usually the node with the smallest lower bound) to expand.
   2. For the selected node, create child nodes by assigning the next task to each possible agent not yet assigned.
   3. Update the cost matrix by fixing the assignment and reducing the remaining matrix.
5. Bounding:
   1. For each child node, compute a new lower bound by reducing the cost matrix again.
   2. If the lower bound of a node is greater than the minimum cost found so far, prune the node (i.e., do not consider it further).

6. Update Solution:
   1. If a complete assignment is found (i.e., all tasks are assigned to agents), check whether its total cost is less than the current minimum cost.
   2. If it is, update the minimum cost and the best assignment found so far.

7. Repeat:
   1. Continue the process of selecting the most promising node, branching, and bounding until all nodes are either pruned or a complete assignment is found.

8. Terminate:
   1. The algorithm terminates when all nodes have been processed or pruned.
   2. The current best assignment and its cost are the optimal solution to the Assignment Problem.



Example (cost matrix, tasks A-D × agents 1-4):

      1  2  3  4
  A   9  2  7  8
  B   6  4  3  7
  C   5  8  1  8
  D   7  6  2  4

Root lower bound (sum of row minima): Lb = 2 + 3 + 1 + 2 = 8.

The tree branches first on A's agent (A=1, A=2, A=3, A=4), then on B's (B=1, B=3, B=4), and so on. The two complete assignments reached are A=2 B=1 C=3 D=4 (cost 13) and A=2 B=1 C=4 D=3 (cost 18).


Knapsack Problem

• Given n items of known weights wi and values vi, i = 1, 2, . . . , n, and a knapsack of capacity W, find the most valuable subset of the items that fits in the knapsack.

• It is convenient to order the items of a given instance in descending order by their value-to-weight ratios.

• Then the first item gives the best payoff per weight unit and the last one gives the worst payoff per weight unit, with ties resolved arbitrarily:

  v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn — (1)



Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n items that include a particular selection made from the first i ordered items.

• This particular selection is uniquely determined by the path from the root to the node.
• A branch going to the left indicates the inclusion of the next item, and a branch going to the right indicates its exclusion.
• A simple way to compute the upper bound (ub) is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack W − w and the best per-unit payoff among the remaining items, which is vi+1/wi+1:

  ub = v + (W − w)(vi+1/wi+1) — (2)

Example: Knapsack capacity W = 10

  Item  Weight  Value  Value/Weight
  1     4       $40    10
  2     7       $42    6
  3     5       $25    5
  4     3       $12    4
• At the root of the state-space tree no items have been selected as yet, so both the total weight w of the items already selected and their total value v are equal to 0.
• The value of the upper bound computed by formula (2) is 0 + (10 − 0)(10) = $100.
• The figure (not reproduced here) displays the state-space tree of the best-first branch-and-bound algorithm for this instance of the knapsack problem.

1. Node 1, the left child of the root, represents the subsets that include item 1.
2. The total weight and value of the items already included are 4 and $40, respectively; the value of the upper bound is 40 + (10 − 4) ∗ 6 = $76.
3. Node 2 represents the subsets that do not include item 1.
4. Accordingly, w = 0, v = $0, and ub = 0 + (10 − 0) ∗ 6 = $60. Since node 1 has a larger upper bound than node 2, it is more promising for this maximization problem, and we branch from node 1 first.
5. Its children, nodes 3 and 4, represent subsets with item 1 and with and without item 2, respectively.
6. Since the total weight w of every subset represented by node 3 exceeds the knapsack's capacity, node 3 can be terminated immediately.
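Formula (2) and the three upper bounds quoted above can be checked directly (the function name is my own):

```python
def upper_bound(v, w, W, best_remaining_ratio):
    """ub = v + (W - w) * (best value-to-weight ratio among the
    remaining items), formula (2) above."""
    return v + (W - w) * best_remaining_ratio

# Instance from these slides: W = 10, item ratios in sorted order 10, 6, 5, 4.
print(upper_bound(0, 0, 10, 10))   # root: $100
print(upper_bound(40, 4, 10, 6))   # node 1 (item 1 included): $76
print(upper_bound(0, 0, 10, 6))    # node 2 (item 1 excluded): $60
```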
Traveling Salesman Problem (TSP)

Problem Statement: Given a set of cities and the distances between each pair of cities, find the shortest possible route that visits each city exactly once and returns to the origin city.

Branch and Bound Method: The Branch and Bound method is an optimization algorithm that systematically explores and prunes the search tree to find the optimal solution for the TSP.


Steps Involved

1. Initialization:
   1. Define the distance matrix D, where D[i][j] represents the distance between city i and city j.
   2. Initialize a priority queue to manage the nodes of the search tree, starting with an initial node representing the start city.

2. Node Representation:
   1. Each node in the search tree represents a partial tour.
   2. The node includes:
      1. The current partial tour.
      2. The current cost of the partial tour.
      3. The level (number of cities visited so far).
      4. The reduced distance matrix.
      5. The lower bound (LB) of the node.

3. Calculate Reduced Cost Matrix:
   • For the initial node, reduce the distance matrix by subtracting the smallest element in each row from all elements in that row, and then subtracting the smallest element in each column from all elements in that column.
   • The sum of all subtracted elements gives the lower bound for the root node.

4. Branching:
   • Select the most promising node (the node with the lowest LB) from the priority queue.
   • Generate child nodes by extending the current partial tour to each unvisited city.
   • For each child node, update the partial tour, increase the level, and update the reduced distance matrix.
   • Total cost = parent matrix cost + reduction + path cost.
5. Calculate the Lower Bound for Each Node:
   • For each child node, calculate a new lower bound by further reducing the distance matrix.
   • Add the reduced costs to the current cost of the tour to get the lower bound.

6. Bounding:
   • If the lower bound of a node is greater than or equal to the current minimum tour cost found, prune the node.
   • Otherwise, add the node to the priority queue.

7. Repeat:
   • Select the next most promising node from the priority queue and repeat the branching and bounding steps.

8. Update Solution:
   • If a node represents a complete tour (visiting all cities and returning to the origin city), compare its cost to the current minimum tour cost.
   • If it is lower, update the minimum tour cost and the best tour.

9. Terminate:
   • The algorithm terminates when the priority queue is empty or a complete tour with the minimum cost is found.
   • The current best tour and its cost are the optimal solution.



Approximation Algorithms for NP-Hard Problems

Approximating Solutions:
• Fast Algorithms: Instead of exact solutions, we use fast algorithms to get approximate solutions.
• Good Enough Solutions: In many practical applications, an approximate solution is sufficient.

Heuristics:
• Definition: A heuristic is a common-sense rule or strategy derived from experience.

Given an optimization problem P, an algorithm A is said to be an approximation algorithm for P if, for any given instance I, it returns an approximate solution, that is, a feasible solution.
● Approximation algorithms are guaranteed to run in polynomial time and to return a solution that is close to the optimal solution (near-optimal).
● Approximation Algorithms: Provide practical solutions to NP-Hard problems by trading off exactness for efficiency.
● Heuristics and Performance: Use problem-specific heuristics to develop fast algorithms and measure their performance using accuracy ratios.
● Real-Life Application: These algorithms are particularly useful when exact solutions are impractical and approximate solutions are good enough.

Accuracy Ratio r(sa):
• sa: the approximate solution.
• f(sa): value of the objective function for the solution produced by the approximation algorithm.
• f(s*): value of the objective function for an exact (optimal) solution s*.

  r(sa) = f(sa)/f(s*) when minimizing the objective function
  r(sa) = f(s*)/f(sa) when maximizing the objective function

Desirable property: the closer r(sa) is to 1, the better the approximation.
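A tiny sketch of the accuracy ratio (the values 10 and 8 come from the nearest-neighbour TSP example later in this unit; the function name is my own):

```python
def accuracy_ratio(f_sa, f_opt, minimize=True):
    """r(sa) >= 1 in both cases; the closer to 1, the better the
    approximation."""
    return f_sa / f_opt if minimize else f_opt / f_sa

# Minimization (TSP): approximate tour of length 10, optimal tour of length 8.
print(accuracy_ratio(10, 8))                   # 1.25: 25% longer than optimal
# Maximization (e.g. knapsack): approximate value 8, optimal value 10.
print(accuracy_ratio(8, 10, minimize=False))   # 1.25
```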



Polynomial-Time Approximation Algorithms

Definition:
• An algorithm is a c-approximation algorithm if the accuracy ratio r(sa) does not exceed c for any instance. The performance ratio RA is the smallest c for which this condition holds.

Performance Ratio:
• It indicates the quality of the approximation algorithm. The goal is to have RA as close to 1 as possible.

Approximation algorithms for TSP fall into two families:
• Greedy approach: nearest neighbour algorithm, multifragment heuristic.
• Minimum-spanning-tree-based algorithms: twice-around-the-tree algorithm, Christofides algorithm.


Approximation Algorithms for TSP

Greedy Approach: 1. Nearest Neighbour Algorithm
● Select a random starting city.
● Find the nearest unvisited city and go there.
● If any unvisited cities are left, repeat the previous step.
● Return to the first city.

Advantages:
• Simplicity: Easy to understand and implement.
• Speed: Runs in O(n²) time, where n is the number of cities.

Limitations:
• Suboptimal: May not find the shortest possible tour.
• Greedy Nature: Locally optimal choices may lead to a globally suboptimal solution.
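The steps above can be sketched directly; the distance dictionary encodes the four-city example on the next slide (AB = 1, BC = 2, CD = 1, AC = 3, BD = 3, AD = 6), and the function name is my own:

```python
def nearest_neighbour_tour(dist, start):
    """Greedy TSP heuristic: from the current city, always move to the
    nearest unvisited city, then return to the start. O(n^2) time."""
    unvisited = set(dist) - {start}
    tour, current = [start], start
    while unvisited:
        current = min(unvisited, key=lambda c: dist[current][c])
        tour.append(current)
        unvisited.remove(current)
    tour.append(start)  # close the tour
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

dist = {'A': {'B': 1, 'C': 3, 'D': 6},
        'B': {'A': 1, 'C': 2, 'D': 3},
        'C': {'A': 3, 'B': 2, 'D': 1},
        'D': {'A': 6, 'B': 3, 'C': 1}}
print(nearest_neighbour_tour(dist, 'A'))  # (['A', 'B', 'C', 'D', 'A'], 10)
```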
Example (complete graph on A, B, C, D with edge lengths AB = 1, BC = 2, CD = 1, AC = 3, BD = 3, AD = 6):

● sa (approximate solution, nearest neighbour starting from A):
  A-B-C-D-A = 1 + 2 + 1 + 6 = length 10


● s* (optimal solution):
  A-B-D-C-A = 1 + 3 + 1 + 3 = length 8

r(sa) = f(sa)/f(s*) = 10/8 = 1.25

Tour sa is 25% longer than the optimal tour s*.
2. Multifragment Heuristic Algorithm

Problem Statement: Given a set of cities and the distances between each pair of cities, the objective is to find the shortest possible route that visits each city exactly once and returns to the starting city.

Multifragment Heuristic Approximation Algorithm: The Multifragment Heuristic builds the tour by connecting edges (fragments) in a way that avoids forming cycles prematurely, until the end, when all cities are connected into a single tour.


Steps Involved in the Multifragment Heuristic

Initialization:
• Start with the set of all edges, each edge representing the distance between two cities.
• Sort all edges in increasing order of their weights (distances).


Building the Tour:
• Initialize an empty set for the tour edges.
• Add edges one by one to the tour, following the sorted order, with the following constraints:
  • Avoid adding an edge that would create a cycle (except when it completes the tour).
  • Avoid adding an edge that would increase the degree of any vertex (city) to more than 2.
• Continue adding edges until all cities are included in the tour.

Completion:
• The process continues until all cities are connected in a single tour that visits each city exactly once and returns to the starting city.
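The two constraints above (degree at most 2, no premature cycle) can be sketched with a small union-find for cycle detection; the edge list is the example that follows, and the function name is my own:

```python
def multifragment_tour(cities, edges):
    """Multifragment heuristic: scan edges by increasing weight, adding
    an edge unless it would raise a city's degree above 2 or close a
    cycle before all cities are connected."""
    degree = {c: 0 for c in cities}
    comp = {c: c for c in cities}   # union-find parents

    def find(c):
        while comp[c] != c:
            c = comp[c]
        return c

    chosen = []
    for w, a, b in sorted(edges):
        closes_cycle = find(a) == find(b)
        # The cycle-closing edge is allowed only as the final n-th edge.
        if degree[a] < 2 and degree[b] < 2 and \
                (not closes_cycle or len(chosen) == len(cities) - 1):
            chosen.append((w, a, b))
            degree[a] += 1
            degree[b] += 1
            comp[find(a)] = find(b)
    return chosen, sum(w for w, _, _ in chosen)

cities = 'ABCD'
edges = [(1, 'A', 'B'), (1, 'C', 'D'), (2, 'B', 'C'),
         (3, 'B', 'D'), (3, 'C', 'A'), (6, 'D', 'A')]
tour, length = multifragment_tour(cities, edges)
print(tour, length)  # picks A-B, C-D, B-C, then closes with D-A: length 10
```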



Example graph (A, B, C, D with AB = 1, BC = 2, CD = 1, AC = 3, BD = 3, AD = 6), edges sorted by weight:

  Edge  Weight
  A-B   1
  C-D   1
  B-C   2
  B-D   3
  C-A   3
  D-A   6


● sa (approximate solution found by the multifragment heuristic):
  A-B-C-D-A = 1 + 2 + 1 + 6 = length 10


● s* (optimal solution):
  A-B-D-C-A = 1 + 3 + 1 + 3 = length 8

r(sa) = f(sa)/f(s*) = 10/8 = 1.25

Tour sa is 25% longer than the optimal tour s*.
Advantages:
• Simple Implementation: Relatively straightforward to implement.
• Efficiency: Works in O(n² log n) time due to the sorting and processing of edges.

Limitations:
• Suboptimal: While it often finds good solutions, it may not always yield the optimal solution.
• Potential for Poor Performance: In some cases, the quality of the solution can vary significantly from the optimal.


Minimum-Spanning-Tree-Based Algorithms: 1. Twice-Around-the-Tree Algorithm

Step 1: Construct a minimum spanning tree.

Step 2: Let the root be an arbitrary vertex.

Step 3: Traverse the tree by DFS and record the sequence of vertices visited (including the returns to already-visited vertices).

Step 4: Use a shortcut strategy, skipping repeated vertices, to generate a feasible tour.

Also remember: the shortcuts must not form premature cycles.
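The four steps can be sketched compactly: Prim's algorithm builds the MST, and a DFS preorder gives exactly the doubled walk with its repeats short-cut. The slide's own edge weights are in a figure not reproduced here, so the graph below is a hypothetical four-city instance:

```python
import heapq

def twice_around_the_tree(dist, root):
    """Twice-around-the-tree TSP heuristic: MST (Prim), then DFS from
    the root; the preorder with the root appended is the shortcut tour."""
    # Step 1: minimum spanning tree via Prim's algorithm.
    tree = {c: [] for c in dist}
    visited = {root}
    heap = [(w, root, v) for v, w in dist[root].items()]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree[u].append(v)
        tree[v].append(u)
        for x, wx in dist[v].items():
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))

    # Steps 2-4: DFS preorder = doubled walk with repeated vertices skipped.
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        order.append(u)
        for v in tree[u]:
            if v not in seen:
                dfs(v)
    dfs(root)
    tour = order + [root]
    return tour, sum(dist[a][b] for a, b in zip(tour, tour[1:]))

# Hypothetical instance (same four-city graph as the earlier TSP examples).
dist = {'A': {'B': 1, 'C': 3, 'D': 6},
        'B': {'A': 1, 'C': 2, 'D': 3},
        'C': {'A': 3, 'B': 2, 'D': 1},
        'D': {'A': 6, 'B': 3, 'C': 1}}
print(twice_around_the_tree(dist, 'A'))  # (['A', 'B', 'C', 'D', 'A'], 10)
```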



Step 1: Construct a minimum spanning tree



Step 3: Traverse all the vertices by DFS



Step 4: Use the shortcut strategy to
generate a feasible tour

• Record the visited nodes: a-b-c-b-d-e-d-b-a (remove the duplicates)
• a-b-c-d-e-a = length 39



Minimum-Spanning-Tree-Based Algorithms: 2. Christofides Algorithm

Step 1: Construct a minimum spanning tree.

Step 2: Find the odd-degree vertices of the tree.

Step 3: Compute a minimum-weight matching of the odd-degree vertices and add it to the tree.

Step 4: Find an Euler cycle in the resulting multigraph.

Step 5: Shortcut the Euler cycle to obtain the TSP tour.


● MST has four odd-degree
vertices a,b,c, and e.
● a degree – 1
● b degree – 3
● c degree – 1
● d degree – 2
● e degree – 1



● Find the Euler and TSP cycle path
● a-b-c-e-d-a = length 37



Approximation Algorithms for the Knapsack Problem

• Greedy approach: discrete knapsack, continuous knapsack.
• Approximation schemes.


Greedy Algorithm for the Discrete Knapsack Problem

Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, 2, ..., n, for the items given.

Step 2: Sort the items in non-increasing order of the ratios computed in Step 1.

Step 3: Repeat the following operation until no item is left in the sorted list:
• If the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item.
• Otherwise, just proceed to the next item.
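The three steps can be sketched on the instance from the next slides (W = 10; the function name is my own):

```python
def greedy_discrete_knapsack(items, W):
    """items: list of (weight, value) pairs. Sort by value/weight ratio
    in non-increasing order, then take each whole item that still fits."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    taken, total_w, total_v = [], 0, 0
    for w, v in order:
        if total_w + w <= W:
            taken.append((w, v))
            total_w += w
            total_v += v
    return taken, total_w, total_v

# Items (weight, value): (7,$42), (3,$12), (4,$40), (5,$25); W = 10.
taken, tw, tv = greedy_discrete_knapsack([(7, 42), (3, 12), (4, 40), (5, 25)], 10)
print(tw, tv)  # weight 9, profit $65 (items of weight 4 and 5)
```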



Example: Knapsack capacity W = 10

Step 1: Find the value-to-weight ratios.

  Item  Weight  Value  Value/Weight
  1     7       $42    6
  2     3       $12    4
  3     4       $40    10
  4     5       $25    5

Step 2: Arrange in non-increasing ratio order.

  Item  Weight  Value  Value/Weight
  3     4       $40    10
  1     7       $42    6
  4     5       $25    5
  2     3       $12    4


Example: Knapsack capacity W = 10

Step 3: After sorting:

  Item  Weight  Value  Value/Weight
  3     4       $40    10
  1     7       $42    6
  4     5       $25    5
  2     3       $12    4

Item 3 (weight 4) fits and is taken; item 1 (weight 7) no longer fits; item 4 (weight 5) fits; item 2 (weight 3) no longer fits.

Total weight = 9/10
Profit earned = $40 + $25 = $65



Greedy Algorithm for the Continuous Knapsack Problem

Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, 2, ..., n, for the items given.

Step 2: Sort the items in non-increasing order of the ratios computed in Step 1.

Step 3: Repeat the following operation until no item is left in the sorted list:
• If the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item.
• Otherwise, take the largest fraction of it that fills the knapsack to its full capacity, and stop.
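The continuous version differs only in the final step, where a fraction of the first item that does not fit is taken; on the same instance this yields $76 (function name mine). For the continuous problem this greedy strategy is in fact optimal:

```python
def greedy_continuous_knapsack(items, W):
    """Fractional knapsack: take whole items in non-increasing ratio
    order; when one no longer fits, take the fraction that fills the
    knapsack exactly, then stop."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    capacity, value = W, 0.0
    for w, v in order:
        if capacity == 0:
            break
        take = min(w, capacity)       # whole item, or the largest fraction
        value += v * take / w
        capacity -= take
    return value

# Items (weight, value): (7,$42), (3,$12), (4,$40), (5,$25); W = 10.
print(greedy_continuous_knapsack([(7, 42), (3, 12), (4, 40), (5, 25)], 10))
# 76.0: item of weight 4 taken whole, then 6/7 of the weight-7 item ($36)
```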



Example: Knapsack capacity W = 10

Step 1: Find the value-to-weight ratios.

  Item  Weight  Value  Value/Weight
  1     7       $42    6
  2     3       $12    4
  3     4       $40    10
  4     5       $25    5

Step 2: Arrange in non-increasing ratio order.

  Item  Weight  Value  Value/Weight
  3     4       $40    10
  1     7       $42    6
  4     5       $25    5
  2     3       $12    4


Example: Knapsack capacity W = 10

Step 3: After sorting:

  Item  Weight  Value  Value/Weight
  3     4       $40    10
  1     7       $42    6
  4     5       $25    5
  2     3       $12    4

Item 3 is taken whole (weight 4); then 6/7 of item 1 fills the remaining capacity (6/7 × $42 = $36).

Total weight = 10
Profit earned = $40 + $36 = $76



Approximation Scheme for the Knapsack Problem

● This scheme was suggested by Prof. Sahni.

● The algorithm generates all subsets of k items or fewer, and for each one that fits into the knapsack it adds the remaining items as the greedy algorithm would (i.e., in non-increasing order of their value-to-weight ratios).

● The subset of the highest value obtained in this fashion is returned as the algorithm's output.



Example: Knapsack capacity W = 10, k = 2

  Item  Weight  Value  Value/Weight
  1     4       $40    10
  2     7       $42    6
  3     5       $25    5
  4     1       $4     4

  Subset  Weight        Items added greedily  Profit
  {}      4+5+1         1, 3, 4               $69
  {1}     4+5+1         3, 4                  $69
  {2}     7+1           4                     $46
  {3}     5+4+1         1, 4                  $69
  {4}     1+4+5         1, 3                  $69
  {1,2}   11 > W        not feasible
  {1,3}   9+1           4                     $69
  {1,4}   5+5           3                     $69
  {2,3}   12 > W        not feasible
  {2,4}   8             (none fit)            $46

Optimal solution: {1, 3, 4}, weight 10, profit $69.
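Sahni's scheme on this instance (k = 2) can be sketched as follows: enumerate every subset of at most k items that fits, complete each greedily, and keep the best (function name mine):

```python
from itertools import combinations

def sahni_scheme(items, W, k):
    """Sahni's approximation scheme: for every subset of at most k items
    that fits, complete it greedily in non-increasing value/weight order
    and return the best total value found."""
    n = len(items)
    order = sorted(range(n), key=lambda i: items[i][1] / items[i][0],
                   reverse=True)
    best = 0
    for size in range(k + 1):
        for subset in combinations(range(n), size):
            w = sum(items[i][0] for i in subset)
            if w > W:
                continue  # not feasible
            v = sum(items[i][1] for i in subset)
            cap = W - w
            for i in order:  # greedy completion with the remaining items
                if i not in subset and items[i][0] <= cap:
                    cap -= items[i][0]
                    v += items[i][1]
            best = max(best, v)
    return best

# Instance from this slide: W = 10, items (weight, value).
items = [(4, 40), (7, 42), (5, 25), (1, 4)]
print(sahni_scheme(items, 10, 2))  # 69, matching the table's optimum
```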
Approximation Algorithms for Nonlinear Equations

• Bisection Method
• Method of False Position
• Newton's Method


Bisection Method

● A bracketing method that repeatedly divides an interval in half and selects the subinterval that contains the root.

● It is simple and robust but can be slow to converge.

● The bisection method is a numerical technique used to find the roots of a nonlinear equation f(x) = 0.

● It is a type of bracketing method that repeatedly narrows down an interval that contains the root.

● The method assumes that the function f is continuous on the interval [a, b] and that f(a) and f(b) have opposite signs (so the Intermediate Value Theorem guarantees that there is at least one root in [a, b]).
Steps of the Bisection Method:

1. Initial Interval: Start with an interval [a, b] where f(a) and f(b) have opposite signs.
2. Midpoint Calculation: Compute the midpoint of the interval: c = (a + b)/2.
3. Function Evaluation: Evaluate the function at the midpoint, f(c).
4. Interval Update:
   • If f(c) = 0, then c is the root.
   • If f(a)·f(c) < 0, then the root lies in [a, c]; set b = c.
   • If f(b)·f(c) < 0, then the root lies in [c, b]; set a = c.
5. Convergence Check: Repeat steps 2-4 until the interval [a, b] is sufficiently small, or until |f(c)| is below a predefined tolerance level.



Example: Solve the equation x³ − x − 1 = 0 using the bisection method, correct to three decimals.
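The steps above can be sketched on this example; x³ − x − 1 changes sign on [1, 2], and the method converges to the root near 1.3247 (function name mine):

```python
def bisection(f, a, b, tol=1e-4):
    """Bisection: halve [a, b], keeping the half with the sign change,
    until the interval is smaller than tol."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2
        if f(c) == 0:
            return c
        if f(a) * f(c) < 0:
            b = c      # root lies in [a, c]
        else:
            a = c      # root lies in [c, b]
    return (a + b) / 2

root = bisection(lambda x: x**3 - x - 1, 1, 2)
print(round(root, 3))  # 1.325
```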



Method of False Position (Regula Falsi)

● The method of false position, also known as the regula falsi method, is an iterative root-finding algorithm used to solve equations of the form f(x) = 0.

● It combines elements of the bisection method and the secant method, taking advantage of the fact that the function changes sign over an interval where a root exists.


Example: Solve the equation x log10(x) − 1.2 = 0 using the regula falsi method, correct to four decimals.
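A sketch of regula falsi on this example, assuming log means log base 10 as in the classic textbook exercise (function name mine); instead of the midpoint, each step uses the x-intercept of the chord joining (a, f(a)) and (b, f(b)):

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Method of false position: like bisection, but the next point is
    the chord's x-intercept rather than the interval midpoint."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # chord's x-intercept
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# f(2) < 0 < f(3), so a root lies in [2, 3].
root = regula_falsi(lambda x: x * math.log10(x) - 1.2, 2, 3)
print(round(root, 4))  # 2.7406
```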



Newton-Raphson Method

● The Newton-Raphson method uses the first derivative of the function to iteratively find better approximations to the root.

● Given a function f(x) and its derivative f′(x), the method starts with an initial guess x0 and generates a sequence of approximations that ideally converges to a root:

  x(n+1) = x(n) − f(x(n)) / f′(x(n))



Example: Solve the equation x³ − 3x − 5 = 0 using Newton's method.
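A sketch of the iteration on this example, starting from x0 = 2 (f(2) = −3 < 0 < f(3) = 13, so a root lies between 2 and 3; function name mine):

```python
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n),
    stopping when the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton_raphson(lambda x: x**3 - 3*x - 5,
                      lambda x: 3*x**2 - 3, 2)
print(round(root, 4))  # 2.279
```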



