Module 3: Informed Search


Informed search algorithms

Outline
• Best-first search
• Greedy best-first search
• A* search
• Heuristics
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Genetic algorithms
Best-first search
• Idea: use an evaluation function f(n) for each node
– estimate of "desirability"
 Expand the most desirable unexpanded node.
 In BFS and DFS, when we are at a node, we can take any adjacent node as the
next node, so both BFS and DFS blindly explore paths without considering any
cost function. The idea of Best First Search is to use an evaluation function
to decide which adjacent node is most promising, and then explore that one.
• Implementation:
Order the nodes in the fringe in decreasing order of desirability
• Special cases:
– greedy best-first search
– A* search
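The idea above can be sketched in a few lines: keep the fringe in a priority queue ordered by an evaluation function f(n), and always pop the most desirable node. This is a minimal sketch; the function names (`best_first`, `expand`, `is_goal`) and the toy number-search example are illustrative, not from the slides.

```python
import heapq

def best_first(start, f, expand, is_goal):
    """Generic best-first search: always pop the node with the lowest f(n)."""
    frontier = [(f(start), start)]      # priority queue ordered by f
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        if node in visited:
            continue
        visited.add(node)
        for child in expand(node):
            if child not in visited:
                heapq.heappush(frontier, (f(child), child))
    return None
```

Greedy best-first search and A* are obtained by plugging in f(n) = h(n) and f(n) = g(n) + h(n), respectively.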
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic)
• = estimate of cost from n to goal
• e.g., hSLD(n) = straight-line distance from n to Bucharest

• Greedy best-first search expands the node that appears to be closest to the goal
Romania with step costs in km
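On the Romania map, greedy best-first search from Arad follows the straight-line-distance heuristic and reaches Bucharest via Fagaras, even though that route is longer than the one through Rimnicu and Pitesti. A minimal sketch, using a fragment of the standard Romania map (the node names and SLD values follow the usual textbook figure):

```python
import heapq

# Straight-line distances to Bucharest (standard Romania-map values)
SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
       'Fagaras': 176, 'Oradea': 380, 'Rimnicu': 193, 'Pitesti': 100,
       'Bucharest': 0}

# Adjacency only: greedy best-first ignores step costs entirely
ROMANIA = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
           'Sibiu': ['Arad', 'Fagaras', 'Oradea', 'Rimnicu'],
           'Fagaras': ['Sibiu', 'Bucharest'],
           'Rimnicu': ['Sibiu', 'Pitesti'],
           'Pitesti': ['Rimnicu', 'Bucharest']}

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the smallest h(n)."""
    frontier = [(h[start], start, [start])]   # (h-value, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None
```

Running this from Arad returns the path Arad, Sibiu, Fagaras, Bucharest: locally promising at every step, but not the cheapest route.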
Greedy best-first search example
Properties of greedy best-first search
• Complete? No – can get stuck in loops, e.g.,
Iasi → Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No
A* search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to
goal
Working of A*
1. The algorithm maintains two sets
• OPEN list: The OPEN list keeps track of those nodes
that need to be examined.
• CLOSED list: The CLOSED list keeps track of nodes
that have already been examined.
2. Initially, the OPEN list contains just the
initial node, and the CLOSED list is empty
• g(n) = the cost of getting from the initial node to n
• h(n) = the estimate, according to the heuristic
function, of the cost from n to goal node
• f(n) = g(n)+h(n); intuitively, this is the estimate of the
best solution that goes through n
Working of A*
3. Each node also maintains a pointer to its parent, so that the best
solution, if found, can be retrieved.
– The main loop repeatedly takes the node with the lowest f(n) value
(call it n) from the OPEN list.
– If n is the goal node, then stop (done); otherwise, n is removed from the
OPEN list and added to the CLOSED list.
– Next, all the possible successor nodes of n are generated.
4. For each successor node n', if it is already in the CLOSED list and the
copy there has an equal or lower f estimate, then we can safely discard the
newly generated n' and move on.
– Similarly, if n' is already in the OPEN list and the copy there has an
equal or lower f estimate, we can discard the newly generated n' and move on.
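The OPEN/CLOSED bookkeeping above can be sketched as follows. This is a minimal illustration, reusing a fragment of the Romania map with step costs; the CLOSED list is a dict from node to the best g value expanded so far, which implements the "equal or lower f estimate" discard rule.

```python
import heapq

SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
       'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

ROADS = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
         'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu', 80)],
         'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
         'Rimnicu': [('Sibiu', 80), ('Pitesti', 97)],
         'Pitesti': [('Rimnicu', 97), ('Bucharest', 101)]}

def a_star(graph, h, start, goal):
    """A* with an OPEN list (priority queue on f = g + h) and a CLOSED dict
    recording the best g seen for each expanded node."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = {}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed and closed[node] <= g:
            continue   # a copy with an equal or lower estimate was expanded
        closed[node] = g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')
```

Unlike greedy best-first, A* from Arad finds the cheaper route through Rimnicu and Pitesti (cost 418) rather than through Fagaras (cost 450).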
A* search example
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state
from n.

• An admissible heuristic never overestimates the cost to reach the goal,
i.e., it is optimistic
• Example: hSLD(n) (never overestimates the actual road distance)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
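Admissibility can be checked mechanically on a finite graph: compute the true costs h*(n) with Dijkstra's algorithm run backwards from the goal, then verify h(n) ≤ h*(n) everywhere. A minimal sketch; the function names `true_costs` and `is_admissible` and the tiny test graph are illustrative, not from the slides.

```python
import heapq

def true_costs(graph, goal):
    """h*(n): exact cheapest cost from each node to the goal
    (Dijkstra from the goal over reversed edges)."""
    rev = {}
    for n, edges in graph.items():
        for nbr, cost in edges:
            rev.setdefault(nbr, []).append((n, cost))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist.get(n, float('inf')):
            continue   # stale queue entry
        for prev, cost in rev.get(n, []):
            nd = d + cost
            if nd < dist.get(prev, float('inf')):
                dist[prev] = nd
                heapq.heappush(pq, (nd, prev))
    return dist

def is_admissible(graph, h, goal):
    """h is admissible iff h(n) <= h*(n) for every node that can reach the goal."""
    hstar = true_costs(graph, goal)
    return all(h[n] <= c for n, c in hstar.items())
```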
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe.
Let n be an unexpanded node in the fringe such that n is on a shortest path
to an optimal goal G.

• f(G2) = g(G2) since h(G2) = 0
• g(G2) > g(G) since G2 is suboptimal
• f(G) = g(G) since h(G) = 0
• f(G2) > f(G) from above
Optimality of A * (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe.
Let n be an unexpanded node in the fringe such that n is on a shortest path
to an optimal goal G.

• f(G2) > f(G) from above
• h(n) ≤ h*(n) since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G), since n is on an optimal path to G
• f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion
Consistent heuristics
• A heuristic is consistent if for every node n, every successor n' of n
generated by any action a,

h(n) ≤ c(n,a,n') + h(n')

• If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      ≥ g(n) + h(n)
      = f(n)
• i.e., f(n) is non-decreasing along any path.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
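Since consistency is a local condition on every edge, it is easy to check directly. A minimal sketch, assuming the same edge-list graph representation as before; the function name `is_consistent` is illustrative.

```python
def is_consistent(graph, h):
    """A heuristic is consistent iff h(n) <= c(n, a, n') + h(n') on every edge."""
    return all(h[n] <= cost + h[nbr]
               for n, edges in graph.items()
               for nbr, cost in edges)
```

Consistency implies admissibility, but not vice versa, so this check is the stronger of the two.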


Properties of A*

• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes
Local Search

• They apply mostly to problems for which we don't need to know the
path to the solution but only the solution itself.
• They operate using a single state or a small number of states and
explore the neighbours of that state. They usually don't store the
path.
• A particular case is optimization problems, for which we search for the
best solution according to an objective function.
• The distribution of values of the objective function in the state space
is called a landscape.
• Advantages
– Use very little memory
– Find reasonable solutions in large or infinite (continuous) state spaces
for which systematic algorithms are unsuitable.
Local search algorithms
• In many optimization problems, the path to the goal is
irrelevant; the goal state itself is the solution

• State space = set of "complete" configurations
• Find a configuration satisfying constraints, e.g., n-queens
• In such cases, we can use local search algorithms
• Keep a single "current" state, try to improve it
• State Space Landscape
– Global maximum - a state that maximizes the objective function
over the entire landscape.
– Local maximum - a state that maximizes the objective function in
a small area around it.
– Plateau - a state such that the objective function is constant in an
area around it. (Area of state space)
– Shoulder - a plateau that has an uphill edge.
– Flat local maximum - a plateau whose edges all go downhill.
– Ridge - a sequence of local maxima.
Terminology of Local Search
• State Space Landscape
• Global Minimum
• Global Maximum
Types of Local Search

• Hill-climbing Search
• Simulated Annealing Search
• Genetic Algorithm
Example: n-queens

• Put n queens on an n × n board with no two queens on the same row,
column, or diagonal
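For local search, an n-queens state is usually encoded as one queen per column, and the heuristic counts the pairs of queens that attack each other (same row or same diagonal). A minimal sketch; the function name `attacking_pairs` is illustrative.

```python
def attacking_pairs(state):
    """state[i] = row of the queen in column i; count pairs that attack
    each other (same row, or same diagonal when |row diff| == |col diff|)."""
    n = len(state)
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
                pairs += 1
    return pairs
```

A goal state has 0 attacking pairs; this count also serves, in complemented form, as the fitness function used in the genetic-algorithm slides later (28 minus the attacking pairs for 8 queens).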
Hill-climbing search
• It always moves towards the goal
• Using heuristics it finds which direction will
take it closest to the goal
• It is actually a combination of Generate-and-Test plus a direction to move
• Here the heuristic function estimates how close a given state is to a
goal state
Algorithm Steps for Hill-climbing

STEP 1: Evaluate the initial state; if it is the goal state then quit,
otherwise make the initial state the current state.
STEP 2: Select a new operator that can be applied to this state and
generate a new state.
STEP 3: Evaluate the new state. If it is closer to the goal state than the
current state, make the new state the current state; if it is not better,
ignore it and proceed with the current state.
STEP 4: If the current state is the goal state or no new operators are
available, quit. Otherwise go to STEP 2.
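The steps above can be sketched as steepest-ascent hill climbing: repeatedly move to the best neighbour, stopping at the first state no neighbour improves on. A minimal sketch; the function names (`hill_climb`, `neighbours`, `score`) and the one-dimensional example are illustrative, not from the slides.

```python
def hill_climb(state, neighbours, score):
    """Steepest-ascent hill climbing: move to the best neighbour
    until no neighbour improves on the current state."""
    while True:
        best = max(neighbours(state), key=score, default=state)
        if score(best) <= score(state):
            return state          # local maximum (or plateau edge)
        state = best
```

On a unimodal objective this reaches the global maximum; on landscapes with several peaks it stops at whichever local maximum is uphill from the start, which is exactly the failure mode the later variations address.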
Hill-climbing search
• "Like climbing Everest in thick fog with amnesia"
• Sometimes called greedy local search

Hill-climbing search: 8-queens problem
• A local minimum with h = 1
Variations of Hill Climbing

• Stochastic HC: choose randomly among the neighbours going uphill.
• First-choice HC: generate random successors until one is better than the
current state. Good for states with high numbers of neighbours.
• Random-restart HC: conduct a series of hill-climbing searches from
randomly generated initial states until a goal is found.
• Evolutionary hill-climbing: represents potential solutions as strings
and performs random mutations. Keeps the mutations that are better
states. It's a particular case of first-choice and the ancestor of the
genetic algorithms.
Simulated annealing search
• Annealing is the process used to temper or harden metals and glass by
heating them to a high temperature and then cooling them gradually,
thus allowing the material to coalesce into a low-energy crystalline state.
• Initially the whole space is explored.
• This makes the procedure less sensitive to the starting point.
• It avoids false foothills based on the following changes in this approach:
– Rather than climbing to maxima, minimisation (of energy) is done
– The term objective function is used rather than heuristic
Algorithm form of Simulated annealing search
Simulated annealing search

• Idea: escape local maxima by allowing some "bad" moves, but gradually
decrease their frequency
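This idea can be sketched as follows: a worse move (delta < 0) is accepted with probability exp(delta / T), and the temperature T decays each step so bad moves become rare. A minimal sketch; the parameter values (t0, cooling rate, step count) and the best-so-far bookkeeping are illustrative choices, not from the slides.

```python
import math
import random

def simulated_annealing(state, neighbour, score, t0=10.0, cooling=0.95, steps=500):
    """Occasionally accept a worse move, with probability exp(delta / T);
    T decreases each step, so bad moves become rarer over time."""
    best = state
    t = t0
    for _ in range(steps):
        nxt = neighbour(state)
        delta = score(nxt) - score(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
            if score(state) > score(best):
                best = state
        t = max(t * cooling, 1e-9)    # geometric cooling schedule
    return best
```

At high T almost every move is accepted (broad exploration); as T falls the behaviour approaches pure hill climbing, matching the "gradually decrease their frequency" idea.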
Properties of simulated annealing
search
• One can prove: If T decreases slowly enough, then
simulated annealing search will find a global
optimum with probability approaching 1

• Widely used in VLSI layout, airline scheduling, etc.
• Related local search methods:
» Local Beam search
» Stochastic Beam search
Local Beam search
• In this search, instead of a single state, K states are kept in memory
• The initial K states are generated randomly
• A successor function plays an important role by generating the successors
of all K states
• If any successor state is a goal state, then no further processing is
required
• Otherwise, i.e. if the goal state is not achieved, the K best successors
are selected from the list of all successors and the process is repeated
Local Beam search
• At first glance, a local beam search with K states may look like K
random-restart searches run in parallel
• In a threaded implementation, however, the K parallel search threads pass
useful information among themselves
• It works on the principle of the successful successor: if one state
generates good / efficient / goal-reaching successors and the other K-1
states generate poor successors, then the state generating the successful
successors leads all the other states
• It drops / abandons the unsuccessful searches and concentrates on the
successful ones
Local Beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are
generated
• If any one is a goal state, stop; else select the k best
successors from the complete list and repeat.
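The loop above can be sketched directly: pool all successors of the current K states, return a goal if one appears, otherwise keep the K best. A minimal sketch; the function names and the toy integer landscape are illustrative, not from the slides.

```python
import heapq

def local_beam_search(starts, successors, score, is_goal, max_iters=100):
    """Keep the k best states; each round, expand all of them and keep the
    k best of the pooled successors."""
    k = len(starts)
    states = list(starts)
    for _ in range(max_iters):
        pool = [s for st in states for s in successors(st)]
        if not pool:
            return max(states, key=score)
        goal_hits = [s for s in pool if is_goal(s)]
        if goal_hits:
            return goal_hits[0]
        states = heapq.nlargest(k, pool, key=score)
    return max(states, key=score)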
LIMITATIONS
• Lack of variation among the K states
• If the K states become concentrated in a small area of the state space,
the search degenerates into an expensive version of hill climbing
Stochastic Beam search
• It is a flavour of Local Beam search that resolves the limitations of
Local Beam search
• Instead of selecting the K best successors from the candidate successors,
it selects K successors at random
• The probability of selecting a given successor is an increasing function
of its value (success rate)
• It is very similar to natural selection: the successors (offspring) of a
state (parent) populate the next generation according to their value
(fitness)
Genetic algorithms
• A successor state is generated by combining two parent states
• Start with k randomly generated states (population)
• A state is represented as a string over a finite alphabet (often a string
of 0s and 1s)
• Evaluation function (fitness function): higher values for better states
• Produce the next generation of states by selection, crossover, and
mutation
• Population: Population is set of states which are generated
randomly
• Individual: Each state or individual is a string of finite
alphabet (0’s & 1’s )
• Fitness function: The evaluation function which specifies
the rating of each state is called fitness function
• Crossover: each pair of parent strings is split at a randomly chosen
position, called the crossover point, and the parts are recombined to form
offspring
• Mutation: one of the genetic operators; it makes small random changes
(e.g. bit flips) with low probability
• Schema: a substring in which the values at some positions are left
unspecified
Algorithm Representation
1. Start with a population: a set of randomly generated individuals (states)
2. Use a fitness function that rates each individual
3. Create an individual ‘x’ (parent) by random selection weighted by its
fitness (rate A)
4. Create an individual ‘y’ (parent) by random selection weighted by its
fitness (rate B)
5. Produce a child from x and y by crossover
6. With small probability, apply the mutate operator to the child
7. Add the child to the new population
8. Repeat the above process until some individual is fit enough, as
specified by the fitness function
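The steps above can be sketched on bitstring individuals with fitness-proportional selection and single-point crossover. A minimal sketch; the names (`genetic_algorithm`, `flip_random_bit`), the fixed generation count in place of a fitness threshold, the tiny weight floor, and the best-ever bookkeeping are illustrative assumptions, not from the slides.

```python
import random

def flip_random_bit(bits):
    """Mutation operator (an assumed choice): flip one randomly chosen bit."""
    i = random.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

def genetic_algorithm(population, fitness, mutate, generations=100, p_mutate=0.1):
    """Fitness-proportional selection, single-point crossover, rare mutation.
    Returns the best individual seen in any generation."""
    best = max(population, key=fitness)
    for _ in range(generations):
        # Selection weights; a tiny floor avoids an all-zero weight vector.
        weights = [max(fitness(ind), 1e-9) for ind in population]
        new_pop = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)
            point = random.randrange(1, len(x))      # crossover point
            child = x[:point] + y[point:]
            if random.random() < p_mutate:
                child = mutate(child)
            new_pop.append(child)
        population = new_pop
        gen_best = max(population, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best
```

With fitness = number of 1s in the string (a standard toy problem), the population quickly converges toward the all-ones string.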
Genetic algorithms

• Fitness function: number of non-attacking pairs of queens
(min = 0, max = 8 × 7/2 = 28; in general n(n-1)/2)
• 24/(24+23+20+11) = 31%
• 23/(24+23+20+11) = 29%, etc.
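The percentages above are just each state's fitness divided by the population's total fitness, i.e. its probability of being chosen as a parent under fitness-proportional selection:

```python
fitnesses = [24, 23, 20, 11]   # non-attacking pairs for four 8-queens states
total = sum(fitnesses)         # 78
percent = [round(100 * f / total) for f in fitnesses]
# 24/78 -> 31%, 23/78 -> 29%, 20/78 -> 26%, 11/78 -> 14%
```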


