Lecture 7&8_Ins

The document discusses heuristic and informed search strategies for problem-solving, highlighting the differences between blind search methods like DFS and BFS, and informed methods that utilize heuristics to improve efficiency. It explains various informed search algorithms, including A* search and its properties, such as admissibility and dominance of heuristics. Additionally, it introduces IDA* search as a memory-efficient alternative to A* that combines iterative deepening with heuristic search.

Lecture 7&8

Chapter 3
Solving Problems By Searching

Heuristic (or Informed) strategies

Presented by: Dr. Mai Ezz-eldin

Image credit: https://www.linkedin.com/pulse/ai-driven-transformation-product-information-pim-oliemans-9lboe/


Heuristic (or Informed) strategies

Blind Search

DFS and BFS are examples of blind search strategies.
BFS may produce an optimal solution, but it still searches blindly through the state space.
Neither uses any knowledge about the specific domain in question to search through the state space in a more directed manner.
If the search space is big, blind search can simply take too long to be practical, or can significantly limit how deep we are able to look into the space.
Heuristics

 Extra information used by the search algorithm beside the problem definition.
 A heuristic is an estimated cost from a node to the goal node.
 Recall: a heuristic is not the actual cost, it is just an estimate (optimistic).
 With this knowledge, one can search the state space as if given "hints" when exploring a maze.
 Heuristic information in search = hints.
 It leads to a dramatic speed-up in efficiency.
Informed Search
It relies on additional knowledge about the problem or domain, frequently expressed through heuristics.
It is used to distinguish more promising paths towards a goal, but it may be misled, depending on the quality of the heuristic.
In general, it performs much better than uninformed search.
A search strategy which searches the most promising branches of the state space first can:
Find a solution more quickly,
Find solutions even when there is limited time available,
Often find a better solution, since more profitable parts of the state space can be examined while ignoring the unprofitable parts.
A search strategy which is better than another at identifying the most promising branches of a search space is said to be more informed.
Note

[Figure: maze with a Start State and a Goal State, illustrating heuristic "hints".]

Image credit: https://www.101computing.net/a-star-search-algorithm/

Heuristic Function
 A key component of these algorithms is a heuristic function, denoted h(n).
 A heuristic function h(n) is the estimated cost of the cheapest path from node n to a goal node.
 Heuristic functions are the most common form in which additional knowledge of the problem is imparted to the search algorithm.
• n is a node in the search tree.
• h(n) returns a numeric value: a measure of nearness to the goal.

Thus:

• h(n) >= 0 for all nodes n
• h(n) = 0 implies that n is a goal node
• h(n) = infinity implies that n is a dead end from which a goal cannot be reached
Heuristics, example

Route finding problems:

 h1(n) = straight-line distance (SLD) from the node to the goal
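As a minimal sketch (not from the slides), the SLD heuristic can be written in Python; the (x, y) coordinates below are made-up illustration values, not real map data.

```python
import math

# Hypothetical (x, y) map coordinates, for illustration only.
COORDS = {"A": (0, 0), "B": (3, 4), "Goal": (3, 8)}

def h_sld(node, goal="Goal"):
    """h1(n): straight-line (Euclidean) distance from node to the goal."""
    (x1, y1), (x2, y2) = COORDS[node], COORDS[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(h_sld("B"))  # 4.0: node B at (3, 4) is 4 units below the goal at (3, 8)
```

Note that h_sld of the goal itself is 0, as required of a heuristic.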
Heuristics, The 8-puzzle: example

h2(n) = number of misplaced tiles

Function h(N) estimates the cost of the cheapest path from node N to the goal node.

      N          goal
    5 _ 8       1 2 3
    4 2 1       4 5 6
    7 3 6       7 8 _

h(N) = number of misplaced tiles = 6
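The misplaced-tiles count can be sketched in Python using the state N from the slide; states are 9-tuples in row-major order, and 0 marks the blank, which by convention is not counted as misplaced.

```python
# 8-puzzle states as 9-tuples in row-major order; 0 is the blank.
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)

def h_misplaced(state, goal=GOAL):
    """h2(n): number of non-blank tiles not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

N = (5, 0, 8,
     4, 2, 1,
     7, 3, 6)
print(h_misplaced(N))  # 6, as on the slide
```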
Heuristics, The 8-puzzle: example

h3(n) = total Manhattan distance (i.e., number of squares each tile is from its desired location)

Function h(N) estimates the cost of the cheapest path from node N to the goal node.

      N          goal
    5 _ 8       1 2 3
    4 2 1       4 5 6
    7 3 6       7 8 _

h(N) = sum of the distances of every tile to its goal position
     = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
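The total Manhattan distance can likewise be sketched in Python for the same state N (again, 0 is the blank and is not counted):

```python
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)  # 0 is the blank

def h_manhattan(state, goal=GOAL):
    """h3(n): sum over non-blank tiles of |row - goal_row| + |col - goal_col|."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)          # current (row, col) of this tile
        gr, gc = goal_pos[tile]      # goal (row, col) of this tile
        total += abs(r - gr) + abs(c - gc)
    return total

N = (5, 0, 8,
     4, 2, 1,
     7, 3, 6)
print(h_manhattan(N))  # 2+3+0+1+3+0+3+1 = 13, as on the slide
```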
Informed search algorithms

 Best-first search.
 Greedy best-first search.
 A* search.
 IDA* Search

 Local search algorithms.


 Hill-climbing search.
 Simulated annealing search.
 Genetic algorithms.

Greedy best-first search

 Evaluation function: f(n) = h(n)
 h(n) is the heuristic function.
 Greedy best-first search expands the node that appears to be closest to the goal:
 choose the node with minimum f(n).
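As an illustrative sketch (not the lecture's own code), greedy best-first search can be written in Python. The step costs and straight-line distances to Bucharest below are the Romania-map values used in the worked example on the following slides.

```python
import heapq

# Romania subgraph (step costs) and SLD-to-Bucharest heuristic, as in the slides.
GRAPH = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Oradea": {"Sibiu": 151},
    "Bucharest": {},
}
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def greedy_best_first(start, goal):
    """Always expand the open node with minimum f(n) = h(n); ignores path cost."""
    frontier = [(H[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in GRAPH[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (H[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- total cost 450, not optimal
```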
Greedy search - example

[Figure: step-by-step greedy best-first expansion on the Romania map, from Arad towards Bucharest.]
Greedy search - example

Goal is found!
Greedy search - example

Total cost = 140 + 99 + 211 = 450

Is this the optimal solution?
Greedy search - Evaluation

 Complete? No
 Suppose we start in Iasi and the goal is Fagaras.
 Greedy selects Neamt for expansion because it is closest to Fagaras.
 But it is a dead end; it only generates Iasi again.
 It will get stuck in the loop: Iasi → Neamt → Iasi → Neamt, ...
Greedy search - Evaluation

 Complete? No (can get stuck in loops)
 Time? Exponential (a good heuristic can give dramatic improvement)
 Space? Exponential; it keeps all nodes in memory
 Optimal? No
Informed search algorithms

 Best-first search.
 Greedy best-first search.
 A* search.
 IDA* Search

 Local search algorithms.


 Hill-climbing search.
 Simulated annealing search.
 Genetic algorithms.

A* search

 Avoid expanding paths that are already expensive.

 Evaluation function: f(n) = g(n) + h(n)
 g(n) = cost so far to reach n (actual)
 h(n) = expected cost from n to the goal (estimated)
A* search, example

Open queue:
Arad
 Find Bucharest starting at Arad.
 f(Arad) = g(Arad) + h(Arad) = 0 + 366 = 366
A* search, example

Open queue:
Sibiu, Timisoara, Zerind
 Expand Arad and determine f(n) for each child node:
 f(Sibiu) = g(Arad,Sibiu) + h(Sibiu) = 140 + 253 = 393
 f(Timisoara) = g(Arad,Timisoara) + h(Timisoara) = 118 + 329 = 447
 f(Zerind) = g(Arad,Zerind) + h(Zerind) = 75 + 374 = 449
 Best choice is Sibiu.
A* search, example

Open queue:
Rimnicu Vilcea, Fagaras, Timisoara, Zerind, Arad, Oradea
 Expand Sibiu and determine f(n) for each child node:
 f(Arad) = g(Sibiu,Arad) + h(Arad) = 280 + 366 = 646
 f(Fagaras) = g(Sibiu,Fagaras) + h(Fagaras) = 239 + 176 = 415
 f(Oradea) = g(Sibiu,Oradea) + h(Oradea) = 291 + 380 = 671
 f(Rimnicu Vilcea) = g(Sibiu,Rimnicu Vilcea) + h(Rimnicu Vilcea) = 220 + 193 = 413
 Best choice is Rimnicu Vilcea, as it is at the top of the open queue.
A* search, example

Open queue:
Fagaras, Pitesti, Timisoara, Zerind, Craiova, Sibiu, Arad, Oradea
 Expand Rimnicu Vilcea and determine f(n) for each child node:
 f(Pitesti) = 317 + 100 = 417
 f(Craiova) = 366 + 160 = 526
 f(Sibiu) = 300 + 253 = 553
 Best choice is Fagaras, as it is at the top of the open queue.
A* search, example

Open queue:
Pitesti, Timisoara, Zerind, Bucharest, Craiova, Sibiu, Sibiu, Arad, Oradea
 Expand Fagaras and determine f(n) for each child node:
 f(Sibiu) = g(Fagaras,Sibiu) + h(Sibiu) = 338 + 253 = 591
 f(Bucharest) = g(Fagaras,Bucharest) + h(Bucharest) = 450 + 0 = 450
 Best choice is Pitesti! It looks like Pitesti is the next node we should expand.

A* search, example

Open queue:
Bucharest, Timisoara, Zerind, Bucharest, Craiova, Rimnicu Vilcea, Sibiu, Sibiu, Craiova, Arad, Oradea
 Expand Pitesti and determine f(n) for each child node:
 f(Bucharest) = g(Pitesti,Bucharest) + h(Bucharest) = 418 + 0 = 418
 Best choice is Bucharest, as it is at the top of the open queue.

Goal is found!
A* search, example

Open queue:
Bucharest, Timisoara, Zerind, Bucharest, Craiova, Rimnicu Vilcea, Sibiu, Sibiu, Craiova, Arad, Oradea
Now we "expand" the node for Bucharest.
We're done! (And we know the path that we've found is optimal.)

Total cost = 418

Is this the optimal solution? Yes.
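The walkthrough above can be sketched in Python. This is an illustrative implementation, not the lecture's own code; the step costs and h values are the Romania-map numbers used on the slides.

```python
import heapq

# Romania subgraph (step costs) and SLD-to-Bucharest heuristic, as in the slides.
GRAPH = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Oradea": {"Sibiu": 151},
    "Bucharest": {},
}
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def a_star(start, goal):
    """Always expand the open node with minimum f(n) = g(n) + h(n)."""
    frontier = [(H[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {}                                 # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g   # first goal pop is optimal with admissible h
        if node in best_g and best_g[node] <= g:
            continue         # stale entry: a cheaper path to node was expanded
        best_g[node] = g
        for nbr, cost in GRAPH[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + H[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```

Note how the f values popped from the queue (393, 413, 415, 417, 418) match the slide walkthrough step by step.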
A* search - Evaluation
 Complete? Yes
 Unless there are infinitely many nodes with f(n) ≤ f(G), where f(G) is the cost of the optimal solution.
 Since bands of increasing f are added.

 Time? Exponential
 The number of nodes expanded is still exponential in the length of the solution.

 Space? Exponential
 It keeps all generated nodes in memory.
 Hence space is the major problem, not time.

 Optimal? Yes
Heuristic Properties - Admissible
 A heuristic function should be admissible in order to find optimal solutions.
 A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true (actual) cost to reach the goal state from n.
A heuristic is said to be admissible:
If it is no more than the actual cost to reach the goal,
i.e., if it never overestimates the cost of reaching the goal.
An admissible heuristic is also known as an optimistic heuristic. If:
n is a node,
h is a heuristic,
h(n) is the cost indicated by h to reach a goal from n,
h*(n) is the actual cost to reach a goal from n,
then h is admissible if h(n) ≤ h*(n).

Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.
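On a small graph, admissibility can be sanity-checked directly: compute h*(n) exactly with Dijkstra's algorithm (run backwards from the goal) and compare it with h(n) at every node. The graph and heuristic below are hypothetical illustrations, not from the slides.

```python
import heapq

# A tiny hypothetical graph with goal "G", and a candidate heuristic H.
GRAPH = {"S": {"A": 2, "B": 5}, "A": {"G": 3}, "B": {"G": 1}, "G": {}}
H = {"S": 4, "A": 3, "B": 1, "G": 0}

def true_costs(goal):
    """h*(n): exact cheapest cost from every node to the goal
    (Dijkstra on the reversed graph, starting at the goal)."""
    rev = {n: {} for n in GRAPH}
    for u, nbrs in GRAPH.items():
        for v, c in nbrs.items():
            rev[v][u] = c
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v, c in rev[u].items():
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

h_star = true_costs("G")
admissible = all(H[n] <= h_star.get(n, float("inf")) for n in GRAPH)
print(admissible)  # True: h(n) <= h*(n) at every node
```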
Heuristic Properties - Dominance

      N          goal
    5 _ 8       1 2 3
    4 2 1       4 5 6
    7 3 6       7 8 _

 h1 = the number of misplaced tiles = 6
 h2 = Manhattan distance = 13
 If h2(n) >= h1(n) for all n, then h2 dominates h1 (h2 is both admissible and more informed than h1).
 Manhattan distance cannot overestimate, since the number of moves we need to make to get to the goal state must be at least the sum of the distances of the tiles from their goal positions.
 It is always better to use a heuristic function with higher values, as long as it does not overestimate (i.e., it is admissible).
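A quick empirical check of dominance on a few sample states can be sketched as follows. (Sampling states does not prove h2(n) >= h1(n) for all n; that follows from the argument above, since every misplaced tile is at Manhattan distance at least 1.)

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def h1(state):
    """Number of misplaced (non-blank) tiles."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """Total Manhattan distance of the non-blank tiles."""
    pos = {t: divmod(i, 3) for i, t in enumerate(GOAL)}
    total = 0
    for i, t in enumerate(state):
        if t != 0:
            r, c = divmod(i, 3)
            total += abs(r - pos[t][0]) + abs(c - pos[t][1])
    return total

# The state N from the slide, plus two other sample states.
states = [
    (5, 0, 8, 4, 2, 1, 7, 3, 6),   # N: h1 = 6, h2 = 13
    (1, 2, 3, 4, 5, 6, 7, 0, 8),   # one move from the goal
    (8, 7, 6, 5, 4, 3, 2, 1, 0),
]
print(all(h2(s) >= h1(s) for s in states))  # True: h2 dominates h1 here
```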
8-Puzzle
f(N) = h(N) = number of misplaced tiles

[Figure: greedy best-first search tree for the 8-puzzle, with f(N) = h(N) shown at each node; the goal node has f = 0.]
8-Puzzle
f(N) = g(N) + h(N)
with h(N) = number of misplaced tiles

[Figure: A* search tree for the 8-puzzle, with f(N) = g(N) + h(N) shown at each node (e.g. 1+5, 2+3, 3+4); the goal node has f = 5+0.]
8-Puzzle
f(N) = h(N) = Σ distances of tiles to goal

[Figure: search tree for the 8-puzzle, with f(N) = total Manhattan distance shown at each node; the goal node has f = 0.]
Informed search algorithms

 Best-first search.
 Greedy best-first search.
 A* search.
 IDA* Search

 Local search algorithms.


 Hill-climbing search.
 Simulated annealing search.
 Genetic algorithms.

IDA* Search
Problem with A* search:
 You have to record all the nodes,
 in case you have to back up from a dead end.
A* searches often run out of memory, not time.

Solution: combine A* and iterative deepening.

Iterative deepening:
 Repeat depth-first search with an increasing depth limit.
IDA*:
 Repeat depth-first search with an increasing f-limit.

IDA* is complete and optimal like A*, but uses less memory.
IDA* Search steps
f(n) is the estimate of the total path cost from start to goal through n.
Use the same iterative deepening trick as IDS,
but iterate over f(n) rather than depth:
Impose a limit on f.
Define contours: f < 100, f < 200, f < 300, etc.
Use DFS to search within the f-limit.
Iteratively relax the limit.
This greatly reduces memory usage.
• Find all nodes where f(n) < 100; ignore nodes with f(n) >= 100.
• Find all nodes where f(n) < 200; ignore nodes with f(n) >= 200.
• And so on...
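The steps above can be sketched as a recursive depth-first search with an increasing f-limit. The tiny graph and heuristic below are hypothetical illustrations, not from the slides; each iteration raises the bound to the smallest f value that exceeded it last time.

```python
import math

# A tiny hypothetical weighted graph with an admissible heuristic; goal is "G".
GRAPH = {"S": {"A": 2, "B": 5}, "A": {"G": 3}, "B": {"G": 1}, "G": {}}
H = {"S": 4, "A": 3, "B": 1, "G": 0}

def ida_star(start, goal):
    """Repeated depth-first search with an increasing limit on f = g + h."""
    path = [start]  # only the current path is kept in memory

    def dfs(node, g, bound):
        f = g + H[node]
        if f > bound:
            return f                      # report the f that exceeded the bound
        if node == goal:
            return "FOUND"
        minimum = math.inf                # smallest f seen beyond the bound
        for nbr, cost in GRAPH[node].items():
            if nbr in path:               # avoid cycles on the current path only
                continue
            path.append(nbr)
            t = dfs(nbr, g + cost, bound)
            if t == "FOUND":
                return t
            path.pop()
            minimum = min(minimum, t)
        return minimum

    bound = H[start]                      # first f-limit: f(start) = 0 + h(start)
    while True:
        t = dfs(start, 0, bound)
        if t == "FOUND":
            cost = sum(GRAPH[a][b] for a, b in zip(path, path[1:]))
            return path, cost
        if t == math.inf:
            return None, math.inf         # no solution within any bound
        bound = t                         # relax the limit to the next contour

print(ida_star("S", "G"))  # (['S', 'A', 'G'], 5) -- the optimal path
```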
Iterative Deepening A*: IDA*

• Use f(N) = g(N) + h(N) with an admissible and consistent h.
• Each iteration is depth-first with a cutoff on the value of f of expanded nodes.

IDA*: repeat depth-first search within an increasing f-limit: f0, f1, f2, f3, ...

[Figure: nested contours around the start node, one per f-limit f0 < f1 < f2 < f3.]
IDA* - Example

[Figure: first-iteration search tree; the root has f = 4, its children f = 4 and 6, and the next level f = 5 and 6.]

 This is the first iteration of IDA*.
 In this example the initial cost threshold is 4, and every node with cost 4 is expanded.
 We stop when we reach a cost larger than 4.
IDA* - Example

[Figure: second-iteration search tree; nodes with f = 4, 6, 5, 6 at the top and f = 7, 5, 6, 7 below.]

• This is the second iteration of IDA*.
• The cost threshold was 5, and every node with cost 5 or less was expanded.
Advantages of IDA*

• Still complete and optimal.
• Requires less memory than A*.
• Avoids the overhead of sorting the fringe.
• IDA* may even run faster than A*.
• IDA* is much easier to implement than A*, because it is a DFS algorithm and no open and closed lists have to be kept.
Limitations of IDA*

 It cannot avoid revisiting states not on the current path:
 if a certain node can be reached via multiple paths, it will be represented by more than one node in the search tree.
 A* can avoid duplicate nodes by storing them in memory, but IDA* is a DFS (it keeps no memory of visited states) and thus cannot detect most of the duplicates.
 This can increase the time complexity of IDA* compared to A*.
 Thus, if there are many short cycles in the graph and there is no memory problem, choose A*.
