Example: Route Planning in a Map: (Learning) (Logic) (Uncertainty) (Logic, Uncertainty)
• Computation time/space
• Solution quality
Lecture 2 – 3, Lecture 2 – 4
Depth-First Search
• Treat agenda as a stack (get most recently added node)
• Expansion: put children at top of stack
• Get new nodes from top of stack

[Figure: search on a road map with cities A, O, Z, S, F, R, B, P, D, T, L, M, C; partial agenda trace: ZA, SA, TA]

Avoiding Loops
• Method 1: Don't add a node to the agenda if it's already in the agenda.
  – Causes problems when there are multiple paths to a node and we want to be sure to get the shortest.
• Method 2: Don't expand a node (or add it to the agenda) if it has already been expanded.
  – We'll adopt this one for all of our searches.
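Method 2 can be sketched in Python as follows; the adjacency-dict graph representation and all names here are illustrative choices, not from the lecture:

```python
# Depth-first search with Method 2: never expand a node twice.
# The agenda is a stack of paths; the top holds the most recently added one.
def dfs(graph, start, goal):
    agenda = [[start]]           # stack of paths from the start
    expanded = set()
    while agenda:
        path = agenda.pop()      # get new nodes from the top of the stack
        node = path[-1]
        if node == goal:
            return path
        if node in expanded:     # Method 2: skip already-expanded nodes
            continue
        expanded.add(node)
        for child in graph.get(node, []):
            agenda.append(path + [child])   # children go on top of the stack
    return None                  # agenda exhausted: goal unreachable
```

Note that this returns *some* path to the goal, not necessarily the shortest, which is exactly the DFS behavior the later slides contrast with uniform cost search.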
Final agenda: BAZOSF, RAZOS, SA, TA
Result = BAZOSF
• O(b^m) time
• O(mb) space
[Figure: road map (cities A, O, Z, S, F, R, B, P, D, T, L, M, C) with agenda trace: ZA, SA, TA → SA, TA, OAZ → TA, OAZ, OAS, FAS, RAS]

Let
• b = branching factor
• m = maximum depth
• d = goal depth
Iterative Deepening
• DFS is efficient in space, but has no path-length guarantee.
• BFS finds the min-step path but requires exponential space.
• Iterative deepening: perform a sequence of DFS searches with increasing depth cutoff until the goal is found.

  DFS cutoff depth   Space          Time
  1                  O(b)           O(b)
  2                  O(2b)          O(b^2)
  3                  O(3b)          O(b^3)
  4                  O(4b)          O(b^4)
  …                  …              …
  d                  O(db)          O(b^d)
  Total              Max = O(db)    Sum = O(b^(d+1))

Uniform Cost Search
• Breadth-first and iterative deepening find the path with the fewest steps (hops).
• If steps have unequal cost, this is not interesting.
• How can we find the shortest path (measured by sum of distances along the path)?
• Uniform Cost Search:
  – Nodes in the agenda keep track of total path length from the start to that node.
  – Agenda kept in a priority queue ordered by path length.
  – Get the shortest path in the queue.
  – Explores paths in contours of total path length; finds the optimal path.
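The iterative deepening loop can be sketched as a sequence of recursive depth-limited searches; the names and the `max_depth` safety bound are illustrative, not from the lecture:

```python
# Depth-limited DFS: succeed only if the goal is within `cutoff` steps.
def depth_limited(graph, path, goal, cutoff):
    node = path[-1]
    if node == goal:
        return path
    if cutoff == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited(graph, path + [child], goal, cutoff - 1)
        if result:
            return result
    return None

# Iterative deepening: increase the cutoff until the goal is found, so the
# first success is a min-step path, while space stays O(db) like DFS.
def iterative_deepening(graph, start, goal, max_depth=50):
    for cutoff in range(max_depth + 1):
        result = depth_limited(graph, [start], goal, cutoff)
        if result:
            return result
    return None
```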
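The uniform cost bullets above can be sketched with Python's `heapq` as the priority queue; the edge-weighted adjacency-dict representation is an illustrative choice:

```python
import heapq

# Uniform cost search: agenda entries are (total path length, path),
# kept in a priority queue ordered by path length from the start.
def uniform_cost(graph, start, goal):
    agenda = [(0, [start])]
    expanded = set()
    while agenda:
        cost, path = heapq.heappop(agenda)   # get the shortest path in the queue
        node = path[-1]
        if node == goal:
            return cost, path                # first goal popped is optimal
        if node in expanded:
            continue
        expanded.add(node)
        for child, step in graph.get(node, []):
            heapq.heappush(agenda, (cost + step, path + [child]))
    return None
```

Because paths are popped in order of total length, the search sweeps outward in cost contours, which is why the first path that reaches the goal is the shortest one.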
Admissibility
• What must be true about h for A* to find the optimal path?
• A* finds the optimal path if h is admissible; h is admissible when it never overestimates.
• In this example, h is not admissible: g(X)+h(X) = 102 but g(Y)+h(Y) = 74, so the optimal path is not found!
• In route-finding problems, straight-line distance to the goal is an admissible heuristic.

[Figure: from the start, an edge of cost 2 to X (h=100) and an edge of cost 73 to Y (h=1); each of X and Y has a cost-1 edge to the goal (h=0)]

Why use estimate of goal distance?
• Order in which uniform cost looks at nodes: A and B are the same distance from the start, so they will be looked at before any longer paths. No "bias" toward the goal.
• Order of examination using dist. from start + estimate of dist. to goal: note the "bias" toward the goal; points away from the goal look worse.
• Assume states are points in the Euclidean plane.

[Figure: examination-order contours around the start, with the goal and two equidistant points A and B marked]
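A* is uniform cost search with the queue ordered by g(n) + h(n) instead of g(n) alone. A minimal sketch, with an illustrative graph representation and h passed in as a function:

```python
import heapq

# A*: agenda entries are (g + h, g, path); popped in order of f = g + h.
# If h never overestimates (is admissible), the first goal popped is optimal.
def a_star(graph, start, goal, h):
    agenda = [(h(start), 0, [start])]
    expanded = set()
    while agenda:
        _, g, path = heapq.heappop(agenda)
        node = path[-1]
        if node == goal:
            return g, path
        if node in expanded:
            continue
        expanded.add(node)
        for child, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(agenda, (g2 + h(child), g2, path + [child]))
    return None
```

Replaying the slide's example (edges of cost 2 and 73 from the start, h(X)=100, h(Y)=1, final edges of cost 1) makes the Y branch look like 74 and the X branch like 102, so this A* returns the suboptimal cost-74 path, exactly as the slide warns.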
Multiple Minima
• Most problems of interest do not have unique global minima that can be found by gradient descent from an arbitrary starting point.
• Typically, local search methods (such as gradient descent) will find local minima and get stuck there.
• How can we escape from local minima?
  – Take some random steps!
  – Re-start from randomly chosen starting points.

[Figure: error curve over the search space, with a local minimum and a deeper global minimum]

Simulated Annealing
• T = initial temperature
• x = initial guess
• v = Energy(x)
• Repeat while T > final temperature:
  – Repeat n times:
    • x′ ← Move(x)
    • v′ = Energy(x′)
    • If v′ < v then accept new x [x ← x′]
    • Else accept new x with probability exp(−(v′ − v)/kT)
  – T = 0.95T /* for example */
• At high temperature, most moves are accepted (and the search can move between "basins").
• At low temperature, only moves that improve the energy are accepted.
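The loop above, sketched on a one-dimensional energy function. The move rule, the cooling factor, and the temperature bounds are illustrative choices, and the constant k is folded into T:

```python
import math
import random

# Simulated annealing on a 1-D energy function.
# Downhill moves are always accepted; uphill moves are accepted with
# probability exp(-(v_new - v)/T), so large T tolerates big uphill steps.
def simulated_annealing(energy, x, t=10.0, t_final=0.01, n=20):
    v = energy(x)
    while t > t_final:
        for _ in range(n):
            x_new = x + random.uniform(-1, 1)      # Move(x): a random step
            v_new = energy(x_new)
            if v_new < v or random.random() < math.exp(-(v_new - v) / t):
                x, v = x_new, v_new                # accept the new x
        t *= 0.95                                  # cooling schedule
    return x, v
```

On a convex energy such as (x − 3)², the random walk wanders freely at high temperature and then settles into the minimum as T falls, illustrating the "basin"-hopping behavior described above.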