Solving Problems by Searching


Solving problems by searching

Outline
 Introduction
 Problem-solving agents
 Problem types
 Problem formulation
 Example problems
 Basic search algorithms
INTRODUCTION
 A problem can be described by:
  Initial state
  Operator or successor function
  State space
  Path
  Goal test
 Problem solving involves:
  Problem definition
  Problem analysis
  Knowledge representation
  Problem solving (selection of the best technique)
Problem-solving agents
Example: Shortest path between VIT
Vellore Campus To VIT Chennai Campus

 To reach the campus by the shortest route while crossing
the minimum number of toll-gates
 Formulate goal:
 Shortest path
 Formulate problem:
 states: various urban areas
 actions: drive between cities
 Find solution:
 sequence of urban areas, e.g., Kancheepuram, Thambaram,
Kelambakkam, Guindy
Example: Cont…
Problem types
 Deterministic, fully observable → single-state problem
  Agent knows exactly which state it will be in; solution is a sequence
 Non-observable → sensorless problem (conformant problem)
  Agent may have no idea where it is; solution is a sequence
 Nondeterministic and/or partially observable → contingency problem
  percepts provide new information about the current state
  often interleave search and execution
 Unknown state space → exploration problem

Example: vacuum world
 Single-state, start in #5.
Solution? [Right, Suck]

 Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution? [Right, Suck, Left, Suck]

 Contingency
  Nondeterministic: Suck may
dirty a clean carpet [Murphy's law]
  Partially observable: location, dirt at current location.
  Percept: [L, Clean], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:

1. Initial state, e.g., "at Katpadi"
2. Actions or successor function S(x) = set of action–state pairs
  e.g., S(Katpadi) = {<Wallaja → Kancheepuram, Padappai>, …}
3. Goal test, which can be
  explicit, e.g., x = "at VIT Chennai"
  implicit, e.g., cross the minimum number of toll-gates, or ShortestPath(x)
4. Path cost (additive)
  e.g., sum of distances, number of actions executed, etc.
  c(x,a,y) is the step cost of taking action a in state x to reach state y, assumed to be ≥ 0

 A solution is a sequence of actions leading from the initial state to a goal state
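The four items above can be sketched as a small Python class. The road map and distances below are illustrative placeholders (not real figures), and the class/method names are assumptions for this sketch:

```python
# Hypothetical road map: distances are placeholders, not real road data.
ROAD_MAP = {
    "Katpadi":      {"Wallaja": 30},
    "Wallaja":      {"Kancheepuram": 60},
    "Kancheepuram": {"Tambaram": 45},
    "Tambaram":     {"VIT Chennai": 40},
    "VIT Chennai":  {},
}

class RouteProblem:
    def __init__(self, initial, goal, road_map):
        self.initial = initial              # 1. initial state
        self.goal = goal
        self.road_map = road_map

    def actions(self, state):               # 2. successor function
        return list(self.road_map[state])

    def result(self, state, action):        # driving to a city puts us in that city
        return action

    def goal_test(self, state):             # 3. explicit goal test
        return state == self.goal

    def step_cost(self, state, action, result):  # 4. additive path cost, >= 0
        return self.road_map[state][action]

problem = RouteProblem("Katpadi", "VIT Chennai", ROAD_MAP)
print(problem.actions("Katpadi"))   # ['Wallaja']
```

A solution is then any action sequence whose successive `result` states end in a state passing `goal_test`, with the solution's cost being the sum of the step costs.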
Selecting a state space
 The real world is absurdly complex
  state space must be abstracted for problem solving
 (Abstract) state = set of real states
 (Abstract) action = complex combination of real actions
  e.g., "Katpadi → Chennai" represents a complex set of possible routes, detours, rest
stops, etc.
  For guaranteed realizability, any real state "in Katpadi" must get to some real
state "in Chennai"
 (Abstract) solution = set of real paths that are solutions in the real world
 Each abstract action should be "easier" than the original problem

Vacuum world state space graph

 states? integer dirt and robot locations
 actions? Left, Right, Suck
 goal test? no dirt at all locations
 path cost? 1 per action
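As a sketch, assuming the standard two-cell world (cells A and B), the full state space can be enumerated directly; the function names here are illustrative:

```python
from itertools import product

# A state is (robot_location, dirt_in_A, dirt_in_B):
# 2 locations x 2 x 2 dirt combinations = 8 states, as in the 8-state graph.
STATES = list(product(["A", "B"], [True, False], [True, False]))

def result(state, action):
    """Successor function for the actions Left, Right, Suck (cost 1 each)."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def goal_test(state):
    return not state[1] and not state[2]   # no dirt at all locations

print(len(STATES))  # 8
```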
Example: The 8-puzzle

 states? locations of tiles
 actions? move blank left, right, up, down
 goal test? = goal state (given)
 path cost? 1 per move

[Note: optimal solution of the n-puzzle family is NP-hard]
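A minimal sketch of this formulation, assuming a row-major tuple representation with 0 for the blank (the goal ordering below is one common convention, not the only one):

```python
# 8-puzzle state: tuple of 9 entries, row-major, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)       # assumed goal ordering

MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def actions(state):
    """Legal moves of the blank; path cost is 1 per move."""
    i = state.index(0)
    acts = []
    if i >= 3: acts.append("up")         # blank not on top row
    if i < 6: acts.append("down")        # blank not on bottom row
    if i % 3 > 0: acts.append("left")    # blank not in left column
    if i % 3 < 2: acts.append("right")   # blank not in right column
    return acts

def result(state, action):
    """Swap the blank with the neighbouring tile in the given direction."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```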
State Space
 A state space is the set of successive configurations or states of a
problem instance that are considered when searching for a goal state
with a desired property. [or]
 A state space is the set of all possible states for a given problem.
 Searching for a solution is needed when the steps to reach it are not
known in advance.
 The set of states forms a graph in which two states are connected if
there is an operation that transforms the first state into the second.
state space search
 In state space search, a state space is formally represented as a
tuple ⟨S, A, Action(s), Result(s,a), Cost(s,a)⟩, in which:
  S is the set of all possible states;
  A is the set of possible actions, not tied to a particular state
but covering the whole state space;
  Action(s) is the function that establishes which actions can be
performed in a certain state;
  Result(s,a) is the function that returns the state reached by
performing action a in state s;
  Cost(s,a) is the cost of performing action a in state s. In many
state spaces it is a constant, but this is not true in general.
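A toy instance of this tuple, for a hypothetical three-state space (all names and transitions here are made up for illustration):

```python
# Sketch of the <S, A, Action, Result, Cost> tuple for a tiny hypothetical space.
S = {"s0", "s1", "s2"}
A = {"a", "b"}

TRANSITIONS = {            # (state, action) -> next state
    ("s0", "a"): "s1",
    ("s0", "b"): "s2",
    ("s1", "a"): "s2",
}

def Action(s):             # actions applicable in state s
    return {a for (st, a) in TRANSITIONS if st == s}

def Result(s, a):          # state reached by performing a in s
    return TRANSITIONS[(s, a)]

def Cost(s, a):            # constant here, but not constant in general
    return 1
```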
Examples of State-space search
algorithms
Uninformed Search (according to Poole and
Mackworth) - meaning that they use no
information about the goal's location.
 Traditional depth-first search
 Breadth-first search
 Iterative deepening
 Lowest-cost-first search
Heuristic Search (Informed search algorithms)
Some algorithms take into account information about
the goal node's location in the form of a heuristic
function.
 Heuristic depth-first search
 Greedy best-first search
 A* search
Tree search algorithms
 Basic idea:
  offline, simulated exploration of the state space by generating
successors of already-explored states (a.k.a. expanding states)

Tree search example

 In search algorithms, the state (or search) space is usually represented as a
graph, where nodes are states and the edges are the connections (or actions)
between the corresponding states.
 When performing a tree (or graph) search, the set of all nodes at the end of
all visited paths is called the fringe, frontier or border.
Implementation: general tree search
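The general tree-search loop (shown as a figure in the original slides) can be sketched as follows; the dict-based problem representation is an assumption for this sketch, and a node is represented simply as the path of states from the root:

```python
from collections import deque

def tree_search(problem, fringe):
    """General tree search: the search strategy is fixed entirely by how
    the fringe orders nodes (FIFO, LIFO, priority, ...)."""
    fringe.append([problem["initial"]])
    while fringe:
        path = fringe.popleft()          # choose a node according to the strategy
        state = path[-1]
        if state in problem["goals"]:    # goal test on selection
            return path
        for succ in problem["successors"].get(state, []):   # expand the node
            fringe.append(path + [succ])
    return None                          # failure: fringe exhausted

# Usage on a tiny hypothetical graph:
problem = {"initial": "A",
           "goals": {"D"},
           "successors": {"A": ["B", "C"], "B": ["D"], "C": ["D"]}}
print(tree_search(problem, deque()))   # ['A', 'B', 'D'] with a FIFO fringe
```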
Implementation: states vs. nodes
 A state is a (representation of) a physical configuration
 A node is a data structure constituting part of a search tree; it
includes state, parent node, action, path cost g(x), and depth

 The Expand function creates new nodes, filling in the various
fields and using the SuccessorFn of the problem to create the
corresponding states.
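A minimal sketch of the node data structure and the Expand function (assuming a successor function that yields (action, state, step-cost) triples):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A search-tree node: wraps a state with bookkeeping fields."""
    state: object                    # (representation of) a configuration
    parent: Optional["Node"] = None  # node that generated this one
    action: Optional[str] = None     # action applied to the parent
    path_cost: float = 0.0           # g(x): cost of the path from the root
    depth: int = 0

def expand(node, successor_fn):
    """Create child nodes via the problem's successor function,
    which yields (action, state, step_cost) triples."""
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + c, depth=node.depth + 1)
            for (a, s, c) in successor_fn(node.state)]
```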
Search strategies
 A search strategy is defined by picking the order of node
expansion
 Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: number of nodes generated
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?

 Time and space complexity are measured in terms of


 b: maximum branching factor of the search tree
 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)
Uninformed search strategies
 Uninformed search strategies use only the information available in the problem
definition (blind search: without using any domain-specific knowledge)

 Breadth-first search

 Uniform-cost search

 Depth-first search

 Depth-limited search

 Iterative deepening search


Breadth-first search
 Expand shallowest unexpanded node
 Implementation:
 fringe is a FIFO queue, i.e., new successors go at end
Properties of breadth-first search

 Complete? Yes (if b is finite)
 Time? O(b^d)
 Space? O(b^d) (keeps every node in memory)
 Optimal? Yes (if cost = 1 per step)
 Space is the bigger problem (more than time)
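A minimal BFS sketch over an adjacency-list graph (the graph itself is a made-up example). This is the tree-search version; for graphs with cycles a visited set would be added:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS: the fringe is a FIFO queue, so the shallowest node is expanded
    first. Returns a path with the fewest actions, or None on failure."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()           # shallowest unexpanded node
        state = path[-1]
        if state == goal:
            return path
        for succ in graph.get(state, []): # new successors go at the end
            fringe.append(path + [succ])
    return None

# Hypothetical graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"]}
print(breadth_first_search(graph, "A", "F"))   # ['A', 'C', 'E', 'F']
```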


Uniform-cost search
 Expand the lowest-path-cost unexpanded node
 Implementation:
  fringe = queue ordered by path cost
 Equivalent to breadth-first if step costs are all equal

 Complete? Yes, if step cost ≥ ε
 Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is
the cost of the optimal solution
 Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
 Optimal? Yes – nodes expanded in increasing order of g(n)
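A sketch of uniform-cost search using a binary heap as the priority queue (the weighted graph is a made-up example):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: the fringe is a priority queue ordered by path cost g(n).
    graph maps a state to {successor: step_cost}. Returns (cost, path)."""
    fringe = [(0, [start])]              # entries are (g, path)
    best_g = {}                          # cheapest g found so far per state
    while fringe:
        g, path = heapq.heappop(fringe)  # lowest-path-cost node
        state = path[-1]
        if state == goal:
            return g, path               # expanded in increasing g => optimal
        if g > best_g.get(state, float("inf")):
            continue                     # a cheaper route was already found
        for succ, step in graph.get(state, {}).items():
            new_g = g + step
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(fringe, (new_g, path + [succ]))
    return None

# Hypothetical weighted graph:
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
print(uniform_cost_search(graph, "A", "D"))   # (3, ['A', 'B', 'C', 'D'])
```

Note that the goal test is applied when a node is selected for expansion, not when it is generated; this is what guarantees optimality.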


Depth-first search
 Expand deepest unexpanded node
 Implementation:
 fringe = LIFO queue, i.e., put successors at front
Properties of depth-first search
 Complete? No: fails in infinite-depth spaces, spaces with loops
  Modify to avoid repeated states along the path
   complete in finite spaces
 Time? O(b^m): terrible if m is much larger than d
  but if solutions are dense, may be much faster than breadth-first
 Space? O(bm), i.e., linear space!
 Optimal? No
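A DFS sketch using an explicit stack as the LIFO fringe (the graph is a made-up example); the `not in path` check is the modification mentioned above that avoids repeated states along the current path:

```python
def depth_first_search(graph, start, goal):
    """DFS: the fringe is a LIFO stack, so the deepest node is expanded first."""
    fringe = [[start]]
    while fringe:
        path = fringe.pop()               # deepest unexpanded node
        state = path[-1]
        if state == goal:
            return path
        for succ in reversed(graph.get(state, [])):  # successors go on top
            if succ not in path:          # avoid repeated states along the path
                fringe.append(path + [succ])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(depth_first_search(graph, "A", "E"))   # ['A', 'B', 'D', 'E']
```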
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
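The recursive implementation (shown as a figure in the original slides) might look like the following sketch, together with the iterative-deepening driver described next; note that returning None here conflates "failure" and "cutoff":

```python
def depth_limited_search(graph, state, goal, limit, path=None):
    """Recursive DFS with depth limit l: nodes at depth l have no successors."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    if limit == 0:
        return None                      # cutoff: treat node as having no successors
    for succ in graph.get(state, []):
        found = depth_limited_search(graph, succ, goal, limit - 1, path + [succ])
        if found:
            return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """IDS: repeat depth-limited search with l = 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(graph, start, goal, limit)
        if found:
            return found
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": []}
print(depth_limited_search(graph, "A", "D", 1))     # None: D lies at depth 2
print(iterative_deepening_search(graph, "A", "D"))  # ['A', 'B', 'D']
```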
Iterative deepening search
[Figures: search trees for depth limits l = 0, 1, 2, 3]
Iterative deepening search
 Number of nodes generated in a depth-limited search to depth d
with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d

 Number of nodes generated in an iterative deepening search to
depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d

 For b = 10, d = 5:
  N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  Overhead = (123,456 − 111,111)/111,111 ≈ 11%
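The arithmetic above can be checked in a few lines:

```python
# Check of the node counts for b = 10, d = 5.
b, d = 10, 5

n_dls = sum(b**i for i in range(d + 1))            # b^0 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_dls)                                 # 111111
print(n_ids)                                 # 123456
print(round(100 * (n_ids - n_dls) / n_dls))  # 11 (percent overhead)
```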


Properties of iterative deepening search

 Complete? Yes
 Time? O(b^d)
 Space? O(bd)
 Optimal? Yes, if step cost = 1


Summary of algorithms
Graph search
Summary
 Problem formulation usually requires abstracting away real-world
details to define a state space that can feasibly be explored

 Variety of uninformed search strategies

 Iterative deepening search uses only linear space and not much
more time than other uninformed algorithms
