Chapter 3
Solving Problems by Searching and Constraint Satisfaction Problems
Problem Solving by Searching
• An important application of AI is problem solving.
• Searching is the process of finding a solution to a given problem.
• Search algorithms are among the most important areas of Artificial Intelligence.
Problem Solving Agents
• Problem-solving agents are goal-driven agents that focus on satisfying a goal.
• In AI, search techniques are general-purpose, well-known problem-solving methods.
• Rational agents (problem-solving agents) in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result.
• Problem-solving agents are goal-based agents that use atomic representations; that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
Steps performed by problem solving agents
Problem Formulation
• A problem can be defined formally by five components:
– Initial state – the state the agent starts in (the current state).
– Actions – the possible actions available to the agent in a given state (possible options).
– Transition model – a description of what each action does (possible outcomes).
– Goal test – determines whether a given state is a goal state.
– Path cost – assigns a numeric cost to each path; the agent seeks the optimum (lowest-cost) path.
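• As an illustration (not part of the slides), the five components can be sketched as a small Python class; the class and method names below (SearchProblem, actions, result, is_goal, step_cost) are assumptions of this sketch.

# A minimal sketch of the five-component problem formulation (illustrative only).
class SearchProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state        # Initial state: where the agent starts

    def actions(self, state):
        raise NotImplementedError                 # Actions: options available in `state`

    def result(self, state, action):
        raise NotImplementedError                 # Transition model: outcome of `action`

    def is_goal(self, state):
        raise NotImplementedError                 # Goal test: is `state` a goal state?

    def step_cost(self, state, action, next_state):
        return 1                                  # Path cost: sum of step costs (default 1)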
Problem Formulation
[Figure: problem-formulation diagram relating the Actions (possible options) and the Transition model (possible outcomes).]
• On holiday in Romania; currently in Arad, with a non-refundable ticket to fly out of Bucharest tomorrow.
– Formulate goal: be in Bucharest.
– Formulate problem:
• States: various cities.
• Actions: drive between cities.
– Search: find a sequence of cities that leads to Bucharest.
Problem Formulation
Initial state
• e.g., In(Arad)
Actions
• In(Arad) → {Go(Zerind), Go(Sibiu), Go(Timisoara)}
Transition model
• Result(In(Arad), Go(Zerind)) = In(Zerind)
Goal test
• Is the current state In(Bucharest)?
Path cost
• c(In(Arad), Go(Zerind), In(Zerind)) = 75
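• A hedged sketch of the same formulation for a fragment of the Romania map; the road lengths used (Arad–Zerind 75, Arad–Sibiu 140, Arad–Timisoara 118) follow the standard textbook map, and the function names are assumptions of this sketch.

ROADS = {("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118}

def actions(state):
    # Actions available in `state`: drive to any directly connected city.
    return [b for (a, b) in ROADS if a == state] + [a for (a, b) in ROADS if b == state]

def result(state, go_to):
    return go_to                     # Transition model: Result(In(Arad), Go(Zerind)) = In(Zerind)

def goal_test(state):
    return state == "Bucharest"      # Goal test: In(Bucharest)

def step_cost(state, go_to):
    return ROADS.get((state, go_to)) or ROADS.get((go_to, state))   # e.g. Arad -> Zerind costs 75

print(actions("Arad"), step_cost("Arad", "Zerind"))    # ['Zerind', 'Sibiu', 'Timisoara'] 75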
Selecting a state space
• The real world is very complex.
• The state space must be abstracted for problem solving.
– State = set of real states.
– Action = complex combination of real actions.
• e.g., Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc.
• The abstraction is valid if any abstract path between two states corresponds to a path in the real world.
– Solution = set of real paths that are solutions in the real world.
• Each abstract action should be "easier" than the original real-world problem.
Examples of Problems
• Toy problem – intended to illustrate or exercise various problem-solving methods; it can be given a concise and exact description.
• Examples: vacuum world, chess, sliding-block puzzles, etc.
Example 1: vacuum world
• States?
• Initial state?
• Actions?
• Goal test?
• Path cost?
Example: vacuum world
• States? Agent location and dirt in each square: total states = N × 2^N = 2 × 2² = 8, where N is the number of locations.
• Initial state? Any state can be the initial state.
• Actions? {Left, Right, Suck}
• Goal test? Check whether all squares are clean.
• Path cost? Number of actions to reach the goal.
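• As an illustrative sketch (names assumed, not from the slides), a vacuum-world state can be represented as (agent location, set of dirty squares), which makes the 8-state count and the transition model explicit.

# Enumerate the 8 vacuum-world states: 2 agent locations x 2^2 dirt combinations.
LOCATIONS = ["A", "B"]
STATES = [(loc, frozenset(dirt)) for loc in LOCATIONS
          for dirt in (set(), {"A"}, {"B"}, {"A", "B"})]
print(len(STATES))                    # 8 = N x 2^N with N = 2 locations

def result(state, action):
    loc, dirt = state
    if action == "Suck":
        return (loc, dirt - {loc})    # cleaning removes dirt from the current square
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    return state

def goal_test(state):
    return not state[1]               # goal: no dirty squares remain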
Example 2: 8-Queens Problem
Example 3: 8-puzzle
• States?
• Initial state?
• Actions?
• Goal test?
• Path cost?
Example: 8-puzzle
• States? Integer locations of each of the eight tiles and the blank.
• Initial state? Any state can be the initial state.
• Actions? {Left, Right, Up, Down} – movements of the blank square.
• Goal test? Check whether the goal configuration is reached.
• Path cost? Number of actions to reach the goal (each move costs 1).
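• An illustrative sketch (representation and names assumed): a state can be a 9-tuple with 0 marking the blank, and each action moves the blank.

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}   # index offsets on a 3x3 board

def actions(state):
    i = state.index(0)                     # position of the blank (0..8)
    legal = []
    if i % 3 > 0: legal.append("Left")
    if i % 3 < 2: legal.append("Right")
    if i // 3 > 0: legal.append("Up")
    if i // 3 < 2: legal.append("Down")
    return legal

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]                  # square the blank swaps with
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
def goal_test(state):
    return state == GOAL                   # goal test: goal configuration reached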
Steps performed by a problem-solving agent:
– Goal formulation: limiting the objectives, given the current state.
– Problem formulation: deciding which actions and states to consider.
– Search: looking for a sequence of actions; a search algorithm takes a problem as input and returns a solution.
– Solution: the action sequence returned by the search.
– Execution: carrying out the actions of the best solution.
Basic search algorithms
• How do we find the solutions to the previous problems?
– Search the state space (remember that the complexity of the space depends on the state representation).
– Here: search through explicit tree generation.
– A search tree is used to model the sequence of actions.
– It is constructed with the initial state as the root.
– The actions taken form the branches, and the nodes are the results of those actions (successor function).
– A node has a depth, a path cost, and an associated state in the state space.
continued
• Search involves moving nodes from the unexplored region to the explored region.
• A strategic ordering of these moves yields a better search.
• These moves are also known as node expansion.
State space vs. search tree
• A state is a (representation of a) physical configuration.
• A node is a data structure belonging to a search tree.
– A node has a parent, children, … and includes a path cost, a depth, …
– Here node = <state, parent-node, action, path-cost, depth>
– FRINGE = the set of generated nodes that have not yet been expanded (tested).
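• A minimal sketch of such a node as a Python dataclass; the field names follow the slide, while the class name and helper function are assumptions of this sketch.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                          # the state in the state space this node represents
    parent: Optional["Node"] = None     # the node that generated this one
    action: Any = None                  # the action applied to the parent
    path_cost: float = 0.0              # cost of the path from the root to this node
    depth: int = 0                      # number of steps from the root

def child_node(parent, action, state, step_cost):
    # Successor function: build the child reached from `parent` by `action`.
    return Node(state, parent, action, parent.path_cost + step_cost, parent.depth + 1)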
Search Strategies
• A strategy is defined by picking the order of node expansion
• Performance Measures:
– Completeness – does it always find a solution if one exists?
– Time complexity – number of nodes generated/expanded
– Space complexity – maximum number of nodes in memory
– Optimality – does it always find a least-cost solution
• Time and space complexity are measured in terms of
– b – maximum branching factor or maximum number of successors of any
node;
– d – the depth of the shallowest goal node (i.e., the number of steps along
the path from the root)
– m – the maximum length of any path in the state space (may be ∞)
…continued
• There are two kinds of search, based on whether or not they use information about the goal.
– Uninformed search
• does not use any domain knowledge (blind search)
– Informed search
• uses domain knowledge (heuristic search)
Uninformed Search Strategies
• Uninformed strategies use only the information
available in the problem definition
– Breadth-first search
– Depth-first search
– Iterative deepening search
1. Breadth-first search
• Expand the shallowest unexpanded node (expand the breadth of the tree from left to right).
• Implementation:
– The fringe is a FIFO queue: new successors are inserted at the end of the queue.
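• A hedged sketch of BFS (not from the slides): the graph is an adjacency dictionary and the fringe is a FIFO queue of paths; the function name and graph format are assumptions.

from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])                  # FIFO queue: new successors go at the end
    visited = {start}
    while frontier:
        path = frontier.popleft()                # expand the shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None                                  # no solution exists

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search(g, "A", "D"))         # ['A', 'B', 'D'] -- shallowest solution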
Breadth-first search: advantages and disadvantages
• Advantages of BFS
– It will find a solution if any solution exists.
– If there is more than one solution for a given problem, BFS finds the minimal solution, i.e., the one that requires the fewest steps.
• Disadvantages of BFS
– It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
– BFS requires a lot of time if the solution is far from the root node.
Evaluation of Breadth-First Search
• Completeness: Yes (if b is finite).
• Time complexity: O(b^d)
• Space complexity: O(b^d)
• Optimality: Yes (if all step costs are identical).
2. Depth first search
• Expand the deepest unexpanded node.
• It starts from the root node and follows each path to its greatest depth before backtracking and moving on to the next path.
• Implementation: the fringe is a LIFO stack, i.e., successors are put at the front.
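• A hedged sketch of DFS under the same assumptions as the BFS sketch above: the fringe is now a LIFO stack, so the deepest node is expanded first.

def depth_first_search(graph, start, goal):
    frontier = [[start]]                         # LIFO stack: successors are pushed on top
    while frontier:
        path = frontier.pop()                    # expand the deepest node first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in path:                  # avoid cycling within the current path
                frontier.append(path + [nbr])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(depth_first_search(g, "A", "D"))           # ['A', 'C', 'D'] -- not necessarily shallowest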
Cont..
• Advantages of DFS
– It requires much less memory, as it only needs to store the nodes on the path from the root to the current node.
– It may take less time than BFS to reach a goal node (if it happens to follow the right path).
• Disadvantages of DFS
– There is no guarantee of finding a solution, since many states may re-occur along a path.
– The algorithm may go deep down one branch and can get caught in an infinite loop.
Evaluation of Depth-First Search
• Completeness: Yes (if the search space is finite).
• Time complexity: O(b^m)
• Space complexity: O(bm)
• Optimality: No
3. Iterative Deepening Search
• Combines the advantages of breadth-first and depth-first search.
• The depth limit is incremented by one on each iteration until a solution is found.
• By using a depth-first approach on every iteration, iterative deepening search avoids the memory cost of breadth-first search.
• Nodes visited on each depth-limited pass of a sample tree:
– Iteration 1: A
– Iteration 2: A B C
– Iteration 3: A B D E C F G H
– Iteration 4: A B D I J E K L M C F N …
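• A hedged sketch of iterative deepening (function names assumed): a depth-limited DFS is run repeatedly with limits 0, 1, 2, … until a solution is found.

def depth_limited_search(graph, node, goal, limit, path=None):
    # Depth-first search that never expands below the given depth limit.
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for nbr in graph.get(node, []):
        found = depth_limited_search(graph, nbr, goal, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):           # increase the depth limit one step at a time
        result = depth_limited_search(graph, start, goal, limit)
        if result:
            return result
    return None

g = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iterative_deepening_search(g, "A", "G"))   # ['A', 'C', 'G']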
Comparison of Strategies
Informed search strategies
• Informed search strategies (Heuristic Search):
– Best-first search
• Greedy Best-first search
• A* search
Best First Search
• Best-first search is an instance of the general TREE-SEARCH algorithm.
• A node is selected for expansion based on an evaluation function, f(n).
– f(n) gives an estimate of the distance of a node n from a goal node.
– f(n) is constructed as a cost estimate, so the node with the lowest evaluation is expanded first.
• The choice of f determines the search strategy.
• Most best-first algorithms include as a component of f a heuristic function, denoted h(n):
– h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
• A heuristic is an operationally effective piece of information about how to direct search in a problem space.
• Heuristics are only approximately correct; their purpose is to minimize search on average.
• If n is a goal node, then h(n) = 0.
Greedy best-first search
• Expands the node that appears to be closest to the goal.
• Evaluates nodes by using just the heuristic function;
– that is, f(n) = h(n).
– For route finding in Romania, the STRAIGHT-LINE DISTANCE heuristic h_SLD (straight-line distance to Bucharest) can be used.
– "Greedy": at each step it tries to get as close to the goal as it can.
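• A hedged sketch of greedy best-first search on a fragment of the Romania map (not from the slides); the road links and straight-line distances to Bucharest follow the standard textbook figure, and the function name is an assumption.

import heapq

H_SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}
ROADS = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
         "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
         "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": []}

def greedy_best_first(start, goal):
    frontier = [(H_SLD[start], [start])]         # fringe ordered by f(n) = h(n) only
    explored = set()
    while frontier:
        _, path = heapq.heappop(frontier)        # expand the node that looks closest to the goal
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in ROADS[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (H_SLD[nbr], path + [nbr]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- the 450 km route, not the optimal one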
Example: route-finding problems in Romania
[Figure: the Romania road map with road distances and straight-line distances to Bucharest.]
Evaluation of greedy best-first search
• Optimal?
– No!
– Found: Arad → Sibiu → Fagaras → Bucharest (450 km), while the optimal route Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is 418 km.
Evaluation of greedy best-first search
• Complete?
– No – it can get stuck in loops,
– e.g., going from Iasi (initial) to Fagaras (goal): Iasi → Neamt → Iasi → Neamt → …
Evaluation of greedy best-first search
• Time?
– O(b^m) – worst case (like depth-first search).
– But a good heuristic can give dramatic improvement.
• Space?
– O(b^m) – keeps all nodes in memory.
A* search
• The best-known form of best-first search.
• Pronounced "A-star search".
• Idea: avoid expanding paths that are already expensive.
– Evaluation function: f(n) = g(n) + h(n)
• g(n) = path cost from the start node to node n.
• h(n) = estimated cost of the cheapest path from n to the goal.
• f(n) = estimated cost of the cheapest solution through n; A* expands the node with the lowest value of g(n) + h(n).
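• A hedged sketch of A* on the same Romania fragment (road lengths and straight-line distances follow the standard textbook map; the function names are assumptions); unlike the greedy sketch, nodes are ordered by f(n) = g(n) + h(n).

import heapq

ROADS = {("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118, ("Arad", "Zerind"): 75,
         ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
         ("Fagaras", "Bucharest"): 211, ("Rimnicu Vilcea", "Pitesti"): 97,
         ("Pitesti", "Bucharest"): 101}
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def neighbors(city):
    for (a, b), d in ROADS.items():
        if a == city: yield b, d
        if b == city: yield a, d

def a_star(start, goal):
    frontier = [(H[start], 0, [start])]          # entries are (f, g, path); g = path cost so far
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)     # expand the node with the lowest g(n) + h(n)
        node = path[-1]
        if node == goal:
            return path, g
        for nbr, d in neighbors(node):
            new_g = g + d
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + H[nbr], new_g, path + [nbr]))
    return None, None

print(a_star("Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418) -- the optimal route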
A* search, evaluation
• Completeness: Yes
• Time complexity: exponential in the length of the solution (worst case).
• Space complexity: all generated nodes are kept in memory.
• Optimality: Yes (provided h(n) is admissible, i.e., never overestimates the true cost).
Avoiding Repeated States
• A path from one state to another state and back to the first state again produces a repeated state in the search tree; such a path is called a loopy path.
• Loopy paths make the search tree infinite, because there is no limit to how often one can traverse a loop.
• Loops can cause certain algorithms to fail, making otherwise solvable problems unsolvable.
• Fortunately, there is no need to consider loopy paths.
• We can rely on more than intuition for this: because path costs are additive and step costs are nonnegative, a loopy path to any given state is never better than the same path with the loop removed.
• The way to avoid exploring redundant paths is to remember where one has been.
• To do this, we augment the TREE-SEARCH algorithm with a data structure called the explored set (also known as the closed list), which remembers every expanded node.
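• A minimal sketch of this idea (graph format and names are assumptions): the breadth-first loop below keeps an explored set so that an already-expanded state is never expanded again, which is what lets it terminate on graphs with loops.

from collections import deque

def graph_search(graph, start, goal):
    frontier = deque([[start]])
    explored = set()                       # the explored set (closed list)
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue                       # this state was already expanded: skip it
        explored.add(node)
        for nbr in graph.get(node, []):
            if nbr not in explored:
                frontier.append(path + [nbr])
    return None

g = {"A": ["B"], "B": ["A", "C"], "C": []}   # A <-> B is a loop; tree search would cycle forever
print(graph_search(g, "A", "C"))             # ['A', 'B', 'C']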
Constraint Satisfaction Search
• Three ways to represent states and the transitions between them.
– Atomic representation: a state is a black box with no internal structure;
– Factored representation: a state consists of a vector of attribute values; values
can be Boolean, real-valued, or one of a fixed set of symbols.
– Structured representation: a state includes objects, each of which may have
attributes of its own as well as relationships to other objects.
…cont’d
• Many important areas of AI are based on factored representations, including constraint satisfaction algorithms, propositional logic, planning, Bayesian networks, and many machine learning algorithms.
• A constraint satisfaction problem consists of a set of variables, a domain of possible values for each variable, and a set of constraints over the variables.
• The problem is solved when each variable has a value that satisfies all the constraints on that variable.
• A problem described this way is called a constraint satisfaction problem, or CSP.
Example problem: Map coloring
• Task: coloring each region either red, green, or blue in such a way that no neighboring regions have the same color.
• To formulate this as a CSP:
– The variables are the regions: X = {WA, NT, Q, NSW, V, SA, T}.
– The domain of each variable is the set Di = {red, green, blue}.
– The constraints require neighboring regions to have distinct colors. Since there are nine places where regions border, there are nine constraints:
C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V}.
• There are many possible solutions to this problem, such as
{WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.
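• A hedged sketch of solving this CSP by simple backtracking search (the function names are assumptions; real CSP solvers add variable-ordering heuristics and inference such as forward checking).

VARIABLES = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
DOMAIN = ["red", "green", "blue"]
NEIGHBORS = {("SA", "WA"), ("SA", "NT"), ("SA", "Q"), ("SA", "NSW"), ("SA", "V"),
             ("WA", "NT"), ("NT", "Q"), ("Q", "NSW"), ("NSW", "V")}

def consistent(var, value, assignment):
    # A value is consistent if no already-assigned neighboring region has the same color.
    for a, b in NEIGHBORS:
        if a == var and assignment.get(b) == value: return False
        if b == var and assignment.get(a) == value: return False
    return True

def backtrack(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(VARIABLES):
        return assignment                  # every variable satisfies all of its constraints
    var = next(v for v in VARIABLES if v not in assignment)
    for value in DOMAIN:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value})
            if result:
                return result
    return None                            # no value works for this variable: backtrack

print(backtrack())
# {'WA': 'red', 'NT': 'green', 'Q': 'red', 'NSW': 'green', 'V': 'red', 'SA': 'blue', 'T': 'red'}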
END