Lec 06 - Search


Solving Problems

Syed Ali Raza


Lecturer
Department of Computer Science
GC University Lahore
• Intelligent agents are supposed to maximize their performance
measure.
• Intelligence in an agent can be achieved if the agent adopts a goal
and aims to satisfy it, thereby maximizing its expected utility.

Search problem components


• Suppose an agent is on holiday in Romania. The following could be
its performance measure:
• Health care during the stay
• Maximum sightseeing
• Avoiding unnecessary feuds by abiding by the law
• This is a complex problem.
• Now, suppose the agent has a non-refundable ticket to Bucharest,
where it must arrive the next day without fail.

• The problem is simplified: the agent has only to adopt the GOAL of
reaching Bucharest.
• Actions that do not lead toward the goal are rejected.
• Goal Formulation, based on the agent's current situation and
performance measure, is the first step in problem solving.

• A goal is a set of world states: exactly those states in which the
goal is satisfied.
• The agent's task is to decide how to act, NOW and in the FUTURE, so that it
reaches a goal state.
• Next, the agent has to decide (or the designer of the agent has to decide)
what kinds of states and actions the agent should consider.
• If it were to consider actions at the level of "turn left", "turn right", or
"turn the steering wheel so many degrees", it would never get out of the
parking lot, let alone reach its destination.
• The next task is:
• PROBLEM FORMULATION: the process of deciding what actions
and states to consider, given a GOAL.
• Consider that the agent's actions are driving from one major town to
another.
• Each state is then being in a particular town.

• Now our agent is in Arad (A), and three roads lead out of Arad, toward S, T
and Z. None of them is the goal.
• If the environment is UNKNOWN, the agent cannot decide which
action is best; it has to choose randomly,
• OR it needs the geography of Romania to reach its goal.
• Now, suppose the agent has a map of Romania.
• The map tells the agent what state it will get into after taking a
certain action.
• The agent can use this information to work out the subsequent states in the
journey and get to B.

• In general, an agent with several immediate options of unknown value
can decide what to do by first examining future actions that eventually
lead to states of known value.
• "Examining future actions" requires us to be specific about the
properties of the environment:
• Observable: the agent always knows the current state.
• Discrete: at any given state, there are only finitely many actions to choose from.
• Known: the agent knows which state is reached by which action.
• Deterministic: each action has exactly one outcome.

• Under these assumptions, the solution to any problem is a fixed
sequence of actions.
• The process of looking for a sequence of actions that reaches the goal is
called search.

• A search algorithm takes a problem as input and returns the solution in
the form of an ACTION SEQUENCE.
• Once a solution is found, the actions it recommends are carried out; this
is the EXECUTION phase.

• We thus have a simple "formulate, search, execute" design for the
agent.
• The agent ignores its percepts during the execution phase; this is called an
open-loop system, because ignoring percepts breaks the loop between agent
and environment.
• A problem can be defined by five components (sketched in code below):

• Initial State
• Actions
• Transition Model
• Goal Test
• Path Cost
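
To make the five components concrete, here is a minimal Python sketch of the Romania problem. The class and method names, the string encoding of actions, and the small road fragment (distances taken from the textbook map) are illustrative assumptions rather than a prescribed interface:

```python
# A minimal sketch of the five problem components for the Romania
# example, using a small fragment of the road map.
ROADS = {
    ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Arad", "Zerind"): 75, ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}
GRAPH = {}
for (a, b), cost in ROADS.items():      # roads run both ways
    GRAPH.setdefault(a, {})[b] = cost
    GRAPH.setdefault(b, {})[a] = cost

class RomaniaProblem:
    initial_state = "Arad"                           # 1. initial state

    def actions(self, s):                            # 2. actions applicable in s
        return [f"Go({city})" for city in GRAPH[s]]

    def result(self, s, a):                          # 3. transition model
        return a[3:-1]                               # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, s):                          # 4. goal test
        return s == "Bucharest"

    def step_cost(self, s, a, s2):                   # 5. path cost, one step at a time
        return GRAPH[s][s2]

p = RomaniaProblem()
print(p.actions("Arad"))              # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(p.result("Arad", "Go(Sibiu)"))  # Sibiu
```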

Search problem components


• Initial state
• In(Arad)

• Actions
• A description of the possible actions available to the agent. Given a
particular state s, ACTIONS(s) returns the set of actions that can be executed
in s. We say that each of these actions is applicable in s. For example, from the
state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara),
Go(Zerind)}.

Search problem components


• Transition model
• A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from
doing action a in state s. We also use the term successor to refer to any state
reachable from a given state by a single action.
• For example, we have RESULT(In(Arad), Go(Zerind)) = In(Zerind) .

• Initial State, Actions and Transition Model defines the state space of the problem.

• State Space: The set of states reachable from initial state by any sequence of actions.

• The state space forms a directed graph in which the nodes are states and
the links between nodes are actions.

• A path in the state space is a sequence of states connected by a sequence of actions.

Search problem components


• Goal test: determines whether a given state is a goal state. Sometimes
there is an explicit set of goal states, and the test simply checks whether
the given state is one of them, such as {In(Bucharest)}; otherwise the goal
may be given by an abstract property, such as checkmate in chess.

• Path cost: a function that assigns a numeric cost to each path. The agent
chooses a cost function that reflects its own performance measure. For the
agent trying to get to B, time is the performance measure, so the cost of a
path might be its length in km. For now, we define the cost of a path as the
sum of the costs of the individual actions along the path.
• The step cost of taking action a in state s to reach state s′ is written
c(s, a, s′); in the Romania problem, for example, c(In(Arad), Go(Sibiu), In(Sibiu))
is the Arad to Sibiu road distance, 140 km.

Search problem components


• Optimal solution
• A solution to a problem is an action sequence that leads from the initial
state to a goal state.
• Solution quality is measured by the path cost function, and an optimal
solution has the lowest path cost among all solutions.
• So far we have built an abstract model of the problem, leaving out many
details and focusing only on the relevant ones.

Search problem components


• On vacation in Romania; currently in Arad

Example: Romania

[Map of Romania with the start state (Arad) and the goal state (Bucharest) marked.]

Example: Romania
• On vacation in Romania; currently in A
• Initial state
• A
• Actions
• Go from one city to another
• Transition model
• If you go from city A to
city B, you end up in city B
• Goal state
• B
• Path cost
• Sum of edge costs

Example: Romania
Example: 8-Queens Problem
• The 8-queens problem can be formulated with an incremental approach or a
complete-state approach; here we use the incremental formulation (sketched
in code after this list).
• Initial state
• No queens on the board
• Actions
• Add a queen to any square in the leftmost empty column such that it is not
attacked by any other queen.
• Transition model
• Returns the board with the queen added; the reachable states are thus
arrangements of n ≤ 8 queens in the leftmost n columns, one per column, such
that no queen attacks any other.
• Goal state
• 8 queens on the board, none attacked
• Path cost
• 1 per move
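
A minimal sketch of this incremental formulation. Representing a state as a tuple of row indices, one per filled column, is an assumption made for compactness; the function names are illustrative:

```python
def attacks(r1, c1, r2, c2):
    """True if queens at (r1, c1) and (r2, c2) attack each other."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Safe rows for a new queen in the leftmost empty column."""
    col = len(state)
    return [row for row in range(8)
            if not any(attacks(row, col, r, c) for c, r in enumerate(state))]

def result(state, row):
    return state + (row,)          # place the queen in the next column

def goal_test(state):
    return len(state) == 8         # 8 queens, non-attacking by construction

state = result((), 0)              # queen in column 0, row 0
print(actions(state))              # safe rows for column 1: [2, 3, 4, 5, 6, 7]
```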

Example: 8 Queen Problem


Example: Vacuum Cleaner
• States
• The state is determined by both the agent's location and the dirt
locations. The agent is in one of two locations, each of which might
or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world
states. A larger environment with n locations has n · 2^n states.

• Initial State
• Any State
• Actions
• Left, right, suck
• Transition model
• The actions have their expected effects, except that moving Left in
the leftmost square, moving Right in the rightmost square, and
Sucking in a clean square have no effect.
• Goal Test
• All squares clean
• Path Cost
• Each step costs 1, so the path cost is the number of steps in the path
(see the sketch below).
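
A small sketch of this transition model, assuming a state is encoded as (agent_location, dirty_squares) with locations 0 (left) and 1 (right):

```python
# Two-square vacuum world: Left/Right at the edges and Suck on a clean
# square leave the state unchanged, as described above.
def result(state, action):
    loc, dirty = state
    if action == "Left":
        return (max(loc - 1, 0), dirty)    # no effect in the leftmost square
    if action == "Right":
        return (min(loc + 1, 1), dirty)    # no effect in the rightmost square
    if action == "Suck":
        return (loc, dirty - {loc})        # no effect if already clean
    raise ValueError(action)

def goal_test(state):
    return not state[1]                    # goal: no dirty squares left

print(result((0, frozenset({0, 1})), "Suck"))   # (0, frozenset({1}))
```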
Example: 8 Puzzle
• States
• A state description specifies the location of each of the eight tiles
and the blank in one of the nine squares.
• Initial State
• Any state can be designated as the initial state.
• Actions
• The simplest formulation defines the actions as movements of the
blank space Left, Right, Up, or Down. Different subsets of these
are possible depending on where the blank is
• Transition model
• Given a state and action, this returns the resulting state;
• Goal Test
• This checks whether the state matches the goal configuration
• Path Cost
• Each step costs 1, so the path cost is the number of steps in the path
(see the sketch below).
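
A sketch of these actions and the transition model, assuming a state is a tuple of nine entries in row-major order with 0 marking the blank:

```python
def actions(state):
    """Legal blank moves, which depend on where the blank is."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    """Swap the blank with the tile in the square it moves into."""
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)    # blank in the centre square
print(actions(start))                  # ['Left', 'Right', 'Up', 'Down']
```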
• SEARCHING FOR SOLUTIONS
• A solution is a sequence of actions.
• The possible action sequences starting at the initial state form a search tree,
with the initial state at the root; the branches are actions and the nodes
correspond to states in the state space of the problem.
• The first step is to check whether the root node is a goal state.
• If not, we consider taking various actions: we EXPAND the current node,
GENERATING a new set of successor nodes, and repeat until a goal state is
reached or only LEAF nodes remain.
• The FRONTIER is the set of nodes available to be expanded at a given point.
• All search algorithms share this basic structure; they differ only in how they
choose which state to expand next, and this choice is called the Search Strategy.
• How do we handle loopy / redundant paths?

Searching Techniques
• SEARCHING FOR SOLUTIONS
• The way to avoid redundant paths is to remember where one has been.
• We add a data structure to the search algorithm called the EXPLORED SET,
which remembers every expanded node.
• Next we need a node data structure (see the sketch below).
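
Here is one common encoding of that node structure, following the four fields used in the AIMA text (state, parent, action, path cost); the dataclass form and helper name are illustrative:

```python
# A search node records a state, how it was reached, and the cost so far.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the state this node stands for
    parent: Optional["Node"] = None   # the node that generated this one
    action: Any = None                # the action applied to the parent
    path_cost: float = 0.0            # g(n): total cost from the root

def solution(node):
    """Walk the parent links back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```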

Searching Techniques
• SEARCHING FOR SOLUTIONS
• Having defined nodes, we need somewhere to put them.
• The frontier needs to be stored so that the search algorithm can easily
retrieve its next node according to the search strategy.
• The appropriate data structure for the frontier is a QUEUE.

• Three types of queues (illustrated in the snippet after this list):


• FIFO
• LIFO
• PRIORITY QUEUE
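
All three frontier disciplines map directly onto standard-library structures; this snippet is a minimal illustration:

```python
from collections import deque
import heapq

fifo = deque()                     # FIFO: breadth-first search
fifo.append("n1"); fifo.append("n2")
assert fifo.popleft() == "n1"      # oldest node comes out first

lifo = []                          # LIFO: depth-first search (a stack)
lifo.append("n1"); lifo.append("n2")
assert lifo.pop() == "n2"          # newest node comes out first

pq = []                            # priority queue: uniform-cost search
heapq.heappush(pq, (8, "C"))
heapq.heappush(pq, (1, "B"))
heapq.heappush(pq, (3, "A"))
assert heapq.heappop(pq) == (1, "B")   # cheapest node comes out first
```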

Searching Techniques
• MEASURING PROBLEM-SOLVING PERFORMANCE
• An algorithm's performance can be evaluated in the following ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is
one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?

Searching Techniques
• MEASURING PROBLEM-SOLVING PERFORMANCE
• Time complexity: in theoretical computer science, time and space complexity
are measured in terms of the size of the state-space graph, i.e. its numbers of
vertices and edges. That is appropriate when the graph is given explicitly, but
in AI the states, actions, and transition model are usually defined implicitly.
• For these reasons,
• complexity is expressed in terms of three quantities:
• b, the branching factor, i.e. the maximum number of successors of any node;
• d, the depth of the shallowest goal node (i.e., the number of steps along the
path from the root);
• and m, the maximum length of any path in the state space.
• Search cost: typically the time complexity of the search (it can also include a
term for memory usage).
• Total cost: the search cost plus the path cost of the solution found.

Searching Techniques
• UNINFORMED SEARCH STRATEGIES
• Also known as “blind search,” uninformed search strategies use no
information about the likely “direction” of the goal node(s)
• All they can do is generate successors and distinguish non-goal and
goal states
• The term means that the strategies have no additional information about
states beyond that provided in the problem definition.
• The individual strategies are distinguished only by the ORDER in which
nodes are expanded.

• INFORMED SEARCH STRATEGIES
• Also known as "heuristic search", informed search strategies use additional
information to judge which non-goal state is "more promising" than other
non-goal states.

Searching Techniques
• UNINFORMED SEARCH
• Breadth First Search
• Depth First Search
• Bidirectional Search
• Uniform Cost Search
• INFORMED SEARCH
• Best First Search
• Greedy Best First Search
• A* Search

Searching Techniques

Example

[Graph: start node S, goal node G; edge costs S–A = 3, S–B = 1, S–C = 8,
A–D = 3, A–E = 7, A–G = 15, B–G = 20, C–G = 5. The corresponding search
tree is drawn alongside.]

The depth of a node is the number of edges on the path from the root node of a tree to that node.
• BREADTH FIRST SEARCH

• A simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and
so on.
• This is achieved very simply by using a FIFO queue for the frontier
(see the sketch below).
• Thus, new nodes (which are always deeper than their parents) go to the
back of the queue, and old nodes, which are shallower than the new nodes,
get expanded first.
• The goal test is applied to each node when it is generated.
• The memory requirements are a bigger problem for breadth-first search
than the execution time: one might wait 13 days for the solution to an
important problem with search depth 12 (about 10^12 nodes),
• and 350 years for depth 16.
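
Below is a minimal breadth-first sketch, assuming the state space is given as a dict of {state: {successor: step_cost}}; the graph is the running example from these slides. As stated above, the goal test is applied when a node is generated:

```python
from collections import deque

def bfs(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])            # FIFO queue of paths
    explored = {start}                     # avoid redundant paths
    while frontier:
        path = frontier.popleft()
        for succ in graph[path[-1]]:
            if succ in explored:
                continue
            if succ == goal:
                return path + [succ]       # goal test at generation
            explored.add(succ)
            frontier.append(path + [succ])
    return None                            # no solution exists

# The running example graph: BFS finds S -> A -> G.
graph = {"S": {"A": 3, "B": 1, "C": 8}, "A": {"D": 3, "E": 7, "G": 15},
         "B": {"G": 20}, "C": {"G": 5}, "D": {}, "E": {}, "G": {}}
print(bfs(graph, "S", "G"))                # ['S', 'A', 'G']
```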

Searching Techniques
• Goal test is applied when a node is generated.

Expanded node    Nodes list
                 { S0 }
S0               { A3 B1 C8 }
A3               { B1 C8 D6 E10 G18 }
B1               { C8 D6 E10 G18 G21 }
C8               { D6 E10 G18 G21 G13 }
D6               { E10 G18 G21 G13 }
E10              { G18 G21 G13 }
G18              { G21 G13 }

Solution path found is S A G, cost 18.
Number of nodes expanded (including goal node) = 7

Breadth First Search


• BREADTH FIRST SEARCH
• Complete: we can easily see that it is complete. If the shallowest goal
node is at some finite depth d, breadth-first search will eventually find it
after generating all shallower nodes (provided the branching factor b is
finite).
• Optimal: as soon as a goal node is generated, we know it is the shallowest
goal node, because all shallower nodes must have been generated already and
failed the goal test. Now, the shallowest goal node is not necessarily the
optimal one; technically, breadth-first search is optimal if the path cost is
a nondecreasing function of the depth of the node. The most common such
scenario is when all actions have the same cost.

Searching Techniques
• BREADTH FIRST SEARCH
• Time complexity: so far, the news about breadth-first search has been
good. The news about time and space is not so good. Imagine searching a
uniform tree where every state has b successors.
• The root of the search tree generates b nodes at the first level, each of
which generates b more nodes, for a total of b^2 at the second level. Each of
these generates b more nodes, yielding b^3 nodes at the third level, and so
on.
• Now suppose that the solution is at depth d. In the worst case, it is the last
node generated at that level. Then the total number of nodes generated is
• b + b^2 + b^3 + · · · + b^d = O(b^d)
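
As a quick sanity check of those figures, under the textbook's assumption of roughly one million nodes generated per second:

```python
b, d = 10, 12
nodes = sum(b**k for k in range(1, d + 1))   # b + b^2 + ... + b^d
days = nodes / 1e6 / 86400                   # at ~10^6 nodes/second
print(f"{nodes:.2e} nodes, about {days:.0f} days")   # 1.11e+12 nodes, about 13 days
```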

Searching Techniques
• BREADTH FIRST SEARCH
• Time complexity: (if the algorithm were to apply the goal test to nodes when
selected for expansion, rather than when generated, the whole layer of nodes at
depth d would be expanded before the goal was detected, and the time
complexity would be O(b^(d+1)).)

• Two observations:
• 1st: The memory requirements are a bigger problem for breadth-first search than
the execution time. One might wait 13 days for the solution to an important
problem with search depth 12, but no personal computer has the petabyte of
memory it would take.
• 2nd: The second lesson is that time is still a major factor. If your problem has a
solution at depth 16, then (given our assumptions) it will take about 350 years for
breadth-first search (or indeed any uninformed search) to find it.

Searching Techniques
• UNIFORM COST SEARCH
• When all step costs are equal, breadth-first search is optimal because it always
expands the shallowest unexpanded node.
• By a simple extension of BFS, we can obtain an algorithm that is optimal under
any step-cost function.
• Instead of expanding the shallowest node, uniform-cost search expands the node
n with the lowest path cost g(n). This is done by storing the frontier as a priority
queue ordered by g.

Searching Techniques
• UNIFORM COST SEARCH
• Besides using a priority queue, there are two other differences from BFS
(see the sketch below):
• The first is that the goal test is applied to a node when it is selected for
expansion rather than when it is first generated.
• The reason is that the first goal node that is generated may lie on a
suboptimal path.
• The second difference is that a test is added in case a better path is found to a
node currently on the frontier.
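
A minimal uniform-cost sketch over the same {state: {successor: step_cost}} encoding. Both differences are visible in the code: the goal test runs when a node is selected for expansion, and a cheaper path to an already-seen state replaces the older, costlier one:

```python
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]           # priority queue ordered by g
    best_g = {start: 0}                        # cheapest known cost per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                           # stale entry: a cheaper path won
        if state == goal:
            return path, g                     # goal test at expansion
        for succ, step in graph[state].items():
            g2 = g + step
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2              # better path found: keep it
                heapq.heappush(frontier, (g2, succ, path + [succ]))
    return None

graph = {"S": {"A": 3, "B": 1, "C": 8}, "A": {"D": 3, "E": 7, "G": 15},
         "B": {"G": 20}, "C": {"G": 5}, "D": {}, "E": {}, "G": {}}
print(ucs(graph, "S", "G"))                    # (['S', 'C', 'G'], 13)
```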

Searching Techniques
• Uniform Cost Search

• The problem is to go from S (Sibiu) to B (Bucharest).

• The successors of S are R (Rimnicu Vilcea) and F (Fagaras), with costs 80 and 99
respectively. The least-cost node, R, is expanded first,
• adding P (Pitesti) with cost 80 + 97 = 177.
• The least-cost node is now F, so it is expanded, adding B with cost 99 + 211 = 310.
• A goal node has now been generated, but UCS keeps on searching:
• P is expanded next, adding a second path to B with cost 80 + 97 + 101 = 278.
• Now the algorithm checks whether this new path is better than the old one; it is, so
the old one is discarded. B, now with g-cost 278, is selected for expansion,
and the solution is returned.

Searching Techniques

Uniform Cost Search


• Uniform Cost Search

• Uniform-cost search expands nodes in order of their optimal path cost. Hence, the first
goal node selected for expansion must be the optimal solution.

• Uniform-cost search does not care about the number of steps a path has, but only
about its total cost. Therefore, it will get stuck in an infinite loop if there is a path with
an infinite sequence of zero-cost actions, for example a sequence of NoOp actions.

• Completeness is guaranteed provided the cost of every step exceeds some small
positive constant ε.
• Uniform-cost search is guided by path costs rather than depths, so its complexity is not
easily characterized in terms of b and d. Instead, let C* be the cost of the optimal
solution; since the search may examine all paths of cost at most C*, the worst-case
time and space complexity is O(b^(1 + floor(C*/ε))).

Searching Techniques
Expanded node    Nodes list
                 { S0 }
S0               { B1 A3 C8 }
B1               { A3 C8 G21 }
A3               { D6 C8 E10 G18 G21 }
D6               { C8 E10 G18 G21 }
C8               { E10 G13 G18 G21 }
E10              { G13 G18 G21 }
G13              { G18 G21 }

Solution path found is S C G, cost 13.
Number of nodes expanded (including goal node) = 7

Uniform Cost Search


• DEPTH FIRST SEARCH (DFS)

• Depth-first search always expands the deepest node in the current frontier of the search tree.
• DFS uses a LIFO queue (a stack) for the frontier.

• The properties of depth-first search depend strongly on whether the graph-search or tree-search
version is used.

• However, it is usually implemented not as a graph search but as a tree-like search that does not
keep a table of reached states.

• Depth-first search is not cost-optimal; it returns the first solution it finds, even if it is not the cheapest.

• For finite state spaces that are trees it is efficient and complete.

DFS
• DEPTH FIRST SEARCH (DFS)

• For acyclic state spaces it may end up expanding the same state many times via
different paths, but will (eventually) systematically explore the entire space

• In cyclic state spaces it can get stuck in an infinite loop; therefore some
implementations of depth-first search check each new node for cycles.

• Finally, in infinite state spaces, depth-first search is not systematic: it can get stuck
going down an infinite path, even if there are no cycles. Thus, depth-first search is
incomplete

DFS
• DEPTH FIRST SEARCH (DFS)

• With all this bad news, why would anyone consider using depth-first search rather
than breadth-first or best-first?
• The answer is that for problems where a tree-like search is feasible, depth-first
search has much smaller memory needs.

DFS
• DEPTH FIRST SEARCH (DFS)

• For a finite tree-shaped state space, a depth-first tree-like search takes time proportional to the number
of states, and has memory complexity of only O(bm), where b is the branching factor and m is the
maximum depth of the tree.
• Some problems that would require exabytes of memory with breadth-first search can be handled with
only kilobytes using depth-first search.
• Because of its parsimonious use of memory, depth-first tree-like search has been adopted as the basic
workhorse of many areas of AI, including constraint satisfaction, propositional satisfiability, and logic
programming.
• A variant of depth-first search called backtracking search uses even less memory.
• In backtracking, only one successor is generated at a time rather than all successors; each partially
expanded node remembers which successor to generate next.
• A minimal sketch of depth-first tree-like search follows.
DFS
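
Here is the sketch, over the same graph encoding as before; it keeps no table of reached states, so memory stays at O(bm), but it is only safe on finite acyclic spaces such as this example:

```python
def dfs(graph, start, goal):
    frontier = [[start]]                     # LIFO: a list used as a stack
    while frontier:
        path = frontier.pop()                # deepest node comes out first
        if path[-1] == goal:
            return path
        for succ in reversed(list(graph[path[-1]])):
            frontier.append(path + [succ])   # children pushed; leftmost expanded next
    return None

graph = {"S": {"A": 3, "B": 1, "C": 8}, "A": {"D": 3, "E": 7, "G": 15},
         "B": {"G": 20}, "C": {"G": 5}, "D": {}, "E": {}, "G": {}}
print(dfs(graph, "S", "G"))                  # ['S', 'A', 'G'], as in the trace that follows
```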
Expanded node    Nodes list
                 { S0 }
S0               { A3 B1 C8 }
A3               { D6 E10 G18 B1 C8 }
D6               { E10 G18 B1 C8 }
E10              { G18 B1 C8 }
G18              { B1 C8 }

Solution path found is S A G, cost 18.
Number of nodes expanded (including goal node) = 5

Depth First Search


a. complete if b is finite;
b. complete if step costs ≥ ε for some positive ε;
c. optimal if step costs are all identical.

COMPARISON
[Figure: fragment of the Romania road map with cities A, T, Z, S, O, L, R, F, M, D, C, P, B and edge costs.]
