FAI Module - 2

Module 2 covers search algorithms, detailing the basic search algorithm, its variations, and key issues such as completeness and optimality. It discusses specific search strategies like Depth First Search (DFS) and Breadth First Search (BFS), highlighting their properties, advantages, and disadvantages. The module also introduces random search and the importance of evaluating search strategies based on time and space complexity.


Module - 2
Search Algorithms

Search

Searching through a state space involves the following:


• A set of states
• Operators and their costs
• Start state
• A test to check for goal state

We will now outline the basic search algorithm, and then consider several
variations of it.

The basic search algorithm

Let L be a list containing the initial state (L = the fringe)
Loop
    if L is empty return failure
    Node ← select(L)
    if Node is a goal
        then return Node (the path from initial state to Node)
    else generate all successors of Node, and
        merge the newly generated states into L
End Loop

We need to denote the states that have been generated. We will call these
nodes. The data structure for a node will keep track of not only the state, but
also the parent state or the operator that was applied to get this state. In
addition, the search algorithm maintains a list of nodes called the fringe. The
fringe keeps track of the nodes that have been generated but are yet to be
explored. The fringe represents the frontier of the search tree generated. The
basic search algorithm has been described above.


Initially, the fringe contains a single node corresponding to the start state. In
this version we use only the OPEN list or fringe. The algorithm always picks
the first node from fringe for expansion. If the node contains a goal state, the
path to the goal is returned. The path corresponding to a goal node can be
found by following the parent pointers. Otherwise all the successor nodes
are generated and they are added to the fringe.

The successors of the current expanded node are put in fringe. We will soon
see that the order in which the successors are put in fringe will determine the
property of the search algorithm.
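The point above can be made concrete in a short sketch. Everything below is illustrative (the function names are ours, not the module's): the same loop becomes DFS or BFS depending only on how newly generated nodes are merged into the fringe.

```python
# Basic search skeleton: the merge policy alone decides the strategy.
# All names here are illustrative, not from the module text.

def basic_search(successors, is_goal, start, merge):
    """merge(fringe, new_paths) -> fringe; front-merge gives DFS, back-merge BFS."""
    fringe = [[start]]                      # fringe holds paths, newest state last
    while fringe:
        path = fringe.pop(0)                # always pick the first node from fringe
        if is_goal(path[-1]):
            return path                     # path from initial state to goal
        new = [path + [s] for s in successors(path[-1]) if s not in path]
        fringe = merge(fringe, new)
    return None                             # fringe empty: failure

depth_first = lambda fringe, new: new + fringe      # successors in front
breadth_first = lambda fringe, new: fringe + new    # successors at the back
```

With a small assumed graph, passing `depth_first` dives along one branch while `breadth_first` expands level by level, even though the loop itself is unchanged.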

Search algorithm: Key issues

Corresponding to a search algorithm, we get a search tree which contains the


generated and the explored nodes. The search tree may be unbounded. This
may happen if the state space is infinite. This can also happen if there are
loops in the search space. How can we handle loops?

Corresponding to a search algorithm, should we return a path or a node? The


answer to this depends on the problem. For problems like N-queens we are
only interested in the goal state. For problems like 15-puzzle, we are
interested in the solution path.

We see that in the basic search algorithm, we have to select a node for
expansion. Which node should we select? Alternatively, how would we
place the newly generated nodes in the fringe? We will subsequently explore
various search strategies and discuss their properties.

Depending on the search problem, we will have different cases. The search
graph may be weighted or unweighted. In some cases we may have some
knowledge about the quality of intermediate states and this can perhaps be
exploited by the search algorithm. Also depending on the problem, our aim
may be to find a minimal cost path, or to find any path as soon as possible.

Which path to find?


The objective of a search problem is to find a path from the initial state to a
goal state. If there are several paths which path should be chosen? Our
objective could be to find any path, or we may need to find the shortest path
or least cost path.


Evaluating Search strategies

We will look at various search strategies and evaluate their problem solving
performance. What are the characteristics of the different search algorithms,
and what is their efficiency? We will look at the following three factors to
measure this.

1. Completeness: Is the strategy guaranteed to find a solution if one
   exists?

2. Optimality: Does the solution have low cost or the minimal cost?

3. Search cost: What are the time and memory required to find a solution?
   a. Time complexity: time taken (number of nodes expanded)
   b. Space complexity: space used by the algorithm, measured in
      terms of the maximum size of the fringe

Random Search
Random search is a technique where random combinations of the hyperparameters are used to
find the best solution for the built model. It is similar to grid search, yet it has proven to
yield better results comparatively. The drawback of random search is that it yields high
variance during computing: since the selection of parameters is completely random, and
no intelligence is used to sample these combinations, luck plays its part.

Below is a visual depiction of the search pattern of random search:


Because random values are selected at each instance, it is highly likely that the whole action
space will be reached, whereas covering every combination with grid search takes a huge
amount of time. Random search works best under the assumption that not all hyperparameters
are equally important. In this search pattern, random combinations of parameters are
considered in every iteration. The chance of finding the optimal parameters is comparatively
higher in random search, because the model might end up being trained on the optimised
parameters without any aliasing.
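The idea can be sketched in a few lines. Everything below is an illustrative assumption, not part of the module text: the parameter ranges are made up, and the `score` function is a stand-in for a model's validation accuracy.

```python
# A minimal sketch of random search over a hyperparameter space.
# Parameter ranges and the score function are illustrative assumptions.
import random

random.seed(0)

def score(params):
    """Stand-in for model validation performance (higher is better)."""
    lr, depth = params["lr"], params["depth"]
    return -(lr - 0.1) ** 2 - 0.01 * (depth - 5) ** 2

space = {"lr": (0.001, 1.0), "depth": (1, 10)}

best, best_score = None, float("-inf")
for _ in range(100):                      # 100 random trials
    params = {"lr": random.uniform(*space["lr"]),
              "depth": random.randint(*space["depth"])}
    s = score(params)
    if s > best_score:
        best, best_score = params, s

print(best)
```

Each trial samples every hyperparameter independently at random, which is exactly why unimportant parameters cost little: good values of the important ones are still sampled often.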

Search with Open and Closed List


• The nodes that the algorithm has generated are kept in a data structure called
OPEN or fringe. Initially only the start node is in OPEN.
• The search starts with the root node. The algorithm picks a node from OPEN
for expanding and generates all the children of the node. Expanding a node
from OPEN results in a closed node. Some search algorithms keep track of
the closed nodes in a data structure called CLOSED.

A solution to the search problem is a sequence of operators that is associated with a


path from a start node to a goal node. The cost of a solution is the sum of the arc
costs on the solution path. For large state spaces, it is not practical to represent the
whole space. State space search makes explicit a sufficient portion of an implicit
state space graph to find a goal node. Each node represents a partial solution path
from the start node to the given node. In general, from this node there are many
possible paths that have this partial path as a prefix.

The search process constructs a search tree, where


• root is the initial state, and
• leaf nodes are nodes that are either
  • not yet expanded (i.e., in fringe), or
  • have no successors (i.e., “dead-ends”).

The search tree may be infinite because of loops, even if the state space is small.

The search problem will return as a solution a path to a goal node. Finding a
path is important in problems like path finding, solving 15-puzzle, and such
other problems. There are also problems like the N-queens problem for
which the path to the solution is not important. For such problems the search
problem needs to return the goal state only.
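A short sketch of search with both lists (the names below are ours): keeping a CLOSED set of already-expanded states is one direct answer to the earlier question of how to handle loops.

```python
def graph_search(successors, is_goal, start):
    """Search with an OPEN list (fringe) and a CLOSED set of expanded states.
    CLOSED prevents re-expansion, so cycles in the state space cannot trap us."""
    open_list = [[start]]                    # OPEN holds paths; initially the start
    closed = set()                           # CLOSED holds already-expanded states
    while open_list:
        path = open_list.pop(0)
        node = path[-1]
        if is_goal(node):
            return path
        if node in closed:
            continue                         # already expanded via another path
        closed.add(node)                     # expanding a node closes it
        open_list.extend(path + [s] for s in successors(node) if s not in closed)
    return None
```

Run on a graph with a cycle (A ↔ B), the CLOSED set stops the search from bouncing between the two states forever.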


Depth first Search

Algorithm

Depth First Search


Let fringe be a list containing the initial state
Loop
    if fringe is empty return failure
    Node ← remove-first(fringe)
    if Node is a goal
        then return the path from initial state to Node
    else generate all successors of Node, and
        merge the newly generated nodes into fringe
        (add generated nodes to the front of fringe)
End Loop

The depth first search algorithm puts newly generated nodes in the front of
OPEN. This results in expanding the deepest node first. Thus the nodes in
OPEN follow a LIFO order (Last In First Out). OPEN is thus implemented
using a stack data structure.
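The loop above can be sketched directly with a Python list as the stack. The graph literal is our reconstruction of the figure's edges from the trace that follows, so treat it as an assumption.

```python
def dfs(graph, start, goal):
    """Depth first search: the fringe is a LIFO stack of paths."""
    fringe = [[start]]                       # stack; top of stack = front of OPEN
    while fringe:
        path = fringe.pop()                  # take the deepest (most recent) node
        node = path[-1]
        if node == goal:
            return path
        # push children reversed so the first child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in path:            # avoid cycles along the current path
                fringe.append(path + [child])
    return None

# Edges reconstructed from the traced example (an assumption):
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["G"],
         "D": ["C", "F"], "E": [], "F": [], "G": []}
print(dfs(graph, "A", "G"))                  # the traced path A-B-D-C-G
```

The cycle check `child not in path` only guards the current path; it does not stop the same state being reached again along a different branch, which is exactly the behavior traced in the steps below.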

DFS illustrated

Figure: the state space graph (nodes A to H).


Figure: the search tree for the state space graph.

Let us now run Depth First Search on the state space graph shown above,
and trace its progress.

Step 1: Initially fringe contains only the node for A.



FRINGE: A


Step 2: A is removed from fringe. A is expanded and its children B and C


are put in front of fringe.


FRINGE: B C

Step 3: Node B is removed from fringe, and its children D and E are pushed
in front of fringe.

FRINGE: D E C

Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.



FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of


fringe.


FRINGE: G F E C

Step 6: Node G is expanded and found to be a goal node. The solution path
A-B-D-C-G is returned and the algorithm terminates.


FRINGE: G F E C

Properties of Depth First Search


Let us now examine some properties of the DFS algorithm.
• The algorithm takes exponential time.
• If N is the maximum depth of a node in the search space, in the worst
  case the algorithm will take time O(b^N).
• However, the space taken is linear in the depth of the search tree:
  O(bN).

• Note that the time taken by the algorithm is related to the maximum
depth of the search tree. If the search tree has infinite depth, the
algorithm may not terminate. This can happen if the search space is
infinite. It can also happen if the search space contains cycles. The
latter case can be handled by checking for cycles in the algorithm.
Thus Depth First Search is not complete.


Breadth First Search

Algorithm

Breadth first search

Let fringe be a list containing the initial state
Loop
    if fringe is empty return failure
    Node ← remove-first(fringe)
    if Node is a goal
        then return the path from initial state to Node
    else generate all successors of Node, and
        merge the newly generated nodes into fringe
        (add generated nodes to the back of fringe)
End Loop

Note that in breadth first search the newly generated nodes are put at the back
of fringe, i.e., the OPEN list. What this implies is that the nodes will be expanded
in a FIFO (First In First Out) order. The node that enters OPEN earlier will be
expanded earlier. This amounts to expanding the shallowest nodes first.
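The same loop with a FIFO queue gives BFS. As with the DFS sketch, the graph literal below is reconstructed from the trace that follows and should be read as an assumption.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth first search: the fringe is a FIFO queue of paths."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()              # shallowest node expands first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in path:            # avoid cycles along the current path
                fringe.append(path + [child])
    return None

# Edges reconstructed from the traced example (an assumption):
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["D", "G"],
         "D": ["C", "F"], "E": [], "F": [], "G": []}
print(bfs(graph, "A", "G"))                  # the shallowest goal path A-C-G
```

Swapping `pop()` for `popleft()` is the whole difference from the DFS sketch: the data structure that holds the fringe determines the search strategy.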


BFS illustrated
We will now consider the search space in Figure 1, and show how breadth
first search works on this graph.

Step 1: Initially fringe contains only one node corresponding to the source
state A.


FRINGE: A

Step 2: A is removed from fringe. The node is expanded, and its children B
and C are generated. They are placed at the back of fringe.



FRINGE: B C

Step 3: Node B is removed from fringe and is expanded. Its children D, E


are generated and put at the back of fringe.


FRINGE : C D E


Step 4: Node C is removed from fringe and is expanded. Its children D and
G are added to the back of fringe.


FRINGE: D E D G

Step 5: Node D is removed from fringe. Its children C and F are generated
and added to the back of fringe.


FRINGE: E D G C F

Step 6: Node E is removed from fringe. It has no children.



FRINGE: D G C F

Step 7: D is expanded, B and F are put in OPEN.



FRINGE: G C F B F


Step 8: G is selected for expansion. It is found to be a goal node. So the


algorithm returns the path A C G by following the parent pointers of the
node corresponding to G. The algorithm terminates.

Properties of Breadth-First Search


We will now explore certain properties of breadth first search. Let us
consider a model of the search tree as shown in Figure 3. We assume that
every non-leaf node has b children. Suppose that d is the depth of the
shallowest goal node, and m is the depth of the goal node found first.

Figure 3: Model of a search tree with uniform branching factor b.

Breadth first search is:

• Complete.
• Optimal (i.e., admissible) if all operators have the same cost. In general,
  breadth first search finds a solution with the shortest path length (fewest
  edges), which need not be the least-cost path.
• Exponential in time and space complexity. Suppose the search tree can be
  modeled as a b-ary tree as shown in Figure 3. Then the time and space
  complexity of the algorithm is O(b^d), where d is the depth of the solution
  and b is the branching factor (i.e., number of children) at each node.

Advantages of Breadth First Search

• Finds the path of minimal length to the goal.

Disadvantages of Breadth First Search

• Requires the generation and storage of a tree whose size is exponential in the
  depth of the shallowest goal node.

Heuristic Search

Heuristic function: A heuristic is a function used in informed search to identify the most
promising path. It takes the current state of the agent as its input and produces an estimate of
how close the agent is to the goal. The heuristic method might not always give the best solution,
but it is guaranteed to find a good solution in reasonable time. A heuristic function estimates how
close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path
from the given state to a goal state. The value of the heuristic function is always non-negative.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the heuristic estimate, and h*(n) is the actual (optimal) cost from n to the goal.
Hence the heuristic estimate should be less than or equal to the actual cost.

Pure Heuristic Search:


Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on
their heuristic value h(n). It maintains two lists, OPEN and CLOSED. In the CLOSED list, it places
those nodes which have already been expanded, and in the OPEN list it places nodes which have
not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded, all its successors are
generated, and n is placed on the closed list. The algorithm continues until a goal state is found.

In the informed search we will discuss two main algorithms which are given below:

o Best First Search Algorithm (Greedy search)


o A* Search Algorithm


Example of Heuristic Function


A heuristic function at a node n is an estimate of the optimum cost from the
current node to a goal. It is denoted by h(n).
h(n) = estimated cost of the cheapest path from node n to a goal node

Example 1: We want a path from Kolkata to Guwahati


Heuristic for Guwahati may be straight-line distance between Kolkata and
Guwahati
h(Kolkata) = euclideanDistance(Kolkata, Guwahati)

Example 2: 8-puzzle. The misplaced tiles heuristic is the number of tiles out
of place.

Initial state        Goal state
2 8 3                1 2 3
1 6 4                8   4
7   5                7 6 5

Figure 1: 8 puzzle

The first picture shows the current state n, and the second picture the goal
state. Here
h(n) = 5
because the tiles 2, 8, 1, 6 and the blank are out of place.

Manhattan Distance Heuristic: Another heuristic for the 8-puzzle is the
Manhattan distance heuristic. This heuristic sums the distances by which the
tiles are out of place. The distance of a tile is measured by the sum of the
differences in the x-positions and the y-positions.
For the above example, using the Manhattan distance heuristic (again
counting the blank),
h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 0 + 2 + 1 = 6
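Both heuristics can be computed directly. The sketch below uses 0 for the blank, and it reproduces the counts above only when the blank is included; many formulations exclude the blank (giving 4 and 5 instead), so the flag is explicit.

```python
# Misplaced-tile and Manhattan-distance heuristics for the 8-puzzle.
# 0 denotes the blank; counting it is a convention choice, flagged below.

INITIAL = (2, 8, 3,
           1, 6, 4,
           7, 0, 5)

GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

def misplaced_tiles(state, goal, count_blank=True):
    """Number of positions whose tile differs from the goal."""
    return sum(1 for s, g in zip(state, goal)
               if s != g and (count_blank or s != 0))

def manhattan(state, goal, count_blank=True):
    """Sum of |dx| + |dy| between each tile's current and goal position."""
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0 and not count_blank:
            continue
        r, c = i // 3, i % 3
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

print(misplaced_tiles(INITIAL, GOAL))   # 5 (4 if the blank is excluded)
print(manhattan(INITIAL, GOAL))         # 6 (5 if the blank is excluded)
```

Excluding the blank is the safer choice in practice, since counting it can overestimate the true number of moves and break admissibility.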


We will now study a heuristic search algorithm, best-first search.


Best First Search

Uniform Cost Search is a special case of the best first search algorithm. The
algorithm maintains a priority queue of nodes to be explored. A cost
function f(n) is applied to each node. The nodes are put in OPEN in the
order of their f values. Nodes with smaller f(n) values are expanded earlier.
The generic best first search algorithm is outlined below.

Best First Search


Let fringe be a priority queue containing the initial state
Loop
    if fringe is empty return failure
    Node ← remove-first(fringe)
    if Node is a goal
        then return the path from initial state to Node
    else generate all successors of Node, and
        put the newly generated nodes into fringe according to their f values
End Loop
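A sketch of the generic loop using a binary heap as the priority queue; the tie-breaking counter and all names are ours. With f = h this becomes greedy search, and with f = g (cost so far) it becomes Uniform Cost Search.

```python
import heapq
from itertools import count

def best_first_search(graph, start, goal, f):
    """Generic best-first search: expand the node with the smallest f value."""
    order = count()                          # tie-breaker for equal f values
    fringe = [(f(start), next(order), [start])]
    while fringe:
        _, _, path = heapq.heappop(fringe)   # remove-first by f value
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in path:
                heapq.heappush(fringe, (f(child), next(order), path + [child]))
    return None
```

For greedy search, pass the heuristic as f, e.g. `best_first_search(graph, "S", "G", h.get)` with `h` a dictionary of straight-line distance estimates.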

We will now consider different ways of defining the function f. This


leads to different search algorithms.

Greedy Search

In greedy search, the idea is to expand the node with the smallest estimated
cost to reach the goal.

We use a heuristic function


f(n) = h(n)
h(n) estimates the distance remaining to a goal.

Greedy algorithms often perform very well. They tend to find good
solutions quickly, although not always optimal ones.


The resulting algorithm is not optimal. The algorithm is also incomplete,


and it may fail to find a solution even if one exists. This can be seen by
running greedy search on the following example. A good heuristic for the
route-finding problem would be straight-line distance to the goal.

Figure 1 below shows an example of a route-finding problem. S is the
starting state, G is the goal state.

Figure 1: the route-finding graph.

Figure 2: straight-line distance estimates for the nodes.


Let us run the greedy search algorithm for the graph given in Figure 1. The
straight line distance heuristic estimates for the nodes are shown in Figure
2.

Step 1: S is expanded. Its children are A (h = 10.4) and D (h = 8.9).

Step 2: D has the smaller cost and is expanded next, generating A (h = 10.4)
and E (h = 6.9).

Step 3: E has the smaller cost and is expanded next.

Step 4: F has the smaller cost and is expanded next.

Step 5: G (h = 0) has the smallest cost and is expanded; it is found to be a
goal node, so greedy search returns the path S-D-E-F-G.

(Figure: a weighted graph for the second route-finding example, with start
node A, goal node H, and edge costs as labeled.)

Greedy Best-First Search illustrated

We will run greedy best first search on the problem in Figure 2. We use the
straight line heuristic. h(n) is taken to be the straight line distance from n to
the goal position.
The nodes will be expanded in the following order:
A-B-E-G-H

The path obtained is A-B-E-G-H and its cost is 99


Clearly this is not an optimum path. The path A-B-C-F-H has a cost of 39.


A* Search
We will next consider the famous A* algorithm. This algorithm was
given by Hart, Nilsson and Raphael in 1968.

A* is a best first search algorithm with


f(n) = g(n) + h(n)
where
g(n) = sum of edge costs from start to n
h(n) = estimate of lowest cost path from n to goal

f(n) = actual distance so far + estimated distance remaining

h(n) is said to be admissible if it underestimates the cost of any solution


that can be reached from n. If C*(n) is the cost of the cheapest solution
path from n to a goal node, and if h is admissible,
h(n) <= C*(n).
We can prove that if h(n) is admissible, then the search will find an optimal
solution.

A* Searching Process:


A* Search Implementation

A* was initially designed as a general graph traversal algorithm, and it is
widely used for solving pathfinding problems in video games. Because of its
flexibility and versatility, it can be used in a wide range of contexts. A* is
formulated for weighted graphs, which means it can find the path with the
smallest cost in terms of distance or time. This makes A* an informed,
best-first search algorithm. Let us have a detailed look into the various
aspects of A*.


What is A* Algorithm

The most important advantage of the A* search algorithm, which separates it
from other traversal techniques, is that it has a brain. This makes A* very
smart and pushes it far ahead of conventional algorithms.
Consider the diagram below:

Let us try to understand how the A* algorithm works. Imagine a huge maze,
one so big that it takes hours to reach the endpoint manually. Once you
complete it on foot, you need to go for another one, which implies that you
would end up investing a lot of time and effort finding the possible paths in
this maze. Now, you want to make it less time-consuming. To make it easier,
we will treat this maze as a search problem and try to apply the same approach
to other mazes we might encounter, provided they follow the same structure
and rules.
As the first step to convert this maze into a search problem, we need to define
these six things.
1. A set of prospective states we might be in


2. A beginning and end state


3. A way to decide if we’ve reached the endpoint
4. A set of actions in case of possible direction/path changes
5. A function that advises us about the result of an action
6. A set of costs incurring in different states/paths of movement

To solve the problem, we need to map the intersections to the nodes (denoted
by the red dots) and all the possible ways we can make movements towards the
edges (denoted by the blue lines).
A denotes the starting point and B denotes the endpoint. We define the starting
and endpoint at the nodes A and B respectively.
If we use an uninformed search algorithm, it would explore blindly, while an
informed algorithm would take the path that brings you closer to your
destination. For instance, consider a Rubik's cube; it has many prospective
states that you can be in, and this makes the solution very difficult. This calls
for the use of a guided search algorithm to find a solution. This explains the
importance of A*.
Unlike other algorithms, A* takes up a step only if it looks sensible and
reasonable as per its evaluation function; it avoids expanding steps the
function judges unpromising. This is why A* is a popular choice for AI
systems that replicate the real world, like video games and machine learning.

Why is A* Search Algorithm Preferred?


It’s easy to give movement to objects. But pathfinding is not simple. It is a
complex exercise. The following situation explains it.

The task is to take the unit you see at the bottom of the diagram, to the top of
it. You can see that there is nothing to indicate that the object should not take
the path denoted with pink lines. So it chooses to move that way. As and when
it reaches the top, it has to change its direction because of the ‘U’ shaped
obstacle. Then it changes the direction, goes around the obstacle, to reach the
top. In contrast to this, A* would have scanned the area above the object and
found a short path (denoted with blue lines). Thus, pathfinder algorithms like
A* help you plan things rather than waiting until you discover the problem.
They act proactively rather than reacting to a situation. The disadvantage is
that A* is a bit slower than the other algorithms. You can use a combination of
both to achieve better results: pathfinding algorithms for the bigger picture and
long paths with obstacles that change slowly, and movement algorithms for the
local picture and short paths with obstacles that change faster.

The algorithm A* is outlined below:

Algorithm A*

OPEN = nodes on frontier; CLOSED = expanded nodes
OPEN = {<s, nil>}
while OPEN is not empty
    remove from OPEN the node <n, p> with minimum f(n)
    place <n, p> on CLOSED
    if n is a goal node,
        return success (path p)
    for each edge e connecting n and m with cost c
        if <m, q> is on CLOSED and {p|e} is cheaper than q
            then remove <m, q> from CLOSED, put <m, {p|e}> on OPEN
        else if <m, q> is on OPEN and {p|e} is cheaper than q
            then replace q with {p|e}
        else if m is not on OPEN
            then put <m, {p|e}> on OPEN
return failure
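The pseudocode above maps naturally onto a heap-based implementation. The sketch below simplifies the bookkeeping: rather than editing entries on OPEN and CLOSED in place, it keeps the cheapest known g value per state and skips stale heap entries. The graph and heuristic used in the test are assumptions.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search. graph[n] is a list of (neighbor, cost) pairs; h is an
    admissible heuristic. Returns (path, cost) or (None, inf)."""
    open_heap = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                           # cheapest g found per state
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                              # stale entry; cheaper path known
        for child, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2                # found a cheaper path to child
                heapq.heappush(open_heap,
                               (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")
```

The `best_g` table plays the role of CLOSED plus the "cheaper path" checks in the pseudocode: an entry popped with a g value worse than the best known one is simply discarded.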

Points to remember:

o A* algorithm returns the path which occurred first, and it does not
search for all remaining paths.
o The efficiency of A* algorithm depends on the quality of heuristic.


o A* algorithm expands all nodes which satisfy the condition f(n) < C*,
  where C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:

o The branching factor is finite.
o The cost of every action is fixed (and positive).

Optimal: A* search algorithm is optimal if it satisfies the following two
conditions:

o Admissible: the first condition required for optimality is that h(n) should
  be an admissible heuristic for A* tree search. An admissible heuristic is
  optimistic in nature.
o Consistency: the second condition, consistency, is required only for A*
  graph search.

If the heuristic function is admissible, then A* tree search will always find
the least-cost path.

Time Complexity: The time complexity of A* search depends on the
heuristic function, and the number of nodes expanded is exponential in the
depth of the solution d. So the time complexity is O(b^d), where b is the
branching factor.

Space Complexity: The space complexity of A* search is O(b^d).


Hill Climbing Algorithm

Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best solution
to the problem. It terminates when it reaches a peak value where no neighbor has a
higher value.
o Hill climbing is a technique used for solving mathematical optimization problems. One of the
  widely discussed examples of the hill climbing algorithm is the Travelling Salesman Problem, in
  which we need to minimize the distance travelled by the salesman.
o It is also called greedy local search as it only looks to its good immediate neighbor state and not beyond
that.
o A node of hill climbing algorithm has two components which are state and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a single
current state.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill Climbing is a variant of the Generate and Test method. The
  Generate and Test method produces feedback which helps to decide which direction to move in the
  search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the previous states.

State-space Diagram for Hill Climbing:


The state-space landscape is a graphical representation of the hill-climbing algorithm, showing
a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and
on the X-axis the state space. If the function on the Y-axis is cost, then the goal of the search
is to find the global minimum (or a local minimum). If the function on the Y-axis is an
objective function, then the goal of the search is to find the global maximum (or a local
maximum).


Different regions in the state space landscape:


Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also
another state which is higher than it.

Global Maximum: Global maximum is the best possible state of state space landscape. It has the
highest value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states
have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:


o Simple hill Climbing:
o Steepest-Ascent hill-climbing:
o Stochastic hill Climbing:

1. Simple Hill Climbing:


Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one
neighbor state at a time and selects the first one which improves the current cost, setting it as the
current state. It checks only one successor at a time; if that successor is better than the current
state, it moves there, else it stays in the same state. This algorithm has the following features:

o Less time consuming


o The solution is less optimal, and a solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state; if it is a goal state then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
  a. If it is a goal state, then return success and quit.
  b. Else if it is better than the current state, then make it the current state.
  c. Else return to Step 2.
o Step 5: Exit.

2. Steepest-Ascent hill climbing:


The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm
examines all the neighboring nodes of the current state and selects the neighbor node which is
closest to the goal state. It consumes more time, as it examines multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:

o Step 1: Evaluate the initial state; if it is a goal state then return success and stop, else make
  the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
  a. Let SUCC be a state such that any successor of the current state will be better than it.
  b. For each operator that applies to the current state:
     i. Apply the operator and generate a new state.
     ii. Evaluate the new state.
     iii. If it is a goal state, then return it and quit; else compare it to SUCC.
     iv. If it is better than SUCC, then set SUCC to this new state.
  c. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.
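The steps above can be sketched compactly. The neighbor and value functions below are toy assumptions (maximizing f(x) = -(x - 3)^2 over the integers), not part of the module text.

```python
def steepest_ascent(neighbors, value, state, max_iters=1000):
    """Steepest-ascent hill climbing: examine every neighbor, move to the
    best one, and stop when no neighbor beats the current state."""
    for _ in range(max_iters):
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                     # local (possibly global) maximum
        state = best
    return state

value = lambda x: -(x - 3) ** 2              # toy objective with its peak at x = 3
neighbors = lambda x: [x - 1, x + 1]
print(steepest_ascent(neighbors, value, 0))  # climbs 0 -> 1 -> 2 -> 3
```

On this single-peak objective the algorithm always reaches the global maximum; the local-maximum, plateau, and ridge problems discussed next arise only on bumpier landscapes.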

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all of its neighbors before moving. Rather, this search
algorithm selects one neighbor node at random and decides whether to move to it or to examine
another.

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its
neighboring states, but there is another state also present which is higher than the local maximum.

Solution: Backtracking technique can be a solution of the local maximum in state space landscape.
Create a list of the promising path so that the algorithm can backtrack the search space and explore
other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the
current state have the same value; because of this, the algorithm cannot find a best direction to
move. A hill-climbing search might get lost in the plateau area.

Solution: The solution for a plateau is to take big steps (or very small steps) while searching.
Randomly select a state which is far away from the current state, so that the algorithm may
find a non-plateau region.


3. Ridges: A ridge is a special form of local maximum. It is an area that is higher than its
surrounding areas but slopes in a direction no single available move follows, so the top cannot be reached in a single move.

Solution: Applying several moves at once, or moving in different directions simultaneously, can
overcome this problem.

Simulated Annealing:
A hill-climbing algorithm that never makes a move toward a lower value is guaranteed to be
incomplete, because it can get stuck on a local maximum. If the algorithm instead performs a pure
random walk, moving to successors chosen at random, it may be complete but is very inefficient.
Simulated Annealing combines the two ideas to yield both efficiency and completeness.

In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then
cooling it gradually, which allows the material to settle into a low-energy crystalline state. Simulated
annealing works analogously: the algorithm picks a random move instead of the best
move. If the random move improves the state, it is always accepted. Otherwise, the algorithm
accepts the downhill move only with some probability less than 1, and that probability decreases as the "temperature" is lowered over time.
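A minimal sketch of the idea (the names and the geometric cooling schedule are illustrative assumptions, not from the text): worse moves are accepted with probability exp(Δ/T), which shrinks as the temperature T falls.

```python
import math
import random

def simulated_annealing(start, neighbors, value, t0=10.0, cooling=0.95, steps=500):
    """Pick a random move; accept bad moves with probability exp(delta / T)."""
    current, t = start, t0
    for _ in range(steps):
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        # Always accept improvements; accept worsening moves with prob < 1.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling  # gradual "cooling" of the temperature
    return current

random.seed(42)  # fixed seed for a reproducible run
result = simulated_annealing(0,
                             neighbors=lambda x: [x - 1, x + 1],
                             value=lambda x: -(x - 3) ** 2)
```

Early on, with T large, exp(Δ/T) is close to 1 even for bad moves, so the search behaves like a random walk; as T shrinks, the behavior approaches pure hill climbing.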


Constraint satisfaction problems (CSPs)

CSP:
• state is defined by variables Xi with values from domain Di
• goal test is a set of constraints specifying allowable combinations
of values for subsets of variables
Allows useful general-purpose algorithms with more power than standard search
algorithms

Example: Map-Coloring

• Variables WA, NT, Q, NSW, V, SA, T


• Domains Di = {red,green,blue}
• Constraints: adjacent regions must have different colors
• e.g., WA ≠ NT

Solutions are complete and consistent assignments, e.g., WA = red, NT = green, Q = red,
NSW = green, V = red, SA = blue, T = green.
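The map-coloring CSP above can be solved with a simple backtracking search. This is a minimal sketch under the stated constraints; variable-ordering heuristics and constraint propagation are omitted.

```python
# Adjacency of the Australian regions (T has no neighbors).
NEIGHBORS = {
    'WA':  ['NT', 'SA'],
    'NT':  ['WA', 'SA', 'Q'],
    'SA':  ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q':   ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'],
    'V':   ['SA', 'NSW'],
    'T':   [],
}
COLORS = ['red', 'green', 'blue']

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):
        return assignment                 # complete, consistent assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        # Constraint check: adjacent regions must have different colors.
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color
            if backtrack(assignment) is not None:
                return assignment
            del assignment[var]           # undo this choice and try the next color
    return None                           # dead end: trigger backtracking

solution = backtrack({})
# e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red', ...}
```

The `del assignment[var]` line is what makes this *backtracking*: when no color works further down the tree, the most recent choice is undone and the next value is tried.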


Constraint graph
• Binary CSP: each constraint relates two variables

• Constraint graph: nodes are variables, arcs are constraints

Varieties of CSPs
✓ Discrete variables

✓ finite domains:

✓ n variables, domain size d → O(d^n) complete assignments

✓ e.g., 3-SAT (NP-complete)

✓ infinite domains:

✓ integers, strings, etc.

✓ e.g., job scheduling, variables are start/end days for each job

✓ need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3

✓ Continuous variables


✓ e.g., start/end times for Hubble Space Telescope observations

✓ linear constraints solvable in polynomial time by linear programming
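The d^n count of complete assignments mentioned above for finite domains can be checked directly; here n = 3 variables over the 3-color domain from the map-coloring example.

```python
from itertools import product

domain = ['red', 'green', 'blue']            # d = 3
n = 3                                        # number of variables
assignments = list(product(domain, repeat=n))
count = len(assignments)                     # d ** n = 27 complete assignments
```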

Real-world CSPs
✓ Assignment problems

✓ e.g., who teaches what class

✓ Timetabling problems

✓ e.g., which class is offered when and where?


✓ Transportation scheduling

✓ Factory scheduling

Game Playing

Game playing is an important domain of artificial intelligence. Games don't
require much knowledge; the only knowledge we need to provide is the rules,
the legal moves, and the conditions for winning or losing the game.
Both players try to win the game, so each of them tries to make the best possible
move at each turn. Uninformed searching techniques like BFS (Breadth-First Search) are
impractical here because the branching factor is very high, so searching would take a
lot of time. We therefore need search procedures that improve:
• the generate procedure, so that only good moves are generated;
• the test procedure, so that the best move can be explored first.

Mini-Max Algorithm


• The Mini-Max algorithm is a recursive (backtracking) algorithm used
in decision-making and game theory. It provides an optimal move for the
player, assuming that the opponent also plays optimally.
• The Mini-Max algorithm uses recursion to search through the game tree.
• It is mostly used for game playing in AI, such as chess,
checkers, tic-tac-toe, and various other two-player games. The algorithm
computes the minimax decision for the current state.
• In this algorithm two players play the game; one is called MAX and the other is
called MIN.
• The players are adversaries: each tries to maximize its own benefit, which
means minimizing the opponent's.
• MAX selects the maximized value and MIN selects the minimized value.
• The minimax algorithm performs a depth-first search to
explore the complete game tree.
• It proceeds all the way down to the terminal nodes of
the tree, then backs the values up the tree as the recursion unwinds.

Working of Min-Max Algorithm:


• The working of the minimax algorithm can be easily described using an
example. Below we have taken an example of game-tree which is
representing the two-player game.
• In this example, there are two players one is called Maximizer and other is
called Minimizer.
• Maximizer will try to get the Maximum possible score, and Minimizer will
try to get the minimum possible score.
• This algorithm applies DFS, so in this game-tree, we have to go all the way
through the leaves to reach the terminal nodes.
• At the terminal node, the terminal values are given so we will compare those
value and backtrack the tree until the initial state occurs. Following are the
main steps involved in solving the two-player game tree:

Step-1: In the first step, the algorithm generates the entire game-tree and apply the
utility function to get the utility values for the terminal states. In the below tree


diagram, let A be the initial state of the tree. Suppose the maximizer takes the first
turn, with a worst-case initial value of -∞, and the minimizer takes the next
turn, with a worst-case initial value of +∞.

Step 2: First we find the utility values for the Maximizer. Its initial value is
-∞, so we compare each terminal value with this initial value and keep the higher
one at each node, i.e., the maximum among them all:
For node D: max(-1, -∞) ⇒ max(-1, 4) = 4
For node E: max(2, -∞) ⇒ max(2, 6) = 6
For node F: max(-3, -∞) ⇒ max(-3, -5) = -3
For node G: max(0, -∞) ⇒ max(0, 7) = 7

Step 3: In the next step it is the minimizer's turn, so it compares all node values
with +∞ and finds the values of the third-layer nodes:
For node B: min(4, 6) = 4
For node C: min(-3, 7) = -3


Step 4: Now it is the Maximizer's turn again; it chooses the maximum of all
node values, which becomes the value of the root node. This game tree has only
four layers, so we reach the root immediately, but in real games there
will be many more layers.
For node A: max(4, -3) = 4

That completes the workflow of the two-player minimax algorithm.
Algorithm:
minimax(player, board)
    if (game over in current board position)
        return winner
    children = all legal moves for player from this board
    if (max's turn)
        return maximal score of calling minimax on all the children
    else (min's turn)
        return minimal score of calling minimax on all the children
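The pseudocode can be made runnable for the worked example above. This is a minimal sketch in which internal nodes are lists of children and leaves are plain integers holding the terminal utilities from Step 1.

```python
def minimax(node, maximizing):
    """Return the minimax value of a node; leaves are plain integers."""
    if isinstance(node, int):        # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A -> B(D, E), C(F, G), with the leaf utilities from the worked example.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
root_value = minimax(tree, maximizing=True)
# root_value == 4, matching Step 4: max(min(4, 6), min(-3, 7)) = 4
```

Flipping `maximizing` on each recursive call is what alternates the MAX and MIN layers of the tree.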

Properties of Mini-Max algorithm:


• Complete: the Min-Max algorithm is complete. It will definitely find a solution
(if one exists) in a finite search tree.
• Optimal: the Min-Max algorithm is optimal if both opponents play
optimally.
• Time complexity: since it performs a DFS of the game tree, the time
complexity of Min-Max is O(b^m), where b is the branching factor of
the game tree and m is its maximum depth.
• Space complexity: like DFS, the space complexity of Mini-Max is O(bm).
Limitation of the minimax Algorithm:
The main drawback of the minimax algorithm is that it becomes very slow for
complex games such as chess and Go. Such games have a huge branching
factor, giving the player a large number of choices to evaluate at every move.
