UNIT-I
Introduction to Artificial Intelligence and Search Strategies
• Part-I
• History and Introduction to AI
• Intelligent Agent
• Types of agents
• Environment and types
• Typical AI problems
Artificial Intelligence
What is AI?
1. Intelligence
2. Artificial device
Intelligence means:
– A system with intelligence is expected to behave as intelligently as a human
– A system with intelligence is expected to behave in the best possible manner
Definition of AI
• "The exciting new effort to make computers think … machines with minds …" (Haugeland, 1985)
• "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning …" (Bellman, 1978)
• "The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)
• "The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)
Foundations of AI:
• Psychology – to investigate the human mind; theories of reasoning and learning
• Linguistics – the meaning and structure of language
• Mathematics – theories of logic, probability, decision making and computation
• Computer Science (CS) – makes AI a reality
AI Fields
• Speech Recognition
• Natural Language Processing
• Computer Vision
• Image Processing
• Robotics
• Pattern Recognition (Machine Learning)
• Neural Network (Deep Learning)
These fields define the scope and view of Artificial Intelligence.
History of AI
• McCulloch and Pitts (1943)
– Developed a Boolean circuit model of brain
– Their paper explained how it is possible for neural networks to compute
• Minsky and Edmonds (1951)
– Built a neural network computer (SNARC)
– Used 3000 vacuum tubes and a network with 40 neurons.
• Dartmouth conference (1956):
– Conference brought together the founding fathers of artificial intelligence for
the first time
– In this meeting the term “Artificial Intelligence” was adopted.
• 1952-1969
– Newell and Simon – the Logic Theorist was published (considered by many to be the first AI program)
– Samuel – developed several programs for playing checkers
History…. continued
Applications of AI
• Gaming
• Natural Language Processing
• Expert Systems
• Vision Systems
• Speech Recognition
• Handwriting Recognition
• Intelligent Robots
Summary
• Definition of AI
• Turing Test
• Foundations of AI
• History
Intelligent Agents
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment,
Actuators, Sensors)
• Environment types
• Agent types or The Structure of Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators/effectors.
Autonomous Agent
Examples
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators/effectors
• Robotic agent:
  – Sensors: cameras (picture analysis), infrared range finders, solar sensors
  – Actuators: various motors, speakers, wheels
• Software agent (softbot):
  – Is nothing more than a piece of code: code functions as its sensors and code functions as its actuators
  – Software bots act only in digital spaces
  – The term bot is derived from robot
  – Example: chatbots, the little messaging applications that pop up in the corner of your screen
Agent Terminology
• Performance Measure of Agent − It is the
criteria, which determines how successful an
agent is.
• Behavior of Agent − It is the action that agent
performs after any given sequence of
percepts.
• Percept − It is agent’s perceptual inputs at a
given instance.
• Percept Sequence − It is the history of all that
an agent has perceived till date.
• Agent Function − It is a map from the percept
sequence to an action.
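As a small illustration of an agent function, here is a sketch in Python (the two-square vacuum world, with its percept and action names, is a standard textbook toy, not from these slides): the function maps the percept sequence seen so far to an action.

# Minimal sketch of an agent function: a map from percept sequences to actions.
# The two-square vacuum world is illustrative; names are assumptions.
def vacuum_agent(percept_sequence):
    location, status = percept_sequence[-1]   # act on the latest percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

percepts = [("A", "Dirty")]
print(vacuum_agent(percepts))                 # -> Suck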
What is an Intelligent Agent
• An agent is anything that can
– perceive its environment through sensors, and
– act upon that environment through actuators (or effectors)
• An Intelligent Agent must sense, must act, and must be autonomous (to some extent). It must also be rational.
• Fundamental faculties of an intelligent agent:
• Acting
• Sensing
• Understanding, reasoning, learning
• In order to act, one must sense; blind action is not a characteristic of intelligence.
• Goal: Design rational agents that do a “good job” of acting in their
environments
– success determined based on some objective performance
measure
What is an Intelligent Agent
• Rational Agents
• AI is about building rational agents.
• An agent should strive to "do the right thing"
• An agent is something that perceives and acts.
• A rational agent always does the right thing.
• Perfect Rationality (the agent knows everything and always takes the correct action)
– Humans do not satisfy this rationality
• Bounded Rationality-
– Human use approximations
– Definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure.
• Rational = best? Yes, but the best given its knowledge.
• Rational = optimal? Yes, to the best of its abilities and constraints (subject to resources).
• PEAS analysis: specify the
  – Performance measure
  – Environment
  – Actuators
  – Sensors
• For any agent under design, ask: Performance measure? Environment? Actuators? Sensors?
• Deterministic AI environments are those in which the outcome can be determined based on a specific state; in other words, deterministic environments ignore uncertainty. Example: chess.
• Most real-world AI environments are not deterministic. Instead, they can be classified as stochastic.
– The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.
Agent types
Utility Function
• a mapping of states onto real numbers
• allows rational decisions in two kinds of situations:
  – evaluation of the tradeoffs among conflicting goals
  – evaluation of competing goals
• Examples
– quicker, safer, more reliable ways to get where going;
– price comparison shopping
– bidding on items in an auction
– evaluating bids in an auction
• The while loop in the agent program is the "execution phase" of this agent's behavior
  – Note that this architecture assumes that the execution phase does not require monitoring of the environment
An action or an operator takes the agent from one state to another state which is
called a successor state. A state can have a number of successor states.
A plan is a sequence of actions. The cost of a plan is referred to as the path cost. The
path cost is a positive number, and a common path cost may be the sum of the costs
of the steps in the path.
[Figure: the goal state.]
Example problem: Pegs and Disks
We now describe a sequence of actions that can be applied to the initial state (the board figures are omitted):
Step 1: Move A → C
Step 2: Move A → B
Step 3: Move A → C
Step 4: Move B → A
Step 5: Move C → B
Step 6: Move A → B
Step 7: Move C → B
Search
1. A set of states
2. Operators and their costs
3. Start state
4. A test to check for goal state
We will now outline the basic search algorithm, and then consider various variations of it.
The basic search algorithm
Let L be a list containing the initial state (L = the fringe).
The search algorithm maintains a list of nodes called the fringe (open list). The fringe keeps track of the nodes that have been generated but are yet to be explored.
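A minimal sketch of this outline in Python (illustrative, not from the slides): the fringe holds generated-but-unexplored states, and the choice of which entry `select` removes is exactly what distinguishes the strategies discussed below.

def basic_search(initial_state, goal_test, successors, select):
    """Generic search skeleton: `select` removes and returns one fringe entry
    (FIFO -> breadth-first, LIFO -> depth-first, cheapest -> uniform cost)."""
    fringe = [initial_state]                 # L = the fringe (open list)
    while fringe:
        state = select(fringe)               # remove one node from the fringe
        if goal_test(state):
            return state                     # goal found
        fringe.extend(successors(state))     # generated but not yet explored
    return None                              # fringe exhausted: no solution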
Search algorithm: Key issues
• How can we handle loops?
• Corresponding to a search algorithm, should we return a path or a node?
• Which node should we select?
• Alternatively, how would we place the newly generated nodes in the fringe?
• Which path to find?
The objective of a search problem is to find a path from the initial state to a goal
state.
Our objective could be to find any path, or we may need to find the shortest path
or least cost path.
Evaluating Search strategies
What are the characteristics of the different search algorithms and what is their
efficiency? We will look at the following three factors to measure this.
1. Completeness: Is the strategy guaranteed to find a solution if one exists?
2. Optimality: Does the strategy find the lowest-cost (optimal) solution?
3. Complexity: What are the time and memory (space) costs of finding a solution?
The search strategies we will consider include breadth-first, depth-first, depth-limited and iterative deepening search.
We also need to introduce some data structures that will be used in the search algorithms.
Node data structure
A node used in the search algorithm is a data structure which contains the following:
1. A state description
2. A pointer to the parent of the node
3. Depth of the node
4. The operator that generated this node
5. Cost of this path (sum of operator costs) from the start state
The nodes that the algorithm has generated are kept in a data structure called OPEN or
fringe. Initially only the start node is in OPEN.
Search tree may be infinite because of loops even if state space is small
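This node structure translates directly into code; a minimal sketch in Python (illustrative, with field names chosen to mirror the five items above):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Search-tree node matching the five fields listed above."""
    state: Any                       # 1. state description
    parent: Optional["Node"] = None  # 2. pointer to the parent node
    depth: int = 0                   # 3. depth of the node
    action: Any = None               # 4. operator that generated this node
    path_cost: float = 0.0           # 5. cost of the path from the start state

def solution_path(node):
    """Follow parent pointers back to the root to recover the path."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))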
Uninformed Search Strategies
• Breadth-first search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Comparing Uninformed Search Strategies
• Completeness
– Will a solution always be found if one exists?
• Time
– How long does it take to find the solution?
– Often represented as the number of nodes searched
• Space
– How much memory is needed to perform the search?
– Often represented as the maximum number of nodes stored at once
• Optimal
– Will the optimal (least cost) solution be found?
Breadth-First Search
Note that in breadth-first search the newly generated nodes are put at the back of the fringe (the OPEN list). The nodes are expanded in FIFO (First In, First Out) order: the node that enters OPEN earlier is expanded earlier.
BFS illustrated
Step 1: Initially fringe contains only one node corresponding to the source state A.
Step 2: A is removed from fringe. The node is expanded, and its children B and C
are generated. They are placed at the back of fringe.
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and
put at the back of fringe.
Step 4: Node C is removed from fringe and is expanded. Its children D and G are
added to the back of fringe.
Step 5: Node D is removed from fringe. Its children C and F are generated and added
to the back of fringe.
Step 6: Node E is removed from fringe. It has no children.
Step 7: D is expanded, B and F are put in OPEN.
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns
the path A C G by following the parent pointers of the node corresponding to G. The
algorithm terminates.
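The trace above can be reproduced with a few lines of Python. This is an illustrative sketch: the adjacency list is reconstructed from the steps of the trace (an undirected graph with edges A-B, A-C, B-D, B-E, C-D, C-G, D-F), and the goal test is applied when a node is selected for expansion, as in Step 8.

from collections import deque

def bfs(start, goal, neighbours):
    """Breadth-first search: the fringe is a FIFO queue of paths."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()              # FIFO: expand the oldest node
        node = path[-1]
        if node == goal:
            return path
        for child in neighbours[node]:
            if child not in path:            # avoid loops along the path
                fringe.append(path + [child])
    return None

# Undirected graph reconstructed from the trace above (illustrative).
edges = [("A","B"), ("A","C"), ("B","D"), ("B","E"), ("C","D"), ("C","G"), ("D","F")]
neighbours = {}
for u, v in edges:
    neighbours.setdefault(u, []).append(v)
    neighbours.setdefault(v, []).append(u)

print(bfs("A", "G", neighbours))             # -> ['A', 'C', 'G']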
What is the Complexity of Breadth-First Search?
• Time complexity
  – Assume (worst case) that there is one goal leaf at the right-hand side of the tree, at depth d
  – BFS will then expand all nodes up to and including depth d:
    1 + b + b^2 + … + b^d = O(b^d)
• Space complexity
  – How many nodes can be in the queue in the worst case?
  – While expanding the last node at depth d-1, there are on the order of b^d unexpanded nodes in the queue: O(b^d)
Advantages & Disadvantages of Breadth First Search
• Complete
– Yes if b (max branching factor) is finite
• Time
  – 1 + b + b^2 + … + b^d = O(b^d)
  – exponential in d
• Space
  – O(b^d)
  – keeps every node in memory
  – This is the big problem; an agent that generates nodes at 10 MB/sec will produce about 864 GB in 24 hours
• Optimal
  – Yes (if cost is 1 per step); not optimal in general
Lessons From Breadth First Search
• Memory, not time, is the limiting problem. Exploring depth-first instead keeps the queue small:
  – at each depth l < d, only the b-1 unexpanded siblings are kept
  – at depth d we have b nodes
  – total = (d-1)*(b-1) + b = O(bd), i.e. linear in d rather than exponential
Depth-First Search
• Complete
  – No: fails in infinite-depth spaces and in spaces with loops
  – Modify to avoid repeated states along the path
  – Yes: in finite spaces
• Time
  – O(b^m), where m = maximum depth of the search space
  – Not great if m is much larger than d
  – But if the solutions are dense, this may be faster than breadth-first search
• Space
  – O(bm) … linear space
• Optimal
  – No
Depth-Limited Search and Iterative Deepening
• Depth-limited search (DLS) is depth-first search with a preset depth limit.
• Key idea: iterative deepening search (IDS) applies DLS repeatedly with increasing depth limits. It terminates when a solution is found or when no solution exists.
• IDS combines the benefits of BFS and DFS: like DFS the memory requirements are very modest, O(bd); like BFS, it is complete when the branching factor is finite.
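A compact sketch of depth-limited search driven by iterative deepening (illustrative Python; `successors` is assumed to be a dict from state to child states):

def depth_limited(node, goal, successors, limit, path=None):
    """DFS that gives up below `limit`; returns a path or None."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                          # cutoff reached
    for child in successors.get(node, []):
        result = depth_limited(child, goal, successors, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    """Run DLS with limits 0, 1, 2, ... until a solution is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None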
Bi-Directional Search
• Bidirectional search involves alternately searching from the start state toward the goal and from the goal state toward the start, stopping when the two searches meet.
• Time and space complexity: O(b^(d/2))
• Note that the algorithm works well only when there are unique start and goal states.
• The goal of UCS is to find the path to the goal node with the lowest cumulative cost
• The algorithm expands nodes in the order of their cost from the source.
• The path cost is usually taken to be the sum of the step costs.
• In uniform cost search the newly generated nodes are put in OPEN according to their path
costs.
• This ensures that when a node is selected for expansion it is a node with the cheapest cost
among the nodes in OPEN, “priority queue”
• Let g(n) = cost of the path from the start node to the current node n. Sort nodes by
increasing value of g.
Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
– Equivalent to breadth-first if step costs all equal
Uniform Cost Search: enqueue nodes in order of cost
[Figure: three snapshots of a search tree with step costs 5, 2, 1, 7, 4, 5 as nodes are enqueued in increasing order of path cost.]
Note that breadth-first search can be seen as a special case of uniform cost search, where the path cost is just the depth.
Uniform Cost Search in Tree
1. fringe ← MAKE-EMPTY-QUEUE()
2. fringe ← INSERT(root_node)                 // with g = 0
3. loop {
   1. if fringe is empty then return false    // finished without goal
   2. node ← REMOVE-SMALLEST-COST(fringe)
   3. if node is a goal
      1. print node and g
      2. return true                          // found a goal
   4. Lg ← EXPAND(node)     // Lg is the set of children with their g costs
      // NOTE: do not check Lg for goals here!!
   5. fringe ← INSERT-ALL(Lg, fringe)
   }
Uniform Cost Search in Graph
1. fringe ← MAKE-EMPTY-QUEUE()
2. fringe ← INSERT(root_node)                 // with g = 0
3. loop {
   1. if fringe is empty then return false    // finished without goal
   2. node ← REMOVE-SMALLEST-COST(fringe)
   3. if node is a goal
      1. print node and g
      2. return true                          // found a goal
   4. Lg ← EXPAND(node)     // Lg is the set of neighbours with their g costs
      // NOTE: do not check Lg for goals here!!
   5. fringe ← INSERT-IF-NEW(Lg, fringe)      // ignore revisited nodes,
                                              // unless reached with a better g
   }
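The same pseudocode in runnable form, using a binary heap as the priority queue (an illustrative sketch; the example graph matches the worked S-A-B-C-G example that follows, with edge costs S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5):

import heapq

def uniform_cost_search(start, goal, graph):
    """Expand the cheapest-g node first; the fringe is a priority queue of
    (g, state, path). The goal test is applied when a node is *selected*."""
    fringe = [(0, start, [start])]
    best_g = {}                                  # cheapest g seen per state
    while fringe:
        g, state, path = heapq.heappop(fringe)   # REMOVE-SMALLEST-COST
        if state == goal:
            return g, path
        if best_g.get(state, float("inf")) <= g:
            continue                             # already reached more cheaply
        best_g[state] = g
        for child, step_cost in graph.get(state, []):
            heapq.heappush(fringe, (g + step_cost, child, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)]}
print(uniform_cost_search("S", "G", graph))      # -> (10, ['S', 'B', 'G'])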
Uniform cost search
• Breadth-first search finds the shallowest goal state, and this will be the cheapest solution provided the path cost is a nondecreasing function of the depth of the solution. If this is not the case, breadth-first search is not guaranteed to find the best (i.e. cheapest) solution.
• Uniform cost search remedies this by expanding the lowest cost node on
the fringe, where cost is the path cost, g(n).
• In the following slides those values that are attached to paths are the cost
of using that path.
Consider the following problem…
[Figure: the graph, with edge costs S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5.]
We wish to find the shortest route from node S to node G; that is, node S is the initial state and node G is the goal state. In terms of path cost, we can clearly see that the route S-B-G is the cheapest route. However, if we run breadth-first search on this problem it will find the non-optimal path S-A-G, assuming that A is the first node to be expanded at level 1. A UCS trace of the same node set follows.
UCS trace (reconstructed from the animated slide):
• We start with our initial state S and expand it. The revealed nodes are added to the queue, which is then sorted on path cost: A(1), B(5), C(15).
• Node A is at the front of the queue; it is removed and expanded, and the revealed node (node G) is added to the queue, which is sorted on path cost again: B(5), G(11), C(15). Note that we have now found a goal state, but we do not recognise it, as it is not at the front of the queue.
• Node B is removed from the queue and expanded; the revealed node (node G again) is added, and the queue is sorted on path cost once more: G(10), G(11), C(15). Node G now appears in the queue twice, once as G(10) and once as G(11); nodes with cheaper path cost have priority, so G(10) is at the front.
• As G(10) is at the front of the queue, we proceed to the goal state.

[Figure: the same graph again, S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5.]

The goal state is achieved and the path S-B-G is returned. In relation to path cost, UCS has found the optimal route.
• Algorithm outline:
  – Always select from OPEN the node with the smallest g(n) value for expansion, and put all newly generated nodes into OPEN
  – Nodes in OPEN are sorted by their g(n) values (in ascending order)
  – Terminate if a node selected for expansion is a goal
Uniform-Cost Search
GENERAL-SEARCH(problem, ENQUEUE-BY-PATH-COST)

[Figure: graph reconstructed from the slide, with edge costs S-A = 1, S-B = 5, S-C = 8, A-D = 3, A-E = 7, A-G = 9, B-G' = 4, C-G'' = 5.]

exp. node   nodes list (OPEN)
            {S(0)}
S           {A(1) B(5) C(8)}
A           {D(4) B(5) C(8) E(8) G(10)}
D           {B(5) C(8) E(8) G(10)}
B           {C(8) E(8) G'(9) G(10)}
C           {E(8) G'(9) G(10) G''(13)}
E           {G'(9) G(10) G''(13)}
G'          {G(10) G''(13)}

Expanded nodes move to the CLOSED list in the order S, A, D, B, C, E; the search stops when G'(9), a goal, is selected for expansion.
Comparing Search Strategies
[Table comparing the completeness, time, space and optimality of the search strategies.]
Search Graphs
1. Hamming distance: the first picture shows the current state n, and the second picture the goal state.
h(n) = 5 (because tiles 2, 8, 1, 6 and 7 are out of place)
[Figure: a search tree with heuristic estimates h′(a) = 1.6, h′(b) = 0.7, h′(c) = 0.8.]
• Greedy algorithms often perform very well. They tend to find good solutions
quickly, although not always optimal ones.
Properties of greedy best-first search
• Optimal?
  – No: we just saw a counter-example
• Time?
  – O(b^m): can generate all nodes at depth m before finding a solution
  – m = maximum depth of the search space
• Space?
  – O(b^m): again, in the worst case, can generate all nodes at depth m before finding a solution
Example of A* search
[Figure: the same tree, now expanded on f = g + h, with h′(a) = 1.6, h′(b) = 0.7, h′(c) = 0.8.]
A* Search
Romania with step costs in km
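A* expands the node with the smallest f(n) = g(n) + h(n), where g is the path cost so far and h the heuristic estimate to the goal. A minimal sketch (illustrative Python; the tiny graph and its edge costs are assumptions, while the h′ values echo the figure above):

import heapq

def a_star(start, goal, graph, h):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]     # (f, g, state, path)
    closed = set()
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in closed:
            continue
        closed.add(state)
        for child, step in graph.get(state, []):
            if child not in closed:
                heapq.heappush(fringe,
                               (g + step + h(child), g + step, child, path + [child]))
    return None

# Tiny illustrative instance; edge costs are assumptions, h values from the figure.
graph = {"a": [("b", 1.0), ("c", 1.0)], "b": [("goal", 0.9)], "c": [("goal", 0.7)]}
h = {"a": 1.6, "b": 0.7, "c": 0.8, "goal": 0.0}.get
print(a_star("a", "goal", graph, h))   # -> path ['a', 'c', 'goal'] with cost ~1.7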
Problem decomposition into an and-or graph
• A goal may be split into 2 (or more) sub-goals, ALL of which must be satisfied if the goal is to succeed; the links joining such sub-goals are marked with a curved line:
Goal 1 Goal 2
Problem decomposition into an and-or
graph
• Or a goal may be split into 2 (or more) sub-
goals, EITHER of which must be satisfied if the
goal is to succeed; the links joining the goals
aren't marked with a curved line:
Goal 1 Goal 2
Problem decomposition into an and-or
graph
• Example
• "The function of a financial advisor is to help
the user decide whether to invest in a savings
account, or the stock market, or both. The
recommended investment depends on the
investor's income and the current amount
they have saved:
Problem decomposition into an and-or graph
• Step 1: decide what the overall goal is, and put it in a box at the top of the graph:
Advise user: investment should be X
• Step 2: decide what sub-goals this goal can be
split into.
In this case, X can be one of three things:
savings, stocks or a mixture.
Add three sub-goals to the graph. Make sure
the links indicate “or” rather than “and”.
Advise user: investment should be X
[Figure: the goal box with three OR sub-goals: savings, stocks, mixture.]
[Figure: further sub-goals in the graph: "Amount saved < Y", "Amount saved > Y", "Income is steady", "Income > W", "Income < W", "Income is not steady".]
• Now we need a box in which the value of Y
is calculated:
Y is Z times 3000
[Figure: a state-space landscape plotting the heuristic cost function, with its global minimum marked.]
• Begin
– 1. Identify possible starting states and measure
the distance (f) of their closeness with the goal
node; Push them in a stack according to the
ascending order of their f ;
– 2. Repeat
• Pop stack to get the stack-top element;
• If the stack-top element is the goal, announce it and exit
• Else push its children into the stack in the ascending order of their f
values;
• Looks one step ahead to determine if any successor is better than the
current state; if there is, move to the best successor.
[Figure: hill climbing on 8-puzzle boards; successors are scored (e.g. h = -5, -5, -2, -3, -4) and the move with the best, i.e. highest, value is taken, for example from h = -3 to h = -1.]
f(n) = -(number of tiles out of place)
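A sketch of hill climbing with this evaluation function (illustrative Python; boards are 9-tuples read row by row, with 0 standing for the blank):

def tiles_out_of_place(state, goal):
    """f(n) = -(number of tiles out of place); higher is better."""
    return -sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def successors(state):
    """Slide the blank (0) up/down/left/right on a 3x3 board."""
    moves = []
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            moves.append(tuple(s))
    return moves

def hill_climb(state, goal):
    """Move to the best successor while it strictly improves f; stop otherwise."""
    while True:
        best = max(successors(state), key=lambda s: tiles_out_of_place(s, goal))
        if tiles_out_of_place(best, goal) <= tiles_out_of_place(state, goal):
            return state              # local maximum (possibly the goal)
        state = best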
Drawbacks of hill climbing
• Problems:
– Local Maxima: peaks that aren’t the highest point in the
space
– Plateaus: the space has a broad flat region that gives the
search algorithm no direction (random walk)
– Ridges: flat like a plateau, but with drop-offs to the sides;
• Remedies:
–Random restart
–Problem reformulation
• Some problem spaces are great for hill
climbing and others are terrible.
Hill Climbing Search
• Variants of hill climbing:
  – Stochastic hill climbing
  – First-choice hill climbing
  – Random-restart hill climbing
  – Evolutionary hill climbing
• Stochastic hill climbing
  – Basic hill climbing always selects an uphill move; stochastic hill climbing selects at random from among the available uphill moves
  – This helps address problems of simple hill climbing such as ridges
• Random-restart hill climbing
  – Tries to overcome the other problems of hill climbing
  – The initial state is randomly generated; when the search reaches a position from which no progressive state is possible, it restarts from a new random initial state
  – The local maxima problem is handled by RRHC
• Evolutionary hill climbing
  – Performs random mutations
  – Genetic-algorithm-based search
Example of a local optimum
[Figure: 8-puzzle boards between a start state and the goal; from the current state (f = -4) the successors score -4 and -3, while the goal (f = 0) lies further on, so hill climbing gets stuck at a local optimum.]
The N-Queens Problem
• Suppose you have 8 chess
queens...
• ...and a chess board
The N-Queens Problem
Can the queens be placed on the board so that no two queens are attacking each other?
The N-Queens Problem
Two queens are not allowed in the
same row...
The N-Queens Problem
Two queens are not allowed in the
same row, or in the same column...
The N-Queens Problem
Two queens are not allowed in the
same row, or in the same column, or
along the same diagonal.
The N-Queens Problem
The number of queens, and the size of the board, can vary: N queens are placed on a board with N columns (an N x N board).
The N-Queens Problem
We will write a program which tries
to find a way to place N queens on
an N x N chess board.
How the program works
The program uses a stack
to keep track of where
each queen is placed.
How the program works
ROW 1, COL 1
How the program works
We also have an integer
variable to keep track of
how many rows have
been filled so far.
ROW 1, COL 1
1 filled
How the program works
Each time we try to place
a new queen in the next
row, we start by placing
the queen in the first
column...
ROW 2, COL 1
ROW 1, COL 1
1 filled
How the program works
...if there is a conflict
with another queen,
then we shift the new
queen to the next
column.
ROW 2, COL 2
ROW 1, COL 1
1 filled
How the program works
If another conflict occurs,
the queen is shifted
rightward again.
ROW 2, COL 3
ROW 1, COL 1
1 filled
How the program works
When there are no
conflicts, we stop and
add one to the value of
filled.
ROW 2, COL 3
ROW 1, COL 1
2 filled
How the program works
Let's look at the third
row. The first position
we try has a conflict...
ROW 3, COL 1
ROW 2, COL 3
ROW 1, COL 1
2 filled
How the program works
...so we shift to column
2. But another conflict
arises...
ROW 3, COL 2
ROW 2, COL 3
ROW 1, COL 1
2 filled
How the program works
...and we shift to the
third column.
Yet another conflict
arises...
ROW 3, COL 3
ROW 2, COL 3
ROW 1, COL 1
2 filled
How the program works
...and we shift to column
4. There's still a conflict
in column 4, so we try to
shift rightward again...
ROW 3, COL 4
ROW 2, COL 3
ROW 1, COL 1
2 filled
How the program works
...but there's nowhere
else to go.
ROW 3, COL 4
ROW 2, COL 3
ROW 1, COL 1
2 filled
How the program works
When we run out of room in a row, we pop the stack, reduce filled by 1, and continue working on the previous row.
ROW 2, COL 3
ROW 1, COL 1
1 filled
How the program works
Now we continue
working on row 2,
shifting the queen to the
right.
ROW 2, COL 4
ROW 1, COL 1
1 filled
How the program works
This position has no
conflicts, so we can
increase filled by 1, and
move to row 3.
ROW 2, COL 4
ROW 1, COL 1
2 filled
How the program works
In row 3, we start again
at the first column.
ROW 3, COL 1
ROW 2, COL 4
ROW 1, COL 1
2 filled
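The walkthrough above is stack-based backtracking, and it translates almost line for line into code (an illustrative sketch; rows and columns are 0-indexed here):

def n_queens(n):
    """Stack-based backtracking, as in the walkthrough above.
    stack[r] is the column of the queen in row r; len(stack) == rows filled."""
    def conflict(row, col):
        return any(c == col or abs(c - col) == abs(r - row)
                   for r, c in enumerate(stack))
    stack, col = [], 0
    while len(stack) < n:
        while col < n and conflict(len(stack), col):
            col += 1                  # conflict: shift the new queen rightward
        if col < n:
            stack.append(col)         # no conflict: place queen, fill this row
            col = 0
        else:
            if not stack:
                return None           # nowhere left to go at all: no solution
            col = stack.pop() + 1     # backtrack: reduce filled, shift previous queen
    return stack

print(n_queens(8))                    # e.g. [0, 4, 7, 5, 2, 6, 1, 3]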
Local search example: hill-climbing search
◼ complete-state formulation for 8-queens
❑ successor function returns all possible states generated by moving a single
queen to another square in the same column (8 x 7 = 56 successors for each
state)
❑ the heuristic cost function h is the number of pairs of queens that are attacking
each other
[Figure: an example initial state.]
Representation
• In the search tree, a variable is assigned at each
level.
• Solutions have to be complete assignments; therefore they appear at depth n, the number of variables and the maximum depth of the tree.
• Depth-first search algorithms are popular in CSPs.
• The simplest class of CSP (map coloring, n-queens)
are characterized by:
– discrete variables
– finite domains
Finite domains
• If the maximum size of the domain of any variable is
d, then the number of possible complete
assignments is O(dn), exponential in the number of
variables.
• CSPs with finite domain include Boolean CSPs,
whose variables can only be true or false.
• In most practical applications, CSP algorithms can
solve problems with domains orders of magnitude
larger than the ones solvable by uninformed search
algorithms.
Constraints
• The simplest type is the unary constraint, which constrains the values of just one variable.
• A binary constraint relates two variables.
• Higher-order constraints involve three or more
variables. Cryptarithmetic puzzles are an example:
Cryptarithmetic puzzles (e.g. TWO + TWO = FOUR)
• Variables: F, T, U, W, R, O, X1, X2, X3
• Domains: {0,1,2,3,4,5,6,7,8,9}
• Constraints:
– Alldiff (F,T,U,W,R,O)
– O + O = R + 10 · X1
– X1 + W + W = U + 10 · X2
– X2 + T + T = O + 10 · X3
– X3 = F, T ≠ 0, F ≠ 0
Depth-first search with
backtracking
• Standard depth-first search on a CSP wastes time
searching when constraints have already been
violated.
• Because of the way that the operators have been
defined, an operator can never redeem a constraint
that has already been violated.
• A first improvement is:
– To test constraints after each variable assignment
– If all possible values violate some constraint, then the
algorithm backtracks to the last valid assignment
• Variables are classified as: past, current, future.
Backtracking search algorithm
1. Set each variable as undefined. Empty the stack. All variables are future variables.
2. Select a future variable as the current variable.
   If one exists, delete it from FUTURE and push it on the stack (top = current variable);
   if not, the assignment is a solution.
3. Select an unused value for the current variable.
   If one exists, mark the value as used and go to 4;
   if not, set the current variable as undefined,
   mark all its values as unused,
   unstack the variable and add it back to FUTURE;
   if the stack is empty, there is no solution,
   if not, go to 3.
4. Test the constraints between the past variables and the current one.
   If they are satisfied, go to 2;
   if not, go to 3.
(It is possible to use heuristics to select variables in step 2 and values in step 3.)
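A sketch of this algorithm in Python (illustrative, not from the slides): constraints are tested after each assignment against the past variables, and the search backtracks as soon as every value of the current variable fails. The small map-colouring instance at the bottom (region names and adjacencies are assumptions for illustration) exercises it.

def backtracking_search(variables, domains, constraint):
    """Depth-first assignment with backtracking."""
    assignment = {}
    def backtrack():
        if len(assignment) == len(variables):
            return dict(assignment)              # complete assignment = solution
        var = next(v for v in variables if v not in assignment)  # current variable
        for value in domains[var]:
            if all(constraint(var, value, w, assignment[w])
                   for w in assignment):         # test against past variables
                assignment[var] = value
                result = backtrack()
                if result is not None:
                    return result
                del assignment[var]              # undo and try the next value
        return None                              # all values failed: backtrack
    return backtrack()

# Tiny map-colouring instance (illustrative): adjacent regions must differ.
adjacent = {("WA","NT"), ("WA","SA"), ("NT","SA"), ("NT","Q"), ("SA","Q")}
def different_if_adjacent(v, a, w, b):
    return not ((v, w) in adjacent or (w, v) in adjacent) or a != b
regions = ["WA", "NT", "SA", "Q"]
colours = {r: ["red", "green", "blue"] for r in regions}
print(backtracking_search(regions, colours, different_if_adjacent))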
Forward checking algorithm
• Idea:
  – Keep track of remaining legal values for unassigned variables
  – Terminate search when any variable has no legal values
[Figure: a step-by-step forward checking example.]
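A sketch of the same search with forward checking added (illustrative Python, reusing the `constraint(var, value, other_var, other_value)` signature and the map-colouring instance from the previous sketch): after each tentative assignment the remaining legal values of the unassigned variables are pruned, and the branch is abandoned as soon as any domain empties.

def forward_checking_search(variables, domains, constraint):
    """Backtracking plus forward checking: prune the remaining legal values
    of unassigned variables; fail early when any domain becomes empty."""
    def search(assignment, domains):
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            pruned = {w: [b for b in domains[w] if constraint(var, value, w, b)]
                      for w in variables if w not in assignment and w != var}
            if all(pruned.values()):             # no domain emptied
                result = search({**assignment, var: value}, {**domains, **pruned})
                if result is not None:
                    return result
        return None
    return search({}, domains)

# Works with the map-colouring instance from the previous sketch:
# print(forward_checking_search(regions, colours, different_if_adjacent))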
Constraint propagation
• Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures:
◼ Domain: {0, …, 9}
◼ Distinct variables: S ≠ E, M ≠ S, …
◼ Search:
  ◼ assign scores to a proposed solution, h
  ◼ update the bound, b
Variables and domain, as a constraint-programming declaration:
// domain
var int l[Letters] in 0..9;
What Kinds of Games?
Mainly games of strategy with the following characteristics: two players who move in alternating turns, deterministic rules, and perfect information about the game state.
Two-Player Game
[Flowchart: each player in turn checks whether the game is over; if not, the program generates the successors of the current position, evaluates them, makes its move, and hands control back to the opponent, repeating until the game is over.]
Game Tree (2-player, Deterministic, Turns)
[Figure: a game tree whose levels alternate between the computer's turn (MAX) and the opponent's turn (MIN).]
• backed-up value
– of a max-position: the value of its largest successor
– of a min-position: the value of its smallest successor
Minimax – Animated Example
[Figure: a game tree with leaf values 5, 2, 1, 3, 6, 2, 0, 7; the backed-up values at the MIN level are 3 and 6.]
The computer (MAX) can obtain 6 by choosing the right-hand edge from the first node.
Minimax Strategy
• Why do we take the min value every other level of the tree?
• Because those levels represent the opponent's moves: we assume the opponent plays optimally and will choose the move that minimizes our score.
Minimax Function
• MINIMAX-VALUE(n) =
  – UTILITY(n), if n is a terminal state
  – max over the successors s of n of MINIMAX-VALUE(s), if n is a MAX node
  – min over the successors s of n of MINIMAX-VALUE(s), if n is a MIN node
Minimax algorithm
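A minimal recursive sketch of the algorithm (illustrative Python; trees are written as nested lists with numeric leaves for terminal utilities, and the leaf grouping below is reconstructed from the animated example above):

def minimax_value(node, maximizing=True):
    """MINIMAX-VALUE(n): UTILITY(n) at terminal states, otherwise the
    max (at MAX nodes) or min (at MIN nodes) of the successors' values."""
    if isinstance(node, (int, float)):          # terminal: utility value
        return node
    values = [minimax_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Tree reconstructed from the animated example (leaves 5, 2, 1, 3, 6, 2, 0, 7):
tree = [[[5, 2], [1, 3]], [[6, 2], [0, 7]]]
print(minimax_value(tree))                      # -> 6 (the right-hand edge)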
Tic Tac Toe
• Let p be a position/state in the game
• Define the utility function f(p) by
– f(p) =
• largest positive number if p is a win for computer
• smallest negative number if p is a win for opponent
• RCDC – RCDO
– where RCDC is number of rows, columns and diagonals in
which computer could still win
– and RCDO is number of rows, columns and diagonals in
which opponent could still win.
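A sketch of this evaluation function (illustrative Python; the board encoding, a 9-element list with "X" for the computer, "O" for the opponent and None for empty, is an assumption):

LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
         (0,3,6), (1,4,7), (2,5,8),      # columns
         (0,4,8), (2,4,6)]               # diagonals

def f(board, computer="X", opponent="O"):
    """f(p) = RCDC - RCDO: lines still open for the computer minus
    lines still open for the opponent (won/lost positions handled separately)."""
    rcdc = sum(1 for line in LINES
               if all(board[i] != opponent for i in line))
    rcdo = sum(1 for line in LINES
               if all(board[i] != computer for i in line))
    return rcdc - rcdo

empty = [None] * 9
print(f(empty))                          # -> 0 (symmetric position)
board = empty[:]; board[4] = "X"         # computer takes the centre
print(f(board))                          # -> 4 (the centre lies on 4 lines)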
Properties of Minimax
• Complete: yes, if the game tree is finite
• Optimal: yes, against an optimal opponent
• Time complexity: O(b^m) (the whole tree is explored)
• Space complexity: O(bm) (depth-first exploration)
Searching Game Trees
• Exhaustively searching a game tree is not usually a good idea.
• Even for a simple tic-tac-toe game there are over 350,000 nodes
in the complete game tree.
Alpha-beta Pruning
• A method that can often cut off half of the game tree.
α-β pruning example
[Figure: a sequence of slides stepping through α-β pruning on a small game tree; once the first MIN subtree is evaluated to 3 (α = 3), an alpha cutoff prunes branches that cannot improve on it.]
Alpha Cutoff
[Figure: an alpha cutoff; the bounds >3 and =3 at the parents show that the remaining leaves (8, 10) cannot affect the result, so they are cut off.]
Beta Cutoff
[Figure: a beta cutoff; with bounds =4 and <4 at the parents, the subtree evaluating to 8 (>8) is cut off.]
Alpha-Beta Pruning
[Figure: a max/min/max tree with leaf evaluations 5, 2, 10, 11, 1, 2, 2, 8, 6, 5, 12, 4, 3, 25, 2.]
Properties of α-β
• Pruning does not affect final result. This means that it gets the
exact same result as does full minimax.
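A sketch of minimax with alpha-beta pruning (illustrative Python, using the same nested-list tree convention as the minimax sketch earlier; it returns the same value as full minimax while skipping branches that cannot affect the result):

def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning."""
    if isinstance(node, (int, float)):   # terminal: utility value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # beta cutoff: MIN will avoid this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # alpha cutoff: MAX has a better option
    return value

tree = [[[5, 2], [1, 3]], [[6, 2], [0, 7]]]   # same tree as the minimax sketch
print(alphabeta(tree))                        # -> 6 (same value as full minimax)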
Thank you!