
AI UNIT - I

UNIT-I
Introduction to Artificial Intelligence and Search
Strategies

• Part-I
• History and Introduction to AI
• Intelligent Agent
• Types of agents
• Environment and types
• Typical AI problems

Artificial Intelligence
What is AI ?

Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by John McCarthy in 1956.

There are two ideas in the definition.

1. Intelligence
2. Artificial device

Intelligence means –
- A system with intelligence is expected to behave as intelligently as a human
– A system with intelligence is expected to behave in the best possible manner

Definition of AI
Systems that think like humans:
• “The exciting new effort to make computers think … machines with minds …” (Haugeland, 1985)
• “Activities that we associate with human thinking, activities such as decision-making, problem solving, learning …” (Bellman, 1978)

Systems that think rationally:
• “The study of mental faculties through the use of computational models” (Charniak and McDermott, 1985)
• “The study of the computations that make it possible to perceive, reason, and act” (Winston, 1992)

Systems that act like humans:
• “The art of creating machines that perform functions that require intelligence when performed by people” (Kurzweil, 1990)
• “The study of how to make computers do things at which, at the moment, people are better” (Rich and Knight, 1991)

Systems that act rationally:
• “A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes” (Schalkoff, 1990)
• “The branch of computer science that is concerned with the automation of intelligent behavior” (Luger and Stubblefield, 1993)

In conclusion, these definitions fall into four categories: systems that think like humans, act like humans, think rationally, or act rationally.
Goals of AI

• To Create Expert Systems − systems which exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.
• To Implement Human Intelligence in Machines − creating systems that understand, think, learn, and behave like humans.
AI Foundations?
AI inherited many ideas, viewpoints and techniques from other disciplines:

• Philosophy/Psychology – to investigate the human mind; theories of reasoning and learning
• Linguistics – the meaning and structure of language
• Mathematics – theories of logic, probability, decision making and computation
• Computer Science (CS) – to make AI a reality
AI Fields
• Speech Recognition
• Natural Language Processing
• Computer Vision
• Image Processing
• Robotics
• Pattern Recognition (Machine Learning)
• Neural Network (Deep Learning)
Define scope and view of Artificial Intelligence

• Designing systems that are as intelligent as humans – embodied by the concept of the Turing Test.
• Logic and the laws of thought – deals with the study of ideal or rational thought processes and inference.
• The study of rational agents.
The Turing Test
(Can machines think? A. M. Turing, 1950)
History of AI
• McCulloch and Pitts (1943)
– Developed a Boolean circuit model of brain
– They wrote a paper explaining how it is possible for neural networks to compute
• Minsky and Edmonds (1951)
– Built a neural network computer (SNARC)
– Used 3000 vacuum tubes and a network with 40 neurons.
• Dartmouth conference (1956):
– Conference brought together the founding fathers of artificial intelligence for
the first time
– In this meeting the term “Artificial Intelligence” was adopted.
• 1952-1969
– Newell and Simon - Logic Theorist was published (considered by many to be
the first AI program )
– Samuel - Developed several programs for playing checkers

History…. continued

• 1969-1979 Development of Knowledge-based systems


– Expert systems:
• Dendral: Inferring molecular structures
• Mycin: diagnosing blood infections
• Prospector: recommending exploratory drilling.
• In the 1980s, Lisp Machines developed and marketed.
• Around 1985, neural networks return to popularity
• In 1988, there was a resurgence of probabilistic and decision-
theoretic methods
• The 1990s saw major advances in all areas:
• machine learning, data mining
• natural language understanding
• vision, virtual reality, games, etc.

Applications of AI
• Gaming
• Natural Language Processing
• Expert Systems
• Vision Systems
• Speech Recognition
• Handwriting Recognition
• Intelligent Robots
Summary
• Definition of AI
• Turing Test
• Foundations of AI
• History
Intelligent Agents
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment,
Actuators, Sensors)
• Environment types
• Agent types or The Structure of Agents



Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators/effectors.

Ex: human being, calculator, etc.

An agent has a goal – the objective which the agent has to satisfy.

Actions can potentially change the environment.

An agent perceives the current percept or a sequence of percepts.

Autonomous agent – an agent that acts on its own experience rather than only on built-in knowledge.
Examples
Agents
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators/ effectors

• Human agent: eyes, ears, and other organs used as sensors;


• hands, legs, mouth, and other body parts used as actuators/ Effector

• Robotic agent:
– Sensors: cameras (picture analysis), infrared range finders, solar sensors
– Actuators: various motors, speakers, wheels
• The term bot is derived from robot.
• Software Agent (Softbot)
– Program functions serve as sensors
– Program functions serve as actuators
– Software bots act only in digital spaces; a softbot is nothing more than a piece of code
– Example: chatbots, the little messaging applications that pop up in the corner of your screen
Agent Terminology
• Performance Measure of Agent − It is the
criteria, which determines how successful an
agent is.
• Behavior of Agent − It is the action that agent
performs after any given sequence of
percepts.
• Percept − It is the agent’s perceptual inputs at a given instant.
• Percept Sequence − It is the history of all that an agent has perceived till date.
• Agent Function − It is a map from the percept sequence to an action.
What is an Intelligent Agent
• An agent is anything that can
– perceive its environment through sensors, and
– act upon that environment through actuators (or effectors)
• An Intelligent Agent must sense, must act, and must be autonomous (to some extent). It also must be rational.
• Fundamental faculties of an Intelligent Agent:
• Acting
• Sensing
• Understanding, reasoning, learning
• In order to act, one must sense; blind action is not a characteristic of intelligence.
• Goal: Design rational agents that do a “good job” of acting in their
environments
– success determined based on some objective performance
measure
What is an Intelligent Agent
• Rational Agents
• AI is about building rational agents.
• An agent should strive to "do the right thing"
• An agent is something that perceives and acts.
• A rational agent always does the right thing.
• Perfect Rationality (the agent knows everything and always takes the correct action)
– Humans do not satisfy this rationality
• Bounded Rationality
– Humans use approximations
• Definition of Rational Agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure.
• Rational = best? Yes, but best to the extent of its knowledge.
• Rational = optimal? Yes, to the best of its abilities and constraints (subject to resources).

What is an Intelligent Agent - Agent Function

• Agent Function (percepts ==> actions)


– Maps from percept histories to actions f: P* → A
– The agent program runs on the physical architecture to produce the
function f
– agent = architecture + program
Action := Function(Percept Sequence)
If (Percept Sequence) then do Action

• Example: A Simple Agent Function for Vacuum World


If (current square is dirty) then suck
Else move to adjacent square
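As a minimal illustration, this agent function can be written in Python (the two-square A/B world of the next slide is assumed; all names are illustrative):

def vacuum_agent_function(percept):
    # percept is a (location, status) pair, e.g. ('A', 'Dirty')
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    # else move to the adjacent square
    return 'Right' if location == 'A' else 'Left'

print(vacuum_agent_function(('A', 'Dirty')))   # Suck
print(vacuum_agent_function(('B', 'Clean')))   # Left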

Example: Vacuum Cleaner Agent

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp

What is an Intelligent Agent
• Limited Rationality
– limited sensors, actuators, and computing power may make Rationality
impossible
– Theory of NP-completeness: some problems are likely impossible to solve
quickly on ANY computer
– Both natural and artificial intelligence are always limited
– Degree of Rationality: the degree to which the agent’s internal "thinking"
maximizes its performance measure, given
• the available sensors
• the available actuators
• the available computing power
• the available built-in knowledge

PEAS Analysis
• To design a rational agent, we must specify the task
environment.

• PEAS Analysis:
– Specify Performance Measure
– Environment
– Actuators
– Sensors

PEAS Analysis – Examples

• Agent: Medical diagnosis system


– Performance measure: Healthy patient, minimize costs
– Environment: Patient, hospital, staff
– Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
– Sensors: Keyboard (entry of symptoms, findings, patient's answers)

• Agent: Part-picking robot


– Performance measure: Percentage of parts in correct bins
– Environment: Conveyor belt with parts, bins
– Actuators: Jointed arm and hand
– Sensors: Camera, joint angle sensors

PEAS Analysis – More Examples

• Agent: Internet Shopping Agent

– Performance measure??
– Environment??
– Actuators??
– Sensors??

Environment

• Environments in which agents operate can be defined in different ways.
• The environment appears from the point of view of the agent itself.
Environment Types
• Fully observable (vs. partially observable):
– An agent's sensors give it access to the complete state of the environment at each point in
time.
– It is convenient because the agent need not maintain any internal state to keep track of the world.
– Ex. Chess (Ex: Deep Blue)
– Partially observable: when sensors are noisy or inaccurate, or parts of the state are missing from the sensor data (e.g., a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking)
– Ex. Poker
• Deterministic (vs. stochastic):

• Deterministic AI environments are those in which the outcome can be determined based on a specific state. In other words, deterministic environments ignore uncertainty. Ex. Chess

• Most real world AI environments are not deterministic. Instead, they can be classified as
stochastic.

• Ex: Self-driving vehicles are a classic example of stochastic AI processes.

Environment Types (cont.)

• Episodic (vs. sequential):


• In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
• However, in Sequential environment, an agent requires memory of past actions to determine
the next best actions.

• Static (vs. dynamic):


– The environment is unchanged while an agent is deliberating

– The environment is semi-dynamic if the environment itself does not change with the
passage of time but the agent's performance score does.

• Discrete (vs. continuous):


– Discrete AI environments are those on which a finite [although arbitrarily large] set of
possibilities can drive the final outcome of the task. Chess is also classified as a discrete
AI problem. Continuous AI environments rely on unknown and rapidly changing data
sources. Vision systems in drones or self-driving cars operate on continuous AI
environments.
Environment Types (cont.)
• Complete vs. Incomplete
Complete AI environments are those in which, at any given time, we have enough information to complete a branch of the problem.

Ex: Chess is a classic example of a complete AI environment.

Ex: Poker, on the other hand, is an incomplete environment, as AI strategies can’t anticipate many moves in advance and, instead, focus on finding a good “equilibrium” at any given time.

• Single agent (vs. multi-agent):


– An agent operating by itself in an environment.
Environment Types (cont.)

The environment type largely determines the agent design.

The real world is (of course) partially observable, stochastic, sequential,


dynamic, continuous, multi-agent

End of First Lecture

• Thanks
Agent types

• Four basic types:


– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents



Agent Types
• Simple reflex agents
– They choose actions only based on the current percept.
– These are based on condition-action rules (It is a rule that maps a state
(condition) to an action)
– They are stateless devices which do not have memory of past world states.
• Model Based Reflex Agents (with memory)
– Model: knowledge about “how things happen in the world”.
– have internal state which is used to keep track of past states of the world.
• Goal Based Agents
– are agents which in addition to state information have a kind of goal
information which describes desirable situations.
– Agents of this kind take future events into consideration.
• Utility-based agents
– base their decision on classic axiomatic utility-theory in order to act rationally.
Note: All of these can be turned into “learning” agents
A Simple Reflex Agent

• We can summarize part of the table by formulating commonly occurring patterns as condition-action rules.
• Example:
if car-in-front-brakes
then initiate braking
• Agent works by finding a rule whose condition matches the current situation – rule-based systems.
• But this only works if the current percept is sufficient for making the correct decision.

function Simple-Reflex-Agent(percept) returns action
  static: rules, a set of condition-action rules
  state ← Interpret-Input(percept)
  rule ← Rule-Match(state, rules)
  action ← Rule-Action[rule]
  return action
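A runnable Python sketch of this pseudocode; the vacuum-world rule table is an illustrative assumption:

def interpret_input(percept):
    # in this tiny example the percept is already a usable state description
    return percept

rules = {                                # condition-action rules
    ('A', 'Dirty'): 'Suck', ('B', 'Dirty'): 'Suck',
    ('A', 'Clean'): 'Right', ('B', 'Clean'): 'Left',
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rules[state]                  # Rule-Match and Rule-Action in one lookup

print(simple_reflex_agent(('A', 'Dirty')))   # Suck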

Example: Simple Reflex Vacuum Agent

Agents that Keep Track of the World

• Updating internal state requires two kinds of encoded knowledge
– knowledge about how the world changes (independent of the agents’ actions)
– knowledge about how the agents’ actions affect the world
• But, knowledge of the internal state is not always enough
– how to choose among alternative decision paths (e.g., where should the car go at an intersection)?
– requires knowledge of the goal to be achieved

function Reflex-Agent-With-State(percept) returns action
  static: rules, a set of condition-action rules
          state, a description of the current world
  state ← Update-State(state, percept)
  rule ← Rule-Match(state, rules)
  action ← Rule-Action[rule]
  state ← Update-State(state, action)
  return action
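A minimal Python sketch of Reflex-Agent-With-State; the rules and update-state functions are assumed stand-ins supplied by the designer:

def make_reflex_agent_with_state(rules, update_state, initial_state):
    state = initial_state
    def agent(percept):
        nonlocal state
        state = update_state(state, percept)   # how the world has changed
        action = rules(state)                  # Rule-Match + Rule-Action
        state = update_state(state, action)    # effect of the agent's own action
        return action
    return agent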

Agents with Explicit Goals

• Reasoning about actions
– reflex agents only act based on pre-computed knowledge (rules)
– goal-based (planning) agents act by reasoning about which actions achieve the goal
– more adaptive and flexible

Agents with Explicit Goals
• Knowing current state is not always enough.
– State allows an agent to keep track of unseen parts of the world, but the agent must
update state based on knowledge of changes in the world and of effects of own
actions.
– Goal = description of desired situation
• Examples:
– Decision to change lanes depends on a goal to go somewhere (and other factors);
– Decision to put an item in shopping basket depends on a shopping list, map of store,
knowledge of menu
• Notes:
– Search (Russell Chapters 3-5) and Planning (Chapters 11-13) are concerned with
finding sequences of actions to satisfy a goal.
– Reflexive agent concerned with one action at a time.
– Classical Planning: finding a sequence of actions that achieves a goal.
– Contrast with condition-action rules: involves consideration of future "what will
happen if I do ..." (fundamental difference).

A Complete Utility-Based Agent

• Utility Function
– a mapping of states onto real numbers
– allows rational decisions in two kinds of situations:
• evaluation of the tradeoffs among conflicting goals
• evaluation of competing goals

Utility-Based Agents (Cont.)

• A preferred world state has higher utility for the agent (utility = quality of being useful)

• Examples
– quicker, safer, more reliable ways to get where going;
– price comparison shopping
– bidding on items in an auction
– evaluating bids in an auction

• Utility function: state ==> U(state) = measure of happiness

Shopping Agent Example
• Navigating: Move around store; avoid obstacles
– Reflex agent: store map precompiled.
– Goal-based agent: create an internal map, reason explicitly about it, use signs and adapt to changes.
• Gathering: Find and put into cart groceries it wants, need to
induce objects from percepts.
– Reflex agent: wander and grab items that look good.
– Goal-based agent: shopping list.
• Menu-planning: Generate shopping list, modify list if store is
out of some item.
– Goal-based agent: required; what happens when a needed item is not there?
Achieve the goal some other way. e.g., no milk cartons: get canned milk or
powdered milk.

General Architecture for Goal-Based Agents
Input percept
state ← Update-State(state, percept)
goal ← Formulate-Goal(state, perf-measure)
search-space ← Formulate-Problem(state, goal)
plan ← Search(search-space, goal)
while (plan not empty) do
  action ← Recommendation(plan, state)
  plan ← Remainder(plan, state)
  output action
end

• Simple agents do not have access to their own performance measure


– In this case the designer will "hard wire" a goal for the agent, i.e. the designer will choose
the goal and build it into the agent
• Similarly, unintelligent agents cannot formulate their own problem
– this formulation must be built-in also

• The while loop above is the "execution phase" of this agent's behavior
– Note that this architecture assumes that the execution phase does not require
monitoring of the environment
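A minimal Python sketch of this architecture; the Formulate-Goal, Formulate-Problem and Search helpers are assumed stand-ins, and, as noted above, the execution phase does not monitor the environment:

def goal_based_agent(percept, state, perf_measure,
                     update_state, formulate_goal, formulate_problem, search):
    state = update_state(state, percept)
    goal = formulate_goal(state, perf_measure)
    search_space = formulate_problem(state, goal)
    plan = search(search_space, goal)
    while plan:                          # execution phase
        action, plan = plan[0], plan[1:]
        yield action                     # output action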

Learning Agents

• Four main components:
– Performance element: the agent function
– Learning element: responsible for making improvements by observing performance
– Critic: gives feedback to the learning element by measuring the agent’s performance
– Problem generator: suggests other possible courses of action (exploration)

Intelligent Agent Summary

• An agent perceives and acts in an environment. It has an architecture


and is implemented by a program.
• An ideal agent always chooses the action which maximizes its expected
performance, given the percept sequence received so far.
• An autonomous agent uses its own experience rather than built-in
knowledge of the environment by the designer.
• An agent program maps from a percept to an action and updates its
internal state.
• Reflex agents respond immediately to percepts.
• Goal-based agents act in order to achieve their goal(s).
• Utility-based agents maximize their own utility function.

Exercise

• News Filtering Internet Agent


– uses a static user profile (e.g., a set of keywords specified by the user)
– on a regular basis, searches a specified news site (e.g., Reuters or AP) for news
stories that match the user profile
– can search through the site by following links from page to page
– presents a set of links to the matching stories that have not been read before
(matching based on the number of words from the profile occurring in the news
story)
• (1) Give a detailed PEAS description for the news filtering agent
• (2) Characterize the environment type (as being observable, deterministic,
episodic, static, etc).

AI Problems
• Water jug problem
• Cannibals and missionaries problem
• Tic-tac-toe problem
• 8/16-puzzle problem
• Tower of Hanoi problem
Search Strategies
• Problem solving and formulating a problem as state space search – uninformed and informed search techniques
• Heuristic function
• A*
• AO* algorithm
• Hill climbing
• Simulated annealing
• Genetic algorithms
• Constraint satisfaction method
Introduction to State Space Search
2.2 State space search
• Formulate a problem as a state space search by showing the legal problem
states, the legal operators, and the initial and goal states .
1. A state is defined by the specification of the values of all attributes of interest in the world.
2. An operator changes one state into another; it has a precondition, which is the value of certain attributes.
3. The initial state is where you start.
4. The goal state is the partial description of the solution.
State Space Search Notations

Let us begin by introducing certain terms.

An initial state is the description of the starting configuration of the agent

An action or an operator takes the agent from one state to another state which is
called a successor state. A state can have a number of successor states.

A plan is a sequence of actions. The cost of a plan is referred to as the path cost. The
path cost is a positive number, and a common path cost may be the sum of the costs
of the steps in the path.

Search is the process of considering various possible sequences of operators applied


to the initial state, and finding out a sequence which culminates in a goal state.
Search Problem
We are now ready to formally describe a search
problem.
A search problem consists of the following:
• S: the full set of states
• s0 : the initial state
• A:S→S is a set of operators
• G is the set of final states. Note that G ⊆S

These are schematically depicted in the figure above.


Search Problem
The search problem is to find a sequence of actions which transforms the agent from
the initial state to a goal state g∈G.
A search problem is represented by a 4-tuple {S, s0, A, G}.
S: set of states
s0 ∈ S : initial state
A: S→S operators/ actions that transform one state to another state
G : goal, a set of states. G ⊆ S
This sequence of actions is called a solution plan. It is a path from the initial
state to a goal state. A plan P is a sequence of actions.
P = {a0, a1, … , aN} which leads to traversing a number of states {s0, s1, … ,
sN+1∈G}.
A sequence of states is called a path. The cost of a path is a positive number.
In many cases the path cost is computed by taking the sum of the costs of
each action.
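As an illustration, the water-jug problem from the list of typical AI problems can be cast in this {S, s0, A, G} form; the (4, 3)-litre version with the goal “exactly 2 litres in the big jug” is assumed:

# states are pairs (x, y): litres in the 4-litre and 3-litre jugs
s0 = (0, 0)                                  # initial state

def is_goal(state):                          # G: 2 litres in the 4-litre jug
    return state[0] == 2

def successors(state):                       # A: the legal operators
    x, y = state
    return {
        (4, y), (x, 3),                      # fill a jug
        (0, y), (x, 0),                      # empty a jug
        (min(4, x + y), max(0, x + y - 4)),  # pour the 3L jug into the 4L jug
        (max(0, x + y - 3), min(3, x + y)),  # pour the 4L jug into the 3L jug
    } - {state}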
Representation of search problems

A search problem is represented using a directed graph.

• The states are represented as nodes.

• The allowed actions are represented as arcs.


Searching process
The steps for generic searching process :
Do until a solution is found or the state space is exhausted.
1. Check the current state
2. Execute allowable actions to find the successor states.
3. Pick one of the new states.
4. Check if the new state is a solution state
If it is not, the new state becomes the current state and the process is repeated
Examples: illustration of a search process (figures)
Example problem: Pegs and Disks problem

The initial state

Goal State
Example problem: Pegs and Disks problem

Now we will describe a sequence of actions that can be applied on the initial state.

Step 1: Move A → C

Step 2: Move A → B
Example problem: Pegs and Disks problem

Step 3: Move A → C

Step 4: Move B→ A
Example problem: Pegs and Disks problem

• Step 5: Move C → B

Step 6: Move A → B
Example problem: Pegs and Disks problem

• Step 7: Move C→ B
Search

Searching through a state space involves the following:

1. A set of states
2. Operators and their costs
3. Start state
4. A test to check for goal state

We will now outline the basic search algorithm, and then consider various variations of
this algorithm.
The basic search algorithm
Let L be a list containing the initial state (L = the fringe)

Loop
  if L is empty return failure
  Node ← select(L)
  if Node is a goal
    then return Node (the path from initial state to Node)
  else
    generate all successors of Node, and
    merge the newly generated states into L
End Loop

In addition, the search algorithm maintains a list of nodes called the fringe (open list). The fringe keeps track of the nodes that have been generated but are yet to be explored.
Search algorithm: Key issues
• How can we handle loops?
• Corresponding to a search algorithm, should we return a path or a node?
• Which node should we select?
• Alternatively, how would we place the newly generated nodes in the fringe?
• Which path to find?

The objective of a search problem is to find a path from the initial state to a goal
state.

Our objective could be to find any path, or we may need to find the shortest path
or least cost path.
Evaluating Search strategies

What are the characteristics of the different search algorithms and what is their
efficiency? We will look at the following three factors to measure this.
1. Completeness: Is the strategy guaranteed to find a solution if one exists?
2. Optimality: Does the solution have low cost or the minimal cost?

3. What is the search cost associated with the time and memory required to find a solution?
a. Time complexity: time taken (number of nodes expanded), worst or average case, to find a solution.
b. Space complexity: space used by the algorithm, measured in terms of the maximum size of the fringe.
The different search strategies

The different search strategies that we will consider include the following:

1. Blind Search strategies or Uninformed search


a. Depth first search
b. Breadth first search
c. Iterative deepening search
d. Iterative broadening search
2. Informed Search
3. Constraint Satisfaction Search
4. Adversary Search
Blind Search

A blind search does not use any extra information about the problem domain. The two common methods of blind search are:
• BFS or Breadth First Search
• DFS or Depth First Search
Search Tree – Terminology

• Root Node: The node from which the search starts.


• Leaf Node: A node in the search tree having no children.
• Ancestor/Descendant: X is an ancestor of Y if either X is Y’s parent or X is an ancestor of the parent of Y. If X is an ancestor of Y, Y is said to be a descendant of X.
• Branching factor: the maximum number of children of a non-leaf node in the search
tree
• Path: A path in the search tree is a complete path if it begins with the start node and
ends with a goal node. Otherwise it is a partial path.

We also need to introduce some data structures that will be used in the search
algorithms.
Node data structure
A node used in the search algorithm is a data structure which contains the following:
1. A state description
2. A pointer to the parent of the node
3. Depth of the node
4. The operator that generated this node
5. Cost of this path (sum of operator costs) from the start state

The nodes that the algorithm has generated are kept in a data structure called OPEN or
fringe. Initially only the start node is in OPEN.

The search process constructs a search tree, where


• root is the initial state and
• leaf nodes are nodes
• not yet expanded (i.e., in fringe) or
• having no successors (i.e., “dead-ends”)

Search tree may be infinite because of loops even if state space is small
Uninformed Search Strategies

• Uninformed strategies use only the information


available in the problem definition
– Also known as blind searching

• Breadth-first search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Comparing Uninformed Search Strategies

• Completeness
– Will a solution always be found if one exists?
• Time
– How long does it take to find the solution?
– Often represented as the number of nodes searched
• Space
– How much memory is needed to perform the search?
– Often represented as the maximum number of nodes stored at once
• Optimal
– Will the optimal (least cost) solution be found?
Comparing Uninformed Search Strategies

• Time and space complexity are measured in


– b – maximum branching factor of the search tree
– m – maximum depth of the state space
– d – depth of the least cost solution
Breadth-First Search

• Recall from Data Structures the basic algorithm for a breadth-


first search on a graph or tree

• Expand the shallowest unexpanded node

• Place all new successors at the end of a FIFO queue


Breadth First Search
Algorithm Breadth first search
Let fringe be a list containing the initial state
Loop
  if fringe is empty return failure
  Node ← remove-first(fringe)
  if Node is a goal
    then return the path from initial state to Node
  else generate all successors of Node, and
    merge the newly generated nodes into fringe
    (add generated nodes to the back of fringe)
End Loop

Note that in breadth first search the newly generated nodes are put at the back of
fringe or the OPEN list. The nodes will be expanded in a FIFO (First In First Out)
order. The node that enters OPEN earlier will be expanded earlier.
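A runnable Python sketch of this algorithm; a visited set is added (an assumption beyond the pseudocode) so that loops in the state space do not make the fringe grow without bound:

from collections import deque

def breadth_first_search(start, is_goal, successors):
    fringe = deque([[start]])            # FIFO queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()          # remove-first
        node = path[-1]
        if is_goal(node):
            return path                  # path from initial state to Node
        for succ in successors(node):
            if succ not in visited:
                visited.add(succ)
                fringe.append(path + [succ])   # back of fringe
    return None                          # failure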
(figures: breadth-first search expansion, step by step)
BFS illustrated
Step 1: Initially fringe contains only one node corresponding to the source state A.

Step 2: A is removed from fringe. The node is expanded, and its children B and C
are generated. They are placed at the back of fringe.
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and
put at the back of fringe.
Step 4: Node C is removed from fringe and is expanded. Its children D and G are
added to the back of fringe.
Step 5: Node D is removed from fringe. Its children C and F are generated and added
to the back of fringe.
Step 6: Node E is removed from fringe. It has no children.
Step 7: D is expanded, B and F are put in OPEN.

Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns
the path A C G by following the parent pointers of the node corresponding to G. The
algorithm terminates.
Example: BFS (figure)
What is the Complexity of Breadth-First Search?

(figure: a search tree of depth d with the goal G as the right-most leaf)

• Time Complexity
– assume (worst case) that there is 1 goal leaf at the RHS of the tree
– so BFS will expand all nodes: 1 + b + b^2 + … + b^d = O(b^d)

• Space Complexity
– how many nodes can be in the queue (worst case)?
– at depth d-1 there are b^d unexpanded nodes in the queue: O(b^d)
Advantages & Disadvantages of Breadth First Search

Advantages of Breadth First Search


Finds the path of minimal length to the goal.

Disadvantages of Breadth First Search

Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node.
Properties of Breadth-First Search

• Complete
– Yes, if b (max branching factor) is finite
• Time
– 1 + b + b^2 + … + b^d = O(b^d)
– exponential in d
• Space
– O(b^d)
– Keeps every node in memory
– This is the big problem; an agent that generates nodes at 10 MB/sec will produce 864 GB in 24 hours
• Optimal
– Yes (if cost is 1 per step); not optimal in general
Lessons From Breadth First Search

• The memory requirements are a bigger


problem for breadth-first search than is
execution time

• Exponential-complexity search problems cannot be solved by uninformed methods
Depth-First Search

• Recall from Data Structures the basic


algorithm for a depth-first search on a graph
or tree

• Expand the deepest unexpanded node

• Unexplored successors are placed on a stack


until fully explored
Depth first Search
Algorithm

Let fringe be a list containing the initial state
Loop
  if fringe is empty return failure
  Node ← remove-first(fringe)
  if Node is a goal
    then return the path from initial state to Node
  else generate all successors of Node, and
    merge the newly generated nodes into fringe
    (add generated nodes to the front of fringe)
End Loop
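The same skeleton in Python with the fringe used as a stack gives DFS; successors are assumed to come back as a list, and repeated states along the current path are skipped to avoid loops:

def depth_first_search(start, is_goal, successors):
    fringe = [[start]]                   # stack of paths
    while fringe:
        path = fringe.pop()              # take from the front of the fringe
        node = path[-1]
        if is_goal(node):
            return path
        # push in reverse so the leftmost successor is explored first
        for succ in reversed(successors(node)):
            if succ not in path:         # avoid loops along this path
                fringe.append(path + [succ])
    return None                          # failure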
(figures: depth-first search expansion, step by step)
Let us now run Depth First Search on the search
space given in Figure 34, and trace its progress.
Step 1: Initially fringe contains only the node for A.
Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of
fringe.
Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.
Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe
Step 6: Node G is expanded and found to be a goal node. The solution path A-B-D-C-G is
returned and the algorithm terminates.
Depth-First Search
What is the Complexity of Depth-First Search?

(figure: a search tree of depth d with the goal G as the right-most leaf)

• Time Complexity
– assume (worst case) that there is 1 goal leaf at the RHS of the tree
– so DFS will expand all nodes: 1 + b + b^2 + … + b^d = O(b^d)

• Space Complexity
– how many nodes can be on the fringe (worst case)?
– at each depth l < d we have b-1 nodes
– at depth d we have b nodes
– total = (d-1)*(b-1) + b = O(bd)
Depth-First Search
• Complete
– No: fails in infinite-depth spaces, spaces with loops
• Modify to avoid repeated states along path
– Yes: in finite spaces
• Time
– O(b^m) (m = maximum depth)
– Not great if m is much larger than d
– But if the solutions are dense, this may be faster than breadth-first search
• Space
– O(bm) … linear space
• Optimal
– No
Depth-Limited Search

• A variation of depth-first search that uses a depth limit


– Alleviates the problem of unbounded trees
– Search to a predetermined depth l (“ell”)
– Nodes at depth l have no successors

• Same as depth-first search if l = ∞


• Can terminate for failure and cutoff
• The time and space complexity of depth-limited search is
similar to depth-first search.
Depth-Limited Search
• Complete
– Yes, if l ≥ d
• Time
– O(b^l)
• Space
– O(bl)
• Optimal
– No (if l > d)
Uninformed : Iterative Deepening Search(IDS)

• Key idea: Iterative deepening search (IDS) applies DLS repeatedly with
increasing depth. It terminates when a solution is found or no solutions
exists.

• IDS combines the benefits of BFS and DFS: like DFS, the memory requirements are very modest (O(bd)). Like BFS, it is complete when the branching factor is finite.

• The total number of generated nodes is:
N(IDS) = (d)b + (d-1)b^2 + … + (1)b^d

• In general, iterative deepening is the preferred Uninformed search


method when there is a large search space and the depth of the solution
is not known.
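A minimal Python sketch of IDS as repeated depth-limited DFS; max_depth is an assumed safety cap:

def depth_limited_search(node, is_goal, successors, limit, path=()):
    path = path + (node,)
    if is_goal(node):
        return list(path)
    if limit == 0:
        return None                      # cutoff
    for succ in successors(node):
        result = depth_limited_search(succ, is_goal, successors, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening_search(start, is_goal, successors, max_depth=50):
    for limit in range(max_depth + 1):   # DLS with limits 0, 1, 2, ...
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None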
(figures: iterative deepening search with increasing depth limits)
Iterative Deepening Search
• Complete
– Yes
• Time
– O(b^d)
• Space
– O(bd)
• Optimal
– Yes, if step cost = 1
– Can be modified to explore uniform-cost tree
Bidirectional Search
• Idea
– simultaneously search forward from S and backwards from G
– stop when both “meet in the middle”
– need to keep track of the intersection of 2 open sets of nodes
• What does searching backwards from G mean
– need a way to specify the predecessors of G
• this can be difficult,
• e.g., predecessors of checkmate in chess?
– which to take if there are multiple goal states?
– where to start if there is only a goal test, no explicit list?

Bi-Directional Search
Complexity: time and space complexity are O(b^(d/2)).
Also note that the algorithm works well only when there are unique start and goal states.
Algorithm:
• Bidirectional search involves alternate searching from the start state toward
the goal and from the goal state toward the start.

• The algorithm stops when the frontiers intersect.

• A search algorithm has to be selected for each half.


Time and Space Complexities
• Consider a search space with branching factor b. Suppose that the goal is d steps away from the start state. Breadth-first search will expand O(b^d) nodes.
• If we carry out bidirectional search, the frontiers may meet when both the forward and the backward search trees have depth = d/2.
• Suppose we have a good hash function to check for nodes in the fringe.
• In this case the time for bidirectional search will be O(b^(d/2)).
• Also note that for at least one of the searches the frontier has to be stored. So the space complexity is also O(b^(d/2)).
5. Uniform-cost search
• This algorithm is by Dijkstra [1959]

• Used for weighted tree

• The goal of UCS is to find the path to the goal node which has the lowest cumulative cost.

• The algorithm expands nodes in the order of their cost from the source.

• The path cost is usually taken to be the sum of the step costs.

• In uniform cost search the newly generated nodes are put in OPEN according to their path
costs.

• This ensures that when a node is selected for expansion it is a node with the cheapest cost
among the nodes in OPEN, “priority queue”

• Let g(n) = cost of the path from the start node to the current node n. Sort nodes by
increasing value of g.

Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
– Equivalent to breadth-first if step costs all equal

• Complete? Yes, if step cost ≥ ε
• Optimal? Yes – nodes expanded in increasing order of g(n)
• Time? # of nodes with g ≤ cost of optimal solution: O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution: O(b^⌈C*/ε⌉)
Uniform Cost Search: enqueue nodes in order of cost

(figures: a small weighted tree expanded cheapest-node-first, three snapshots)

• Intuition: expand the cheapest node, where the cost is the path cost g(n)
• Complete? Yes.
• Optimal? Yes.
• Time Complexity: O(b^d)
• Space Complexity: O(b^d)

Note that Breadth First search can be seen as a special case of Uniform Cost Search, where the path cost is just the depth.
Uniform Cost Search in Tree
1. fringe ← MAKE-EMPTY-QUEUE()
2. fringe ← INSERT(root_node) // with g=0
3. loop {
   1. if fringe is empty then return false // finished without goal
   2. node ← REMOVE-SMALLEST-COST(fringe)
   3. if node is a goal
      1. print node and g
      2. return true // found a goal
   4. Lg ← EXPAND(node) // Lg is the set of children with their g costs
      // NOTE: do not check Lg for goals here!!
   5. fringe ← INSERT-ALL(Lg, fringe)
}
Uniform Cost Search in Graph
1. fringe ← MAKE-EMPTY-QUEUE()
2. fringe ← INSERT(root_node) // with g=0
3. loop {
   1. if fringe is empty then return false // finished without goal
   2. node ← REMOVE-SMALLEST-COST(fringe)
   3. if node is a goal
      1. print node and g
      2. return true // found a goal
   4. Lg ← EXPAND(node) // Lg is the set of neighbours with their g costs
      // NOTE: do not check Lg for goals here!!
   5. fringe ← INSERT-IF-NEW(Lg, fringe) // ignore revisited nodes
      // unless revisited with a new better g
}
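A runnable Python sketch of the graph version, with a binary heap as the priority queue and the delayed goal test; the graph encoding is an assumption, and the example below is the S/A/B/C graph traced later in this section:

import heapq

def uniform_cost_search(graph, start, goal):
    fringe = [(0, start, [start])]       # priority queue ordered by g
    closed = set()
    while fringe:
        g, node, path = heapq.heappop(fringe)   # REMOVE-SMALLEST-COST
        if node == goal:                 # goal test on removal, not generation
            return path, g
        if node in closed:
            continue                     # ignore revisited nodes
        closed.add(node)
        for succ, cost in graph.get(node, []):
            heapq.heappush(fringe, (g + cost, succ, path + [succ]))
    return None                          # finished without goal

graph = {'S': [('A', 1), ('B', 5), ('C', 8)],
         'A': [('D', 3), ('E', 7), ('G', 9)],
         'B': [('G', 4)], 'C': [('G', 5)]}
print(uniform_cost_search(graph, 'S', 'G'))   # (['S', 'B', 'G'], 9)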

Uniform cost search
• A breadth-first search finds the shallowest goal state, which will be the cheapest solution provided the path cost is a nondecreasing function of the depth of the solution. But if this is not the case, then breadth-first search is not guaranteed to find the best (i.e. cheapest) solution.

• Uniform cost search remedies this by expanding the lowest cost node on
the fringe, where cost is the path cost, g(n).

• In the following slides those values that are attached to paths are the cost
of using that path.

Consider the following problem…

(Figure: a graph with edges S–A = 1, A–G = 10, S–B = 5, B–G = 5, S–C = 15, C–G = 5)

We wish to find the shortest route from node S to node G; that is, node S is the initial state and node G is the goal state. In terms of path cost, we can clearly see that the route SBG is the cheapest route. However, if we let breadth-first search choose on this problem it will find the non-optimal path SAG, assuming that A is the first node to be expanded at level 1. A UCS trace of the same node set follows.
Step 1: We start with our initial state and expand it. Node S is removed from the queue and the revealed nodes are added, sorted on path cost. Nodes with cheaper path cost have priority. The queue is now: A (1), B (5), C (15).

Step 2: Node A is at the front of the queue, so it is removed and expanded, and the revealed node (node G) is added. The queue is again sorted on path cost: B (5), G (11), C (15). Note: we have now found a goal state but do not recognise it, as it is not at the front of the queue.

Step 3: Node B is removed from the queue and the revealed node (node G) is added. Node G now appears in the queue twice, once as G (10) and once as G (11); G (10) is the cheaper node. The queue is sorted again: G (10), G (11), C (15).

Step 4: As G (10) is now at the front of the queue, we proceed to the goal state. The goal state is achieved and the path S-B-G is returned. In relation to path cost, UCS has found the optimal route (cost 10). Nodes expanded: 3.
Uniform-Cost (UCS)
• Let g(n) = cost of the path from the start node to an open node n

• Algorithm outline:
– Always select from the OPEN the node with the least g(n) value
for expansion, and put all newly generated nodes into OPEN
– Nodes in OPEN are sorted by their g(n) values (in ascending
order)
– Terminate if a node selected for expansion is a goal

• Called “ Dijkstra's Algorithm ” in the algorithms literature and


similar to “ Branch and Bound Algorithm ” in operations research
literature

Uniform-Cost Search

GENERAL-SEARCH(problem, ENQUEUE-BY-PATH-COST)
exp. node nodes list CLOSED list

S
1 8
5
A B C
3 9 4
7 5
D E G G’ G”

149
Uniform-Cost Search
GENERAL-SEARCH(problem, ENQUEUE-BY-PATH-COST)

exp. node | nodes list
(start)   | {S(0)}
S         | {A(1) B(5) C(8)}
A         | {D(4) B(5) C(8) E(8) G(10)}
D         | {B(5) C(8) E(8) G(10)}
B         | {C(8) E(8) G’(9) G(10)}
C         | {E(8) G’(9) G(10) G”(13)}
E         | {G’(9) G(10) G”(13)}
G’        | {G(10) G”(13)}

Solution path found is S B G <-- this G has cost 9, not 10
Number of nodes expanded (including goal node) = 7
Uniform-Cost (UCS)
• It is complete (if cost of each action is not infinitesimal)
– The total # of nodes n with g(n) <= g(goal) in the state space is finite
• Optimal/Admissible
– It is admissible if the goal test is done when a node is removed from the OPEN list (delayed goal testing), not when its parent node is expanded and the node is first generated
• Exponential time and space complexity, O(b^d) where d is the depth
of the solution path of the least cost solution

Comparing Search Strategies

And how do they perform on our small example?
How they perform

(Figure: the same weighted graph – edges S–A = 1, S–B = 5, S–C = 8, A–D = 3, A–E = 7, A–G = 9, B–G = 4, C–G = 5)

• Depth-First Search:
– Expanded nodes: S A D E G
– Solution found: S A G (cost 10)
• Breadth-First Search:
– Expanded nodes: S A B C D E G
– Solution found: S A G (cost 10)
• Uniform-Cost Search:
– Expanded nodes: S A D B C E G
– Solution found: S B G (cost 9)
– This is the only uninformed search that worries about costs.
• Depth-First Iterative-Deepening Search:
– Nodes expanded: S; S A B C; S A D E G
– Solution found: S A G (cost 10)
When to use what?
• Depth-First Search:
– Many solutions exist
– Know (or have a good estimate of) the depth of solution
• Breadth-First Search:
– Some solutions are known to be shallow
• Uniform-Cost Search:
– Actions have varying costs
– Least cost solution is the required
This is the only uninformed search that worries about costs.
• Iterative-Deepening Search:
– Space is limited and the shortest solution path is required

Search Graphs

• If the search space is not a tree, but a graph, the search


tree may contain different nodes corresponding to the
same state.
• To avoid generating the same state again when not required, the search algorithm can be modified to check a node when it is being generated.
• We use another list called CLOSED, which records all the expanded nodes.
• Each newly generated node is checked against the nodes in the CLOSED list and the OPEN list.
Algorithm outline
• The CLOSED list has to be maintained.
• The algorithm is required to check every generated node to see if it is already there in OPEN or CLOSED.
• This will require an efficient way to index every node.
• Notation: S → set of successors; M → node about to be generated.
Informed Search

• We have seen uninformed search methods that systematically explore the state space and find the goal.
• They are inefficient in most cases.
• Informed search methods use problem-specific knowledge, and may be more efficient.
• Informed search methods use problem specific knowledge,
and may be more efficient.
• At the heart of such algorithms there is the concept of a
heuristic function.
Heuristics
❑ Heuristic: “Heuristics are criteria, methods or principles for deciding which among several alternative actions promises to be the most effective in order to achieve some goal.” – Judea Pearl

❑ In heuristic search or informed search, heuristics are used to identify the most promising search path.
Example of Heuristic Function
• A heuristic function at a node n is an estimate of the optimum
cost from the current node to a goal. It is denoted by h(n).
• h(n) = estimated cost of the cheapest path from node n to a goal
node
• Example 1: We want a path from Kolkata to Guwahati
• Heuristic for Guwahati may be straight-line distance between
Kolkata and Guwahati
• h(Kolkata) = euclideanDistance(Kolkata, Guwahati)
Example 2: 8-puzzle

1. Misplaced Tiles (Hamming distance): the number of tiles out of place. The first picture shows the current state n, and the second picture the goal state (pictures omitted).
h(n) = 5 (because the tiles 2, 8, 1, 6 and 7 are out of place)

2. Manhattan Distance Heuristic: this heuristic sums the distances by which the tiles are out of place. The distance of a tile is measured by the sum of the differences in the x-positions and the y-positions.
For the above example, using the Manhattan distance heuristic:
h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 1 + 2 = 6
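A Python sketch of both heuristics for arbitrary 8-puzzle states; states are 9-tuples read row by row with 0 for the blank, and the goal layout used here is an assumption:

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)       # assumed goal configuration

def misplaced_tiles(state):
    # Hamming distance: non-blank tiles not in their goal position
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def manhattan_distance(state):
    # sum over tiles of |dx| + |dy| to each tile's goal square
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = GOAL.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total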
Heuristic Search Algorithm: Best-First Search

• The algorithm maintains a priority queue of nodes to be explored.
• A cost function f(n) is applied to each node.
• The nodes are put in OPEN in the order of their f values.
• Nodes with smaller f(n) values are expanded earlier.
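A minimal Python sketch of this scheme; with f = h it behaves as the greedy best-first search of the following slides, and A* (later) additionally folds the path cost g into f:

import heapq

def best_first_search(start, is_goal, successors, f):
    open_list = [(f(start), start)]      # OPEN, ordered by f values
    parent = {start: None}
    while open_list:
        _, node = heapq.heappop(open_list)
        if is_goal(node):
            path = []                    # reconstruct the path via parents
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in successors(node):
            if succ not in parent:
                parent[succ] = node
                heapq.heappush(open_list, (f(succ), succ))
    return None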
Example of best first search
• Nodes are visited in the order : a, b, c, f
• Solution path is : a, c, f

(Figure: a search tree – root a (h’=1.6); children of a: b (h’=0.7) and c (h’=0.8); children of b: d (h’=1.8) and e (h’=0.9); children of c: f (h’=0) and g (h’=2.7); leaves below: h (h’=4.9), i (h’=3.7), j (h’=0), k (h’=6.2))

We will now consider different ways of defining the function f. This leads to different search algorithms.
Greedy Search
• In greedy search, the idea is to expand the node with the smallest estimated
cost to reach the goal.

• Evaluation function: f(n) = h(n)

• h(n) estimates the distance remaining to a goal.

• Greedy algorithms often perform very well. They tend to find good solutions
quickly, although not always optimal ones.

• The resulting algorithm is not optimal

• Incomplete – it may fail to find a solution even if one exists.

• This can be seen by running greedy search on the following example.

• A good heuristic for the route-finding problem would be straight-line distance


to the goal.
Romania with step costs in km
Greedy best-first search
expand the node that is closest to the goal : Straight line distance heuristic
Greedy best-first search

• Evaluation function f(n) = h(n) (heuristic)


= estimate of cost from n to goal

• e.g., hSLD(n) = straight-line distance from n to


Bucharest
• Greedy best-first search expands the node
that appears to be closest to goal
(figures: greedy best-first search example on the Romania map, expanding toward Bucharest step by step; the optimal path is shown for comparison)
Romania with step costs in km

R1 → Arad → Sibiu → Fagaras → Bucharest = 140+99+211 = 450 (Greedy)
R2 → Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest = 140+80+97+101 = 418
Properties of greedy best-first search
• Complete?
– Not unless it keeps track of all states visited
• Otherwise can get stuck in loops (just like DFS)

• Optimal?
– No – we just saw a counter-example

• Time?
– O(b^m): can generate all nodes at depth m before finding a solution
– m = maximum depth of the search space

• Space?
– O(b^m) – again, worst case: can generate all nodes at depth m before finding a solution
Properties of greedy best-first search

• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No
A* algorithm
• We will next consider the famous A* algorithm (Hart, Nilsson and Raphael, 1968).
• A* is a best-first search with f(n) = g(n) + h’(n)
– g(n) = sum of edge costs from start to n (start node → current node)
– h’(n) = estimate of the lowest cost path from n to the goal
– f(n) = actual distance so far + estimated distance remaining (n → goal)
• h’(n) is said to be admissible if it underestimates the cost of any solution that can be reached from n.
• If C*(n) is the cost of the cheapest solution path from n to the goal and h’ is admissible, then h’(n) ≤ C*(n).
• We can prove that if h’(n) is admissible, then the search will find an optimal solution.
(Figure: node m reached from node n; the newly computed distance (m, p) is compared with the previously calculated distance (m, q))
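A self-contained Python sketch of A* over a weighted graph with an admissible heuristic h’; the graph encoding and names are assumptions:

import heapq

def a_star(graph, h, start, goal):
    # entries are (f, g, node, path) with f = g + h'(node)
    open_list = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # a cheaper path to succ
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None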
Example of A* search

(Figure: the same search tree as before – root a (h’=1.6); children of a: b (h’=0.7) and c (h’=0.8); children of b: d (h’=1.8) and e (h’=0.9); children of c: f (h’=0) and g (h’=2.7); leaves below: h (h’=4.9), i (h’=3.7), j (h’=0), k (h’=6.2))

Cost per arc = 1.
The h’ values are admissible, e.g. at b the actual cost of reaching the goal (j) is 1+1 = 2, but h’ is only 0.7. At b, f(b) = g(b) + h’(b) = 1 + 0.7 = 1.7.
A* search: properties

• The algorithm A* is admissible: the solution found by A* is an optimal solution.
• Complete.
• The number of nodes searched is still exponential in the worst case, unless the heuristic is very accurate (error within the logarithm of the true cost).
(figures: A* search example on the Romania map, step by step)
Romania with step costs in km

R1 → Arad → Sibiu → Fagaras → Bucharest = 140+99+211 = 450 (Greedy)
R2 → Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest = 140+80+97+101 = 418 (A*)
AO*

• Problem decomposition into an and-or graph


Problem decomposition into an and-or graph

• A technique for reducing a problem to a production system, as


follows:
– The principle goal is identified; it is split into two or more
sub-goals; these, too are split up.
– A goal is something you want to achieve. A sub-goal is a
goal that must be achieved in order for the main goal to be
achieved.
Problem decomposition into an and-or graph

– A graph is drawn of the goal and sub-goals.


– Each goal is written in a box, called a node, with
its subgoals underneath it, joined by links.
Problem decomposition into an and-or graph

– The leaf nodes at the bottom of the tree -


the boxes at the bottom of the graph that don’t have
any links below them
- are the pieces of data needed to solve the problem.
Problem decomposition into an and-or graph

• A goal may be split into 2 (or more) sub-goals, BOTH of which


must be satisfied if the goal is to succeed; the links joining the
goals are marked with a curved line, like this:

Goal 1 Goal 2
Problem decomposition into an and-or
graph
• Or a goal may be split into 2 (or more) sub-
goals, EITHER of which must be satisfied if the
goal is to succeed; the links joining the goals
aren't marked with a curved line:

Goal 1 Goal 2
Problem decomposition into an and-or
graph

• Example
• "The function of a financial advisor is to help
the user decide whether to invest in a savings
account, or the stock market, or both. The
recommended investment depends on the
investor's income and the current amount
they have saved:
Problem decomposition into an and-or
graph

1. Individuals with inadequate savings should always increase the amount saved as their first priority, regardless of income.
2. Individuals with adequate savings and an adequate income should consider riskier but potentially more profitable investment in the stock market.
Problem decomposition into an and-or
graph

3. Individuals with low income who already have adequate savings may want to consider splitting their surplus income between savings and stocks, to increase the cushion in savings while attempting to increase their income through stocks.
Problem decomposition into an and-or
graph

4. The adequacy of both savings and income is determined by the number of dependants an individual must support. There must be at least £3000 in the bank for each dependant. An adequate income is a steady income, and it must supply at least £9000 per year, plus £2500 for each dependant."
Problem decomposition into an and-or
graph

• How can we turn this information into an and-


or graph?
• Step 1: decide what the ultimate advice that
the system should provide is.
It’s a statement along the lines of “The
investment should be X”, where X can be any
one of several things.
• Start to draw the graph by placing a box at the
top:

Advise user:
investment
should be X
• Step 2: decide what sub-goals this goal can be
split into.
In this case, X can be one of three things:
savings, stocks or a mixture.
Add three sub-goals to the graph. Make sure
the links indicate “or” rather than “and”.
Advise user:
investment
should be X

X is savings X is stocks X is mixture


• Steps 3a, 3b and 3c: decide what sub-goals each
of the goals at the bottom of the graph can be
split into.
– It’s only true that “X is savings” if “savings are
inadequate”. That provides a subgoal under “X is
savings”
– It’s only true that “X is stocks” if “savings are
adequate” and “income is adequate. That provides
two subgoals under “X is stocks” joined by “and” links.
– Similarly, there are two subgoals under “X is mixture”
joined by “and” links.
(Graph: “Advise user: investment should be X” at the top, with “or” links to three sub-goals:
• X is savings ← savings are inadequate
• X is stocks ← savings are adequate AND income is adequate
• X is mixture ← savings are adequate AND income is inadequate)
• The next steps (4a,4b,4c,4d & 4e) mainly
involve deciding whether something’s big
enough.
– Step 4a: savings are only inadequate if they are
smaller than a certain figure (let’s call it Y).
– Step 4b: savings are only adequate if they are bigger
than this figure (Y).
– Step 4c: income is only adequate if it is bigger than
a certain figure (let’s call it W), and also steady.
– Step 4d is the same as 4b. Step 4e is like 4c, but
“inadequate”, “smaller” and “not steady”.
Advise user: investment should be X
X is savings ← savings are inadequate ← amount saved < Y
X is stocks ← savings are adequate (amount saved > Y) AND income is adequate (income > W, income is steady)
X is mixture ← savings are adequate (amount saved > Y) AND income is inadequate (income < W, income is not steady)
• Now we need a box in which the value of Y is calculated:

Y is Z times 3000

and we need a box in which the value of W is calculated:

W is 9000 plus 2500 times Z

• Z is the number of dependants, so we need a box in which this value is obtained:

Client has Z dependants

• We can now add these last three boxes into the bottom layers of the graph in the same way as we've added all the others:
Production rule: IF savings adequate AND income adequate THEN X is stocks

Production rule: IF income < W AND income is not steady THEN income is inadequate
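These rules, and the others implicit in the graph, can be collapsed into ordinary code. A minimal Python sketch of the advisor (the function name, argument names and threshold variables are our own rendering of the rules above, not part of the original system):

def advise(saved, income, steady, dependants):
    y = 3000 * dependants            # minimum adequate savings
    w = 9000 + 2500 * dependants     # minimum adequate income
    savings_adequate = saved > y
    income_adequate = steady and income > w
    if not savings_adequate:
        return "X is savings"        # first priority: increase savings
    if income_adequate:
        return "X is stocks"         # adequate savings and adequate income
    return "X is mixture"            # adequate savings, inadequate income

print(advise(saved=20000, income=30000, steady=True, dependants=2))  # X is stocks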
Local Search Algorithm
Local search algorithms and optimization
◼ Systematic search algorithms
❑ to find the goal and to find the path to that goal
◼ Local search algorithms
❑ the path to the goal is irrelevant, e.g., n-queens problem
❑ state space = set of “complete” configurations
❑ keep a single “current” state and try to improve it, e.g., move to its
neighbors
❑ Key advantages:
◼ use very little (constant) memory
◼ find reasonable solutions in large or infinite (continuous) state spaces
❑ Optimization problem (pure)-
◼ to find the best state (optimal configuration ) based on an objective
function, e.g. reproductive fitness – no goal test and path cost
Local Search Algorithms
• Instead of considering the whole state space, consider only the current state
• Limits necessary memory; paths are not retained
• Amenable to large or continuous (infinite) state spaces where exhaustive algorithms aren't possible
• Local search algorithms can't backtrack!
Local search – state space landscape
❑ Elevation = the value of the objective function or heuristic cost function
(Figure: a one-dimensional state-space landscape showing the heuristic cost function and its global minimum)
❑ A complete local search algorithm finds a solution if one exists
❑ An optimal algorithm finds a global minimum or maximum
(Figure: what we think hill-climbing looks like vs. what we learn hill-climbing usually looks like in practice)
Procedure Hill-Climbing
• Begin
– 1. Identify possible starting states and measure the distance (f) of their closeness with the goal node; push them onto a stack in the ascending order of their f;
– 2. Repeat
• Pop the stack to get the stack-top element;
• If the stack-top element is the goal, announce it and exit;
• Else push its children onto the stack in the ascending order of their f values;
– Until the stack is empty;
• End.
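A minimal Python sketch of this procedure; goal_test, successors and the closeness measure f are assumed to be supplied by the problem:

def hill_climbing(starts, f, goal_test, successors):
    # Sort so that the state with the smallest f ends up on top of the stack.
    stack = sorted(starts, key=f, reverse=True)
    while stack:
        state = stack.pop()                  # pop the stack-top element
        if goal_test(state):
            return state                     # announce the goal and exit
        # Push children so the best child (smallest f) is popped next.
        stack.extend(sorted(successors(state), key=f, reverse=True))
    return None                              # stack empty: no goal found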
Hill-climbing search
◼ moves in the direction of increasing value until a “peak”
❑ current node data structure only records the state and its objective
function
❑ neither remember the history nor look beyond the immediate neighbors
Hill-climbing search – greedy local search
◼ Hill climbing, the greedy local search, often gets stuck
❑ Local maxima: a peak that is higher than each of its neighboring
states, but lower than the global maximum
❑ Ridges: a sequence of local maxima that is difficult to navigate
❑ Plateau: a flat area of the state space landscape
◼ a flat local maximum: no uphill exit exists
◼ a shoulder: possible to make progress
Hill-climbing search
• If there exists a successor s for the current state n such that
– h(s) < h(n)
– h(s) <= h(t) for all the successors t of n,
• then move from n to s. Otherwise, halt at n.
• Looks one step ahead to determine if any successor is better than the current state; if there is, move to the best successor.
• Similar to Greedy search in that it uses h, but does not allow backtracking or jumping to an alternative path since it doesn't "remember" where it has been.
• Not complete since the search will terminate at "local minima," "plateaus," and "ridges."
Hill climbing example
(Figure: hill climbing on the 8-puzzle from a start state with h = -4 to the goal with h = 0; at each step the successor with the best f value is chosen, with intermediate state values such as -5, -3, -2 and -1 shown along the way)
f(n) = -(number of tiles out of place)
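This evaluation function is one line of Python. A sketch, assuming boards flattened to 9-tuples with 0 for the blank (a convention of ours, not the slides'):

def f(state, goal):
    # -(number of tiles out of place); the blank (0) is not counted as a tile.
    return -sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(f(start, goal))   # -4, matching the start state above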
Drawbacks of hill climbing
• Problems:
– Local Maxima: peaks that aren't the highest point in the space
– Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk)
– Ridges: flat like a plateau, but with drop-offs to the sides
• Remedies:
–Random restart
–Problem reformulation
• Some problem spaces are great for hill
climbing and others are terrible.
Hill Climbing Search
• Variants of Hill climbing
– Stochastic Hill Climbing
– First Choice Hill Climbing
– Random restart hill climbing
– Evolutionary Hill Climbing
– Stochastic Hill Climbing
• Basic hill climbing always selects uphill moves;
• this variant selects at random from the available uphill moves.
• This helps address issues with simple hill climbing, such as ridges.
– Random restart hill climbing (a sketch follows below)
• Tries to overcome another problem with hill climbing:
• the initial state is randomly generated;
• when the search reaches a position from which no progressive state is possible, it restarts from a new random state.
• The local maxima problem is handled by RRHC.
– Evolutionary Hill Climbing
• Performs random mutations
• Genetic-algorithm-based search
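Random restart is a thin loop around basic hill climbing. A sketch, where climb, random_state, value and is_goal are assumed problem-specific helpers:

def random_restart_hill_climbing(random_state, climb, value, is_goal, tries=100):
    best = None
    for _ in range(tries):
        local_opt = climb(random_state())    # climb from a fresh random start
        if is_goal(local_opt):
            return local_opt                 # global optimum found: stop early
        if best is None or value(local_opt) > value(best):
            best = local_opt                 # remember the best local optimum
    return best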
Example of a local optimum
(Figure: an 8-puzzle search that reaches a state with value -3 whose successors all have value -4, so hill climbing halts there, short of the goal, whose value is 0)
The N-Queens Problem
• Suppose you have 8 chess
queens...
• ...and a chess board
The N-Queens Problem
Can the queens be placed on the board so that no two queens are attacking each other?
The N-Queens Problem
Two queens are not allowed in the
same row...
The N-Queens Problem
Two queens are not allowed in the
same row, or in the same column...
The N-Queens Problem
Two queens are not allowed in the
same row, or in the same column, or
along the same diagonal.
The N-Queens Problem
The number of queens and the size of the board can vary: N queens on a board with N rows and N columns.
The N-Queens Problem
We will write a program which tries
to find a way to place N queens on
an N x N chess board.
How the program works
• The program uses a stack to keep track of where each queen is placed, together with an integer variable, filled, that records how many rows have been filled so far.
• Each time the program decides to place a queen on the board, the position of the new queen is stored in a record which is pushed onto the stack.
  Stack: ROW 1, COL 1   (filled = 1)
• Each time we try to place a new queen in the next row, we start by placing the queen in the first column. If there is a conflict with another queen, then we shift the new queen to the next column; if another conflict occurs, the queen is shifted rightward again.
  Stack: ROW 2, COL 1 → ROW 2, COL 2 → ROW 2, COL 3
• When there are no conflicts, we stop and add one to the value of filled.
  Stack: ROW 1, COL 1 | ROW 2, COL 3   (filled = 2)
• In the third row, the first position has a conflict, so we shift to column 2; another conflict arises, and we shift to column 3; yet another conflict arises, and we shift to column 4. There is still a conflict in column 4, so we try to shift rightward again... but there's nowhere else to go.
  Stack: ROW 1, COL 1 | ROW 2, COL 3 | ROW 3, COL 4   (filled = 2)
• When we run out of room in a row: pop the stack, reduce filled by 1, and continue working on the previous row.
  Stack: ROW 1, COL 1 | ROW 2, COL 3   (filled = 1)
• Now we continue working on row 2, shifting the queen to the right. This position has no conflicts, so we can increase filled by 1 and move to row 3, starting again at the first column.
  Stack: ROW 1, COL 1 | ROW 2, COL 4 | ROW 3, COL 1   (filled = 2)
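A Python sketch of the program just described, using a list as the stack (our own rendering; rows and columns are 0-indexed here):

def conflicts(stack, col):
    # Would a queen in the next row, at column col, attack any placed queen?
    row = len(stack)
    return any(c == col or abs(c - col) == row - r
               for r, c in enumerate(stack))

def n_queens(n):
    stack = []            # stack[r] = column of the queen in row r
    col = 0               # column currently being tried; len(stack) = filled
    while len(stack) < n:
        while col < n and conflicts(stack, col):
            col += 1      # shift the new queen rightward
        if col < n:
            stack.append(col)       # no conflict: place the queen, next row
            col = 0
        elif stack:
            col = stack.pop() + 1   # out of room: pop, continue previous row
        else:
            return None   # backtracked past row 1: no solution
    return stack

print(n_queens(8))   # [0, 4, 7, 5, 2, 6, 1, 3]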
Hill-climbing search – example
◼ complete-state formulation for 8-queens
❑ the successor function returns all possible states generated by moving a single queen to another square in the same column (8 x 7 = 56 successors for each state)
❑ the heuristic cost function h is the number of pairs of queens that are attacking each other
(Figure: best moves reduce h = 17 to h = 12; a local minimum with h = 1)
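Computing h is straightforward. A sketch, assuming a state is a tuple where state[c] gives the row of the queen in column c (this representation is our own):

from itertools import combinations

def h(state):
    # Pairs of queens attacking each other: same row or same diagonal
    # (same column cannot happen in the complete-state formulation).
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

print(h((0, 0, 0, 0, 0, 0, 0, 0)))   # 28: all eight queens in one row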
Artificial Intelligence
Constraint satisfaction problems
CSP
• Constraint satisfaction problems (CSPs) are a class of AI problems whose goal is to find a solution that satisfies a set of constraints.
• The aim is to find values for a group of variables that fulfill a set of restrictions or rules.
• CSPs are frequently employed in AI for tasks including resource allocation, planning, scheduling, and decision-making.
CSP
• There are mainly three basic components in the constraint
satisfaction problem:
• Variables: The things that need to be determined. Variables in a CSP are the objects that must have values assigned to them in order to satisfy a particular set of constraints. Boolean, integer, and categorical variables are just a few of the possible types. In a sudoku puzzle, for instance, variables could stand for the puzzle cells that need to be filled with numbers.
CSP
• Domains: The range of potential values that
a variable can have is represented by domains.
Depending on the issue, a domain may be
finite or limitless. For instance, in Sudoku, the
set of numbers from 1 to 9 can serve as the
domain of a variable representing a problem
cell.
CSP
• Constraints: The guidelines that control how
variables relate to one another are known as
constraints. Constraints in a CSP define the ranges of
possible values for variables. Unary constraints,
binary constraints, and higher-order constraints are
only a few examples of the various sorts of
constraints. For instance, in a sudoku problem, the
restrictions might be that each row, column, and 3×3
box can only have one instance of each number from
1 to 9.
Constraint Satisfaction Problems
(CSP) algorithms
• The backtracking algorithm is a depth-first search algorithm
that methodically investigates the search space of potential
solutions up until a solution is discovered that satisfies all the
restrictions.
• The method begins by choosing a variable and giving it a
value before repeatedly attempting to give values to the
other variables.
• The method returns to the prior variable and tries a different
value if at any time a variable cannot be given a value that
fulfills the requirements.
• Once all assignments have been tried or a solution that
satisfies all constraints has been discovered, the algorithm
ends.
Constraint Satisfaction Problems
(CSP) algorithms
• The forward-checking algorithm is a variation of the
backtracking algorithm that condenses the search space using
a type of local consistency.
• For each unassigned variable, the method keeps a list of
remaining values and applies local constraints to eliminate
inconsistent values from these sets.
• The algorithm examines a variable’s neighbors after it is given
a value to see whether any of its remaining values become
inconsistent and removes them from the sets if they do.
• The algorithm goes backward if, after forward checking, a
variable has no more values.
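A sketch of the forward-checking step for binary "not equal" constraints (the domain sets and neighbors map are our own representation):

def forward_check(var, value, domains, neighbors):
    # After assigning value to var, prune it from each neighbor's domain.
    # Returns the prunings (so they can be undone on backtracking), or None
    # if some neighbor's set of remaining values becomes empty.
    pruned = []
    for other in neighbors[var]:
        if value in domains[other]:
            domains[other].discard(value)
            pruned.append((other, value))
            if not domains[other]:
                for v, val in pruned:        # restore before reporting failure
                    domains[v].add(val)
                return None
    return pruned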
Constraint Satisfaction Problems
(CSP) algorithms
• Algorithms for propagating constraints are a class
that uses local consistency and inference to
condense the search space.
• These algorithms operate by propagating restrictions
between variables and removing inconsistent values
from the variable domains using the information
obtained.
Problem characterization
• State components:
– Variables
– Domains (possible values for the variables)
– (Binary) constraints between variables
• Goal: to find a state (a complete assignment of
values to variables), which satisfies the constraints
• Examples:
– map coloring
– crossword puzzles
– n-queens
– resource assignment/distribution/location
Representation
• State = constraint graph
– variables (n) = node tags
– domains = node content
– constraints = directed and tagged arcs between nodes
• Example: map coloring (figure: initial state)
– domains: C1 = {blue, red}, C2 = {blue, red, green}, C3 = {blue}, C4 = {blue, green}
– arcs: ≠ constraints between adjacent regions
Representation
• In the search tree, a variable is assigned at each
level.
• Solutions have to be complete assignments, therefore they appear at depth n, the number of variables and maximum depth of the tree.
• Depth-first search algorithms are popular in CSPs.
• The simplest class of CSP (map coloring, n-queens)
are characterized by:
– discrete variables
– finite domains
Finite domains
• If the maximum size of the domain of any variable is
d, then the number of possible complete
assignments is O(dn), exponential in the number of
variables.
• CSPs with finite domain include Boolean CSPs,
whose variables can only be true or false.
• In most practical applications, CSP algorithms can
solve problems with domains orders of magnitude
larger than the ones solvable by uninformed search
algorithms.
Constraints
• The simplest type is the unary constraint, which
constraints the values of just one variable.
• A binary constraint relates two variables.
• Higher-order constraints involve three or more
variables. Cryptarithmetic puzzles are an example:
Cryptarithmetic puzzles
• Variables: F, T, U, W, R, O, X1, X2, X3
• Domains: {0,1,2,3,4,5,6,7,8,9}
• Constraints:
– Alldiff (F,T,U,W,R,O)
– O + O = R + 10 · X1
– X1 + W + W = U + 10 · X2
– X2 + T + T = O + 10 · X3
– X3 = F, T ≠ 0, F ≠ 0
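These are the constraints of TWO + TWO = FOUR, where X1, X2, X3 are the carry digits. At this scale a brute-force search is enough; a sketch:

from itertools import permutations

# Try every assignment of distinct digits to the six letters.
for f, t, u, w, r, o in permutations(range(10), 6):
    if t == 0 or f == 0:
        continue                          # leading digits must be non-zero
    two = 100 * t + 10 * w + o
    four = 1000 * f + 100 * o + 10 * u + r
    if two + two == four:
        print(two, "+", two, "=", four)   # e.g. 734 + 734 = 1468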
Depth-first search with
backtracking
• Standard depth-first search on a CSP wastes time
searching when constraints have already been
violated.
• Because of the way that the operators have been
defined, an operator can never redeem a constraint
that has already been violated.
• A first improvement is:
– To test constraints after each variable assignment
– If all possible values violate some constraint, then the
algorithm backtracks to the last valid assignment
• Variables are classified as: past, current, future.
Backtracking search algorithm
1. Set each variable as undefined. Empty the stack. All variables are future variables.
2. Select a future variable as the current variable.
   If it exists, delete it from FUTURE and stack it (top = current variable);
   if not, the assignment is a solution.
3. Select an unused value for the current variable.
   If it exists, mark the value as used;
   if not, set the current variable as undefined,
   mark all its values as unused,
   unstack the variable and add it to FUTURE;
   if the stack is empty, there is no solution,
   if not, go to 3.
4. Test constraints between past variables and the current one.
   If they are satisfied, go to 2;
   if not, go to 3.

(It is possible to use heuristics to select variables (2.) and values (3.).)
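A recursive Python rendering of the same algorithm, run on the map-coloring example from earlier. The adjacency arcs C1-C2, C1-C3, C2-C4, C3-C4 are one plausible reading of the figure, chosen for illustration:

def backtrack(assignment, variables, domains, ok):
    if len(assignment) == len(variables):
        return assignment                      # complete assignment = solution
    var = next(v for v in variables if v not in assignment)   # step 2
    for value in domains[var]:                 # step 3
        assignment[var] = value
        if ok(assignment):                     # step 4: test constraints
            result = backtrack(assignment, variables, domains, ok)
            if result is not None:
                return result
        del assignment[var]                    # undo, try the next value
    return None                                # forces backtracking in caller

variables = ["C1", "C2", "C3", "C4"]
domains = {"C1": ["blue", "red"], "C2": ["blue", "red", "green"],
           "C3": ["blue"], "C4": ["blue", "green"]}
arcs = [("C1", "C2"), ("C1", "C3"), ("C2", "C4"), ("C3", "C4")]

def ok(asg):
    return all(asg[a] != asg[b] for a, b in arcs if a in asg and b in asg)

print(backtrack({}, variables, domains, ok))
# {'C1': 'red', 'C2': 'blue', 'C3': 'blue', 'C4': 'green'}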
Forward checking algorithm
Forward checking: example
• Idea:
– Keep track of remaining legal values for unassigned variables
– Terminate search when any variable has no legal values
(Figure: forward checking applied step by step to the map-coloring example)
Constraint propagation
• Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures:
• NT and SA cannot both be blue!
• Constraint propagation repeatedly enforces constraints locally
Constraint propagation
• Forward checking does not detect the “blue”
inconsistency, because it does not look far enough
ahead.
• Constraint propagation is the general term for
propagating the implications of a constraint on one
variable onto other variables.
• The idea of arc consistency provides a fast method of
constraint propagation that is substantially stronger
than forward checking.
Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff for every value x of X there is some allowed y
• If X loses a value, neighbors of X need to be rechecked
• Arc consistency detects failure earlier than forward checking
• Can be run as a preprocess or after each assignment
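A sketch of AC-3, the standard arc-consistency algorithm. The allowed predicate encodes the binary constraint on each arc; the names here are ours:

from collections import deque

def ac3(domains, neighbors, allowed):
    # allowed(x, vx, y, vy) -> True if (vx, vy) satisfies the arc's constraint.
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Revise: drop values of X that have no supporting value in Y.
        removed = {vx for vx in domains[x]
                   if not any(allowed(x, vx, y, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False              # a domain became empty: failure
            for z in neighbors[x]:        # X lost a value: recheck neighbors
                if z != y:
                    queue.append((z, x))
    return True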
Constraint Satisfaction Problems
◼ So far
◼ All solutions are equally good
◼ In some real world applications, we
◼ Not only want feasible solutions, but also good solutions
◼ We have different preferences on constraints
◼ Problems may be so constrained that there is no solution satisfying all constraints
SEND MORE MONEY - Problem
  SEND
+ MORE
= MONEY

Cryptarithmetic problem: a mathematical puzzle where digits are replaced by symbols.
Find the unique digits the letters represent, satisfying the above constraints.
SEND MORE MONEY - Model
◼ Variables
◼ S, E, N, D, M, O, R, Y
◼ Domain
◼ {0, …, 9}
SEND MORE MONEY - Model
◼ Constraints
◼ Distinct variables: S ≠ E, M ≠ S, …
◼ S*1000 + E*100 + N*10 + D
  + M*1000 + O*100 + R*10 + E
  = M*10000 + O*1000 + N*100 + E*10 + Y
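Before looking at the CP search, note that this model can also be checked by brute force in a few lines of Python (a sketch, not the constraint-propagation approach these slides develop):

from itertools import permutations

for s, e, n, d, m, o, r, y in permutations(range(10), 8):
    if s == 0 or m == 0:
        continue                             # leading digits must be non-zero
    send = 1000 * s + 100 * e + 10 * n + d
    more = 1000 * m + 100 * o + 10 * r + e
    money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
    if send + more == money:
        print(send, "+", more, "=", money)   # 9567 + 1085 = 10652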
SEND MORE MONEY – How?
◼ How would you solve the problem using CP techniques?
◼ Search tree with backtracking
◼ Constraint propagation
◼ Forward & backward checking
◼ Combination of the above?
◼ Different problems may find different techniques more appropriate
SEND MORE MONEY - Solution
  SEND      9567
+ MORE   + 1085
= MONEY  = 10652

◼ Is this the only solution?
◼ Sometimes we want to maximise an objective
SEND MOST MONEY - Problem
  SEND
+ MOST
= MONEY

◼ Objective: we now want to maximise MONEY
SEND MOST MONEY - Problem
◼ Modelling
◼ What does "best" mean?
◼ How to find the best solution
◼ Search
◼ Assign scores for a proposed solution, h
◼ Update the bound, b
SEND MORE MONEY - model
  SEND
+ MORE
= MONEY

Variables:
Domain:
SEND MORE MONEY - model

//.mod file
//declaration
//variables
enum Letters {S, E, N, D, M, O, R, Y};

//domain
var int l[Letters] in 0..9;
Game Playing
• Minimax algorithm
• Alpha-beta cut-offs

Game Playing
Why do AI researchers study game playing?
1. It's a good reasoning problem, formal and nontrivial.
2. Direct comparison with humans and other computer programs is easy.
What Kinds of Games?
Mainly games of strategy with the following characteristics:
1. Sequence of moves to play
2. Rules that specify possible moves
3. Rules that specify a payment for each move
4. Objective is to maximize your payment
Two-Player Game
(Flowchart: opponent's move → generate new position → if game over, stop; otherwise generate successors → evaluate successors → move to highest-valued successor → if game over, stop; otherwise repeat)
Game Tree (2-player, Deterministic, Turns)
• Levels alternate between the computer's turn and the opponent's turn.
• The computer is Max; the opponent is Min.
• At the leaf nodes, the utility function is employed and the leaf nodes are evaluated. A big value means good, a small value means bad.
Mini-Max Terminology
• utility function: the function applied to leaf nodes
• backed-up value
– of a max-position: the value of its largest successor
– of a min-position: the value of its smallest successor
• minimax procedure: search down several levels; at the bottom level apply the utility function, back up values all the way to the root node, and that node selects the move.
Minimax
• Perfect play for deterministic games
• Idea: choose move to position with highest minimax value
= best achievable payoff against best play
• E.g., 2-ply game:
Minimax – Animated Example
(Figure: a three-level game tree with leaf values 5, 2, 1, 3, 6, 2, 0, 7; backing the values up, the computer (Max) can obtain 6 by choosing the right-hand edge from the first node.)
Minimax Strategy
• Why do we take the min value every other level of the tree?
• These nodes represent the opponent's choice of move.
• The computer assumes that the human will choose the move that is of least value to the computer.
Minimax Function
• MINIMAX-VALUE(n) =
– UTILITY(n), if n is a terminal state
– max over s ∈ Successors(n) of MINIMAX-VALUE(s), if n is a MAX node
– min over s ∈ Successors(n) of MINIMAX-VALUE(s), if n is a MIN node
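This recursive definition translates directly into code. A sketch for toy game trees written as nested lists with utilities at the leaves (the representation is our own):

def minimax_value(node, is_max):
    if not isinstance(node, list):            # terminal state: UTILITY(n)
        return node
    values = [minimax_value(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# A 2-ply game: Max to move at the root, three Min nodes below.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax_value(tree, True))   # 3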
Minimax algorithm
(Figure: minimax algorithm pseudocode)
Tic Tac Toe
• Let p be a position/state in the game
• Define the utility function f(p) by
– f(p) =
• the largest positive number if p is a win for the computer
• the smallest negative number if p is a win for the opponent
• RCDC – RCDO otherwise
– where RCDC is the number of rows, columns and diagonals in which the computer could still win
– and RCDO is the number of rows, columns and diagonals in which the opponent could still win.
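A sketch of this utility for a 3x3 board stored as a 9-element list of 'X', 'O' or None, with the computer playing 'X' (these conventions are our own):

import math

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def f(board, me="X", opp="O"):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return math.inf if board[a] == me else -math.inf
    # A line is still winnable for a player if it holds no opposing piece.
    rcdc = sum(1 for line in LINES if all(board[i] != opp for i in line))
    rcdo = sum(1 for line in LINES if all(board[i] != me for i in line))
    return rcdc - rcdo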
Properties of Minimax
• Complete? Yes (if tree is finite)
• Optimal? Yes (against an optimal opponent)
• Time complexity? O(b^m)
• Space complexity? O(bm) (depth-first exploration)
• For chess, b ≈ 35, m ≈ 100 for "reasonable" games
→ exact solution completely infeasible
Need to speed it up.
Searching Game Trees
• Exhaustively searching a game tree is not usually a good idea. Even for a simple tic-tac-toe game there are over 350,000 nodes in the complete game tree.
• An additional problem is that the computer only gets to choose every other path through the tree – the opponent chooses the others.
Alpha-beta Pruning
• A method that can often cut off half of the game tree.
• Based on the idea that if a move is clearly bad, there is no need to follow the consequences of it.
• alpha – the highest value we have found so far
• beta – the lowest value we have found so far
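A sketch of minimax with alpha-beta cutoffs, on the same nested-list trees used above:

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):
        return node                       # leaf: utility value
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # cutoff: Min will avoid this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                         # cutoff: Max already has better
    return value

print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], True))   # 3, same as minimax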
α-β pruning example
(Figure, step by step: after the first Min node is fully evaluated to 3, the Max root has α = 3; subsequent Min subtrees are abandoned as soon as they produce a value ≤ 3 (an alpha cutoff).)
Alpha Cutoff
(Figure: a Max node with α = 3 above a Min node whose first leaves are 8 and 10. What happens here? Is there an alpha cutoff?)

Beta Cutoff
(Figure: a Min node with β = 4 above a Max node; once a successor of the Max node evaluates to 8 > 4, a beta cutoff occurs.)
Alpha-Beta Pruning
(Figure: a max/min/max tree with leaf evaluations 5, 2, 10, 11, 1, 2, 2, 8, 6, 5, 12, 4, 3, 25, 2)
Properties of α-β
• Pruning does not affect the final result: it gives exactly the same result as full minimax.
• Good move ordering improves the effectiveness of pruning.
• With "perfect ordering," time complexity = O(b^(m/2))
→ in effect, this doubles the depth of search that can be handled.
Thank you!