Chapter 2 - Intelligent Agents
Wollo University, Kombolicha Institute of Technology
By Ashenafi Workie (MSc.)
Major chapter outline
Agent
Ideal example of Agent
Rationality
Agents may be rational or human-like.
How humans act or think is difficult to understand because of the complex structure of human intelligence.
Our agent should therefore be designed from the rationality view: it should act rationally.
A rational agent is an agent that does the right thing with the data it perceives from the environment.
"Right" is an ambiguous concept, but we can consider the right thing to be the one that makes the agent more successful.
Success, in turn, is measured using a performance measure.
Question: how and when do you measure success in performance?
Performance
Subjective measure: ask the agent itself how well it is doing.
Some agents are unable to answer, some delude themselves, some overestimate their success, and some underestimate it.
Objective measures
Need a standard to measure success.
Provide a quantitative value for the success of the agent.
Involve identifying the factors that affect performance and assigning a weight to each factor.
E.g., the performance measure of a vacuum-cleaner agent could be
amount of dirt cleaned up,
amount of time taken,
amount of electricity consumed,
amount of noise generated, etc.
The time at which performance is measured is also important for success.
It may include knowing the starting time, finishing time, duration of the job, etc.
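The weighted combination of factors described above can be sketched in Python; the factor weights below are illustrative assumptions, not values from the slides:

```python
# Hypothetical weighted performance measure for a vacuum-cleaner agent.
# A positive weight rewards a factor (dirt cleaned); negative weights
# penalize costs (time, electricity, noise). The weights are assumptions.

def performance_score(dirt_cleaned, time_taken, electricity_used, noise_level,
                      weights=(1.0, -0.2, -0.1, -0.05)):
    """Combine each factor with its weight into one quantitative score."""
    factors = (dirt_cleaned, time_taken, electricity_used, noise_level)
    return sum(w * f for w, f in zip(weights, factors))

# Example: 50 units of dirt cleaned in 30 minutes, using 10 kWh, at noise level 40.
score = performance_score(50, 30, 10, 40)
```

Changing the weights changes which agent counts as "more successful", which is exactly why the factors and their weights must be fixed before agents are compared.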
Omniscience
• An omniscient agent
• Knows the actual outcome of its actions in advance
• No other outcomes are possible
• However, omniscience is impossible in the real world
• An example
• An agent crosses a street safely but is killed by a cargo door falling from 33,000 ft (about 10 km). Was the agent irrational?
Omniscience vs Rational Agent
• An omniscient agent is distinct from a rational agent
• An omniscient agent knows the actual outcome of its actions. A rational agent, by contrast, tries to achieve the most success from its decisions
• Rational agents can make mistakes because of factors that are unpredictable at the time of making a decision
• An omniscient agent that acts and thinks rationally never makes a mistake
• An omniscient agent is an ideal that cannot exist in the real world
• Agents can perform actions in order to modify future percepts so as
to obtain useful information (information gathering, exploration)
Factors to measure a rational agent
1. The percept sequence perceived so far (do we have the entire history of how the world evolved or not?)
2. The set of actions the agent can perform (agents designed to do the same job with different action sets will have different performance)
3. The performance measures (are they subjective or objective? What are the factors and their weights?)
4. The agent's knowledge about the environment (what kind of sensors does the agent have? Does the agent know everything about the environment or not?)
This leads to the concept of Ideal Rational Agent
Ideal rational agent
For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Implementing an ideal rational agent requires perfection.
In real situations such an agent is difficult to achieve.
Why do car accidents happen? Because drivers are not perfect agents.
Autonomy
An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).
An agent lacks autonomy if its actions are based completely on built-in knowledge.
Example: a student grade decider agent:
A knowledge base is given for converting numeric grades to letter grades.
Case 1: an agent that always follows the rules (lacks autonomy).
Case 2: an agent that modifies the rules by learning exceptions from the knowledge base as well as the grade distribution (autonomous).
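The two cases can be sketched in Python; the cut-off values and the adaptation rule are illustrative assumptions:

```python
# Built-in knowledge base: numeric cut-off -> letter grade (illustrative values).
FIXED_RULES = [(90, 'A'), (80, 'B'), (70, 'C'), (60, 'D'), (0, 'F')]

def rule_following_agent(score):
    """Case 1: always applies the built-in rules -> lacks autonomy."""
    for cutoff, letter in FIXED_RULES:
        if score >= cutoff:
            return letter

class AdaptiveGrader:
    """Case 2: starts from the built-in rules but modifies them after
    observing the grade distribution -> autonomous behavior."""

    def __init__(self):
        self.rules = list(FIXED_RULES)

    def observe(self, scores):
        # Illustrative adaptation rule: if the class average is low,
        # relax every cut-off by 5 points.
        if sum(scores) / len(scores) < 60:
            self.rules = [(max(c - 5, 0), l) for c, l in self.rules]

    def grade(self, score):
        for cutoff, letter in self.rules:
            if score >= cutoff:
                return letter
```

The same numeric score can thus receive different letter grades from the two agents once the adaptive one has learned from experience.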
Structure of Intelligent Agent
The structure of an AI agent refers to the design of the intelligent agent program (the function that implements the agent mapping from percepts to actions), which will run on some sort of computing device called the architecture.
This course focuses on the theory, design, and implementation of the intelligent agent program.
Designing an intelligent agent needs prior knowledge of:
the Performance measure or Goal the agent is supposed to achieve,
what kind of Environment it operates in,
what kind of Actuators it has (what are the possible Actions),
what kind of Sensors it has (what are the possible Percepts).
Performance measure, Environment, Actuators, Sensors are abbreviated as PEAS.
Percepts, Actions, Goal, Environment are abbreviated as PAGE.
Examples of agents structure and sample PEAS
Agent: automated taxi driver:
Environment: Roads, traffic, pedestrians, customers
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
Actuators: Steering wheel, accelerator, brake, signal, horn
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Agent: Medical diagnosis system
Environment: Patient, hospital, physician, nurses, …
Sensors: Keyboard (percept can be symptoms, findings, patient’s answers)
Actuators: Screen display (action can be questions, tests, diagnoses, treatments, referrals)
Performance measure: Healthy patient, minimize costs, lawsuits
Examples of agents structure and sample PEAS
Agent: Interactive English tutor
Environment: Set of students
Sensors: Keyboard (typed words)
Actuators: Screen display (exercises, suggestions, corrections)
Performance measure: Maximize student's score on test
Agent: Satellite image analysis system
Environment: Images from an orbiting satellite
Sensors: Pixels of varying intensity, color
Actuators: Print categorization of scene
Performance measure: Correct categorization
Agent: Part picking robot
Environment: Conveyor belt with parts
Sensors: pixels of varying intensity
Actuators: pickup parts and sort into bins
Performance measure: Place parts in correct bins
Agent program
An agent is completely specified by the agent function that maps
percept sequences into actions
Aim: find a way to implement the rational agent function concisely
Skeleton of the Agent
function SKELETON-AGENT(percept) returns action
  static: memory, the agent's memory of the world
  memory ← UPDATE-MEMORY(memory, percept)
  action ← CHOOSE-BEST-ACTION(memory)
  memory ← UPDATE-MEMORY(memory, action)
  return action
Note
1. the function gets only a single percept at a time
Q: how to get the percept sequence?
2. The goal or performance measure is not part of the skeleton
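A minimal Python rendering of the skeleton, assuming a simple list-based memory and placeholder helper functions; the percept sequence is recovered by threading the memory through successive calls:

```python
def update_memory(memory, item):
    """Illustrative memory update: append percepts/actions to a history list."""
    return memory + [item]

def choose_best_action(memory):
    """Illustrative action selection: react to the most recent percept."""
    last_percept = memory[-1]
    return 'act-on-' + str(last_percept)

def skeleton_agent(memory, percept):
    """Mirrors SKELETON-AGENT: record the percept, choose an action,
    record the action, return the action."""
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return memory, action

# Feeding percepts one at a time while carrying the memory forward
# gives the agent access to the whole percept sequence.
memory = []
for percept in ['dirty', 'clean']:
    memory, action = skeleton_agent(memory, percept)
```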
Table-lookup agent
A table look-up agent stores all percept sequence–action pairs in a table.
For each percept, this type of agent searches for the entry matching the percept sequence so far and returns the corresponding action.
Table look-up is usually not the right option for implementing a successful agent.
Why?
Drawbacks:
Huge table
Take a long time to build the table
No autonomy
Even with learning, need a long time to learn the table entries
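A minimal sketch of a table-lookup agent, with illustrative table entries for a vacuum world, plus a count that makes the "huge table" drawback concrete:

```python
# Illustrative table: percept sequence (as a tuple) -> action.
TABLE = {
    ('dirty',): 'suck',
    ('clean',): 'move',
    ('dirty', 'clean'): 'move',
    ('clean', 'dirty'): 'suck',
}

class TableLookupAgent:
    """Accumulates the percept sequence and looks the whole sequence up."""

    def __init__(self, table):
        self.table = table
        self.percepts = []

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts))

# The "huge table" drawback: with p distinct percepts and lifetimes of
# up to T steps, the table needs p + p^2 + ... + p^T entries.
def table_size(p, T):
    return sum(p ** t for t in range(1, T + 1))
```

Even a modest 10 percepts over 5 steps already needs over a hundred thousand entries, which illustrates why the table explodes for realistic lifetimes.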
Agent types
Based on the memory of the agent and the way the agent takes action, we can divide agents into five basic types (in increasing order of generality):
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agent
Each will be discussed soon with models. Notation of the models:
Rectangles represent the current internal state of the agent's decision process.
Ovals represent the background information used in the process.
Simple reflex agent
function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition–action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
Simple reflex agent example
Consider an artificial robot that stands at the center of Meskel Square (environment).
The agent has a camera and a microphone (sensors).
If the agent perceives a sound of very high frequency (say above 20 kHz), it flies up into the sky as far as possible.
If the agent perceives an image that looks like a car, it runs away in the forward direction.
Otherwise it just turns in a random direction.
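The condition–action rules above can be sketched as a simple reflex agent in Python; the percept encoding, thresholds, and action names are illustrative:

```python
def simple_reflex_robot(percept):
    """Simple reflex agent for the Meskel Square robot.
    percept is a dict with optional 'sound_khz' and 'image' keys;
    the action depends only on the current percept - no memory."""
    if percept.get('sound_khz', 0) > 20:      # very high-frequency sound
        return 'fly-up'
    if percept.get('image') == 'car':          # image that looks like a car
        return 'run-forward'
    return 'turn-randomly'                     # default behavior
```

Because the agent keeps no memory, the same percept always produces the same action, which is the defining property (and limitation) of simple reflex agents.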
Model-based reflex agents (also called a reflex agent with internal state)
Model-based reflex agents
function MODEL-BASED-AGENT(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition–action rules
  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  return action
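A Python sketch of a model-based reflex agent for a two-square vacuum world (an illustrative choice; the state representation and rules are assumptions):

```python
class ModelBasedAgent:
    """Model-based reflex agent: keeps an internal state (which squares
    are known to be clean) that persists across percepts."""

    def __init__(self):
        self.state = {}  # internal model: location -> last known status

    def update_state(self, percept):
        location, status = percept
        self.state[location] = status

    def __call__(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == 'dirty':
            return 'suck'
        # Use the internal state: move toward a square not yet known clean.
        if self.state.get('B') != 'clean':
            return 'move-right'
        return 'no-op'
```

Unlike a simple reflex agent, the same percept ('A', 'clean') can yield different actions depending on what the internal state already records about square B.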
Goal-based agents
Goal-based agents structure
function GOAL-BASED-AGENT(percept) returns action
  static: state, a description of the current world state
          goal, a description of the goal to achieve, possibly in terms of states
Utility-based agents
Utility-based agents structure
function UTILITY-BASED-AGENT(percept) returns action
  static: state, a description of the current world state
          utility, a function that maps a state to a real number indicating how desirable it is
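A utility-based agent can be sketched as choosing the action whose predicted resulting state has the highest utility; the states, transition model, and utility values below are all illustrative:

```python
def utility_based_agent(state, actions, result, utility):
    """result(state, action) -> predicted next state;
    utility(state) -> real number ("degree of happiness").
    Picks the action maximizing the utility of the predicted outcome."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Tiny example: a taxi choosing between two routes.
routes = ['highway', 'side-street']

def result(state, action):
    return {'highway': 'fast-trip', 'side-street': 'slow-trip'}[action]

def utility(state):
    return {'fast-trip': 0.9, 'slow-trip': 0.4}[state]

best = utility_based_agent('at-start', routes, result, utility)
```

Where a goal-based agent only distinguishes goal from non-goal states, the utility function ranks states, so the agent can trade off competing objectives such as speed, safety, and cost.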
Learning agents
Critic: tells the agent how well it is doing.
Input: e.g., is the position a checkmate?
The critic's performance standard is fixed.
Problem generator:
Tries to solve the problem differently instead of only optimizing.
Suggests exploring new actions -> new problems.
Performance element: what was previously the whole agent.
Input: sensor percepts. Output: actions.
Learning element: modifies the performance element.
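The interplay of the components can be sketched minimally in Python; all components are illustrative, and the critic's feedback is reduced to a single numeric reward:

```python
import random

class LearningAgent:
    """Minimal learning agent: the performance element keeps per-action
    value estimates, the learning element updates them from the critic's
    feedback, and the problem generator suggests unexplored actions."""

    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}  # performance element's knowledge

    def act(self, explore=False):
        if explore:
            # Problem generator: try something new instead of optimizing.
            return random.choice(self.actions)
        # Performance element: current best-known action.
        return max(self.actions, key=lambda a: self.values[a])

    def learn(self, action, feedback):
        # Learning element: critic's feedback modifies the performance element.
        self.values[action] += feedback
```

Initially the agent acts on built-in (here, zero) knowledge; after feedback its behavior is determined by its own experience, which is precisely the autonomy property discussed earlier.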
Types of Environment
Based on the portion of the environment that is observable
Fully observable: an agent's sensors give it access to the complete state of the environment at each point in time (chess vs. driving).
Partially observable
Fully unobservable
Based on the effect of the agent's actions
Deterministic : The next state of the environment is completely
determined by the current state and the action executed by the agent.
Strategic: If the environment is deterministic except for the actions of
other agents, then the environment is strategic
Stochastic or probabilistic
Types of Environment cont’d
Based on the number of agents involved
Single agent: a single agent operating by itself in an environment.
Multi-agent: multiple agents are involved in the environment.
Based on the state, action and percept space pattern
Discrete: a limited number of distinct, clearly defined states, percepts, and actions.
Continuous: states, percepts, and actions are continuously varying quantities.
Note: one or more of them can be discrete or continuous.
Types of Environment cont’d
Based on the effect of time
Static: the environment is unchanged while an agent is deliberating.
Dynamic: the environment can change while an agent is deliberating.
Semi-dynamic: the environment itself does not change with the passage of time, but the agent's performance score does.
Based on the dependence among sub-objectives
Episodic: the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
Sequential: the current decision can affect all future decisions; the experience is not divided into independent episodes.
Environment types example
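As an illustration, two classic task environments can be classified along the dimensions above; expressed as a small Python structure, with entries following the usual textbook classification:

```python
# Illustrative classification of two classic task environments
# along the dimensions defined in the preceding slides.
ENVIRONMENTS = {
    'chess with a clock': {
        'observable': 'fully',
        'deterministic': 'strategic',   # deterministic except for the opponent
        'episodic': False,              # sequential
        'static': 'semi-dynamic',       # board is static, the clock is not
        'discrete': True,
        'agents': 'multi',
    },
    'taxi driving': {
        'observable': 'partially',
        'deterministic': 'stochastic',
        'episodic': False,              # sequential
        'static': 'dynamic',
        'discrete': False,              # continuous
        'agents': 'multi',
    },
}
```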
End ….