Chapter 2 AI
Artificial Intelligence
Mr. Yordanos A. Lecturer
MSC in Computer Science and Engineering
(Dept. of Computer Science)
yordanos033331@gmail.com
Crhr: 3 ECTS: 5
1
Chapter 2: Intelligent Agents
Agent
▪ An “agent” is an independent program or entity that interacts with its environment by
perceiving its surroundings via sensors, then acting through actuators or effectors.
▪ An agent has some level of autonomy that allows it to perform specific, predictable,
and repetitive tasks for users or applications.
▪ It is also termed ‘intelligent’ because of its ability to learn while performing tasks.
▪ The two main functions of intelligent agents are perception and action: perception
is done through sensors, while actions are initiated through actuators.
2
An agent can be:
1. Human Agent: A human agent has sensory organs such as eyes, ears, nose, tongue,
and skin parallel to the sensors, and other organs such as hands, legs, and mouth as
effectors/actuators.
2. Robotic Agent: A robotic agent can have cameras and infrared range finders for
sensors, and various motors for actuators.
3. Software Agent: A software agent can receive keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
3
How do intelligent agents work?
▪ The world around us is full of agents such as thermostats, cell phones, and cameras;
even we ourselves are agents.
▪ Sensors, actuators, and effectors are the three main components through which
intelligent agents work.
▪ Before moving into a detailed discussion, we should first know about sensors,
effectors, and actuators.
❑ Sensor: A sensor is a device which detects changes in the environment and sends
the information to other electronic devices. An agent observes its environment
through sensors. E.g.: camera, GPS, radar.
4
❑ Actuators: Actuators are the components of machines that convert energy into
motion. They are responsible for moving and controlling a system. An actuator can
be an electric motor, gears, rails, etc.
❑ Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and a display screen.
5
The above diagram shows how these components are positioned in an AI system.
Percepts, or inputs from the environment, are received through sensors by the intelligent
agent. Using this acquired information, or observations, the agent applies artificial
intelligence to make decisions; actuators then trigger the resulting actions. Percept
history and past actions influence future decisions.
6
Intelligent Agents
An intelligent agent is an autonomous entity which acts upon an environment using
sensors and actuators to achieve goals. An intelligent agent may learn from the
environment to achieve its goals. A thermostat is an example of an intelligent agent.
7
Characteristics of intelligent agents
Intelligent agents have the following distinguishing characteristics:
• They have some level of autonomy that allows them to perform certain tasks on their
own.
• They have a learning ability that enables them to learn even as tasks are carried out.
• They can interact with other entities such as agents, humans, and systems.
• New rules can be accommodated by intelligent agents incrementally.
• They exhibit goal-oriented habits.
• They are knowledge-based. They use knowledge regarding communications, processes,
and entities.
8
Rational Agent
▪ An ideal rational agent is an agent that performs the best possible action and
maximizes the performance measure.
▪ Actions are selected from the alternatives based on:
o the percept sequence
o the built-in knowledge base
▪ The actions of a rational agent make it most successful for the given percept
sequence. The highest-performing agents are rational agents.
▪ A rational agent is said to do the right thing. AI is about creating rational
agents, used with game theory and decision theory in various real-world scenarios.
10
• A rational agent should strive to "do the right thing", based on what it can perceive and
the actions it can perform.
– What does "right thing" mean? It is the action that is expected to make the agent
most successful and to maximize goal achievement, given the available
information.
• A rational agent is not omniscient.
– An omniscient agent knows the actual outcome of its actions and can act
accordingly; but in reality, omniscience is impossible.
(Omniscient: knowing everything, i.e., having perfect knowledge, and therefore
always able to act appropriately.)
– Rational agents take actions with expected success, whereas an omniscient agent
takes actions that are 100% sure to succeed.
– A rational agent optimizes expected performance.
– Are human beings omniscient agents or rational agents?
11
Example: Is the agent Rational?
▪ You are walking along the road to Mazoreya and you see an old friend across the
street. There is no traffic.
▪ So, being rational, you start to cross the street.
▪ However, before you finish crossing the road, a big banner falls from above.
12
Were you irrational to cross the street?
This points out that rationality is concerned with expected success, given what has
been perceived. Crossing the street was rational because, most of the time, the
crossing would be successful, and there was no way you could have foreseen the
falling banner. The example shows that we cannot blame an agent for failing to take
into account something it could not perceive, or for failing to take an action that it
is incapable of taking.
13
In designing intelligent systems there are four main factors to consider: the agent's
percepts, actions, goals, and environment.
14
Examples of agents in different types of applications
Agent type | Percepts | Actions | Goals | Environment
-----------|----------|---------|-------|------------
Medical diagnosis system | Symptoms, patient's answers | Questions, tests, treatments, diagnoses | Healthy patients, minimize costs | Patient, hospital
Interactive English tutor | Typed words, questions, suggestions | Write exercises, suggestions, corrections | Maximize student's score on exams | Set of students, materials
Softbot | Web pages | ftp, mail, telnet | Collect information on a subject | Internet
Satellite image analysis system | Pixel intensity, color | Print a categorization of scene | Correct categorization | Images from orbiting satellite
Refinery controller | Temperature, pressure readings | Open, close valves; adjust temperature | Maximize purity, yield, safety | Refinery
15
PEAS representation in AI
▪ Many AI agents use the PEAS model in their structure. PEAS is an acronym for
Performance measure, Environment, Actuators, and Sensors.
▪ It is a model on which an AI agent works, and it is used to group similar agents.
The environment, actuators, and sensors of the respective agent are considered
together with its performance measure in a PEAS description.
16
1. Performance Measure: The performance of each agent varies based on its
percepts, and the success of an agent is described using the performance measure.
2. Environment: The surroundings of the agent at every instant. The environment will
change with time if the respective agent is set in motion.
3. Actuators: The parts of the agent which initiate actions and deliver the output of
an action to the environment.
4. Sensors: The parts of the agent which take inputs for the agent.
17
PEAS for self-driving cars:
18
Performance: Safety, time, legal driving, comfort
Environment: Roads, other vehicles, road signs, pedestrians
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer
19
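The PEAS description above can also be captured in a small data structure. A minimal Python sketch (the `PEAS` class and its field values are illustrative assumptions drawn from the self-driving-car example, not a standard API):

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """PEAS description of an agent: Performance measure,
    Environment, Actuators, and Sensors."""
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

# PEAS description of the self-driving car from the slide above.
self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer"],
)

print(self_driving_car.sensors)  # ['camera', 'GPS', 'speedometer']
```

Grouping similar agents then amounts to comparing their PEAS descriptions field by field.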
Agent | Performance Measure | Environment | Actuators | Sensors
------|---------------------|-------------|-----------|--------
Vacuum Cleaner | Cleanness, efficiency, battery life, security | Room, table, wood floor, carpet, various obstacles | Wheels, brushes, vacuum extractor | Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor
20
Structure of an AI Agent
▪ To understand the structure of intelligent agents, we should be familiar
with architecture and agent programs.
▪ The task of AI is to design an agent program which implements the agent function.
The structure of an intelligent agent is a combination of architecture and agent
program.
1. Architecture is the machinery that the agent executes on. It is a device with
sensors and actuators, for example, a robotic car, a camera, or a PC.
2. Agent program is an implementation of the agent function. The agent program
produces the function f by executing on the physical architecture.
▪ It accepts percepts from the environment and generates actions.
3. Agent function maps a percept sequence to an action: f: P* → A. (A percept
sequence is the history of everything the intelligent agent has perceived.)
21
Program Skeleton of Agent
22
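The agent-program skeleton referred to above follows the classic outline: keep a memory of percepts, fold each new percept into it, choose an action, and record the action taken. A minimal Python sketch, where `update_memory` and `choose_best_action` are placeholder names standing in for real implementations:

```python
def skeleton_agent(percept, memory):
    """Generic agent-program skeleton: update memory with the new
    percept, choose an action, record the action, and return it."""
    memory = update_memory(memory, percept)   # fold the percept into internal state
    action = choose_best_action(memory)       # decide using the internal state
    memory = update_memory(memory, action)    # remember what was done
    return action, memory

# Placeholder implementations, assumed for illustration only:
def update_memory(memory, event):
    return memory + [event]                   # keep a simple history list

def choose_best_action(memory):
    # Trivial rule: react to the most recent event, or do nothing.
    return "noop" if not memory else f"react-to-{memory[-1]}"
```

A real agent program would replace the placeholders with the decision logic of one of the agent types discussed later in this chapter.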
Agent Environment in AI
▪ The term "environment" is quite well known to everyone. As per the definition
in the Oxford dictionary: "environment is the surroundings or conditions in which a
person, animal, or plant lives or operates."
▪ In computing, it is the surroundings in which a computing device works or
operates. In the context of artificial intelligence, an environment is simply
the surroundings of an agent, and it is where the agent operates.
▪ Now, let's consider the real-life example of driving a car on the road. Can you guess
what the agent and the environment will be? Yes, the agent will be the car and the
environment will be the road. The driver senses all alerts and operates the car in a
dangerous environment to get the desired output: safe driving.
23
▪ An environment is everything in the world which surrounds the agent, but it is not a
part of the agent itself. An environment can be described as the situation in which an
agent is present.
▪ The environment is where the agent lives and operates; it provides the agent with
something to sense and act upon.
▪ Similarly, in AI we have an environment that contains an agent, sensors, and
actuators.
24
▪ The below figure shows the simplest diagrammatic representation of agent-
environment interaction. The agent is within the environment. Sensors sense the
environment and provide sensory inputs to the agent. The agent then takes actions
for the respective inputs and provides the output back to the environment.
▪ For AI, the problem which is to be solved itself creates a great challenge;
understanding the given problem is itself a challenging task. And apart from
reasoning, the most challenging aspect of an AI problem is the environment.
▪ Agent and environment can be seen as the two hooks on which AI hangs. More
simply, if the environment is considered the problem, then the agent is the
solution to the problem; or the 'agent' is the game played on the ground, the
'environment'.
25
Example for agents and their
environments in AI
26
▪ Some examples of agents and their environments are shown above for a clear
understanding. For the task of driving, the vehicle is the agent and the road is the
environment to drive on.
▪ Sensor devices like cameras, radar, lidar, etc. collect information about the
road, such as the presence of pedestrians, the number of other vehicles on the
road, the state of the traffic signal, etc.
▪ The vehicle then acts on that information, such as whether the brake pedal
or the accelerator pedal has to be pushed, or whether a turn has to be taken.
▪ If a machine is an agent, then its working place is its environment. If we consider a
cooling system as the agent, then the industry it works for is the environment. The
coolant temperature sensor collects information, and the machine acts upon that
information.
27
Types/Features of Environments
▪ As per Russell and Norvig, an environment can have various features from the point
of view of an agent.
(Artificial Intelligence: A Modern Approach is a university textbook on artificial
intelligence, written by Stuart J. Russell and Peter Norvig.)
28
1. Fully observable vs Partially Observable
❑ If an agent's sensors can sense or access the complete state of the environment at
each point in time, it is a fully observable environment; otherwise it is partially
observable.
❑ A fully observable environment is easy to deal with, as there is no need to
maintain an internal state to keep track of the history of the world.
❑ As the name suggests, in a fully observable environment the complete state of
the environment is sensed or accessed by the sensors at each point in time. If the
environment is not sensed or observed completely at all times, it is partially
observable.
❑ An environment that is not observed or accessed by any sensor at any time is
called an unobservable environment.
29
❑ The agent is familiar with the complete state of the environment at a given time;
no portion of the environment is hidden from the agent.
▪ In real life, chess is an example of a fully observable environment, because each
player of the chess game gets to see the whole board.
30
❑ In a partially observable environment, the agent is not familiar with the
complete state of the environment at a given time; part of the environment is
hidden from the agent.
33
3. Competitive vs Collaborative
▪ An environment is competitive when agents compete against one another, each
trying to maximize its own performance (e.g., playing chess). It is collaborative
when multiple agents cooperate to achieve a common goal.
34
4. SINGLE-AGENT vs MULTI-AGENT
▪ An environment operated on by only one agent is a single-agent environment
(e.g., solving a crossword puzzle). When more than one agent acts in the
environment, it is a multi-agent environment (e.g., playing chess).
35
5. Dynamic vs Static
▪ An environment that always remains unchanged while the agent is acting is called a
static environment.
▪ A static environment is the simplest one and is easy to deal with, since the agent
doesn't need to keep track of the world during an action. An environment is said
to be dynamic if it changes while the agent is acting.
▪ A dynamic environment keeps constantly changing. An environment that stays
constant with time, while the performance score of the agent changes with time, is
called a semi-dynamic environment.
▪ A crossword puzzle can be considered an example of a static environment, since
the problem is fully set at the beginning: the environment remains constant, and it
does not expand or shrink.
▪ For a dynamic environment, we can consider a roller coaster ride as an example.
The environment keeps changing at every instant once it is set in motion. The height,
mass, velocity, different energies (kinetic, potential), centripetal force, etc. vary
from time to time.
36
6. Discrete vs Continuous
▪ If an environment has a finite number of distinct, clearly defined percepts and
actions, it is discrete; if percepts and actions range over continuous values, it is
continuous.
▪ In a chess game, the current action of a particular piece can influence future
actions: if a piece moves forward now, the coming moves depend on this action.
Chess is therefore also sequential.
38
Agent | Fully vs Partially Observable | Deterministic vs Stochastic | Episodic vs Sequential | Static vs Dynamic | Discrete vs Continuous | Single vs Multi Agents
------|-------------------------------|-----------------------------|------------------------|-------------------|------------------------|-----------------------
Brushing Your Teeth | | | | | |
Playing Chess | | | | | |
Playing Cards | | | | | |
Order in Restaurant | | | | | |
Autonomous Vehicles | | | | | |
39
Agent | Fully vs Partially Observable | Deterministic vs Stochastic | Episodic vs Sequential | Static vs Dynamic | Discrete vs Continuous | Single vs Multi Agents
------|-------------------------------|-----------------------------|------------------------|-------------------|------------------------|-----------------------
Brushing Your Teeth | Fully | Stochastic | Sequential | Static | Continuous | Single
Playing Chess | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Playing Cards | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
Order in Restaurant | Fully | Deterministic | Episodic | Static | Discrete | Single Agent
Autonomous Vehicles | Fully | Stochastic | Sequential | Dynamic | Continuous | Multi-Agent
40
How will an agent interact with the environment?
▪ Interaction between the agent and the environment is a process over time. At each
time step, the agent collects information about the representation of the
environment state.
▪ Based on this information, the agent selects an action from the actions available
in that state. One time step later, the agent receives a numerical reward as a
result of the particular action and finds itself in a new state.
▪ This interaction is a continuous process: actions are selected by the agent, and the
environment responds to those actions and presents new situations to the agent.
41
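The perceive-act-reward cycle described above can be sketched as a simple loop. `ToyEnvironment` below is an invented stand-in (not from the slides): the agent must walk from position 0 to a goal at position 3, receiving a reward of 1 on arrival.

```python
class ToyEnvironment:
    """Hypothetical environment: the agent walks a line from 0 to the goal."""
    def __init__(self, goal=3):
        self.state, self.goal = 0, goal

    def step(self, action):
        # Environment responds to the action with a new state and a reward.
        self.state += 1 if action == "forward" else 0
        reward = 1 if self.state == self.goal else 0
        done = self.state == self.goal
        return self.state, reward, done

def agent_policy(state):
    return "forward"   # trivial agent: always move toward the goal

env = ToyEnvironment()
state, total_reward, done = 0, 0, False
while not done:                              # continuous interaction until the goal
    action = agent_policy(state)             # agent selects an action for this state
    state, reward, done = env.step(action)   # environment returns new state + reward
    total_reward += reward

print(state, total_reward)  # 3 1
```

This is the same loop structure used in reinforcement learning, where the policy is improved from the rewards rather than fixed in advance.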
▪ A typical learning structure of AI is shown above.
▪ Learning techniques are used by AI to solve the given task. The diagram explains how
the agent interacts with the environment: as the agent acts, the state of the
environment changes, the agent is informed of the change, and it also receives a
reward. This continues until the goal is reached.
42
Types of AI Agents
▪ Based on their capabilities and level of perceived intelligence, intelligent agents can
be grouped into five main categories.
1. SIMPLE REFLEX AGENT
▪ These agents act on the current percept rather than the percept history. The basis
for the agent function is the condition-action rule.
▪ Simple reflex agents are the simplest agents. They take decisions on the basis of
the current percepts and ignore the rest of the percept history.
▪ A condition-action rule is a rule that maps a condition to an action (e.g., a room-
cleaner agent works only if there is dirt in the room).
▪ A fully observable environment is ideal for the success of the agent function:
these agents only succeed in a fully observable environment.
43
❑ The challenges to the design approach of the simple reflex agent are:
▪ Very limited intelligence.
▪ No knowledge of the unperceived parts of the current state.
▪ They cannot adapt to changes in the environment.
44
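A simple reflex agent can be written as a handful of condition-action rules over the current percept only. A minimal Python sketch of the room-cleaner example above, assuming a two-square world with locations "A" and "B" (the percept format is an illustrative assumption):

```python
def simple_reflex_vacuum(percept):
    """Simple reflex agent: decides only from the current percept,
    ignoring the percept history, via condition-action rules."""
    location, status = percept      # e.g. ("A", "dirty")
    if status == "dirty":           # rule: dirty square -> suck
        return "suck"
    elif location == "A":           # rule: clean and at A -> move right
        return "move_right"
    else:                           # rule: clean and at B -> move left
        return "move_left"

print(simple_reflex_vacuum(("A", "dirty")))   # suck
print(simple_reflex_vacuum(("A", "clean")))   # move_right
```

Note that the agent has no memory: given the same percept twice, it always produces the same action, which is exactly why it fails in partially observable environments.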
2. MODEL-BASED REFLEX AGENT
▪ The Model-based agent can work in a partially observable environment, and track the
situation.
▪ A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
• Internal State: It is a representation of the current state based on percept
history.
▪ These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
▪ Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
45
▪ Unlike simple reflex agents, model-based reflex agents consider the percept history
in their actions. The agent function can still work well even in an environment that is
not fully observable.
▪ These agents use an internal model built from the percept history and the known
effects of actions; the model reflects aspects of the present state that cannot be
directly observed.
46
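Extending the earlier vacuum sketch, a model-based reflex agent adds an internal state built from the percept history. The two-square world and percept format here are the same illustrative assumptions as before:

```python
class ModelBasedVacuum:
    """Model-based reflex agent: keeps an internal state (which squares
    are known to be clean) so it can act sensibly even when it cannot
    observe the whole world at once."""
    def __init__(self):
        self.known_clean = set()    # internal state from percept history

    def act(self, percept):
        location, status = percept
        if status == "dirty":
            self.known_clean.discard(location)   # model update: square is dirty again
            return "suck"
        self.known_clean.add(location)           # model update: square is clean
        if {"A", "B"} <= self.known_clean:
            return "noop"                        # model says everything is clean
        return "move_right" if location == "A" else "move_left"

agent = ModelBasedVacuum()
print(agent.act(("A", "clean")))   # move_right
print(agent.act(("B", "clean")))   # noop
```

The second call returns "noop" only because the internal model remembers that square A was already clean, something a simple reflex agent could never conclude from the current percept alone.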
3. GOAL-BASED AGENTS
▪ Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
▪ The agent needs to know its goal, which describes desirable situations.
▪ Goal-based agents expand the capabilities of the model-based agent by adding the
"goal" information.
▪ They choose their actions so that they can achieve the goal.
▪ These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved. Such consideration of different
47
48
4. UTILITY-BASED AGENTS
▪ These agents are similar to goal-based agents but add an extra component of
utility measurement, which makes them different by providing a measure of
success in a given state.
▪ A utility-based agent acts based not only on goals but also on the best way to
achieve the goal.
▪ A utility-based agent is useful when there are multiple possible alternatives, and
the agent has to choose the best action among them.
▪ The utility function maps each state to a real number, indicating how efficiently
each action achieves the goals.
49
Think about it this way: a goal-based agent (yes, another of the intelligent agents out
there) makes decisions based simply on achieving a set goal. Let's say you want to
travel from Assosa to Addis Ababa: the goal-based agent will get you there. Addis
Ababa is the goal, and this agent will map the right path to get you there. But if
you're traveling from Assosa to Addis Ababa and encounter a closed road, the utility-
based agent will kick into gear and analyse other routes to get you there, selecting
the best option for maximum utility. In this regard, the utility-based agent is a step
above the goal-based agent.
50
51
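The closed-road scenario above can be sketched with an explicit utility function that maps each alternative to a real number, and an agent that simply picks the maximum. The routes, hours, and penalty value are all made-up numbers for illustration:

```python
def route_utility(route):
    """Hypothetical utility function: prefer shorter travel time and
    heavily penalize closed roads. Maps a state to a real number."""
    penalty = 1000 if route["closed"] else 0
    return -route["hours"] - penalty

def utility_based_choice(routes):
    """Utility-based agent: among multiple alternatives, choose the
    one with maximum utility."""
    return max(routes, key=route_utility)

routes = [
    {"name": "main road", "hours": 9,  "closed": True},
    {"name": "detour",    "hours": 12, "closed": False},
]
print(utility_based_choice(routes)["name"])  # detour
```

A pure goal-based agent would accept either route, since both nominally reach the goal; the utility function is what ranks them and makes the detour the rational choice once the main road is closed.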
5. LEARNING AGENTS
▪ A learning agent in AI is the type of agent that can learn from its past experiences;
it has learning capabilities. It starts to act with basic knowledge and is then able to
act and adapt automatically through learning.
Intelligent agents have been applied in many real-life situations.
Vacuum cleaning
For a vacuum cleaner, the surface to be cleaned is the environment (e.g., room, table,
carpet). Sensors employed in vacuum cleaning (cameras, dirt detection sensors,
etc.) sense the condition of the environment. Actuators such as brushes, wheels, and
vacuum extractors are used to perform actions.
55
Next, Chapter 3 : Natural Language Processing (NLP) Basics
56