
Chapter 2: Agent and Environment



An AI system consists of an agent and its environment; the agent acts upon the environment.

Agent:

• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
• An agent runs in a cycle of perceiving, thinking, and acting (sketched below).
• The agent receives information from the environment through its sensors, stores and processes that information in the agent program, and performs actions using its effectors.
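
This perceive-think-act cycle can be pictured as a short loop. The following Python sketch is purely illustrative: the Environment class and its methods are hypothetical placeholders, not part of any standard library.

# A minimal perceive-think-act loop. "Environment" and its methods
# (current_percept, apply) are hypothetical placeholders used only
# to make the cycle concrete.

class SimpleAgent:
    def perceive(self, environment):
        # Sensors: read the current state of the environment.
        return environment.current_percept()

    def think(self, percept):
        # Agent program: map the percept to an action.
        return "no_op" if percept == "clean" else "suck"

    def act(self, action, environment):
        # Effectors/actuators: apply the chosen action to the environment.
        environment.apply(action)

def run(agent, environment, steps=10):
    for _ in range(steps):
        action = agent.think(agent.perceive(environment))
        agent.act(action, environment)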

An agent involves the following components:

Sensors:

• A sensor is a device that detects changes in the environment and sends that information to other electronic devices.
• Sensors can be cameras, infrared devices, keyboards, eyes, ears, etc.
Effectors:
• Effectors are the devices through which an agent affects its environment.
• Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
Actuators:
• Actuators are the components of machines that convert energy into motion.
• Actuators are responsible for moving and controlling a system.
• An actuator can be an electric motor, gears, rails, etc.
An agent can be:
1) Software as an agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
2) Robot as an agent: A robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.

3) Human as an agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
Terminologies related to an agent:
• Percept (sense): The agent's perceptual input at the present instant.
Example: An automated AC senses the temperature at 10 o'clock.
• Percept sequence: The history of all the percepts the agent has received so far.
Example: The automated AC can store every temperature it has sensed so far.
• Agent function: An agent function is a map from the percept sequence (the history of all that the agent has perceived to date) to an action.
• Agent program: An implementation of the agent function.
• Performance of the agent: The criteria used to determine how successful the agent is in its actions.
Example: For a self-driving car, performance is analysed by the time it takes to reach the destination; if it reaches the destination in the given time, the agent is successful in its task and the user says it is performing well.
• Behaviour of the agent: The action that the agent performs after any given sequence of percepts.
Example: For a self-driving car, if the car brakes properly whenever another vehicle suddenly comes in front of it, we say it is responding correctly to the percepts it receives.
Structure of the Agent
• An intelligent agent is a combination of an agent program and an architecture:
• Intelligent agent = Agent program + Architecture
• The architecture is the machinery on which the agent runs.
• The agent program is an implementation of the agent function.
Example:
A washing machine is an AI agent, so it consists of two things:
• The agent program is the programmed chip the machine contains.
• The architecture is the washing machine's hardware, or the washing machine itself.
Example of an agent, its environment, and its structure:

Consider a vacuum-cleaner agent that works in two rooms, A and B, and list the following properties for it.
Environment: Room A and Room B.
Status of the environment: Dirty, Clean.
Percept: [location, status], e.g. [A, Dirty].
Actions: Left, Right, Suck, NoOp (no operation).
Agent function: the mapping between the percept sequence and the action.
• In this example the vacuum cleaner works in two rooms, A and B; it moves from one room to the other automatically if the current room is clean.
• Its movement actions are Left and Right, and it performs the Suck operation whenever it detects dirt.
• Observe the following list of percepts and actions:

Percept                     Action
[A, Dirty]                  Suck
[A, Clean]                  Right
[B, Dirty]                  Suck
[B, Clean]                  Left
[A, Clean], [A, Dirty]      Suck
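
The table above can be transcribed directly into a table-driven agent program. A minimal Python sketch, assuming percepts are represented as (location, status) tuples (one possible encoding, not the only one):

# Table-driven vacuum agent: the percept-to-action table as a dictionary.
# For the percept sequence [A, Clean], [A, Dirty], only the latest
# percept matters here, so a lookup on the current percept suffices.

ACTION_TABLE = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def vacuum_agent(percept):
    # Agent function: maps the current percept to an action.
    return ACTION_TABLE[percept]

print(vacuum_agent(("A", "Dirty")))   # Suck
print(vacuum_agent(("B", "Clean")))   # Left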

Intelligent Agent:
An intelligent agent is an agent which acts in a way that is expected to maximize its performance measure, given the evidence provided by what it has perceived and whatever built-in knowledge it has.
Rationality is the state of being reasonable, sensible, and having a good sense of judgment.
Rationality of an Agent:
Rationality is judged against the performance measure of the agent, which defines the criterion of success.
Example:

• Consider a vacuum cleaner that cleans two rooms, A and B; it can rest for some time by going into sleep mode, and it may make noise while sucking.
• So cleanliness is not the only criterion for measuring its performance: its noise, the time it takes to travel between the two rooms, its sleeping time, and so on all come into the picture when deciding its performance.
The rationality of an agent is measured by the following criteria:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The possible actions that the agent can perform.
• The agent's percept sequence to date.
PEAS
• The task environment of any agent is described by PEAS.
• The following properties of the agent are listed while designing a rational agent:
P: Performance measure of the AI system.
E: Environment in which it acts.
A: Actuators.
S: Sensors.
PEAS for a self-driving car:
• Performance: safety, time, legal driving, comfort.
• Environment: roads, other cars, pedestrians, road signs.
• Actuators: steering, accelerator, brake, signal, horn.
• Sensors: camera, sonar, GPS, speedometer, odometer, accelerometer, engine sensors, keyboard.
PEAS for a vacuum cleaner:
• Performance: cleanliness, efficiency (distance travelled to clean), battery life, security.
• Environment: room, table, wooden floor, carpet, various obstacles.
• Actuators: wheels, various brushes, vacuum extractor.
• Sensors: camera, dirt-detection sensor, cliff sensor, bump sensors, infrared wall sensors.
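
A PEAS description is just four lists, so it can be recorded as a small data structure. The sketch below is only a bookkeeping aid under that assumption, not a standard API:

from dataclasses import dataclass

@dataclass
class PEAS:
    # Four lists of strings: one per letter of PEAS.
    performance: list
    environment: list
    actuators: list
    sensors: list

vacuum_cleaner = PEAS(
    performance=["cleanliness", "efficiency", "battery life", "security"],
    environment=["room", "table", "wooden floor", "carpet", "obstacles"],
    actuators=["wheels", "brushes", "vacuum extractor"],
    sensors=["camera", "dirt-detection sensor", "cliff sensor",
             "bump sensors", "infrared wall sensors"],
)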

Types of agents

1) Simple reflex agent
2) Model-based reflex agent
3) Goal-based agent
4) Utility-based agent
5) Learning agent
1) Simple reflex agent:

• Simple reflex agents act only on the basis of the current percept.
• The agent function is based on condition-action rules, i.e. if (condition) then (action); a runnable sketch follows this list.
• A condition-action rule is a rule that maps a state (condition) to an action.
• If the condition is true, the action is taken; otherwise it is not.
• Example: if (temperature > 40) then turn on the AC.
• This agent function succeeds only when the environment is fully observable.
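
A minimal sketch of such a condition-action agent, assuming percepts arrive as a dictionary carrying a temperature reading (the threshold and rule format are illustrative assumptions):

# Simple reflex agent: condition-action rules consulted against the
# current percept only; the agent keeps no memory of earlier percepts.

RULES = [
    (lambda p: p["temp"] > 40, "turn_on_AC"),   # if (temp > 40) then AC on
    (lambda p: p["temp"] <= 40, "no_op"),       # otherwise do nothing
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"temp": 42}))  # turn_on_AC
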
Limitations:
• Very limited intelligence.
• No knowledge of the non-perceptual parts of the state.
• The rule table is usually too big to generate and store.

• If any change occurs in the environment, the collection of rules needs to be updated.
2) Model-based reflex agent

These are agents with memory: the agent stores information about the previous state and the current state, and performs its actions accordingly.
It mainly has two parts:
1) Model: It represents knowledge about the world; it stores the complete aspects of the task that the agent is performing.
Example: If the agent is a self-driving car, the model contains complete knowledge about driving the car.
2) Internal representation: It represents the mapping between the current percept and the previous history of the agent.
Example: While driving, if the driver wants to change lanes, he looks in the mirror to learn the present position of the vehicles behind him. While looking ahead he can only see the vehicles in front, but since he already has the positions of the vehicles behind him (from the mirror a moment ago), he can safely change lanes.
The previous and current states are updated quickly to decide the action, as the sketch below illustrates.
It works well in partially observable environments.
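
A minimal sketch of a model-based agent, encoding the mirror example as a remembered "vehicle behind" flag (the percept format and field names are invented for illustration):

# Model-based reflex agent: keeps internal state so it can act sensibly
# even when the current percept is incomplete (partial observability).
# The lane-change percepts and field names are hypothetical.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"vehicle_behind": False}  # internal model of the world

    def update_state(self, percept):
        # Fold the new percept into the remembered state.
        if "mirror_view" in percept:
            self.state["vehicle_behind"] = percept["mirror_view"] == "occupied"

    def choose_action(self, percept):
        self.update_state(percept)
        if percept.get("intent") == "change_lane":
            # Decision uses remembered state, not just what is visible now.
            return "wait" if self.state["vehicle_behind"] else "change_lane"
        return "keep_lane"

agent = ModelBasedAgent()
agent.choose_action({"mirror_view": "occupied"})       # glance in the mirror
print(agent.choose_action({"intent": "change_lane"}))  # wait
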
3) Goal-based agent:

• These are agents whose aim is to reach a goal.

• These agents take decisions based not only on the current percept but also on knowledge of the goal, i.e. the agent considers whether the present action will help it reach the destination.
• This involves searching and planning to choose actions that reach the goal; see the search sketch after the example.
Example:
If the agent is a self-driving car and the goal is the destination, then information about the route to the destination helps the car decide when to turn left or right.
So in this example the agent takes its actions according to the current percept and the goal.
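
A small search sketch for such a goal-based agent, using breadth-first search over a made-up road map (one simple way to plan a route, not the only one):

# Goal-based agent: chooses actions by searching for a path to the goal.
# The road map below is an invented example.

from collections import deque

ROADS = {
    "home": ["junction"],
    "junction": ["home", "mall", "office"],
    "mall": ["junction"],
    "office": ["junction"],
}

def plan_route(start, goal):
    # Breadth-first search: returns a list of places from start to goal.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in ROADS[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(plan_route("home", "office"))  # ['home', 'junction', 'office']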

4) Utility-based agents

• These agents are used when there are multiple options for achieving the goal and the best option must be determined.
• A utility-based agent chooses the best option based on the user's preferences; utility describes the happiness of the agent along with reaching the goal.
i.e. sometimes achieving the desired goal is not enough: we may look for a quicker, safer, or cheaper trip to the destination.
• The utility-based agent chooses the action which gives the maximum degree of happiness or satisfaction (see the sketch after the example).
Example: For a self-driving car the destination is known, but there are multiple routes. Choosing an appropriate route also matters to the overall success of the agent. Many factors go into deciding the route, such as the shortest one, the most comfortable one, etc.
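
A minimal sketch of the choice a utility-based agent makes, with invented routes and an arbitrary utility function trading comfort against travel time:

# Utility-based agent: when several options reach the goal, pick the one
# with the highest utility. The routes and weights here are invented.

routes = [
    {"name": "highway", "time_min": 30, "comfort": 0.9},
    {"name": "city",    "time_min": 45, "comfort": 0.6},
    {"name": "scenic",  "time_min": 60, "comfort": 1.0},
]

def utility(route):
    # Higher comfort is better, longer time is worse; weights are arbitrary.
    return 10 * route["comfort"] - 0.2 * route["time_min"]

best = max(routes, key=utility)
print(best["name"])  # highway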

5) Learning agent:

• A learning agent in AI is a type of agent which can learn from its past experiences, i.e. it has learning capabilities.
• It starts acting with basic knowledge and then learns to act and adapt automatically.
A learning agent has four main conceptual components (a skeleton follows this example):
Learning element: It is responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic (analysis), which describes how well the agent is doing with respect to a fixed performance standard.
Performance element: It is responsible for selecting external actions.
Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
Example: A human is the best example of a learning agent: one who can learn things from the environment, analyse whether they are right or wrong, and act on the environment when required.
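
A structural skeleton of the four components, with the learning and feedback logic stubbed out (all names and rules here are illustrative assumptions):

# Learning agent skeleton: the four conceptual components wired together.
# The rules, feedback, and suggestions below are stubs for illustration.

class LearningAgent:
    def __init__(self):
        self.rules = {}  # knowledge that the learning element improves

    def performance_element(self, percept):
        # Selects an external action using the current rules.
        return self.rules.get(percept, "default_action")

    def critic(self, percept, action):
        # Scores the action against a fixed performance standard (stub).
        return 1 if action != "default_action" else -1

    def learning_element(self, percept, action, feedback):
        # Improves the rules using the critic's feedback.
        if feedback < 0:
            self.rules[percept] = self.problem_generator()

    def problem_generator(self):
        # Suggests an action that leads to a new, informative experience.
        return "try_something_new"

agent = LearningAgent()
p = "unseen_percept"
a = agent.performance_element(p)                  # default_action at first
agent.learning_element(p, a, agent.critic(p, a))  # critic triggers learning
print(agent.performance_element(p))               # try_something_new
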
Types of environment
1. Fully Observable vs. Partially Observable
• When an agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable.
• If the agent cannot sense the complete state of the environment at each point in time, the environment is said to be partially observable.

• Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
Example:
Chess: the board is fully observable; a player can see his own moves as well as the opponent's moves.
Poker (cards): the environment is partially observable because a player can see the cards in his own hand but cannot see the cards in the other players' hands.
2. Deterministic vs. Stochastic
• If the agent's current state and action completely determine the next state of the environment, the environment is said to be deterministic: each action has a unique, predictable outcome.
• If the agent's current state and action cannot completely determine the next state, the environment is called stochastic: there is randomness in what happens next.
Example:
Traffic signal: If the person at the traffic signal is the agent, then based on the present colour of the signal he can determine the next state of the signal, so it is a deterministic environment.
Self-driving car: A self-driving car cannot determine its next state from its present action alone, since the surroundings vary frequently, so it is a stochastic environment.
3. Single-agent vs. Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• An environment involving more than one agent is a multi-agent environment.
Example:
The game of football is multi-agent, as it involves eleven players in each team.
A person left alone in a maze is an example of a single-agent environment.
4. Dynamic vs. Static
• An environment that keeps changing while the agent is deciding on an action is said to be dynamic.
• An environment that does not change while the agent is acting is called static.

Example:
A vacuum cleaner in a room whose environment does not change while it cleans is working in a static environment.
A self-driving car works in a dynamic environment: other vehicles and pedestrians keep moving while the car is deciding what to do.
5. Discrete vs. Continuous
• If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
• An environment in which the possible actions cannot be enumerated, i.e. is not discrete, is said to be continuous.
Example:
Self-driving cars operate in a continuous environment: actions such as steering and accelerating vary over a continuous range and cannot be enumerated.
The game of chess is a discrete environment, as it has only a finite number of moves. The number of moves may vary from game to game, but it is still finite.

6. Episodic vs. Sequential:

• If an action taken in the current situation does not affect actions taken in later situations, the environment is called episodic.
• Here each set of actions is performed based on the current percept alone and does not affect the next actions.

Example:

Chatbot: A simple chatbot is a good example of an episodic environment, because it answers the particular question it is asked and the next answer is not linked to the previous questions.
Chess: Chess is an example of a sequential environment, because every move depends on the previous moves and every current move affects the next moves.

7. Known vs. Unknown

• Known and unknown are not actually features of the environment itself but of the agent's state of knowledge about how to act in it.
• In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how it works in order to perform actions.
• It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.

8. Accessible vs. Inaccessible

• If an agent can obtain complete and accurate information about the environment's state, the environment is called accessible; otherwise it is called inaccessible.
• An empty room whose state can be defined by its temperature is an example of an accessible environment.
• Information about an event elsewhere on earth is an example of an inaccessible environment.
