AI Chapter 2
Agent:
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
An agent runs in a cycle of perceiving, thinking, and acting.
The agent receives information from the environment through its sensors, stores and
processes that information in the agent program, and performs actions using its effectors.
Sensors:
A sensor is a device that detects a change in the environment and sends that information
to other electronic devices.
Sensors can be cameras, infrared devices, keyboards, eyes, ears, etc.
Effectors:
Effectors are the devices through which an agent acts upon the environment.
Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
Actuators:
Actuators are the components of a machine that convert energy into motion.
Actuators are responsible for moving and controlling a system.
An actuator can be an electric motor, gears, rails, etc.
The agent can be:
1) Software as an agent: A software agent can have keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
2) Robot as an agent: A robotic agent can have cameras and infrared range finders as sensors
and various motors as actuators.
3) Human as an agent: A human agent has eyes, ears, and other organs that work as sensors,
and hands, legs, and the vocal tract that work as actuators.
Terminologies related to Agent:
Percept (Sense): The agent's perceptual input at the present instant.
Example: An automated AC detects the temperature at 10 o'clock.
Percept Sequence: The complete history of all the percepts the agent has received so far.
Example: The automated AC can store every temperature it has sensed so far.
Agent Function: An agent function is a map from the percept sequence (the history of all
that the agent has perceived to date) to an action.
Agent Program: An implementation of the agent function.
Performance of the agent: The criteria used to determine how successful the agent is
in its actions.
Example: For a self-driving car, performance can be measured by the time it takes to reach
the destination. If it reaches the destination in the given time, the agent is successful
in its task and the user says it is performing well.
Behavior of the agent: The action that the agent performs after any given sequence of
percepts.
Example: For the self-driving car, if it brakes properly when another vehicle suddenly
comes in front of it, we say it is reacting correctly to the percepts it received.
Structure of the Agent
An intelligent agent is a combination of an agent program and an architecture:
Intelligent Agent = Agent Program + Architecture
The architecture is the machine on which the agent runs.
The agent program is an implementation of the agent function.
Example:
A washing machine viewed as an AI agent consists of two parts:
The agent program is the programmed chip the machine contains.
The architecture is the washing machine's hardware, i.e., the machine itself.
Example of an agent, its environment and structure
Consider a vacuum-cleaner agent and list the following properties for it.
Environment: Room A and Room B.
Status of environment: Dirty, Clean.
Percept: [location, status], e.g. [A, dirty].
Actions: Left, Right, Suck, No-operation.
Agent function: the mapping between the percept sequence and the action.
In this example the vacuum cleaner works in two rooms, A and B. It moves
between A and B automatically when the current room is clean.
Its moving actions are Left and Right, and it performs the Suck action when it detects
dirt.
Observe the following list of percepts and actions:
Percept                   Action
[A, dirty]                Suck
[A, clean]                Right
[B, dirty]                Suck
[B, clean]                Left
[A, clean], [A, dirty]    Suck
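The percept-to-action mapping above can be sketched as a small table-driven agent in Python; the table entries follow the list above, while the function and variable names are illustrative:

```python
# A minimal table-driven vacuum agent (illustrative sketch).
# Percepts are (location, status) pairs; the table maps the most
# recent percept to an action, as in the percept/action list above.

ACTION_TABLE = {
    ("A", "dirty"): "Suck",
    ("A", "clean"): "Right",
    ("B", "dirty"): "Suck",
    ("B", "clean"): "Left",
}

def vacuum_agent(percept):
    """Return the action for the current percept."""
    return ACTION_TABLE[percept]

print(vacuum_agent(("A", "dirty")))  # Suck
print(vacuum_agent(("B", "clean")))  # Left
```

Note that the last row of the list ([A, clean], [A, dirty] → Suck) still ends with Suck, because the most recent percept alone determines the action in this simple table.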
Intelligent Agent:
An intelligent agent acts in a way that is expected to maximize its performance measure,
given the evidence provided by its percepts and whatever built-in knowledge it has.
Rationality is the state of being reasonable and sensible, and having a good sense of
judgment.
Rationality of Agent:
Rationality is judged against the performance measure of the agent, which defines the
criteria of success.
Example:
Consider a vacuum cleaner that cleans two rooms, A and B. It can rest for some time by
going into sleep mode, and it may make noise while sucking.
So when measuring its performance, cleanliness is not the only criterion: its noise, the
time it takes to travel between the two rooms, its sleeping time, etc., all come into
the picture when deciding its performance.
Rationality of the agent is measured by the following criteria:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The possible actions that the agent can perform.
The agent’s percept sequence to date.
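As a sketch, the multi-criteria performance measure described for the vacuum cleaner could be combined into a single score; the weights below are illustrative assumptions, not a prescribed formula:

```python
def performance_score(dirt_cleaned, noise_level, travel_time, sleep_time):
    """Combine several criteria into one performance measure.
    All weights are illustrative assumptions."""
    return (10 * dirt_cleaned    # reward units of dirt cleaned
            - 2 * noise_level    # penalize noise made while sucking
            - 1 * travel_time    # penalize time spent moving between rooms
            + 0.5 * sleep_time)  # small reward for saving energy in sleep mode

print(performance_score(dirt_cleaned=5, noise_level=3, travel_time=4, sleep_time=2))
# 10*5 - 2*3 - 1*4 + 0.5*2 = 41.0
```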
PEAS
The task environment of any agent is described by PEAS.
The following properties of the agent are listed while designing a rational agent.
P: Performance of the AI system.
E: Environment in which it acts.
A: Actuators
S: Sensors
PEAS for Self driving car
Performance: Safety, time, legal drive, comfort.
Environment: Roads, other cars, pedestrians, road signs.
Actuators: Steering, accelerator, brake, signal, horn.
Sensors: Camera, sonar, GPS, Speedometer, odometer, accelerometer, engine sensors,
keyboard.
PEAS for Vacuum cleaner
Performance: cleanliness, efficiency (distance traveled to clean), battery life, security.
Environment: room, table, wood floor, carpet, different obstacles.
Actuators: wheels, different brushes, vacuum extractor.
Sensors: camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall sensors.
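A PEAS description can also be recorded as a simple data structure; this sketch uses a hypothetical `PEAS` dataclass, filled in with the self-driving-car entries above:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Hypothetical container for a PEAS task-environment description."""
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other cars", "pedestrians", "road signs"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "sonar", "GPS", "speedometer", "odometer"],
)
print(self_driving_car.performance)
```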
1) Simple Reflex Agent:
Simple reflex agents act only on the basis of the current percept.
The agent function is based on condition-action rules, i.e. if (condition) then (action).
A condition-action rule is a rule that maps a state (condition) to an action.
If the condition is true, the action is taken; otherwise it is not.
Example: if (temp > 40)
then
turn on the AC.
This agent function only succeeds when the environment is fully observable.
Limitations:
Very limited intelligence.
No knowledge of non-perceptual parts of state.
Usually too big to generate and store.
If any change occurs in the environment, the collection of rules needs to be
updated.
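The condition-action rule above can be sketched as a simple reflex agent; the threshold of 40 follows the AC example, while the function name and return strings are illustrative:

```python
def simple_reflex_ac_agent(temp):
    """A simple reflex agent: acts only on the current percept (temperature).
    Condition-action rule: if temp > 40, turn on the AC."""
    if temp > 40:
        return "turn on AC"
    return "no operation"

print(simple_reflex_ac_agent(45))  # turn on AC
print(simple_reflex_ac_agent(30))  # no operation
```

Because the agent consults nothing but the current percept, it has no memory: the same temperature always produces the same action, which is exactly the limitation described above.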
2) Model-Based Reflex Agent:
These are agents with memory. The agent stores information about the previous state and the
current state, and performs its action accordingly.
It mainly has two parts:
1) Model: It represents knowledge about the world. It stores the complete aspects of the
task that the agent is performing.
Example: For a self-driving car, the model contains complete knowledge about driving the car.
2) Internal representation: It represents the mapping between the current percept and the
previous history of the agent.
Example: while driving, if the driver wants to change the lane, he looks into the mirror to
know the present position of vehicles behind him. While looking in front, he can only see
the vehicles in front, and as he already has the information on the position of vehicles
behind him (from the mirror a moment ago), he can safely change the lane.
The previous and current states are updated quickly to decide the action.
It works well in partially observable environments.
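The lane-change example can be sketched as a model-based reflex agent; the class, percept values, and action strings below are illustrative assumptions:

```python
class ModelBasedReflexAgent:
    """Sketch of a model-based reflex agent for the lane-change example.
    The internal state remembers what the mirror showed a moment ago
    (the lane behind), so the agent can act on more than the current percept."""

    def __init__(self):
        self.behind_clear = False  # internal state: is the lane behind clear?

    def update_state(self, mirror_percept):
        # Model update: remember the last mirror observation.
        self.behind_clear = (mirror_percept == "clear")

    def act(self, front_percept):
        # Decide using the current percept AND the stored internal state.
        if front_percept == "clear" and self.behind_clear:
            return "change lane"
        return "keep lane"

agent = ModelBasedReflexAgent()
agent.update_state("clear")   # glance in the mirror
print(agent.act("clear"))     # change lane
print(agent.act("blocked"))   # keep lane
```

A simple reflex agent looking only ahead could not make this decision safely; the stored mirror observation is what makes the partially observable situation workable.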
3) Goal-Based Agent:
These agents act so as to achieve a goal. They are used when there are multiple options to
achieve the goal and the agent must determine which option is best.
4) Utility-Based Agent:
The utility-based agent chooses the best option based on the user's preferences; utility
describes the happiness of the agent along with reaching the goal.
That is, sometimes achieving the desired goal is not enough. We may look for a quicker,
safer, cheaper trip to reach a destination.
The utility-based agent chooses the action which gives the maximum degree of happiness or
satisfaction.
Example: For a self-driving car the destination is known, but there are multiple
routes. Choosing an appropriate route also matters to the overall success of the agent.
There are many factors in deciding the route like the shortest one, the comfortable
one, etc.
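The route choice can be sketched as a utility-based agent that scores each option and picks the maximum; the utility function and its weights are illustrative assumptions:

```python
def route_utility(route):
    """Illustrative utility: shorter and more comfortable routes score higher.
    The weights are assumptions, not a prescribed formula."""
    return -1.0 * route["time"] + 2.0 * route["comfort"]

routes = [
    {"name": "highway", "time": 30, "comfort": 8},  # utility: -30 + 16 = -14
    {"name": "city",    "time": 45, "comfort": 5},  # utility: -45 + 10 = -35
]

# The utility-based agent picks the action with maximum utility.
best = max(routes, key=route_utility)
print(best["name"])  # highway
```

Both routes reach the goal, so a purely goal-based agent could not distinguish them; the utility function is what encodes the preference for the quicker, more comfortable trip.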
5) Learning Agent:
A learning agent in AI is a type of agent that can learn from its past experiences, i.e. it
has learning capabilities.
It starts acting with basic knowledge and is then able to act and adapt automatically
through learning.
A learning agent has mainly four conceptual components, which are:
Learning element: Responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
Performance element: Responsible for selecting external actions.
Problem generator: Responsible for suggesting actions that will lead to new
and informative experiences.
Example: A human is the best example of a learning agent: humans can learn from the
environment, analyze whether something is right or wrong, and act on the environment
when required.
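The four components can be sketched as a skeleton class; the toy environment (two actions, one of which pays off), the update rule, and all names are illustrative assumptions:

```python
import random

class LearningAgent:
    """Skeleton of a learning agent with its four conceptual components,
    acting in a toy environment with two actions, 'left' and 'right'."""

    def __init__(self):
        self.values = {"left": 0.0, "right": 0.0}  # learned knowledge

    def performance_element(self):
        # Selects the external action currently believed to be best.
        return max(self.values, key=self.values.get)

    def critic(self, reward):
        # Compares the outcome against the fixed performance standard;
        # here the standard is simply the reward received.
        return reward

    def learning_element(self, action, reward):
        # Improves the agent using the critic's feedback.
        feedback = self.critic(reward)
        self.values[action] += 0.5 * (feedback - self.values[action])

    def problem_generator(self):
        # Occasionally suggests an exploratory action for new experiences.
        return random.choice(list(self.values))

agent = LearningAgent()
agent.learning_element("left", 1.0)  # experience: "left" paid off
print(agent.performance_element())   # left
```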
Types of environment
1. Fully Observable vs. Partially Observable
When an agent's sensors can access the complete state of the environment at each
point in time, the environment is said to be fully observable.
If the agent cannot sense the complete state of the environment at each point in time,
the environment is partially observable.
Example: Chess is fully observable because the whole board is visible to the agent;
a self-driving car operates in a partially observable environment because its sensors
cannot capture the complete state of the road at once.
Static vs. Dynamic
If the environment does not change while the agent is acting, it is static; if it can
change, it is dynamic.
Example:
A vacuum cleaner's room does not change while it cleans the room each time, so it is a
static environment. In a chess game the pieces move all the time, so it is a dynamic
environment.
5. Discrete vs. Continuous
If an environment consists of a finite number of actions that can be performed in the
environment to obtain the output, it is said to be a discrete environment.
An environment in which the actions performed cannot be counted, i.e. is not discrete,
is said to be continuous.
Example:
Self-driving cars are an example of continuous environments, as their actions, such as
driving and parking, cannot be numbered.
The game of chess is discrete environment as it has only a finite number of moves. The
number of moves might vary with every game, but still, it’s finite.
6. Episodic vs. Sequential
If, in an environment, the action taken in the current episode does not affect actions in
later episodes, it is called an episodic environment.
Here a set of actions is performed based on the current percept alone, and it does not
affect the next actions.
Example:
Chatbot: This is a good example of an episodic environment, because it answers the
particular question asked, and the next answer is not linked to the previous
questions.
Chess: This is an example of a sequential environment, because every move in chess
depends on previous moves, and every current move affects the next move.
7. Known vs Unknown
o Known and unknown are not actually a feature of an environment, but it is an agent's
state of knowledge to perform an action.
o In a known environment, the results of all actions are known to the agent, while in an
unknown environment, the agent needs to learn how the environment works in order to act.
8. Accessible vs Inaccessible
o If an agent can obtain complete and accurate information about the environment's state,
then such an environment is called an accessible environment; otherwise it is called
inaccessible.
o An empty room whose state can be defined by its temperature is an example of an
accessible environment.
o Information about every event happening on Earth is an example of an inaccessible
environment.