Intelligent Agents


Agents in Artificial Intelligence

Artificial intelligence is defined as the study of rational agents. A rational agent can be anything that makes decisions: a person, a firm, a machine, or software. It carries out the action with the best outcome after considering past and current percepts (the agent's perceptual inputs at a given instant).
• An AI system is composed of an agent and its environment. The agents act in their environment. The environment may contain other agents.
• An agent is anything that can be viewed as perceiving its environment through sensors (the current percept plus the percept history) and acting upon that environment through actuators.

[Diagram: the agent receives percepts (current + history) from the environment through sensors; the agent program selects an action; actuators (effectors) carry out the action, changing the environment.]

Examples of Agents:
A human agent has eyes, ears, and other organs that act as sensors, and hands, legs, a mouth, and other body parts that act as actuators.
A software agent has keystrokes, file contents, and received network packets as sensory input, and acts by displaying on the screen, writing files, and sending network packets.
A robotic agent has cameras and infrared range finders that act as sensors and various motors that act as actuators.

Agent Terminology

• Performance Measure of Agent − It is the criterion that determines how successful an agent is.
• Behavior of Agent − It is the action that the agent performs after any given sequence of percepts.
• Percept − It is the agent's perceptual input at a given instant.
• Percept Sequence − It is the history of everything the agent has perceived to date.
• Agent Function − It is a map from the percept sequence to an action.

The Structure of Intelligent Agents

An agent's structure can be viewed as −

• Agent = Architecture + Agent Program
• Architecture = the machinery that the agent executes on. It is a device with sensors and actuators, for example a robotic car, a camera, or a PC.
• Agent Program = an implementation of the agent function.
• Agent Function = a map from the percept sequence (the history of all that the agent has perceived to date) to an action.
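
To make this split concrete, here is a minimal sketch in Python of an agent program that implements an agent function. The choose_action callback is an illustrative assumption, not part of any standard API:

class AgentProgram:
    def __init__(self, choose_action):
        # History of everything the agent has perceived to date.
        self.percept_sequence = []
        self.choose_action = choose_action  # percept sequence -> action

    def __call__(self, percept):
        self.percept_sequence.append(percept)
        # Agent function: map the percept sequence to an action.
        return self.choose_action(self.percept_sequence)

# Usage: an agent whose agent function reacts only to the latest percept.
echo_agent = AgentProgram(lambda history: "act_on(" + history[-1] + ")")
print(echo_agent("percept_1"))  # -> act_on(percept_1)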

Rationality

Rationality is the state of being reasonable, sensible, and having good judgment.
Rationality is concerned with expected actions and results depending upon what the
agent has perceived. Performing actions with the aim of obtaining useful information
is an important part of rationality.

What is an Ideal Rational Agent?

An ideal rational agent is one that is capable of performing the expected actions to maximize its performance measure, on the basis of −

• Its percept sequence
• Its built-in knowledge base

The rationality of an agent depends on the following −
• The performance measure, which determines the degree of success.
• The agent's percept sequence so far.
• The agent's prior knowledge about the environment.
• The actions that the agent can carry out.
A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful given the percept sequence. The problem the agent solves is characterized by its Performance measure, Environment, Actuators, and Sensors (PEAS).
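As an illustration, a PEAS description can be written down as a simple data structure. The sketch below uses the textbook vacuum-cleaner world as an assumed example; the field values are illustrative, not taken from this text:

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Assumed example: the two-square vacuum-cleaner world.
vacuum_world = PEAS(
    performance_measure=["cleanliness", "time taken", "electricity used"],
    environment=["squares A and B", "dirt"],
    actuators=["move left", "move right", "suck"],
    sensors=["location sensor", "dirt sensor"],
)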
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
Simple reflex agents

Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. (The percept history is everything the agent has perceived to date.) The agent function is based on condition-action rules. A condition-action rule maps a state, i.e., a condition, to an action: if the condition is true, the action is taken; otherwise it is not. This agent function succeeds only when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable; it may be possible to escape them if the agent can randomize its actions.
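A minimal sketch of a condition-action agent for an assumed two-square vacuum world follows; the percept format and the rules themselves are illustrative assumptions. Note that the agent inspects only the current percept, never the history:

def simple_reflex_vacuum_agent(percept):
    location, status = percept  # e.g. ("A", "dirty")
    # Condition-action rules: if the condition is true, take the action.
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"

print(simple_reflex_vacuum_agent(("A", "dirty")))  # -> suck
print(simple_reflex_vacuum_agent(("B", "clean")))  # -> move_left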
Problems with simple reflex agents:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The condition-action rule table is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to be updated.

Model-based reflex agents

A model-based reflex agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. The agent keeps track of an internal state that is adjusted by each percept and depends on the percept history; this state, stored inside the agent, describes the part of the world that cannot currently be seen. Updating the state requires information about:
• how the world evolves independently of the agent, and
• how the agent's actions affect the world.
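
The state-keeping described above can be sketched as follows; the update_state and rules callbacks are assumed placeholders for the world model and the condition-action rules:

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}           # the agent's model of the unseen parts of the world
        self.last_action = None
        # update_state(state, last_action, percept) -> new state; encodes how the
        # world evolves on its own and how the agent's actions affect it.
        self.update_state = update_state
        self.rules = rules        # rules(state) -> action

    def act(self, percept):
        # Adjust the internal state with the model and the new percept.
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action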
Goal-based agents

A goal-based agent has an agenda, you might say. It operates based on a goal in front of it and makes decisions based on how best to reach that goal. Unlike a simple reflex agent that makes decisions based solely on the current environment, a goal-based agent is capable of thinking beyond the present moment to decide the best actions to take in order to achieve its goal. In this regard, a goal-based agent operates as a search and planning function, meaning it targets the goal ahead and finds the right actions to reach it. This helps a goal-based agent to be proactive rather than simply reactive in its decision-making.
These kinds of agents make decisions based on how far they currently are from their goal (a description of desirable situations). Every action is intended to reduce the distance to the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible; their behavior can easily be changed. They usually require search and planning. Examples: G plus, Alibaba.
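
Since goal-based agents rely on search, a minimal sketch of goal-directed action selection is shown below: a breadth-first search from the current state to a goal state that returns the first action on the path found. The successor function and goal test are assumed to be supplied by the problem:

from collections import deque

def goal_based_action(start, is_goal, successors):
    # successors(state) -> iterable of (action, next_state) pairs.
    frontier = deque([(start, [])])  # (state, actions taken to reach it)
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions[0] if actions else None  # None: already at the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # no goal state is reachable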

Utility-based agents

Agents developed with their end uses as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, or cheaper trip to reach a destination. Agent happiness should be taken into consideration, and utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
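Expected-utility maximization can be sketched in a few lines; the outcome model (an action's possible results with their probabilities) and the utility function are assumed inputs:

def expected_utility(action, outcome_model, utility):
    # outcome_model(action) -> list of (probability, resulting_state) pairs.
    return sum(p * utility(s) for p, s in outcome_model(action))

def best_action(actions, outcome_model, utility):
    # Choose the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))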
Learning Agent
A learning agent in AI is the type of agent that can learn from its past experiences; it has learning capabilities. It starts with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has four main conceptual components:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
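
A minimal sketch wiring these four components together follows; the component interfaces are illustrative assumptions, not a standard API:

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores behavior against a fixed standard
        self.problem_generator = problem_generator      # suggests informative exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                  # how well is the agent doing?
        self.learning_element(self.performance_element, feedback)
        exploratory = self.problem_generator(percept)    # maybe try something new
        return exploratory or self.performance_element(percept)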

The Nature of Environments

Some programs operate in an entirely artificial environment confined to keyboard input, a database, computer file systems, and character output on a screen.
In contrast, some software agents (software robots, or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed, complex environment, and the software agent must choose from a long array of actions in real time. A softbot designed to scan a customer's online preferences and show the customer interesting items works in a real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in which one real and one artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.

Turing Test
The success of intelligent behavior of a system can be measured with the Turing Test. Two people and the machine to be evaluated participate in the test. One of the two people plays the role of the tester, and each participant sits in a different room. The tester does not know who is the machine and who is the human. The tester asks questions by typing and sending them to both intelligences, and receives typed responses.
The test aims to fool the tester: if the tester fails to distinguish the machine's responses from the human's, the machine is said to be intelligent.

Properties of Environment

The environment has multifold properties −

• Discrete / Continuous − If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise it is continuous (for example, driving).
• Observable / Partially Observable − If it is possible to determine the complete state of the environment at each time point from the percepts, it is observable; otherwise it is only partially observable.
• Static / Dynamic − If the environment does not change while an agent is
acting, then it is static; otherwise it is dynamic.
• Single agent / Multiple agents − The environment may contain other agents
which may be of the same or different kind as that of the agent.
• Accessible / Inaccessible − If the agent’s sensory apparatus can have access
to the complete state of the environment, then the environment is accessible
to that agent.
• Deterministic / Non-deterministic − If the next state of the environment is
completely determined by the current state and the actions of the agent, then
the environment is deterministic; otherwise it is non-deterministic.
• Episodic / Non-episodic − In an episodic environment, each episode consists
of the agent perceiving and then acting. The quality of its action depends just
on the episode itself. Subsequent episodes do not depend on the actions in the
previous episodes. Episodic environments are much simpler because the agent
does not need to think ahead.
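
As a rough illustration, these properties can be recorded per task environment. The sketch below classifies the two examples mentioned above (chess and driving); the exact values for driving are a common textbook characterization, included here as an assumption:

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    name: str
    discrete: bool
    observable: bool
    static: bool
    deterministic: bool
    episodic: bool
    single_agent: bool

chess = TaskEnvironment("chess", discrete=True, observable=True, static=True,
                        deterministic=True, episodic=False, single_agent=False)
driving = TaskEnvironment("driving", discrete=False, observable=False, static=False,
                          deterministic=False, episodic=False, single_agent=False)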
