Lecture 2: Agent


Agent

Abu Saleh Musa Miah


Assist. Professor, Dept. of CSE, BAUST, Bangladesh
email: musa@baust.edu.bd, tel: +8801734264899
web: www.baust.edu.bd/cse
Outline
 Knowledge
 Agents and environments
 Rationality
 PEAS (Performance measure, Environment,
Actuators, Sensors)
 Environment types
 Agent types
Artificial Intelligence
In which we discuss the nature of agents, perfect or otherwise, the
diversity of environments, and the resulting menagerie of agent types.

Turing Test: the Turing test provides one way to measure whether an agent behaves
intelligently or not.
Rational Agent: a rational agent is one that acts so as to achieve the best outcome
or, when there is uncertainty, the best expected outcome; in short, it does the
right thing.

Knowledge
• Can be defined as the body of facts & principles
accumulated by humankind or the act, fact, or
state of knowing.
• This is true but incomplete; knowledge is much more than this
• It is having a familiarity with language, concepts,
procedures, rules, ideas, abstractions, places,
customs, facts, & associations, coupled with an
ability to use these notions effectively in
modeling different aspects of the world.
Knowledge
• The meaning of knowledge is closely related to the meaning of
intelligence.
• Intelligence requires the possession of, and access to, knowledge
• A characteristic of intelligent people is that they possess much
knowledge
• Knowledge is likely stored as complex structures of interconnected
neurons.
• Symbolic representation
• Human brain vs. computer:
  – weight: 3.3 lbs vs. 100 gms
  – storage medium: ~10^12 neurons vs. magnetic spots & voltage states
  – capacity: ~10^14 bits vs. ~10^12 bits, doubling about every 3~4 years
• The gap between human & computer storage capacities is narrowing
rapidly
• Still a wide gap between representation schemes & efficiencies
Knowledge
• Declarative vs. procedural
• Procedural: compiled knowledge related to the performance of some tasks
• The steps used to solve an algebraic equation
• Declarative: passive knowledge expressed as statements of facts about the
world.
• Personal data in a database
• Heuristic Knowledge: special type of knowledge used by humans to solve
complex problems.
• The knowledge used to make good judgments, or the strategies, tricks, or ‘rules
of thumb’ used to simplify the solution of problems.
• Heuristics are usually acquired with much experience
 Fault in a TV set
 an experienced technician will not start by making numerous voltage checks
when it is clear that the sound is present but the picture is not
 The high voltage flyback transformer or related component is the culprit
• May not always be correct
• But frequently/quickly can find a solution
Knowledge and Data
• Knowledge should not be confused with data
• A physician treating a patient uses both knowledge & data
• Data: the patient's record: history, measurements of vital signs, drugs
given, responses to drugs, …
• Knowledge: what the physician learned in medical school,
internship, residency, specialization, and practice
• Knowledge includes & requires the use of data &
information
• It combines relationships, correlations, dependencies, & the
notion of gestalt with data & information
Belief, Hypothesis, & Knowledge
Belief: defined as essentially any meaningful &
coherent expression that can be represented
• It may be true or false
Hypothesis: defined as a justified belief that is
not known to be true
• Thus a hypothesis is a belief that is backed
up with some supporting evidence, but it may
still be false
Knowledge: defined as true, justified belief
Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors &
acting upon that environment through actuators

• Human agent: eyes, ears, & other organs for sensors;
  hands, legs, mouth, & other body parts for actuators

• Robotic agent: cameras & infrared range finders for sensors;
  various motors for actuators
Agents and Environments
A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets.

A percept refers to the agent’s perceptual inputs at any given instant.

An agent’s percept sequence is the complete history of everything the agent
has ever perceived.

Mathematically speaking, we say that an agent’s behavior is described by the
agent function that maps any given percept sequence to an action.
Agents and environments

• The agent function maps from percept histories to actions:
  f : P* → A
• The agent program runs on the physical
architecture to produce f
• agent = architecture + program
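A minimal Python sketch (not from the original slides) of the agent function f : P* → A realized literally as a table over percept histories; the percept format (location, status) and the table entries are illustrative assumptions borrowed from the vacuum-cleaner world on the next slide.

```python
# A minimal sketch (illustrative only) of f: P* -> A as a lookup table.
# The percept format and table entries below are hypothetical.

def table_driven_agent_program(table):
    """Return an agent program that maps the full percept sequence to an action."""
    percept_sequence = []          # the agent's percept history (P*)

    def program(percept):
        percept_sequence.append(percept)
        # Look up the action for the entire history seen so far.
        return table.get(tuple(percept_sequence), "NoOp")

    return program

# Hypothetical table for short percept histories in the vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = table_driven_agent_program(table)   # the program; the robot hardware would be the architecture
print(agent(("A", "Clean")))                 # -> Right
print(agent(("B", "Dirty")))                 # -> Suck
```

The table grows with every possible percept history, which is why the later slides argue for a smallish program rather than a vast table.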
Vacuum-cleaner world

• Percepts: location & contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
Vacuum-cleaner world
 This particular world has just two locations: squares A and B.

 The vacuum agent perceives which square it is in and whether there is dirt in the
square.

 It can choose to move left, move right, suck up the dirt, or do nothing.

 One very simple agent function is the following: if the current square is dirty, then suck;
otherwise, move to the other square.

 What is the right way to fill out the table?

 In other words, what makes an agent good or bad, intelligent or stupid?
A vacuum-cleaner agent

 What is the right function?


 Can it be implemented in a small agent program?
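One way to answer the second question, as a hedged Python sketch: the rule "suck if the current square is dirty, otherwise move to the other square" needs only a few lines, in contrast to the table over percept histories sketched earlier. The percept format (location, status) is an assumption made for illustration.

```python
# A small reflex vacuum-agent program (a sketch of the rule stated above:
# suck if the current square is dirty, otherwise move to the other square).

def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```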
Rationality
• An agent should strive to "do the right thing", based on what
it can perceive & the actions it can perform.

• The right action is the one that will cause the agent to be
most successful

• Rational Agent: For each possible percept sequence, a rational agent
should select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
Rationality
• A fixed performance measure evaluates the environment sequence
- one point per square cleaned up in time T?
- one point per clean square per time step, minus one per move? (sketched after this slide)
- penalize for > k dirty squares?

• A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date

• Rational ≠ omniscient
- percepts may not supply all relevant information
• Rational ≠ clairvoyant
- action outcomes may not be as expected
• Hence, rational ≠ successful
• Rational ⇒ exploration, learning, autonomy
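As an illustration of the second candidate measure above (one point per clean square per time step, minus one per move), here is a small Python sketch; the environment-history format, a list of (square statuses, action) pairs, is an assumption made only for this example.

```python
# A sketch of one performance measure: one point per clean square per time
# step, minus one point per move. The history format is assumed for
# illustration: each entry records the squares' statuses and the action taken.

def performance(history):
    score = 0
    for squares, action in history:
        score += sum(1 for status in squares.values() if status == "Clean")
        if action in ("Left", "Right"):
            score -= 1                      # moving costs one point
    return score

history = [
    ({"A": "Dirty", "B": "Dirty"}, "Suck"),
    ({"A": "Clean", "B": "Dirty"}, "Right"),
    ({"A": "Clean", "B": "Dirty"}, "Suck"),
    ({"A": "Clean", "B": "Clean"}, "NoOp"),
]
print(performance(history))   # 0 + (1 - 1) + 1 + 2 = 3
```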
PEAS
• PEAS: Performance measure, Environment, Actuators, Sensors

• To design a rational agent, we must specify the task environment

• Consider, e.g., the task of designing an automated taxi:

• Performance measure?? safety, destination, profits, legality, comfort, …
• Environment?? US streets/freeways, traffic, pedestrians, weather, …
• Actuators?? steering, accelerator, brake, horn, speaker/display, …
• Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS, …
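A tiny, hypothetical Python sketch that records the taxi's PEAS description as plain data; the dataclass and field names are my own, not a standard API.

```python
# Recording a PEAS description as data (illustrative only).
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance=["safety", "destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "keyboard", "GPS"],
)
print(taxi.actuators)
```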
Specifying the task environment
 To design an agent, we must specify the performance measure, the environment, and
the agent’s actuators and sensors; we group all these under the heading of the task
environment.

 PEAS (Performance, Environment, Actuators, Sensors).

 A difficult example: an automated taxi driver
Specifying the task environment
 What performance measure would we like our automated driver to aspire to?
 Getting to the correct destination; minimizing fuel consumption and wear and tear;
 minimizing the trip time or cost;
 minimizing violations of traffic laws and disturbances to other drivers;
 maximizing safety and passenger comfort;
 maximizing profits.
 Obviously, some of these goals conflict, so trade-offs will be required.

Specifying the task environment

 What environment will the taxi face?
 A variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways.
 Traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes.

 The actuators for an automated taxi include:
 control over the engine through the accelerator, and control over steering and braking.
 It will also need an output display screen or voice synthesizer to talk back to the
passengers, and some way to communicate with other vehicles, politely or otherwise.
Specifying the task environment
 The basic sensors for the taxi will include one or more controllable video cameras
so that it can see the road; it might augment these with infrared or sonar sensors to
detect distances to other cars and obstacles. To avoid speeding tickets, the taxi
should have a speedometer, and to control the vehicle properly, especially on
curves, it should have an accelerometer.

 To determine the mechanical state of the vehicle, it will need the usual array of
engine, fuel, and electrical system sensors. Like many human drivers, it might want
a global positioning system (GPS) so that it doesn’t get lost. Finally, it will need a
keyboard or microphone for the passenger to request a destination.
PEAS
• Agent: Medical diagnosis system
• Performance measure?? Healthy patient,
minimize costs, lawsuits
• Environment?? Patient, hospital, staff
• Sensors?? Keyboard (entry of symptoms,
findings, patient's answers)
• Actuators?? Screen display (questions, tests,
diagnoses, treatments, referrals)
PEAS
• Agent: Part-picking robot
• Performance measure?? Percentage of parts
in correct bins
• Environment?? Conveyor belt with parts, bins
• Actuators?? Jointed arm and hand
• Sensors?? Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure??: Maximize student's
score on test
• Environment??: Set of students
• Actuators??: Screen display (exercises,
suggestions, corrections)
• Sensors??: Keyboard
Internet shopping agent
• Performance measure?? price, quality,
appropriateness, efficiency
• Environment?? current and future WWW sites,
vendors, shippers
• Actuators?? display to user, follow URL, fill in
form
• Sensors?? HTML pages (text, graphics, scripts)
Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.

• Deterministic (vs. stochastic): The next state of the environment is
completely determined by the current state and the action executed by the
agent. (If the environment is deterministic except for the actions of other
agents, then the environment is strategic.)

• Episodic (vs. sequential): The agent's experience is divided into atomic
"episodes" (each episode consists of the agent perceiving and then
performing a single action), and the choice of action in each episode
depends only on the episode itself.
Environment types
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating.
(The environment is semi-dynamic if the
environment itself does not change with the
passage of time but the agent's performance
score does)

• Discrete (vs. continuous): A limited number of distinct, clearly defined
percepts and actions.

• Single agent (vs. multiagent): An agent operating by itself in an
environment.
Environment types
                  Solitaire   Backgammon   Internet shopping      Taxi
Observable??      Yes         Yes          No                     No
Deterministic??   Yes         No           Partly                 No
Episodic??        No          No           No                     No
Static??          Yes         Semi         Semi                   No
Discrete??        Yes         Yes          Yes                    No
Single-agent??    Yes         No           Yes (except auctions)  No

The environment type largely determines the agent design.

The real world is (of course) partially observable, stochastic, sequential,
dynamic, continuous, and multi-agent.
Structure of Agents

 The job of AI is to design an agent program that implements the agent function.

 agent = architecture + program

 Agent programs take the current percept as input from the sensors
and return an action to the actuators.
Structure of Agents

AI should find out how to write programs that, to the extent possible,
produce rational behavior from a smallish program rather than from a vast table.

We outline four basic kinds of agent programs that embody the principles
underlying almost all intelligent systems:
Agent types
• Four basic types in order of increasing generality:
 simple reflex agents
 reflex agents with state
 goal-based agents
 utility-based agents
• All these can be turned into learning agents
Simple reflex agents

Example: Vacuum Cleaner


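A hedged Python sketch of the general simple-reflex structure: interpret the current percept into a state description and match it against condition-action rules; only the current percept is used, never the history. The rule encoding (a plain dictionary) is an illustrative assumption.

```python
# Sketch of a simple reflex agent: current percept -> state description ->
# condition-action rule -> action. No memory of past percepts is kept.

RULES = {                       # hypothetical condition -> action rules
    "dirty": "Suck",
    "clean-at-A": "Right",
    "clean-at-B": "Left",
}

def interpret_input(percept):
    """Turn the raw percept into a state description used by the rules."""
    location, status = percept
    return "dirty" if status == "Dirty" else f"clean-at-{location}"

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return RULES.get(state, "NoOp")

print(simple_reflex_agent(("B", "Dirty")))   # -> Suck
```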
Reflex agents with state
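A hedged Python sketch of a reflex agent with state for the vacuum world: it keeps an internal model of which squares it believes are clean and stops once both are. The bookkeeping shown is one possible choice, not the only one.

```python
# Sketch of a reflex agent with internal state (a model-based reflex agent).
# The agent tracks which squares it believes are clean and issues NoOp once
# its model says both squares are clean.

class StatefulVacuumAgent:
    def __init__(self):
        self.known_clean = set()            # internal model of the world

    def program(self, percept):
        location, status = percept
        if status == "Dirty":
            self.known_clean.discard(location)
            return "Suck"
        self.known_clean.add(location)
        if self.known_clean >= {"A", "B"}:  # model says everything is clean
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = StatefulVacuumAgent()
print(agent.program(("A", "Clean")))   # -> Right
print(agent.program(("B", "Clean")))   # -> NoOp (both squares believed clean)
```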
Goal-based agents
Utility-based agents
Learning agents
Summary
• Agents interact with environments through actuators and
sensors
• The agent function describes what the agent does in all
circumstances
• The performance measure evaluates the environment sequence
• A perfectly rational agent maximizes expected performance
• Agent programs implement (some) agent functions
• PEAS descriptions define task environments

• Environments are categorized along several dimensions:
observable? deterministic? episodic? static? discrete? single-agent?

• Several basic agent architectures exist:
reflex, reflex with state, goal-based, utility-based
How the components of agent programs work
Roughly speaking, we can place the representations along an axis of increasing
complexity and expressive power—atomic, factored, and structured.

Figure 2.16: Three ways to represent states and the transitions between them.
(a) Atomic representation: a state (such as B or C) is a black box with no
internal structure.
(b) Factored representation: a state consists of a vector of attribute
values; values can be Boolean, real-valued, or one of a fixed set of symbols.
(c) Structured representation: a state includes objects, each of which may
have attributes of its own as well as relationships to other objects.
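A short, illustrative Python sketch of one taxi-world state written in the three representation styles; the attribute names, objects, and relations are assumptions made only for this example.

```python
# Three sketches of the same state in the three representation styles.

# (a) Atomic: the state is just an opaque label with no internal structure.
atomic_state = "B"

# (b) Factored: the state is a vector of attribute values
#     (Boolean, real-valued, or symbolic).
factored_state = {"fuel": 0.7, "gps": (40.71, -74.00), "oil_warning": False}

# (c) Structured: the state contains objects with attributes of their own
#     plus relationships to other objects.
structured_state = {
    "objects": {"truck": {"type": "vehicle"}, "cow": {"type": "animal"}},
    "relations": [("in_front_of", "truck", "taxi"), ("beside", "cow", "road")],
}
```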
How the components of agent programs work
Searching algorithm
Game Playing
Hidden Markov Model
Constraint Satisfaction Problem
Propositional Logic
Planning
Bayesian Network
Genetic Algorithm
Machine Learning
First Order Logic
Knowledge-based learning
FOPL
Confession
It is possible that some sentences or some information were
included in these slides without mentioning the exact references.
I am sorry for violating rules of intellectual property. When
I have a bit more time, I will try my best to avoid such
things.
These slides are only for students, in order to give them very
basic concepts about the giant, “Artificial Intelligence”, not
for experts.
Since I am not an expert, these slides could have
wrong/inconsistent information; I am sorry for that.
Students are requested to check the references and books, or to
talk to experts.
