
CSCI218: Foundations of Artificial Intelligence
Outline
§ What is AI?
§ Risks and Benefits of AI
§ Agents and Environments
§ The Concept of Rationality
§ The Nature of Environments
§ The Structure of Agents
What is AI?

§ Researchers have pursued several different versions of AI


§ Two dimensions
§ Human vs. rational
§ Thought vs. behavior
§ These lead to four approaches
§ Acting humanly: the Turing test approach
§ Thinking humanly: the cognitive modeling approach
§ Acting rationally: the rational agent approach
§ Thinking rationally: the “laws of thought” approach
What is AI?

Acting humanly: The Turing test approach


§ What capabilities are needed?
§ Natural language processing
§ Knowledge representation
§ Automated reasoning
§ Machine learning
§ Computer vision
§ Speech recognition
§ Robotics
What is AI?

Thinking humanly: The cognitive modeling approach


§ We can learn about human thought in three ways:
§ introspection—trying to catch our own thoughts as they go by;
§ psychological experiments—observing a person in action;
§ brain imaging—observing the brain in action.
§ Cognitive science
§ Computer models from AI + experimental techniques from psychology
§ Construct precise and testable theories of the human mind.
§ Read minds (AI + Cognitive science)
What is AI?

Thinking rationally: The “laws of thought” approach


§ Syllogisms from Aristotle
§ “Socrates is a man; all men are mortal; therefore Socrates is mortal.”
§ Logic and certainty

§ Probability and uncertainty


What is AI?

Acting rationally: The rational agent approach


§ Agent: something that acts
§ Computer programs
§ Computer agents
§ A rational agent
§ is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
§ Two advantages over the other approaches
§ More general; more amenable to scientific development
What is AI?
The rational-agent approach to AI has prevailed
§ AI has focused on
§ The study and construction of agents that do the right thing
§ The right thing is defined by the objective that we provide to the agent

§ Logical foundations and definite plans
§ Probability theory and machine learning
§ The standard model


§ Perfect rationality vs. limited rationality
§ The inadequacy of the standard model
Risks and Benefits of AI
“Mechanical arts are of ambiguous use, serving as well for hurt as for remedy”
§ Benefits of using AI
§ Free humanity from work; Increase goods; Accelerate research, etc.
§ Risks of misuse of AI
§ Weapons, surveillance, bias, employment, safety, security, etc.
§ As AI systems become more capable, they will take on more roles
§ The importance of governance and, eventually, regulation.
Agents and environments
Agents and environments
§ An agent perceives its environment through sensors and acts upon
it through actuators
§ The agent function maps percept sequences to actions
§ It is generated by an agent program running on a machine
§ What is the environment?
§ Part of the universe whose state we care about when designing this agent
§ It affects what the agent perceives
§ It is affected by the agent’s actions.
Agents and environments
§ A vacuum-cleaner world with just two locations.
§ Each location can be clean or dirty
§ The agent can move left or right and can clean the square that it occupies.

• What is the environment?
• What are the percepts?
• What are the actions?
• What is the agent function?
• What makes the vacuum-cleaner agent good or bad?
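The questions above can be answered concretely with a small sketch. This is an illustrative toy implementation, not standard library code: the location names, the percept format `(location, status)`, and the agent function are all assumptions made for the example.

```python
# A minimal sketch of the two-location vacuum world described above.
# Percepts are (location, status) pairs; actions are strings.

def vacuum_agent(percept):
    """Agent function: maps the current percept to an action."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# A tiny environment to exercise the agent.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
for _ in range(4):
    action = vacuum_agent((location, world[location]))
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"

print(world)  # {'A': 'Clean', 'B': 'Clean'}
```

A natural performance measure here would count clean squares per time step, which already hints at why the agent's endless left-right shuttling on a clean floor might be penalized.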
Good Behavior: The Concept of Rationality
§ A rational agent: one that does the right thing.
§ What does it mean to do the right thing?
§ Consequentialism
§ We evaluate an AI agent’s behavior by its consequences.
§ Perception → Action → Environment → State change
§ Is the resulting sequence of environment states desirable?
§ This notion of desirability
§ Captured by a performance measure
§ It evaluates any given sequence of environment states.
§ Performance measure is in the mind of the designer / users of the machine
§ It can be quite hard to formulate a performance measure correctly
Good Behavior: The Concept of Rationality
As a general rule, it is better to design performance measures according to what one
actually wants to be achieved in the environment, rather than according to how one
thinks the agent should behave.

• Measure its performance by the amount of dirt cleaned up in a single eight-hour shift?
• Or, reward the agent for having a clean floor?

• In this subject, we usually assume that the performance measure can be specified correctly (although that is not always the case)
Good Behavior: The Concept of Rationality
Game of “wolf versus sheep”
• Wolves and sheep are placed at random.
• The wolves need to catch all the sheep in 20 seconds while avoiding some boulders.

• A point system is implemented
• Catching a sheep: +10 points
• Hitting a boulder: -1 point
• -0.1 point per second elapsed, to encourage the wolves to catch the sheep quickly

• The goal is to train the AI wolves to figure out a way to maximize their scores.
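The point system above is a performance measure in code form. A minimal sketch (the function name and argument encoding are illustrative assumptions):

```python
# Performance measure for the wolves in one episode, following the
# point system described above.

def wolf_score(sheep_caught, boulders_hit, seconds_elapsed):
    return 10 * sheep_caught - 1 * boulders_hit - 0.1 * seconds_elapsed

# Catching 3 sheep and hitting 1 boulder over 12 seconds:
print(wolf_score(3, 1, 12))   # 30 - 1 - 1.2 = 27.8
```

Writing the measure down this explicitly makes its loopholes visible; a famous outcome of this kind of setup is agents discovering that an unintended behavior (e.g. suiciding into a boulder immediately) scores better than the behavior the designer wanted.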
Good Behavior: The Concept of Rationality
• Rationality depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.

• A definition of a rational agent:


• For each possible percept sequence, a rational agent should select an action
that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the agent
has.
Good Behavior: The Concept of Rationality
• Difference between “rationality” and “omniscience”
• An omniscient agent knows the actual outcome of its actions and can act
accordingly
• Rationality is not the same as perfection.
• Rationality maximizes expected performance, while perfection maximizes actual
performance
• Rational choice depends only on the percept sequence to date.

• Our definition requires a rational agent not only to gather information but
also to learn as much as possible from what it perceives.

• A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge.
The Nature of Environments
• Task environments
• The “problems” to which rational agents are the “solutions.”
• PEAS
• Performance, Environment, Actuators, Sensors
• We must always be able to specify the task environment as fully as possible.
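A PEAS description is just a structured record, so it can be written down as one. A sketch using the automated-taxi example from this lecture; the class name, field names, and list contents are illustrative assumptions:

```python
# A PEAS specification captured as a small record.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["income", "happy customers", "vehicle costs", "fines"],
    environment=["streets", "other drivers", "customers", "weather"],
    actuators=["steering", "brake", "gas", "display/speaker"],
    sensors=["camera", "radar", "accelerometer", "GPS"],
)
print(taxi.performance)
```

Forcing each of the four slots to be filled in is a useful discipline: a blank or hand-wavy entry usually signals an under-specified task environment.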
The Nature of Environments

Properties of task environments


§ Fully observable vs. partially observable
§ Single-agent vs. multiagent
§ Deterministic vs. nondeterministic
§ Episodic vs. sequential
§ Static vs. dynamic
§ Discrete vs. continuous
§ Known vs. unknown
The Nature of Environments

The hardest case is partially observable, multiagent, nondeterministic, sequential, dynamic, continuous, and unknown.
PEAS: Automated taxi
§ Performance measure
§ Income, happy customer, vehicle costs,
fines, insurance premiums
§ Environment
§ US streets, other drivers, customers,
weather, police…
§ Actuators
§ Steering, brake, gas, display/speaker
§ Sensors
§ Camera, radar, accelerometer, engine sensors, microphone, GPS
PEAS: Medical diagnosis system
§ Performance measure
§ Patient health, cost, reputation
§ Environment
§ Patients, medical staff, insurers, courts
§ Actuators
§ Screen display, email
§ Sensors
§ Keyboard/mouse
The Structure of Agents
§ What is the inside of an agent?
§ Recall the job of AI
§ design an agent program that implements the agent function—the
mapping from percepts to actions.

agent = agent architecture + agent program

§ Agent architecture
§ some computing device with physical sensors and actuators
The Structure of Agents
§ We mainly focus on “agent program” in this subject
§ Agent program
§ Input: the current percept from the sensors
§ Output: an action to the actuators.
§ What if the agent’s actions need to depend on the entire percept
sequence?
§ Then the agent will have to remember the percepts.
The Structure of Agents
§ Let’s look at a rather trivial agent program
§ It keeps track of the percept sequence,
§ and then uses it to index into a table of actions to decide what to do.

§ However, a lookup-table approach does not scale


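The table-driven idea can be sketched in a few lines. The table entries here are hand-built for the vacuum world and purely illustrative; the point is the shape of the program, not the contents:

```python
# A table-driven agent: append each percept to the history, then look
# the whole sequence up in a hand-built table of actions.

action_table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    # ... one entry for every possible percept sequence.  The number
    # of entries grows exponentially with the sequence length, which
    # is why this approach does not scale.
}

percepts = []

def table_driven_agent(percept):
    percepts.append(percept)
    return action_table[tuple(percepts)]

print(table_driven_agent(("A", "Dirty")))   # Suck
print(table_driven_agent(("A", "Clean")))   # Right
```

Even for this two-square world, a table covering all sequences of length T needs on the order of 4^T entries, so the doomed arithmetic is visible immediately.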
The Structure of Agents
§ The key challenge for AI is to
§ Produce rational behavior from a smallish program rather than from a
vast table.
§ Four basic kinds of agent programs
§ Simple reflex agents;
§ Model-based reflex agents;
§ Goal-based agents; and
§ Utility-based agents.
The Structure of Agents
Simple reflex agents
§ They select actions on the basis of the current percept, ignoring
the rest of the percept history.
§ Learned reflexes and innate reflexes
The Structure of Agents

• Simple reflex agents are simple but have limited intelligence.


• They will work only if the correct decision can be made on the basis of just the current
percept—that is, only if the environment is fully observable.
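A common way to structure a simple reflex agent is as an ordered list of condition-action rules matched against the current percept only. The rule format below is an illustrative assumption:

```python
# A simple reflex agent: condition-action rules over the current
# percept, with no memory of earlier percepts.

rules = [
    (lambda p: p["status"] == "Dirty",  "Suck"),
    (lambda p: p["location"] == "A",    "Right"),
    (lambda p: p["location"] == "B",    "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in rules:     # first matching rule wins
        if condition(percept):
            return action
    return "NoOp"

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
print(simple_reflex_agent({"location": "B", "status": "Clean"}))  # Left
```

Note that the percept dictionary is the agent's entire world: if the percept omitted `location`, no rule rewrite could recover it, which is the fully-observable requirement stated above.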
The Structure of Agents
Model-based reflex agents
§ To handle partial observability, the agent needs to keep track of
the part of the world it can’t see now.
§ The agent should maintain some sort of internal state that depends on
the percept history and thereby reflects at least some of the
unobserved aspects of the current state.
§ Updating this internal state requires
§ Transition model: how the world changes over time,
§ Sensor model: how the state of the world is reflected in the agent’s
percepts.
The Structure of Agents

• A model-based reflex agent keeps track of the current state of the world, using an
internal model. It then chooses an action in the same way as the reflex agent.
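A sketch of the idea for a partially observable vacuum world: the agent only perceives the square it is on, so it remembers the last known status of the other square. The state representation, the sensor-model function, and the assumed transition model (squares stay clean once cleaned) are all illustrative assumptions:

```python
# A model-based reflex agent: internal state plus sensor/transition
# models, then reflex-style action choice.

state = {"A": "Unknown", "B": "Unknown", "location": "A"}

def update_state(state, percept):
    """Sensor model: fold the current percept into the internal state."""
    location, status = percept
    state["location"] = location
    state[location] = status

def model_based_agent(percept):
    update_state(state, percept)
    here = state["location"]
    if state[here] == "Dirty":
        return "Suck"
    other = "B" if here == "A" else "A"
    # Transition model (assumed): squares stay clean once cleaned,
    # so only travel to the other square if it might still be dirty.
    if state[other] != "Clean":
        return "Right" if here == "A" else "Left"
    return "NoOp"

print(model_based_agent(("A", "Dirty")))   # Suck
print(model_based_agent(("A", "Clean")))   # Right (B is still Unknown)
print(model_based_agent(("B", "Clean")))   # NoOp  (everything known clean)
```

The `NoOp` in the last step is exactly what the memoryless reflex agent above cannot produce: it has no way to know the other square is already clean.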
The Structure of Agents
Goal-based agents
§ The agent needs some sort of goal information that describes
situations that are desirable
§ It combines the goal information with the model to choose actions that
achieve the goal.
§ Goal-based action selection could be straightforward or tricky
§ Search and planning to find action sequences to achieve a goal.
§ More flexible because the knowledge that supports its decisions
is represented explicitly and can be modified
The Structure of Agents

A model-based, goal-based agent keeps track of the world state as well


as a set of goals it is trying to achieve, and chooses an action that will
(eventually) lead to the achievement of its goals.
The Structure of Agents
Utility-based agents
§ Allow a comparison of different actions in the eye of the agent.
§ An agent’s utility function is essentially an internalization of the
performance measure.
§ Provided that the internal utility function and the external performance
measure are in agreement, an agent that chooses actions to maximize
its utility will be rational according to the external performance
measure.
§ Has many advantages in terms of flexibility
§ Conflicting goals, uncertainty, expected utility
The Structure of Agents

• A model-based, utility-based agent uses a model of the world, along with a utility function that
measures its preferences among states of the world.
• It chooses the action that leads to the best expected utility, where expected utility is computed by
averaging over all possible outcome states, weighted by the probability of the outcome.
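The expected-utility computation in the last bullet can be written out directly. The outcome model and utility numbers below are made up for illustration:

```python
# Expected utility = probability-weighted average of the utilities of
# the possible outcome states; choose the action that maximizes it.

# outcomes[action] = list of (probability, resulting_state) pairs
outcomes = {
    "fast_route": [(0.8, "on_time"), (0.2, "crash")],
    "slow_route": [(1.0, "late")],
}
utility = {"on_time": 100.0, "late": 60.0, "crash": -1000.0}

def expected_utility(action):
    return sum(p * utility[s] for p, s in outcomes[action])

best = max(outcomes, key=expected_utility)
print(best, expected_utility(best))   # slow_route 60.0
```

The example also shows how a utility function resolves conflicting goals under uncertainty: the fast route is best in the most likely outcome, but its small chance of a catastrophic state drags its expected utility (0.8·100 + 0.2·(−1000) = −120) below the safe option.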
The Structure of Agents
Agents that are able to learn
§ How do agent programs “come into being”?
§ Build machines that can learn and then teach them
§ The preferred method for creating state-of-the-art systems.
§ It allows the agent to operate in initially unknown environments and to
become more competent than its initial knowledge alone might allow.
§ A learning agent involves four conceptual components
§ learning element
§ performance element
§ critic
§ problem generator
The Structure of Agents

Learning in intelligent agents can be


summarized as a process of modification of
each component of the agent to bring the
components into closer agreement with the
available feedback information, thereby
improving the overall performance of the
agent.

• A general learning agent. The “performance element” box represents what we have previously
considered to be the whole agent program. Now, the “learning element” box gets to modify that
program to improve its performance.
Summary
§ What is AI?
§ Risks and Benefits of AI
§ Agents and Environments
§ The Concept of Rationality
§ The Nature of Environments
§ The Structure of Agents
Thank you. Questions?
