4 AI Module 5
Module 5
UNCERTAINTY
ACTING UNDER UNCERTAINTY
Agents may need to handle uncertainty, whether due to partial observability, nondeterminism,
or a combination of the two. An agent may never know what state it is in or where it will end
up after a sequence of actions.
An agent's knowledge cannot guarantee a successful outcome, but it can provide some degree of
belief (likelihood) in one. A rational decision depends on both the relative importance of
(sub)goals and the likelihood that they will be achieved. Probability theory offers a clean way
to quantify likelihood.
Example
Automated taxi to Airport
• Goal: deliver a passenger to the airport on time
• Action At: leave for the airport t minutes before the flight. How can we be sure that A90 will
succeed?
• Too many sources of uncertainty:
o partial observability (ex: road state, other drivers’ plans, etc.)
o uncertainty in action outcome (ex: flat tire, etc.)
o noisy sensors (ex: unreliable traffic reports)
o complexity of modelling and predicting traffic
With a purely logical approach it is difficult to anticipate everything that can go wrong:
• it risks falsehood: “A25 will get me there on time”, or
• it leads to conclusions that are too weak for decision making: “A25 will get me there on
time if there’s no accident on the bridge, it doesn’t rain, and my tires remain intact”
• over-cautious choices are not rational solutions either, ex: A1440 causes staying
overnight at the airport
Artificial Intelligence
Summarizing uncertainty
Let’s consider an example of uncertain reasoning: diagnosing a dental patient’s toothache, a
medical diagnosis problem.
• Given the symptom (toothache), infer the cause (cavity). How can we encode this relation
in logic?
• diagnostic rules:
o Toothache → Cavity (wrong)
o Toothache → (Cavity ∨ GumProblem ∨ Abscess ∨ ...) (too many possible
causes, some very unlikely)
• causal rules:
o Cavity → Toothache (wrong)
o (Cavity ∧ ...) → Toothache (many possible (con)causes)
• Problems in specifying the correct logical rules:
o Complexity: too many possible antecedents or consequents
o Theoretical ignorance: no complete theory for the domain
o Practical ignorance: no complete knowledge of the patient
Trying to use logic to cope with a domain like medical diagnosis thus fails for three main
reasons:
• Laziness: It is too much work to list the complete set of antecedents or consequents
needed to ensure an exceptionless rule and too hard to use such rules.
• Theoretical ignorance: Medical science has no complete theory for the domain.
• Practical ignorance: Even if we know all the rules, we might be uncertain about a
particular patient because not all the necessary tests have been or can be run.
The connection between toothaches and cavities is just not a logical consequence in either
direction. This is typical of the medical domain, as well as most other judgmental domains:
law, business, design, automobile repair, gardening, dating, and so on. The agent’s knowledge
can at best provide only a degree of belief in the relevant sentences. Our main tool for
dealing with degrees of belief is probability theory. Probability provides a way of
summarizing the uncertainty that comes from our laziness and ignorance, thereby solving
the qualification problem.
• “The probability that the patient has a cavity, given that she has a toothache and a
history of gum disease, is 0.4”:
P(HasCavity(patient) | HasToothache(patient) ∧ HistoryOfGum(patient)) = 0.4
Preferences, as expressed by utilities, are combined with probabilities in the general theory of
rational decisions called decision theory:
Decision theory = probability theory + utility theory
The fundamental idea of decision theory is that an agent is rational if and only if it chooses
the action that yields the highest expected utility, averaged over all the possible outcomes
of the action. This is called the principle of maximum expected utility (MEU).
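The MEU principle can be sketched in a few lines. In the sketch below, the outcome probabilities and utilities for the airport example are illustrative assumptions, not values given in the text:

```python
# A minimal sketch of the maximum-expected-utility (MEU) principle for the
# airport example. All probabilities and utilities here are assumed numbers
# chosen only for illustration.
actions = {
    # action: list of (probability of outcome, utility of outcome)
    "A90":   [(0.95, 100), (0.05, -1000)],  # likely on time; small risk of missing the flight
    "A180":  [(0.99, 70),  (0.01, -1000)],  # safer, but the long wait lowers the utility
    "A1440": [(1.00, -200)],                # guaranteed arrival, overnight at the airport
}

def expected_utility(outcomes):
    """Average the utility over all possible outcomes of the action."""
    return sum(p * u for p, u in outcomes)

# A rational agent chooses the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)
```

With these assumed numbers the safer A180 wins; changing the utilities changes the rational choice, which is exactly the point of decision theory.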
Probabilistic assertions are about possible worlds that are mutually exclusive and exhaustive—two possible worlds cannot both be the case, and one
possible world must be the case. For example, if we are about to roll two (distinguishable) dice,
there are 36 possible worlds to consider: (1,1), (1,2), ..., (6,6). The Greek letter Ω (uppercase
omega) is used to refer to the sample space, and ω (lowercase omega) refers to elements of the
space, that is, particular possible worlds.
A fully specified probability model associates a numerical probability P(ω) with each
possible world. The basic axioms of probability theory say that every possible world has a
probability between 0 and 1 and that the total probability of the set of possible worlds is 1:
0 ≤ P(ω) ≤ 1 for every ω, and Σω∈Ω P(ω) = 1
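As a mechanical check on the notation, the two-dice example can be worked directly; this small sketch uses exact fractions to avoid rounding:

```python
from itertools import product
from fractions import Fraction

# Sample space Omega for two distinguishable fair dice: 36 equiprobable worlds.
omega = list(product(range(1, 7), repeat=2))
P = {w: Fraction(1, 36) for w in omega}

# Basic axioms: every world has probability in [0, 1], and the total is 1.
assert all(0 <= p <= 1 for p in P.values())
assert sum(P.values()) == 1

# Unconditional ("prior") probabilities of propositions are sums over the
# worlds in which the proposition holds:
p_total_11 = sum(p for (d1, d2), p in P.items() if d1 + d2 == 11)
p_doubles = sum(p for (d1, d2), p in P.items() if d1 == d2)
print(p_total_11)  # 1/18
print(p_doubles)   # 1/6
```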
Probabilities such as P (Total = 11) and P (doubles) are called unconditional or prior
probabilities (and sometimes just “priors” for short); they refer to degrees of belief in
propositions in the absence of any other information.
• A less specific belief remains valid after more evidence arrives, ex: P(cavity) = 0.2 still holds
even if P(cavity | toothache) = 0.6
• New evidence may be irrelevant, allowing for simplification, ex: P(cavity | toothache,
49ersWin) = P(cavity | toothache) = 0.6
ex: for one roll of a fair die, P(Odd = true) = P(1) + P(3) + P(5) = 1/6 + 1/6 + 1/6 = 1/2
“The probability that the patient has a cavity, given that she is a teenager with no toothache, is 0.1” is
written as follows:
P(cavity | ¬toothache ∧ teen) = 0.1
A probability distribution gives the probabilities of all the possible values of a random
variable,
ex: P(Weather = sunny) = 0.6
P(Weather = rain) = 0.1
P(Weather = cloudy) = 0.29
P(Weather = snow) = 0.01,
but as an abbreviation we will allow
P(Weather) = ⟨0.6, 0.1, 0.29, 0.01⟩
where the bold P indicates that the result is a vector of numbers, and where we assume a
predefined ordering ⟨sunny, rain, cloudy, snow⟩ on the domain of Weather. We say that the P
statement defines a probability distribution for the random variable Weather. The P notation is
also used for conditional distributions: P(X | Y) gives the values of P(X = xi | Y = yj) for
each possible i, j pair.
For continuous variables, it is not possible to write out the entire distribution as a vector,
because there are infinitely many values. Instead, we can define the probability that a random
variable takes on some value x as a parameterized function of x. For example, the sentence
P(NoonTemp = x) = Uniform[18C, 26C](x)
expresses the belief that the temperature at noon is distributed uniformly between 18 and 26
degrees Celsius. We call this a probability density function.
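A uniform density can be sketched as a function returning probability per unit (here, per degree Celsius); the helper below is illustrative:

```python
def uniform_density(a, b):
    """Pdf of Uniform[a, b]: constant 1/(b - a) inside the interval, 0 outside."""
    def pdf(x):
        return 1.0 / (b - a) if a <= x <= b else 0.0
    return pdf

# P(NoonTemp = x) = Uniform[18C, 26C](x): density 1/8 per degree Celsius.
noon_temp = uniform_density(18.0, 26.0)
print(noon_temp(20.5))   # 0.125
print(noon_temp(30.0))   # 0.0

# A density value is not a probability, but the density integrates to 1 over
# the interval (crude Riemann-sum check):
step = 0.001
total = sum(noon_temp(18.0 + i * step) * step for i in range(8000))
```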
P(Weather , Cavity) denotes the probabilities of all combinations of the values of Weather and
Cavity. This is a 4 × 2 table of probabilities called the joint probability distribution of
Weather and Cavity.
For example, the product rule for all possible values of Weather and Cavity can be written
as a single equation:
P(Weather, Cavity) = P(Weather | Cavity) P(Cavity)
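The product rule can be checked entry by entry on a small joint table. The joint below is an illustrative assumption: it takes Weather and Cavity to be independent purely so the 4 × 2 table is easy to construct, and the numbers are not data from the text:

```python
# Verify P(Weather, Cavity) = P(Weather | Cavity) * P(Cavity) entrywise.
# All numbers are assumed for illustration; independence is assumed here only
# to make the joint table easy to build.
p_weather = {"sunny": 0.6, "rain": 0.1, "cloudy": 0.29, "snow": 0.01}
p_cavity = {True: 0.2, False: 0.8}

joint = {(w, c): p_weather[w] * p_cavity[c]
         for w in p_weather for c in p_cavity}

for (w, c), p in joint.items():
    p_c = sum(joint[(w2, c)] for w2 in p_weather)   # marginal P(Cavity = c)
    p_w_given_c = joint[(w, c)] / p_c               # conditional P(w | c)
    assert abs(p - p_w_given_c * p_c) < 1e-12       # product rule holds
```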
Notice that the probabilities in the joint distribution sum to 1, as required by the axioms
of probability.
For example, there are six possible worlds in which cavity ∨ toothache holds:
P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28.
Adding the entries in the first row gives the unconditional or marginal probability of cavity:
P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2
This process is called marginalization, or summing out—because we sum up the probabilities
for each possible value of the other variables, thereby taking them out of the equation.
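Marginalization is easy to check mechanically. The 8-entry joint below uses the Figure 13.3 values quoted above:

```python
# Full joint distribution P(Toothache, Catch, Cavity) from the dental example
# (the 8 entries are the Figure 13.3 values).
joint = {
    # (toothache, catch, cavity): probability
    (True, True, True): 0.108,   (True, False, True): 0.012,
    (False, True, True): 0.072,  (False, False, True): 0.008,
    (True, True, False): 0.016,  (True, False, False): 0.064,
    (False, True, False): 0.144, (False, False, False): 0.576,
}

def marginal_cavity():
    """Sum out Toothache and Catch to get the marginal P(cavity)."""
    return sum(p for (t, c, cav), p in joint.items() if cav)

def prob_event(pred):
    """Probability of any event: add up the worlds where the predicate holds."""
    return sum(p for world, p in joint.items() if pred(*world))

print(round(marginal_cavity(), 3))                      # 0.2
print(round(prob_event(lambda t, c, cav: cav or t), 3)) # 0.28
```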
We can write the following general marginalization rule for any sets of variables Y and Z:
P(Y) = Σz P(Y, z)
A variant uses the product rule to replace the joint probability by a conditional:
P(Y) = Σz P(Y | z) P(z)
This variant is called conditioning. Marginalization and conditioning turn out to be useful rules
for all kinds of derivations involving probability expressions.
For example, we can compute the probability of a cavity, given evidence of a toothache, as
follows:
P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache) = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.6
P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = (0.016 + 0.064) / 0.2 = 0.4
The two values sum to 1.0, as they should. Notice that in these two calculations the term
1/P(toothache) remains constant, no matter which value of Cavity we calculate. In fact, it can
be viewed as a normalization constant α for the distribution P(Cavity | toothache), ensuring that
it adds up to 1. Using α, we can write the two preceding equations in one:
P(X | e) = α P(X, e) = α Σy P(X, e, y)
where the summation is over all possible ys (i.e., all possible combinations of values of the
unobserved variables Y). Notice that together the variables X, E, and Y constitute the complete
set of variables for the domain, so P(X, e, y) is simply a subset of probabilities from the full
joint distribution.
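The normalization trick can be sketched on the same 8-entry dental joint (Figure 13.3 values): sum the hidden variable out of the full joint, then normalize.

```python
# Compute P(Cavity | toothache) = alpha * P(Cavity, toothache) by summing the
# hidden variable (Catch) out of the full joint and then normalizing.
joint = {
    # (toothache, catch, cavity): probability (Figure 13.3 values)
    (True, True, True): 0.108,   (True, False, True): 0.012,
    (False, True, True): 0.072,  (False, False, True): 0.008,
    (True, True, False): 0.016,  (True, False, False): 0.064,
    (False, True, False): 0.144, (False, False, False): 0.576,
}

def posterior_cavity_given_toothache():
    unnormalized = {}
    for cav in (True, False):
        unnormalized[cav] = sum(p for (t, c, v), p in joint.items()
                                if t and v == cav)
    alpha = 1.0 / sum(unnormalized.values())   # alpha = 1 / P(toothache)
    return {cav: alpha * p for cav, p in unnormalized.items()}

dist = posterior_cavity_given_toothache()
print(round(dist[True], 3), round(dist[False], 3))   # 0.6 0.4
```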
INDEPENDENCE
Expand the full joint distribution in Figure 13.3 by adding a fourth variable, Weather. The full
joint distribution then becomes P(Toothache , Catch, Cavity, Weather ), which has 2 × 2 × 2 ×
4 = 32 entries. It contains four “editions” of the table shown in Figure 13.3, one for each kind
of weather. What relationship do these editions have to each other and to the original three-
variable table? For example, how are P(toothache, catch, cavity, cloudy) and P(toothache,
catch, cavity) related? We can use the product rule:
P(toothache, catch, cavity, cloudy) = P(cloudy | toothache, catch, cavity) P(toothache, catch, cavity)
Now, the weather does not influence the dental variables. Therefore, the following assertion seems
reasonable:
P(cloudy | toothache, catch, cavity) = P(cloudy)
The 32-element table for four variables can be constructed from one 8-element table and one
4-element table. This decomposition is illustrated schematically in Figure below.
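The decomposition can be sketched directly: given independence, the 32-entry joint is the product of the 8-entry dental table (Figure 13.3 values) and a 4-entry weather prior (the illustrative weather numbers used earlier):

```python
# Independence lets the 32-entry joint P(Toothache, Catch, Cavity, Weather) be
# assembled from the 8-entry dental table and the 4-entry weather prior.
dental = {  # (toothache, catch, cavity): probability (Figure 13.3 values)
    (True, True, True): 0.108,   (True, False, True): 0.012,
    (False, True, True): 0.072,  (False, False, True): 0.008,
    (True, True, False): 0.016,  (True, False, False): 0.064,
    (False, True, False): 0.144, (False, False, False): 0.576,
}
weather = {"sunny": 0.6, "rain": 0.1, "cloudy": 0.29, "snow": 0.01}

joint = {(t, c, cav, w): p * pw
         for (t, c, cav), p in dental.items()
         for w, pw in weather.items()}

assert len(joint) == 32
# Summing the weather back out recovers the original dental table:
for key, p in dental.items():
    assert abs(sum(joint[key + (w,)] for w in weather) - p) < 1e-12
```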
This decomposition makes it easy to see what the joint probability values should be. The first term is
the conditional probability distribution of a breeze configuration, given a pit configuration; its values
are 1 if the breezes are adjacent to the pits and 0 otherwise. The second term is the prior probability of
a pit configuration. Each square contains a pit with probability 0.2, independently of the other
squares; hence, a configuration containing n pits has prior probability
P(P1,1, ..., P4,4) = 0.2^n × 0.8^(16−n)
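The pit prior can be written as a one-line function. The 4 × 4 grid (16 squares) and p = 0.2 come from the text; the function name and sanity check are ours:

```python
from math import comb

# Prior over pit configurations in the 4 x 4 wumpus grid: each square holds a
# pit with probability 0.2, independently of the others, so a specific
# configuration with n pits has probability 0.2**n * 0.8**(16 - n).
def pit_config_prior(n_pits, n_squares=16, p_pit=0.2):
    """Prior probability of one specific configuration with n_pits pits."""
    return p_pit ** n_pits * (1 - p_pit) ** (n_squares - n_pits)

# Sanity check: summing over all 2^16 configurations (grouped by pit count)
# gives total probability 1.
total = sum(comb(16, n) * pit_config_prior(n) for n in range(17))
```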