Cpts 440 / 540 Artificial Intelligence: Uncertainty Reasoning
Non-monotonic Logic
• Traditional logic is monotonic
– The set of legal conclusions grows monotonically with the set
of facts appearing in our initial database
• When humans reason, we use defeasible logic
– Almost every conclusion we draw is subject to reversal
– If we find contradicting information later, we’ll want to retract
earlier inferences
• Nonmonotonic logic, or defeasible reasoning, allows a
statement to be retracted
• Solution: Truth Maintenance
– Keep explicit information about which facts/inferences
support other inferences
Uncertainty
• On the other hand, the problem might not be that
T/F values change over time, but rather
that we are not certain of the T/F value
• Agents almost never have access to the whole truth
about their environment
• Agents must act in the presence of uncertainty
– Some information ascertained from facts
– Some information inferred from facts and knowledge
about environment
– Some information based on assumptions
Environment Properties
• Fully observable vs. partially observable
• Deterministic vs. stochastic / strategic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent
Uncertainty Arises Because of Several Factors
• Incompleteness
– Many rules are incomplete because there are too many
conditions to enumerate explicitly
– Many rules incomplete because some conditions
are unknown
• Incorrectness
Where Do Probabilities Come From?
• Frequency
• Subjective judgment
• Consider the probability that the sun will still
exist tomorrow.
• There are several ways to estimate this, depending on
which experiment we imagine repeating
• The choice of experiment is known as the
reference class problem
Acting Under Uncertainty
• Agents must still act even if world not certain
• If the agent is not sure which of two squares has a pit but must enter one of them to
reach the gold, it will take a chance
• If an agent could act only with certainty, most of the time it would not act at all. Consider an
agent that wants to drive someone to the airport to catch a flight and is
considering plan A90, which involves leaving home 60 minutes before the flight
departs and driving at a reasonable speed. Even though the Pullman airport
is only 5 miles away, the agent will not be able to reach a definite conclusion
- it will be more like “Plan A90 will get us to the airport in time, as long as my
car doesn't break down or run out of gas, and I don't get into an accident,
and there are no accidents on the Moscow-Pullman highway, and the plane
doesn't leave early, and there are no thunderstorms in the area, …”
• We may still use this plan if it will improve our situation, given known
information
• The performance measure here includes getting to the airport in time, not
wasting time at the airport, and/or not getting a speeding ticket.
Limitation of Deterministic Logic
• Pure logic fails for three main reasons:
• Laziness
– Too much work to list complete set of antecedents or
consequents needed to ensure an exceptionless rule, too
hard to use the enormous rules that result
• Theoretical ignorance
– Science has no complete theory for the domain
• Practical ignorance
– Even if we know all the rules, we may be uncertain about a
particular patient because all the necessary tests have not
or cannot be run
Probability
• Probabilities are numeric values between 0 and 1
(inclusive) that represent ideal certainties (not beliefs)
of statements, given assumptions about the
circumstances in which the statements apply.
• These values can be verified by testing, unlike certainty
values. They apply in highly controlled situations.
Full joint distribution for the Toothache, Cavity, Catch world:

                toothache            ~toothache
                catch     ~catch     catch     ~catch
   cavity       .108      .012       .072      .008
   ~cavity      .016      .064       .144      .576
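As a quick illustration (not from the slides), here is a minimal Python sketch that encodes the joint table above as a dictionary and reads marginal probabilities off it by summing the relevant entries:

```python
# Full joint distribution from the table above, keyed by
# (cavity, toothache, catch) truth values.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def marginal(var_index, value):
    """Sum every joint entry in which the chosen variable takes the given value."""
    return sum(p for assignment, p in joint.items() if assignment[var_index] == value)

print(marginal(0, True))   # P(cavity)    = 0.108 + 0.012 + 0.072 + 0.008 = 0.2
print(marginal(1, True))   # P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2
```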
Axioms of Probability
• Negation, P(~a) = 1 – P(a)
Axioms of Probability
• Conditional probability
– Once evidence is obtained, the agent can use
conditional probabilities, P(a|b)
– P(a|b) = probability of a being true given that we
know b is true
– The equation P(a|b) = P(a ^ b) / P(b)
holds whenever P(b) > 0
• An agent who bets according to probabilities
that violate these axioms can be forced to bet
so as to lose money regardless of outcome
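A small worked sketch (Python, with the two sums read off the joint table from the earlier slide) showing the conditional-probability definition in action; the variable names are mine:

```python
# From the joint table: P(cavity ^ toothache) and P(toothache).
p_cavity_and_toothache = 0.108 + 0.012                # = 0.12
p_toothache = 0.108 + 0.012 + 0.016 + 0.064           # = 0.2

# P(cavity | toothache) = P(cavity ^ toothache) / P(toothache), valid since P(toothache) > 0.
print(p_cavity_and_toothache / p_toothache)           # 0.6
```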
Axioms of Probability
• Conjunction
– Product rule
– P(a^b) = P(a)*P(b|a)
– P(a^b) = P(b)*P(a|b)
• In other words, the only way a and b can both be true is if a is true and b is then true given a
Axioms of Probability
• If a and b are independent events (the truth of
a has no effect on the truth of b),
then P(a^b) = P(a) * P(b).
• “Wet” and “Raining” are not independent
events.
• “Wet” and “Joe made a joke” are pretty close
to independent events.
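A quick numeric check (Python, using entries from the earlier joint table) of whether two events are independent by comparing P(a ^ b) with P(a) * P(b); toothache and catch turn out to be clearly dependent:

```python
# Entries read off the joint table: toothache, catch, and their conjunction.
p_toothache = 0.108 + 0.012 + 0.016 + 0.064      # = 0.2
p_catch     = 0.108 + 0.072 + 0.016 + 0.144      # = 0.34
p_both      = 0.108 + 0.016                      # P(toothache ^ catch) = 0.124

# Independent events would satisfy P(a ^ b) == P(a) * P(b).
print(p_both, p_toothache * p_catch)             # 0.124 vs 0.068 -> not independent
```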
More Than 2 Variables
• The chain rule is derived by successive
application of the product rule:
• P(X1,..,Xn) = P(X1,..,Xn-1) P(Xn|X1,..,Xn-1)
= P(X1,..,Xn-2) P(Xn-1|X1,..,Xn-2) P(Xn|X1,..,Xn-1)
= …
= ∏i=1..n P(Xi|X1,..,Xi-1)
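A minimal sketch (Python) of the chain-rule factorization for three variables; the conditional probability values below are made up purely for illustration:

```python
# Hypothetical factors for P(X1, X2, X3) = P(X1) * P(X2 | X1) * P(X3 | X1, X2).
p_x1 = 0.3                # P(X1 = true)                (assumed value)
p_x2_given_x1 = 0.8       # P(X2 = true | X1 = true)    (assumed value)
p_x3_given_x1_x2 = 0.5    # P(X3 = true | X1, X2 true)  (assumed value)

# Chain rule: the joint probability is the product of the conditionals, in order.
p_joint = p_x1 * p_x2_given_x1 * p_x3_given_x1_x2
print(p_joint)            # 0.12
```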
Law of Alternatives
• If we know that exactly one of A1, A2, ..., An is true, then we
know P(B) = P(B|A1)P(A1) + P(B|A2)P(A2) + ... + P(B|An)P(An) and
P(B|X) = P(B|A1,X)P(A1|X) + ... + P(B|An,X)P(An|X)
• Example
– P(Sunday) = P(Monday) =.. = P(Saturday) = 1/7
– P(FootballToday) =
P(FootballToday|Sunday)P(Sunday) +
P(FootballToday|Monday)P(Monday) +
.. +
P(FootballToday|Saturday)P(Saturday)
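A short sketch (Python) of the football example: the day probabilities are the 1/7 values from the slide, while the per-day probabilities of football are invented here just to make the law of alternatives concrete:

```python
days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
p_day = 1 / 7                      # P(Sunday) = ... = P(Saturday) = 1/7

# Hypothetical P(FootballToday | day) values, for illustration only.
p_football_given_day = {"Sun": 0.9, "Mon": 0.6, "Tue": 0.0, "Wed": 0.0,
                        "Thu": 0.3, "Fri": 0.1, "Sat": 0.8}

# Law of alternatives: P(FootballToday) = sum over days of P(FootballToday | day) * P(day).
p_football = sum(p_football_given_day[d] * p_day for d in days)
print(p_football)                  # ~0.386
```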
Lunar Lander Example
• A lunar lander crashes somewhere in your town (one of the cells at random
in the grid). The crash point is uniformly random (the probability is uniformly
distributed, meaning each location has an equal probability of being the
crash point).
• D is the event that it crashes downtown.
• R is the event that it crashes in the river.
What is P(R)? 18/54 = 1/3
Example
• Three boxes of beads, H1, H2, H3; one box is chosen at random (P(H1) = P(H2) = P(H3) = 1/3).
• A bead drawn from H1 is white with probability 3/4, from H2 with probability 1/2, and from H3 with probability 0.
• I draw one bead from the chosen box. Which box was it?
Answer
• Observation: I draw a white bead.
• P(W) = P(W|H1)P(H1) + P(W|H2)P(H2) + P(W|H3)P(H3) = 3/4*1/3 + 1/2*1/3 + 0*1/3 = 5/12
• P(H1|W) = P(H1)P(W|H1) / P(W)
= (1/3 * 3/4) / (5/12) = 3/12 * 12/5 = 36/60 = 3/5
• P(H2|W) = P(H2)P(W|H2) / P(W)
= (1/3 * 1/2) / 5/12 = 1/6 * 12/5 = 12/30 = 2/5
• P(H3|W) = P(H3)P(W|H3) / P(W)
= (1/3 * 0) / 5/12 = 0 * 12/5 = 0
Example
• If I replace the bead, then redraw another bead
at random from the same box, how well can I
predict its color before drawing it?
• P(H1)=3/5, P(H2) = 2/5, P(H3) = 0
• P(W) = P(W|H1)P(H1) + P(W|H2)P(H2) + P(W|H3)P(H3)
= 3/4*3/5 + 1/2*2/5 + 0*0 = 9/20 + 4/20 = 13/20
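A compact sketch (Python) reproducing both calculations: the Bayesian update after observing a white bead, and the predictive probability that a second bead drawn from the same box is white; all numbers come from the slides.

```python
from fractions import Fraction as F

priors  = {"H1": F(1, 3), "H2": F(1, 3), "H3": F(1, 3)}   # each box equally likely
p_white = {"H1": F(3, 4), "H2": F(1, 2), "H3": F(0)}      # P(W | box)

# Posterior after one white bead: P(Hi | W) = P(Hi) P(W | Hi) / P(W).
p_w = sum(priors[h] * p_white[h] for h in priors)         # 5/12
posterior = {h: priors[h] * p_white[h] / p_w for h in priors}
print(posterior)                                           # H1: 3/5, H2: 2/5, H3: 0

# Predictive probability that the next bead from the same box is white.
print(sum(posterior[h] * p_white[h] for h in priors))      # 13/20
```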
Monty Hall Problem
• Monty Hall Applet
• Another Monty Hall Applet
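As a complement to the applets, here is a small Monte Carlo sketch (Python, added for illustration) estimating the win rates of the "stay" and "switch" strategies, which come out near 1/3 and 2/3:

```python
import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car, choice = random.choice(doors), random.choice(doors)
    # Monty opens a door that hides no car and is not the contestant's pick.
    opened = random.choice([d for d in doors if d != choice and d != car])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == car

n = 100_000
print("stay:  ", sum(monty_hall_trial(False) for _ in range(n)) / n)   # ~0.33
print("switch:", sum(monty_hall_trial(True)  for _ in range(n)) / n)   # ~0.67
```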
Example
• We wish to know probability that John has malaria, given that he has a slightly unusual
symptom: a high fever.
• We have 4 kinds of information
a) probability that a person has malaria regardless of symptoms
b) probability that a person has the symptom of fever given that he has malaria
c) probability that a person has symptom of fever, given that he does NOT have malaria
d) John has high fever
• H = John has malaria
• E = John has a high fever
• Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
Suppose P(H) = 0.0001, P(E|H) = 0.75, P(E|~H) = 0.14
Then P(E) = 0.75 * 0.0001 + 0.14 * 0.9999 = 0.14006
and P(H|E) = (0.75 * 0.0001) / 0.14006 = 0.0005354
On the other hand, if John did not have a fever, his probability of having malaria would be
P(H|~E) = P(~E|H) * P(H) / P(~E) = (0.25 * 0.0001) / (1 - 0.14006) ≈ 0.000029, even lower than the prior.
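A small sketch (Python) reproducing the arithmetic above with Bayes' rule, for both the fever and no-fever observations:

```python
p_h       = 0.0001   # P(H): prior probability of malaria
p_e_h     = 0.75     # P(E|H): fever given malaria
p_e_not_h = 0.14     # P(E|~H): fever given no malaria

# Law of alternatives: P(E) = P(E|H) P(H) + P(E|~H) P(~H).
p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)              # 0.14006
p_h_given_e     = p_e_h * p_h / p_e                    # ~0.000535
p_h_given_not_e = (1 - p_e_h) * p_h / (1 - p_e)        # ~0.000029

print(p_e, p_h_given_e, p_h_given_not_e)
```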
Prisoner's Dilemma
• Two players, each with a choice of cooperating with the other or defecting
• Each receives payoff according to payoff matrix for their decision
• When both cooperate, both receive an equal, intermediate payoff (reward, R)
• When one player defects, he/she receives the highest payoff (temptation, T)
and the other gets a poor payoff (sucker, S)
• When both players defect, they receive an intermediate penalty (punishment, P)
• Make problem more interesting by repeating with same players, use history to guide
future decisions (iterated prisoner's dilemma)
• Some strategies:
• Tit For Tat:
– Cooperate on first move then do whatever opponent did on previous move, performed best in
tournament
• Golden Rule:
– Always cooperate
• Iron Rule:
– Always defect
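A minimal sketch (Python) of the iterated game, pitting Tit For Tat against an always-defect (Iron Rule) player; the payoff ordering T > R > P > S follows the slide, but the specific values T=5, R=3, P=1, S=0 are the conventional ones and are assumed here:

```python
# Payoff to the first player for (my_move, their_move), with T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Tit For Tat is exploited only on the first round, then both players defect.
print(play(tit_for_tat, always_defect))   # (9, 14)
```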
Examples
• In the first example, the other player chooses
randomly
• Prisoner's Dilemma Applet
• Visualize Prisoner's Dilemma
Fuzzy Logic
• “Precision carries a cost”
– Boolean logic relies on sharp distinctions
– 6’ is tall, 5’ 11 ½” is not tall
• The tolerance for imprecision feeds human
capabilities
– Example: driving in city traffic
• Fuzzy logic is NOT logic that is fuzzy
– Logic that is used to describe fuzziness
Fuzzy Logic
• Fuzzy Logic is a multivalued logic that allows intermediate values to
be defined between conventional evaluations like yes/no,
true/false, black/white, etc.
• Fuzzy Logic was initiated in 1965 by Lotfi A. Zadeh, professor of
computer science at the University of California in Berkeley.
• The concept of fuzzy sets is associated with the term “graded
membership”.
• This has been used as a model for inexact, vague statements about
the elements of an ordinary set.
• Fuzzy logic prevalent in products:
– Washing machines
– Video cameras
– Razors
– Dishwasher
Fuzzy Sets
• In a fuzzy set the elements have a DEGREE of
existence.
• Some typically fuzzy sets are large numbers,
tall men, young children, approximately equal
to 10, mountains, etc.
Fuzzy Sets
Ordinary Sets
fA(x) = 1 if x is in A
fA(x) = 0 if x is not in A
A Fuzzy Set has Fuzzy Boundaries
• A fuzzy set A of universe X is defined by
function fA(x) called the membership function
of set A
(Figure: degree-of-membership curves for the short, average, and tall men sets over heights 150–210 cm)
• A man who is 184 cm tall is a member of the average men set with a degree of membership of 0.1.
• At the same time, he is also a member of the tall men set with a degree of 0.4.
Fuzzy Set Representation
• Typical functions that can be used to represent a
fuzzy set are
– Sigmoid
– Gaussian
– Linear fit (preferred because low computation cost)
(Figure: membership function μ(x) of a fuzzy subset A compared with a crisp subset A over x; the sloped boundary regions represent the fuzziness)
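A brief sketch (Python) of the linear-fit option: a membership function built from straight-line segments, instantiated here as a hypothetical "tall men" set whose breakpoints (170 cm and 190 cm) are made up for illustration:

```python
def ramp_up(x, a, b):
    """Linear-fit membership: 0 below a, rising linearly to 1 at b, then saturating."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# Hypothetical "tall men" fuzzy set over height in cm.
tall = lambda height: ramp_up(height, 170, 190)
print(tall(165))   # 0.0  (not tall at all)
print(tall(184))   # 0.7  (partial membership)
print(tall(195))   # 1.0  (fully tall)
```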
Linguistic Variables and Hedges
• In fuzzy expert systems, linguistic variables are used in fuzzy rules. For
example:
IF wind is strong
THEN sailing is good
IF project_duration is long
THEN completion_risk is high
IF speed is slow
THEN stopping_distance is short
Linguistic Variables and Hedges
• The range of possible values of a linguistic variable
represents the universe of discourse of that variable.
– Example, speed
– Universe of discourse might have range 0 .. 220 mph
– Fuzzy subsets might be very slow, slow, medium, fast,
and very fast.
• Hedges
– Modify the shape of fuzzy sets
– Adverbs such as very, somewhat, quite, more or less and
slightly.
Linguistic Variables and Hedges
(Figure: degree-of-membership curves for Very Short, Short, Average, Tall, and Very Tall men over heights 150–210 cm, showing how the hedge “very” reshapes the basic Short and Tall sets)
Fuzzy Set Relations
• One set A is a subset of set B if for every x, fA(x) <= fB(x)
• Sets A and B are equal if for every element x, fA(x) = fB(x).
• OR / Union
– A∪B is the smallest fuzzy subset of X containing both A and B, and
is defined by fA∪B(x) = max(fA(x), fB(x))
• AND / Intersection
– The intersection A∩B is the largest fuzzy subset of X contained in
both A and B, and is defined by fA∩B(x) = min(fA(x), fB(x))
• NOT: truth(~x) = 1.0 - truth(x)
• IMPLICATION: A -> B = ~A v B, so truth(A->B) = max(1.0 - fA(x), fB(x))
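A small sketch (Python) of these max/min definitions applied to two hypothetical membership degrees; the "very" hedge shown last (squaring the membership value) is a common convention added for illustration and is not defined on the slides:

```python
def f_or(fa, fb):       # union:        max(fA(x), fB(x))
    return max(fa, fb)

def f_and(fa, fb):      # intersection: min(fA(x), fB(x))
    return min(fa, fb)

def f_not(fa):          # complement:   1 - fA(x)
    return 1.0 - fa

def f_implies(fa, fb):  # A -> B = ~A v B: max(1 - fA(x), fB(x))
    return max(1.0 - fa, fb)

def very(fa):           # hedge "very", commonly modeled as squaring (assumption)
    return fa ** 2

wind_strong, sailing_good = 0.7, 0.4         # hypothetical membership degrees
print(f_and(wind_strong, sailing_good))      # 0.4
print(f_or(wind_strong, sailing_good))       # 0.7
print(f_not(wind_strong))                    # ~0.3
print(f_implies(wind_strong, sailing_good))  # max(~0.3, 0.4) = 0.4
print(very(wind_strong))                     # ~0.49
```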
Examples
• Fuzzy Logic Washing Machine
• Fuzzy Logic Rice Cooker
• Fuzzy Logic Barcode Scanner
• Fuzzy Logic Blender
• Fuzzy Logic Shampoo
• Fuzzy Logic Monitor