Module-IV:
Reasoning under Uncertainty
So, we can read the diagram as follows: Penguins are birds (no
exceptions); Birds usually fly; and Penguins usually don't fly (exception).
Some advanced issues for skeptical/doubtful reasoning
• Logics for non-monotonic reasoning
• Default reasoning: a form of non-monotonic reasoning in
which plausible conclusions are inferred from general rules
that may have exceptions (defaults). It is non-monotonic in
the sense that additional information may force us to
withdraw earlier conclusions, namely whenever the additional
information shows that the case at hand is exceptional (a
minimal sketch appears after the lists below).
• Two approaches
– Non-monotonic logic
– Default logic
• Non-monotonic logic
– Abduction
– Inheritance
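The penguin example above can be captured in a small Python sketch (an assumed toy encoding, not a real default-logic engine): the default "birds fly" yields a conclusion that is withdrawn once the exceptional fact "penguin" is added.

```python
# Toy default reasoning (assumed example): "birds usually fly" is a
# default that is overridden by the exception "penguins don't fly".

def flies(facts, individual):
    """Apply the default 'birds fly' unless an exception is known."""
    if (individual, "penguin") in facts:   # exception blocks the default
        return False
    if (individual, "bird") in facts:      # default rule fires
        return True
    return None                            # unknown

facts = {("tweety", "bird")}
print(flies(facts, "tweety"))   # True  -- default conclusion

facts.add(("tweety", "penguin"))
print(flies(facts, "tweety"))   # False -- new information withdraws it
```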
• Abductive reasoning is a type of reasoning that emphasizes
drawing inferences from the existing data. There is no
assurance that the conclusion drawn is accurate, though,
since the information at hand may not be comprehensive.
Conclusions drawn from abductive reasoning are only likely
to be true: this type of reasoning selects the most plausible
conclusion that accounts for a set of incomplete facts, as
sketched below.
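A minimal sketch of abductive selection in Python, with assumed observations and hypotheses: it simply picks the hypothesis that accounts for the most observed facts, which illustrates why the conclusion is plausible rather than guaranteed.

```python
# Toy abduction (assumed example): choose the hypothesis that best
# explains the observed facts; the conclusion is plausible, not certain.

observations = {"wet_grass", "cloudy"}

# Each hypothesis lists the observations it would explain.
hypotheses = {
    "it_rained":     {"wet_grass", "cloudy", "puddles"},
    "sprinkler_ran": {"wet_grass"},
}

def best_explanation(obs, hyps):
    # Score = number of observations each hypothesis accounts for.
    return max(hyps, key=lambda h: len(obs & hyps[h]))

print(best_explanation(observations, hypotheses))  # it_rained
```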
This plan is faulty because, for example, the expenses are $40, but they
must not exceed $30. We must fix this plan by changing the choice
connected with the problem. In this plan, that choice could be:
change one entertainment choice to reading a book, or to doing
nothing. After this change the expenses are smaller, and we should
check whether the new plan is a solution (see the sketch below).
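A minimal sketch of this plan-repair step in Python, with assumed activity names and costs: while the plan exceeds the $30 budget, the most expensive entertainment choice is swapped for a free one and the constraint is re-checked.

```python
# Toy plan repair (assumed costs and activity names): replace one paid
# entertainment choice with a free one until the budget constraint holds.

BUDGET = 30
plan = [("cinema", 15), ("concert", 25)]   # total $40 -- faulty plan

def total(plan):
    return sum(cost for _, cost in plan)

while total(plan) > BUDGET and any(cost > 0 for _, cost in plan):
    # Swap the most expensive choice for "reading a book" (cost 0).
    i = max(range(len(plan)), key=lambda k: plan[k][1])
    plan[i] = ("read_a_book", 0)

print(plan, total(plan) <= BUDGET)  # repaired plan satisfies the budget
```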
To use dependency-directed backtracking:
• Associate with each node one or more justifications. Each
justification corresponds to a derivation process that led to
the node and must contain a list of all the nodes on which its
derivation depends.
• Provide a mechanism that, when given a contradiction node
and its justification, computes the set of assumptions that
trigger the justification. This set of assumptions is called a
"no-good", defined to be a minimal set of assumptions such
that if any element is removed from the set, the justification
is no longer valid and the node is no longer believed.
• Provide a mechanism for considering a no-good set and
choosing an assumption to withdraw (see the sketch after
this list).
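A minimal sketch of the no-good computation in Python, assuming each justification simply records the nodes it depends on (the node and assumption names are illustrative): given a contradiction node, it traces support back to base assumptions and withdraws one of them.

```python
# Toy dependency-directed backtracking bookkeeping (assumed structures):
# each node stores justifications, and each justification lists the
# nodes (ultimately, assumptions) its derivation depends on.

assumptions = {"A1", "A2", "A3"}
justifications = {
    "P":             [{"A1"}],
    "Q":             [{"A2"}],
    "contradiction": [{"P", "Q"}],
}

def no_good(node):
    """Assumptions that underlie the node's first justification."""
    support = set()
    stack = list(justifications[node][0])
    while stack:
        n = stack.pop()
        if n in assumptions:
            support.add(n)                     # reached a base assumption
        else:
            stack.extend(justifications[n][0]) # keep tracing support
    return support

bad = no_good("contradiction")
print(bad)                   # {'A1', 'A2'} -- the no-good set
withdrawn = sorted(bad)[0]   # pick one assumption to withdraw
print("withdraw", withdrawn)
```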
Justification-based Truth Maintenance System (TMS)
• A TMS supports non-monotonic reasoning by permitting the
addition of changing statements to a knowledge base.
• Also called a belief-revision or revision-management system.
[Diagram: the user Tells and Asks the Inference Engine (IE), which consults the TMS to maintain its beliefs.]
Role of TMS:
• To maintain consistency of knowledge
• Recent information can displace previous conclusions that
are no longer valid.
• TMS should maintain dependency records for all such
conclusions
• The procedure used to perform this process is dependency-
directed backtracking.
• Records are maintained in the form of a dependency network.
Interface functions:
Justifying literals   Derived literal   Justifying constraint
{P, W}                R                 (P ∧ W) → R
{P}                   Q                 P → Q
{Q, R}                S                 (Q ∧ R) → S
Justification tree: the justification functions can produce a tree with
literal S at the root. At each node of the tree, the function
justifying_literals can be used to get the children nodes, until one
reaches members of the premise set (see the sketch below).
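A minimal sketch of this tree walk in Python, encoding the justifying_literals relation from the table above and taking P and W as the premise set:

```python
# Build the justification tree for S from the table above.
# justifying_literals maps each derived literal to its justifying literals.

justifying_literals = {
    "R": ["P", "W"],   # (P and W) -> R
    "Q": ["P"],        # P -> Q
    "S": ["Q", "R"],   # (Q and R) -> S
}
premises = {"P", "W"}

def print_tree(literal, depth=0):
    print("  " * depth + literal)
    if literal not in premises:            # stop expanding at premises
        for child in justifying_literals[literal]:
            print_tree(child, depth + 1)

print_tree("S")
# S
#   Q
#     P
#   R
#     P
#     W
```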
• Statistical Reasoning:
– Bayes Theorem for probabilistic inference
– Certainty Factors and Rule-Based Systems
– Bayesian Belief Networks
– Dempster Shafer Theory
– Fuzzy Logic
Uncertainty:
• Till now, we have learned knowledge representation using
first-order logic and propositional logic with certainty, which
means we were sure about the predicates. With this
knowledge representation, we might write A→B, which
means if A is true then B is true
• But consider a situation where we are not sure whether A is
true or not; then we cannot express this statement. This
situation is called uncertainty.
• So to represent uncertain knowledge, where we are not sure
about the predicates, we need uncertain reasoning or
probabilistic reasoning.
Causes of uncertainty:
Following are some leading causes of uncertainty to occur in
the real world.
• Information occurred from unreliable sources.
• Experimental Errors
• Equipment fault
• Temperature variation
• Climate change
• Medical Diagnosis
Bayes' theorem:
• Bayes' theorem is also known as Bayes' rule, Bayes' law,
or Bayesian reasoning, which determines the probability of
an event with uncertain knowledge.
• In probability theory, it relates the conditional probability and
marginal probabilities of two random events.
• Bayes' theorem was named after the British
mathematician Thomas Bayes. The Bayesian inference is
an application of Bayes' theorem, which is fundamental to
Bayesian statistics.
• It is a way to calculate the value of P(B|A) with the knowledge
of P(A|B).
• Bayes' theorem allows updating the probability prediction of
an event by observing new information of the real world.
• Bayes' theorem (also known as the Bayes rule or Bayes
law) is used to determine the conditional probability of
event A when event B has already occurred.
• The general statement of Bayes' theorem is: "The
conditional probability of an event A, given the occurrence
of another event B, is equal to the product of the probability
of B given A and the probability of A, divided by the
probability of event B," i.e.

P(A|B) = P(B|A) · P(A) / P(B)
Example: If cancer is related to one's age, then by using Bayes'
theorem we can determine the probability of having cancer more
accurately with the help of age.
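As a quick numeric sketch of the rule in Python (the probabilities below are illustrative, not taken from the cancer example):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative (assumed) numbers:
# P(B|A) = 0.9, P(A) = 0.01, P(B) = 0.05  ->  P(A|B) = 0.18
print(bayes(0.9, 0.01, 0.05))
```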
By the total probability theorem, if events E1, …, En partition the
sample space, then

P(E) = Σ (i = 1 to n) P(E|Ei) · P(Ei)
Example: A person has undertaken a job. The probabilities of
completion of the job on time with and without rain are 0.44 and
0.95 respectively. If the probability that it will rain is 0.45, then
determine the probability that the job will be completed on time.
• Let A be the event that it rains and B the event that it does not rain.
• P(A) = 0.45, P(B) = 1 − P(A) = 1 − 0.45 = 0.55
• Let E be the event that the job is completed on time. From the
problem statement, P(E|A) = 0.44 and P(E|B) = 0.95.
• Since events A and B form a partition of the sample space S,
by the total probability theorem we have
• P(E) = P(A) · P(E|A) + P(B) · P(E|B)
= 0.45 × 0.44 + 0.55 × 0.95 = 0.7205
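A quick check of this computation in Python, using the values from the problem statement:

```python
# Total probability for the job-completion example.
p_rain, p_no_rain = 0.45, 0.55
p_done_given_rain, p_done_given_no_rain = 0.44, 0.95

p_done = p_rain * p_done_given_rain + p_no_rain * p_done_given_no_rain
print(p_done)   # 0.7205
```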
Example: There are three pots containing 3 white and 2 black
balls; 2 white and 3 black balls; 1 black and 4 white balls
respectively. There is an equal probability of each pot being
chosen, and one ball is then drawn at random from the chosen pot.
What is the probability that a white ball is drawn?
• Let E1, E2, and E3 be the events of choosing the first,
second, and third pot respectively. Then,
• P(E1) = P(E2) = P(E3) =1/3
• Let E be the event that a white ball is drawn. Then,
• P(E|E1) = 3/5, P(E|E2) = 2/5, P(E|E3) = 4/5
• P(E) = P(E|E1) · P(E1) + P(E|E2) · P(E2) + P(E|E3) · P(E3)
= (1/3)(3/5 + 2/5 + 4/5) = (1/3)(9/5) = 3/5
• Ans: 3/5
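The same total-probability computation for the pots, done with exact fractions:

```python
from fractions import Fraction as F

# P(white) = sum over pots of P(white | pot) * P(pot)
p_pot = F(1, 3)
p_white_given_pot = [F(3, 5), F(2, 5), F(4, 5)]

p_white = sum(p * p_pot for p in p_white_given_pot)
print(p_white)   # 3/5
```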
Question: what is the probability that a patient has the disease
meningitis given a stiff neck?
Given Data:
• A doctor is aware that the disease meningitis causes a patient to
have a stiff neck 80% of the time. He is also aware of some more
facts, which are given as follows:
• The known probability that a patient has meningitis is 1/30,000.
• The known probability that a patient has a stiff neck is 2%.
• Let A be the proposition that the patient has a stiff neck (the effect), and
• B be the proposition that the patient has meningitis (the cause),
• so we can read off the following:
• P(A|B) = 0.8
• P(B) = 1/30000
• P(A) = 0.02
• By Bayes' theorem, P(B|A) = P(A|B) · P(B) / P(A)
= (0.8 × 1/30000) / 0.02 ≈ 0.0013, i.e. roughly 1 patient in 750
with a stiff neck actually has meningitis.
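A quick check of this result in Python:

```python
# P(B|A) = P(A|B) * P(B) / P(A) with the values given above.
p_a_given_b, p_b, p_a = 0.8, 1 / 30000, 0.02
print(p_a_given_b * p_b / p_a)   # ~0.001333 -> about 1 in 750
```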