AI Unit-V


Probabilistic Reasoning

Uncertainty:
Till now, we have represented knowledge using propositional logic and first-order logic with certainty, which means we were sure about the predicates. With this kind of knowledge representation we might write A→B, meaning that if A is true then B is true. But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called uncertainty.

So, to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.

Causes of uncertainty:
Following are some leading causes of uncertainty in the real world:

1. Information obtained from unreliable sources
2. Experimental errors
3. Equipment faults
4. Temperature variation
5. Climate change

Probabilistic reasoning:
Probabilistic reasoning is a way of knowledge representation in which we apply the concept of probability to indicate the uncertainty in knowledge. In probabilistic reasoning, we combine probability theory with logic to handle uncertainty.

We use probability in probabilistic reasoning because it provides a way to handle the uncertainty that results from laziness (it is too much work to enumerate every exception to a rule) and ignorance (we lack complete knowledge of the domain).

In the real world there are many scenarios where the certainty of something is not confirmed, such as "It will rain today," "the behavior of someone in some situation," or "the outcome of a match between two teams or two players." These are probable statements: we can assume they may happen, but we are not sure about them, so here we use probabilistic reasoning.

Need of probabilistic reasoning in AI:

o When there are unpredictable outcomes.
o When the specifications or possible values of predicates become too large to handle.
o When an unknown error occurs during an experiment.

In probabilistic reasoning, there are two ways to solve problems with uncertain

knowledge:

o Bayes' rule
o Bayesian Statistics

As probabilistic reasoning uses probability and related terms, before understanding probabilistic reasoning let's understand some common terms:

Probability: Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of a probability always lies between 0 and 1, the two ideal (fully certain) cases.

0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.

P(A) = 0 indicates total certainty that event A will not occur.

P(A) = 1 indicates total certainty that event A will occur.

We can find the probability of the complementary event by using the formula below:

o P(¬A) = probability of event A not happening.
o P(¬A) + P(A) = 1.
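The complement rule above can be checked numerically. This is a minimal Python sketch; the event probability used is a made-up illustrative value:

```python
# Complement rule: P(not A) = 1 - P(A).
p_rain = 0.3             # P(A): illustrative probability that it rains today
p_no_rain = 1 - p_rain   # P(not A)

assert 0 <= p_rain <= 1  # a probability always lies in [0, 1]
print(p_rain + p_no_rain)  # the two probabilities always sum to 1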

Event: Each possible outcome of a variable is called an event.

Sample space: The collection of all possible events is called the sample space.

Random variables: Random variables are used to represent events and objects in the real world.

Prior probability: The prior probability of an event is the probability computed before observing new information.

Posterior probability: The probability that is calculated after all evidence or information has been taken into account. It is a combination of the prior probability and the new information.

Conditional probability:
Conditional probability is the probability of an event occurring given that another event has already happened.

Suppose we want to calculate the probability of event A when event B has already occurred, "the probability of A under the condition B". It can be written as:

P(A|B) = P(A⋀B) / P(B)

where P(A⋀B) = joint probability of A and B, and

P(B) = marginal probability of B.

If instead event A is known to have occurred and we need the probability of B, it is given as:

P(B|A) = P(A⋀B) / P(A)

This can be explained with a Venn diagram: once event B has occurred, the sample space is reduced to the set B, so the probability of event A given B is obtained by dividing P(A⋀B) by P(B).

Example:

In a class, 70% of the students like English and 40% of the students like both English and mathematics. What percentage of the students who like English also like mathematics?

Solution:

Let A be the event that a student likes mathematics, and

B be the event that a student likes English.

P(A|B) = P(A⋀B) / P(B) = 0.4 / 0.7 = 0.57

Hence, 57% of the students who like English also like mathematics.
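The arithmetic of this worked example can be reproduced in a few lines of Python, using the conditional probability formula P(A|B) = P(A⋀B) / P(B):

```python
# Conditional probability: P(A|B) = P(A and B) / P(B)
p_english = 0.70           # P(B): student likes English
p_english_and_math = 0.40  # P(A and B): student likes both subjects

p_math_given_english = p_english_and_math / p_english
print(round(p_math_given_english * 100))  # 57 (percent)
```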
Bayes' theorem
Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning. It determines the probability of an event under uncertain knowledge.

In probability theory, it relates the conditional probabilities and marginal probabilities of two random events.

Bayes' theorem is named after the British mathematician Thomas Bayes.

Bayesian inference is an application of Bayes' theorem and is fundamental to Bayesian statistics.

It is a way to calculate the value of P(B|A) with knowledge of P(A|B).

Bayes' theorem allows us to update the predicted probability of an event by observing new information about the real world.

Example: If cancer is correlated with age, then by using Bayes' theorem we can determine the probability of cancer more accurately given a person's age.

Bayes' theorem can be derived using the product rule and the conditional probability of event A given event B:

From the product rule we can write:

P(A ⋀ B) = P(A|B) P(B)

Similarly, for the probability of event B given event A:

P(A ⋀ B) = P(B|A) P(A)

Equating the right-hand sides of both equations, we get:

P(A|B) = P(B|A) P(A) / P(B)    ...(a)

The above equation (a) is called Bayes' rule or Bayes' theorem. This equation is the basis of most modern AI systems for probabilistic inference.

It shows the simple relationship between joint and conditional probabilities. Here,

P(A|B) is known as the posterior, which we need to calculate. It is read as the probability of hypothesis A given that evidence B has occurred.

P(B|A) is called the likelihood: assuming the hypothesis is true, we calculate the probability of the evidence.

P(A) is called the prior probability: the probability of the hypothesis before considering the evidence.

P(B) is called the marginal probability: the probability of the evidence on its own.

In equation (a), we can in general write P(B) = Σi P(Ai) P(B|Ai), hence Bayes' rule can be written as:

P(Ai|B) = P(B|Ai) P(Ai) / Σj P(Aj) P(B|Aj)

where A1, A2, A3, ......., An is a set of mutually exclusive and exhaustive events.
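This form of Bayes' rule over a set of mutually exclusive and exhaustive hypotheses can be sketched directly in Python. The priors and likelihoods below are made-up illustrative numbers, not data from the text:

```python
# Bayes' rule over hypotheses A1..An:
#   P(Ai|B) = P(B|Ai) P(Ai) / sum_j P(B|Aj) P(Aj)
priors = [0.5, 0.3, 0.2]       # P(Ai): mutually exclusive, exhaustive (sums to 1)
likelihoods = [0.9, 0.5, 0.1]  # P(B|Ai): evidence probability under each hypothesis

# Law of total probability gives the marginal P(B).
evidence = sum(p * l for p, l in zip(priors, likelihoods))
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
print([round(x, 3) for x in posteriors])  # the posteriors again sum to 1
```

Note that the denominator is the same for every hypothesis, so it simply renormalizes the products P(B|Ai) P(Ai) into a valid distribution.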

The more general case of Bayes' rule for multivalued variables can be written in the P notation as follows:

P(Y|X) = P(X|Y) P(Y) / P(X)

As before, this is to be taken as representing a set of equations, each dealing with specific values of the variables. We will also have occasion to use a more general version conditionalized on some background evidence e:

P(Y|X, e) = P(X|Y, e) P(Y|e) / P(X|e)


Applying Bayes' rule:
Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and P(A). This is very useful when we have good estimates of these three terms and want to determine the fourth one. Suppose we want to perceive the effect of some unknown cause and compute the probability of that cause; then Bayes' rule becomes:

P(cause|effect) = P(effect|cause) P(cause) / P(effect)

Example-1:

Question: What is the probability that a patient has the disease meningitis, given a stiff neck?

Given data:

A doctor is aware that the disease meningitis causes a patient to have a stiff neck 80% of the time. He is also aware of some further facts, which are given as follows:

o The known probability that a patient has meningitis is 1/30,000.
o The known probability that a patient has a stiff neck is 2%.

Let a be the proposition that the patient has a stiff neck and b be the proposition that the patient has meningitis, so we have:

P(a|b) = 0.8
P(b) = 1/30000
P(a) = 0.02

Applying Bayes' rule:

P(b|a) = P(a|b) P(b) / P(a) = (0.8 × 1/30000) / 0.02 = 1/750 ≈ 0.00133

Hence, we can conclude that about 1 in 750 patients with a stiff neck has meningitis.
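The meningitis calculation is a direct application of Bayes' rule and can be checked in Python with the numbers given above:

```python
# Bayes' rule: P(meningitis | stiff neck)
#            = P(stiff neck | meningitis) * P(meningitis) / P(stiff neck)
p_stiff_given_men = 0.8   # P(a|b): stiff neck given meningitis
p_men = 1 / 30000         # P(b): prior probability of meningitis
p_stiff = 0.02            # P(a): marginal probability of a stiff neck

p_men_given_stiff = p_stiff_given_men * p_men / p_stiff
print(p_men_given_stiff)  # 1/750, i.e. about 0.00133
```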
Example-2:

Question: From a standard deck of playing cards, a single card is drawn. The probability that the card is a king is 4/52. Calculate the posterior probability P(King|Face), i.e. the probability that a drawn face card is a king.

Solution:

P(King): probability that the card is a king = 4/52 = 1/13

P(Face): probability that the card is a face card = 12/52 = 3/13

P(Face|King): probability that the card is a face card given that it is a king = 1

Putting all values into Bayes' rule (equation (a)), we get:

P(King|Face) = P(Face|King) P(King) / P(Face) = 1 × (1/13) / (3/13) = 1/3
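Using exact fractions makes the card example easy to verify in Python:

```python
from fractions import Fraction

# Bayes' rule: P(King|Face) = P(Face|King) * P(King) / P(Face)
p_king = Fraction(4, 52)         # 4 kings in 52 cards = 1/13
p_face = Fraction(12, 52)        # 12 face cards (J, Q, K of each suit) = 3/13
p_face_given_king = Fraction(1)  # every king is a face card

p_king_given_face = p_face_given_king * p_king / p_face
print(p_king_given_face)  # 1/3
```

This matches the intuition directly: of the 12 face cards, exactly 4 are kings, so 4/12 = 1/3.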

Application of Bayes' theorem:


Following are some applications of Bayes' theorem:

o It is used to calculate the next step of a robot when the already executed step is given.
o Bayes' theorem is helpful in weather forecasting.
o It can solve the Monty Hall problem.
