CH 6 - Probability
Probability means 'likelihood' or 'chance'. When an event is certain to happen, the probability of that event is 1; when it is certain that the event cannot happen, the probability of that event is 0.
Hence the value of a probability ranges from 0 to 1.
Classical Definition
As the name suggests, the classical approach to defining probability is the oldest approach. It states that if there are n exhaustive, mutually exclusive and equally likely cases, out of which m cases are favourable to the happening of event A, then the probability of event A is given by
P(A) = m / n
Thus, to calculate the probability we need the number of favourable cases and the total number of equally likely cases. This can be explained using the following example.
Example
Problem Statement:
A coin is tossed. What is the probability of getting a head?
Solution:
Total number of equally likely outcomes (n) = 2 (i.e. head or tail)
Number of outcomes favourable to head (m) = 1
Therefore, P(Head) = m / n = 1/2.
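As a rough sketch in Python (the helper name classical_probability and the use of the fractions module are illustrative choices, not part of the text), the same calculation looks like this:

```python
from fractions import Fraction

def classical_probability(favourable, total):
    # Classical definition: P(A) = m / n for m favourable cases
    # out of n exhaustive, mutually exclusive, equally likely cases.
    return Fraction(favourable, total)

# Tossing a coin: n = 2 equally likely outcomes, m = 1 favourable to 'head'
print(classical_probability(1, 2))   # 1/2
```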
Sample Space
Sample Space: Each trial of an experiment always results in an outcome. We can think of the set of all possible outcomes of a trial. This set is called a sample space and is usually denoted by the letter S or by the Greek letter Ω (Omega).
If the number of elements in the sample space is finite, it is a finite sample space. For example, in the experiment of tossing a coin the sample space is S = {Head, Tail}. It is a finite sample space, since it contains a finite number of elements.
The number of elements of S may also be countably infinite. Such a sample space is called a countably infinite sample space. For example, suppose you decide to toss a die until you observe '1'. The total number of tosses required could be 1, 2, 3 and so on. In other words, here the sample space is S = {1, 2, 3, …}. Counting the elements is possible, but it may not end.
You may describe the sample space using a Venn diagram or a tree diagram. For example, consider the experiment of tossing a six-faced die; Figure 1 shows the corresponding Venn diagram. In a tree diagram, each outcome is represented by a branch of the tree. For example, in the case of tossing a coin, the two possible outcomes H and T give two branches.
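A short Python sketch (using itertools.product purely for illustration) lists these finite sample spaces explicitly:

```python
from itertools import product

# Finite sample space for tossing one coin
coin = {"Head", "Tail"}

# Finite sample space for tossing a six-faced die
die = set(range(1, 7))

# Sample space for tossing two coins together; each outcome is a pair,
# mirroring the two-level branches of a tree diagram
two_coins = set(product("HT", repeat=2))
print(sorted(two_coins))   # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]
```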
Events
A subset of the sample space is called an event. It is a collection of one or more outcomes
(sample points) of a random experiment. We will use upper case letters, with or without a suffix, such as A1, A2, B, C, P, Q, to denote events. Using the same sample space we can define several events; all events defined are essentially subsets of the sample space. Consider an experiment of tossing two coins together. The sample space of this experiment is S = {HH, HT, TH, TT}. Let A denote the event of getting at least one head; then A = {HH, HT, TH}. If B denotes the event of getting at most one head, then B = {HT, TH, TT}.
If C denotes the event of getting neither a head nor a tail, then C = {}. This is the empty set and will be referred to as a null event.
The sample space is also a subset of itself and is referred to as a sure event.
An event can be simple or compound. A simple event (also called an elementary event) contains only one of the outcomes of a sample space. A compound event is an event that contains two or more of the outcomes of a sample space.
Example
Consider the experiment of throwing two dice together. The sample space is S = {(1,1), (1,2), …, (6,6)}. Define an event A1 as getting '1' on both dice: A1 = {(1,1)}. Define A2 as the event of getting '1' on at least one die: A2 = {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (3,1), (4,1), (5,1), (6,1)}. Here A1 is a simple event and A2 is a compound event. From the definition of an event, it follows that the sample space S as well as the empty set are also events. The sample space is referred to as a sure event, whereas the empty set is referred to as a null event.
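A small Python sketch of this example (variable names S, A1 and A2 chosen to match the text) might look as follows; it also checks that both events are subsets of the sample space:

```python
from itertools import product

# Sample space of throwing two dice together: 36 ordered pairs
S = set(product(range(1, 7), repeat=2))

# A1: '1' on both dice -- a simple event (a single outcome)
A1 = {(1, 1)}

# A2: '1' on at least one die -- a compound event
A2 = {outcome for outcome in S if 1 in outcome}

print(len(S), len(A1), len(A2))   # 36 1 11
print(A1 <= S, A2 <= S)           # True True: both events are subsets of S
```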
Operations On Events
All the operations on 'Sets' are applicable to events. The basic operations on Sets are Union and
Intersection. Let A and B denote two events associated with the same Sample Space.
A ∪ B : The event A ∪ B (read as A union B) is the set of all sample points which belong to A or to B (or to both).
A ∩ B : The event A ∩ B (read as A intersection B) is the set of all sample points which belong to both A and B.
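Because events are sets, Python's built-in set operators illustrate these operations directly; a minimal sketch using the two-coin events A and B defined earlier:

```python
# Tossing two coins: S = {HH, HT, TH, TT}
A = {"HH", "HT", "TH"}   # at least one head
B = {"HT", "TH", "TT"}   # at most one head

print(sorted(A | B))   # A union B: ['HH', 'HT', 'TH', 'TT']
print(sorted(A & B))   # A intersection B: ['HT', 'TH']
```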
Complement of an event
We have defined an event as a subset of the sample space S, so it may or may not occur. If event A does not occur, the observed outcome of the experiment does not belong to the set A. Such an outcome must belong to the complement of the set A. The event defined by the set Ac = {outcomes in S which do not belong to A} is called the complement of the event A. It is also denoted by A'.
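In Python, a set difference against the sample space gives the complement; a sketch reusing the two-coin sample space:

```python
S = {"HH", "HT", "TH", "TT"}   # sample space for tossing two coins
A = {"HH", "HT", "TH"}         # event: at least one head

A_complement = S - A           # outcomes of S that do not belong to A
print(A_complement)            # {'TT'}
```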
Mutually Exclusive Events
Two events are said to be mutually exclusive if they cannot occur at the same time, that is, if A ∩ B = {}.
Examples
Turning left and turning right are Mutually Exclusive (you can't do both at the same time)
Tossing a coin: Heads and Tails are Mutually Exclusive
Cards: Kings and Aces are Mutually Exclusive
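A quick sketch (the card labels such as "K-spades" are hypothetical identifiers, not from the text) confirms that the kings and the aces form disjoint sets:

```python
kings = {"K-spades", "K-hearts", "K-diamonds", "K-clubs"}
aces  = {"A-spades", "A-hearts", "A-diamonds", "A-clubs"}

# Mutually exclusive events have an empty intersection
print(kings & aces)            # set()
print(kings.isdisjoint(aces))  # True
```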
Axioms of Probability
One strategy in mathematics is to start with a few statements, then build up more mathematics
from these statements. The beginning statements are known as axioms. An axiom is typically
something that is mathematically self-evident. From a relatively short list of axioms, deductive logic
is used to prove other statements, called theorems or propositions.
The area of mathematics known as probability is no different. Probability can be reduced to three
axioms.
We suppose that we have a set of outcomes called the sample space S. This sample space can be thought of as the universal set for the situation that we are studying. The sample space contains subsets called events E1, E2, . . ., En. We also assume that there is a way of assigning a probability to any event E. The probability of the event E is denoted by P(E).
Probability Axioms
Axiom 1: For any event E, P(E) ≥ 0.
Axiom 2: P(S) = 1, i.e. the sure event has probability 1.
Axiom 3: If E1, E2, … are mutually exclusive events, then P(E1 ∪ E2 ∪ …) = P(E1) + P(E2) + …
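A minimal sketch, assuming equally likely outcomes on a six-faced die (the helper name P and the events E1, E2 are illustrative), checks these three axioms numerically:

```python
from fractions import Fraction

S = set(range(1, 7))                 # sample space of a six-faced die

def P(event):
    # Equally likely outcomes: P(E) = |E| / |S|
    return Fraction(len(event), len(S))

E1 = {1, 2}
E2 = {5}

print(P(E1) >= 0, P(E2) >= 0)        # Axiom 1: True True
print(P(S) == 1)                     # Axiom 2: True
print(P(E1 | E2) == P(E1) + P(E2))   # Axiom 3 (E1, E2 mutually exclusive): True
```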
Dependent Events
Two events are said to be dependent when the occurrence of one event influences the probability of the other event.
For example, suppose you draw two cards from a deck of 52 cards. If on your first draw you drew an ace and you put that card aside, the probability of drawing an ace on the second draw changes because you drew an ace the first time. Let's calculate these probabilities to see what's going on.
There are 4 aces in a deck of 52 cards, so the probability of drawing an ace on the first pick is
P(ace on first draw) = 4/52 = 1/13.
If we don't return this card to the deck, only 3 aces remain among the 51 remaining cards, so the probability of drawing an ace on the second pick is given by
P(ace on second draw | ace on first draw) = 3/51 = 1/17.
As you can see, the two probabilities are related, so we say that the two events are dependent, i.e. the likelihood of the second event depends on what happens in the first event.
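The same fractions can be computed in a short Python sketch (variable names are illustrative):

```python
from fractions import Fraction

# Drawing two cards from a 52-card deck without replacement
p_first_ace = Fraction(4, 52)               # 4 aces among 52 cards -> 1/13

# Given an ace was drawn and set aside, 3 aces remain among 51 cards
p_second_ace_given_first = Fraction(3, 51)  # -> 1/17

print(p_first_ace, p_second_ace_given_first)  # 1/13 1/17
```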
Bayes’ Theorem
This theorem is named after Thomas Bayes and is often called Bayes' law or Bayes' rule. In probability theory and its applications, Bayes' theorem shows the relation between a conditional probability and its reverse, that is, between P(A|B) and P(B|A).
The equation used is:
P(A|B) = P(B|A) × P(A) / P(B)
Where:
P(A) is the prior probability or marginal probability of A. It is "prior" in the sense that it does
not take into account any information about B.
P(A|B) is the conditional probability of A, given B. It is also called the posterior probability
because it is derived from or depends upon the specified value of B.
P(B|A) is the conditional probability of B given A. It is also called the likelihood.
P(B) is the prior or marginal probability of B, and acts as a normalizing constant.
Example
A simple example is as follows: There is a 40% chance of it raining on Sunday. If it rains on
Sunday, there is a 10% chance it will rain on Monday. If it didn't rain on Sunday, there's an 80%
chance it will rain on Monday.
"Raining on Sunday" is event A, and "Raining on Monday" is event B.
P(A) = 0.40 = Probability of Raining on Sunday.
P(A') = 0.60 = Probability of not raining on Sunday.
P(B|A) = 0.10 = Probability of it raining on Monday, if it rained on Sunday.
P(B'|A) = 0.90 = Probability of it not raining on Monday, if it rained on Sunday.
P(B|A') = 0.80 = Probability of it raining on Monday, if it did not rain on Sunday.
P(B'|A') = 0.20 = Probability of it not raining on Monday, if it did not rain on Sunday.
The first thing we'd normally calculate is the probability of it raining on Monday. This is the sum of the probability of "raining on Sunday and raining on Monday" and the probability of "not raining on Sunday and raining on Monday":
P(B) = P(B|A) × P(A) + P(B|A') × P(A') = 0.10 × 0.40 + 0.80 × 0.60 = 0.04 + 0.48 = 0.52
However, what if we said: "It rained on Monday. What is the probability it rained on Sunday?" That
is where Bayes' theorem comes in. It allows us to calculate the probability of an earlier event,
given the result of a later event.
The equation used is:
P(A|B) = P(B|A) × P(A) / P(B) = (0.10 × 0.40) / 0.52 ≈ 0.077
So, given that it rained on Monday, there is roughly a 7.7% chance that it rained on Sunday.
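A short Python sketch ties the whole example together (variable names such as p_A and p_B_A are illustrative):

```python
# Bayes' theorem applied to the rain example
p_A       = 0.40   # P(A):    rain on Sunday
p_not_A   = 0.60   # P(A'):   no rain on Sunday
p_B_A     = 0.10   # P(B|A):  rain on Monday given rain on Sunday
p_B_not_A = 0.80   # P(B|A'): rain on Monday given no rain on Sunday

# Total probability of rain on Monday
p_B = p_B_A * p_A + p_B_not_A * p_not_A   # 0.04 + 0.48 = 0.52

# Posterior: probability it rained on Sunday given it rained on Monday
p_A_given_B = p_B_A * p_A / p_B           # 0.04 / 0.52 ~ 0.077
print(round(p_B, 2), round(p_A_given_B, 3))   # 0.52 0.077
```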