CH 6 - Probability


Probability

Probability implies 'likelihood' or 'chance'. When an event is certain to happen, the probability of its occurrence is 1; when it is certain that the event cannot happen, the probability of that event is 0.
Hence the value of a probability ranges from 0 to 1.

Classical Definition
As the name suggests, the classical approach to defining probability is the oldest approach. It states that if there are n exhaustive, mutually exclusive and equally likely cases, out of which m cases are favourable to the happening of event A, then the probability of event A is given by the following probability function:

P(A) = m / n

Thus, to calculate the probability we need the number of favourable cases and the total number of equally likely cases. This can be explained using the following example.

Example
Problem Statement:
A coin is tossed. What is the probability of getting a head?
Solution:
Total number of equally likely outcomes (n) = 2 (i.e. head or tail)
Number of outcomes favourable to head (m) = 1
Therefore, P(Head) = m / n = 1/2
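
As an illustration (not part of the original text), the classical formula can be sketched in a few lines of Python; the function name classical_probability is just a label chosen here:

```python
from fractions import Fraction

def classical_probability(m, n):
    """Classical probability: m favourable cases out of n equally likely cases."""
    return Fraction(m, n)

# Coin toss: n = 2 equally likely outcomes, m = 1 outcome favourable to 'head'
print(classical_probability(1, 2))   # 1/2
```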

Sample Space
Each trial of an experiment always results in an outcome. We can think of the set of all possible outcomes of a trial. This set is called the sample space and is usually denoted either by the letter S or by the Greek letter Ω (Omega).
If the number of elements in the sample space is finite, then it is a finite sample space. For example, in the case of tossing a coin, the sample space is S = {Head, Tail}. It is a finite sample space; it contains a finite number of elements.
The number of elements of S may be countably infinite. Such a sample space is called a countably infinite sample space. For example, suppose you decide to toss a die until you observe '1'. The total number of tosses required could be 1, 2, 3 and so on. In other words, here the sample space is S = {1, 2, 3, …}. Counting the elements is possible, but it may never end.
You may describe the sample space using a Venn diagram or a tree diagram. For example, consider the experiment of tossing a six-faced die; Figure 1 shows it as a Venn diagram. In a tree diagram, each outcome is represented by a branch of the tree. For example, in the case of tossing a coin, the two possible outcomes H and T give two branches.
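
As an aside, finite sample spaces can be sketched as Python sets; the names below are illustrative only, and the generator models the countably infinite example above:

```python
# Finite sample spaces represented as Python sets (illustrative sketch)
coin_space = {"Head", "Tail"}            # S for tossing one coin
die_space = {1, 2, 3, 4, 5, 6}           # S for rolling one six-faced die
print(len(coin_space), len(die_space))   # 2 6

# A countably infinite sample space (number of tosses until the first '1')
# cannot be listed in full, but its elements can be generated one by one.
def tosses_until_first_one():
    n = 1
    while True:
        yield n                          # 1, 2, 3, ...
        n += 1
```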

Events
A subset of the sample space is called an event. It is a collection of one or more outcomes
(sample points) of a random experiment. We will use upper case letters, with or without suffixes, such as A1, A2, B, C, P, Q, to denote events. Using the same sample space we can define several
events. All events defined are essentially subsets of the sample space. Consider an experiment of
tossing two coins together. The sample space S of this experiment is S = {HH, HT, TH, TT}. Let A
denote the event of getting at least one head; then A = {HH, HT, TH}. If B denotes the event of getting at most one head, then B = {HT, TH, TT}.
If C denotes the event of getting neither a head nor a tail, then C = {}. This is the empty set and will be referred to as a null event.
The sample space is also a subset of itself and is referred to as a sure event.
An event can be simple or compound. A simple event (also called an elementary event) contains
only one of the outcomes of a sample space. A compound event is an event that has two or more
of the outcomes of a sample space.

Example
Consider the experiment of rolling two dice together. The sample space is given by S = {(1,1), (1,2), …, (6,6)}. Define an event A1 as that of getting '1' on both the dice: A1 = {(1,1)}. Define A2 as the event of getting '1' on at least one die: A2 = {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (3,1), (4,1), (5,1), (6,1)}. Here A1 is a simple event and A2 is a compound event. From the definition of an event, it follows that the sample space S as well as the empty set are also events. The sample space is referred to as a sure event, whereas the empty set is referred to as a null event.
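
A short sketch (assumed Python, not from the original text) that enumerates this two-dice sample space and the events A1 and A2:

```python
from itertools import product

# Sample space for rolling two dice together: all ordered pairs (i, j)
S = set(product(range(1, 7), repeat=2))

A1 = {(1, 1)}                                    # simple event: '1' on both dice
A2 = {outcome for outcome in S if 1 in outcome}  # compound event: '1' on at least one die

print(len(S), len(A1), len(A2))                  # 36 1 11
```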

Operations On Events
All the operations on 'Sets' are applicable to events. The basic operations on Sets are Union and
Intersection. Let A and B denote two events associated with the same Sample Space.
A ∪ B : The event A ∪ B (read as A union B) is the set of all sample points which belong to A or to B (or to both).

A ∩ B : The event A ∩ B (read as A intersection B) is the set of all sample points, which
belong to both A and B.
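
Because events are sets, Python's set operators express these operations directly; a small illustrative sketch using the two-coin sample space from above:

```python
S = {"HH", "HT", "TH", "TT"}   # tossing two coins together
A = {"HH", "HT", "TH"}         # at least one head
B = {"HT", "TH", "TT"}         # at most one head

print(A | B)                   # A ∪ B : points in A or B (or both)
print(A & B)                   # A ∩ B : points in both A and B -> {'HT', 'TH'}
```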

Complement of an event
We have defined an event as a subset of the sample space S. So, it may or may not occur. If event A does not occur, the observed outcome of the experiment does not belong to the set A. Such an outcome must belong to the complement of the set A. The event defined by the set Ac = {outcomes of S which do not belong to A} is called the complement of the event A. It is also denoted by A'.

Mutually exclusive (disjoint) events


Two events A and B are called mutually exclusive (disjoint) if they have no sample point in common, i.e. if A ∩ B = Φ. It means mutually exclusive events cannot occur together. Consider
a collection of all mutually exclusive events. If the union of all these events gives the entire sample
space, then such events are called exhaustive. It means that at least one of the events is certain
to occur. Two events A and B are said to be mutually exclusive and exhaustive if A ∪ B = S
and A ∩ B = Φ. It means A and B constitute a partition of the sample space S. It is easy to see
that A and Ac are mutually exclusive and exhaustive events.
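
A quick check of these properties (an illustrative Python sketch; the variable names are chosen here): A and Ac are disjoint and together they cover the sample space.

```python
S = {"HH", "HT", "TH", "TT"}
A = {"HH", "HT", "TH"}         # at least one head
Ac = S - A                     # complement of A -> {'TT'}

print((A & Ac) == set())       # True: A ∩ Ac = Φ (mutually exclusive)
print((A | Ac) == S)           # True: A ∪ Ac = S (exhaustive)
```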

Examples
 Turning left and turning right are Mutually Exclusive (you can't do both at the same time)
 Tossing a coin: Heads and Tails are Mutually Exclusive
 Cards: Kings and Aces are Mutually Exclusive

Axioms of Probability
One strategy in mathematics is to start with a few statements, then build up more mathematics
from these statements. The beginning statements are known as axioms. An axiom is typically
something that is mathematically self-evident. From a relatively short list of axioms, deductive logic
is used to prove other statements, called theorems or propositions.
The area of mathematics known as probability is no different. Probability can be reduced to three
axioms.
We suppose that we have a set of outcomes called the sample space S. This sample space can be thought of as the universal set for the situation that we are studying. The sample space contains subsets called events E1, E2, …, En. We also assume that there is a way of assigning a probability to any event E. The probability of the event E is denoted by P(E).

Probability Axioms
Axiom 1: For any event E, P(E) ≥ 0, i.e. probabilities are nonnegative.
Axiom 2: P(S) = 1, i.e. the sure event has probability 1.
Axiom 3: If E1, E2, … are mutually exclusive events, then P(E1 ∪ E2 ∪ …) = P(E1) + P(E2) + …

Axiom Applications (Theorems on Probability)


From the three axioms we can derive an upper bound for the probability of any event. We denote the complement of the event E by EC. From set theory, E and EC have an empty intersection, so they are mutually exclusive. Furthermore, E ∪ EC = S, the entire sample space.
These facts, combined with the axioms, give us:
1 = P(S) = P(E ∪ EC) = P(E) + P(EC).
We rearrange the above equation and see that P(E) = 1 - P(EC). Since we know that probabilities must be nonnegative, we now have that an upper bound for the probability of any event is 1.
By rearranging the formula again, we have P(EC) = 1 - P(E). We also can deduce from this formula
that the probability of an event not occurring is one minus the probability that it does occur.
The above equation also provides us a way to calculate the probability of the impossible event,
denoted by the empty set.
To see this, recall that the empty set is the complement of the universal set, in this case SC.
Since 1 = P(S) + P(SC) = 1 + P(SC), by algebra we have P(SC) = 0.
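
The following sketch (assumed Python, using an equally-likely model for a fair die as an example) checks the axioms and the consequences derived above numerically:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                      # sample space of a fair six-faced die

def P(event):
    """Equally-likely model: P(E) = |E| / |S| (an assumption for this sketch)."""
    return Fraction(len(event), len(S))

E = {1, 2}
print(P(E) >= 0)              # Axiom 1: probabilities are nonnegative
print(P(S) == 1)              # Axiom 2: P(S) = 1
print(P(E) + P(S - E) == 1)   # P(E) + P(EC) = 1, so P(E) <= 1
print(P(set()) == 0)          # the impossible event has probability 0
```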

Independent and Dependent Events


Independent Events
When two events are said to be independent of each other, it means that the probability that one event occurs in no way affects the probability of the other event occurring. An example of two independent events is as follows: say you rolled a die and flipped a coin. The probability of getting any number on the face of the die in no way influences the probability of getting a head or a tail on the coin.
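
A small enumeration (an illustrative Python sketch) confirms this for the events 'the die shows 4' and 'the coin shows a head': P(A ∩ B) = P(A) × P(B).

```python
from fractions import Fraction
from itertools import product

# Joint sample space for rolling a die and flipping a coin
S = set(product(range(1, 7), ["H", "T"]))

A = {(d, c) for (d, c) in S if d == 4}     # the die shows 4
B = {(d, c) for (d, c) in S if c == "H"}   # the coin shows a head

def P(event):
    return Fraction(len(event), len(S))

print(P(A & B) == P(A) * P(B))             # True: A and B are independent
```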

Dependent Events
When two events are said to be dependent, the probability of one event occurring influences the
likelihood of the other event.
For example, suppose you were to draw two cards from a deck of 52 cards. If on your first draw you drew an ace and you put that card aside, the probability of drawing an ace on the second draw is changed because you drew an ace the first time. Let's calculate these probabilities to see what's going on.
There are 4 Aces in a deck of 52 cards

On your first draw, the probability of getting an ace is given by:

P(ace on first draw) = 4/52 = 1/13

If we don't return this card into the deck, the probability of drawing an ace on the second pick is given by:

P(ace on second draw | ace on first draw) = 3/51 = 1/17

As you can clearly see, the above two probabilities are related, so we say that the two events are dependent, i.e. the likelihood of the second event depends on what happens in the first event.
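
These numbers are easy to verify with exact fractions (an illustrative Python sketch):

```python
from fractions import Fraction

p_first_ace = Fraction(4, 52)                # 4 aces among 52 cards -> 1/13
p_second_ace_given_first = Fraction(3, 51)   # 3 aces left among 51 cards -> 1/17

# Probability of drawing two aces in a row (the events are dependent)
print(p_first_ace * p_second_ace_given_first)   # 1/221
```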

Bayes’ Theorem
This theorem is named after Thomas Bayes and is often called Bayes' law or Bayes' rule. In probability theory and its applications, Bayes' theorem shows the relation between a conditional probability and its reverse form.
The equation used is:

P(A|B) = P(B|A) × P(A) / P(B)

Where:
 P(A) is the prior probability or marginal probability of A. It is "prior" in the sense that it does
not take into account any information about B.
 P(A|B) is the conditional probability of A, given B. It is also called the posterior probability
because it is derived from or depends upon the specified value of B.
 P(B|A) is the conditional probability of B given A. It is also called the likelihood.
 P(B) is the prior or marginal probability of B, and acts as a normalizing constant.
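
The relation can be written as a one-line helper in Python (a sketch; the name bayes and the sample numbers are chosen here, not taken from the text):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Posterior P(A|B) from likelihood P(B|A), prior P(A) and evidence P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative numbers only: P(B|A) = 0.8, P(A) = 0.3, P(B) = 0.5
print(round(bayes(0.8, 0.3, 0.5), 2))   # 0.48
```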

Example
A simple example is as follows: There is a 40% chance of it raining on Sunday. If it rains on
Sunday, there is a 10% chance it will rain on Monday. If it didn't rain on Sunday, there's an 80%
chance it will rain on Monday.
"Raining on Sunday" is event A, and "Raining on Monday" is event B.
 P(A) = 0.40 = Probability of Raining on Sunday.
 P(A') = 0.60 = Probability of not raining on Sunday.
 P(B|A) = 0.10 = Probability of it raining on Monday, if it rained on Sunday.
 P(B'|A) = 0.90 = Probability of it not raining on Monday, if it rained on Sunday.
 P(B|A') = 0.80 = Probability of it raining on Monday, if it did not rain on Sunday.
 P(B'|A') = 0.20 = Probability of it not raining on Monday, if it did not rain on Sunday.
The first thing we'd normally calculate is the probability of it raining on Monday. This would be the sum of the probability of "raining on Sunday and raining on Monday" and "not raining on Sunday and raining on Monday":

P(B) = P(B|A) × P(A) + P(B|A') × P(A') = (0.10)(0.40) + (0.80)(0.60) = 0.04 + 0.48 = 0.52

However, what if we said: "It rained on Monday. What is the probability it rained on Sunday?" That
is where Bayes' theorem comes in. It allows us to calculate the probability of an earlier event,
given the result of a later event.
The equation used is:

P(A|B) = P(B|A) × P(A) / P(B)

In our case, "Raining on Sunday" is event A, and "Raining on Monday" is event B.


 P(B|A) = 0.10 = Probability of it raining on Monday, if it rained on Sunday.
 P(A) = 0.40 = Probability of Raining on Sunday.
 P(B) = 0.52 = Probability of Raining on Monday.
So, to calculate the probability it rained on Sunday, given that it rained on Monday:

P(A|B) = P(B|A) × P(A) / P(B) = (0.10 × 0.40) / 0.52 = 0.04 / 0.52 ≈ 0.0769

In other words, if it rained on Monday, there's a 7.69% chance it rained on Sunday.
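
The whole calculation, including the total-probability step for P(B), can be reproduced with a few lines of Python (a sketch of the working above):

```python
p_a = 0.40               # P(A): rain on Sunday
p_b_given_a = 0.10       # P(B|A): rain on Monday given rain on Sunday
p_b_given_not_a = 0.80   # P(B|A'): rain on Monday given no rain on Sunday

# Total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_b, 2), round(p_a_given_b, 4))   # 0.52 0.0769
```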


Deterministic And Non-Deterministic (Random) Experiments
Experimentation is an integral part of any learning process. For some of the experiments, the
results are specific and known in advance. Such experiments are called deterministic experiments.
Examples of deterministic experiments are:
 Dropping of a pebble from a height
 Introducing a spark of electricity in a cylinder containing a mixture of hydrogen and oxygen and noting the end product
 Heating water in a pot for a sufficiently long time to a temperature in excess of 100 °C
You can prepare a list of deterministic experiments. In such experiments, the outcomes are certain
and can be predicted even before the experiment is performed. The results of experiments are
labeled as outcomes. A particular repetition of an experiment is known as a trial.
On the other hand, an experiment in which the set of all possible outcomes is known, but the exact outcome of a particular trial is unknown prior to conducting the experiment, is called a random experiment. It is possible that you have some idea, guess or intuition about how the experiment will turn out, but you cannot predict its outcome with certainty (a small simulation sketch follows the examples below).
Examples:
 Random experiment: toss a coin; sample space: S={heads, tails} or as we usually write it,
{H,T}
 Random experiment: roll a die; sample space: S={1,2,3,4,5,6}
 Random experiment: observe the number of iPhones sold by an Apple store in Boston in
2015; sample space: S={0,1,2,3,⋯}
 Random experiment: observe the number of goals in a soccer match; sample space:
S={0,1,2,3,⋯}
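
A random experiment can be simulated by drawing an outcome from its sample space at random; a minimal sketch using Python's random module (the lists below are illustrative):

```python
import random

coin = ["H", "T"]             # sample space of a coin toss
die = [1, 2, 3, 4, 5, 6]      # sample space of a die roll

# Each trial returns some element of the sample space, but which one
# cannot be predicted with certainty before the trial.
print(random.choice(coin))    # e.g. 'H'
print(random.choice(die))     # e.g. 3
```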
