Probability Theory
Lecturer: Michel Goemans
These notes cover the basic definitions of discrete probability theory, and then present some
results including Bayes’ rule, inclusion-exclusion formula, Chebyshev’s inequality, and the weak law
of large numbers.
Consider a process whose outcome is determined by chance. The set S of all possible outcomes is called the sample space, and each outcome x ∈ S is assigned a probability p(x) ≥ 0. These probabilities must satisfy

    ∑_{x∈S} p(x) = 1,
so the total probability of the elements of our sample space is 1. What this means intuitively is
that when we perform our process, exactly one of the things in our sample space will happen.
Example. The sample space could be S = {a, b, c}, and the probabilities could be p(a) = 1/2,
p(b) = 1/3, p(c) = 1/6.
If all elements of our sample space have equal probabilities, we call this the uniform probability
distribution on our sample space. For example, if our sample space was the outcomes of a die roll,
the sample space could be denoted S = {x1, x2, . . . , x6}, where the event xi corresponds to rolling i.
The uniform distribution, in which every outcome xi has probability 1/6, describes the situation
for a fair die. Similarly, if we consider tossing a fair coin, the outcomes would be H (heads) and
T (tails), each with probability 1/2. In this situation we have the uniform probability distribution
on the sample space S = {H, T }.
We define an event A to be a subset of the sample space. For example, in the roll of a die,
if the event A was rolling an even number, then A = {x2 , x4 , x6 }. The probability of an event A,
denoted by P(A), is the sum of the probabilities of the corresponding elements in the sample space.
For rolling an even number, we have
    P(A) = p(x2) + p(x4) + p(x6) = 1/2.
Given an event A of our sample space, there is a complementary event which consists of all
points in our sample space that are not in A. We denote this event by ¬A. Since the probabilities of all the
points in a sample space S add up to 1, we see that

    P(¬A) = 1 − P(A).
[Figure: Venn diagram of two events A and B inside the sample space S, showing the four regions A ∧ ¬B, A ∧ B, B ∧ ¬A, and ¬A ∧ ¬B.]
Given two events A and B, the conditional probability of B given A is defined as

    P(B|A) = P(A ∧ B) / P(A).
Why is this an interesting notion? Let’s give an example. Suppose we roll a fair die, and we
ask what is the probability of getting an odd number, conditioned on having rolled a number that
is at most 3? Since we know that our roll is 1, 2, or 3, and that these outcomes are equally likely (we
started with the uniform distribution corresponding to a fair die), the probability of each of
these outcomes must be 1/3. Thus the probability of getting an odd number (that is, of getting 1
or 3) is 2/3. So if A is the event “outcome is at most 3” and B is the event “outcome is odd”,
then we would like the mathematical definition of the “probability of B conditioned on A” to give
P(B|A) = 2/3. And indeed, mathematically we find
    P(B|A) = P(B ∧ A) / P(A) = (2/6) / (1/2) = 2/3.
The intuitive reason why our definition of P(B|A) gives the answer we wanted is that the
probability of every outcome in A gets multiplied by 1/P(A) when one conditions on the event A.
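As a sanity check, here is a minimal Python sketch (not part of the original notes) that carries out this computation by summing probabilities over the sample space; the helper names prob and cond_prob are illustrative.

```python
from fractions import Fraction

# Fair die: sample space {1, ..., 6} with the uniform distribution.
S = range(1, 7)
p = {x: Fraction(1, 6) for x in S}

A = {x for x in S if x <= 3}       # event "outcome is at most 3"
B = {x for x in S if x % 2 == 1}   # event "outcome is odd"

def prob(event):
    """P(event): sum of p(x) over the outcomes x in the event."""
    return sum(p[x] for x in event)

def cond_prob(b, a):
    """P(b | a) = P(a and b) / P(a)."""
    return prob(a & b) / prob(a)

print(prob(A))          # 1/2
print(cond_prob(B, A))  # 2/3
```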
Indeed, the first term is P(A ∧ B) and the second term is P(A ∧ ¬B); adding these together, we get P(A), since the events A ∧ B and A ∧ ¬B partition A.
If we have two events A and B, we say that they are independent if the probability that both
happen is the product of the probability that the first happens and the probability that the second
happens, that is, if
P(A ∧ B) = P(A) · P(B).
Example. For a die roll, the event A of rolling an even number and the event B of rolling a number less
than or equal to 3 are not independent, since P(A) · P(B) ≠ P(A ∧ B). Indeed, P(A) · P(B) = 1/2 · 1/2 = 1/4,
while P(A ∧ B) = 1/6. However, if you define C to be the event of rolling a 1 or a 2, then A and C are
independent, since P(A) = 1/2, P(C) = 1/3, and P(A ∧ C) = 1/6.
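The same kind of brute-force check works for independence; a small sketch (again not from the notes, with illustrative event names) verifying the two claims above:

```python
from fractions import Fraction

S = range(1, 7)
p = {x: Fraction(1, 6) for x in S}

def prob(event):
    return sum(p[x] for x in event)

A = {x for x in S if x % 2 == 0}   # rolling an even number
B = {x for x in S if x <= 3}       # rolling at most 3
C = {1, 2}                          # rolling a 1 or a 2

def independent(e1, e2):
    """Check the defining condition P(e1 and e2) == P(e1) * P(e2)."""
    return prob(e1 & e2) == prob(e1) * prob(e2)

print(independent(A, B))  # False: P(A∧B) = 1/6 but P(A)·P(B) = 1/4
print(independent(A, C))  # True:  P(A∧C) = 1/6 = (1/2)·(1/3)
```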
Let us now show on an example that our mathematical definition of independence does capture
the intuitive notion of independence. Let’s assume that we toss two coins (not necessarily fair
coins). The sample space is S = {HH, HT, T H, T T } (where the first letter represents the result
of the first coin). Let us denote the event of the first coin being a tail by T ◦, and the event of the
second coin being a tail by ◦T and so on. By definition, we have P(T ◦) = p(T H) + p(T T ) and
so on. Suppose that knowing that the first coin is a tail doesn’t change the probability that the
second coin is a tail. This gives
P(◦T | T ◦) = P(◦T ).
Moreover, by definition of conditional probability,
    P(◦T | T ◦) = P(T T ) / P(T ◦).
Combining these equations gives
P(T T ) = P(T ◦)P(◦T ),
or equivalently
P(T ◦ ∧ ◦T ) = P(T ◦)P(◦T ).
This is precisely the condition we took to define independence. Conclusion: saying that knowing the first
coin is a tail doesn’t change the probability that the second coin is a tail is the same as saying that the
events T ◦ and ◦T are “independent” in the sense we defined.
More generally, suppose that A and B are independent. In this case, we have
    P(B|A) = P(A ∧ B) / P(A) = P(A)P(B) / P(A) = P(B).
That is, if two events are independent, then the probability of B happening, conditioned on A
happening, is the same as the probability of B happening without the conditioning. It is straightforward
to check that the reasoning can be reversed as well: if the probability of B does not change
when you condition on A, then the two events are independent.
It is possible to have a set of three events such that any two of them are independent, but all
three are not independent. It is an interesting exercise to try to find such an example.
If we have k probability distributions on sample spaces S1, . . . , Sk, we can construct a new probability
distribution called the product distribution by assuming that these k processes are independent.
Our new sample space is made of all the k-tuples (s1, s2, . . . , sk) where si ∈ Si. The
probability distribution on this sample space is defined by

    p(s1, s2, . . . , sk) = ∏_{i=1}^{k} p(si).
For example, if you roll k dice, your sample space will be the set of tuples (s1 , . . . , sk ) where
si ∈ {x1 , x2 , . . . , x6 }. The value of si represents the result of the i-th die (for instance si = x3
means that the i-th die rolled 3). For 2 dice, the probability of rolling a one and a two will be
    p(x1, x2) + p(x2, x1) = 1/6 · 1/6 + 1/6 · 1/6 = 1/18,
because you could have rolled the one with either the first die or the second die. The probability
of rolling two ones is P(x1 , x1 ) = 1/36.
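Here is a short sketch (not from the notes) that builds this product distribution for two dice explicitly and recovers the probabilities 1/18 and 1/36; the variable names are illustrative.

```python
from fractions import Fraction
from itertools import product

# Product distribution for two independent fair dice:
# each pair (s1, s2) has probability (1/6) * (1/6).
S = list(product(range(1, 7), repeat=2))
p = {s: Fraction(1, 6) * Fraction(1, 6) for s in S}

one_and_two = [s for s in S if sorted(s) == [1, 2]]   # (1,2) or (2,1)
two_ones = [s for s in S if s == (1, 1)]

print(sum(p[s] for s in one_and_two))  # 1/18
print(sum(p[s] for s in two_ones))     # 1/36
```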
3 Bayes’ rule
If we have a sample space, then conditioning on some event A gives us a new sample space. The
elements in this new sample space are those elements in event A, and we normalize their probabilities
by dividing by P(A) so that they will still add to 1.
Let us consider an example. Suppose we have two coins, one of which is a trick coin, which has
two heads, and one of which is normal, and has one head and one tail. Suppose you toss a random
one of these coins. You observe that it comes up heads. What is the probability that the other
side is tails? I’ll tell you the solution in the next paragraph, but you might want to first test your
intuition by guessing the answer.
To solve this puzzle, let’s label the two sides of the coin with two heads: we call one of these
H1 and the other H2 . Now, there are four possibilities for the outcome of the above process, all
equally likely. They are as follows:
    coin 1    coin 2
      H1        H
      H2        T
If you observe heads, then you eliminate one of these four possibilities. Of the remaining three, the
other side will be heads in two cases (if you picked coin 1) and tails in only one case (if you picked
coin 2). Thus, the probability the other side is tails is equal to 1/3.
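The same answer falls out of a direct enumeration of the four equally likely (coin picked, side facing up) outcomes; a small sketch, not from the notes, with illustrative labels:

```python
from fractions import Fraction

# Each outcome is (coin picked, side facing up, side facing down).
# Coin 1 is the two-headed coin with sides H1 and H2; coin 2 is normal.
outcomes = [("coin 1", "H1", "H2"),
            ("coin 1", "H2", "H1"),
            ("coin 2", "H", "T"),
            ("coin 2", "T", "H")]
p = {o: Fraction(1, 4) for o in outcomes}

heads_up = [o for o in outcomes if o[1].startswith("H")]
other_side_tails = [o for o in heads_up if o[2] == "T"]

# Conditional probability that the hidden side is tails, given heads is showing.
print(sum(p[o] for o in other_side_tails) / sum(p[o] for o in heads_up))  # 1/3
```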
A similar probability puzzle goes as follows: You meet a woman who has two children, at least
one of whom is a girl. What is the probability that the two children are girls? The intended answer
is that if you choose a woman with two children randomly, then with probability 1/4 she has two boys,
with probability 1/2 she has one boy and one girl, and with probability 1/4 she has two girls. Thus
the conditional probability that she has two girls, given that she has at least one girl, is 1/3.¹ Note that
the above calculation does not take into account the possibility of twins.
Exercise. A woman lives in a school district where one-fifth of the women with two children have
twins, and of these, one-fifth are identical twins (with both children being of the same sex). Now
what is the probability that a woman with two children she meets at the party for parents of first-grade
girls has two daughters? [Assume that all mothers are equally likely to go to the party, that
two siblings are in the same grade if and only if they are twins, and that children are equally likely
to be boys or girls.]
Bayes’ rule relates the two conditional probabilities P(B|A) and P(A|B):

    P(B|A) = P(A|B) · P(B) / P(A).
The proof of Bayes’ rule is straightforward. Replacing the conditional probabilities in Bayes’
rule by their definition, we get

    P(A ∧ B) / P(A) = (P(A ∧ B) / P(B)) · P(B) / P(A),

which clearly holds.

As an example, consider a rare disease L, which afflicts one person in a thousand, and a test for it.
The test is imperfect: there is some false positive rate

    P(positive test | no disease).

Let us assume that this probability of a false positive is 1/30. There will also be some false negative
rate:

    P(negative test | disease).

Let us assume that this probability of a false negative is 1/10.
Now, is it a good idea to test everyone for the disease? We will use Bayes’ rule to calculate
the probability that somebody in the general population who tests positive actually has disease L.
Let’s define event A as testing positive and B as having the disease. Then Bayes’ rule tells us that
    P(B|A) = P(A|B) · P(B) / P(A).
¹ This might be contrary to your intuition. Indeed, if you meet a woman with a girl and have never seen her other
child, this second child has probability 1/2 (and not 1/3) of being a girl. A way of making the question confusing, so
as to trick people, is to ask: “You meet a woman who has two children, one of whom is a girl. What is the probability
that the other is a girl?”
What are these numbers? P(A|B) is the chance you test positive, given that you have disease L,
which we find is 0.9 = 1 − 1/10 by using the false negative rate. P(B) = 1/1000 is the incidence of
the disease. P(A) is a little harder to calculate. We can obtain it by using the formula

    P(A) = P(A|B)P(B) + P(A|¬B)P(¬B).
This gives
    P(A) = (9/10) · (1/1000) + (1/30) · (999/1000) ≈ 0.0342.
You can see that this calculation is dominated by the rate of false positives. Then, using Bayes’
rule, we find that
    P(B|A) = P(A|B) · P(B)/P(A) = 0.9 · (0.001/0.0342) ≈ 0.026.

That is, even if you test positive, the chance that you have disease L is only around 2.6 percent.
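For concreteness, here is a tiny Python sketch (not part of the notes) that redoes this calculation with exact fractions; the variable names are illustrative.

```python
from fractions import Fraction

incidence = Fraction(1, 1000)        # P(B): having disease L
false_negative = Fraction(1, 10)     # P(negative test | disease)
false_positive = Fraction(1, 30)     # P(positive test | no disease)

p_pos_given_disease = 1 - false_negative                   # P(A|B) = 9/10
p_positive = (p_pos_given_disease * incidence
              + false_positive * (1 - incidence))          # P(A), total probability

posterior = p_pos_given_disease * incidence / p_positive   # Bayes' rule: P(B|A)
print(float(p_positive))   # about 0.0342
print(float(posterior))    # about 0.026
```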
Whether it is a good idea to test everyone for disease L is a medical decision, which will depend
on the severity of the disease, and the side effects of whatever treatment they give to people who
test positive. However, anybody deciding whether it is a good idea should take into account the
above calculation.
This is not just a theoretical problem. Recently, a number of medical clinics have been advertising
whole body CAT scans for apparently healthy people, on the chance that they will detect
some cancer or other serious illness early enough to cure it. The FDA and some other medical
organizations are questioning whether the benefits outweigh the risks involved with investigating
false positives (which may involve surgery) that ultimately turn out to be no threat to health.
4 Inclusion-exclusion

Suppose now that we have two events A and B, and we want to compute P(A ∨ B), the probability
that at least one of them occurs.

[Figure: Venn diagram of two events A and B, with their overlap A ∧ B.]
We see that if we take P(A) + P(B), we have double counted all points of the sample space that
are in both A and B, so we need to subtract their probabilities. This can be done by subtracting
P(A ∧ B). We then get
P(A ∨ B) = P(A) + P(B) − P(A ∧ B).
Now, if we have three events, A, B and C, then we get the Venn diagram represented in Figure 3.
Figure 3: Venn diagram of three events A, B and C. (To clear any confusion, event A corresponds
to 4 of the 8 regions in the Venn diagram, and A ∧ B corresponds to two of them.)
We want to obtain a formula for P(A ∨ B ∨ C). If we take P(A) + P(B) + P(C), we have counted
every point in the pairwise intersections twice, and every point in the triple intersection A ∧ B ∧ C
three times. Thus, to fix the pairwise intersections, we must subtract P(A∧B)+P(B∧C)+P(A∧C).
Now, if we look at the points in A ∧ B ∧ C, we started by counting every point in this set three
times, and we then subtracted each of these points three times, so we have to add them back in
once. Thus, for three events, we get
P(A ∨ B ∨ C) = P(A) + P(B) + P(C) − P(A ∧ B) − P(B ∧ C) − P(A ∧ C) + P(A ∧ B ∧ C).
Thus, to get the probability that at least one of these three events occurs, we add the probabilities
of all the events, subtract the probabilities of the intersections of all pairs of events, and add back
the probability of the intersection of all three events.
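A quick numerical check of this three-event formula on the die sample space (a sketch, not from the notes; the events chosen here are arbitrary):

```python
from fractions import Fraction

S = range(1, 7)
p = {x: Fraction(1, 6) for x in S}

def prob(e):
    return sum(p[x] for x in e)

A = {2, 4, 6}   # even
B = {1, 2, 3}   # at most 3
C = {3, 6}      # divisible by 3

lhs = prob(A | B | C)
rhs = (prob(A) + prob(B) + prob(C)
       - prob(A & B) - prob(B & C) - prob(A & C)
       + prob(A & B & C))
print(lhs, rhs, lhs == rhs)  # 5/6 5/6 True
```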
The inclusion-exclusion formula can be generalized to n events. Here is the general result:
Theorem 1. Let A1 , . . ., An be events. Then the probability of their union is
    P(A1 ∨ A2 ∨ . . . ∨ An) = ∑_{i=1}^{n} P(Ai) − ∑_{1≤i<j≤n} P(Ai ∧ Aj)                                (1)
                              + ∑_{1≤i<j<k≤n} P(Ai ∧ Aj ∧ Ak) − . . . + (−1)^{n+1} P(A1 ∧ A2 ∧ . . . ∧ An).
Proof. We prove the formula by induction on n, the case n = 2 being the formula
P(A ∨ B) = P(A) + P(B) − P(A ∧ B) derived above. Consider the right-hand side of (1), and split its
terms into three groups: the terms that do not involve An, the single term P(An), and the terms that
involve An together with at least one other event. By the inclusion-exclusion formula for n − 1 events,
the terms that do not involve An add up to

    P(A1 ∨ A2 ∨ . . . ∨ An−1).

The terms that involve An together with at least one other event form, up to an overall minus sign,
the same sum as the right side of the inclusion-exclusion formula for n − 1 events, except that every
term has an additional ∧An included in it. We claim that this sum is

    P((A1 ∨ A2 ∨ . . . ∨ An−1) ∧ An).

One way to see this is to apply the inclusion-exclusion formula for n − 1 events to the events

    Ãi = Ai ∧ An,

whose union is exactly (A1 ∨ A2 ∨ . . . ∨ An−1) ∧ An. The other, which we won’t go into the details of,
is to consider the sample space obtained by conditioning on the event An.

Summarizing, we have shown that the right-hand side of (1) is equal to

    P(A1 ∨ . . . ∨ An−1) − P((A1 ∨ . . . ∨ An−1) ∧ An) + P(An),

or

    P(B) − P(B ∧ An) + P(An)

if we define B to be the event A1 ∨ A2 ∨ . . . ∨ An−1. By the inclusion-exclusion formula for two events,
this is equal to P(B ∨ An), which is what we wanted to show (as this is precisely the left-hand-side
of (1)). This completes the proof of Theorem 1.
Example (letters and envelopes). Suppose we have n letters and n addressed envelopes, and we put
the letters into the envelopes at random, one letter per envelope. What is the probability that at least
one letter goes into the correct envelope? Let Ai be the event that the i-th letter goes into the correct
envelope; we want P(A1 ∨ A2 ∨ . . . ∨ An), which we compute with the inclusion-exclusion formula.
The first term is ∑_{i=1}^{n} P(Ai); each letter is equally likely to go into any of the n envelopes, so
P(Ai) = 1/n and this term equals n · (1/n) = 1. The second term involves the sum

    ∑_{1≤i<j≤n} P(Ai ∧ Aj).

There are (n choose 2) = n(n − 1)/2 terms. The event here is that both the i-th and the j-th letters go into
the correct envelopes. The probability that we put the i-th letter into the correct envelope is, as
before, 1/n. Given that we have put the i-th letter into the correct envelope, the probability that we
put the j-th letter into the correct envelope is 1/(n − 1), since there are n − 1 letters other than the i-th
one, and they are all equally likely to go into the j-th envelope. This probability is then 1/(n(n − 1)).
The second term thus is (remembering the minus sign)

    −(n choose 2) · 1/(n(n − 1)) = −1/2.
The t-th term is

    (−1)^{t+1} ∑ P(Aj1 ∧ Aj2 ∧ . . . ∧ Ajt),

where the sum is over all 1 ≤ j1 < j2 < . . . < jt ≤ n. There are

    (n choose t) = n(n − 1) . . . (n − t + 1) / t!

terms in this sum, and each term is the probability

    1 / (n(n − 1)(n − 2) . . . (n − t + 1)).
Multiplying these quantities, we find that the t-th sum is (−1)^{t+1} · 1/t!. We thus have that the probability
that at least one of the n letters goes into the right envelope is

    1 − 1/2! + 1/3! − . . . + (−1)^{n+1} · 1/n!,

and subtracting this from 1, we get that the probability that none of these letters goes into the
right envelope is

    1 − 1 + 1/2! − 1/3! + . . . + (−1)^n · 1/n!.
This can be rewritten as
    ∑_{k=0}^{n} (−1)^k / k!.
You may recognize this as the first n + 1 terms of the Taylor expansion of e^x, with the substitution
x = −1. Thus, as n goes to ∞, the probability that none of the letters goes into the correct envelope
tends to 1/e.
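The limit 1/e shows up quickly even for small n; here is a brute-force sketch (not from the notes) that counts derangements directly.

```python
from itertools import permutations
from math import e
from fractions import Fraction

def no_letter_correct_probability(n):
    """Probability that a random assignment of n letters to n envelopes
    puts no letter into its own envelope (brute force over permutations)."""
    perms = list(permutations(range(n)))
    deranged = sum(1 for perm in perms if all(perm[i] != i for i in range(n)))
    return Fraction(deranged, len(perms))

for n in range(1, 9):
    print(n, float(no_letter_correct_probability(n)))
print("1/e =", 1 / e)   # about 0.3679
```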
5 Expectation
So far we have dealt with events and their probabilities. Another very important concept in
probability is that of a random variable. A random variable is simply a function f defined on the
points of our sample space S. That is, associated with every x ∈ S, there is a value f (x). For the
time being, we will only consider functions that take values over the reals R, but the range of a
random variable can be any set.
We say that two random variables f and g are independent if the events f (x) = α and g(x) = β
are independent for any choice of values α, β in the range of f and g.
We define the expected value of a random variable f to be
    E(f) = ∑_{x∈S} p(x) f(x).
Suppose that we have an event A. There is an important random variable IA associated with
A, called an indicator variable for A:
    IA(x) = 1 if x ∈ A,   and   IA(x) = 0 if x ∉ A.
At first glance, there might not seem like much point in using such a simple random variable.
However, it can be very useful, especially in conjunction with the following important fact:
Linearity of expectation. A very useful fact about expectation is that it is linear. That is, if
we have two functions f and g, then

    E(f + g) = E(f) + E(g),

and for any real constant c,

    E(c · f) = c · E(f).

The first identity holds because

    E(f + g) = ∑_{x∈S} p(x)(f(x) + g(x)) = ∑_{x∈S} p(x)f(x) + ∑_{x∈S} p(x)g(x) = E(f) + E(g).
The proof of the second is essentially similar, and we will not give it.
It is tempting to hope that E(f) · E(g) = E(f · g), but this is false in general. For example, for
an indicator random variable IA (thus taking values only 0 or 1), we have that E(IA) = P(A) while
E(IA · IA) = E(IA²) = E(IA) = P(A), which is not P(A)² (unless P(A) is 0 or 1). However, if the two
random variables f and g are independent, then equality does hold:
    E(f · g) = ∑_{x∈S} p(x) f(x) g(x)
             = ∑_α ∑_β ∑_{x∈S: f(x)=α, g(x)=β} p(x) αβ
             = ∑_α ∑_β αβ P(f(x) = α ∧ g(x) = β)
             = ∑_α ∑_β αβ P(f(x) = α) P(g(x) = β)          (by independence)
             = (∑_α α P(f(x) = α)) (∑_β β P(g(x) = β))
             = E(f) E(g).
So to summarize: we always have E(αf + βg) = αE(f ) + βE(g), and if in addition f and g are
independent, then E(f ) · E(g) = E(f g).
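Both facts are easy to check numerically on the product distribution for two dice; a short sketch (not from the notes, with illustrative names):

```python
from fractions import Fraction
from itertools import product

# Two independent fair dice under the product distribution.
S = list(product(range(1, 7), repeat=2))
p = {s: Fraction(1, 36) for s in S}

def expectation(h):
    return sum(p[s] * h(s) for s in S)

f = lambda s: s[0]   # value shown by the first die
g = lambda s: s[1]   # value shown by the second die

print(expectation(lambda s: f(s) + g(s)))   # 7 = E(f) + E(g)
print(expectation(lambda s: f(s) * g(s)))   # 49/4
print(expectation(f) * expectation(g))      # 49/4, since f and g are independent
```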
Exercise. It is however possible for E(f) · E(g) to equal E(f g) for f and g that are not independent;
give an example of such a pair of random variables.
6 Variance
Another quantity associated with a random variable is its variance. This is defined as

    Var(f) = E((f − E(f))²).
That is, the variance is the expectation of the square of the difference between the value of f and
the expected value f¯ = E(f ) of f . We can also expand this into:
    Var(f) = ∑_{x∈S} p(x)(f(x) − f̄)².
The variance tells us how closely the value of a random variable is clustered around its expected
value. You might be more familiar with the standard deviation σ; the standard deviation of f is
defined to be the square root of Var(f ).
We can rewrite the definition of the variance as follows:

    Var(f) = E((f − f̄)²)
           = E(f² − 2f̄·f + f̄²)
           = E(f²) − 2f̄·E(f) + f̄²
           = E(f²) − 2f̄² + f̄² = E(f²) − f̄²,
so the variance of f is the expectation of the square of f minus the square of the expectation of f .
We get from the second line to the third line by using linearity of expectation, and the third to the
fourth by using the definition Ef = f¯. Notice that the variance is always nonnegative, and that it
is equal to 0 if f is constant.
Let us compute the variance and standard deviation of the roll of a die. Let the number on the
die be the random variable X. We have that each of the numbers 1 through 6 is equally likely, so

    E(X) = ∑_{i=1}^{6} p(i) · i = (1/6) ∑_{i=1}^{6} i = 21/6
and
    E[X²] = ∑_{i=1}^{6} p(i) · i² = (1/6) ∑_{i=1}^{6} i² = 91/6.

So

    Var(X) = E[X²] − (E(X))² = 91/6 − (21/6)² = 35/12

and the standard deviation is

    σ = √(35/12) ≈ 1.7078.
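The arithmetic above is easy to reproduce; a minimal sketch (not from the notes):

```python
from fractions import Fraction

S = range(1, 7)
p = {x: Fraction(1, 6) for x in S}

EX = sum(p[x] * x for x in S)         # 7/2
EX2 = sum(p[x] * x * x for x in S)    # 91/6
var = EX2 - EX * EX                   # 35/12

print(var)                 # 35/12
print(float(var) ** 0.5)   # about 1.7078, the standard deviation
```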
Recall that the Cauchy-Schwarz inequality says that for two vectors s and t, the inner product
∑_i si ti is at most the product of their lengths. For si choose √(p(xi)) |f(xi) − f̄| and for ti choose
√(p(xi)). Then their inner product is the expected value of |f − f̄|, the length of s is the standard
deviation, and the length of t is 1. Thus the expected value of |f − f̄| is at most the standard deviation.
For the die example above, we have that the expected value of |f − f̄| is 3/2, which is slightly less
than the standard deviation of 1.7078.
If we have a random variable f and a real number c, then Var(c · f) = c² Var(f). Suppose we have two
random variables f and g, and want to compute the variance of their sum. We get

    Var(f + g) = E[(f + g)²] − (E[f + g])²
               = E[f²] + 2E[f g] + E[g²] − (f̄ + ḡ)²
               = (E[f²] − f̄²) + (E[g²] − ḡ²) + 2(E[f g] − f̄ ḡ)
               = Var(f) + Var(g) + 2(E[f g] − f̄ ḡ).
This last quantity, E[f g] − f¯ḡ, is called the covariance. Recall from the previous section that if f
and g are independent, E[f g] = f¯ḡ, and so the covariance is 0. Thus we have that if f and g are
independent,
Var(f + g) = Var(f ) + Var(g);
variance is linear if we have independence (whereas expectation is linear even without independence).
Finally, suppose we have k random variables f1, . . . , fk which are pairwise independent: for
any i ≠ j, fi and fj are independent. What is the variance of the random variable f = f1 + f2 +
f3 + . . . + fk? We have, using the same reasoning as above,

    Var(f) = ∑_{i=1}^{k} Var(fi),

since all the pairwise covariances are 0. For example, consider a coin that comes up heads with
probability p, and let the result of a single flip be 1 for heads and 0 for tails. This random variable
has expectation p and variance E[f²] − (E[f])² = p − p² = p(1 − p). If you flip the coin n times, the
results of the coin flips are independent, and the number of heads is the sum of the n individual
results. Thus the variance of the number of heads is n times the variance of a single coin flip, or np(1 − p).
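Here is a brute-force sketch (not from the notes) that checks the formula np(1 − p) by enumerating all 2^n outcomes of n flips; the parameters n = 6 and p = 1/3 are arbitrary choices.

```python
from fractions import Fraction
from itertools import product

def variance_of_heads(n, p):
    """Variance of the number of heads in n independent flips of a coin
    that comes up heads with probability p, by enumerating all outcomes."""
    mean = n * p
    var = Fraction(0)
    for outcome in product([0, 1], repeat=n):
        heads = sum(outcome)
        prob = p ** heads * (1 - p) ** (n - heads)
        var += prob * (heads - mean) ** 2
    return var

n, p = 6, Fraction(1, 3)
print(variance_of_heads(n, p), n * p * (1 - p))   # both print 4/3
```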
7 Chebyshev’s inequality
A very useful inequality, that can give bounds on probabilities, can be proved using the tools that
we have developed so far. This is Chebyshev’s inequality, and it is used to get an upper bound on
the probability that a random variable takes on a value that is too far away from its mean.
Suppose you know the mean and variance of a random variable f . Is there some way that
you can put a bound on the probability that the random variable is a long way away from its
mean? This is exactly what Chebyshev’s inequality does. Let’s first derive Chebyshev’s inequality
intuitively, and then figure out how to turn this into a mathematically rigorous proof.
We will turn the probability around. Suppose we fix the probability p that the random variable is
farther than cσ away (for some given c) from the mean f̄ (recall that σ is the standard deviation
√Var(f)). Let’s see how small the variance can be. Let’s divide the sample space into two events.
The first (which happens with probability, say, 1 − p) is that

    |f − f̄| < cσ.
In this case, the way to minimize their contribution to the variance is to set f (x) = f¯, and the
contribution of this case to the variance is 0. The second event is when
|f − f¯| ≥ cσ.
In this case, the way to minimize the variance is to set |f (x) − f¯| = cσ, and the contribution of
this case to the variance is pc²σ². Since the variance is σ², we have

    σ² = pc²σ²,

which gives

    p = c⁻².
Since this was the case that minimized the variance, in any other case, the variance has to be
greater than this. This gives us Chebyshev’s inequality, namely
    P(|f − f̄| ≥ cσ) ≤ 1/c².

Equivalently, taking y = cσ, this can be written as

    P(|f − f̄| ≥ y) ≤ σ²/y².
Now, let’s turn this derivation into rigorous mathematical formulas. We first write down the
formula for the variance:

    σ² = ∑_{x∈S} p(x) |f(x) − f̄|².
Now, let’s divide it into the two cases we talked about above.
    σ² = ∑_{x: |f(x)−f̄| < cσ} p(x) |f(x) − f̄|²  +  ∑_{x: |f(x)−f̄| ≥ cσ} p(x) |f(x) − f̄|².
For the first sum, we have ∑ p(x) = 1 − p, and for the second sum, we have ∑ p(x) = p, where p is
the probability that |f − f̄| ≥ cσ. Similarly, for the first sum we have |f(x) − f̄|² ≥ 0, and for the
second sum we have |f(x) − f̄|² ≥ c²σ². Putting these facts together, we have
    σ² ≥ pc²σ²,

which gives

    p ≤ 1/c²,
and proves Chebyshev’s inequality.
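To see how loose or tight the bound can be, here is a small sketch (not from the notes) comparing Chebyshev's bound 1/c² with the exact tail probability for a single fair die; the values of c are arbitrary.

```python
from fractions import Fraction

S = range(1, 7)
p = {x: Fraction(1, 6) for x in S}

mean = sum(p[x] * x for x in S)                        # 7/2
var = sum(p[x] * (x - mean) ** 2 for x in S)           # 35/12
sigma = float(var) ** 0.5

for c in [1.0, 1.2, 1.5, 2.0]:
    tail = float(sum(p[x] for x in S if abs(x - float(mean)) >= c * sigma))
    print(c, tail, 1 / c ** 2)   # exact P(|X - mean| >= c*sigma) vs. the bound
```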
Theorem 2 (Weak law of large numbers). Fix ε > 0. Let f1, · · · , fn be n independent copies of a
random variable f. Let

    gn = (1/n)(f1 + f2 + · · · + fn).

Then

    P(|gn − f̄| ≥ ε) → 0

as n → ∞.

In plain English, the probability that gn deviates from the expected value of f by at least ε
becomes arbitrarily small as n grows arbitrarily large.
The weak law of large numbers can be proved by using Chebyshev’s inequality applied to gn .
For this, we need to know E[gn ] and Var(gn ). By linearity of expectations, we have
    E[gn] = E[(1/n)(f1 + · · · + fn)] = (1/n) ∑_{i=1}^{n} E[fi] = (n/n) E[f] = f̄.
    Var[gn] = Var[(1/n)(f1 + · · · + fn)]
            = (1/n²) Var[∑_{i=1}^{n} fi]
            = (1/n²) ∑_{i=1}^{n} Var[fi]
            = (1/n) Var[f],
the third equality being true since the fi ’s are independent. Thus, as n tends to infinity, E[gn ]
remains constant while Var[gn ] tends to 0. For example, we saw that the roll of a fair die gives
a variance of 35/12. If we were to roll the die 1000 times and average all 1000 values, we would get
a random variable whose expected value is still 3.5 but whose variance is much smaller: it is only
35/12,000.
Now that we know the expected value and variance of gn , we can simply use Chebyshev’s
inequality on gn to get:
    P(|gn − f̄| ≥ ε) ≤ Var[gn]/ε² = Var[f]/(nε²),
and indeed this probability tends to 0 as n tends to infinity. This proves the weak law of large
numbers.
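A simulation makes the statement vivid: as n grows, the fraction of experiments in which the average of n die rolls misses 3.5 by more than ε shrinks. This sketch is not from the notes; the choices of ε, n, and the number of trials are arbitrary.

```python
import random

random.seed(0)

def average_of_rolls(n):
    """Average of n rolls of a fair die."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

epsilon = 0.1
trials = 500
for n in [10, 100, 1000, 10000]:
    misses = sum(1 for _ in range(trials)
                 if abs(average_of_rolls(n) - 3.5) >= epsilon)
    print(n, misses / trials)   # empirical P(|g_n - 3.5| >= 0.1), decreasing in n
```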
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.