
Cambridge University Press, 978-1-108-47368-2, Probability, 5th Edition (excerpt)

Measure Theory

In this chapter, we will recall some definitions and results from measure theory. Our purpose
here is to provide an introduction to readers who have not seen these concepts before and
to review that material for those who have. Harder proofs, especially those that do not
contribute much to one’s intuition, are hidden away in the Appendix. Readers with a solid
background in measure theory can skip Sections 1.4, 1.5, and 1.7, which were previously
part of the Appendix.

1.1 Probability Spaces


Here and throughout the book, terms being defined are set in boldface. We begin with the
most basic quantity. A probability space is a triple (Ω, F, P), where Ω is a set of "outcomes,"
F is a set of "events," and P : F → [0,1] is a function that assigns probabilities to events.
We assume that F is a σ-field (or σ-algebra), i.e., a (nonempty) collection of subsets of Ω that satisfy
(i) if A ∈ F, then Ac ∈ F, and
(ii) if Ai ∈ F is a countable sequence of sets, then ∪i Ai ∈ F.
Here and in what follows, countable means finite or countably infinite. Since ∩i Ai =
(∪i Ai^c)^c, it follows that a σ-field is closed under countable intersections. We omit the last
property from the definition to make it easier to check.
Without P, (Ω, F) is called a measurable space, i.e., it is a space on which we can put
a measure. A measure is a nonnegative countably additive set function; that is, a function
μ : F → R with
(i) μ(A) ≥ μ(∅) = 0 for all A ∈ F, and
(ii) if Ai ∈ F is a countable sequence of disjoint sets, then

μ(∪i Ai) = ∑i μ(Ai)

If μ(Ω) = 1, we call μ a probability measure. In this book, probability measures are
usually denoted by P.
The next result gives some consequences of the definition of a measure that we will need
later. In all cases, we assume that the sets we mention are in F .


Theorem 1.1.1 Let μ be a measure on (Ω, F).
(i) monotonicity. If A ⊂ B, then μ(A) ≤ μ(B).
(ii) subadditivity. If A ⊂ ∪m≥1 Am, then μ(A) ≤ ∑m≥1 μ(Am).
(iii) continuity from below. If Ai ↑ A (i.e., A1 ⊂ A2 ⊂ . . . and ∪i Ai = A), then μ(Ai) ↑ μ(A).
(iv) continuity from above. If Ai ↓ A (i.e., A1 ⊃ A2 ⊃ . . . and ∩i Ai = A), with
μ(A1) < ∞, then μ(Ai) ↓ μ(A).
Proof (i) Let B − A = B ∩ Ac be the difference of the two sets. Using + to denote disjoint
union, B = A + (B − A) so
μ(B) = μ(A) + μ(B − A) ≥ μ(A).
(ii) Let A′n = An ∩ A, B1 = A′1 and for n > 1, Bn = A′n − ∪m≤n−1 A′m. Since the Bn are disjoint
and have union A, we have, using (ii) of the definition of measure, Bm ⊂ Am, and (i) of this
theorem,

μ(A) = ∑m≥1 μ(Bm) ≤ ∑m≥1 μ(Am)

(iii) Let Bn = An − An−1. Then the Bn are disjoint and have ∪m≥1 Bm = A and ∪m≤n Bm = An, so

μ(A) = ∑m≥1 μ(Bm) = limn→∞ ∑m≤n μ(Bm) = limn→∞ μ(An)

(iv) A1 − An ↑ A1 − A so (iii) implies μ(A1 − An) ↑ μ(A1 − A). Since A1 ⊃ A, we have
μ(A1 − A) = μ(A1) − μ(A) and, similarly, μ(A1 − An) = μ(A1) − μ(An), so it follows that μ(An) ↓ μ(A).
The simplest setting, which should be familiar from undergraduate probability, is:
Example 1.1.2 (Discrete probability spaces) Let Ω = a countable set, i.e., finite or count-
ably infinite. Let F = the set of all subsets of Ω. Let

P(A) = ∑ω∈A p(ω), where p(ω) ≥ 0 and ∑ω∈Ω p(ω) = 1

A little thought reveals that this is the most general probability measure on this space.
In many cases when Ω is a finite set, we have p(ω) = 1/|Ω|, where |Ω| = the number
of points in Ω.
For a simple concrete example that requires this level of generality, consider the astragali,
dice used in ancient Egypt made from the ankle bones of sheep. This die could come to rest
on the top side of the bone for four points or on the bottom for three points. The side of the
bone was slightly rounded. The die could come to rest on a flat and narrow piece for six
points or somewhere on the rest of the side for one point. There is no reason to think that all
four outcomes are equally likely, so we need probabilities p1 , p3 , p4 , and p6 to describe P .
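For readers who like to compute, here is a minimal Python sketch of Example 1.1.2; the particular weights assigned to the four astragalus outcomes are made-up illustrative numbers, not historical frequencies.

```python
# A minimal sketch of a discrete probability space (Example 1.1.2):
# P(A) is defined by summing p(omega) over omega in A.
p = {1: 0.1, 3: 0.4, 4: 0.4, 6: 0.1}   # illustrative p(omega) >= 0 with total mass 1

def P(A):
    """P(A) = sum of p(omega) over omega in A, for A a subset of Omega."""
    return sum(p[w] for w in A)

print(P({3, 4}))     # 0.8: the bone lands on its bottom or top
print(P(set(p)))     # 1.0: P(Omega) = 1
```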
To prepare for our next definition, we need to note that it follows easily from the
definition: if Fi, i ∈ I, are σ-fields, then so is ∩i∈I Fi. Here I ≠ ∅ is an arbitrary index set


(i.e., possibly uncountable). From this it follows that if we are given a set  and a collection
A of subsets of , then there is a smallest σ -field containing A. We will call this the σ -field
generated by A and denote it by σ (A).
Let Rd be the set of vectors (x1, . . . xd ) of real numbers and Rd be the Borel sets, the
smallest σ -field containing the open sets. When d = 1, we drop the superscript.
Example 1.1.3 (Measures on the real line) Measures on (R, R) are defined by giving a
Stieltjes measure function with the following properties:
(i) F is nondecreasing.
(ii) F is right continuous, i.e., limy↓x F (y) = F (x).
Theorem 1.1.4 Associated with each Stieltjes measure function F there is a unique measure
μ on (R, R) with

μ((a,b]) = F(b) − F(a)     (1.1.1)

When F (x) = x, the resulting measure is called Lebesgue measure.
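As a quick illustration of (1.1.1), the following Python sketch evaluates μ((a,b]) = F(b) − F(a) for an assumed Stieltjes measure function F with a unit jump at 0 and a Lebesgue piece on [0,1]; the point of the example is that an interval that is closed on the right picks up a jump of F at its right endpoint.

```python
# A sketch of (1.1.1): mu((a, b]) = F(b) - F(a) for a Stieltjes measure function F.
# This particular F (unit jump at 0, then slope 1 up to x = 1) is an illustrative choice.
def F(x):
    if x < 0:
        return 0.0
    return 1.0 + min(x, 1.0)   # right continuous and nondecreasing

def mu(a, b):
    """mu((a, b]) = F(b) - F(a) for a < b."""
    return F(b) - F(a)

print(mu(-1, 0))    # 1.0: the point mass at 0 is captured because (a, b] is closed on the right
print(mu(0, 0.5))   # 0.5: the absolutely continuous part
```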


The proof of Theorem 1.1.4 is a long and winding road, so we will content ourselves with
describing the main ideas involved in this section and hiding the remaining details in the
Appendix in Section A.1. The choice of “closed on the right” in (a,b] is dictated by the fact
that if bn ↓ b, then we have
∩n (a,bn ] = (a,b]

The next definition will explain the choice of “open on the left.”
A collection S of sets is said to be a semialgebra if (i) it is closed under intersection,
i.e., S, T ∈ S implies S ∩ T ∈ S , and (ii) if S ∈ S , then S c is a finite disjoint union of sets
in S . An important example of a semialgebra is
Example 1.1.5 Sd = the empty set plus all sets of the form

(a1,b1 ] × · · · × (ad ,bd ] ⊂ Rd where − ∞ ≤ ai < bi ≤ ∞

The definition in (1.1.1) gives the values of μ on the semialgebra S1 . To go from semial-
gebra to σ -algebra, we use an intermediate step. A collection A of subsets of  is called an
algebra (or field) if A,B ∈ A implies Ac and A ∪ B are in A. Since A ∩ B = (Ac ∪ B c )c ,
it follows that A ∩ B ∈ A. Obviously, a σ -algebra is an algebra. An example in which the
converse is false is:
Example 1.1.6 Let Ω = Z = the integers. A = the collection of A ⊂ Z so that A or Ac is
finite is an algebra.
Lemma 1.1.7 If S is a semialgebra, then S̄ = {finite disjoint unions of sets in S } is an
algebra, called the algebra generated by S .
Proof Suppose A = +i Si and B = +j Tj , where + denotes disjoint union and we assume
the index sets are finite. Then A ∩ B = +i,j Si ∩ Tj ∈ S̄ . As for complements, if A = +i Si
then Ac = ∩i Sic . The definition of S implies Sic ∈ S̄ . We have shown that S̄ is closed under
intersection, so it follows by induction that Ac ∈ S̄ .


Example 1.1.8 Let Ω = R and S = S1. Then S̄1 = the empty set plus all sets of the form

(a1,b1] ∪ · · · ∪ (ak,bk]   where −∞ ≤ ai < bi ≤ ∞

Given a set function μ on S we can extend it to S̄ by

μ(A1 + · · · + An) = μ(A1) + · · · + μ(An)

By a measure on an algebra A, we mean a set function μ with


(i) μ(A) ≥ μ(∅) = 0 for all A ∈ A, and
(ii) if Ai ∈ A are disjoint and their union is in A, then

μ(∪i≥1 Ai) = ∑i≥1 μ(Ai)

μ is said to be σ-finite if there is a sequence of sets An ∈ A so that μ(An) < ∞ and
∪n An = Ω. Letting A′1 = A1 and, for n ≥ 2,

A′n = ∪m≤n Am   or   A′n = An ∩ (∩m≤n−1 Am^c) ∈ A

we can without loss of generality assume that An ↑ Ω or the An are disjoint.


The next result helps us to extend a measure defined on a semialgebra S to the σ-algebra
it generates, σ(S).
Theorem 1.1.9 Let S be a semialgebra and let μ defined on S have μ(∅) = 0. Suppose
(i) if S ∈ S is a finite disjoint union of sets Si ∈ S, then μ(S) = ∑i μ(Si), and (ii) if
Si, S ∈ S with S = +i≥1 Si, then μ(S) ≤ ∑i≥1 μ(Si). Then μ has a unique extension μ̄
that is a measure on S̄, the algebra generated by S. If μ̄ is σ-finite, then there is a
unique extension ν that is a measure on σ(S).
In (ii) above, and in what follows, i ≥ 1 indicates a countable union, while a plain subscript
i or j indicates a finite union. The proof of Theorem 1.1.9 is rather involved, so it is given
in Section A.1. To check condition (ii) in the theorem, the following is useful.
Lemma 1.1.10 Suppose only that (i) holds.
(a) If A, Bi ∈ S̄ with A = +i≤n Bi, then μ̄(A) = ∑i μ̄(Bi).
(b) If A, Bi ∈ S̄ with A ⊂ ∪i≤n Bi, then μ̄(A) ≤ ∑i μ̄(Bi).




Proof Observe that it follows from the definition that if A = +i Bi is a finite disjoint union
of sets in S̄ and Bi = +j Si,j, then

μ̄(A) = ∑i,j μ(Si,j) = ∑i μ̄(Bi)

To prove (b), we begin with the case n = 1, B1 = B. B = A + (B ∩ Ac ) and B ∩ Ac ∈ S̄ , so

μ̄(A) ≤ μ̄(A) + μ̄(B ∩ Ac ) = μ̄(B)


To handle n > 1 now, let Fk = B1^c ∩ . . . ∩ Bk−1^c ∩ Bk and note

∪i Bi = F1 + · · · + Fn
A = A ∩ (∪i Bi) = (A ∩ F1) + · · · + (A ∩ Fn)

so using (a), (b) with n = 1, and (a) again,

μ̄(A) = ∑k≤n μ̄(A ∩ Fk) ≤ ∑k≤n μ̄(Fk) = μ̄(∪i Bi)

Proof of Theorem 1.1.4. Let S be the semialgebra of half-open intervals (a,b] with
−∞ ≤ a < b ≤ ∞. To define μ on S , we begin by observing that
F(∞) = limx↑∞ F(x) and F(−∞) = limx↓−∞ F(x) exist

and μ((a,b]) = F (b) − F (a) makes sense for all −∞ ≤ a < b ≤ ∞ since F (∞) > −∞
and F (−∞) < ∞.
If (a,b] = +i≤n (ai,bi], then after relabeling the intervals we must have a1 = a, bn = b,
and ai = bi−1 for 2 ≤ i ≤ n, so condition (i) in Theorem 1.1.9 holds. To check (ii), suppose
first that −∞ < a < b < ∞, and (a,b] ⊂ ∪i≥1 (ai,bi] where (without loss of generality)
−∞ < ai < bi < ∞. Pick δ > 0 so that F(a + δ) < F(a) + ε and pick ηi so that

F(bi + ηi) < F(bi) + ε 2^{−i}

The open intervals (ai, bi + ηi) cover [a + δ, b], so there is a finite subcover (αj, βj),
1 ≤ j ≤ J. Since (a + δ, b] ⊂ ∪j≤J (αj, βj], (b) in Lemma 1.1.10 implies

F(b) − F(a + δ) ≤ ∑j≤J (F(βj) − F(αj)) ≤ ∑i≥1 (F(bi + ηi) − F(ai))

So, by the choice of δ and ηi,

F(b) − F(a) ≤ 2ε + ∑i≥1 (F(bi) − F(ai))

and since ε is arbitrary, we have proved the result in the case −∞ < a < b < ∞.
To remove the last restriction, observe that if (a,b] ⊂ ∪i (ai ,bi ] and (A,B] ⊂ (a,b] has
−∞ < A < B < ∞, then we have


F(B) − F(A) ≤ ∑i≥1 (F(bi) − F(ai))

Since the last result holds for any finite (A,B] ⊂ (a,b], the desired result follows.
Measures on Rd
Our next goal is to prove a version of Theorem 1.1.4 for Rd . The first step is to introduce
the assumptions on the defining function F . By analogy with the case d = 1 it is natural to
assume:


Figure 1.1 Picture of the counterexample: the value of F(x1,x2) in each of the nine regions determined by the lines xi = 0 and xi = 1 (x1 increasing to the right, x2 increasing upward):
0   2/3   1
0    0   2/3
0    0    0

(i) It is nondecreasing, i.e., if x ≤ y (meaning xi ≤ yi for all i), then F (x) ≤ F (y).
(ii) F is right continuous, i.e., limy↓x F (y) = F (x) (here y ↓ x means each yi ↓ xi ).
(iii) If xn ↓ −∞, i.e., each coordinate does, then F(xn) ↓ 0. If xn ↑ ∞, i.e., each
coordinate does, then F(xn) ↑ 1.
However, this time it is not enough. Consider the following F:

F(x1,x2) = 1     if x1, x2 ≥ 1
           2/3   if x1 ≥ 1 and 0 ≤ x2 < 1
           2/3   if x2 ≥ 1 and 0 ≤ x1 < 1
           0     otherwise
See Figure 1.1 for a picture. A little thought shows that

μ((a1,b1] × (a2,b2]) = μ((−∞,b1] × (−∞,b2]) − μ((−∞,a1] × (−∞,b2])
                       − μ((−∞,b1] × (−∞,a2]) + μ((−∞,a1] × (−∞,a2])
                     = F(b1,b2) − F(a1,b2) − F(b1,a2) + F(a1,a2)

Using this with a1 = a2 = 1 − ε and b1 = b2 = 1 and letting ε → 0, we see that

μ({(1,1)}) = 1 − 2/3 − 2/3 + 0 = −1/3

Similar reasoning shows that μ({(1,0)}) = μ({(0,1)}) = 2/3.
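The computation above is easy to check numerically. The Python sketch below evaluates the inclusion-exclusion formula on small rectangles shrinking to the three points; F is the function displayed above, and the small ε is an arbitrary illustrative choice.

```python
# Check the counterexample: apply
#   mu((a1,b1] x (a2,b2]) = F(b1,b2) - F(a1,b2) - F(b1,a2) + F(a1,a2)
# to tiny rectangles shrinking to (1,1), (1,0) and (0,1).
def F(x1, x2):
    if x1 >= 1 and x2 >= 1:
        return 1.0
    if x1 >= 1 and 0 <= x2 < 1:
        return 2 / 3
    if x2 >= 1 and 0 <= x1 < 1:
        return 2 / 3
    return 0.0

def mu_rect(a1, b1, a2, b2):
    return F(b1, b2) - F(a1, b2) - F(b1, a2) + F(a1, a2)

eps = 1e-9                                   # illustrative small epsilon
print(mu_rect(1 - eps, 1, 1 - eps, 1))       # ~ -1/3: the "mass" at (1,1) would be negative
print(mu_rect(1 - eps, 1, -eps, 0))          # ~  2/3: the mass at (1,0)
print(mu_rect(-eps, 0, 1 - eps, 1))          # ~  2/3: the mass at (0,1)
```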
To formulate the third and final condition for F to define a measure, let

A = (a1,b1] × · · · × (ad,bd]
V = {a1,b1} × · · · × {ad,bd}

where −∞ < ai < bi < ∞. To emphasize that ∞’s are not allowed, we will call A a finite
rectangle. Then V = the vertices of the rectangle A. If v ∈ V, let

sgn(v) = (−1)^(# of a’s in v)

ΔA F = ∑v∈V sgn(v) F(v)


We will let μ(A) = ΔA F, so we must assume
(iv) ΔA F ≥ 0 for all rectangles A.
Theorem 1.1.11 Suppose F : Rd → [0,1] satisfies (i)–(iv) given above. Then there is a
unique probability measure μ on (Rd, Rd) so that μ(A) = ΔA F for all finite rectangles.
Example 1.1.12 Suppose F(x) = F1(x1) · · · Fd(xd), where the Fi satisfy (i) and (ii) of
Theorem 1.1.4. In this case,

ΔA F = (F1(b1) − F1(a1)) · · · (Fd(bd) − Fd(ad))

When Fi(x) = x for all i, the resulting measure is Lebesgue measure on Rd.
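The signed vertex sum can be evaluated mechanically. The Python sketch below computes ΔA F for a finite rectangle by summing sgn(v) F(v) over the 2^d vertices and checks it against the product formula of Example 1.1.12 in the Lebesgue case Fi(x) = x; the dimension d = 3 and the particular box are arbitrary illustrative choices.

```python
# A sketch of the signed vertex sum Delta_A F, checked against the product
# formula of Example 1.1.12 for Fi(x) = x (Lebesgue measure).
from itertools import product

def delta_A(F, a, b):
    """Sum of sgn(v) * F(v) over the vertices v of (a1,b1] x ... x (ad,bd],
    where sgn(v) = (-1) ** (number of coordinates of v taken from a)."""
    d = len(a)
    total = 0.0
    for choice in product((0, 1), repeat=d):   # 0 -> take a_i, 1 -> take b_i
        v = tuple(a[i] if c == 0 else b[i] for i, c in enumerate(choice))
        num_a = sum(1 for c in choice if c == 0)
        total += (-1) ** num_a * F(v)
    return total

def F(x):                                      # F(x) = x1 * x2 * x3, i.e. Fi(x) = x with d = 3
    return x[0] * x[1] * x[2]

a, b = (0.0, 0.0, 0.0), (2.0, 3.0, 4.0)
print(delta_A(F, a, b))                        # 24.0 = (2-0)*(3-0)*(4-0), the volume of the box
```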


Proof We let μ(A) = ΔA F for all finite rectangles and then use monotonicity to extend the
definition to Sd. To check (i) of Theorem 1.1.9, call A = +k Bk a regular subdivision of A if
there are sequences ai = αi,0 < αi,1 < . . . < αi,ni = bi so that each rectangle Bk has the form

(α1,j1−1, α1,j1] × · · · × (αd,jd−1, αd,jd]   where 1 ≤ ji ≤ ni

It is easy to see that for regular subdivisions λ(A) = ∑k λ(Bk). (First consider the case in
which all the endpoints are finite and then take limits to get the general case.) To extend this
result to a general finite subdivision A = +j Aj, subdivide further to get a regular one.
The proof of (ii) is almost identical to that in Theorem 1.1.4. To make things easier to
write and to bring out the analogies with Theorem 1.1.4, we let
(x,y) = (x1,y1 ) × · · · × (xd ,yd )
(x,y] = (x1,y1 ] × · · · × (xd ,yd ]
[x,y] = [x1,y1 ] × · · · × [xd ,yd ]
for x, y ∈ Rd. Suppose first that −∞ < a < b < ∞, where the inequalities mean that each
component is finite, and suppose (a,b] ⊂ ∪i≥1 (a^i, b^i], where (without loss of generality)
−∞ < a^i < b^i < ∞. Let 1̄ = (1, . . . ,1) and pick δ > 0 so that

μ((a + δ 1̄, b]) > μ((a,b]) − ε

Figure 1.2 Conversion of a subdivision to a regular one.


and pick ηi so that

μ((a^i, b^i + ηi 1̄]) < μ((a^i, b^i]) + ε 2^{−i}

The open rectangles (a^i, b^i + ηi 1̄) cover [a + δ 1̄, b], so there is a finite subcover (α^j, β^j),
1 ≤ j ≤ J. Since (a + δ 1̄, b] ⊂ ∪j≤J (α^j, β^j], (b) in Lemma 1.1.10 implies

μ((a + δ 1̄, b]) ≤ ∑j≤J μ((α^j, β^j]) ≤ ∑i≥1 μ((a^i, b^i + ηi 1̄])

So, by the choice of δ and ηi,

μ((a,b]) ≤ 2ε + ∑i≥1 μ((a^i, b^i])

and since ε is arbitrary, we have proved the result in the case −∞ < a < b < ∞. The proof
can now be completed exactly as before.

Exercises
1.1.1 Let Ω = R, F = all subsets so that A or Ac is countable, P(A) = 0 in the first case
and = 1 in the second. Show that (Ω, F, P) is a probability space.
1.1.2 Recall the definition of Sd from Example 1.1.5. Show that σ (Sd ) = Rd , the Borel
subsets of Rd .
1.1.3 A σ -field F is said to be countably generated if there is a countable collection
C ⊂ F so that σ (C ) = F . Show that Rd is countably generated.
1.1.4 (i) Show that if F1 ⊂ F2 ⊂ . . . are σ -algebras, then ∪i Fi is an algebra. (ii) Give an
example to show that ∪i Fi need not be a σ -algebra.
1.1.5 A set A ⊂ {1,2, . . .} is said to have asymptotic density θ if
limn→∞ |A ∩ {1,2, . . . ,n}|/n = θ
Let A be the collection of sets for which the asymptotic density exists. Is A a
σ -algebra? an algebra?

1.2 Distributions
Probability spaces become a little more interesting when we define random variables on
them. A real-valued function X defined on Ω is said to be a random variable if for every
Borel set B ⊂ R we have X−1(B) = {ω : X(ω) ∈ B} ∈ F. When we need to emphasize the
σ-field, we will say that X is F-measurable or write X ∈ F. If Ω is a discrete probability
space (see Example 1.1.2), then any function X : Ω → R is a random variable. A second
trivial, but useful, type of example of a random variable is the indicator function of a set
A ∈ F:

1A(ω) = 1 if ω ∈ A,   0 if ω ∉ A


Figure 1.3 Definition of the distribution of X: X maps (Ω, F, P) to (R, R); a Borel set A is pulled back to X−1(A), and μ = P ◦ X−1.

The notation is supposed to remind you that this function is 1 on A. Analysts call this
object the characteristic function of A. In probability, that term is used for something quite
different. (See Section 3.3.)
If X is a random variable, then X induces a probability measure on R called its
distribution by setting μ(A) = P (X ∈ A) for Borel sets A. Using the notation introduced
previously, the right-hand side can be written as P (X −1 (A)). In words, we pull A ∈ R back
to X −1 (A) ∈ F and then take P of that set.
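Concretely, on a discrete probability space the pullback is just a preimage computation. In the Python sketch below, the space Ω, the weights p, and the random variable X are all made-up illustrative choices.

```python
# A sketch of the distribution mu = P o X^{-1} on a discrete probability space.
p = {"a": 0.2, "b": 0.3, "c": 0.5}     # illustrative P({omega}) on Omega = {a, b, c}
X = {"a": 1.0, "b": 1.0, "c": 2.0}     # an illustrative random variable X : Omega -> R

def P(event):                          # P(E) for E a subset of Omega
    return sum(p[w] for w in event)

def mu(A):                             # mu(A) = P(X in A) = P(X^{-1}(A))
    preimage = {w for w in p if X[w] in A}
    return P(preimage)

print(mu({1.0}))        # 0.5 = P({a, b})
print(mu({1.0, 2.0}))   # 1.0: X lands in {1.0, 2.0} with probability one
```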
To check that μ is a probability measure we observe that if the Ai are disjoint, then using
the definition of μ; the fact that X lands in the union if and only if it lands in one of the Ai ;
the fact that if the sets Ai ∈ R are disjoint, then the events {X ∈ Ai } are disjoint; and the
definition of μ again; we have:
 
μ(∪i Ai) = P(X ∈ ∪i Ai) = P(∪i {X ∈ Ai}) = ∑i P(X ∈ Ai) = ∑i μ(Ai)
The distribution of a random variable X is usually described by giving its distribution
function, F (x) = P (X ≤ x).
Theorem 1.2.1 Any distribution function F has the following properties:
(i) F is nondecreasing.
(ii) limx→∞ F (x) = 1, limx→−∞ F (x) = 0.
(iii) F is right continuous, i.e., limy↓x F (y) = F (x).
(iv) If F (x−) = limy↑x F (y), then F (x−) = P (X < x).
(v) P (X = x) = F (x) − F (x−).
Proof To prove (i), note that if x ≤ y, then {X ≤ x} ⊂ {X ≤ y}, and then use (i) in
Theorem 1.1.1 to conclude that P (X ≤ x) ≤ P (X ≤ y).
To prove (ii), we observe that if x ↑ ∞, then {X ≤ x} ↑ , while if x ↓ −∞, then
{X ≤ x} ↓ ∅ and then use (iii) and (iv) of Theorem 1.1.1.
To prove (iii), we observe that if y ↓ x, then {X ≤ y} ↓ {X ≤ x}.
To prove (iv), we observe that if y ↑ x, then {X ≤ y} ↑ {X < x}.
For (v), note P (X = x) = P (X ≤ x) − P (X < x) and use (iii) and (iv).
The next result shows that we have found more than enough properties to characterize
distribution functions.


Figure 1.4 Picture of the inverse defined in the proof of Theorem 1.2.2: the graph of F, with levels x < y on the vertical axis and the corresponding points F−1(x) and F−1(y) on the horizontal axis.

Theorem 1.2.2 If F satisfies (i), (ii), and (iii) in Theorem 1.2.1, then it is the distribution
function of some random variable.
Proof Let Ω = (0,1), F = the Borel sets, and P = Lebesgue measure. If ω ∈ (0,1), let

X(ω) = sup{y : F(y) < ω}

Once we show that

(⋆) {ω : X(ω) ≤ x} = {ω : ω ≤ F(x)}

the desired result follows immediately since P(ω : ω ≤ F(x)) = F(x). (Recall P is
Lebesgue measure.) To check (⋆), we observe that if ω ≤ F(x), then X(ω) ≤ x, since
x ∉ {y : F(y) < ω}. On the other hand, if ω > F(x), then since F is right continuous, there
is an ε > 0 so that F(x + ε) < ω and X(ω) ≥ x + ε > x.
Even though F may not be 1-1 and onto we will call X the inverse of F and denote it
by F −1 . The scheme in the proof of Theorem 1.2.2 is useful in generating random variables
on a computer. Standard algorithms generate a random variable U with a uniform distribution
on (0,1); one then applies the inverse of the distribution function defined in Theorem 1.2.2 to get
a random variable F−1(U) with distribution function F.
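Here is a small Python sketch of that scheme (inverse transform sampling); the target distribution, an exponential with F(x) = 1 − e^{−x} and F^{−1}(u) = −log(1 − u), is an illustrative assumption rather than an example taken from the text.

```python
# Inverse transform sampling: if U is uniform on (0,1), then F^{-1}(U) has
# distribution function F (the scheme in the proof of Theorem 1.2.2).
import math
import random

def F_inverse(u):
    """Inverse of F(x) = 1 - exp(-x) on (0, 1) -- an illustrative choice of F."""
    return -math.log(1.0 - u)

def sample(n, seed=0):
    rng = random.Random(seed)
    return [F_inverse(rng.random()) for _ in range(n)]

xs = sample(100_000)
# The empirical fraction of samples <= 1 should be close to F(1) = 1 - e^{-1} ~ 0.632.
print(sum(x <= 1.0 for x in xs) / len(xs))
```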
If X and Y induce the same distribution μ on (R, R), we say X and Y are equal in
distribution. In view of Theorem 1.1.4, this holds if and only if X and Y have the same
distribution function, i.e., P(X ≤ x) = P(Y ≤ x) for all x. When X and Y have the
same distribution, we like to write X = Y with a small d over the equals sign, but this is
too tall to use in text, so for typographical reasons we will also use X =d Y.
When the distribution function F(x) = P(X ≤ x) has the form

F(x) = ∫_{−∞}^{x} f(y) dy     (1.2.1)

we say that X has density function f. In remembering formulas, it is often useful to think
of f(x) as being P(X = x) although

P(X = x) = limε→0 ∫_{x−ε}^{x+ε} f(y) dy = 0

By popular demand, we have ceased our previous practice of writing P (X = x) for the
density function. Instead we will use things like the lovely and informative fX (x).
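As a quick numerical illustration of (1.2.1), the Python sketch below integrates an assumed density, the exponential f(y) = e^{−y} for y ≥ 0, and compares the result with the closed-form distribution function F(x) = 1 − e^{−x}.

```python
# A numerical check of F(x) = integral of f(y) dy from -infinity to x
# for an assumed density f (exponential with rate 1).
import math

def f(y):
    return math.exp(-y) if y >= 0 else 0.0

def F_numeric(x, steps=100_000):
    """Midpoint Riemann sum of f over [0, x] (f vanishes on (-infinity, 0))."""
    if x <= 0:
        return 0.0
    h = x / steps
    return sum(f((k + 0.5) * h) for k in range(steps)) * h

x = 1.3
print(F_numeric(x), 1.0 - math.exp(-x))   # the two values agree to several decimal places
```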
