Bab 2 - Marek
Measure 45
2.6 Probability
The ideas which led to Lebesgue measure may be adapted to construct mea-
sures generally on arbitrary sets: any set Ω carrying an outer measure (i.e. a
mapping from P (Ω) to [0, ∞] monotone and countably sub-additive) can be
equipped with a measure µ defined on an appropriate σ-field F of its subsets.
The resulting triple (Ω, F, µ) is then called a measure space, as observed in
Remark 2.1. Note that in the construction of Lebesgue measure we only used
the properties, not the particular form of the outer measure.
For the present, however, we shall be content with noting simply how to
restrict Lebesgue measure to any Lebesgue measurable subset B of R with
m(B) > 0:
Given Lebesgue measure m on the Lebesgue σ-field M, let
MB = {A ∩ B : A ∈ M}
and let mB denote the restriction of m to MB .
Proposition 2.20
(B, MB , mB ) is a complete measure space.
Hint ∪i (Ai ∩ B) = (∪i Ai ) ∩ B and (A1 ∩ B) \ (A2 ∩ B) = (A1 \ A2 ) ∩ B.
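These identities are pure set algebra, so they can be sanity-checked with finite sets standing in for measurable sets (the concrete sets below are arbitrary choices of my own):

```python
# Finite-set sanity check of the two identities in the hint.
A1, A2, B = {1, 2, 3}, {3, 4}, {2, 3, 4, 5}

# union of intersections = intersection of the union with B
assert (A1 & B) | (A2 & B) == (A1 | A2) & B

# difference of the traces on B = trace of the difference on B
assert (A1 & B) - (A2 & B) == (A1 - A2) & B
```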
We now have the basis for some probability theory, although a general
development still requires the extension of the concept of measure from R to
abstract sets. Nonetheless the building blocks are already evident in the detailed
development of the example of Lebesgue measure. The main idea is captured in the following definition.
46 Measure, Integral and Probability
Definition 2.6
A probability space is a triple (Ω, F, P ) where Ω is an arbitrary set, F is a
σ-field of subsets of Ω, and P is a measure on F such that
P (Ω) = 1.
Remark 2.6
The original definition, given by Kolmogorov in 1932, is a variant of the above
(see Theorem 2.14): (Ω, F, P ) is a probability space if Ω and F are given as
above, and P is a finitely additive set function with P (Ø) = 0 and P (Ω) = 1
such that P (Bn ) ↓ 0 whenever (Bn ) in F decreases to Ø.
Example 2.2
We see at once that Lebesgue measure restricted to [0, 1] is a probability mea-
sure. More generally: suppose we are given an arbitrary Lebesgue measurable
set Ω ⊂ R, with m(Ω) > 0. Then P = c·mΩ , where c = 1/m(Ω) and mΩ denotes
the restriction of Lebesgue measure to measurable subsets of Ω, provides
a probability measure on Ω, since P is complete and P (Ω) = 1.
For example, if Ω = [a, b], we obtain c = 1/(b − a), and P becomes the ‘uniform
distribution’ over [a, b]. However, we can also use less familiar sets for our base
space; for example, Ω = [a, b] ∩ (R \ Q) with c = 1/(b − a) gives the same
distribution over the irrationals in [a, b].
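A minimal sketch of this ‘uniform distribution’ (the function below and its name are my own illustration, not the book's): for Ω = [a, b] and a subinterval [u, v], P([u, v]) = (v − u)/(b − a).

```python
def uniform_prob(a, b, u, v):
    """P([u, v]) under Lebesgue measure on [a, b], normalised by c = 1/(b - a)."""
    lo, hi = max(a, u), min(b, v)        # intersect [u, v] with the base space
    return max(0.0, hi - lo) / (b - a)   # length of the overlap, rescaled

assert uniform_prob(0, 2, 0, 1) == 0.5       # half of [0, 2]
assert uniform_prob(0, 1, 0.25, 0.75) == 0.5
```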
Definition 2.7
Given a probability space (Ω, F, P ) we say that the elements of F are events.
Suppose next that a number has been drawn from [0, 1] but has not been
revealed yet. We would like to bet on it being in [0, 1/4] and we get a tip that
it certainly belongs to [0, 1/2]. Clearly, given this ‘inside information’, the
probability of success is now 1/2 rather than 1/4. This motivates the following
general definition.
Definition 2.8
Suppose that P (B) > 0. Then the number
P (A|B) = P (A ∩ B)/P (B)
is called the conditional probability of A given B.
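The betting example can be checked numerically. The helper functions below (my own sketch, not part of the text) represent events as finite unions of intervals and compute their total length, i.e. Lebesgue measure on [0, 1]:

```python
def length(intervals):
    """Total length of a disjoint union of intervals [(a, b), ...]."""
    return sum(b - a for a, b in intervals)

def intersect(I, J):
    """Intersection of two unions of intervals, again as a list of intervals."""
    out = []
    for a, b in I:
        for c, d in J:
            lo, hi = max(a, c), min(b, d)
            if lo < hi:
                out.append((lo, hi))
    return out

def cond_prob(A, B):
    """P(A|B) = P(A ∩ B)/P(B), with P given by interval length."""
    return length(intersect(A, B)) / length(B)

# A = [0, 1/4], B = [0, 1/2]: the tip raises the probability to 1/2
assert cond_prob([(0, 0.25)], [(0, 0.5)]) == 0.5
```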
Proposition 2.21
The mapping A 7→ P (A|B) is countably additive on the σ-field FB .
Exercise 2.9
Prove that if Hi are pairwise disjoint events such that ∪∞i=1 Hi = Ω and
P (Hi ) ≠ 0 for all i, then
P (A) = ∑∞i=1 P (A|Hi )P (Hi ).
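A numerical check of this ‘total probability’ formula on Ω = [0, 1], with the event A and a two-set partition chosen purely for illustration:

```python
def length(a, b, lo=0.0, hi=1.0):
    """Lebesgue measure of [a, b] ∩ [lo, hi]."""
    return max(0.0, min(b, hi) - max(a, lo))

A = (0.25, 0.75)
partition = [(0.0, 0.5), (0.5, 1.0)]   # H1, H2: disjoint, cover [0, 1]

total = 0.0
for h in partition:
    P_H = length(*h)                       # P(H_i)
    P_A_given_H = length(*A, *h) / P_H     # P(A|H_i) = P(A ∩ H_i)/P(H_i)
    total += P_A_given_H * P_H

assert total == length(*A)                 # recovers P(A) = 1/2
```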
Definition 2.9
The events A, B are independent if
P (A ∩ B) = P (A) · P (B).
Exercise 2.10
Suppose that A and B are independent events. Show that Ac and B are
also independent.
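The key identity behind the exercise is P(Aᶜ ∩ B) = P(B) − P(A ∩ B) = (1 − P(A))P(B), since B splits into the disjoint pieces A ∩ B and Aᶜ ∩ B. A numeric check with a concrete independent pair (A = [0, 1/4], B = [1/8, 5/8] under Lebesgue measure on [0, 1], my own choice):

```python
P_A, P_B = 0.25, 0.5
P_AB = 0.125                      # P(A ∩ B): here A ∩ B = [1/8, 1/4]
assert P_AB == P_A * P_B          # A and B are independent

P_AcB = P_B - P_AB                # B = (A ∩ B) ∪ (A^c ∩ B), disjointly
assert P_AcB == (1 - P_A) * P_B   # so A^c and B are independent too
```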
The Exercise indicates that if A and B are independent events, then all
elements of the σ-fields they generate are mutually independent, since these σ-
fields are simply the collections FA = {Ø, A, Ac , Ω} and FB = {Ø, B, B c , Ω}
respectively. This leads us to a natural extension of the definition: two σ-fields
F1 and F2 are independent if for any choice of sets A1 ∈ F1 and A2 ∈ F2 we
have P (A1 ∩ A2 ) = P (A1 )P (A2 ).
However, the extension of these definitions to three or more events (or
several σ-fields) needs a little care, as the following simple examples show:
Example 2.3
Let Ω = [0, 1], A = [0, 1/4] as before; then A is independent of B = [1/8, 5/8]
and of C = [1/8, 3/8] ∪ [3/4, 1]. In addition, B and C are independent. However,
P (A ∩ B ∩ C) ≠ P (A) · P (B) · P (C).
Thus, given three events, the pairwise independence of each of the three possible
pairs does not suffice for the extension of ‘independence’ to all three events.
On the other hand, with A = [0, 1/4], B = C = [0, 1/16] ∪ [1/4, 11/16] (or
alternatively with C = [0, 1/16] ∪ [9/16, 1]) we obtain
P (A ∩ B ∩ C) = P (A) · P (B) · P (C), (2.21)
even though B and C are not independent, so (2.21) on its own does not imply
pairwise independence either.
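Both claims of Example 2.3 reduce to interval-length arithmetic and can be verified directly; the helper functions below are my own sketch of Lebesgue measure on finite unions of intervals:

```python
def length(intervals):
    """Total length of a disjoint union of intervals [(a, b), ...]."""
    return sum(b - a for a, b in intervals)

def intersect(I, J):
    """Intersection of two unions of intervals."""
    out = []
    for a, b in I:
        for c, d in J:
            lo, hi = max(a, c), min(b, d)
            if lo < hi:
                out.append((lo, hi))
    return out

A = [(0, 1/4)]
B = [(1/8, 5/8)]
C = [(1/8, 3/8), (3/4, 1)]

# pairwise independent ...
assert length(intersect(A, B)) == length(A) * length(B)
assert length(intersect(A, C)) == length(A) * length(C)
assert length(intersect(B, C)) == length(B) * length(C)

# ... but not jointly: P(A ∩ B ∩ C) = 1/8, while the product is 1/16
ABC = intersect(intersect(A, B), C)
assert length(ABC) != length(A) * length(B) * length(C)

# second trio: the triple product holds, yet B = C are not independent
B2 = C2 = [(0, 1/16), (1/4, 11/16)]
ABC2 = intersect(intersect(A, B2), C2)
assert length(ABC2) == length(A) * length(B2) * length(C2)
assert length(intersect(B2, C2)) != length(B2) * length(C2)
```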
Definition 2.10
The events A1 , . . . , An are independent if, for all k ≤ n and each choice of k
distinct events among them, the probability of their intersection is the product
of their probabilities.
Definition 2.11
The σ-fields F1 , F2 , ..., Fn defined on a given probability space (Ω, F, P ) are
independent if, for all choices of distinct indices i1 , i2 , ..., ik from {1, 2, ..., n}
and all choices of sets Fi1 ∈ Fi1 , Fi2 ∈ Fi2 , . . . , Fik ∈ Fik we have
P (Fi1 ∩ Fi2 ∩ ... ∩ Fik ) = P (Fi1 ) · P (Fi2 ) · · · · · P (Fik ).
As indicated in the Preface, we will explore briefly how the ideas developed
in each chapter can be applied in the rapidly growing field of mathematical
finance. This is not intended as an introduction to this subject, but hope-
fully it will demonstrate how a consistent mathematical formulation can help
to clarify ideas central to many disciplines. Readers who are unfamiliar with
mathematical finance should consult texts such as [4], [5], [7] for definitions and
a discussion of the main ideas of the subject.
Probabilistic modelling in finance centres on the analysis of models for the
evolution of the value of traded assets, such as stocks or bonds, and seeks
to identify trends in their future behaviour. Much of the modern theory is
concerned with evaluating derivative securities such as options, whose value is
determined by the (random) future values of some underlying security, such as
a stock.