
STA 611: Introduction to Mathematical Statistics

Lecture 06: Expectation

Instructor: Meimei Liu


Chapter 4 sections

4.1 Expectation
4.2 Properties of Expectations
4.3 Variance
4.4 Moments
4.5 The Mean and the Median
4.6 Covariance and Correlation
4.7 Conditional Expectation
SKIP: 4.8 Utility



Chapter 4 4.1 Expectation

Summarizing distributions

The distribution of X contains everything there is to know about the
probabilistic properties of X. However, sometimes we want to summarize
the distribution of X in one or a few numbers, e.g. to more easily
compare two or more distributions.

Examples of descriptive quantities:
Mean (= expectation): the center of mass, a weighted average
Median, moments
Variance, interquartile range (IQR), covariance, correlation


Definition of Expectation, µ = E(X)


Def: Mean, a.k.a. expected value
Let X be a random variable with p(d)f f(x). The mean, or expected
value, of X, denoted E(X), is defined as follows.

X discrete:
E(X) = \sum_{\text{all } x} x f(x)
assuming the sum exists.

X continuous:
E(X) = \int_{-\infty}^{\infty} x f(x) \, dx
assuming the integral exists.

If the sum or integral does not exist, we say that the expected value
does not exist.
The mean is often denoted by µ.
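Since both formulas are just weighted averages, a minimal Python sketch (added here for illustration; the pmf and the Uniform(0, 1) density below are assumed examples, not from the slides) makes them concrete:

    # Discrete case: E(X) = sum over x of x * f(x), for an assumed small pmf
    pmf = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
    mean_discrete = sum(x * p for x, p in pmf.items())
    print(mean_discrete)  # 1.5

    # Continuous case: E(X) = integral of x * f(x) dx, approximated by a crude
    # midpoint Riemann sum for an assumed Uniform(0, 1) density f(x) = 1 on [0, 1]
    n = 100_000
    dx = 1.0 / n
    mean_continuous = sum((i + 0.5) * dx * 1.0 * dx for i in range(n))
    print(round(mean_continuous, 4))  # approximately 0.5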
Examples
Recall the distribution of Y = the number of heads in 3 tosses
(coin toss example from Lecture 4)
y       0    1    2    3
fY(y)  1/8  3/8  3/8  1/8

then
E(Y) = 0(1/8) + 1(3/8) + 2(3/8) + 3(1/8) = 12/8 = 3/2 = 1.5

Find E(X) where X ∼ Binom(n, p). The pf of X is

f(x) = \binom{n}{x} p^x (1 - p)^{n-x}   for x = 0, 1, ..., n

Find E(X) where X ∼ Uniform(a, b). The pdf of X is

f(x) = 1/(b - a)   for a ≤ x ≤ b
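For reference, a sketch of the standard answers to both exercises (worked out here; the slide leaves them to the reader):

Binomial:  E(X) = \sum_{x=0}^{n} x \binom{n}{x} p^x (1-p)^{n-x}
                = np \sum_{x=1}^{n} \binom{n-1}{x-1} p^{x-1} (1-p)^{n-x} = np

Uniform:   E(X) = \int_a^b \frac{x}{b-a} \, dx = \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2}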

Expectation of g(X )

Theorem 4.1.1
Let X be a random variable with p(d)f f(x) and let g(x) be a real-valued
function. Then

X discrete:
E(g(X)) = \sum_{\text{all } x} g(x) f(x)

X continuous:
E(g(X)) = \int_{-\infty}^{\infty} g(x) f(x) \, dx

Example: Find E(X^2) where X ∼ Uniform(a, b).
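A sketch of the calculation (worked out here):

E(X^2) = \int_a^b \frac{x^2}{b-a} \, dx = \frac{b^3 - a^3}{3(b-a)} = \frac{a^2 + ab + b^2}{3}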


Expectation of g(X , Y )

Theorem 4.1.2
Let X and Y be random variables with joint p(d)f f(x, y) and let g(x, y)
be a real-valued function. Then

X and Y discrete:
E(g(X, Y)) = \sum_{\text{all } x, y} g(x, y) f(x, y)

X and Y continuous:
E(g(X, Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f(x, y) \, dx \, dy

Example: Find E((X + Y)/2) where X and Y are independent and
X ∼ Uniform(a, b) and Y ∼ Uniform(c, d).
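A sketch of the answer (added here), either by evaluating the double integral with f(x, y) = fX(x) fY(y), or more quickly with the linearity results of Section 4.2 on the next slides:

E\left( \frac{X + Y}{2} \right) = \frac{E(X) + E(Y)}{2} = \frac{(a+b)/2 + (c+d)/2}{2} = \frac{a + b + c + d}{4}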



Chapter 4 4.2 Properties of Expectations

Properties of Expectation
Theorems 4.2.1, 4.2.4 and 4.2.6:
E(aX + b) = aE(X) + b for constants a and b.

Let X1, ..., Xn be n random variables, all with finite expectations
E(Xi). Then
E\left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} E(X_i)

Corollary: E(a1 X1 + ... + an Xn + b) = a1 E(X1) + ... + an E(Xn) + b
for constants b, a1, ..., an.

Let X1, ..., Xn be n independent random variables, all with finite
expectations E(Xi). Then
E\left( \prod_{i=1}^{n} X_i \right) = \prod_{i=1}^{n} E(X_i)

CAREFUL!!! In general E(g(X)) ≠ g(E(X)).
For example: E(X^2) ≠ [E(X)]^2
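To make the warning concrete, here is a small simulation sketch (an illustration added here, assuming X ∼ Uniform(0, 1), where E(X^2) = 1/3 while [E(X)]^2 = 1/4):

    import random

    n = 200_000
    xs = [random.random() for _ in range(n)]  # draws from an assumed Uniform(0, 1)

    mean_of_square = sum(x * x for x in xs) / n  # Monte Carlo estimate of E(X^2), about 1/3
    square_of_mean = (sum(xs) / n) ** 2          # estimate of [E(X)]^2, about 1/4
    print(mean_of_square, square_of_mean)        # the two clearly differ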

Examples

If X1, X2, ..., Xn are i.i.d. Bernoulli(p) random variables then
Y = \sum_{i=1}^{n} X_i ∼ Binomial(n, p).

E(Xi) = 0 × (1 − p) + 1 × p = p   for i = 1, ..., n

⇒ E(Y) = E\left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} E(X_i) = \sum_{i=1}^{n} p = np

Note: i.i.d. stands for independent and identically distributed



Chapter 4 4.3 Variance

Definition of Variance, σ^2 = Var(X)

Def: Variance
Let X be a random variable (discrete or continuous) with a finite mean
µ = E(X). The variance of X is defined as

Var(X) = E[(X − µ)^2]

The standard deviation of X is defined as \sqrt{Var(X)}.

We often use σ^2 for the variance and σ for the standard deviation.

Theorem 4.3.1 – Another way of calculating variance


For any random variable X

Var(X) = E(X^2) − [E(X)]^2
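A one-line sketch of why the two expressions agree (expanding the square and using linearity of expectation):

E[(X − µ)^2] = E(X^2) − 2µ E(X) + µ^2 = E(X^2) − 2µ^2 + µ^2 = E(X^2) − [E(X)]^2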


Examples - calculating the variance

Recall the distribution of Y = the number of heads in 3 tosses
(coin toss example from Lecture 4)

y       0    1    2    3
fY(y)  1/8  3/8  3/8  1/8

We already found that µ = E(Y) = 1.5. Then

Var(Y) = (0 − 1.5)^2 (1/8) + (1 − 1.5)^2 (3/8)
       + (2 − 1.5)^2 (3/8) + (3 − 1.5)^2 (1/8)
       = 0.75

Find Var(X ) where X ∼ Uniform(a, b)
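A sketch of the answer, combining the earlier results E(X) = (a + b)/2 and E(X^2) = (a^2 + ab + b^2)/3:

Var(X) = E(X^2) − [E(X)]^2 = \frac{a^2 + ab + b^2}{3} − \frac{(a+b)^2}{4} = \frac{(b − a)^2}{12}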


Properties of the Variance

Theorems 4.3.2, 4.3.3, 4.3.4 and 4.3.5


Var(X) ≥ 0 for any random variable X.

Var(X) = 0 if and only if X is a constant,
i.e. P(X = c) = 1 for some constant c.

Var(aX + b) = a^2 Var(X)

If X1, ..., Xn are independent we have

Var\left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} Var(X_i)
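For the third property, a one-line sketch of the reasoning (added here): since E(aX + b) = aE(X) + b = aµ + b,

Var(aX + b) = E[(aX + b − (aµ + b))^2] = E[a^2 (X − µ)^2] = a^2 Var(X)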


Examples

If X1, X2, ..., Xn are i.i.d. Bernoulli(p) random variables then
Y = \sum_{i=1}^{n} X_i ∼ Binomial(n, p).

E(Xi) = p   for i = 1, ..., n
E(Xi^2) = 0^2 × (1 − p) + 1^2 × p = p   for i = 1, ..., n
⇒ Var(Xi) = E(Xi^2) − [E(Xi)]^2 = p − p^2 = p(1 − p)

⇒ Var(Y) = Var\left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} Var(X_i) = \sum_{i=1}^{n} p(1 − p) = np(1 − p)


Measures of location and scale

The mean is a measure of location; the variance is a measure of scale.



Chapter 4 4.4 Moments

Moments and Central moments

Def: Moments
Let X be a random variable and k be a positive integer.
The expectation E(X^k) is called the k-th moment of X.
Let E(X) = µ. The expectation E[(X − µ)^k] is called the k-th
central moment of X.

The first moment is the mean: µ = E(X^1)
The first central moment is zero: E(X − µ) = E(X) − E(X) = 0
The second central moment is the variance: σ^2 = E[(X − µ)^2]



Moments and Central moments


Symmetric distribution: the p(d)f f(x) is symmetric with respect
to a point x0 if f(x0 + δ) = f(x0 − δ) for all δ.
If the mean of a symmetric distribution exists, then it is the point of
symmetry.
If the distribution of X is symmetric w.r.t. its mean µ, then
E[(X − µ)^k] = 0 for k odd (if the central moment exists).

Skewness: E[(X − µ)^3] / σ^3



Moment generating function

Def: Moment Generating Function
Let X be a random variable. The function

ψ(t) = E(e^{tX}),   t ∈ R

is called the moment generating function (m.g.f.) of X.

Theorem 4.4.2
Let X be a random variable whose m.g.f. ψ(t) is finite for t in an open
interval around zero. Then the n-th moment of X is finite, for
n = 1, 2, ..., and

E(X^n) = \frac{d^n}{dt^n} ψ(t) \Big|_{t=0}
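As a quick illustration of the theorem (an example added here), take X ∼ Bernoulli(p):

ψ(t) = E(e^{tX}) = (1 − p) + p e^{t}
E(X) = ψ'(0) = p,   E(X^2) = ψ''(0) = p,   so Var(X) = p − p^2 = p(1 − p)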


Example

Let X ∼ Gamma(n, β). Then X has the pdf

f(x) = \frac{1}{(n-1)! \, β^n} x^{n-1} e^{-x/β}   for x > 0

Find the m.g.f. of X and use it to find the mean and the variance of X .
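A sketch of the solution (worked out here; the slide leaves it as an exercise). For t < 1/β,

ψ(t) = \int_0^{\infty} e^{tx} \frac{x^{n-1} e^{-x/β}}{(n-1)! \, β^n} \, dx = (1 − βt)^{-n}

ψ'(0) = nβ = E(X)
ψ''(0) = n(n+1)β^2 = E(X^2),   so Var(X) = n(n+1)β^2 − (nβ)^2 = nβ^2

If one wants to double-check the differentiation, here is a short symbolic sketch (assuming the sympy package is available; n is fixed to 3 only as an example):

    import sympy as sp

    t, beta = sp.symbols('t beta', positive=True)
    n = 3                                    # any fixed positive integer; 3 is just an example
    psi = (1 - beta * t) ** (-n)             # the m.g.f. derived above
    mean = sp.diff(psi, t).subs(t, 0)        # gives n*beta
    second = sp.diff(psi, t, 2).subs(t, 0)   # gives n*(n+1)*beta**2
    var = sp.simplify(second - mean ** 2)    # gives n*beta**2
    print(mean, var)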


Properties of m.g.f.

Theorems 4.4.3 and 4.4.4:

ψ_{aX+b}(t) = e^{bt} ψ_X(at)

Let Y = \sum_{i=1}^{n} X_i where X1, ..., Xn are independent random
variables with m.g.f. ψ_i(t) for i = 1, ..., n. Then

ψ_Y(t) = \prod_{i=1}^{n} ψ_i(t)

Theorem 4.4.5: Uniqueness of the m.g.f.


Let X and Y be two random variables with m.g.f.’s ψX (t) and ψY (t).
If the m.g.f.’s are finite and ψX (t) = ψY (t) for all values of t in an open
interval around zero, then X and Y have the same distribution.


Example

Let X ∼ N(µ, σ^2). X has the pdf

f(x) = \frac{1}{σ \sqrt{2π}} \exp\left( -\frac{(x − µ)^2}{2σ^2} \right)

and the m.g.f. for the normal distribution is

ψ(t) = \exp\left( µt + \frac{t^2 σ^2}{2} \right)

Homework (not to turn in): Show that ψ(t) is the m.g.f. of X .


Let X1, ..., Xn be independent Gaussian random variables with
means µ_i and variances σ_i^2.
What is the distribution of Y = \sum_{i=1}^{n} X_i?
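A sketch of the answer using Theorems 4.4.4 and 4.4.5 (worked out here, not on the slide):

ψ_Y(t) = \prod_{i=1}^{n} \exp\left( µ_i t + \frac{t^2 σ_i^2}{2} \right)
       = \exp\left( \left( \sum_{i=1}^{n} µ_i \right) t + \frac{t^2 \sum_{i=1}^{n} σ_i^2}{2} \right)

which is the m.g.f. of a normal distribution, so by uniqueness of the m.g.f.
Y ∼ N\left( \sum_{i=1}^{n} µ_i, \; \sum_{i=1}^{n} σ_i^2 \right).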
