Topic 2a Theory of Estimation
1) Unbiasedness
2) Consistency
3) Efficiency
4) Sufficiency
1. Unbiasedness
Suppose that $x_1, x_2, x_3, \dots, x_n$ form a random sample of size $n$ from a distribution having density function $f(x; \theta)$ which depends on an unknown parameter $\theta$. Let us assume that $\theta$ belongs to the parameter space $\Omega$, a subset of the real line. We then have the following definition of an unbiased estimator.
Definition
A statistic $\hat{\theta} = \varphi(x_1, x_2, x_3, \dots, x_n)$ is said to be an unbiased estimator of $\theta$ if $E[\hat{\theta}] = \theta$ for all $\theta \in \Omega$.
Thus if the mean of the sampling distribution of the estimator $\hat{\theta}$ is $\theta$, we say that $\hat{\theta}$ is an unbiased estimator.
For example, for the sample variance $S^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$ one finds $E[S^2] = \sigma^2 - \frac{\sigma^2}{n}$ (see Example 3 below). The bias in this case is $\left(\sigma^2 - \frac{\sigma^2}{n}\right) - \sigma^2 = -\frac{\sigma^2}{n}$; hence the sample variance is not an unbiased estimator of $\sigma^2$.
Example 1
Suppose $x_1, x_2, x_3, \dots, x_n$ form a random sample of size $n$ from a population with mean $\mu$. Show that (i) each $x_i$, (ii) the sample mean $\bar{x}$, and (iii) $y = \sum_{i=1}^{n} a_i x_i$ with $\sum_{i=1}^{n} a_i = 1$ are unbiased estimators of $\mu$. For part (iii),
$$E\left[\sum_{i=1}^{n} a_i x_i\right] = \sum_{i=1}^{n} a_i E[x_i] = \mu \sum_{i=1}^{n} a_i = \mu \quad \text{since} \quad \sum_{i=1}^{n} a_i = 1.$$
NB
Part (ii) is a special case of part (iii) with $a_i = \frac{1}{n}$ for $i = 1, 2, \dots, n$.
n
Remark: the example above shows that the statistics x i,i=1,2 , … , n, x́ and y=∑ ai xi where
i=1
n
∑ ai=1 are all unbiased estimators of the parameter μ ie. Unbiased estimators are not unique.
i=1
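Numerically, the non-uniqueness of unbiased estimators is easy to illustrate. Below is a minimal Python/NumPy sketch, assuming an $N(5, 2^2)$ population, $n = 10$, and randomly chosen weights $a_i$ normalised to sum to 1; all of these choices are illustrative.

```python
import numpy as np

# Monte Carlo check that x_1, x-bar, and sum(a_i * x_i) with sum(a_i) = 1
# all have expectation mu. Population, n, and weights are arbitrary choices.
rng = np.random.default_rng(0)
mu, n, reps = 5.0, 10, 100_000
a = rng.random(n)
a /= a.sum()                            # weights with sum(a_i) = 1

samples = rng.normal(mu, 2.0, size=(reps, n))
print(samples[:, 0].mean())             # estimator x_1         -> about mu
print(samples.mean(axis=1).mean())      # estimator x-bar       -> about mu
print((samples @ a).mean())             # estimator sum(a_i x_i)-> about mu
```

All three averages come out near $\mu = 5$, in line with the remark above.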
Example 2
Suppose $x_1, x_2, x_3, \dots, x_n$ form a random sample of size $n$ from the population with density
$$f(x) = \begin{cases} 1 & \text{for } \lambda - \frac{1}{2} < x < \lambda + \frac{1}{2} \\ 0 & \text{elsewhere} \end{cases}$$
Show that $\bar{x}$ is an unbiased estimator of $\lambda$.
Solution
$$E[\bar{x}] = E[x] = \int_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} x \cdot 1 \, dx = \left[\frac{x^2}{2}\right]_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} = \frac{\left(\lambda + \frac{1}{2}\right)^2 - \left(\lambda - \frac{1}{2}\right)^2}{2} = \frac{\left(\lambda^2 + \lambda + \frac{1}{4}\right) - \left(\lambda^2 - \lambda + \frac{1}{4}\right)}{2} = \lambda$$
$\therefore E[\bar{x}] = \lambda$. Hence $\bar{x}$ is an unbiased estimator of $\lambda$.
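This can be checked by simulation. The sketch below assumes $\lambda = 3$ and $n = 20$; both are arbitrary illustrative values.

```python
import numpy as np

# Simulate x-bar for the uniform density on (lambda - 1/2, lambda + 1/2);
# lambda, n, and the replication count are illustrative choices.
rng = np.random.default_rng(1)
lam, n, reps = 3.0, 20, 200_000
x = rng.uniform(lam - 0.5, lam + 0.5, size=(reps, n))
print(x.mean(axis=1).mean())   # average of x-bar -> about lambda = 3.0
```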
Example 3
Suppose $x_1, x_2, x_3, \dots, x_n$ form a random sample of size $n$ from a distribution with mean $\mu$ and variance $\sigma^2$. Show that $S_u^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$ is an unbiased estimator of $\sigma^2$ whereas $S^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$ is not an unbiased estimator.
Solution
Case 1
$$S^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 = \frac{1}{n} \sum_{i=1}^{n} \left(x_i^2 - 2 x_i \bar{x} + \bar{x}^2\right) = \frac{1}{n} \sum_{i=1}^{n} x_i^2 - 2\bar{x} \cdot \frac{\sum_{i=1}^{n} x_i}{n} + \frac{1}{n} \sum_{i=1}^{n} \bar{x}^2 = \frac{1}{n} \sum_{i=1}^{n} x_i^2 - \bar{x}^2$$
Taking expectations, and using $E[x_i^2] = \sigma^2 + \mu^2$ and $E[\bar{x}^2] = \frac{\sigma^2}{n} + \mu^2$,
$$E[S^2] = \frac{1}{n} \sum_{i=1}^{n} E[x_i^2] - E[\bar{x}^2] = \left(\sigma^2 + \mu^2\right) - \left(\frac{\sigma^2}{n} + \mu^2\right) = \frac{n-1}{n} \sigma^2 \neq \sigma^2$$
so $S^2$ is not an unbiased estimator of $\sigma^2$.
Case 2
Having $S_u^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$ and $S^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$, we can obtain
$$(n-1) S_u^2 = \sum_{i=1}^{n} (x_i - \bar{x})^2 \quad \text{and} \quad n S^2 = \sum_{i=1}^{n} (x_i - \bar{x})^2$$
Therefore $(n-1) S_u^2 = n S^2$, hence $S_u^2 = \frac{n}{n-1} S^2$ and
$$E[S_u^2] = \frac{n}{n-1} E[S^2] = \frac{n}{n-1} \cdot \frac{n-1}{n} \sigma^2$$
from Case 1 above. This reduces to $\sigma^2$.
Thus $E[S_u^2] = \sigma^2$, hence $S_u^2$ is an unbiased estimator of $\sigma^2$.
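A short simulation makes the contrast concrete. The sketch below assumes a normal population with $\sigma^2 = 4$ and $n = 8$ (illustrative choices); NumPy's ddof argument selects the divisor $n$ or $n - 1$.

```python
import numpy as np

# Compare the divisor-n and divisor-(n-1) variance estimators;
# sigma^2 = 4 and n = 8 are illustrative choices.
rng = np.random.default_rng(2)
sigma2, n, reps = 4.0, 8, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=0)    # S^2, divisor n
su2 = x.var(axis=1, ddof=1)   # S_u^2, divisor n - 1
print(s2.mean())              # about (n-1)/n * sigma^2 = 3.5 (biased)
print(su2.mean())             # about sigma^2 = 4.0 (unbiased)
```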
Example 4
In the exponential population whose p.d.f. is given by
$$f(x; \theta) = \begin{cases} \frac{1}{\theta} \exp\left(\frac{-x}{\theta}\right) & x > 0, \; \theta > 0 \\ 0 & \text{elsewhere} \end{cases}$$
show that $\bar{x}$ is an unbiased estimator of $\theta$.
Solution
$$E[\bar{x}] = E[x] = \int_{0}^{\infty} x \cdot \frac{1}{\theta} \exp\left(\frac{-x}{\theta}\right) dx = \frac{1}{\theta} \int_{0}^{\infty} x \exp\left(\frac{-x}{\theta}\right) dx$$
Using integration by parts, $\int U \, dV = UV - \int V \, dU$:
Let $U = x$, thus $dU = dx$.
Also let $\frac{dV}{dx} = \exp\left(\frac{-x}{\theta}\right)$
$$\Rightarrow V = \int \exp\left(\frac{-x}{\theta}\right) dx = -\theta \exp\left(\frac{-x}{\theta}\right)$$
Then
$$\frac{1}{\theta} \int_{0}^{\infty} x \exp\left(\frac{-x}{\theta}\right) dx = \frac{1}{\theta} \left[-x\theta \exp\left(\frac{-x}{\theta}\right)\right]_{0}^{\infty} - \frac{1}{\theta} \int_{0}^{\infty} -\theta \exp\left(\frac{-x}{\theta}\right) dx$$
$$= \frac{1}{\theta} \left[-x\theta \exp\left(\frac{-x}{\theta}\right) - \theta^2 \exp\left(\frac{-x}{\theta}\right)\right]_{0}^{\infty} = \frac{1}{\theta} \left[0 - \left(-\theta^2\right)\right] = \frac{1}{\theta} \cdot \theta^2 = \theta$$
$E[\bar{x}] = \theta$; hence $\bar{x}$ is an unbiased estimator of $\theta$.
Alternatively;
$$E[x] = \frac{1}{\theta} \int_{0}^{\infty} x \exp\left(\frac{-x}{\theta}\right) dx$$
Let $\frac{x}{\theta} = y \Rightarrow x = \theta y$, $\frac{dy}{dx} = \frac{1}{\theta}$ and $dx = \theta \, dy$.
Replacing and changing limits;
$$\frac{1}{\theta} \int_{0}^{\infty} \theta y \, e^{-y} \, \theta \, dy$$
Recall $\Gamma(n) = \int_{0}^{\infty} y^{n-1} e^{-y} \, dy$.
Therefore $\int_{0}^{\infty} y e^{-y} \, dy = \Gamma(2)$ and $\Gamma(2) = (2-1)! = 1$.
Hence $\frac{1}{\theta} \int_{0}^{\infty} \theta y \, e^{-y} \, \theta \, dy = \frac{\theta^2}{\theta} = \theta$.
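As a quick check, the sketch below simulates this population; NumPy's exponential() uses the same scale parameterisation $f(x) = \frac{1}{\theta} e^{-x/\theta}$. The values $\theta = 2.5$ and $n = 15$ are illustrative.

```python
import numpy as np

# Simulate x-bar for the exponential density (1/theta) exp(-x/theta);
# theta = 2.5 and n = 15 are illustrative choices.
rng = np.random.default_rng(3)
theta, n, reps = 2.5, 15, 200_000
x = rng.exponential(theta, size=(reps, n))
print(x.mean(axis=1).mean())   # average of x-bar -> about theta = 2.5
```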
Example 5
Suppose $x_1, x_2, x_3, \dots, x_n$ form a random sample of size $n$ from the uniform population on $(0, \theta)$ and let $T = \max(x_1, x_2, x_3, \dots, x_n)$. Show that $\hat{\theta} = \left(1 + \frac{1}{n}\right) T$ is an unbiased estimator of $\theta$.
Solution
NB:
A statistic is a function of the elements of a random sample which does not contain any unknown parameters. The distribution of a statistic is known as its sampling distribution. Any statistic whose values are used to estimate $U(\theta)$, where $U(\theta)$ is a function of $\theta$, is called an estimator of $U(\theta)$.
$\hat{\theta} = \left(1 + \frac{1}{n}\right) T$ is a statistic. To show that $\hat{\theta}$ is an unbiased estimator of $\theta$ we need to know the sampling distribution of the statistic $T$.
n
Let G ( t ) denote the distribution function of T , then G ( t ) =Pr ( T ≤ t ) =[ Pr ( X ≤ t ) ]
Since T is the nth order statistic of the sample and assuming independence,
t n
t n For 0<t <θ
[ ]1
G ( t ) = ∫ dx =
0 θ θ ()
And the density function of T is
n
d t
g ( t ) =G' ( t )= ()
dt θ
n t n−1
g (t)=
{ θn
0 elsewhere
, 0<t< θ
Therefore
$$E[T] = \int_{0}^{\theta} t \cdot \frac{n t^{n-1}}{\theta^n} \, dt = \frac{n}{\theta^n} \int_{0}^{\theta} t^n \, dt = \frac{n}{n+1} \theta$$
and
$$E[\hat{\theta}] = \left(1 + \frac{1}{n}\right) E[T] = \left(\frac{n+1}{n}\right)\left(\frac{n}{n+1}\right) \theta = \theta$$
Hence $\hat{\theta}$ is an unbiased estimator of $\theta$.
Note, however, that both $\hat{\theta}$ and $T$ are estimators of $\theta$; the difference is that $\hat{\theta}$ is unbiased.
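The bias of $T$ itself, and the effect of the correction factor $1 + \frac{1}{n}$, can be seen in a short simulation; $\theta = 10$ and $n = 5$ below are illustrative choices.

```python
import numpy as np

# T = max of a uniform(0, theta) sample underestimates theta on average;
# (1 + 1/n) T corrects the bias. theta = 10 and n = 5 are illustrative.
rng = np.random.default_rng(4)
theta, n, reps = 10.0, 5, 200_000
T = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)
print(T.mean())                  # about n/(n+1)*theta = 8.33 (biased)
print(((1 + 1 / n) * T).mean())  # about theta = 10.0 (unbiased)
```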
2. Consistency
This means that the sequence $\{\hat{\theta}_1, \hat{\theta}_2, \dots, \hat{\theta}_n\}$ tends to get closer and closer to the quantity being estimated as the sample size increases.
This concept of limiting closeness is called consistency. More precisely, a sequence of estimators $\hat{\theta}_n$; $n = 1, 2, 3, \dots$ is called a mean square error consistent sequence of estimators if and only if
$$\lim_{n \to \infty} E\left[\left(\hat{\theta}_n - \theta\right)^2\right] = 0 \quad \text{for all } \theta \in \Omega$$
I.e. $\hat{\theta}_n$; $n = 1, 2, 3, \dots$ will be mean square error consistent if it converges to $\theta$ in mean square error.
Recall:
$$E\left[\left(\hat{\theta}_n - \theta\right)^2\right] = \mathrm{Var}\left[\hat{\theta}_n\right] + \left[\theta - E\left(\hat{\theta}_n\right)\right]^2$$
Hence the sequence $\hat{\theta}_n$; $n = 1, 2, 3, \dots$ is mean squared error consistent if both the bias and the variance of $\hat{\theta}_n$ approach zero, i.e. $\lim_{n \to \infty} E[\hat{\theta}_n] = \theta$ and $\lim_{n \to \infty} \mathrm{Var}[\hat{\theta}_n] = 0$.
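This decomposition can be verified numerically. The sketch below uses the divisor-$n$ sample variance $S^2$ as $\hat{\theta}_n$ for a standard normal population with $n = 10$ (illustrative choices) and compares the direct Monte Carlo mean squared error with variance plus squared bias.

```python
import numpy as np

# Check E[(theta-hat - theta)^2] = Var[theta-hat] + bias^2 numerically,
# using the divisor-n variance S^2 as theta-hat; sigma^2 = 1, n = 10.
rng = np.random.default_rng(5)
sigma2, n, reps = 1.0, 10, 200_000
est = rng.normal(0.0, 1.0, size=(reps, n)).var(axis=1, ddof=0)
mse = ((est - sigma2) ** 2).mean()                # direct MSE
decomp = est.var() + (sigma2 - est.mean()) ** 2   # variance + bias^2
print(mse, decomp)                                # the two values agree
```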
Example 1
Prove that $\bar{x}$ is a consistent estimator for $\mu$ in the normal population with parameters $\mu$ and $\sigma^2$.
Solution
$$X \sim N(\mu, \sigma^2) \quad \Rightarrow \quad \bar{x} \sim N\left(\mu, \frac{\sigma^2}{n}\right)$$
To prove, we recall that $Z = \frac{\bar{x} - \mu}{\sqrt{\sigma^2 / n}}$ is the standard normal variate $N(0, 1)$.
Consider a small value $\varepsilon > 0$:
$$\Pr\left[\left|\bar{x} - \mu\right| < \varepsilon\right] = \Pr\left[\frac{\left|\bar{x} - \mu\right|}{\sqrt{\sigma^2 / n}} < \frac{\varepsilon}{\sqrt{\sigma^2 / n}}\right] = \Pr\left[\left|Z\right| < \frac{\varepsilon \sqrt{n}}{\sigma}\right] = \int_{-\varepsilon \sqrt{n} / \sigma}^{\varepsilon \sqrt{n} / \sigma} \varphi(z) \, dz = 2 \int_{0}^{\varepsilon \sqrt{n} / \sigma} \varphi(z) \, dz$$
Clearly this expression increases as $n$ increases. Therefore as $n$ tends to infinity, this expression tends to 1, so $\bar{x}$ is a consistent estimator of $\mu$.
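Equivalently, the probability equals $2\Phi\left(\frac{\varepsilon\sqrt{n}}{\sigma}\right) - 1$, which the sketch below evaluates with SciPy for increasing $n$; $\sigma = 2$ and $\varepsilon = 0.1$ are illustrative choices.

```python
from scipy.stats import norm

# Pr[|x-bar - mu| < eps] = 2*Phi(eps*sqrt(n)/sigma) - 1 for a normal
# population; sigma = 2 and eps = 0.1 are illustrative choices.
sigma, eps = 2.0, 0.1
for n in (10, 100, 1_000, 10_000, 100_000):
    p = 2 * norm.cdf(eps * n ** 0.5 / sigma) - 1
    print(n, round(p, 4))   # probability increases toward 1 with n
```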
Example 2
Show that $\bar{x}$ is a consistent estimator of $\lambda$ for the population with density
$$f(x) = \begin{cases} 1 & \text{for } \lambda - \frac{1}{2} < x < \lambda + \frac{1}{2} \\ 0 & \text{elsewhere} \end{cases}$$
Solution
For large $n$, by the central limit theorem,
$$\bar{x} \sim N\left(\mu, \frac{\sigma^2}{n}\right)$$
approximately, where $\mu$ and $\sigma^2$ are the mean and the variance of the population respectively.
$$\mu = \int_{-\infty}^{\infty} x \, f(x) \, dx = \int_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} x \cdot 1 \, dx = \left[\frac{x^2}{2}\right]_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} = \frac{\left(\lambda^2 + \lambda + \frac{1}{4}\right) - \left(\lambda^2 - \lambda + \frac{1}{4}\right)}{2} = \lambda$$
$$\sigma^2 = E[x^2] - \left(E[x]\right)^2$$
$$E[x^2] = \int_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} x^2 \cdot 1 \, dx = \left[\frac{x^3}{3}\right]_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} = \frac{\left(\lambda + \frac{1}{2}\right)^3 - \left(\lambda - \frac{1}{2}\right)^3}{3} = \lambda^2 + \frac{1}{12}$$
$$\Rightarrow \sigma^2 = \lambda^2 + \frac{1}{12} - \lambda^2 = \frac{1}{12}$$
Thus $\bar{x} \sim N\left(\lambda, \frac{1}{12n}\right)$ and
$$Z = \frac{\bar{x} - \lambda}{\sqrt{1 / 12n}} \sim N(0, 1)$$
We want to prove that $\Pr\left[\left|\bar{x} - \lambda\right| < \varepsilon\right] \to 1$:
$$\Pr\left[\left|\bar{x} - \lambda\right| < \varepsilon\right] = \Pr\left[\frac{\left|\bar{x} - \lambda\right|}{\sqrt{1 / 12n}} < \frac{\varepsilon}{\sqrt{1 / 12n}}\right] = \Pr\left[\left|Z\right| < \varepsilon \sqrt{12n}\right] = \int_{-\varepsilon \sqrt{12n}}^{\varepsilon \sqrt{12n}} \varphi(z) \, dz$$
which tends to 1 as $n$ tends to infinity. Hence $\bar{x}$ is a consistent estimator of $\lambda$.
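The same limiting behaviour can be seen by simulating the uniform population directly; $\lambda = 3$ and $\varepsilon = 0.05$ below are illustrative choices.

```python
import numpy as np

# Estimate Pr[|x-bar - lambda| < eps] by simulation for the uniform
# density on (lambda - 1/2, lambda + 1/2); lambda = 3, eps = 0.05.
rng = np.random.default_rng(6)
lam, eps, reps = 3.0, 0.05, 100_000
for n in (10, 100, 1_000):
    xbar = rng.uniform(lam - 0.5, lam + 0.5, size=(reps, n)).mean(axis=1)
    print(n, (np.abs(xbar - lam) < eps).mean())   # fraction -> 1 as n grows
```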
Example 3
Suppose $x_1, x_2, x_3, \dots, x_n$ form a random sample of size $n$ from a normal population having mean $\mu$ and variance $\sigma^2$.
Let $\bar{x}_n = \frac{1}{n} \sum_{i=1}^{n} x_i$ and $S_n^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$ be the sample mean and variance respectively.
Show that:
i) $\{\bar{x}_n; n = 1, 2, 3, \dots\}$ is a mean squared error consistent estimator of $\mu$.
Since $E[\bar{x}_n] = \mu$ for all $n$, the bias is zero, and
$$\lim_{n \to \infty} \mathrm{Var}[\bar{x}_n] = \lim_{n \to \infty} \frac{\sigma^2}{n} = 0$$
$\therefore \{\bar{x}_n; n = 1, 2, 3, \dots\}$ is a mean squared error consistent estimator of $\mu$.
ii) To show that $\{S_n^2; n = 1, 2, 3, \dots\}$ is a mean squared error consistent estimator of $\sigma^2$, recall that for a normal population $\frac{n S_n^2}{\sigma^2}$ has a chi-square distribution with $n - 1$ degrees of freedom.
Hence $E\left[\frac{n S_n^2}{\sigma^2}\right] = n - 1$ and $\mathrm{Var}\left[\frac{n S_n^2}{\sigma^2}\right] = 2(n - 1)$
$$\Rightarrow E[S_n^2] = \left(\frac{n-1}{n}\right) \sigma^2 \quad \text{and} \quad \mathrm{Var}[S_n^2] = \frac{2(n-1)}{n^2} \sigma^4$$
Hence $\lim_{n \to \infty} E[S_n^2] = \sigma^2$ and $\lim_{n \to \infty} \mathrm{Var}[S_n^2] = 0$.
Therefore the sequence $\{S_n^2; n = 1, 2, 3, \dots\}$ is a mean squared error consistent estimator of $\sigma^2$.
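A simulation shows the mean squared error of $S_n^2$ shrinking toward zero as $n$ grows; $\sigma^2 = 4$ and the sample sizes below are illustrative choices.

```python
import numpy as np

# The mean squared error of S_n^2 shrinks as n grows, illustrating mean
# squared error consistency; sigma^2 = 4 is an illustrative choice.
rng = np.random.default_rng(7)
sigma2, reps = 4.0, 100_000
for n in (5, 50, 500):
    s2 = rng.normal(0.0, 2.0, size=(reps, n)).var(axis=1, ddof=0)
    print(n, ((s2 - sigma2) ** 2).mean())   # MSE -> 0 as n grows
```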
NB:
We can adopt the following definition of a consistent estimator. The statistic $\hat{\theta}$ is a consistent estimator of the parameter $\theta$ if and only if for each $c > 0$,
$$\lim_{n \to \infty} \Pr\left[\left|\hat{\theta} - \theta\right| < c\right] = 1$$
From the definition of a consistent estimator, we say that a sequence of estimators $\{\hat{\theta}_n; n = 1, 2, 3, \dots\}$ is called a simple or weakly consistent sequence of estimators of $\theta$ if and only if for each $\varepsilon > 0$
$$\lim_{n \to \infty} \Pr\left[\left|\hat{\theta}_n - \theta\right| \ge \varepsilon\right] = 0 \quad \text{for all } \theta$$
or equivalently
$$\lim_{n \to \infty} \Pr\left[\left|\hat{\theta}_n - \theta\right| < \varepsilon\right] = 1$$
The notion of simple consistency is the same as convergence in probability or weak convergence. A simple consistent estimator is referred to simply as a consistent estimator. The consistency of $\bar{x}_n$ can be deduced directly from the weak law of large numbers.
We can judge whether a sequence of estimators is consistent by using the following sufficient conditions: the sequence $\{\hat{\theta}_n; n = 1, 2, 3, \dots\}$ of estimators is a consistent estimator of $\theta$ if
i. $\lim_{n \to \infty} E[\hat{\theta}_n] = \theta$
ii. $\lim_{n \to \infty} \mathrm{Var}[\hat{\theta}_n] = 0$
These are exactly the conditions for mean squared error consistency; hence mean squared error consistency implies consistency, but the converse is not true.