Topic 2a Theory of Estimation


PROPERTIES OF ESTIMATORS (1)

A good estimator should possess the following properties:

1) Unbiasedness
2) Consistency
3) Efficiency
4) Sufficiency
1. Unbiasedness

Suppose that $x_1, x_2, x_3, \ldots, x_n$ form a random sample of size $n$ from a distribution having density function $f(x;\theta)$ which depends on the unknown parameter $\theta$. Let us assume that $\theta$ belongs to the parameter space $\Omega$, a subset of the real line. We then have the following definition of an unbiased estimator.

Definition
A statistic $\hat{\theta} = \varphi(x_1, x_2, x_3, \ldots, x_n)$ is said to be an unbiased estimator of $\theta$ if $E[\hat{\theta}] = \theta$ for all $\theta \in \Omega$.

Thus if the mean of the sampling distribution of the estimator $\hat{\theta}$ is $\theta$, we say that $\hat{\theta}$ is an unbiased
estimator.

The sample variance $S^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$ can be shown to be a biased estimator of $\sigma^2$ because

$$E[S^2] = \left(\frac{n-1}{n}\right)\sigma^2 = \left(1 - \frac{1}{n}\right)\sigma^2$$

The bias in this case is

$$E[S^2] - \sigma^2 = \left(\sigma^2 - \frac{\sigma^2}{n}\right) - \sigma^2 = -\frac{\sigma^2}{n}$$

hence the sample variance is not an unbiased estimator of the population variance.
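As a quick numerical illustration of this factor $(n-1)/n$ (our addition, not part of the original derivation), the following minimal sketch assumes a normal population with $\sigma^2 = 4$ and sample size $n = 5$; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 5, 4.0, 200_000

# Draw many samples of size n and average the two variance estimators.
samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))
s2_biased = samples.var(axis=1, ddof=0)    # divides by n
s2_unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1

print(s2_biased.mean())    # ~ (n-1)/n * sigma2 = 3.2
print(s2_unbiased.mean())  # ~ sigma2 = 4.0
```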

Example 1

Suppose $x_1, x_2, x_3, \ldots, x_n$ form a random sample of size $n$ from a population with mean $\mu$, then

i) $E[x_i] = \mu$ for $i = 1, 2, \ldots, n$

ii) $E[\bar{x}] = \mu$, and

iii) for any constants $a_1, a_2, a_3, \ldots, a_n$ satisfying $\sum_{i=1}^{n} a_i = 1$,

$$E\left[\sum_{i=1}^{n} a_i x_i\right] = \sum_{i=1}^{n} a_i E[x_i] = \mu \sum_{i=1}^{n} a_i = \mu$$

NB

Part (ii) is a special case of part (iii) with $a_i = \frac{1}{n}$ for $i = 1, 2, \ldots, n$.

Remark: the example above shows that the statistics $x_i$, $i = 1, 2, \ldots, n$, $\bar{x}$, and $y = \sum_{i=1}^{n} a_i x_i$ where $\sum_{i=1}^{n} a_i = 1$ are all unbiased estimators of the parameter $\mu$, i.e. unbiased estimators are not unique. A simulation illustrating this appears below.
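As a hedged illustration of the remark (our own example, not from the original notes), this sketch averages three different unbiased estimators of $\mu$ over many samples: the first observation $x_1$, the sample mean $\bar{x}$, and an arbitrary weighted average with weights summing to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, reps = 4, 10.0, 200_000

samples = rng.normal(loc=mu, scale=2.0, size=(reps, n))
a = np.array([0.4, 0.3, 0.2, 0.1])  # arbitrary weights with sum(a) == 1

print(samples[:, 0].mean())              # x_1 alone:       ~ 10
print(samples.mean(axis=1).mean())       # sample mean:     ~ 10
print((samples * a).sum(axis=1).mean())  # weighted average: ~ 10
```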

Example 2

Suppose $x_1, x_2, x_3, \ldots, x_n$ is a random sample from a population with

$$f(x) = \begin{cases} 1 & \text{for } \lambda - \frac{1}{2} < x < \lambda + \frac{1}{2} \\ 0 & \text{elsewhere} \end{cases}$$

Prove that $\bar{x}$ is an unbiased estimator of $\lambda$.

Solution

We need to show that $E[\bar{x}] = \lambda$.

$$E[\bar{x}] = E[x] = \int_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} x \cdot 1 \, dx = \left[\frac{x^2}{2}\right]_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}}$$

$$= \frac{1}{2}\left[\left(\lambda + \frac{1}{2}\right)^2 - \left(\lambda - \frac{1}{2}\right)^2\right]$$

$$= \frac{1}{2}\left[\left(\lambda^2 + \lambda + \frac{1}{4}\right) - \left(\lambda^2 - \lambda + \frac{1}{4}\right)\right]$$

$$= \lambda$$

$\therefore E[\bar{x}] = \lambda$, hence $\bar{x}$ is an unbiased estimator of $\lambda$.
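A quick numerical check of this result (our illustration, assuming $\lambda = 3$): sampling from the uniform distribution on $(\lambda - \frac{1}{2}, \lambda + \frac{1}{2})$ and averaging the sample means should recover $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 3.0, 10, 100_000

# Uniform on (lambda - 1/2, lambda + 1/2)
samples = rng.uniform(lam - 0.5, lam + 0.5, size=(reps, n))
print(samples.mean(axis=1).mean())  # ~ 3.0 = lambda
```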
Example 3

Suppose $x_1, x_2, x_3, \ldots, x_n$ form a random sample of size $n$ from a distribution with mean $\mu$ and variance $\sigma^2$. Show that $S_u^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ is an unbiased estimator of $\sigma^2$ whereas $S^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$ is not an unbiased estimator.

Solution

Case 1

$$S^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i^2 - 2x_i\bar{x} + \bar{x}^2\right)$$

$$= \frac{1}{n}\sum_{i=1}^{n}x_i^2 - 2\bar{x}\cdot\frac{\sum_{i=1}^{n}x_i}{n} + \frac{1}{n}\sum_{i=1}^{n}\bar{x}^2$$

$$= \frac{1}{n}\sum_{i=1}^{n}x_i^2 - \bar{x}^2$$

Let the sample be such that $E[x_i] = 0$ for all $i = 1, 2, \ldots, n$, so that $\sigma^2 = E[x_i^2] = \mu_2'$, the second raw moment. Then

$$E[S^2] = E\left[\frac{1}{n}\sum_{i=1}^{n}x_i^2 - \bar{x}^2\right] = \frac{1}{n}\sum_{i=1}^{n}E[x_i^2] - E\left[\left(\frac{1}{n}\sum_{i=1}^{n}x_i\right)^2\right]$$

For the first term, $\frac{1}{n}\sum_{i=1}^{n}E[x_i^2] = \frac{1}{n}\cdot n\mu_2' = \mu_2'$. For the second term,

$$E\left[\left(\frac{1}{n}\sum_{i=1}^{n}x_i\right)^2\right] = \frac{1}{n^2}E\left[\sum_{i=1}^{n}x_i^2 + 2\sum_{i<j}x_i x_j\right] = \frac{1}{n^2}\cdot n\mu_2' + \frac{2}{n^2}\sum_{i<j}E[x_i]E[x_j]$$

using independence of the observations. But $E[x_i] = 0$, so the cross terms vanish and

$$E[S^2] = \mu_2' - \frac{1}{n}\mu_2' = \frac{n-1}{n}\mu_2' = \left(\frac{n-1}{n}\right)\sigma^2$$

Thus $E[S^2] = \frac{n-1}{n}\sigma^2$, implying that $S^2$ is not an unbiased estimator of $\sigma^2$.

Case 2

Having $S_u^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ and $S^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$, we can obtain

$$(n-1)S_u^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 \quad \text{and} \quad nS^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2$$

Therefore $(n-1)S_u^2 = nS^2$, hence

$$S_u^2 = \frac{n}{n-1}S^2$$

$$E[S_u^2] = \frac{n}{n-1}E[S^2] = \frac{n}{n-1}\cdot\frac{n-1}{n}\sigma^2$$

from Case 1 above. This reduces to $\sigma^2$.

Thus $E[S_u^2] = \sigma^2$, hence $S_u^2$ is an unbiased estimator of $\sigma^2$.

Example 4

In the exponential population whose p.d.f. is given by

$$f(x;\theta) = \begin{cases} \dfrac{1}{\theta}\exp\left(\dfrac{-x}{\theta}\right), & x > 0, \ \theta > 0 \\ 0, & \text{elsewhere} \end{cases}$$

show that $\bar{x}$ is an unbiased estimator of $\theta$.

Solution

$$E[\bar{x}] = E[x] = \int_{0}^{\infty} x \cdot \frac{1}{\theta}\exp\left(\frac{-x}{\theta}\right) dx = \frac{1}{\theta}\int_{0}^{\infty} x\exp\left(\frac{-x}{\theta}\right) dx$$

Using $\int U \, dV = UV - \int V \, dU$, let $U = x$, thus $dU = dx$. Also let $\frac{dV}{dx} = \exp\left(\frac{-x}{\theta}\right)$, so that

$$V = \int \exp\left(\frac{-x}{\theta}\right) dx = -\theta\exp\left(\frac{-x}{\theta}\right)$$

Then

$$\frac{1}{\theta}\int_{0}^{\infty} x\exp\left(\frac{-x}{\theta}\right) dx = \frac{1}{\theta}\left\{\left[-x\theta\exp\left(\frac{-x}{\theta}\right)\right]_{0}^{\infty} - \int_{0}^{\infty} -\theta\exp\left(\frac{-x}{\theta}\right) dx\right\}$$

$$= \frac{1}{\theta}\left[-x\theta\exp\left(\frac{-x}{\theta}\right) - \theta^2\exp\left(\frac{-x}{\theta}\right)\right]_{0}^{\infty}$$

$$= \frac{1}{\theta}\left[0 - \left(-\theta^2\right)\right] = \frac{1}{\theta}\cdot\theta^2 = \theta$$

$E[\bar{x}] = \theta$; hence $\bar{x}$ is an unbiased estimator of $\theta$.

Alternatively:

$$E[x] = \frac{1}{\theta}\int_{0}^{\infty} x\exp\left(\frac{-x}{\theta}\right) dx$$

Let $\frac{x}{\theta} = y \Rightarrow x = \theta y$, $\frac{dy}{dx} = \frac{1}{\theta}$ and $dx = \theta\,dy$. Replacing and changing limits:

$$\frac{1}{\theta}\int_{0}^{\infty} \theta y \, e^{-y}\,\theta\,dy$$

Recall $\Gamma(n) = \int_{0}^{\infty} y^{n-1}e^{-y}\,dy$. Therefore $\int_{0}^{\infty} y e^{-y}\,dy = \Gamma(2)$ and $\Gamma(2) = (2-1)! = 1$.

Hence

$$\frac{1}{\theta}\int_{0}^{\infty}\theta y\, e^{-y}\,\theta\,dy = \frac{\theta^2}{\theta} = \theta$$
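As an optional check (our addition), the integral can be verified symbolically; this sketch assumes `sympy` is available.

```python
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)

# E[X] for the exponential density (1/theta) * exp(-x/theta) on (0, oo)
mean = sp.integrate(x * (1 / theta) * sp.exp(-x / theta), (x, 0, sp.oo))
print(sp.simplify(mean))  # prints: theta
```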

Example 5

Suppose $x_1, x_2, x_3, \ldots, x_n$ form a random sample from a population with density

$$f(x;\theta) = \frac{1}{\theta} \quad \text{for } 0 < x < \theta \text{ and } \theta > 0$$

where $\theta$ is an unknown parameter. Let $T = \max(x_1, x_2, x_3, \ldots, x_n)$. Show that $\hat{\theta} = \left(1 + \frac{1}{n}\right)T$ is an unbiased estimator of $\theta$.

Solution

NB:

A statistic is a function of the elements of a random sample which does not contain any unknown parameters. The distribution of a statistic is known as its sampling distribution. Any statistic whose values are used to estimate $U(\theta)$, where $U(\theta)$ is a function of $\theta$, is called an estimator of $U(\theta)$.

$\hat{\theta} = \left(1 + \frac{1}{n}\right)T$ is a statistic. To show that $\hat{\theta}$ is an unbiased estimator of $\theta$ we need to know the sampling distribution of the statistic $T$.

Let $G(t)$ denote the distribution function of $T$. Since $T$ is the $n$th order statistic of the sample and the observations are independent,

$$G(t) = \Pr(T \le t) = \left[\Pr(X \le t)\right]^n = \left[\int_{0}^{t}\frac{1}{\theta}\,dx\right]^n = \left(\frac{t}{\theta}\right)^n \quad \text{for } 0 < t < \theta$$

and the density function of $T$ is

$$g(t) = G'(t) = \frac{d}{dt}\left(\frac{t}{\theta}\right)^n = \begin{cases} \dfrac{n t^{n-1}}{\theta^n}, & 0 < t < \theta \\ 0, & \text{elsewhere} \end{cases}$$

It then follows that

$$E[T] = \int_{0}^{\theta} t\cdot\frac{n t^{n-1}}{\theta^n}\,dt = \int_{0}^{\theta} \frac{n t^n}{\theta^n}\,dt = \frac{n}{\theta^n}\left[\frac{t^{n+1}}{n+1}\right]_{0}^{\theta} = \frac{n}{n+1}\cdot\frac{\theta^{n+1}}{\theta^n} = \frac{n}{n+1}\theta \quad \text{for all } \theta > 0$$

Therefore

$$E[\hat{\theta}] = E\left[\left(1 + \frac{1}{n}\right)T\right] = \left(1 + \frac{1}{n}\right)E[T] = \left(\frac{n+1}{n}\right)\left(\frac{n}{n+1}\right)\theta = \theta$$

Hence $\hat{\theta}$ is an unbiased estimator of $\theta$.

Note, however, that both $\hat{\theta}$ and $T$ are estimators of $\theta$; the difference is that $\hat{\theta}$ is unbiased while $T$ is not. A simulation illustrating this appears below.
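A short simulation (our illustration, assuming $\theta = 5$ and $n = 8$) shows $T$ underestimating $\theta$ on average while $\left(1 + \frac{1}{n}\right)T$ centres on it.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 5.0, 8, 200_000

samples = rng.uniform(0.0, theta, size=(reps, n))
T = samples.max(axis=1)  # sample maximum for each replicate

print(T.mean())                # ~ n/(n+1) * theta = 4.44...
print(((1 + 1/n) * T).mean())  # ~ theta = 5.0
```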

2. Consistency

Let $\hat{\theta}_1$ be an estimator of $\theta$ based on a sample of size 1 from $f(x;\theta)$. Let $\hat{\theta}_2$ be an estimator of $\theta$ based on a sample of size 2, and in general let $\hat{\theta}_n$ be an estimator of $\theta$ based on a sample of size $n$. Then $\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_n$ is a sequence of estimators of $\theta$.

We desire that

$$\lim_{n\to\infty} E\left(\hat{\theta}_n - \theta\right) = 0 \quad \text{for all } \theta \in \Omega$$

This means that the sequence $\{\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_n\}$ tends to get closer and closer to the quantity being estimated as the sample size increases.

This concept of limiting closeness is called consistency. More precisely, a sequence of estimators $\hat{\theta}_n;\ n = 1, 2, 3, \ldots$ is called a mean squared error consistent sequence of estimators if and only if

$$\lim_{n\to\infty} E\left[\left(\hat{\theta}_n - \theta\right)^2\right] = 0 \quad \text{for all } \theta \in \Omega$$

i.e. $\hat{\theta}_n;\ n = 1, 2, 3, \ldots$ will be mean squared error consistent if it converges to $\theta$ in mean square error.

Recall:

$$E\left[\left(\hat{\theta}_n - \theta\right)^2\right] = \mathrm{Var}\left[\hat{\theta}_n\right] + \left[\theta - E\left(\hat{\theta}_n\right)\right]^2$$

Hence the sequence $\hat{\theta}_n;\ n = 1, 2, 3, \ldots$ is mean squared error consistent if both the bias and the variance of $\hat{\theta}_n$ approach zero, i.e.

$$\lim_{n\to\infty}\mathrm{Var}\left(\hat{\theta}_n\right) = 0 \quad \text{and} \quad \lim_{n\to\infty}E\left(\hat{\theta}_n\right) = \theta$$
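As a numerical illustration of this decomposition (our addition, reusing the biased estimator $S^2$ from the unbiasedness section with a normal population, $\sigma^2 = 4$, $n = 5$), the Monte Carlo mean squared error should match variance plus squared bias up to simulation error.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma2, reps = 5, 4.0, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = samples.var(axis=1, ddof=0)  # the biased estimator S^2

mse = ((s2 - sigma2) ** 2).mean()                       # E[(S^2 - sigma^2)^2]
var_plus_bias2 = s2.var() + (sigma2 - s2.mean()) ** 2   # Var + bias^2
print(mse, var_plus_bias2)  # the two agree up to Monte Carlo error
```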

Example 1

Prove that $\bar{x}$ is a consistent estimator for $\mu$ in the normal population with parameters $\mu$ and $\sigma^2$.

Solution

$$X \sim N(\mu, \sigma^2) \quad \Rightarrow \quad \bar{x} \sim N\left(\mu, \frac{\sigma^2}{n}\right)$$

To prove this, we recall that

$$Z = \frac{\bar{x} - \mu}{\sqrt{\sigma^2/n}}$$

is the standard normal variate $N(0, 1)$.

Consider a small value $\varepsilon > 0$:

$$\Pr\left[\frac{|\bar{x} - \mu|}{\sqrt{\sigma^2/n}} < \frac{\varepsilon}{\sqrt{\sigma^2/n}}\right] = \Pr\left[|Z| < \frac{\varepsilon\sqrt{n}}{\sigma}\right] = \int_{-\varepsilon\sqrt{n}/\sigma}^{\varepsilon\sqrt{n}/\sigma} \varphi(z)\,dz = 2\int_{0}^{\varepsilon\sqrt{n}/\sigma} \varphi(z)\,dz$$

Clearly this expression increases as $n$ increases; therefore as $n$ tends to infinity, $\Pr\left[|\bar{x} - \mu| < \varepsilon\right]$ tends to 1.

This means $\bar{x}$ is a consistent estimator. A numerical check follows.
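The limiting probability can be evaluated in closed form as $2\Phi\left(\varepsilon\sqrt{n}/\sigma\right) - 1$, where $\Phi$ is the standard normal c.d.f. This sketch (our addition, assuming $\sigma = 2$, $\varepsilon = 0.1$, and that `scipy` is available) shows it approaching 1.

```python
from scipy.stats import norm

sigma, eps = 2.0, 0.1

for n in [10, 100, 1_000, 10_000, 100_000]:
    p = 2 * norm.cdf(eps * n**0.5 / sigma) - 1  # Pr[|xbar - mu| < eps]
    print(n, round(p, 6))  # rises towards 1 as n grows
```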


Example 2

Suppose $x_1, x_2, x_3, \ldots, x_n$ is a random sample from a population given by

$$f(x) = \begin{cases} 1 & \text{for } \lambda - \frac{1}{2} < x < \lambda + \frac{1}{2} \\ 0 & \text{elsewhere} \end{cases}$$

Prove that $\bar{x}$ is a consistent estimator for $\lambda$.

Solution

$$\bar{x} \sim N\left(\mu, \frac{\sigma^2}{n}\right)$$

where $\mu$ and $\sigma^2$ are the mean and the variance of the population respectively.

$$\mu = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} x \cdot 1\,dx = \left[\frac{x^2}{2}\right]_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} = \frac{1}{2}\left[\left(\lambda^2 + \lambda + \frac{1}{4}\right) - \left(\lambda^2 - \lambda + \frac{1}{4}\right)\right] = \lambda$$

$$\sigma^2 = E[x^2] - \left(E[x]\right)^2$$

$$E[x^2] = \int_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} x^2 \cdot 1\,dx = \left[\frac{x^3}{3}\right]_{\lambda - \frac{1}{2}}^{\lambda + \frac{1}{2}} = \frac{1}{3}\left[\left(\lambda + \frac{1}{2}\right)^3 - \left(\lambda - \frac{1}{2}\right)^3\right] = \lambda^2 + \frac{1}{12}$$

$$\Rightarrow \sigma^2 = \lambda^2 + \frac{1}{12} - \lambda^2 = \frac{1}{12}$$

Thus $\bar{x} \sim N\left(\lambda, \frac{1}{12n}\right)$ and

$$Z = \frac{\bar{x} - \lambda}{\sqrt{1/12n}} \sim N(0, 1)$$

We want to evaluate $\Pr\left[|\bar{x} - \lambda| < \varepsilon\right]$:

$$\Pr\left[\frac{|\bar{x} - \lambda|}{\sqrt{1/12n}} < \frac{\varepsilon}{\sqrt{1/12n}}\right] = \Pr\left[|Z| < \varepsilon\sqrt{12n}\right] = \int_{-\varepsilon\sqrt{12n}}^{\varepsilon\sqrt{12n}} \varphi(z)\,dz = 2\int_{0}^{\varepsilon\sqrt{12n}} \varphi(z)\,dz$$

which tends to 1 as $n$ tends to infinity.

Therefore $\bar{x}$ is a consistent estimator for $\lambda$.
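A direct Monte Carlo check (our addition, assuming $\lambda = 3$ and $\varepsilon = 0.05$) estimates $\Pr\left[|\bar{x} - \lambda| < \varepsilon\right]$ for increasing $n$.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, eps, reps = 3.0, 0.05, 10_000

for n in [10, 100, 1_000]:
    xbar = rng.uniform(lam - 0.5, lam + 0.5, size=(reps, n)).mean(axis=1)
    print(n, (np.abs(xbar - lam) < eps).mean())  # approaches 1
```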

Example 3

Suppose $x_1, x_2, x_3, \ldots, x_n$ form a random sample of size $n$ from a normal population having mean $\mu$ and variance $\sigma^2$. Let

$$\bar{x}_n = \frac{1}{n}\sum_{i=1}^{n}x_i \quad \text{and} \quad S_n^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$$

be the sample mean and variance respectively. Show that:

i) $\{\bar{x}_n;\ n = 1, 2, 3, \ldots\}$ is a mean squared error consistent estimator of $\mu$

ii) $\{S_n^2;\ n = 1, 2, 3, \ldots\}$ is a mean squared error consistent estimator of $\sigma^2$

Solution

i) Since $\bar{x}_n$ is unbiased for $\mu$, its mean squared error is just its variance:

$$\lim_{n\to\infty} E\left[(\bar{x}_n - \mu)^2\right] = \lim_{n\to\infty}\mathrm{Var}(\bar{x}_n) = \lim_{n\to\infty}\frac{\sigma^2}{n} = 0$$

$\therefore \{\bar{x}_n;\ n = 1, 2, 3, \ldots\}$ is a mean squared error consistent estimator of $\mu$.

ii) To show that $\{S_n^2;\ n = 1, 2, 3, \ldots\}$ is a mean squared error consistent estimator of $\sigma^2$: since we are sampling from a normal distribution, it follows that

$$\frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{n S_n^2}{\sigma^2}$$

has a chi-square distribution with $n - 1$ degrees of freedom. Hence

$$E\left[\frac{n S_n^2}{\sigma^2}\right] = n - 1 \quad \text{and} \quad \mathrm{Var}\left[\frac{n S_n^2}{\sigma^2}\right] = 2(n-1)$$

$$\Rightarrow E\left[S_n^2\right] = \left(\frac{n-1}{n}\right)\sigma^2 \quad \text{and} \quad \mathrm{Var}\left[S_n^2\right] = \left(\frac{2(n-1)}{n^2}\right)\sigma^4$$

Hence $\lim_{n\to\infty}E\left[S_n^2\right] = \sigma^2$ and $\lim_{n\to\infty}\mathrm{Var}\left[S_n^2\right] = 0$.

Therefore the sequence $\{S_n^2;\ n = 1, 2, 3, \ldots\}$ is a mean squared error consistent estimator of $\sigma^2$.
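These closed forms can be tabulated directly (our sketch, assuming $\sigma^2 = 4$) to watch the mean squared error of $S_n^2$ shrink to zero.

```python
sigma2 = 4.0

for n in [5, 50, 500, 5_000]:
    mean_s2 = (n - 1) / n * sigma2              # E[S_n^2]
    var_s2 = 2 * (n - 1) / n**2 * sigma2**2     # Var[S_n^2]
    mse = var_s2 + (sigma2 - mean_s2) ** 2      # Var + bias^2
    print(n, round(mse, 6))  # decreases towards 0
```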

NB:

We can adopt the following definition of a consistent estimator. The statistic $\hat{\theta}$ is a consistent estimator of the parameter $\theta$ if and only if for each $c > 0$,

$$\lim_{n\to\infty}\Pr\left[|\hat{\theta} - \theta| < c\right] = 1$$

From the definition of a consistent estimator, we say that a sequence of estimators $\{\hat{\theta}_n;\ n = 1, 2, 3, \ldots\}$ is called a simple or weakly consistent sequence of estimators of $\theta$ if and only if for each $\varepsilon > 0$

$$\lim_{n\to\infty}\Pr\left[|\hat{\theta}_n - \theta| \ge \varepsilon\right] = 0 \quad \text{for all } \theta$$

or equivalently

$$\lim_{n\to\infty}\Pr\left[|\hat{\theta}_n - \theta| < \varepsilon\right] = 1$$

The notion of simple consistency is the same as convergence in probability or weak convergence.

A simple consistent estimator is referred to simply as a consistent estimator. The consistency of $\bar{x}_n$ can be deduced directly from the weak law of large numbers.

We can judge whether a sequence of estimators is consistent by using the following sufficient conditions: the sequence $\{\hat{\theta}_n;\ n = 1, 2, 3, \ldots\}$ of estimators is a consistent estimator of $\theta$ if

i. $\lim_{n\to\infty} E\left[\hat{\theta}_n\right] = \theta$

ii. $\lim_{n\to\infty} \mathrm{Var}\left[\hat{\theta}_n\right] = 0$

This means that mean squared error consistency implies consistency, but the converse is not true.
