Unbiasedness
Sudheesh Kumar Kattumannil
University of Hyderabad
1 Introduction
Assume that the data $X = (X_1, ..., X_n)$ come from a probability distribution $f(x|\theta)$, with $\theta$ unknown. Once the data are available, our task is to find (estimate) the value of $\theta$. That is, we need to construct good estimators for $\theta$ or a function $g(\theta)$ of it. Important questions at this point are: How do we measure or evaluate the closeness of the estimator we obtained? How do we find the best possible estimator? What does "best" mean? We shall try to answer these questions by introducing some properties of estimators, including unbiasedness, consistency and efficiency.
2 Unbiasedness
After finding a point estimator, we are interested in developing criteria to compare different point estimators. One such measure is the mean squared error (MSE) of an estimator. The MSE of an estimator $T$ of $\theta$ is defined as
$$MSE_\theta(T) = E_\theta(T - \theta)^2.$$
The MSE is a function of $\theta$ and has an interpretation in terms of variance and bias:
$$MSE_\theta(T) = Var_\theta(T) + \big(B_T(\theta)\big)^2,$$
where $B_T(\theta)$ is the bias, given by $B_T(\theta) = E_\theta(T) - \theta$. Hence one may wish to minimize both the bias (inaccuracy) and the variance (precision) of an estimator. When the bias is equal to zero we say that the estimator is unbiased. Then the MSE becomes the variance and we only need to minimize the variance. In what follows we deal with unbiased estimators, and our task is then to find the minimum variance unbiased estimator.
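For completeness, a short derivation of this decomposition: add and subtract $E_\theta(T)$ inside the square,
$$MSE_\theta(T) = E_\theta\big(T - E_\theta(T) + E_\theta(T) - \theta\big)^2 = E_\theta\big(T - E_\theta(T)\big)^2 + \big(E_\theta(T) - \theta\big)^2 = Var_\theta(T) + \big(B_T(\theta)\big)^2,$$
since the cross term vanishes because $E_\theta\big(T - E_\theta(T)\big) = 0$.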
As mentioned above, any estimator that is not unbiased is called biased. The bias is the difference $\mathrm{Bias}(\theta) = E_\theta(T(X)) - g(\theta)$.
When used repeatedly, an unbiased estimator will, in the long run, estimate the right value on average.
Example 1 Let $X_1, ..., X_n$ be a random sample from $N(\mu, \sigma^2)$. Show that $\bar{X}$ is unbiased for $\mu$ and that $S^2 = \frac{1}{n-1}\sum_{k=1}^{n}(X_k - \bar{X})^2$ is unbiased for $\sigma^2$, and compute their MSEs.
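As a quick numerical illustration (a minimal Monte Carlo sketch, not part of the formal development; the parameter values are arbitrary), the sample mean and the sample variance with divisor $n-1$ average out to $\mu$ and $\sigma^2$ over repeated samples:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 2.0, 3.0, 10, 200_000   # arbitrary illustrative values

    x = rng.normal(mu, sigma, size=(reps, n))
    xbar = x.mean(axis=1)              # sample mean of each replicate
    s2 = x.var(axis=1, ddof=1)         # sample variance with divisor n-1

    print(xbar.mean(), "vs", mu)            # close to mu: unbiasedness of the mean
    print(s2.mean(), "vs", sigma**2)        # close to sigma^2: unbiasedness of S^2
    print(xbar.var(), "vs", sigma**2 / n)   # MSE of the mean equals sigma^2/n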
Note that unbiased estimators of $g(\theta)$ may not exist.
Example 2 Let $X$ be distributed according to the binomial distribution $B(n, p)$ and suppose that $g(p) = 1/p$. Then an unbiased estimator $T(X)$ of $g(p)$ would have to satisfy
$$\sum_{k=0}^{n} T(k)\binom{n}{k} p^{k}(1-p)^{n-k} = \frac{1}{p} \quad \text{for all } 0 < p < 1.$$
As $p \to 0$, the left-hand side tends to $T(0)$, which is finite, while the right-hand side tends to $\infty$. Hence no such $T(X)$ exists.
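To see this numerically (a rough sketch; the candidate values below are arbitrary and serve only to illustrate the argument), whatever finite values $T(0), ..., T(n)$ we pick, the expectation stays bounded near $p = 0$ while $1/p$ blows up:

    from math import comb

    n = 5
    T = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # arbitrary finite candidate values T(0),...,T(n)

    def expectation(p):
        # E_p[T(X)] for X ~ Binomial(n, p)
        return sum(T[k] * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

    for p in (0.1, 0.01, 0.001):
        print(p, expectation(p), 1 / p)   # E_p[T(X)] stays near T(0)=1, while 1/p diverges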
If there exists an unbiased estimator of g, the estimand g will be called U-estimable.
Although unbiasedness is an attractive condition, after an unbiased estimator has been found its performance should be investigated. That is, we seek the estimator with the smallest variance among all unbiased estimators.
Definition 2 An unbiased estimator $T(X)$ of $g(\theta)$ is the uniform minimum variance unbiased (UMVU) estimator of $g(\theta)$ if
$$Var_\theta\big(T(X)\big) \le Var_\theta\big(T'(X)\big) \quad \text{for all } \theta,$$
where $T'(X)$ is any other unbiased estimator of $g(\theta)$. The estimator $T(X)$ is locally minimum variance unbiased (LMVU) at $\theta = \theta_0$ if $Var_{\theta_0}(T(X)) \le Var_{\theta_0}(T'(X))$ for any other unbiased estimator $T'(X)$.
How To Find The Best Unbiased Estimator? There are different ways of finding UMVU estimators. The relationship of unbiased estimators of $g(\theta)$ with unbiased estimators of zero can be helpful in characterizing and determining UMVU estimators when they exist. The following theorem is in that direction.
Theorem 1 Let $X$ have distribution $\{f_\theta(x),\ \theta \in \Omega\}$, let $T(X)$ be an unbiased estimator of $g(\theta)$, and let $\mathcal{U}$ denote the set of all unbiased estimators of zero (with finite variance). Then a necessary and sufficient condition for $T$ to be a UMVU estimator of its expectation $g(\theta)$ is that
$$E_\theta(T\,U) = 0 \quad \text{for all } U \in \mathcal{U} \text{ and all } \theta \in \Omega,$$
that is, $T$ is uncorrelated with every unbiased estimator of zero.
Next we discuss methods for deriving UMVU estimators. If $T$ is a complete sufficient statistic, the UMVU estimator of any U-estimable function $g(\theta)$ is uniquely determined by the set of equations
$$E_\theta\big[\delta(T)\big] = g(\theta) \quad \text{for all } \theta,$$
where $\delta(T)$ denotes an estimator based on $T$.
Proof: If $\delta_1(T)$ and $\delta_2(T)$ are two unbiased estimators of $g(\theta)$, their difference $f(T) = \delta_1(T) - \delta_2(T)$ satisfies
$$E_\theta\big(f(T)\big) = 0 \quad \text{for all } \theta,$$
and hence, by the completeness of $T$, $\delta_1(T) = \delta_2(T)$ a.e.; this gives the uniqueness. By the Lehmann--Scheffé theorem (to be discussed later), $\delta(T)$ is the UMVU estimator of $g(\theta)$.
Example 3 Suppose that $T$ has the binomial distribution $b(n, p)$ and that $g(p) = p(1-p)$. Then the above equation becomes
$$\sum_{k=0}^{n} T(k)\binom{n}{k} p^{k}(1-p)^{n-k} = p(1-p) \quad \text{for all } 0 < p < 1.$$
If $\rho = p/q$ with $q = 1-p$, so that $p = \rho/(1+\rho)$ and $1-p = 1/(1+\rho)$, the above equation can be rewritten as
$$\sum_{k=0}^{n} \binom{n}{k} T(k)\,\rho^{k} = \rho\,(1+\rho)^{n-2} = \sum_{k=0}^{n-2} \binom{n-2}{k} \rho^{k+1}, \quad 0 < \rho < \infty.$$
Comparing the coefficients of $\rho^{t}$ on both sides gives $T(t)\binom{n}{t} = \binom{n-2}{t-1}$, and hence
$$T(t) = \frac{t(n-t)}{n(n-1)}.$$
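A quick numerical check (a small sketch with arbitrarily chosen $n$ and $p$): the estimator $T(t) = t(n-t)/(n(n-1))$ applied to a binomial count has expectation exactly $p(1-p)$.

    from math import comb

    n, p = 7, 0.3   # arbitrary illustrative values

    def T(t):
        # candidate UMVU estimator of p(1-p) based on a binomial(n, p) count t
        return t * (n - t) / (n * (n - 1))

    expectation = sum(T(k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
    print(expectation, p * (1 - p))   # both equal 0.21 (up to floating-point error)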
The second method of deriving UMVU estimators is to condition on a complete sufficient statistic $T$: if $h$ is any unbiased estimator of $g(\theta)$, then $E(h|T)$ is a function of $T$ and is an unbiased estimator of $g(\theta)$. Moreover, the result does not depend on the choice of $h$: if $h_1$ and $h_2$ are both unbiased estimators of $g(\theta)$, then
$$E_\theta(h_1|T) = E_\theta(h_2|T).$$
Example 4 Suppose that $X_1, ..., X_n$ are iid according to the uniform distribution $U(0, \theta)$ and that $g(\theta) = \theta/2$. Then $T = X_{(n)}$, the largest of the $X$'s, is a complete sufficient statistic. Since $E(X_1) = \theta/2$ and $T = X_{(n)}$ is a complete sufficient statistic, by the Lehmann--Scheffé theorem the UMVU estimator of $\theta/2$ is $E[X_1|X_{(n)}]$, which we now evaluate.
If $X_{(n)} = t$, then $X_1 = t$ with probability $1/n$, and $X_1$ is uniformly distributed on $(0, t)$ with the remaining probability $(n-1)/n$. Hence
$$E[X_1 \mid X_{(n)} = t] = \frac{1}{n}\, t + \frac{n-1}{n}\cdot\frac{t}{2} = \frac{n+1}{n}\cdot\frac{t}{2}.$$
Thus, $\frac{n+1}{n}\cdot\frac{T}{2}$ and $\frac{n+1}{n}\, T$ are the UMVU estimators of $\theta/2$ and $\theta$, respectively.
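A minimal simulation sketch (arbitrary $\theta$ and $n$) comparing the UMVU estimator $\frac{n+1}{n}X_{(n)}$ with the naive unbiased estimator $2\bar{X}$:

    import numpy as np

    rng = np.random.default_rng(1)
    theta, n, reps = 5.0, 10, 200_000   # arbitrary illustrative values

    x = rng.uniform(0, theta, size=(reps, n))
    umvu = (n + 1) / n * x.max(axis=1)   # (n+1)/n * X_(n)
    naive = 2 * x.mean(axis=1)           # 2 * Xbar, also unbiased

    print(umvu.mean(), naive.mean(), theta)   # both close to theta (unbiased)
    print(umvu.var(), naive.var())            # the UMVU estimator has much smaller variance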
Now we prove a theorem which was already discussed in connection with sufficiency.
Theorem 5 A complete sufficient statistic is minimal sufficient.
Proof: Let $T$ be a complete sufficient statistic for the family $\{F_\theta\}$ and let $S$ be any statistic for which $E_\theta(S)$ is finite. Write $h(T) = E(S|T)$; then by the Lehmann--Scheffé theorem $h(T)$ is the UMVU estimator of $E_\theta(S)$. Consider any other sufficient statistic $T_1$. We show that $h(T)$ is a function of $T_1$. Suppose that $h(T)$ is not a function of $T_1$. Then the estimator defined by $h_1(T_1) = E(h(T)|T_1)$ is also unbiased for $E_\theta(S)$, and by the Rao--Blackwell theorem
$$Var_\theta\big(h_1(T_1)\big) < Var_\theta\big(h(T)\big) \quad \text{for some } \theta,$$
which contradicts the fact that $h(T)$ is the UMVU estimator of $E_\theta(S)$. Hence $h(T)$ is a function of $T_1$. Since $S$ and $T_1$ were arbitrary, taking $S = T$ shows that $T$ is a function of every sufficient statistic; hence $T$ is minimal sufficient, which completes the proof.
Remark: In view of the Lehmann--Scheffé theorem, once we have a complete sufficient statistic $T$ which is unbiased for $\theta$, then $T$ is UMVU for $\theta$.
3 Fisher Information and Lower Bounds for the Variance
In this section, we discuss lower bounds for the variance of an unbiased estimator and their attainment. As a prerequisite, we first discuss Fisher information. For a family of densities $f(x|\theta)$, the score function is
$$s(x, \theta) = \frac{\partial}{\partial \theta} \log f(x|\theta),$$
and the Fisher information that an observation $X$ contains about $\theta$ is
$$I(\theta) = E_\theta\Big[\Big(\frac{\partial}{\partial \theta} \log f(X|\theta)\Big)^{2}\Big].$$
Clearly $I(\theta)$ is the average of the square of the relative rate at which the density $f$ changes at $x$. It is plausible that the greater this expectation is at a given value $\theta_0$, the easier it is to distinguish $\theta_0$ from neighboring values $\theta$, and therefore the more accurately $\theta$ can be estimated at $\theta = \theta_0$.
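As a numerical illustration (a minimal Monte Carlo sketch for the Bernoulli($p$) model, chosen here only as an example; the parameter value is arbitrary), the Fisher information $I(p) = 1/(p(1-p))$ can be approximated by averaging the squared score:

    import numpy as np

    rng = np.random.default_rng(2)
    p, reps = 0.3, 1_000_000   # arbitrary illustrative parameter value

    x = rng.binomial(1, p, size=reps)
    score = (x - p) / (p * (1 - p))               # d/dp log f(x|p) for the Bernoulli model
    print((score**2).mean(), 1 / (p * (1 - p)))   # both close to I(p) = 1/0.21 ~ 4.76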
Remark: If $X_1, ..., X_n$ is a random sample with pdf $f(x|\theta)$, then the score for the entire sample $X_1, ..., X_n$ is
$$s_n(X, \theta) = \sum_{k=1}^{n} s(X_k, \theta),$$
and the information in the sample is
$$I_n(\theta) = n\, I(\theta).$$
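This follows because, under the usual regularity conditions, each score has mean zero, $E_\theta[s(X_k, \theta)] = 0$, so that $I(\theta) = Var_\theta\big(s(X_k, \theta)\big)$, and by independence
$$I_n(\theta) = Var_\theta\big(s_n(X, \theta)\big) = \sum_{k=1}^{n} Var_\theta\big(s(X_k, \theta)\big) = n\, I(\theta).$$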
It is important to realize that $I(\theta)$ depends on the particular parametrization chosen. In fact, if $\theta = h(\xi)$ and $h$ is differentiable, the information that $X$ contains about $\xi$ is
$$I^{*}(\xi) = I\big(h(\xi)\big)\,\big(h'(\xi)\big)^{2}.$$
The proofs of the alternative expressions of $I(\theta)$ given below will be discussed in class:
$$I(\theta) = -E_\theta\Big(\frac{d^{2}}{d\theta^{2}} \log f_\theta(X)\Big),$$
$$I(\theta) = E_\theta\Big(\frac{d}{d\theta} \log h_\theta(X)\Big)^{2},$$
$$I(\theta) = E_\theta\Big(\frac{d}{d\theta} \log r_\theta(X)\Big)^{2},$$
where $h_\theta(x) = f_\theta(x)/(1 - F_\theta(x))$ and $r_\theta(x) = f_\theta(x)/F_\theta(x)$ are the hazard rate and the reversed hazard rate of $X$, respectively.
For a one-parameter exponential family with density of the form
$$f(x|\theta) = h(x)\exp\big[s(\theta)\,T(x) - c(\theta)\big],$$
the information that $X$ contains about $g(\theta) = E_\theta[T(X)]$ is
$$I\big(g(\theta)\big) = \frac{1}{Var_\theta(T)}.$$
Moreover, for any differentiable function $h(\theta)$, we have
$$I\big(h(\theta)\big) = \Big[\frac{s'(\theta)}{h'(\theta)}\Big]^{2}\, Var_\theta(T).$$
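Both formulas can be obtained from the reparametrization rule above together with the fact that, under the usual regularity conditions, the score has mean zero, so $I(\theta) = Var_\theta\big(\frac{d}{d\theta}\log f(X|\theta)\big)$. A short sketch:
$$\frac{d}{d\theta}\log f(X|\theta) = s'(\theta)\,T(X) - c'(\theta) \quad\Longrightarrow\quad I(\theta) = \big(s'(\theta)\big)^{2}\, Var_\theta(T),$$
and hence, for $\psi = h(\theta)$, $I(h(\theta)) = I(\theta)/\big(h'(\theta)\big)^{2} = \big[s'(\theta)/h'(\theta)\big]^{2} Var_\theta(T)$. Taking $h(\theta) = g(\theta) = E_\theta(T)$ and using the standard exponential-family identity $g'(\theta) = s'(\theta)\,Var_\theta(T)$ gives $I(g(\theta)) = 1/Var_\theta(T)$.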
Example 5 Let $X \sim \mathrm{Gamma}(\alpha, \sigma)$, where we assume that $\alpha$ is known. The density is given by
$$f(x) = \frac{1}{\Gamma(\alpha)\,\sigma^{\alpha}}\, x^{\alpha-1} e^{-x/\sigma} = e^{(-1/\sigma)x - \alpha\log(\sigma)}\, h(x),$$
where $h(x) = x^{\alpha-1}/\Gamma(\alpha)$ does not involve $\sigma$. This is a one-parameter exponential family with natural parameter $-1/\sigma$ and $T(X) = X$, so the information about $-1/\sigma$ is $Var(X) = \alpha\sigma^{2}$. Using the reparametrization formula
$$I^{*}(\xi) = I\big(h(\xi)\big)\,\big(h'(\xi)\big)^{2},$$
together with the fact that, quite generally, $I[c\,h(\xi)] = \frac{1}{c^{2}}\, I[h(\xi)]$, the information in $X$ about $\sigma$ is $I(\sigma) = \alpha/\sigma^{2}$.
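A Monte Carlo sketch (arbitrary parameter values) approximating $I(\sigma) = E_\sigma\big[\big(\frac{d}{d\sigma}\log f(X|\sigma)\big)^{2}\big]$ and comparing it with $\alpha/\sigma^{2}$:

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, sigma, reps = 2.5, 1.5, 1_000_000   # arbitrary illustrative values

    x = rng.gamma(shape=alpha, scale=sigma, size=reps)
    score = x / sigma**2 - alpha / sigma        # d/dsigma log f(x|sigma)
    print((score**2).mean(), alpha / sigma**2)  # both close to 2.5/2.25 ~ 1.11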
Remark: Suppose that $X$ is the whole data and $T = T(X)$ is some statistic. Then $I_X(\theta) \ge I_T(\theta)$ for all $\theta$. The two information measures coincide for all $\theta$ if and only if $T$ is a sufficient statistic for $\theta$.
Theorem 7 (Cramér--Rao lower bound) Let $X_1, ..., X_n$ be a sample with joint pdf $f(x|\theta)$. Suppose $T(X)$ is an estimator satisfying (i) $E_\theta(T(X)) = g(\theta)$ for all $\theta$; and (ii) $Var_\theta(T(X)) < \infty$. If the following interchange of differentiation and integration (interchangeability condition)
$$\frac{d}{d\theta}\int h(x)\, f(x|\theta)\, dx = \int h(x)\, \frac{d}{d\theta} f(x|\theta)\, dx$$
holds for any function $h(x)$ with $E_\theta|h(X)| < \infty$, then
$$Var_\theta\big(T(X)\big) \ge \frac{\big(g'(\theta)\big)^{2}}{I(\theta)}.$$
Theorem 8 (Cramér--Rao lower bound, iid case) Let $X_1, ..., X_n$ be iid with common pdf $f(x|\theta)$. Suppose $T(X)$ is an estimator satisfying all the conditions stated in Theorem 7. Then
$$Var_\theta\big(T(X)\big) \ge \frac{\big(g'(\theta)\big)^{2}}{n\, I(\theta)}.$$
Theorem 9 Let $X_1, ..., X_n$ be iid with common pdf $f(x|\theta)$, and suppose that all the conditions stated in Theorem 7 are satisfied. Then equality holds in the Cramér--Rao bound if and only if
$$k(\theta)\big(T(x) - g(\theta)\big) = \frac{d}{d\theta} \log f(x|\theta)$$
for some function $k(\theta)$.
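For instance (a standard illustration, assuming $\sigma^2$ is known), for a random sample from $N(\theta, \sigma^2)$ with $g(\theta) = \theta$,
$$\frac{d}{d\theta}\log f(x|\theta) = \sum_{i=1}^{n}\frac{x_i - \theta}{\sigma^{2}} = \frac{n}{\sigma^{2}}\big(\bar{x} - \theta\big),$$
so the condition of Theorem 9 holds with $T(X) = \bar{X}$ and $k(\theta) = n/\sigma^{2}$; indeed $Var(\bar{X}) = \sigma^{2}/n = \frac{(g'(\theta))^{2}}{n\,I(\theta)}$, since $I(\theta) = 1/\sigma^{2}$.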
Another popular lower bound for the variance of an unbiased estimator is provided by the Chapman--Robbins inequality. It performs better than the Cramér--Rao inequality in non-regular cases, where the latter fails. We discuss it next.
Theorem 10 (Chapman--Robbins inequality) Let $E_\theta(T(X)) = g(\theta)$, with $E_\theta(T(X))^{2} < \infty$. For $\nu \ne \theta$, assume that $f_\nu$ and $f_\theta$ are distinct and that $\{x: f_\nu(x) > 0\} \subseteq \{x: f_\theta(x) > 0\}$. Then
$$Var_\theta\big(T(X)\big) \ge \sup_{\nu}\, \frac{\big(g(\nu) - g(\theta)\big)^{2}}{Var_\theta\big(f_\nu(X)/f_\theta(X)\big)},$$
where the supremum is over all $\nu$ satisfying the above conditions.
Example 6 Let $X$ be a single observation from $U(0, \theta)$ and take $g(\theta) = \theta$. For $\nu < \theta$ the support condition holds, and
$$E_\theta\Big[\Big(\frac{f_\nu(X)}{f_\theta(X)}\Big)^{2}\Big] = \int_0^{\nu} \Big(\frac{\theta}{\nu}\Big)^{2}\, \frac{1}{\theta}\, dx = \frac{\theta}{\nu},$$
and
$$E_\theta\Big[\frac{f_\nu(X)}{f_\theta(X)}\Big] = \int_0^{\nu} \frac{\theta}{\nu}\cdot \frac{1}{\theta}\, dx = 1.$$
Hence
$$Var_\theta\Big(\frac{f_\nu(X)}{f_\theta(X)}\Big) = \frac{\theta}{\nu} - 1 = \frac{\theta - \nu}{\nu}.$$
Therefore
$$Var_\theta\big(T(X)\big) \ge \sup_{\nu:\, \nu < \theta}\, \frac{(\theta - \nu)^{2}}{(\theta - \nu)/\nu} = \sup_{\nu:\, \nu < \theta}\, \nu(\theta - \nu) = \frac{\theta^{2}}{4},$$
for any unbiased estimator $T(X)$ of $\theta$. Clearly $2X$ is unbiased for $\theta$; it is in fact the UMVU estimator of $\theta$, since $X$ is a complete sufficient statistic. Now
$$Var_\theta(2X) = 4\, Var_\theta(X) = \frac{\theta^{2}}{3},$$
which is greater than $\theta^{2}/4$, the bound provided by the Chapman--Robbins inequality.
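A small numerical sketch (with an arbitrary $\theta$) of this comparison: the Chapman--Robbins bound $\sup_{\nu<\theta}\nu(\theta-\nu) = \theta^{2}/4$ versus the actual variance $\theta^{2}/3$ of the UMVU estimator $2X$:

    import numpy as np

    theta = 2.0                                  # arbitrary illustrative value
    nu = np.linspace(1e-6, theta, 10_000, endpoint=False)
    cr_bound = np.max(nu * (theta - nu))         # sup over nu of (g(nu)-g(theta))^2 / Var(f_nu/f_theta)

    rng = np.random.default_rng(4)
    x = rng.uniform(0, theta, size=1_000_000)
    var_2x = np.var(2 * x)                       # variance of the UMVU estimator 2X

    print(cr_bound, theta**2 / 4)   # ~1.0
    print(var_2x, theta**2 / 3)     # ~1.33, above the Chapman-Robbins bound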