
Some Math 580 Statistical Inference qualifying exam problems, often with solutions.

Most solutions are from David Olive, but a few solutions were contributed by Bhaskar
Bhattacharya, Abdel Mugdadi, and Yaser Samadi.
2.68∗. (Aug. 2000 Qual): The number of defects per yard, Y of a certain fabric is
known to have a Poisson distribution with parameter λ. However, λ is a random variable
with pdf
f(λ) = e−λ I(λ > 0).
a) Find E(Y).
b) Find Var(Y).
Solution. Note that the pdf for λ is the EXP(1) pdf, so λ ∼ EXP(1).
a) E(Y ) = E[E(Y |λ)] = E(λ) = 1.
b) V(Y) = E[V(Y|λ)] + V[E(Y|λ)] = E(λ) + V(λ) = 1 + 1² = 2.
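A quick numerical sketch of a) and b) (not part of the original solution): simulating λ ∼ EXP(1) and then Y | λ ∼ Poisson(λ) should give a sample mean near 1 and a sample variance near 2. A minimal check, assuming numpy is available:

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
lam = rng.exponential(scale=1.0, size=n)   # lambda ~ EXP(1)
y = rng.poisson(lam)                       # Y | lambda ~ Poisson(lambda)
print(y.mean(), y.var())                   # approximately 1 and 2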
4.27. (Jan. 2003 Qual) Let X1 and X2 be iid Poisson (λ) random variables. Show
that T = X1 + 2X2 is not a sufficient statistic for λ. (Hint: the Factorization Theorem
uses the word iff. Alternatively, find a minimal sufficient statistic S and show that S is
not a function of T .)
See 4.38 solution.
4.28. (Aug. 2002 Qual): Suppose that X1 , ..., Xn are iid N(σ, σ) where σ > 0.
a) Find a minimal sufficient statistic for σ.
b) Show that (X, S²) is a sufficient statistic but is not a complete sufficient statistic
for σ.
4.31. (Aug. 2004 Qual): Let X1 , ..., Xn be iid beta(θ, θ). (Hence δ = ν = θ.)
a) Find a minimal sufficient statistic for θ.
b) Is the statistic found in a) complete? (prove or disprove)
Solution.
f(x) = [Γ(2θ)/(Γ(θ)Γ(θ))] x^(θ−1) (1 − x)^(θ−1) = [Γ(2θ)/(Γ(θ)Γ(θ))] exp[(θ − 1)(log(x) + log(1 − x))],
for 0 < x < 1, a 1-parameter exponential family. Hence Σ_{i=1}^n (log(Xi) + log(1 − Xi)) is
a complete minimal sufficient statistic.
4.32. (Sept. 2005 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with probability mass function
f(x) = P(X = x) = 1/(x^ν ζ(ν))
where ν > 1 and x = 1, 2, 3, .... Here the zeta function
ζ(ν) = Σ_{x=1}^∞ 1/x^ν
for ν > 1.

a) Find a minimal sufficient statistic for ν.
b) Is the statistic found in a) complete? (prove or disprove)
c) Give an example of a sufficient statistic that is strictly not minimal.
Solution. a) and b)
f(x) = [1/ζ(ν)] I_{1,2,...}(x) exp[−ν log(x)]
is a 1-parameter regular exponential family with Ω = (−∞, −1). Hence Σ_{i=1}^n log(Xi) is
a complete minimal sufficient statistic.
c) By the Factorization Theorem, W = (X1, ..., Xn) is sufficient, but W is not minimal
since W is not a function of Σ_{i=1}^n log(Xi).
4.36. (Aug. 2009 Qual): Let X1 , ..., Xn be iid uniform(θ, θ + 1) random variables
where θ is real.
a) Find a minimal sufficient statistic for θ.
b) Show whether the minimal sufficient statistic is complete or not.
Solution. Now
fX(x) = I(θ < x < θ + 1)
and
f(x)/f(y) = I(θ < x(1) ≤ x(n) < θ + 1) / I(θ < y(1) ≤ y(n) < θ + 1),
which is constant for all real θ iff (x(1), x(n)) = (y(1), y(n)). Hence T = (X(1), X(n)) is a
minimal sufficient statistic by the LSM theorem. To show that T is not complete, first
find E(T). Now
FX(t) = ∫_θ^t dx = t − θ
for θ < t < θ + 1. Hence
fX(n)(t) = n[FX(t)]^(n−1) fX(t) = n(t − θ)^(n−1)
for θ < t < θ + 1 and
Eθ(X(n)) = ∫ t fX(n)(t) dt = ∫_θ^(θ+1) t n(t − θ)^(n−1) dt.
Use u-substitution with u = t − θ, t = u + θ and dt = du. Hence t = θ implies u = 0,
and t = θ + 1 implies u = 1. Thus
Eθ(X(n)) = ∫_0^1 n(u + θ)u^(n−1) du = ∫_0^1 n u^n du + ∫_0^1 nθ u^(n−1) du = n/(n+1) + θ = θ + n/(n+1).
Now
fX(1)(t) = n[1 − FX(t)]^(n−1) fX(t) = n(1 − t + θ)^(n−1)
for θ < t < θ + 1 and thus
Eθ(X(1)) = ∫_θ^(θ+1) t n(1 − t + θ)^(n−1) dt.
Use u-substitution with u = 1 − t + θ, t = 1 − u + θ and du = −dt. Hence t = θ
implies u = 1, and t = θ + 1 implies u = 0. Thus
Eθ(X(1)) = ∫_0^1 n(1 − u + θ)u^(n−1) du = n(1 + θ) ∫_0^1 u^(n−1) du − n ∫_0^1 u^n du
= (θ + 1) − n/(n+1) = θ + 1/(n+1).
To show that T is not complete, try showing Eθ(aX(1) + bX(n) + c) = 0 for some
constants a, b and c. Note that a = −1, b = 1 and c = −(n−1)/(n+1) works. Hence
Eθ(−X(1) + X(n) − (n−1)/(n+1)) = 0
for all real θ, but
Pθ(g(T) = 0) = Pθ(−X(1) + X(n) − (n−1)/(n+1) = 0) = 0 < 1
for all real θ. Hence T is not complete.
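A simulation sketch (illustrative only, not part of the exam solution, assuming numpy) of why T is not complete: Eθ(X(n) − X(1)) = (n−1)/(n+1) no matter what θ is, so g(T) = X(n) − X(1) − (n−1)/(n+1) has mean zero for every θ without being identically zero.

import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 200000
for theta in (-3.0, 0.0, 7.5):
    x = rng.uniform(theta, theta + 1.0, size=(reps, n))
    g = x.max(axis=1) - x.min(axis=1) - (n - 1) / (n + 1)
    print(theta, g.mean())   # near 0 for every theta, yet g is not identically 0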
4.37. (Sept. 2010 Qual): Let Y1 , ..., Yn be iid from a distribution with pdf
f(y) = 2τ y e^(−y²) (1 − e^(−y²))^(τ−1)
for y > 0 and f(y) = 0 for y ≤ 0 where τ > 0.


a) Find a minimal sufficient statistic for τ .
b) Is the statistic found in a) complete? Prove or disprove.
Solution. Note that
f(y) = I(y > 0) 2y e^(−y²) τ exp[(1 − τ)(− log(1 − e^(−y²)))]
is a 1-parameter exponential family with minimal and complete sufficient statistic
−Σ_{i=1}^n log(1 − e^(−Yi²)).
4.38 and 4.27. (Aug. 2016 qual) a) Let X1 , ..., Xn be independent identically dis-
tributed gamma(α, β), and, independently, Y1 , ..., Ym independent identically distributed
gamma(α, kβ) where k is known, and α, β > 0 are parameters. Find a two dimensional
sufficient statistic for (α, β).
b) Let X1 , X2 be independent identically distributed Poisson(θ). Show
T = X1 + 2X2 is not sufficient for θ.

Solution: a) f(x, y) =
[1/(Γ(α)β^α)]^n (∏_{i=1}^n xi)^(α−1) exp[−Σ_{i=1}^n xi/β] [1/(Γ(α)(kβ)^α)]^m (∏_{j=1}^m yj)^(α−1) exp[−Σ_{j=1}^m yj/(kβ)]
= [1/(Γ(α)β^α)]^n [1/(Γ(α)(kβ)^α)]^m [(∏_{i=1}^n xi)(∏_{j=1}^m yj)]^(α−1) exp[−(Σ_{i=1}^n xi + Σ_{j=1}^m yj/k)/β].
By Factorization,
((∏_{i=1}^n Xi)(∏_{j=1}^m Yj), Σ_{i=1}^n Xi + Σ_{j=1}^m Yj/k)  or  (Σ_{i=1}^n log(Xi) + Σ_{j=1}^m log(Yj), Σ_{i=1}^n Xi + Σ_{j=1}^m Yj/k)
is sufficient.
b) The minimal sufficient statistic X1 + X2 is not a function of T , thus T is not
sufficient. Alternatively, the Factorization Theorem says T is sufficient iff f(x|θ) =
g(T (x)|θ)h(x) where h(x) does not depend on θ and g depends on x only through T (x).
No such factorization exists.
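A numerical sketch of b) (illustrative only, assuming scipy.stats is available): the conditional distribution of (X1, X2) given T = X1 + 2X2 still depends on θ, which is another way to see that T is not sufficient. Here the event {T = 2} can occur as (2, 0) or (0, 1).

from scipy.stats import poisson

def cond_prob_20_given_T2(theta):
    # P(X1 = 2, X2 = 0 | X1 + 2*X2 = 2) for iid Poisson(theta) X1, X2
    p20 = poisson.pmf(2, theta) * poisson.pmf(0, theta)
    p01 = poisson.pmf(0, theta) * poisson.pmf(1, theta)
    return p20 / (p20 + p01)

print(cond_prob_20_given_T2(1.0), cond_prob_20_given_T2(5.0))  # differ, so T is not sufficient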
5.2. (1989 Univ. of Minn. and Aug. 2000 SIU Qual): Let (X, Y ) have the bivariate
density
f(x, y) = (1/(2π)) exp(−(1/2)[(x − ρ cos θ)² + (y − ρ sin θ)²]).
Suppose that there are n independent pairs of observations (Xi , Yi ) from the above density
and that ρ is known. Assume that 0 ≤ θ ≤ 2π. Find a candidate for the maximum
likelihood estimator θ̂ by differentiating the log likelihood log(L(θ)). (Do not show that
the candidate is the MLE, it is difficult to tell whether the candidate, 0 or 2π is the MLE
without the actual data.)
Solution. The likelihood function L(θ) =
(1/(2π)^n) exp(−(1/2)[Σ(xi − ρ cos θ)² + Σ(yi − ρ sin θ)²]) =
(1/(2π)^n) exp(−(1/2)[Σxi² − 2ρ cos θ Σxi + nρ² cos²θ + Σyi² − 2ρ sin θ Σyi + nρ² sin²θ])
= (1/(2π)^n) exp(−(1/2)[Σxi² + Σyi² + nρ²]) exp(ρ cos θ Σxi + ρ sin θ Σyi).
Hence the log likelihood log L(θ)
= c + ρ cos θ Σxi + ρ sin θ Σyi.
The derivative with respect to θ is
−ρ sin θ Σxi + ρ cos θ Σyi.
Setting this derivative to zero gives
ρ cos θ Σyi = ρ sin θ Σxi
or
Σyi/Σxi = tan θ.
Thus
θ̂ = tan⁻¹(Σyi/Σxi).
Now the boundary points are θ = 0 and θ = 2π. Hence θ̂_MLE equals 0, 2π, or θ̂ depending
on which value maximizes the likelihood.
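A numerical sketch of the last step (illustrative, not from the original text, assuming numpy): compute the candidate θ̂ = tan⁻¹(Σyi/Σxi) and compare the log likelihood at the candidate and at the boundary points 0 and 2π; the boundaries give the same value since cos and sin have period 2π.

import numpy as np

rng = np.random.default_rng(2)
rho, theta_true = 2.0, 1.2
x = rng.normal(rho * np.cos(theta_true), 1.0, size=50)
y = rng.normal(rho * np.sin(theta_true), 1.0, size=50)

def loglik(theta):                       # additive constants dropped
    return rho * np.cos(theta) * x.sum() + rho * np.sin(theta) * y.sum()

cand = np.arctan(y.sum() / x.sum())
for t in (0.0, cand, 2 * np.pi):
    print(t, loglik(t))                  # the candidate gives the largest value here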
5.23. (Jan. 2001 Qual): Let X1 , ..., Xn be a random sample from a normal distribu-
tion with known mean µ and unknown variance τ.
a) Find the maximum likelihood estimator of the variance τ.

b) Find the maximum likelihood estimator of the standard deviation τ . Explain
how the MLE was obtained.
Solution. a) The log likelihood is log L(τ) = −(n/2) log(2πτ) − (1/(2τ)) Σ_{i=1}^n (Xi − µ)². The
derivative of the log likelihood is equal to −n/(2τ) + (1/(2τ²)) Σ_{i=1}^n (Xi − µ)². Setting the derivative
equal to 0 and solving for τ gives the MLE τ̂ = Σ_{i=1}^n (Xi − µ)²/n. Now the likelihood is only
defined for τ > 0. As τ goes to 0 or ∞, log L(τ) tends to −∞. Since there is only one
critical point, τ̂ is the MLE.
b) By the invariance principle, the MLE is √(Σ_{i=1}^n (Xi − µ)²/n).

5.28. (Aug. 2002 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a half normal HN(µ, σ 2) distribution with pdf

f(x) = (2/(√(2π) σ)) exp(−(x − µ)²/(2σ²))
where σ > 0 and x > µ and µ is real. Assume that µ is known.
a) Find the maximum likelihood estimator of σ 2.
b) What is the maximum likelihood estimator of σ? Explain.
Solution. This problem is nearly the same as finding the MLE of σ² when the data
are iid N(µ, σ²) and µ is known. See Problem 5.23 and Section 10.23. The MLE in a)
is Σ_{i=1}^n (Xi − µ)²/n. For b) use the invariance principle and take the square root of the
answer in a).
5.29. (Jan. 2003 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a lognormal (µ, σ 2) distribution with pdf

f(x) = (1/(x√(2πσ²))) exp(−(log(x) − µ)²/(2σ²))

where σ > 0 and x > 0 and µ is real. Assume that σ is known.
a) Find the maximum likelihood estimator of µ.
b) What is the maximum likelihood estimator of µ3 ? Explain.
Solution. a)
µ̂ = Σ log(Xi)/n.
To see this note that
L(µ) = (∏ 1/(xi√(2πσ²))) exp(−Σ(log(xi) − µ)²/(2σ²)).
So
log(L(µ)) = log(c) − Σ(log(xi) − µ)²/(2σ²)
and the derivative of the log likelihood wrt µ is
Σ 2(log(xi) − µ)/(2σ²).
Setting this quantity equal to 0 gives nµ = Σ log(xi), and the solution µ̂ is unique. The
second derivative is −n/σ² < 0, so µ̂ is indeed the global maximum.
b)
(Σ log(Xi)/n)³
by invariance.
5.30. (Aug. 2004 Qual): Let X be a single observation from a normal distribution
with mean θ and with variance θ2 , where θ > 0. Find the maximum likelihood estimator
of θ2.
Solution.
L(θ) = (1/(θ√(2π))) e^(−(x−θ)²/(2θ²)),
ln(L(θ)) = −ln(θ) − ln(√(2π)) − (x − θ)²/(2θ²),
d ln(L(θ))/dθ = −1/θ + (x − θ)/θ² + (x − θ)²/θ³ = x²/θ³ − x/θ² − 1/θ, set = 0.
Solving for θ gives
θ = (x/2)(−1 + √5)  and  θ = (x/2)(−1 − √5).
But θ > 0. Thus θ̂ = (x/2)(−1 + √5) when x > 0, and θ̂ = (x/2)(−1 − √5) when x < 0.
To check with the second derivative,
d²ln(L(θ))/dθ² = −(2θ + x)/θ³ + 3(θ² + θx − x²)/θ⁴ = (θ² + 2θx − 3x²)/θ⁴,
but θ⁴ is always positive, so the sign of the second derivative depends on the sign of
the numerator. Substituting θ̂ in the numerator and simplifying gives (x²/2)(−5 ± √5),
which is always negative. Hence by the invariance principle, the MLE of θ² is θ̂².
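A sketch (not part of the original solution, assuming numpy) checking the closed form against a grid search of the log likelihood for a single observation x:

import numpy as np

x = 2.7                                        # a single observation, x > 0
theta_hat = x * (-1 + np.sqrt(5)) / 2          # closed-form candidate
grid = np.linspace(0.01, 10, 100000)
loglik = -np.log(grid) - (x - grid)**2 / (2 * grid**2)
print(theta_hat, grid[np.argmax(loglik)])      # both approximately 1.67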
5.31. (Sept. 2005 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with probability density function
 
f(x) = (σ^(1/λ)/λ) exp[−(1 + 1/λ) log(x)] I[x ≥ σ]
where x ≥ σ, σ > 0, and λ > 0. The indicator function I[x ≥ σ] = 1 if x ≥ σ and
0, otherwise. Find the maximum likelihood estimator (MLE) (σ̂, λ̂) of (σ, λ) with the
following steps.
a) Explain why σ̂ = X(1) = min(X1 , ..., Xn ) is the MLE of σ regardless of the value
of λ > 0.
b) Find the MLE λ̂ of λ if σ = σ̂ (that is, act as if σ = σ̂ is known).
Solution. a) For any λ > 0, the likelihood function
L(σ, λ) = σ^(n/λ) I[x(1) ≥ σ] (1/λⁿ) exp[−(1 + 1/λ) Σ_{i=1}^n log(xi)]
is maximized by making σ as large as possible. Hence σ̂ = X(1).
b)
L(σ̂, λ) = σ̂^(n/λ) I[x(1) ≥ σ̂] (1/λⁿ) exp[−(1 + 1/λ) Σ_{i=1}^n log(xi)].
Hence log L(σ̂, λ) =
(n/λ) log(σ̂) − n log(λ) − (1 + 1/λ) Σ_{i=1}^n log(xi).
Thus
(d/dλ) log L(σ̂, λ) = −(n/λ²) log(σ̂) − n/λ + (1/λ²) Σ_{i=1}^n log(xi), set = 0,
or −n log(σ̂) + Σ_{i=1}^n log(xi) = nλ. So
λ̂ = −log(σ̂) + Σ_{i=1}^n log(xi)/n = Σ_{i=1}^n log(xi/σ̂)/n.
Now
(d²/dλ²) log L(σ̂, λ) |_(λ=λ̂) = (2n/λ̂³) log(σ̂) + n/λ̂² − (2/λ̂³) Σ_{i=1}^n log(xi)
= n/λ̂² − (2/λ̂³) Σ_{i=1}^n log(xi/σ̂) = −n/λ̂² < 0.
Hence (σ̂, λ̂) is the MLE of (σ, λ).
5.32. (Aug. 2003 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with pdf
f(x) = (1/λ) exp[−(1 + 1/λ) log(x)]
where λ > 0 and x ≥ 1.
a) Find the maximum likelihood estimator of λ.
b) What is the maximum likelihood estimator of λ8 ? Explain.
Solution. a) The likelihood
L(λ) = (1/λⁿ) exp[−(1 + 1/λ) Σ log(xi)],
and the log likelihood
log(L(λ)) = −n log(λ) − (1 + 1/λ) Σ log(xi).
Hence
(d/dλ) log(L(λ)) = −n/λ + (1/λ²) Σ log(xi), set = 0,
or Σ log(xi) = nλ, or
λ̂ = Σ log(Xi)/n.
Notice that
(d²/dλ²) log(L(λ)) |_(λ=λ̂) = n/λ̂² − 2Σ log(xi)/λ̂³ = n/λ̂² − 2nλ̂/λ̂³ = −n/λ̂² < 0.
Hence λ̂ is the MLE of λ.
b) By invariance, λ̂⁸ is the MLE of λ⁸.
5.33. (Jan. 2004 Qual): Let X1 , ..., Xn be independent identically distributed random
variables with probability mass function
f(x) = (1/x!) e^(−2θ) exp[log(2θ)x],

for x = 0, 1, . . . , where θ > 0. Assume that at least one Xi > 0.
a) Find the maximum likelihood estimator of θ.
b) What is the maximum likelihood estimator of (θ)4 ? Explain.
Solution. a) The likelihood
L(θ) = c e^(−2nθ) exp[log(2θ) Σ xi],
and the log likelihood
log(L(θ)) = d − 2nθ + log(2θ) Σ xi.
Hence
(d/dθ) log(L(θ)) = −2n + (2/(2θ)) Σ xi, set = 0,
or Σ xi = 2nθ, or
θ̂ = X/2.
Notice that
(d²/dθ²) log(L(θ)) = −Σ xi/θ² < 0
unless Σ xi = 0.
b) (θ̂)⁴ = (X/2)⁴ by invariance.

5.34. (Jan. 2006 Qual): Let X1 , ..., Xn be iid with one of two probability density
functions. If θ = 0, then


f(x|θ) = 1 for 0 ≤ x ≤ 1, and 0 otherwise.
If θ = 1, then
f(x|θ) = 1/(2√x) for 0 ≤ x ≤ 1, and 0 otherwise.

Find the maximum likelihood estimator of θ.


Solution. L(0|x) = 1 for 0 < xi < 1, and L(1|x) = ∏_{i=1}^n 1/(2√xi) for 0 < xi < 1. Thus
the MLE is 0 if 1 ≥ ∏_{i=1}^n 1/(2√xi) and the MLE is 1 if 1 < ∏_{i=1}^n 1/(2√xi).
Warning: Variants of the following question often appear on qualifying exams.
5.35. (Aug. 2006 Qual): Let Y1 , ..., Yn denote a random sample from a N(aθ, θ)
population.
a) Find the MLE of θ when a = 1.
b) Find the MLE of θ when a is known but arbitrary.

Solution. a) Notice that θ > 0 and
f(y) = (1/√(2π)) (1/√θ) exp(−(y − θ)²/(2θ)).
Hence the likelihood
L(θ) = c (1/θ^(n/2)) exp(−(1/(2θ)) Σ(yi − θ)²)
and the log likelihood
log(L(θ)) = d − (n/2) log(θ) − (1/(2θ)) Σ(yi − θ)²
= d − (n/2) log(θ) − (1/2) Σ_{i=1}^n (yi²/θ − 2yi + θ)
= d − (n/2) log(θ) − (1/2) Σ_{i=1}^n yi²/θ + Σ_{i=1}^n yi − (n/2)θ.
Thus
(d/dθ) log(L(θ)) = −n/(2θ) + (1/2) Σ_{i=1}^n yi² (1/θ²) − n/2, set = 0,
or
−(n/2)θ² − (n/2)θ + (1/2) Σ_{i=1}^n yi² = 0,
or
nθ² + nθ − Σ_{i=1}^n yi² = 0.     (1)
Now the quadratic formula states that for a ≠ 0, the quadratic equation ay² + by + c = 0
has roots
(−b ± √(b² − 4ac))/(2a).
Applying the quadratic formula to (1) gives
θ = (−n ± √(n² + 4n Σ_{i=1}^n yi²))/(2n).
Since θ > 0, a candidate for the MLE is
θ̂ = (−n + √(n² + 4n Σ_{i=1}^n Yi²))/(2n) = (−1 + √(1 + 4(1/n) Σ_{i=1}^n Yi²))/2.
Since θ̂ satisfies (1),
nθ̂ − Σ_{i=1}^n yi² = −nθ̂².     (2)
Note that
(d²/dθ²) log(L(θ)) |_(θ=θ̂) = n/(2θ̂²) − Σ_{i=1}^n yi²/θ̂³ = (1/(2θ̂³))[nθ̂ − 2Σ_{i=1}^n yi²]
= (1/(2θ̂³))[nθ̂ − Σ yi² − Σ yi²] = (1/(2θ̂³))[−nθ̂² − Σ yi²] < 0
by (2). Since L(θ) is continuous with a unique critical point on θ > 0, θ̂ is the MLE.
5.37. (Aug. 2006 Qual): Let X1 , ..., Xn be independent identically distributed (iid)
random variables with probability density function
 
f(x) = (2/(λ√(2π))) e^x exp(−(e^x − 1)²/(2λ²))
where x > 0 and λ > 0.
a) Find the maximum likelihood estimator (MLE) λ̂ of λ.
b) What is the MLE of λ2 ? Explain.
Solution. a) L(λ) = c (1/λⁿ) exp[−(1/(2λ²)) Σ_{i=1}^n (e^(xi) − 1)²].
Thus
log(L(λ)) = d − n log(λ) − (1/(2λ²)) Σ_{i=1}^n (e^(xi) − 1)².
Hence
d log(L(λ))/dλ = −n/λ + (1/λ³) Σ (e^(xi) − 1)², set = 0,
or nλ² = Σ (e^(xi) − 1)², or
λ̂ = √(Σ (e^(Xi) − 1)²/n).
Now
d² log(L(λ))/dλ² |_(λ=λ̂) = n/λ̂² − (3/λ̂⁴) Σ (e^(xi) − 1)² = n/λ̂² − 3nλ̂²/λ̂⁴ = (n/λ̂²)[1 − 3] < 0.
So λ̂ is the MLE.
5.38. (Jan. 2007 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a distribution with pdf
 
f(x) = (2/(λ√(2π))) (1/x) exp(−(log(x))²/(2λ²))
where λ > 0 and 0 ≤ x ≤ 1.


a) Find the maximum likelihood estimator (MLE) of λ.

b) Find the MLE of λ2 .
Solution. a) The likelihood
L(λ) = ∏ f(xi) = c (∏ 1/xi) (1/λⁿ) exp(−Σ(log xi)²/(2λ²)),
and the log likelihood
log(L(λ)) = d − Σ log(xi) − n log(λ) − Σ(log xi)²/(2λ²).
Hence
(d/dλ) log(L(λ)) = −n/λ + Σ(log xi)²/λ³, set = 0,
or Σ(log xi)² = nλ², or
λ̂ = √(Σ(log xi)²/n).
This solution is unique.
Notice that
(d²/dλ²) log(L(λ)) |_(λ=λ̂) = n/λ̂² − 3Σ(log xi)²/λ̂⁴ = n/λ̂² − 3nλ̂²/λ̂⁴ = −2n/λ̂² < 0.
Hence
λ̂ = √(Σ(log Xi)²/n)
is the MLE of λ.
b)
λ̂² = Σ(log Xi)²/n
is the MLE of λ² by invariance.
5.41. (Jan. 2009 Qual): Suppose that X has probability density function
fX(x) = θ/x^(1+θ), x ≥ 1
where θ > 0.
a) If U = X 2 , derive the probability density function fU (u) of U.
b) Find the method of moments estimator of θ.
c) Find the method of moments estimator of θ2 .
5.42. (Jan. 2009 Qual): Suppose that the joint probability distribution function of
X1 , ..., Xk is
f(x1, x2, ..., xk|θ) = [n!/((n − k)! θ^k)] exp(−[(Σ_{i=1}^k xi) + (n − k)xk]/θ)

where 0 ≤ x1 ≤ x2 ≤ · · · ≤ xk and θ > 0.
a) Find the maximum likelihood estimator (MLE) for θ.
b) What is the MLE for θ2 ? Explain briefly.
Solution. a) Let t = (Σ_{i=1}^k xi) + (n − k)xk. L(θ) = f(x|θ) and log(L(θ)) = log(f(x|θ)) =
d − k log(θ) − t/θ.
Hence
(d/dθ) log(L(θ)) = −k/θ + t/θ², set = 0.
Hence
kθ = t
or
θ̂ = t/k.
This is a unique solution and
(d²/dθ²) log(L(θ)) |_(θ=θ̂) = k/θ̂² − 2t/θ̂³ = k/θ̂² − 2kθ̂/θ̂³ = −k/θ̂² < 0.
Hence θ̂ = T/k is the MLE where T = (Σ_{i=1}^k Xi) + (n − k)Xk.
b) θ̂² by the invariance principle.
5.43. (Jan. 2010 Qual): Let X1 , ..., Xn be iid with pdf
f(x) = [cos(θ)/(2 cosh(πx/2))] exp(θx)
where x is real and |θ| < π/2.
a) Find the maximum likelihood estimator (MLE) for θ.
b) What is the MLE for tan(θ)? Explain briefly.
Solution. a) L(θ) = [cos(θ)]ⁿ exp(θ Σ xi) / ∏ 2 cosh(πxi/2). So log(L(θ)) = c + n log(cos(θ)) + θ Σ xi, and
d log(L(θ))/dθ = n (1/cos(θ)) [− sin(θ)] + Σ xi, set = 0,
or tan(θ) = x, or θ̂ = tan⁻¹(X).
Since
d² log(L(θ))/dθ² = −n sec²(θ) < 0
for |θ| < π/2, θ̂ is the MLE.
b) The MLE is tan(θ̂) = tan(tan⁻¹(X)) = X by the invariance principle.
(By properties of the arctan function, θ̂ = tan⁻¹(X) iff
tan(θ̂) = X and −π/2 < θ̂ < π/2.)

5.44. (Aug. 2009 Qual): Let X1 , ..., Xn be a random sample from a population with
pdf
f(x) = (1/σ) exp(−(x − µ)/σ), x ≥ µ,
where −∞ < µ < ∞, σ > 0.
a) Find the maximum likelihood estimator of µ and σ.
b) Evaluate τ (µ, σ) = Pµ,σ [X1 ≥ t] where t > µ. Find the maximum likelihood
estimator of τ (µ, σ).
Solution. a) This is a two parameter exponential distribution. So see Section 10.14
where σ = λ and µ = θ.
b)
1 − F(t) = τ(µ, σ) = exp(−(t − µ)/σ).
By the invariance principle, the MLE of τ(µ, σ) is τ(µ̂, σ̂)
= exp(−(t − X(1))/(X − X(1))).

5.45. (Sept. 2010 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a distribution with probability density function (pdf)
f(y) = (1/(2√(2π))) [√(1/(θy)) + √(θ/y³)] (1/ν) exp[(−1/(2ν²))(y/θ + θ/y − 2)]

where y > 0, θ > 0 is known and ν > 0.


a) Find the maximum likelihood estimator (MLE) of ν.
b) Find the MLE of ν 2 .
Solution. a) Let
w = t(y) = y/θ + θ/y − 2.
Then the likelihood
L(ν) = d (1/νⁿ) exp(−(1/(2ν²)) Σ_{i=1}^n wi),
and the log likelihood
log(L(ν)) = c − n log(ν) − (1/(2ν²)) Σ_{i=1}^n wi.
Hence
(d/dν) log(L(ν)) = −n/ν + (1/ν³) Σ_{i=1}^n wi, set = 0,
or
ν̂ = √(Σ_{i=1}^n wi/n).
This solution is unique and
(d²/dν²) log(L(ν)) |_(ν=ν̂) = n/ν̂² − 3Σ_{i=1}^n wi/ν̂⁴ = n/ν̂² − 3nν̂²/ν̂⁴ = −2n/ν̂² < 0.
Thus
ν̂ = √(Σ_{i=1}^n Wi/n)
is the MLE of ν if ν̂ > 0.
b) ν̂² = Σ_{i=1}^n Wi/n by invariance.
5.46. (Sept. 2011 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a distribution with probability density function (pdf)
f(y) = φ y^(−(φ+1)) [1/(1 + y^(−φ))] (1/λ) exp[(−1/λ) log(1 + y^(−φ))]
where y > 0, φ > 0 is known and λ > 0.
a) Find the maximum likelihood estimator (MLE) of λ.
b) Find the MLE of λ2 .
Solution. a) The likelihood
L(λ) = c (1/λⁿ) exp[−(1/λ) Σ_{i=1}^n log(1 + yi^(−φ))],
and the log likelihood log(L(λ)) = d − n log(λ) − (1/λ) Σ_{i=1}^n log(1 + yi^(−φ)). Hence
(d/dλ) log(L(λ)) = −n/λ + Σ_{i=1}^n log(1 + yi^(−φ))/λ², set = 0,
or Σ_{i=1}^n log(1 + yi^(−φ)) = nλ, or
λ̂ = Σ_{i=1}^n log(1 + yi^(−φ))/n.
This solution is unique and
(d²/dλ²) log(L(λ)) |_(λ=λ̂) = n/λ̂² − 2Σ_{i=1}^n log(1 + yi^(−φ))/λ̂³ = n/λ̂² − 2nλ̂/λ̂³ = −n/λ̂² < 0.
Thus
λ̂ = Σ_{i=1}^n log(1 + Yi^(−φ))/n
is the MLE of λ if φ is known.
b) The MLE is λ̂² by invariance.

5.47. (Aug. 2012 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from an inverse half normal distribution with probability density func-
tion (pdf)  
f(y) = (2/(σ√(2π))) (1/y²) exp(−1/(2σ²y²))
where y > 0 and σ > 0.
a) Find the maximum likelihood estimator (MLE) of σ 2.
b) Find the MLE of σ.
Solution. a) The likelihood
L(σ²) = c (1/σ²)^(n/2) exp[−(1/(2σ²)) Σ_{i=1}^n 1/yi²],
and the log likelihood
log(L(σ²)) = d − (n/2) log(σ²) − (1/(2σ²)) Σ_{i=1}^n 1/yi².
Hence
(d/d(σ²)) log(L(σ²)) = −n/(2σ²) + (1/(2(σ²)²)) Σ_{i=1}^n 1/yi², set = 0,
or Σ_{i=1}^n 1/yi² = nσ², or
σ̂² = (1/n) Σ_{i=1}^n 1/yi².
This solution is unique and
(d²/d(σ²)²) log(L(σ²)) |_(σ²=σ̂²) = n/(2(σ̂²)²) − Σ_{i=1}^n (1/yi²)/(σ̂²)³ = n/(2(σ̂²)²) − nσ̂²/(σ̂²)³ = −n/(2σ̂⁴) < 0.
Thus
σ̂² = (1/n) Σ_{i=1}^n 1/Yi²
is the MLE of σ².
b) By invariance, σ̂ = √(σ̂²).
5.48. (Jan. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a distribution with probability density function (pdf)
 
f(y) = (θ/y²) exp(−θ/y)

where y > 0 and θ > 0.
a) Find the maximum likelihood estimator (MLE) of θ.
b) Find the MLE of 1/θ.
" n
#
X 1
Solution. a) The likelihood L(θ) = c θn exp −θ , and the log likelihood
i=1
y i
Xn
1
log(L(θ)) = d + n log(θ) − θ . Hence
i=1
y i
n
d n X 1 set n
log(L(θ)) = − = 0, or θ̂ = Pn 1 .
dθ θ y
i=1 i i=1 yi

d2 −n
Since this solution is unique and 2 log(L(θ)) = 2 < 0,
dθ θ
n
θ̂ = Pn 1 is the MLE of θ.
i=1 Yi Pn 1
i=1 Yi
b) By invariance, the MLE is 1/θ̂ = .
n
5.49. (Aug. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a Lindley distribution with probability density function (pdf)

f(y) = [θ²/(1 + θ)] (1 + y) e^(−θy)
where y > 0 and θ > 0.
a) Find the maximum likelihood estimator (MLE) of θ. You may assume that
(d²/dθ²) log(L(θ)) |_(θ=θ̂) < 0.

b) Find the MLE of 1/θ.


Solution: a) The likelihood
L(θ) = c [θ²/(1 + θ)]ⁿ exp(−θ Σ_{i=1}^n yi),
and the log likelihood
log(L(θ)) = d + n log(θ²/(1 + θ)) − θ Σ_{i=1}^n yi.
Always use properties of logarithms to simplify the log likelihood before taking deriva-
tives. Note that
log(L(θ)) = d + 2n log(θ) − n log(1 + θ) − θ Σ_{i=1}^n yi.
Hence
(d/dθ) log(L(θ)) = 2n/θ − n/(1 + θ) − Σ_{i=1}^n yi, set = 0,
or
[2(1 + θ) − θ]/(θ(1 + θ)) − y = 0  or  (2 + θ)/(θ(1 + θ)) − y = 0
or 2 + θ = y(θ + θ²) or yθ² + θ(y − 1) − 2 = 0. So
θ̂ = [−(Y − 1) + √((Y − 1)² + 8Y)]/(2Y).
b) By invariance, the MLE is 1/θ̂.
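A sketch (illustrative only, assuming numpy and scipy are available) comparing the closed-form θ̂ to a direct numerical maximization of the Lindley log likelihood:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
y = rng.gamma(shape=2.0, scale=1.0, size=100)        # any positive data works for the check
ybar = y.mean()
theta_hat = (-(ybar - 1) + np.sqrt((ybar - 1)**2 + 8 * ybar)) / (2 * ybar)

def negloglik(theta):
    return -(2 * len(y) * np.log(theta) - len(y) * np.log(1 + theta) - theta * y.sum())

res = minimize_scalar(negloglik, bounds=(1e-6, 50), method="bounded")
print(theta_hat, res.x)                              # the two values agree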


5.53. (Jan. 2015 QUAL): b) σ̂ = √(σ̂²) = √(Q/n) by invariance.
5.54. (Aug. 2016 QUAL): Suppose X1 , ..., Xn are random variables with likelihood
function
L(θ) = [∏_{i=1}^{n−k} (1/θ) e^(−xi/θ)] [∏_{i=n−k+1}^n e^(−xi/θ)] / [∏_{i=1}^{n−k} e^(−di/θ)]

where θ > 0, xi > 0, and xi > di > 0 for i = 1, ..., n − k. The di and k are known
constants. Find the maximum likelihood estimator (MLE) of θ.
Solution:
L(θ) = (1/θ^(n−k)) ∏_{i=1}^{n−k} e^(−(xi−di)/θ) ∏_{i=n−k+1}^n e^(−xi/θ).
Hence
log(L(θ)) = −(n − k) log(θ) − (1/θ) Σ_{i=1}^{n−k} (xi − di) − (1/θ) Σ_{i=n−k+1}^n xi
= −(n − k) log(θ) − (1/θ) Σ_{i=1}^{n−k} xi + (1/θ) Σ_{i=1}^{n−k} di − (1/θ) Σ_{i=n−k+1}^n xi
= −(n − k) log(θ) − f/θ
where f = Σ_{i=1}^{n−k} xi − Σ_{i=1}^{n−k} di + Σ_{i=n−k+1}^n xi = Σ_{i=1}^n xi − Σ_{i=1}^{n−k} di =
Σ_{i=1}^{n−k} (xi − di) + Σ_{i=n−k+1}^n xi > 0. So
d log(L(θ))/dθ = −(n − k)/θ + f/θ², set = 0,
or (n − k)θ = f, or
θ̂ = f/(n − k).
This solution is unique and
d² log(L(θ))/dθ² |_(θ=θ̂) = (n − k)/θ̂² − 2f/θ̂³ = (n − k)/θ̂² − 2(n − k)θ̂/θ̂³ = −(n − k)/θ̂² < 0.
Hence θ̂ is the MLE.


6.2. (Aug. 2002 QUAL): Let X1 , ..., Xn be independent identically distributed ran-
dom variable from a N(µ, σ 2 ) distribution. Hence E(X1 ) = µ and V AR(X1 ) = σ 2.
Consider estimators of σ² of the form
S²(k) = (1/k) Σ_{i=1}^n (Xi − X)²

where k > 0 is a constant to be chosen. Determine the value of k which gives the smallest
mean square error. (Hint: Find the MSE as a function of k, then take derivatives with
respect to k. Also, use Theorem 4.1c and Remark 5.1 VII.)
6.7. (Jan. 2001 Qual): Let X1 , ..., Xn be independent, identically distributed N(µ, 1)
random variables where µ is unknown and n ≥ 2. Let t be a fixed real number. Then the
expectation
Eµ (I(−∞,t] (X1 )) = Pµ (X1 ≤ t) = Φ(t − µ)
for all µ where Φ(x) is the cumulative distribution function of a N(0, 1) random variable.

a) Show that the sample mean X is a sufficient statistic for µ.


b) Explain why (or show that) X is a complete sufficient statistic for µ.
c) Using the fact that the conditional distribution of X1 given X = x is the N(x, 1 −
1/n) distribution where the second parameter 1 − 1/n is the variance of conditional
distribution, find
Eµ (I(−∞,t](X1 )|X = x) = Eµ [I(−∞,t](W )]
where W ∼ N(x, 1 − 1/n). (Hint: your answer should be Φ(g(x)) for some function g.)
d) What is the uniformly minimum variance unbiased estimator for
Φ(t − µ)?
Solution. a) The joint density
f(x) = (1/(2π)^(n/2)) exp[−(1/2) Σ(xi − µ)²]
= (1/(2π)^(n/2)) exp[−(1/2)(Σ xi² − 2µ Σ xi + nµ²)]
= (1/(2π)^(n/2)) exp[−(1/2) Σ xi²] exp[nµx − nµ²/2].
Hence by the factorization theorem X is a sufficient statistic for µ.
b) X is sufficient by a) and complete since the N(µ, 1) family is a regular one param-
eter exponential family.
c) E(I(−∞,t](X1)|X = x) = P(X1 ≤ t|X = x) = Φ((t − x)/√(1 − 1/n)).
d) By the LSU theorem,
Φ((t − X)/√(1 − 1/n))
is the UMVUE.
6.14. (Jan. 2003 Qual): Let X1 , ..., Xn be independent, identically distributed
exponential(θ) random variables where θ > 0 is unknown. Consider the class of esti-
mators of θ
{Tn(c) = c Σ_{i=1}^n Xi | c > 0}.

Determine the value of c that minimizes the mean square error MSE. Show work and
prove that your value of c is indeed the global minimizer.
Solution. Note that Σ Xi ∼ G(n, θ). Hence MSE(c) = Varθ(Tn(c)) + [Eθ Tn(c) − θ]²
= c² Varθ(Σ Xi) + [ncEθ X − θ]² = c²nθ² + [ncθ − θ]².
So
(d/dc) MSE(c) = 2cnθ² + 2[ncθ − θ]nθ.
Set this equation to 0 to get 2nθ²[c + nc − 1] = 0 or c(n + 1) = 1. So c = 1/(n + 1).
The second derivative is 2nθ² + 2n²θ² > 0 so the function is convex and the local min
is in fact global.
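A quick numerical sketch (not in the original solution, assuming numpy): evaluating MSE(c) = c²nθ² + (ncθ − θ)² on a grid confirms the minimum at c = 1/(n + 1).

import numpy as np

n, theta = 10, 3.0
c_grid = np.linspace(0.01, 0.3, 2000)
mse = c_grid**2 * n * theta**2 + (n * c_grid * theta - theta)**2
print(c_grid[np.argmin(mse)], 1 / (n + 1))     # both approximately 0.0909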
6.19. (Aug. 2000 SIU, 1995 Univ. Minn. Qual): Let X1 , ..., Xn be independent
identically distributed random variables from a N(µ, σ 2 ) distribution. Hence E(X1 ) = µ
and V AR(X1 ) = σ 2. Suppose that µ is known and consider estimates of σ 2 of the form
S²(k) = (1/k) Σ_{i=1}^n (Xi − µ)²

where k is a constant to be chosen. Note: E(χ2m ) = m and V AR(χ2m ) = 2m. Determine


the value of k which gives the smallest mean square error. (Hint: Find the MSE as a
function of k, then take derivatives with respect to k.)
Solution.
W ≡ S²(k)/σ² ∼ χ²_n/k
and
MSE(S²(k)) = VAR(S²(k)) + (E(S²(k)) − σ²)²
= 2n σ⁴/k² + (nσ²/k − σ²)²
= σ⁴[2n/k² + (n/k − 1)²] = σ⁴ [2n + (n − k)²]/k².
Now the derivative (d/dk) MSE(S²(k))/σ⁴ =
(−2/k³)[2n + (n − k)²] + [−2(n − k)]/k².
Set this derivative equal to zero. Then
2k² − 2nk = 4n + 2(n − k)² = 4n + 2n² − 4nk + 2k².
Hence
2nk = 4n + 2n²
or k = n + 2.
Should also argue that k = n + 2 is the global minimizer. Certainly need k > 0
and the absolute bias will tend to ∞ as k → 0 and the bias tends to σ² as k → ∞, so
k = n + 2 is the unique critical point and is the global minimizer.
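As a sketch (illustrative only, assuming numpy), the formula MSE(S²(k)) = σ⁴[2n + (n − k)²]/k² can also be minimized numerically to confirm k = n + 2:

import numpy as np

n, sigma4 = 10, 1.0
k_grid = np.linspace(1, 40, 100000)
mse = sigma4 * (2 * n + (n - k_grid)**2) / k_grid**2
print(k_grid[np.argmin(mse)], n + 2)   # both approximately 12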
6.20. (Aug. 2001 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with pdf
f(x|θ) = (2x/θ) e^(−x²/θ), x > 0
θ
and f(x|θ) = 0 for x ≤ 0.
a) Show that X1² is an unbiased estimator of θ. (Hint: use the substitution W = X²
and find the pdf of W or use u-substitution with u = x²/θ.)
b) Find the Cramer-Rao lower bound for the variance of an unbiased estimator of θ.
c) Find the uniformly minimum variance unbiased estimator (UMVUE) of θ.
Solution. a) Let W = X². Then f(w) = fX(√w) (1/(2√w)) = (1/θ) exp(−w/θ) and
W ∼ EXP(θ). Hence Eθ(X²) = Eθ(W) = θ.
b) This is an exponential family and
log(f(x|θ)) = log(2x) − log(θ) − x²/θ
for x > 0. Hence
(∂/∂θ) log f(x|θ) = −1/θ + x²/θ²
and
(∂²/∂θ²) log f(x|θ) = 1/θ² − 2x²/θ³.
Hence
I1(θ) = −Eθ[1/θ² − 2X²/θ³] = 1/θ²
by a). Now
CRLB = [τ'(θ)]²/(nI1(θ)) = θ²/n
where τ(θ) = θ.
c) This is a regular exponential family so Σ_{i=1}^n Xi² is a complete sufficient statistic.
Since
Eθ[Σ_{i=1}^n Xi²/n] = θ,
the UMVUE is Σ_{i=1}^n Xi²/n.
6.21. (Aug. 2001 Qual): See Mukhopadhyay (2000, p. 377). Let X1 , ..., Xn be iid
N(θ, θ²) normal random variables with mean θ and variance θ². Let
T1 = X = (1/n) Σ_{i=1}^n Xi
and let
T2 = cn S = cn √(Σ_{i=1}^n (Xi − X)²/(n − 1))
where the constant cn is such that Eθ[cn S] = θ. You do not need to find the constant cn.
Consider estimators W(α) of θ of the form
W(α) = αT1 + (1 − α)T2
where 0 ≤ α ≤ 1.
a) Find the variance

V arθ [W (α)] = V arθ (αT1 + (1 − α)T2 ).

b) Find the mean square error of W (α) in terms of V arθ (T1 ), V arθ (T2) and α.
c) Assume that
Varθ(T2) ≈ θ²/(2n).
Determine the value of α that gives the smallest mean square error. (Hint: Find the
MSE as a function of α, then take the derivative with respect to α. Set the derivative
equal to zero and use the above approximation for V arθ (T2). Show that your value of α
is indeed the global minimizer.)
Solution. a) In normal samples, X and S are independent, hence

V arθ [W (α)] = α2 V arθ (T1 ) + (1 − α)2 V arθ (T2 ).

b) W (α) is an unbiased estimator of θ. Hence MSE[W (α)] ≡ MSE(α) = V arθ [W (α)]


which is found in part a).

c) Now
(d/dα) MSE(α) = 2α Varθ(T1) − 2(1 − α) Varθ(T2), set = 0.
Hence
α̂ = Varθ(T2)/(Varθ(T1) + Varθ(T2)) ≈ (θ²/(2n))/(θ²/n + θ²/(2n)) = 1/3
using the approximation and the fact that Var(X̄) = θ²/n. Note that the second deriva-
tive
(d²/dα²) MSE(α) = 2[Varθ(T1) + Varθ(T2)] > 0,
so α = 1/3 is a local min. The critical value was unique, hence 1/3 is the global min.
6.22. (Aug. 2003 Qual): Suppose that X1 , ..., Xn are iid normal distribution with
mean 0 and variance σ². Consider the following estimators: T1 = (1/2)|X1 − X2| and
T2 = √((1/n) Σ_{i=1}^n Xi²).
a) Is T1 unbiased for σ? Evaluate the mean square error (MSE) of T1 .
b) Is T2 unbiased for σ? If not, find a suitable multiple of T2 which is unbiased for σ.

Solution. a) X1 − X2 ∼ N(0, 2σ²). Thus,
E(T1) = ∫_0^∞ u (1/√(4πσ²)) e^(−u²/(4σ²)) du = σ/√π.
E(T1²) = (1/2) ∫_0^∞ u² (1/√(4πσ²)) e^(−u²/(4σ²)) du = σ²/2.
V(T1) = σ²(1/2 − 1/π) and
MSE(T1) = σ²[(1/√π − 1)² + 1/2 − 1/π] = σ²[3/2 − 2/√π].
b) Xi/σ has a N(0,1) distribution and Σ_{i=1}^n Xi²/σ² has a chi-square distribution with n degrees of freedom.
Thus
E(√(Σ_{i=1}^n Xi²/σ²)) = √2 Γ((n+1)/2)/Γ(n/2),
and
E(T2) = (σ/√n) √2 Γ((n+1)/2)/Γ(n/2).
Therefore,
E[(√n Γ(n/2)/(√2 Γ((n+1)/2))) T2] = σ.
6.23. (Aug. 2003 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with pdf (probability density function)
f(x) = (1/λ) exp(−x/λ)
where x and λ are both positive. Find the uniformly minimum variance unbiased esti-
mator (UMVUE) of λ2 .
Solution. This is a regular one parameter exponential family with complete sufficient
statistic Tn = Σ_{i=1}^n Xi ∼ G(n, λ). Hence E(Tn) = nλ, E(Tn²) = V(Tn) + (E(Tn))² =
nλ² + n²λ², and Tn²/(n + n²) is the UMVUE of λ².


6.24. (Jan. 2004 Qual): Let X1 , ..., Xn be independent identically distributed random
variables with pdf (probability density function)
f(x) = √(σ/(2πx³)) exp(−σ/(2x))
where x and σ are both positive. Then Xi = σ/Wi where Wi ∼ χ²₁. Find the uniformly
minimum variance unbiased estimator (UMVUE) of 1/σ.
Solution.
1/Xi = Wi/σ ∼ χ²₁/σ.
Hence if
T = Σ_{i=1}^n 1/Xi, then E(T/n) = n/(nσ) = 1/σ,
and T/n is the UMVUE since f(x) is an exponential family with complete sufficient
statistic T = Σ 1/Xi.
6.25. (Jan. 2004 Qual): Let X1 , ..., Xn be a random sample from the distribution
with density
f(x) = 2x/θ² for 0 < x < θ, and f(x) = 0 elsewhere.

Let T = max(X1, ..., Xn ). To estimate θ consider estimators of the form CT . Determine


the value of C which gives the smallest mean square error.
Solution. The pdf of T is
g(t) = 2nt^(2n−1)/θ^(2n)
for 0 < t < θ.
E(T) = 2nθ/(2n+1) and E(T²) = 2nθ²/(2n+2).
MSE(CT) = (C 2nθ/(2n+1) − θ)² + C²[2nθ²/(2n+2) − (2nθ/(2n+1))²].
dMSE(CT)/dC = 2[C 2nθ/(2n+1) − θ][2nθ/(2n+1)] + 2C[2nθ²/(2n+2) − 4n²θ²/(2n+1)²].
Solve dMSE(CT)/dC = 0 to get
C = 2(n+1)/(2n+1).
The MSE is a quadratic in C and the coefficient on C² is positive, hence the local min is
a global min.
6.26. (Aug. 2004 Qual): Let X1 , ..., Xn be a random sample from a distribution with
pdf
f(x) = 2x/θ², 0 < x < θ.
Let T = cX be an estimator of θ where c is a constant.
a) Find the mean square error (MSE) of T as a function of c (and of θ and n).
b) Find the value c that minimizes the MSE. Prove that your value is the minimizer.

Solution. a) E(Xi) = 2θ/3 and V(Xi) = θ²/18. So the bias of T is B(T) = E(cX) − θ =
(2/3)cθ − θ, and Var(T) =
Var(c Σ Xi/n) = (c²/n²) Σ Var(Xi) = c²θ²/(18n).
So MSE = Var(T) + [B(T)]² =
c²θ²/(18n) + ((2/3)cθ − θ)².
b)
dMSE(c)/dc = 2cθ²/(18n) + 2((2/3)cθ − θ)(2θ/3).
Set this equation equal to 0 and solve, so
2cθ²/(18n) + (2/3)θ((4/3)θc − 2θ) = 0
or
c[2θ²/(18n) + (8/9)θ²] = (4/3)θ²
or
c(1/(9n) + 8/9)θ² = (4/3)θ²
or
c(1/(9n) + 8n/(9n)) = 4/3
or
c = (9n/(1 + 8n))(4/3) = 12n/(1 + 8n).
This is a global min since the MSE is a quadratic in c with a positive coefficient on c², or
because
d²MSE(c)/dc² = 2θ²/(18n) + 8θ²/9 > 0.
6.27. (Aug. 2004 Qual): Suppose that X1 , ..., Xn are iid Bernoulli(p) where n ≥ 2
and 0 < p < 1 is the unknown parameter.
a) Derive the UMVUE of ν(p), where ν(p) = e²p(1 − p).
b) Find the Cramér Rao lower bound for estimating ν(p) = e²p(1 − p).
Solution. a) Consider the statistic W = X1(1 − X2), which is an unbiased estimator of
ψ(p) = p(1 − p). The statistic T = Σ_{i=1}^n Xi is both complete and sufficient. The possible
values of W are 0 or 1. Let U = φ(T) where
φ(t) = E[X1(1 − X2)|T = t]
= 0·P[X1(1 − X2) = 0|T = t] + 1·P[X1(1 − X2) = 1|T = t]
= P[X1(1 − X2) = 1|T = t]
= P[X1 = 1, X2 = 0 and Σ_{i=1}^n Xi = t] / P[Σ_{i=1}^n Xi = t]
= P[X1 = 1] P[X2 = 0] P[Σ_{i=3}^n Xi = t − 1] / P[Σ_{i=1}^n Xi = t].
Now Σ_{i=3}^n Xi is Bin(n − 2, p) and Σ_{i=1}^n Xi is Bin(n, p). Thus
φ(t) = p(1 − p) (n−2 choose t−1) p^(t−1) (1 − p)^(n−t−1) / [(n choose t) p^t (1 − p)^(n−t)]
= (n−2 choose t−1)/(n choose t) = (n − 2)! t(t − 1)! (n − t)(n − t − 1)! / [(t − 1)! (n − t − 1)! n(n − 1)(n − 2)!]
= t(n − t)/(n(n − 1)) = n(t/n)(1 − t/n)/(n − 1) = (n/(n − 1)) x(1 − x).
Thus (n/(n − 1)) X(1 − X) is the UMVUE of p(1 − p) and e²U = e² (n/(n − 1)) X(1 − X) is the
UMVUE of τ(p) = e²p(1 − p).
Alternatively, X is a complete sufficient statistic, so try an estimator of the form
U = a(X)² + bX + c. Then U is the UMVUE if Ep(U) = e²p(1 − p) = e²(p − p²). Now
E(X) = E(X1) = p and V(X) = V(X1)/n = p(1 − p)/n since Σ Xi ∼ Bin(n, p). So
E[(X)²] = V(X) + [E(X)]² = p(1 − p)/n + p². So Ep(U) = a[p(1 − p)/n] + ap² + bp + c
= (a/n + b)p + (a − a/n)p² + c.
So c = 0 and a − a/n = a(n − 1)/n = −e², or
a = −n e²/(n − 1).
Hence a/n + b = e², or
b = e² − a/n = e² + e²/(n − 1) = (n/(n − 1)) e².
So
U = (−n/(n − 1)) e² (X)² + (n/(n − 1)) e² X = (n/(n − 1)) e² X(1 − X).
b) The FCRLB for τ(p) is [τ'(p)]²/(nI1(p)). Now f(x) = p^x (1 − p)^(1−x), so log f(x) =
x log(p) + (1 − x) log(1 − p). Hence
∂ log f/∂p = x/p − (1 − x)/(1 − p)
and
∂² log f/∂p² = −x/p² − (1 − x)/(1 − p)².
So
I1(p) = −E(∂² log f/∂p²) = −(−p/p² − (1 − p)/(1 − p)²) = 1/(p(1 − p)).
So
FCRLB_n = [e²(1 − 2p)]² / (n/(p(1 − p))) = e⁴(1 − 2p)² p(1 − p)/n.
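A simulation sketch of part a) (not part of the original solution, assuming numpy) checking that U = (n/(n − 1)) e² X̄(1 − X̄) is unbiased for e²p(1 − p):

import numpy as np

rng = np.random.default_rng(4)
n, p, reps = 8, 0.3, 400000
x = rng.binomial(1, p, size=(reps, n))
xbar = x.mean(axis=1)
u = (n / (n - 1)) * np.exp(2) * xbar * (1 - xbar)
print(u.mean(), np.exp(2) * p * (1 - p))   # both approximately 1.55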

6.30. (Jan. 2009 Qual): Suppose that Y1 , ..., Yn are independent binomial(mi, ρ)
where the mi ≥ 1 are known constants. Let
T1 = (Σ_{i=1}^n Yi)/(Σ_{i=1}^n mi)  and  T2 = (1/n) Σ_{i=1}^n Yi/mi

be estimators of ρ.
a) Find MSE(T1).
b) Find MSE(T2 ).
c) Which estimator is better?

Hint: by the arithmetic–geometric–harmonic mean inequality,
(1/n) Σ_{i=1}^n mi ≥ n / Σ_{i=1}^n (1/mi).

Solution. a)
E(T1) = Σ E(Yi)/Σ mi = Σ mi ρ/Σ mi = ρ,
so MSE(T1) = V(T1) =
(1/(Σ mi)²) V(Σ Yi) = (1/(Σ mi)²) Σ V(Yi) = (1/(Σ mi)²) Σ mi ρ(1 − ρ)
= ρ(1 − ρ)/Σ mi.
b)
E(T2) = (1/n) Σ E(Yi)/mi = (1/n) Σ mi ρ/mi = (1/n) Σ ρ = ρ,
so MSE(T2) = V(T2) =
(1/n²) V(Σ Yi/mi) = (1/n²) Σ V(Yi)/mi² = (1/n²) Σ mi ρ(1 − ρ)/mi²
= (ρ(1 − ρ)/n²) Σ 1/mi.
c) The hint
(1/n) Σ mi ≥ n / Σ(1/mi)
implies that
n/Σ mi ≤ (1/n) Σ(1/mi)  and  1/Σ mi ≤ (1/n²) Σ(1/mi).
Hence MSE(T1) ≤ MSE(T2), and T1 is better.
6.31. (Sept. 2010 Qual): Let Y1 , ..., Yn be iid gamma(α = 10, β) random variables.
Let T = cY be an estimator of β where c is a constant.
a) Find the mean square error (MSE) of T as a function of c (and of β and n).
b) Find the value c that minimizes the MSE. Prove that your value is the minimizer.

Solution. a) E(T ) = cE(Y ) = cαβ = 10cβ.


V (T ) = c2 V (Y ) = c2 αβ 2/n = 10c2 β 2/n.
MSE(T ) = V (T ) + [B(T )]2 = 10c2 β 2/n + (10cβ − β)2.

b) dMSE(c)/dc = 2c 10β²/n + 2(10cβ − β)10β, set = 0,
or [20β²/n]c + 200β²c − 20β² = 0,
or c/n + 10c − 1 = 0, or c(1/n + 10) = 1,
or
c = 1/(1/n + 10) = n/(10n + 1).
This value of c is unique, and
d²MSE(c)/dc² = 20β²/n + 200β² > 0,
so c is the minimizer.
6.32. (Jan. 2011 Qual): Let Y1 , ..., Yn be independent identically distributed random
variables with pdf (probability density function)

f(y) = (2 − 2y)I(0,1)(y) ν exp[(1 − ν)(− log(2y − y²))]

where ν > 0 and n > 1. The indicator I(0,1)(y) = 1 if 0 < y < 1 and I(0,1)(y) = 0,
otherwise.
a) Find a complete sufficient statistic.
b) Find the Fisher information I1(ν) if n = 1.
c) Find the Cramer Rao lower bound (CRLB) for estimating 1/ν.
d) Find the uniformly minimum variance unbiased estimator (UMVUE) of ν.
Hint: You may use the fact that Tn = −Σ_{i=1}^n log(2Yi − Yi²) ∼ G(n, 1/ν), and
E(Tn^r) = (1/ν^r) Γ(r + n)/Γ(n)
for r > −n. Also Γ(1 + x) = xΓ(x) for x > 0.
Solution. a) Since this distribution is a one parameter regular exponential family,
Tn = −Σ_{i=1}^n log(2Yi − Yi²) is complete.
b) Note that log(f(y|ν)) = log(ν) + log(2 − 2y) + (1 − ν)[− log(2y − y²)]. Hence
d log(f(y|ν))/dν = 1/ν + log(2y − y²)
and
d² log(f(y|ν))/dν² = −1/ν².
Since this family is a 1P-REF, I1(ν) = −E(−1/ν²) = 1/ν².
c) [τ'(ν)]²/(nI1(ν)) = (1/ν⁴)(ν²/n) = 1/(nν²).
d) E[Tn⁻¹] = ν Γ(−1 + n)/Γ(n) = ν/(n − 1). So (n − 1)/Tn is the UMVUE of ν by LSU.
6.33. (Sept. 2011 Qual): Let Y1 , ..., Yn be iid random variables from a distribution
with pdf
f(y) = θ/(2(1 + |y|)^(θ+1))
where θ > 0 and y is real. Then W = log(1 + |Y|) has pdf f(w) = θe^(−wθ) for w > 0.
a) Find a complete sufficient statistic.
b) Find the (Fisher) information number I1(θ).
c) Find the uniformly minimum variance unbiased estimator (UMVUE) for θ.
Solution. a) Since f(y) = (θ/2) exp[−(θ + 1) log(1 + |y|)] is a 1P-REF,
T = Σ_{i=1}^n log(1 + |Yi|) is a complete sufficient statistic.
b) Since this is an exponential family, log(f(y|θ)) = log(θ/2) − (θ + 1) log(1 + |y|) and
(∂/∂θ) log(f(y|θ)) = 1/θ − log(1 + |y|).
Hence
(∂²/∂θ²) log(f(y|θ)) = −1/θ²
and
I1(θ) = −Eθ[(∂²/∂θ²) log(f(Y|θ))] = 1/θ².
c) The complete sufficient statistic T ∼ G(n, 1/θ). Hence the UMVUE of θ is (n − 1)/T
since for r > −n,
E(T^r) = (1/θ)^r Γ(r + n)/Γ(n).
So
E(T⁻¹) = θ Γ(n − 1)/Γ(n) = θ/(n − 1).
6.34. (Similar to Sept. 2010 Qual): Suppose that X1 , X2 , ..., Xn are independent
identically distributed random variables from normal distribution with unknown mean µ
and known variance σ 2 . Consider the parametric function g(µ) = e2µ.
a) Derive the uniformly minimum variance unbiased estimator (UMVUE) of g(µ).
b) Find the Cramer-Rao lower bound (CRLB) for the variance of an unbiased esti-
mator of g(µ).
c) Is the CRLB attained by the variance of the UMVUE of g(µ)?
Solution. a) Note that X is a complete and sufficient statistic for µ and X ∼
N(µ, n⁻¹σ²). We know that E(e^(2X)), the mgf of X evaluated at t = 2, is given by e^(2µ + 2n⁻¹σ²).
Thus the UMVUE of e^(2µ) is e^(−2n⁻¹σ²) e^(2X).
b) The CRLB for the variance of an unbiased estimator of g(µ) is given by 4n⁻¹σ²e^(4µ),
whereas
V(e^(−2n⁻¹σ²) e^(2X̄)) = e^(−4n⁻¹σ²) E(e^(4X̄)) − e^(4µ)     (3)
= e^(−4n⁻¹σ²) e^(4µ + (1/2)16n⁻¹σ²) − e^(4µ)
= e^(4µ)[e^(4n⁻¹σ²) − 1]
> 4n⁻¹σ² e^(4µ)
since e^x > 1 + x for all x > 0. Hence the CRLB is not attained.
6.36. (Aug. 2012 Qual): Let Y1, ..., Yn be iid from a one parameter exponential
family with pdf or pmf f(y|θ) with complete sufficient statistic T(Y) = Σ_{i=1}^n t(Yi) where
t(Yi) ∼ θX and X has a known distribution with known mean E(X) and known variance
V(X). Let Wn = cT(Y) be an estimator of θ where c is a constant.
a) Find the mean square error (MSE) of Wn as a function of c (and of n, E(X) and
V (X)).
b) Find the value of c that minimizes the MSE. Prove that your value is the minimizer.
c) Find the uniformly minimum variance unbiased estimator (UMVUE) of θ.
Solution. See Theorem 6.5.
a) E(Wn) = c Σ_{i=1}^n E(t(Yi)) = cnθE(X), and
V(Wn) = c² Σ_{i=1}^n V(t(Yi)) = c²nθ²V(X). Hence MSE(c) ≡ MSE(Wn) =
V(Wn) + [E(Wn) − θ]² = c²nθ²V(X) + (cnθE(X) − θ)².
b) Thus
(d/dc) MSE(c) = 2cnθ²V(X) + 2(cnθE(X) − θ)nθE(X), set = 0,
or
c(nθ²V(X) + n²θ²[E(X)]²) = nθ²E(X),
or
cM = E(X)/(V(X) + n[E(X)]²),
which is unique. Now
(d²/dc²) MSE(c) = 2[nθ²V(X) + n²θ²[E(X)]²] > 0.
So MSE(c) is convex and c = cM is the minimizer.
c) Let cU = 1/(nE(X)). Then E[cU T(Y)] = θ, hence cU T(Y) is the UMVUE of θ by the
Lehmann Scheffe theorem.
6.37. (Jan. 2013 qual):Let X1 , ..., Xn be a random sample from a Poisson (λ) dis-
tribution. Let X and S 2 denote the sample mean and the sample variance, respectively.

a) Show that X is uniformly minimum variance unbiased (UMVU) estimator of λ.


b) Show that E(S 2|X) = X.

c) Show that Var(S 2 ) > Var(X).
Solution: a) Since f(x) = e^(−λ) (1/x!) exp[log(λ)x] I(x ∈ {0, 1, ...}) is a 1P-REF, Σ_{i=1}^n Xi is a
complete sufficient statistic and E(X) = λ. Hence X = (Σ_{i=1}^n Xi)/n is the UMVUE of
λ by the LSU theorem.
b) E(S²) = λ, so S² is an unbiased estimator of λ. Hence E(S²|X) is the unique UMVUE
of λ by the LSU theorem. Thus E(S²|X) = X by part a).
c) By Steiner's formula, V(S²) = V(E(S²|X)) + E(V(S²|X)) = V(X) + E(V(S²|X)) >
V(X). (To show V(S²) ≥ V(X), note that X is the UMVUE and S² is an unbiased esti-
mator of λ. Hence V(X) ≤ V(S²) by the definition of a UMVUE, and the inequality is
strict for at least one value of λ since the UMVUE is unique.)
6.38. (Aug. 2012 Qual): Let X1 , ..., Xn be a random sample from a Poisson distri-
bution with mean θ.
a) Show that T = Σ_{i=1}^n Xi is a complete sufficient statistic for θ.
b) For a > 0, find the uniformly minimum variance unbiased estimator (UMVUE) of
g(θ) = e^(aθ).
c) Prove the identity:
E(2^(X1) | T) = (1 + 1/n)^T.
Solution: a) See the solution to Problem 6.37 a).
b) The complete sufficient statistic T = Σ_{i=1}^n Xi ∼ Poisson(nθ). Hence the mgf of
T is
E(e^(tT)) = mT(t) = exp[nθ(e^t − 1)].
Thus n(e^t − 1) = a, or e^t = a/n + 1, or e^t = (a + n)/n, or t = log[(a + n)/n]. Thus
e^(tT) = (e^t)^T = ((a + n)/n)^T = exp[T log((a + n)/n)]
is the UMVUE of e^(aθ) by the LSU theorem.
c) Let X = X1, and note that 2^X is an unbiased estimator of e^θ since
2^X = e^(log(2^X)) = e^((log 2)X),
and E(2^X) = mX(log 2) = exp[θ(e^(log 2) − 1)] = e^θ.
Thus E[2^X|T] is the UMVUE of E(2^X) = e^θ by the LSU theorem. By part b) with a = 1,
E[2^X|T] = ((1 + n)/n)^T.

6.39. (Aug. 13 Qual): Let X1 , ..., Xn be independent identically distributed from a


N(µ, σ 2) population, where σ 2 is known. Let X be the sample mean.
a) Find E(X − µ)2 .
b) Using a), find the UMVUE of µ2 .

c) Find E(X − µ)3 . [Hint: Show that if Y is a N(0, σ 2 ) random variable, then
E(Y 3 ) = 0].
d) Using c), find the UMVUE of µ3 .
Solution. a) E(X − µ)² = Var(X) = σ²/n.
b) From a), E(X² − 2µX + µ²) = E(X²) − µ² = σ²/n, or E(X²) − σ²/n = µ², or
E(X² − σ²/n) = µ².
Since X is a complete and sufficient statistic, and X² − σ²/n is an unbiased estimator
of µ² and is a function of X, the UMVUE of µ² is X² − σ²/n by the Lehmann-Scheffé
Theorem.
c) Let Y = X − µ ∼ N(0, σ²/n). Then E(Y³) = ∫_{−∞}^∞ y³ fY(y) dy = 0, because the integrand
is an odd function.
d) E(X − µ)³ = E(X³ − 3µX² + 3µ²X − µ³) = E(X³) − 3µE(X²) + 3µ²E(X) − µ³
= E(X³) − 3µ(σ²/n + µ²) + 3µ³ − µ³ = E(X³) − 3µσ²/n − µ³.
Thus E(X³) − 3µσ²/n − µ³ = 0, so replacing µ with its unbiased estimator X in the
middle term, we get
E(X³ − 3X σ²/n) = µ³.
Since X is a complete and sufficient statistic, and X³ − 3X σ²/n is an unbiased estimator
of µ³ and is a function of X̄, the UMVUE of µ³ is X³ − 3X σ²/n by the Lehmann-Scheffé
Theorem.
6.40. (Jan. 2014 Qual): Let Y1 , ..., Yn be iid from a uniform U(0, θ) distribution
where θ > 0. Then T = max(Y1 , ..., Yn) is a complete sufficient statistic.
a) Find E(T k ) for k > 0.
b) Find the UMVUE of θk for k > 0.
Solution: a) The pdf of T is f(t) = n t^(n−1)/θⁿ I(0 < t < θ). Hence E(T^k) = ∫_0^θ t^k n t^(n−1)/θⁿ dt =
∫_0^θ n t^(k+n−1)/θⁿ dt = nθ^(k+n)/((k + n)θⁿ) = n θ^k/(k + n).
b) Thus the UMVUE of θ^k is ((k + n)/n) T^k.
6.41. (Jan. 2014 Qual): Let Y1 , ..., Yn be iid from a distribution with probability
distribution function (pdf)
f(y) = θ/(1 + y)^(θ+1)
where y > 0 and θ > 0.
a) Find a minimal sufficient statistic for θ.
b) Is the statistic found in a) complete? (prove or disprove)
c) Find the Fisher information I1(θ) if n = 1.
d) Find the Cramer Rao lower bound (CRLB) for estimating θ2 .

6.42. (Aug. 2014 Qual): Let X1 , ..., Xn be iid from a distribution with pdf

f(x|θ) = θxθ−1 I(0 < x < 1), θ > 0.

a) Show W = − log(X) ∼ exponential(1/θ).


b) Find the method of moments estimator of θ.
c) Find the UMVUE of 1/θ2 .
d) Find the Fisher information I1(θ).
e) Find the Cramér Rao lower bound for unbiased estimators of τ (θ) = 1/θ2 .
Solution. a) Show f(w) = θe^(−wθ) for w > 0.
b) E(X) = ∫_0^1 θx^θ dx = θ/(θ + 1), set = X. So θ = θX + X, or θ(1 − X) = X. So
θ̂ = X/(1 − X).
c) T = −Σ_{i=1}^n log(Xi) ∼ G(n, 1/θ) is complete and
E(T²) = (1/θ²) Γ(2 + n)/Γ(n) = n(n + 1)/θ².
Or use
E(T²) = V(T) + [E(T)]² = n/θ² + (n/θ)² = (n² + n)/θ².
Hence
[Γ(n)/Γ(2 + n)] T² = T²/(n(n + 1))
is the UMVUE of 1/θ² by LSU.
d) Now log(f(x|θ)) = log(θ) + (θ − 1) log(x). So (d/dθ) log(f(x|θ)) = 1/θ + log(x), and
(d²/dθ²) log(f(x|θ)) = −1/θ². This family is a 1P-REF, so
I1(θ) = −Eθ[(d²/dθ²) log(f(x|θ))] = 1/θ².
e) Now τ'(θ) = −2θ⁻³, and CRLB = [τ'(θ)]²/(nI1(θ)) = 4/(nθ⁴).
6.43. (Jan. 2015 QUAL): (Y, S²) is complete sufficient.
a) Y + S² by LSU.
b) Using the independence of Y and S², get E[cn Y/S²] = cn µ E(1/S²), which should equal µ/σ².
Using (n − 1)S²/σ² ∼ χ²_{n−1}, show E(1/S²) = (1/σ²)(n − 1)/(n − 3). So cn = (n − 3)/(n − 1).
6.44. (Aug. 2016 Qual): Assume the service time of a customer at a store follows
a Pareto distribution with minimum waiting time equal to θ minutes. The maximum
length of the service is dependent on the type of the service. Suppose X1 , ..., Xn is a

random sample of service times of n customers, where each Xi has a Pareto density given
by
f(x|θ) = 4θ⁴x⁻⁵ for x ≥ θ, and f(x|θ) = 0 for x < θ,

for an unknown θ > 0. We are interested in estimating the parameter θ.


a) Write the likelihood function for observed values x1, ..., xn of X1, ..., Xn.
b) Find a sufficient statistic for θ. Call it θ̂.
c) Derive the distribution function and probability density function of θ̂.
d) Determine the bias and mean squared error of θ̂.
e) Derive the value of an that makes an θ̂ an unbiased estimator of θ.
f) Is the unbiased estimator an θ̂ the uniformly minimum-variance unbiased estimator
(UMVUE) of θ? Explain.
Solution:
a)
L(θ|x1, ..., xn) = ∏_{i=1}^n fθ(xi) = ∏_{i=1}^n 4θ⁴ xi⁻⁵ I[θ,∞)(xi) = 4ⁿ θ^(4n) I[θ,∞)(x(1)) ∏_{i=1}^n xi⁻⁵
where x(1) is the first order statistic.

b) By factorization theorem, we have
n
Y
n 4n
f(x|θ) = 4 θ I[θ,∞) (x(1)) x−5
i = g(T (x)|θ)h(x)
i=1

where T (x) = X(1) . Therefore, θb = X(1) is a sufficient statistics.


c) The distribution function:
FX(1)(t) = P(X(1) ≤ t) = 1 − P(X(1) > t) = 1 − P(X1 > t, ..., Xn > t)
= 1 − P(X1 > t) · · · P(Xn > t) = 1 − ∏_{i=1}^n P(Xi > t)
= 1 − (1 − FX(t))ⁿ = 1 − (θ/t)^(4n).
The density function:
fX(1)(t) = (d/dt) FX(1)(t) = 4nθ^(4n) t^(−4n−1) for t ≥ θ, and 0 for t < θ.

d)
Bias(θ̂) = E[θ̂] − θ = E[T(x)] − θ = 4nθ/(4n − 1) − θ = θ/(4n − 1).
MSE(θ̂) = MSE(T(x)) = Var(T(x)) + Bias(T(x))²
= 4nθ²/((4n − 1)²(4n − 2)) + θ²/(4n − 1)².

e) From part d) we have
E[θ̂] = E[T(x)] = 4nθ/(4n − 1),
therefore,
E[((4n − 1)/(4n)) θ̂] = E[((4n − 1)/(4n)) T(x)] = θ,
that is, an = (4n − 1)/(4n).
f) Yes, an θ̂ is the UMVUE of θ, because it is an unbiased estimator and also a
function of the complete sufficient statistic X(1). Then, based on the Lehmann-Scheffe
Theorem, it is the UMVUE.
To show that T(x) = X(1) is complete, we need to show that if E[g(T)] = 0 for all θ,
then P(g(T) = 0) = 1 for all θ. The following "method" is not quite right because
there may be functions g such that E(g(T)) is not differentiable.
Suppose for all θ, g(t) is a function satisfying
E[g(T)] = ∫_θ^∞ g(t) 4nθ^(4n) (1/t^(4n+1)) dt = 0.
Then, by taking derivatives on both sides of this, we have
0 = (d/dθ) E[g(T)] = (d/dθ) ∫_θ^∞ g(t) 4nθ^(4n) (1/t^(4n+1)) dt
= θ^(4n) (d/dθ) ∫_θ^∞ g(t) 4n (1/t^(4n+1)) dt + [(d/dθ) θ^(4n)] ∫_θ^∞ g(t) 4n (1/t^(4n+1)) dt
= −θ^(4n) g(θ) 4n (1/θ^(4n+1)) + 0
= −4n g(θ) (1/θ).
Since 4n g(θ)(1/θ) = 0, and 4n(1/θ) ≠ 0, it must be that g(θ) = 0, and this is true for every
θ > 0; therefore T(x) = X(1) is a complete statistic.
7.6. (Aug. 2002 Qual): Let X1 , ..., Xn be independent, identically distributed random
variables from a distribution with a beta(θ, θ) pdf

f(x|θ) = [Γ(2θ)/(Γ(θ)Γ(θ))] [x(1 − x)]^(θ−1)

where 0 < x < 1 and θ > 0.


a) Find the UMP (uniformly most powerful) level α test for Ho : θ = 1 vs. H1 : θ = 2.

b) If possible, find the UMP level α test for Ho : θ = 1 vs. H1 : θ > 1.
Solution. For both a) and b), the test is: reject Ho iff ∏_{i=1}^n xi(1 − xi) > c where
Pθ=1[∏_{i=1}^n Xi(1 − Xi) > c] = α.
7.10. (Jan. 2001 SIU and 1990 Univ. MN Qual): Let X1 , ..., Xn be a random sample
from the distribution with pdf
f(x, θ) = x^(θ−1) e^(−x)/Γ(θ), x > 0, θ > 0.
Find the uniformly most powerful level α test of
H: θ = 1 versus K: θ > 1.

Solution. H says f(x) = e^(−x) while K says
f(x) = x^(θ−1) e^(−x)/Γ(θ).
The monotone likelihood ratio property holds for ∏ xi since then
fn(x, θ2)/fn(x, θ1) = [(∏_{i=1}^n xi)^(θ2−1) (Γ(θ1))ⁿ] / [(∏_{i=1}^n xi)^(θ1−1) (Γ(θ2))ⁿ] = (Γ(θ1)/Γ(θ2))ⁿ (∏_{i=1}^n xi)^(θ2−θ1),
which increases as ∏_{i=1}^n xi increases if θ2 > θ1. Hence the level α UMP test rejects H if
∏_{i=1}^n Xi > c
where
PH(∏_{i=1}^n Xi > c) = PH(Σ log(Xi) > log(c)) = α.

7.11. (Jan 2001 Qual, see Aug 2013 Qual): Let X1 , ..., Xn be independent identically
distributed random variables from a N(µ, σ 2 ) distribution where the variance σ 2 is known.
We want to test H0 : µ = µ0 against H1 : µ 6= µ0 .
a) Derive the likelihood ratio test.
b) Let λ be the likelihood ratio. Show that −2 log λ is a function of (X − µ0 ).
c) Assuming that H0 is true, find P (−2 log λ > 3.84).
Solution. a) The likelihood function
L(µ) = (2πσ²)^(−n/2) exp[−(1/(2σ²)) Σ(xi − µ)²]
and the MLE for µ is µ̂ = x. Thus the numerator of the likelihood ratio test statistic is
L(µ0) and the denominator is L(x). So the test is: reject H0 if λ(x) = L(µ0)/L(x) ≤ c
where α = Pµ0(λ(X) ≤ c).
b) As a statistic, log λ = log L(µ0) − log L(X) =
−(1/(2σ²))[Σ(Xi − µ0)² − Σ(Xi − X)²] = (−n/(2σ²))[X − µ0]², since Σ(Xi − µ0)² = Σ(Xi − X + X −
µ0)² = Σ(Xi − X)² + n(X − µ0)². So −2 log λ = (n/σ²)[X − µ0]².
c) −2 log λ ∼ χ²₁ and from a chi-square table, P(−2 log λ > 3.84) = 0.05.
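A simulation sketch of c) (not from the original solution, assuming numpy): under H0, −2 log λ = n(X̄ − µ0)²/σ² behaves like a χ²₁ variable, so the probability of exceeding 3.84 is about 0.05.

import numpy as np

rng = np.random.default_rng(6)
n, mu0, sigma, reps = 20, 5.0, 2.0, 200000
x = rng.normal(mu0, sigma, size=(reps, n))
stat = n * (x.mean(axis=1) - mu0)**2 / sigma**2
print((stat > 3.84).mean())    # approximately 0.05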
7.12. (Aug. 2001 Qual): Let X1 , ..., Xn be iid from a distribution with pdf
f(x) = (2x/λ) exp(−x²/λ)
where λ and x are both positive. Find the level α UMP test for Ho : λ = 1 vs H1 : λ > 1.

7.13. (Jan. 2003 Qual): Let X1 , ..., Xn be iid from a distribution with pdf

f(x|θ) = (log θ) θ^x/(θ − 1)
where 0 < x < 1 and θ > 1. Find the UMP (uniformly most powerful) level α test of
Ho : θ = 2 vs. H1 : θ = 4.
Solution. Let θ1 = 4. By the Neyman Pearson lemma, reject Ho if
f(x|θ1)/f(x|2) = [log(θ1)/(θ1 − 1)]ⁿ θ1^(Σxi) [1/log(2)]ⁿ (1/2^(Σxi)) > k
iff
[log(θ1)/((θ1 − 1) log(2))]ⁿ (θ1/2)^(Σxi) > k
iff
(θ1/2)^(Σxi) > k'
iff
Σ xi log(θ1/2) > c'.
So reject Ho iff Σ Xi > c where Pθ=2(Σ Xi > c) = α.
7.14. (Aug. 2003 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a distribution with pdf
f(x) = x² exp(−x²/(2σ²)) / (σ³ √2 Γ(3/2))

where σ > 0 and x ≥ 0.


a) What is the UMP (uniformly most powerful) level α test for
Ho : σ = 1 vs. H1 : σ = 2 ?
b) If possible, find the UMP level α test for Ho : σ = 1 vs. H1 : σ > 1.

Solution. a) By the NP lemma reject Ho if
f(x|σ = 2)/f(x|σ = 1) > k'.
The LHS =
(1/2^(3n)) exp[(−1/8) Σ xi²] / exp[(−1/2) Σ xi²].
So reject Ho if
(1/2^(3n)) exp[Σ xi² (1/2 − 1/8)] > k'
or if Σ xi² > k where PHo(Σ Xi² > k) = α.
b) In the above argument, with any σ1 > 1, get
Σ xi² (1/2 − 1/(2σ1²))
and
1/2 − 1/(2σ1²) > 0
for any σ1² > 1. Hence the UMP test is the same as in a).
7.15. (Jan. 2004 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a distribution with pdf
 
f(x) = (2/(σ√(2π))) (1/x) exp(−[log(x)]²/(2σ²))

where σ > 0 and x ≥ 1.


a) What is the UMP (uniformly most powerful) level α test for
Ho : σ = 1 vs. H1 : σ = 2 ?
b) If possible, find the UMP level α test for Ho : σ = 1 vs. H1 : σ > 1.
Solution. a) By the NP lemma reject Ho if
f(x|σ = 2)/f(x|σ = 1) > k'.
The LHS =
(1/2ⁿ) exp[(−1/8) Σ[log(xi)]²] / exp[(−1/2) Σ[log(xi)]²].
So reject Ho if
(1/2ⁿ) exp[Σ[log(xi)]² (1/2 − 1/8)] > k'
or if Σ[log(Xi)]² > k where PHo(Σ[log(Xi)]² > k) = α.
b) In the above argument, with any σ1 > 1, get
Σ[log(xi)]² (1/2 − 1/(2σ1²))
and
1/2 − 1/(2σ1²) > 0
for any σ1² > 1. Hence the UMP test is the same as in a).
7.16. (Aug. 2004 Qual): Suppose X is an observable random variable with its pdf
given by f(x), x ∈ R. Consider two functions defined as follows:

f0(x) = (3/64)x² for 0 ≤ x ≤ 4, and 0 elsewhere;
f1(x) = (3/16)√x for 0 ≤ x ≤ 4, and 0 elsewhere.

Determine the most powerful level α test for H0 : f(x) = f0 (x) versus Ha : f(x) =
f1 (x) in the simplest implementable form. Also, find the power of the test when α = 0.01

Solution. The most powerful test will have the following form:
Reject H0 iff f1(x)/f0(x) > k.
But f1(x)/f0(x) = 4x^(−3/2), and hence we reject H0 iff X is small, i.e. reject H0 if X < k for
some constant k. This test must also have size α, that is we require
α = P(X < k when f(x) = f0(x)) = ∫_0^k (3/64)x² dx = k³/64,
so that k = 4α^(1/3).
For the power, when k = 4α^(1/3),
P[X < k when f(x) = f1(x)] = ∫_0^k (3/16)√x dx = √α.
When α = 0.01, the power is 0.10.
7.17. (Sept. 2005 Qual): Let X be one observation from the probability density
function
f(x) = θx^(θ−1), 0 < x < 1, θ > 0.
a) Find the most powerful level α test of H0 : θ = 1 versus H1 : θ = 2.
b) For testing H0 : θ ≤ 1 versus H1 : θ > 1, find the size and the power function of
the test which rejects H0 if X > 5/8.
c) Is there a UMP test of H0 : θ ≤ 1 versus H1 : θ > 1? If so, find it. If not, prove so.

7.19. (Jan. 2009 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a half normal HN(µ, σ 2) distribution with pdf
 
f(x) = (2/(σ√(2π))) exp(−(x − µ)²/(2σ²))
where σ > 0 and x > µ and µ is real. Assume that µ is known.
a) What is the UMP (uniformly most powerful) level α test for
H0 : σ 2 = 1 vs. H1 : σ 2 = 4 ?
b) If possible, find the UMP level α test for H0 : σ 2 = 1 vs. H1 : σ 2 > 1.
Solution. a) By the NP lemma reject Ho if
f(x|σ² = 4)/f(x|σ² = 1) > k'.
The LHS =
(1/2ⁿ) exp[−Σ(xi − µ)²/(2·4)] / exp[−Σ(xi − µ)²/2].
So reject Ho if
(1/2ⁿ) exp[Σ(xi − µ)²(−1/8 + 1/2)] > k'
or if Σ(xi − µ)² > k where Pσ²=1(Σ(Xi − µ)² > k) = α.
Under Ho, Σ(Xi − µ)² ∼ χ²_n so k = χ²_n(1 − α) where P(χ²_n > χ²_n(1 − α)) = α.
b) In the above argument,
−1/(2·4) + 0.5 = −1/8 + 0.5 > 0,
but also
−1/(2σ1²) + 0.5 > 0
for any σ1² > 1. Hence the UMP test is the same as in a).
Alternatively, use the fact that this is an exponential family where w(σ²) = −1/(2σ²)
is an increasing function of σ² with T(Xi) = (Xi − µ)². Hence the test in a) is UMP for
a) and b) by Theorem 7.3.
7.20. (Aug. 2009 Qual): Suppose that the test statistic T (X) for testing H0 : λ = 1
versus H1 : λ > 1 has an exponential(1/λ1 ) distribution if λ = λ1 . The test rejects H0 if
T (X) < log(100/95).
a) Find the power of the test if λ1 = 1.
b) Find the power of the test if λ1 = 50.
c) Find the pvalue of this test.
Solution. E[T(X)] = 1/λ1 and the power = P(test rejects H0) = P_{λ1}(T(X) < log(100/95)) = F_{λ1}(log(100/95))
= 1 − exp(−λ1 log(100/95)) = 1 − (95/100)^{λ1}.

a) Power = 1 − exp(− log(100/95)) = 1 − exp(log(95/100)) = 0.05.
b) Power = 1 − (95/100)^{50} = 0.923055.
c) Let T0 be the observed value of T (X). Then pvalue = P (W ≤ T0 ) where W ∼
exponential(1) since under H0 , T (X) ∼ exponential(1). So pvalue = 1 − exp(−T0).
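These values can be reproduced with scipy (a sketch; T0 below is a hypothetical observed value used only to illustrate the p-value formula):

# Sketch: power and p-value for the exponential test of Problem 7.20.
import numpy as np
from scipy.stats import expon

c = np.log(100 / 95)                         # reject H0 if T(X) < c
for lam1 in (1, 50):
    power = expon.cdf(c, scale=1 / lam1)     # T(X) ~ exponential with mean 1/lam1
    print(lam1, power, 1 - (95 / 100) ** lam1)   # both columns agree

T0 = 0.02                                    # hypothetical observed T(X)
print(expon.cdf(T0), 1 - np.exp(-T0))        # p-value = P(W <= T0), W ~ exponential(1)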
7.21. (Aug. 2009 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a Burr type X distribution with pdf
f(x) = 2τ x e^{−x²} (1 − e^{−x²})^{τ−1}

where τ > 0 and x > 0.


a) What is the UMP (uniformly most powerful) level α test for
H0 : τ = 2 versus H1 : τ = 4 ?
b) If possible, find the UMP level α test for H0 : τ = 2 versus H1 : τ > 2.
Solution. Note that

f(x) = I(x > 0) 2x e^{−x²} τ exp[(τ − 1) log(1 − e^{−x²})]

is a one parameter exponential family and w(τ) = τ − 1 is an increasing function of τ. Thus
the UMP test rejects H0 if T(x) = Σ_{i=1}^n log(1 − e^{−x_i²}) > k where α = P_{τ=2}(T(X) > k).
Or use the NP lemma.
a) Reject H0 if

f(x|τ = 4) / f(x|τ = 2) > k.

The LHS equals

[ 4^n Π_{i=1}^n (1 − e^{−x_i²})^{4−1} ] / [ 2^n Π_{i=1}^n (1 − e^{−x_i²})^{2−1} ] = 2^n Π_{i=1}^n (1 − e^{−x_i²})².

So reject H0 if

Π_{i=1}^n (1 − e^{−x_i²})² > k',

or

Π_{i=1}^n (1 − e^{−x_i²}) > c,

or

Σ_{i=1}^n log(1 − e^{−x_i²}) > d,

where α = P_{τ=2}( Π_{i=1}^n (1 − e^{−x_i²}) > c ).

b) Replace 4 − 1 by τ1 − 1 where τ1 > 2. Then reject H0 if

Π_{i=1}^n (1 − e^{−x_i²})^{τ1−2} > k',
which gives the same test as in a).
7.22. (Jan. 2010 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from an inverse exponential distribution with pdf
 
f(x) = (θ/x²) exp(−θ/x)

where θ > 0 and x > 0.


a) What is the UMP (uniformly most powerful) level α test for
H0 : θ = 1 versus H1 : θ = 2 ?
b) If possible, find the UMP level α test for H0 : θ = 1 versus H1 : θ > 1.
Solution. By exponential family theory, the UMP test rejects H0 if
T(x) = −Σ_{i=1}^n 1/x_i > k where P_{θ=1}(T(X) > k) = α.
Alternatively, use the Neyman Pearson lemma:
a) reject H0 if

f(x|θ = 2) / f(x|θ = 1) > k'.

The LHS equals

[ 2^n exp(−2 Σ 1/x_i) ] / [ exp(−Σ 1/x_i) ].

So reject H0 if

2^n exp[ (−2 + 1) Σ 1/x_i ] > k',

or if −Σ 1/x_i > k where P_1(−Σ 1/X_i > k) = α.
b) In the above argument, with 2 replaced by θ1 > 1, reject H0 if

θ1^n exp[ (−θ1 + 1) Σ 1/x_i ] > k',

or if −Σ 1/x_i > k where P_1(−Σ 1/X_i > k) = α, for any θ1 > 1. Hence the UMP test is the
same as in a).
7.23. (Sept. 2010 Qual): Suppose that X is an observable random variable with its
pdf given by f(x). Consider the two functions defined as follows: f0 (x) is the probability
density function of a Beta distribution with α = 1 and β = 2, and f1(x) is the pdf of
a Beta distribution with α = 2 and β = 1.
a) Determine the UMP level α = 0.10 test for H0 : f(x) = f0 (x) versus H1 : f(x) =
f1 (x). (Find the constant.)
b) Find the power of the test in a).
Solution. a) We reject H0 iff f1(x)/f0(x) > k. Thus we reject H0 iff 2x/(2(1 − x)) > k; that is,
(1 − x)/x < k1, that is 1/x < k2, that is x > k3. Now 0.1 = P(X > k3) when f(x) = f0(x), so
k3 = 1 − √0.1.
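A numerical check (a Python/scipy sketch): the cutoff k3 is the upper 0.10 point of the Beta(1,2) distribution, and the power asked for in b), which the solution above does not spell out, is P(X > k3) computed under f1 = Beta(2,1).

# Sketch: cutoff and power for the Beta(1,2) vs Beta(2,1) test.
from scipy.stats import beta

alpha = 0.10
k3 = beta.ppf(1 - alpha, 1, 2)     # = 1 - sqrt(0.10) ~ 0.6838
power = beta.sf(k3, 2, 1)          # P(X > k3) under Beta(2,1) ~ 0.5325
print(k3, power)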

7.24. (Sept. 2010 Qual): The pdf of a bivariate normal distribution is

f(x, y) = [1/(2πσ1σ2(1 − ρ²)^{1/2})] exp( (−1/(2(1 − ρ²))) [ ((x − µ1)/σ1)² − 2ρ ((x − µ1)/σ1)((y − µ2)/σ2) + ((y − µ2)/σ2)² ] )

where −1 < ρ < 1, σ1 > 0, σ2 > 0, while x, y, µ1 , and µ2 are all real. Let (X1 , Y1 ), ..., (Xn, Yn )
be a random sample from a bivariate normal distribution.
Let θ̂(x, y) be the observed value of the MLE of θ, and let θ̂(X, Y ) be the MLE as a
random variable. Let the (unrestricted) MLEs be µ̂1 , µ̂2 , σ̂1 , σ̂2, and ρ̂. Then
T1 = Σ_{i=1}^n ((x_i − µ̂1)/σ̂1)² = nσ̂1²/σ̂1² = n,   T3 = Σ_{i=1}^n ((y_i − µ̂2)/σ̂2)² = nσ̂2²/σ̂2² = n,

and T2 = Σ_{i=1}^n ((x_i − µ̂1)/σ̂1)((y_i − µ̂2)/σ̂2) = nρ̂.
Consider testing H0 : ρ = 0 vs. HA : ρ 6= 0. The (restricted) MLEs for µ1 , µ2 , σ1 and
σ2 do not change under H0 , and hence are still equal to µ̂1 , µ̂2 , σ̂1 , and σ̂2 .
a) Using the above information, find the likelihood ratio test for H0 : ρ = 0 vs.
HA : ρ 6= 0. Denote the likelihood ratio test statistic by λ(x, y).
b) Find the large sample (asymptotic) likelihood ratio test that uses test statistic
−2 log(λ(x, y)).
Solution. a) Let k = 2πσ1σ2(1 − ρ²)^{1/2}. Then the likelihood is

L(θ) = (1/k^n) exp( (−1/(2(1 − ρ²))) Σ_{i=1}^n [ ((x_i − µ1)/σ1)² − 2ρ ((x_i − µ1)/σ1)((y_i − µ2)/σ2) + ((y_i − µ2)/σ2)² ] ).

Hence

L(θ̂) = [2πσ̂1σ̂2(1 − ρ̂²)^{1/2}]^{−n} exp( (−1/(2(1 − ρ̂²))) [T1 − 2ρ̂T2 + T3] ) = [2πσ̂1σ̂2(1 − ρ̂²)^{1/2}]^{−n} exp(−n)

and

L(θ̂0) = [2πσ̂1σ̂2]^{−n} exp( (−1/2) [T1 + T3] ) = [2πσ̂1σ̂2]^{−n} exp(−n).

Thus

λ(x, y) = L(θ̂0)/L(θ̂) = (1 − ρ̂²)^{n/2}.
So reject H0 if λ(x, y) ≤ c where α = supθ ∈Θo P (λ(X, Y ) ≤ c). Here Θo is the set of
θ = (µ1 , µ2 , σ1, σ2, ρ) such that the µi are real, σi > 0 and ρ = 0, i.e., such that Xi and
Yi are independent.

b) Since the unrestricted MLE has one more free parameter than the restricted MLE,
−2 log(λ(X, Y)) ≈ χ²_1, and the approximate LRT rejects H0 if −2 log λ(x, y) > χ²_{1,1−α}
where P(χ²_1 > χ²_{1,1−α}) = α.
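A small sketch of the test in code (Python with numpy/scipy; the sample size, coefficients, and seed are illustrative, and ρ̂ is computed with the divide-by-n MLE convention described above):

# Sketch: large-sample LRT of rho = 0 for bivariate normal data.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)            # toy correlated data

xbar, ybar = x.mean(), y.mean()
s1 = np.sqrt(((x - xbar) ** 2).mean())      # MLE of sigma1 (divide by n)
s2 = np.sqrt(((y - ybar) ** 2).mean())
rho_hat = ((x - xbar) * (y - ybar)).mean() / (s1 * s2)

lam = (1 - rho_hat ** 2) ** (n / 2)         # lambda(x, y) from part a)
stat = -2 * np.log(lam)                     # ~ chi-square(1) under H0
print(rho_hat, stat, stat > chi2.ppf(0.95, 1))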
7.26. (Aug. 2012 Qual): Let Y1 , ..., Yn be independent identically distributed random
variables with pdf

f(y) = (1/λ) e^y I(y ≥ 0) exp( −(1/λ)(e^y − 1) )

where y > 0 and λ > 0.
a) Show that W = e^Y − 1 ∼ (λ/2) χ²_2.
b) What is the UMP (uniformly most powerful) level α test for
H0 : λ = 2 versus H1 : λ > 2?
c) If n = 20 and α = 0.05, then find the power β(3.8386) of the above UMP test if
λ = 3.8386. Let P(χ²_d ≤ χ²_{d,δ}) = δ. The tabled values below give χ²_{d,δ}.
          δ:   0.01    0.05    0.1     0.25    0.75    0.9     0.95    0.99
d = 20:       8.260  10.851  12.443  15.452  23.828  28.412  31.410  37.566
d = 30:      14.953  18.493  20.599  24.478  34.800  40.256  43.773  50.892
d = 40:      22.164  26.509  29.051  33.660  45.616  51.805  55.758  63.691
Solution. b) This family is a regular one parameter exponential family where w(λ) = −1/λ
is increasing. Hence the level α UMP test rejects H0 when
T(y) = Σ_{i=1}^n (e^{y_i} − 1) > k where α = P_2( Σ_{i=1}^n (e^{Y_i} − 1) > k ) = P_2(T(Y) > k).

c) Since T(Y) ∼ (λ/2) χ²_{2n}, we have 2T(Y)/λ ∼ χ²_{2n}. Hence

α = 0.05 = P_2(T(Y) > k) = P(χ²_40 > χ²_{40,1−α}),

and k = χ²_{40,1−α} = 55.758. Hence the power

β(λ) = P_λ(T(Y) > 55.758) = P( 2T(Y)/λ > 2(55.758)/λ ) = P( χ²_40 > 2(55.758)/λ )

= P( χ²_40 > 2(55.758)/3.8386 ) = P(χ²_40 > 29.051) = 1 − 0.1 = 0.9.
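The table lookups can be verified with scipy (a sketch; chi2.ppf returns χ²_{d,δ} and chi2.sf returns upper-tail probabilities):

# Sketch: cutoff and power for Problem 7.26 c).
from scipy.stats import chi2

n, alpha, lam = 20, 0.05, 3.8386
k = chi2.ppf(1 - alpha, 2 * n)        # chi^2_{40, 0.95} ~ 55.758
power = chi2.sf(2 * k / lam, 2 * n)   # P(chi^2_40 > 2k/lambda) ~ 0.90
print(k, power)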
7.27. (Jan. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed N(µ =
0, σ 2) random variables with pdf
f(y) = (1/√(2πσ²)) exp( −y² / (2σ²) )

where y is real and σ² > 0.


a) Show W = Y² ∼ σ²χ²_1.
b) What is the UMP (uniformly most powerful) level α test for
H0 : σ 2 = 1 versus H1 : σ 2 > 1?

c) If n = 20 and α = 0.05, then find the power β(3.8027) of the above UMP test if
σ² = 3.8027. Let P(χ²_d ≤ χ²_{d,δ}) = δ. The tabled values below give χ²_{d,δ}.

          δ:   0.01    0.05    0.1     0.25    0.75    0.9     0.95    0.99
d = 20:       8.260  10.851  12.443  15.452  23.828  28.412  31.410  37.566
d = 30:      14.953  18.493  20.599  24.478  34.800  40.256  43.773  50.892
d = 40:      22.164  26.509  29.051  33.660  45.616  51.805  55.758  63.691
Solution. b) This family is a regular one parameter exponential family where w(σ²) = −1/(2σ²)
is increasing. Hence the level α UMP test rejects H0 when
Σ_{i=1}^n y_i² > k where α = P_1( Σ_{i=1}^n Y_i² > k ) = P_1(T(Y) > k).

c) Since T(Y) ∼ σ² χ²_n, T(Y)/σ² ∼ χ²_n. Hence

α = 0.05 = P_1(T(Y) > k) = P(χ²_20 > χ²_{20,1−α}),

and k = χ²_{20,1−α} = 31.410. Hence the power

β(σ²) = P_{σ²}(T(Y) > 31.41) = P( T(Y)/σ² > 31.41/σ² ) = P( χ²_20 > 31.41/3.8027 )

= P(χ²_20 > 8.260) = 1 − 0.01 = 0.99.
7.28. (Aug. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed random
variables with pdf

f(y) = (y/σ²) exp( −(1/2)(y/σ)² )

where σ > 0 and y ≥ 0.
a) Show W = Y² ∼ σ²χ²_2. Equivalently, show Y²/σ² ∼ χ²_2.
b) What is the UMP (uniformly most powerful) level α test for
H0 : σ = 1 versus H1 : σ > 1?

c) If n = 20 and α = 0.05, then find the power β(√1.9193) of the above UMP test if
σ = √1.9193. Let P(χ²_d ≤ χ²_{d,δ}) = δ. The tabled values for problem 7.27 above give χ²_{d,δ}.

Solution. a) Let X = Y²/σ² = t(Y). Then Y = σ√X = t^{−1}(X). Hence

dt^{−1}(x)/dx = σ/(2√x),

and the pdf of X is

g(x) = f_Y(t^{−1}(x)) |dt^{−1}(x)/dx| = (σ√x/σ²) exp( −(1/2)(σ√x/σ)² ) · σ/(2√x) = (1/2) exp(−x/2)

for x > 0, which is the χ²_2 pdf.


b) This family is a regular one parameter exponential family where w(σ) = −1/(2σ²)
is increasing. Hence the level α UMP test rejects H0 when
Σ_{i=1}^n y_i² > k where α = P_1( Σ_{i=1}^n Y_i² > k ) = P_1(T(Y) > k).

c) Since T(Y) ∼ σ² χ²_{2n}, T(Y)/σ² ∼ χ²_{2n}. Hence

α = 0.05 = P_1(T(Y) > k) = P(χ²_40 > χ²_{40,1−α}),

and k = χ²_{40,1−α} = 55.758. Hence the power

β(σ) = P_σ(T(Y) > 55.758) = P( T(Y)/σ² > 55.758/σ² ) = P( χ²_40 > 55.758/σ² )

= P( χ²_40 > 55.758/1.9193 ) = P(χ²_40 > 29.051) = 1 − 0.1 = 0.9.
7.29. (Aug. 2012 Qual): Consider independent random variables X1 , ..., Xn, where
Xi ∼ N(θi , σ 2), 1 ≤ i ≤ n, and σ 2 is known.
a) Find the most powerful test of

H0 : θi = 0, ∀i, versus H1 : θi = θi0, ∀i,

where θi0 are known. Derive (and simplify) the exact critical region for a level α test.
b) Find the likelihood ratio test of

H0 : θi = 0, ∀i, versus H1 : θi ≠ 0, for some i.

Derive (and simplify) the exact critical region for a level α test.
c) Find the power of the test in (a), when θi0 = n−1/3 , ∀i. What happens to this
power expression as n → ∞?
Solution: a) In Neyman Pearson's lemma, let θ = 0 if H0 is true and θ = 1 if H1 is
true. Then we want f(x|θ = 1)/f(x|θ = 0) ≡ f1(x)/f0(x). Since

f(x) = (1/(√(2π) σ)^n) exp[ (−1/(2σ²)) Σ_{i=1}^n (x_i − θ_i)² ],

f1(x)/f0(x) = exp[ (−1/(2σ²)) Σ_{i=1}^n (x_i − θ_{i0})² ] / exp[ (−1/(2σ²)) Σ_{i=1}^n x_i² ]
= exp( (−1/(2σ²)) [ Σ_{i=1}^n (x_i − θ_{i0})² − Σ_{i=1}^n x_i² ] )
= exp( (−1/(2σ²)) [ −2 Σ_{i=1}^n x_i θ_{i0} + Σ_{i=1}^n θ_{i0}² ] ) > k'

iff Σ_{i=1}^n x_i θ_{i0} > k. Under H0, Σ_{i=1}^n X_i θ_{i0} ∼ N(0, σ² Σ_{i=1}^n θ_{i0}²).
Thus

Σ_{i=1}^n X_i θ_{i0} / ( σ √( Σ_{i=1}^n θ_{i0}² ) ) ∼ N(0, 1).

By Neyman Pearson's lemma, reject H0 if

Σ_{i=1}^n X_i θ_{i0} / ( σ √( Σ_{i=1}^n θ_{i0}² ) ) > z_{1−α}

where P (Z < z1−α) = 1 − α when Z ∼ N(0, 1).
b) The MLE under H0 is θ̂_i = 0 for i = 1, ..., n, while the unrestricted MLE is θ̂_i = x_i
for i = 1, ..., n, since each θ_i is estimated from a sample of size 1. Hence

λ(x) = L(θ̂_i = 0)/L(θ̂_i = x_i) = exp[ (−1/(2σ²)) Σ_{i=1}^n x_i² ] / exp[ (−1/(2σ²)) Σ_{i=1}^n (x_i − x_i)² ] = exp[ (−1/(2σ²)) Σ_{i=1}^n x_i² ] ≤ c'

iff Σ_{i=1}^n x_i² ≥ c. Under H0, X_i ∼ N(0, σ²), X_i/σ ∼ N(0, 1), and
Σ_{i=1}^n X_i²/σ² ∼ χ²_n. So the LRT rejects H0 if Σ_{i=1}^n X_i²/σ² ≥ χ²_{n,1−α} where
P(W ≥ χ²_{n,1−α}) = α if W ∼ χ²_n.
c) Power = P(reject H0) =

P( n^{−1/3} Σ_{i=1}^n X_i / (σ √(n · n^{−2/3})) > z_{1−α} ) = P( n^{−1/3} Σ X_i / (σ n^{1/6}) > z_{1−α} )

= P( n^{−1/2} Σ X_i / σ > z_{1−α} ) = P( Σ_{i=1}^n X_i > σ z_{1−α} n^{1/2} )

where Σ_{i=1}^n X_i ∼ N(n · n^{−1/3}, n σ²) = N(n^{2/3}, n σ²). So

( Σ X_i − n^{2/3} ) / (√n σ) ∼ N(0, 1), and power = P( Σ X_i / (√n σ) > z_{1−α} )

= P( ( Σ X_i − n^{2/3} ) / (√n σ) > z_{1−α} − n^{2/3}/(√n σ) ) = 1 − Φ( z_{1−α} − n^{2/3}/(√n σ) )

= 1 − Φ( z_{1−α} − n^{1/6}/σ ) → 1 − Φ(−∞) = 1

as n → ∞.
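A quick numerical illustration of this limit (a Python/scipy sketch with α = 0.05 and σ = 1 as example values):

# Sketch: the power 1 - Phi(z_{1-alpha} - n^(1/6)/sigma) increases to 1.
from scipy.stats import norm

alpha, sigma = 0.05, 1.0
z = norm.ppf(1 - alpha)
for n in (10, 100, 10_000, 1_000_000):
    print(n, 1 - norm.cdf(z - n ** (1 / 6) / sigma))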
7.31. (Jan. 14 Qual): Let X1 , ..., Xm be iid from a distribution with pdf

f(x) = µxµ−1 ,

for 0 < x < 1 where µ > 0. Let Y1 , ..., Yn be iid from a distribution with pdf

g(y) = θy θ−1 ,

for 0 < y < 1 where θ > 0. Let


T1 = Σ_{i=1}^m log(X_i)   and   T2 = Σ_{j=1}^n log(Y_j).

Find the likelihood ratio test statistic for H0 : µ = θ versus H1 : µ ≠ θ in terms of T1, T2
and the MLEs. Simplify.

Solution: L(µ) = µ^m exp[(µ − 1) Σ log(x_i)], and
log(L(µ)) = m log(µ) + (µ − 1) Σ log(x_i). Hence

d log(L(µ))/dµ = m/µ + Σ log(x_i), set equal to 0.

So µ Σ log(x_i) = −m, or µ̂ = −m/T1, which is unique. Now

d² log(L(µ))/dµ² = −m/µ² < 0.

Hence µ̂ is the MLE of µ. Similarly θ̂ = −n/Σ_{j=1}^n log(Y_j) = −n/T2. Under H0 combine the two
samples into one sample of size m + n with MLE

µ̂0 = −(m + n)/(T1 + T2).

Now the likelihood ratio statistic is

λ = L(µ̂0)/L(µ̂, θ̂) = µ̂0^{m+n} exp[(µ̂0 − 1)(Σ log(X_i) + Σ log(Y_j))] / ( µ̂^m θ̂^n exp[(µ̂ − 1) Σ log(X_i) + (θ̂ − 1) Σ log(Y_j)] )

= µ̂0^{m+n} exp[(µ̂0 − 1)(T1 + T2)] / ( µ̂^m θ̂^n exp[(µ̂ − 1)T1 + (θ̂ − 1)T2] )

= µ̂0^{m+n} exp[−(m + n)] exp[−(T1 + T2)] / ( µ̂^m θ̂^n exp(−m) exp(−n) exp[−(T1 + T2)] )

= µ̂0^{m+n} / (µ̂^m θ̂^n) = [ −(m + n)/(T1 + T2) ]^{m+n} / ( [−m/T1]^m [−n/T2]^n ).
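A sketch of the statistic in code (Python/numpy; the sample sizes, parameter values, and seed are arbitrary, and the data are drawn by inverting the cdf F(x) = x^µ on (0, 1)):

# Sketch: likelihood ratio statistic of Problem 7.31 from two simulated samples.
import numpy as np

rng = np.random.default_rng(1)
m, n, mu, theta = 30, 40, 2.0, 2.0          # H0 is true here: mu = theta
x = rng.uniform(size=m) ** (1 / mu)         # inverse-cdf draws from f(x) = mu x^(mu-1)
y = rng.uniform(size=n) ** (1 / theta)

T1, T2 = np.log(x).sum(), np.log(y).sum()
mu_hat, theta_hat = -m / T1, -n / T2
mu0_hat = -(m + n) / (T1 + T2)

lam = mu0_hat ** (m + n) / (mu_hat ** m * theta_hat ** n)
print(lam)                                  # close to 1 when H0 holds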

7.32. (Aug. 2014 Qual): If Z has a half normal distribution, Z ∼ HN(0, σ 2 ), then
the pdf of Z is

f(z) = (2/(√(2π) σ)) exp( −z² / (2σ²) )

where σ > 0 and z ≥ 0. Let X1, ..., Xn be iid HN(0, σ1²) random variables and let Y1, ..., Ym
be iid HN(0, σ2²) random variables that are independent of the X's.
a) Find the α level likelihood ratio test for H0 : σ1² = σ2² vs. H1 : σ1² ≠ σ2². Simplify
the test statistic.
b) What happens if m = n?
Solution: a)

(σ̂1², σ̂2²) = ( Σ_{i=1}^n X_i²/n , Σ_{i=1}^m Y_i²/m )

is the MLE of (σ1², σ2²), and under the restriction σ1² = σ2² = σ0², say, the restricted MLE is

σ̂0² = ( Σ_{i=1}^n X_i² + Σ_{i=1}^m Y_i² ) / (n + m).

Now the likelihood ratio statistic is

λ = L(σ̂0²)/L(σ̂1², σ̂2²) = [ (1/σ̂0^{m+n}) exp( (−1/(2σ̂0²)) (Σ x_i² + Σ y_i²) ) ] / [ (1/σ̂1^n) exp( (−1/(2σ̂1²)) Σ x_i² ) · (1/σ̂2^m) exp( (−1/(2σ̂2²)) Σ y_i² ) ]

= [ (1/σ̂0^{m+n}) exp(−(n + m)/2) ] / [ (1/(σ̂1^n σ̂2^m)) exp(−n/2) exp(−m/2) ] = σ̂1^n σ̂2^m / σ̂0^{m+n}.

So reject H0 if λ(x, y) ≤ c where α = sup_{σ² ∈ Θ0} P(λ(X, Y) ≤ c). Here Θ0 is the set
of σ1² = σ2² ≡ σ² such that the Xi and Yi are iid.
b) Then

λ(x, y) = σ̂1^n σ̂2^n / σ̂0^{2n} ≤ c

is equivalent to

σ̂1 σ̂2 / σ̂0² ≤ k.
7.33. (Aug. 2016 QUAL): Let θ > 0 be known. Let X1 , ..., Xn be independent,
identically distributed random variables from a distribution with a pdf

f(x) = λ θ^λ / x^{λ+1}
for x > θ where λ > 0. Note that f(x) = 0 for x ≤ θ.
a) Find the UMP (uniformly most powerful) level α test for
H0 : λ = 1 vs. H1 : λ = 2.
b) If possible, find the UMP level α test for H0 : λ = 1 vs. H1 : λ > 1.
Solution: a), b)

f(x) = (I(x > θ)/x) λ θ^λ exp(λ[− log(x)])

is a 1PREF where w(λ) = λ is increasing. Hence the UMP level α test rejects H0 if
T(x) = −Σ_{i=1}^n log(x_i) > c where α = P_{λ=1}(−Σ_{i=1}^n log(X_i) > c).
7.34. (Aug. 2016 QUAL): Let X1 , X2 , . . . , X15 denote a random sample from the
density function
f(x|θ) = (1/θ) 4x³ e^{−x⁴/θ}   for x > 0;   f(x|θ) = 0 for x ≤ 0,

where θ > 0 is an unknown parameter.


a) Find the rejection region for the most powerful (MP) test of H0 : θ = 2 against
HA : θ = θ1, θ1 > 2, at α = .05. (Hint: X_i⁴/θ ∼ Exponential(1).)
b) If you observe Σ_{i=1}^{15} x_i⁴ = 46.98, what is the p-value?
c) Suppose we decide to reject H0 at level α = 0.05, then what is your decision based
on part b)?

d) What is the approximate power of your MP test at θ1 = 5 ?
e) Is your MP test also a uniformly most powerful (UMP) test for testing H0 : θ = 2
versus HA : θ > 2? Give reasons.
          δ:  0.034   0.05   0.1    0.25   0.75   0.9    0.95   0.975  0.99
d = 15:      6.68    7.26   8.55  11.04  19.31  22.31  25.00  27.49  30.58
d = 30:     17.51   18.49  20.60  24.48  34.80  40.26  43.77  46.98  50.89
d = 40:     25.31   26.51  29.05  33.66  47.27  51.81  55.76  59.34  63.69
Solution:
An easier way to do much of this problem is to note that the distribution is a 1PREF
with w(θ) = −1/θ an increasing function of θ and t(x) = x⁴. Hence reject H0 if
Σ_{i=1}^n X_i⁴ > c.
a) We can use the Neyman-Pearson Lemma to specify the rejection region R. We put
x ∈ R if f(x|θ1)/f(x|θ0) > k. Now

f(x|θ1)/f(x|θ = 2) = [ (1/θ1^n) Π_{i=1}^{15} 4x_i³ e^{−x_i⁴/θ1} ] / [ (1/2^n) Π_{i=1}^{15} 4x_i³ e^{−x_i⁴/2} ] = (2/θ1)^n exp( ((θ1 − 2)/(2θ1)) Σ_{i=1}^{15} x_i⁴ ).

Since θ1 > 2, (2/θ1)^n exp( ((θ1 − 2)/(2θ1)) Σ x_i⁴ ) > k ⇔ Σ_{i=1}^{15} x_i⁴ > c.

So

R = { x : Σ_{i=1}^{15} x_i⁴ > c }.

We determine c so that the size of the test equals α = .05, i.e.,

P_{θ0}(X ∈ R) = α ⇒ P_{θ0}( Σ_{i=1}^{15} X_i⁴ > c ) = .05.

From the hint, Σ_{i=1}^{15} X_i⁴/θ0 ∼ Gamma(15, 1), or 2 Σ_{i=1}^{15} X_i⁴/θ0 ∼ χ²_30, and since θ0 = 2,
Σ_{i=1}^{15} X_i⁴ ∼ χ²_30; hence c = χ²_{30,0.95} = 43.77 (the upper 5% point of χ²_30). So

R = { x : Σ_{i=1}^{15} x_i⁴ > 43.77 }.

b)

p-value = sup_{θ ∈ Θ0} P_θ( Σ_{i=1}^{15} X_i⁴ > 46.98 ) = P_{θ=2}( Σ_{i=1}^{15} X_i⁴ > 46.98 ) = 0.0249.

c) Since the p-value < α, we reject the null hypothesis in favor of HA. Or: Σ_{i=1}^{15} X_i⁴ =
46.98 > 43.77, so reject H0 by a).
d) The power function is given by

β(θ) = P_θ(X ∈ R) = P_θ( Σ_{i=1}^{15} X_i⁴ > 43.77 ) = P( 2 Σ X_i⁴/θ > 2(43.77)/θ ) = P( W > 2(43.77)/θ )

where W has a chi-square distribution with 30 degrees of freedom. Then the power of the test is

β(θ = 5) = P( W > 2(43.77)/5 ) = P(W > 17.51) = 0.966.
e) Let T = Σ_{i=1}^{15} X_i⁴; then from the hint, it can be shown that T ∼ Gamma(15, θ), with
density function

f_T(t|θ) = (1/Γ(15)) (1/θ^{15}) t^{14} e^{−t/θ}.

Now, for θ2 > θ1, the ratio

f_T(t|θ2)/f_T(t|θ1) = (θ1/θ2)^{15} e^{t(1/θ1 − 1/θ2)}

is an increasing function of t, so the family of pdfs {f_T(t|θ), θ ∈ Θ} has MLR. Then, by the
Karlin-Rubin Theorem, the MP test given in part a) is the UMP test for H0 : θ = 2 versus HA : θ > 2.
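The numbers in a)-d) can be reproduced with scipy's chi-square functions (a sketch):

# Sketch: cutoff, p-value, and power for Problem 7.34.
from scipy.stats import chi2

alpha, n = 0.05, 15
# Under H0 (theta = 2), sum X_i^4 ~ chi^2_{2n} = chi^2_30.
c = chi2.ppf(1 - alpha, 2 * n)          # ~ 43.77
pvalue = chi2.sf(46.98, 2 * n)          # ~ 0.025
power_at_5 = chi2.sf(2 * c / 5, 2 * n)  # 2*sum X_i^4/theta ~ chi^2_30, ~ 0.966
print(c, pvalue, power_at_5)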
8.3∗. (Aug. 03 Qual): Let X1 , ..., Xn be a random sample from a population with
pdf
f(x) = θ x^{θ−1}/3^θ   for 0 < x < 3,   and f(x) = 0 elsewhere.

The method of moments estimator for θ is Tn = X̄/(3 − X̄).

a) Find the limiting distribution of √n(Tn − θ) as n → ∞.
b) Is Tn asymptotically efficient? Why?
c) Find a consistent estimator for θ and show that it is consistent.

Solution. a) E(X) = 3θ/(θ + 1), and thus

√n(X̄ − E(X)) →D N(0, V(X)),   where V(X) = 9θ/((θ + 2)(θ + 1)²).

Let g(y) = y/(3 − y), so g'(y) = 3/(3 − y)². Using the delta method,

√n(Tn − θ) →D N(0, θ(θ + 1)²/(θ + 2)).

b) Tn is asymptotically efficient if √n(Tn − θ) →D N(0, ν(θ)), where

ν(θ) = 1 / ( −E( d²/dθ² ln f(X|θ) ) ).

But E[ ( d/dθ ln f(X|θ) )² ] = 1/θ², so ν(θ) = θ² ≠ θ(θ + 1)²/(θ + 2). Hence Tn is not asymptotically efficient.

c) X̄ → 3θ/(θ + 1) in probability by the WLLN. Thus Tn = X̄/(3 − X̄) → θ in probability by the continuous mapping theorem.
8.8. (Sept. 2005 Qual): Let X1 , ..., Xn be independent identically distributed random
variables with probability density function
f(x) = θxθ−1 , 0 < x < 1, θ > 0.
a) Find the MLE of 1/θ. Is it unbiased? Does it achieve the information inequality
lower bound?
b) Find the asymptotic distribution of the MLE of 1/θ.
c) Show that X̄_n is unbiased for θ/(θ + 1). Does X̄_n achieve the information inequality
lower bound?
d) Find an estimator of 1/θ from part (c) above using X̄_n which is different from the
MLE in (a). Find the asymptotic distribution of your estimator using the delta method.

e) Find the asymptotic relative efficiency of your estimator in (d) with respect to the
MLE in (b).
8.27. (Sept. 05 Qual): Let X ∼ Binomial(n, p) where the positive integer n is large
and 0 < p < 1.
 
a) Find the limiting distribution of √n( X/n − p ).

b) Find the limiting distribution of √n( (X/n)² − p² ).

c) Show how to find the limiting distribution of (X/n)³ − X/n when p = 1/√3.
(Actually, we want the limiting distribution of

n[ ( (X/n)³ − X/n ) − g(p) ]

where g(θ) = θ³ − θ.)
Solution. a) X =_D Σ_{i=1}^n Y_i where Y_1, ..., Y_n are iid Ber(ρ). Hence

√n( X/n − ρ ) = √n( Ȳ_n − ρ ) →D N(0, ρ(1 − ρ)).

b) Let g(p) = p². Then g'(p) = 2p, and by the delta method and a),

√n( (X/n)² − p² ) = √n( g(X/n) − g(p) ) →D N(0, p(1 − p)(g'(p))²) = N(0, p(1 − p)4p²) = N(0, 4p³(1 − p)).

c) Refer to a) and Theorem 8.30. Let θ = p. Then g'(θ) = 3θ² − 1 and g''(θ) = 6θ.
Notice that

g(1/√3) = (1/√3)³ − 1/√3 = (1/√3)(1/3 − 1) = −2/(3√3) = c.

Also g'(1/√3) = 0 and g''(1/√3) = 6/√3. Since τ²(p) = p(1 − p),

τ²(1/√3) = (1/√3)(1 − 1/√3).

Hence

n[ g(X_n/n) − (−2/(3√3)) ] →D (1/√3)(1 − 1/√3) (1/2)(6/√3) χ²_1 = (1 − 1/√3) χ²_1.
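A Monte Carlo sanity check of the delta-method variance in b) (a Python/numpy sketch; n, p, the replication count, and the seed are arbitrary choices):

# Sketch: sample variance of sqrt(n)((X/n)^2 - p^2) vs the limit 4 p^3 (1 - p).
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 2000, 0.3, 20000
x = rng.binomial(n, p, size=reps)
z = np.sqrt(n) * ((x / n) ** 2 - p ** 2)
print(z.var(), 4 * p ** 3 * (1 - p))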

8.28. (Aug. 04 Qual): Let X1 , ..., Xn be independent and identically distributed (iid)
from a Poisson(λ) distribution.

a) Find the limiting distribution of √n( X̄ − λ ).

b) Find the limiting distribution of √n[ (X̄)³ − λ³ ].
Solution. a) By the CLT, √n(X̄ − λ)/√λ →D N(0, 1). Hence √n(X̄ − λ) →D N(0, λ).

b) Let g(λ) = λ³ so that g'(λ) = 3λ². Then √n[ (X̄)³ − λ³ ] →D N(0, λ[g'(λ)]²) = N(0, 9λ⁵).
8.29. (Jan. 04 Qual): Let X1, ..., Xn be iid from a normal distribution with unknown
mean µ and known variance σ². Let X̄ = Σ_{i=1}^n X_i/n and S² = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄)².

a) Show that X and S 2 are independent.



b) Find the limiting distribution of √n((X̄)³ − c) for an appropriate constant c.
Solution. a) X̄ is a complete sufficient statistic. Also, (n − 1)S²/σ² has a chi-square
distribution with df = n − 1; since σ² is known, the distribution of S² does not depend on µ,
so S² is ancillary. Thus, by Basu's Theorem, X̄ and S² are independent.

b) By the CLT (n is large), √n(X̄ − µ) has approximately a normal distribution with mean
0 and variance σ². Let g(x) = x³, so g'(x) = 3x². Using the delta method, √n(g(X̄) − g(µ))
converges in distribution to N(0, σ²(g'(µ))²), i.e., √n((X̄)³ − µ³) →D N(0, σ²(3µ²)²), so c = µ³.
8.34. (abc Jan. 2010 Qual): Let Y1 , ..., Yn be independent and identically distributed
(iid) from a distribution with probability mass function f(y) = ρ(1− ρ)y for y = 0, 1, 2, ...
and 0 < ρ < 1. Then E(Y ) = (1 − ρ)/ρ and VAR(Y ) = (1 − ρ)/ρ2 .
 
a) Find the limiting distribution of √n( Ȳ − (1 − ρ)/ρ ).

b) Show how to find the limiting distribution of g(Ȳ) = 1/(1 + Ȳ). Deduce it completely.
(This bad notation means: find the limiting distribution of √n(g(Ȳ) − c) for some constant c.)
c) Find the method of moments estimator of ρ.
d) Find the limiting distribution of √n( (1 + Ȳ) − d ) for appropriate constant d.
e) Note that 1 + E(Y ) = 1/ρ. Find the method of moments estimator of 1/ρ.
Solution. a)

√n( Ȳ − (1 − ρ)/ρ ) →D N( 0, (1 − ρ)/ρ² )

by the CLT.
c) The method of moments estimator of ρ is ρ̂ = 1/(1 + Ȳ).
d) Let g(θ) = 1 + θ so g'(θ) = 1. Then by the delta method,

√n( g(Ȳ) − g((1 − ρ)/ρ) ) →D N( 0, ((1 − ρ)/ρ²) · 1² )

or

√n( (1 + Ȳ) − 1/ρ ) →D N( 0, (1 − ρ)/ρ² ).

This result could also be found with algebra since 1 + Ȳ − 1/ρ = Ȳ + 1 − 1/ρ = Ȳ + (ρ − 1)/ρ = Ȳ − (1 − ρ)/ρ.
e) Ȳ is the method of moments estimator of E(Y) = (1 − ρ)/ρ, so 1 + Ȳ is the method
of moments estimator of 1 + E(Y) = 1/ρ.
8.35. (Sept. 2010 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a normal distribution with mean µ and variance σ 2.

a) Find the approximate distribution of 1/X̄. Is this valid for all values of µ?
b) Show that 1/X̄ is asymptotically efficient for 1/µ, provided µ ≠ µ∗. Identify µ∗.

Solution. a) √n(X̄ − µ) is approximately N(0, σ²). Define g(x) = 1/x, so g'(x) = −1/x².
Using the delta method, √n( 1/X̄ − 1/µ ) is approximately N(0, σ²/µ⁴). Thus 1/X̄ is approximately
N( 1/µ, σ²/(nµ⁴) ), provided µ ≠ 0.

b) Using part a), 1/X̄ is asymptotically efficient for 1/µ if

σ²/µ⁴ = [τ'(µ)]² / E_µ[ ( ∂/∂µ ln f(X|µ) )² ].

Here τ(µ) = 1/µ, so τ'(µ) = −1/µ². Also

ln f(x|µ) = −(1/2) ln(2πσ²) − (x − µ)²/(2σ²),

so

E[ ( ∂/∂µ ln f(X|µ) )² ] = E(X − µ)²/σ⁴ = 1/σ².

Thus

[τ'(µ)]² / E_µ[ ( ∂/∂µ ln f(X|µ) )² ] = σ²/µ⁴,

so 1/X̄ is asymptotically efficient for 1/µ provided µ ≠ 0, i.e., µ∗ = 0.

8.36. (Jan. 2011 Qual): Let Y1 , ..., Yn be independent and identically distributed
(iid) from a distribution with probability density function
f(y) = 2y/θ²

for 0 < y ≤ θ, and f(y) = 0 otherwise.
a) Find the limiting distribution of √n( Ȳ − c ) for an appropriate constant c.
b) Find the limiting distribution of √n( log(Ȳ) − d ) for an appropriate constant d.
c) Find the method of moments estimator of θ^k.
Solution. a) E(Y^k) = 2θ^k/(k + 2), so E(Y) = 2θ/3, E(Y²) = θ²/2 and V(Y) = θ²/18.
So

√n( Ȳ − 2θ/3 ) →D N( 0, θ²/18 )

by the CLT.
b) Let g(τ) = log(τ) so [g'(τ)]² = 1/τ², where τ = 2θ/3. Then by the delta method,

√n( log(Ȳ) − log(2θ/3) ) →D N( 0, 1/8 ).

c) θ̂^k = ((k + 2)/(2n)) Σ Y_i^k.
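A Monte Carlo check of the limiting variance 1/8 in b) (a Python/numpy sketch; θ, n, the replication count, and the seed are arbitrary, and Y is simulated by inverting the cdf F(y) = (y/θ)²):

# Sketch: sample variance of sqrt(n)(log(Ybar) - log(2*theta/3)) vs the limit 1/8.
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 5.0, 2000, 4000
y = theta * np.sqrt(rng.uniform(size=(reps, n)))   # Y = theta*sqrt(U), U ~ Uniform(0,1)
z = np.sqrt(n) * (np.log(y.mean(axis=1)) - np.log(2 * theta / 3))
print(z.var())                                     # close to 0.125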
8.37. (Jan. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed discrete
random variables with probability mass function
 
f(y) = P(Y = y) = C(r + y − 1, y) ρ^r (1 − ρ)^y

for y = 0, 1, . . . where the positive integer r is known and 0 < ρ < 1. Then

E(Y) = r(1 − ρ)/ρ, and V(Y) = r(1 − ρ)/ρ².

 
a) Find the limiting distribution of √n( Ȳ − r(1 − ρ)/ρ ).

b) Let g(Ȳ) = r/(r + Ȳ). Find the limiting distribution of √n( g(Ȳ) − c ) for an appropriate constant c.

c) Find the method of moments estimator of ρ.
   
Solution: a) √n( Ȳ − r(1 − ρ)/ρ ) →D N( 0, r(1 − ρ)/ρ² ) by the CLT.

b) Let θ = r(1 − ρ)/ρ. Then

g(θ) = r/(r + θ) = rρ/(rρ + r(1 − ρ)) = ρ = c.

Now

g'(θ) = −r/(r + θ)² = −r/(r + r(1 − ρ)/ρ)² = −rρ²/r² = −ρ²/r.

So

[g'(θ)]² = r²ρ⁴/r⁴ = ρ⁴/r².

Hence by the delta method

√n( g(Ȳ) − ρ ) →D N( 0, (r(1 − ρ)/ρ²)(ρ⁴/r²) ) = N( 0, ρ²(1 − ρ)/r ).

c) Set Ȳ = r(1 − ρ)/ρ, so ρȲ = r − rρ, or ρȲ + rρ = r, or ρ̂ = r/(r + Ȳ).
8.38. (Aug. 2013 Qual): Let X1 , ..., Xn be independent identically distributed uni-
form (0, θ) random variables where θ > 0.
a) Find the limiting distribution of √n( X̄ − c_θ ) for an appropriate constant c_θ that
may depend on θ.
b) Find the limiting distribution of √n[ (X̄)² − k_θ ] for an appropriate constant k_θ that
may depend on θ.
Solution: a) By the CLT,

√n( X̄ − θ/2 ) →D N( 0, θ²/12 ).

b) Let g(y) = y². Then g'(y) = 2y, and by the delta method,

√n( (X̄)² − (θ/2)² ) = √n( (X̄)² − θ²/4 ) = √n( g(X̄) − g(θ/2) ) →D

N( 0, (θ²/12)[g'(θ/2)]² ) = N( 0, (θ²/12)(4θ²/4) ) = N( 0, θ⁴/12 ).

8.39. (Aug. 2014 Qual): Let X1 , ..., Xn be independent identically distributed (iid)
beta(β, β) random variables.


a) Find the limiting distribution of √n( X̄_n − θ ), for an appropriate constant θ.

b) Find the limiting distribution of √n( log(X̄_n) − d ), for an appropriate constant d.
Solution. a) E(X_i) = β/(β + β) = 1/2 and V(X_i) = β²/((2β)²(2β + 1)) = 1/(4(2β + 1)) = 1/(8β + 4). So

√n( X̄_n − 1/2 ) →D N( 0, 1/(8β + 4) )

by the CLT.
b) Let g(x) = log(x). So d = g(1/2) = log(1/2). Now g'(x) = 1/x and (g'(x))² = 1/x², so
(g'(1/2))² = 4. So

√n( log(X̄_n) − log(1/2) ) →D N( 0, 4/(8β + 4) ) = N( 0, 1/(2β + 1) )

by the delta method.
9.1. (Aug. 2003 Qual): Suppose that X1 , ..., Xn are iid with the Weibull distribution,
that is the common pdf is
f(x) = (b/a) x^{b−1} e^{−x^b/a}   for 0 < x,   and f(x) = 0 elsewhere,

where a is the unknown parameter, but b(> 0) is assumed known.


a) Find a minimal sufficient statistic for a.
b) Assume n = 10. Use the Chi-Square Table and the minimal sufficient statistic to
find a 95% two sided confidence interval for a.
Solution. a) Σ_{i=1}^n X_i^b is minimal sufficient for a.
b) It can be shown that X^b/a has an exponential distribution with mean 1. Thus,
2 Σ_{i=1}^n X_i^b / a is distributed χ²_{2n}. Let χ²_{2n,α/2} be the upper 100(α/2)% point of the chi-square
distribution with 2n degrees of freedom. Thus, we can write

1 − α = P( χ²_{2n,1−α/2} < 2 Σ_{i=1}^n X_i^b / a < χ²_{2n,α/2} ),

which translates into

( 2 Σ_{i=1}^n X_i^b / χ²_{2n,α/2} ,  2 Σ_{i=1}^n X_i^b / χ²_{2n,1−α/2} )

as a two sided (1 − α) confidence interval for a. For α = 0.05 and n = 10 (so 2n = 20), we have
χ²_{2n,α/2} = 34.1696 and χ²_{2n,1−α/2} = 9.59083. Thus the confidence interval for a is

( Σ_{i=1}^n X_i^b / 17.0848 ,  Σ_{i=1}^n X_i^b / 4.795415 ).

9.12. (Aug. 2009 qual): Let X1 , ..., Xn be a random sample from a uniform(0, θ)
distribution. Let Y = max(X1 , X2 , ..., Xn).
a) Find the pdf of Y/θ.
b) To find a confidence interval for θ, can Y/θ be used as a pivot?
c) Find the shortest 100(1 − α)% confidence interval for θ.
Solution. a) Let W_i ∼ U(0, 1) for i = 1, ..., n and let T_n = Y/θ. Then

P( Y/θ ≤ t ) = P( max(W_1, ..., W_n) ≤ t ) = P(all W_i ≤ t) = [F_{W_i}(t)]^n = t^n

for 0 < t < 1. So the pdf of T_n is

f_{T_n}(t) = (d/dt) t^n = n t^{n−1}

for 0 < t < 1.
b) Yes, the distribution of T_n = Y/θ does not depend on θ by a), so T_n can be used as a pivot.
c) Let W_i = X_i/θ ∼ U(0, 1), which has cdf F(t) = t for 0 < t < 1. Let W_(n) = X_(n)/θ = max(W_1, ..., W_n). Then

F_{W_(n)}(t) = P( X_(n)/θ ≤ t ) = t^n

for 0 < t < 1 by a).
Want c_n so that

P( c_n ≤ X_(n)/θ ≤ 1 ) = 1 − α

for 0 < α < 1. So 1 − F_{W_(n)}(c_n) = 1 − α, or 1 − c_n^n = 1 − α, or c_n = α^{1/n}. Then

( X_(n), X_(n)/α^{1/n} )

is an exact 100(1 − α)% CI for θ. This is in fact the shortest CI based on this pivot: the pdf
n t^{n−1} of W_(n) is increasing on (0, 1), so among intervals [a, b] for W_(n) with probability 1 − α,
taking b = 1 allows the largest a = α^{1/n}, and this minimizes the CI length X_(n)(1/a − 1/b).
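A simulation check of the exact coverage of this interval (a Python/numpy sketch; θ, n, α, the replication count, and the seed are arbitrary):

# Sketch: coverage of (X_(n), X_(n)/alpha^(1/n)) for the uniform(0, theta) model.
import numpy as np

rng = np.random.default_rng(4)
theta, n, alpha, reps = 3.0, 10, 0.10, 20000
x_max = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
covered = (x_max <= theta) & (theta <= x_max / alpha ** (1 / n))
print(covered.mean())                     # close to 1 - alpha = 0.90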
9.14. (Aug. 2016 Qual, continuation of Problem 6.44):
a) Find the distribution function of θ̂/θ, and use it to explain why this is a pivotal
quantity.
b) Using this pivotal quantity, derive a statistic θ̂_L such that P(θ̂_L < θ) = 0.9.
c) Find the method of moments estimator of θ. Is this estimator unbiased?
d) Is the method of moments estimator consistent? Fully justify your answer.
e) How do you think the variance of a_n θ̂ compares to that of the method of moments
estimator, and why?
f) What is the MLE of θ? Is the MLE consistent?
f) What is the MLE of θ? Is the MLE consistent?
Solution:

a) Refer to 6.44 c). Let Y = θ̂/θ. Then

F_Y(y) = P(Y ≤ y) = P( θ̂/θ ≤ y ) = P( θ̂ ≤ θy ) = P( X_(1) ≤ θy ) = F_{X_(1)}(θy)
= 1 − ( θ/(θy) )^{4n} = 1 − (1/y)^{4n}.

As can be seen, the distribution of θ̂/θ does not depend on any parameter, and that is
the definition of a pivotal quantity.
b) First, find a value b such that

P( θ̂/θ ≤ b ) = 0.9 ⇒ 1 − (1/b)^{4n} = 0.9 ⇒ b = 10^{1/(4n)},

or

P( θ̂/θ ≤ 10^{1/(4n)} ) = 0.9 ⇒ P( θ̂/10^{1/(4n)} ≤ θ ) = 0.9.

Therefore, θ̂_L = θ̂/10^{1/(4n)} = X_(1)/10^{1/(4n)}.
c) MME:

µ'_1 = E[X] = (4/3)θ,   m'_1 = (1/n) Σ_{i=1}^n X_i = X̄,

and setting µ'_1 = m'_1 gives θ̃ = (3/4)X̄.
Check the unbiasedness of the MME:

E[θ̃] = E[(3/4)X̄] = (3/4)E[X̄] = (3/4)(4/3)θ = θ,

hence θ̃ is an unbiased estimator.


d) One way to check the consistency of an estimator is to check the limit of its MSE:
if it goes to zero, then the estimator is consistent. We have

MSE(θ̃) = Var(θ̃) + Bias²(θ̃) = (9/16)Var(X̄) + 0 = (9/16)(4θ²/(18n)) = θ²/(8n).

Then

lim_{n→∞} MSE(θ̃) = lim_{n→∞} θ²/(8n) = 0,

and this proves the consistency of the MM estimator.

An easier way: X̄ →P E(X) by the WLLN, so 0.75X̄ →P 0.75E(X) = θ.

e) The variance of a_n θ̂ should be less than the variance of any other unbiased estimator,
because we proved that a_n θ̂ is the UMVUE. Also note that

Var(a_n θ̂) = a_n² Var(X_(1)) = ((4n − 1)/(4n))² · 4nθ²/((4n − 1)²(4n − 2)) = θ²/(4n(4n − 2)),

while Var(θ̃) = θ²/(8n).
f) From part a) of Problem 6.44, we have

L(θ|x_1, . . . , x_n) = Π_{i=1}^n f_θ(x_i) = 4^n θ^{4n} I_{[θ,∞)}(x_(1)) Π_{i=1}^n x_i^{−5}.

The indicator can be written as I(0 < θ ≤ x_(1)), so L(θ) > 0 on (0, x_(1)], and
L(θ) ∝ θ^{4n} is an increasing function on (0, x_(1)]. (Make a sketch of L(θ).) Hence X_(1) is
the MLE.
Alternatively, taking the derivative of the likelihood function with respect to θ (ignoring the
indicator function),

(d/dθ) L(θ|x_1, . . . , x_n) = 4^n (4n) θ^{4n−1} Π_{i=1}^n x_i^{−5} > 0.

Therefore the likelihood function is an increasing function of θ on (0, x_(1)], and again X_(1)
is the MLE of θ.
Following the argument in part d), and recalling the MSE of X_(1) from part d) of Problem
6.44, we have

lim_{n→∞} MSE(X_(1)) = lim_{n→∞} Var(X_(1)) + lim_{n→∞} Bias²(X_(1))
= lim_{n→∞} 4nθ²/((4n − 1)²(4n − 2)) + lim_{n→∞} θ²/(4n − 1)² = 0,

which shows that the MLE is also consistent.
