Squalsoln
Most solutions are from David Olive, but a few solutions were contributed by Bhaskar
Bhattacharya, Abdel Mugdadi, and Yaser Samadi.
2.68∗. (Aug. 2000 Qual): The number of defects per yard, Y of a certain fabric is
known to have a Poisson distribution with parameter λ. However, λ is a random variable
with pdf
f(λ) = e−λ I(λ > 0).
a) Find E(Y).
b) Find Var(Y).
Solution. Note that the pdf for λ is the EXP(1) pdf, so λ ∼ EXP(1).
a) E(Y ) = E[E(Y |λ)] = E(λ) = 1.
b) V(Y) = E[V(Y|λ)] + V[E(Y|λ)] = E(λ) + V(λ) = 1 + 1 = 2.
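A quick Monte Carlo sanity check of a) and b) (a sketch, not part of the original solution; the sample size and seed are arbitrary):

```python
# Sketch: Y | lambda ~ Poisson(lambda) with lambda ~ EXP(1); check E(Y) = 1, V(Y) = 2.
import numpy as np

rng = np.random.default_rng(0)
lam = rng.exponential(scale=1.0, size=1_000_000)  # lambda ~ EXP(1)
y = rng.poisson(lam)                              # Y | lambda ~ Poisson(lambda)
print(y.mean(), y.var())  # approximately 1 and 2
```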
4.27. (Jan. 2003 Qual) Let X1 and X2 be iid Poisson (λ) random variables. Show
that T = X1 + 2X2 is not a sufficient statistic for λ. (Hint: the Factorization Theorem
uses the word iff. Alternatively, find a minimal sufficient statistic S and show that S is
not a function of T .)
See 4.38 solution.
4.28. (Aug. 2002 Qual): Suppose that X1 , ..., Xn are iid N(σ, σ) where σ > 0.
a) Find a minimal sufficient statistic for σ.
b) Show that (X, S 2) is a sufficient statistic but is not a complete sufficient statistic
for σ.
4.31. (Aug. 2004 Qual): Let X1 , ..., Xn be iid beta(θ, θ). (Hence δ = ν = θ.)
a) Find a minimal sufficient statistic for θ.
b) Is the statistic found in a) complete? (prove or disprove)
Solution.
f(x) = [Γ(2θ)/(Γ(θ)Γ(θ))] x^{θ−1}(1 − x)^{θ−1} = [Γ(2θ)/(Γ(θ)Γ(θ))] exp[(θ − 1)(log(x) + log(1 − x))],
for 0 < x < 1, a 1 parameter exponential family. Hence Σ_{i=1}^n (log(X_i) + log(1 − X_i)) is
a complete minimal sufficient statistic.
4.32. (Sept. 2005 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with probability mass function
f(x) = P(X = x) = 1/(x^ν ζ(ν))
where ν > 1 and x = 1, 2, 3, .... Here the zeta function
ζ(ν) = Σ_{x=1}^∞ 1/x^ν
for ν > 1.
a) Find a minimal sufficient statistic for ν.
b) Is the statistic found in a) complete? (prove or disprove)
c) Give an example of a sufficient statistic that is strictly not minimal.
Solution. a) and b)
f(x) = (1/ζ(ν)) I_{1,2,...}(x) exp[−ν log(x)]
is a 1 parameter regular exponential family with Ω = (−∞, −1). Hence Σ_{i=1}^n log(X_i) is
a complete minimal sufficient statistic.
c) By the Factorization Theorem, W = (X_1, ..., X_n) is sufficient, but W is not minimal
since W is not a function of Σ_{i=1}^n log(X_i).
4.36. (Aug. 2009 Qual): Let X1 , ..., Xn be iid uniform(θ, θ + 1) random variables
where θ is real.
a) Find a minimal sufficient statistic for θ.
b) Show whether the minimal sufficient statistic is complete or not.
Solution. Now
f_X(x) = I(θ < x < θ + 1)
and
f(x)/f(y) = I(θ < x_(1) ≤ x_(n) < θ + 1) / I(θ < y_(1) ≤ y_(n) < θ + 1),
which is constant for all real θ iff (x_(1), x_(n)) = (y_(1), y_(n)). Hence T = (X_(1), X_(n)) is a
minimal sufficient statistic by the LSM theorem. To show that T is not complete, first
find E(T). Now
F_X(t) = ∫_θ^t dx = t − θ
for θ < t < θ + 1. Hence
f_{X_(1)}(t) = n[1 − F_X(t)]^{n−1} f_X(t) = n(1 − t + θ)^{n−1}
for θ < t < θ + 1, and thus
E_θ(X_(1)) = ∫_θ^{θ+1} t n(1 − t + θ)^{n−1} dt = θ + 1/(n + 1).
Similarly E_θ(X_(n)) = θ + n/(n + 1). Hence
E_θ( −X_(1) + X_(n) − (n − 1)/(n + 1) ) = 0
for all real θ but
P_θ(g(T) = 0) = P_θ( −X_(1) + X_(n) − (n − 1)/(n + 1) = 0 ) = 0 < 1
for all real θ. Hence T is not complete.
4.37. (Sept. 2010 Qual): Let Y_1, ..., Y_n be iid from a distribution with pdf
f(y) = 2τ y e^{−y²} (1 − e^{−y²})^{τ−1}
where y > 0 and τ > 0. This is a 1 parameter exponential family with minimal and
complete sufficient statistic −Σ_{i=1}^n log(1 − e^{−Y_i²}).
4.38 and 4.27. (Aug. 2016 qual) a) Let X1 , ..., Xn be independent identically dis-
tributed gamma(α, β), and, independently, Y1 , ..., Ym independent identically distributed
gamma(α, kβ) where k is known, and α, β > 0 are parameters. Find a two dimensional
sufficient statistic for (α, β).
b) Let X1 , X2 be independent identically distributed Poisson(θ). Show
T = X1 + 2X2 is not sufficient for θ.
Solution: a) f(x, y) =
[1/(Γ(α)β^α)]^n (Π_{i=1}^n x_i)^{α−1} exp[−Σ_{i=1}^n x_i/β] · [1/(Γ(α)(kβ)^α)]^m (Π_{j=1}^m y_j)^{α−1} exp[−Σ_{j=1}^m y_j/(kβ)]
= [1/(Γ(α)β^α)]^n [1/(Γ(α)(kβ)^α)]^m [(Π_{i=1}^n x_i)(Π_{j=1}^m y_j)]^{α−1} exp[ −( Σ_{i=1}^n x_i + Σ_{j=1}^m y_j/k )/β ].
By Factorization,
( (Π_{i=1}^n X_i)(Π_{j=1}^m Y_j), Σ_{i=1}^n X_i + Σ_{j=1}^m Y_j/k )  or
( Σ_{i=1}^n log(X_i) + Σ_{j=1}^m log(Y_j), Σ_{i=1}^n X_i + Σ_{j=1}^m Y_j/k )
is sufficient.
b) The minimal sufficient statistic X1 + X2 is not a function of T , thus T is not
sufficient. Alternatively, the Factorization Theorem says T is sufficient iff f(x|θ) =
g(T (x)|θ)h(x) where h(x) does not depend on θ and g depends on x only through T (x).
No such factorization exists.
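The dependence can be made concrete numerically. A sketch (the θ values 1 and 2 are arbitrary illustrations): for t = 2 the pairs (x_1, x_2) with x_1 + 2x_2 = 2 are (2, 0) and (0, 1), and their conditional probabilities given T = 2 change with θ, so T cannot be sufficient.

```python
# Sketch: conditional pmf of (X1, X2) given T = X1 + 2*X2 for iid Poisson(theta).
from math import factorial

def cond_pmf(t, theta):
    # pairs (a, b) with a + 2b = t; joint pmf proportional to theta**(a+b)/(a! b!)
    pairs = [(t - 2 * b, b) for b in range(t // 2 + 1)]
    w = {p: theta ** (p[0] + p[1]) / (factorial(p[0]) * factorial(p[1])) for p in pairs}
    tot = sum(w.values())
    return {p: w[p] / tot for p in pairs}

print(cond_pmf(2, 1.0))  # {(2, 0): 0.333..., (0, 1): 0.666...}
print(cond_pmf(2, 2.0))  # {(2, 0): 0.5, (0, 1): 0.5} -- depends on theta
```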
5.2. (1989 Univ. of Minn. and Aug. 2000 SIU Qual): Let (X, Y ) have the bivariate
density
f(x, y) = (1/(2π)) exp( (−1/2) [ (x − ρ cos θ)² + (y − ρ sin θ)² ] ).
Suppose that there are n independent pairs of observations (Xi , Yi ) from the above density
and that ρ is known. Assume that 0 ≤ θ ≤ 2π. Find a candidate for the maximum
likelihood estimator θ̂ by differentiating the log likelihood log(L(θ)). (Do not show that
the candidate is the MLE, it is difficult to tell whether the candidate, 0 or 2π is the MLE
without the actual data.)
Solution. The likelihood function L(θ) =
(1/(2π)^n) exp( (−1/2) [ Σ(x_i − ρ cos θ)² + Σ(y_i − ρ sin θ)² ] ) =
(1/(2π)^n) exp( (−1/2) [ Σx_i² − 2ρ cos θ Σx_i + nρ² cos² θ + Σy_i² − 2ρ sin θ Σy_i + nρ² sin² θ ] )
= (1/(2π)^n) exp( (−1/2) [ Σx_i² + Σy_i² + nρ² ] ) exp( ρ cos θ Σx_i + ρ sin θ Σy_i ).
Hence the log likelihood log L(θ)
= c + ρ cos θ Σx_i + ρ sin θ Σy_i,
with derivative
(d/dθ) log L(θ) = −ρ sin θ Σx_i + ρ cos θ Σy_i.
Setting this derivative to zero gives
ρ Σy_i cos θ = ρ Σx_i sin θ
or
Σy_i / Σx_i = tan θ.
Thus
θ̂ = tan⁻¹( Σy_i / Σx_i ).
Now the boundary points are θ = 0 and θ = 2π. Hence θ̂M LE equals 0, 2π, or θ̂ depending
on which value maximizes the likelihood.
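A numeric sketch of this boundary check (ρ = 2, θ = 1, n = 50, and the seed are illustrative assumptions; note tan⁻¹ returns a value in (−π/2, π/2), so the candidate and candidate + π should both be compared with the boundary points):

```python
# Sketch: compare the theta-dependent part of log L at the candidate and boundary points.
import numpy as np

rng = np.random.default_rng(1)
rho, theta_true, n = 2.0, 1.0, 50
x = rng.normal(rho * np.cos(theta_true), 1, n)
y = rng.normal(rho * np.sin(theta_true), 1, n)

def loglik(th):  # constant terms dropped
    return rho * np.cos(th) * x.sum() + rho * np.sin(th) * y.sum()

cand = np.arctan(y.sum() / x.sum())
for th in (cand, cand + np.pi, 0.0, 2 * np.pi):
    print(round(th % (2 * np.pi), 4), round(loglik(th), 4))
```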
5.23. (Jan. 2001 Qual): Let X1 , ..., Xn be a random sample from a normal distribu-
tion with known mean µ and unknown variance τ.
a) Find the maximum likelihood estimator of the variance τ.
√
b) Find the maximum likelihood estimator of the standard deviation τ . Explain
how the MLE was obtained.
Solution. a) The log likelihood is log L(τ) = −(n/2) log(2πτ) − (1/(2τ)) Σ_{i=1}^n (X_i − µ)². The
derivative of the log likelihood is equal to −n/(2τ) + (1/(2τ²)) Σ_{i=1}^n (X_i − µ)². Setting the
derivative equal to 0 and solving for τ gives the MLE τ̂ = Σ_{i=1}^n (X_i − µ)²/n. Now the likelihood
is only defined for τ > 0. As τ goes to 0 or ∞, log L(τ) tends to −∞. Since there is only one
critical point, τ̂ is the MLE.
b) By the invariance principle, the MLE is √( Σ_{i=1}^n (X_i − µ)² / n ).
5.28. (Aug. 2002 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a half normal HN(µ, σ 2) distribution with pdf
f(x) = (2/(√(2π) σ)) exp( −(x − µ)²/(2σ²) )
where σ > 0 and x > µ and µ is real. Assume that µ is known.
a) Find the maximum likelihood estimator of σ 2.
b) What is the maximum likelihood estimator of σ? Explain.
Solution. This problem is nearly the same as finding the MLE of σ² when the data
are iid N(µ, σ²) and µ is known. See Problem 5.23 and Section 10.23. The MLE in a)
is Σ_{i=1}^n (X_i − µ)²/n. For b) use the invariance principle and take the square root of the
answer in a).
5.29. (Jan. 2003 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a lognormal (µ, σ 2) distribution with pdf
f(x) = (1/(x √(2πσ²))) exp( −(log(x) − µ)²/(2σ²) )
where σ > 0 and x > 0 and µ is real. Assume that σ is known.
a) Find the maximum likelihood estimator of µ.
b) What is the maximum likelihood estimator of µ3 ? Explain.
Solution. a)
µ̂ = Σ log(X_i) / n.
To see this note that
L(µ) = ( Π 1/(x_i √(2πσ²)) ) exp( −Σ(log(x_i) − µ)² / (2σ²) ).
So
log(L(µ)) = log(c) − Σ(log(x_i) − µ)² / (2σ²)
and the derivative of the log likelihood wrt µ is
Σ 2(log(x_i) − µ) / (2σ²).
Setting this quantity equal to 0 gives nµ = Σ log(x_i), and the solution µ̂ is unique. The
second derivative is −n/σ² < 0, so µ̂ is indeed the global maximum.
b)
( Σ log(X_i) / n )³
by invariance.
5.30. (Aug. 2004 Qual): Let X be a single observation from a normal distribution
with mean θ and with variance θ2 , where θ > 0. Find the maximum likelihood estimator
of θ2.
Solution.
L(θ) = (1/(θ√(2π))) e^{−(x−θ)²/(2θ²)}
ln(L(θ)) = −ln(θ) − ln(√(2π)) − (x − θ)²/(2θ²)
d ln(L(θ))/dθ = −1/θ + (x − θ)/θ² + (x − θ)²/θ³
= x²/θ³ − x/θ² − 1/θ  set= 0.
Solving for θ gives
θ = (x/2)(−1 + √5)
and
θ = (x/2)(−1 − √5).
But θ > 0. Thus θ̂ = (x/2)(−1 + √5) when x > 0, and θ̂ = (x/2)(−1 − √5) when x < 0.
To check with the second derivative,
d² ln(L(θ))/dθ² = −(2θ + x)/θ³ + 3(θ² + θx − x²)/θ⁴
= (θ² + 2θx − 3x²)/θ⁴,
but the sign of θ⁴ is always positive, thus the sign of the second derivative depends
on the sign of the numerator. Substituting θ̂ into the numerator and simplifying gives
(x²/2)(−5 ± √5), which is always negative. Hence θ̂ is the MLE, and by the invariance
principle, the MLE of θ² is θ̂².
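A grid-search sketch confirming the positive root maximizes L(θ) for a positive observation (x = 3 is an arbitrary illustration):

```python
# Sketch: log L(theta) for one observation x from N(theta, theta^2), theta > 0.
import numpy as np

x = 3.0
theta = np.linspace(0.01, 10.0, 100_000)
loglik = -np.log(theta) - (x - theta) ** 2 / (2 * theta ** 2)  # constants dropped
print(theta[np.argmax(loglik)], x * (-1 + np.sqrt(5)) / 2)  # both about 1.854
```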
5.31. (Sept. 2005 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with probability density function
f(x) = (σ^{1/λ}/λ) exp[ −(1 + 1/λ) log(x) ] I[x ≥ σ]
where x ≥ σ, σ > 0, and λ > 0. The indicator function I[x ≥ σ] = 1 if x ≥ σ and
0, otherwise. Find the maximum likelihood estimator (MLE) (σ̂, λ̂) of (σ, λ) with the
following steps.
a) Explain why σ̂ = X(1) = min(X1 , ..., Xn ) is the MLE of σ regardless of the value
of λ > 0.
b) Find the MLE λ̂ of λ if σ = σ̂ (that is, act as if σ = σ̂ is known).
Solution. a) For any λ > 0, the likelihood function
L(σ, λ) = σ^{n/λ} I[x_(1) ≥ σ] (1/λ^n) exp[ −(1 + 1/λ) Σ_{i=1}^n log(x_i) ]
is an increasing function of σ for σ ≤ x_(1) and is 0 for σ > x_(1). Hence σ̂ = X_(1)
regardless of the value of λ > 0.
b) Thus
(d/dλ) log L(σ̂, λ) = (−n/λ²) log(σ̂) − n/λ + (1/λ²) Σ_{i=1}^n log(x_i)  set= 0,
or −n log(σ̂) + Σ_{i=1}^n log(x_i) = nλ. So
λ̂ = −log(σ̂) + Σ_{i=1}^n log(x_i)/n = Σ_{i=1}^n log(x_i/σ̂) / n.
Now
d²/dλ² log L(σ̂, λ) |_{λ=λ̂} = (2n/λ̂³) log(σ̂) + n/λ̂² − (2/λ̂³) Σ_{i=1}^n log(x_i)
= n/λ̂² − (2/λ̂³) Σ_{i=1}^n log(x_i/σ̂) = n/λ̂² − 2n/λ̂² = −n/λ̂² < 0.
Hence (σ̂, λ̂) is the MLE of (σ, λ).
5.32. (Aug. 2003 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with pdf
f(x) = (1/λ) exp[ −(1 + 1/λ) log(x) ]
where λ > 0 and x ≥ 1.
a) Find the maximum likelihood estimator of λ.
b) What is the maximum likelihood estimator of λ8 ? Explain.
Solution. a) The likelihood
L(λ) = (1/λ^n) exp[ −(1 + 1/λ) Σ log(x_i) ],
so log(L(λ)) = −n log(λ) − (1 + 1/λ) Σ log(x_i). Hence
(d/dλ) log(L(λ)) = −n/λ + (1/λ²) Σ log(x_i)  set= 0,
or λ̂ = Σ log(x_i)/n. This solution is unique, and
d²/dλ² log(L(λ)) |_{λ=λ̂} = n/λ̂² − (2/λ̂³) Σ log(x_i) = n/λ̂² − 2nλ̂/λ̂³ = −n/λ̂² < 0.
Hence λ̂ is the MLE of λ.
b) By invariance, λ̂8 is the MLE of λ8 .
5.33. (Jan. 2004 Qual): Let X1 , ..., Xn be independent identically distributed random
variables with probability mass function
f(x) = (1/x!) e^{−2θ} exp[log(2θ) x],
for x = 0, 1, . . . , where θ > 0. Assume that at least one Xi > 0.
a) Find the maximum likelihood estimator of θ.
b) What is the maximum likelihood estimator of (θ)4 ? Explain.
Solution. a) The likelihood
L(θ) = c e^{−2nθ} exp[log(2θ) Σx_i].
Hence
(d/dθ) log(L(θ)) = −2n + (2/(2θ)) Σx_i  set= 0,
or Σx_i = 2nθ, or
θ̂ = X̄/2.
Notice that
d²/dθ² log(L(θ)) = −Σx_i/θ² < 0
unless Σx_i = 0.
b) (θ̂)⁴ = (X̄/2)⁴ by invariance.
5.34. (Jan. 2006 Qual): Let X_1, ..., X_n be iid with one of two probability density
functions. If θ = 0, then
f(x|θ) = 1 for 0 ≤ x ≤ 1, and 0 otherwise.
If θ = 1, then
f(x|θ) = 1/(2√x) for 0 ≤ x ≤ 1, and 0 otherwise.
Solution. a) Notice that θ > 0 and
f(y) = (1/√(2π)) (1/√θ) exp( −(y − θ)²/(2θ) ).
Setting the derivative of the log likelihood equal to zero yields
nθ² + nθ − Σ_{i=1}^n y_i² = 0.   (1)
Now the quadratic formula states that for a ≠ 0, the quadratic equation ay² + by + c = 0
has roots
( −b ± √(b² − 4ac) ) / (2a).
Applying the quadratic formula to (1) gives
θ = ( −n ± √(n² + 4n Σ_{i=1}^n y_i²) ) / (2n).
Since θ > 0, a candidate for the MLE is
θ̂ = ( −n + √(n² + 4n Σ_{i=1}^n Y_i²) ) / (2n) = ( −1 + √(1 + 4 (1/n) Σ_{i=1}^n Y_i²) ) / 2.   (2)
Note that
d²/dθ² log(L(θ)) |_{θ=θ̂} = n/(2θ̂²) − Σ_{i=1}^n y_i²/θ̂³ = (1/(2θ̂³)) [ nθ̂ − 2Σ_{i=1}^n y_i² ]
= (1/(2θ̂³)) [ nθ̂ − Σ_{i=1}^n y_i² − Σ_{i=1}^n y_i² ] = (1/(2θ̂³)) [ −nθ̂² − Σ_{i=1}^n y_i² ] < 0
since nθ̂ − Σy_i² = −nθ̂² by (1). Since L(θ) is continuous with a unique critical point on
θ > 0, θ̂ is the MLE.
5.37. (Aug. 2006 Qual): Let X1 , ..., Xn be independent identically distributed (iid)
random variables with probability density function
f(x) = (2/(λ√(2π))) e^x exp( −(e^x − 1)²/(2λ²) )
where x > 0 and λ > 0.
a) Find the maximum likelihood estimator (MLE) λ̂ of λ.
b) What is the MLE of λ2 ? Explain.
Solution. a) L(λ) = c (1/λ^n) exp[ (−1/(2λ²)) Σ_{i=1}^n (e^{x_i} − 1)² ].
Thus
log(L(λ)) = d − n log(λ) − (1/(2λ²)) Σ_{i=1}^n (e^{x_i} − 1)².
Hence
d log(L(λ))/dλ = −n/λ + (1/λ³) Σ (e^{x_i} − 1)²  set= 0,
or nλ² = Σ (e^{x_i} − 1)², or
λ̂ = √( Σ (e^{X_i} − 1)² / n ).
Now
d² log(L(λ))/dλ² |_{λ=λ̂} = n/λ² − (3/λ⁴) Σ (e^{x_i} − 1)²
= n/λ̂² − 3nλ̂²/λ̂⁴ = (n/λ̂²)[1 − 3] < 0.
So λ̂ is the MLE.
5.38. (Jan. 2007 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a distribution with pdf
f(x) = (2/(λ√(2π))) (1/x) exp( −(log(x))²/(2λ²) )
where x > 0 and λ > 0.
a) Find the maximum likelihood estimator (MLE) λ̂ of λ.
b) Find the MLE of λ².
Solution. a) The likelihood
L(λ) = Π f(x_i) = c (Π 1/x_i) (1/λ^n) exp( −Σ(log x_i)²/(2λ²) ),
and the log likelihood
log(L(λ)) = d − Σ log(x_i) − n log(λ) − Σ(log x_i)²/(2λ²).
Hence
(d/dλ) log(L(λ)) = −n/λ + Σ(log x_i)²/λ³  set= 0,
or Σ(log x_i)² = nλ², or
λ̂ = √( Σ(log x_i)² / n ).
This solution is unique.
Notice that
d²/dλ² log(L(λ)) |_{λ=λ̂} = n/λ² − 3Σ(log x_i)²/λ⁴ = n/λ̂² − 3nλ̂²/λ̂⁴ = −2n/λ̂² < 0.
Hence
λ̂ = √( Σ(log X_i)² / n )
is the MLE of λ.
b)
λ̂² = Σ(log X_i)² / n
is the MLE of λ² by invariance.
5.41. (Jan. 2009 Qual): Suppose that X has probability density function
f_X(x) = θ/x^{1+θ},  x ≥ 1,
where θ > 0.
a) If U = X 2 , derive the probability density function fU (u) of U.
b) Find the method of moments estimator of θ.
c) Find the method of moments estimator of θ2 .
5.42. (Jan. 2009 Qual): Suppose that the joint probability distribution function of
X1 , ..., Xk is
f(x_1, x_2, ..., x_k|θ) = ( n! / ((n − k)! θ^k) ) exp( −[ (Σ_{i=1}^k x_i) + (n − k)x_k ] / θ )
where 0 ≤ x1 ≤ x2 ≤ · · · ≤ xk and θ > 0.
a) Find the maximum likelihood estimator (MLE) for θ.
b) What is the MLE for θ2 ? Explain briefly.
Solution. a) Let t = (Σ_{i=1}^k x_i) + (n − k)x_k. L(θ) = f(x|θ) and log(L(θ)) = log(f(x|θ)) =
d − k log(θ) − t/θ.
Hence
(d/dθ) log(L(θ)) = −k/θ + t/θ²  set= 0.
Hence
kθ = t
or
θ̂ = t/k.
This is a unique solution and
d²/dθ² log(L(θ)) |_{θ=θ̂} = k/θ̂² − 2t/θ̂³ = k/θ̂² − 2kθ̂/θ̂³ = −k/θ̂² < 0.
Hence θ̂ = T/k is the MLE where T = (Σ_{i=1}^k X_i) + (n − k)X_k.
b) θ̂2 by the invariance principle.
5.43. (Jan. 2010 Qual): Let X1 , ..., Xn be iid with pdf
f(x) = ( cos(θ) / (2 cosh(πx/2)) ) exp(θx)
where x is real and |θ| < π/2.
a) Find the maximum likelihood estimator (MLE) for θ.
b) What is the MLE for tan(θ)? Explain briefly.
Solution. a) L(θ) = [cos(θ)]^n exp(θ Σx_i) / Π 2 cosh(πx_i/2). So log(L(θ)) = c + n log(cos(θ)) + θ Σx_i, and
(d/dθ) log(L(θ)) = n (1/cos(θ)) [− sin(θ)] + Σx_i = −n tan(θ) + Σx_i  set= 0,
or tan(θ) = Σx_i/n = x̄. Thus θ̂ = tan⁻¹(X̄).
b) By invariance, the MLE of tan(θ) is tan(θ̂) = X̄.
5.44. (Aug. 2009 Qual): Let X1 , ..., Xn be a random sample from a population with
pdf
f(x) = (1/σ) exp( −(x − µ)/σ ),  x ≥ µ,
where −∞ < µ < ∞, σ > 0.
a) Find the maximum likelihood estimator of µ and σ.
b) Evaluate τ (µ, σ) = Pµ,σ [X1 ≥ t] where t > µ. Find the maximum likelihood
estimator of τ (µ, σ).
Solution. a) This is a two parameter exponential distribution. So see Section 10.14
where σ = λ and µ = θ.
b)
τ(µ, σ) = 1 − F(t) = exp( −(t − µ)/σ ).
By the invariance principle, the MLE of τ(µ, σ) is τ(µ̂, σ̂)
= exp( −(t − X_(1)) / (X̄ − X_(1)) ).
5.45. (Sept. 2010 Qual): Let Y_1, ..., Y_n be independent identically distributed (iid)
random variables from a distribution with probability density function (pdf)
f(y) = (1/(2√(2π))) ( √(θ/y³) + √(1/(θy)) ) (1/ν) exp( (−1/(2ν²)) [ y/θ + θ/y − 2 ] )
where y > 0, ν > 0, and θ > 0 is known.
a) Find the maximum likelihood estimator (MLE) of ν.
b) Find the MLE of ν².
Solution. a) Let w_i = y_i/θ + θ/y_i − 2, so that log(L(ν)) = c − n log(ν) − (1/(2ν²)) Σ_{i=1}^n w_i.
Hence
(d/dν) log(L(ν)) = −n/ν + (1/ν³) Σ_{i=1}^n w_i  set= 0,
or
ν̂ = √( Σ_{i=1}^n w_i / n ).
This solution is unique and
d²/dν² log(L(ν)) |_{ν=ν̂} = n/ν² − (3/ν⁴) Σ_{i=1}^n w_i = n/ν̂² − 3nν̂²/ν̂⁴ = −2n/ν̂² < 0.
Thus
ν̂ = √( Σ_{i=1}^n W_i / n )
is the MLE of ν if ν̂ > 0.
b) ν̂² = Σ_{i=1}^n W_i / n by invariance.
5.46. (Sept. 2011 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a distribution with probability density function (pdf)
f(y) = φ y^{−(φ+1)} ( 1/(1 + y^{−φ}) ) (1/λ) exp[ (−1/λ) log(1 + y^{−φ}) ]
where y > 0, φ > 0 is known and λ > 0.
a) Find the maximum likelihood estimator (MLE) of λ.
b) Find the MLE of λ2 .
Solution. a) The likelihood
L(λ) = c (1/λ^n) exp[ −(1/λ) Σ_{i=1}^n log(1 + y_i^{−φ}) ],
and the log likelihood log(L(λ)) = d − n log(λ) − (1/λ) Σ_{i=1}^n log(1 + y_i^{−φ}). Hence
(d/dλ) log(L(λ)) = −n/λ + Σ_{i=1}^n log(1 + y_i^{−φ})/λ²  set= 0,
or Σ_{i=1}^n log(1 + y_i^{−φ}) = nλ, or
λ̂ = Σ_{i=1}^n log(1 + y_i^{−φ}) / n.
This solution is unique and
d²/dλ² log(L(λ)) |_{λ=λ̂} = n/λ² − (2/λ³) Σ_{i=1}^n log(1 + y_i^{−φ}) = n/λ̂² − 2nλ̂/λ̂³ = −n/λ̂² < 0.
Thus
λ̂ = Σ_{i=1}^n log(1 + Y_i^{−φ}) / n
is the MLE of λ if φ is known.
b) The MLE is λ̂² by invariance.
5.47. (Aug. 2012 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from an inverse half normal distribution with probability density func-
tion (pdf)
f(y) = (2/(σ√(2π))) (1/y²) exp( −1/(2σ²y²) )
where y > 0 and σ > 0.
a) Find the maximum likelihood estimator (MLE) of σ 2.
b) Find the MLE of σ.
Solution. a) The likelihood
L(σ²) = c (1/σ²)^{n/2} exp[ (−1/(2σ²)) Σ_{i=1}^n 1/y_i² ].
Hence
(d/d(σ²)) log(L(σ²)) = −n/(2σ²) + (1/(2(σ²)²)) Σ_{i=1}^n 1/y_i²  set= 0,
or Σ_{i=1}^n 1/y_i² = nσ², or
σ̂² = (1/n) Σ_{i=1}^n 1/y_i².
This solution is unique and
d²/d(σ²)² log(L(σ²)) |_{σ²=σ̂²} = n/(2(σ²)²) − Σ_{i=1}^n (1/y_i²)/(σ²)³ = n/(2(σ̂²)²) − nσ̂²/(σ̂²)³ = −n/(2σ̂⁴) < 0.
Thus
σ̂² = (1/n) Σ_{i=1}^n 1/Y_i²
is the MLE of σ².
b) By invariance, σ̂ = √(σ̂²).
5.48. (Jan. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a distribution with probability density function (pdf)
f(y) = (θ/y²) exp( −θ/y )
where y > 0 and θ > 0.
a) Find the maximum likelihood estimator (MLE) of θ.
b) Find the MLE of 1/θ.
" n
#
X 1
Solution. a) The likelihood L(θ) = c θn exp −θ , and the log likelihood
i=1
y i
Xn
1
log(L(θ)) = d + n log(θ) − θ . Hence
i=1
y i
n
d n X 1 set n
log(L(θ)) = − = 0, or θ̂ = Pn 1 .
dθ θ y
i=1 i i=1 yi
d2 −n
Since this solution is unique and 2 log(L(θ)) = 2 < 0,
dθ θ
n
θ̂ = Pn 1 is the MLE of θ.
i=1 Yi Pn 1
i=1 Yi
b) By invariance, the MLE is 1/θ̂ = .
n
5.49. (Aug. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed (iid)
random variables from a Lindley distribution with probability density function (pdf)
f(y) = ( θ²/(1 + θ) ) (1 + y) e^{−θy}
where y > 0 and θ > 0.
a) Find the maximum likelihood estimator (MLE) of θ. You may assume that
d²/dθ² log(L(θ)) |_{θ=θ̂} < 0.
Always use properties of logarithms to simplify the log likelihood before taking deriva-
tives. Note that
log(L(θ)) = d + 2n log(θ) − n log(1 + θ) − θ Σ_{i=1}^n y_i.
Hence
(d/dθ) log(L(θ)) = 2n/θ − n/(1 + θ) − Σ_{i=1}^n y_i  set= 0,
or
[2(1 + θ) − θ]/[θ(1 + θ)] − ȳ = 0  or  (2 + θ)/(θ(1 + θ)) − ȳ = 0,
or 2 + θ = ȳ(θ + θ²), or ȳθ² + θ(ȳ − 1) − 2 = 0. So
θ̂ = ( −(Ȳ − 1) + √((Ȳ − 1)² + 8Ȳ) ) / (2Ȳ),
the positive root of the quadratic.
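A simulation sketch checking the closed-form root against a numerical maximizer (θ = 2, n = 200, and the seed are illustrative assumptions; Lindley variates are drawn as a gamma mixture with P(shape = 2) = 1/(1 + θ)):

```python
# Sketch: Lindley MLE from the quadratic root vs. numerical maximization.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
th, n = 2.0, 200
shape = 1 + (rng.random(n) < 1 / (1 + th))  # mixture of Gamma(1) and Gamma(2)
y = rng.gamma(shape, 1 / th)
ybar = y.mean()

theta_hat = (-(ybar - 1) + np.sqrt((ybar - 1) ** 2 + 8 * ybar)) / (2 * ybar)

def neg_loglik(t):
    return -(2 * n * np.log(t) - n * np.log(1 + t) - t * y.sum())

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
print(theta_hat, res.x)  # should agree closely
```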
where θ > 0, xi > 0, and xi > di > 0 for i = 1, ..., n − k. The di and k are known
constants. Find the maximum likelihood estimator (MLE) of θ.
Solution:
L(θ) = (1/θ^{n−k}) Π_{i=1}^{n−k} e^{−(x_i−d_i)/θ} Π_{i=n−k+1}^n e^{−x_i/θ}.
Hence
log(L(θ)) = −(n − k) log(θ) − (1/θ) Σ_{i=1}^{n−k} (x_i − d_i) − (1/θ) Σ_{i=n−k+1}^n x_i
= −(n − k) log(θ) − (1/θ) Σ_{i=1}^{n−k} x_i + (1/θ) Σ_{i=1}^{n−k} d_i − (1/θ) Σ_{i=n−k+1}^n x_i
= −(n − k) log(θ) − f/θ
where f = Σ_{i=1}^{n−k} x_i − Σ_{i=1}^{n−k} d_i + Σ_{i=n−k+1}^n x_i = Σ_{i=1}^n x_i − Σ_{i=1}^{n−k} d_i
= Σ_{i=1}^{n−k} (x_i − d_i) + Σ_{i=n−k+1}^n x_i > 0. So
(d/dθ) log(L(θ)) = −(n − k)/θ + f/θ²  set= 0,  or  θ̂ = f/(n − k).
This solution was unique and
d² log(L(θ))/dθ² |_{θ=θ̂} = (n − k)/θ̂² − 2f/θ̂³ = (n − k)/θ̂² − 2(n − k)θ̂/θ̂³ = −(n − k)/θ̂² < 0.
where k > 0 is a constant to be chosen. Determine the value of k which gives the smallest
mean square error. (Hint: Find the MSE as a function of k, then take derivatives with
respect to k. Also, use Theorem 4.1c and Remark 5.1 VII.)
6.7. (Jan. 2001 Qual): Let X1 , ..., Xn be independent, identically distributed N(µ, 1)
random variables where µ is unknown and n ≥ 2. Let t be a fixed real number. Then the
expectation
Eµ (I(−∞,t] (X1 )) = Pµ (X1 ≤ t) = Φ(t − µ)
for all µ where Φ(x) is the cumulative distribution function of a N(0, 1) random variable.
b) X̄ is sufficient by a) and complete since the N(µ, 1) family is a regular one param-
eter exponential family.
c) E( I_{(−∞,t]}(X_1) | X̄ = x̄ ) = P(X_1 ≤ t | X̄ = x̄) = Φ( (t − x̄)/√(1 − 1/n) ).
Determine the value of c that minimizes the mean square error MSE. Show work and
prove that your value of c is indeed the global minimizer.
Solution. Note that ΣX_i ∼ G(n, θ). Hence MSE(c) = Var_θ(T_n(c)) + [E_θ T_n(c) − θ]²
= c² Var_θ(ΣX_i) + [ncE_θ X − θ]² = c²nθ² + [ncθ − θ]².
So
(d/dc) MSE(c) = 2cnθ² + 2[ncθ − θ]nθ.
Set this equation to 0 to get 2nθ²[c + nc − 1] = 0, or c(n + 1) = 1. So c = 1/(n + 1).
The second derivative is 2nθ² + 2n²θ² > 0, so the function is convex and the local min
is in fact global.
6.19. (Aug. 2000 SIU, 1995 Univ. Minn. Qual): Let X1 , ..., Xn be independent
identically distributed random variables from a N(µ, σ 2 ) distribution. Hence E(X1 ) = µ
and V AR(X1 ) = σ 2. Suppose that µ is known and consider estimates of σ 2 of the form
S²(k) = (1/k) Σ_{i=1}^n (X_i − µ)²
Now the derivative (d/dk) MSE(S²(k))/σ⁴ =
(−2/k³)[2n + (n − k)²] − 2(n − k)/k².
Set this derivative equal to zero. Then 2n + (n − k)² + k(n − k) = 0, or 2n + n² − nk = 0.
Hence
2nk = 4n + 2n²
or k = n + 2.
Should also argue that k = n + 2 is the global minimizer. Certainly need k > 0,
and the absolute bias will tend to ∞ as k → 0 and the bias tends to σ² as k → ∞, so
k = n + 2 is the unique critical point and is the global minimizer.
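A numeric sketch of the minimization (the derivative above corresponds to MSE(S²(k))/σ⁴ = [2n + (n − k)²]/k²; n = 10 is an illustrative assumption):

```python
# Sketch: minimize MSE(S^2(k))/sigma^4 = (2n + (n-k)^2)/k^2 over a grid of k.
import numpy as np

n = 10
k = np.linspace(0.5, 40.0, 200_000)
mse = (2 * n + (n - k) ** 2) / k ** 2
print(k[np.argmin(mse)], n + 2)  # both approximately 12
```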
6.20. (Aug. 2001 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with pdf
2x −x2 /θ
f(x|θ) = e , x>0
θ
and f(x|θ) = 0 for x ≤ 0.
a) Show that X12 is an unbiased estimator of θ. (Hint: use the substitution W = X 2
and find the pdf of W or use u-substitution with u = x2/θ.)
b) Find the Cramer-Rao lower bound for the variance of an unbiased estimator of θ.
c) Find the uniformly minimum variance unbiased estimator (UMVUE) of θ.
Solution. a) Let W = X². Then f(w) = f_X(√w) · 1/(2√w) = (1/θ) exp(−w/θ), and
W ∼ EXP(θ). Hence E_θ(X²) = E_θ(W) = θ.
b) This is an exponential family and
log(f(x|θ)) = log(2x) − log(θ) − x²/θ
for x > 0. Hence
(∂/∂θ) log f(x|θ) = −1/θ + x²/θ²
and
(∂²/∂θ²) log f(x|θ) = 1/θ² − 2x²/θ³.
Hence
I_1(θ) = −E_θ[ 1/θ² − 2X²/θ³ ] = 1/θ²
by a). Now
CRLB = [τ'(θ)]²/(nI_1(θ)) = θ²/n
where τ(θ) = θ.
c) This is a regular exponential family so Σ_{i=1}^n X_i² is a complete sufficient statistic.
Since
E_θ[ Σ_{i=1}^n X_i² / n ] = θ,
the UMVUE is Σ_{i=1}^n X_i² / n.
6.21. (Aug. 2001 Qual): See Mukhopadhyay (2000, p. 377). Let X1 , ..., Xn be iid
N(θ, θ2 ) normal random variables with mean θ and variance θ2 . Let
T_1 = X̄ = (1/n) Σ_{i=1}^n X_i
and let
T_2 = c_n S = c_n √( Σ_{i=1}^n (X_i − X̄)² / (n − 1) )
where the constant c_n is such that E_θ[c_n S] = θ. You do not need to find the constant c_n.
Consider estimators W(α) of θ of the form
W(α) = αT_1 + (1 − α)T_2
where 0 ≤ α ≤ 1.
a) Find the variance of W(α).
b) Find the mean square error of W(α) in terms of Var_θ(T_1), Var_θ(T_2) and α.
c) Assume that
Var_θ(T_2) ≈ θ²/(2n).
Determine the value of α that gives the smallest mean square error. (Hint: Find the
MSE as a function of α, then take the derivative with respect to α. Set the derivative
equal to zero and use the above approximation for V arθ (T2). Show that your value of α
is indeed the global minimizer.)
Solution. a) In normal samples, X̄ and S are independent, hence
Var_θ(W(α)) = α² Var_θ(T_1) + (1 − α)² Var_θ(T_2).
b) Since E[W(α)] = αθ + (1 − α)θ = θ, W(α) is unbiased and
MSE(α) = Var_θ(W(α)) = α² Var_θ(T_1) + (1 − α)² Var_θ(T_2).
c) Now
(d/dα) MSE(α) = 2α Var_θ(T_1) − 2(1 − α) Var_θ(T_2)  set= 0.
Hence
α̂ = Var_θ(T_2) / ( Var_θ(T_1) + Var_θ(T_2) ) ≈ (θ²/2n) / ( 2θ²/2n + θ²/2n ) = 1/3
using the approximation and the fact that Var(X̄) = θ²/n. Note that the second deriva-
tive
(d²/dα²) MSE(α) = 2[ Var_θ(T_1) + Var_θ(T_2) ] > 0,
so α = 1/3 is a local min. The critical value was unique, hence 1/3 is the global min.
6.22. (Aug. 2003 Qual): Suppose that X_1, ..., X_n are iid normal with
mean 0 and variance σ². Consider the following estimators: T_1 = (1/2)|X_1 − X_2| and
T_2 = √( (1/n) Σ_{i=1}^n X_i² ).
a) Is T_1 unbiased for σ? Evaluate the mean square error (MSE) of T_1.
b) Is T_2 unbiased for σ? If not, find a suitable multiple of T_2 which is unbiased for σ.
Solution. a) X_1 − X_2 ∼ N(0, 2σ²), so E(T_1) = (1/2) E|X_1 − X_2| = (1/2) √(2(2σ²)/π) = σ/√π ≠ σ,
and T_1 is biased. Now
E(T_1²) = (1/2) ∫_0^∞ u² (1/√(4πσ²)) e^{−u²/(4σ²)} du = σ²/2.
Hence V(T_1) = σ²(1/2 − 1/π) and
MSE(T_1) = σ²[ (1/√π − 1)² + 1/2 − 1/π ] = σ²[ 3/2 − 2/√π ].
b) X_i/σ has a N(0,1) distribution and Σ_{i=1}^n X_i²/σ² has a chi square distribution with n
degrees of freedom. Thus
E( √(Σ_{i=1}^n X_i²) / σ ) = √2 Γ((n+1)/2) / Γ(n/2),
and
E(T_2) = σ √2 Γ((n+1)/2) / ( √n Γ(n/2) ).
Therefore,
E( ( √n Γ(n/2) / (√2 Γ((n+1)/2)) ) T_2 ) = σ.
)
6.23. (Aug. 2003 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables with pdf (probability density function)
f(x) = (1/λ) exp( −x/λ )
where x and λ are both positive. Find the uniformly minimum variance unbiased esti-
mator (UMVUE) of λ2 .
Solution. This is a regular one parameter exponential family with complete sufficient
statistic T_n = Σ_{i=1}^n X_i ∼ G(n, λ). Hence E(T_n) = nλ and E(T_n²) = V(T_n) + (E(T_n))² =
nλ² + n²λ² = n(n + 1)λ². Thus E[ T_n²/(n(n + 1)) ] = λ², and the UMVUE of λ² is
T_n²/(n(n + 1)) by the LSU theorem.
for 0 < t < θ.
E(T) = (2n/(2n + 1)) θ and E(T²) = (2n/(2n + 2)) θ².
MSE(CT) = ( C (2n/(2n + 1)) θ − θ )² + C² [ (2n/(2n + 2)) θ² − ( (2n/(2n + 1)) θ )² ].
(d/dC) MSE(CT) = 2[ C (2n/(2n + 1)) θ − θ ][ 2nθ/(2n + 1) ] + 2C[ 2nθ²/(2n + 2) − 4n²θ²/(2n + 1)² ].
Solve (d/dC) MSE(CT)  set= 0 to get
C = 2(n + 1)/(2n + 1).
The MSE is a quadratic in C and the coefficient on C² is positive, hence the local min is
a global min.
6.26. (Aug. 2004 Qual): Let X1 , ..., Xn be a random sample from a distribution with
pdf
f(x) = 2x/θ²,  0 < x < θ.
Let T = cX̄ be an estimator of θ where c is a constant.
a) Find the mean square error (MSE) of T as a function of c (and of θ and n).
b) Find the value c that minimizes the MSE. Prove that your value is the minimizer.
Solution. a) E(X̄) = 2θ/3 and V(X̄) = θ²/(18n), so
MSE(c) = c²θ²/(18n) + ( (2θ/3)c − θ )².
b)
(d/dc) MSE(c) = 2cθ²/(18n) + 2( (2θ/3)c − θ )(2θ/3).
Set this equation equal to 0 and solve, so
cθ²/(9n) + (8/9)θ²c − (4/3)θ² = 0
or
c( 1/(9n) + 8/9 ) θ² = (4/3) θ²
or
c( 1/(9n) + 8n/(9n) ) = 4/3
or
c = ( 9n/(1 + 8n) ) (4/3) = 12n/(1 + 8n).
This is a global min since the MSE is a quadratic in c with a positive coefficient on c², or
because
(d²/dc²) MSE(c) = 2θ²/(18n) + 8θ²/9 > 0.
6.27. (Aug. 2004 Qual): Suppose that X1 , ..., Xn are iid Bernoulli(p) where n ≥ 2
and 0 < p < 1 is the unknown parameter.
a) Derive the UMVUE of ν(p), where ν(p) = e²p(1 − p).
b) Find the Cramér Rao lower bound for estimating ν(p) = e²p(1 − p).
Solution. a) Consider the statistic W = X_1(1 − X_2), which is an unbiased estimator of
ψ(p) = p(1 − p). The statistic T = Σ_{i=1}^n X_i is both complete and sufficient. The possible
values of W are 0 or 1. Let U = φ(T) be a function of T with E_p(U) = ν(p); try
U = a(X̄)² + bX̄ + c. Now
E(X̄) = E(X_1) = p and V(X̄) = V(X_1)/n = p(1 − p)/n since ΣX_i ∼ Bin(n, p). So
E[(X̄)²] = V(X̄) + [E(X̄)]² = p(1 − p)/n + p². So E_p(U) = a[p(1 − p)/n] + ap² + bp + c
= (a/n + b)p + (a − a/n)p² + c.
Since ν(p) = e²p − e²p², matching coefficients gives c = 0 and a − a/n = a(n − 1)/n = −e², or
a = −n e²/(n − 1).
Hence a/n + b = e², or
b = e² − a/n = e² + e²/(n − 1) = (n/(n − 1)) e².
So
U = (−n/(n − 1)) e² (X̄)² + (n/(n − 1)) e² X̄ = (n/(n − 1)) e² X̄(1 − X̄).
b) The FCRLB for τ(p) is [τ'(p)]²/(nI_1(p)). Now f(x) = p^x (1 − p)^{1−x}, so log f(x) =
x log(p) + (1 − x) log(1 − p). Hence
(∂ log f/∂p) = x/p − (1 − x)/(1 − p)
and
(∂² log f/∂p²) = −x/p² − (1 − x)/(1 − p)².
So
I_1(p) = −E( ∂² log f/∂p² ) = −( −p/p² − (1 − p)/(1 − p)² ) = 1/(p(1 − p)).
So
FCRLB_n = [e²(1 − 2p)]² / ( n/(p(1 − p)) ) = e⁴(1 − 2p)² p(1 − p) / n.
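An exact enumeration sketch over the Binomial(n, p) pmf confirming E_p(U) = e²p(1 − p) (n = 5 and p = 0.3 are illustrative assumptions):

```python
# Sketch: exact expectation of U = (n/(n-1)) e^2 Xbar(1 - Xbar) under Bin(n, p).
from math import comb, exp

n, p = 5, 0.3
ev = 0.0
for t in range(n + 1):  # T = sum X_i takes values 0, ..., n
    xbar = t / n
    u = (n / (n - 1)) * exp(2) * xbar * (1 - xbar)
    ev += u * comb(n, t) * p ** t * (1 - p) ** (n - t)
print(ev, exp(2) * p * (1 - p))  # equal up to rounding
```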
6.30. (Jan. 2009 Qual): Suppose that Y1 , ..., Yn are independent binomial(mi, ρ)
where the mi ≥ 1 are known constants. Let
T_1 = Σ_{i=1}^n Y_i / Σ_{i=1}^n m_i   and   T_2 = (1/n) Σ_{i=1}^n Y_i/m_i
be estimators of ρ.
a) Find MSE(T1).
b) Find MSE(T2 ).
c) Which estimator is better?
Hint: by the arithmetic–geometric–harmonic mean inequality,
(1/n) Σ_{i=1}^n m_i ≥ n / Σ_{i=1}^n (1/m_i).
Solution. a)
E(T_1) = Σ E(Y_i) / Σ m_i = Σ m_i ρ / Σ m_i = ρ,
so MSE(T_1) = V(T_1) =
(1/(Σ m_i)²) V(Σ Y_i) = (1/(Σ m_i)²) Σ V(Y_i) = (1/(Σ m_i)²) Σ m_i ρ(1 − ρ)
= ρ(1 − ρ) / Σ_{i=1}^n m_i.
b)
E(T_2) = (1/n) Σ E(Y_i)/m_i = (1/n) Σ m_i ρ/m_i = (1/n) Σ ρ = ρ,
so MSE(T_2) = V(T_2) =
(1/n²) V( Σ Y_i/m_i ) = (1/n²) Σ V(Y_i)/m_i² = (1/n²) Σ m_i ρ(1 − ρ)/m_i²
= ( ρ(1 − ρ)/n² ) Σ_{i=1}^n 1/m_i.
c) The hint
(1/n) Σ m_i ≥ n / Σ (1/m_i)
implies that
1/Σ m_i ≤ (1/n²) Σ (1/m_i).
Hence MSE(T_1) ≤ MSE(T_2), and T_1 is better.
6.31. (Sept. 2010 Qual): Let Y1 , ..., Yn be iid gamma(α = 10, β) random variables.
Let T = cȲ be an estimator of β where c is a constant.
a) Find the mean square error (MSE) of T as a function of c (and of β and n).
b) Find the value c that minimizes the MSE. Prove that your value is the minimizer.
Solution. a) E(Ȳ) = 10β and V(Ȳ) = 10β²/n, so E(T) = 10cβ, V(T) = c²10β²/n, and
MSE(c) = c²10β²/n + (10cβ − β)².
b)
(d/dc) MSE(c) = 2c·10β²/n + 2(10cβ − β)10β  set= 0
or [20β²/n] c + 200β²c − 20β² = 0
or c/n + 10c − 1 = 0 or c(1/n + 10) = 1
or
c = 1/(1/n + 10) = n/(10n + 1).
This value of c is unique, and
(d²/dc²) MSE(c) = 20β²/n + 200β² > 0,
so c is the minimizer.
6.32. (Jan. 2011 Qual): Let Y_1, ..., Y_n be independent identically distributed random
variables with pdf (probability density function)
f(y) = ν(2 − 2y)(2y − y²)^{ν−1} I_{(0,1)}(y)
where ν > 0 and n > 1. The indicator I_{(0,1)}(y) = 1 if 0 < y < 1 and I_{(0,1)}(y) = 0,
otherwise.
a) Find a complete sufficient statistic.
b) Find the Fisher information I_1(ν) if n = 1.
c) Find the Cramer Rao lower bound (CRLB) for estimating 1/ν.
d) Find the uniformly minimum variance unbiased estimator (UMVUE) of ν.
Hint: You may use the fact that T_n = −Σ_{i=1}^n log(2Y_i − Y_i²) ∼ G(n, 1/ν), and
E(T_n^r) = (1/ν^r) Γ(r + n)/Γ(n)
for r > −n. Also Γ(1 + x) = xΓ(x) for x > 0.
Solution. a) Since this distribution is a one parameter regular exponential family,
T_n = −Σ_{i=1}^n log(2Y_i − Y_i²) is complete.
b) Note that log(f(y|ν)) = log(ν) + log(2 − 2y) + (ν − 1) log(2y − y²). Hence
(d/dν) log(f(y|ν)) = 1/ν + log(2y − y²)
and
(d²/dν²) log(f(y|ν)) = −1/ν².
Since this family is a 1P-REF, I_1(ν) = −E(−1/ν²) = 1/ν².
c) CRLB = [τ'(ν)]²/(nI_1(ν)) = (1/ν⁴)/(n/ν²) = 1/(nν²).
d) E[T_n^{−1}] = ν Γ(−1 + n)/Γ(n) = ν/(n − 1). So (n − 1)/T_n is the UMVUE of ν by LSU.
6.33. (Sept. 2011 Qual): Let Y1 , ..., Yn be iid random variables from a distribution
with pdf
f(y) = θ / ( 2(1 + |y|)^{θ+1} )
where θ > 0 and y is real. Then W = log(1 + |Y|) has pdf f(w) = θe^{−wθ} for w > 0.
a) Find a complete sufficient statistic.
b) Find the (Fisher) information number I1(θ).
c) Find the uniformly minimum variance unbiased estimator (UMVUE) for θ.
Solution. a) Since f(y) = (θ/2) exp[−(θ + 1) log(1 + |y|)] is a 1P-REF,
T = Σ_{i=1}^n log(1 + |Y_i|) is a complete sufficient statistic.
b) Since this is an exponential family, log(f(y|θ)) = log(θ/2) − (θ + 1) log(1 + |y|) and
(∂/∂θ) log(f(y|θ)) = 1/θ − log(1 + |y|).
Hence
(∂²/∂θ²) log(f(y|θ)) = −1/θ²
and
I_1(θ) = −E_θ( (∂²/∂θ²) log(f(Y|θ)) ) = 1/θ².
c) The complete sufficient statistic T ∼ G(n, 1/θ). Hence the UMVUE of θ is (n − 1)/T
since for r > −n,
E(T^r) = (1/θ^r) Γ(r + n)/Γ(n).
So
E(T^{−1}) = θ Γ(n − 1)/Γ(n) = θ/(n − 1).
6.34. (Similar to Sept. 2010 Qual): Suppose that X1 , X2 , ..., Xn are independent
identically distributed random variables from normal distribution with unknown mean µ
and known variance σ 2 . Consider the parametric function g(µ) = e2µ.
a) Derive the uniformly minimum variance unbiased estimator (UMVUE) of g(µ).
b) Find the Cramer-Rao lower bound (CRLB) for the variance of an unbiased esti-
mator of g(µ).
c) Is the CRLB attained by the variance of the UMVUE of g(µ)?
Solution. a) Note that X̄ is a complete and sufficient statistic for µ and X̄ ∼
N(µ, σ²/n). We know that E(e^{2X̄}), the mgf of X̄ evaluated at t = 2, is given by
e^{2µ + 2σ²/n}. Thus the UMVUE of e^{2µ} is e^{−2σ²/n} e^{2X̄}.
b) The CRLB for the variance of an unbiased estimator of g(µ) is given by 4σ²e^{4µ}/n,
whereas
V( e^{−2σ²/n} e^{2X̄} ) = e^{−4σ²/n} E(e^{4X̄}) − e^{4µ}   (3)
= e^{−4σ²/n} e^{4µ + 8σ²/n} − e^{4µ}
= e^{4µ} [ e^{4σ²/n} − 1 ]
> 4σ²e^{4µ}/n
since e^x > 1 + x for all x > 0. Hence the CRLB is not attained.
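A numeric sketch comparing the exact UMVUE variance with the CRLB (µ = 0 and σ = 1 are illustrative assumptions):

```python
# Sketch: UMVUE variance e^{4 mu}(e^{4 sigma^2/n} - 1) vs. CRLB 4 sigma^2 e^{4 mu}/n.
import numpy as np

mu, sigma = 0.0, 1.0
for n in (5, 20, 100):
    v = np.exp(4 * mu) * (np.exp(4 * sigma ** 2 / n) - 1)
    crlb = 4 * sigma ** 2 * np.exp(4 * mu) / n
    print(n, v, crlb, v > crlb)  # the variance exceeds the bound for every n
```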
6.36. (Aug. 2012 Qual): Let Y_1, ..., Y_n be iid from a one parameter exponential
family with pdf or pmf f(y|θ) with complete sufficient statistic T(Y) = Σ_{i=1}^n t(Y_i) where
t(Y_i) ∼ θX and X has a known distribution with known mean E(X) and known variance
V(X). Let W_n = cT(Y) be an estimator of θ where c is a constant.
a) Find the mean square error (MSE) of Wn as a function of c (and of n, E(X) and
V (X)).
b) Find the value of c that minimizes the MSE. Prove that your value is the minimizer.
c) Find the uniformly minimum variance unbiased estimator (UMVUE) of θ.
Solution. See Theorem 6.5.
a) E(W_n) = c Σ_{i=1}^n E(t(Y_i)) = cnθE(X), and
V(W_n) = c² Σ_{i=1}^n V(t(Y_i)) = c²nθ²V(X). Hence MSE(c) ≡ MSE(W_n) =
V(W_n) + [E(W_n) − θ]² = c²nθ²V(X) + (cnθE(X) − θ)².
b) Thus
(d/dc) MSE(c) = 2cnθ²V(X) + 2(cnθE(X) − θ)nθE(X)  set= 0,
or
c( nθ²V(X) + n²θ²[E(X)]² ) = nθ²E(X),
or
c_M = E(X) / ( V(X) + n[E(X)]² ),
which is unique. Now
(d²/dc²) MSE(c) = 2[ nθ²V(X) + n²θ²[E(X)]² ] > 0.
So MSE(c) is convex and c = c_M is the minimizer.
c) Let c_U = 1/(nE(X)). Then E[c_U T(Y)] = θ, hence c_U T(Y) is the UMVUE of θ by the
Lehmann Scheffe theorem.
6.37. (Jan. 2013 qual): Let X_1, ..., X_n be a random sample from a Poisson (λ) dis-
tribution. Let X̄ and S² denote the sample mean and the sample variance, respectively.
a) Find the uniformly minimum variance unbiased estimator (UMVUE) of λ.
b) Find E(S²|X̄).
c) Show that Var(S²) > Var(X̄).
Solution: a) Since f(x) = (1/x!) e^{−λ} exp[log(λ)x] I(x ∈ {0, 1, ...}) is a 1P-REF, Σ_{i=1}^n X_i is a
complete sufficient statistic and E(X̄) = λ. Hence X̄ = (Σ_{i=1}^n X_i)/n is the UMVUE of
λ by the LSU theorem.
b) E(S²) = λ, so S² is an unbiased estimator of λ. Hence E(S²|X̄) is the unique UMVUE
of λ by the LSU theorem. Thus E(S²|X̄) = X̄ by part a).
c) By Steiner's formula, V(S²) = V(E(S²|X̄)) + E(V(S²|X̄)) = V(X̄) + E(V(S²|X̄)) >
V(X̄). (To show V(S²) ≥ V(X̄), note that X̄ is the UMVUE and S² is an unbiased esti-
mator of λ. Hence V(X̄) ≤ V(S²) by the definition of a UMVUE, and the inequality is
strict for at least one value of λ since the UMVUE is unique.)
6.38. (Aug. 2012 Qual): Let X1 , ..., Xn be a random sample from a Poisson distri-
bution with mean θ.
a) Show that T = Σ_{i=1}^n X_i is a complete sufficient statistic for θ.
b) For a > 0, find the uniformly minimum variance unbiased estimator (UMVUE) of
g(θ) = e^{aθ}.
c) Prove the identity:
E( 2^{X_1} | T ) = (1 + 1/n)^T.
Solution: a) See solution to Problem 6.37 a).
b) The complete sufficient statistic T = Σ_{i=1}^n X_i ∼ Poisson (nθ). Hence the mgf of
T is
E(e^{tT}) = m_T(t) = exp[nθ(e^t − 1)].
We want E(e^{tT}) = e^{aθ}. Thus n(e^t − 1) = a, or e^t = a/n + 1, or e^t = (a + n)/n, or
t = log[(a + n)/n]. Thus
e^{tT} = (e^t)^T = ( (a + n)/n )^T = exp[ T log((a + n)/n) ]
is an unbiased estimator of e^{aθ}, and hence the UMVUE of e^{aθ} by the LSU theorem.
c) Note that E(2^{X_1}) = exp[θ(2 − 1)] = e^θ, so 2^{X_1} is an unbiased estimator of e^θ. By
the LSU theorem, E(2^{X_1}|T) is the unique UMVUE of e^θ. But taking a = 1 in b) shows
that ((n + 1)/n)^T = (1 + 1/n)^T is the UMVUE of e^θ. Hence E(2^{X_1}|T) = (1 + 1/n)^T.
c) Find E(X̄ − µ)³. [Hint: Show that if Y is a N(0, σ²) random variable, then
E(Y³) = 0].
d) Using c), find the UMVUE of µ³.
Solution. a) E(X̄ − µ)² = Var(X̄) = σ²/n.
b) From a), E(X̄² − 2µX̄ + µ²) = E(X̄²) − µ² = σ²/n, or E(X̄²) − σ²/n = µ², or
E(X̄² − σ²/n) = µ².
Since X̄ is a complete and sufficient statistic, and X̄² − σ²/n is an unbiased estimator
of µ² and is a function of X̄, the UMVUE of µ² is X̄² − σ²/n by the Lehmann-Scheffé
Theorem.
c) Let Y = X̄ − µ ∼ N(0, σ²/n). Then E(Y³) = ∫_{−∞}^∞ y³ f_Y(y) dy = 0, because the
integrand is an odd function.
d) E(X̄ − µ)³ = E(X̄³ − 3µX̄² + 3µ²X̄ − µ³) = E(X̄³) − 3µE(X̄²) + 3µ²E(X̄) − µ³
= E(X̄³) − 3µ(σ²/n + µ²) + 3µ³ − µ³ = E(X̄³) − 3µσ²/n − µ³.
Thus E(X̄³) − 3µσ²/n − µ³ = 0, so replacing µ with its unbiased estimator X̄ in the
middle term, we get
E( X̄³ − 3X̄σ²/n ) = µ³.
Since X̄ is a complete and sufficient statistic, and X̄³ − 3X̄σ²/n is an unbiased estimator
of µ³ and is a function of X̄, the UMVUE of µ³ is X̄³ − 3X̄σ²/n by the Lehmann-Scheffé
Theorem.
6.40. (Jan. 2014 Qual): Let Y1 , ..., Yn be iid from a uniform U(0, θ) distribution
where θ > 0. Then T = max(Y1 , ..., Yn) is a complete sufficient statistic.
a) Find E(T k ) for k > 0.
b) Find the UMVUE of θk for k > 0.
Solution: a) The pdf of T is f(t) = nt^{n−1}/θ^n I(0 < t < θ). Hence
E(T^k) = ∫_0^θ t^k (nt^{n−1}/θ^n) dt = ∫_0^θ (nt^{k+n−1}/θ^n) dt = nθ^{k+n}/((k + n)θ^n) = ( n/(k + n) ) θ^k.
b) Thus the UMVUE of θ^k is ( (k + n)/n ) T^k.
6.41. (Jan. 2014 Qual): Let Y_1, ..., Y_n be iid from a distribution with probability
density function (pdf)
f(y) = θ / (1 + y)^{θ+1}
where y > 0 and θ > 0.
a) Find a minimal sufficient statistic for θ.
b) Is the statistic found in a) complete? (prove or disprove)
c) Find the Fisher information I1(θ) if n = 1.
d) Find the Cramer Rao lower bound (CRLB) for estimating θ2 .
6.42. (Aug. 2014 Qual): Let X_1, ..., X_n be iid from a distribution with pdf
f(x) = θx^{θ−1}, 0 < x < 1, where θ > 0, and let T = −Σ_{i=1}^n log(X_i) ∼ G(n, 1/θ).
Or use
E(T²) = V(T) + [E(T)]² = n/θ² + (n/θ)² = (n² + n)/θ².
Hence
( Γ(n)/Γ(2 + n) ) T² = T²/(n(n + 1))
is the UMVUE of 1/θ² by LSU.
d) Now log(f(x|θ)) = log(θ) + (θ − 1) log(x). So (d/dθ) log(f(x|θ)) = 1/θ + log(x), and
(d²/dθ²) log(f(x|θ)) = −1/θ². This family is a 1P-REF, so
I_1(θ) = −E_θ[ (d²/dθ²) log(f(x|θ)) ] = 1/θ².
e) Now τ(θ) = 1/θ² with τ'(θ) = −2θ^{−3}, and CRLB = [τ'(θ)]²/(nI_1(θ)) = 4/(nθ⁴).
6.43. (Jan. 2015 QUAL): (Ȳ, S²) is complete sufficient.
a) Ȳ + S² by LSU.
b) Using c_n Ȳ/S², get E( c_n Ȳ/S² ) = c_n µ E(1/S²).
Using (n − 1)S²/σ² ∼ χ²_{n−1}, show E(1/S²) = (1/σ²)(n − 1)/(n − 3). So c_n = (n − 3)/(n − 1).
6.44. (Aug. 2016 Qual): Assume the service time of a customer at a store follows
a Pareto distribution with minimum waiting time equal to θ minutes. The maximum
length of the service is dependent on the type of the service. Suppose X1 , ..., Xn is a
random sample of service times of n customers, where each Xi has a Pareto density given
by
f(x|θ) = 4θ⁴ x^{−5} for x ≥ θ, and f(x|θ) = 0 for x < θ.
d)
MSE(θ̂) = MSE(T(x)) = Var(T(x)) + Bias(T(x))²
= 4nθ²/( (4n − 1)²(4n − 2) ) + θ²/(4n − 1)².
f(x|θ) = ( Γ(2θ)/(Γ(θ)Γ(θ)) ) [x(1 − x)]^{θ−1}
b) If possible, find the UMP level α test for Ho : θ = 1 vs. H1 : θ > 1.
Solution. For both a) and b), the test is reject Ho iff Π_{i=1}^n x_i(1 − x_i) > c where
P_{θ=1}[ Π_{i=1}^n X_i(1 − X_i) > c ] = α.
7.10. (Jan. 2001 SIU and 1990 Univ. MN Qual): Let X1 , ..., Xn be a random sample
from the distribution with pdf
f(x, θ) = x^{θ−1} e^{−x} / Γ(θ),  x > 0, θ > 0.
Find the uniformly most powerful level α test of
H: θ = 1 versus K: θ > 1.
Solution. The UMP test rejects H if Π_{i=1}^n X_i > c, where
P_H( Π_{i=1}^n X_i > c ) = P_H( Σ log(X_i) > log(c) ) = α.
7.11. (Jan 2001 Qual, see Aug 2013 Qual): Let X1 , ..., Xn be independent identically
distributed random variables from a N(µ, σ 2 ) distribution where the variance σ 2 is known.
We want to test H0 : µ = µ0 against H1 : µ 6= µ0 .
a) Derive the likelihood ratio test.
b) Let λ be the likelihood ratio. Show that −2 log λ is a function of (X − µ0 ).
c) Assuming that H0 is true, find P (−2 log λ > 3.84).
Solution. a) The likelihood function
L(µ) = (2πσ²)^{−n/2} exp[ (−1/(2σ²)) Σ(x_i − µ)² ]
and the MLE for µ is µ̂ = x̄. Thus the numerator of the likelihood ratio test statistic is
L(µ_0) and the denominator is L(x̄). So the test is reject H_0 if λ(x) = L(µ_0)/L(x̄) ≤ c
where α = P_{µ_0}(λ(X) ≤ c).
b) As a statistic, log λ = log L(µ_0) − log L(X̄) =
−(1/(2σ²)) [ Σ(X_i − µ_0)² − Σ(X_i − X̄)² ] = (−n/(2σ²)) [X̄ − µ_0]², since Σ(X_i − µ_0)² =
Σ(X_i − X̄ + X̄ − µ_0)² = Σ(X_i − X̄)² + n(X̄ − µ_0)². So −2 log λ = (n/σ²)[X̄ − µ_0]².
c) −2 log λ ∼ χ²₁ and from a chi–square table, P(−2 log λ > 3.84) = 0.05.
7.12. (Aug. 2001 Qual): Let X1 , ..., Xn be iid from a distribution with pdf
f(x) = (2x/λ) exp(−x²/λ)
where λ and x are both positive. Find the level α UMP test for Ho : λ = 1 vs H1 : λ > 1.
7.13. (Jan. 2003 Qual): Let X1 , ..., Xn be iid from a distribution with pdf
f(x|θ) = (log θ) θ^x / (θ − 1)
where 0 < x < 1 and θ > 1. Find the UMP (uniformly most powerful) level α test of
Ho : θ = 2 vs. H1 : θ = 4.
Solution. Let θ_1 = 4. By the Neyman Pearson lemma, reject Ho if
f(x|θ_1)/f(x|2) = ( (log(θ_1)/(θ_1 − 1)) / (log(2)/(2 − 1)) )^n ( θ_1^{Σx_i} / 2^{Σx_i} ) > k
iff
( log(θ_1)/((θ_1 − 1) log(2)) )^n (θ_1/2)^{Σx_i} > k
iff
(θ_1/2)^{Σx_i} > k'
iff
Σx_i log(θ_1/2) > c'.
So reject Ho iff ΣX_i > c where P_{θ=2}( ΣX_i > c ) = α.
7.14. (Aug. 2003 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a distribution with pdf
f(x) = x² exp( −x²/(2σ²) ) / ( σ³ √2 Γ(3/2) )
Solution. a) By the NP lemma reject Ho if
f(x|σ = 2)/f(x|σ = 1) > k'.
The LHS =
(1/2^{3n}) exp[ (−1/8) Σx_i² ] / exp[ (−1/2) Σx_i² ].
So reject Ho if
(1/2^{3n}) exp[ Σx_i² (1/2 − 1/8) ] > k'
or if Σx_i² > k where P_{Ho}( ΣX_i² > k ) = α.
b) In the above argument, with any σ_1 > 1, get
Σx_i² ( 1/2 − 1/(2σ_1²) )
and
1/2 − 1/(2σ_1²) > 0
for any σ_1² > 1. Hence the UMP test is the same as in a).
7.15. (Jan. 2004 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a distribution with pdf
f(x) = (2/(σ√(2π))) (1/x) exp( −[log(x)]²/(2σ²) ).
Solution. a) By the NP lemma reject Ho if
f(x|σ = 2)/f(x|σ = 1) > k'.
The LHS =
(1/2^n) exp[ (−1/8) Σ[log(x_i)]² ] / exp[ (−1/2) Σ[log(x_i)]² ].
So reject Ho if
(1/2^n) exp[ Σ[log(x_i)]² (1/2 − 1/8) ] > k'
or if Σ[log(X_i)]² > k where P_{Ho}( Σ[log(X_i)]² > k ) = α.
b) In the above argument, with any σ_1 > 1, get
Σ[log(x_i)]² ( 1/2 − 1/(2σ_1²) )
and
1/2 − 1/(2σ_1²) > 0
for any σ_1² > 1. Hence the UMP test is the same as in a).
7.16. (Aug. 2004 Qual): Suppose X is an observable random variable with its pdf
given by f(x), x ∈ R. Consider two functions defined as follows:
f_0(x) = (3/64) x² for 0 ≤ x ≤ 4, and 0 elsewhere;
f_1(x) = (3/16) √x for 0 ≤ x ≤ 4, and 0 elsewhere.
Determine the most powerful level α test for H0 : f(x) = f0 (x) versus Ha : f(x) =
f1 (x) in the simplest implementable form. Also, find the power of the test when α = 0.01
Solution. The most powerful test will have the following form:
reject H_0 iff f_1(x)/f_0(x) > k.
But f_1(x)/f_0(x) = 4x^{−3/2}, and hence we reject H_0 iff X is small, i.e. reject H_0 if X < k
for some constant k. This test must also have size α, that is we require:
α = P(X < k when f(x) = f_0(x)) = ∫_0^k (3/64) x² dx = k³/64,
so that k = 4α^{1/3}.
For the power, when k = 4α^{1/3},
P[X < k when f(x) = f_1(x)] = ∫_0^k (3/16) √x dx = (1/8) k^{3/2} = √α.
When α = 0.01, the power is √0.01 = 0.10.
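A Monte Carlo sketch of the size and power using inverse-cdf sampling (F_0(x) = x³/64 and F_1(x) = x^{3/2}/8 on [0, 4]; the seed and 10⁶ draws are arbitrary):

```python
# Sketch: size and power of the test "reject H0 if X < k" with k = 4 alpha^{1/3}.
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.01
k = 4 * alpha ** (1 / 3)
u = rng.random(1_000_000)
x0 = 4 * u ** (1 / 3)   # draws from f0 via inverse cdf
x1 = 4 * u ** (2 / 3)   # draws from f1 via inverse cdf
print((x0 < k).mean(), alpha)           # size, about 0.01
print((x1 < k).mean(), np.sqrt(alpha))  # power, about 0.10
```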
7.17. (Sept. 2005 Qual): Let X be one observation from the probability density
function
f(x) = θxθ−1 , 0 < x < 1, θ > 0.
a) Find the most powerful level α test of H0 : θ = 1 versus H1 : θ = 2.
b) For testing H_0: θ ≤ 1 versus H_1: θ > 1, find the size and the power function of
the test which rejects H_0 if X > 5/8.
c) Is there a UMP test of H0 : θ ≤ 1 versus H1 : θ > 1? If so, find it. If not, prove so.
7.19. (Jan. 2009 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from a half normal HN(µ, σ 2) distribution with pdf
f(x) = (2/(σ√(2π))) exp( −(x − µ)²/(2σ²) )
where σ > 0 and x > µ and µ is real. Assume that µ is known.
a) What is the UMP (uniformly most powerful) level α test for
H0 : σ 2 = 1 vs. H1 : σ 2 = 4 ?
b) If possible, find the UMP level α test for H0 : σ 2 = 1 vs. H1 : σ 2 > 1.
Solution. a) By the NP lemma reject Ho if
f(x|σ² = 4)/f(x|σ² = 1) > k'.
The LHS =
(1/2^n) exp[ −Σ(x_i − µ)²/(2·4) ] / exp[ −Σ(x_i − µ)²/2 ].
So reject Ho if
(1/2^n) exp[ Σ(x_i − µ)² ( −1/8 + 1/2 ) ] > k'
or if Σ(x_i − µ)² > k where P_{σ²=1}( Σ(X_i − µ)² > k ) = α.
Under Ho, Σ(X_i − µ)² ∼ χ²_n, so k = χ²_n(1 − α) where P(χ²_n > χ²_n(1 − α)) = α.
b) In the above argument,
−1/(2·4) + 0.5 = −1/8 + 0.5 > 0,
and
−1/(2σ_1²) + 0.5 > 0
for any σ_1² > 1. Hence the UMP test is the same as in a).
Alternatively, use the fact that this is an exponential family where w(σ²) = −1/(2σ²)
is an increasing function of σ², with T(X_i) = (X_i − µ)². Hence the test in a) is UMP for
a) and b) by Theorem 7.3.
7.20. (Aug. 2009 Qual): Suppose that the test statistic T (X) for testing H0 : λ = 1
versus H1 : λ > 1 has an exponential(1/λ1 ) distribution if λ = λ1 . The test rejects H0 if
T (X) < log(100/95).
a) Find the power of the test if λ1 = 1.
b) Find the power of the test if λ1 = 50.
c) Find the pvalue of this test.
Solution. E[T(X)] = 1/λ_1 and the power = P(test rejects H_0) = P_{λ_1}(T(X) <
log(100/95)) = F_{λ_1}(log(100/95))
= 1 − exp(−λ_1 log(100/95)) = 1 − (95/100)^{λ_1}.
a) Power = 1 − exp(− log(100/95)) = 1 − exp(log(95/100)) = 0.05.
b) Power = 1 − (95/100)^{50} = 0.923055.
c) Let T0 be the observed value of T (X). Then pvalue = P (W ≤ T0 ) where W ∼
exponential(1) since under H0 , T (X) ∼ exponential(1). So pvalue = 1 − exp(−T0).
7.21. (Aug. 2009 Qual): Let X1 , ..., Xn be independent identically distributed ran-
dom variables from a Burr type X distribution with pdf
f(x) = 2τ x e^{−x²} (1 − e^{−x²})^{τ−1}
where x > 0 and τ > 0.
Solution. a) By the NP lemma reject Ho if f(x|τ = 4)/f(x|τ = 2) > k'. The ratio
f(x|τ = 4)/f(x|τ = 2) = 2^n Π_{i=1}^n (1 − e^{−x_i²})².
So reject Ho if
Π_{i=1}^n (1 − e^{−x_i²})² > k'
or
Π_{i=1}^n (1 − e^{−x_i²}) > c
or
Σ_{i=1}^n log(1 − e^{−x_i²}) > d
where
α = P_{τ=2}( Π_{i=1}^n (1 − e^{−X_i²}) > c ).
which gives the same test as in a).
7.22. (Jan. 2010 Qual): Let X1 , ..., Xn be independent identically distributed random
variables from an inverse exponential distribution with pdf
f(x) = (θ/x²) exp( −θ/x )
7.24. (Sept. 2010 Qual): The pdf of a bivariate normal distribution is f(x, y) =
( 1/(2πσ_1σ_2(1 − ρ²)^{1/2}) ) exp( (−1/(2(1 − ρ²))) [ ((x − µ_1)/σ_1)² − 2ρ((x − µ_1)/σ_1)((y − µ_2)/σ_2) + ((y − µ_2)/σ_2)² ] )
where −1 < ρ < 1, σ1 > 0, σ2 > 0, while x, y, µ1 , and µ2 are all real. Let (X1 , Y1 ), ..., (Xn, Yn )
be a random sample from a bivariate normal distribution.
Let θ̂(x, y) be the observed value of the MLE of θ, and let θ̂(X, Y) be the MLE as a
random variable. Let the (unrestricted) MLEs be µ̂_1, µ̂_2, σ̂_1, σ̂_2, and ρ̂. Then
T_1 = Σ_{i=1}^n ((x_i − µ̂_1)/σ̂_1)² = nσ̂_1²/σ̂_1² = n,  T_3 = Σ_{i=1}^n ((y_i − µ̂_2)/σ̂_2)² = nσ̂_2²/σ̂_2² = n,
and T_2 = Σ_{i=1}^n ((x_i − µ̂_1)/σ̂_1)((y_i − µ̂_2)/σ̂_2) = nρ̂.
Consider testing H0 : ρ = 0 vs. HA : ρ 6= 0. The (restricted) MLEs for µ1 , µ2 , σ1 and
σ2 do not change under H0 , and hence are still equal to µ̂1 , µ̂2 , σ̂1 , and σ̂2 .
a) Using the above information, find the likelihood ratio test for H0 : ρ = 0 vs.
HA : ρ 6= 0. Denote the likelihood ratio test statistic by λ(x, y).
b) Find the large sample (asymptotic) likelihood ratio test that uses test statistic
−2 log(λ(x, y)).
Solution. a) Let k = 2πσ_1σ_2(1 − ρ²)^{1/2}. Then the likelihood L(θ) =
(1/k^n) exp( (−1/(2(1 − ρ²))) Σ_{i=1}^n [ ((x_i − µ_1)/σ_1)² − 2ρ((x_i − µ_1)/σ_1)((y_i − µ_2)/σ_2) + ((y_i − µ_2)/σ_2)² ] ).
Hence
L(θ̂) = ( 1/[2πσ̂_1σ̂_2(1 − ρ̂²)^{1/2}]^n ) exp( (−1/(2(1 − ρ̂²))) [T_1 − 2ρ̂T_2 + T_3] )
= ( 1/[2πσ̂_1σ̂_2(1 − ρ̂²)^{1/2}]^n ) exp(−n)
and
L(θ̂_0) = ( 1/[2πσ̂_1σ̂_2]^n ) exp( (−1/2) [T_1 + T_3] )
= ( 1/[2πσ̂_1σ̂_2]^n ) exp(−n).
Thus λ(x, y) =
L(θ̂_0)/L(θ̂) = (1 − ρ̂²)^{n/2}.
So reject H0 if λ(x, y) ≤ c where α = supθ ∈Θo P (λ(X, Y ) ≤ c). Here Θo is the set of
θ = (µ1 , µ2 , σ1, σ2, ρ) such that the µi are real, σi > 0 and ρ = 0, i.e., such that Xi and
Yi are independent.
b) Since the unrestricted MLE has one more free parameter than the restricted MLE,
−2 log(λ(X, Y)) ≈ χ²₁, and the approximate LRT rejects H_0 if −2 log λ(x, y) > χ²_{1,1−α}
where P(χ²₁ > χ²_{1,1−α}) = α.
7.26. (Aug. 2012 Qual): Let Y1 , ..., Yn be independent identically distributed random
variables with pdf
f(y) = e^y (1/λ) exp( (−1/λ)(e^y − 1) ) I(y ≥ 0)
where y > 0 and λ > 0.
a) Show that W = e^Y − 1 ∼ (λ/2) χ²₂.
b) What is the UMP (uniformly most powerful) level α test for
H0 : λ = 2 versus H1 : λ > 2?
c) If n = 20 and α = 0.05, then find the power β(3.8386) of the above UMP test if
λ = 3.8386. Let P (χ2d ≤ χ2d,δ ) = δ. The tabled values below give χ2d,δ .
d \ δ   0.01    0.05    0.1     0.25    0.75    0.9     0.95    0.99
20      8.260   10.851  12.443  15.452  23.828  28.412  31.410  37.566
30      14.953  18.493  20.599  24.478  34.800  40.256  43.773  50.892
40      22.164  26.509  29.051  33.660  45.616  51.805  55.758  63.691
Solution. b) This family is a regular one parameter exponential family where w(λ) =
−1/λ is increasing. Hence the level α UMP test rejects H_0 when
Σ_{i=1}^n (e^{y_i} − 1) > k where α = P_2( Σ_{i=1}^n (e^{Y_i} − 1) > k ) = P_2(T(Y) > k).
c) Since T(Y) ∼ (λ/2) χ²_{2n}, 2T(Y)/λ ∼ χ²_{2n}. Hence
α = 0.05 = P_2(T(Y) > k) = P(χ²_{40} > χ²_{40,1−α}), so k = 55.758, and
β(λ) = P_λ(T(Y) > 55.758) = P( 2T(Y)/λ > 2(55.758)/λ ) = P( χ²_{40} > 2(55.758)/λ )
= P( χ²_{40} > 2(55.758)/3.8386 ) = P(χ²_{40} > 29.051) = 1 − 0.1 = 0.9.
7.27. (Jan. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed N(µ =
0, σ 2) random variables with pdf
f(y) = ( 1/√(2πσ²) ) exp( −y²/(2σ²) )
c) If n = 20 and α = 0.05, then find the power β(3.8027) of the above UMP test if
σ² = 3.8027. Let P(χ²_d ≤ χ²_{d,δ}) = δ. The tabled values below give χ²_{d,δ}.
d \ δ   0.01    0.05    0.1     0.25    0.75    0.9     0.95    0.99
20      8.260   10.851  12.443  15.452  23.828  28.412  31.410  37.566
30      14.953  18.493  20.599  24.478  34.800  40.256  43.773  50.892
40      22.164  26.509  29.051  33.660  45.616  51.805  55.758  63.691
Solution. b) This family is a regular one parameter exponential family where w(σ²) =
−1/(2σ²) is increasing. Hence the level α UMP test rejects H_0 when
Σ_{i=1}^n y_i² > k where α = P_1( Σ_{i=1}^n Y_i² > k ) = P_1(T(Y) > k).
c) Since T(Y) ∼ σ²χ²_n, T(Y)/σ² ∼ χ²_n. Hence
α = 0.05 = P_1(T(Y) > k) = P(χ²_{20} > χ²_{20,1−α}), so k = 31.410, and
β(σ²) = P_{σ²}(T(Y) > 31.41) = P( T(Y)/σ² > 31.41/σ² ) = P( χ²_{20} > 31.41/3.8027 )
= P(χ²_{20} > 8.260) = 1 − 0.01 = 0.99.
7.28. (Aug. 2013 Qual): Let Y1 , ..., Yn be independent identically distributed random
variables with pdf
f(y) = (y/σ²) exp( −(1/2)(y/σ)² )
where σ > 0 and y ≥ 0.
a) Show W = Y 2 ∼ σ 2χ22 . Equivalently, show Y 2 /σ 2 ∼ χ22.
b) What is the UMP (uniformly most powerful) level α test for
H0 : σ = 1 versus H1 : σ > 1?
c) If n = 20 and α = 0.05, then find the power β(√1.9193) of the above UMP test if
σ = √1.9193. Let P(χ²_d ≤ χ²_{d,δ}) = δ. The tabled values for problem 7.27 give χ²_{d,δ}.
Solution. a) Let X = Y²/σ² = t(Y). Then Y = σ√X = t^{−1}(X). Hence
dt^{−1}(x)/dx = σ/(2√x)
and the pdf of X is
g(x) = f_Y(t^{−1}(x)) |dt^{−1}(x)/dx| = (σ√x/σ²) exp( −(1/2)(σ√x/σ)² ) ( σ/(2√x) ) = (1/2) exp(−x/2),
which is the χ²₂ pdf. Thus W = Y² = σ²X ∼ σ²χ²₂.
c) Since T(Y) ∼ σ²χ²_{2n}, T(Y)/σ² ∼ χ²_{2n}. Hence
α = 0.05 = P_1(T(Y) > k) = P(χ²_{40} > χ²_{40,1−α}), so k = 55.758, and
β(σ) = P_σ(T(Y) > 55.758) = P( T(Y)/σ² > 55.758/σ² ) = P( χ²_{40} > 55.758/1.9193 )
= P(χ²_{40} > 29.051) = 1 − 0.1 = 0.9.
7.29. (Aug. 2012 Qual): Consider independent random variables X1 , ..., Xn, where
Xi ∼ N(θi , σ 2), 1 ≤ i ≤ n, and σ 2 is known.
a) Find the most powerful test of
H_0: θ_i = 0 for all i  versus  H_1: θ_i = θ_{i0} for all i,
where the θ_{i0} are known. Derive (and simplify) the exact critical region for a level α test.
b) Find the likelihood ratio test of
H_0: θ_i = 0 for all i  versus  H_1: θ_i ≠ 0 for some i.
Derive (and simplify) the exact critical region for a level α test.
c) Find the power of the test in (a), when θi0 = n−1/3 , ∀i. What happens to this
power expression as n → ∞?
Solution: a) In the Neyman Pearson lemma, let θ = 0 if H_0 is true and θ = 1 if H_1 is
true. Then want to find f(x|θ = 1)/f(x|θ = 0) ≡ f_1(x)/f_0(x). Since
f(x) = ( 1/(√(2π) σ)^n ) exp[ (−1/(2σ²)) Σ_{i=1}^n (x_i − θ_i)² ],
f_1(x)/f_0(x) = exp[ (−1/(2σ²)) Σ(x_i − θ_{i0})² ] / exp[ (−1/(2σ²)) Σx_i² ]
= exp( (−1/(2σ²)) [ Σ(x_i − θ_{i0})² − Σx_i² ] ) = exp( (−1/(2σ²)) [ −2Σx_iθ_{i0} + Σθ_{i0}² ] ) > k'
iff Σ_{i=1}^n x_iθ_{i0} > k. Under Ho, Σ_{i=1}^n X_iθ_{i0} ∼ N(0, σ² Σ_{i=1}^n θ_{i0}²).
Thus
Σ_{i=1}^n X_iθ_{i0} / ( σ √(Σ_{i=1}^n θ_{i0}²) ) ∼ N(0, 1).
By the Neyman Pearson lemma, reject Ho if
Σ_{i=1}^n X_iθ_{i0} / ( σ √(Σ_{i=1}^n θ_{i0}²) ) > z_{1−α}
where P(Z < z_{1−α}) = 1 − α when Z ∼ N(0, 1).
b) The MLE under Ho is θ̂_i = 0 for i = 1, ..., n, while the unrestricted MLE is θ̂_i = x_i
for i = 1, ..., n since the MLE of θ_i from a single observation is x_i. Hence
λ(x) = L(θ̂_i = 0)/L(θ̂_i = x_i) = exp[ (−1/(2σ²)) Σx_i² ] / exp[ (−1/(2σ²)) Σ(x_i − x_i)² ]
= exp[ (−1/(2σ²)) Σ_{i=1}^n x_i² ] ≤ c'
iff Σ_{i=1}^n x_i² ≥ c. Under Ho, X_i ∼ N(0, σ²), X_i/σ ∼ N(0, 1), and
Σ_{i=1}^n X_i²/σ² ∼ χ²_n. So the LRT is reject Ho if Σ_{i=1}^n X_i²/σ² ≥ χ²_{n,1−α} where
P(W ≥ χ²_{n,1−α}) = α if W ∼ χ²_n.
c) Power = P(reject Ho) =
P( n^{−1/3} Σ_{i=1}^n X_i / ( σ √(n · n^{−2/3}) ) > z_{1−α} ) = P( n^{−1/3} Σ_{i=1}^n X_i / (σ n^{1/6}) > z_{1−α} ) =
P( n^{−1/2} Σ_{i=1}^n X_i / σ > z_{1−α} ) = P( Σ_{i=1}^n X_i > σ z_{1−α} n^{1/2} )
where Σ_{i=1}^n X_i ∼ N(n · n^{−1/3}, nσ²) ∼ N(n^{2/3}, nσ²). So
( Σ_{i=1}^n X_i − n^{2/3} ) / (√n σ) ∼ N(0, 1), and power = P( Σ_{i=1}^n X_i / (√n σ) > z_{1−α} ) =
P( ( Σ_{i=1}^n X_i − n^{2/3} ) / (√n σ) > z_{1−α} − n^{2/3}/(√n σ) ) = 1 − Φ( z_{1−α} − n^{2/3}/(√n σ) ) =
1 − Φ( z_{1−α} − n^{1/6}/σ ) → 1 − Φ(−∞) = 1
as n → ∞.
7.31. (Jan. 14 Qual): Let X1 , ..., Xm be iid from a distribution with pdf
f(x) = µxµ−1 ,
for 0 < x < 1 where µ > 0. Let Y1 , ..., Yn be iid from a distribution with pdf
g(y) = θy^{θ−1},
for 0 < y < 1 where θ > 0. Find the likelihood ratio test statistic λ for H_0: µ = θ
versus H_1: µ ≠ θ.
Solution: Let T_1 = Σ_{i=1}^m log(X_i) and T_2 = Σ_{j=1}^n log(Y_j). Then L(µ) = µ^m exp[(µ − 1) Σ log(x_i)], and
log(L(µ)) = m log(µ) + (µ − 1) Σ log(x_i). Hence
(d/dµ) log(L(µ)) = m/µ + Σ log(x_i)  set= 0.
Or µ Σ log(x_i) = −m, or µ̂ = −m/T_1, which is unique. Now
(d²/dµ²) log(L(µ)) = −m/µ² < 0.
Hence µ̂ is the MLE of µ. Similarly θ̂ = −n/Σ_{j=1}^n log(Y_j) = −n/T_2. Under H_0 combine
the two samples into one sample of size m + n with MLE
µ̂_0 = −(m + n)/(T_1 + T_2).
Now the likelihood ratio statistic
λ = L(µ̂_0)/L(µ̂, θ̂) = µ̂_0^{m+n} exp[ (µ̂_0 − 1)( Σ log(X_i) + Σ log(Y_j) ) ] / ( µ̂^m θ̂^n exp[ (µ̂ − 1) Σ log(X_i) + (θ̂ − 1) Σ log(Y_j) ] ).
7.32. (Aug. 2014 Qual): If Z has a half normal distribution, Z ∼ HN(0, σ 2 ), then
the pdf of Z is
f(z) = (2/(√(2π) σ)) exp( −z²/(2σ²) )
where σ > 0 and z ≥ 0. Let X1 , ..., Xn be iid HN(0, σ12) random variables and let Y1 , ..., Ym
be iid HN(0, σ22) random variables that are independent of the X’s.
a) Find the α level likelihood ratio test for H0 : σ12 = σ22 vs. H1 : σ12 6= σ22 . Simplify
the test statistic.
b) What happens if m = n?
Solution: a)
(σ̂_1², σ̂_2²) = ( Σ_{i=1}^n X_i²/n, Σ_{i=1}^m Y_i²/m )
is the MLE of (σ_1², σ_2²), and under the restriction σ_1² = σ_2² = σ_0², say, the
restricted MLE is
σ̂_0² = ( Σ_{i=1}^n X_i² + Σ_{i=1}^m Y_i² ) / (n + m).
Now the likelihood ratio statistic
λ = L(σ̂_0²)/L(σ̂_1², σ̂_2²)
= ( (1/σ̂_0^{m+n}) exp[ (−1/(2σ̂_0²)) ( Σ_{i=1}^n x_i² + Σ_{i=1}^m y_i² ) ] ) / ( (1/σ̂_1^n) exp[ (−1/(2σ̂_1²)) Σ x_i² ] (1/σ̂_2^m) exp[ (−1/(2σ̂_2²)) Σ y_i² ] )
= ( (1/σ̂_0^{m+n}) exp( −(n + m)/2 ) ) / ( (1/(σ̂_1^n σ̂_2^m)) exp(−n/2) exp(−m/2) ) = σ̂_1^n σ̂_2^m / σ̂_0^{m+n}.
So reject H_0 if λ(x, y) ≤ c where α = sup_{σ² ∈ Θ_o} P(λ(X, Y) ≤ c). Here Θ_o is the set
of σ_1² = σ_2² ≡ σ² such that the X_i and Y_i are iid.
b) Then
λ(x, y) = σ̂_1^n σ̂_2^n / σ̂_0^{2n} ≤ c
is equivalent to
σ̂_1 σ̂_2 / σ̂_0² ≤ k.
7.33. (Aug. 2016 QUAL): Let θ > 0 be known. Let X1 , ..., Xn be independent,
identically distributed random variables from a distribution with a pdf
f(x) = λθ^λ / x^{λ+1}
for x > θ where λ > 0. Note that f(x) = 0 for x ≤ θ.
a) Find the UMP (uniformly most powerful) level α test for
H0 : λ = 1 vs. H1 : λ = 2.
b) If possible, find the UMP level α test for H0 : λ = 1 vs. H1 : λ > 1.
Solution: a), b)
f(x) = ( I(x > θ)/x ) λθ^λ exp( λ[−log(x)] )
is a 1P-REF where w(λ) = λ is increasing and t(x) = −log(x). Hence the UMP level α test
rejects H_0 if T(x) = −Σ_{i=1}^n log(x_i) > c where α = P_1( −Σ_{i=1}^n log(X_i) > c ).
7.34. (Aug. 2016 QUAL): Let X1 , X2 , . . . , X15 denote a random sample from the
density function
f(x|θ) = (1/θ) 4x³ e^{−x⁴/θ} for x > 0, and f(x|θ) = 0 for x ≤ 0,
d) What is the approximate power of your MP test at θ1 = 5 ?
e) Is your MP test also a uniformly most powerful (UMP) test for testing H0 : θ = 2
versus HA : θ > 2? Give reasons.
d \ δ   0.034   0.05    0.1     0.25    0.75    0.9     0.95    0.975   0.99
15      6.68    7.26    8.55    11.04   19.31   22.31   25.00   27.49   30.58
30      17.51   18.49   20.60   24.48   34.80   40.26   43.77   46.98   50.89
40      25.31   26.51   29.05   33.66   47.27   51.81   55.76   59.34   63.69
Solution:
An easier way to do much of this problem is to note that the distribution is a 1P-REF
with w(θ) = −1/θ an increasing function of θ and t(x) = x⁴. Hence reject H_0 if
Σ_{i=1}^{15} X_i⁴ > c.
a) We can use the Neyman-Pearson Lemma for specifying the rejection region. Let
R represent the rejection region.
X ∈ R if f(x|θ_1)/f(x|θ_0) > k. Now
f(x|θ_1)/f(x|θ = 2) = ( Π_{i=1}^{15} 4x_i³ e^{−x_i⁴/θ_1} / θ_1^{15} ) / ( Π_{i=1}^{15} 4x_i³ e^{−x_i⁴/2} / 2^{15} )
= (2^{15}/θ_1^{15}) exp( ((θ_1 − 2)/(2θ_1)) Σ_{i=1}^{15} x_i⁴ ).
Since θ_1 > 2,
(2^{15}/θ_1^{15}) exp( ((θ_1 − 2)/(2θ_1)) Σ_{i=1}^{15} x_i⁴ ) > k  ⇔  Σ_{i=1}^{15} x_i⁴ > c.
So
R = { x : Σ_{i=1}^{15} X_i⁴ > c }.
We should determine c in such a way that the size of the test equals α = .05, i.e.,
P_{θ_0}(X ∈ R) = α  ⇒  P_{θ_0}( Σ_{i=1}^{15} X_i⁴ > c ) = .05.
Hence
R = { x : Σ_{i=1}^{15} X_i⁴ > 43.77 }.
b)
p-value = sup_{θ∈Θ_0} P_θ( Σ_{i=1}^{15} X_i⁴ > 46.98 ) = P_{θ=2}( Σ_{i=1}^{15} X_i⁴ > 46.98 ) = 0.0249.
c) Since p-value < α, we reject the null hypothesis in favor of H_A. Or Σ_{i=1}^{15} X_i⁴ =
46.98 > 43.77, so reject H_0 by a).
d) Note that the power function is given by
β(θ) = P_θ(X ∈ R) = P_θ( Σ_{i=1}^{15} X_i⁴ > 43.77 ) = P( 2 Σ_{i=1}^{15} X_i⁴/θ > 2(43.77)/θ )
= P( W > 2(43.77)/θ )
where W has a chi-square distribution with 30 degrees of freedom. Then we can compute
the power of the test as follows:
β(θ = 5) = P( W > 2(43.77)/5 ) = P(W > 17.51) = 0.966.
e) Let T = Σ_{i=1}^{15} X_i⁴; then from the hint, it can be shown that T ∼ Gamma(15, θ), with
density function
f_T(t|θ) = ( 1/Γ(15) ) ( 1/θ^{15} ) t^{14} e^{−t/θ}.
Now let θ_2 > θ_1. Then the family of pdfs {f(t|θ), θ ∈ Θ} has a MLR: the ratio
f_T(t|θ_2)/f_T(t|θ_1) = (θ_1/θ_2)^{15} e^{t(1/θ_1 − 1/θ_2)}
is an increasing function of t. Then, by applying the Karlin-Rubin Theorem, we can
conclude that the MP test given in part a) is the UMP test for H_0: θ = 2 versus H_A: θ > 2.
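The cutoff, p-value, and power can be reproduced numerically (a sketch using 2T/θ ∼ χ²₃₀ for T = ΣX_i⁴ ∼ Gamma(15, θ)):

```python
# Sketch: under H0 (theta = 2), T ~ chi2_30; power at theta = 5 uses 2T/theta ~ chi2_30.
from scipy.stats import chi2

c = chi2.ppf(0.95, 30)
print(c)                       # 43.77
print(chi2.sf(46.98, 30))      # p-value, about 0.025
print(chi2.sf(2 * c / 5, 30))  # power at theta = 5, about 0.966
```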
8.3∗. (Aug. 03 Qual): Let X1 , ..., Xn be a random sample from a population with
pdf
f(x) = θx^{θ−1}/3^θ for 0 < x < 3, and f(x) = 0 elsewhere.
The method of moments estimator for θ is T_n = X̄/(3 − X̄).
a) Find the limiting distribution of √n(T_n − θ) as n → ∞.
b) Is Tn asymptotically efficient? Why?
c) Find a consistent estimator for θ and show that it is consistent.
Solution. a) E(X) = 3θ/(θ + 1), thus
√n(X̄ − E(X)) →D N(0, V(X)), where
V(X) = 9θ/((θ + 2)(θ + 1)²). Let g(y) = y/(3 − y), thus g'(y) = 3/(3 − y)². Using the
delta method,
√n(T_n − θ) →D N( 0, θ(θ + 1)²/(θ + 2) ).
b) It is asymptotically efficient if √n(T_n − θ) →D N(0, ν(θ)), where
ν(θ) = [ (d/dθ) θ ]² / ( −E( (d²/dθ²) ln f(x|θ) ) ).
But −E( (d²/dθ²) ln f(x|θ) ) = 1/θ². Thus ν(θ) = θ² ≠ θ(θ + 1)²/(θ + 2), so T_n is not
asymptotically efficient.
c) X̄ → 3θ/(θ + 1) in probability. Thus T_n = g(X̄) → θ in probability since g is continuous.
8.8. (Sept. 2005 Qual): Let X1 , ..., Xn be independent identically distributed random
variables with probability density function
f(x) = θxθ−1 , 0 < x < 1, θ > 0.
a) Find the MLE of 1/θ. Is it unbiased? Does it achieve the information inequality
lower bound?
b) Find the asymptotic distribution of the MLE of 1/θ.
c) Show that X̄_n is unbiased for θ/(θ + 1). Does X̄_n achieve the information inequality
lower bound?
d) Find an estimator of 1/θ from part (c) above using X̄_n which is different from the
MLE in (a). Find the asymptotic distribution of your estimator using the delta method.
e) Find the asymptotic relative efficiency of your estimator in (d) with respect to the
MLE in (b).
8.27. (Sept. 05 Qual): Let X ∼ Binomial(n, p) where the positive integer n is large and 0 < p < 1.

a) Find the limiting distribution of √n(X/n − p).

b) Find the limiting distribution of √n[(X/n)² − p²].

c) Show how to find the limiting distribution of (X/n)³ − X/n when p = 1/√3. (Actually we want the limiting distribution of

\[
n\left( \left[ \left( \frac{X}{n} \right)^{\!3} - \frac{X}{n} \right] - g(p) \right)
\]

where g(θ) = θ³ − θ.)
Solution. a) $X \stackrel{D}{=} \sum_{i=1}^n Y_i$ where Y1, ..., Yn are iid Bernoulli(p). Hence

\[
\sqrt{n}\left( \frac{X}{n} - p \right) = \sqrt{n}\,(\overline{Y}_n - p) \xrightarrow{D} N(0, p(1-p)).
\]
b) Let g(p) = p². Then g′(p) = 2p and by the delta method and a),

\[
\sqrt{n}\left[ \left( \frac{X}{n} \right)^{\!2} - p^2 \right] = \sqrt{n}\left[ g\!\left( \frac{X}{n} \right) - g(p) \right] \xrightarrow{D}
N(0, p(1-p)[g'(p)]^2) = N(0, p(1-p)4p^2) = N(0, 4p^3(1-p)).
\]
c) Refer to a) and Theorem 8.30 (the second order delta method). Let θ = p. Then g′(θ) = 3θ² − 1 and g″(θ) = 6θ. Notice that

\[
g(1/\sqrt{3}) = (1/\sqrt{3})^3 - 1/\sqrt{3} = \frac{1}{\sqrt{3}}\left( \frac{1}{3} - 1 \right) = \frac{-2}{3\sqrt{3}} = c.
\]

Also g′(1/√3) = 0 and g″(1/√3) = 6/√3. Since τ²(p) = p(1 − p),

\[
\tau^2(1/\sqrt{3}) = \frac{1}{\sqrt{3}}\left( 1 - \frac{1}{\sqrt{3}} \right).
\]

Hence

\[
n\left[ g\!\left( \frac{X}{n} \right) - \frac{-2}{3\sqrt{3}} \right] \xrightarrow{D}
\frac{1}{\sqrt{3}}\left( 1 - \frac{1}{\sqrt{3}} \right) \frac{1}{2}\,\frac{6}{\sqrt{3}}\, \chi^2_1
= \left( 1 - \frac{1}{\sqrt{3}} \right) \chi^2_1.
\]
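A simulation sketch (not in the original solution) of the limit in c): since g′(p) = 0 at p = 1/√3, the statistic is scaled by n rather than √n, and the limit (1 − 1/√3)χ²₁ has mean 1 − 1/√3 ≈ 0.42. numpy is assumed available.

```python
# Sketch: n*(g(X/n) - g(p)) with g(t) = t^3 - t at p = 1/sqrt(3) should behave
# like (1 - 1/sqrt(3)) * chi-square(1); compare the simulated mean to its mean.
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 1 / np.sqrt(3), 10**4, 50000
X = rng.binomial(n, p, size=reps)
g = lambda t: t**3 - t
scaled = n * (g(X / n) - g(p))
print(scaled.mean())        # should be near 1 - 1/sqrt(3)
print(1 - 1 / np.sqrt(3))   # about 0.4226
```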
8.28. (Aug. 04 Qual): Let X1, ..., Xn be independent and identically distributed (iid) from a Poisson(λ) distribution.

a) Find the limiting distribution of √n(X̄ − λ).

b) Find the limiting distribution of √n[(X̄)³ − λ³].

Solution. a) By the CLT, √n(X̄ − λ)/√λ →D N(0, 1). Hence √n(X̄ − λ) →D N(0, λ).

b) Let g(λ) = λ³ so that g′(λ) = 3λ². Then by the delta method, √n[(X̄)³ − λ³] →D N(0, λ[g′(λ)]²) = N(0, 9λ⁵).
8.29. (Jan. 04 Qual): Let X1, ..., Xn be iid from a normal distribution with unknown mean µ and known variance σ². Let

\[
\overline{X} = \frac{1}{n}\sum_{i=1}^n X_i \quad\text{and}\quad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \overline{X})^2.
\]

Solution. b) By the CLT (n is large), √n(X̄ − µ) has approximately a normal distribution with mean 0 and variance σ². Let g(x) = x³, so g′(x) = 3x². Using the delta method, √n(g(X̄) − g(µ)) goes in distribution to N(0, σ²[g′(µ)]²); that is, √n((X̄)³ − µ³) goes in distribution to N(0, σ²(3µ²)²) = N(0, 9µ⁴σ²).
8.34. (abc Jan. 2010 Qual): Let Y1, ..., Yn be independent and identically distributed (iid) from a distribution with probability mass function f(y) = ρ(1 − ρ)^y for y = 0, 1, 2, ... and 0 < ρ < 1. Then E(Y) = (1 − ρ)/ρ and VAR(Y) = (1 − ρ)/ρ².

a) Find the limiting distribution of √n(Ȳ − (1 − ρ)/ρ).

b) Show how to find the limiting distribution of g(Ȳ) = 1/(1 + Ȳ). Deduce it completely. (This bad notation means find the limiting distribution of √n(g(Ȳ) − c) for some constant c.)

c) Find the method of moments estimator of ρ.

d) Find the limiting distribution of √n((1 + Ȳ) − d) for an appropriate constant d.

e) Note that 1 + E(Y) = 1/ρ. Find the method of moments estimator of 1/ρ.
Solution. a) By the CLT,

\[
\sqrt{n}\left( \overline{Y} - \frac{1-\rho}{\rho} \right) \xrightarrow{D} N\left( 0, \frac{1-\rho}{\rho^2} \right).
\]

b) Let g(θ) = 1/(1 + θ) where θ = (1 − ρ)/ρ. Then c = g((1 − ρ)/ρ) = ρ, and g′(θ) = −1/(1 + θ)² so that g′((1 − ρ)/ρ) = −ρ². By the delta method,

\[
\sqrt{n}\left( g(\overline{Y}) - \rho \right) \xrightarrow{D} N\left( 0, \frac{1-\rho}{\rho^2}\,\rho^4 \right) = N\left( 0, \rho^2(1-\rho) \right).
\]

c) The method of moments estimator of ρ is ρ̂ = 1/(1 + Ȳ).

d) Let g(θ) = 1 + θ so g′(θ) = 1. Then by the delta method,

\[
\sqrt{n}\left( g(\overline{Y}) - g\!\left( \frac{1-\rho}{\rho} \right) \right) \xrightarrow{D} N\left( 0, \frac{1-\rho}{\rho^2}\cdot 1^2 \right)
\]

or

\[
\sqrt{n}\left( (1+\overline{Y}) - \frac{1}{\rho} \right) \xrightarrow{D} N\left( 0, \frac{1-\rho}{\rho^2} \right).
\]

This result could also be found with algebra since

\[
(1+\overline{Y}) - \frac{1}{\rho} = \overline{Y} + \frac{\rho - 1}{\rho} = \overline{Y} - \frac{1-\rho}{\rho}.
\]

e) Ȳ is the method of moments estimator of E(Y) = (1 − ρ)/ρ, so 1 + Ȳ is the method of moments estimator of 1 + E(Y) = 1/ρ.
8.35. (Sept. 2010 Qual): Let X1, ..., Xn be independent identically distributed random variables from a normal distribution with mean µ and variance σ².
a) Find the approximate distribution of 1/X̄. Is this valid for all values of µ?
b) Show that 1/X̄ is asymptotically efficient for 1/µ, provided µ ≠ µ*. Identify µ*.
Solution. a) √n(X̄ − µ) is approximately N(0, σ²). Define g(x) = 1/x, so g′(x) = −1/x². Using the delta method, √n(1/X̄ − 1/µ) is approximately N(0, σ²/µ⁴). Thus 1/X̄ is approximately

\[
N\left( \frac{1}{\mu}, \frac{\sigma^2}{n\mu^4} \right),
\]

provided µ ≠ 0.

b) Using part a), 1/X̄ is asymptotically efficient for 1/µ if

\[
\frac{\sigma^2}{\mu^4} = \frac{[\tau'(\mu)]^2}{E_\mu\!\left[ \left( \frac{\partial}{\partial\mu} \ln f(X|\mu) \right)^{\!2} \right]}.
\]

Here τ(µ) = 1/µ, so τ′(µ) = −1/µ². Since

\[
\ln f(x|\mu) = -\frac{1}{2}\ln(2\pi\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2},
\]

we get

\[
E_\mu\!\left[ \left( \frac{\partial}{\partial\mu} \ln f(X|\mu) \right)^{\!2} \right] = \frac{E(X-\mu)^2}{\sigma^4} = \frac{1}{\sigma^2}.
\]

Thus

\[
\frac{[\tau'(\mu)]^2}{E_\mu\!\left[ \left( \frac{\partial}{\partial\mu} \ln f(X|\mu) \right)^{\!2} \right]} = \frac{1/\mu^4}{1/\sigma^2} = \frac{\sigma^2}{\mu^4},
\]

so 1/X̄ is asymptotically efficient for 1/µ provided µ ≠ µ* = 0.
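A numerical sketch of a) (not in the original solution): the variance of 1/X̄ should be close to σ²/(nµ⁴) when µ is away from 0. numpy is assumed available.

```python
# Sketch: empirical variance of 1/Xbar vs the asymptotic value sigma^2/(n*mu^4).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 1.0, 200, 50000
X = rng.normal(mu, sigma, size=(reps, n))
inv_xbar = 1 / X.mean(axis=1)
print(inv_xbar.var())             # empirical variance of 1/Xbar
print(sigma**2 / (n * mu**4))     # delta-method value: 3.125e-4
```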
8.36. (Jan. 2011 Qual): Let Y1, ..., Yn be independent and identically distributed (iid) from a distribution with probability density function

\[
f(y) = \frac{2y}{\theta^2}
\]

for 0 < y ≤ θ and f(y) = 0, otherwise.
a) Find the limiting distribution of √n(Ȳ − c) for an appropriate constant c.
b) Find the limiting distribution of √n(log(Ȳ) − d) for an appropriate constant d.
c) Find the method of moments estimator of θ^k.
Solution. a) E(Y^k) = 2θ^k/(k + 2), so E(Y) = 2θ/3, E(Y²) = θ²/2 and V(Y) = θ²/18. So

\[
\sqrt{n}\left( \overline{Y} - \frac{2\theta}{3} \right) \xrightarrow{D} N\left( 0, \frac{\theta^2}{18} \right)
\]

by the CLT.

b) Let g(τ) = log(τ) so [g′(τ)]² = 1/τ² where τ = 2θ/3. Then by the delta method,

\[
\sqrt{n}\left( \log(\overline{Y}) - \log\!\left( \frac{2\theta}{3} \right) \right) \xrightarrow{D}
N\left( 0, \frac{\theta^2}{18}\cdot\frac{9}{4\theta^2} \right) = N\left( 0, \frac{1}{8} \right).
\]

c) Setting E(Y^k) = 2θ^k/(k + 2) equal to the sample moment (1/n)∑_{i=1}^n Y_i^k gives

\[
\widehat{\theta^k} = \frac{k+2}{2n} \sum_{i=1}^n Y_i^k.
\]
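A simulation sketch (not in the original solution): since F(y) = y²/θ² on (0, θ], draws can be generated as Y = θ√U with U uniform(0, 1), giving a quick check that the method of moments estimator of θ^k is close to θ^k. numpy is assumed available.

```python
# Sketch: the MME (k+2)/(2n) * sum(Y_i^k) should be near theta^k.
import numpy as np

rng = np.random.default_rng(0)
theta, n, k = 4.0, 10**5, 3
Y = theta * np.sqrt(rng.random(n))      # inverse-CDF draws: F(y) = y^2/theta^2
mme = (k + 2) / (2 * n) * np.sum(Y**k)
print(mme, theta**k)                    # both near 64 for theta = 4, k = 3
```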
8.37. (Jan. 2013 Qual): Let Y1, ..., Yn be independent identically distributed discrete random variables with probability mass function

\[
f(y) = P(Y = y) = \binom{r+y-1}{y} \rho^r (1-\rho)^y
\]

for y = 0, 1, 2, ..., where 0 < ρ < 1 and the positive integer r is known.

a) Find the limiting distribution of √n(Ȳ − r(1 − ρ)/ρ).

b) Let g(Ȳ) = r/(r + Ȳ). Find the limiting distribution of √n(g(Ȳ) − c) for an appropriate constant c.

c) Find the method of moments estimator of ρ.
Solution: a) By the CLT,

\[
\sqrt{n}\left( \overline{Y} - \frac{r(1-\rho)}{\rho} \right) \xrightarrow{D} N\left( 0, \frac{r(1-\rho)}{\rho^2} \right).
\]

b) Let θ = r(1 − ρ)/ρ. Then

\[
g(\theta) = \frac{r}{r + \frac{r(1-\rho)}{\rho}} = \frac{r\rho}{r\rho + r(1-\rho)} = \rho = c.
\]

Now

\[
g'(\theta) = \frac{-r}{(r+\theta)^2} = \frac{-r}{\left( r + \frac{r(1-\rho)}{\rho} \right)^{\!2}} = \frac{-r\rho^2}{r^2},
\]

so

\[
[g'(\theta)]^2 = \frac{r^2\rho^4}{r^4} = \frac{\rho^4}{r^2}.
\]

Hence by the delta method,

\[
\sqrt{n}\left( g(\overline{Y}) - \rho \right) \xrightarrow{D}
N\left( 0, \frac{r(1-\rho)}{\rho^2}\cdot\frac{\rho^4}{r^2} \right) = N\left( 0, \frac{\rho^2(1-\rho)}{r} \right).
\]

c) Set Ȳ = r(1 − ρ)/ρ; then ρȲ = r − rρ, or ρȲ + rρ = r, so ρ̂ = r/(r + Ȳ).
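A simulation sketch (not in the original solution) of b): numpy's negative_binomial(r, p) counts failures before the r-th success, matching this pmf, so the empirical variance of √n(g(Ȳ) − ρ) can be compared with ρ²(1 − ρ)/r.

```python
# Sketch: variance of sqrt(n)*(g(Ybar) - rho), g(t) = r/(r+t), vs rho^2*(1-rho)/r.
import numpy as np

rng = np.random.default_rng(0)
r, rho, n, reps = 3, 0.4, 400, 20000
Y = rng.negative_binomial(r, rho, size=(reps, n))
g = r / (r + Y.mean(axis=1))      # this is also the MME of rho from part c)
print(n * g.var())                # empirical variance
print(rho**2 * (1 - rho) / r)     # delta-method value: 0.032
```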
8.38. (Aug. 2013 Qual): Let X1, ..., Xn be independent identically distributed uniform(0, θ) random variables where θ > 0.
a) Find the limiting distribution of √n(X̄ − c_θ) for an appropriate constant c_θ that may depend on θ.
b) Find the limiting distribution of √n[(X̄)² − k_θ] for an appropriate constant k_θ that may depend on θ.

Solution: a) By the CLT,

\[
\sqrt{n}\left( \overline{X} - \frac{\theta}{2} \right) \xrightarrow{D} N\left( 0, \frac{\theta^2}{12} \right).
\]

b) Let g(x) = x² so g′(x) = 2x and k_θ = (θ/2)² = θ²/4. By the delta method and a),

\[
\sqrt{n}\left[ (\overline{X})^2 - \frac{\theta^2}{4} \right] \xrightarrow{D}
N\left( 0, \frac{\theta^2}{12}\left( 2\cdot\frac{\theta}{2} \right)^{\!2} \right) = N\left( 0, \frac{\theta^4}{12} \right).
\]
8.39. (Aug. 2014 Qual): Let X1, ..., Xn be independent identically distributed (iid) beta(β, β) random variables.

a) Find the limiting distribution of √n(X̄n − θ) for an appropriate constant θ.

b) Find the limiting distribution of √n(log(X̄n) − d) for an appropriate constant d.

Solution. a) E(Xi) = β/(β + β) = 1/2 and

\[
V(X_i) = \frac{\beta^2}{(2\beta)^2(2\beta+1)} = \frac{1}{4(2\beta+1)} = \frac{1}{8\beta+4}.
\]

So

\[
\sqrt{n}\left( \overline{X}_n - \frac{1}{2} \right) \xrightarrow{D} N\left( 0, \frac{1}{8\beta+4} \right)
\]

by the CLT.

b) Let g(x) = log(x). So d = g(1/2) = log(1/2). Now g′(x) = 1/x and [g′(x)]² = 1/x², so [g′(1/2)]² = 4. So

\[
\sqrt{n}\left( \log(\overline{X}_n) - \log(1/2) \right) \xrightarrow{D}
N\left( 0, \frac{4}{8\beta+4} \right) = N\left( 0, \frac{1}{2\beta+1} \right)
\]

by the delta method.
9.1. (Aug. 2003 Qual): Suppose that X1, ..., Xn are iid with the Weibull distribution, that is, the common pdf is

\[
f(x) = \begin{cases} \dfrac{b}{a}\, x^{b-1} e^{-\frac{x^b}{a}} & 0 < x \\ 0 & \text{elsewhere.} \end{cases}
\]
9.12. (Aug. 2009 qual): Let X1, ..., Xn be a random sample from a uniform(0, θ) distribution. Let Y = max(X1, X2, ..., Xn).
a) Find the pdf of Y/θ.
b) To find a confidence interval for θ, can Y/θ be used as a pivot?
c) Find the shortest 100(1 − α)% confidence interval for θ.
Solution. a) Let Wi ∼ U(0, 1) for i = 1, ..., n and let Tn = Y/θ. Then

\[
P\left( \frac{Y}{\theta} \le t \right) = P(\max(W_1, ..., W_n) \le t) = P(\text{all } W_i \le t) = [F_{W_i}(t)]^n = t^n
\]

for 0 < t < 1. So the pdf of Tn is

\[
f_{T_n}(t) = \frac{d}{dt}\, t^n = n t^{n-1}
\]

for 0 < t < 1.

b) Yes: the distribution of Tn = Y/θ does not depend on θ by a), so Tn is a pivot.

c) (Not sure this is shortest.) Let Wi = Xi/θ ∼ U(0, 1), which has cdf F_W(t) = t for 0 < t < 1. Let W(n) = X(n)/θ = max(W1, ..., Wn). Then

\[
F_{W_{(n)}}(t) = P\left( \frac{X_{(n)}}{\theta} \le t \right) = t^n
\]

for 0 < t < 1 by a). We want cn so that

\[
P\left( c_n \le \frac{X_{(n)}}{\theta} \le 1 \right) = 1 - \alpha
\]

for 0 < α < 1. So 1 − F_{W(n)}(cn) = 1 − α, or 1 − c_n^n = 1 − α, or cn = α^{1/n}. Then

\[
\left( X_{(n)},\ \frac{X_{(n)}}{\alpha^{1/n}} \right)
\]

is an exact 100(1 − α)% CI for θ.
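A quick coverage check (not part of the original solution): the interval (X_(n), X_(n)/α^{1/n}) should cover θ in about 100(1 − α)% of samples. numpy is assumed available.

```python
# Sketch: empirical coverage of the exact CI (X_(n), X_(n)/alpha^(1/n)).
import numpy as np

rng = np.random.default_rng(1)
theta, n, alpha, reps = 3.0, 10, 0.05, 10**5
X = rng.uniform(0, theta, size=(reps, n))
y = X.max(axis=1)
upper = y / alpha ** (1 / n)
print(np.mean((y < theta) & (theta < upper)))   # should be near 0.95
```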
9.14. (Aug. 2016 Qual, continuation of Problem 6.44):
a) Find the distribution function of θ̂/θ, and use it to explain why this is a pivotal quantity.
b) Using this pivotal quantity, derive a statistic θ̂_L such that P(θ̂_L < θ) = 0.9.
c) Find the method of moments estimator of θ. Is this estimator unbiased?
d) Is the method of moments estimator consistent? Fully justify your answer.
e) How do you think the variance of a_n θ̂ compares to that of the method of moments estimator, and why?
f) What is the MLE of θ? Is the MLE consistent?
Solution:

a) Refer to 6.44 c). Let Y = θ̂/θ. Then

\[
F_Y(y) = P(Y \le y) = P\left( \frac{\hat\theta}{\theta} \le y \right) = P(\hat\theta \le \theta y) = P(X_{(1)} \le \theta y) = F_{X_{(1)}}(\theta y)
= 1 - \left( \frac{\theta}{\theta y} \right)^{\!4n} = 1 - \left( \frac{1}{y} \right)^{\!4n}
\]

for y ≥ 1. As can be seen, the distribution of θ̂/θ does not depend on any parameter, which is the definition of a pivotal quantity.
b) First let us find a value b such that

\[
P\left( \frac{\hat\theta}{\theta} \le b \right) = 0.9 \;\Rightarrow\; 1 - \left( \frac{1}{b} \right)^{\!4n} = 0.9 \;\Rightarrow\; b = 10^{1/(4n)},
\]

or

\[
P\left( \frac{\hat\theta}{\theta} \le 10^{1/(4n)} \right) = 0.9 \;\Rightarrow\; P\left( \frac{\hat\theta}{10^{1/(4n)}} \le \theta \right) = 0.9.
\]

Therefore

\[
\hat\theta_L = \frac{\hat\theta}{10^{1/(4n)}} = \frac{X_{(1)}}{10^{1/(4n)}}.
\]
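Using the Problem 6.44 density f(x) = 4θ⁴/x⁵ for x ≥ θ (the likelihood in part f) below is built from it), a simulation sketch (not in the original solution) can check that P(θ̂_L < θ) ≈ 0.9; numpy is assumed available.

```python
# Sketch: coverage of the lower bound theta_L = X_(1)/10^(1/(4n)).
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 8, 10**5
U = rng.random((reps, n))
X = theta * U ** (-0.25)              # inverse-CDF draws: F(x) = 1 - (theta/x)^4
theta_L = X.min(axis=1) / 10 ** (1 / (4 * n))
print(np.mean(theta_L < theta))       # should be near 0.9
```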
c) MME:

\[
\mu_1' = E[X] = \frac{4}{3}\theta, \qquad m_1' = \frac{1}{n}\sum_{i=1}^n X_i = \overline{X}.
\]

Setting µ′₁ = m′₁ gives θ̃ = (3/4)X̄.

Check the unbiasedness of the MME:

\[
E[\tilde\theta] = E\left[ \frac{3}{4}\overline{X} \right] = \frac{3}{4}\,E[\overline{X}] = \frac{3}{4}\cdot\frac{4}{3}\,\theta = \theta,
\]

so the MME is unbiased.

d) Yes. By the law of large numbers, X̄ → E[X] = (4/3)θ in probability, so θ̃ = (3/4)X̄ → θ in probability, and the MME is consistent.
e) The variance of a_n θ̂ should be less than the variance of any other unbiased estimator, because we proved that a_n θ̂ is the UMVUE. Also note that

\[
\operatorname{Var}(a_n\hat\theta) = a_n^2 \operatorname{Var}(X_{(1)}) = \left( \frac{4n-1}{4n} \right)^{\!2} \frac{4n\,\theta^2}{(4n-1)^2(4n-2)} = \frac{\theta^2}{4n(4n-2)},
\]

while

\[
\operatorname{Var}(\tilde\theta) = \frac{\theta^2}{8n}.
\]

So Var(a_n θ̂) is of order 1/n², much smaller than the MME's variance of order 1/n.
f) From part a) of Problem 6.44, we have

\[
L(\theta|x_1, \ldots, x_n) = \prod_{i=1}^n f_\theta(x_i) = 4^n \theta^{4n}\, I_{[\theta,\infty)}(x_{(1)}) \prod_{i=1}^n x_i^{-5}.
\]

The indicator can be written as I(0 < θ ≤ x_{(1)}), so L(θ) > 0 on (0, x_{(1)}], and L(θ) ∝ θ^{4n} is an increasing function on (0, x_{(1)}]. (Make a sketch of L(θ).) Hence X_{(1)} is the MLE.

Alternatively, taking the derivative of the likelihood function with respect to θ (without considering the indicator function), we have

\[
\frac{d}{d\theta} L(\theta|x_1, \ldots, x_n) = 4^n (4n)\,\theta^{4n-1} \prod_{i=1}^n x_i^{-5} > 0.
\]

Therefore the likelihood function is an increasing function of θ on (0, x_{(1)}]. Therefore X_{(1)} is the MLE of θ.
Following the argument in part d), and recalling the MSE of X_{(1)} from part d) of Problem 6.44, we have