Spring 2009
1. Let $X_1, \dots, X_n$ be a random sample from a Bernoulli distribution with parameter $p \in (0,1)$, that is, $P(X_i = 1) = p$ and $P(X_i = 0) = 1 - p$.
and since the set $\left\{ \frac{p}{1-p} : p \in (0,1) \right\}$ contains an open set in $\mathbb{R}$, we see that $T(X) = \sum_{i=1}^{n} X_i$ is also complete.
b) Justify that $T_n(X_1, \dots, X_n)$ is sufficient and complete using the definitions of sufficiency and completeness.
Solution. Note that $T_n(X_1, \dots, X_n) \sim \text{Binomial}(n, p)$. Hence,
\[
P(X = x \mid T(X) = T(x)) = \frac{P(X = x)}{P(T(X) = T(x))} = \frac{p^{t} (1-p)^{n-t}}{\binom{n}{t} p^{t} (1-p)^{n-t}} = \frac{1}{\binom{n}{t}},
\]
which is independent of $p$ (here $t = T(x)$), and so, $T(X)$ is sufficient for $p$. On the other hand, if $g$ is a function
such that $E_p(g(T(X))) = 0$ for all $p \in (0,1)$, then
\[
0 = E_p\, g(T(X)) = \sum_{t=0}^{n} g(t) \binom{n}{t} p^{t} (1-p)^{n-t} = (1-p)^{n} \sum_{t=0}^{n} g(t) \binom{n}{t} \left( \frac{p}{1-p} \right)^{t},
\]
Setting $r = p/(1-p)$, which ranges over $(0, \infty)$ as $p$ ranges over $(0,1)$, the last sum is a polynomial of degree $n$ in $r$, and for it to be identically 0,
all the coefficients must be 0. Hence, g(t) = 0 for t = 0, 1, . . . , n, and so, P (g(T (X)) = 0) = 1.
Hence, T (X) is complete.
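As a quick numerical sketch of the sufficiency calculation (assuming NumPy is available; the values of $n$, $p$, $t$, and the particular configuration below are arbitrary choices), one can check by simulation that, conditional on $T = t$, any fixed configuration with sum $t$ appears with frequency close to $1/\binom{n}{t}$, regardless of $p$:

```python
# Monte Carlo sketch: conditional on T = sum(X) = t, each 0/1 configuration
# with sum t should occur with probability 1 / C(n, t), whatever p is.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, p, t = 5, 0.3, 2
samples = rng.binomial(1, p, size=(200_000, n))
cond = samples[samples.sum(axis=1) == t]           # keep only draws with T = t

config = np.array([1, 1, 0, 0, 0])                 # one particular configuration with sum t
freq = np.mean(np.all(cond == config, axis=1))
print(freq, 1 / comb(n, t))                        # both should be close to 0.1
```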
c) Find the maximum likelihood estimator (MLE) of p, and determine its asymptotic distribution
by a direct application of the Central Limit Theorem.
Solution. We see that the log-likelihood function is
\[
\log L(p; x) = \Big( \sum x_i \Big) \log p + \Big( n - \sum x_i \Big) \log(1-p),
\]
and so, taking the derivative with respect to p, we get
\[
\frac{d}{dp} \log L(p; x) = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1-p}.
\]
Setting the above expression equal to zero and solving, we get our MLE for $p$:
\[
\hat{p} = \frac{\sum x_i}{n} = \bar{X},
\]
and this critical point is indeed a maximum, since
\[
\frac{d^2}{dp^2} \log L(p; x) = -\frac{\sum x_i}{p^2} - \frac{n - \sum x_i}{(1-p)^2} < 0.
\]
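The calculus above can be verified symbolically; the following is a small sketch assuming SymPy is available, with the symbol $s$ standing in for $\sum x_i$:

```python
# Symbolic check: differentiate the Bernoulli log-likelihood, solve for the
# critical point, and inspect the second derivative.
import sympy as sp

p, n, s = sp.symbols('p n s', positive=True)       # s stands for sum(x_i)
loglik = s * sp.log(p) + (n - s) * sp.log(1 - p)

print(sp.solve(sp.diff(loglik, p), p))             # [s/n], i.e. the sample mean
print(sp.diff(loglik, p, 2))                       # -s/p**2 - (n - s)/(1 - p)**2, which is negative
```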
Now, by the Central Limit Theorem,
\[
\sqrt{n}\,(\hat{p} - p) = \frac{\sum_{i=1}^{n} (X_i - E X_i)}{\sqrt{n}} \Rightarrow N\big(0, \operatorname{Var}(X_1)\big) = N\big(0, p(1-p)\big).
\]
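A quick Monte Carlo sketch of this limit (assuming NumPy; the sample size, $p$, and number of replications are arbitrary choices):

```python
# Simulate many Bernoulli samples and compare Var(sqrt(n) * (p_hat - p)) with p(1 - p).
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 2_000, 0.3, 50_000
p_hat = rng.binomial(n, p, size=reps) / n          # p_hat = X_bar in each replication
z = np.sqrt(n) * (p_hat - p)
print(z.var(), p * (1 - p))                        # both approximately 0.21
```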
d) Calculate the Fisher information for the sample, and verify that your result in part (c) agrees
with the general theorem which provides the asymptotic distribution of MLE’s.
Solution. From part (c), we have that
\[
\frac{d^2}{dp^2} \log L(p; x) = -\frac{\sum x_i}{p^2} - \frac{n - \sum x_i}{(1-p)^2},
\]
and so, the Fisher information is
\[
I(p) = -E_p\!\left[ -\frac{\sum X_i}{p^2} - \frac{n - \sum X_i}{(1-p)^2} \right] = \frac{np}{p^2} + \frac{n - np}{(1-p)^2} = \frac{n}{p} + \frac{n}{1-p} = \frac{n}{p(1-p)}.
\]
Hence, the Cramér-Rao lower bound is $I(p)^{-1} = p(1-p)/n$, and so, by the general theorem on the asymptotic distribution of MLEs,
\[
\sqrt{n}\,(\hat{p} - p) \Rightarrow N(0, p(1-p)),
\]
in agreement with part (c).
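The information calculation can also be checked symbolically (again a sketch assuming SymPy), where substituting $s \mapsto np$ plays the role of taking the expectation $E_p \sum X_i = np$:

```python
# Symbolic check of I(p) = -E[ d^2/dp^2 log L ] = n / (p (1 - p)).
import sympy as sp

p, n, s = sp.symbols('p n s', positive=True)
loglik = s * sp.log(p) + (n - s) * sp.log(1 - p)
info = -sp.diff(loglik, p, 2).subs(s, n * p)       # substitute E[sum(x_i)] = n*p
print(sp.simplify(info))                           # mathematically equal to n/(p*(1 - p))
```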
e) Find a variance stabilizing transformation for $p$, that is, a function $g$ such that
\[
\sqrt{n}\,\big(g(\hat{p}) - g(p)\big) \xrightarrow{d} N(0, \sigma^2),
\]
where $\sigma^2 > 0$ and does not depend on $p$; identify both $g$ and $\sigma^2$.
Solution. By the Delta Method, we have that, for any function $g$ such that $g'(p)$ exists and is not zero,
\[
\sqrt{n}\,\big(g(\hat{p}) - g(p)\big) \Rightarrow N\!\big(0,\; p(1-p)\,[g'(p)]^2\big).
\]
So, if we take $[g'(p)]^2 = \frac{1}{p(1-p)}$, then we get what we desire. Hence, using the substitution $x - \frac{1}{2} = \frac{1}{2}\sin\theta$,
\begin{align*}
g(x) &= \int \frac{1}{\sqrt{x(1-x)}}\, dx = \int \frac{1}{\sqrt{\frac{1}{4} - \left(x - \frac{1}{2}\right)^2}}\, dx \\
&= \int \frac{\frac{1}{2}\cos\theta}{\sqrt{\frac{1}{4} - \frac{1}{4}\sin^2\theta}}\, d\theta = \int d\theta = \theta = \sin^{-1}(2x - 1).
\end{align*}
Thus $g(p) = \sin^{-1}(2p - 1)$ works, and the limiting variance is $\sigma^2 = p(1-p)\,[g'(p)]^2 = 1$.
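A simulation sketch of the stabilization (assuming NumPy; $n$ and the grid of $p$ values are arbitrary choices): the variance of $\sqrt{n}\,(g(\hat{p}) - g(p))$ stays near $\sigma^2 = 1$ as $p$ varies, even though $p(1-p)$ does not:

```python
# Check that g(x) = arcsin(2x - 1) stabilizes the asymptotic variance at 1.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 5_000, 20_000

def g(x):
    return np.arcsin(2 * x - 1)

for p in (0.1, 0.3, 0.5, 0.8):
    p_hat = rng.binomial(n, p, size=reps) / n
    print(p, np.var(np.sqrt(n) * (g(p_hat) - g(p))))   # all close to 1
```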
Suppose that the parameter $\theta$ has the Gamma prior density
\[
p(\theta; \alpha, \beta) = \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\beta^{\alpha} \Gamma(\alpha)}, \qquad \theta > 0,
\]
and suppose that the conditional distribution of X given θ is normal with mean zero and variance
1/θ.
Show that the conditional distribution of θ given X also has a Gamma distribution, and determine
its parameters.
Solution. Note that the conditional pdf is
\[
f(\theta \mid x_1, \dots, x_n) = \frac{f(\theta, x_1, \dots, x_n)}{f(x_1, \dots, x_n)}.
\]
The marginal density of the data, obtained by integrating the joint density of $\theta$ and the sample over $\theta$, is
\begin{align*}
f(x_1, \dots, x_n) &= \int_0^\infty \frac{\theta^{n/2}}{(2\pi)^{n/2}} \exp\!\left( -\frac{\theta \sum x_i^2}{2} \right) \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\beta^{\alpha} \Gamma(\alpha)}\, d\theta \\
&= \frac{1}{(2\pi)^{n/2} \beta^{\alpha} \Gamma(\alpha)} \int_0^\infty \theta^{\frac{n}{2} + \alpha - 1} \exp\!\left( -\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right) \theta \right) d\theta \\
&= \frac{1}{(2\pi)^{n/2} \beta^{\alpha} \Gamma(\alpha)} \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left( \frac{n}{2} + \alpha \right)} \int_0^\infty u^{\frac{n}{2} + \alpha - 1} e^{-u}\, du \\
&= \frac{\Gamma\!\left( \alpha + \frac{n}{2} \right)}{\Gamma(\alpha)\, (2\pi)^{n/2} \beta^{\alpha}} \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left( \frac{n}{2} + \alpha \right)},
\end{align*}
and so,
\[
f(\theta \mid x_1, \dots, x_n) = \frac{\theta^{\alpha + \frac{n}{2} - 1} \exp\!\left( -\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right) \theta \right)}{\Gamma\!\left( \alpha + \frac{n}{2} \right) \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left( \alpha + \frac{n}{2} \right)}},
\]
that is, $\theta \mid x_1, \dots, x_n \sim \Gamma\!\left( \alpha + \frac{n}{2},\; \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-1} \right)$.
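As a numerical sanity check of this identification (a sketch assuming NumPy and SciPy are available; the data and the hyperparameters $\alpha$, $\beta$ below are arbitrary choices), one can normalize the product of the prior and the likelihood on a grid and compare it with the Gamma density just derived:

```python
# Grid check: normalized prior(theta) * likelihood(theta) vs. the derived
# Gamma(alpha + n/2, scale = 1 / (sum(x_i^2)/2 + 1/beta)) density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, beta, theta_true = 2.0, 1.5, 4.0
x = rng.normal(0.0, 1.0 / np.sqrt(theta_true), size=20)
n = len(x)

theta = np.linspace(1e-6, 50.0, 200_000)
unnorm = (theta ** (alpha - 1) * np.exp(-theta / beta)               # prior, up to constants
          * theta ** (n / 2) * np.exp(-theta * np.sum(x ** 2) / 2))  # likelihood
posterior = unnorm / (unnorm.sum() * (theta[1] - theta[0]))          # normalize on the grid

gamma_pdf = stats.gamma.pdf(theta, a=alpha + n / 2,
                            scale=1.0 / (np.sum(x ** 2) / 2 + 1.0 / beta))
print(np.max(np.abs(posterior - gamma_pdf)))                         # should be very small
```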
Hence,
\begin{align*}
E[\theta \mid X_1 = x_1, \dots, X_n = x_n] &= \int_0^\infty \theta\, f(\theta \mid x_1, \dots, x_n)\, d\theta \\
&= \frac{1}{\Gamma\!\left( \alpha + \frac{n}{2} \right) \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left( \alpha + \frac{n}{2} \right)}} \int_0^\infty \theta^{\alpha + \frac{n}{2}} \exp\!\left( -\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right) \theta \right) d\theta \\
&= \frac{\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left( \alpha + \frac{n}{2} + 1 \right)}}{\Gamma\!\left( \alpha + \frac{n}{2} \right) \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left( \alpha + \frac{n}{2} \right)}} \int_0^\infty u^{\alpha + \frac{n}{2}} e^{-u}\, du \\
&= \frac{\Gamma\!\left( \alpha + \frac{n}{2} + 1 \right)}{\Gamma\!\left( \alpha + \frac{n}{2} \right)} \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-1} = \frac{\alpha + \frac{n}{2}}{\frac{\sum x_i^2}{2} + \frac{1}{\beta}},
\end{align*}
and so,
\[
E[\theta \mid X_1, \dots, X_n] = \frac{2\alpha + n}{\sum X_i^2 + 2\beta^{-1}} \longrightarrow \frac{1}{E X_1^2} = \theta \quad \text{a.s.},
\]
by the Law of Large Numbers, because, given $\theta$, the observations satisfy $E X_1^2 = 1/\theta$. Since a.s. convergence implies convergence in probability, the desired result follows.
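A short simulation sketch of this convergence (assuming NumPy; the hyperparameters and the true value of $\theta$ below are arbitrary choices):

```python
# The posterior mean (2*alpha + n) / (sum(X_i^2) + 2/beta) approaches the true theta.
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, theta_true = 2.0, 1.5, 4.0

for n in (10, 100, 10_000, 1_000_000):
    x = rng.normal(0.0, 1.0 / np.sqrt(theta_true), size=n)
    post_mean = (2 * alpha + n) / (np.sum(x ** 2) + 2 / beta)
    print(n, post_mean)                            # tends to theta_true = 4 as n grows
```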