
Spring 2009

1. Let X1 , . . . , Xn be a random sample from a Bernoulli distribution with parameter p ∈ (0, 1), that is,

P (Xi = 1) = p, and P (Xi = 0) = 1 − p.

a) Find a complete sufficient statistic Tn (X1 , . . . , Xn ) for p.


Solution. The pdf is
$$ f(x_1, \dots, x_n; p) = p^{\sum x_i} (1-p)^{\,n - \sum x_i} = g_p\!\left( \frac{1}{n} \sum_{i=1}^n x_i \right) h(x_1, \dots, x_n), $$
where $g_p(x) = p^{nx}(1-p)^{n-nx}$ and $h(x_1, \dots, x_n) = 1$. Hence, by the Fisher–Neyman Factorization Theorem, $T(X) = \sum_{i=1}^n X_i$ is a sufficient statistic. Also, the pdf of $X_1$ is
$$ f(x_1; p) = p^{x_1} (1-p)^{1-x_1} = (1-p) \left( \frac{p}{1-p} \right)^{x_1} = (1-p) \exp\!\left( x_1 \log \frac{p}{1-p} \right), $$
and since the set $\left\{ \frac{p}{1-p} : p \in (0,1) \right\}$ contains an open set in $\mathbb{R}$, we see that $T(X) = \sum_{i=1}^n X_i$ is also complete.
b) Justify that Tn(X1, . . . , Xn) is sufficient and complete using the definitions of sufficiency and completeness.
Solution. Note that Tn (X1 , . . . , Xn ) ∼ Binomial(n, p). Hence,

$$ P(X = x \mid T(X) = T(x)) = \frac{P(X = x)}{P(T(X) = T(x))} = \frac{p^t (1-p)^{n-t}}{\binom{n}{t} p^t (1-p)^{n-t}} = \frac{1}{\binom{n}{t}}, $$

which is independent of p, and so, T (X) is sufficient for p. On the other hand, if g is a function
such that E(g(T (X))) = 0, then
$$ 0 = E\, g(T(X)) = \sum_{t=0}^n g(t) \binom{n}{t} p^t (1-p)^{n-t} = (1-p)^n \sum_{t=0}^n g(t) \binom{n}{t} \left( \frac{p}{1-p} \right)^{t}, $$

and since $(1-p)^n$ is not 0, it must be that
$$ 0 = \sum_{t=0}^n g(t) \binom{n}{t} \left( \frac{p}{1-p} \right)^{t} = \sum_{t=0}^n g(t) \binom{n}{t} r^t, $$
where $0 < r < \infty$. The expression is a polynomial of degree n in r, and for it to vanish identically for all such r, all the coefficients must be 0. Hence, $g(t) = 0$ for $t = 0, 1, \dots, n$, and so, $P(g(T(X)) = 0) = 1$.
Hence, T (X) is complete.
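
As an aside not in the original solution, the conditional-probability computation above lends itself to a quick Monte Carlo check: given T = t, any particular arrangement of the t successes should appear with probability 1/C(n, t), whatever the value of p. A minimal sketch (arbitrary choices of n, t, p, and seed):

```python
# Monte Carlo check: P(X = x | T = t) = 1 / C(n, t), independent of p.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, t = 4, 2
target = np.array([1, 1, 0, 0])       # one particular arrangement with t ones

for p in (0.2, 0.7):
    samples = rng.binomial(1, p, size=(200_000, n))
    conditioned = samples[samples.sum(axis=1) == t]   # keep rows with T = t
    freq = np.mean(np.all(conditioned == target, axis=1))
    print(f"p={p}: empirical {freq:.4f} vs theoretical {1 / comb(n, t):.4f}")
```

Both runs print a frequency near 1/6, even though the two simulations use very different values of p.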
c) Find the maximum likelihood estimator (MLE) of p, and determine its asymptotic distribution
by a direct application of the Central Limit Theorem.
Solution. We see that the log-likelihood function is
$$ \log L(p; x) = \left( \sum x_i \right) \log p + \left( n - \sum x_i \right) \log(1-p), $$
and so, taking the derivative with respect to p, we get
$$ \frac{d}{dp} \log L(p; x) = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1-p}. $$
Setting this expression equal to zero, we get our MLE for p:
$$ \hat{p} = \frac{\sum x_i}{n} = \bar{X}, $$
since
$$ \frac{d^2}{dp^2} \log L(p; x) = -\frac{\sum x_i}{p^2} - \frac{n - \sum x_i}{(1-p)^2} < 0. $$
Now, by the Central Limit Theorem,
$$ \sqrt{n}\,(\hat{p} - p) = \frac{\sum_{i=1}^n (X_i - E X_i)}{\sqrt{n}} \Rightarrow N(0, \operatorname{Var} X_1) = N(0,\ p(1-p)). $$
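
As a brief illustration (not part of the original solution), this CLT claim can be checked by simulation: for large n, the centered and scaled proportion should have variance near p(1 − p). The values of p, n, and the seed below are arbitrary:

```python
# Simulation: sqrt(n) * (p_hat - p) should be approximately N(0, p(1 - p)).
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 0.3, 500, 100_000

p_hat = rng.binomial(n, p, size=reps) / n     # sample proportions over many runs
z = np.sqrt(n) * (p_hat - p)                  # centered and scaled MLE

print("empirical variance :", z.var())        # close to 0.21
print("theoretical p(1-p) :", p * (1 - p))
```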

d) Calculate the Fisher information for the sample, and verify that your result in part (c) agrees
with the general theorem which provides the asymptotic distribution of MLE’s.
Solution. From part (c), we have that
$$ \frac{d^2}{dp^2} \log L(p; x) = -\frac{\sum x_i}{p^2} - \frac{n - \sum x_i}{(1-p)^2}, $$
and so, the Fisher information is
$$ I(p) = -E_p\!\left[ -\frac{\sum X_i}{p^2} - \frac{n - \sum X_i}{(1-p)^2} \right] = \frac{np}{p^2} + \frac{n - np}{(1-p)^2} = \frac{n}{p} + \frac{n}{1-p} = \frac{n}{p(1-p)}. $$
Hence, the Cramér–Rao lower bound is $I(p)^{-1} = p(1-p)/n$, and so,
$$ \sqrt{n}\,(\hat{p} - p) \Rightarrow N(0,\ p(1-p)), $$
which agrees with part (c).
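
For illustration only (our addition, with arbitrary parameter values), the identity I(p) = E[(d/dp log L)²] offers a direct numerical check of the closed form n/(p(1 − p)):

```python
# Monte Carlo check of the Fisher information I(p) = n / (p(1-p)),
# using I(p) = E[(d/dp log L)^2] and the score from part (c).
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.4, 50, 500_000

t = rng.binomial(n, p, size=reps)             # T = sum of the X_i
score = t / p - (n - t) / (1 - p)             # d/dp log L(p; X)

print("Monte Carlo E[score^2]:", np.mean(score**2))
print("closed form n/(p(1-p)):", n / (p * (1 - p)))
```

Both numbers come out near 208.3 for these choices of n and p.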

e) Find a variance stabilizing transformation for p, that is, a function g such that
$$ \sqrt{n}\,\big( g(\hat{p}) - g(p) \big) \xrightarrow{d} N(0, \sigma^2), $$
where σ² > 0 does not depend on p; identify both g and σ².
Solution. By the Delta Method, we have that, for any function g such that $g'(p)$ exists and is not zero,
$$ \sqrt{n}\,\big( g(\hat{p}) - g(p) \big) \Rightarrow N\!\left( 0,\ p(1-p)\,[g'(p)]^2 \right). $$
So, if we take $[g'(p)]^2 = \frac{1}{p(1-p)}$, then we get what we desire. Hence,
$$ g(x) = \int \frac{dx}{\sqrt{x(1-x)}} = \int \frac{dx}{\sqrt{\frac{1}{4} - \left( x - \frac{1}{2} \right)^2}} \qquad \left( \text{set } x - \tfrac{1}{2} = \tfrac{1}{2}\sin\theta \right) $$
$$ = \int \frac{\frac{1}{2}\cos\theta\, d\theta}{\sqrt{\frac{1}{4} - \frac{1}{4}\sin^2\theta}} = \int d\theta = \theta = \sin^{-1}(2x - 1). $$

Thus, if we take $g(x) = \sin^{-1}(2x - 1)$, then we have that
$$ \sqrt{n}\,\big( g(\hat{p}) - g(p) \big) \Rightarrow N(0, 1), $$
that is, σ² = 1.
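
A short simulation sketch (not in the original; parameter values arbitrary) makes the stabilization visible: the variance of √n(g(p̂) − g(p)) stays near 1 across very different values of p, whereas the variance of √n(p̂ − p) would not:

```python
# Check that g(x) = arcsin(2x - 1) stabilizes the variance at sigma^2 = 1.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 2_000, 50_000
g = lambda x: np.arcsin(2 * x - 1)

for p in (0.1, 0.3, 0.5, 0.8):
    p_hat = rng.binomial(n, p, size=reps) / n
    z = np.sqrt(n) * (g(p_hat) - g(p))
    print(f"p={p}: Var(sqrt(n)*(g(p_hat)-g(p))) ~ {z.var():.3f}")   # all near 1
```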
2. a) Let θ have a Gamma Γ(α, β) distribution with α, β positive,

$$ p(\theta; \alpha, \beta) = \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\beta^{\alpha} \Gamma(\alpha)}, $$

and suppose that the conditional distribution of X given θ is normal with mean zero and variance
1/θ.
Show that the conditional distribution of θ given X also has a Gamma distribution, and determine
its parameters.
Solution. Note that the conditional pdf is

$$ f(\theta \mid x) = \frac{f(\theta, x)}{f(x)} = \frac{f(x \mid \theta)\, f(\theta)}{f(x)}. $$

First, we compute f (x):


$$ f(x) = \int_0^\infty f(x \mid \theta)\, f(\theta)\, d\theta = \int_0^\infty \sqrt{\frac{\theta}{2\pi}}\, e^{-\frac{x^2}{2}\theta}\, \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\beta^{\alpha} \Gamma(\alpha)}\, d\theta \qquad \left( \text{set } u = \left( \frac{x^2}{2} + \frac{1}{\beta} \right) \theta \right) $$
$$ = \frac{1}{\sqrt{2\pi}\, \beta^{\alpha} \Gamma(\alpha)} \left( \frac{x^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{1}{2}\right)} \int_0^\infty u^{\alpha-\frac{1}{2}} e^{-u}\, du = \frac{1}{\sqrt{2\pi}\, \beta^{\alpha}} \left( \frac{x^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{1}{2}\right)} \frac{\Gamma\!\left(\alpha+\frac{1}{2}\right)}{\Gamma(\alpha)}. $$

Then, we have that



$$ f(\theta \mid x) = \frac{ \sqrt{\frac{\theta}{2\pi}}\, e^{-\frac{x^2}{2}\theta} \cdot \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\beta^{\alpha} \Gamma(\alpha)} }{ \frac{1}{\sqrt{2\pi}\,\beta^{\alpha}} \left( \frac{x^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{1}{2}\right)} \frac{\Gamma\left(\alpha+\frac{1}{2}\right)}{\Gamma(\alpha)} } = \frac{ \theta^{\alpha-\frac{1}{2}}\, e^{-\left( \frac{x^2}{2} + \frac{1}{\beta} \right)\theta} }{ \left( \frac{x^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{1}{2}\right)} \Gamma\!\left(\alpha+\frac{1}{2}\right) }, $$
and so, the conditional distribution of θ given X is $\Gamma\!\left( \alpha + \frac{1}{2},\ \left( \frac{x^2}{2} + \frac{1}{\beta} \right)^{-1} \right)$.
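
As an illustrative numerical cross-check (arbitrary values of α, β, and x; scipy's gamma distribution uses the same shape–scale convention as the problem), normalizing f(x | θ)f(θ) by its integral should reproduce exactly this Gamma density:

```python
# Numerical check: f(x | theta) f(theta) / f(x) matches the
# Gamma(alpha + 1/2, scale = 1 / (x^2/2 + 1/beta)) density.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, beta, x = 2.0, 1.5, 0.7
shape, scale = alpha + 0.5, 1.0 / (x**2 / 2 + 1 / beta)

def joint(theta):
    # f(x | theta) * f(theta): N(0, 1/theta) likelihood times Gamma prior
    return (stats.norm.pdf(x, scale=1 / np.sqrt(theta))
            * stats.gamma.pdf(theta, alpha, scale=beta))

fx, _ = quad(joint, 0, np.inf)                     # marginal f(x)
theta0 = 1.2                                       # an arbitrary test point
print("normalized joint :", joint(theta0) / fx)
print("Gamma posterior  :", stats.gamma.pdf(theta0, shape, scale=scale))
```

The two printed densities agree to numerical precision at any test point θ₀ > 0.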
b) Conditional on θ as in part (a), suppose that a sample X1 , . . . , Xn is composed of independent
variables, normally distributed with mean zero and variance 1/θ. Find the conditional expectation
E[θ|X1 , . . . , Xn ] of θ given the sample, and show that it is a consistent estimate of θ, that is, that
$$ E[\theta \mid X_1, \dots, X_n] \xrightarrow{p} \theta \qquad \text{as } n \to \infty. $$

Solution. As in part (a),
$$ f(\theta \mid x_1, \dots, x_n) = \frac{f(\theta, x_1, \dots, x_n)}{f(x_1, \dots, x_n)}. $$

Now, we compute f (x1 , . . . , xn ):


$$ f(x_1, \dots, x_n) = \int_0^\infty f(x_1, \dots, x_n \mid \theta)\, f(\theta)\, d\theta = \int_0^\infty \left( \prod_{i=1}^n f(x_i \mid \theta) \right) f(\theta)\, d\theta $$
$$ = \int_0^\infty \frac{\theta^{n/2}}{(2\pi)^{n/2}}\, e^{-\frac{\sum x_i^2}{2}\theta}\, \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\beta^{\alpha} \Gamma(\alpha)}\, d\theta = \frac{1}{(2\pi)^{n/2} \beta^{\alpha} \Gamma(\alpha)} \int_0^\infty \theta^{\frac{n}{2}+\alpha-1} \exp\!\left( -\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right) \theta \right) d\theta $$
$$ = \frac{1}{(2\pi)^{n/2} \beta^{\alpha} \Gamma(\alpha)} \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left(\frac{n}{2}+\alpha\right)} \int_0^\infty u^{\frac{n}{2}+\alpha-1} e^{-u}\, du = \frac{\Gamma\!\left(\alpha+\frac{n}{2}\right)}{\Gamma(\alpha)\, (2\pi)^{n/2} \beta^{\alpha}} \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left(\frac{n}{2}+\alpha\right)}, $$

and so,
$$ f(\theta \mid x_1, \dots, x_n) = \frac{ \theta^{\alpha+\frac{n}{2}-1} \exp\!\left( -\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right) \theta \right) }{ \Gamma\!\left(\alpha+\frac{n}{2}\right) \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{n}{2}\right)} }, \qquad \text{that is,} \quad \theta \mid x_1, \dots, x_n \sim \Gamma\!\left( \alpha + \frac{n}{2},\ \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-1} \right). $$

Hence,
$$ E[\theta \mid X_1 = x_1, \dots, X_n = x_n] = \int_0^\infty \theta\, f(\theta \mid x_1, \dots, x_n)\, d\theta = \frac{ \int_0^\infty \theta^{\alpha+\frac{n}{2}} \exp\!\left( -\left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)\theta \right) d\theta }{ \Gamma\!\left(\alpha+\frac{n}{2}\right) \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{n}{2}\right)} } $$
$$ = \frac{ \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{n}{2}+1\right)} }{ \Gamma\!\left(\alpha+\frac{n}{2}\right) \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-\left(\alpha+\frac{n}{2}\right)} } \int_0^\infty u^{\alpha+\frac{n}{2}} e^{-u}\, du = \frac{\Gamma\!\left(\alpha+\frac{n}{2}+1\right)}{\Gamma\!\left(\alpha+\frac{n}{2}\right)} \left( \frac{\sum x_i^2}{2} + \frac{1}{\beta} \right)^{-1} = \frac{\alpha+\frac{n}{2}}{\frac{\sum x_i^2}{2} + \frac{1}{\beta}}, $$

and so,
$$ E[\theta \mid X_1, \dots, X_n] = \frac{2\alpha + n}{\sum X_i^2 + 2\beta^{-1}} \longrightarrow \frac{1}{E[X_1^2 \mid \theta]} = \theta \quad \text{a.s.,} $$
by the Law of Large Numbers. Since a.s. convergence implies convergence in probability, the desired result follows.
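
A closing illustration (assumed values for α, β, and θ, not from the original): holding θ fixed and letting n grow, the posterior mean (2α + n)/(ΣXᵢ² + 2/β) settles down to θ, as the argument above predicts:

```python
# Simulation of consistency: E[theta | X_1, ..., X_n] -> theta as n grows.
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, theta = 2.0, 1.5, 3.0

for n in (10, 100, 10_000, 1_000_000):
    x = rng.normal(0.0, 1.0 / np.sqrt(theta), size=n)    # X_i | theta ~ N(0, 1/theta)
    post_mean = (2 * alpha + n) / (np.sum(x**2) + 2 / beta)
    print(f"n={n:>9}: posterior mean = {post_mean:.4f}")  # approaches theta = 3
```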
