Exercises With Solutions
(a) Determine the probability of type I and type II error for that test.
(c) Is the power of the Neyman–Pearson test bigger than the power of the proposed
test? Motivate the answer without any additional calculation.
Exercise 4. Let X1, . . . , Xn | θ be i.i.d. with density f( · |θ), where f(x|θ) = θ e^{−θx} for x > 0 (an exponential density with rate θ).
Suppose that the prior distribution for the parameter θ is p(θ) = 2 e−2θ , where θ > 0.
(c) Assume that n = 1 (the sample is X1) and identify the predictive density f(x2|x1).
SOLUTIONS
Exercise 3.
(a) We reject the null hypothesis when X > 0.95, therefore the type I error for the test is

    α = P_{H0}(X ∈ CR) = P_{β=1}(X > 0.95) = ∫_{0.95}^{1} dx = 1 − 0.95 = 0.05,

whereas the type II error is given by

    η = P_{H1}(X ∉ CR) = P_{β=2}(X ≤ 0.95) = ∫_0^{0.95} 2(1 − x) dx
      = 2 [x − x²/2]_0^{0.95} = 2 (0.95 − (0.95)²/2) ≈ 0.9975.
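The two error probabilities can be double-checked numerically; a minimal sketch, assuming (as in the solution) that X has density f0(x) = 1 under H0 (β = 1) and f1(x) = 2(1 − x) under H1 (β = 2), both on (0, 1):

```python
# CDFs under the two hypotheses, taken from the densities in the solution:
# under H0 (beta = 1) X is uniform on (0, 1); under H1 (beta = 2) it has
# density 2*(1 - x) on (0, 1).
def F0(x):
    return x                # CDF of Uniform(0, 1)

def F1(x):
    return 2*x - x**2       # integral of 2*(1 - t) dt from 0 to x

alpha = 1 - F0(0.95)        # type I error: P_{H0}(X > 0.95)
eta = F1(0.95)              # type II error: P_{H1}(X <= 0.95)
```

Both values agree with the closed forms above: alpha = 0.05 and eta = 0.9975.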
(b) The Neyman–Pearson test rejects H0 iff the observed X = x is such that

    f1(x)/f0(x) ≥ K,

where K has to be determined later so that the test has size α = 0.05. First of all observe that

    f1(x)/f0(x) ≥ K  ⇔  2(1 − x)/1 ≥ K  ⇔  1 − x ≥ K/2  ⇔  x ≤ (2 − K)/2 =: K′,
hence the Neyman–Pearson test rejects the null hypothesis if and only if X ≤ K′, where K′ has to be chosen in such a way that the test has size α, i.e. it satisfies

    0.05 = α = P_{H0}(X ≤ K′) = P_{β=1}(X ≤ K′) = ∫_0^{K′} dx = K′,

which entails K′ = α = 0.05. Therefore the optimal test of size α = 0.05 has the critical region {x : x ≤ 0.05}.
(c) Yes: by the Neyman–Pearson lemma, the test found in (b) is the most powerful among all tests of size α = 0.05. Since the tests considered in (a) and (b) both have size α = 0.05, the more powerful of the two is the Neyman–Pearson test found in (b).
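The comparison can also be verified without invoking the lemma, by computing the power of each test directly as its rejection probability under H1. A sketch, again assuming the H1 density f1(x) = 2(1 − x) on (0, 1):

```python
def F1(x):                      # CDF under H1: integral of 2*(1 - t) from 0 to x
    return 2*x - x**2

power_proposed = 1 - F1(0.95)   # proposed test rejects when X > 0.95
power_np = F1(0.05)             # Neyman-Pearson test rejects when X <= 0.05
# power_np = 0.0975 exceeds power_proposed = 0.0025, as the lemma guarantees
```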
Exercise 4.
(a) Thanks to the Bayes theorem, the posterior distribution can be determined as follows:

    p(θ|x1, . . . , xn) ∝ f(x1, . . . , xn|θ) p(θ) = [∏_{i=1}^{n} f(xi|θ)] p(θ) = [∏_{i=1}^{n} θ e^{−xi θ}] · 2 e^{−2θ}
                        = θ^n e^{−θ ∑_{i=1}^{n} xi} · 2 e^{−2θ} ∝ θ^n e^{−θ(∑_{i=1}^{n} xi + 2)},

and we recognize that the right-hand side is the kernel of a gamma p.d.f. with parameters a_n = n + 1 and b_n = 2 + ∑_{i=1}^{n} xi, hence

    θ | X1 = x1, . . . , Xn = xn ∼ Gamma(a_n, b_n) = Gamma(n + 1, 2 + ∑_{i=1}^{n} xi).
(b) The Bayes estimator under a squared loss function is the posterior mean, given by

    θ̂ = E[θ|X1, . . . , Xn] = a_n/b_n = (n + 1)/(2 + ∑_{i=1}^{n} Xi).
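As a sanity check, the posterior mean can be recovered numerically from the unnormalized kernel θ^n e^{−θ(∑ xi + 2)}; a minimal sketch, where the sample values are hypothetical and chosen purely for illustration:

```python
import math

x = [0.5, 1.2, 0.3]          # hypothetical sample, for illustration only
n, s = len(x), sum(x)

def kernel(t):               # unnormalized posterior: theta^n * exp(-theta*(s + 2))
    return t**n * math.exp(-t*(s + 2))

# crude midpoint Riemann sums on (0, 50]; the posterior mass beyond is negligible
h = 1e-3
grid = [h*(k + 0.5) for k in range(int(50/h))]
Z = sum(kernel(t) for t in grid) * h
post_mean = sum(t*kernel(t) for t in grid) * h / Z
# agrees with the closed form a_n / b_n = (n + 1) / (s + 2)
```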
(c) For x > 0, the predictive density is

    f(x|x1) = ∫_Θ f(x|θ) p(θ|x1) dθ = ∫_0^∞ θ e^{−θx} · [(2 + x1)²/Γ(2)] θ^{2−1} e^{−θ(x1+2)} dθ
            = (2 + x1)² ∫_0^∞ θ² e^{−θ(2+x1+x)} dθ = (2 + x1)² Γ(3)/(2 + x1 + x)³ = 2(2 + x1)²/(2 + x1 + x)³,

where the integral has been evaluated by recognizing the kernel of a gamma density with parameters (3, 2 + x1 + x).
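The closed form can be cross-checked by integrating f(x|θ) against the Gamma(2, 2 + x1) posterior numerically; a sketch in which the observed value x1 is a hypothetical choice for illustration:

```python
import math

x1 = 0.7                      # hypothetical observed value, for illustration

def post(t):                  # Gamma(2, 2 + x1) posterior density
    return (2 + x1)**2 * t * math.exp(-t*(2 + x1))

def pred_closed(x):           # closed form 2*(2 + x1)^2 / (2 + x1 + x)^3
    return 2*(2 + x1)**2 / (2 + x1 + x)**3

def pred_numeric(x):          # midpoint Riemann sum of f(x|t) * p(t|x1) over t
    h = 1e-3
    return sum(t*math.exp(-t*x) * post(t) * h
               for t in (h*(k + 0.5) for k in range(int(40/h))))
```

Evaluating both at any x > 0 gives matching values, confirming the calculation.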
Advanced Mathematics and Statistics: Exercises
Exercises from the General exam – 7 June, 2019
(b) Calculate the power of the test in (a). Is it possible to determine another test of size
α = 0.01 having a bigger power? Motivate the answer.
(c) Determine the UMP test of size α = 0.01 for the problem H0 : θ = 1 vs H1 : θ > 1,
find the associated power function and draw it on the plane.
Exercise 4. Let X1 be a Binomial random variable with parameters (n, θ), i.e.

    P(X1 = x) = (n choose x) θ^x (1 − θ)^{n−x},   x = 0, 1, . . . , n,

and θ ∈ (0, 1). Suppose that the prior distribution for the parameter θ is p(θ) = 2θ 1_{(0,1)}(θ) (note that θ ∼ Beta(2, 1)).
(a) Identify the posterior distribution of θ, given X1 = x1 . Is the class of beta priors a
conjugate family of distributions for this statistical model? Motivate the answer.
(b) Provide the definition of Bayes estimator in its full generality, then determine the
Bayes estimator of θ under a squared loss function. Are you able to determine a
relation between the Bayes estimator and the MLE of θ?
(c) Assume that n = 5 and evaluate the posterior probability P(θ > 0.5|X1 = 5).
SOLUTIONS
Exercise 3.
(a) The optimal test of size α rejects H0 iff the observed X = x is such that

    f1(x)/f0(x) ≥ K  ⇔  (2 · 2²/x³) · (x²/2) ≥ K  ⇔  4/x ≥ K  ⇔  x ≤ 4/K =: K′,
where K′ has to be chosen in such a way that the test is of size α = 0.01. In other words K′ is the solution of the equation

    0.01 = α = P_{H0}(X ≤ K′) = P_{θ=1}(X ≤ K′) = ∫_2^{K′} f0(x) dx = ∫_2^{K′} (2/x²) dx,

which entails

    1/200 = ∫_2^{K′} (1/x²) dx = [−1/x]_2^{K′} = 1/2 − 1/K′  ⇔  1/K′ = 1/2 − 1/200 = 99/200,

therefore K′ = 200/99 ≈ 2.02 and the Neyman–Pearson test of size 0.01 has critical region {x : x ≤ 2.02}.
In order to draw the power function, note that for θ ≥ 1

    p(θ) = P_θ(X ≤ 200/99) = ∫_2^{200/99} (θ 2^θ/x^{θ+1}) dx = 1 − (2 · 99/200)^θ = 1 − (0.99)^θ,

so that p(1) = α = 0.01, p is increasing on (1, ∞) and p(θ) → 1 as θ → ∞.
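Assuming, consistently with the likelihood ratio above, that f_θ(x) = θ 2^θ/x^{θ+1} on (2, ∞), the power function can be tabulated directly:

```python
K = 200/99                   # critical value of the size-0.01 NP test

def power(theta):
    # P_theta(X <= K) for the Pareto-type density theta * 2^theta / x^(theta+1)
    # on (2, inf): the CDF is 1 - (2/x)^theta, so the power is 1 - (2/K)^theta,
    # i.e. 1 - 0.99^theta
    return 1 - (2/K)**theta
```

Here power(1) recovers the size 0.01, and power(theta) increases monotonically towards 1 as theta grows, matching the qualitative description above.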
Exercise 4.
(a) The posterior distribution can be evaluated by means of the Bayes theorem:

    p(θ|x1) ∝ f(x1|θ) p(θ) = (n choose x1) θ^{x1} (1 − θ)^{n−x1} · 2θ ∝ θ^{x1+2−1} (1 − θ)^{n−x1+1−1},

where θ ∈ (0, 1). We recognize that p(θ|x1) is proportional to a beta density, more precisely

    θ | X1 = x1 ∼ Beta(x1 + 2, n − x1 + 1),

or in other terms

    p(θ|x1) = [Γ(n + 3)/(Γ(x1 + 2) Γ(n − x1 + 1))] θ^{x1+1} (1 − θ)^{n−x1} 1_{(0,1)}(θ).   (1)

Since a Beta(2, 1) prior yields a beta posterior (and the same computation applied to a generic Beta(a, b) prior yields a Beta(a + x1, b + n − x1) posterior), the class of beta priors is a conjugate family for this statistical model.
(b) The Bayes estimator is the one which minimizes the posterior risk, i.e.

    θ̂ = argmin_d E[L(θ, d) | X1 = x1],

where L is a loss function. If L is the quadratic loss, the Bayes estimator boils down to the posterior mean,

    θ̂ = ∫_Θ θ p(θ|x1) dθ = (x1 + 2)/(n + 3),

where we used the fact that the posterior of θ is a beta distribution and the mean of a beta with parameters (a, b) is a/(a + b).
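The posterior mean can again be checked numerically from the kernel in (1); a sketch with hypothetical values of n and x1 (assumptions, purely for illustration):

```python
n, x1 = 5, 3                 # hypothetical values, for illustration only

def kernel(t):               # unnormalized Beta(x1 + 2, n - x1 + 1) kernel
    return t**(x1 + 1) * (1 - t)**(n - x1)

# midpoint Riemann sums on (0, 1)
h = 1e-4
grid = [h*(k + 0.5) for k in range(int(1/h))]
Z = sum(kernel(t) for t in grid) * h
post_mean = sum(t*kernel(t) for t in grid) * h / Z
# agrees with the closed form (x1 + 2)/(n + 3)
```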