
Example exam

DIRECTIONS: ALLOW 3 HOURS. WRITE YOUR NAME AND STUDENT NUMBER AT THE TOP OF
EVERY PAGE OF YOUR SOLUTIONS. Please explain clearly all of the steps that you used to
derive a result. Please make certain that your handwriting is readable to someone besides
yourself.

1. Answer the following open questions: 7 points/question


(a) Using Cox’s product and/or sum rules, derive the mathematical formula that describes
Bayes’ theorem and give the name of each term in the formula.
(b) Describe and compare parameter estimation and model comparison. What role does
the evidence play in each process?
(c) In the context of parameter estimation, describe the difference between a scale parameter
and a location parameter.
(d) Given a continuous uniform distribution function prob(x) = 1/3 for 1 ≤ x ≤ 4 and
prob(x) = 0 otherwise (a numerical check of these values is sketched after this question list):

i. Calculate the expectation value ⟨3x + 6⟩.
ii. Calculate the expectation value ⟨x⟩ and the variance of x, Var(x).
(e) Explain what the Kolmogorov–Smirnov test is used for and how it works.
(f) Given a one-dimensional probability density function, describe how you would find
the 95% confidence interval.
(g) Given the joint posterior distribution of X and Y , prob (X, Y | {data}, I), write down
the definition of the covariance of X and Y . What does it mean when the covariance
is equal to zero?
(h) Describe what a singular value decomposition is and include the formula for the
decomposition. How can a singular value decomposition be used in multi-variate
parameter estimation, and what is its advantage over other methods?
(i) For solving a two-parameter problem with parameters X and Y, the following equation
can be defined:

\[
Q(X, Y) = \begin{pmatrix} X - X_0 & Y - Y_0 \end{pmatrix}
\begin{pmatrix} A & C \\ C & B \end{pmatrix}
\begin{pmatrix} X - X_0 \\ Y - Y_0 \end{pmatrix}
\]

with

\[
A = \left.\frac{\partial^2 L}{\partial X^2}\right|_{X_0, Y_0}, \qquad
B = \left.\frac{\partial^2 L}{\partial Y^2}\right|_{X_0, Y_0}, \qquad
C = \left.\frac{\partial^2 L}{\partial X\,\partial Y}\right|_{X_0, Y_0}.
\]

(X0, Y0) is the most probable value for the joint posterior of parameters X and Y, and
L is the logarithm of the joint posterior probability function. In general, the iso-contours
of Q at a constant value k (k = Q(X, Y)) trace an ellipse. Specify the following properties
of this ellipse: the ellipse's centre, the magnitude of the axes, and the orientation of the
ellipse. How do these properties relate to the parameters' uncertainties and (lack of)
correlation?
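A minimal numerical check of the expectation values asked for in question 1(d), assuming
NumPy and plain Monte Carlo sampling (the sample size and random seed are arbitrary
illustrative choices):

# Numerical check of question 1(d): prob(x) = 1/3 for 1 <= x <= 4, zero elsewhere.
import numpy as np

rng = np.random.default_rng(0)              # seed chosen arbitrarily
x = rng.uniform(1.0, 4.0, size=1_000_000)   # samples from the uniform density on [1, 4]

print(np.mean(3 * x + 6))                   # ⟨3x + 6⟩ = 3⟨x⟩ + 6 = 13.5 analytically
print(np.mean(x))                           # ⟨x⟩ = (1 + 4)/2 = 2.5
print(np.var(x))                            # Var(x) = (4 − 1)²/12 = 0.75

The Monte Carlo estimates approach the analytic values as the sample size grows, which is
the behaviour the expectation-value definitions predict.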

2. Open questions involving derivations

(a) (15 points) Given a data set {xi : 0 ≤ i < N} of N values, where each value xi is
independent and drawn from a normal (Gaussian) distribution. The values have different
(known) standard deviations, given by σi. Use Bayes' theorem (derived in question 1(a))
to prove that the most probable value µ0 for the mean of the set is the weighted average
of the data. Assume a flat prior for the mean µ. (A numerical check of this result is
sketched after question 2.)

(b) (12 points) Assume a coin with unknown bias H ∈ [0, 1] is thrown N times (H = 0.5
for a fair coin, H = 1 for a coin that always comes up heads). The number of times heads
comes up is R. Start from Bayes' theorem (derived in question 1(a)) and derive the best
estimate of the bias, H0, and its variance σH², given R and N.
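The numerical check referenced in question 2(a) is sketched below. It assumes NumPy and
synthetic Gaussian data (the true mean of 3.0, the range of the σi, and the grid over µ are
arbitrary illustrative choices) and compares the weighted average of the data against a
brute-force maximisation of the flat-prior log posterior.

# Check that the posterior maximum for µ (flat prior, independent Gaussian data with
# known per-point standard deviations σi) matches the weighted average of the data.
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.uniform(0.5, 2.0, size=50)      # known standard deviations σi
x = rng.normal(loc=3.0, scale=sigma)        # synthetic data with true mean 3.0

w = 1.0 / sigma**2
mu_weighted = np.sum(w * x) / np.sum(w)     # weighted average of the data

mu_grid = np.linspace(2.0, 4.0, 20001)      # brute-force scan of the log posterior
log_post = -0.5 * np.sum(((x[None, :] - mu_grid[:, None]) / sigma[None, :])**2, axis=1)
mu_map = mu_grid[np.argmax(log_post)]

print(mu_weighted, mu_map)                  # the two estimates agree to the grid spacing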

3. True/false questions – mark T for a true statement or F for a false statement on your
exam paper: 1 point/question

(a) The Cauchy distribution is a specific instance of the more generic Student-t distribution.
(b) Given two values that are from the same uniform distribution with zero mean, the
sum of the squared values is chi-squared distributed.
(c) A χ2 test can be used to compare two binned distributions.
(d) Compared to the linear correlation coefficient, Spearman's rank-order correlation
coefficient is more robust against outliers.
(e) The theory of “Ockham’s razor” states that, in some situations, when independent
random variables are added, their sum tends toward a normal distribution.
(f) The Cauchy distribution has an undefined mean. (Was: “A set of samples that follow
a Cauchy distribution has an undefined mean.”)
(g) For a uniform distribution, the cumulative distribution function and the probability
distribution function are the same.
(h) The mean of the Poisson distribution is equal to its variance.
(i) Obtaining prob(A | I) from prob(A, B | I) requires marginalization over parameter
A.
(j) After calculating the posterior for a parameter for some set of data, the accuracy of
the estimate can be increased by using the posterior as prior.
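For reference, two of the distributional statements in question 3 (parts (a) and (h)) can
be illustrated with scipy.stats; the parameter values below are arbitrary and the snippet
is only an illustrative check, not a model answer.

# Illustrations related to question 3, using scipy.stats.
import numpy as np
from scipy import stats

# 3(a): a Student-t distribution with one degree of freedom has the Cauchy density.
xs = np.linspace(-5.0, 5.0, 11)
print(np.allclose(stats.t(df=1).pdf(xs), stats.cauchy().pdf(xs)))   # True

# 3(h): the mean and variance of a Poisson distribution both equal its rate.
lam = 4.2
print(stats.poisson(lam).mean(), stats.poisson(lam).var())          # both 4.2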
