
Likelihood-ratio test

From Wikipedia, the free encyclopedia


Not to be confused with the use of likelihood ratios in diagnostic testing.
In statistics, a likelihood ratio test is a statistical test used to compare the fit of two models, one of which (the null model) is a special case of the other (the alternative model). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a p-value, or compared to a critical value to decide whether to reject the null model in favour of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks's theorem.

In the case of distinguishing between two models, each of which has no unknown parameters, use of the likelihood ratio test can be justified by the Neyman–Pearson lemma, which demonstrates that such a test has the highest power among all competitors.[1]

Contents
1 Use
2 Background
3 Simple-vs-simple hypotheses
4 Definition (likelihood ratio test for composite hypotheses)
4.1 Interpretation
4.2 Distribution: Wilks's theorem
5 Examples
5.1 Coin tossing
6 References
7 External links
1 Use
Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by D) is twice the difference in these log-likelihoods:

$$D = -2\ln\frac{\mathcal{L}(\text{null model})}{\mathcal{L}(\text{alternative model})} = 2\left[\ln \mathcal{L}(\text{alternative model}) - \ln \mathcal{L}(\text{null model})\right]$$
The model with more parameters will always fit at least as well (have an equal or greater log-likelihood). Whether it fits significantly better and should thus be preferred is determined by deriving the probability or p-value of the difference D. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to df2 − df1.[2] Symbols df1 and df2 represent the number of free parameters of models 1 and 2, the null model and the alternative model, respectively. The test requires nested models, that is: models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters.[3]

For example: if the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the probability of this difference is that of a chi-squared value of 2 × (8024 − 8012) = 24 with 3 − 1 = 2 degrees of freedom. Certain assumptions[4] must be met for the statistic to follow a chi-squared distribution, and often empirical p-values are computed.
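
As a quick check of the arithmetic above, the tail probability of D = 24 on 2 degrees of freedom can be computed with SciPy. This is a minimal sketch; the log-likelihood values are the hypothetical ones from the worked example, not the output of any real model fit.

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods from the worked example above.
ll_null = -8024.0  # null model, 1 free parameter
ll_alt = -8012.0   # alternative model, 3 free parameters

D = 2 * (ll_alt - ll_null)  # test statistic: D = 24
df = 3 - 1                  # difference in number of free parameters
p_value = chi2.sf(D, df)    # survival function: P(chi2_df >= D)

print(f"D = {D}, p = {p_value:.2e}")  # p is about 6e-06, so the null model is rejected
```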
2 Background
The likelihood ratio, often denoted by Λ (the capital Greek letter lambda), is the ratio of the likelihood function varying the parameters over two different sets in the numerator and denominator. A likelihood ratio test is a statistical test for making a decision between two hypotheses based on the value of this ratio.

It is central to the Neyman–Pearson approach to statistical hypothesis testing, and, like statistical hypothesis testing in general, is both widely used and criticized.
3 Simple-vs-simple hypotheses
Main article: Neyman–Pearson lemma
A statistical model is often a parametrized family of probability density functions or probability mass functions f(x | θ). A simple-vs-simple hypothesis test has completely specified models under both the null and alternative hypotheses, which for convenience are written in terms of fixed values of a notional parameter θ:

$$H_0 : \theta = \theta_0, \qquad H_1 : \theta = \theta_1.$$

Note that under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test statistic can be written as:[5][6]

$$\Lambda(x) = \frac{L(\theta_0 \mid x)}{L(\theta_1 \mid x)}$$

or

$$\Lambda(x) = \frac{L(\theta_0 \mid x)}{\sup\{\,L(\theta \mid x) : \theta \in \{\theta_0, \theta_1\}\,\}},$$

where L(θ | x) is the likelihood function and sup denotes the supremum. Note that
some references may use the reciprocal as the definition.[7] In the form stated here, the likelihood ratio is small if the alternative model is better than the null model, and the likelihood ratio test provides the decision rule:

If Λ > c, do not reject H0;
If Λ < c, reject H0;
If Λ = c, reject H0 with probability q.

The values c and q are usually chosen to obtain a specified significance level α, through the relation q · P(Λ = c | H0) + P(Λ < c | H0) = α. The Neyman–Pearson lemma states that this likelihood ratio test is the most powerful among all level-α tests for this problem.[1]
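
To make the decision rule concrete, here is a small sketch of a simple-vs-simple test for the mean of i.i.d. N(θ, 1) data. The values θ0 = 0, θ1 = 1, the sample size, and the critical value c are all hypothetical choices for illustration, not prescribed by the theory.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two fully specified hypotheses for i.i.d. N(theta, 1) data.
theta0, theta1 = 0.0, 1.0
x = rng.normal(theta1, 1.0, size=50)  # data drawn under H1 for this demo

# Log of the likelihood ratio Lambda(x) = L(theta0 | x) / L(theta1 | x).
log_Lambda = norm.logpdf(x, theta0, 1.0).sum() - norm.logpdf(x, theta1, 1.0).sum()

# Decision rule: reject H0 when Lambda < c (c here is a hypothetical critical value).
c = 0.05
print("log Lambda =", round(log_Lambda, 2), "| reject H0:", log_Lambda < np.log(c))
```

Because the data were drawn under H1, Λ(x) typically falls far below any reasonable c and H0 is rejected.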

4 Definition (likelihood ratio test for composite hypotheses)
A null hypothesis is often stated by saying the parameter θ is in a specified subset Θ0 of the parameter space Θ.

$$H_0 : \theta \in \Theta_0, \qquad H_1 : \theta \in \Theta_0^{\complement}$$

The likelihood function is L(θ | x) = f(x | θ) (with f being the pdf or pmf), which is a function of the parameter θ with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio test statistic is[8]

$$\lambda(x) = \frac{\sup\{\,L(\theta \mid x) : \theta \in \Theta_0\,\}}{\sup\{\,L(\theta \mid x) : \theta \in \Theta\,\}}.$$

Here, the notation sup refers to the supremum function.
A likelihood ratio test is any test with critical region (or rejection region) of the form {x : λ(x) ≤ c}, where c is any number satisfying 0 ≤ c ≤ 1. Many common test statistics such as the Z-test, the F-test, Pearson's chi-squared test and the G-test are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
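
As a concrete sketch of the two suprema, consider i.i.d. N(μ, 1) data with the composite hypotheses Θ0 = {0} and Θ = ℝ (the null value, sample size, and data-generating mean below are hypothetical). The numerator is maximized at the fixed null value and the denominator at the sample mean, which is the unrestricted MLE.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=100)  # observed data (true mean 0.3 is hypothetical)

mu0 = 0.0          # Theta_0 is the single point {mu0}
mu_hat = x.mean()  # MLE over the whole parameter space Theta = R

# lambda(x) = sup_{Theta_0} L / sup_{Theta} L, computed on the log scale for stability.
log_lambda = norm.logpdf(x, mu0, 1.0).sum() - norm.logpdf(x, mu_hat, 1.0).sum()
print("lambda(x) =", np.exp(log_lambda))  # always lies in (0, 1]
```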
4.1 Interpretation
Being a function of the data x, the likelihood ratio λ(x) is therefore a statistic. The likelihood ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable ("Type I" errors consist of the rejection of a null hypothesis that is true).

The numerator corresponds to the maximum likelihood of an observed outcome under the null hypothesis. The denominator corresponds to the maximum likelihood of an observed outcome varying parameters over the whole parameter space. Since Θ0 is contained in Θ, the numerator of this ratio is never greater than the denominator, so the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, and the null hypothesis cannot be rejected.
4.2 Distribution: Wilks's theorem
If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined, then it can directly be used to form decision regions (to accept or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result, attributed to Samuel S. Wilks, says that as the sample size n approaches ∞, the test statistic −2 log(λ) for a nested model will be asymptotically χ²-distributed with degrees of freedom equal to the difference in dimensionality of Θ and Θ0.[4] This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio λ for the data and compare −2 log(λ) to the χ² value corresponding to a desired statistical significance as an approximate statistical test.
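
Continuing the hypothetical normal-mean sketch from the previous section, Wilks's theorem turns λ into an approximate test: Θ has dimension 1 and Θ0 dimension 0, so −2 log(λ) is compared to a χ² distribution with one degree of freedom.

```python
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100)  # data generated under H0: mu = 0

mu0, mu_hat = 0.0, x.mean()
log_lambda = norm.logpdf(x, mu0, 1.0).sum() - norm.logpdf(x, mu_hat, 1.0).sum()

# Wilks: -2 log lambda is asymptotically chi-squared with df = 1 - 0 = 1.
stat = -2 * log_lambda
p_value = chi2.sf(stat, df=1)
print(f"-2 log lambda = {stat:.3f}, p = {p_value:.3f}")
```

Since the data were simulated under H0, the p-value will typically be well above conventional significance levels, and H0 is not rejected.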
5 Examples
5.1 Coin tossing
As an example, in the case of Pearson's test, we might try to compare two coins to determine whether they have the same probability of coming up heads. Our observation can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times the coin for that row came up heads or tails. The contents of this table are our observation X.
        Heads   Tails
Coin 1  k1H     k1T
Coin 2  k2H     k2T
Here Θ consists of the parameters p1H, p1T, p2H, and p2T, which are the probabilities that coins 1 and 2 come up heads or tails. The hypothesis space H is defined by the usual constraints on a distribution: 0 ≤ pij ≤ 1 and piH + piT = 1. The null hypothesis H0 is the subspace where p1j = p2j. In all of these constraints, i = 1, 2 and j = H, T.

Writing nij for the best values for pij under the hypothesis H, maximum likelihood is achieved with

$$n_{ij} = \frac{k_{ij}}{k_{iH} + k_{iT}}.$$

Writing mij for the best values for pij under the null hypothesis H0, maximum likelihood is achieved with

$$m_{ij} = \frac{k_{1j} + k_{2j}}{k_{1H} + k_{1T} + k_{2H} + k_{2T}},$$

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired nice distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H0, the asymptotic distribution for the test will be χ²(1), the χ² distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

$$-2 \log \Lambda = 2 \sum_{i,j} k_{ij} \log \frac{n_{ij}}{m_{ij}}.$$

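A sketch of this computation on made-up counts (the k values below are hypothetical) computes −2 log Λ from the two sets of maximum-likelihood estimates and compares it to χ²(1):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical contingency table: rows are coins, columns are (heads, tails).
k = np.array([[43, 57],   # coin 1
              [52, 48]])  # coin 2

n = k / k.sum(axis=1, keepdims=True)  # MLE under H: separate probabilities per coin
m = k.sum(axis=0) / k.sum()           # MLE under H0: shared probabilities for both coins

# Log-likelihood ratio statistic: -2 log Lambda = 2 * sum_ij k_ij * log(n_ij / m_j)
stat = 2 * (k * np.log(n / m)).sum()
p_value = chi2.sf(stat, df=1)  # two-dimensional H reduced to one-dimensional H0
print(f"-2 log Lambda = {stat:.3f}, p = {p_value:.3f}")
```

For these counts the statistic is small, so the hypothesis that both coins share the same head probability is not rejected.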
6 References
1. ^ a b Neyman, Jerzy; Pearson, Egon S. (1933). "On the Problem of the Most Efficient Tests of Statistical Hypotheses". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 231 (694–706): 289–337. doi:10.1098/rsta.1933.0009. JSTOR 91247.
2. ^ Huelsenbeck, J. P.; Crandall, K. A. (1997). "Phylogeny Estimation and Hypothesis Testing Using Maximum Likelihood". Annual Review of Ecology and Systematics 28: 437–466. doi:10.1146/annurev.ecolsys.28.1.437.
3. ^ An example using phylogenetic analyses is described at Huelsenbeck, J. P.; Hillis, D. M.; Nielsen, R. (1996). "A Likelihood-Ratio Test of Monophyly". Systematic Biology 45 (4): 546. doi: