
The Future of Statistics: A Bayesian 21st Century

Author(s): D. V. Lindley
Source: Advances in Applied Probability, Sep. 1975, Vol. 7, Supplement: Proceedings of
the Conference on Directions for Mathematical Statistics (Sep. 1975), pp. 106-115
Published by: Applied Probability Trust

Stable URL: http://www.jstor.com/stable/1426315


THE FUTURE OF STATISTICS - A BAYESIAN 21ST CENTURY

D. V. LINDLEY, University College London and University of Iowa

The thesis behind this talk is very simple: the only good statistics is Bayesian
statistics. Bayesian statistics is not just another technique to be added to our
repertoire alongside, for example, multivariate analysis; it is the only method
that can produce sound inferences and decisions in multivariate, or any other
branch of, statistics. It is not just another chapter to add to that elementary text
you are writing; it is that text. It follows that the unique direction for
mathematical statistics must be along the Bayesian road.
The talk is divided into three sections. In the first I shall state the Bayesian
position and explain how it differs from that which is currently practised. I had
hoped that it would not be necessary to include this section, but I was
persuaded that I should. In the short time available, a complete statement is
not possible; but the literature contains many better and fuller statements than
can be given here.* In the second section the central thesis will be justified, and
in the third I shall undertake what I see as the real point of the lecture: a
study of future directions for statistics. It had originally been my intention to
follow Orwell and use 1984 in the title, but de Finetti (1974) suggested
otherwise, hence the longer time span.

1. Bayesian Statistics

The distinguishing feature of Bayesian statistics is that all unknown quantities
are random variables: not just the data, but other variables, the parameters,
are, before they are observed, random. The act of observation changes the
status of the quantity from a random variable to a number. Here are two
examples.
Example 1. Consider n Bernoulli trials in which an event occurs in r
of them. The usual random mechanism is governed by a parameter, the chance
of the event occurring in a single trial, usually denoted by p, but here θ
will be used, r being a random variable having a density p(r) or, more
accurately, p(r|θ), since it depends on θ.

* Two references are Lindley (1971) and De Groot (1970). The best is de Finetti's
work, of which Volume I has just appeared (1974) in an English translation.


The Bayesian position is that θ is also
a random variable with its density, p(θ) say. After we have conducted the n
trials and seen the event occur r times, r ceases to be a random variable (the
notation often reflects this in a change from R to r). However, θ, not being
observed, retains its random variable status, its density changing from p(θ) to
p(θ|r) in consequence of the observation. The change here is governed by
Bayes' theorem: p(θ|r) ∝ θ^r (1 - θ)^(n-r) p(θ). Bayesian analysis is concerned*
with the distributions of θ and how they are changed by observations:
sampling-theory statistics is concerned with the only distribution it has, p(r|θ),
a distribution which, the Bayesian claims, is irrelevant after R has been
observed to be r.
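To make the update concrete, here is a minimal numerical sketch of the Bayes' theorem calculation just stated: the posterior is proportional to θ^r (1 - θ)^(n-r) times the prior, normalised on a grid. The function name, the grid size and the uniform prior used in the example call are illustrative assumptions added here, not part of the paper.

```python
import numpy as np

def posterior_on_grid(r, n, prior_pdf, grid=None):
    """Discretised Bayes' theorem for Example 1:
    p(theta | r) is proportional to theta**r * (1 - theta)**(n - r) * p(theta)."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 1001)
    unnormalised = grid**r * (1.0 - grid)**(n - r) * prior_pdf(grid)
    posterior = unnormalised / np.trapz(unnormalised, grid)  # normalise numerically
    return grid, posterior

# Uniform prior p(theta) = 1 on (0, 1); 10 trials with the event occurring 7 times.
theta, post = posterior_on_grid(r=7, n=10, prior_pdf=lambda t: np.ones_like(t))
print(np.trapz(theta * post, theta))  # posterior mean, about 0.667 = (7 + 1)/(10 + 2)
```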
Example 2. There are many problems which are concerned with the means
of several normal distributions: for example, the common two-way classifica-
tion (rows and columns, say) using the analysis of variance and the concepts of
main effects and interactions. In the Bayesian position the cell means are
themselves random variables whose distributions, as in Example 1, are affected
by observations. The distinction between Model I and Model II analyses
therefore disappears, though the parameters in the latter model, for example,
the variance component for rows, are random variables in the Bayesian
treatment, though not in the orthodox one. We return to these two examples
later in the talk.
Although all unobserved quantities are, in the Bayesian view, random, the
concept of probability thereby implied is not based on frequency considera-
tions. Probability is a relationship between 'you' and the external world,
expressing your views about that external world. In particular, the Bernoulli
'probability', 0 in Example 1, is not a probability in this sense, because it
describes a property of the external world. We refer to it as the propensity of
the event to occur. The important point here is not the names as such, but the
appreciation of the difference between, on the one hand, a relationship between
you and the sequence, and, on the other, a property of the sequence. The
function of names is to distinguish things: the same name is given to things
which are alike; different names to things which are dissimilar. A rose by any
name would smell as sweet but it would be confusing if the alternative name
was daffodil.
Other concepts enter into the Bayesian approach, in particular that of utility
and the combination of it with probability in the notion of expected utility. The

* Though not exclusively. Sometimes it is useful to talk about the unconditional distribution of
r, p(r), not p(r|θ), as when we contemplate the possible results of the n trials. Such distributions
are not available in sampling-theory statistics.


final maximisation of this quantity solves any decision problem. The utility
notion is itself probabilistic, so that essentially everything follows from the
basic remark that unobserved quantities are random: that is, have a probability
structure. All the calculations in the system are within the probability calculus
(which is why Jeffreys (1967) uses probability in the title of what is really a
book on statistics). In particular, problems of point estimation disappear: the
answer is the probability distribution and any single value is nothing more than
a convenient partial description of this distribution.
There is a useful distinction to be made between 'inference' and 'decision'.
The Bayesian view is that the only purpose of an inference is its potential use in
a decision problem. To achieve this potentiality it is only necessary to provide
the probability distribution conditional on the data. This provision is inference.
Decision-making adds the utility ingredient, calculates an expectation, using
the inferential probability, and performs the maximisation. The distinction
occurs outside statistics: law and medicine are mentioned below.

2. Justification

The first complete justification for this viewpoint known to me was given by
Ramsey (1964) in 1926. His work lay unappreciated for almost thirty years and
modern work begins with Savage's (1954) important book. The best up-to-date
treatment in a textbook is probably De Groot's (1970). An alternative approach
is due to de Finetti (1964) in 1937. Ramsey's argument is essentially along the
following lines. In considering the way in which people would themselves wish
to act in the face of uncertainty, the statistician is led to state certain axioms
that they would not wish to violate. An example of these is the one Savage so
charmingly called the 'sure-thing' principle. It says that if A is preferred to B
when C obtains, and also when C does not obtain, then A is preferred to B even
when one is uncertain about C. From these axioms it is possible to develop a
mathematical system that we call Bayesian statistics. In particular, it is possible
to prove that uncertain quantities have a probability structure; the property
that we took as basic to the system. I know of no objection to these axioms that
has persisted, and it is a pity that many critics of the approach do not pay more
attention to them instead of misrepresenting the position and so making it look
ridiculous.

We should, at this point, take note of a great advantage the Bayesian position
has over all other approaches to statistics: namely, in the way just described, it
is a formal system with axioms and theorems. We all know and appreciate the
great impetus given to probability theory by Kolmogorov's (1950) 1933
axiomatisation of that field. A more striking example is provided by Newton's
statement of the laws of mechanics. Only when a system has a formal structure


can we be quite sure what it is we are talking about, and can we teach it to all
those intelligent enough and willing to listen. Fisherians have condemned Bayesian
statistics as a 'monolithic structure'. Would they term Newtonian mechanics
monolithic? Critics often refer to Bayes as a Messiah; would they grant the
same status to Newton? I find this messianic attitude particularly curious when
uttered by Fisherians who appear to regard the collected works, Fisher (1950),
and his last book, Fisher (1956), as the old and new testaments respectively.
An important theorem within the formal system is that which says that
inferences should follow the likelihood principle. Now it so happens that
almost all statistical techniques violate this principle and therefore do not fit
into the system. As a result all these techniques must be capable of producing
nonsense. And this indeed is so. In Lindley (1971) I have given a list of
counter-examples to demonstrate how ridiculous every statistical technique
can be. Thus in Example 1 above suppose it is required to test the hypothesis
θ = 1/2, by a standard significance test. Then a vast range of significance levels
can be produced by varying the sample space, or equivalently changing the
stopping rule. Careful reflection shows that this is not exactly sensible. Or
consider Kendall and Stuart's (1970) optimum estimate of θ² in Example 1,
namely r(r - 1)/n(n - 1), when r = 1: to estimate a chance as zero when the
event has occurred is incredible.
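For comparison, here is a short computation (an illustration added here, not in the original) of the two estimates of θ² when r = 1: the Kendall and Stuart estimate above, and a Bayesian posterior mean computed under an assumed uniform Beta(1, 1) prior, for which θ given r has a Beta(r + 1, n - r + 1) distribution.

```python
def unbiased_theta_sq(r, n):
    # Kendall and Stuart's optimum (unbiased) estimate of theta**2.
    return r * (r - 1) / (n * (n - 1))

def bayes_theta_sq(r, n, a=1.0, b=1.0):
    # Posterior mean of theta**2: second moment of a Beta(r + a, n - r + b) density.
    alpha, total = r + a, n + a + b
    return alpha * (alpha + 1) / (total * (total + 1))

r, n = 1, 10
print(unbiased_theta_sq(r, n))  # 0.0, even though the event has occurred
print(bayes_theta_sq(r, n))     # about 0.038: small, but not zero
```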
The above justification for Bayesian statistics is at a theoretical level, though
its practical implications are immense. But an important alternative justification
rests on the pragmatic fact that it works. Bayesian statistics satisfies the
two basic requirements of science in resting on sound principles and working in
practice. Let me demonstrate this using the two examples above.
Example 1. Consider n₁ trials with r₁ successes observed, and contemplate
n₂ further trials and ask what are the chances of r₂ additional successes. First
let us note that this is a practical problem. The physician who treated n₁ patients
with a drug and had r₁ respond successfully could legitimately ask what might
happen if the treatment were used on n₂ further patients. Indeed Pearson (1920)
went so far as to describe it as one of the fundamental problems of practical
statistics. Although it rarely occurs in quite the simple form here presented, a
solution to it is essential before more complicated and realistic problems are
discussed. But then notice that sampling-theory statistics has no simple way of
answering the question. For within that subject it is not possible to talk of
p(r₂ | n₂; r₁, n₁): only probabilities conditional on θ are admitted. The difficulty is
circumvented by either making statements about θ - to which the doctor's
response is that he is treating this patient, not a long-run frequency of patients
- or, rarely, by resorting to the complexities of tolerance intervals. So im-
mediately we see that Bayesian statistics has one practical advantage over the
standard approach. But let us go further and consider the Bayesian answer. For


simplicity take the case n₂ = r₂ = 1: the chance of success on one further trial.
Under certain assumptions* the probability is (r + a)/(n + a + b) - omitting
the suffixes - where a and b refer to the initial (prior) views of the sequence.
Compare this with r/n, the usual point estimate of θ. The most obvious
difference between the two is the occurrence of a and b in the former but not
the latter. But doesn't this make good, practical sense? The usual estimate says
that it does not matter whether it is a sequence of patients, transistors,
drawing-pins or coins, the estimate is always the same. The Bayesian argument
says it is necessary to think about whether it is patients, transistors, drawing-
pins or coins that are being discussed, for it could affect the choice of a
and b. For example, with drawing-pins I would take a = b = 2, but with coins
a = b = 100, say. The resulting Bayesian answers for modest values of n are
very different: isn't that right? Wouldn't your reaction to drawing-pins (about
whose tossing propensities you probably know very little) be different from
that with coins (which are well known to have propensities near 1/2)?
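The predictive formula is easy to check numerically; the sketch below (added here for illustration) uses the values a = b = 2 for drawing-pins and a = b = 100 for coins quoted in the text, together with a hypothetical record of 7 successes in 10 trials.

```python
def prob_next_success(r, n, a, b):
    """Chance of success on one further trial, given r successes in n trials
    and initial (prior) opinion summarised by a and b: (r + a)/(n + a + b)."""
    return (r + a) / (n + a + b)

r, n = 7, 10                              # a hypothetical record of trials
print(r / n)                              # usual point estimate: 0.70
print(prob_next_success(r, n, 2, 2))      # drawing-pins: about 0.64
print(prob_next_success(r, n, 100, 100))  # coins: about 0.51, still close to 1/2
```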
Example 2. The techniques available for studying the two-way layout are
extensive and one faces an embarrassment of choices which the textbooks do
not resolve. One can perform an analysis of variance with its associated
significance tests. But if, for example, the main effect of rows and the
interaction are significant at 1 percent, but not the column effect, how is one
supposed to estimate a cell mean? What multiple comparisons are to be
applied? The Bayesian approach is quite clear: first you have to think about
those rows and columns: are they important factors or are they nuisance
factors that good experimental design has suggested be included? What do you
know about the factors - is one a control? And so on, thinking about the real
problem in order to assess an initial distribution. Having done this, Bayes
theorem is applied to provide answers to all questions in the form of a
probability distribution. Under certain assumptions the expectation of the
parameter describing the cell in the ith row and the jth column is a linear
function of four quantities: the overall mean x.., the row effect xi. - x.., the
column effect x.j - x.. and the interaction xij - xi. - x.j + x.., the weights
depending on the appropriate variance components. The estimates avoid all
multiple comparison difficulties and any ambiguities over the meaning of
significance tests: see Lindley (1975).
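The following sketch shows the form of estimate being described: the observed cell value is replaced by the overall mean plus shrunken row, column and interaction effects. The shrinkage weights here are supplied by hand purely for illustration; in Lindley (1975) they would be determined by the variance components, which this sketch does not attempt to reproduce.

```python
import numpy as np

def cell_mean_estimate(x, i, j, w_row, w_col, w_int):
    """Weighted estimate of the (i, j) cell mean: overall mean plus shrunken
    row effect, column effect and interaction (weights between 0 and 1)."""
    grand = x.mean()
    row_effect = x[i, :].mean() - grand
    col_effect = x[:, j].mean() - grand
    interaction = x[i, j] - x[i, :].mean() - x[:, j].mean() + grand
    return grand + w_row * row_effect + w_col * col_effect + w_int * interaction

x = np.array([[10.0, 12.0, 11.0],
              [13.0, 16.0, 14.0]])                  # a 2 x 3 table of cell means
print(x[1, 1])                                      # raw value: 16.0
print(cell_mean_estimate(x, 1, 1, 0.9, 0.9, 0.3))   # about 15.47; interaction shrunk hardest
```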
(A further point arises here: it was not mentioned in the original lecture but
occurs in Rao's paper and was prominent in the discussion. It is now
well-known that the usual estimate of a multivariate normal mean is unsatisfac-
tory and that the Stein (1956) estimate is preferable. Unfortunately this

* The basic assumption is that the trials are exchangeable. This is weaker than the assumption
of a Bernoulli sequence.


estimate, and analogous estimates provided by empirical Bayes methods, are


ambiguous in that they do not declare what multivariate distribution is to be
used. If studying hogs in Montana, why not add data on butterflies in Brazil and
increase the dimensionality? Curiously, the estimates for the hogs will change.
The difficulty can be resolved by recognizing that the Bayes distribution will
reflect the difference between hogs and butterflies and will only produce the
Stein estimate when certain exchangeability assumptions are valid. Hogs are
not exchangeable with butterflies!)
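A small numerical illustration of the point (my own, under assumptions the parenthesis does not make precise): the James-Stein estimate shrinking a vector of sample means towards zero, with known unit sampling variance. Because the shrinkage factor depends on the whole vector, appending unrelated coordinates ('butterflies') changes the estimates for the original ones ('hogs').

```python
import numpy as np

def james_stein_toward_zero(x, sigma2=1.0):
    """James-Stein estimate for independent sample means with known sampling
    variance sigma2, shrinking towards the origin (requires len(x) >= 3)."""
    k = len(x)
    shrink = max(0.0, 1.0 - (k - 2) * sigma2 / np.sum(x**2))
    return shrink * x

hogs = np.array([3.1, 2.7, 3.6, 2.9])
butterflies = np.array([0.2, -0.1, 0.4])

print(james_stein_toward_zero(hogs))                                     # hogs alone
print(james_stein_toward_zero(np.concatenate([hogs, butterflies]))[:4])  # hog estimates change
```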

3. Directions for Statistics

As I hinted in the introduction, in my view it would have been better not to
have included the above material in the talk, since it is already available in the
literature, but instead to have concentrated on future directions for our subject.
This was, as I understood it, the purpose of the conference, and is a topic not
too well covered in the literature; a notable exception is Watts (1968). In the
time remaining to me I can only provide a cursory guide into the next century.
Bayesian statistics rests on the all-embracing notion of probability as
describing your belief about the state of the world. Once it is admitted that such
beliefs, obeying the calculus of probabilities, exist, we have an important
measurement problem: how to assess them? According to the thesis, your
beliefs can be described numerically: how are these numbers to be found?
Associated with this idea there is the concept of utility, describing numerically
your valuation of the worth of an outcome: how are these to be evaluated?
One method is to relate the beliefs to gambles, but this is, for obvious reasons,
not entirely satisfactory. A modified form of this is to consider a scoring rule. A
subject, asked to assess the probability of some event, gives the value p. If the
event occurs he is awarded a prize φ(p); if not, he obtains φ(1 - p). It is easy
to see that only some functions φ will qualify, in the sense that in order to
maximise his expected score the subject will declare his correct probability.
The simplest qualifying function is the quadratic rule φ(p) = -(1 - p)²
(equivalently, a penalty of (1 - p)² to be minimised). This has been used by de
Finetti, but in meteorology is called the Brier scoring rule. Can we train people
to be good probability assessors using the Brier, or a similar, rule? Clearly this
must be a subject for much research if the Bayesian ideas are to be
implemented.
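As a small check that the quadratic rule is 'proper' in the sense just described, the sketch below (an illustration added here, with the rule written as a penalty to be minimised) searches over declared values p for a fixed true belief q and finds the minimum at p = q.

```python
import numpy as np

def expected_quadratic_penalty(p, q):
    """Expected penalty for declaring p when your actual belief is q:
    (1 - p)**2 if the event occurs, p**2 if not, weighted by q and 1 - q."""
    return q * (1.0 - p) ** 2 + (1.0 - q) * p ** 2

q = 0.3                                   # the assessor's true probability
grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmin(expected_quadratic_penalty(grid, q))]
print(best)                               # 0.3: honest declaration does best
```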
One of the most important papers in this field is that of Savage (1971). His
work is theoretical and needs to be supplemented by experimental studies. To
perform these we will need the help of psychologists. At the moment many
psychologists waste their time trying to find out how people make decisions in
practice. It turns out that they aren't natural Bayesians: so then, the psychologists
ask, what rules do people apply? Now, why do this? Why not teach people


how to make decisions sensibly: that is, to maximise expected utility? Let us
persuade these psychologists and market researchers away from the study
of why a housewife buys this type of detergent rather than that - a question
that in any case will disappear with the capitalist system - and towards the
real problems.
The assessment of utilities presents similar difficulties, but these can be
solved directly in terms of gambles, since utilities themselves can be measured
by means of probabilities. For example, consider three states of health, here
referred to as good, bad and intermediate. Assigning utilities of 1 and 0 to the
first two respectively, the utility of the intermediate state can be assessed by
considering someone in that state, contemplating an operation that may restore
him to good health but has a chance p of reducing it to bad. What is the largest
value of p for which the operation will be adopted? The intermediate utility is
then 1 - p. In my experience such ideas are acceptable because subjects can
understand the problem. The Bayesian rules enable several such assessments to
be combined so as to handle more complicated and realistic decisions.
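A tiny sketch (added here, not Lindley's) of the bookkeeping in this assessment: the intermediate utility is read off from the indifference chance p, and an actual operation is worth adopting whenever its chance of the bad outcome is below that p.

```python
def intermediate_utility(indifference_p):
    """With u(good) = 1 and u(bad) = 0, indifference at chance indifference_p of
    the bad outcome gives u(intermediate) = 1 - indifference_p."""
    return 1.0 - indifference_p

def adopt_operation(chance_bad, indifference_p):
    """Operate iff the expected utility of operating, 1 - chance_bad, exceeds
    the utility of remaining in the intermediate state."""
    return 1.0 - chance_bad > intermediate_utility(indifference_p)

print(intermediate_utility(0.2))    # 0.8
print(adopt_operation(0.1, 0.2))    # True: a 10% risk is acceptable
print(adopt_operation(0.3, 0.2))    # False: a 30% risk is not
```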
In pursuing a sound path it is important to rescue those who have tried to
cross the mountains by bad routes. What shall we do with sampling-theory
statistics, with significance tests, with confidence intervals; with all the
methods that violate the likelihood principle? The answer is, let them die. Their
role has been to provide valuable stepping-stones to the future, and our
appreciation of the originators of these ideas should not be diminished by this
remark: for it is largely by the pursuit of the notions that we have reached the
understanding that we have today. Each of these techniques has a Bayesian
equivalent, which makes better practical sense, and I see no excuse for spending
our time on them except in a course on the history of our subject.* It is
generally acknowledged that sampling-theory statistics is in trouble - hence this
conference, and hence the emergence of new ideas like data analysis. It seems
that data analysis is the antithesis of Bayesian statistics, for it is an
unstructured field in which there are no rules. It is the negation of the axiomatic
method. It is a field in which bright ideas of a few clever men abound, but these
ideas are, because of the informality of the subject, difficult, if not impossible,
to convey to the average statistical practitioner. Contrast this with a formal
system with theorems stated under precise conditions and its consequent
simple method of communication to all who are interested. I do not want to be
thought to be decrying informality as such: far from it. Messing about with the
data, making plots of it and such aids to thought are an essential part of

* At University College London we are working towards an integrated programme on
Bayesian statistics, with one course on sampling-theory ideas for reasons of history and
communication.


any good statistics. But let it be allied to a good formal framework and regarded
as an approximation to a full Bayesian treatment. Newtonian mechanics
provides a good analogue again. Many problems are impossible to solve strictly
within that framework and much ingenuity is devoted to finding workable
approximations that produce valuable answers. Do your data analysis, but
remember, to make sense, you must never forget the rules of coherent
behaviour, any more than an engineer can forget Newton's laws.
Having cleared some dead wood from the path, let us go forward in a more
constructive vein. Statistics has had its greatest successes in those fields of
science where the long-run frequency view of probability is appropriate - for
example, in agriculture, where experiments may be repeated but nevertheless
the variation is sufficiently large for naive techniques to be inappropriate. But
with the widening of the notion of probability to embrace non-repeatable
situations the potential scope of statistics is enormously increased. We can now
enter into fields that were previously denied to us, without any loss in the
traditional ones, where propensity and exchangeability replace long-run fre-
quencies and randomisation. The future of statistics looks very bright to me
and perhaps the most important thing I have to say to you today is to ask you to
recognise this enormous widening of our subject. For if we do not recognise
this, others will take over. Let us not repeat the split between OR and statistics.
Only statisticians know how to process evidence: only statisticians know how
to make decisions. (The obvious adjective must be added in two places.)
As an illustration of this widening of the range of applications of statistics,
consider the situation in law. In a court of law, one of the problems is, in
probability language, to assess p(G|E), the probability that the defendant is
guilty, G, given the evidence, E. The judge and jury would clearly wish this
assessment to be done using Bayes theorem; assuming, that is, they do not
themselves wish to stand accused of violating the axioms, such as the
sure-thing principle. At the moment it is unrealistic to be able to do this except
in special cases. One such case is forensic medicine, where the evidence is
precisely stated and certain probabilities are obtainable from scientific evalua-
tions outside the court - such as the chance that two hairs, one from the
suspect, one found at the scene of the crime, have come from the same head.
Again notice, as with the Bayesian solution of Pearson's problem, that such
probabilities do not arise naturally in the usual treatment of this problem.
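In odds form the forensic calculation reads: posterior odds of guilt = prior odds times the likelihood ratio of the evidence. The sketch below uses hypothetical numbers chosen only for illustration; nothing here comes from the paper or from any real case.

```python
def posterior_prob_guilt(prior_prob, p_evidence_if_guilty, p_evidence_if_innocent):
    """Bayes theorem in odds form: multiply the prior odds of guilt by the
    likelihood ratio of the evidence, then convert back to a probability."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    likelihood_ratio = p_evidence_if_guilty / p_evidence_if_innocent
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical values: hair evidence nearly certain if guilty, a 1-in-100
# coincidence if innocent, and a prior probability of guilt of 0.10.
print(posterior_prob_guilt(0.10, 0.99, 0.01))   # about 0.92
```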
Utility considerations also enter into legal matters. The jury, in some
situations, is not called upon to pass sentence, that is the prerogative of the
judge. He has a decision problem to solve and will require utility assessments,
either imposed by statute, or by himself, preferably the former. One thing
seems clear: fines should be in utiles. A wealthy man should pay more for a
parking offence than an impecunious student. An interesting example of the


way in which general theorems could influence legal practice is to be found in


the result which says that the expected value of sample information is
non-negative. This goes against the concept of non-admissibility of certain
types of evidence. My personal view is that the reason for some things being
legally inadmissible is that their use as evidence is difficult, not that they are not
evidence. But Bayes theorem could again oblige. (I similarly have little
sympathy with those who argue for privacy of certain types of information, for
example, salaries. The difficulty lies in how we use the information - now
solved in principle - not with the facts per se.)
Another field where statistics could make a significant impact is that of
diagnosis and management in medicine. The problem here is to calculate
p(D|S), the probability that the patient has the disease, D, given the symptoms,
S; and then the use of this probability, combined with utility considerations, to
determine the best management of the patient. Indeed, there is scarcely a field
of human endeavour that cannot be assisted by some statistical considerations.
The future is bright - but can we take advantage of it?
It has been mentioned above that certain ideas, like confidence intervals,
should be allowed to die. In some branches of statistics the interment cannot be
completed until a Bayesian form has been born. An example of such a topic is
multivariate analysis. This is a most peculiar subject in some ways. The
literature on it is vast and yet it contains substantial contradictions and
difficulties that most practitioners in the field ignore. We have only recently
discovered how to estimate the mean of a multivariate normal distribution: we
still do not know how to estimate the dispersion matrix. And yet elaborate
multivariate techniques, and their associated computer packages, have been
developed and extensively used. The need for sound statistical analyses of
many variables is an urgent practical necessity. The problems arise acutely in
the medical diagnosis situation where many signs and symptoms are typically
available. The extensive literature on multidimensional contingency tables
scarcely comes to grips with this problem. Least squares is similarly unsound, at
least in high dimensions, but the replacement there is simpler, because it is
often fairly easy to impose a reasonable probability structure on the parameters
to obtain reasonable posterior judgments.
The mention of multivariate ideas naturally leads us to consider the role of
the computer. The broad line of the development is clear. Bayesian statistics is
within the calculus of probabilities and the only calculations are those implied
by this calculus. The computer is needed for the more complex probability
manipulations, for evaluation of expected utilities and the subsequent maxi-
misation. Multidimensional integration is extensively involved in the elimina-
tion of nuisance parameters. Man thinks, the computer calculates: that is the
basic rule. A Bayesian data package will require thoughtful specification of the


model; thoughtful assessment of the initial distribution (and utility, if decision


is involved) followed by calculation according to the laws of probability. It will
not be as easy to use as today's packages because the user will have to think
whether it is data on hogs or butterflies that he is analysing.
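As one indication of the kind of calculation such a package would perform, here is a minimal sketch (my own, under simple assumptions) of eliminating a nuisance parameter by numerical integration: normal data with unknown mean and unknown standard deviation, the standard deviation integrated out on a grid under an assumed prior proportional to 1/sigma.

```python
import numpy as np

data = np.array([9.8, 10.4, 10.1, 9.6, 10.7])        # hypothetical observations
mu_grid = np.linspace(8.0, 12.0, 401)
sigma_grid = np.linspace(0.1, 3.0, 300)

# Unnormalised joint posterior for (mu, sigma) on the grid, prior proportional to 1/sigma.
M, S = np.meshgrid(mu_grid, sigma_grid, indexing="ij")
log_like = -data.size * np.log(S) - ((data[None, None, :] - M[..., None]) ** 2).sum(-1) / (2 * S**2)
joint = np.exp(log_like - log_like.max()) / S

# Integrate the nuisance parameter sigma out, then normalise the marginal for mu.
marginal_mu = np.trapz(joint, sigma_grid, axis=1)
marginal_mu /= np.trapz(marginal_mu, mu_grid)
print(mu_grid[np.argmax(marginal_mu)])               # posterior mode of mu, near the sample mean 10.12
```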
The future of statistics is bright. We can expand greatly: but where are the
recruits to come from? We need to attract able young people into the field:
people who have the mathematical experience, and exposure to scientific ideas,
to make good statisticians. My hope is that by teaching Bayesian ideas we shall
succeed in this. The formal system will make it easier to teach, and will appeal
to the mathematical mind. The fact that it works will bring in the interested
scientist.

I have spoken of the 21st century. I wish the change could come sooner. How
about a moratorium on research for two years? In the first of these we will all
read de Finetti's first volume: the next year will do for the second. It would do
you, and our subject, a lot of good.

References

DE FINETTI, B. (1964) Foresight: its logical laws, its subjective sources. Studies in Subjective
Probability, ed. Henry E. Kyburg, Jr. and Howard E. Smokler, pp. 93-158, Wiley, New York.
(Translation of La prévision: ses lois logiques, ses sources subjectives, Ann. Inst. H. Poincaré 7
(1937), 1-68.)
DE FINETTI, B. (1974) Theory of Probability: a critical introductory treatment. Volume 1
(Volume 2 to appear) Wiley, New York. (Translation of Teoria delle probabilita, sintesi introduttiva
con appendice critica (1970) Giulio Einaudi, Torino.)
DE GROOT, M.H. (1970) Optimal Statistical Decisions. McGraw-Hill, New York.
FISHER, R.A. (1950) Contributions to Mathematical Statistics. Wiley, New York.
FISHER, R. A. (1956) Statistical Methods and Scientific Inference. Oliver and Boyd, Edinburgh.
JEFFREYS, H. (1967) Theory of Probability. 3rd edition (corrected). Clarendon Press, Oxford.
KENDALL, M. G. AND STUART, A. (1970) The Advanced Theory of Statistics, Volume 2. Griffin,
London.

KOLMOGOROV, A.N. (1950) Foundations of the Theory of Probability. Chelsea, New York
(Translation of Grundbegriffe der Wahrscheinlichkeitsrechnung (1933), Springer, Berlin.)
LINDLEY, D.V. (1971) Bayesian Statistics, a Review. SIAM, Philadelphia.
LINDLEY, D. V. (1975) A Bayesian solution for two-way analysis of variance. Proc.
1972 Meeting of Statisticians, Budapest. (To appear.)
PEARSON, K. (1920) The fundamental problem of practical statistics. Biometrika 13, 1-16.
RAMSEY, F. P. (1964) Truth and Probability. Studies in Subjective Probability, ed. Henry E.
Kyburg, Jr. and Howard E. Smokler, pp. 61-92, Wiley, New York. (Reprinted from The
Foundations of Mathematics and Other Essays (1931), 156-198, Kegan Paul, Trench, Trubner &
Co. Ltd., London.)
SAVAGE, L.J. (1954) The Foundations of Statistics. Wiley, New York.
SAVAGE, L.J. (1971) Elicitation of personal probabilities and expectations. J. Amer. Statist.
Assoc. 66, 783-801.
STEIN, C. M. (1956) Inadmissibility of the usual estimator for the mean of a multivariate normal
distribution. Proc. Third Berkeley Symp. Math. Statist. Prob. 1, 197-206. University of California
Press, Berkeley.
WATTS, D.G. (1968) Conference on the Future of Statistics. Academic Press, New York.

