Why Do Experts Disagree?
Julian Reiss
To cite this article: Julian Reiss (2020) Why Do Experts Disagree?, Critical Review, 32:1-3,
218-241, DOI: 10.1080/08913811.2020.1872948
To link to this article: https://doi.org/10.1080/08913811.2020.1872948
Julian Reiss, julian.reiss@jku.at, Institute for Philosophy and Scientific Method, JKU Linz,
Altenberger Str., Linz, Austria, is the author, inter alia, of Error in Economics: Towards a
More Evidence-Based Methodology (Routledge).

The Corona crisis has made plain that the role of scientific experts in
democracies is among the most significant and urgent problems we
face, and should be a core concern in contemporary political theory
and adjacent fields such as economics, economic methodology, the phil-
osophy of science, and science and technology studies. On the one hand,
democracies face fundamental challenges that call for a close collaboration
between science and politics; the pandemic is the most current and
obvious but by no means unique example. On the other hand, there are
boatloads of reasons to think that expert judgment is not always reliable
and trustworthy. Again, to take the most current and obvious example: experts
recommended against the wearing of face masks early in the crisis only to
reverse their judgments a few months later, without any new evidence
to support this change of mind. Why should we think that they have
got it right this time around? Contributions to the recent debate about
experts in democracies aim to resolve this dilemma in one way or
another.
We can distinguish two camps in this literature. Those we may refer to
as apologists acknowledge (to varying degrees) the limitations of expert
judgment but argue that the role of experts in democracies should never-
theless be strengthened because the alternative is worse. Examples
include Jason Brennan’s Against Democracy (; see also Collins and
Evans and Nichols ). Those opposing the apologists are the
critics. They propose to introduce institutions that aim to keep experts
in check or to motivate them properly so that failure and overreach are
relatively unlikely. Examples of this latter group include William Easterly’s
The Tyranny of Experts (; see also Levy and Peart and Koppl
). Jeffrey Friedman’s Power Without Knowledge (Oxford University Press)
belongs squarely in this camp.
According to Stephen Turner (), there is a “problem with
experts” in a democracy. The problem, in short, is this: heeding expert
advice contravenes one of the most deeply held democratic principles,
viz. the fundamental equality of all citizens. Three solutions suggest them-
selves. Populist approaches address the problem by ignoring expertise and
making political decisions a matter of mob psychology. Technocratic and
epistocratic approaches subject citizens to rule by experts (on the distinc-
tion, see Van Bouwel , ). Deliberative democratic approaches
maintain that the process of democratic deliberation can balance the
two demands more fairly (Habermas ; Kitcher ).
Friedman’s book is highly original in that it challenges the premise
from which all these approaches start: that there is anything special
about expert knowledge to begin with. Focusing on predicting the
likely effects of interventions aiming to solve social problems such as
unemployment, inflation, failing schools, a dysfunctional health care
system, or drug addiction, Friedman argues that there are inherent
limitations to such predictions. Worse than that, given the way social
science is currently practiced, social scientists are unlikely to make reliable
predictions to the extent that this is possible, and there are good reasons to
believe that this won’t change anytime soon.
In what follows I will first discuss Friedman’s skeptical arguments con-
cerning social scientists’ ability to predict the effects of policy interven-
tions. To anticipate my main response, I am somewhat more optimistic
than he is about reliable predictions. I do think that human action is gen-
erally very hard to predict, but there are predictable aspects, and there are
methodological strategies to discover what these aspects are. I will then
outline my own reasons for thinking that social science is an unlikely
source of the knowledge necessary for making epistocracy feasible, and
will come to a conclusion that is, if anything, even more anti-techno-
cratic than Friedman’s.
I should emphasize at the beginning that I am in broad agreement with
many of Friedman’s core claims. Like Friedman, I am critical of any exist-
ing form of technocracy, and I agree that the reasons for rejecting tech-
nocracy are in large part epistemic. I do think exitocracy is an interesting
alternative, and although a society organized according to Rawls’s Differ-
ence Principle isn’t my preferred vision of the good society, I agree that
an exitocracy would require that each citizen dispose of an amount of
resources sufficient to be capable of making use of the freedom to exit,
should he or she want to do so. One difference between us is that I
believe that the epistemic limitations of successful technocracy are of a
more fundamental and less practical nature than Friedman supposes.
Robert Cialdini of Arizona State University, for instance, has condensed the research evidence into
six rules of influence (Cialdini ; Cialdini ): reciprocation,
liking, social proof, authority, scarcity, and consistency. For instance,
people are more likely to buy a good or donate to a cause after having
received a free gift, when the seller is (or pretends to be) “much like
them,” or when the good is (or is claimed to be) “the last available
item” for sale. A principle more directly relevant to policy is that of
social proof, which states that people tend to regard as correct that
which they think others think is correct. A study of India’s Green
Rating Project (GRP), a public disclosure program that collects and dis-
seminates information about firms’ environmental performance, found
that GRP achieved significant reductions in pollution loadings among
dirty plants (Powers et al. ). This example is particularly instructive
because (a) it concerns pollution, which is arguably a social problem tech-
nocrats would like to address, and (b) the study was conducted in a non-
WEIRD (Western, educated, industrialized, rich, and democratic)
context, which shows that the principle applies widely.
I agree with Friedman that ideational heterogeneity creates obstacles
to reliably predicting the consequences of policy interventions. I also
agree that pernicious heterogeneity, i.e., heterogeneity that affects
actual behavior in unpredictable fashion, must be the default assumption.
But instead of treating it as an in-principle obstacle to the predictability of
behavioral responses to policy interventions, we could treat it as a methodological
problem that needs to be addressed, but which may on occasion either
show up only in an attenuated fashion or be solvable.
Friedman himself gives an example of a problem of interpretation in
economic experiments that appears to have been solved satisfactorily:
Linda the feminist bank teller, adduced by Amos Tversky and Daniel
Kahneman in what has become a world-famous experiment. It
appeared to Tversky and Kahneman that their experimental
subjects violated the laws of the probability calculus because they thought
that it was more likely that Linda, who as a college student had been a
philosophy major who was “deeply concerned with issues of discrimi-
nation and social justice, and also participated in anti-nuclear demon-
strations,” would become a feminist bank teller than that she would
become a bank teller without qualification—even though the set of
bank tellers must be larger than the subset of feminist bank tellers.
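The set-theoretic point can be illustrated with a short sketch; the population size and the probabilities below are invented purely for illustration, not taken from the experiment:

```python
import random

random.seed(0)

# Hypothetical population: each person is a pair of traits
# (is_bank_teller, is_feminist), drawn with made-up probabilities.
population = [(random.random() < 0.05, random.random() < 0.3)
              for _ in range(100_000)]

bank_tellers = sum(1 for teller, feminist in population if teller)
feminist_bank_tellers = sum(1 for teller, feminist in population
                            if teller and feminist)

# The conjunction can never be more frequent than either conjunct:
# every feminist bank teller is also, trivially, a bank teller.
assert feminist_bank_tellers <= bank_tellers
```

Whatever the probabilities, the conjunction rule holds by construction, which is why the experimental subjects’ rankings were taken to violate the probability calculus.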
However, when the question was later asked in terms of frequencies
rather than probabilities, the violation largely disappeared (Fiedler ).
more information than he could make sense of. Stereotypes help one to
navigate by screening out information that is inconsistent with one’s
stereotypes. The more one knows about a subject, the more one’s stereo-
type will get reinforced. Children and, more generally, newcomers to a
subject thus tend to be relatively open minded when confronted with
new ideas, as they have not had time to develop and reinforce stereo-
types, while those who have acquired a great deal of knowledge about
it—such as experts—lean on the stereotype to make sense of the knowl-
edge, while leaning on the knowledge to reinforce the stereotype. In
short, they tend to become dogmatic about the stereotype.
I think that there is a great deal of truth to this theory. It does explain
observations such as this one from Max Planck (, ): “A new scien-
tific truth does not triumph by convincing its opponents and making
them see the light, but rather because its opponents eventually die, and
a new generation grows up that is familiar with it” (which has come to
be known as “Planck’s Principle”; see Hull et al. ). Similarly, there
is the fact that many scientific innovations are made by outsiders to the
field (in biology, for instance, see Harman and Dietrich ).
However, the spiral of conviction is at best a tendency. That is, it is a
mechanism that operates in some individuals but not all, and even where
it operates, there are countervailing mechanisms that prevent it from
complete dominance. As Friedman puts it, “as people gain knowledge
of a topic, their opinions about it tend to rigidify,” there exists “a pressure
for dogmatism” or a “propensity to dogmatism” (emphases
added). We can imagine, then, that some researchers might become
aware of this tendency (as they become aware of other cognitive
biases, such as confirmation bias) and take active steps to combat it.
Here is one recipe. Identify the “crowning postures” in your belief
system, to use terminology Friedman () borrows from Philip
E. Converse to denote beliefs that are crucial to your interpretive
schema. Treat them not, in a realist manner, as representations of
reality but rather, in instrumentalist fashion, as tools for prediction. Ident-
ify one or a small set of alternative crowning postures. Derive predictions
from the alternative belief constellations. (By “predictions,” here, I do
not mean statements about the future effects of interventions, but
answers to the question: What observations would we expect to make
if the crowning posture were true?) One chapter of the book uses this method in the
context of discussing alternatives to the spiral-of-conviction model.
Take the score of failed and successful predictions for each alternative as a guide to which crowning posture to rely on.
One might argue that this is not a problem of the individual attitudes of scien-
tists but a systemic requirement: the objectivity of individual scientists might be
seen as a necessary condition for scientific progress. However, Popper
() begs to differ: “It is a mistake to assume that the
objectivity of a science depends on the objectivity of the scientist. And
it is a mistake to believe that the attitude of the natural scientist is more
objective than that of the social scientist. … The objectivity of science is
not a matter of the individual scientists but rather the social result of
their mutual criticisms, of the friendly-hostile division of labour among
scientists, of their co-operation and also of their competition.”…
Science progresses after all, even if one funeral at a time. Thus, I think we have to identify a differ-
ent kind of obstacle to a well-functioning technocracy.
would put every item on the list that is valued by at least one expert. Here
the opposite considerations apply. Progressives might, for instance, deny
that sanctity or tradition are values to begin with.
The second source of disagreement stems from the fact that values are not
merely plural, but stand in conflict with each other. When they do, choices
have to be made about which of two or more conflicting values are more
important in the situation at hand. This is a point Friedman takes up, but
he discusses it in the context of expert knowledge, not value judgments.
Thus, he argues that experts often lack “knowledge of which social pro-
blems are not only real but significant, in the sense that they affect large
numbers of people — or small numbers intensely” (). But whether or
not a social problem is significant is a value judgment, not something
that can be decided on the basis of evidence alone. Is widespread drug abuse
or inflation the more pressing social problem (or a problem at all)? That will
depend, among other things, on whether freedom or well-being is regarded
as the more important value. Even in the counterfactual situation in which
all experts agree on what is on the list, there will be disagreements about
which of two or more conflicting values has priority, and therefore which
social problems are the most significant.
The third source of disagreement concerns the interpretation and appli-
cation of any given value to a given situation. “Freedom,” “an adequate
standard of living,” “equality,” and so on all mean different things to differ-
ent people. Can I be truly free if I don’t have resources? Do we enjoy an
adequate standard of living if others have vastly more? Can society be truly
equal if it grants mere formal equality in the sense of equal rights and
ignores outcomes? Even the more technical-sounding aims, such as “low
inflation,” are open to multiple interpretations that are influenced by
further value judgments. Is a 2 percent inflation target adequate or not?
Arguably, no one’s well-being is greatly affected by 2 percent inflation.
However, over time resources are redistributed from creditors to debtors,
which at least some would regard as unfair. Thus, whether 2 percent
counts as “low” inflation depends on value judgments.
In 1953, Milton Friedman (no relation to the author of Power Without
Knowledge) famously argued that differences in opinion about policy stem
primarily from differences in opinion about which of a number of pol-
icies is most likely to get us closer to an agreed-upon goal.
Fact/Value Entanglement
The second reason why experts disagree is that their disagreement about
values feeds through to their opinions about what the facts are and what
predictions follow from them.
Value judgments affect scientific results in a number of ways, and
attempts to eliminate the influence of values will ultimately be frustrated,
to the detriment of the quality of scientific output. In previous work I
have argued this at length in the context of positive economics (Reiss
) and evidence-based policy (Khosrowi and Reiss ), so let me
illustrate it by returning to Milton Friedman’s example, that of
minimum-wage legislation. Are differences in predictions about the
effects of the minimum wage value free, such that these differences can
“in principle be eliminated by the progress of positive economics”?
It would not appear to be so, at least if we grant that positive economics
has made progress in the nearly 70 years since Friedman wrote these
words. There is still disagreement among economists about the effects
of increases in minimum wages on the employment rate, despite vastly
better data, vastly improved computing power, and vastly more studies
having been conducted since 1953.
Ironically, it was also in 1953 that the philosopher of social science
Richard Rudner published a paper attacking the neat separability of
“positive” claims and “normative” claims about the desirability of out-
comes (Rudner ). He argued, very generally, that hypothesis tests
are always subject to uncertainty: no matter how long and hard scientists
try, their judgments concerning the truth or falsehood of a hypothesis
may be wrong. But since there are two types of error (accepting a false
hypothesis vs. rejecting a true hypothesis), and there is a tradeoff
between the two, scientists must consider, when deciding how to
resolve the tradeoff, which of the two errors has graver consequences.
This cannot be done without value judgments. Applied to the effect of
increases in the minimum wage on the employment rate, an economist
might wrongly accept the null hypothesis of no effect or might
wrongly reject it, judging that there is a (negative) effect. In the first
case (assuming that the scientific judgment would be implemented as a
technocratic policy), the error would throw some people out of their
jobs, decreasing employment. In the second case, low-wage employees
would miss wage increases they could have had, had the policy been
implemented. Which error is worse? This depends on whether one
regards unemployment or forgone wage rises as the more significant
social ill.
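Rudner’s tradeoff can be made concrete with a toy simulation; this is not Rudner’s own formalism, and the effect size, sample size, and thresholds below are invented for illustration. Tightening the evidential standard lowers the chance of wrongly finding a (negative) employment effect, at the price of more often missing a real one:

```python
import random
import statistics

random.seed(1)

def reject_null(sample, z_threshold):
    """Reject 'no effect' when the sample mean lies more than z_threshold
    standard errors below zero (one-sided test for a negative effect)."""
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return statistics.mean(sample) / se < -z_threshold

def error_rates(true_effect, z_threshold, n=50, trials=2000):
    type1 = type2 = 0
    for _ in range(trials):
        null_sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        effect_sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        if reject_null(null_sample, z_threshold):
            type1 += 1   # false alarm: an effect "found" where none exists
        if not reject_null(effect_sample, z_threshold):
            type2 += 1   # miss: a real negative effect overlooked
    return type1 / trials, type2 / trials

# A stricter threshold trades Type I error for Type II error:
lenient = error_rates(true_effect=-0.3, z_threshold=1.28)  # ~10% one-sided
strict = error_rates(true_effect=-0.3, z_threshold=2.33)   # ~1% one-sided
assert strict[0] < lenient[0] and strict[1] > lenient[1]
```

Which point on this tradeoff to pick cannot be read off the data; it depends on how bad one judges each kind of mistake to be, which is exactly Rudner’s point.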
Apart from obvious reasoning flaws such as confirmation bias, one
reason why economists disagree about the effects of raising minimum
wages is thus that they disagree about whether unemployment or
wage stagnation is a worse possible consequence, which in turn will be
affected by differences about more fundamental values such as freedom,
equality, and well-being. This is why advances in positive economics
—better data, better methods, more computing power, more events
that have happened and are therefore amenable to scientific analysis—
will not eliminate disagreements about the effectiveness of policies.
There is another source of fact/value entanglement that is relevant
here: we cannot measure social phenomena without making value judg-
ments. Many choices required in the construction of a socio-economic
indicator such as the consumer-price index (CPI) cannot be justified
on the basis of “the facts” alone. A price index is fully accurate only
when nothing changes in the economy except the prices of goods.
However, many things do change. People substitute away from goods
the price of which has increased relatively; their tastes change; the quality
of goods changes; new goods appear and old goods are discontinued; new
distribution outlets such as discounters and online platforms are intro-
duced; the environment changes; and so on (Reiss ). None of
these changes can be put into the CPI or left out of it without making
value judgments. Consider quality changes. When a product the price
of which was included when CPI was last measured has disappeared,
but a similar product has become available, a statistician must make a
judgment about whether the two goods are comparable. If they are not
comparable, one of a number of adjustment methods can be used. But
comparable according to what standard? The CPI aims to measure the
cost of living. If the price of a good goes up but its quality improves at
the same time, has the cost of living gone up? This question is hard to
decide, especially if the former version of the good becomes unavailable
so that consumers are forced to buy the new version (or something else
altogether). Ultimately, at any rate, a judgment has to be made whether
(and to what extent) a consumer benefits from the alteration. And that is
certainly a value judgment.
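A minimal sketch of the point, with invented goods, prices, and an invented quality judgment (this is a bare Laspeyres index, far simpler than actual CPI methodology), shows how the adjustment decision moves the measured index:

```python
# Base-period consumption basket and prices (all numbers hypothetical).
base_quantities = {"bread": 100, "phone": 10}
base_prices = {"bread": 2.0, "phone": 300.0}
new_prices = {"bread": 2.2, "phone": 360.0}

def laspeyres(p_new, p_base, q_base):
    """Ratio of the basket's cost at new prices to its cost at base prices."""
    cost_new = sum(p_new[g] * q_base[g] for g in q_base)
    cost_base = sum(p_base[g] * q_base[g] for g in q_base)
    return cost_new / cost_base

unadjusted = laspeyres(new_prices, base_prices, base_quantities)

# Suppose the statistician judges the new phone model to be 15% "better"
# and deflates its observed price accordingly -- a value judgment about
# how much the consumer benefits, not a fact read off the data.
adjusted_prices = dict(new_prices, phone=360.0 / 1.15)
adjusted = laspeyres(adjusted_prices, base_prices, base_quantities)

# The quality judgment lowers measured inflation.
assert adjusted < unadjusted
```

A different (equally defensible) judgment about the quality improvement yields a different measured cost of living, which is why the judgment cannot be eliminated from the statistic.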
This is relevant to the discussion, in Power Without Knowledge, of Lipp-
mann’s proposal to eschew problems of interpretation by collecting and
publishing objective statistics that the public and government decision-
makers could use directly. In addition to the problems that
(Jeffrey) Friedman mentions with using statistics in this manner (the
fact that statistics treat unlike as like, the inability of statistics to demon-
strate causality, the unrepresentativeness of the sample from which a
given statistic is drawn), the problem with using statistics is that values
are not eschewed but at best buried under a lather of procedures and
instructions that either implement earlier value judgments or call for
new ones in their application (as the example of quality changes was
meant to show).
Fragile Facts
Moving now away from values, a third source of disagreement among
experts about the effectiveness of policies is related to the sensitivity of
answers to the precise formulation of policy questions. There will
rarely be a unique answer to a question as general as “Do increases in
minimum wages lower employment opportunities?” Note that I am
talking here about the facts themselves, not our inferences about them.
It is the facts that are fragile, not (merely) our knowledge of them. There
may be one fact about what’s true in the short run, a different one about
what’s true in the middle or long run, and so on. I call them fragile
because what is true in one fully specified context often does not tell
us very much about what is true in an only slightly altered context.
Correct answers to policy questions will often depend on the following
dimensions: horizon, the time and place of application, contrast, and
the choice of outcome variable.
Horizon. The effects of policies unfold over time, and short-run and
long-run effects may differ markedly. That is, policies can have a desirable
effect in the short run but an adverse effect in the long run and vice versa.
It has been argued, for example, that even if it were true that minimum-
wage increases have no negative effects on the employment rate (Card
and Krueger ), this observation holds true only of the short run.
In the long run, relatively labor-intensive businesses will shut down
and get replaced by relatively capital-intensive alternatives, thus decreas-
ing employment opportunities in a manner that may be invisible when
looking for employment effects by means of natural experimentation
(Sorkin ). This variability of effectiveness with respect to the exam-
ined horizon is widespread in economic policy. For example, Keynesian
countercyclical policies may well help in the short run but damage econ-
omies in the long run (Sinn ); “nudges” may help individuals achieve
their goals in the short run but undermine their decision-making capacity in
the long run (Bovens ).
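The horizon-dependence just described can be made concrete with a toy model; the dynamics are invented for illustration and stand for no particular policy. A policy raises the outcome immediately but erodes it thereafter, so the measured “effect” flips sign with the evaluation horizon:

```python
def outcome(t, policy):
    """Outcome at time t (invented dynamics): the policy gives an
    immediate boost of 5 that erodes by 0.5 per period."""
    base = 100.0
    if not policy:
        return base
    return base + 5.0 - 0.5 * t

short_run = outcome(2, True) - outcome(2, False)    # effect after 2 periods
long_run = outcome(20, True) - outcome(20, False)   # effect after 20 periods

# Same policy, same world: positive short-run effect, negative long-run effect.
assert short_run > 0 > long_run
```

Two experts reporting honestly on the same intervention can thus announce effects of opposite sign simply by evaluating at different horizons.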
When experts disagree about a question such as whether increases in
minimum wages lower employment opportunities, then, they may
simply be interpreting the question differently. Or they may know that
the effects differ over time but attribute a different degree of significance
to the short or the long run.
Time and Place of Application. Causal relations in the social world
depend on complex arrangements of individuals and the social structures
within which they operate. These arrangements differ from place to
place, and in a given place over time. The effects of minimum-wage
increases will differ, for instance, depending on the starting point (i.e.,
whether the existing minimum wages are already quite high or rather
low), on employment laws and business regulation, on social norms,
industry practices, and a host of other factors. What is true in the
United States may not be true in Mexico, and what was true in the
United States in the past may no longer be true now. Thus, when two
* * *
REFERENCES
Alexandrova, Anna. . A Philosophy for the Science of Well-Being. Oxford: Oxford
University Press.
Angrist, Joshua, and Jörn-Steffen Pischke. . “The Credibility Revolution in
Empirical Economics: How Better Research Design Is Taking the Con Out
of Econometrics.” Journal of Economic Perspectives (): -.
Bareinboim, Elias, and Judea Pearl. . “A General Algorithm for Deciding
Transportability of Experimental Results.” Journal of Causal Inference ():
-.
Berlin, Isaiah. . Two Concepts of Liberty. Oxford: Clarendon Press.
Bovens, Luc. . “The Ethics of Nudge.” In Preference Change: Approaches from
Philosophy, Economics, and Psychology, ed. Till Grüne-Yanoff and Sven Ove
Hansson. Dordrecht: Springer.
Brennan, Jason. . Against Democracy. Princeton: Princeton University Press.
Card, David, and Alan Krueger. . Myth and Measurement: The New Economics of the
Minimum Wage. Princeton: Princeton University Press.
Cartwright, Nancy, and Jeremy Hardie. . Evidence-Based Policy: A Practical Guide
to Doing It Better. Oxford: Oxford University Press.
Cartwright, Nancy, and Eileen Munro. . “The Limitations of Randomized
Controlled Trials in Predicting Effectiveness.” Journal of Evaluation in Clinical
Practice (): -.
Cialdini, Robert. . Influence: Science and Practice, th ed. Boston: Allyn & Bacon.
Cialdini, Robert. . Pre-suasion. New York: Simon & Schuster.
Collins, Harry, and Robert Evans. . Why Democracies Need Science. Cambridge:
Polity.
Easterly, William. . The Tyranny of Experts: Economists, Dictators, and the Forgotten
Rights of the Poor. New York: Basic Books.
Fiedler, Klaus. . “The Dependence of the Conjunction Fallacy on Subtle
Linguistic Factors.” Psychological Research : -.
Freedman, David. . “Statistical Models and Shoe Leather.” Statistical Methodology
: -.
Friedman, Jeffrey. . Power Without Knowledge: A Critique of Technocracy.
New York: Oxford University Press.
Friedman, Milton. . “The Methodology of Positive Economics.” In idem, Essays
in Positive Economics. Chicago: University of Chicago Press.
Goldman, Alvin. . “Experts: Which Ones Should You Trust?” Philosophy and
Phenomenological Research (): -.
Guala, Francesco. . “Extrapolation, Analogy, and Comparative Process
Tracing.” Philosophy of Science (): -.
Habermas, Jürgen. . Toward a Rational Society: Student Protest, Science, and Politics.
Boston: Beacon Press.
Harman, Oren, and Michael Dietrich. . Outsider Scientists: Routes to Innovation in
Biology. Chicago: University of Chicago Press.
Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, Herbert
Gintis, and Richard McElreath. . “In Search of Homo Economicus:
Behavioral Experiments in Small-Scale Societies.” American Economic
Review (): -.
Holland, Paul. . “Statistics and Causal Inference.” Journal of the American Statistical
Association (): -.
Hull, David, Peter Tessner, and Arthur Diamond. . “Planck’s Principle.” Science
(): -.
Just, David, and Brian Roe. . “Internal and External Validity in Economics Research:
Tradeoffs Between Experiments, Field Experiments, Natural Experiments, and
Field Data.” American Journal of Agricultural Economics (): -.
Khosrowi, Donal, and Julian Reiss. . “Evidence-Based Policy: The Tension
Between the Epistemic and the Normative.” Critical Review (): -.
Kirchgässner, Gebhard. . “Empirical Economic Research and Economic Policy
Advice: Some Remarks.” In Economic Policy Issues for the Next Decade, ed. Karl
Aiginger and Gernot Hutschenreiter. New York: Springer.
Kitcher, Philip. . Science in a Democratic Society. Amherst, N.Y.: Prometheus.
Koppl, Roger. . Expert Failure. Cambridge: Cambridge University Press.
Layard, Richard. . Happiness: Lessons from a New Science. New York: Penguin.
Levy, David, and Sandra Peart. . Escape from Democracy: The Role of Experts and the
Public in Economic Policy. Cambridge: Cambridge University Press.
Lippmann, Walter. [] . Public Opinion. New York: Free Press.
Neurath, Otto. [] . “Unified Science and Psychology.” In Unified Science, ed.
Brian McGuinness. Dordrecht: Reidel.
Nichols, Thomas. . The Death of Expertise: The Campaign Against Knowledge and
Why It Matters. Oxford: Oxford University Press.
Nussbaum, Martha. . Women and Human Development: The Capabilities Approach.
Cambridge: Cambridge University Press.
Planck, Max. . Scientific Autobiography and Other Papers. New York: Philosophical
Library.
Popper, Karl. . “On the Logic of the Social Sciences.” In Theodor Adorno,
et al., The Positivist Dispute in German Sociology, trans. Glyn Adey and David
Frisby. London: Heinemann.
Powers, Nicholas, Allen Blackman, Thomas Lyon, and Urvashi Narain. . “Does
Disclosure Reduce Pollution? Evidence from India’s Green Rating Project.”
Environmental and Resource Economics : -.
Reiss, Julian. . Error in Economics: Towards a More Evidence-Based Methodology.
London: Routledge.
Reiss, Julian. . Philosophy of Economics: A Contemporary Introduction. New York:
Routledge.
Reiss, Julian. . “Fact-Value Entanglement in Positive Economics.” Journal of
Economic Methodology (): -.
Reiss, Julian. . “Expertise, Agreement, and the Nature of Social Scientific Facts
or: Against Epistocracy.” Social Epistemology (): -.
Reiss, Julian. Unpublished. “The Perennial Methodenstreit: Observation, First
Principles, and the Ricardian Vice.” Department of Philosophy, Durham
University.
Rudner, Richard. . “The Scientist Qua Scientist Makes Value Judgments.”
Philosophy of Science (): -.
Sen, Amartya. . Development as Freedom. Oxford: Oxford University Press.
Sinn, Hans-Werner. . The Euro Trap: On Bursting Bubbles, Budgets, and Beliefs.
Oxford: Oxford University Press.
Solomon, Miriam. . “The Social Epistemology of NIH Consensus
Conferences.” In Establishing Medical Reality, ed. Harold Kincaid and
Jennifer McKitrick. New York: Springer.
Sorkin, Isaac. . “Are There Long-Run Effects of the Minimum Wage?” Review
of Economic Dynamics (): -.
Steel, Daniel. . Across the Boundaries: Extrapolation in Biology and Social
Science. Oxford: Oxford University Press.
Turner, Stephen. . “What Is the Problem with Experts?” Social Studies of Science
(): -.
Tversky, Amos, and Daniel Kahneman. . “Extensional versus Intuitive
Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological
Review (): –.
Van Bouwel, Jeroen, ed. . The Social Sciences and Democracy. Houndmills:
Palgrave Macmillan.
van Eersel, Gerdien, Gabriela Koppenol-Gonzales, and Julian Reiss. .
“Extrapolation of Experimental Results through Analogical Reasoning from
Latent Classes.” Philosophy of Science (): -.
Wallis, W. Allen, and Milton Friedman. . “The Empirical Derivation of
Indifference Functions.” Studies in Mathematical Economics and Econometrics in
Memory of Henry Schultz, ed. O. Lange, F. McIntyre and T. O. Yntema.
Chicago: University of Chicago Press.