
Critical Review

A Journal of Politics and Society


Why Do Experts Disagree?

Julian Reiss

To cite this article: Julian Reiss (2020) Why Do Experts Disagree?, Critical Review, 32:1-3,
218-241, DOI: 10.1080/08913811.2020.1872948
To link to this article: https://doi.org/10.1080/08913811.2020.1872948

Julian Reiss

WHY DO EXPERTS DISAGREE?

ABSTRACT: Jeffrey Friedman’s Power Without Knowledge argues forcefully
that there are inherent limitations to the predictability of human action, due to a
circumstance he calls “ideational heterogeneity.” However, our resources for pre-
dicting human action somewhat reliably in the light of ideational heterogeneity
have not been exhausted yet, and there are no in-principle barriers to progress in
tackling the problem. There are, however, other strong reasons to think that dis-
agreement among epistocrats is bound to persist, such that it will be difficult to
decide who has “the right answer” to a given technocratic problem. These
reasons have to do with competing visions of the good society, fact/value entangle-
ment, and the fragility of the facts of the social sciences.
Keywords: epistocracy; expert disagreement; technocracy; value pluralism; spiral of conviction.

Julian Reiss, julian.reiss@jku.at, Institute for Philosophy and Scientific Method, JKU Linz, Altenberger Str., Linz, Austria, is the author, inter alia, of Error in Economics: The Methodology of Evidence-Based Economics (Routledge, 2008).

Critical Review 32(1-3): 218-241. © 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way. https://doi.org/10.1080/08913811.2020.1872948

The Corona crisis has made plain that the role of scientific experts in
democracies is among the most significant and urgent problems we
face, and should be a core concern in contemporary political theory
and adjacent fields such as economics, economic methodology, the phil-
osophy of science, and science and technology studies. On the one hand,
democracies face fundamental challenges that call for a close collaboration
between science and politics; the pandemic is the most current and
obvious but by no means unique example. On the other hand, there are
boatloads of reasons to think that expert judgment is not always reliable
and trustworthy. To take, again, the most current and obvious example: experts rec-
ommended against the wearing of face masks early in the crisis only to
reverse their judgments a few months later, without any new evidence
to support this change of mind. Why should we think that they have
got it right this time around? Contributions to the recent debate about
experts in democracies aim to resolve this dilemma in one way or
another.
We can distinguish two camps in this literature. Those we may refer to
as apologists acknowledge (to varying degrees) the limitations of expert
judgment but argue that the role of experts in democracies should never-
theless be strengthened because the alternative is worse. Examples
include Jason Brennan’s Against Democracy (; see also Collins and
Evans  and Nichols ). Those opposing the apologists are the
critics. They propose to introduce institutions that aim to keep experts
in check or to motivate them in such a way that failure and overreach are rela-
tively unlikely. Examples of this latter group include William Easterly’s
The Tyranny of Experts (; see also Levy and Peart  and Koppl
). Power Without Knowledge (Oxford University Press, )
belongs squarely in this camp.
According to Stephen Turner (), there is a “problem with
experts” in a democracy. The problem, in short, is this: heeding expert
advice contravenes one of the most deeply held democratic principles,
viz. the fundamental equality of all citizens. Three solutions suggest them-
selves. Populist approaches address the problem by ignoring expertise and
making political decisions a matter of mob psychology. Technocratic and
epistocratic approaches subject citizens to rule by experts (on the distinc-
tion, see Van Bouwel , ). Deliberative democratic approaches
maintain that the process of democratic deliberation can balance the
two demands more fairly (Habermas ; Kitcher ).
Friedman’s book is highly original in that it challenges the premise
from which all these approaches start: that there is anything special
about expert knowledge to begin with. Focusing on predicting the
likely effects of interventions aiming to solve social problems such as
unemployment, inflation, failing schools, a dysfunctional health care
system, or drug addiction (), Friedman argues that there are inherent
limitations to such predictions. Worse than that, given the way social
science is currently practiced, social scientists are unlikely to make reliable
predictions to the extent that this is possible, and there are good reasons to
believe that this won’t change anytime soon.
In what follows I will first discuss Friedman’s skeptical arguments con-
cerning social scientists’ ability to predict the effects of policy interven-
tions. To anticipate my main response, I am somewhat more optimistic
than he is about reliable predictions. I do think that human action is gen-
erally very hard to predict, but there are predictable aspects, and there are
methodological strategies to discover what these aspects are. I will then
outline my own reasons for thinking that social science is an unlikely
source of the knowledge necessary for making epistocracy feasible, and
will come to a conclusion that is, if anything, even more anti-techno-
cratic than Friedman’s.
I should emphasize at the beginning that I am in broad agreement with
many of Friedman’s core claims. Like Friedman, I am critical of any exist-
ing form of technocracy, and I agree that the reasons for rejecting tech-
nocracy are in large part epistemic. I do think exitocracy is an interesting
alternative, and although a society organized according to Rawls’s Differ-
ence Principle isn’t my preferred vision of the good society, I agree that
an exitocracy would require that each citizen dispose of an amount of
resources sufficient to be capable of making use of the freedom to exit,
should he or she want to do so. One difference between us is that I
believe that the epistemic limitations of successful technocracy are of a
more fundamental and less practical nature than Friedman supposes.

Technocracy, Epistocracy, and the Problem of Prediction


Friedman uses a broad definition of technocracy, according to which
ordinary citizens who attempt to solve social problems politically count
as “citizen-technocrats,” alongside the “epistocrats”—expert social scien-
tists—who are usually defined as the only technocrats there are. Fried-
man’s rationale for this definition begins with the question of what it is
that distinguishes “technocrats,” in ordinary usage, from everybody
else. The answer is that technocrats—in Friedman’s usage, epistocrats
—claim to know how best to design public policies that will solve the
social and economic problems of the populace. Friedman wants to scru-
tinize this knowledge claim, but that means that he cannot take for
granted, as the ordinary definition does, that “technocrats” (epistocrats)
actually possess problem-solving knowledge that is superior to that pos-
sessed by ordinary citizens. That is, epistocrats cannot be treated as true
experts rather than putative experts if we are to investigate whether their
claimed expertise is real. We need to investigate the possibility that it is
not because, after all, if epistocrats were no more epistemically qualified
than average citizens to shape public policy, there would be no reason not
to decide policies by more direct democratic means. If epistocrats know
no better than ordinary citizens, why should they be given more power
than ordinary citizens have?
On the other hand, though, Friedman interprets U.S. public-opinion
research to show that ordinary citizens often seem to think that they, too,
know how to solve social and economic problems. If we no longer
simply assume that, on the whole, such “non-experts’” knowledge
claims must be disregarded, then ordinary citizens’ knowledge claims
about how to solve social and economic problems must, prior to the
investigation, be put into the same category as “experts’” knowledge
claims: both groups claim to know how to solve social and economic
problems, and it would beg the question to assert, by definitional fiat,
that one group is right and the other wrong. Thus, both groups must
be defined as technocratic, in that both groups consist of fallible
human beings who think they know how to solve the problems a tech-
nocracy tries to solve (-).
If this seems an overly broad definition, it nonetheless allows Friedman to
specify rather precisely just what type of knowledge any technocrat
needs. Insofar as the technocratic agenda is to solve (or alleviate, or
prevent) social and economic problems, a technocracy, to be viable
and legitimate, needs to predict reliably the effects of policy interventions
on the social and economic problems toward which they are directed.
This, in turn, amounts to predicting how targeted groups of social
actors will react to the incentives created by the interventions. Therefore,
Friedman’s scrutiny of technocratic knowledge claims begins with a
general inquiry into the determinants of human behavior, which he
takes to be ideational (Part I). People’s beliefs determine their actions,
and their beliefs are the products of streams of incoming information
and interpretation that will, to some extent, vary from person to
person (Chapter ). The task of the “judicious,” ideationally sensitive
technocrat, then, will be to discover homogeneities across people’s
webs of belief that enable reliable predictions to be made about the be-
havior that is likely to occur in response to a technocratic intervention.
This task, Friedman argues in Part II, is not even attempted by most
“expert” social scientists nowadays, who instead assume away the possibly
confounding role of heterogeneous beliefs (Chapter ), and who tend to
be caught up in self-confirmatory “spirals of conviction” that persuade
them of the accuracy of their non-ideational theories by screening out
or misinterpreting conflicting evidence (Chapter ). Citizen-technocrats,
however, fare no better: while they tend to be more open minded than
epistocrats, due to their lack of self-confirming information, they tend to
assume that policy intentions translate unproblematically into policy
results, effacing the complexities of human behavior that might be
caused by factors such as ideational heterogeneity (Chapter ). In the
concluding Chapter , Friedman outlines the limited promise of a
regime that minimizes technocratic prediction in favor of affording indi-
viduals the “exit option,” which enables them to draw on their personal,
experiential knowledge in navigating modern society rather than requir-
ing them to predict the behavior of faceless masses of other people—as
they must do when they assume the role of technocrat.

Are There Inherent Limits to Predictability?


I am skeptical about highly generic arguments concerning the inherent
limitations to the predictability of human action (Reiss , ff.).
Perhaps my thinking has been influenced by Otto Neurath ([]
, ), who wrote that

In some cases a physicist is a worse prophet than a behaviouristician, as
when he is supposed to specify where in St. Stephen’s Square a thousand
dollar bill swept away by the wind will land, whereas a behaviouristician
can specify the result of a conditioning experiment rather accurately. If
someone complains about the difficulties of making behaviouristic exper-
iments, let him seek consolation in geology and astronomy where exper-
iments are virtually unknown, except for ones involving small-scale
models under radically different conditions.

Neurath’s point was: when assessing the predictability of human action in
the wild, we shouldn’t take laboratory physics as the standard. Uncon-
trolled physics phenomena are often hard to predict, whereas there are
many cases of human action that are predictable under the right
circumstances.
Friedman’s argument to the effect that technocrats cannot predict the
effects of policy interventions to the degree required by technocracy is
built on a set of theses Friedman dubs “Lippmannesque” to honor
their pedigree in Walter Lippmann’s Public Opinion (Lippmann [1922]
). It is worth quoting them in full (Friedman , -, ):

Thesis  (interpretive determinism). At least insofar as an agent is acting delib-


erately, her interpretation of which action is advisable under her perceived
circumstances will determine which action she takes.

Thesis  (ideational determinism). An agent’s interpretation of which action is


advisable under her perceived circumstances will be determined by the
web of those of her ideas that seem (to her) relevant to (a) the circum-
stances themselves, (b) the purpose of actions that (for her) count as nor-
matively advisable in those circumstances, and (c) the effects … that seem
(to her) likely to be produced by such actions in those circumstances. Also
playing a part will be the implicit assumptions and other tacit ideas that
stand behind ideas about (a), (b), and (c).

Thesis  (ideational heterogeneity). The ideas, and thus the interpretations,


that determine agents’ deliberate actions, as well as the ideas of the techno-
crats attempting to predict and control agents’ actions, vary from person to
person to some extent, making each person’s actions somewhat unpredict-
able to the others.

Friedman is right, of course, to assume that different interpretations
often lead to different actions. A good example is Joseph Henrich
et al.’s study of ultimatum games in 15 small-scale societies. One of
their conclusions is that “there is considerably more behavioral variability
across groups than had been found in previous cross-cultural research,
and the canonical model fails in a wider variety of ways than in previous
experiments” (Henrich et al. , ). Yet there appear to be aspects of
human action that are reliably predictable. To use a drab example, I rely
every day on the prediction that the market vendor won’t poison me.
Friedman appeals to such examples to argue that ideational homogene-
ities (such as the norm that one should not poison people), stemming
from cultural sources, may counteract the “presumption of ideational
heterogeneity” and should therefore be a prime object of judicious tech-
nocratic inquiry (, ).
Let me give some examples from an area of research that focuses on
behavioral modification: marketing, or, more broadly, the science of per-
suasion. Marketing researchers and psychologists now understand pretty
well how to affect people’s behavior (on average). Robert Cialdini,
Regents’ Professor Emeritus of Psychology and Marketing at Arizona
State University, for instance, has condensed the research evidence into
six rules of influence (Cialdini ; Cialdini ): reciprocation,
liking, social proof, authority, scarcity, and consistency. For instance,
people are more likely to buy a good or donate to a cause after having
received a free gift, when the seller is (or pretends to be) “much like
them,” or when the good is (or is claimed to be) “the last available
item” for sale. A principle more directly relevant to policy is that of
social proof, which states that people tend to regard as correct that
which they think others think is correct. A study of India’s Green
Rating Project (GRP), a public disclosure program that collects and dis-
seminates information about firms’ environmental performance, found
that GRP achieved significant reductions in pollution loadings among
dirty plants (Powers et al. ). This example is particularly instructive
because (a) it concerns pollution, which is arguably a social problem tech-
nocrats would like to address, and (b) the study was conducted in a non-
WEIRD (Western, educated, industrialized, rich, and democratic)
context, which shows that the principle applies widely ().
I agree with Friedman that ideational heterogeneity creates obstacles
to reliably predicting the consequences of policy interventions. I also
agree that pernicious heterogeneity, i.e., heterogeneity that affects
actual behavior in unpredictable fashion, must be the default assumption.
But rather than being treated as an in-principle obstacle to the predictability of behavioral responses to policy interventions, it could be treated as a methodological
problem that needs to be addressed, but which may on occasion either
show up only in an attenuated fashion or be solvable.
Friedman himself gives an example of a problem of interpretation in
economic experiments that appears to have been solved satisfactorily:
Linda the feminist bank teller, adduced by Amos Tversky and Daniel Kahneman in what has become a world-famous experiment. It appeared to Tversky and Kahneman (1983) that their experimental
subjects violated the laws of the probability calculus because they thought
that it was more likely that Linda, who as a college student had been a
philosophy major who was “deeply concerned with issues of discrimi-
nation and social justice, and also participated in anti-nuclear demon-
strations,” would become a feminist bank teller than that she would
become a bank teller without qualification—even though the set of
bank tellers must be larger than the subset of feminist bank tellers.
However, when the question was later asked in terms of frequencies
rather than probabilities, the violation largely disappeared (Fiedler
). Friedman accepts that experiments that frame the question in
terms of frequencies are less subject to the interpretive missteps he attri-
butes to Tversky and Kahneman, such that we can say that such exper-
iments represent progress toward accurate predictive knowledge.
Similarly, policy makers could try to frame policies in such a way that
the desired behavioral modifications are likely to obtain (results from
the science of persuasion will help them achieve that), and such that
interpretive problems are unlikely to interfere. Thus, while there is cer-
tainly a methodological issue to be addressed, I am not yet convinced that
ideational heterogeneity poses an overriding obstacle to successful policy.
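
To make the conjunction rule at issue in the Linda case concrete, here is a minimal sketch (my illustration, not the article’s; the population and its frequencies are invented) of why a frequency framing makes the subset relation, and hence the rule, hard to miss:

```python
# Toy illustration: P(bank teller AND feminist) <= P(bank teller),
# because the people who satisfy both descriptions are a subset of
# those who satisfy the first. Asking "out of N people, how many
# are X?" puts the question in the frequency format that Fiedler's
# subjects handled far better. All frequencies below are made up.
import random

random.seed(0)
population = [
    {"bank_teller": random.random() < 0.05,
     "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

print("bank tellers:         ", tellers)
print("feminist bank tellers:", feminist_tellers)
assert feminist_tellers <= tellers  # the conjunction rule, in frequency form
```

However the simulated frequencies are chosen, the final assertion holds, which is the sense in which the frequency format wears the probability law on its sleeve.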
Let me explain what I mean by the “methodological issue to be
addressed” in more detail. Section . of Power Without Knowledge criti-
cizes a recent movement in empirical economics, which is commonly
known as “design-based econometrics” (e.g., Reiss , ch. ), and
which Friedman includes under the heading of “positivist social
science.” Design-based econometrics starts from the observation that
social scientists don’t have theories that are both strong enough to deter-
mine crucial aspects of the empirical specification such as covariates,
functional form, and distribution of the error terms; and empirically sup-
ported enough to be regarded as credible by a majority of researchers
(e.g., Freedman ). The randomized experiment is the methodologi-
cal “gold standard” for overcoming these problems of causal inference,
and if randomized experimentation is impossible, according to this type
of positivist, designs should be sought that mimic randomized exper-
iments as closely as possible. Instrumental variables, difference-in-differ-
ences, regression discontinuity, and similar techniques work by
exploiting situations that occur naturally (in that the central variation
to be analyzed is found rather than created by the analyst) but they
resemble experiments nonetheless (Angrist and Pischke ).
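
For concreteness, here is a minimal difference-in-differences sketch in the spirit of the design-based school (my illustration on synthetic data; the effect size, trends, and variable names are all invented):

```python
# A minimal difference-in-differences sketch on synthetic data. The
# "design" mimics an experiment: a policy hits the treated group
# between two periods, and the control group's change absorbs the
# common time trend.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
true_effect = -2.0                  # assumed for the simulation

treated = rng.integers(0, 2, n)     # e.g., unit is in an adopting state
post = rng.integers(0, 2, n)        # observation falls after the policy
trend = 1.5 * post                  # common shock in the second period
group_gap = 3.0 * treated           # fixed difference between groups
y = 10 + group_gap + trend + true_effect * treated * post + rng.normal(0, 1, n)

def mean(mask):
    return y[mask].mean()

did = (mean((treated == 1) & (post == 1)) - mean((treated == 1) & (post == 0))) \
    - (mean((treated == 0) & (post == 1)) - mean((treated == 0) & (post == 0)))

print(f"difference-in-differences estimate: {did:.2f}")  # close to -2.0
```

Note that the sketch builds in a single, homogeneous treatment effect; that is precisely the kind of assumption Friedman’s criticism, discussed below, targets.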
There are two fundamental problems of causal inference. The first is to
ascertain that the identified causal relationship does in fact hold for the
population at hand. The other is to determine whether and to what
extent the relationship holds also for other populations, i.e., populations
that are not described by the data set at hand. It is sometimes postulated
that when choosing a research method, researchers face a tradeoff
between the two fundamental problems (Just and Roe ). Some
methods are really good at solving the first problem because their proto-
col makes sure that the method applies only under conditions that make it virtually certain that no explanation other than the targeted causal explanation can account for the data. Randomized experiments are an
example of this type. The problem, however, is that because of the
strict protocol, results tend not to travel far beyond the test population,
for example because experiments create certain artefacts that make
people behave differently than they normally would. As W. Allen Wallis and
Milton Friedman () put it a long time ago, “It is questionable
whether a subject in so artificial an experimental situation could know
what choices he would make in an economic situation; not knowing,
it is almost inevitable that he would, in entire good faith, systematize
his answers in such a way as to produce plausible but spurious results.”
Other methods, such as ordinary least squares, apply more widely and
do not create artefacts. But they are not very good at ruling out all salient
competing hypotheses. Using these methods, researchers are free to
choose data sets they think are highly representative, but they cannot
be terribly certain of the results.
Econometricians affiliated with the design-based school tend to
resolve the tradeoff by focusing almost exclusively on the first problem
of causal inference. To the extent that they do so, Friedman correctly cri-
ticizes the school for ignoring the second problem of inference. Thus,
design-based econometrics “treats [agents] as if they act homogeneously
in response to whatever objective variables the economist deems worthy
of investigating experimentally” (). In light of Friedman’s argument
for the presumption of ideational heterogeneity in Chapter , he main-
tains in Chapter  that judicious social scientists would treat homogeneity
as the exception rather than the rule, not the other way around.
The inference rule Friedman is criticizing—“assume homogeneity
unless there is some specific reason to think otherwise”— has been
called “simple induction.” It is indeed highly fallible. As Daniel Steel
(, ) argues, “simple induction is limited, and … it is highly desir-
able that it be supplemented with some more sophisticated inferential
strategy.” In the past few decades, methodologists have spent a lot of
effort on developing such more sophisticated inferential strategies.
Steel’s own approach builds on knowledge of mechanisms, Nancy Cart-
wright’s on knowledge of causal capacities (Cartwright and Munro ),
Judea Pearl’s on knowledge of causal structure (Bareinboim and Pearl
), Francesco Guala’s on certain kinds of analogies (Guala ),
and in joint work with a former student I have defended an approach
that combines Guala’s analogical reasoning with the construction of
latent classes (van Eersel et al. ). None of these approaches assumes
homogeneity, but instead each tries to distinguish between predictable
and unpredictable aspects of behavior in order to make inferences from
experimental populations to other populations more reliable.
All these approaches are highly abstract in that they do not specify the
source of heterogeneity any more concretely than with respect to under-
lying “mechanisms” or “structures” or simply “causes.” But to the extent
that ideas or “webs of belief” cause behavior, ideational heterogeneity is
one specific reason among others why predictions may fail. And to the
extent that behavior is not entirely random but varies systematically
with aspects of the settings within which it occurs, these approaches
can pick up on the systematic variation. Information about “homogeniz-
ing factors” such as social norms can help a great deal in this process.
Importantly, predicting the effects of policy interventions is a special
case of solving the second fundamental problem of causal inference.
Here, the new context to which a result is applied is not an existing
context but one created by the policy. Much of the recent methodologi-
cal work on the second problem of inference is concerned exactly with
these kinds of predictions (e.g., Reiss , chs. -; Cartwright and
Hardie ). However, this does not show that social science, as it is
actually practiced, takes the possibility of ideational heterogeneity (or
in fact any other kind of heterogeneity) into account when making pre-
dictions. In that sense, the criticism Friedman makes of standard neoclas-
sical economics, the new econometrics, and behavioral economics (along
with social psychology) in Chapter  is well taken. Economists and other
social scientists do rely on a priori theory and simple induction too often (a
point I have also made in Error in Economics, ). However, I agree with
Friedman, too, that a more judicious social science is possible. Indeed,
recent discussions in methodology suggest strategies for making social
science more judicious.
So the question is whether there aren’t any in-principle obstacles to
judicious social science. Chapter  argues there is at least one: the spiral
of conviction (although Friedman does not state that the barrier is
insurmountable).

The Spiral of Conviction


The spiral of conviction starts with Lippmann’s idea that everyone uses
interpretive stereotypes in order to make sense of the “great blooming,
buzzing confusion” that is reality. Every conscious agent receives far
more information than he could make sense of. Stereotypes help one to
navigate by screening out information that is inconsistent with one’s
stereotypes. The more one knows about a subject, the more one’s stereo-
type will get reinforced. Children and, more generally, newcomers to a
subject thus tend to be relatively open minded when confronted with
new ideas, as they have not had time to develop and reinforce stereo-
types, while those who have acquired a great deal of knowledge about
it—such as experts—lean on the stereotype to make sense of the knowl-
edge, while leaning on the knowledge to reinforce the stereotype. In
short, they tend to become dogmatic about the stereotype.
I think that there is a great deal of truth to this theory. It does explain
observations such as this one from Max Planck (, ): “A new scien-
tific truth does not triumph by convincing its opponents and making
them see the light, but rather because its opponents eventually die, and
a new generation grows up that is familiar with it” (which has come to
be known as “Planck’s Principle”; see Hull et al. ). Similarly, there
is the fact that many scientific innovations are made by outsiders to the
field (in biology, for instance, see Harman and Dietrich ).
However, the spiral of conviction is at best a tendency. That is, it is a
mechanism that operates in some individuals but not all, and even where
it operates, there are countervailing mechanisms that prevent it from
complete dominance. As Friedman puts it, “as people gain knowledge
of a topic, their opinions about it tend to rigidify,” there exists “a pressure
for dogmatism” or a “propensity to dogmatism” (, , , emphases
added). We can imagine, then, that some researchers might become
aware of this tendency (as they become aware of other cognitive
biases, such as confirmation bias) and take active steps to combat it.
Here is one recipe. Identify the “crowning postures” in your belief
system, to use terminology Friedman () borrows from Philip
E. Converse to denote beliefs that are crucial to your interpretive
schema. Treat them not, in a realist manner, as representations of
reality but rather, in instrumentalist fashion, as tools for prediction. Ident-
ify one or a small set of alternative crowning postures. Derive predictions
from the alternative belief constellations. (By “predictions,” here, I do
not mean statements about the future effects of interventions, but
answers to the question: What observations would we expect to make
if the crowning posture were true? Chapter  uses this method in the
context of discussing alternatives to the spiral-of-conviction model.)
Take the score of failed and successful predictions for each alternative
constellation of beliefs. If an alternative outperforms one’s crowning pos-
tures to some pre-determined degree, switch to the alternative.
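
In schematic form, and purely as my own illustration of the recipe just sketched (the names, data structures, and switching margin are invented), the countervailing procedure might look like this:

```python
# A schematic sketch of the de-dogmatizing recipe: treat each
# "crowning posture" as a predictive instrument, score it against
# observations, and switch when an alternative outperforms it by a
# pre-set margin. Everything here is a stylized stand-in.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Posture:
    name: str
    predict: Callable[[dict], bool]   # what we would expect to observe

def score(posture: Posture, observations: list) -> float:
    """Fraction of observations the posture correctly anticipated."""
    hits = sum(posture.predict(obs) == obs["outcome"] for obs in observations)
    return hits / len(observations)

def maybe_switch(current: Posture, rivals: list,
                 observations: list, margin: float = 0.1) -> Posture:
    """Keep the current posture unless a rival beats it by `margin`."""
    best = max(rivals, key=lambda p: score(p, observations))
    if score(best, observations) > score(current, observations) + margin:
        return best
    return current
```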
Of course, such countervailing mechanisms won’t be perfect. I just use
this to illustrate the idea that the spiral of conviction is a tendency rather
than an inescapable trap. What is more important is that even if every
scientist were dogmatic individually, this would not mean that science
as a whole could not make progress. Gebhard Kirchgässner (, -
) makes this point in discussing dogmatism and ideology among econ-
omists, borrowing a Popperian idea. It is worthwhile quoting him in full:

One might argue that this is not a problem of individual attitudes of scien-
tists but a “systemic request”: objectivity of individual scientists might be
seen as a necessary condition for scientific progress. However, Popper
([Popper ], p. ) begs to differ: “It is a mistake to assume that the
objectivity of a science depends on the objectivity of the scientist. And
it is a mistake to believe that the attitude of the natural scientist is more
objective than that of the social scientist. … The objectivity of science is
not a matter of the individual scientists but rather the social result of
their mutual criticisms, of the friendly-hostile division of labour among
scientists, of their co-operation and also of their competition.”…

To get a satisfactory answer to this question, it is necessary to take into
account, as K. Popper emphasises, that science is not the business of sep-
arate individuals but that it is a social process, in which some scientists
make conjectures and others criticise these. Some of these conjectures
will (at least temporarily) survive criticism and be taken as approved
hypotheses, while others will be refuted because of logical deficiencies
or incompatibility with the available empirical data. What is decisive is
not whether the individual scientist is objective or not, but that the scien-
tific discourse takes place in a climate where criticisms are not only allowed
but even desired. Only criticisms of our conjectures enable us to detect
their weaknesses and to proceed to better conjectures. … If we had to
rely on the objectivity of the individual scientist to reach scientific pro-
gress, then the chance for progress would be rather limited, because scien-
tists—like all other human beings—are generally biased in favour of their
own ideas. Whether scientific progress is possible or not in a society
depends much more on the (rational) organisation of the scientific
process than on the intentions of the individual scientists.

So even if every individual scientist is subject to the spiral of conviction,
it does not mean that science as a whole cannot use mechanisms that
control the influence of individual biases and thus enable progress. This
I take to be the lesson of Planck’s Principle: science does advance after
all, even if one funeral at a time. Thus, I think we have to identify a differ-
ent kind of obstacle to a well-functioning technocracy.

Why Epistocrats Disagree


My reservations about epistocratic knowledge start earlier than at the point
of extrapolating from existing results to policy predictions. Essentially, it is
highly unlikely that experts will agree even on what the relevant (social
scientific) facts are. (Cf. also Friedman’s discussion of the “Fact of Techno-
cratic Disagreement,” ff). But expert agreement is essential to the feasi-
bility of technocracy and epistocracy. Without expert agreement, policy
makers would have to choose among the disagreeing experts. How
could they do that in a way that doesn’t bias selection along political
lines and in the absence of social science expertise on their part? Or, as
Friedman suggests, they could do it in a way that is biased toward the con-
fidence shown by some experts, which would institutionalize the closed-
mindedness he derives from the theory of the spiral of conviction—not
because they take an expert’s confidence as a sign of having gotten hold
of truth, but because less-confident experts will tend not to want to
enter into political debate with more-dogmatic experts, having less confi-
dence that their position should be defended at all costs ().
Very abstract principles such as those proposed by Alvin Goldman ()
would not seem to help in this case. Nor will we usually have the time to
force agreement in a consensus conference, or by means of Habermas’s
“unforced force of the better argument” applied to epistocrats—the episte-
mic reasonableness of which is dubious anyway (Solomon ).
Beyond the problem of epistocrats’ disagreement about facts is the
problem of all technocrats’ (potential) disagreement about values. This problem will, moreover, aggravate the fact of expert disagreement, making
it difficult for citizen-technocrats or their political agents to choose
among experts in mixed epistocratic-democratic technocracies such as
ours. Let me break this problem down into the following four issues:

1. Experts disagree about values.
2. Disagreement about values feeds through to factual opinions because of fact/value entanglement.
3. The fragility of social science facts raises a different kind of interpretive problem.
4. There are currently no evidential standards that are widely shared among social scientists, nor are they likely to emerge in the near future.

In what follows, I will address each point in turn.

Alternative Visions of the Good Society


Early on in Power Without Knowledge, Friedman uses survey data from
twentieth-century America to contend that there seems to have been a
broad public consensus in favor of technocracy-friendly policy objec-
tives, such as low inflation and low unemployment (-). Further
examples are “a high standard of living, freedom, peace and a better
life for one’s children” (, quoting public-opinion scholar Bernard Berel-
son). I certainly do not deny that these are all valuable objectives, and I
suspect that most social science experts agree. Nonetheless, I do not
think that there is a value consensus among experts in the relevant
sense. Disagreement about which of a number of policy options is best
therefore stems not only from possible disagreement about which
policy is best to achieve a given end, which is Friedman’s focus, but
also from disagreement about the ends themselves.
My starting point is value pluralism of the kind articulated by Isaiah
Berlin (e.g., Berlin ). Genuine values are many, and different values
may stand in conflict with each other, necessitating a choice between
them. The first source of disagreement given plural values is what to put
on the list. What is conspicuously absent from Berelson’s/Friedman’s list
is equality, which is likely to be very high on many policy experts’ lists.
Does a society in which everyone has “enough” according to some
accepted standard, while the most advantaged members nevertheless
command far more resources than the least advantaged, suffer from a
“social or economic problem”? There is no consensus on this question.
Disagreements about what to put on the list of valuable things could be
resolved in two ways. The conjunctive approach would include only items
we can find on every expert’s list. Apart from the risk of ending up with an
empty list, the approach would be skewed against some visions of the good
society. A society in which inflation is zero, no one is unemployed, and
everyone lives freely and enjoys a high standard of living is not a good
society, according to some experts, if (say) material inequality is rampant,
and according to others if (say) everyone pursues their own hedonistic
pleasures without regard for transcendent goals. The disjunctive approach
would put every item on the list that is valued by at least one expert. Here
the opposite considerations apply. Progressives might, for instance, deny
that sanctity or tradition are values to begin with.
The second source of disagreement stems from the fact that values are not
merely plural, but stand in conflict with each other. When they do, choices
have to be made about which of two or more conflicting values are more
important in the situation at hand. This is a point Friedman takes up, but
he discusses it in the context of expert knowledge, not value judgments.
Thus, he argues that experts often lack “knowledge of which social pro-
blems are not only real but significant, in the sense that they affect large
numbers of people — or small numbers intensely” (). But whether or
not a social problem is significant or not is a value judgment, not something
that can be decided on the basis of evidence alone. Is widespread drug abuse
or inflation the more pressing social problem (or a problem at all)? That will
depend, among other things, on whether freedom or well-being is regarded
as the more important value. Even in the counterfactual situation in which
all experts agree on what is on the list, there will be disagreements about
which of two or more conflicting values has priority, and therefore which
social problems are the most significant.
The third source of disagreement concerns the interpretation and appli-
cation of any given value to a given situation. “Freedom,” “an adequate
standard of living,” “equality,” and so on all mean different things to differ-
ent people. Can I be truly free if I don’t have resources? Do we enjoy an
adequate standard of living if others have vastly more? Can society be truly
equal if it grants mere formal equality in the sense of equal rights and
ignores outcomes? Even the more technical-sounding aims, such as “low
inflation,” are open to multiple interpretations that are influenced by
further value judgments. Is a 2 percent inflation target adequate or not?
Arguably, no one’s well-being is greatly affected by 2 percent inflation.
However, over time resources are redistributed from creditor to debtor,
which at least some would regard as unfair. Thus, whether 2 percent
counts as “low” inflation depends on value judgments.
In , Milton Friedman (no relation to the author of Power Without
Knowledge) famously argued that differences in opinion about policy stem
primarily from differences in opinion about which of a number of pol-
icies is most likely to get us closer to an agreed-upon goal:

I venture the judgment, however, that currently in the Western world,
and especially in the United States, differences about economic policy
among disinterested citizens derive predominantly from different predic-
tions about the economic consequences of taking action—differences
that in principle can be eliminated by the progress of positive economics
—rather than from fundamental differences in basic values, differences
about which men can ultimately only fight. An obvious and not unimpor-
tant example is minimum-wage legislation. Underneath the welter of
arguments offered for and against such legislation there is an underlying
consensus on the objective of achieving a “living wage” for all, to use
the ambiguous phrase so common in such discussions. The difference of
opinion is largely grounded on an implicit or explicit difference in predic-
tions about the efficacy of this particular means in furthering the agreed-on
end. (Friedman , -)

But this is mistaken. Even if everyone concerned agreed that “achieving a
living wage for all” is a worthwhile goal, judgments about minimum-
wage policies will be affected by value judgments concerning what
other goals economic policy should pursue, the relative importance of
achieving a living wage vis-à-vis these other goals, and what the phrase
“living wage for all” amounts to in the situation at hand. We cannot
therefore neatly separate “positive” questions about the efficacy of
means from “normative” questions about the desirability of ends.

Fact/Value Entanglement
The second reason why experts disagree is that their disagreement about
values feeds through to their opinions about what the facts are and what
predictions follow from them.
Value judgments affect scientific results in a number of ways, and
attempts to eliminate the influence of values will ultimately be frustrated,
to the detriment of the quality of scientific output. In previous work I
have argued this at length in the context of positive economics (Reiss
) and evidence-based policy (Khosrowi and Reiss ), so let me
illustrate it by returning to Milton Friedman’s example, that of
minimum-wage legislation. Are differences in predictions about the
effects of the minimum wage value free, such that these differences can “in principle be eliminated by the progress of positive economics”?
It would not appear to be so, at least if we grant that positive economics
has made progress in the almost 70 years since Friedman wrote these
words. There is still disagreement among economists about the effects
of increases in minimum wages on the employment rate, despite vastly
better data, vastly improved computing power, and vastly more studies
having been conducted since 1953.
Ironically, it was also in 1953 that philosopher of social science
Richard Rudner published a paper attacking the neat separability of
“positive” claims and “normative” claims about the desirability of out-
comes (Rudner 1953). He argued, very generally, that hypothesis tests
are always subject to uncertainty: no matter how long and hard scientists
try, their judgments concerning the truth or falsehood of a hypothesis
may be wrong. But since there are two types of error (accepting a false
hypothesis vs. rejecting a true hypothesis), and there is a tradeoff
between the two, scientists must consider, when deciding how to
resolve the tradeoff, which of the two errors has graver consequences.
This cannot be done without value judgments. Applied to the effect of
increases in the minimum wage on the employment rate, an economist
might wrongly accept the null hypothesis of no effect or might
wrongly reject it, judging that there is a (negative) effect. In the first
case (assuming that the scientific judgment would be implemented as a
technocratic policy), the error would throw some people out of their
jobs, decreasing employment. In the second case, low-wage employees
would miss wage increases they could have had, had the policy been
implemented. Which error is worse? This depends on whether one
regards unemployment or forgone wage rises as the more significant
social ill.
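
A stylized calculation (mine, with invented numbers) makes Rudner’s tradeoff visible: the stricter the standard for announcing a disemployment effect, the greater the chance of missing a real one.

```python
# Where you set the decision threshold trades false alarms against
# misses, and choosing between the two requires judging which error
# is worse. The effect size and scale below are purely illustrative.
from statistics import NormalDist

h0 = NormalDist(mu=0.0, sigma=1.0)    # "no employment effect"
h1 = NormalDist(mu=-1.5, sigma=1.0)   # a true negative effect (assumed)

for alpha in (0.10, 0.05, 0.01):
    # one-sided test: reject H0 ("no effect") if the estimate is low enough
    cutoff = h0.inv_cdf(alpha)
    beta = 1 - h1.cdf(cutoff)         # chance of missing a real effect
    print(f"alpha={alpha:.2f}  cutoff={cutoff:+.2f}  beta={beta:.2f}")
```

Lowering alpha (protecting employers against false alarms) mechanically raises beta (the risk of overlooking real harm to workers); no amount of data removes the need to decide how to balance them.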
Apart from obvious reasoning flaws such as confirmation bias, one
reason why economists disagree about the effects of raising minimum
wages is thus that they disagree about whether unemployment or
wage stagnation is a worse possible consequence, which in turn will be
affected by differences about more fundamental values such as freedom,
equality, and well-being. This is why advances in positive economics
—better data, better methods, more computing power, more events
that have happened and are therefore amenable to scientific analysis—
will not eliminate disagreements about the effectiveness of policies.
There is another source of fact/value entanglement that is relevant
here: we cannot measure social phenomena without making value judg-
ments. Many choices required in the construction of a socio-economic
indicator such as the consumer-price index (CPI) cannot be justified
on the basis of “the facts” alone. A price index is fully accurate only
when nothing changes in the economy except the prices of goods.
However, many things do change. People substitute away from goods
the price of which has increased relatively; their tastes change; the quality
of goods changes; new goods appear and old goods are discontinued; new
distribution outlets such as discounters and online platforms are intro-
duced; the environment changes; and so on (Reiss ). None of
these changes can be put into the CPI or left out of it without making
value judgments. Consider quality changes. When a product the price
of which was included when the CPI was last measured has disappeared,
but a similar product has become available, a statistician must make a
judgment about whether the two goods are comparable. If they are not
comparable, one of a number of adjustment methods can be used. But
comparable according to what standard? The CPI aims to measure the
cost of living. If the price of a good goes up but its quality improves at
the same time, has the cost of living gone up? This question is hard to
decide, especially if the former version of the good becomes unavailable
so that consumers are forced to buy the new version (or something else
altogether). Ultimately, at any rate, a judgment has to be made whether
(and to what extent) a consumer benefits from the alteration. And that is
certainly a value judgment.
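
A toy calculation (my illustration; the prices and quality valuations are invented) shows how directly the comparability judgment feeds into the measured figure:

```python
# A replaced product costs 110 instead of 100 and is "better". How
# much of the 10 is quality and how much is inflation? Two defensible
# judgments yield two different measured inflation rates for the item.
old_price, new_price = 100.0, 110.0

# Judgment A: the quality gain is worth the full 10, so the rise is
# pure quality; measured item inflation is 0%.
quality_value_a = 10.0
# Judgment B: the gain is worth only 4 (say, the buyer is forced into
# features she does not want); measured item inflation is 6%.
quality_value_b = 4.0

for label, q in (("A", quality_value_a), ("B", quality_value_b)):
    inflation = (new_price - q - old_price) / old_price
    print(f"judgment {label}: measured item inflation = {inflation:.1%}")
```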
This is relevant to the discussion, in Power Without Knowledge, of Lipp-
mann’s proposal to eschew problems of interpretation by collecting and
publishing objective statistics that the public and government decision-
makers could use directly (-). In addition to the problems that
(Jeffrey) Friedman mentions with using statistics in this manner (the
fact that statistics treat unlike as like, the inability of statistics to demon-
strate causality, the unrepresentativeness of the sample from which a
given statistic is drawn), the problem with using statistics is that values
are not eschewed but at best buried under a lather of procedures and
instructions that either implement earlier value judgments or call for
new ones in their application (as the example of quality changes was
meant to show).

Fragile Facts
Moving now away from values, a third source of disagreement among
experts about the effectiveness of policies is related to the sensitivity of
answers to the precise formulation of policy questions. There will
rarely be a unique answer to a question as general as “Do increases in
minimum wages lower employment opportunities?” Note that I am
talking here about the facts themselves, not our inferences about them.
It is the facts that are fragile, not (merely) our knowledge of them. There
may be one fact about what’s true in the short run, a different one about
what’s true in the middle or long run, and so on. I call them fragile
because what is true in one fully specified context often does not tell
us very much about what is true in an only slightly altered context.
Correct answers to policy questions will often depend on the following
dimensions: horizon, the time and place of application, contrast, and
the choice of outcome variable.
Horizon. The effects of policies unfold over time, and short-run and
long-run effects may differ markedly. That is, policies can have a desirable
effect in the short run but an adverse effect in the long run and vice versa.
It has been argued, for example, that even if it were true that minimum-
wage increases have no negative effects on the employment rate (Card
and Krueger ), this observation holds true only of the short run.
In the long run, relatively labor-intensive businesses will shut down
and get replaced by relatively capital-intensive alternatives, thus decreas-
ing employment opportunities in a manner that may be invisible when
looking for employment effects by means of natural experimentation
(Sorkin ). This variability of effectiveness with respect to the exam-
ined horizon is widespread in economic policy. For example, Keynesian
countercyclical policies may well help in the short run but damage econ-
omies in the long run (Sinn ); “nudges” may help individuals achieve
their goals in the short run but undermine their decision-making capacity in
the long run (Bovens ).
When experts disagree about a question such as whether increases in
minimum wages lower employment opportunities, then, they may
simply be interpreting the question differently. Or they may know that
the effects differ over time but attribute a different degree of significance
to the short or the long run.
Time and Place of Application. Causal relations in the social world
depend on complex arrangements of individuals and the social structures
within which they operate. These arrangements differ from place to
place, and in a given place over time. The effects of minimum-wage
increases will differ, for instance, depending on the starting point (i.e.,
whether the existing minimum wages are already quite high or rather
low), on employment laws and business regulation, on social norms,
industry practices, and a host of other factors. What is true in the
United States may not be true in Mexico, and what was true in the
United States in  may no longer be true now. Thus, when two
economists disagree about the effects of policy, they may simply be
talking about different applications. (Friedman makes similar points in
Sections .., “The Problem of Novel Circumstances,” and ..,
“The Problem of Heterogeneous Agents.”)
Contrast. One way to think about causal factors is that they are “differ-
ences that make a difference” (Holland ). Thus, an intervention is
judged to be effective if its implementation makes a difference to some
outcome variable of interest. The difference in the cause is between the
intervention and a situation without the intervention; the difference in
the effect is between the outcome variable with and without the interven-
tion. In many experiments, it is clear what this means. In a clinical trial, for
instance, there will be a treatment group and a control group, and one
measures the outcome variable in both groups. However, in a policy
context, it is not always clear which two situations we should compare.
When we ask, say, whether free trade is beneficial for a nation, do we
mean “relative to a thoroughly protectionist regime” or rather “relative
to an intelligent mix of protectionist and market elements”? The answer
is likely to differ from expert to expert. Thus, when two economists dis-
agree about the effectiveness of policies, they might in fact disagree
about which of a number of alternatives is the most relevant one.
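
A toy example (mine; the numbers are invented) makes the dependence on contrast explicit:

```python
# The estimated "effect of free trade" depends on which alternative
# it is contrasted with. National-income figures are made up.
national_income = {
    "free_trade": 105.0,
    "full_protectionism": 95.0,
    "smart_mix": 108.0,   # hypothetical mix of protection and markets
}

baseline = national_income["free_trade"]
for alternative in ("full_protectionism", "smart_mix"):
    diff = baseline - national_income[alternative]
    print(f"free trade vs {alternative}: {diff:+.1f}")
# vs full protectionism, free trade looks beneficial (+10.0);
# vs the smart mix, it looks harmful (-3.0).
```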
Outcome variable. In the minimum-wage case, the employment rate is
usually the chosen outcome variable. It is questionable, however,
whether this is the best choice. The employment rate does not reflect
the number of hours worked, for instance. Thus, the employment rate
may respond in one way to an increase in the minimum wage, the
full-time equivalent (FTE) employment rate in another. The FTE rate,
however, is insensitive to the number of people employed. And
neither rate reflects the type of employment contract being used.
Especially in the long run, more permanent contracts may be replaced
by temporary arrangements, changing the nature of the employment
relationship. Nor do the two rates reflect other aspects of work, such
as job satisfaction or stress related to particular jobs.
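
A toy example (mine, with invented numbers) shows how the two rates can move in opposite directions after, say, a shift from full-time to part-time contracts:

```python
# Headcount employment can rise while full-time-equivalent (FTE)
# employment falls, so "the" employment effect depends on the
# outcome variable chosen. Job counts are made up.
before = {"full_time": 80, "part_time": 20}
after = {"full_time": 70, "part_time": 34}

def headcount(jobs):
    return jobs["full_time"] + jobs["part_time"]

def fte(jobs):
    # count each part-time job as half a full-time job
    return jobs["full_time"] + 0.5 * jobs["part_time"]

print(headcount(before), "->", headcount(after))  # 100 -> 104 (up)
print(fte(before), "->", fte(after))              # 90.0 -> 87.0 (down)
```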
The choice of outcome variable may also reflect hidden value judg-
ments. For some people, a job is merely a means of making ends meet,
while for others it is a source of satisfaction, and for still others a way to
get ahead in life, or retire early, or impress others. We may therefore
ask if some measure of well-being would not be the more appropriate
outcome variable than the sheer fact of employment. But the measure-
ment of well-being is, of course, fraught with conceptual and practical
difficulties (see for instance Alexandrova ). Do we mean by “well-
being” preference satisfaction, as most neoclassical economists do, or
shall we use an alternative conception such as happiness (Layard )
or capabilities (Sen ; Nussbaum )? If we stick with preference
satisfaction, shall we assume that people’s actual choices reflect their pre-
ferences or shall we allow for mistakes? If we allow for mistakes, how can
we avoid the paternalistic imputation of preferences by the economist?
The point of this is: there is no unique outcome variable; the effect a policy has on one variable may differ from its effect on an alternative vari-
able; and the choice of variable may be irreducibly normative.
Fragile facts are those that are sensitive to the choice of horizon, time
and place of application, contrast, and outcome variable (Reiss ).
Many if not most social facts are fragile in this sense. When two
experts disagree about a policy question, it may be due to a different
interpretation of the question they are being asked, or it may be due to
a different judgment of which facts are most relevant for making a predic-
tion about a future intervention.

The Absence of Evidentiary Standards


There is a further source of potential disagreement among social science
experts about what the facts are. Methodological battles have plagued the
social sciences almost since the birth of economics, which was the first
social science to be separated from moral philosophy. Many of these
battles have revolved around the proper place of social theory in learning
about the principles that describe and explain social phenomena (Reiss
unpublished). One side maintains that all inference starts from theory,
leaving relatively little room for inductive reasoning from facts (no
room at all, in the case of the Austrian school of economics). The
other side maintains that social science does not yet have credible
theory and must therefore rely on inductive reasoning from facts,
leaving relatively little room for theory (no room at all, in the case of
some members of the German historical school of economics).
The recent debate in economics about the role of randomized trials
and natural experiments is an installment in the ongoing methodological
battles. Proponents of randomized and natural experiments are the des-
cendants of the older inductivists. Their critics tend to argue that induc-
tive inference without guiding theory is likely to be unreliable, making
them the descendants of the older deductivists. I do not think that
there is a straightforward solution to these debates. Deductivists rightly
say that induction without theory is blind; but inductivists rightly
respond that deduction from poor theory is empty.
Given this meta-disagreement, it seems unlikely that there will be
widespread agreement on evidentiary standards soon. But agreement
on evidentiary standards is needed for an agreement on what the facts are.

* * *

The “fact of technocratic disagreement” is, in my view, the main reason
why epistocracy is a bad idea. If expert disagreement is likely to persist, as
I have argued here, a major prerequisite of an epistocracy that does its job
will not be met. If anything, I am thus more skeptical of at least the epis-
tocratic version of technocracy, and mixed versions, than Friedman is,
but for slightly different, albeit also epistemic, reasons.

REFERENCES

Alexandrova, Anna. . A Philosophy for the Science of Well-Being. Oxford: Oxford
University Press.
Angrist, Joshua, and Jörn-Steffen Pischke. . “The Credibility Revolution in
Empirical Economics: How Better Research Design Is Taking the Con Out
of Econometrics.” Journal of Economic Perspectives (): -.
Bareinboim, Elias, and Judea Pearl. . “A General Algorithm for Deciding
Transportability of Experimental Results.” Journal of Causal Inference ():
-.
Berlin, Isaiah. . Two Concepts of Liberty. Oxford: Clarendon Press.
Bovens, Luc. . “The Ethics of Nudge.” In Preference Change: Approaches from
Philosophy, Economics, and Psychology, ed. Till Grüne-Yanoff and Sven Ove
Hansson. Dordrecht: Springer.
Brennan, Jason . Against Democracy. Princeton: Princeton University Press.
Card, David, and Alan Krueger. . Myth and Measurement: The New Economics of the
Minimum Wage. Princeton: Princeton University Press.
Cartwright, Nancy, and Jeremy Hardie. . Evidence-Based Policy: A Practical Guide
to Doing It Better. Oxford: Oxford University Press.
Cartwright, Nancy, and Eileen Munro. . “The Limitations of Randomized
Controlled Trials in Predicting Effectiveness.” Journal of Evaluation in Clinical
Practice (): -.
Cialdini, Robert. . Influence: Science and Practice, th ed. Boston: Allyn & Bacon.
Cialdini, Robert. . Pre-suasion. New York: Simon & Schuster.
Collins, Harry, and Robert Evans. . Why Democracies Need Science. Cambridge:
Polity.
Easterly, William. . The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor. New York: Basic Books.
Fiedler, Klaus. . “The Dependence of the Conjunction Fallacy on Subtle
Linguistic Factors.” Psychological Research : -.
Freedman, David. . “Statistical Models and Shoe Leather.” Sociological Methodology : -.
Friedman, Jeffrey. . Power Without Knowledge: A Critique of Technocracy.
New York: Oxford University Press.
Friedman, Milton. . “The Methodology of Positive Economics.” In idem, Essays
in Positive Economics. Chicago: University of Chicago Press.
Goldman, Alvin. . “Experts: Which Ones Should You Trust?” Philosophy and
Phenomenological Research (): -.
Guala, Francesco. . “Extrapolation, Analogy, and Comparative Process
Tracing.” Philosophy of Science (): -.
Habermas, Jürgen. . Toward a Rational Society: Student Protest, Science, and Politics.
Boston: Beacon Press.
Harman, Oren, and Michael Dietrich. . Outsider Scientists: Routes to Innovation in
Biology. Chicago: University of Chicago Press.
Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, Herbert
Gintis, and Richard McElreath. . “In Search of Homo Economicus:
Behavioral Experiments in  Small-Scale Societies.” American Economic
Review (): -.
Holland, Paul. . “Statistics and Causal Inference.” Journal of the American Statistical
Association (): -.
Hull, David, Peter Tessner, and Arthur Diamond. . “Planck’s Principle.” Science
(): -.
Just, David, and Brian Roe. . “Internal and External Validity in Economics Research:
Tradeoffs Between Experiments, Field Experiments, Natural Experiments, and
Field Data.” American Journal of Agricultural Economics (): -.
Khosrowi, Donal, and Julian Reiss. . “Evidence-Based Policy: The Tension
Between the Epistemic and the Normative.” Critical Review (): -.
Kirchgässner, Gebhard. . “Empirical Economic Research and Economic Policy
Advice: Some Remarks.” In Economic Policy Issues for the Next Decade, ed. Karl
Aiginger and Gernot Hutschenreiter. New York: Springer.
Kitcher, Philip. . Science in a Democratic Society. Amherst, N.Y.: Prometheus.
Koppl, Roger. . Expert Failure. Cambridge: Cambridge University Press.
Layard, Richard. . Happiness: Lessons from a New Science. New York: Penguin.
Levy, David, and Sandra Peart. . Escape from Democracy: The Role of Experts and the
Public in Economic Policy. Cambridge: Cambridge University Press.
Lippmann, Walter. [] . Public Opinion. New York: Free Press.
Neurath, Otto [] . “Unified Science and Psychology.” In Unified Science, ed.
Brian McGuinness. Dordrecht: Reidel.
Nichols, Thomas. . The Death of Expertise: The Campaign Against Knowledge and
Why It Matters. Oxford: Oxford University Press.
Nussbaum, Martha. . Women and Human Development: The Capabilities Approach.
Cambridge: Cambridge University Press.
Planck, Max. . Scientific Autobiography and Other Papers. New York: Philosophical
Library.
Popper, Karl. . “On the Logic of the Social Sciences.” In Theodor Adorno,
et al., The Positivist Dispute in German Sociology, trans. Glyn Adey and David
Frisby. London: Heinemann.
Powers, Nicholas, Allen Blackman, Thomas Lyon, and Urvashi Narain. . “Does
Disclosure Reduce Pollution? Evidence from India’s Green Rating Project.”
Environmental and Resource Economics : -.
Reiss, Julian. . Error in Economics: Towards a More Evidence-Based Methodology.
London: Routledge.
Reiss, Julian. . Philosophy of Economics: A Contemporary Introduction. New York:
Routledge.
Reiss, Julian. . “Fact-Value Entanglement in Positive Economics.” Journal of
Economic Methodology (): -.
Reiss, Julian. . “Expertise, Agreement, and the Nature of Social Scientific Facts
or: Against Epistocracy.” Social Epistemology (): -.
Reiss, Julian. Unpublished. “The Perennial Methodenstreit: Observation, First
Principles, and the Ricardian Vice.” Department of Philosophy, Durham
University.
Rudner, Richard. . “The Scientist Qua Scientist Makes Value Judgments.”
Philosophy of Science (): -.
Sen, Amartya . Development as Freedom. Oxford: Oxford University Press.
Sinn, Hans-Werner. . The Euro Trap: On Bursting Bubbles, Budgets, and Beliefs.
Oxford: Oxford University Press.
Solomon, Miriam. . “The Social Epistemology of NIH Consensus
Conferences.” In Establishing Medical Reality, ed. Harold Kincaid and
Jennifer McKitrick. New York: Springer.
Sorkin, Isaac. . “Are There Long-Run Effects of the Minimum Wage?” Review
of Economic Dynamics (): -.
Steel, Daniel. . Across the Boundaries: Extrapolation in Biology and Social Science. Oxford: Oxford University Press.
Turner, Stephen . “What Is the Problem with Experts?” Social Studies of Science
(): -.
Tversky, Amos, and Daniel Kahneman. . “Extensional versus Intuitive
Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological
Review (): –.
Van Bouwel, Jeroen, ed. . The Social Sciences and Democracy. Houndmills:
Palgrave Macmillan.
van Eersel, Gerdien, Gabriela Koppenol-Gonzales, and Julian Reiss. .
“Extrapolation of Experimental Results through Analogical Reasoning from
Latent Classes.” Philosophy of Science (): -.
Wallis, W. Allen, and Milton Friedman. . “The Empirical Derivation of Indifference Functions.” In Studies in Mathematical Economics and Econometrics in Memory of Henry Schultz, ed. O. Lange, F. McIntyre, and T. O. Yntema. Chicago: University of Chicago Press.
