Bioethics, Experimental Approaches
Jonathan Lewis a
Joanna Demaree-Cotton b
Brian D. Earp b
a Centre of Social Ethics and Policy, Department of Law, The University of Manchester, UK
b Uehiro Centre for Practical Ethics, University of Oxford, UK
This is the authors’ copy of an article in press. Please cite as:
Lewis, J., Demaree-Cotton, J., & Earp, B. D. (in press). Bioethics, experimental
approaches. In M. N. S. Sellers & S. Kirste (eds.), Encyclopedia of the Philosophy of Law
and Social Philosophy. Cham, Switzerland: Springer.
Introduction
This chapter summarizes an emerging sub-discipline of both empirical bioethics and
experimental philosophy (“x-phi”) which has variously been referred to as experimental
philosophical bioethics, experimental bioethics, or simply “bioxphi” (Earp, Latham and Tobia,
2020; Earp et al., 2020; Lewis, 2020; Mihailov, Hannikainen and Earp, 2021). Like empirical
bioethics, bioxphi uses data-driven research methods to capture what various stakeholders
think (feel, judge, etc.) about moral issues of relevance to bioethics. However, like its other
parent discipline of x-phi, bioxphi tends to favor experiment-based designs drawn from the
cognitive sciences (Knobe, 2016)—including psychology, neuroscience, and behavioral
economics—to tease out why and how stakeholders think as they do.
Using insights gleaned from these experiments, bioxphi aims to bridge the descriptive and
normative programs of bioethical inquiry. Thus, it seeks not only to draw on, or respond to,
ethical questions raised by bioethicists (e.g., for purposes of formulating empirical research
questions), but also to advance substantive normative debates within the field. To this end,
rather than relying on unrealistic, abstract thought experiments to identify the contours of what
is morally at stake in some issue (e.g., Thomson’s “violinist” analogy in arguments about
abortion; for discussions, see Walsh, 2011; McMillan, 2018), bioxphi tends to deal with cases
that are more directly inspired by real-world dilemmas and decisions. These might pertain, for
example, to specific healthcare policy options or standards of clinical practice (Kingsbury and
Hegarty, 2022), to medical research and rules proposed to protect participants’ rights
(Dranseika et al., unpublished), to the understanding, use, or application of relevant legal
concepts (Sommers, 2020; Demaree-Cotton and Sommers, 2022), to evaluation and regulation
of cognitive enhancement or other emerging biotechnologies (Faber, Savulescu and Douglas,
2016; Mihailov et al., 2021), or (more generally) to human-technology and human-biosphere
relations (for overviews, see Earp, 2019; Earp et al., 2020; Earp et al., 2021; Earp et al., 2022).
We begin by articulating some of the conceptual and methodological issues that have motivated
a general interest in experimental approaches to bioethics with a view to detailing the ways in
which research in bioxphi has responded to those issues. We also further situate this emerging
sub-discipline in relation to both empirical bioethics and x-phi. In the second section, we
outline some of the strategies that have been employed within bioxphi studies to enlist
empirical findings (i.e., descriptive findings or models showing how and why people make
certain ethical judgments and/or interpret or apply relevant concepts) in the service of
bioethical arguments. Finally, we conclude with a brief reflection on the state of this
burgeoning sub-discipline.
The Value and Methods of Bioxphi
McMillan (2018) and Machery (2017) have argued, in different contexts, that when it comes
to people’s ethical judgments or applications of relevant concepts (e.g., deciding whether
someone is competent to refuse a doctor-recommended treatment), the basis for their decisions
is not always readily apparent. In the case of professional bioethicists, we do, typically, have
some idea of how they have reached their normative conclusions regarding an issue: for
example, when they explain their premises and reasoning in the context of an explicit argument
in the academic literature. Similarly, we can learn how bioethicists apply certain concepts such
as informed consent, competence, coercion, futility, equipoise, or medical necessity: ideally,
they will provide precise definitions of the concepts and explain how the concepts are being
applied. Why, then, might we be motivated to go beyond the armchair and employ empirical
methods to probe more deeply how individuals—both bioethicists and non-bioethicists—think
about ethical issues and why they think as they do? Several answers suggest themselves.
Firstly, even if we assume that professional bioethicists’ explicit argumentation tells us all we
need to know about their moral judgments and associated thought processes, professional
bioethicists make up only a tiny fraction of those engaged in ethical reflection on healthcare,
biomedical research, health policy, and related matters. Their intuitive moral responses to
particular cases, on the basis of which they are likely to formulate their normative arguments
(at least in part), may not be representative of those of a wider population. After all, these
responses are likely to be shaped, to some extent, by a bioethicist’s own (relatively narrow or
circumscribed) experiences, life circumstances, or even psychological dispositions. Thus,
professional bioethicists may, in some cases, fail to “detect” morally relevant features of certain
cases. This, in turn, may unduly restrict the scope or applicability of the arguments they develop
(for a discussion, see Leget, Borry and De Vries, 2009). Indeed, there is a vast array of different
stakeholders making important moral judgments and applying ethical concepts on a routine
basis, often in situations that have substantial real-life stakes and consequences. These diverse
stakeholders may include medical practitioners and other healthcare providers, hospital
managers, biomedical researchers, biobank personnel, policymakers, lawmakers, judges,
patients, and their families. Such “on the ground” participants in practical ethical decision-making, faced with complex, morally charged situations, may have developed certain intuitive
or morally relevant insights not available to the average armchair bioethicist. And although
these insights may not always be easily articulated, they may nevertheless be revealed through
the patterns of judgment these stakeholders generate in response to (experimentally controlled
variations on) realistic cases.
Importantly, just like these other healthcare stakeholders, professional bioethicists may not
always understand the underlying sources of their own intuitive responses to morally charged
situations or the contextual factors that influence those responses. Depending on such
background variables, including the cognitive processes that give rise to specific intuitions or
shape them into concrete judgments (e.g., of right or wrong), there may be reasons to assign
more or less weight to an intuition as a basis for moral judgment. This could be the case, for
instance, if an intuitive moral reaction to a given scenario, or set of scenarios, is shown to
emerge from a psychological process that is widely seen as normatively unreliable: for
example, a process distorted by racist or sexist assumptions or biases. Such a process, and its
biased outputs, would be unreliable in the sense that they are unlikely to “track the truth” of
the situation or help us arrive at a morally defensible conclusion (for discussions, see
Wedgwood, 2007; Sinnott-Armstrong, 2008; Machery, 2017).
Another reason to understand why or how a bioethical judgment applies to a given scenario is
so that action-guiding considerations, principles, or protocols can be developed for relevantly
similar cases. A crucial part of understanding whether the perceived moral (un)acceptability of
a particular action generalizes to other situations involves identifying the factors that shape
such perceptions in the first place and systematically exploring their scope (i.e., dimensions of
variance across situations that elicit similar reactions or judgments). For these and other
reasons, there is both theoretical and practical value in analyzing how and why people think
about bioethical matters, and not just what they think (Lewis, 2020).
However, such analysis cannot be conducted from the armchair. Reflection solely from the
armchair rather than from the bedside, bench, or committee room, especially on abstract or
idealized cases, may limit the real-world relevance of the intuitions, inferences and judgments
that make up such reflection. For instance, James Rachels (1975) appeals to intuitions about
fictional cases unrelated to healthcare in an attempt to call into question the moral difference
between active and passive euthanasia. However, as McMillan (2018) notes, physicians have
often objected that the distinction between active and passive euthanasia is morally relevant in
real clinical cases, and that Rachels’ fictional cases fail to generalize to actual end-of-life
decisions. Relatedly, Rodríguez-Arias and colleagues (2020) have shown that, under realistic
conditions, ordinary people draw the “killing” and “letting die” distinction very differently from
the way endorsed by some bioethicists. For their research participants:
the distinction between “ending” a patient's life and “allowing” it to end arises from
morally motivated causal selection. That is, when a patient wishes to die, her illness
is treated as the cause of death and the doctor is seen as merely allowing her life to
end. In contrast, when a patient does not wish to die, the doctor's behaviour is treated
as the cause of death and, consequently, the doctor is described as ending the
patient's life. This effect emerged regardless of whether the doctor's behaviour was
omissive (as in withholding treatment) or commissive (as in applying a lethal
injection) (p. 509).
More generally, if the goal is to develop a normative position regarding a concrete bioethical
issue, such as in the context of clinical care, it may be that the judgments of doctors or their
patients, rather than (only) those of armchair bioethicists, will in some cases constitute more
relevant data (Earp et al., 2021).
Empirical bioethicists will no doubt agree that the judgments of healthcare practitioners,
policymakers, patients, their families, and so on, should be considered when developing
guidance and recommendations for dealing with complex ethical issues in the real world. What
distinguishes bioxphi in terms of its relation to empirical bioethics is that, when it comes to
either investigating the normative reliability of different stakeholder judgments or clarifying
relevant bioethical concepts, such efforts involve experimentally testing the effects of different
variables on those judgments and building explanatory models of how the latter come about
(Earp et al., 2021). This feature is what bioxphi inherits from x-phi, which likewise draws on
the methods of cognitive science and experimental moral psychology.
In principle, bioxphi studies could employ the full range of experimental methods used in the
cognitive and psychosocial sciences, including, for example, the use of transcranial magnetic
or direct-current brain stimulation devices to influence the cognitive processes involved in
making moral judgments (e.g., Kuehne et al., 2015), or the administration of psychoactive
substances to influence moral motivations (see Earp, 2018). Indeed, as some have argued
(O’Neill and Machery, 2014; Mihailov, Hannikainen and Earp, 2021; Nado, 2021; Alfano,
Loeb and Plakias, 2022), experimental methods could also usefully be employed in
combination with other empirical methods, such as interviews, qualitative surveys, linguistic
corpus data analyses, anthropological work, and virtual reality simulations.
Nevertheless, the main method of x-phi from its inception, and hence of bioxphi more recently, has been the “contrastive vignette technique” (CVT) (for an overview, see Reiner, 2019). Broadly speaking, the CVT involves designing a pair of vignettes that describe the very same situation but differ from one another in a single, key respect. This difference constitutes
the experimental manipulation, which is expected, on theoretical grounds, to influence
participant responses, such as their normative judgments about the (im)permissibility of a given
action or their application of a given bioethical concept. By systematically varying what is
manipulated between conditions and measuring the outcome, a model can be built of the
various factors that make a difference to participant responses. These models can then be used
to infer the underlying cognitive processes involved. As a final step, bioxphi researchers can
appeal to these empirical models, in combination with background theoretical commitments,
including normative considerations, to advance a substantive argument about whether, when,
or to what extent participants’ moral judgments should be given prescriptive weight in reaching
bioethical conclusions.
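To make the logic of a CVT analysis concrete, the following is a minimal sketch in Python of how responses from a simple two-condition vignette study might be compared. The “consent” manipulation, the simulated 1–7 permissibility ratings, and the basic two-sample test are purely illustrative assumptions, not features of any actual bioxphi study; real studies typically involve multiple manipulated factors and more sophisticated statistical models (e.g., regression or mixed-effects analyses).

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-7 permissibility ratings: each participant sees only one of the
# two vignettes, which differ in a single respect (the manipulated factor).
data = pd.DataFrame({
    "condition": ["consent"] * 50 + ["no_consent"] * 50,
    "permissibility": np.concatenate([
        rng.integers(4, 8, 50),   # hypothetical ratings, consent vignette
        rng.integers(1, 5, 50),   # hypothetical ratings, no-consent vignette
    ]),
})

# Compare mean judgments across the two conditions: does the manipulated
# factor make a difference to participants' moral responses?
consent = data.loc[data["condition"] == "consent", "permissibility"]
no_consent = data.loc[data["condition"] == "no_consent", "permissibility"]
t_stat, p_value = stats.ttest_ind(consent, no_consent)

print(f"Mean (consent) = {consent.mean():.2f}, mean (no consent) = {no_consent.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")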
Bioxphi as a Normative Enterprise: Some Common Strategies
What are some of the most common strategies in bioxphi studies for reaching normative
conclusions from premises that include empirical information about how and why people think
as they do when making moral judgments, that is, empirical information about the underlying
cognitive processes (“how”) and eliciting factors (“why”) that shape such judgments? Four
broad approaches have recently been identified: parsimony, debunking, triangulation, and
pluralism (Earp et al., 2021). Some of these approaches overlap with strategies adopted by
empirical bioethicists (e.g., giving prima facie normative weight to the most consistent,
common, and robust judgments within the studied population or adopting a method of
reflective equilibrium) (see, e.g., Leget, Borry and De Vries, 2009; Davies, Ives and Dunn,
2015). Of course, these are not the only strategies that could feasibly be employed in bioxphi
studies. Rather, they are used here for illustration because they are among the most salient examples in the recent literature.
According to the parsimony strategy, widely shared, consistent, and robust moral judgments
among a relevant group of stakeholders should carry some normative weight in bioethical
argumentation (Earp et al., 2021; for examples, see Beverley and Beebe, 2018; Earp et al., in
press). Of course, simply identifying common and consistent moral responses and taking these
for granted without additional normative evaluation will typically not be sufficient for a
convincing argument. These responses might, after all, reflect some misunderstanding,
contradictory beliefs, inferential mistakes, bias, or prejudice. Thus, as DeGrazia and Millum
(2021) have recently noted, by investigating the consistency of stakeholder judgments across
different presentations of a case or providing evidence of the factors that bear on the normative
reliability of judgment-forming processes, psychological experiments might be considered a
new way of identifying Rawlsian “considered judgments” for the purposes of engaging in
reflective equilibrium (see the “triangulation” strategy below).
The parsimony strategy, however, does not (and, indeed, should not) reduce bioethical
conclusions and recommendations to a popularity contest (for a discussion, see Leget, Borry
and De Vries, 2009). The fact that a given moral judgment has been identified as being
consistently held within a certain population—and has even survived experimental tests for
normative (un)reliability—does not mean that it is the “all-things-considered” most reasonable
or most justifiable normative basis for action. For instance, the judgment may conflict with the
equally or more reliable judgments of other stakeholders, such as experienced moral
philosophers or bioethicists; or it may come into tension with other widely accepted normative
factors (including moral and legal norms, principles, and theories). In such cases, a reasonable
process of deliberation could well entail that the judgment should, despite its popularity, be
overruled, discounted, or outweighed in arriving at some conclusion. All that the parsimony
strategy entails is that the consistent, experimentally robust moral judgments of relevant
stakeholders should be accorded some (defeasible) normative weight. Effectively, it “puts the
burden of proof on those who would argue that no normative weight should be assigned to the
consistent judgments of relevant stakeholders about a given moral issue” (Earp et al., 2022, pp.
190-1).
As alluded to above, bioethical judgments sometimes rely on false information, prejudiced
attitudes, epistemological distortions, morally irrelevant factors (e.g., framing effects), or
faulty inferences (Greene et al., 2001; Singer, 2005; Wedgwood, 2007; Sinnott-Armstrong,
2008; Berker, 2009; Greene et al., 2009; Gino, Shu and Bazerman, 2010; Andow, 2016;
Machery, 2017; May, 2018; Sauer, 2018; DeGrazia and Millum, 2021; however, see Demaree-Cotton, 2016; Demaree-Cotton and Kahane, 2018 regarding framing effects). All else being
equal, such factors should typically weaken the normative weight assigned to such judgments
when reaching a bioethical conclusion (Wedgwood, 2007; Machery, 2017; Demaree-Cotton,
2019). At the extreme, a given judgment might be entirely “debunked”—that is, shown to be wholly unreliable for ethical guidance. A key motivation of bioxphi studies is to provide
evidence of factors that influence the normative reliability of stakeholders’ moral responses
(judgments, decisions, attitudes, intuitions, inferences, and so on).
The debunking strategy combines evidence against the normative reliability of a moral
response with a type of argument inspired by work in x-phi (Mukerji, 2019):
(P1) Judgment p is the output of a psychological process that possesses the
empirical property of being substantially influenced by factor F. (Empirical
premise)
(P2) If a judgment is the output of a psychological process that possesses the
empirical property of being substantially influenced by factor F, then it is pro tanto
unreliable. (Bridging normative premise)
(C) Judgment p is pro tanto unreliable.
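In schematic form, with F(x) read as “judgment x is the output of a psychological process substantially influenced by factor F” and U(x) as “judgment x is pro tanto unreliable,” the argument can be displayed as an instance of universal instantiation followed by modus ponens (this rendering is a notational gloss for clarity, not Mukerji’s own formulation):

\[
\frac{F(p) \qquad \forall x\,\bigl(F(x) \rightarrow U(x)\bigr)}{U(p)}
\]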
However, the scope of the debunking is necessarily conditional. After all, factor F in the
argument schema above may itself be contested: perhaps the bioxphi researcher views it as a
morally irrelevant factor whereas someone else sees it as a legitimate moral consideration (see,
e.g., Königs, 2020; DeGrazia and Millum, 2021). Take the following judgment adapted from
the findings of a bioxphi study conducted by Smith and Hegarty (2021): “Clitorectomies
violate human rights more when performed on non-intersex female infants than on infants with
intersex traits”. Although Smith and Hegarty do not explicitly attempt to debunk this judgment,
other work suggests that permissive attitudes toward intersex genital cutting are driven by such
factors as participant endorsement of heteronormativity and the gender binary, and
participants’ own heterosexual identification (Kingsbury and Hegarty, 2022). A politically
progressive theorist who sees heteronormativity or belief in the gender binary as ethically
misguided would thus likely regard such findings as supporting a debunking argument about
the aforementioned judgment regarding intersex vs. non-intersex female human rights. A
politically conservative theorist, by contrast, who sees both heteronormativity and the gender
binary as being scientifically and ethically justified, would not regard such findings as
debunking the judgment.
The issue of normative disagreement crops up in other ways. What happens, for example, when
there is a divergence in two or more sets of pro tanto reliable judgments among a given
population of relevant stakeholders (or between populations)? Indeed, how do ethical theories
and principles, the judgments of professional bioethicists, and those of, say, patients,
physicians, or the public relate to one another, and how can this information be integrated by
bioxphi researchers to draw well-founded normative conclusions? In bioxphi research, one way
of answering these questions involves adopting a triangulation strategy, one that is similar to
reflective equilibrium (Earp et al., 2021). According to this strategy:
Divergence among the judgments of various groups of experts and/or between
expert and lay judgments requires the following: adjusting, pruning, or
supplementing the normative conclusions derived from [one group’s] judgments
in order to accommodate: (1) the normative implications of the opposing views;
and (2) normative considerations derived from, for example, ethical or legal
principles, background theories, morally relevant facts, and/or the best arguments
for a normative position in the relevant expert literature (Earp et al., 2022, p. 189).
Of course, the mere fact that conflicting normative judgments exist does not immediately
necessitate a triangulation strategy. As we have seen, one of the benefits of bioxphi is that it
can employ experimental methodologies and argumentation strategies to investigate the pro
tanto reliability of these conflicting judgments. Thus, if the psychological processes outputting
one judgment are convincingly shown to be influenced by, for example, a morally irrelevant
or normatively distorting factor, while the psychological processes outputting another
judgment cannot be shown to be subject to such influence (despite comparable efforts), then
one of the conflicting judgments might appropriately be discounted or discarded on that basis
(i.e., debunking). Once conflicting moral judgments have survived various attempts to show that they are pro tanto unreliable, they can be employed as initially credible (i.e., “considered”) judgments for purposes of triangulation or in pursuit of reflective equilibrium. This will involve making trade-offs among the respective considered judgments, or adjusting the weights assigned to them, so as to revise normative conclusions (or ethical theories, concepts, or principles)
as coherence and mutual support seem to require (Earp et al., 2021; DeGrazia and Millum,
2021).
Alternatively, faced with a divergence, bioxphi studies may indicate that a given bioethical
concept or moral judgment is—even at the expert level—unclear or vague, or that it tends to generate confusion regarding one’s obligations. The purpose of the triangulation strategy would then be
to clarify a moral judgment or the concepts and inferences underlying that judgment. For
example, the concepts of consent and autonomy have tended to be conflated at law, with
statutory and common law applications of these concepts often running together the conditions
for consent and the conditions for autonomy (Lewis, 2021; Lewis and Holm, 2022; for a series
of bioxphi studies that provide evidence for this conceptual conflation, see Demaree-Cotton
and Sommers, 2022). One of the aims of the triangulation strategy could then be to resolve this
confusion by making explicit the respective functions, uses, and/or values of these two
concepts and thereby provide patients, physicians, legal professionals, and the public with
some form of contextual re-education.
In any case, it must be remembered that merely appealing to a divergence between sets of moral
judgments will be inadequate to deliver an “all-things-considered” normative conclusion or
recommendation. Although the triangulation approach is a useful starting point, adjusting,
pruning, or supplementing opposing judgments will, in many cases, also require engagement
with broader normative considerations, such as, for example, background theories, legal and
moral principles, morally relevant facts, and the like (i.e., “wide reflective equilibrium”)
(DeGrazia and Millum, 2021; Earp et al., 2021).
Finally, pluralism is an approach that does not seek to find one single normative answer
to an ethical question. Rather, it holds that in cases where various stakeholders have
“conflicting, yet pro tanto reliable, judgments or where multiple and independent communities
each reveal persistent disagreement between two or more conflicting, yet pro tanto reliable,
judgments, these judgments may all have comparable normative weight” (Earp et al., 2021, pp.
106-7).
Conclusion
Relative to its parent disciplines—empirical bioethics and x-phi—bioxphi is an emerging field,
one whose scope, in terms of its methods, functions, and applications to practice and policy, is yet to be established. This situation should be viewed positively. It affords those
interested in adopting experimental approaches to bioethics a level of creativity and freedom
to explore, test, and get to grips with what works and what doesn’t. At the same time, there are
challenges and unanswered questions facing this burgeoning sub-discipline: how and to what
extent can the methods and strategies of bioxphi be integrated with others in empirical
bioethics, philosophical bioethics, x-phi, cognitive science, and moral psychology? How do
we, in practice, draw upon experimental models of how and why people think about realistic
bioethical issues in order to develop concrete recommendations for clinical practice and health
policy? How do we, in practice, deal with the defeasible normative weight of seemingly reliable
judgments in order to deliver “all-things-considered” judgments? Does bioxphi have a specific
role to play in generating “all-things-considered” normative solutions and recommendations?
Of course, the field of bioethics in general is still attempting to grapple with some, if not all,
of these questions.
In this chapter, our characterization of bioxphi has been deliberately modest. Situating the field
in relation to empirical bioethics and x-phi, we have illustrated some of the ways in which
bioxphi has brought empirical data into the service of reaching normative conclusions that are
of significance to healthcare practice and policy, medical research, and emerging
biotechnologies. We have also explained how bioxphi, at least at this stage of its development, differs in important ways from empirical bioethics and x-phi.
We have argued that there is value in understanding not only what people think about bioethical
issues but also how and why they think as they do. In particular, the “hows” and “whys” will
often have practical normative significance for a range of bioethical situations and problems.
Bioxphi seeks to generate evidence and provide strategies for assessing such normative
significance, allowing us to better navigate the views of different stakeholders across the
relevant domains of medicine and healthcare.
References
Alfano M, Loeb D, Plakias A. (2022) Experimental moral philosophy. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2022/entries/experimental-moral/
Andow J. (2016) Reliable but not home free? What framing effects mean for moral intuitions.
Philosophical Psychology 29:904–911
Berker S. (2009) The normative insignificance of neuroscience. Philosophy & Public Affairs
37:293–329
Beverley J, Beebe J. (2018) Judgments of moral responsibility in tissue donation cases.
Bioethics 32:83–93
Davies R, Ives J, Dunn M. (2015) A systematic review of empirical bioethics methodologies.
BMC Medical Ethics 16(15):1–13
DeGrazia D, Millum J. (2021) A theory of bioethics. Cambridge: Cambridge University Press
Demaree-Cotton J. (2016) Do framing effects make moral intuitions unreliable? Philosophical
Psychology 29(1):1-22
Demaree-Cotton J. (2019) Analyzing debunking arguments in moral psychology: Beyond the
counterfactual analysis of influence by irrelevant factors. Behavioral and Brain
Sciences 42:E151
Demaree-Cotton J, Kahane G. (2018) The neuroscience of moral judgment. In: Timmons M,
Jones K, Zimmerman A (eds) Routledge handbook on moral epistemology. London:
Routledge, pp. 84-104
Demaree-Cotton J, Sommers R. (2022) Autonomy and the folk concept of valid
consent. Cognition 224:105065
Dranseika V, Hannikainen I, Bystranowski P, Earp BD, Tobia KP, Almeida G, Kneer K,
Struchiner N, Dolinina K, Janik B, Lauraityte E, Liefgreen A, Prochnicki M, Rosas A,
Strohmaier N, Żuradzki T. (unpublished manuscript) Personal identity, direction of
change, and the right to withdraw from research.
Earp BD. (2018) Psychedelic moral enhancement. Royal Institute of Philosophy Supplements
83:415–439
Earp BD. (2019) Introducing bioXphi. The New Experimental Philosophy Blog, February 8,
2019. https://xphiblog.com/introducing-bioxphi
Earp BD, Demaree-Cotton J, Dunn M, Dranseika V, Everett JAC, Feltz A, Geller G,
Hannikainen I, Jansen L, Knobe J, Kolak J, Latham S, Lerner A, May J, Mercurio M,
Mihailov E, Rodríguez-Arias D, Rodríguez López B, Savulescu J, Sheehan M,
Strohminger N, Sugarman J, Tabb K, Tobia KP. (2020) Experimental philosophical
bioethics. AJOB Empirical Bioethics 11(1):30–33
Earp BD, Latham S, Tobia KP. (2020) Personal transformation and advance directives: An
experimental bioethics approach. The American Journal of Bioethics 20(8):72–75
Earp BD, Lewis J, Dranseika V, Hannikainen I. (2021) Experimental philosophical bioethics
and normative inference. Theoretical Medicine and Bioethics 42(3–4):91–111
Earp BD, Lewis J, Skorburg J, Hannikainen I, Everett JAC. (2022) Experimental philosophical
bioethics of personal identity. In: Tobia KP (ed) Experimental philosophy of identity and
the self. London: Bloomsbury, pp. 183-202
Earp BD, Hannikainen I, Dale S, Latham S. (in press) Experimental philosophical bioethics,
advance directives, and the true self in dementia. In: De Block A, Hens K (eds.)
Experimental philosophy of medicine. London: Bloomsbury
Faber NS, Savulescu J, Douglas T. (2016) Why is cognitive enhancement deemed
unacceptable? The role of fairness, deservingness, and hollow achievements. Frontiers
in Psychology 7(232):1–12
Gino F, Shu L, Bazerman M. (2010) Nameless + harmless = blameless: When seemingly
irrelevant factors influence judgment of (un)ethical behavior. Organizational Behavior
and Human Decision Processes 111(2):93–101
Greene J, Sommerville R, Nystrom L, Darley J, Cohen J. (2001) An fMRI investigation of
emotional engagement in moral judgment. Science 293:2105–2108
Greene J, Cushman F, Stewart L, Lowenberg K, Nystrom L, Cohen J. (2009) Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition 111(3):364–371
Kingsbury H, Hegarty P. (2022) LGB+ and heterosexual-identified people produce similar
analogies to intersex but have different opinions about its medicalisation. Psychology &
Sexuality 13(3):535-549
Knobe J. (2016) Experimental philosophy is cognitive science. In: Sytsma J, Buckwalter W
(eds) A companion to experimental philosophy. Oxford: Wiley-Blackwell, pp. 37-52
Königs P. (2020) Experimental ethics, intuitions, and morally irrelevant factors. Philosophical
Studies 177:2605–2623
Kuehne M, Heimrath K, Heinze H, Zaehle T. (2015) Transcranial direct current stimulation of
the left dorsolateral prefrontal cortex shifts preference of moral judgments. PLOS ONE
10(5):e0127061
Leget C, Borry P, De Vries R. (2009) ‘Nobody tosses a dwarf!’ The relation between the
empirical and the normative reexamined. Bioethics 23(4):226-235
Lewis J. (2020) From x-phi to bioxphi: Lessons in conceptual analysis 2.0. AJOB Empirical
Bioethics 11(1):34–36
Lewis J. (2021) Safeguarding vulnerable autonomy? Situational vulnerability, the inherent
jurisdiction and insights from feminist philosophy. Medical Law Review 29(2):306-336
Lewis J, Holm S. (2022) Patient autonomy, clinical decision making, and the
phenomenological reduction. Medicine, Health Care and Philosophy (online ahead of
print):1-13. doi: 10.1007/s11019-022-10102-2
Machery E. (2017) Philosophy within its proper bounds. Oxford: Oxford University Press
May J. (2018) Regard for reason in the moral mind. Oxford: Oxford University Press
McMillan J. (2018) The methods of bioethics: An essay in meta-bioethics. Oxford: Oxford
University Press
Mihailov E, Hannikainen I, Earp BD. (2021) Advancing methods in empirical bioethics:
Bioxphi meets digital technologies. The American Journal of Bioethics 21(6):53–56
Mihailov E, López BR, Cova F, Hannikainen I. (2021) How pills undermine skills:
Moralization of cognitive enhancement and causal selection. Consciousness and
Cognition 91:103120
Mukerji N. (2019) Experimental philosophy: A critical study. London: Rowman and Littlefield
Nado J. (2021) Conceptual engineering via experimental philosophy. Inquiry 64:76–96
O’Neill E, Machery E. (2014) Experimental philosophy: What is it good for? In: Machery E,
O’Neill E (eds) Current controversies in experimental philosophy. London: Routledge,
pp. vii–xxix
Rachels J. (1975) Active and passive euthanasia. New England Journal of Medicine 292:78–
80
Reiner P. (2019) Experimental neuroethics. In: Nagel SK (ed) Shaping children: Ethical and
social questions that arise when enhancing the young. Cham: Springer, pp. 75-83.
Rodríguez-Arias D, Rodríguez López B, Monasterio-Astobiza A, Hannikainen I. (2020) How
do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting
descriptive and normative hypotheses. Bioethics 34:509–518
Sauer H. (2018) Debunking arguments in ethics. Cambridge: Cambridge University Press
Singer P. (2005) Ethics and intuitions. The Journal of Ethics 9:331–352
Sinnott-Armstrong W. (2008) Framing moral intuitions. In: Sinnott-Armstrong W (ed) Moral
psychology, vol. 2: The cognitive science of morality: Intuition and diversity. Cambridge:
MIT Press, pp. 47-76
Smith A, Hegarty P. (2021) An experimental philosophical bioethical study of how human
rights are applied to clitorectomy on infants identified as female and as intersex. Culture,
Health & Sexuality 23(4):548-563
Sommers R. (2020) Commonsense consent. Yale Law Journal 129:2232–2324
Walsh A. (2011) A moderate defence of the use of thought experiments in applied
ethics. Ethical Theory and Moral Practice 14(4):467-481
Wedgwood R. (2007) The nature of normativity. Oxford: Oxford University Press