Brian D. Earp
University of Oxford, Psychiatry; Director, Oxford-NUS Centre for Neuroethics & Society and HOPE (Hub at Oxford for Psychedelic Ethics)
Brian D. Earp, PhD is an Associate Professor of Biomedical Ethics within the Yong Loo Lin School of Medicine at the National University of Singapore (NUS) and, by courtesy, Associate Professor of Philosophy and of Psychology. Also a Research Associate of the Uehiro Oxford Institute of the University of Oxford, Brian directs the Oxford-NUS Centre for Neuroethics and Society, and is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center. In 2022, Brian was elected to the UK Young Academy under the auspices of the British Academy and the Royal Society.
Brian’s work is cross-disciplinary, following training in philosophy, cognitive science, experimental psychology, history and sociology of science and medicine, and ethics. A co-recipient of the 2018 Daniel M. Wegner Theoretical Innovation Prize from the Society for Personality and Social Psychology, Brian was also one of four named finalists for the 2020 John Maddox Prize for “standing up for science” (awarded by Nature). Brian is also recipient of both the Robert G. Crowder Prize in Psychology and the Ledyard Cogswell Award for Citizenship from Yale University, where, as an undergraduate, Brian was elected President of the Yale Philosophy Society and served as Editor-in-Chief of the Yale Philosophy Review.
Brian then conducted graduate research in psychological methods as a Henry Fellow of New College at the University of Oxford, followed by a degree in the history, philosophy, and sociology of science, technology, and medicine as a Cambridge Trust Scholar and Rausing Award recipient at Trinity College at the University of Cambridge. After spending a year in residence as the inaugural Presidential Scholar in Bioethics at The Hastings Center in Garrison, New York, Brian was appointed Benjamin Franklin Resident Graduate Fellow while completing a dual Ph.D. in philosophy and psychology at Yale University. Brian’s essays have been translated into Polish, German, Italian, Spanish, French, Portuguese, Japanese, and Hebrew.
Uploads
Artificial Intelligence by Brian D. Earp
The above quote from philosopher Will MacAskill captures the key tenets of “longtermism,” an ethical standpoint that places the onus on current generations to prevent AI-related—and other—X-Risks for the sake of people living in the future. Developing from an adjacent social movement commonly associated with utilitarian philosophy, “effective altruism,” longtermism has amassed a following of its own. Its supporters argue that preventing X-Risks is at least as morally significant as addressing current challenges like global poverty. However, critics are concerned that such a distant-future focus will sideline efforts to tackle the many pressing moral issues facing humanity now. Indeed, according to “strong” longtermism, future needs arguably should take precedence over present ones. In essence, the claim is that there is greater expected utility to allocating available resources to prevent human extinction in the future than there is to focusing on present lives, since doing so stands to benefit the incalculably large number of people in later generations who will far outweigh existing populations. Taken to the extreme, this view suggests it would be morally permissible, or even required, to actively neglect, harm, or destroy large swathes of humanity as it exists today if this would benefit or enable the existence of a sufficiently large number of future—that is, hypothetical or potential—people, a conclusion that strikes many critics as dangerous and absurd.
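The expected-utility reasoning behind strong longtermism can be made concrete with deliberately invented numbers. Nothing below is an estimate from the longtermist literature; it only illustrates how a tiny probability multiplied by a vast future population can swamp present-day benefits in an expected-value calculation.

```python
# Purely illustrative (made-up) figures for the strong-longtermist argument.
future_people = 1e16                # assumed number of potential future people
extinction_risk_reduction = 1e-9    # assumed reduction in extinction probability
lives_saved_now = 1e6               # assumed benefit of spending on present needs

# Expected future lives "saved" by the risk-reduction intervention
ev_future = future_people * extinction_risk_reduction
ev_present = lives_saved_now

# Roughly 1e7 vs 1e6: under these assumptions the future dominates,
# even though the probability change is minuscule.
print(ev_future, ev_present)
```

The critics' worry is visible in the arithmetic: by making `future_people` large enough, any present-day sacrifice can be outweighed on paper.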
Given the complex nature of consent in clinical research, which involves both written documentation (in the form of participant information sheets and informed consent forms) and in-person conversations with a researcher, the use of LLMs raises significant concerns about the adequacy of existing regulatory frameworks. Institutional Review Boards (IRBs) will need to consider substantial reforms to accommodate the integration of LLM-based consent processes. We explore five potential models for LLM implementation, ranging from supplementary roles to complete replacements of current consent processes, and offer recommendations for researchers and IRBs to navigate the ethical landscape. Thus, we aim to provide practical recommendations to facilitate the ethical introduction of LLM-based consent in research settings by considering factors such as participant understanding, information accuracy, human oversight, and types of LLM applications in clinical research consent.
1. Accuracy: The decision is the right one, where the “right” decision is that which best aligns with relevant justifying values, principles and their respective weights as they apply to the case at hand.
2. Transparency: The patients are provided with an explanation of the decision in terms of relevant values, principles and how they are weighed. In other words, the patients are offered reasons that explain and justify the decision.
For the use of artificial intelligence in clinical ethics to be ethically justified, it should improve the transparency and accuracy of ethical decision-making beyond that which physicians and ethics committees are currently capable of providing.
As a note to readers, this abstract was generated by AUTOGEN and edited for accuracy by the authors. The rest of the text was written manually.
In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine, concluding that at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.
General Bioethics by Brian D. Earp
But how do these and other principles apply to particular cases? What does it really mean for a person to give informed consent — and what goes into that process, psychologically? How do doctors actually think about harm and benefit, especially when there is disagreement about what constitutes a harm or benefit for a particular patient? What is the role of social context in shaping these kinds of judgements? When policymakers decide about fair distribution of resources, what factors influence their intuitions about what justice demands? And how do proxy decision makers make sense of respect for persons when personhood is not entirely clear, as in the case of fetuses, or individuals with advanced dementia?
Although bioethicists have occasionally drawn on empirical data to supplement normative bioethical analysis, the emerging field of experimental philosophical bioethics — or bioXphi — seeks to systematically characterize the underlying cognitive processes that bear on moral judgments in a healthcare context. We see this work as having serious significance for medical policy and clinical judgment: generalized research on psychological processes may not apply to real-world decision making in the kinds of life-or-death situations that doctors often face. And formalized models of informed consent may have little to do with the facts on the ground when it comes to factors that influence a patient’s decision to give permission for a surgery.
Method: In an independent, pre-registered, direct replication and extension study with open data and materials (https://osf.io/t73c4/), we showed participants the same video from Cohen et al. (2014), with the child described as a boy or a girl depending on condition. We then asked adults to rate how much pain the child experienced and displayed, how typical the child was in these respects, and how much they agreed with explicit gender stereotypes concerning pain response in boys versus girls.
Results: Similar to Cohen et al. (2014), but with a larger and more demographically diverse sample, we found that the ‘boy’ was rated as experiencing more pain than the ‘girl’ despite identical clinical circumstances and identical pain behavior across conditions. Controlling for explicit gender stereotypes eliminated the effect.
Conclusions: Explicit gender stereotypes—e.g., that boys are more stoic or girls are more emotive—may bias adult assessment of children’s pain.
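The analytic move described here (a gender effect on pain ratings that disappears once explicit stereotype endorsement is entered as a covariate) can be illustrated with fully synthetic data. The data-generating numbers below are invented for the sketch and bear no relation to the study's actual estimates; they simply build in a rating driven entirely by stereotype endorsement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
boy = rng.integers(0, 2, n)                     # condition: 0 = 'girl', 1 = 'boy'
stereotype = 2.0 * boy + rng.normal(0, 1, n)    # synthetic stereotype endorsement
pain = 1.5 * stereotype + rng.normal(0, 1, n)   # rating driven only by stereotype

def ols_coef(predictors, y):
    """Least-squares coefficients with an intercept; index 1 is the first predictor."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

raw = ols_coef([boy], pain)[1]                   # gender effect, no covariate
adjusted = ols_coef([boy, stereotype], pain)[1]  # gender effect, stereotype controlled

# The raw gender gap is large; the adjusted one collapses toward zero.
print(round(raw, 2), round(adjusted, 2))
```

In this synthetic setup the unadjusted gender coefficient is substantial, while adding the stereotype covariate drives it to roughly zero, mirroring the pattern of a fully stereotype-mediated effect.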
“Try to pose for yourself this task: not to think of a polar bear, and you will see that the cursed thing will come to mind every minute.” According to a recent research paper from Oxford’s Department of Experimental Psychology conducted by the author and his colleagues, Dostoevsky’s observation on ironic thought processes has public health implications for no-smoking signs. [Research subsequently published as Earp, B. D., Dill, B., Harris, J., Ackerman, J. M., & Bargh, J. A. (2013). No sign of quitting: Incidental exposure to “no smoking” signs ironically boosts cigarette-approach tendencies in smokers. Journal of Applied Social Psychology, 43(10), 2158–2162.]
Ethicists Brian D. Earp and Julian Savulescu say that the time to think through such questions is now. Biochemical interventions into love and relationships are not some far-off speculation. Our most intimate connections are already being influenced by drugs we ingest for other purposes. Controlled studies are underway to see whether artificial brain chemicals can enhance couples therapy. And conservative religious groups are experimenting with certain medications to quash romantic desires—and even the urge to masturbate—among children and vulnerable sexual minorities. Simply put, the horse has bolted. Where it runs is up to us. Love Drugs arms us with the latest scientific knowledge and a set of ethical tools that we can use to decide if these sorts of medications should be a part of our society. Or whether a chemical romance will be right for us.
This controversial and timely new book argues that recent medical advances have brought chemical control of our romantic lives well within our grasp. Substances affecting love and relationships, whether prescribed by doctors or even illicitly administered, are not some far-off speculation - indeed our most intimate connections are already being influenced by pills we take for other purposes, such as antidepressants.
Treatments involving certain psychoactive substances, including MDMA-the active ingredient in Ecstasy-might soon exist to encourage feelings of love and help ordinary couples work through relationship difficulties. Others may ease a breakup or soothe feelings of rejection. Such substances could have transformative implications for how we think about and experience love.
This brilliant intervention into the debate builds a case for conducting further research into "love drugs" and "anti-love drugs" and explores their ethical implications for individuals and society. Rich in anecdotal evidence and case-studies, the book offers a highly readable insight into a cutting-edge field of medical research that could have profound effects on us all.
Will relationships be the same in the future? Will we still marry? It may be up to you to decide whether you want a chemical romance.
In Love Drugs: The Chemical Future of Relationships, co-authors Brian D. Earp, an expert in health policy and ethics at institutions including Yale University, and Julian Savulescu, director of the Uehiro Centre for Practical Ethics at the U.K.'s University of Oxford, explore how both legal and currently illicit substances could be used to improve our relationship with our emotional state.
The book comes amid what is known as the psychedelic renaissance, as researchers around the world investigate the potential benefits of using psychedelic drugs in controlled medical settings to treat mental disorders like depression, anxiety and PTSD.
Newsweek spoke to Earp about the future and ethics of toying with love and drugs.
Methods: The proposed selection protocol uses multiple inclusion and exclusion criteria for replication study selection, including: the year of publication and citation rankings, research disciplines, study types, the research question and key dependent variable, study methods and feasibility. Studies selected for replication will be stratified into pools based on instrumentation and expertise required, and will then be allocated to volunteer laboratories for replication. Replication outcomes will be assessed using a multiple inferential strategy and descriptive information will be reported regarding the final number of included and excluded studies, and original author responses to requests for raw data.
(1) What did the Reproducibility Project really show, and in what specific sense can the follow-up studies meaningfully be described as “failures to replicate” the original findings? I argue that, contrary to what many are suggesting, very little can be learned about the validity of the original studies based upon a single (apparent) failure to replicate: instead, multiple replications (of sufficient quality) of each contested experiment would be needed before any strong conclusions could be drawn about the appropriate degree of confidence to be placed in the original findings.
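The argument that a single apparent failure to replicate licenses only a weak update, while multiple failures are far more informative, can be put in simple Bayesian terms. The replication success probabilities below are illustrative assumptions, not empirical values from the replication literature.

```python
def posterior_real(prior, successes, failures, p_rep_real=0.8, p_rep_null=0.05):
    """Posterior probability that the original effect is real, given observed
    replication successes and failures. All parameters are assumed for illustration:
    p_rep_real = chance a sound replication succeeds if the effect is real,
    p_rep_null = chance it 'succeeds' spuriously if the effect is not real."""
    like_real = (p_rep_real ** successes) * ((1 - p_rep_real) ** failures)
    like_null = (p_rep_null ** successes) * ((1 - p_rep_null) ** failures)
    numerator = prior * like_real
    return numerator / (numerator + (1 - prior) * like_null)

# Starting from a 50/50 prior on the original finding:
one_failure = posterior_real(0.5, successes=0, failures=1)
five_failures = posterior_real(0.5, successes=0, failures=5)
print(round(one_failure, 3), round(five_failures, 3))
```

Under these assumptions a single failed replication still leaves a non-trivial probability that the original effect is real, whereas five independent failures are nearly decisive, which is the paper's point that multiple replications of sufficient quality are needed before strong conclusions are warranted.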
(2) Is psychology in crisis or not? And if so, what kind of crisis? I tease apart two senses of crisis here. The first sense is “crisis of confidence,” which is a descriptive or sociological claim referring to the notion that many people, within the profession and without, are, as a matter of fact, experiencing a profound and, in some ways, unprecedented lack of confidence in the validity of the published literature. Whether these people are justified in feeling this way is a separate but related question, and the answer depends on a number of factors, to be discussed. The second sense of “crisis” is “crisis of process” – i.e., the notion that (due in large part to apparent failures to replicate a substantial portion of previously published findings), psychological science is “fundamentally broken,” or perhaps not even a “true” science at all. This notion would be based on the assumption that most or perhaps even all of the findings in a professionally published literature should “hold up” when they are replicated, in order for a discipline to be a “true” science, or not to be in a state of “crisis” in this second sense. But this assumption, I will argue, is erroneous: failures of various sorts in science, including bona fide failures to replicate published results, are often the wellspring of important discoveries and other innovations. Therefore, (apparent) replication failure, even on a wide scale, is no evidence that science is broken, per se. Nevertheless,
(3) This does not mean that there is not substantial room for serious, even radical improvements to be made in the conduct of psychological science. These issues must not be brushed under the rug. Even holding the replication debate aside, that is, we have many at least partially independent reasons to push for deep changes in contemporary research and publication norms. Problems that need urgently to be addressed include: publication bias against “negative” results, the related “file drawer” problem, sloppy statistics and lack of adequate statistical training among many scientists, small sample sizes, inefficient and arbitrary peer review, and so on.
To gain greater clarity on the demographic patterns and frequency of UIEs, we conducted the first national survey on UIEs. Data from this survey suggest that UIEs may occur under a broader range of circumstances than addressed by most law and policy. The survey found nearly identical rates of affirmative responses from males and females when asked whether they had received a UIE within the past five years. The survey results also showed evidence of racial disparity. Additional research is needed to understand the nature of UIEs.
improved understanding to address pressing medical and social problems. Scientific findings, however, can only justifiably inform theory or applied problems if they are at minimum internally and externally provisionally trustworthy. Internal trustworthiness is gauged by quantifying the analytic reproducibility and robustness of a study's results; external trustworthiness is gauged by quantifying the replicability and generalizability of published effects and phenomena. The following paper outlines a unified curation framework to quantify the reproducibility, robustness, replicability, and generalizability of scientific findings, categorically addressing all forms of researcher and publication bias. Five major challenges are addressed by the proposed framework: (1) a standardized workflow and principled metric to quantify the analytic reproducibility and robustness of reported results from primary, auxiliary, and secondary analyses; (2) a flexible workflow and replication taxonomy to categorize (i) sufficiently methodologically similar replications that can speak to replicability and (ii) eligible generalizations of an original effect that can speak to generalizability; (3) a principled meta-analytic approach to synthesizing replicability and generalizability evidence; (4) accounting for variations in study characteristics of replications and generalizations; and (5) a viable crowd-sourced web platform to allow the community of scientists to quantify the provisional trustworthiness of published findings on an incremental and ongoing basis. Ultimately, the framework will accelerate investigations into the validity of such trustworthy findings, and consequently accelerate our understanding of the world and development of applied solutions to societal problems.
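The meta-analytic synthesis step named in (3), pooling an original estimate with its replications, takes its simplest fixed-effect form as an inverse-variance weighted average. The effect sizes and standard errors below are hypothetical inputs chosen only to demonstrate the computation.

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance-weighted pooled effect and pooled standard error:
    a minimal fixed-effect meta-analysis of an original study plus replications."""
    weights = [1.0 / se ** 2 for se in ses]              # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))            # SE of the pooled estimate
    return pooled, pooled_se

# Hypothetical standardized effects: one original study and three replications
effects = [0.45, 0.12, 0.08, 0.20]
ses = [0.20, 0.10, 0.12, 0.15]
pooled, pooled_se = fixed_effect_meta(effects, ses)
print(round(pooled, 3), round(pooled_se, 3))
```

Because the more precise replications carry larger weights, the pooled estimate sits well below the original study's effect, which is the sense in which synthesized replication evidence can revise confidence in a published finding.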