
Mental shortcuts (unabridged)

Hastings Center Report

Brian D. Earp
Visiting Scholar, The Hastings Center Bioethics Research Institute
Garrison, New York, USA

Author's personal copy. Published manuscript. Please cite as: Earp, B. D. (2016). Mental shortcuts [unabridged version]. Hastings Center Report, Vol. 46, No. 2, inside front cover. Available at https://www.researchgate.net/publication/292148550_Mental_shortcuts_unabridged

** Please note: this is the unabridged version of the essay, before editing for length by the journal staff. For the shorter, formally published version, please visit: http://www.thehastingscenter.org/Publications/HCR/Detail.aspx?id=7812

Abstract

In this brief “field note,” I discuss some of the ways in which scientific-sounding rhetoric and other mental shortcuts in medicine (and other fields) can pose a threat to critical thinking, and thereby undermine our reasoning when it comes to issues of ethical importance.

Key words: rhetoric, jargon, evidence, evidence-based medicine, peer review

Field Note

I’m snowed in at the Hastings Center, looking out the window at the Hudson (past the famous tree, beyond a blanket of white), and I’m thinking a lot about words.

At a recent lunchtime talk about the legal and ethical implications of reproductive technologies, the term “eugenics” was brought up. The implication was that if something was “eugenics,” then it was clearly wrong—end of story. But as one of the Research Scholars pointed out, that kind of rhetoric is just too easy. It becomes a stand-in for an ethical argument, and allows us to skip the actual thinking. Why is eugenics bad? Under what circumstances? What kind of eugenics are we even talking about? Surely the Nazi model is not the only game in town. For example, is there a sense in which simple mate choice could be a form of eugenics? Now, I won’t attempt to answer these questions here; that would be fruitless. My point is only that we should remember to ask them.

I see a similar thing going on in science and medicine. Suggestive phrases are used instead of step-by-step arguments. Statistical rituals are performed, and their outputs mindlessly quoted, instead of meaningful analyses of either the theoretical or real-world implications of a study’s (apparent) results (see, e.g., Fidler et al., 2004; Gigerenzer, 1998, 2004). And scientific-sounding jargon is woven into the discussion sections of papers, instead of plain-language, common-sense evaluations of the quality—or even the relevance—of the given evidence (see, e.g., Hirst, 2003, and Earp, 2015a, for examples and further discussion).

I don’t want to overstate my case. There is a lot of good work out there. But when philosophers, ethicists, and others turn to the empirical literature to bolster their arguments, they should be careful to keep in mind the ways in which scientistic bombast (to concoct an expression) can lull our critical faculties to sleep.

Let me give you a couple of examples. I’ve often seen researchers underline the fact that they are citing “peer reviewed” articles from “leading” journals (in support of some contention), or perhaps the “official” policy statement of some well-known organization. I admit that I have used such rhetoric myself. Of course, as a general rule, it is probably better to cite the Lancet or the U.S. Centers for Disease Control and Prevention (CDC) than it is to rely on Wikipedia, but the apparent prestige of a source does not guarantee its value. As former BMJ editor Richard Smith, for example, has argued, peer review is on the whole a depressingly unreliable quality-control mechanism (see, e.g., Smith, 1997, 2006, 2010); and “leading” journals publish nonsense all the time (Ioannidis, 2005; Fang & Casadevall, 2011). Moreover, “official” policy statements are often written by stressed-out working groups whose members are not immune from getting even basic things wrong (see Earp, 2015b, for a related discussion).

Or take the term “evidence-based.” It does seem preferable that a viewpoint, policy, or whatever, should be “based” in at least some kind of “evidence” (as opposed to, say, arbitrarily asserted), but the term is often used in vacuous ways. Presumably, it is the quality of the evidence that is most important, as well as the specific way in which it supports (or does not support) the particular agenda being advanced. Calling something “evidence-based,” then, at least without sufficient elaboration, is essentially to emit white noise.

Other examples abound: labeling a finding “high quality” simply because it comes from an (in principle) relatively strong study design, such as a randomized controlled trial, without examining the specific materials used in the study to make sure that they were fit for purpose (see Earp, 2015a); denigrating a resource as “non-peer-reviewed” (cf. above) without showing that its content is actually invalid; and saying that something has been “proven” or “shown” without discussing the limitations of the cited research. In fact, “proven” is almost always too strong a term outside of logic or mathematics—and should therefore be treated with suspicion when it’s applied to findings in, e.g., medicine—and there are almost always limitations worthy of being addressed (see Ioannidis, 2007).

Rhetoric can be persuasive, in a good way; it can also lead us astray and keep us from thinking things through. In medical ethics, as in any other discipline, it’s important to be wary of the latter.

References

Earp, B. D. (2015a). Sex and circumcision. The American Journal of Bioethics, 15(2), 43-45.

Earp, B. D. (2015b). Do the benefits of male circumcision outweigh the risks? A critique of the proposed CDC guidelines. Frontiers in Pediatrics, 3(18), 1-6.

Fang, F. C., & Casadevall, A. (2011). Retracted science and the retraction index. Infection and Immunity, 79(10), 3855-3859.

Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can’t make them think: Statistical reform lessons from medicine. Psychological Science, 15(2), 119-126.

Gigerenzer, G. (2004). Mindless statistics. The Journal of Socio-Economics, 33(5), 587-606.

Gigerenzer, G. (1998). We need statistical thinking, not statistical rituals. Behavioral and Brain Sciences, 21(2), 199-200.

Hirst, R. (2003). Scientific jargon, good and bad. Journal of Technical Writing and Communication, 33(3), 201-229.

Ioannidis, J. P. (2007). Limitations are not properly acknowledged in the scientific literature. Journal of Clinical Epidemiology, 60(4), 324-329.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Smith, R. (2010). Classical peer review: an empty gun. Breast Cancer Research, 12(Suppl 4), S13.

Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178-182.

Smith, R. (1997). Peer review: reform or revolution? BMJ: British Medical Journal, 315(7111), 759.