The Power of Negative Thinking

Gaining ground in the ongoing struggle to coax researchers to share negative results

Glenn Begley was stymied. At the drug giant Amgen, where Begley was vice-president and global head of hematology and oncology research, he was struggling to repeat an animal study of tumor growth by a respected oncologist, published in Cancer Cell. One figure stood out as particularly impressive. But it was proving stubbornly resistant to replication.

In March of 2011, Begley saw a chance to play detective. At the annual meeting of the American Association for Cancer Research in Orlando, he and a colleague invited the paper's senior author out to breakfast. Across his orange juice and oatmeal, Begley floated the question: Why couldn't his group get the same finding as the oncologist?

The oncologist, whom Begley declines to name, had an easy explanation. "He said, 'We did this experiment a dozen times, got this answer once, and that's the one we decided to publish.'"

Begley was aghast. "I thought I'd completely misheard him," he says, thinking back on the encounter. "It was disbelief, just disbelief."

As it turned out, the respected oncologist was in good company. A year later, Begley and Lee Ellis, a surgical oncologist at MD Anderson Cancer Center in Houston, published a commentary in Nature about lax standards in preclinical research in cancer. They shared that Amgen scientists couldn't replicate 47 of 53 landmark cancer studies in animals, including the respected oncologist's. (Last year, Begley left Amgen to become chief scientific officer for TetraLogic Pharmaceuticals, a small company based in Malvern, Pennsylvania.)

Striking as it is, Begley and Ellis's exposé is part of a pattern. In fields from clinical medicine to psychology, studies are showing that the literature is filled with papers that present results as stronger than they actually are, and rarely report negative outcomes.

What makes it so difficult to portray negative results straight up? Along with the drive to prove one's theories right, there is a wide perception that negative studies are "useless, indeed a failure," says Douglas Altman, a statistician at the University of Oxford in the United Kingdom. Journals, too, often like big, exciting, positive findings, he says. As a result, negative results are often relegated to the dustbin, a problem commonly called the file drawer effect. Or, by cherry-picking data or spinning the conclusions, a paper may make a negative study seem more positive. But those practices are being challenged.

In some fields, there's increasing recognition that leaving negative findings unpublished "is not the right thing to do," says David Allison, director of the Nutrition Obesity Research Center at the University of Alabama, Birmingham, where he studies methodologies in obesity research. Furthermore, as journals move online, where pages are limitless, and as more pledge to treat positive and negative studies equally in the review process, the venues for publishing null findings are expanding.

Still, many researchers and journals continue to cast results as a story that they believe others will want to read. They choose their words carefully, describing, say, an obesity increase as "alarming" rather than modest, Allison suggests. Or investigators dig through mounds of data in the hunt for shiny nuggets. Is there some gold in these hills? "We know if you sift through the data enough," Allison says, "you'll find things," even if by chance.

A matter of emphasis

Many who study negative results have focused on clinical trials, in part because biased results can directly affect patients, and because a published trial can often be compared with the original goals and protocol, which these days is often publicly available. That makes it easier to see whether a published study accentuates the positive.

Altman and clinical epidemiologist Isabelle Boutron, working together at Oxford about 5 years ago, set out to explore these effects. They and their colleagues culled 616 randomized clinical trials from a public database, all of them published in December 2006, and focused on 72 whose primary goals hadn't panned out. The group hunted for spin in the papers, defining it as the use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome.
In 13 papers, the title contained spin, as did 49 abstracts and 21 results sections, Altman and Boutron reported in 2010 in the Journal of the American Medical Association. Some authors compared patients before and after treatment, instead of comparing those who got the experimental drug and those who didn't. In one case, authors contrasted the treated patients not with their comparison group in the same trial, but with a placebo group from another study, to argue that the treatment was superior.

Boutron says the authors of these papers were upfront about the fact that their study's original goal hadn't been met. Still, often they tried to spin what had happened, says Boutron, now at Université Paris Descartes.

She acknowledges that in clinical research, negative results can be difficult to interpret. If a well-designed trial finds that a new drug eases symptoms, it probably does. But finding the opposite doesn't necessarily mean the therapy is hopeless. When a therapy's superiority isn't statistically significant, it might still help, or the difference could be due to chance. "You won't be able to provide a black-and-white answer," says Boutron, who is now struggling to write up a negative trial herself.

It's not just clinical research. So-called soft sciences, such as psychology, carry an even greater risk of biased reporting, according to Daniele Fanelli, who studies bias and misconduct at the University of Montreal in Canada. In August, Fanelli and epidemiologist John Ioannidis at Stanford University added an intriguing twist: They found behavioral research was more likely to claim positive results if the corresponding author was based in the United States.

To probe why researchers spin negative studies, Altman, medical statistician Paula Williamson at the University of Liverpool, and others decided to ask them. They identified 268 clinical trials in which there were suspicions of selective reporting. Investigators from 59 agreed to be interviewed.

The researchers offered diverse explanations for why they failed to report data on an outcome they'd previously pledged to examine. "It was just uninteresting and we thought it confusing so we left it out," said one in the paper by Williamson's group, published in 2011 in BMJ. Another said, "When I take a look at the data I see what best advances the story, and if you include too much data the reader doesn't get the actual important message."

At the American Journal of Clinical Nutrition (AJCN), Editor-in-Chief Dennis Bier encounters this routinely. A study may be designed to measure whether an intervention will lead to weight loss, say, but the manuscript focuses on blood pressure or cholesterol, originally identified as secondary endpoints but now taking on a starring role. "The weight loss outcome may occur in one line in table 3," Bier says. For the most part, "these are not people trying to be duplicitous," he says. "They're trying to get as much as they can out of their data." Still, he says that he and his fellow editors spend "an awful lot of time trying to get things out of papers, and getting the caveats in."

Scientists rarely admit that these practices result in biased reports. Several whose work was cited in studies of bias did not respond to Science's requests for comment, or insisted that their work was statistically sound. After examining papers positing that reducing sugary drinks consumption decreases obesity, Allison, who takes funding from the soda industry, flagged several papers for distortion of information. The senior author of one, Barry Popkin of the University of North Carolina, Chapel Hill, School of Public Health, agreed that the abstract overstated the case, suggesting that swapping water or diet drinks for calorie-heavy beverages caused weight loss when that outcome was not statistically significant. Popkin blamed the journal, AJCN, for the sentence. "The journal process is an imperfect process," he told Science. "That's what the editors wanted to put in there."

Bier agreed that the change had been urged by the journal, but only because, he said, the original version overstated the result to an even greater degree.

Telling it like it is

Forest ecologist Sean Thomas of the University of Toronto in Canada and his Ph.D. student spent 2 years traveling to and from the rainforests of Dominica testing whether, as long suspected, trees need more light the larger they grow. His results were lackluster, but not his publication strategy.

In the end, the relationship between light requirements and tree growth was flimsy, Thomas found, but as with many negative results, he couldn't say for sure that the thesis was wrong. The Ph.D. student, despairing, switched topics. "But I was bound and determined to publish," Thomas says.

"We thought about ways of positively spinning things," he confesses. Then he hit on another solution: submitting the paper to the Journal of Negative Results: Ecology & Evolutionary Biology. A group of postdocs at the University of Helsinki conceived of the journal in the early 2000s and launched it in 2004. Its publication record is paltry; it gets only two or three submissions a year. "It could hardly get any lower unless they were publishing nothing," Thomas says.

Undeterred, he sent his paper in, and it appeared in 2011. "We didn't have to package things in a way that might pull out some kind of marginal result," Thomas says. "This was an honest way of presenting the information that would also make it clear what the story was from the beginning."

Are studies that tell it like it is the exception to the rule, or the cusp of a new trend? Most agree that journals for negative results (another exists for biomedical studies) are not a comprehensive solution. Some mainstream journals are getting in on the act: A study presented last month at the International Congress on Peer Review and Biomedical Publication examined how eight medical journals had handled recently submitted papers of clinical trials and found they were just as likely to publish those with negative results as positive ones.

Allison believes that making all raw data public could minimize bias in reporting, as authors face the fact that others will parse their primary work. A more lasting solution may come from a shift in norms: embracing and sharing null findings without regret or shame. "The only way out of this," Fanelli says, "is that people report their studies saying exactly what they did."

Openness has its rewards. Although Thomas's Ph.D. student, Adam Martin, adopted a new thesis topic after the Dominican rainforest flop, as Martin dives into the job market Thomas is urging him to launch his job talk with that story. "It's an unusual enough thing," Thomas says, "that you'll stand out in a crowded field."

JENNIFER COUZIN-FRANKEL

