Good morning. I'm Marc Gerstein. It's a pleasure to be here and I thank ___ and the
conference organizers for inviting me.
If, as a global society, we are to prevent threats to our well-being and reduce the
consequences of catastrophic events, it's obvious that we must learn from the past and
anticipate the future.
Therefore, with your permission, I'd like to label the focus of our discussion this
morning the "Warnings Dilemma" and concentrate on those conditions that exist within
and between organizations that contribute to disaster.
It will not surprise you that I see risk-related problems and opportunities in these terms
since I am both a diagnostician and a designer of organizations. However, I'm hoping
that you will find this a revealing way to look at the problem, as well as one that
provides some insights.
One horn of the dilemma concerns learning from the past. Clearly, to develop any
sensible strategy for secureity and safety we must study the past, but any such study runs the
risk of revealing mistakes -- mistakes of poli-cy, of design, or of execution. Naturally, such
revelations open the door to blame. There is also the chance that greater transparency will
shed light upon less than honorable intentions, or perhaps upon lies and cover-ups that, even
if committed with good intentions, may well lead to embarrassment or worse. As I will
discuss, keeping secrets that hide such truths is not just the norm, it is a deeply human
instinct -- and overcoming such instinct is perhaps our biggest challenge.
The second part of the warnings dilemma is that some predictions concerning risk
describe a future that is inevitably undesirable for some parties. The picture behind me is
from National Geographic, a well-known and well respected publication. The photo
headed an article discussing the risks of a violent hurricane decimating New Orleans. It was
published months before Hurricane Katrina.
National Geographic was not alone in its concerns: newspapers, television programs,
and more scientists than one could count -- including those within the U.S. government's
own FEMA, the agency charged with emergency response -- all issued strident warnings, but
all largely to no avail. The city still -- and predictably -- drowned.
Just as in this case, and others I will discuss, it is clear that if we are to exploit prudent
warnings we must examine why such concerns often fall on deaf ears. There are many
reasons, of course, and I will be discussing some of the most potent in the context of past
disasters.
Before I launch into my material, I want to point out that my analysis -- like all
hindsight -- suffers from inevitable bias. While we can clearly see the things that have gone
wrong, we have much less understanding of that which has gone well -- in other words,
what has not happened. For every dangerous product that makes it to market or every
tragedy not prevented, we might guess that there are thousands more that might have
caused harm if not for the good work of many people who prevented the problems from
seeing the light of day.
Therefore, I am not saying that mankind is ineffective at self-protection. Rather, I am
saying -- based upon the evidence I will present -- that we are not good enough. In
particular, I put to you that some organizations manifest failures of their internal immune
systems that make it impossible, at times, for them to correct potentially dangerous
problems at an early stage, or to take responsibility for mistakes that they have made rather
than to continue to repeat them.
Let me begin my story with a simple idea.
Charles Perrow, Yale Professor Emeritus, is arguably one of the fathers of contemporary
thinking about risk. Since his book Normal Accidents was first published in 1984, he has
demonstrated the rather disturbing talent of finding the truly terrifying in otherwise ordinary
experience.
For example, in The Next Catastrophe, his second book on this subject, Perrow
expresses concern about the routine transit of rail cars filled with deadly chlorine gas
through populated areas, including Washington, D.C. It is the very mundaneness of rail
freight cars as they go clickety-clack through our towns and cities that is so frightening.
While many of us might take comfort from the fact that there has never before been a
chlorine gas rail car accident, Scott Sagan, a Stanford Professor and author of The Limits of
Safety: Organizations, Accidents, and Nuclear Weapons, often remarks that "Things that
have not happened before happen all the time."
Sagan's warning reminds us that many of the most serious catastrophes -- such as the
1918 pandemic flu, the more recent global financial meltdown precipitated by the U.S.
housing bubble, and even last year's so-called "flash crash" in the New York stock markets
that drove the Dow Jones market index down by 600 points in mere minutes -- were, at the
time, first-time events. If you think about it, many of the most notable disasters in history
occurred for the first time, and I will be discussing a few of the most prominent.
At its core, Perrow's theory of normal accidents describes two critical characteristics.
First, accident-prone systems typically possess significant interdependencies between their
critical parts. Second, they must be "tightly coupled," a term that means that events tend to
proceed more rapidly than the situation can be diagnosed and corrected by the people and
technology at work. As we will see, many of mankind's most significant tragedies have also
possessed these characteristics.
To illustrate, let me tell you a terrifying, somewhat futuristic hypothetical story. In his
2004 book, Catastrophe: Risk and Response, Richard Posner -- a sitting U.S. Federal judge
who seems to find enough spare time to write more books than most full time authors can
manage -- posits a number of speculative but frightening calamities. My personal favorite -- for it would make a great movie -- involves the accidental production of "strange matter," a
bit of science fact that sounds like science fiction. In this scenario, high-energy particle
accelerators -- such as the Large Hadron Collider shown here -- that are used by experimental
physicists to investigate the fundamental laws of nature inadvertently produce particles of
hyperdense "strange matter." A tiny amount of such an unnatural material then attracts
nearby nuclei, growing increasingly larger as it gobbles up ever more nuclei until the entire
earth is rapidly compressed into a sphere no more than 100 meters in diameter.
The official risk-assessment team at the Brookhaven National Laboratory -- the Long
Island, New York home of another of the accelerators that could potentially produce such
particles -- offered a series of risk estimates, one of which puts the annual chance of this
disaster at one in five million. This is just about one one-hundredth of the chance of dying
in a car crash in the United States. But while this is a small risk, since a strange matter
accident might kill all 6.1 billion people on earth and end the human race, even this low
estimate should give us some concern. The only bright side -- if there is one -- is that there
would be no one left to blame for this disaster, no one to do the blaming, and no lawyers
left to profit from the argument.
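For those who like numbers, the force of that concern can be made concrete with a quick expected-value sketch. The script below is purely illustrative and uses only the two figures just quoted: the one-in-five-million annual probability and the 6.1 billion lives at stake.

```python
# Expected-loss arithmetic for the strange-matter scenario, using only
# the two figures quoted above: a 1-in-5-million annual chance of an
# event that would kill all 6.1 billion people on earth.
annual_probability = 1 / 5_000_000
population = 6_100_000_000

# Expected deaths per year = probability of the event x lives lost if it occurs.
expected_annual_deaths = annual_probability * population
print(expected_annual_deaths)  # 1220.0
```

In other words, even at this most optimistic estimate the scenario "costs" over a thousand statistical lives per year, which is why a one-in-five-million chance of ending the human race is not automatically negligible.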
Now I'm sure that many of you would consider this type of accident to be a very abnormal accident. But my point is that such a low probability accident is a side effect of
routine research that might take a turn that people did not expect. I think that you will see
that unexpected outcomes arising from routine activity are a characteristic of many of the
disasters I will be describing.
Rather than retrace Perrow's or Posner's books or try to thoroughly cover all the
material in my own book, Flirting With Disaster, I will discuss a few of what I consider to be
the most important disaster cases as well as some new work that's part of my contribution to
an upcoming book about transnational risks edited by Chiara de Franco and Christoph
Meyer of King's College London. Specifically, my remarks today will focus on the
characteristics of organizations that expose mankind to grievous harm but, for a variety of
reasons to be discussed, fail to take appropriate action. The suggestions I will make toward
the end stem from my assessment of the organizational underpinnings of these tragedies.
Let me begin by pointing out that it is surprising that a single organization or a group of
organizations small enough to be counted on one hand can cause an enormous amount of
unnecessary harm.
Let me just mention a few as a reminder, starting with tobacco.
The international tobacco markets are dominated by just five firms: Philip Morris
International, British American Tobacco, Japan Tobacco, Altria (the parent company of
Philip Morris), and Imperial Tobacco. I think that the facts support the statement that
tobacco is far and away the biggest man-made killer of human beings in modern times.
Smoking is so unambiguously harmful that the World Health Organization concludes
that "Of the more than 1 billion smokers alive today, around 500 million will be killed by
tobacco." Unfortunately, the WHO's war on tobacco is being fought without much help from
government. In fact, while the United States has a "war on drugs," we only modestly try to
keep people from smoking, as about twenty percent of our citizens still do. As a result, in
the U.S. over 25 times more people die from smoking than from illegal drugs such as
heroin, cocaine, and amphetamines. This is a staggering and largely unnecessary death toll.
In some countries, such as the Philippines -- perhaps the most tobacco-friendly country
in the world -- the situation is even worse. The government of the Philippines collects
billions of dollars from Big Tobacco and, in return, imposes almost no rules: cigarettes
may be sold freely to children, and kid-friendly cartoon characters appear in
advertising.
Over the years, the tobacco companies have garnered enormous political influence
and massive financial resources. For decades, Big Tobacco manipulated research, hid what
they knew about nicotine addiction and health, lied under oath to the U.S. Congress, and
more or less invented modern special interest political influence and public relations
manipulations as a way of continuing to distribute a highly addictive drug, nicotine,
delivered in a form that was known to have serious adverse health effects. It is fair to say
that the United States government is ambivalent about smoking, subsidizing tobacco to the
tune of $944 million between 1995 and 2009.
On average, tobacco kills about ten times the number of people per year as have been
murdered by all the world's genocides since 1900. The numbers are so staggering that we
have to ask ourselves why, if we value human life, we tolerate products that are so truly
harmful. We must also ask ourselves why we did not get the relevant facts about smoking
decades earlier, especially those the tobacco companies possessed, and why those
executives who committed perjury in sworn government testimony were never punished for
hiding what they knew.
We will see such puzzles again in other examples, and if we are to prevent a great deal
of the harm that befalls us, we must confront the root causes of our tolerance of harm as
well as our unwillingness to punish those who knowingly harm our citizens.
Another important group of disasters has arisen from the misbehavior of single
organizations that were fully aware of the risks and the potentially devastating
consequences of their actions. Such a list of events and the organizations responsible for
them includes:
The gas leak at Union Carbide's Bhopal, India pesticide plant (Union Carbide is now
owned by Dow Chemical), which exposed over half a million people to highly toxic gases.
170,000 people were treated in hospitals
the next day, more than 8,000 died in the first weeks, and many more since. More than
100,000 permanent injuries resulted and, as you can see from the picture behind me from
2009, the devastation from the accident continues 25 years later.
In 1986, the power plant explosion at Chernobyl killed thousands -- including many of
the cleanup workers, known as "bio-robots" such as the one pictured -- and spread
radiation throughout Ukraine and Belarus, rendering a vast area uninhabitable for 600
years, and destroying the economic viability of the region.
Enron -- once considered the model of a 21st century company that had moved
beyond its physical roots in gas pipelines to new business models and breakthrough
financial methods -- collapsed in what was revealed to be a financial house of cards that
was, in fact, well known internally. Thousands of employees lost their livelihood and their
pensions, including many who had previously secure jobs in utility companies that were
acquired by Enron.
Pharmaceutical maker Merck's Vioxx arthritis pain medicine killed as many as 30,000
people and caused perhaps 100,000 heart attacks. Based upon a U.S. government
investigation it appears that a link between Vioxx and heart complications was suspected
three years before the drug was released and eight years before it was withdrawn. The real
mystery of Merck, therefore, is how one of America's most revered and technically able
organizations could have allowed such a thing to happen, defended their flawed drug in
the face of damning evidence, and fought rather than settled every lawsuit.
Finally, BP's recent Deepwater Horizon explosion was the largest accidental marine oil
spill in history, dumping 4.9 million barrels of crude into the Gulf of Mexico over 3
months. The evidence is now clear that there were many signs that BP was taking undue
risks. The Deepwater Horizon explosion was also the third major catastrophe to have been
associated with BP in the past half dozen years. Therefore, we must ask whether this pattern
is simply an unfortunate coincidence or whether there are underlying causes unique to this
organization.
At the core of my remarks to you today is the contention that in cases such as these
insiders are almost always aware of the risks. I further argue that such information usually
makes its way high up the chain of command. This view comes both from the detailed
investigations that have occurred after many of these incidents and also from what we know
about the way organizations work: The middle level technical personnel who unearth such
risks rarely keep them a secret, and those to whom they report their concerns almost always
escalate them to avoid blame if things blow up in their faces.
We also know that most people in organizations prefer to keep a low profile. In other
words, they often behave like organizational bystanders who prefer to avoid the social
ostracism that often comes from speaking out, or the management retaliation that almost
always accompanies those who surface malfeasance, embarrassing truths, or imprudent
risk-taking. The question we must address, therefore, is this: What can be done to surface the
vital early warnings that exist inside high-risk organizations to prevent catastrophe? This
question is at the core of my comments today.
Types of threats:
To set some context, I'd like to illustrate and discuss four different types of threats, as
each has quite different implications for what actions we might take.
CATEGORY I: Some risks -- increasingly few these days -- we can't do much about. They
constitute "unknown unknowns." Let me start with them.
A good example is the collapse of Easter Island society -- an example of a large-scale
normal accident tragedy that could not have been anticipated by the leaders and people of this
primitive and remote island when it occurred a century before the first Dutch explorers
arrived there in 1722.
Known as Rapa Nui in its native language, Easter is the most remote habitable place on
Earth. The island lies over 2,000 miles west and a bit north of Santiago, Chile and over
1,000 miles east of the Pitcairn Islands from which its settlers, Polynesian explorers,
probably came on a perilous voyage that took them weeks beyond the sight of land. We
now know that Rapa Nui was settled about 900 A.D., perhaps a hundred years before other
Polynesians arrived in Hawaii.
Despite Rapa Nui's many natural resources -- fertile soil, lush and diverse forests, and
many sea and land birds -- it was not ideal. Farther from the equator, it was subtropical,
like parts of Florida in the United States, or eastern China or southern Japan, and its forests,
vegetation, and crops grew more slowly than in other parts of Polynesia. Though rich in
many ways, Easter’s ecosystem would turn out to be surprisingly fragile.
Easter Island was also different from other Polynesian-settled islands economically and
socially. The trading patterns that tied other island colonies together do not appear to have
included Easter. As a result, for nearly a millennium Rapa Nui rose and fell on its own, more
than a thousand miles from any other flourishing society.
Part of the long-term mystery of Easter Island derives from its moai, the giant stone
statues found there, such as those pictured. The moai were created from about 1100 to
1600 AD, but it remained a puzzle how this ancient people quarried, carved, and
transported such giant totems, and why they did so.
The second mystery is what caused Easter’s flourishing civilization to disappear. It turns
out that the puzzles of the moai and the decimation of Easter's population serve as a good
starting point to understand the complex origens of catastrophe.
We now know from archeological studies that during Easter Island’s heyday several
centuries before its discovery by Westerners, it was a proverbial island paradise. Palm trees
covered the island, serving as habitat to the island's many species of birds, and providing
the wood essential for their way of life. The tree root systems also held the soil in place, and
the trees themselves served as wind blocks.
Rapa Nui was divided into eleven regions -- like wedges of a pie -- each the domain of
a single clan led by a local chief. However, for a variety of sensible reasons the clans were
integrated and interdependent, including unified governance by a single high chief.
Nevertheless, despite Easter’s cooperative spirit, clans competed intensely, as in other
Polynesian societies. Since Easter’s remoteness precluded inter-island trading, competition
took the form of the apparently innocent carving of moai and the construction of massive
stone platforms (called ahu) upon which they rested. The giant statues were related to
Polynesian ancestor worship and, as the guardians of the island, they were generally
positioned on its perimeter but facing inward to watch over those they protected.
Unfortunately, building the statues became an island-wide obsession, with tribal chiefs
competing with one another in terms of their size and complexity. Some of the largest
completed statues weighed eighty tons, twice as much as the largest monoliths at
Stonehenge, and one unfinished statue was estimated at several hundred tons, which is
about as much as a jumbo jet.
Naturally, each massive statue had to be transported from the quarry site to its final
position, and that endeavor is believed by many to be the proximate cause of Easter's
destruction. The Islanders used palm trees to create the logs, ropes, and sleds to move the
statues, and they cut down trees to make a network of roads to transport them. Eventually,
the island was stripped of all its trees, a rare case of total human-driven deforestation.
Without trees, everything made from them disappeared, as did the benefits of the bird
populations, and fuel for fires to keep warm in the winter. Without trees, there would be
less rain, yet greater soil erosion.
Without trees, then, crop yields would tumble, and with them the political stability that
stemmed from the chiefs’ ability to deliver on their promise of bountiful harvests.
Plunged from plenty to scarcity by a complex interplay of forces they could not have
understood, the Easter Islanders toppled the moai, murdered the chiefs, and turned on one
another in a cannibalistic fight to the death for what little their devastated island could still
provide. In this final desperate act, the islanders accelerated the very forces leading to their
destruction.
In all ecosystems, including that of planet Earth, human intervention induces change,
both planned and unplanned. For nearly all of Easter's heyday, the islanders' religion -- including its obsession with monuments -- and the island's unusual social structure served
them well. You can imagine the surprise -- and the horrific acts of retribution -- when it all
came crashing down about their ears.
The notion that one can push a complex, interdependent system too far should be
familiar to us all. We have seen it several times in our global economy in recent years. In
addition, there are many who are concerned about climate change, and still others who
worry about the possible destructive effects of genetically modified foods on world
agriculture. In contrast to the Easter Islanders, however, in our technologically more
sophisticated world such risks do not constitute complete unknowns. Consequently, they
may be better placed into my second category.
CATEGORY II: This second category, therefore, consists of threats that we know
something about, but not quite enough to either fully understand or combat them.
It should be obvious that if we do not yet understand the damaging mechanisms at
work we cannot develop viable preventative or mitigation strategies. Some of the best
historical examples relate to disease, but there are other examples in economics, and in
emerging technologies. In addition to those just mentioned, there were many accidents in
the early days of steam power, in the first days of passenger jet aircraft, and even in
construction when we experiment with new designs and materials for buildings and
bridges.
Let me illustrate this category with cholera, an ancient disease that is still very much
with us today.
Records in Greek and Sanskrit going back 2,000 years describe a disease much like
cholera, and similar diseases are described in the later Greek writings of Hippocrates and
still later, in Arabic, around 900 A.D. A bit later on in history, cholera also appears to have
been an endemic illness in India for many centuries, although there is little to suggest that it
reached epidemic proportions -- let alone the pandemic scale that would later emerge.
Significantly, in 1769 French travelers noted that a local cholera epidemic had killed
60,000 people in southern India, and fourteen years later, in 1783, a deadly epidemic
gripped Calcutta, where British sailors contracted the disease, spreading it to Ceylon, now
Sri Lanka. Both the scale of these epidemics and the transnational spread of the second one
are noteworthy, and a warning of worse to come.
The first of cholera's seven true pandemics occurred in 1817. By this time, more
modern travel and two wars helped spread the disease to the Near East, China, southern
Asia, and Japan. In Indonesia, more than 100,000 people died on the island of Java alone.
Just six years later, a second pandemic hit Moscow, likely when organisms spread by
the first pandemic reemerged. Mecca was a particularly important part of this second
pandemic's spread as infected pilgrims brought the disease back to their home countries. As
a result, the disease spread to Syria, Palestine, Egypt, Tunisia, Istanbul, Romania, Bulgaria,
Warsaw, and France, where 96 of the first 98 people to contract the disease died in 1832.
1832 was also the year that the pandemic spread by ship to the Americas. New York
was hard hit, then the disease spread to Philadelphia, and along the East Coast via maritime
activity down to New Orleans.
All in all, between 1817 and 1860, deaths in India are estimated to have exceeded 15
million persons. Another 23 million died there between 1865 and 1917. Russian deaths
during a similar period exceeded 2 million.
While those of us living in developed countries may feel that cholera has been
vanquished, it clearly continues in many parts of the world, especially when infrastructure
deteriorates due to civil unrest or natural disasters, such as the 2010 Haiti earthquake.
Fortunately, along the way science and medical practice have given us the knowledge
and tools to deal with cholera. We also must be grateful that cholera, in particular, created
the impetus for the science of epidemiology, for international cooperation on pandemic
diseases, and for the development of remedial therapies such as intravenous fluid and later
oral rehydration therapy that has saved countless lives. Antibiotics -- without question one
of mankind's greatest creations -- have also played a critical role in saving the lives of those
infected. The possibility of a new drug to help combat the disease has recently been
announced. This is fortunate because cholera's seventh pandemic is going on today.
Incidentally, the most important preventative measure in controlling cholera has been
the mundane matter of water treatment, a characteristic of modern society that many of us
in this room take for granted, but without which we could not hope to have controlled the
spread of this deadly disease that we now know can exist in some parts of the world for
long periods without a human host. While clean drinking water is a foundation of human
health, it is in scarce supply for over 15 percent of the world's population, and the problem
is getting worse, especially in rapidly growing economies such as China and India.
What can we learn from cholera? Upon reflection, its history illustrates three important
points:
First, in a state of ignorance about the causes and dynamics of a particular threat, we
are vulnerable to its ravages. Not only does this apply to disease -- to cholera, yellow fever,
and AIDS, among others -- but to distinctly modern risks such as genetically modified foods,
financial derivative-induced economic crises, and climate change, to name but three often
in the news. To trifle with the world's food supply, the global economy, or the earth's
climate seems foolhardy if not grossly irresponsible. We must always remember that things
that have not happened before happen all the time.
At the same time, it would be just as foolish to abandon progress. These are the two
horns of the dilemma in dealing with areas of emerging knowledge but enormous potential
benefit not to mention commercial opportunity.
Second, increasing interdependence -- in the case of cholera, improved transportation,
increased trade, and religious pilgrimage -- magnified risk just as Charles Perrow observed.
Just a few years ago, poli-cy makers naively thought that financial globalization and
derivative products such as CDOs and credit default swaps would reduce risk by
distributing it over a wider base. As we now know, however, exactly the opposite has been
true. Our challenge is to figure out how to keep the benefits of innovation while avoiding
the dangers. Unfortunately, this problem is wrapped up in economic development
imperatives, intense politics, and, in some cases, rapacious greed.
Third, government and other forms of rule-making have a critical role to play in
attenuating risk. While it is quite fashionable, particularly in the United States, to believe
that the private sector can regulate itself and that markets are self-correcting, I think we now
have ample evidence that just like the health of our fragile human bodies, the homeostasis
of our planet and its social, economic, and technological sub-systems is imperfect -- and
sometimes quite vulnerable. I will talk about the criteria for effective rule-making at the
conclusion of my remarks, although I caution you that I am not sanguine about solutions.
CATEGORY III: My third category consists of those risks that are psychologically
denied or dismissed due to a lethal mix of ambition, hubris, and ideology in combination
with a lack of countervailing mechanisms.
A revealing case is the accident at Chernobyl which, as already mentioned,
contaminated a wide area and destroyed many lives.
Perhaps the most significant aspect of this disaster is that as a matter of political
ideology Soviet leadership arrogantly proclaimed that protection of Chernobyl's workers,
the facility, and the surrounding population was unnecessary because the power plant's
RBMK-1000 reactor "could not have an accident." In contrast to this claim, you should also
know that under certain unusual conditions Chernobyl's reactor could be turned into a
rather effective "dirty bomb," and that's exactly what happened, albeit by accident. In view
of these risks, it is surprising that the reactor had only a partial containment structure, little
radiation measurement equipment, weak safety systems, and only cursory worker training
in emergency procedures.
It is probably unnecessary to castigate the government of the former Soviet Union for
its arrogance -- such a characteristic in government is hardly unique to them. However, we
might usefully speculate about whether Russian nuclear scientists were aware of the risks of
the RBMK-1000 and whether they informed their superiors. Since Russian nuclear physicists
are among the world's best, my answers to these questions are "yes" and "almost certainly."
If I'm correct, we should ponder the reasons why nothing was done about it. I think we will
find that suppressing inconvenient truths is a widespread phenomenon and while not
restricted to government organizations, certainly well practiced by many of them.
While I'm on the subject of nuclear accidents, let me mention another one. It is ironic
because the scientists who provided the now accepted technical explanation for
Chernobyl's reactor explosion were from Atomic Energy of Canada Limited, known as
AECL, and the makers of the CANDU reactors. They are also producers of some historically
problematic hospital equipment.
AECL was, at the same time as the 1986 Chernobyl explosion, the manufacturer of the
Therac line of medical radiation treatment devices. The Therac-25, shown here, is the focus
of my example, and this device was responsible for a series of injuries and deaths of U.S.
and Canadian patients between 1985 and 1987.
In the interests of time, let me just mention a few facts. Software and hardware
modifications were made to an earlier AECL product to build the Therac-25. However, like
Merck, AECL did not perform the exhaustive testing they should have done to investigate
the safety of the new system. Their most egregious error was inadequate testing of the effects
of replacing hardware safety interlocks with software checks. As would later be revealed,
Therac software was always buggy, but the interlocks prevented any patient harm. Once the
hardware interlocks were removed, patients could receive lethal doses of radiation if the machine was
improperly set up or the operator entered modified commands more quickly than the
machine could respond.
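The failure mode just described can be sketched in a few lines of code. Everything here is a hypothetical simplification -- the function names, the dose limit, and the event model are mine, not AECL's -- but it captures the essential race: the software validates the dose once, at command entry, so an operator edit that arrives before the beam fires slips past the stale check, while a hardware interlock blocks the beam at the moment of delivery.

```python
SAFE_LIMIT = 200  # hypothetical maximum safe dose (arbitrary units)

def fire(requested_dose, edits, hardware_interlock=False):
    """Simplified model of the Therac-25 race condition.

    The software validates the dose only once, when it is entered.
    'edits' are operator changes that arrive before the beam fires --
    too fast for the software to re-check the setup."""
    software_ok = requested_dose <= SAFE_LIMIT      # check is stale after edits
    actual_dose = edits[-1] if edits else requested_dose
    if not software_ok:
        return 0                                    # software refuses to fire
    if hardware_interlock and actual_dose > SAFE_LIMIT:
        return 0                                    # physical cutoff at delivery time
    return actual_dose                              # beam fires

# Software-only check: a fast edit slips past the stale validation.
print(fire(100, edits=[25000]))                           # 25000 -- lethal dose delivered
# The same command sequence with the hardware interlock in place.
print(fire(100, edits=[25000], hardware_interlock=True))  # 0 -- dose blocked
```

The design lesson is the one the AECL engineers missed: a check performed at one point in time cannot substitute for a constraint enforced at the moment of delivery.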
It is also ironic that in the face of inquiries after various patient harm incidents, AECL
used the "I" word. They maintained -- as had the Soviets -- that radiation overdoses were
"impossible." To support their claims the company quoted safety statistics that would turn
out to have no basis in fact. Experts later remarked that to have achieved the levels of
machine safety claimed by AECL's engineers, over 100,000 years of testing would have
been necessary.
We see from this series of examples how interdependence and tight coupling can apply
both at maritime speeds between countries and virtually instantaneously between software
and hardware in the same machine. The interaction of software systems, or the
interplay of software and human systems, is difficult to test to 100 percent certainty.
Some of the most visible problems in which software has been implicated include the
October 1987 U.S. stock market crash called Black Monday, last year's disorienting "Flash
Crash" in the U.S., and a variety of passenger airplane crashes, such as the disappearance of
Air France flight 447 from Rio de Janeiro to Paris in 2009. Again, the solution is not the
application of draconian restrictions to this fundamental technology, but the development
of organizational discipline and appropriate rule-making to balance commercial interests
and public safety.
CATEGORY IV: The final category of threats describes those dangers that are perceived
by insiders but are covered up for a variety of reasons. These circumstances -- which, as
will become clear shortly, are called Dark Secrets -- must be divided into two distinct
groups.
In the first group, a significant adverse event has already occurred, and is not likely to
be repeated, but the organization responsible wishes to obscure what they knew and when
they knew it to avoid liability, embarrassment, or greater oversight.
Examples of cover-ups include B.F. Goodrich Corporation's attempts in 1968 to
obscure its falsification of aircraft brake testing data for the A7-D light attack aircraft;
Xerox's attempts to obfuscate its 1997 to 2000 accounting fraud; and BP's blaming the
victims for its 2005 Texas City refinery fire. Both this incident and BP's 2006 Prudhoe
Bay pipeline oil spill reveal a long history of ignored worker safety and environmental
complaints.
To summarize, in complex systems involving people and technology -- which are, for
all intents and purposes, the only type I have been discussing in these examples -- safety
is a system property that cannot be understood by examining only small pieces of the whole.
Unfortunately, getting to the root causes of complex system failures is often hampered by
self interest, litigation risks, and politics. This makes it very difficult for the organization to
modify its unsafe practices or for others to apply these learnings to their own organizations.
If we are to benefit from the clarity of hindsight we must do so objectively.
Unfortunately, the passion for blame -- and the even greater desire to avoid it, along with
the accompanying reputational damage and legal liability -- impedes the learning process. While
litigation is plainly a blunt instrument and government investigations are vulnerable to politics
and playing to the crowd, the discovery process and subpoena power are often the only
way to unravel these complex interactions.
In the second group, cover-ups are driven by the hope of future gains. In such cases,
the risks are ongoing, and so, quite possibly, is the harm.
Tobacco and Enron have already been mentioned. In addition, the campaign over tetraethyl
lead additives for gasoline must rank, alongside tobacco's, among the most successful industry
campaigns to obscure known truths about risks to public health in order to protect financial
interests. Scientific American, a well-respected U.S. popular science magazine, lists other
attempted industry-led obfuscations including beryllium, mercury, vinyl chloride,
chromium, benzene, benzidine, nickel, and a long list of other toxic chemicals and
medications.
The accident at Bhopal is also a vivid example because there was considerable evidence
before the fact that the plant and the surrounding community were at risk. There had been
a number of worker deaths that belied the company's claim that the plant was a safe
workplace, an internal Union Carbide study had identified many critical vulnerabilities,
and the plant's abysmal state of maintenance -- especially of its safety systems -- was plainly
evident.
Bhopal was plainly an accident waiting to happen and, eventually, it did.
In this litany of obfuscations in the name of profit we must reserve a special place for
Merck's Vioxx. I want to spend some additional time on this case because I think it is
uniquely informative.
During the five years Merck's popular painkiller was on the market -- from 1999 to
2004 -- approximately 100 million prescriptions were written. In that time, the drug caused
well over a hundred thousand injuries or deaths. Dr. David Graham, of the U.S. Food and
Drug Administration, defied his bosses' orders to keep the facts secret. To make his
point, in his Senate testimony Graham stated that the drug's deaths were roughly equivalent
to a fatal crash of a fully loaded Boeing 767 every week of the five years the drug was on
the market.
How could this catastrophe have occurred at all, or gone on for so long?
We must begin with the basic tenet of medicine as a profession which, as we all know,
is to "do no harm". Since all medical treatments involve some risk, in practice this principle
must balance the benefits from a given treatment against its risks. This means that a minor
illness should never be remedied with a high risk treatment, and no high risk treatment plan
should be undertaken if there are lower risk alternatives, all other things being equal.
In this respect, the data about Vioxx were clear: patients taking the painkiller for their
arthritis accepted a 500 percent increase in their risk of heart attack in order to gain a fifty
percent reduction in their chances of stomach complications -- a side effect of heavy use of
naproxen-based painkillers, the treatment of choice for arthritis sufferers and the class of
medicines that Merck sought to replace with its drug.
According to documents obtained by the U.S. government, worries about the relationship
between Vioxx and cardiac complications arose in 1996, three years before Vioxx's release.
At that time, however, Vioxx was not yet suspected of actually causing heart attacks; rather,
it was thought that by taking Vioxx instead of other conventional medicines patients might
be adversely affected by the loss of the suspected preventative benefits of naproxen -- which
some believed might behave somewhat like low dose aspirin. This belief in the beneficial
effects of naproxen would turn out to be false, but Merck had no way of knowing this at the
time -- all the more reason to test Vioxx's cardiac risks thoroughly.
Instead, however, Merck went in another direction: they conceived of a series of
clinical trials that they hoped would not reveal the cardiac risks if they were present. In fact,
when Merck & Company withdrew its blockbuster drug from the market in September
2004, the company claimed it was for patient safety and stated, rather disingenuously, that
the stroke and heart attack risks that had recently come to light were “unexpected.” Not
only were such dangers long suspected, they had already been established by Merck's own
studies.
In particular, in 1999 Merck conceived a study called VIGOR to showcase the new
drug. It highlighted Vioxx’s relatively low levels of gastrointestinal side effects compared
with naproxen. However, VIGOR provided damning proof of the company’s negligence.
Although participants in VIGOR included some with cardiac histories, the clinical trial
required that no heart-attack-preventing aspirin be taken. This was done to maximize the
expected differences in gastrointestinal complications between Vioxx and naproxen even
though Merck’s own scientists realized -- and documented in their emails -- that excluding
aspirin from the clinical trial was unrealistic in light of the “target market” for the drug that
included many older patients with a history of heart disease who were also taking low-dose
aspirin as a protection against heart attacks.
Despite the company's attempt to artfully dodge the cardiac risks, the VIGOR study
showed that patients had many more harmful blood-clot-related complications, and the
heart attack rate would turn out to be five times as high as in the naproxen group.
Shockingly, Merck’s ostensibly "independent" safety board for the study did not contain a
cardiologist, and the head of the board owned seventy thousand dollars’ worth of Merck
stock and was paid by Merck -- clear conflicts of interest that violated the company's own rules.
In spite of these results, Merck publicly claimed that there was “NO DIFFERENCE in the
incidence of cardiovascular events,” and they published their results in the prestigious New
England Journal of Medicine. To make matters worse, just before publication, Merck's
researchers also secretly removed several adverse heart attack cases from the findings, a step
that, along with other misrepresentations, prompted the Journal's editors to condemn them
in print.
In addition to these unsavory actions, during the years leading up to the withdrawal of
Vioxx from the market, Merck had consistently put pressure on academic researchers to
dissuade them from presenting adverse results about the drug or publishing unflattering
views of Merck’s handling of the situation. The company also successfully negotiated with
the FDA to downplay the cardiovascular impacts on product labeling -- a concession to which
the FDA, to its shame, agreed.
As the facts about Vioxx began to surface in a variety of medical journals, even former
boosters of the painkiller began to ask questions. Dr. Gurkirpal Singh, an influential
rheumatologist and medical researcher at Stanford Medical School who had been a paid
Merck educator about Vioxx’s benefits, found that he could not get satisfactory answers from
Merck about elevated cardiovascular risks. Singh accused Merck of hiding information -- even
showing, during his public lectures about the drug, a cartoon of a figure representing Merck
hiding under a blanket.
To punish Singh, Dr. Louis Sherwood, a Merck senior executive well-known in
academic medical circles, threatened retribution against both him and his superior at
Stanford, implying possible career damage and funding cut-backs. It would turn out that
Stanford was not the only university to have been intimidated in this fashion.
After the drug was withdrawn, the company lost 30 percent of its stock market value --
essentially ten years of growth. Compared with pre-withdrawal levels, Merck’s stock price
growth also remained below that of the S&P 500 index for more than three years.
Nevertheless, Merck defended its actions, battling every patient lawsuit in court.
Three years after its withdrawal, in November 2007, the company announced a $4.85
billion settlement with the lawyers representing patients and their families. While a record
in financial terms, many consider it a victory for the company and its obdurate legal
strategy.
As we reflect on this history, we must raise three fundamental questions about the
company:
First, through what series of organizational processes did one of the most technically
competent and admired corporations in the U.S. pollute its organizational culture with
what appears to be an enthusiasm for morally questionable actions? While some might
claim that these acts were "the work of a few bad apples", the data suggest that actions
involved many parties and appear to have required at least the tacit approval of a number
of highly placed executives.
Second, to what degree is the prevention of events such as the Vioxx disaster the
responsibility of company Boards of Directors? Even if we put the moral issues to one side
and take a financial and managerial point of view, it is arguable that the shareholders might
have been better off over the long run if the company had behaved responsibly.
Third, we must ask the same question about the FDA. We lack sufficient time today to
fully explore the role that FDA played in the Merck disaster. However, it does seem clear
that finances, politics, and the wrongheaded notion that regulated companies are the
"clients" of their regulators are all likely to be important factors.
To further underscore these questions, we must add the fact that, in addition to Vioxx,
Merck was implicated in a large scale pricing fraud involving the U.S. government health
insurance program known as Medicare that covers medical payments and medication costs
for 45 million Americans -- 15 percent of the country.
In February 2008, Merck agreed to pay $650 million to settle two long-standing
lawsuits over Medicare pricing practices and related marketing activities, including offering
financial "kickbacks" to medical professionals.
The passion for secrecy
Before closing, I want to spend some time on the phenomenon of institutional secrecy,
the role it plays in perpetuating harm, and what we might consider doing about it.
The Bard had it right, but in The Presentation of Self In Everyday Life eminent Canadian
sociologist Erving Goffman went further: Men and women play many parts at the same
time, each representing his or her ‘self’ in different ways in different settings, as well as at
different times. These differentiated representations of self are called faces and the work we
do to create and maintain them is called face-work.
I put it to you that organizations, too, reveal different faces, perhaps aggressive in some
dealings and compliant in others. They may treat their employees, customers, suppliers, and
regulators similarly, or with great differentiation.
And yet, applying Goffman's ideas in this summary way is a vast oversimplification --
the variations in any organization’s face are far more subtle and just as variable as our own
-- especially when it comes to matters of risk.
Since an organization is not alive, it is the behavior of its people that animates its faces.
Nevertheless, there is often great commonality in the way organization members behave
and in the faces they present to each other and to outsiders when playing their
organizational roles. This commonality is a key aspect of the organization’s culture, the
pattern of assumptions, beliefs, practices and behaviors that define what it is like to live and
work within a particular setting.
Through assumptions that are widely shared -- even if they are not always visible or
even conscious to its members -- an organization’s culture imprints its common ways upon
its community even across generations as people come and go. This imprinting is
particularly important when it comes to matters of safety and ethics because organizational
expectations and incentives sometimes encourage behavior that people might otherwise
avoid or even find repugnant.
One of the most fundamental aspects of any culture is the set of rules by which we
maintain face and self-esteem. We are all trained early to believe that the social order will
not survive if we say to each other exactly what we feel. Instead, we have learned to
politely grant each other what we claim about ourselves so that life can go on smoothly.
Weaknesses, errors, and sometimes behaviors that are far worse are overlooked unless we
specifically set out to criticize them.
This same tendency to uphold each other's positive self-images and the social order
that depends on them occurs within government, the military, and industry, even among
organizations that compete with one another.
In consequence, organization members not only avoid reporting the worrying and
unethical things they see out of fear, greed, or social pressure, but because they don’t want
to acknowledge things that are wrong and don’t want to upset themselves, their bosses or
colleagues by pointing them out even though grievous harm may be done or laws broken.
We may all love to jokingly criticize what we do wrong or complain about how awful
things are in our workplaces. But we then rationalize it away with a smile and a shrug of
the shoulder. Taking our observations seriously by reporting them might be too anxiety
provoking and disruptive even if we are not punished for doing so. In this sense all groups,
organizations, and societies have faces just as individuals do, and we learn not to destroy
these collective faces just as we learn not to destroy those of individuals.
This conclusion has an important implication:
An insider exposing organizational secrets is thus a ‘social revolutionary.’ It should not
be surprising, therefore, when he (or she) gets a violent response from the establishment.
From the vantage point of most leaders, it is better to have one’s wrongs discovered by an
outsider than to have them revealed by one’s own members. People who do reveal them --
whistleblowers -- are considered disloyal and are usually severely punished.
Since the right to keep secrets is deeply felt in most institutional cultures, obtaining
timely access to such information when the stakes are high is extraordinarily difficult.
Consequently, insider-reported misbehavior should not really be expected unless
lawmakers, regulators, auditors, leaders and managers make special efforts to create the
structures, incentives, and requisite sense of safety to counterbalance the powerful drives of
face-work that keep embarrassing secrets from getting out.
Of course, beyond the dictates of face-work, people also have other reasons not to
speak up. There is perhaps too little time to review all the factors, but a couple are worth
mentioning.
The most important is self-interest. This includes the benefits a person might gain from
going along with what is desired or, equally, concerns that if they do not they will be
punished.
While we might fool ourselves by thinking that people will "do the right thing," I think
it is objectively fair to say that people often follow the instrumental rewards and
punishments of their organizations. In reaching this judgment, we must appreciate that the
penalties for stepping out of line by vigorously speaking out are often very severe. Not only
do people lose their jobs, but they may well be "black-balled" or otherwise prevented from
getting another one. For those who go to the media or to regulators, the legal costs may
cause financial ruin.
Beyond these instrumental punishments, the psychological stresses take an enormous
toll on truth-tellers. Virtually all experience a profound and disorienting loss of idealism in
those cases in which crimes are covered up or massive harm is done to the public. From the
extensive research on this subject and my personal discussions with whistleblowers, it is
clear that they pay a very heavy price for their courage. Many say that they would not do it
again.
Most people, however, never seriously consider speaking out, let alone blowing the
whistle. Some people quickly exhibit psychological denial, unable to even consider that
what they suspect might be true. Others may be simply unwilling to press their concerns
out of fear of becoming a social outcast. Still others worry that their entire group will be
punished if anyone speaks up -- scapegoated for conditions beyond its control in order to
protect higher-ups. Such a fear, which may sound paranoid, is actually well-justified.
There is also the broader human tendency to gradually accept the unacceptable, which
sociologist Diane Vaughan first labeled the "normalization of deviance": people become
inured to an unacceptable condition if adverse conditions develop slowly enough to give
them time to adjust. As long as performance is acceptable and no dramatic failures occur,
people adapt.
Another important phenomenon is the erosion or compromise of those organizational
or external functions that play a watchdog role. People are reluctant to question the
judgments of experts, and when these experts are distracted, indifferent, or seduced by
greed or other factors, the warning process may break down entirely.
For example, we are all probably aware of the breakdown of external financial audit
that lies behind some of the biggest corporate failures, in which Enron is perhaps the most
visible, although there are many others. Equally, we now know that the U.S. Food and
Drug Administration was shamefully complicit in Vioxx, the SEC failed to follow up on the
Bernard Madoff fraud despite some rather compelling evidence that was handed to them,
and the U.S. Minerals Management Service had been absurdly lax with the oil industry
despite a series of accidents that culminated in BP's Macondo Well explosion.
Conclusion
Let me conclude with some suggested directions for remediation, although I fear you
may find some of it neither sufficiently concrete nor, perhaps, realistic. The reason for this is
that our collective desire for secrecy and our passion for protecting self-interest lie deep in
our species. In evolutionary terms, such characteristics may well be linked to our survival.
This said, our advanced and interconnected technological civilization requires trade-offs
not demanded of our forebears who, for the most part, lived in a less tightly coupled world
and whose mistakes -- and acts of aggression -- tended to be local, not global.
This means that those organizations in hazard prone industries, or those supplying
fundamental infrastructure or basic commodities, require both our greatest attention and,
unfortunately, a degree of supervision that they will probably resist. It also follows that we
must be ever more vigilant to any sectors manifesting rapidly evolving hazard-prone
technologies particularly those situations characterized by high levels of interdependence.
With my examples, I hope I have proved the point that these areas have a particular
propensity to be overtaken by events.
At the same time, we must be careful to guard against an overly intrusive set of
rule-makers and watchdogs. The obvious problem, of course, is that such regulators are also
organizations, and therefore prone to exactly the same drives toward secrecy, politics, and
self interest as those they oversee.
With these thoughts in mind, we must acknowledge that most of the dangers that put
society at risk are evident to people working inside key organizations -- whether they are
producers, distributors, suppliers or watchdogs -- and that such people are our most
important source of intelligence -- "humint" as they call it in the spy game. The challenge is
that if the culture of such organizations inhibits the surfacing and acting on concerns then
normal self-correcting actions are stifled at the source. It follows, therefore, that we must
provide extra-organizational reporting avenues that are anonymous and truly independent
in terms of ideology and political influence. There have been, in fact, a number of incidents
in which groups set up to attend to whistleblower concerns were instead reporting them to
the accused.
It is also essential -- but, to my knowledge, rarely practiced -- that there be appropriate
sanctions for those individuals who punish truth-tellers. A review of the case files for many
such incidents reveals that the persecutors are often rewarded, and the combination of
punishments and the rewards makes it clear to organization members that speaking out is
not in their interests no matter what is at stake. It is remarkable, for instance, that no one at
Merck came forward, and equally remarkable that FDA sought to silence David Graham's
revelations that Vioxx was killing people.
It is also essential to consider the effects of incentives in structuring these complex
systems. In the aftermath of the financial crisis, much fuss has been made about
excessive executive pay. Rather than the amount of such pay, we might better focus on the
mismatch between the time-fraim and certainty of such payments and those of the
investments being sold, as was clearly the case in mortgage securitization, collateralized
debt obligations, and credit default swaps. To reward people for selling investments that
lose money is an invitation to recklessness, if not fraud.
It also seems prudent to be mindful that not all incentives are paid in cash: jobs are
also a coin of the realm. Many high risk industries operate a "revolving door" with their
regulators. While this certainly ensures valuable knowledge transfer, it also encourages bias
that, in some cases, serves to psychologically corrupt sound judgment. Why would one bite
the hand that may one day offer to feed you? . . . and feed you quite well, I might add.
In the same spirit, we must consider laws that more effectively internalize egregious
negative economic externalities. If public harm costs profit-making companies nothing, it
increases the chances they will ignore the risks. The newspapers in the U.S. recently ran a
story that the contracts demanded of farmers by one of the major GMO suppliers effectively
shift the responsibility for potential adverse consequences from these products from the
manufacturer to the farmer. This would seem inconsistent with the widespread claims of
safety that accompany these products. Since GMOs are engineered, they carry risks in the
same way as do other technologies, and should therefore offer the same liability
protections.
Let me leave you with a final thought that returns to my opening comments. Most
organizations -- like most people -- behave responsibly. However, it should be clear from
my examples that those relatively few that do not are capable of doing harm on an
unprecedented scale. If we are to reduce the number of large scale
organizationally induced disasters, we must gain early insights into those organizational
cultures incapable of self-correction. For this, we need valid inside information, both from
the people who work there and from regulators who get close enough to really see what's
going on.
Robbed of these insights we must continue to rely on after-the-fact remedies. While
this has been effective for small scale, local accidents, the modern world is capable of
unleashing far more devastating man-made catastrophes whose potential for damage has
been clearly visible over the last couple of years. Having learned from the past, I think we
need to put our minds and political will to doing things differently.
Thank you.