STRUCTURED ANALYTIC
TECHNIQUES FOR INTELLIGENCE
ANALYSIS
Randolph H. Pherson
CQ Press
E-mail: order@sagepub.com
1 Oliver’s Yard, 55 City Road, London EC1Y 1SP, United Kingdom
All rights reserved. Except as permitted by U.S. copyright law, no part of this
work may be reproduced or distributed in any form or by any means, or stored
in a database or retrieval system, without permission in writing from the
publisher.
All third party trademarks referenced or depicted herein are included solely for
the purpose of illustration and are the property of their respective owners.
Reference to these trademarks in no way indicates any relationship with, or
endorsement by, the trademark owner.
Printed in the United States of America
Title: Structured analytic techniques for intelligence analysis / by Randolph H. Pherson and Richards J.
Heuer Jr.
Identifiers: LCCN 2019033477 | ISBN 9781506368931 (spiral bound) | ISBN 9781506368917 (epub) |
ISBN 9781506368924 (epub) | ISBN 9781506368948 (pdf)
And yet, analysis has probably never been a more important part of
the profession—or more needed by policymakers. In contrast to the
bipolar dynamics of the Cold War, this new world is strewn with
failing states, proliferation dangers, regional crises, rising powers,
and dangerous non-state actors—all at play against a backdrop of
exponential change in fields as diverse as population and
technology.
Finally, for all of these reasons, analysts live with a high degree
of risk—essentially the risk of being wrong and thereby
contributing to ill-informed policy decisions.
The key point is that all analysts should do something to test the
conclusions they advance. To be sure, expert judgment and intuition
have their place—and are often the foundational elements of sound
analysis—but analysts are likely to minimize error to the degree they
can make their underlying logic explicit in the ways these techniques
demand.
We designed the book for ease of use and quick reference. The
spiral binding allows analysts to have the book open while they
follow step-by-step instructions for each technique. In this edition, we
regrouped the techniques into six families based on the analytic
production process. Tabs separating each chapter contain a table of
contents for the selected chapter. For each family of techniques, we
provide an overarching description of that category and then a brief
summary of each technique covered in that chapter.
THE AUTHORS
Randolph H. Pherson is CEO of Globalytica, LLC; president of
Pherson Associates, LLC; and a founding director of the nonprofit
Forum Foundation for Analytic Excellence. He teaches advanced
analytic techniques and critical thinking skills to analysts in more
than two dozen countries, supporting major financial institutions,
global retailers, and security firms; facilitates Foresight workshops
for foreign governments, international foundations, and multinational
corporations; and consults with senior corporate and government
officials on how to build robust analytic organizations. Mr. Pherson
collaborated with Richards J. Heuer Jr. in developing and launching
use of ACH.
Richards J. Heuer Jr. was associated with the U.S. Intelligence Community, at
the CIA and then in various roles after retiring, for more than five decades
until his death in August 2018. He wrote extensively on personnel
security, counterintelligence, deception, and intelligence analysis. Mr.
Heuer earned a BA in philosophy from Williams College and an MA
in international relations from the University of Southern California.
He also pursued graduate studies at the University of California,
Berkeley, and the University of Michigan.
ACKNOWLEDGMENTS
For this edition, the authors would like to acknowledge the thoughtful
comments and suggestions they received from managers of
intelligence analysis in the United States, United Kingdom, Canada,
Spain, and Romania. We thank Peter de Werd and Noel
Hendrickson for reviewing and offering useful critiques on the
sections describing Analysis by Contrasting Narratives and
Counterfactual Reasoning, respectively. We also deeply appreciate
the invaluable recommendations and edits provided by Rubén Arcos,
Abigail DiOrio, Alysa Gander, Cindy Jensen, Kristine Leach,
Penelope Mort Ranta, Mary O’Sullivan, Richard Pherson, Karen
Saunders, and Roy Sullivan as well as the insightful graphics design
support provided by Adriana Gonzalez.
The ideas, interest, and efforts of all the above contributors to this
book are greatly appreciated, but the responsibility for any
weaknesses or errors rests solely on the shoulders of the authors.
DISCLAIMER
All statements of fact, opinion, or analysis expressed in this book are
those of the authors and do not reflect the official positions of the
CIA or any other U.S. government agency. Nothing in the contents
should be construed as asserting or implying U.S. government
authentication of information or agency endorsement of the authors’
views. This material has been reviewed by the CIA only to prevent
the disclosure of classified information.
NOTE
1. Katherine Hibbs Pherson and Randolph H. Pherson, Critical
Thinking for Strategic Intelligence, 2nd ed. (Washington, DC: CQ
Press/SAGE, 2015).
CHAPTER 1 INTRODUCTION AND
OVERVIEW
No formula exists, of course, for always getting it right, but the use of
structured techniques can reduce the frequency and severity of error.
These techniques can help analysts deal with proven cognitive
limitations, sidestep some of the known analytic biases, and explicitly
confront the problems associated with unquestioned mental models
or mindsets. They help analysts think more rigorously about an
analytic problem and ensure that preconceptions and assumptions
are explicitly examined and, when possible, tested.3
The term “alternative analysis” became widely used in the late 1990s
after (1) Adm. David Jeremiah’s postmortem analysis of the U.S.
Intelligence Community’s failure to foresee India’s 1998 nuclear test,
(2) a U.S. congressional commission’s review of the Intelligence
Community’s global missile forecast in 1998, and (3) a report from
the CIA Inspector General that focused higher-level attention on the
state of the Directorate of Intelligence’s analytic tradecraft. The
Jeremiah report specifically encouraged increased use of what it
called “red team analysis.”
Over time, analysts who misunderstood, or resisted the call for more
rigor, interpreted alternative analysis as simply meaning an
alternative to the normal way that analysis is done. For them the
term implied that alternative procedures were needed only in
exceptional circumstances when an analysis is of critical importance.
Kent School instructors countered that the techniques were not
alternatives to traditional analysis but were central to good analysis
and should become routine—instilling rigor and structure into the
analysts’ everyday work process.
From the several hundred techniques that might have been included
in this book, we selected a core group of sixty-six techniques that
appear to be most useful for the intelligence profession as well as
analytic pursuits in government, academia, and the private sector.10
We omitted techniques that tend to be used exclusively for a single
type of analysis in fields such as law enforcement or business
consulting.
Some training programs may have a need to boil down the list of
techniques to the essentials required for a given type of analysis. No
one list will meet everyone’s needs. However, we hope that having
one reasonably comprehensive list and lexicon of common
terminology available to the growing community of analysts now
employing Structured Analytic Techniques will help to facilitate
discussion and use of these techniques in projects involving
collaboration across organizational boundaries.
Chapter 3 (“Choosing the Right Technique”) describes the criteria we used for selecting techniques,
discusses which techniques might be learned first and used the most, and provides a guide for matching
techniques to analysts’ needs. The guide asks twelve questions about what the analyst wants or needs
to do. An affirmative answer to any question directs the analyst to the appropriate chapter(s), where the
analyst can quickly zero in on the most applicable technique(s). It concludes with a description of the
value of instilling five core habits of thinking into the analytic process.
Chapter 4 (“Practitioner’s Guide to Collaboration”) builds on our earlier observation that analysis done
across the global intelligence community is in a transitional stage from a mental activity performed
predominantly by a sole analyst to a collaborative team or group activity. The chapter discusses, among
other things, how to expand the analytic process to include rapidly growing social networks of area and
functional specialists who often work from several different geographic locations. It proposes that most
analysis be done in two phases: a divergent analysis or creative phase with broad participation by a
social network, followed by a convergent analysis phase and final report done by a small analytic team.
Chapters 5 through 10 each describe a different family of structured techniques, which taken together
cover sixty-six structured techniques (see Figure 1.6).11 Each of these chapters starts with a description
of the specific family and how techniques in that family help to mitigate known cognitive biases,
misapplied heuristics, or intuitive traps. A brief overview of each technique is followed by a detailed
discussion of each, including when to use it, the value added, description of the method, potential pitfalls
when noteworthy, relationship to other techniques, and origins of the technique.
Figure 1.6 Six Families of Structured Analytic Techniques
Readers who go through these six chapters of techniques from start to finish may perceive some
overlap. This repetition is for the convenience of those who use this book as a reference guide and seek
out individual sections or chapters. The reader seeking only an overview of the techniques can save
time by reading the introduction to each family of techniques, the brief overview of each technique, and
the full descriptions of only those specific techniques that pique the reader’s interest.
Chapter 5: Getting Organized. The eight techniques cover the basics, such as checklists, sorting,
ranking, and organizing your data.
Chapter 6: Exploration Techniques. The nine techniques include several types of brainstorming,
including Circleboarding™, Starbursting, and Cluster Brainstorming, which was called Structured
Brainstorming in previous editions. The Nominal Group Technique is a form of brainstorming that is
appropriate when there is concern that a brainstorming session might be dominated by a
particularly aggressive analyst or constrained by the presence of a senior officer. It also introduces
several mapping techniques, Venn Analysis, and Network Analysis.
Chapter 7: Diagnostic Techniques. The eleven techniques covered in this chapter include the
widely used Key Assumptions Check and Chronologies and Timelines. The Cross-Impact Matrix
supports group learning about relationships in a complex system. Several techniques fall in the
domain of hypothesis generation and testing (Multiple Hypothesis Generation, Diagnostic
Reasoning, Analysis of Competing Hypotheses [ACH], Argument Mapping, and Deception
Detection), including a new technique called the Inconsistencies Finder™, which is a simplified
version of ACH.
Chapter 8: Reframing Techniques. The sixteen techniques in this family help analysts break away
from established mental models by using Outside-In Thinking, Structured Analogies, Red Hat
Analysis, Quadrant Crunching™, and the Delphi Method to reframe an issue or imagine a situation
from a different perspective. What If? Analysis and High Impact/Low Probability Analysis are tactful
ways to suggest that the conventional wisdom could be wrong. Two important techniques
developed by the authors, Premortem Analysis and Structured Self-Critique, give analytic teams
viable ways to imagine how their own analysis might be wrong. The chapter concludes with a
description of a subset of six techniques grouped under the umbrella of Adversarial Collaboration
and an original approach to Structured Debate.
Chapter 9: Foresight Techniques. This family of twelve techniques includes four new techniques
for identifying key drivers, analyzing contrasting narratives, and engaging in Counterfactual
Reasoning. The chapter also describes five methods for developing scenarios and expands the
discussion of Indicators Validation and Evaluation by presenting several new techniques for
generating indicators.
Chapter 10: Decision Support Techniques. The ten techniques in this family include three new
Decision Support Techniques: Opportunities Incubator™, Bowtie Analysis, and Critical Path
Analysis. The chapter also describes six classic Decision Support Techniques, including Decision
Matrix, Force Field Analysis, and Pros-Cons-Faults-and-Fixes, all of which help managers,
commanders, planners, and policymakers make choices or trade-offs between competing goals,
values, or preferences. The chapter concludes with a description of the Complexity Manager, which
was developed by Richards J. Heuer Jr.
How can we know that the use of Structured Analytic Techniques does, in fact, improve the overall
quality of the analytic product? Chapter 11 (“The Future of Structured Analytic Techniques”) begins with
a discussion of two approaches to answer this question: logical reasoning and empirical research. The
chapter then employs one of the techniques in this book, Complexity Manager, to assess the prospects
for continued growth in the use of Structured Analytic Techniques. It asks the reader to imagine it is
2030 and answer the following questions based on an analysis of ten variables that could support or
hinder the growth of Structured Analytic Techniques during this time period: Will structured techniques
gain traction and be used with greater frequency by intelligence agencies, law enforcement, and the
business sector? What forces are spurring the increased use of structured analysis? What obstacles are
hindering its expansion?
NOTES
1. Vision 2015: A Globally Networked and Integrated Intelligence
Enterprise (Washington, DC: Director of National Intelligence, 2008).
System 1 Thinking is intuitive, fast, efficient, and often unconscious. It draws naturally on available
knowledge, experience, and often a long-established mental model of how people or things work in
a specific environment. System 1 Thinking requires little effort; it allows people to solve problems
and make judgments quickly and efficiently. Although it is often accurate, intuitive thinking is a
common source of cognitive biases and other intuitive mistakes that lead to faulty analysis. Three
types of cognitive limitations—cognitive bias, misapplied heuristics, and intuitive traps—are
discussed later in this chapter.
System 2 Thinking is analytic. It is slow, methodical, and conscious, the result of deliberate
reasoning. It includes all types of analysis, such as critical thinking and Structured Analytic
Techniques, as well as the whole range of empirical and quantitative methods.
The description of each Structured Analytic Technique in this book includes a discussion of which
cognitive biases, misapplied heuristics, and intuitive traps are most effectively avoided, overcome, or at
least mitigated by using that technique. The introduction to each family of techniques also identifies how
the techniques discussed in that chapter help counter one or more types of cognitive bias and other
common intuitive mistakes associated with System 1 Thinking.
Intelligence analysts have largely relied on intuitive judgment—a System 1 process—in constructing
their analyses. When done well, intuitive judgment—sometimes referred to as traditional analysis—
combines subject-matter expertise with basic thinking skills. Evidentiary reasoning, historical method,
case study method, and reasoning by analogy are examples of this category of analysis.2 The key
characteristic that distinguishes intuitive judgment from structured analysis is that intuitive judgment is
usually an individual effort in which the reasoning remains largely in the mind of the individual analyst
until it is written down in a draft report. Training in this type of analysis is generally acquired through
postgraduate education, especially in the social sciences and liberal arts, and often along with some
country or language expertise.
This chapter presents a taxonomy that defines the domain of System 2 Thinking. A taxonomy is a
classification of all elements of some body of information or knowledge. It defines the domain by
identifying, naming, and categorizing all the various objects in a specialized discipline. The objects are
organized into related groups based on some factor common to each object in the group.
The word “taxonomy” comes from the Greek taxis, meaning arrangement, division, or order, and nomos,
meaning law. A classic example of a taxonomy is Carolus Linnaeus’s hierarchical classification of all
living organisms by kingdom, phylum, class, order, family, genus, and species, which is widely used in
the biological sciences. The periodic table of elements used by chemists is another example. A library
catalog is also considered a taxonomy, as it starts with a list of related categories that are then
progressively broken down into finer categories.
Robert Clark has described a taxonomy of intelligence sources.4 He also categorized some analytic
methods commonly used in intelligence analysis, but not to the extent of creating a taxonomy. To the
best of our knowledge, no one has developed a taxonomy of analytic techniques for intelligence
analysis, although taxonomies have been developed to classify research methods used in forecasting,5
operations research,6 information systems,7 visualization tools,8 electronic commerce,9 knowledge
elicitation,10 and cognitive task analysis.11
After examining taxonomies of methods used in other fields, we found that there is no single right way to
organize a taxonomy—only different ways that are useful in achieving a specified goal. In this case, our
goal is to gain a better understanding of the domain of Structured Analytic Techniques, investigate how
these techniques contribute to providing a better analytic product, and consider how they relate to the
needs of analysts. The objective has been to identify various techniques that are currently available,
identify or develop additional potentially useful techniques, and help analysts compare and select the
best technique for solving any specific analytic problem. Standardization of terminology for Structured
Analytic Techniques will facilitate collaboration across agency and international boundaries during the
use of these techniques.
The taxonomy presented in Figure 2.1 distinguishes System 1, or intuitive thinking, from the four broad
categories of analytic methods used in System 2 Thinking. It describes the nature of these four
categories, one of which is structured analysis. The others are critical thinking, empirical analysis, and
quasi-quantitative analysis. This chapter describes the rationale for these four broad categories. In the
next chapter, we review the six categories or families of Structured Analytic Techniques.
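To make the four-category structure concrete, the Figure 2.1 hierarchy can be expressed as a simple tree. The following Python sketch is purely illustrative: the category and family names come from this book, while the data structure and helper function are our own.

    # A minimal sketch of the Figure 2.1 taxonomy as a nested dictionary.
    # Category names follow the chapter; everything else is illustrative.
    taxonomy = {
        "System 1 Thinking": {},  # intuitive, fast, largely unconscious
        "System 2 Thinking": {
            "Critical Thinking": {},
            "Structured Analysis": {
                # The six families of techniques covered in chapters 5-10
                "Getting Organized": {},
                "Exploration Techniques": {},
                "Diagnostic Techniques": {},
                "Reframing Techniques": {},
                "Foresight Techniques": {},
                "Decision Support Techniques": {},
            },
            "Empirical Analysis": {},
            "Quasi-Quantitative Analysis": {},
        },
    }

    def print_tree(node, depth=0):
        """Walk the taxonomy and print an indented outline."""
        for name, children in node.items():
            print("  " * depth + name)
            print_tree(children, depth + 1)

    print_tree(taxonomy)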
2.2 DEVELOPING A TAXONOMY OF
STRUCTURED ANALYTIC TECHNIQUES
Intelligence analysts employ a wide range of methods to deal with an
even wider range of subjects. Although this book focuses on the field
of structured analysis, it is appropriate to identify some initial
categorization of all the methods to see where structured analysis
fits. Many researchers write of only two general approaches to
analysis, contrasting qualitative with quantitative, intuitive with
empirical, or intuitive with scientific. Others might claim that there are
three distinct approaches: intuitive, structured, and scientific. In our
taxonomy, we have sought to address this confusion by describing
two types of thinking (System 1 and System 2) and defining four
categories of System 2 Thinking.
In this chapter, we distinguish between three types of cognitive limitations (see Figure 2.3):
Cognitive biases are inherent thinking errors that people make in processing information. They
prevent an analyst from accurately understanding reality even when all the data and evidence
needed to form an accurate view are in hand.
Heuristics are experience-based techniques that can give a solution that is not guaranteed to be
optimal. The objective of a heuristic is to produce quickly a solution that is good enough to solve
the problem at hand. Analysts can err by overrelying on or misapplying heuristics. Heuristics help
an analyst generate a quick answer, but sometimes that answer will turn out to be wrong.
Intuitive traps are practical manifestations of commonly recognized cognitive biases or heuristics
that analysts in the intelligence profession—and many other disciplines—often fall victim to in their
day-to-day activities.
There is extensive literature on how cognitive biases and heuristics affect a person’s thinking in many
fields. Intuitive traps, however, are a new category of bias first identified by Randolph Pherson and his
teaching colleagues as they explored the value of using Structured Analytic Techniques to counter the
negative impact of cognitive limitations. Additional research is ongoing to refine and revise the list of
eighteen intuitive traps.
All cognitive biases, misapplied heuristics, and intuitive traps, except perhaps the personal equity bias,
are more frequently the result of fast, unconscious, and intuitive System 1 Thinking and not the result of
thoughtful reasoning (System 2). System 1 Thinking—though often correct—is more often influenced by
cognitive biases and mindsets as well as insufficient knowledge and the inherent unknowability of the
future. Structured Analytic Techniques—a type of System 2 Thinking—help identify and overcome the
analytic biases inherent in System 1 Thinking.
Behavioral scientists have studied the impact of cognitive biases on analysis and decision making in
many fields, such as psychology, political science, medicine, economics, business, and education ever
since Amos Tversky and Daniel Kahneman introduced the concept of cognitive biases in the early
1970s.14 Richards Heuer’s work for the CIA in the late 1970s and the 1980s, subsequently followed by
his book Psychology of Intelligence Analysis, first published in 1999, applied Tversky and Kahneman’s
insights to problems encountered by intelligence analysts.15 Since the publication of Psychology of
Intelligence Analysis, other authors associated with the U.S. Intelligence Community (including Jeffrey
Cooper and Rob Johnston) have identified cognitive biases as a major cause of analytic failure at the
CIA.16
Figure 2.3 Glossary of Cognitive Biases, Misapplied Heuristics, and Intuitive Traps
This book is a logical follow-on to Psychology of Intelligence Analysis, which described in detail many of
the biases and heuristics that influence intelligence analysis.17 Since then, hundreds of cognitive biases
and heuristics have been described in the academic literature using a wide variety of terms. As Heuer
noted many years ago, “Cognitive biases are similar to optical illusions in that the error remains
compelling even when one is fully aware of its nature. Awareness of the bias, by itself, does not
produce a more accurate perception.”18 This is why cognitive limitations are exceedingly difficult to
overcome. For example, Emily Pronin, Daniel Y. Lin, and Lee Ross observed in three different studies
that people see the existence and operation of cognitive and motivational biases much more in others
than in themselves.19 This explains why so many analysts believe their own intuitive thinking (System
1) is sufficient.
Analysts in the intelligence profession—and many other disciplines—often fall victim to cognitive
biases, misapplied heuristics, and intuitive traps that are manifestations of commonly recognized
biases. Structured Analytic Techniques help analysts avoid, overcome, or at least mitigate their impact.
How a person perceives information is strongly influenced by factors such as experience, education,
cultural background, and what that person is expected to do with the data. Our brains are trained to
process information quickly, which often leads us to process data incorrectly or to not recognize its
significance if it does not fit into established patterns. Some heuristics, such as the fight-or-flight instinct
or knowing you need to take immediate action when you smell a gas leak, are helpful. Others are
nonproductive. Defaulting to “rules of thumb” while problem solving can often lead to inherent thinking
errors, because the information is being processed too quickly or incorrectly.
Cognitive biases, such as Confirmation Bias or Hindsight Bias, impede analytic thinking from the
very start.20
Misapplied heuristics, such as Groupthink or Premature Closure, could lead to a correct decision
based on a non-rigorous thought process if one is lucky. More often, they impede the analytic
process because they prevent us from considering a full range of possibilities.
Intuitive traps, such as Projecting Past Experiences or Overinterpreting Small Samples, are
mental mistakes practitioners make when conducting their business. A classic example is when a
police detective assumes that the next case he or she is working will be like the previous case or a
general prepares to fight the last war instead of anticipating that the next war will have to be fought
differently.
Unfortunately for analysts, these biases, heuristics, and traps are quick to form and extremely hard to
correct. After one’s mind has reached closure on an issue, even a substantial accumulation of
contradictory evidence is unlikely to force a reappraisal. Analysts often do not see new patterns
emerging or fail to detect inconsistent data. An even larger concern is the tendency to ignore or dismiss
outlier data as “noise.”
Structured Analytic Techniques help analysts avoid, overcome, or at least mitigate these common
cognitive limitations. Structured techniques help analysts do the following:
Ensure accountability.
Make the analysis more transparent to other analysts and decision makers.
2.4 MATCHING COGNITIVE LIMITATIONS TO STRUCTURED
TECHNIQUES
Figure 2.4 Matching Cognitive Limitations to the Six Families of Structured Techniques
In this book, we proffer guidance on how to reduce an analyst’s vulnerability to cognitive limitations. In
the overview of each family of Structured Analytic Techniques, we list two cognitive biases or misapplied
heuristics as well as two intuitive traps that the techniques in that family are most effective in countering
(see Figure 2.4). The descriptions of each of the sixty-six techniques include commentary on which
biases, heuristics, and traps that specific technique helps mitigate. In our view, most techniques help
counter cognitive limitations with differing degrees of effectiveness, and the matches we selected are
only illustrative of what we think works best. Additional research is needed to empirically validate the
matches we have identified from our experience teaching the techniques over the past decade and
exploring their relationship to key cognitive limitations.
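Figure 2.4 pairs each family with the specific limitations it counters best. As a purely hypothetical sketch of how such a lookup might be held in software, the pairings below reuse limitations named elsewhere in this chapter; they do not reproduce the actual contents of Figure 2.4.

    # Illustrative sketch only: a lookup from technique family to the
    # cognitive limitations it helps counter. The pairings are hypothetical
    # examples, not the contents of Figure 2.4.
    FAMILY_COUNTERS = {
        "Diagnostic Techniques": {
            "biases_or_heuristics": ["Confirmation Bias", "Premature Closure"],
            "intuitive_traps": ["Overinterpreting Small Samples"],
        },
        "Reframing Techniques": {
            "biases_or_heuristics": ["Groupthink", "Hindsight Bias"],
            "intuitive_traps": ["Projecting Past Experiences"],
        },
    }

    def limitations_countered(family: str) -> list[str]:
        """Return every limitation a family is (hypothetically) matched to."""
        entry = FAMILY_COUNTERS.get(family, {})
        return entry.get("biases_or_heuristics", []) + entry.get("intuitive_traps", [])

    print(limitations_countered("Diagnostic Techniques"))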
2.5 COMBATING DIGITAL
DISINFORMATION
The growing use of social media platforms to manipulate popular
perceptions for partisan political or social purposes has made
democratic processes increasingly vulnerable in the United States
and across the world. Largely unencumbered by commercial or legal
constraints, international standards, or morality, proponents of Digital
Disinformation21 have become increasingly adept at exploiting
common cognitive limitations, such as Confirmation Bias,
Groupthink, and Judging by Emotion. History may show that we
have grossly underestimated how easy it has been to influence
popular opinion by leveraging cognitive biases, misapplied
heuristics, and intuitive traps.
19. Emily Pronin, Daniel Y. Lin, and Lee L. Ross, “The Bias Blind
Spot: Perceptions of Bias in Self versus Others,” Personality and
Social Psychology Bulletin 28, no. 3 (2002): 369–381.
20. Definitions of these and other cognitive biases, misapplied
heuristics, and intuitive traps mentioned later in this chapter are
provided in Figure 2.3 on pages 24–25.
23. The term “active measures” refers to actions taken by the Soviet
Union, and later Russia, beginning in the 1920s to influence popular
perceptions through propaganda, false documentation, penetration
of institutions, persecution of political activists, and political violence,
including assassinations. For more information, see the testimony of
Gen. (ret.) Keith B. Alexander, Disinformation: A Primer in Russian
Active Measures and Influence Campaigns, United States Senate
Select Committee on Intelligence, March 30, 2017,
https://www.intelligence.senate.gov/sites/default/files/documents/os-
kalexander-033017.pdf.
24. Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of
True and False News Online,” Science 359, no. 6380 (2018): 1146–
1151.
25. Elisa Shearer and Jeffrey Gottfried, News Use across Social
Media Platforms 2017 (Washington, DC: Pew Research Center,
September 7, 2017), http://www.journalism.org/2017/09/07/news-
use-across-social-media-platforms-2017/.
To identify the structured techniques that would be most helpful in learning how to perform a task with
more rigor and imagination, analysts pick the statement that best describes their objectives and then
choose one or two of the techniques listed below the task. Analysts should refer to the appropriate
chapter in the book and first read the brief discussion of that family of techniques (which includes a short
description of each technique in the chapter) to validate their choice(s). The next step is to read the
section of the chapter that describes when, why, and how to use the chosen technique. For many
techniques, the information provided sufficiently describes how to use the technique. Some more
complex techniques require specialized training or facilitation support by an experienced user.
Another question often asked is, “When should I use the techniques?” Figure 3.3b provides a reference
guide for when to use thirty-three of the most used structured techniques.
3.4 PROJECTS USING MULTIPLE
TECHNIQUES
Many projects require the use of multiple techniques, which is why
this book includes sixty-six different techniques. Each technique may
provide only one piece of a complex puzzle; knowing how to put
these pieces together for a specific project is part of the art of
structured analysis. Separate techniques might be used for
organizing the data, evaluating ideas, and identifying assumptions.
There are also several techniques appropriate for generating and
testing hypotheses, drawing conclusions, challenging key findings,
and implementing new strategies.
Tool rut. Analysts are inclined to use whatever tool they already
know or have readily available. Psychologist Abraham Maslow
observed that “if the only tool you have is a hammer, it is
tempting to treat everything as if it were a nail.”3
Some take a little more time to learn, but, once learned, often save analysts considerable time over
the long run. Cluster Brainstorming, Analysis of Competing Hypotheses (ACH), and Red Hat
Analysis are good examples of this phenomenon.
Most Foresight Techniques, Premortem Analysis, and Structured Self-Critique take more time to
perform but offer major rewards for discovering both “unknown unknowns” and errors in the original
analysis that can be remedied.
When working on quick-turnaround items, such as a current situation report or an alert that must be
produced the same day, one can credibly argue that it is not possible to take time to use a structured
technique. When deadlines are short, gathering the right people in a small group to employ a structured
technique can prove to be impossible.
The best response to this valid observation is to encourage analysts to practice using core structured
techniques when deadlines are less pressing. In so doing, they ingrain new habits of thinking. If they,
and their colleagues, practice how to apply the concepts embedded in the structured techniques when
they have time, they will be more capable of applying these critical thinking skills instinctively when
under pressure. The Five Habits of the Master Thinker are described in Figure 3.6.4 Each habit can be
mapped to one or more Structured Analytic Techniques.
Key Assumptions.
In a healthy work environment, challenging assumptions should be commonplace, ranging from “Why do
you assume we all want pepperoni pizza?” to “Won’t higher oil prices force them to reconsider their
export strategy?” If you expect your colleagues to challenge your key assumptions on a regular basis,
you will become more sensitive to them yourself and will increasingly question if your assumptions are
well-founded.
Alternative Explanations.
When confronted with a new development, the first instinct of a good analyst is to develop a hypothesis
to explain what has occurred based on the available evidence and logic. A master thinker goes one step
further and immediately asks whether any alternative explanations should be considered. If envisioning
one or more alternative explanations is difficult, a master thinker will simply posit a single alternative that
the initial or lead hypothesis is not true. Although at first glance these alternatives may appear much
less likely, as new evidence surfaces over time, one of the alternatives may evolve into the lead
hypothesis. Analysts who do not generate a set of alternative explanations at the start of a project but
rather quickly lock on to a preferred explanation will often fall into the trap of Confirmation Bias—
focusing on the data that are consistent with their explanation and ignoring or rejecting other data that
are inconsistent.
Inconsistent Data.
Looking for inconsistent data is probably the hardest of the five habits to master, but it is the one that
can reap the most benefits in terms of time saved when investigating or researching an issue. The best
way to train your brain to look for inconsistent data is to conduct a series of ACH or Inconsistencies
Finder™ exercises. Such practice helps analysts readily identify what constitutes compelling contrary
evidence. If an analyst encounters an item of data that is compellingly inconsistent with one of the
hypotheses (for example, a solid alibi), then that hypothesis can be quickly discarded. This will save the
analyst time by redirecting his or her attention to more likely solutions.
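The core logic of this habit, discarding any hypothesis that faces compellingly inconsistent evidence, can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the Inconsistencies Finder™ or ACH itself; the hypotheses and ratings are invented.

    # Minimal sketch: drop any hypothesis with compellingly inconsistent
    # evidence. "CC" marks an item judged compellingly inconsistent (for
    # example, a solid alibi). All data here are invented for illustration.
    ratings = {
        "H1: Suspect A acted alone": ["C", "I", "CC"],  # CC -> discard
        "H2: Suspect B acted alone": ["C", "C", "I"],
        "H3: A and B colluded":      ["C", "C", "C"],
    }

    surviving = {h: r for h, r in ratings.items() if "CC" not in r}
    for hypothesis in surviving:
        print("Still viable:", hypothesis)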
Key Drivers.
Asking at the outset what key drivers best explain what has occurred or foretell what is about to happen
is a key attribute of a master thinker. If analysts quickly identify key drivers, the chance of surprise will
be diminished. An experienced analyst should know how to vary the weights of these key drivers (either
instinctively or by using techniques such as Multiple Scenarios Generation or Quadrant Crunching™) to
generate a set of credible alternative scenarios that capture the range of possible outcomes.
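The mechanical part of this process, enumerating combinations of key-driver states as candidate scenarios, can be sketched as follows. The drivers and states are invented for illustration; real Multiple Scenarios Generation adds judgment about which combinations are credible and worth developing.

    from itertools import product

    # Hypothetical key drivers and their possible states; a real exercise
    # would select drivers through brainstorming and keep only credible
    # combinations.
    drivers = {
        "Oil price": ["rising", "falling"],
        "Regime stability": ["stable", "fragile"],
        "External support": ["sustained", "withdrawn"],
    }

    # Each combination of states is a candidate scenario skeleton.
    for states in product(*drivers.values()):
        scenario = ", ".join(f"{d}: {s}" for d, s in zip(drivers, states))
        print(scenario)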
Context.
Analysts often get so engaged in collecting and sorting data that they miss the forest for the trees.
Learning to stop and reflect on the overarching context for the analysis is a key habit to learn. Most
analysis is done under considerable time pressure, and the tendency is to plunge in as soon as a task is
assigned. If the analyst does not take time to reflect on what the client is really seeking, the resulting
analysis could prove inadequate and much of the research a waste of time. Ask yourself: “What do they
need from me?” “How can I help them frame the issue?” and “Do I need to place their question in a
broader context?” Failing to do this at the outset can easily lead the analyst down blind alleys or require
reconceptualizing an entire paper after it has been drafted. Key structured techniques for developing
context include Starbursting, Mind Mapping, Outside-In Thinking, and Cluster Brainstorming.
Learning how to internalize the five habits will take a determined effort. Applying each core technique to
three to five real problems should implant the basic concepts firmly in any analyst’s mind. With every
repetition, the habits will become more ingrained and, over time, will become instinctive. Few analysts
can wish for more. If they master the habits, they will produce a superior product in less time.
NOTES
1. J. Scott Armstrong, “Combining Forecasts,” in Principles of
Forecasting, ed. J. Scott Armstrong (New York: Springer
Science+Business Media, 2001), 418–439.
2. The first three items in this list are from Craig S. Fleisher and
Babette E. Bensoussan, Strategic and Competitive Analysis:
Methods and Techniques for Analyzing Business Competition (Upper
Saddle River, NJ: Prentice Hall, 2003), 22–23.
Traditional analytic team: This is the typical work team assigned to perform a specific task. It has
a leader appointed by a manager or chosen by the team, and all members of the team are
collectively accountable for the team’s product. The team may work jointly to develop the entire
product, or each team member may be responsible for a specific section of the work. Historically, in
the U.S. Intelligence Community, many teams were composed of analysts from a single agency,
and involvement of other agencies was through coordination during the latter part of the production
process rather than by collaborating from the beginning. This approach is now evolving because of
changes in policy and easier access to secure interagency communications and collaborative
software. Figure 4.1a shows how the traditional analytic team works. The core analytic team, with
participants usually working at the same office, drafts a paper and sends it to other members of the
community for comment and coordination. Ideally, the core team will alert other stakeholders in the
community of their intent to write on a specific topic; but, too often, such dialogue occurs much later,
when the author is seeking to coordinate the finished draft. In most cases, analysts must obtain
specific permissions or follow established procedures to tap the knowledge of experts outside the
office or outside the government.
Special project team: Such a team is usually formed to provide decision makers with near real-
time analytic support during a crisis or an ongoing operation. A crisis support task force or field-
deployed interagency intelligence team that supports a military operation exemplifies this type of
team. Members typically are in the same physical office space or are connected by video
communications. There is strong team leadership, often with close personal interaction among team
members. Because the team is created to deal with a specific situation, its work may have a
narrower focus than a social network or regular analytic team, and its duration may be limited.
There is usually intense time pressure, and around-the-clock operation may be required. Figure
4.1b is a diagram of a special project team.
Social networks: Experienced analysts have always had their own network of experts in their field
or related fields with whom they consult from time to time and whom they may recruit to work with
them on a specific analytic project. Social networks are critical to the analytic business. Members of
the network do the day-to-day monitoring of events, produce routine products as needed, and may
recommend the formation of a more formal analytic team to handle a specific project. This form of
group activity is now changing dramatically with the growing ease of cross-agency secure
communications and the availability of collaborative software. Social networks are expanding
exponentially across organization boundaries. The term “social network,” as used here, includes all
analysts in government or business working anywhere in the world on any issue. It can be limited to
a small group with special clearances or comprise a broad array of government, business,
nongovernmental organization (NGO), and academic experts. The network can be in the same
office, in different buildings in the same metropolitan area, or, increasingly, at multiple locations
around the globe.
The key problem that arises with social networks is the geographic distribution of their members. For
widely dispersed teams, air travel is often an unaffordable expense. Even within the Washington, D.C.,
metropolitan area, distance is a factor that limits the frequency of face-to-face meetings, particularly as
traffic congestion becomes a growing nightmare. From their study of teams in diverse organizations,
which included teams in the U.S. Intelligence Community, Richard Hackman and Anita Woolley came to
this conclusion:
Distributed teams do relatively well on innovation tasks for which ideas and solutions need to
be generated but generally underperform face-to-face teams on decision-making tasks.
Although decision-support systems can improve performance slightly, decisions made from
afar still tend to take more time, involve less exchange of information, make error detection and
correction more difficult, and can result in less participant satisfaction with the outcome than is
the case for face-to-face teams.2
In sum, distributed teams are appropriate for many, but not all, team tasks. Using them well requires
careful attention to team structure, a face-to-face launch when members initially come together, and
leadership support throughout the life of the team to keep members engaged and aligned with collective
purposes.3
Research on effective collaborative practices has shown that geographically and organizationally
distributed teams are most likely to succeed when they satisfy six key imperatives of effective
collaboration:
Mutual Trust. Know and trust one another; this usually requires that they meet face to face at least
once.
Mission Criticality. Feel a personal need to engage with the group to perform a critical task.
Access and Agility. Connect with one another virtually on demand and easily add new members.
Incentives. Perceive incentives for participating in the group, such as saving time, gaining new
insights from interaction with other knowledgeable analysts, or increasing the impact of their
contribution.
Common Understanding. Share a common lexicon and understanding of the problem with agreed
lists of terms and definitions.4
4.2 DIVIDING THE WORK
Managing the geographic distribution of the social network can be addressed by dividing the analytic
task into two parts: (1) exploiting the strengths of the social network for divergent or creative analysis to
identify ideas and gather information and (2) forming a smaller analytic team that employs convergent
analysis to meld these ideas into an analytic product. When the draft is completed, it goes back for
review to all members of the social network who contributed during the first phase of the analysis, and
then back to the team to edit and produce the final paper.
Structured Analytic Techniques, web-based discussions, and other types of collaborative software
facilitate this two-part approach to analysis. The use of Exploration Techniques to conduct divergent
analysis early in the analytic process works well for a geographically distributed social network
communicating online. The products of these techniques can provide a solid foundation for the smaller
analytic team to do the subsequent convergent analysis. In other words, each type of group performs
the type of task for which it is best qualified. This process is applicable to most analytic projects. Figure
4.2 shows the functions that collaborative websites can perform.
A project leader informs a social network of an impending project and provides a tentative project
description, target audience, scope, and process to be followed. The leader also broadcasts the name
and internet address of the collaborative virtual workspace to be used and invites interested analysts
knowledgeable in that area to participate. Any analyst with access to the collaborative network is
authorized to add information and ideas to it. Any of the following techniques may come into play during
the divergent analysis phase as specified by the project leader:
Collaboration in sharing and processing data using other techniques, such as timelines, sorting,
networking, mapping, and charting, as described in chapters 5 and 6.
Some form of brainstorming, as described in chapters 6 and 9, to generate a list of driving forces,
variables, players, and so on.
Putting the list into a Cross-Impact Matrix, as described in chapter 7, and then discussing and
recording in the web discussion stream the relationship, if any, between each pair of driving forces,
variables, or players in that matrix (a minimal sketch of such a matrix follows this list).
Developing a list of relevant information for consideration when evaluating generated hypotheses,
as described in chapter 7.
Doing a Key Assumptions Check, as described in chapter 7. This can take less time using a
synchronous collaborative virtual setting than when done in a face-to-face meeting; conducting
such a check can uncover the network’s thinking about key assumptions.
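As the Cross-Impact Matrix item above notes, the matrix itself is simply a grid of pairwise judgments. A minimal Python sketch of one way to hold it follows; the factor names and the -2 to +2 rating scale are illustrative assumptions, not part of the technique as defined in chapter 7.

    # Minimal sketch of a Cross-Impact Matrix: one cell per ordered pair of
    # factors, recording the judged effect of the row factor on the column
    # factor. Factor names and the -2..+2 scale are invented.
    factors = ["Driver A", "Driver B", "Player C"]
    matrix = {(row, col): 0 for row in factors for col in factors if row != col}

    # Record the group's judgments as they emerge in discussion.
    matrix[("Driver A", "Driver B")] = +2   # A strongly reinforces B
    matrix[("Player C", "Driver A")] = -1   # C weakly dampens A

    for (row, col), impact in sorted(matrix.items()):
        if impact:
            print(f"{row} -> {col}: {impact:+d}")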
Most of these steps involve making lists, which can be done quite effectively in a virtual environment.
Making such input online in a chat room or asynchronous email discussion thread can be even more
productive than a face-to-face meeting. Analysts have more time to think about and write up their
thoughts. They can look at their contribution over several days and make additions or changes as new
ideas come to them.
Ideally, a project leader should oversee and guide the process. In addition to providing a sound
foundation for further analysis, this process enables the project leader to identify the best analysts for
inclusion in the smaller team that conducts the project’s second phase—making analytic judgments and
drafting the report. The project lead should select second-phase team members to maximize the
following criteria: level of expertise on the subject, level of interest in the outcome of the analysis, and
diversity of opinions and collaboration styles among members of the group. The action then moves from
the social network to a small, trusted team (preferably no larger than eight analysts) to complete the
project, perhaps using other techniques, such as Analysis of Competing Hypotheses, Red Hat Analysis,
or What If? Analysis. At this stage in the process, the use of virtual collaborative software is usually
more efficient than face-to-face meetings. Software used for exchanging ideas and revising text should
allow for privacy of deliberations and provide an audit trail for all work done.
The draft report is best done by a single person. That person can work from other team members’
inputs, but the report usually reads better if it is crafted in one voice. As noted earlier, the working draft
should be reviewed by those members of the social network who participated in the first phase of the
analysis.
4.3 VALUE OF COLLABORATIVE
PROCESSES
In our vision for the future, intelligence analysis increasingly
becomes a collaborative enterprise, with the focus shifting “away
from coordination of draft products toward regular discussion of data
and hypotheses early in the research phase.”5 This is a major
change from the traditional concept of intelligence analysis as largely
an individual activity with coordination as the final step in the
process. In this scenario, instead of reading a static, hard copy
paper, decision makers would obtain analysis of the topic of interest
by accessing a web-based knowledge database that was
continuously updated. The website might also include dropdowns
providing lists of key assumptions, critical information gaps, or
indicators; a Source Summary Statement; or a map showing how the
analytic line has shifted over time.
If you had to identify, in one word, the reason that the human
race has not achieved, and never will achieve, its full
potential, that word would be meetings.
Dave Barry, American humorist
Academic studies show that “the order in which people speak has a
profound effect on the course of a discussion. Earlier comments are
more influential, and they tend to provide a framework within which
the discussion occurs.”8 Once that framework is in place, discussion
tends to center on that framework, to the exclusion of other options.
This phenomenon is also easily observed when attending a panel
discussion at a conference. Whoever asks the first question or two in
the Q&A session often sets the agenda (or the analytic framework,
depending on the astuteness of the question) for the remainder of
the discussion.
The more heterogeneous the group, the lower the risk of Premature
Closure, Groupthink, Satisficing, and polarization. Use of a
structured technique also sets a clear step-by-step agenda for any
meeting where that technique is used. This makes it easier for a
group leader to keep a meeting on track to achieve its goal.13
In a task-oriented team environment, advocacy of a specific position can lead to emotional conflict and
reduced team effectiveness. Advocates tend to examine evidence in a biased manner, accepting at face
value information that seems to confirm their own point of view and critically evaluating any contrary
evidence. Advocacy is appropriate in a meeting of stakeholders that one is attending for the purpose of
representing a specific interest. It is also “an effective method for making decisions in a courtroom when
both sides are effectively represented, or in an election when the decision is made by a vote of the
people.”15 However, it is not an appropriate method of discourse within a team “when power is unequally
distributed among the participants, when information is unequally distributed, and when no clear rules of
engagement exist—especially about how the final decision will be made.”16 An effective resolution may
be found only through the creative synergy of alternative perspectives.
Figure 4.6 displays the differences between advocacy and the objective inquiry expected from a team
member or a colleague.17 When advocacy leads to emotional conflict, it can lower team effectiveness by
provoking hostility, distrust, cynicism, and apathy among team members. Such tensions are often
displayed when challenge techniques, such as Devil’s Advocacy and Team A/Team B Analysis, are
employed (a factor that argued strongly for dropping them from the third edition of this book). On the
other hand, objective inquiry, which often leads to cognitive conflict, can lead to new and creative
solutions to problems, especially when it occurs in an atmosphere of civility, collaboration, and common
purpose. Several effective methods for managing analytic differences are described at the end of
chapter 8.
We believe a team or group using Structured Analytic Techniques is less vulnerable to group-process
traps than a comparable group doing traditional analysis because the techniques move analysts away
from advocacy and toward inquiry. This idea has not yet been tested and demonstrated empirically, but
the rationale is clear. These techniques work best when an analyst is collaborating with a small group of
other analysts. Just as these techniques provide structure to our individual thought processes, they play
an even stronger role in guiding the interaction of analysts within a small team or group.18
Some techniques, such as the Key Assumptions Check, Analysis of Competing Hypotheses (ACH), and
Argument Mapping, help analysts gain a clear understanding of how and exactly why they disagree. For
example, many CIA and FBI analysts report that they use ACH to gain a better understanding of the
differences of opinion between them and other analysts or between analytic offices. The process of
creating an ACH matrix requires identification of the evidence and arguments being used and
ascertaining the basis for labeling items and arguments as either consistent or inconsistent with the
various hypotheses. Review of this matrix provides a systematic basis for identification and discussion
of differences between two or more analysts.
CIA and FBI analysts also note that jointly building an ACH matrix helps to depersonalize arguments
when differences of opinion emerge.19 One side might suggest evidence that the other had not known
about, or one side will challenge an assumption and a consensus will emerge that the assumption is
unfounded. In other words, ACH can help analysts, operators, and decision makers learn from their
differences rather than fight over them. Other structured techniques, including those discussed in the
section on Adversarial Collaboration in chapter 8, do this as well.
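The mechanics described here, rating each item of evidence against each hypothesis and tallying inconsistencies, can be sketched compactly. This is a simplified illustration with invented data, not a full implementation of ACH, which also involves weighting evidence and refining hypotheses.

    # Simplified ACH sketch: rows are items of evidence, columns are
    # hypotheses, cells record consistency ("C") or inconsistency ("I").
    # ACH focuses on inconsistent evidence; the hypothesis with the least
    # inconsistency is typically the strongest survivor. Data are invented.
    evidence = ["E1", "E2", "E3", "E4"]
    hypotheses = ["H1", "H2", "H3"]
    matrix = {
        "E1": {"H1": "C", "H2": "I", "H3": "C"},
        "E2": {"H1": "I", "H2": "I", "H3": "C"},
        "E3": {"H1": "C", "H2": "C", "H3": "C"},
        "E4": {"H1": "I", "H2": "C", "H3": "C"},
    }

    inconsistency = {
        h: sum(1 for e in evidence if matrix[e][h] == "I") for h in hypotheses
    }
    for h, count in sorted(inconsistency.items(), key=lambda kv: kv[1]):
        print(f"{h}: {count} inconsistent item(s)")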
4.7 LEADERSHIP AND TRAINING
Considerable research on virtual teaming shows that leadership effectiveness is a major factor in the
success or failure of a virtual team.20 Although leadership usually is provided by a group’s appointed
leader, it can also emerge as a more distributed peer process. A trained facilitator can increase a team’s
effectiveness (see Figure 4.7). When face-to-face contact is limited, leaders, facilitators, and team
members must compensate by paying more attention than they might otherwise devote to the following
tasks:
Articulating a clear mission, goals, specific tasks, and procedures for evaluating results.
Defining measurable objectives with milestones and timelines for achieving them.
Building relationships with and among team members and with stakeholders.
As illustrated in Figure 4.7, the interactions among the various types of team participants—whether
analyst, leader, facilitator, or technologist—are as important as the individual roles played by each. For
example, analysts on a team will be most effective not only when they have subject-matter expertise or
knowledge that lends a new viewpoint, but also when the rewards for their participation are clearly
defined by their manager. Likewise, a facilitator’s effectiveness is greatly increased when the goals,
timeline, and general focus of the project are established with the leader in advance. When roles and
interactions are explicitly defined and functioning, the group can more easily turn to the more
challenging analytic tasks at hand.
As greater emphasis is placed on intra- and interoffice collaboration and more work is done through
computer-mediated communications, it becomes increasingly important that analysts be trained in the
knowledge, skills, and abilities required for facilitation and management of both face-to-face and virtual
meetings, with a strong emphasis on using silent brainstorming techniques and Adversarial
Collaboration during such meetings. Training is more effective when it occurs just before the skills and
knowledge must be used. Ideally, it should be fully integrated into the work process and reinforced with
mentoring. Good instructors should aspire to wear three different hats, acting in the roles of coaches,
mentors, and facilitators.
Multi-agency or intelligence community–wide training programs of this sort could provide substantial
support to interagency collaboration and the formation of virtual teams. Whenever a new interagency or
virtual team or a distributed global project team is formed, all members should have benefited from
training in understanding the pitfalls of group processes, performance expectations, standards of
conduct, differing collaboration styles, and conflict resolution procedures. Standardization of this training
across multiple organizations or agencies will accelerate the development of a shared analytic culture
and reduce the start-up time needed when launching a new interagency project or orchestrating the
work of a globally distributed group.
3. Ibid.
13. This paragraph and the previous paragraph express the authors’
professional judgment based on personal experience and anecdotal
evidence gained in discussion with other experienced analysts. As
discussed in chapter 11, there is a clear need for systematic
research on this topic and other variables related to the effectiveness
of Structured Analytic Techniques.
15. Martha Lagace, “Four Questions for David Garvin and Michael
Roberto,” Working Knowledge: Business Research for Business
Leaders (Harvard Business School weekly newsletter), October 15,
2001, http://hbswk.hbs.edu/item/3568.html
16. Ibid.
17. The table is from David A. Garvin and Michael A. Roberto, “What
You Don’t Know about Making Decisions,” Working Knowledge:
Business Research for Business Leaders (Harvard Business School
weekly newsletter), October 15, 2001,
http://hbswk.hbs.edu/item/2544.html.
Figure 4.7 description: The leader articulates goals, establishes the team, and enforces accountability. The facilitator identifies appropriate techniques and leads structured analytic sessions. The technologist builds and optimizes tools. Analysts are subject-matter experts. Leader and facilitator agree on the project timeline, focus, and applicability of small-group process. Leader and analysts clearly articulate individual performance expectations, evaluation metrics, and rewards for constructive participation. Analysts, facilitator, and technologists build and maintain the collaborative analytic workspace and identify technology needs. All participants work from the same key analytic question and establish agreed team norms and expectations for communications, dispute resolution, and allocation of individual responsibilities.
CHAPTER 5 GETTING ORGANIZED
5.1 Sorting
5.3 Matrices
Matrices are generic analytic tools for sorting and organizing data in
a manner that facilitates comparison and analysis. They are used to analyze the relationships between any two sets of variables or the interrelationships among a single set of variables. A matrix consists
of a grid with as many cells as needed for the problem under study.
Matrices are used so frequently to analyze a problem that we have
included several distinct techniques in this book.
Gantt Charts are a specific type of Process Map that uses a matrix
to chart the progression of a multifaceted process over a specific
time period. Process Maps and Gantt Charts were developed
primarily for use in business and the military, but they are also useful
to intelligence analysts.
Choose a category and sort the data within that category. Look
for any insights, trends, or oddities. Good analysts notice trends;
great analysts notice anomalies.
Create a table with the letters across the top and down the left side, as in Figure 5.2a. The results
of the comparison of each pair of items are marked in the cells of this table. Note the diagonal line
of darker-colored cells. These cells are not used, as each item is never compared with itself. The
cells below this diagonal line are not used because they would duplicate a comparison in the cells
above the diagonal line. If you are working in a group, distribute a blank copy of this table to each
participant.
Looking at the cells above the diagonal row of darkened cells, compare the item in the row with the
one in the column. For each cell, decide which of the two items is more important (or preferable or
probable). Write the letter of the winner of this comparison in the cell, and score the degree of
difference on a scale from 0 (no difference) to 3 (major difference), as in Figure 5.2a.
Consolidate the results by adding up the total of all the values for each letter and put this number in
the “Score” column. For example, in Figure 5.2a, item B has one 3 in the first row, plus one 2 and
two 1s in the second row, for a score of 7.
Finally, it may be desirable to convert these values into a percentage of the total score. To do this, divide the score for each individual item by the total of all scores (20 in the example). Item B,
with a score of 7, is ranked most important or most preferred. Item B received a score of 35 percent
(7 divided by 20), as compared with 25 percent for item D and only 5 percent each for items C and
E, which received only one vote each. This example shows how Paired Comparison captures the
degree of difference between each ranking.
To aggregate rankings received from a group of analysts, simply add the individual scores for each
analyst.
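The arithmetic above is easy to automate. The Python sketch below tallies Paired Comparison scores; item B's entries reproduce the score of 7 from Figure 5.2a, while the values for items A, D, and F are invented so that the total comes to 20, matching the example.

```python
from collections import defaultdict

# Each entry records the winner of one pairwise comparison and the
# degree of difference on the 0-3 scale described above. Only B, C, D,
# and E's totals are taken from the text; A and F are invented.
comparisons = [
    ("B", 3), ("B", 2), ("B", 1), ("B", 1),  # B: 3 + 2 + 1 + 1 = 7
    ("D", 3), ("D", 2),                      # D: 5
    ("A", 2), ("A", 1), ("A", 1),            # A: 4 (invented)
    ("F", 2),                                # F: 2 (invented)
    ("C", 1), ("E", 1),                      # C and E: one vote each
]

scores = defaultdict(int)
for winner, degree in comparisons:
    scores[winner] += degree

total = sum(scores.values())  # 20 in this example
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"Item {item}: score {score}, {100 * score / total:.0f} percent")
```

Running the sketch prints item B first with 35 percent, item D with 25 percent, and items C and E with 5 percent each, matching the worked example.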
5.2.3 The Method: Weighted Ranking
In Weighted Ranking, a specified set of criteria is used to rank items. The analyst creates a table with
items to be ranked listed across the top row and criteria for ranking these items listed down the far-left
column (see Figure 5.2b). There are a variety of valid ways to conduct this ranking. We have chosen to
present a simple version of Weighted Ranking here because analysts usually are making subjective
judgments rather than dealing with hard numbers. As you read the following steps, refer to Figure 5.2b:
Create a table with one column for each item. At the head of each column, write the name of an
item or assign it a letter to save space.
Add two more blank columns on the left side of this table. Count the number of selection criteria,
and then adjust the table so that it has that number of rows plus three more, one at the top to list
the items and two at the bottom to show the raw scores and percentages for each item. In the first
column on the left side, starting with the second row, write in all the selection criteria down the left
side of the table. There is some value in listing the criteria roughly in order of importance, but that is
not critical. Leave the bottom two rows blank for the scores and percentages.
Now work down the second column, assigning weights to the selection criteria based on their
relative importance for judging the ranking of the items. Depending on how many criteria are listed,
take either 10 points or 100 points and divide these points among the selection criteria based on
what the analysts believe to be their relative importance in ranking the items. In other words, decide
what percentage of the decision should be based on each of these criteria. Be sure that the weights
for all the selection criteria combined add up to either 10 or 100, whichever is selected. The criteria
should be phrased in such a way that a higher weight is more desirable. For example, a proper
phrase would be “ability to adapt to changing conditions,” and an improper phrase would be
“sensitivity to changing conditions,” which does not indicate whether the entity reacts well or poorly.
Work across the rows to write the criterion weight in the left side of each cell.
Next, work across the matrix one row (selection criterion) at a time to evaluate the relative ability of each of the items to satisfy that selection criterion. Use a ten-point rating scale, where 1 = low and 10
= high, to rate each item separately. (Do not spread the ten points proportionately across all the
items as was done to assign weights to the criteria.) Write this rating number after the criterion
weight in the cell for each item.
Again, work across the matrix one row at a time to multiply the criterion weight by the item rating for
that criterion, and enter this number for each cell, as shown in Figure 5.2b.
Figure 5.2B Weighted Ranking Matrix
Now add the columns for all the items. The result will be a ranking of the items from highest to
lowest score. To gain a better understanding of the relative ranking of one item as compared with
another, convert these raw scores to percentages. To do this, first add together all the scores in the
“Totals” row to get a total number. Then divide the score for each item by this total score to get a
percentage ranking for each item. All the percentages together must add up to 100 percent. In
Figure 5.2b, it is apparent that item B has the number one ranking (with 20.3 percent), while item E
has the lowest (with 13.2 percent).
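For readers who prefer to see the bookkeeping spelled out, here is a minimal Python sketch of the Weighted Ranking computation. The items, criteria, weights, and ratings are invented for illustration and are not taken from Figure 5.2b.

```python
# Criterion weights divide 100 points according to relative importance.
criteria_weights = {"cost": 40, "reliability": 35, "speed": 25}

# Each item is rated 1 (low) to 10 (high) against each criterion.
ratings = {
    "A": {"cost": 6, "reliability": 7, "speed": 5},
    "B": {"cost": 9, "reliability": 8, "speed": 7},
    "C": {"cost": 4, "reliability": 6, "speed": 8},
}

# Multiply weight by rating in each cell, then sum each item's column.
totals = {
    item: sum(criteria_weights[c] * r[c] for c in criteria_weights)
    for item, r in ratings.items()
}

# Convert raw scores to percentages of the grand total.
grand_total = sum(totals.values())
for item, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"Item {item}: raw score {score}, "
          f"{100 * score / grand_total:.1f} percent")
```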
Potential Pitfalls
When any of these techniques is used to aggregate the opinions of a
group of analysts, the rankings provided by each group member are
totaled and averaged. This means that the opinions of the outliers,
whose views are quite different from the others, are blended into the
average. As a result, the ranking does not show the range of
different opinions that might be present in a group. In some cases,
the identification of outliers with a minority opinion can be of great
value. Further research might show that the outliers are correct.
Relationship to Other Techniques
Some form of ranking, scoring, or prioritizing is commonly used with
Cluster Brainstorming, Mind Mapping, Nominal Group Technique,
and the Decision Matrix, all of which generate ideas that should be
evaluated or prioritized. Applications of the Delphi Method may also
generate ideas from outside experts that need to be evaluated or
prioritized.
Origins of This Technique
Ranking, Scoring, and Prioritizing are common analytic processes in
many fields. All three forms of ranking described here are based
largely on internet sources. For Ranked Voting, we referred to
http://en.wikipedia.org/wiki/Voting_system; for Paired Comparison,
http://www.mindtools.com; and for Weighted Ranking,
www.ifm.eng.cam.ac.uk/dstools/choosing/criter.html.
5.3 MATRICES
A matrix is an analytic tool for sorting and organizing data in a
manner that facilitates comparison and analysis. It consists of a
simple grid with as many cells as needed for the problem being
analyzed.
The Impact Matrix (chapter 10) uses a matrix to chart the impact
a new policy or decision is likely to have on key players and how
best to manage that impact.
When to Use It
Matrices are used to analyze the relationships between any two sets
of variables or the interrelationships between a single set of
variables. Among other things, matrices enable analysts to organize large bodies of data, compare items against a common set of criteria, and spot patterns and gaps.
A matrix is such an easy and flexible tool to use that it should be one
of the first tools analysts think of when dealing with a large body of
data. One limiting factor in the use of matrices is that information
must be organized along only two dimensions.
Value Added
Matrices provide a visual representation of a complex set of data. By
presenting information visually, a matrix enables analysts to deal
effectively with more data than they could manage by juggling
various pieces of information in their head. The analytic problem is
broken down into component parts so that each part (that is, each
cell in the matrix) can be analyzed separately, while ideally
maintaining the context of the problem in its entirety.
Figure 5.3, “Rethinking the Concept of National Security: A New Ecology,” is an example of a complex
matrix that not only organizes data but also tells its own analytic story.3 It shows how the concept of
national security has evolved over recent decades and suggests that the way we define national
security will continue to expand in the coming years. In this matrix, threats to national security are
arrayed along the vertical axis, beginning at the top with the most traditional actor, the nation-state. At
the bottom end of the spectrum are systemic threats, such as infectious diseases or threats that “have
no face.” The top row of the matrix presents the three primary mechanisms for dealing with threats:
military force, policing and monitoring, and collaboration. The cells in the matrix provide historic
examples of how the three different mechanisms of engagement have dealt with the five different
sources of threats. The top-left cell (dark color) presents the classic case of using military force to
resolve nation-state differences. In contrast, at the bottom-right corner, various actors deal with systemic
threats, such as the outbreak of a pandemic, by collaborating with one another.
Classic definitions of national security focus on the potential for conflicts involving nation-states. The
top-left cell lists three military operations. In recent decades, the threat has expanded to include threats
posed by subnational actors as well as terrorist and other criminal organizations. Similarly, the use of
peacekeeping and international policing has become more common than in the past. This shift to a
broader use of the term “national security” is represented by the other five cells (medium color) in the
top left of the matrix. The remaining cells (light color) to the right and at the bottom of the matrix
represent how the concept of national security is continuing to expand as the world becomes
increasingly globalized.
By using a matrix to present the expanding concept of national security in this way, one sees that
patterns relating to how key players collect intelligence and share information vary along the two primary
dimensions. In the upper left of Figure 5.3, the practice of nation-states is to seek intelligence on their
adversaries, classify it, and protect it. As one moves diagonally across the matrix to the lower right, this
practice reverses. In the lower right of this figure, information is usually available from unclassified
sources and the imperative is to disseminate it to everyone as soon as possible. This dynamic can
create serious tensions at the midpoint, for example, when those working in the homeland security
arena must find ways to share sensitive national security information with state and local law
enforcement officers.
Origins of This Technique
The description of this basic and widely used technique is from
Pherson Associates, LLC, training materials. The national security
matrix was developed by Randolph H. Pherson.
5.4 PROCESS MAPS
Process Mapping is an umbrella term that covers a variety of
procedures for identifying and depicting the steps in a complex
procedure. It includes Flow Charts of various types (Activity Flow
Charts, Commodity Flow Charts, Causal Flow Charts), Relationship
Maps, and Value Stream Maps commonly used to assess and plan
improvements for business and industrial processes. Process Maps
or Flow Charts are diagrams that analysts use to track the step-by-
step movement of events or commodities to identify key pathways
and relationships involved in a complex system or activity. They
usually consist of various types of boxes connected by arrows that illustrate the direction of flow through the process.
When to Use It
Flow Charts can track the flow of money and other commodities, the
movement of goods through production and delivery, the sequencing
of events, or a decision-making process. They help analysts
document, study, plan, and communicate complex processes with
simple, easy-to-understand diagrams. In law enforcement and
intelligence, they are used widely to map criminal and terrorist
activity.
Value Added
Flow Charts can identify critical pathways and choke points and
discover unknown relationships. In law enforcement, they help
analysts probe the meaning of various activities, identify alternative
ways a task can be accomplished, better understand how events are
related to one another, and focus attention on the most critical
elements of the process under study or investigation.
The Method
The process is straightforward.
When entering tasks into the Gantt Chart, enter the sequential tasks
first in the required sequence. Ensure that those tasks don’t start
until the tasks on which they depend are completed. Then enter the
parallel tasks in an appropriate time frame toward the bottom of the
matrix so that they do not interfere with the sequential tasks on the
critical path to completion of the project.
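As a rough illustration of this layout, the Python sketch below draws a simple Gantt Chart with matplotlib. The tasks, day offsets, and durations are hypothetical, loosely echoing the attack-planning activities in Figure 5.5; sequential tasks are listed first and parallel tasks afterward, as described above.

```python
import matplotlib.pyplot as plt

# (task, start_day, duration_days). Sequential tasks come first; each
# starts only after the task it depends on ends. Parallel tasks follow.
tasks = [
    ("Gather information", 0, 30),
    ("Train", 30, 25),
    ("Surveil target", 55, 20),
    ("Execute", 80, 1),
    ("Recruit members", 20, 40),    # parallel task
    ("Assemble material", 60, 15),  # parallel task
]

fig, ax = plt.subplots()
for row, (_, start, duration) in enumerate(tasks):
    ax.barh(row, duration, left=start)  # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()  # put the first task at the top, as in a Gantt Chart
ax.set_xlabel("Days from project start")
plt.tight_layout()
plt.show()
```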
Gantt Charts that map a generic process can also track data about a
more specific process as it is received. For example, the Gantt Chart
depicted in Figure 5.5 can be used as a template over which new
information about a specific group’s activities could be layered using
a different color or line type. Layering in the specific data allows an
analyst to compare what is expected with the actual data. The chart can then be used to identify and narrow gaps or anomalies in the data and even
identify and challenge assumptions about what is expected or what
is happening. The analytic significance of considering such
possibilities can mean the difference between anticipating an attack
and wrongly assuming a lack of activity means a lack of intent. The
matrix illuminates the gap and prompts the analyst to consider
various explanations.
Origins of This Technique
The first Gantt Chart was devised in the mid-1890s by Karol Adamiecki, a Polish engineer who ran a
steelworks in southern Poland.6 Some fifteen years later, Henry Gantt, an American engineer and
project management consultant, created his own version of the chart. His chart soon became popular in
Western countries, and Henry Gantt’s name became associated with this type of chart. Gantt Charts
were a revolutionary advance in the early 1900s. During the period of industrial development, they were
used to plan industrial processes and still are in common use today. Information on how to create and
use Gantt Charts is available at www.ganttchart.com and https://www.projectmanager.com/gantt-chart.
Figure description: The cells in each row show the impact of the variable represented by that row on each of the variables listed across the top of the matrix; the cells in each column show the impact of each variable listed down the left side on the variable represented by the column.
Figure description: The matrix consists of eight columns (A, B, C, D, E, F, Score, and Percentage) and six rows (A through F).
Figure 5.5 description: The Gantt chart shows bars spanning different time periods before the day of attack for different activities. Intent (conceiving the idea, holding meetings, deciding to act) occurs two years before the attack. Planning (gathering information, surveilling, recruiting, making bombs) takes place between two years and one year before the day of attack. Preparation activities run as follows: training, from one year to six months before the attack; surveilling, from six months before the attack up to the day itself; assembling the material, from six months to between eight and four weeks before the attack; issuing the attack order, on the day of attack. The attack itself (positioning and detonating the bomb) takes place on the day of the attack, and the aftermath (fleeing the scene, claiming credit) occurs on the day of the attack or the day after.
CHAPTER 6 EXPLORATION
TECHNIQUES
New ideas, and the combination of old ideas in new ways, are
essential elements of effective analysis. Some structured techniques
are specifically intended for the purpose of exploring concepts and
eliciting ideas at various stages of a project, and they are the topic of
this chapter. Exploration Techniques help analysts create new
approaches and transform existing ideas, knowledge, and insights
into something novel but meaningful. They help analysts identify
potential new sources of information or reveal gaps in data or
thinking and provide a fresh perspective on long-standing issues.
What is the meaning of the word or what words come to mind when you look at the picture?
Figure 6.1 Engaging All Participants Using the Index Card Technique
Ask yourself how the word or picture could be related to the issue or problem. For example, when
considering who a newly elected prime minister or president might appoint to a particularly tough
Cabinet job, the word “basketball” or a picture of a basketball court might generate ideas such as team
game, team player, assigned position, coaching, player development, winning, hoops and balls, time
outs, periods of play, or winners and losers. Each of those terms could trigger subsequent ideas about
the type of official who would be “ideal” for the job.
A freewheeling, informal group discussion session will often generate new ideas, but a structured group
process is more likely to succeed in helping analysts overcome existing mindsets, “empty the bottom of
the barrel of the obvious,” and come up with fresh insights and ideas.
Paper Recording.
Paper Recording is a silent technique that is a variant of Nominal Group Technique (discussed later in
this chapter). This group method requires each participant to write down one to four ideas on the topic
on a piece of paper initially distributed by the facilitator or organizer.
After jotting down their ideas, participants place their papers in a central “pick-up point.” They select
a piece of paper (not their own) from the “pick-up point” and add to the ideas on the paper.
The process continues until participants run out of ideas. Participants can hold up or “play” a
colored card to note when they have exhausted their ideas.
Pass out self-stick notes and marker-type pens to all participants. A group of ten to twelve
contributors works best. Tell those present that no one can speak except the facilitator during the
initial collection phase of the exercise.
Write the question you are brainstorming on a whiteboard or easel. The objective is to make the
question as open-ended as possible. For example, Cluster Brainstorming focal questions often
begin with “What are all the (things/forces and factors/circumstances) that would help explain . . .?”
Ask the participants to write down their responses to the question on self-stick notes. Participants
are asked to capture the concept with a few key words that will fit on the self-stick note. The
facilitator then collects the self-stick notes from the participants.
After an initial quiet time of two minutes, the facilitator begins to read the self-stick notes out loud.
The quiet time allows participants time to reflect and process before the facilitator starts talking
again. The facilitator puts all the self-stick notes on the wall or a whiteboard as he/she reads them
aloud (see Figure 6.2). All ideas are treated the same. Participants are urged to listen to and build
on one another’s ideas.
Usually there is an initial spurt of ideas followed by pauses as participants contemplate the
question. After five or ten minutes, expect a long pause of a minute or so. This slowing down
suggests that the group has “emptied the barrel of the obvious” and is now on the verge of coming
up with some fresh insights and ideas. Facilitators should not talk during this pause, even if the
silence is uncomfortable.
After a couple of long pauses, facilitators conclude this divergent stage of the brainstorming
process. They then ask a subset of the group to go up to the board and silently arrange the self-
stick notes into affinity groups (basically grouping the ideas by like concept). The group should
arrange the self-stick notes in clusters, not in rows or columns. The group should avoid putting the
self-stick notes into obvious groups like “economic” or “political.” Group members cannot talk while
they are doing this. If one self-stick note seems to “belong” in more than one group, make a copy
and place one self-stick note in each affinity group.
If the group has many members, those who are not involved in arranging the self-stick notes should
be asked to perform a different task. For example, the facilitator could make copies of several of the
self-stick notes that were considered outliers because the group at the board has not fit them into
any obvious group or the items evoked laughter when read aloud. One of these outliers is given to
each table or to each smaller group of participants, who then are asked to explain how that outlier
relates to the primary task.
When the group at the board seems to slow down organizing the notes, ask a second small group
to go to the board and review how the first group arranged the self-stick notes. The second group
cannot speak, but members are encouraged to continue to rearrange the notes into more coherent
groups. Usually the exercise should generate five to ten affinity groupings on the board.
When all the self-stick notes have been arranged, members of the group at the board can converse
among themselves to pick a word or phrase that best describes each affinity grouping.
Pay attention to outliers or self-stick notes that do not belong in a particular group. Such an outlier
could either be useless noise or, more likely, contain a gem of an idea that deserves further
elaboration as a theme. If outlier self-stick notes were distributed earlier in the exercise, ask each
group to explain how that outlier is relevant to the issue.
To identify the potentially most useful ideas, the facilitator or group leader should establish up to five
criteria for judging the value or importance of the ideas. If so desired, then use the Ranking,
Scoring, and Prioritizing technique, described in chapter 5, for voting on or ranking or prioritizing
ideas. Another option is to give each participant ten votes. Tell them to come up to the whiteboard
or easel with a marker and place their votes. They can place ten votes on a single self-stick note or
affinity group label, one vote each on ten self-stick notes or affinity group labels, or any combination
in between. Tabulate the votes.
Assess what you have accomplished and what areas will need more work or more brainstorming.
Then ask the group, “What do you see now that you did not see before?” Review the key ideas or
concepts as well as new areas that need more work or further brainstorming.
Set priorities based on the voting results, decide on the next steps for analysis, and develop an
action plan.
Setup. Draw a circle and write the following words around the circle: Who, What, How, When,
Where, and Why (see Figure 6.4). The order follows how a declarative sentence is written in
English (who did what, for what reason). The What and the How are next to each other because
they often overlap conceptually. In the middle of the circle, write “So What?”
Define the Topic. Begin by asking the group to validate the discussion topic and agree on the
objectives of the session.
Seek Answers. Systematically work through each question asking the group to “shout out” what
they know about the topic as it relates to each question. Write down the answers.
Reflect. After going around the circle, ask the group to reflect on their responses and suggest
which of the questions appears most important to the analysis.
Prioritize Results. In some cases, highlight or prioritize the responses to the most important
questions.
So What? Initiate a final discussion to explore the So What? question in the center of the circle.
Final Product. Capture all the input on a single graphic and distribute it to participants and others
interested in the topic for comment.
Relationship to Other Techniques
Circleboarding™ is a simpler version of Starbursting (presented next
in this chapter) that focuses on exploring the answers to—rather
than generating questions related to—the journalist’s classic list of
queries: Who, What, How, When, Where, and Why. Ranking,
Scoring, and Prioritizing (chapter 5) can be helpful in prioritizing the
clusters of answers that result.
Origins of This Technique
Circleboarding™ was developed by Pherson Associates as an
alternative to Starbursting. The technique can be particularly helpful
for analysts working in law enforcement, medicine, and financial
intelligence investigating money laundering, who focus mostly on
generating tactical analytic products and are often pressed for time.
6.5 STARBURSTING
Starbursting is a form of brainstorming that focuses on generating
questions rather than eliciting ideas or answers. It uses the six
questions commonly asked by journalists: Who? What? How?
When? Where? and Why?
When to Use It
Use Starbursting to help define your research requirements and
identify information gaps. After deciding on the idea, topic, or issue
to be analyzed, brainstorm to identify the questions that need to be
answered by the research. Asking the right questions is a common
prerequisite to finding the right answer.
Value Added
Starbursting uses questions commonly asked by journalists (Who,
What, How, When, Where, and Why) to spur analysts and analytic
teams to ask the questions that will inevitably arise as they present
their findings or brief their clients. In thinking about the questions,
analysts will often discover new and different ways of combining
ideas.
Often only three or four of the words are directly relevant to the
intelligence question. In cyber analysis, the Who is often difficult to
define. For some words (for example, When or Where), the answer
may be a given and not require further exploration.
Do not try to answer the questions as they are identified; just focus
on developing as many questions as possible. After generating
questions that start with each of the six words, ask the group either
to prioritize the questions to be answered or to sort the questions
into logical categories. Figure 6.5 is an example of a Starbursting
diagram. It identifies questions to be asked about a terrorist group
that launched a biological attack in a subway.
Relationship to Other Techniques
This Who, What, How, When, Where, and Why approach can be combined effectively with the Issue
Redefinition and Getting Started Checklist methods mentioned in the introduction to chapter 5 and
described more fully in Critical Thinking for Strategic Intelligence.3 Ranking, Scoring, and Prioritizing
(chapter 5) is helpful in prioritizing the questions to be worked on. Starbursting is also directly related to
several Diagnostic Techniques discussed in chapter 7, such as the Key Assumptions Check. In addition,
it can be used to order the analytic process used in Indicators Validation in chapter 9.
Mind Maps and Concept Maps can be used by an individual or a group to help sort out ideas and achieve a shared understanding of key concepts. By getting the ideas down on paper or a computer screen, the individual or group is better able to remember, critique, and modify them.
Mind Maps can help analysts brainstorm the various elements or key players involved in an issue and
how they might be connected. The technique stimulates analysts to ask questions such as, Are there
additional categories or “branches on the tree” that we have not considered? Are there elements of this
process or applications of this technique that we have failed to capture? Does the Mind Map suggest a
different context for understanding the problem?
Analysts and students find that Mind Maps and Concept Maps are useful tools for taking notes during an
oral briefing or lecture. By developing a map as the lecture proceeds, the analyst or student can chart
the train of logic and capture all the data presented in a coherent map with all key elements on a single
page. The map also provides a useful vehicle for explaining to someone else in an organized way what
was presented in the briefing or lecture.
Concept Maps are also frequently used as teaching tools. Most recently, they have been effective in
developing “knowledge models,” in which large sets of complex Concept Maps are created and
hyperlinked together to represent analyses of complex domains or problems.
Value Added
Mapping facilitates the presentation or discussion of a complex body
of information. It is useful because it presents a considerable amount of information in a form that can be taken in at a glance. Creating a
visual picture of the basic structure of a complex problem helps
analysts be as clear as possible in stating precisely what they want
to express. Diagramming skills enable analysts to stretch their
analytic capabilities.
Mind Maps and Concept Maps vary greatly in size and complexity
depending on how and why they are used. When used for structured
analysis, a Mind Map or a Concept Map is typically larger,
sometimes much larger, than the examples shown in this chapter.
After having defined the problem, the group should be better able to
identify what further research needs to be done and able to parcel
out additional work among the best-qualified members of the group.
The group should also be better able to prepare a report that
represents as fully as possible the collective wisdom of the group.
Mind Maps and Concept Maps help mitigate the impact of several
cognitive biases and misapplied heuristics, including the tendency to
provide quick and easy answers to difficult questions (Mental
Shotgun), to predict rare events based on weak evidence or
evidence that easily comes to mind (Associative Memory), and to
select the first answer that appears “good enough” (Satisficing).
They can also provide an antidote to intuitive traps such as
Overinterpreting Small Samples, Ignoring the Absence of
Information, Confusing Causality and Correlation, and overrating the
role of individual behavior and underestimating the importance of
structural factors (Overestimating Behavioral Factors).
The Method
Start a Mind Map or Concept Map with an inclusive question that
focuses on defining the issue or problem. Then follow these steps:
While building all the links among the concepts and the focal
question, look for and enter cross-links among concepts.
Mind Mapping has only one main or central idea, and all other
ideas branch off from it radially in all directions. The central idea
is preferably shown as an image rather than in words, and
images are used throughout the map. Around the central word,
draw the five or ten main ideas that relate to that word. Then
take each of these first-level words and again draw the five or
ten second-level ideas that relate to each of the first-level
words.6
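Structurally, a Mind Map of this kind is a tree radiating from one central idea. The Python sketch below represents such a map as nested dictionaries and lists and prints its branches; the topic and branch labels are invented, loosely following the Cabinet-appointment example earlier in this chapter.

```python
# A hypothetical Mind Map: one central idea, first-level branches as
# dictionary keys, second-level ideas as list entries.
mind_map = {
    "Ideal Cabinet appointee": {
        "team player": ["accepts assigned position", "shares credit"],
        "coaching": ["develops subordinates", "sets game plans"],
        "winning": ["track record", "handles losses gracefully"],
    }
}

def print_map(node, depth=0):
    """Recursively print the branches radiating from the central idea."""
    for label, children in node.items():
        print("  " * depth + "- " + label)
        if isinstance(children, dict):
            print_map(children, depth + 1)
        else:
            for leaf in children:
                print("  " * (depth + 1) + "- " + leaf)

print_map(mind_map)
```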
Mind Maps and Concept Maps have been given new life by the
development of software that makes them more useful and easier to
use. For information on Mind Mapping, see Tony and Barry Buzan,
The Mind Map Book (Essex, England: BBC Active, 2006).
Information on Concept Mapping is available at
http://cmap.ihmc.us/conceptmap.html. For an introduction to
mapping in general, visit http://www.inspiration.com/visual-
learning/mind-mapping.
6.7 VENN ANALYSIS
Venn Analysis is a visual technique that analysts can use to explore the logic of arguments. Venn
diagrams consist of overlapping circles and are commonly used to teach set theory in mathematics.
Each circle encompasses a set of objects; objects that fall within the intersection of circles are members
of both sets. A simple Venn diagram typically shows the overlap between two sets. Circles can also be
nested within one another, showing that one thing is a subset of another in a hierarchy. Figure 6.7a is an
example of a Venn diagram. It describes the various components of critical thinking and how they
combine to produce synthesis and analysis.
Venn diagrams can illustrate simple sets of relationships in analytic arguments; we call this process
Venn Analysis. When applied to argumentation, it can reveal invalid reasoning. For example, in Figure
6.7b, the first diagram shows the flaw in the following argument: Cats are mammals, dogs are
mammals; therefore, dogs are cats. As the diagram shows, neither dogs nor cats are a subset of the other; they are distinct sets, though both belong to the larger set of mammals.
Venn Analysis can also validate the soundness of an argument. For example, the second diagram in
Figure 6.7b shows that the argument that cats are mammals, tigers are cats, and therefore tigers are
mammals is true.
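Because Venn Analysis is set logic, the same reasoning can be checked directly with Python's set operations. In this minimal sketch the set members are invented placeholders; only the subset relationships matter.

```python
cats = {"tabby", "tiger"}
dogs = {"beagle", "husky"}
mammals = cats | dogs | {"whale"}

# Invalid argument: cats and dogs are both subsets of mammals ...
assert cats <= mammals and dogs <= mammals
# ... but that does not make dogs a subset of cats.
assert not dogs <= cats

# Valid argument: tigers are cats, cats are mammals,
# so tigers are mammals (the subset relation is transitive).
tigers = {"tiger"}
assert tigers <= cats and cats <= mammals
assert tigers <= mammals
```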
Ask whether there are elements within the circles that should be
added.
Using Venn Analysis, the analyst would first draw a large circle to represent the autocratic state of
Zambria, a smaller circle within it representing state-owned enterprises, and small circles within that
circle to represent individual state-owned companies. A mapping of all Zambrian corporations would
include more circles to represent other private-sector companies as well as a few that are partially state-
owned (see Figure 6.7c). Simply constructing this diagram raises several questions, such as What
percentage of companies are state-owned, partially state-owned, or privately owned? Do the
percentages correspond to the size of the circles? What is the definition of “state-owned”? What does
“state-owned” imply politically and economically in a nondemocratic state like Zambria? Do these
distinctions even matter? The answer to this last question would determine whether Zambrian
companies should be represented by one, two, or three circles.
The diagram should prompt the analyst to examine several assumptions implicit in the questions:
Should we assume that state-owned enterprises would act contrary to U.S. interests?
Would private-sector companies be less hostile or even supportive of free enterprise and U.S.
interests?
Each of these assumptions, made explicit by the Venn diagram, can now be challenged. For example,
the original statement may have oversimplified the problem. Perhaps the threat may be broader than
just state-owned enterprises. Could Zambria exert influence through its private companies or private-
public partnerships?
The original statement refers to investment in U.S. port infrastructure projects by companies in Zambria.
The overlap in the two circles in Figure 6.7d depicts which of these companies are investing in U.S. port
infrastructure projects. The Venn diagram shows that only private Zambrian companies and one private-
public partnership are investing in U.S. port infrastructure projects. Additional circles could be added
showing the level of investment in U.S. port infrastructure improvement projects by companies from
other countries. The large circle enveloping all the smaller circles would represent the entirety of foreign
investment in all U.S. infrastructure projects. By including this large circle, readers get a sense of what
percentage of total foreign investment in U.S. infrastructure is targeted on port improvement projects.
Can we assume Zambrian investors are the biggest investors in U.S. port infrastructure projects?
Who would be their strongest competitors? How much influence does their share of the investment
pie give them?
Can we estimate the overall amount of Zambrian investment in infrastructure projects and where it
is directed? If Zambrian state-owned companies are investing in other U.S. projects, are the
investments related only to companies doing work in the United States or also overseas?
Do we know the relative size of current port infrastructure improvement projects in the United States
as a proportion of all port improvement investments being made globally? What percentage of
foreign companies are working largely overseas in projects to improve port infrastructure or critical
infrastructure as more broadly defined? What percentage of these companies are functioning as
global enterprises or as state-owned or state-controlled companies?
Many analytic arguments highlight differences and trends, but it is important to put these differences into
perspective before becoming too engaged in the argument. By using Venn Analysis to examine the
relationships between the overlapping areas of the Venn diagram, analysts have a more rigorous base
from which to organize and develop their arguments. In this case, if the relative proportions are correct,
the Venn Analysis would reveal a more important national security concern: Zambria’s position as an
investor in U.S. port infrastructure improvement projects could give it significant economic as well as
military advantage.
Network Charting.
The key to good Network Analysis is to begin with a good chart. An example of such a chart is Figure
6.8a, which shows the terrorist network behind the attacks of September 11, 2001. It was compiled by
networks researcher Valdis E. Krebs using data available from news sources on the internet in early
2002.
Several tried and true methods for making good charts will help the analyst to save time, avoid
unnecessary confusion, and arrive more quickly at insights. Network charting usually involves the
following steps:
Identify at least one reliable source or stream of data to serve as a beginning point.
Identify each node and interaction by some criterion that is meaningful to your analysis. These
criteria often include frequency of contact, type of contact, type of activity, and source of
information.
Draw the connections between nodes—connect the dots—on a chart by hand, using a computer
drawing tool, or using Network Analysis software. If you are not using software, begin with the
nodes that are central to your intelligence question. Make the map more informative by presenting
each criterion in a different color or style or by using icons or pictures. A complex chart may use all
these elements on the same link or node. The need for additional elements often happens when the
intelligence question is murky (for example, when “I know something bad is going on, but I don’t
know what”); when the chart is being used to answer multiple questions; or when a chart is
maintained over a long period of time.
Work out from the central nodes, adding links and nodes until you run out of information from the
good sources.
Add nodes and links from other sources, constantly checking them against the information you
already have. Follow all leads, whether they are people, groups, things, or events, regardless of
source. Make note of the sources.
Stop in these cases: When you run out of information, when all the new links are dead ends, when
all the new links begin to turn in and start connecting with one another like a spider’s web, or when
you run out of time.
Update the chart and supporting documents regularly as new information becomes available, or as
you have time. Just a few minutes a day will pay enormous dividends.
Rearrange the nodes and links so that the links cross over one another as little as possible. This is
easier if you are using software. Many software packages can rearrange the nodes and links in
various ways.
Cluster the nodes. Do this by looking for “dense” areas of the chart and relatively “empty” areas.
Draw shapes around the dense areas. Use a variety of shapes, colors, and line styles to denote
different types of clusters, your relative confidence in the cluster, or any other criterion you deem
important.
Label each cluster according to the common denominator among the nodes it contains. In doing
this, you will identify groups, events, activities, and/or key locations. If you have in mind a model for
groups or activities, you may be able to identify gaps in the chart by what is or is not present that
relates to the model.
Look for “cliques”—a group of nodes in which every node is connected to every other node in the
group, though not to many nodes outside the group. These groupings often look like stars or
pentagons. In the intelligence world, they often turn out to be clandestine cells.
Look in the empty spaces for nodes or links that connect two clusters. Highlight these nodes with
shapes or colors. These nodes are brokers, facilitators, leaders, advisers, media, or some other key
connection that bears watching. They also point to where the network is susceptible to disruption.
Chart the flow of activities between nodes and clusters. You may want to use arrows and time
stamps. Some software applications will allow you to display dynamically how the chart has
changed over time.
Analyze this flow. Does it always go in one direction or in multiple directions? Are the same or
different nodes involved? How many different flows are there? What are the pathways? By asking
these questions, you can often identify activities, including indications of preparation for offensive
action and lines of authority. You can also use this knowledge to assess the resiliency of the
network. If one node or pathway were removed, would there be alternatives already built in?
Network Analysis.
Figure 6.8b is a modified version of the 9/11 hijacker network depicted in Figure 6.8a. It identifies the
different types of clusters and nodes discussed under Network Analysis. Cells appear as star-like or pentagon-like line configurations; potential cells are circled; and the large diamond surrounds a cluster of cells. Brokers are shown as nodes overlaid with small
hexagons. Note the broker in the center. This node has connections to all but one of the other brokers.
This is a senior leader: Al-Qaeda’s former head of operations in Europe, Imad Eddin Barakat Yarkas.
Degree centrality: This is measured by the number of direct connections that a node has with other
nodes. In the network depicted in Figure 6.8c, Deborah has the most direct connections. She is a
“connector” or a “hub” in this network.
Betweenness centrality: Helen has fewer direct connections than does Deborah, but she plays a
vital role as a “broker” in the network. Without her, Ira and Janice would be cut off from the rest of
the network. A node with high “betweenness” has great influence over what flows—or does not flow
—through the network.
Closeness centrality: Frank and Gary have fewer connections than does Deborah, yet the pattern of
their direct and indirect ties allows them to access all the nodes in the network more quickly than
anyone else. They are in the best position to monitor all the information that flows through the
network.
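All three measures are easy to compute with standard Social Network Analysis software. The Python sketch below uses the networkx library; the edge list is a hypothetical toy network that mimics, but does not reproduce, the structure described for Figure 6.8c.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Deborah", "Alice"), ("Deborah", "Bob"), ("Deborah", "Carol"),
    ("Deborah", "Frank"), ("Deborah", "Gary"),
    ("Frank", "Helen"), ("Gary", "Helen"),
    ("Helen", "Ira"), ("Ira", "Janice"),
])

# Deborah has the most direct connections in this toy network.
print("Degree:", nx.degree_centrality(G))
# Helen brokers the only paths to Ira and Janice.
print("Betweenness:", nx.betweenness_centrality(G))
# Closeness reflects how quickly a node can reach all the others.
print("Closeness:", nx.closeness_centrality(G))
```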
Potential Pitfalls
This method is extremely dependent upon having at least one good
source of information. It is hard to know when information may be
missing, and the boundaries of the network may be fuzzy and
constantly changing, in which case it is difficult to determine whom to
include. The constantly changing nature of networks over time can
cause information to become outdated. You can be misled if you do
not constantly question the data being entered, update the chart
regularly, and look for gaps and consider their potential significance.
You should never rely blindly on the SNA software but strive to
understand how the application works. As is the case with any
software, different applications measure different things in different
ways, and the devil is always in the details.
Origins of This Technique
This is an old technique transformed by the development of sophisticated software programs for
organizing and analyzing large databases. Each of the following sources has made significant
contributions to the description of this technique: Valdis E. Krebs, “Social Network Analysis, An
Introduction,” www.orgnet.com/sna.html; Krebs, “Uncloaking Terrorist Networks,” First Monday 7, no. 4
(April 1, 2002), https://firstmonday.org/ojs/index.php/fm/article/view/941/863;
Robert A. Hanneman, “Introduction to Social Network Methods,” Department of Sociology, University of
California, Riverside,
http://faculty.ucr.edu/~hanneman/nettext/C1_Social_Network_Data.html#Populations; Marilyn B.
Peterson, Defense Intelligence Agency, “Association Analysis,” unpublished manuscript, n.d., used with
permission of the author; Cynthia Storer and Averill Farrelly, Pherson Associates, LLC; and Pherson
Associates training materials.
6. Tony Buzan, The Mind Map Book, 2nd ed. (London: BBC Books,
1995).
Figure 6.5 description (questions about the subway biological attack):
What? Exactly what agent was used? Was it lethal enough to cause that many casualties?
How? How was the agent dispersed? Did the perpetrators have to
be present?
Where? Does the choice of this site send a particular message? Did
the location help in dispersing the agent?
When? Was the time of day significant? Was the date on a particular
anniversary?
Why? What was the intent? To incite terror, cause mass casualties,
or cause economic disruption? Did the attack help advance a
political agenda?
Figure 6.7b description: In the first diagram, two ellipses, labeled cats and dogs, are within a large ellipse, labeled mammals. Text reads, "Cats are mammals. Dogs are mammals. Therefore, dogs are cats."
The first part of this chapter describes techniques for challenging key
assumptions, establishing analytic baselines, and identifying the
relationships among the key variables, drivers, or players that may
influence the outcome of a situation. These and similar techniques
allow analysts to imagine new and alternative explanations for their
subject matter.
When tracking the same topic or issue over time, analysts should
consider reassessing their key assumptions on a periodic basis,
especially following a major new development or surprising event. If,
on reflection, one or more key assumptions no longer appear to be
well-founded, analysts should inform key policymakers or corporate
decision makers working that target or issue that a foundational
construct no longer applies or is at least doubtful.
Value Added
Preparing a written list of one’s working assumptions at the
beginning of any project helps the analyst do the following:
Could it have been true in the past but not any longer?
There is a particularly noteworthy interaction between Key Assumptions Check and ACH. Key
assumptions need to be included as “evidence” in an ACH matrix to ensure that the matrix is an
accurate reflection of the analyst’s thinking. Assumptions often emerge during a discussion of relevant
information while filling out an ACH matrix. This happens when an analyst assesses the consistency or
inconsistency of an item of evidence with a hypothesis and concludes that the designation is dependent
upon something else—usually an assumption. Classic Quadrant Crunching™ (chapter 8) and Simple
Scenarios, the Cone of Plausibility, and Reversing Assumptions (chapter 9) all use assumptions and
their opposites to generate multiple explanations or outcomes.
Origins of This Technique
Although assumptions have been a topic of analytic concern for a
long time, the idea of developing a specific analytic technique to
focus on assumptions did not occur until the late 1990s. The
discussion of Key Assumptions Check in this book is from Randolph
H. Pherson, Handbook of Analytic Tools and Techniques, 5th ed.
(Tysons, VA: Pherson Associates, LLC, 2019).
7.2 CHRONOLOGIES AND TIMELINES
A Chronology is a list that places events or actions in the order in
which they occurred, usually in narrative or bulleted format. A
Timeline is a graphic depiction of those events put in context of the
time of the events and the time between events. Both are used to
identify trends or relationships among the events or actions and, in
the case of a Timeline, among the events and actions as well as
other developments in the context of the overarching intelligence
problem.
When to Use It
Chronologies and Timelines aid in organizing events or actions. The
techniques are useful whenever analysts need to understand the
timing and sequence of relevant events. They can also reveal
significant events or important gaps in the available information. The
events may or may not have a cause-and-effect relationship.
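In software terms, a Chronology is simply a date-sorted list of events, and unusually long intervals between entries can flag possible gaps in the information. The Python sketch below illustrates the idea; the events, dates, and the 60-day threshold are all invented.

```python
from datetime import date

events = [
    (date(2019, 3, 2), "Suspect rents storage unit"),
    (date(2019, 1, 15), "First contact with facilitator"),
    (date(2019, 6, 20), "Large cash withdrawal"),
]

events.sort()  # place events in chronological order
for (d1, e1), (d2, e2) in zip(events, events[1:]):
    gap = (d2 - d1).days
    flag = "  <-- possible gap in the information" if gap > 60 else ""
    print(f"{d1}  {e1}")
    print(f"  ({gap} days){flag}")
print(f"{events[-1][0]}  {events[-1][1]}")
```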
Does the Timeline have all the critical events that are
necessary for the outcome to occur?
When did the information become known to the analyst or a
key player?
Are there any points along the Timeline when the target is
particularly vulnerable to the collection of intelligence or
information or countermeasures?
The group then fills out the matrix, considering, and then recording, the relationship between each
variable or event and every other variable or event. For example, does the presence of Variable 1
increase or diminish the influence of Variables 2, 3, 4, and so on? Or does the occurrence of Event 1
increase or decrease the likelihood of Events 2, 3, 4, and so forth? If one variable does affect the other,
the positive or negative magnitude of this effect can be recorded in the matrix by entering a large or
small + or a large or small – in the appropriate cell (or by making no marking at all if there is no
significant effect). The terminology used to describe the relationship between each pair of variables or
events is based on whether it is “enhancing,” “inhibiting,” or “unrelated.”
The matrix shown in Figure 7.3 has six variables, with thirty possible interactions. Note that the
relationship between each pair of variables is assessed twice, as the relationship may not be symmetric.
That is, the influence of Variable 1 on Variable 2 may not be the same as the impact of Variable 2 on
Variable 1. It is not unusual for a Cross-Impact Matrix to have substantially more than thirty possible
interactions, in which case careful consideration of each interaction can be time-consuming.
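A Cross-Impact Matrix translates naturally into a small data structure. In this Python sketch the variables and ratings are invented; the point is the asymmetric row-on-column layout and the +/- notation described above.

```python
variables = ["V1", "V2", "V3"]

# impact[row][col] is the effect of the row variable on the column
# variable: "+" enhancing, "-" inhibiting, "" unrelated; doubled signs
# mark a large effect. With n variables there are n * (n - 1) ordered
# pairs to assess (six variables yield the thirty interactions above).
impact = {
    "V1": {"V2": "++", "V3": "-"},
    "V2": {"V1": "+", "V3": ""},  # asymmetric: V2 on V1 can differ from V1 on V2
    "V3": {"V1": "--", "V2": "+"},
}

print("     " + " ".join(f"{v:>4}" for v in variables))
for row in variables:
    cells = [
        f"{impact[row].get(col, ''):>4}" if col != row else "   ."
        for col in variables  # the diagonal (a variable on itself) is unused
    ]
    print(f"{row:>4} " + " ".join(cells))
```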
Analysts should use the Cross-Impact technique to focus on significant interactions between variables
or events that may have been overlooked, or combinations of variables that might reinforce one another.
Combinations of variables that reinforce one another can lead to surprisingly rapid changes in a
predictable direction. On the other hand, recognizing that there is a relationship among variables and
the direction of each relationship may be sufficient for some problems.
The depth of discussion and the method used for recording the results are discretionary. Each depends
upon how much you are learning from the discussion, which will vary from one application of this matrix
to another. If the group discussion of the likelihood of these variables or events and their relationships to
one another is a productive learning experience, keep it going. If key relationships are identified that are
likely to influence the analytic judgment, fill in all cells in the matrix and take good notes. If the group
does not seem to be learning much, cut the discussion short.
As a collaborative effort, team members can conduct their discussion online and periodically review their key findings. As time permits, analysts can enter new information or edit previously entered
information about the interaction between each pair of variables. This record will serve as a point of
reference or a memory jogger throughout the project.
Relationship to Other Techniques
Matrices as a generic technique with many types of applications are
discussed in chapter 5. The use of a Cross-Impact Matrix as
described here frequently follows some form of brainstorming at the
start of an analytic project. Elicit the assistance of other
knowledgeable analysts in exploring all the relationships among the
relevant factors identified in the brainstorming session. Analysts can
build on the discussion of the Cross-Impact Matrix by developing a
visual Mind Map or Concept Map of all the relationships.
Observation- and knowledge-based.
Gather together a diverse group to review the available information and explanations for the issue,
activity, or behavior that you want to evaluate. In forming this diverse group, consider including different
types of expertise for different aspects of the problem, cultural expertise about the geographic area
involved, different perspectives from various stakeholders, and different styles of thinking (left brain/right
brain, male/female). Then do the following:
Ask each member of the group to write down on an index card up to three alternative explanations
or hypotheses. Prompt creative thinking by using the following:
Applying theory. Drawing from the study of many examples of the same phenomenon.
Situational logic. Representing all the known facts and an understanding of the underlying
forces at work at the given time and place.
Collect the cards and display the results on a whiteboard. Consolidate the list to avoid any
duplication.
Employ additional individual and group brainstorming techniques, such as Cluster Brainstorming, to
identify key forces and factors.
Aggregate the hypotheses into affinity groups and label each group.
Use problem restatement and consideration of the opposite to develop new ideas.
Update the list of alternative hypotheses. If the hypotheses will be used in Analysis of Competing
Hypotheses, strive to keep them mutually exclusive—that is, if one hypothesis is true, all others
must be false.
Have the group clarify each hypothesis by asking the journalist’s classic list of questions: Who,
What, How, When, Where, and Why?
Quadrant Hypothesis Generation is easier and quicker to use than the Multiple Hypotheses Generator®,
but it is limited to cases in which the outcome of a situation will be determined by two major driving
forces—and it depends on the correct identification of these forces. The technique is less effective when
more than two major drivers are present or when analysts differ over which forces constitute the two
major drivers.
Identify two main drivers by using techniques such as Cluster Brainstorming or by surveying
subject-matter experts. A discussion to identify the two main drivers can be a useful exercise.
Think of each driver as a continuum from one extreme to the other. Write the extremes of each of
the drivers at the end of the vertical and horizontal axes.
Fill in each quadrant with the details of what the end state would be if shaped by the two extremes
of the drivers.
Develop signposts or indicators that show whether events are moving toward one of the
hypotheses. Use the signposts or indicators of change to develop intelligence collection strategies
or research priorities to determine the direction in which events are moving.
Description
Figure 7.4.2 Quadrant Hypothesis Generation: Four Hypotheses on the Future of Iraq
Figure 7.4.2 shows an example of a Quadrant Hypothesis Generation chart. In this case, analysts have
been tasked with developing a paper to project possible futures for Iraq, focusing on the potential end
state of the government. The analysts have identified and agreed upon the two key drivers in the future
of the government: the level of centralization of the federal government and the degree of religious
control of that government. They develop their quadrant chart and lay out the four logical hypotheses
based on their decisions.
The four hypotheses derived from the quadrant chart can be stated as follows:
The final state of the Iraq government will be a centralized state and a secularized society.
The final state of the Iraq government will be a centralized state and a religious society.
The final state of the Iraq government will be a decentralized state and a secularized society.
The final state of the Iraq government will be a decentralized state and a religious society.
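Because the four hypotheses are nothing more than the combinations of the two drivers' extremes, they can be enumerated mechanically. The short sketch below reproduces the Iraq example; the sentence template is our own illustration, not part of the technique as published.

# Illustrative sketch: the quadrant hypotheses are the Cartesian product
# of the two drivers' extremes.
from itertools import product

centralization = ["centralized", "decentralized"]   # driver 1
religious_control = ["secularized", "religious"]    # driver 2

for i, (state, society) in enumerate(product(centralization, religious_control), 1):
    print(f"H{i}: The final state of the Iraq government will be a "
          f"{state} state and a {society} society.")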
7.4.3 The Method: Multiple Hypotheses Generator®
The Multiple Hypotheses Generator® is a technique for developing multiple alternatives for explaining an
issue, activity, or behavior. Analysts often can brainstorm a useful set of hypotheses without such a tool,
but the Multiple Hypotheses Generator® may give greater confidence than other techniques that
analysts have not overlooked a critical alternative or outlier. Analysts should employ the Multiple
Hypotheses Generator® to ensure that they have considered a broad array of potential hypotheses. In
some cases, they may have considerable data and want to ensure that they have generated a set of
plausible explanations consistent with all the data at hand. Alternatively, they may have been presented
with a hypothesis that seems to explain the phenomenon at hand and been asked to assess its validity.
The technique helps analysts rank alternative hypotheses from the most to least credible, focusing on
those at the top of the list as those deemed most worthy of attention.
Gather a diverse group to define the issue, activity, or behavior under study. Often, it is useful to ask
questions in the following ways:
What are the possible permutations that would flip the assumptions contained in the lead
hypothesis that . . .?
Identify the Who, What, and Why for the lead hypothesis. Then generate plausible alternatives
for each relevant key component.
Review the lists of alternatives for each of the key components; strive to keep the alternatives on
each list mutually exclusive.
Evaluate the credibility of the remaining permutations by challenging the key assumptions of each
component. Some of these assumptions may be testable themselves. Assign a “credibility score” to
each permutation using a 1-to-5-point scale where 1 is low credibility and 5 is high credibility.
Re-sort the remaining permutations, listing them from most credible to least credible.
Restate the permutations as hypotheses, ensuring that each meets the criteria of a good
hypothesis.
Select from the top of the list those hypotheses most deserving of attention.
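The permutation-and-scoring logic lends itself to a compact illustration. The sketch below is a minimal, hypothetical example, not the commercial Multiple Hypotheses Generator® tool: the components, alternatives, and credibility scores are invented to show the mechanics of generating permutations and re-sorting them on the 1-to-5 scale.

# Illustrative sketch: permuting Who/What/Why alternatives and sorting
# them by credibility (1 = low, 5 = high). All content is hypothetical.
from itertools import product

components = {
    "Who":  ["disgruntled insider", "rival firm"],
    "What": ["stole the design files", "sabotaged the test data"],
    "Why":  ["for profit", "to delay the product launch"],
}

permutations = list(product(*components.values()))

# In practice each score comes from challenging the permutation's key
# assumptions; the numbers below are placeholders.
credibility = dict(zip(permutations, [4, 2, 3, 1, 5, 2, 3, 1]))

for perm in sorted(permutations, key=credibility.get, reverse=True):
    who, what, why = perm
    print(f"Credibility {credibility[perm]}: {who} {what} {why}.")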
Potential Pitfalls
The value of this technique is limited by the ability of analysts to
generate a robust set of alternative explanations. If group dynamics
are flawed, the outcomes will be flawed. There is no guarantee that the
correct hypothesis will emerge from this process or that analysts will
recognize it as such, but the technique greatly increases the prospect
that the correct hypothesis will be included in the set of hypotheses
under consideration.
Relationship to Other Techniques
The product of any Foresight analysis process can be thought of as
a set of alternative hypotheses. Quadrant Hypothesis Generation is
a specific application of the generic method called Morphological
Analysis, described in chapter 9. Alternative Futures Analysis uses a
similar quadrant chart approach to define four potential outcomes,
and Multiple Scenarios Generation uses the approach to define
multiple sets of four outcomes. Both of these techniques are also
described in chapter 9.
Origins of This Technique
The generation and testing of hypotheses is a key element of
scientific reasoning. The Simple Hypotheses approach and Quadrant
Hypothesis Generation are described in the Handbook of Analytic
Tools and Techniques, 5th ed. (Tysons, VA: Pherson Associates,
LLC, 2019) and Pherson Associates training materials. The
description of the Multiple Hypotheses Generator® can be found in
the fourth edition of the Handbook of Analytic Tools and Techniques
(Reston, VA: Pherson Associates, LLC, 2015).
7.5 DIAGNOSTIC REASONING
Diagnostic Reasoning is the application of hypothesis testing to a
new development, a single new item of information or intelligence, or
the reliability of a source. It differs from Analysis of Competing
Hypotheses (ACH) in that it is used to evaluate a single item of
relevant information or a single source; ACH deals with an entire
range of hypotheses and multiple items of relevant information.
When to Use It
Analysts should use Diagnostic Reasoning if they find themselves
making a snap intuitive judgment while assessing the meaning of a
new development, the significance of a new report, or the reliability
of a stream of reporting from a new source. Often, much of the
information used to support one’s lead hypothesis turns out to be
consistent with alternative hypotheses as well. In such cases, the
new information should not—and cannot—be used as evidence to
support the prevailing view or lead hypothesis.
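The logic of Diagnostic Reasoning can be stated compactly: an item of information that fits every hypothesis carries no evidentiary weight for any of them. A minimal sketch of that check follows; the hypotheses and ratings are hypothetical, and only simple Consistent/Inconsistent/Not Applicable ratings are modeled.

# Illustrative sketch: rate a single new item Consistent ("C"),
# Inconsistent ("I"), or Not Applicable ("NA") against each hypothesis.
# Hypotheses and ratings here are hypothetical.

def is_diagnostic(ratings):
    """An item discriminates only if it is inconsistent with at least
    one hypothesis but not with all of them."""
    values = set(ratings.values())
    return "I" in values and values != {"I"}

new_report = {"H1": "C", "H2": "C", "H3": "C"}  # fits every hypothesis
print(is_diagnostic(new_report))  # False: no support for the lead hypothesis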
The use of collaborative ACH tools ensures that all analysts are
working from the same database of evidence, arguments, and
assumptions, and that each member of the team has had an
opportunity to express his or her view on how that information relates
to the likelihood of each hypothesis. Such tools can be used both
synchronously and asynchronously and include functions such as a
survey method to enter data that protects against bias, the ability to
record key assumptions and collection requirements, and a filtering
function that allows analysts to see how each person rated the
relevant information.5
The Method
To retain five to seven hypotheses in working memory and note how each item of information fits into
each hypothesis is beyond the capabilities of most analysts. It takes far greater mental agility than the
common practice of seeking evidence to support a single hypothesis already believed to be the most
likely answer. The following nine-step process is at the heart of ACH and can be done without software.
Identify all possible hypotheses that should be considered. Hypotheses should be mutually
exclusive; that is, if one hypothesis is true, all others must be false. The list of hypotheses should
include a deception hypothesis, if that is appropriate. For each hypothesis, develop a brief scenario
or “story” that explains how it might be true. Analysts should strive to create as comprehensive a list
of hypotheses as possible.
Make a list of significant relevant information, which means everything that would help analysts
evaluate the hypotheses, including evidence, assumptions, and the absence of things one would
expect to see if a hypothesis were true. It is important to include assumptions as well as factual
evidence, because the matrix is intended to be an accurate reflection of the analyst’s thinking about
the topic. If the analyst’s thinking is driven by assumptions rather than hard facts, this needs to
become apparent so that the assumptions can be challenged. A classic example of absence of
evidence is the Sherlock Holmes story of the dog that did not bark in the night. The dog's failure to
bark was persuasive evidence that the guilty party was not an outsider but an insider whom the dog
knew.
Create a matrix and analyze the diagnosticity of the information. Create a matrix with all
hypotheses across the top and all items of relevant information down the left side. See Figure 7.6a
for an example. Analyze the “diagnosticity” of the evidence and arguments to identify which points
are most influential in judging the relative likelihood of the hypotheses. Ask, “Is this input Consistent
with the hypothesis, is it Inconsistent with the hypothesis, or is it Not Applicable or not relevant?”
This can be done either by filling in each cell of the matrix row-by-row or by randomly selecting cells
in the matrix for analysts to rate. If it is Consistent, put a “C” in the appropriate matrix box; if it is
Inconsistent, put an “I”; if it is Not Applicable to that hypothesis, put an “NA.” If a specific item of
evidence, argument, or assumption is particularly compelling, put two “C’s” in the box; if it strongly
undercuts the hypothesis, put two “I’s.”
When you are asking if an input is Consistent or Inconsistent with a specific hypothesis, a common
response is, “It all depends on . . .” That means the rating for the hypothesis is likely based on an
assumption. You should record all such assumptions when filling out the matrix. After completing the
matrix, look for any pattern in those assumptions, such as the same assumption being made when
ranking multiple items of information. After the relevant information has been sorted for diagnosticity,
note how many of the highly diagnostic Inconsistency ratings are based on assumptions. Consider how
much confidence you should have in those assumptions and then adjust the confidence in the ACH
Inconsistency Scores accordingly.
Review where analysts differ in their assessments and decide if the ratings need to be adjusted
(see Figure 7.6b). Often, differences in how analysts rate an item of information can be traced back
to different assumptions about the hypotheses when doing the ratings.
Refine the matrix by reconsidering the hypotheses. Does it make sense to combine two
hypotheses into one, or to add a new hypothesis that was not considered at the start? If a new
hypothesis is added, go back and evaluate all the relevant information for this hypothesis.
Additional relevant information can be added at any time.
Figure 7.6A Creating an ACH Matrix
Draw tentative conclusions about the relative likelihood of each hypothesis, basing your
conclusions on an analysis of the diagnosticity of each item of relevant information. Proceed
by trying to refute hypotheses rather than confirm them. Add up the number of Inconsistency ratings
for each hypothesis and note the Inconsistency Score for each hypothesis. As a first cut, examine
the total number of “I” and “II” ratings for each hypothesis. The hypothesis with the most
Inconsistent ratings is the least likely to be true, and the hypothesis or hypotheses with the lowest
Inconsistency Score(s) are tentatively the most likely. (A simple scoring sketch appears after these steps.)
The Inconsistency Scores are broad generalizations, not precise calculations. ACH is a tool
designed to help the analyst make a judgment, but not to actually make the judgment for the
analyst. This process is likely to produce correct estimates more frequently than less systematic or
rigorous approaches, but the scoring system does not eliminate the need for analysts to use their
own good judgment. The “Potential Pitfalls” section below identifies several occasions when
analysts need to override the Inconsistency Scores.
Analyze the sensitivity of your tentative conclusion to see how dependent it is on a few critical
items of information. For example, look for evidence that has a “C” for the lead hypothesis but an “I”
for all other hypotheses. Evaluate the importance and credibility of those reports, arguments, or
assumptions that garnered a “C.” Consider the consequences for the analysis if that item of relevant
information were wrong or misleading or subject to a different interpretation. If all the evidence
earns a “C” for each hypothesis, then the evidence is not diagnostic. If a different interpretation of
any of the data would cause a change in the overall conclusion, go back and double-check the
accuracy of your interpretation.
Report the conclusions. Consider the relative likelihood of all the hypotheses, not just the most
likely one. State which items of relevant information were the most diagnostic, and how compelling
a case they make in identifying the most likely hypothesis.
Identify indicators or milestones for future observation. Generate two lists: one focusing on
future events or what additional research might uncover that would substantiate the analytic
judgment, and a second that would suggest the analytic judgment is less likely to be correct or that
the situation has changed. Validate the indicators and monitor both lists on a regular basis,
remaining alert to whether new information strengthens or weakens your case.
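The bookkeeping described above—tallying “I” and “II” ratings into an Inconsistency Score for each hypothesis—can be sketched in a few lines of code. The matrix below is hypothetical, and, as the text stresses, the resulting scores are broad generalizations meant to inform the analyst's judgment, not to make it.

# Illustrative sketch: computing ACH Inconsistency Scores from a matrix
# of ratings. An "I" counts 1 against a hypothesis and an "II" counts 2;
# "C", "CC", and "NA" add nothing. The matrix contents are hypothetical.

matrix = {
    "Item 1": {"H1": "C",  "H2": "I", "H3": "II"},
    "Item 2": {"H1": "CC", "H2": "I", "H3": "I"},
    "Item 3": {"H1": "NA", "H2": "C", "H3": "I"},
}

weights = {"I": 1, "II": 2}
scores = {}
for ratings in matrix.values():
    for hypothesis, rating in ratings.items():
        scores[hypothesis] = scores.get(hypothesis, 0) + weights.get(rating, 0)

# The hypothesis with the lowest score is tentatively the most likely.
for hypothesis, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(hypothesis, score)  # H1 0, H2 2, H3 4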
Potential Pitfalls
A word of caution: ACH only works when all participating analysts
approach an issue with a relatively open mind. An analyst already
committed to a “right answer” will often find a way to interpret the
relevant information as consistent with the
preexisting belief. In other words, as an antidote to Confirmation
Bias, ACH is like a flu shot. Getting the flu shot will usually keep you
from getting the flu, but it won’t make you well if you already have
the flu.
The Inconsistency Scores generated for each hypothesis are not the
product of a magic formula that tells you which hypothesis to believe.
The ACH software takes you through a systematic analytic process,
and the Inconsistency Score calculation that emerges is only as
accurate as your selection and evaluation of the relevant information.
Place an “I” in the box that rates each item against each
hypothesis if you would not expect to see that item of
information if the hypothesis were true.
It is very hard to deal with deception when you are really just
trying to get a sense of what is going on, and there is so
much noise in the system, so much overload, and so much
ambiguity. When you layer deception schemes on top of that,
it erodes your ability to act.
—Robert Jervis, “Signaling and Perception in the Information Age,” in
The Information Revolution and National Security (August 2000)
Analysts have also found the following “rules of the road” helpful in
anticipating the possibility of deception and dealing with it:8
An Argument Map makes it easier for both the analysts and the
recipients of the analysis to clarify and organize their thoughts and
evaluate the soundness of any conclusion. It shows the logical
relationships between various thoughts in a systematic way and
allows one to assess quickly in a visual way the strength of the
overall argument. The technique also helps the analysts and
recipients of the report to focus on key issues and arguments rather
than focusing too much attention on minor points.
When to Use It
When making an intuitive judgment, use Argument Mapping to test
your own reasoning. Creating a visual map of your reasoning and
the evidence that supports this reasoning helps you better
understand the strengths, weaknesses, and gaps in your argument.
It is best to use this technique before you write your product to
ensure the quality of the argument and refine it if necessary.
Draw a set of boxes below this initial box and list the key
reasons why the statement is true along with the key objections
to the statement.
Use green lines to link the reasons to the primary claim or other
conclusions they support.
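Structurally, an argument map is a tree whose root is the contention and whose branches are reasons, objections, rebuttals, and evidence. The sketch below renders the North Korea example of Figure 7.9 in that form; the data structure is our own illustration, not the bCisive format.

# Illustrative sketch: an argument map as a tree of claims. Each node
# carries its role so the soundness of the overall argument can be
# traced branch by branch.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    role: str = "contention"
    children: list = field(default_factory=list)

argument = Node("North Korea has nuclear weapons", "contention", [
    Node("North Korea has exploded nuclear weapons in tests", "reason", [
        Node("Claimed test explosions in 2006 and 2009", "evidence"),
        Node("Seismological evidence of powerful explosions", "evidence"),
    ]),
    Node("North Korea lacks the capacity to produce enough "
         "weapons-grade fissile material", "objection", [
        Node("Pakistan provided key information and technology "
             "around 1997", "rebuttal"),
    ]),
])

def show(node, depth=0):
    print("  " * depth + f"[{node.role}] {node.text}")
    for child in node.children:
        show(child, depth + 1)

show(argument)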
Figure 7.9 Argument Mapping: Does North Korea Have Nuclear Weapons?
Source: Diagram produced using the bCisive Argument Mapping software from Austhink, www.austhink.com.
NOTES
1. See the discussion in chapter 2 contrasting the characteristics of
System 1, or intuitive thinking, with System 2, or analytic thinking.
Figure descriptions
Cross-Impact Matrix: The cells in each row show the impact of the variable represented by
that row on each of the variables listed across the top of the matrix. The
cells in each column show the impact of each variable listed down the
left side of the matrix on the variable represented by the column.
Figure 7.4.2 (Quadrant Hypothesis Generation): The chart is a quadrant. The top side is labeled centralized;
the right side, religious; the bottom side, decentralized; and the left side, secularized. H1: Centralized
state and secularized society. H2: Centralized state and religious society.
H3: Decentralized state and secularized society. H4: Decentralized state
and religious society.
ACH Matrix: The matrix lists relevant information and hypotheses, buttons for rating
credibility, and a column for notes, assumptions, and credibility
justifications. At the top of the matrix, a color legend shows the level of
disagreement. A column tool reorders hypotheses from most to least likely,
and a row tool moves the most discriminating information to the top of the
matrix. Cells offer options for accessing chat and viewing analyst ratings.
The number of inconsistencies for each hypothesis is also displayed.
Figure 7.9 (Argument Mapping): The contention is that North Korea has nuclear weapons. The objection
is that North Korea does not have the technical capacity to produce
enough weapons-grade fissile material. A rebuttal is that North
Korea was provided key information and technology by Pakistan around
1997. The reasoning supporting the contention is that North Korea has exploded
nuclear weapons in tests. The evidence is that North Korea
claimed to have exploded test weapons in 2006 and 2009, and that there
is seismological evidence of powerful explosions.
CHAPTER 8 REFRAMING TECHNIQUES
Students of the intelligence profession have long recognized that failure to challenge a consensus
judgment or a well-established mental model has caused most major intelligence failures. The
postmortem analysis of virtually every major U.S. intelligence failure since Pearl Harbor has identified an
analytic mental model or outdated mindset as a key factor contributing to the failure. Appropriate use of
the techniques in this chapter can, however, help mitigate a variety of common cognitive limitations and
improve the analyst’s odds of getting the analysis right.
This record of analytic failures has generated discussion about the “paradox of expertise.”1 Experts can
be the last to recognize the occurrence and significance of change. For example, few specialists on the
Middle East foresaw the events of the Arab Spring, few experts on the Soviet Union foresaw its
collapse, and almost every economist was surprised by the depth of the financial crisis in 2008. Political
analysts in 2016 also failed to forecast the United Kingdom's vote to leave the European Union or Donald
Trump's election to the presidency of the United States.
As we noted in chapter 2, an analyst’s mental model is everything the analyst knows about how things
normally work in a certain environment or a specific scientific field. It tells the analyst, sometimes
subconsciously, what to look for, what is important, and how to interpret what he or she sees. A mental
model formed through education and experience serves an essential function: it is what enables the
analyst to routinely provide reasonably good intuitive assessments or estimates about what is
happening or likely to happen.
What gets us into trouble is not what we don’t know, it’s what we know for sure that just ain’t so.
—Mark Twain, American author and humorist
The problem is that a mental model that has previously provided accurate assessments and estimates
for many years can be slow to change. New information received incrementally over time is easily
assimilated into one’s existing mental model, so the significance of gradual change over time is easily
missed. It is human nature to see the future as a continuation of the past. Generally, major trends and
events evolve slowly, and the future is often foreseen by skilled intelligence analysts. However, life does
not always work this way. The most significant intelligence failures have been failures to foresee
historical discontinuities, when history pivots and changes direction.
In the wake of Al-Qaeda’s attack on the United States on September 11, 2001, and the erroneous 2002
National Intelligence Estimate on Iraq’s weapons of mass destruction, the U.S. Intelligence Community
came under justified criticism. This prompted demands to improve its analytic methods to mitigate the
potential for future such failures. For the most part, critics, especially in the U.S. Congress, focused on
the need for “alternative analysis” or techniques that challenged conventional wisdom by identifying
potential alternative outcomes. Such techniques carried many labels, including challenge analysis,
contrarian analysis, and Red Cell/Red Team/Red Hat analysis.
U.S. intelligence agencies responded by developing and propagating techniques such as Devil’s
Advocacy, Team A/Team B Analysis, Red Team Analysis, and Team A/Team B Debate. Use of such
techniques had both pluses and minuses. The techniques forced analysts to consider alternative
explanations and explore how their key conclusions could be undermined, but the proposed techniques
have several drawbacks.
In a study evaluating the efficacy of Structured Analytic Techniques, Coulthart notes that techniques
such as Devil's Advocacy, Team A/Team B Analysis, and Red Teaming are not quantitative and are
more susceptible to the biases found in System 1 Thinking.2 This observation reinforces our view that
this book should give more weight to techniques, such as Analysis of Competing Hypotheses or
Structured Self-Critique, that are based on the more formalized and deliberative processes
consistent with System 2 Thinking.
Another concern is that these techniques carry with them an emotional component. No one wants to be told
they did something wrong, and it is hard not to take such criticism personally. Moreover, when analysts
feel obligated to defend their positions, their key judgments and mindsets tend to become even more
ingrained.
One promising solution to this dilemma has been to develop and propagate Reframing Techniques that
hopefully accomplish the same objective while neutralizing the emotional component. The goal is to find
ways to look at a problem from multiple perspectives while avoiding the emotional pitfalls of an us-
versus-them approach.
A frame is any cognitive structure that guides the perception and interpretation of what one sees. A
mental model of how things normally work can be thought of as a frame through which an analyst sees
and interprets evidence. An individual or a group of people can change their frame of reference, and
thus challenge their own thinking about a problem, simply by changing the questions they ask or
changing the perspective from which they ask the questions. Analysts can use a Reframing Technique
when they need to generate new ideas, when they want to see old ideas from a new perspective, or
when they want to challenge a line of analysis.3 Reframing helps analysts break out of a mental rut by
activating a different set of synapses in their brain.
To understand the power of reframing and why it works, it is necessary to know a little about how the
human brain works. Scientists believe the brain has roughly 100 billion neurons, each analogous to a
computer chip capable of storing information. Each neuron has octopus-like arms called axons and
dendrites. Electrical impulses flow through these arms and are ferried by neurotransmitting chemicals
across the synaptic gap between neurons. Whenever two neurons are activated, the connections, or
synapses, between them are strengthened. The more frequently those same neurons are activated, the
stronger the path between them.
Once a person has started thinking about a problem one way, the same mental circuits or pathways are
activated and strengthened each time the person thinks about it. The benefit of this is that it facilitates
the retrieval of information one wants to remember. The downside is that these pathways become
mental ruts that make it difficult to see the information from a different perspective. When an analyst
reaches a judgment or decision, this thought process is embedded in the brain. Each time the analyst
thinks about it, the same synapses are triggered, and the analyst’s thoughts tend to take the same well-
worn pathway through the brain. Because the analyst keeps getting the same answer every time, she or
he will gain confidence, and often overconfidence, in that answer.
Another way of understanding this process is to compare these mental ruts to the route a skier will cut
looking for the best path down from a mountain top (see Figure 8.0). After several runs, the skier has
identified the ideal path and most likely will remain stuck in the selected rut unless other stimuli or
barriers force him or her to break out and explore new opportunities.
Fortunately, it is easy to open the mind to think in different ways. The techniques described in this
chapter are designed to serve that function. The trick is to restate the question, task, or problem from a
different perspective to activate a different set of synapses in the brain. Each of the applications of
reframing described in this chapter does this in a different way. Premortem Analysis, for example, asks
analysts to imagine themselves at some future point in time, after having just learned that a previous
analysis turned out to be completely wrong. The task then is to figure out how and why it might have
gone wrong. What If? Analysis asks the analyst to imagine that some unlikely event has occurred, and
then to explain how it could have happened along with the implications of the event.
These techniques are generally more effective in a small group than with a single analyst. Their
effectiveness depends in large measure on how fully and enthusiastically participants in the group
embrace the imaginative or alternative role they are playing. Just going through the motions is of limited
value. Practice in using Reframing Techniques—especially Outside-In Thinking, Premortem Analysis,
and Structured Self-Critique—will help analysts become proficient in the fifth habit of the Five Habits of
the Master Thinker: understanding the overarching context within which the analysis is being done.
In addition, appropriate use of Reframing Techniques can help mitigate a variety of common cognitive
limitations and improve the analyst’s odds of getting the analysis right. They are particularly useful in
minimizing Mirror Imaging or the tendency to assume others will act in the same way we would, given
similar circumstances. They guard against the Anchoring Effect, which is accepting a given value of
something unknown as a proper starting point for generating an assessment. Reframing Techniques are
especially helpful in countering the intuitive traps of focusing on a narrow range of alternatives
representing only modest change (Expecting Marginal Change) and continuing to hold to a judgment
when confronted with a mounting list of contradictory evidence (Rejecting Evidence).
Three techniques for assessing cause and effect: Outside-In Thinking, Structured Analogies, and
Red Hat Analysis
Six techniques for challenging conventional wisdom or the group consensus: Quadrant
Crunching™, Premortem Analysis, Structured Self-Critique, What If? Analysis, High Impact/Low
Probability Analysis, and the Delphi Method
Two techniques for managing conflict: Adversarial Collaboration and Structured Debate
OVERVIEW OF TECHNIQUES
Outside-In Thinking broadens an analyst’s thinking about the
forces that can influence an issue of concern. The technique
prompts the analyst to reach beyond his or her specialty area to
consider broader social, organizational, economic, environmental,
political, legal, military, technological, and global forces or trends that
can affect the topic under study.
Outside-In Thinking also is useful when analysts are assembling a large database and want to ensure they
have not forgotten important fields in the database architecture. For most analysts, important categories
of information (or database fields) are easily identifiable early in a research effort, but invariably one or
two additional fields emerge after an analyst or group of analysts is well into a project. This forces the
analyst or group to go back and review all previous reporting and input the additional data. Typically, the
overlooked fields are in the broader environment over which the analysts have little control. By applying
Outside-In Thinking, analysts can better visualize the entire set of data fields early in the research effort.
Value Added
Most analysts focus on familiar factors within their field of specialty,
but we live in a complex, interrelated world where events in our little
niche of that world are often affected by forces in the broader
environment over which we have no control. The goal of Outside-In
Thinking is to help analysts see the entire picture, not just the part of
the picture with which they are already familiar.
Form a group to brainstorm all the key forces and factors that could affect the topic but over which
decision makers or other individuals can exert little or no influence, such as globalization, the
emergence of new technologies, historical precedent, and the growing role of social media.
Employ the mnemonic STEMPLES+ to trigger new ideas (Social, Technical, Economic, Military,
Political, Legal, Environmental, and Security, plus other factors such as Demographic, Religious, or
Psychological) and to structure the discussion.
Assess specifically how each of these forces and factors might affect the problem.
Ascertain whether these forces and factors have an impact on the issue at hand, basing your
conclusion on the available evidence.
Generate new intelligence collection tasking or research priorities to fill in information gaps.
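Because STEMPLES+ functions as a checklist, it can be turned into a simple prompt generator for the brainstorming session. The sketch below is illustrative only; the topic and the question wording are hypothetical.

# Illustrative sketch: using the STEMPLES+ categories as a checklist of
# outside forces to consider. The prompt wording is hypothetical.

STEMPLES_PLUS = [
    "Social", "Technical", "Economic", "Military", "Political",
    "Legal", "Environmental", "Security",
    # the "+" factors:
    "Demographic", "Religious", "Psychological",
]

topic = "stability of Country X"  # hypothetical topic

for category in STEMPLES_PLUS:
    print(f"What {category.lower()} forces or trends could affect "
          f"the {topic}, and how?")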
Relationship to Other Techniques
Outside-In Thinking is essentially the same as a business analysis
technique that goes by different acronyms such as STEEP,
STEEPLED, PEST, or PESTLE. For example, PEST is an acronym
for Political, Economic, Social, and Technological; STEEPLED also
includes Legal, Ethical, and Demographic. Military intelligence
organizations often use the mnemonic PMESII, which stands for
Political, Military, Economic, Social, Information, and Infrastructure.4
All require the analysis of external factors that may have either a
favorable or unfavorable influence on an organization or the
phenomenon under study.
Origins of This Technique
This technique has been used in planning and management
environments to ensure identification of outside factors that might
affect an outcome. The Outside-In Thinking approach described here
is from Randolph H. Pherson, Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019).
8.1.2 Structured Analogies
Analogies compare two situations to elicit and provoke ideas, help solve problems, and suggest history-
tested indicators. An analogy can capture shared attributes or it can be based on similar relationships or
functions being performed (see Figure 8.1.2). The Structured Analogies technique applies increased
rigor to analogical reasoning by requiring that the issue of concern be compared systematically with
multiple analogies.
Identify experts who are familiar with the problem and have a
broad background that enables them to identify analogous
situations. Aim for at least five experts with varied backgrounds.
The technique introduces more human factors into the analysis, such
as: “Who can I count on (e.g., do I have relatives, friends, or
business associates) to help me out?” “Is that operation within my
capabilities?” “What are my supporters expecting from me?” “Do I
really need to make this decision now?” and “What are the
consequences of making a wrong decision?”
Red Hat Analysis helps analysts guard against the common pitfall of
Overrating Behavioral Factors (also referred to as Fundamental
Attribution Error) by dampening the tendency to attribute the
behavior of other people, organizations, or governments to the
nature of the actor while underestimating the influence of situational
factors. Conversely, people tend to see their own behavior as
conditioned almost entirely by the situation in which they find
themselves. We seldom see ourselves as a bad person, but we often
see malevolent intent in others.7
Red Hat Analysis also protects analysts from falling prey to the
practitioners’ traps of not addressing the impact of the absence of
information on analytic conclusions (Ignoring the Absence of
Information) and continuing to hold to a judgment when confronted
with a mounting list of contradictory evidence (Rejecting Evidence).
The Method
Figure 8.1.3 shows how one might use Red Hat Analysis to catch
bank robbers.
Potential Pitfalls
Forecasting human decisions or the outcome of a complex organizational process is difficult in the best
of circumstances. For example, how successful would you expect to be in forecasting the difficult
decisions to be made by the U.S. president or even your local mayor? It is even more difficult when
dealing with a foreign culture with sizeable gaps in the available information. Mirror Imaging is hard to
avoid because, in the absence of a thorough understanding of the foreign situation and culture, your
own perceptions appear to be the only reasonable way to look at the problem.
A key first step in avoiding Mirror Imaging is to establish how you would behave and the reasons why.
After establishing this baseline, the analyst then asks if the adversary would act differently and why. Is
the adversary motivated by different stimuli, or does the adversary hold different core values? The task
of Red Hat Analysis then becomes illustrating how these differences would result in different policies or
behaviors.
A common error in our perceptions of the behavior of other people, organizations, or governments is to
fall prey to the heuristic of Overrating Behavioral Factors, which is likely to be even more common when
assessing the behavior of foreign leaders or groups. This error is especially easy to make when one
assumes that the target has malevolent intentions but our understanding of the pressures on that actor
is limited.
Analysts should always try to see the situation from the other side’s perspective, but if a sophisticated
grounding in the culture and operating environment of their subject is lacking, they will often be wrong.
Recognition of this pitfall should prompt analysts to consider using words such as “possibly” and “could
happen” rather than “likely” or “probably” when reporting the results of Red Hat Analysis.
Relationship to Other Techniques
Red Hat Analysis differs from Red Team Analysis in that it can be
done or organized by any analyst—or more often a team of analysts
—who needs to understand or forecast an adversary’s behavior and
who has, or can gain access to, the required cultural expertise. Red
Cell and Red Team Analysis are challenge techniques usually
conducted by a permanent organizational unit staffed by individuals
well qualified to think like or play the role of an adversary. The goal
of Red Hat Analysis is to exploit available resources to develop the
best possible analysis of an adversary’s or competitor’s behavior.
The goal of Red Cell or Red Team Analysis is usually to challenge
the conventional wisdom of established analysts or an opposing
team.
Origins of This Technique
Red Hat, Red Cell, and Red Team Analysis became popular during
the Cold War when “red” symbolized the Soviet Union, but they
continue to have broad applicability. This description of Red Hat
Analysis is a modified version of that in Randolph H. Pherson,
Handbook of Analytic Tools and Techniques, 5th ed. (Tysons, VA:
Pherson Associates, 2019).
8.2 CHALLENGE ANALYSIS TECHNIQUES
Challenge analysis encompasses a set of analytic techniques that
are also called contrarian analysis, alternative analysis, red teaming,
and competitive analysis. What all of these have in common is the
goal of challenging an established mental model or analytic
consensus to broaden the range of possible explanations or
estimates that should be seriously considered. That this same
activity has been called by so many different names suggests there
has been some conceptual diversity about how and why these
techniques are used and what they might help accomplish. All of
them apply some form of reframing to better understand past
patterns or foresee future events.
Logically, one might then expect that one of every four judgments
that intelligence analysts describe as “probable” will turn out to be
wrong. This perspective broadens the scope of what challenge
analysis might accomplish. It should not be limited to questioning the
dominant view to be sure it’s right. Even if the challenge analysis
confirms the initial probability judgment, it should go further to seek a
better understanding of the other 25 percent. In what circumstances
might there be a different assessment or outcome, what would that
be, what would constitute evidence of events moving in that
alternative direction, how likely is it, and what would be the
consequences?
For example, two contrary dimensions for a single attack would be simultaneous attacks and cascading
attacks. The various contrary dimensions are then arrayed in sets of 2-×-2 matrices. If four dimensions
are identified for a topic, the technique would generate six different 2-×-2 combinations of these four
dimensions (AB, AC, AD, BC, BD, and CD). Each of these pairs would be presented as a 2-×-2 matrix
with four quadrants. Participants then generate different stories or alternatives for each quadrant in each
matrix. If analysts create two stories for each quadrant in each of these 2-×-2 matrices, there will be a
total of forty-eight different ways the situation could evolve. Similarly, if six drivers are identified, the
technique will generate as many as 120 different stories to consider (see Figure 8.2.1.1a).
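The arithmetic behind these story counts is a straightforward combination calculation, sketched below for confirmation.

# Illustrative sketch: stories = (pairs of dimensions) x (4 quadrants)
# x (stories generated per quadrant).
from math import comb

def story_count(num_dimensions, stories_per_quadrant=2):
    return comb(num_dimensions, 2) * 4 * stories_per_quadrant

print(story_count(4))  # 6 matrices x 4 quadrants x 2 stories = 48
print(story_count(6))  # 15 matrices x 4 quadrants x 2 stories = 120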
After a rich array of potential alternatives is generated, the analyst’s task is to identify which of the
various alternative stories are the most deserving of attention. The last step in the process is to develop
lists of indicators for each story and track them to determine which story is beginning to emerge.
The question, “How might terrorists attack a nation’s water system?” is useful for illustrating the Classic
Quadrant Crunching™ technique. State the conventional wisdom for the most likely way terrorists might
launch such an attack. For example, “Al-Qaeda or its affiliates will contaminate the water supply for a
large metropolitan area, causing mass casualties.”
Break down this statement into its component parts or key assumptions. For example, the
statement rests on four key assumptions: (1) a single attack, (2) involving the contamination of drinking
water, (3) conducted by an outside attacker, (4) against a major metropolitan area, causing large
numbers of casualties.
Posit a contrary assumption for each key assumption. For example, what if there are multiple
attacks instead of a single attack?
Identify two or four dimensions of that contrary assumption. For example, what are different ways a
terrorist group could launch a multiple attack? Two possibilities would be simultaneous attacks (as
in the September 2001 attacks on the World Trade Center and the Pentagon or the London
bombings in 2005) or cascading attacks (as in the sniper killings in the Washington, D.C., area in
October 2002).
Repeat this process for each of the key assumptions. Develop two or four contrary dimensions for
each contrary assumption. (See Figure 8.2.1.1b.)
Array pairs of contrary dimensions into sets of 2-×-2 matrices. In this case, ten different 2-×-2
matrices are the result. Two of the ten matrices are shown in Figure 8.2.1.1c.
For each cell in each matrix, generate one to three examples of how terrorists might launch an
attack. In some cases, such an attack might already have been imagined. In other quadrants, there
may be no credible attack concept. But several of the quadrants will usually stretch the analysts’
thinking, pushing them to consider the dynamic in new and different ways.
Review all the attack plans generated; using a preestablished set of criteria, select those most
deserving of attention. In this example, possible criteria might be plans that are most likely to do the
following:
This process is illustrated in Figure 8.2.1.1d. In this case, three attack plans were deemed the most
likely. Attack plan 1 became Story A, attack plans 4 and 7 were combined to form Story B, and
attack plan 16 became Story C. It may also be desirable to select one or two additional attack plans
that might be described as “wild cards” or “nightmare scenarios.” These are attack plans that have a
low probability of being tried but are worthy of attention because their impact would be substantial if
they did occur. The figure shows attack plan 11 as a “nightmare scenario.”
Consider what decision makers might do to prevent bad stories from happening, mitigate their
impact, and deal with their consequences.
Generate a list of key indicators to help assess which, if any, of these attack plans is beginning to
emerge.
State what most analysts believe is the most likely future scenario.
Break down this statement into its component parts or key assumptions.
Repeat this process for each of the contrary assumptions—a process like that shown in Figure
8.2.1.1b.
Add the key assumption to the list of contrary dimensions, creating either one or two pairs.
Repeat this process for each row, creating one or two pairs, including a key assumption and one or
three contrary dimensions.
Array these pairs into sets of 2-×-2 matrices, a process shown in Figure 8.2.1.1c.
For each cell in each matrix, generate one to three credible scenarios. In some cases, such a
scenario may already have been imagined. In other quadrants, there may be no scenario that
makes sense. But several of the quadrants will usually stretch the analysts’ thinking, often
generating counterintuitive scenarios.
Review all the scenarios generated—a process outlined in Figure 8.2.1.1d; using a preestablished
set of criteria, select those scenarios most deserving of attention. The difference is that with Classic
Quadrant Crunching™, analysts are seeking to develop a set of credible alternative attack plans to
avoid surprise. In Foresight Quadrant Crunching™, analysts are engaging in a new version of
Multiple Scenarios Generation analysis.
Relationship to Other Techniques
Both Quadrant Crunching™ techniques are specific applications of a
generic method called Morphological Analysis (described in chapter
9). They draw on the results of the Key Assumptions Check and can
contribute to Multiple Scenarios Generation. They are also useful in
identifying indicators.
Origins of This Technique
Classic Quadrant Crunching™ was developed by Randolph Pherson
and Alan Schwartz to meet a specific analytic need in the
counterterrorism arena. It was first published in Randolph H.
Pherson, Handbook of Analytic Tools and Techniques, 4th ed.
(Reston, VA: Pherson Associates, LLC, 2008). Foresight Quadrant
Crunching™ was developed by Globalytica, LLC, in 2013 as a new
method for conducting Foresight analysis.
8.2.2 Premortem Analysis
Premortem Analysis is conducted prior to finalizing an analysis or a
decision to assess how a key analytic judgment, decision, or plan of
action could go spectacularly wrong. The goal is to reduce the risk of
surprise and the subsequent need for a postmortem investigation of
what went wrong. It is an easy-to-use technique that enables a group
of analysts who have been working together on any type of future-
oriented analysis or project to challenge effectively the accuracy of
their own conclusions. It is a specific application of the reframing
method, in which restating the question, task, or problem from a
different perspective enables one to see the situation differently and
come up with different ideas.
When to Use It
Premortem Analysis should be used by analysts who can devote a
few hours to challenging their own analytic conclusions about the
future to see where they might be wrong. It is much easier to
influence people's decisions before they make up their minds than
afterward, when they have a personal investment in that decision. For
this reason, analysts should use Premortem Analysis and its
companion technique, Structured Self-Critique, just before finalizing
their key analytic judgments.
A single analyst may use the two techniques, but, like all Structured
Analytic Techniques, they are most effective when used by a small
group. For a team assessment, the process should be initiated as soon
as the group starts to coalesce around a common position.
Group members tend to go along with the group leader, with the first
group member to stake out a position, or with an emerging majority
viewpoint for many reasons. Most benign is the common rule of
thumb that when we have no firm opinion, we take our cues from the
opinions of others. We follow others because we believe (often
rightly) that they know what they are doing. Analysts may also be
concerned that others will critically evaluate their views, or that
dissent will come across as disloyalty or as an obstacle to progress
that will just prolong the meeting.
How does the team rate the quality and timeliness of its
evidence?
Analytic process. In the initial analysis, ask whether the team did the
following: Did it identify alternative hypotheses and seek out
information on these hypotheses? Did it identify key
assumptions? Did it seek a broad range of diverse opinions by
including analysts from other offices and agencies, academia, or
the private sector in the deliberations? If the team did not take
these steps, the odds of a faulty or incomplete analysis increase.
Either consider doing some of these things
now or lower the team's level of confidence in its judgment.
After responding to these questions, the analysts take off their black
hats and reconsider the appropriate level of confidence in the team’s
previous judgment. Should the initial judgment be reaffirmed or
modified?
Potential Pitfalls
The success of this technique depends in large measure on the
team members’ willingness and ability to make the transition from
supporters to critics of their own ideas. Some individuals lack the
intellectual flexibility to do this well. It must be clear to all members
that they are no longer performing the same function as before. They
should view their task as an opportunity to critique an analytic
position taken by some other group (themselves, but with a different
hat on).
A What If? Analysis can be done by an individual or as a team project. The time required is about
the same as that for drafting a short paper. It usually helps to initiate the process with a
brainstorming session. Additional brainstorming sessions can be interposed at various stages of the
process.
Figure 8.2.4 What If? Scenario: India Makes Surprising Gains from the Global
Financial Crisis
Source: This example was developed by Ray Converse and Elizabeth Manak, Pherson Associates, LLC.
Begin by assuming that what could happen has occurred. Often it is best to pose the issue in the
following way: “The New York Times reported yesterday that . . .” Be precise in defining both the
event and its impact. Sometimes it is useful to posit the new contingency as the outcome of a
specific triggering event, such as a natural disaster, an economic crisis, a major political
miscalculation, or an unexpected new opportunity that vividly reveals a key analytic assumption is
no longer valid.
Develop at least one chain of argumentation—based on both evidence and logic—to explain how
this outcome could have come about. In developing the scenario or scenarios, focus on what must
occur at each stage of the process. Work backwards from the event to the present day. This is
called “backwards thinking.” Try to envision more than one scenario or chain of argument.
Generate and validate a list of indicators or “observables” for each scenario that would help
analysts detect whether events are starting to play out in a way envisioned by that scenario.
Identify which scenarios deserve the most attention by taking into consideration the difficulty of
implementation and the potential significance of the impact.
Assess the level of damage or disruption that would result from a negative scenario and estimate
how difficult it would be to overcome or mitigate the damage incurred.
For new opportunities, assess how well developments could turn out and what can be done to
ensure that such a positive scenario might occur.
Report periodically on whether any of the proposed scenarios are beginning to emerge and why.
Relationship to Other Techniques
What If? Analysis is sometimes confused with the High Impact/Low
Probability Analysis technique, as each considers low-probability
events. However, only What If? Analysis uses the reframing
technique of positing that a future event has happened and then
works backwards in time to imagine how it could have happened.
High Impact/Low Probability Analysis requires new or anomalous
information as a trigger and then projects forward to what might
occur and the consequences if it does.
Origins of This Technique
Analysts and practitioners have applied the term “What If? Analysis”
to a variety of techniques for a long time. The version described here
is based on Randolph H. Pherson, “What If? Analysis,” in Handbook
of Analytic Tools and Techniques, 5th ed. (Tysons, VA: Pherson
Associates, LLC, 2019) and training materials from the Department
of Homeland Security, Office of Intelligence and Analysis.
A Cautionary Note
What If? Analysis does not require new or anomalous information to serve as a trigger. It reframes
the question by positing that a surprise event has happened. It then looks backwards from that
surprise event to map several ways it could have happened. It also tries to identify actions that, if
taken in a timely manner, might have prevented it.
High Impact/Low Probability Analysis is primarily a vehicle for warning decision makers that
recent, unanticipated developments suggest that an event previously deemed highly unlikely may
occur. Extrapolating on recent evidence or information, it projects forward to discuss what could
occur and the consequences if the event does occur. It challenges the conventional wisdom.
Figure 8.2.5 High Impact/Low Probability Scenario: Conflict in the Arctic20
Source: This example was developed by Pherson Associates, LLC.
Origins of This Technique
The description here is based on Randolph H. Pherson, “High
Impact/Low Probability Analysis,” in Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019);
Globalytica, LLC, training materials; and Department of Homeland
Security, Office of Intelligence and Analysis training materials. Tools
needed to create your own chart can be found at
https://www.mindtools.com/pages/article/newPPM_78.htm.
8.2.6 Delphi Method
Delphi is a method for eliciting ideas, judgments, or forecasts from a
group of experts who may be geographically dispersed. It is different
from a survey in that there are two or more rounds of questioning.
After the first round of questions, a moderator distributes all the
answers and explanations of the answers to all participants, often
anonymously. The expert participants are then given an opportunity
to modify or clarify their previous responses, if so desired, based on
what they have seen in the responses of the other participants. A
second round of questions builds on the results of the first round,
drills down into greater detail, or moves to a related topic. The
technique is flexible: the number of rounds of questions can be
increased as needed.
When to Use It
The RAND Corporation developed the Delphi Method at the
beginning of the Cold War in the 1950s to forecast the impact of new
technology on warfare. It was also used to assess the probability,
intensity, or frequency of future enemy attacks. In the 1960s and
1970s, Delphi became widely known and used as a method for
futures research, especially forecasting long-range trends in science
and technology. Futures research is like intelligence analysis in that
the uncertainties and complexities one must deal with often preclude
the use of traditional statistical methods, so explanations and
forecasts must be based on the experience and informed judgments
of experts.
Over the years, Delphi has been used in a wide variety of ways, and
for an equally wide variety of purposes. Although many Delphi
projects have focused on developing a consensus of expert
judgment, a variant called Policy Delphi is based on the premise that
the decision maker is not interested in having a group make a
consensus decision, but rather in having the experts identify
alternative policy options and present all the supporting evidence for
and against each option. That is the rationale for describing Delphi
as a reframing technique. It can be used to identify a set of divergent
opinions, many of which may be worth exploring.
One group of Delphi scholars advises that the Delphi technique “can
be used for nearly any problem involving forecasting, estimation, or
decision making”—if the problem is not so complex or so new as to
preclude the use of expert judgment. These Delphi advocates report
using it for diverse purposes that range from “choosing between
options for regional development, to predicting election outcomes, to
deciding which applicants should be hired for academic positions, to
predicting how many meals to order for a conference luncheon.”21
Value Added
We believe the development of Delphi panels of experts on areas of
critical concern should be standard procedure for outreach to experts
outside an analyst’s organization, particularly in the intelligence
community because of its more insular work environment. In the
United States, the Office of the Director of National Intelligence
(ODNI) encourages intelligence analysts to consult with relevant
experts in academia, business, and NGOs in Intelligence Community
Directive No. 205, on Analytic Outreach, dated July 2008.
The Delphi Method can help protect analysts from falling victim to
several intuitive traps, including giving too much weight to first
impressions or initial data, especially if they attract our attention and
seem important at the time (Relying on First Impressions). It also
protects against the tendency to continue holding to an analytic
judgment when confronted with a mounting list of evidence that
contradicts the initial conclusion (Rejecting Evidence).
The Method
In a Delphi project, a moderator or analyst sends a questionnaire to
a panel of experts, usually in different locations. The experts respond
to these questions and are asked to provide short explanations for
their responses. The moderator collates the results from the first
questionnaire and sends the collated responses back to all panel
members, asking them to reconsider their responses based on what
they see and learn from the other experts’ responses and
explanations. Panel members may also be asked to answer another
set of questions. This cycle of question, response, and feedback
continues through several rounds using the same or a related set of
questions. It is often desirable for panel members to remain
anonymous so that they are not unduly influenced by the responses
of senior members. This method is illustrated in Figure 8.2.6.
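The cycle of question, response, and feedback can also be expressed schematically in code. The sketch below is a toy model of a consensus-seeking Delphi, in which experts nudge their estimates toward the collated median each round; real panels revise on the basis of one another's explanations rather than a mechanical rule, so every element here is an assumption for illustration.

# Illustrative toy model of Delphi rounds: the moderator collates the
# panel's estimates, feeds the result back, and experts may revise.
# The "move partway toward the median" rule is purely an assumption.
import statistics

def run_delphi(estimates, rounds=3, pull=0.5):
    for _ in range(rounds):
        collated = statistics.median(estimates)        # moderator's feedback
        estimates = [e + pull * (collated - e) for e in estimates]  # revision
    return estimates

panel = [0.10, 0.25, 0.30, 0.40, 0.80]  # hypothetical first-round estimates
print([round(e, 2) for e in run_delphi(panel)])

A Policy Delphi, by contrast, would preserve and examine the divergent responses rather than nudge them together.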
Examples
To show how Delphi can be used for intelligence analysis, we have developed three illustrative
applications:
Evaluation of another country’s policy options. The Delphi project manager or moderator
identifies several policy options that a foreign country might choose. The moderator then asks a
panel of experts on the country to rate the desirability and feasibility of each option, from the other
country’s point of view, on a five-point scale ranging from “Very Desirable” or “Feasible” to “Very
Undesirable” or “Definitely Infeasible.” Panel members also identify and assess any other policy
options that should be considered and identify the top two or three arguments or items of evidence
that guided their judgments. A collation of all responses is sent back to the panel with a request for
members to do one of the following: reconsider their position in view of others’ responses, provide
further explanation of their judgments, or reaffirm their previous response. In a second round of
questioning, it may be desirable to list key arguments and items of evidence and ask the panel to
rate them on their validity and their importance, again from the other country’s perspective.
Warning analysis or monitoring a situation over time. The facilitator asks a panel of experts to
estimate the probability of a future event. This might be either a single event for which the analyst is
monitoring early warning indicators or a set of scenarios for which the analyst is monitoring
milestones to determine the direction in which events seem to be moving. A Delphi project that
monitors change over time can be managed in two ways. One is to have a new round of questions
and responses at specific intervals to assess the extent of any change. The other is what is called
either Dynamic Delphi or Real-Time Delphi, where participants can modify their responses at any
time as new events occur or as a participant submits new information.22 The probability estimates
provided by the Delphi panel can be aggregated to furnish a measure of the significance of change
over time. They can also be used to identify differences of opinion among the experts that warrant
further examination.
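The aggregation step also lends itself to a simple sketch. Assuming hypothetical quarterly estimates from a four-member panel, the code below uses the mean as a measure of change over time and the spread to flag differences of opinion that warrant further examination; the 0.10 threshold is illustrative:

```python
from statistics import mean, stdev

# Hypothetical panel estimates of the probability of the watched event,
# recorded at each monitoring interval.
rounds = {
    "2024-Q1": [0.20, 0.25, 0.30, 0.20],
    "2024-Q2": [0.35, 0.30, 0.55, 0.40],
}

for period, estimates in rounds.items():
    spread = stdev(estimates)
    print(f"{period}: mean={mean(estimates):.2f}, spread={spread:.2f}")
    if spread > 0.10:  # illustrative threshold for flagging divergence
        print("  -> divergent expert views; follow up with the panel")
```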
Potential Pitfalls
A Delphi project involves administrative work to identify the experts,
communicate with panel members, and collate and tabulate their
responses through several rounds of questioning. The use of Delphi
by intelligence organizations can pose additional obstacles, such as
ensuring that the experts have appropriate security clearances or
requiring them to meet with the analysts in approved office spaces.
Another potential pitfall is that overenthusiastic use of the technique
can force consensus when it might be better to present two
competing hypotheses and the evidence supporting each position.
Origins of This Technique
The origin of Delphi as an analytic method was described above
under “When to Use It.” The following references were useful in
researching this topic: Murray Turoff and Starr Roxanne Hiltz,
“Computer Based Delphi Processes,” 1996,
http://web.njit.edu/~turoff/Papers/delphi3.html; and Harold A.
Linstone and Murray Turoff, The Delphi Method: Techniques and
Applications (Reading, MA: Addison-Wesley, 1975). A 2002 digital
version of Linstone and Turoff’s book is available online at
http://is.njit.edu/pubs/delphibook; see in particular the chapter by
Turoff on “The Policy Delphi”
(http://is.njit.edu/pubs/delphibook/ch3b1.pdf).
The use of ACH may not result in the elimination of all the
differences of opinion, but it can be a big step toward understanding
these differences and determining what might be reconcilable
through further intelligence collection or research. The analysts can
then make a judgment about the potential productivity of further
efforts to resolve the differences. ACH may not be helpful, however,
if two sides are already locked into their positions. It is all too easy in
ACH for one side to interpret the evidence and enter assumptions in
a way that deliberately supports its preconceived position. To
challenge a well-established mental model, other challenge or
conflict management techniques may be more appropriate.
The interesting point here is the ground rule that the team was
instructed to follow. After reviewing the evidence, each officer
identified those items of evidence thought to be of critical importance
in making a judgment on Nosenko’s bona fides. Any item that one
officer stipulated as critically important then had to be addressed by
the other two members.
It turned out that fourteen items were stipulated by at least one of the
team members and had to be addressed by the others. Each officer
prepared his own analysis, but they all had to address the same
fourteen issues. Their report became known as the “Wise Men”
report.
The ground rules used in the Nosenko case can be applied in any
effort to abate a long-standing analytic controversy. The key point
that makes these rules work is the requirement that each side must
directly address the issues important to the other side and thereby
come to understand the other’s perspective. This process guards
against the common propensity of analysts to make their own
arguments and then simply dismiss those of the other side as
unworthy of consideration.31
8.3.2 Structured Debate
Structured Debate is a planned debate between analysts or analytic
teams that hold opposing points of view on a specific issue. The
debate is conducted according to set rules and before an audience,
which may be a “jury of peers” or one or more senior analysts or
managers.
When to Use It
Structured Debate is called for when a significant difference of
opinion exists within or between analytic units or within the decision-
making community. It can also be used effectively when Adversarial
Collaboration has been unsuccessful or is impractical, and a choice must be made either to endorse one of two opposing opinions or to go forward with a comparative analysis of both. Structured Debate
requires a significant commitment of analytic time and resources. A
long-standing policy issue, a critical decision that has far-reaching
implications, or a dispute within the analytic community that is
obstructing effective interagency collaboration would be grounds for
making this type of investment in time and resources.
Value Added
In the method proposed here, each side presents its case in writing
to the opposing side; then, both cases are combined in a single
paper presented to the audience prior to the debate. The oral debate
then focuses on refuting the other side’s position. Glib and
personable speakers can always make arguments for their own
position sound persuasive. Effectively refuting the other side’s
position is a different ball game, however. The requirement to refute
the other side’s position brings to the debate an important feature of
the scientific method: that the most likely hypothesis is the one with
the least evidence against it as well as good evidence for it. (The
concept of refuting hypotheses is discussed in chapter 7.)
He who knows only his own side of the case, knows little of
that. His reasons may be good, and no one may have been
able to refute them. But if he is equally unable to refute the
reasons on the opposite side, if he does not so much as
know what they are, he has no ground for preferring either
opinion.
—John Stuart Mill, On Liberty (1859)
The goal of the debate is to decide what to tell the client. If neither
side can effectively refute the other, then arguments for and against
both sides should be in the report. Customers of intelligence analysis
gain more benefit by weighing well-argued conflicting views than
from reading an assessment that masks substantive differences
among analysts or drives the analysis toward the lowest common
denominator. If participants routinely interrupt one another or pile on
rebuttals before digesting the preceding comment, the objective of
Structured Debate is defeated. The teams are engaged in emotional
conflict rather than constructive debate.
The Method
Start by defining the conflict to be debated. If possible, frame the
conflict in terms of competing and mutually exclusive hypotheses.
Ensure that all sides agree with the definition. Then follow these
steps:
Each side writes up the best case from its point of view. This
written argument must be structured with an explicit
presentation of key assumptions, key pieces of evidence, and
careful articulation of the logic behind the argument.
Each side presents the opposing side with its arguments, and
the two sides are given time to develop counterarguments to
refute the opposing side’s position.
Each side then presents to the audience its rebuttal of the other
side’s written position. The purpose here is to proceed in the
oral arguments by systematically refuting alternative hypotheses
rather than by presenting more evidence to support one’s own
argument. This is the best way to evaluate the strengths of the
opposing arguments.
After each side has presented its rebuttal argument, the other
side is given an opportunity to refute the rebuttal.
The jury discusses the issue and passes judgment. The winner
is the side that makes the best argument refuting the other
side’s position, not the side that makes the best argument
supporting its own position. The jury may also recommend
possible next steps for further research or intelligence collection
efforts. If neither side can refute the other’s arguments, it may
be because both sides have a valid argument; if that is the case,
both positions should appear in any subsequent analytic report.
Relationship to Other Techniques
Structured Debate is like the Team A/Team B technique that has
been taught and practiced throughout the intelligence community.
Structured Debate differs from Team A/Team B Analysis in its focus
on refuting the other side’s argument. Use of the technique also
reduces the chances that emotions overshadow the process of
conflict resolution. The authors have dropped Team A/Team B
Analysis and Devil’s Advocacy from the third edition of this book
because they believe neither is an appropriate model for how
analysis should be conducted.32
Origins of This Technique
The history of debate goes back to the Socratic dialogues in ancient
Greece, and even earlier. Many different forms of debate have
evolved since then. Richards J. Heuer Jr. formulated the idea of
focusing the debate on refuting the other side’s argument rather than
supporting one’s own.
NOTES
1. Rob Johnston, Analytic Culture in the U.S. Intelligence Community
(Washington, DC: CIA Center for the Study of Intelligence, 2005), 64.
9. Peter Schwartz, The Art of the Long View (New York: Doubleday,
1991).
11. Donald P. Steury, ed., Sherman Kent and the Board of National
Estimates: Collected Essays (Washington, DC: CIA Center for the
Study of Intelligence, 1994), 133.
13. Gary Klein, Intuition at Work: Why Developing Your Gut Instinct
Will Make You Better at What You Do (New York: Doubleday, 2002),
91.
20. A more robust discussion of how conflict could erupt in the Arctic
can be found in “Uncharted Territory: Conflict, Competition, or
Collaboration in the Arctic?” accessible at shop.globalytica.com.
30. Ibid.
Figure 8.2.6 (description): The Delphi process has seven steps, alternating between facilitator and experts. In steps 1, 3, and 5, the facilitator sends questions or collated feedback to the experts; in steps 2, 4, and 6, the experts respond, review the feedback, and revise their assessments; several feedback cycles may be needed. In step 7, the facilitator prepares a final report on the experts’ responses, noting key outliers.
CHAPTER 9 FORESIGHT TECHNIQUES
In the complex, evolving, uncertain situations that analysts and decision makers must deal with every
day, the future is not easily predictable. Some events are intrinsically of low predictability. The best the
analyst can do is to identify the driving forces that are likely to determine future outcomes and monitor
those forces as they interact to become the future. Scenarios are a principal vehicle for doing this.
Scenarios are plausible and provocative stories about how the future might unfold. When alternative
futures are clearly outlined, decision makers can mentally rehearse these futures and ask themselves,
“What should I be doing now to prepare for these futures?”
Analysts can best perform this function by using Foresight Techniques—a family of imaginative
structured techniques that infuse creativity into the analytic process. The techniques help decision
makers better structure a problem and anticipate the unanticipated. Figure 9.0 lists eleven techniques
described in this and other chapters, suggesting the best circumstances for using each technique. When
the Foresight Techniques are matched with Indicators, they can help warn of coming dangers or expose
new ways of responding to opportunities.
The process of developing key drivers—and using them in combinations to generate a wide array of
alternative trajectories—forces analysts to think about the future in ways they never would have
contemplated if they relied only on intuition and their own expert knowledge. Generating a
comprehensive list of key drivers requires organizing a diverse team that is knowledgeable in a wide
variety of disciplines. A good guide for ensuring diversity is to engage a set of experts who can address
all the elements of STEMPLES+, that is, the Social, Technical, Economic, Military, Political, Legal,
Environmental, and Security dimensions of a problem plus possible additional factors such as
Demographics, Religion, or Psychology.
We begin this chapter with a discussion of two techniques for developing a list of key drivers. Key
Drivers Generation™ uses the Cluster Brainstorming technique to identify potential key drivers. The Key
Uncertainties Finder™ is an extension of the Key Assumptions Check. We recommend using both
techniques and melding their findings to generate a robust set of drivers. The authors’ decades of
experience in developing key drivers suggest that the number of mutually exclusive key drivers rarely
exceeds four or five. Practice in using these two techniques will help analysts become proficient in the
fourth of the Five Habits of the Master Thinker: identifying key drivers.
Scenarios provide a framework for considering multiple plausible futures by constructing alternative
trajectories or stories about how a situation could unfold. As Peter Schwartz, author of The Art of the
Long View, has argued, “The future is plural.”1 Trying to divine or predict a single outcome typically is a
disservice to senior policy officials, decision makers, and other clients. Generating several scenarios (for
example, those that are most likely, unanticipated, or most dangerous) can be more helpful because it
helps focus attention on the underlying forces and factors—or key drivers—that are most likely to
determine how the future will unfold. When High Impact/Low Probability scenarios are included, analysts
can use scenarios to examine assumptions, identify emerging trends, and deliver useful warning
messages. Foresight Techniques help analysts manage complexity and uncertainty by adding rigor to
the foresight process. They are based on the premise that generating numerous stories about how the
future will evolve will increase the practitioner’s sensitivity to outlier scenarios, reveal new opportunities,
and reduce the chances of surprise. By postulating different scenarios, analysts can identify the multiple
ways in which a situation might evolve. This process can help decision makers develop plans to take
advantage of whatever opportunities the future may hold—or, conversely, to avoid or mitigate risks.
It is vitally important that we think deeply and creatively about the future, or else we run the risk
of being surprised and unprepared. At the same time, the future is uncertain, so we must prepare
for multiple plausible futures, not just the one we expect to happen. Scenarios contain the stories
of these multiple futures, from the expected to the wildcard, in forms that are analytically
coherent and imaginatively engaging. A good scenario grabs us by the collar and says, “Take a
good look at this future. This could be your future. Are you going to be ready?”
— Peter Bishop, Andy Hines, and Terry Collins, “The Current State of Scenario Development,” Foresight (March 2007)
Foresight Techniques are most useful when a situation is complex or when the outcomes are too
uncertain to trust a single prediction. When decision makers and analysts first come to grips with a new
situation or challenge, a degree of uncertainty always exists about how events will unfold. At the point
when national policies or long-term corporate strategies are in the initial stages of formulation, Foresight
Techniques can have a strong impact on decision makers’ thinking.
One benefit of Foresight Techniques is that they provide an efficient mechanism for communicating complex ideas: each scenario is typically given a short and “catchy” label. These labels provide a
lexicon for thinking and communicating with other analysts and decision makers about how a situation
or a country is evolving. Examples of effective labels include “Red Ice,” which describes Russia’s
takeover of a melting Arctic Ocean, or “Glueless in Havana,” which describes the collapse of the Cuban
government and its replacement by the Russian mafia.
Scenarios do not predict the future, but a good set of scenarios bounds the range of possible futures for
which a decision maker may need to be prepared. Scenarios can be used as a strategic planning tool
that brings decision makers and stakeholders together with experts to envisage the alternative futures
for which they must plan.2
When analysts are thinking about scenarios, they are rehearsing the future so that decision makers can
be prepared for whatever direction that future takes. Rather than estimating the most likely outcome, and often being wrong, analysts can use scenarios as a framework for considering multiple plausible futures. Trying to divine or predict a single outcome is usually a fool’s errand. By generating several scenarios, analysts shift decision makers’ attention to the key drivers that are most likely to influence how a situation will develop.
Analysts have learned from experience that involving decision makers in a scenarios workshop is an
effective way to communicate the results of this technique and to sensitize them to important
uncertainties. Most participants find the process of developing scenarios much more useful than any
written report or formal briefing. Those involved in the process often benefit in several ways. Experience
has shown that scenarios can do the following:
Suggest indicators to monitor for signs that a specified future is becoming more or less likely.
Help analysts and decision makers anticipate what would otherwise be surprising developments by
forcing them to challenge assumptions and consider plausible “wild-card” scenarios or
discontinuous events.
Produce an analytic framework for calculating the costs, risks, and opportunities represented by
different outcomes.
Provide a means for weighing multiple unknown or unknowable factors and presenting a set of
plausible outcomes.
When decision makers or analysts from different disciplines or organizational cultures are included on
the team, new insights invariably emerge as new relevant information and competing perspectives are
introduced. Analysts from outside the organizational culture of an analytic unit are likely to see a
problem in different ways. They are likely to challenge key working assumptions and established mental
models of the analytic unit and avoid the trap of expecting only incremental change. Involving decision
makers, or at least a few individuals who work in the office of the ultimate client or decision maker, can
bring invaluable perspective and practical insights to the process.
When analysts look well into the future, they usually find it extremely difficult to do a simple, straight-line
projection, given the high number of variables they need to consider. By changing the “analytic lens”
through which the future is viewed, analysts are also forced to reevaluate their assumptions about the
priority order of key factors driving the issue. By pairing the key drivers to create sets of mutually
exclusive scenarios, scenario techniques help analysts think about the situation from sometimes
counterintuitive perspectives, often generating several unexpected and dramatically different potential
future worlds.
The amount of time and effort required depends upon the specific technique used. A single analyst can
use Reversing Assumptions, Simple Scenarios, What If? Analysis, and Cone of Plausibility without any
technical or methodological support, although a group effort typically yields more diverse and creative
results. Various forms of Brainstorming, Key Drivers Generation™, and Key Uncertainties Finder™
require a group and a facilitator but can be done in an hour or two. The time required for Foresight
Quadrant Crunching™, Alternative Futures Analysis, Multiple Scenarios Generation, Morphological
Analysis, and Counterfactual Reasoning varies, but these techniques usually require a team of experts
to spend several days working together on the project. Analysis by Contrasting Narratives and often
Multiple Scenarios Generation involve engaging decision makers directly in the process. How the
technique is applied will also vary depending on the topic and the target client. For this reason, we
strongly recommend engaging a facilitator who is knowledgeable about Foresight Techniques to save
time and ensure a high-quality product.
Criteria should be established for choosing which scenarios are the most important to bring to the
attention of the decision maker or ultimate client. The list should be tailored to the client’s needs and
should fully answer the focal question asked at the beginning of the exercise. Five criteria often used in
selecting scenarios follow:
Downside Risk. This criterion addresses the question most often asked: “How bad can it get?” The
response should be a credible scenario that has a reasonable chance of occurring and should
require the development of a contingency plan for avoiding, mitigating, or recovering if the selected
scenario comes to pass. A “nightmare scenario,” also described as a High Impact/Low Probability
scenario, is usually best portrayed in a tone box or text box in the paper and not as its own stand-
alone scenario.
Mainline Assessment. Most clients will usually ask, “What is most likely to happen?” The honest
answer is usually, “We do not really know; it depends on how the various key drivers play out in
influencing future developments.” Although the purpose of Foresight analysis is to show that
several scenarios are possible, scenarios can usually be ranked in order from most to least likely to
occur based on current trends and reasonable key assumptions. Providing a mainline scenario also
establishes a convenient baseline for conducting further analysis and deciding what actions are
critical.
New Opportunity. Every Foresight workshop should include pathways that show decision makers
how they can fashion a future or futures more to their liking. This can be accomplished by including
one or more scenarios that showcase new opportunities or by including a section describing how a
bad outcome can be mitigated or remedied. Every adversity comes with an opportunity, and the
Foresight processes discussed in this chapter can be just as effective in developing positive,
opportunities-based scenarios as in describing all the bad things that can happen.
Emerging Trend. Often when conducting a Foresight workshop, new factors will appear or new
trends will be identified that analysts or decision makers had previously ignored. These new trends,
relationships, or dynamics often are integral to or reflected in several of the scenarios and can be
collapsed into a single scenario that best illustrates the significance of—and opportunities
presented by—the new trend.
Recognizable Anchor. If the client does not believe any of the scenarios of the Foresight analysis
exercise are credible or consistent with the client’s experience or convictions, then the recipient will
likely disregard the entire process and ignore key findings or discoveries made. On the other hand,
recipients who find a scenario that resonates with their current worldview will anchor their
understanding of the exercise on that scenario and more easily understand the alternatives.
The number of scenarios presented also matters. Three scenarios introduce the Goldilocks effect, often implying that the “middle” scenario is an appropriate compromise; five or more scenarios are usually too many for a decision maker to process cognitively.
Scenarios workshops will most likely fail if the group conducting the exercise is not highly diverse, with
representatives from a variety of disciplines, organizations, and even cultures to avoid the trap of
Groupthink. Foresight analysis can be a powerful instrument for overcoming well-known cognitive biases
and heuristics such as the Anchoring Effect, Groupthink, and Premature Closure. Scenarios techniques
also mitigate the intuitive traps of thinking of only one likely (and predictable) outcome instead of
acknowledging that several outcomes are possible (Assuming a Single Solution), focusing on a narrow
range of alternatives representing marginal—and not radical—change (Expecting Marginal Change),
and failing to factor something into the analysis because the analyst lacks an appropriate category or
“bin” for that item of information (Lacking Sufficient Bins).
Users of Foresight Techniques often find that members of the “futures group or study group” have difficulty thinking outside their comfort zones, resist instructions to look far into the future, or are reluctant to explore or suggest concepts that do not fall within their areas of expertise. Techniques that have worked well to pry participants out of such analytic pitfalls are to (1) define a time period for the estimate (such as five or ten years) that cannot easily be extrapolated from current events, or (2) post a list of concepts
or categories (such as the STEMPLES+ list of Social, Technical, Economic, Military, Political, Legal,
Environmental, and Security, plus other factors such as Demographic, Religious, or Psychological) to
stimulate thinking about an issue from different perspectives. Analysts involved in the process should
have a thorough understanding of the subject matter and possess the conceptual skills necessary to
identify the key drivers and assumptions that are likely to remain valid throughout the period of the
assessment.
Identification and monitoring of indicators or signposts can provide early warning of the direction in
which the future is heading, but these early signs are not obvious.3 Indicators take on meaning only in
the context of a specific scenario with which they are associated. The prior identification of a scenario
and related indicators can create an awareness that prepares the mind to recognize early signs of
significant change. Change sometimes happens so gradually that analysts do not notice it, or they
rationalize it as not being of fundamental importance until it is too obvious to ignore. After analysts take
a position on an issue, they typically are slow to change their minds in response to new evidence. By
going on the record in advance to specify what actions or events would be significant and might change
their minds, analysts can avert this type of rationalization.
The time required to use Foresight Techniques (such as Multiple Scenarios Generation, Foresight
Quadrant Crunching™, or Counterfactual Reasoning) ranges from a few days to several months,
depending on the complexity of the problem, scheduling logistics, and the number of participants
involved in the process.4 Most of the techniques involve several stages of analysis and employ different
techniques to (1) identify key drivers, (2) generate permutations to reframe how the topic could evolve,
(3) convert them into scenarios, (4) establish indicators for assessing the potential for each proposed
alternative trajectory, and (5) use Decision Support Techniques to help policymakers and decision
makers shape an action agenda.
This chapter addresses the first three stages. Decision Support Techniques such as the Opportunities
Incubator™ and the Impact Matrix that can be used to implement stage 5 are described in the next
chapter. In a robust Foresight exercise, several weeks may pass between each of these stages to
provide time to effectively capture, reflect on, and refine the results of each session.
OVERVIEW OF TECHNIQUES
Key Drivers Generation™ should be used at the start of a
Foresight exercise to assist in the creation of key drivers. These key
drivers should be mutually exclusive, fundamental to the issue or
problem under study, and usually not obvious to the uninformed.
Make a list of four to six critical variables that would best serve
as key drivers to use in conducting a Foresight analysis.
Mutually Exclusive. Each key driver does not share the same
basic dynamic as another driver.
Clearly define the focal issue and the specific goals of the futures exercise.
Make a list of forces, factors, and events that are likely to influence the future using Simple
Brainstorming or Cluster Brainstorming techniques.
Organize the forces, factors, and events related to one another into five to ten affinity groups that
are the likely driving forces in determining how the focal issue will evolve.
Label each of these drivers and write a brief description of each. For example, analysts used the
technique to forecast the future of the fictional country of Caldonia, which was facing a chronic
insurgency and a growing threat from narcotics traffickers. Six drivers were identified using Cluster
Brainstorming.
Generate a matrix, as shown in Figure 9.4, with a list of drivers down the left side. The columns of
the matrix are used to describe scenarios. Each scenario is assigned a value for each driver. The
values are strong or positive (+), weak or negative (–), and blank if neutral or no change. In Figure
9.4, participants identified the following six key drivers and then rated them.
Government effectiveness. To what extent does the government exert control over all
populated regions of the country and effectively deliver services?
Civil society. Can nongovernmental and local institutions provide appropriate services and
security to the population?
Insurgency. Does the insurgency pose a viable threat to the government? Is it able to extend
its dominion over greater portions of the country?
Generate at least four different scenarios (a best case, worst case, mainline, and at least one other) by assigning different values (+, –, or blank) to each driver. In this case, four scenarios were created by varying the impact of six drivers: “Fragmentation” represents the downside scenario, “Descent into Order” the mainline assessment, and “An Imperfect Peace” a new opportunity, while “Pockets of Civility” focuses on the strength of civil society.
Reconsider the list of drivers. Is there a better way to conceptualize and describe the drivers? Are
there important forces that have not been included? Look across the matrix to see the extent to
which each driver discriminates among the scenarios. If a driver has the same value across all
scenarios, it is not discriminating and should be deleted; a short code sketch of this check follows these steps.
Ask if the set of selected scenarios is complete. To stimulate thinking about other possible
scenarios, consider the key assumptions made in deciding on the most likely scenario. What if
some of these assumptions turn out to be invalid? If they are invalid, how might that affect the
outcome, and are such outcomes included in the available set of scenarios?
For each scenario, write a one-page story to describe what that future looks like and/or how it might
come about. The story should illustrate the interplay of the drivers.
For each scenario, describe the implications for the decision maker.
Generate and validate a list of indicators, or “observables,” for each scenario that would aid in
discovering that events are starting to play out in a way envisioned by that scenario.
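The discriminating-driver check mentioned above is easy to sketch in code. The matrix below is illustrative: the +1/0/−1 values stand in for the strong (+), blank, and weak (–) ratings, and the scenario ratings are invented, not taken from the Caldonia exercise:

```python
# Illustrative scenario matrix: +1 strong/positive, -1 weak/negative,
# 0 neutral or no change. Ratings are invented for the sketch.
scenarios = ["Fragmentation", "Descent into Order",
             "An Imperfect Peace", "Pockets of Civility"]
matrix = {
    "Government effectiveness": [-1, 0, 1, -1],
    "Civil society":            [-1, -1, 1, 1],
    "Insurgency":               [1, 1, 1, 1],  # identical in every scenario
}

for driver, values in matrix.items():
    if len(set(values)) == 1:
        # Same value across all scenarios: the driver does not
        # discriminate among them and should be deleted or rethought.
        print(f"Drop or rework '{driver}': it does not discriminate")
```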
Convene a small group of experts with some diversity of background. Define the issue at hand and
set the time frame of the assessment. A common question to ask is, “What will X (e.g., a country,
regime, issue) look like in Y (e.g., two months, five years, twenty years)?”
Identify the drivers that are key factors or forces and thus most useful in defining the issue and
shaping the current environment. Analysts in various fields have created mnemonics to guide their
analysis of key drivers. One of the most common is PEST, which signifies Political, Economic,
Social, and Technological variables. Other analysts have combined PEST with legal, military,
environmental, psychological, or demographic factors to form abbreviations such as STEEP,
STEEPLE, or PESTLE.8 We recommend using STEMPLES+ (Social, Technical, Economic, Military,
Political, Legal, Environmental, and Security, plus other factors such as Demographic, Religious, or
Psychological).
Write the drivers as neutral statements that should be valid throughout the period of the
assessment. For example, write “the economy,” not “declining economic growth.” Be sure you are
listing true drivers and not just describing important players or factors relevant to the situation. The
technique works best when four to six drivers are generated.
Make assumptions about how the drivers are most likely to play out over the time frame of the
assessment. Be as specific as possible; for example, say, “The economy will grow 2–4 percent
annually over the next five years,” not simply that “The economy will improve.” Generate only one
assumption per driver.
Generate a baseline scenario from the list of key drivers and key assumptions. This is often a projection from the current situation forward, adjusted by the assumptions you are making about future behavior. The scenario assumes that the drivers and their descriptions will remain valid throughout the period. Write the scenario as a future that has come to pass and describe how it came about.
Construct one to three alternative scenarios by changing one or several of the assumptions on your initial list. Often it is best to start by looking at those assumptions that appear least likely to remain true. Consider the impact that change is likely to have on the baseline scenario and describe this new end point and how it came about. Also consider what impact changing one assumption would have on the other assumptions on the list.
Generate a possible wild-card scenario by radically changing the assumption that you judge as the least likely to change. This should produce a High Impact/Low Probability scenario (see chapter 8) that may not have been considered otherwise.
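A rough sketch of this assumption-flipping logic, with invented drivers and assumptions:

```python
# Invented baseline assumptions, one per driver.
baseline = {
    "Economy": "grows 2-4 percent annually",
    "Popular support": "slowly eroding",
    "Regional relations": "peaceful and stable",
}

def variant(assumptions, driver, new_assumption):
    """Derive an alternative scenario by changing a single assumption."""
    changed = dict(assumptions)
    changed[driver] = new_assumption
    return changed

# Alternative scenario: change the assumption least likely to remain true.
alternative = variant(baseline, "Popular support", "collapses after crackdown")
# Wild card: radically change the assumption judged least likely to change,
# yielding a High Impact/Low Probability scenario.
wild_card = variant(baseline, "Regional relations", "war erupts with neighbor")
print(alternative)
print(wild_card)
```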
Origins of This Technique
Cone of Plausibility is a well-established technique used by
intelligence analysts in several countries. It is a favorite of analysts
working in Canada and the United Kingdom. For additional insight
and visuals on the Cone of Plausibility, visit
https://prescient2050.com/the-cone-of-plausibility-can-assist-your-strategic-planning-process/.
9.6 ALTERNATIVE FUTURES ANALYSIS
Alternative Futures Analysis is a systematic method for identifying
alternative trajectories by developing plausible but mind-stretching
“stories” based on critical uncertainties to inform and illuminate
decisions, plans, and actions today.
When to Use It
Alternative Futures Analysis and Multiple Scenarios Generation (the
next technique to be described) differ from the previously described
techniques in that they are usually larger projects that rely on a
group of experts, often including decision makers, academics, and
other outside experts. They use a more systematic process and
usually require the assistance of a knowledgeable facilitator.
Clearly define the focal issue and the specific goals of the futures exercise.
Brainstorm to identify the key forces, factors, or events most likely to influence how the issue will
develop over a specified time period.
If possible, group these various forces, factors, or events to form two critical drivers that are
expected to determine the future outcome. In the example on the future of Cuba (Figure 9.6), the
two key drivers are “Effectiveness of Government” and “Strength of Civil Society.” If there are more
than two critical drivers, do not use this technique—use the Multiple Scenarios Generation
technique, which can handle a larger number of drivers.
As shown in the Cuba example, define the two ends of the spectrum for each driver.
Draw a 2-×-2 matrix. Label the two ends of the spectrum for each driver.
Note that the square is now divided into four quadrants. Each quadrant represents a scenario
generated by a combination of the two drivers. Now give a name to each scenario and write it in the
relevant quadrant.
Generate a narrative story of how each hypothetical scenario might come to pass. Include a
hypothetical chronology of key dates and events for each scenario.
Generate and validate a list of indicators, or “observables,” for each scenario that would help
determine whether events are starting to play out in a way envisioned by that scenario.
Multiple Scenarios Generation is like Alternative Futures Analysis (described above) except that with
this technique, you are not limited to two critical drivers that generate four scenarios. By using multiple
2-×-2 matrices pairing every possible combination of multiple drivers, you can create many possible
scenarios. Doing so helps ensure nothing has been overlooked. Once generated, the scenarios can be
screened quickly without detailed analysis of each one. After becoming aware of the variety of possible
scenarios, analysts are more likely to pay attention to outlying data that would suggest that events are
playing out in a way not previously imagined.
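Enumerating the pairings is mechanical and can be sketched in a few lines. The drivers and spectrum end points below are invented; the point is that N drivers yield N×(N−1)/2 distinct 2-×-2 matrices, each with four quadrant scenarios to screen:

```python
from itertools import combinations, product

# Invented drivers, each defined as a spectrum with two end points.
drivers = {
    "Role of neighboring states": ("supportive", "hostile"),
    "Capability of security forces": ("effective", "ineffective"),
    "Strength of insurgency": ("waning", "growing"),
}

# Pair every combination of two drivers into a 2-x-2 matrix; each
# matrix yields four candidate scenarios, one per quadrant.
for (d1, ends1), (d2, ends2) in combinations(drivers.items(), 2):
    print(f"\nMatrix: {d1} x {d2}")
    for e1, e2 in product(ends1, ends2):
        print(f"  quadrant: {d1}={e1}, {d2}={e2}")
```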
Training and an experienced team of facilitators are needed to use this technique. Here are the basic
steps:
Clearly define the focal issue and the specific goals of the futures exercise.
Brainstorm to identify the key forces, factors, or events most likely to influence how the issue will
develop over a specified time period—often five or ten years.
From all the scenarios generated, select those most deserving of attention because they illustrate
compelling and challenging futures not yet under consideration.
Develop and validate indicators for each scenario that could be tracked to determine which
scenario is starting to develop.
The technique is illustrated by exploring the focal question, “What is the future of the insurgency in
Iraq?” (See Figure 9.7a.) Here are the steps:
Convene a group of experts (including some creative thinkers who can challenge the group’s
mental model) to brainstorm the forces and factors that are likely to determine the future of the
insurgency in Iraq.
Select those factors or drivers whose outcome is the hardest to predict or for which analysts cannot
confidently assess how the driver will influence future events. In the Iraq example, three drivers
meet these criteria (see Figure 9.7a).
Develop a story or a couple of stories describing how events might unfold for each quadrant of each
2-×-2 matrix. For example, in the 2-×-2 matrix defined by the role of neighboring states and the
capability of Iraq’s security forces, analysts would describe how the insurgency would function in
each quadrant on the basis of the criteria defined at the far end of each spectrum. In the upper-left
quadrant, the criteria would be stable and supportive neighboring states but ineffective internal
security capabilities (see Figure 9.7b). In this “world,” one might imagine a regional defense
umbrella that would help to secure the borders. Another possibility is that the neighboring states
would have the Shiites and Kurds under control, with Sunnis, who continue to harass the Shia-led
central government, as the only remaining insurgents.
Figure 9.7B Future of the Iraq Insurgency: Using Spectrums to Define Potential
Outcomes
Review all the stories generated and select those most deserving of attention, for example, the scenarios that illustrate compelling and challenging futures not previously considered.
Select a few scenarios that might be described as “wild cards” (High Impact/Low Probability
developments) or “nightmare scenarios” (see Figure 9.7c).
Consider what decision makers might do to prevent bad scenarios from occurring or enable good
scenarios to develop.
Generate and validate a list of key indicators to help monitor which scenario story best describes
how events are beginning to play out.
How could things have been different in the past, and what does this tell us about what to do today?
Stage 1: “When and where might this change plausibly come about?”
Stage 2: “When and where might this change cause broader uncertainties (and unexpected
consequences)?”
If one conceives of the possible change as a narrative or story, then the stages correspond to
developing the beginning, middle, and end of the story (see Figure 9.9). The first two stages can
sometimes be counterintuitive and, as a result, they are often overlooked by analysts.
Obviously, the ideal scenario back story for the possible change
would be the most plausible one. To develop a plausible account,
analysts should first identify the reasons why the change itself is (or
was) unlikely. Then, they should identify the reasons why the change
is (or was) still possible nonetheless. Using these reasons, analysts
should develop possible “triggering events” that could weaken the
reasons why the change is unlikely and/or strengthen the reasons
why the change is still possible. The resulting scenarios can be a
single such event, or a series of distinct events that lead stepwise or
converge to produce the change. Usually, the less time the scenario covers, the better: the more recent the history that brings the scenario about, the more likely it is to merit consideration by the decision maker.
How might these actors respond in ways that are different from
what analysts currently expect them to do (or what they did in
the past)?
In what ways can the analysts no longer assume these actors
will maintain the “status quo” given the imagined changes?
What If? Analysis focuses its attention mostly on the same subject
as the first stage of Counterfactual Reasoning, emphasizing the
need to develop a “back story” to explain how a posited outcome
emerged. An analyst could (theoretically) use What If? Analysis to do
the first stage of Counterfactual Reasoning, or vice versa. Red Hat
Analysis also generates a specific back story by simulating how an
adversary would deal with a particular situation. Counterfactual
Reasoning, however, goes further by exploring the implications or
consequences of that scenario for the decision maker.
The method recognizes that the world has become much more
complex. As analysis expands beyond traditional military, political,
and economic issues to environmental, social, technical, and cyber
domains, there is growing need for analytic techniques that involve
more adaptive sense making, flexible organizational structures,
direct engagement with the decision maker, and liaising with
nontraditional intelligence and academic partners.
Analysis by Contrasting Narratives differs from many traditional
forms of intelligence analysis in that it seeks to integrate the
perceptions of the policymaker or decision maker into the analysis of
the behavior of adversaries and other hostile entities. By engaging
the decision maker in the analytic process, the method can also
reflect and interactively assess the impact of a decision maker’s
actions on the problem at hand.
The Method
The methodology consists of two phases: (1) basic analytic
narratives are identified, and (2) the narratives are analyzed to
assess their development, looking for effects of statements and
actions across multiple social domains. A central focus is on articulations of difference, which express (or critique) an “us-against-them” logic to enable or legitimize particular security policies or decisions.
Analyze and link the macro narratives. Ask, “To what extent are
statements and actions in one narrative reflected in another
narrative?”
Explore:
The method is like Red Hat Analysis in that both techniques aim
to widen cultural empathy and understanding of a problem. Red
Hat Analysis differs from Analysis by Contrasting Narratives,
however, in that it is more likely to assume that the opposing
sides view the conflict in much the same way.
Defining explicit criteria for tracking and judging the course of events makes the analytic process more
visible and available to scrutiny by others, thus enhancing the credibility of analytic judgments. Including
an indicator list in the finished product helps decision makers track future developments and builds a
more concrete case for the analytic conclusions.
Preparation of a detailed indicator list by a group of knowledgeable analysts is usually a good learning
experience for all participants. It can be a useful medium for an exchange of knowledge between
analysts from different organizations or those with different types of expertise—for example, analysts
who specialize in a country and those who are knowledgeable in a given field, such as military
mobilization, political instability, or economic development.
When analysts or decision makers are sharply divided over (1) the interpretation of events (for example,
political dynamics in Saudi Arabia or how the conflict in Syria is progressing), (2) the guilt or innocence
of a “person of interest,” or (3) the culpability of a counterintelligence suspect, indicators can help
depersonalize the debate by shifting attention away from personal viewpoints to more objective criteria.
Strong emotions are often defused and substantive disagreements clarified if both sides can agree beforehand on a set of criteria that show developments are—or are not—moving in a particular direction or that a person’s behavior suggests guilt or innocence.
The process of developing indicators forces the analyst to reflect and explore all that might be required
for a specific event to occur. The process can also ensure greater objectivity if two sets of indicators are
developed: one pointing to a likelihood that the scenario will emerge and another showing that it is not
emerging.
Indicators help counteract Hindsight Bias because they provide a written record that more accurately
reflects what the analyst was thinking at the time rather than relying on that person’s memory. Indicators
can help analysts overcome the tendency to judge the frequency of an event by the ease with which
instances come to mind (Availability Heuristic) and the tendency to predict rare events based on weak
evidence or evidence that easily comes to mind (Associative Memory). Indicators also help analysts
avoid the intuitive traps of ignoring information that is inconsistent with what one wants to see (Ignoring
Inconsistent Evidence), continuing to hold to a judgment when confronted with a mounting list of
contradictory evidence (Rejecting Evidence), and assuming something is inevitable, for example, if the
indicators an analyst had expected to emerge are not actually realized (Assuming Inevitability).
9.11.1 The Method: Indicators Generation
Analysts can develop indicators in a variety of ways. The method can range from a simple process to a
sophisticated team effort. For example, with minimum effort, analysts can jot down a list of things they
would expect to see if a given situation were to develop as feared or foreseen. Or analysts could work
together to define multiple variables that would influence a situation and then rank the value of each
variable based on incoming information about relevant events, activities, or official statements.
When developing indicators, clearly define the issue, question, outcome, or hypothesis and then
generate a list of activities, events, or other observables that you would expect to see if that issue or
outcome emerged. Think in multiple dimensions using STEMPLES+ (Social, Technical, Economic,
Military, Political, Legal, Environmental, and Security, plus Demographic, Religious, Psychological, or
other factors) to stimulate new ways of thinking about the problem. Also consider analogous sets of
indicators from similar or parallel circumstances.
Indicators can be derived by applying a variety of Structured Analytic Techniques, depending on the
issue at hand and the frame of analysis.16 For example, analysts can use
Circleboarding™ to identify all the dimensions of a problem. It prompts the analyst to explore the
Who, What, How, When, Where, Why, and So What of an issue.
Key Assumptions Check to surface key variables or key uncertainties that could determine how a
situation unfolds.
Gantt Charts or Critical Path Analysis to identify markers. Markers are the various stages in a
process (planning, recruitment, acquiring materials, surveillance, travel, etc.) that note how much
progress the group has made toward accomplishing the task. Analysts can identify one or more
markers for each step of the process and then aggregate them to create a chronologically ordered list of indicators; a short sketch of this aggregation follows the list.
Decision Trees to reveal critical nodes. The critical nodes displayed on a Decision Tree diagram
can often prove to be useful indicators.
Models to describe emerging phenomena. Analysts can identify indicators that correspond to the
various components or stages of a model that capture the essence of dynamics such as political
instability, civil-military actions presaging a possible coup, or ethnic conflict. The more indicators
observed, the more likely that the phenomenon represented by the model is present.
Structured Analogies to flag what caused similar situations to develop. When historical or generic
examples of the topic under study exist, analysis of what created these analogous situations can be
the basis for powerful indicators of how the future might evolve.
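As one illustration of the marker-aggregation idea noted above, the sketch below sorts hypothetical process markers by stage to produce a chronologically ordered indicator list; the stages and marker texts are invented:

```python
# Hypothetical markers keyed by their stage in the process
# (planning, recruitment, acquiring materials, surveillance, travel).
markers = [
    (4, "operatives conduct surveillance of the target"),
    (1, "leadership approves the plan"),
    (5, "members purchase one-way travel tickets"),
    (2, "recruiters seek specialists on extremist forums"),
    (3, "group acquires precursor materials"),
]

# Aggregate per-stage markers into a chronologically ordered indicator list.
for order, text in sorted(markers):
    print(order, text)
```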
When developing indicators, analysts should take time to carefully define each indicator. It is also
important to establish what is “normal” for that indicator.
Consider the indicators as a set. Are any redundant? Have you generated enough? The set should be
comprehensive, consistent, and complementary. Avoid the temptation of creating too many indicators;
collectors, decision makers, and other analysts usually ignore long lists.
After completing your list, review and refine it, discarding indicators that are duplicative and combining
those that are similar. See Figure 9.11.1 for a sample list of anticipatory or foresight indicators.
The ideal indicator is highly likely or consistent for the scenario or hypothesis to which it is assigned
and highly unlikely or inconsistent for all other alternatives.
Application of the Indicators Evaluation method helps identify the most-diagnostic indicators for each
scenario or hypothesis—which are most deserving of monitoring and collection (see Figure 9.11.3a).
Employing Indicators Evaluation to identify and dismiss non-diagnostic indicators can increase the
credibility of an analysis. By applying the method, analysts can rank order their indicators from most to
least diagnostic and decide how far up the list they want to draw the line in selecting the indicators used
in the analysis. In some circumstances, analysts might discover that most or all the indicators for a given
scenario are also consistent with other scenarios, forcing them to brainstorm a new and better set of
indicators for that scenario. If analysts find it difficult to generate independent lists of diagnostic
indicators for two scenarios, it may be that the scenarios are not sufficiently dissimilar, suggesting the
two scenarios should be combined.
Indicators Evaluation can help overcome mindsets by showing analysts how a set of indicators that
point to one scenario may also point to others. It can also show how some indicators, initially perceived
to be useful or diagnostic, may not be. By placing an indicator in a broader context against multiple
scenarios, the technique helps analysts focus on which one(s) are useful and diagnostic instead of
simply supporting a given scenario.
The Method
The first step is to fill out a matrix like that used for Analysis of Competing Hypotheses.
List the alternative scenarios along the top of the matrix (as is done for hypotheses in Analysis of
Competing Hypotheses).
List indicators generated for all the scenarios down the left side of the matrix (as is done with
relevant information in Analysis of Competing Hypotheses).
In each cell of the matrix, assess the status of that indicator against the noted scenario. Would you
rate the indicator as
Highly likely to appear?
Likely to appear?
Could appear?
Unlikely to appear?
Highly unlikely to appear?
Indicators developed for the home scenario should be either “Highly Likely” or “Likely.”
After assessing the likelihood of all indicators against all scenarios, assign a score to each cell. If
the indicator is “Highly Likely” in the home scenario as we would expect it to be, then other cells for
that indicator should be scored as follows for the other scenarios:
Likely is 1 point
Could is 2 points
Unlikely is 4 points
Highly Unlikely is 6 points
If the indicator is deemed “Likely” in the home scenario, then the cells for the other scenarios for
that indicator should be scored as follows:
Likely is 0 points
Could is 1 point
Unlikely is 3 points
Highly Unlikely is 5 points
Tally up the scores across each row; the indicators with the highest scores are the most diagnostic
or discriminating.
Once this process is complete, re-sort the indicators for each scenario so that the most
discriminating indicators are displayed at the top and the least discriminating indicators at the
bottom.
The most discriminating indicator is “Highly Likely” to emerge in its scenario and “Highly
Unlikely” to emerge in all other scenarios.
The indicators with the most “Highly Unlikely” and “Unlikely” ratings are the most discriminating.
Review where analysts differ in their assessments and decide if adjustments are needed in their
ratings. Often, differences in how an analyst rates an indicator can be traced back to different
assumptions about the scenario when the analysts were doing the ratings.
Decide whether to retain or discard indicators that have no “Unlikely” or “Highly Unlikely” ratings. In
some cases, an indicator may be worth keeping if it is useful when viewed in combination with a
cluster of indicators.
Develop additional—and more diagnostic—indicators if a large number of initial indicators for a
given scenario have been eliminated.
Recheck the diagnostic value of any new indicators by applying the Indicators Evaluation technique
to them as well.
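A minimal sketch of the scoring and re-sorting steps, assuming the point values listed above; the indicators, home ratings, and cross-scenario ratings are invented:

```python
# Points awarded for an indicator's rating in each NON-home scenario,
# keyed by its rating in the home scenario (values from the steps above).
SCORES = {
    "Highly Likely": {"Likely": 1, "Could": 2, "Unlikely": 4, "Highly Unlikely": 6},
    "Likely":        {"Likely": 0, "Could": 1, "Unlikely": 3, "Highly Unlikely": 5},
}

def row_score(home_rating, other_ratings):
    """Tally the row; higher totals mean a more diagnostic indicator."""
    return sum(SCORES[home_rating][r] for r in other_ratings)

# Invented matrix rows: (indicator, home rating, ratings in other scenarios).
rows = [
    ("Reserves mobilized", "Highly Likely", ["Unlikely", "Highly Unlikely"]),
    ("Hostile press statements", "Likely", ["Likely", "Could"]),
    ("Border crossings closed", "Likely", ["Could", "Unlikely"]),
]

# Re-sort so the most discriminating indicators appear at the top.
for name, home, others in sorted(rows, key=lambda r: -row_score(r[1], r[2])):
    print(f"{row_score(home, others):2d}  {name}")
```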
Analysts should think seriously before discarding indicators determined to be non-diagnostic. For
example, an indicator might not have diagnostic value on its own but be helpful when viewed as part of
a cluster of indicators. An indicator that a terrorist group “purchased guns” would not by itself be diagnostic in determining which of the following scenarios was likely to happen: armed attack, hostage taking, or kidnapping. Knowing that guns had been purchased could nonetheless be critical in pointing to an intent to commit an act of violence or even in warning of the imminence of the event. Figure 9.11.3b explores another reason for not discarding a non-diagnostic indicator: the INUS condition, an Insufficient but Non-redundant part of an Unnecessary but Sufficient condition.
A final argument for not discarding non-diagnostic indicators is that maintaining and publishing the list of
non-diagnostic indicators could prove valuable to collectors. If analysts initially believed the indicators
would be helpful in determining whether a specific scenario was emerging, then collectors and other
analysts working the issue, or a similar issue, might come to the same conclusion. For these reasons,
facilitators of the Indicators Validation and Evaluation techniques believe that the list of non-diagnostic
indicators should also be published to alert other analysts and collectors to the possibility that they might
also assume an indicator was diagnostic when it turned out on further inspection not to be.
If you take the time to develop a robust set or sets of anticipatory or foresight indicators (see Figure
9.11.3c), you must establish a regimen for monitoring and reviewing the indicators on a regular basis.
Analysts should evaluate indicators on a set schedule—every week or every month or every quarter—
and use preestablished criteria when doing so. When many or most of the indicators assigned to a given
scenario begin to “light up,” this should prompt the analyst to alert the broader analytic community and
key decision makers interested in the topic. A good set of indicators will give you advance warning of
which scenario is about to emerge and where to concentrate your attention. It can also alert you to
unlikely or unanticipated developments in time for decision makers to take appropriate action.
Any indicator list used to monitor whether something has happened, is happening, or will happen
implies at least one alternative scenario or hypothesis—that it has not happened, is not happening, or
will not happen. Many indicators that a scenario or hypothesis is happening are just the opposite of
indicators that it is not happening; some are not. Some are consistent with two or more scenarios or
hypotheses. Therefore, an analyst should prepare separate lists of indicators for each scenario or
hypothesis. For example, consider indicators of an opponent’s preparations for a military attack where
there may be three hypotheses—no attack, attack, and feigned intent to attack with the goal of forcing a
favorable negotiated solution. Almost all indicators of an imminent attack are also consistent with the
hypothesis of a feigned attack. The analyst must identify indicators capable of diagnosing the difference
between true intent to attack and feigned intent to attack. The mobilization of reserves is such a
diagnostic indicator. It is so costly that it is not usually undertaken unless there is a strong presumption
that the reserves will be needed.
After creating the indicator list or lists, the analyst or analytic team should regularly review incoming
reporting and note any changes in the indicators. To the extent possible, the analyst or the team should
decide well in advance which critical indicators, if observed, will serve as early-warning decision points.
In other words, if a certain indicator or set of indicators is observed, it will trigger a report advising of
some modification in the analysts’ appraisal of the situation.
Techniques for increasing the sophistication and credibility of an indicator list include the following:
Providing a narrative description for each point on the rating scale, describing what one would
expect to observe at that level.
Figure 9.11.3c is an example of a complex indicators chart that incorporates the first three techniques
listed above.
1 to 4 years. Simple situation: Brainstorming and Reversing Assumptions. Complex situation: Simple
Scenarios, Cone of Plausibility, and Morphological Analysis. Primary objective: Avoiding surprises.
Anticipating the unanticipated.
5 to 10 years. Simple situation: Alternative Futures Analysis and What If? Analysis. Complex situation:
Multiple Scenarios Generation, Foresight Quadrant Crunching™, Analysis by Contrasting Narratives, and
Counterfactual Reasoning. Primary objective: Mapping the future. Finding opportunities.
Back to Figure
The present drivers and assumptions lead to the multiple scenarios after a period of stable regime. The
drivers and their corresponding assumptions are as follows. Economy: Growth likely 2 to 5 percent.
Popular support: Slowly eroding. Civil-Military relations: Growing tensions. Regional relations: Peaceful
and stable. Foreign relations: Substantial. The scenarios are as follows. Plausible Scenario 1: More
inclusive policies and sound economic decisions bring stability. Baseline scenario. President under fire
and struggling to retain control. Plausible scenario 2: Junior military officers stage successful coup. Wild-
card scenario: War breaks out with neighbor as regime collapses.
Back to Figure
The axes are the effectiveness of the government, ranging from fully operational to marginalized, and
the strength of civil society, ranging from nonexistent to robust. Fully operational and nonexistent:
Keeping it all together. Fully operational and robust: Competing power centers. Marginalized and
nonexistent: Glueless in Havana. Marginalized and robust: Drifting toward democracy.
Back to Figure
A. The role of neighboring states (for example, Syria and Iran). B. The capability of Iraq's security
forces, such as the military and police. C. The political environment in Iraq. In the first scenario, the key
drivers are A and B. In the second scenario, the key drivers are A and C. In the third scenario, the key
drivers are B and C.
Back to Figure
The key definers are Iraqi security capability (ineffective or effective) and the helpfulness of neighbors
(stable and supportive, or unstable and disruptive). Neighboring states stable and supportive, security
capability ineffective: Regional defensive umbrella secures borders; insurgency is pure Sunni, internal
political solution? Neighboring states stable and supportive, security capability effective: Militias
integrated into new Iraqi Army; Jordan brokers deal; economic aid to Sunnis. Neighboring states
unstable or disruptive, security capability ineffective: Syria collapses, influx of new fighters; civil wars.
Neighboring states unstable or disruptive, security capability effective: Insurgency fragments; refugees
flow into Iraq seeking safe haven.
Back to Figure
A. The role of neighboring states (for example, Syria and Iran). B. The capability of Iraq's security
forces, such as the military and police. C. The political environment in Iraq. In the first scenario, the key
drivers are A and B; the Civil War is the nightmare scenario in the third quadrant. In the second
scenario, the key drivers are A and C; Sunni politics in the second quadrant deserve the most attention.
In the third scenario, the key drivers are B and C; new fighters in the third column deserve the most
attention.
Back to Figure
The dimensions are group, type of attack, target, and impact. The first option is an outside group
planning multiple attacks on the treatment plant for disrupting economy. The second option is an insider
group planning a single type of attack on the drinking water for terrorizing the population. The third
option is a visitor group planning a threatening type of attack on wastewater to cause major casualties.
Back to Figure
A possible change X ("if") leads through ripple effects to a consequence Y ("then"). The convergent
scenario traces the break with prior history; the divergent scenario traces the continuation into the
future. Counterfactual: If X were to occur, then Y would or might occur. Stage 1. Convergent Scenario
Development. Ask, "When and where might this change plausibly come about?" Assess the causes of
the change and develop the story's beginning. Stage 2. Ripple Effect Analysis. Ask, "When and where
might this change cause broader uncertainties and unintended consequences?" Assess the context of
the change or new uncertainties and develop the story's middle. Stage 3. Divergent Scenario
Development. Ask, "When and where might this change have long-term impact?" Assess the
consequences of the change and develop the story's end.
Back to Figure
A tabular representation of Zambria Political Instability Indicators lists the five main indicators, with sub-
indicators, and their corresponding concerns for the first through fourth quarters of 2008 and 2009 and
the first and second quarters of 2010.
Force Field Analysis. A technique that analysts can use to help the
decision maker identify the most effective ways to solve a problem or
achieve a goal—and whether it is possible to do so. The analyst
identifies and assigns weights to the relative importance of all the
factors or forces that either help or hinder a solution to the problem
or achievement of the goal. After organizing all these factors in two
lists, pro and con, with a weighted value for each factor, the analyst
or decision maker is in a better position to recommend strategies
that would be most effective in either strengthening the impact of the
driving forces or reducing the impact of the restraining forces.
Determine your client’s perception of the scenario, projected trajectory, or anticipated outcome. Use
the following scale: Strongly Positive, Positive, Neutral, Negative, Strongly Negative.
Identify the primary actors in the scenario who have a stake in the projected trajectory or anticipated
outcome.
Assess how much each actor might care about the scenario’s projected outcome because of its
positive or negative (perceived or real) impact on the actor’s livelihood, status, prospects, and so
forth. This assessment considers how motivated the actor may be to act, not whether the actor is
likely to act or not. Use the scale: Very Desirable (DD), Desirable (D), Neutral (N), Undesirable (U),
Very Undesirable (UU).
Assess each actor’s capability or resources to respond to the scenario using a High, Medium, or
Low scale.
Assess each actor’s likely intent to respond to the scenario using a High, Medium, or Low scale.
Identify the actors who should receive the most attention based on the following tiers:
1st: DD or UU Level of Interest rating plus High ratings in both Capability and Intent
3rd: DD or UU Level of Interest rating plus a High rating in either Capability or Intent
Reorder the rows in the matrix so that the actors are listed from first to fifth tiers.
Record the two to three key drivers that would most likely influence or affect each actor or the
actor’s response.
Consider your client’s perception and determine how and when he or she might act to influence
favorably, counteract, or deter an actor’s response. From this discussion, develop a list of possible
actions the client can take.
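A minimal sketch of this tiering logic appears below, with invented actor names and ratings. Only the two tier rules quoted above (1st and 3rd) are implemented, since the remaining tier definitions are not reproduced in this excerpt.

```python
# Sketch of Opportunities Incubator-style tiering with invented actors.

from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    interest: str     # DD, D, N, U, UU (level of interest in the outcome)
    capability: str   # High, Medium, Low
    intent: str       # High, Medium, Low
    key_drivers: list

def tier(actor: Actor) -> int:
    strong_interest = actor.interest in ("DD", "UU")
    if strong_interest and actor.capability == "High" and actor.intent == "High":
        return 1
    if strong_interest and "High" in (actor.capability, actor.intent):
        return 3
    return 5  # placeholder: the other tier rules are not given in this excerpt

actors = [
    Actor("Opposition party", "UU", "High", "High", ["election timing"]),
    Actor("Trade association", "D", "Medium", "Low", ["tariff policy"]),
    Actor("Regional governor", "DD", "Medium", "High", ["fiscal transfers"]),
]
for actor in sorted(actors, key=tier):  # reorder rows from first to fifth tier
    print(f"Tier {tier(actor)}: {actor.name} (drivers: {', '.join(actor.key_drivers)})")
```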
Figure 10.1 Opportunities Incubator™
Source: Globalytica, LLC, 2019.
Origins of This Technique
The Opportunities Incubator™ was developed by Globalytica, LLC,
to provide a structured process decision makers can use to
implement the key findings of a Foresight exercise. This technique
and the Impact Matrix are the most common decision support tools
used to conclude a Foresight exercise.
10.2 BOWTIE ANALYSIS
Bowtie Analysis is a technique for mapping causes and
consequences of a disruptive event to facilitate the management of
both risks and opportunities.
The technique was first developed for the oil and gas industry but
has evolved into a generic method for assisting decision makers in
proactively managing potential hazards and anticipated
opportunities. Analysts can use the method to enhance their
understanding of causal relationships through mapping both
anticipatory and reactive responses to a disruptive event.
The Bowtie’s logical flow can make the analysis of risks and
opportunities more rapid and efficient. The graphical “bowtie” display
also makes it easier for analysts to communicate the interaction and
relative significance of causes and consequences of a disruptive
event, whether it presents a hazard or an opportunity.
When to Use It
Bowtie Analysis is used when an organization needs to thoroughly
examine its responses to a potential or anticipated disruptive event.
Traditionally, industry has used it to establish more control over a
potential hazard and improve industrial safety, but it is also useful in
identifying a potential opportunity. Bowtie Analysis helps analysts
and decision makers understand the causal relationships among
seemingly independent events.
The Method
Causes. Make a list of threats or trends, placing them on the left and drawing connecting lines from
each to the centered Risk/Opportunity Event. Threats/trends are potential causes of the
Risk/Opportunity Event. Be specific (i.e., “weather conditions” can be specified as “slippery road
conditions”) so that actionable preventive barriers in the case of threats, or accelerators for positive
trends, can be created in a later step.
Consequences. Make a similar list of consequences, placing them on the right and drawing
connecting lines from each to the centered Risk/Opportunity Event. Consequences are the results of
the event. Continue to be specific; this will aid in identifying relevant recovery barriers or amplifiers
for the consequences in a later step.
Preventive Barriers or Accelerators. Focus on the causes on the left of the Bowtie. If the causes
are threats, brainstorm barriers that would stop the threats from leading to the Risk Event. If the
causes are trends, brainstorm accelerators that would quicken the occurrence of the Opportunity
Event.
Recovery Barriers or Amplifiers. Similarly, focus on the consequences on the right of the Bowtie.
If the consequences are undesired, brainstorm barriers that would stop the Risk Event from leading
to the worst-case consequences. If the consequences are desired, brainstorm amplifiers that would
capitalize on the effects of the Opportunity Event.
Escalation Factor. Brainstorm escalation factors and connect each to a barrier, accelerator, or
amplifier. An escalation factor (EF) is anything that may cause a barrier to fail (e.g., "forgetting to
wear a seatbelt" is an escalation factor for a Risk Event because it impairs the effectiveness of the
"wearing a seatbelt" recovery barrier) or that enhances the positive effects of an accelerator or
amplifier (e.g., "getting all green lights" is an escalation factor for an Opportunity Event because it
increases the effectiveness of the "going the maximum speed limit" amplifier). An escalation factor
barrier stops or mitigates the impact of the escalation factor and its effects, while an escalation
factor accelerator or amplifier intensifies the escalation factor and its effects.
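The sketch below shows one hypothetical way to hold a Bowtie's elements in code; the event, causes, barriers, consequences, and escalation factors are invented, not drawn from the source.

```python
# A minimal Bowtie data structure with invented content.

from dataclasses import dataclass, field

@dataclass
class Cause:
    description: str
    preventive_barriers: list = field(default_factory=list)  # accelerators, for trends

@dataclass
class Consequence:
    description: str
    recovery_barriers: list = field(default_factory=list)    # amplifiers, for opportunities

@dataclass
class Bowtie:
    event: str              # the centered Risk/Opportunity Event
    causes: list            # left side of the bowtie
    consequences: list      # right side of the bowtie
    escalation_factors: dict = field(default_factory=dict)   # barrier -> factors that may defeat it

bowtie = Bowtie(
    event="Tanker truck spill on highway",
    causes=[Cause("Slippery road conditions", ["Winter driver training", "Reduced speed limits"])],
    consequences=[Consequence("Chemical reaches waterway", ["Containment booms"])],
    escalation_factors={"Containment booms": ["Booms not stocked at nearest depot"]},
)

for cause in bowtie.causes:
    print(f"{cause.description} -> {bowtie.event} | barriers: {', '.join(cause.preventive_barriers)}")
for consequence in bowtie.consequences:
    print(f"{bowtie.event} -> {consequence.description} | barriers: {', '.join(consequence.recovery_barriers)}")
```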
Potential Pitfalls
Specificity is necessary to create actionable barriers, accelerators,
and amplifiers in a Bowtie Analysis. However, a detailed Bowtie
Analysis can present the impression that the authors have thought of
all options or outcomes when they actually have not, as
unanticipated options are often available to decision makers and
unintended consequences result.
Relationship to Other Techniques
The Bowtie method is like Decision Tree analysis because both
methods analyze chains of events to illustrate possible future
actions. Bowtie Analysis also evaluates controls, or barriers, that an
organization has in place. The Opportunities Incubator™ is another
technique that can be used to facilitate positive outcomes and thwart
or mitigate less desirable outcomes. It structures the process
decision makers can use to develop strategies for leveraging or
mitigating the impact of key drivers that influence primary actors,
who are associated with differing levels of intent and capability.
Origins of This Technique
The University of Queensland, Australia, is credited with
disseminating the first Bowtie diagrams at a lecture on hazard
analysis in 1979. After the Piper Alpha offshore oil and gas platform
explosion in 1988, the oil and gas industry adopted the technique to
develop a systematic way of understanding the causal relationships
among seemingly independent events and asserting control over the
potentially lethal hazards in the industry. The versatile Bowtie
Analysis technique is now in widespread use throughout a variety of
industries, including chemicals, aviation, and health care. It is also
used by several intelligence services. Additional resources on Bowtie
Analysis can be found at
https://www.cgerisk.com/knowledgebase/The_bowtie_method.
10.3 IMPACT MATRIX
The Impact Matrix identifies the key actors involved in a decision,
their level of interest in the issue, and the impact of the decision on
them. It is a technique managers use to gain a better sense of how
well or how poorly a decision may be received, how it is most likely
to play out, and what would be the most effective strategies to
resolve a problem. Analysts can also use it to anticipate how
decisions will be made in another organization or by a foreign leader.
When to Use It
The best time for a manager to use this technique is when a major
new policy initiative is being contemplated or a mandated change is
about to be announced. The technique helps managers identify
where they are most likely to encounter both resistance and support.
Intelligence analysts can also use the technique to assess how the
public might react to a new policy pronouncement by a foreign
government or a new doctrine posted on the internet by a political
movement. Invariably, the technique will uncover new insights by
focusing in a systematic way on all possible dimensions of the issue.
The matrix template makes the technique easy to use. Most often,
an individual manager will apply the technique to develop a strategy
for how he or she plans to implement a new policy or respond to a
newly decreed mandate from superiors. Managers can also use the
technique proactively before they announce a new policy or
procedure. The technique can expose unanticipated pockets of
resistance or support, as well as individuals to consult before the
policy or procedure becomes public knowledge. A single intelligence
analyst or manager can also use the technique, although it is usually
more effective if done as a group process.
Value Added
The technique provides the user with a comprehensive framework
for assessing whether a new policy or procedure will be met with
resistance or support. A key concern is to identify any actor who will
be heavily affected in a negative way. Those actors should be
engaged early on or ideally before the policy is announced, in case
they have ideas on how to make the new policy more digestible. At a
minimum, they will appreciate that their views—either positive or
negative—were sought out and considered. Support can be enlisted
from those who will be strongly impacted in a positive way.
Identify all the individuals or groups involved in the decision or issue. The list should include me
(usually the manager); my supervisor; my employees or subordinates; my client(s), colleagues, or
counterparts in my office or agency; and counterparts in other agencies. If analyzing the decision-
making process in another organization, the “me” becomes the decision maker.
Rate how important this issue is to each actor or how much each actor is likely to care about it. Use
a three-point scale: Low, Moderate, or High. The level of interest should reflect how great an impact
the decision would have on such issues as each actor’s time, quality of work life, and prospects for
success.
Figure 10.3 Impact Matrix: Identifying Key Actors, Interests, and Impact
Categorize the impact of the decision on each actor as Mostly Positive (P), Neutral or Mixed (O), or
Mostly Negative (N). If a decision has the potential to be negative, mark it as negative. If in some
cases the impact on a person or group is mixed, then either mark it as neutral or split the group into
subgroups if specific subgroups can be identified.
Review the matrix after completion and assess the likely overall reaction to the policy or decision.
Identify where the decision is likely to have a major negative impact and consider the utility of prior
consultations.
Identify where the decision is likely to have a major positive impact and consider enlisting the
support of key actors in helping make the decision or procedure work.
Finalize the action plan reflecting input gained from consultations.
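A minimal sketch of the Impact Matrix logic, with invented actors and ratings, might look like the following; it flags heavily and negatively affected actors for early consultation and strongly positive ones as potential supporters, per the Value Added discussion above.

```python
# Hypothetical Impact Matrix rows using the scales from the steps above;
# the actors and ratings are invented.

actors = {
    # actor: (level of interest: Low/Moderate/High, impact: P/O/N)
    "Me (manager)":           ("High", "P"),
    "My supervisor":          ("Moderate", "P"),
    "My employees":           ("High", "N"),
    "Counterparts elsewhere": ("Low", "O"),
}

# Consult heavily, negatively affected actors early; enlist those
# strongly affected in a positive way.
consult_first = [name for name, (interest, impact) in actors.items()
                 if interest == "High" and impact == "N"]
enlist_support = [name for name, (interest, impact) in actors.items()
                  if interest == "High" and impact == "P"]

print("Consult before announcing:", consult_first)
print("Enlist as supporters:", enlist_support)
```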
Fill in the SWOT table by listing Strengths, Weaknesses, Opportunities, and Threats that are
expected to facilitate or hinder achievement of the objective (see Figure 10.4). The significance of
the attributes’ and conditions’ impact on achievement of the objective is far more important than the
length of the list. It is often desirable to list the items in each quadrant in order of their significance
or to assign them values on a scale of 1 to 5.
Identify possible strategies for achieving the objective. This is done by asking how Strengths can be
exploited, how Weaknesses can be mitigated, how Opportunities can be seized, and how Threats
can be countered.
An alternative approach is to apply “matching and converting” techniques. Matching refers to matching
Strengths with Opportunities to make the Strengths even stronger. Converting refers to matching
Opportunities with Weaknesses to convert the Weaknesses into Strengths.
Potential Pitfalls
SWOT is simple, easy, and widely used, but it has limitations. It
focuses on a single goal without weighing the costs and benefits of
alternative means of achieving the same goal. In other words, SWOT
is a useful technique if the analyst or group recognizes that it does
not necessarily tell the full story of what decision should or will be
made. There may be other equally good or better courses of action.
Define the activities involved. List all the components or activities required to bring the project to
completion.
Calculate the time to do each task. Indicate the amount of time (duration) it will take to perform
the activity. This can be a set amount of time or a range from shortest to longest expected time.
Identify dependencies. Determine which activities are dependent on other activities and the
sequences that must be followed.
Identify pathways. Identify various ways or combinations of activities that would enable
accomplishment of the project.
Estimate time to complete. Calculate how much time would be required for each path taken. This
could be a set amount of time or a range.
Identify the optimal pathway. Rank order the various pathways in terms of time required and
select the pathway that requires the least amount of time. Identify which activities are critical to
achieving this goal.
Identify key indicators. Formulate expectations (theories) about potential indicators at each stage
that could either expedite or impede progress toward achieving the end point of the project.
Generate a final product. Capture all the data in a final chart and distribute it to all participants for
comment.
Given the complexity of many projects, software is often used for project management and tracking.
Microsoft Project is purpose-built for this task, and free charting tools such as yEd are serviceable
alternatives. Using this process, analysts can better recognize important nodes and associated key
indicators.
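A toy version of the pathway-timing steps, with invented activities and durations, could look like this; it enumerates alternative pathways and selects the quickest, as the steps above describe.

```python
# Invented activities (durations in days) and two alternative pathways.

activities = {
    "Collect requirements": 5,
    "Draft full assessment": 10,
    "Draft short memo": 3,
    "Coordinate review": 7,
    "Deliver briefing": 1,
}

pathways = [
    ["Collect requirements", "Draft full assessment", "Coordinate review", "Deliver briefing"],
    ["Collect requirements", "Draft short memo", "Coordinate review", "Deliver briefing"],
]

# Estimate time to complete each pathway, then rank and pick the quickest.
timed = sorted((sum(activities[step] for step in path), path) for path in pathways)
for total, path in timed:
    print(f"{total} days: {' -> '.join(path)}")
print("Optimal pathway:", " -> ".join(timed[0][1]))
```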
10.6 DECISION TREES
Decision Trees establish chains of decisions and/or events that
illustrate a comprehensive range of possible future decision points.
They paint a landscape for the decision maker showing the range of
options available, the estimated value or probability of each option,
and the likely implications or outcomes of choosing each option.
When to Use It
Decision Trees can be used whenever a decision maker needs to map a set of options, estimate the
value or probability of each, and trace their likely outcomes before choosing a course of action.
Both this technique and the Decision Matrix are useful for countering
the impact of cognitive biases and heuristics such as the Anchoring
Effect, Satisficing, and Premature Closure. They also help analysts
avoid falling into the intuitive traps of Relying on First Impressions,
Assuming a Single Solution, and Overrating Behavioral Factors.
The Method
Using a Decision Tree is a fairly simple process involving two steps: (1) building the tree and (2)
calculating the value or probability of each outcome represented on the tree (see Figure 10.6). Follow
these steps:
Draw lines from the square (the initial decision point) representing the range of options that can be
taken.
At the end of the line for each option, indicate either that further options are available (by drawing an
oval followed by more lines) or an outcome (by drawing a circle followed by one or more lines
describing the range of possibilities).
Continue this process along each branch of the tree until all options and outcomes are specified.
Establish a set of percentages (adding to 100) for each set of lines emanating from each oval.
Multiply the percentages shown along each critical path or branch of the tree and record these
percentages at the far right of the tree. Check to make sure all the percentages in this column
add to 100.
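The percentage bookkeeping can be sketched as follows, using an invented two-level tree loosely patterned on the counterterrorism example in Figure 10.6; the branch percentages are assumptions, not the figure's values.

```python
# Each set of branches sums to 100; path percentages are the products
# of the branch percentages along the way.

tree = {
    # option: (percent, sub-branches, or None if this is a terminal outcome)
    "Arrest suspect": (40, {
        "Plot described under interrogation": (70, None),
        "No evidence of plot": (30, None),
    }),
    "Keep suspect under surveillance": (60, {
        "Meets with known terrorist": (50, None),
        "No evidence of plotting": (50, None),
    }),
}

def path_percentages(node, prefix="", running=100.0):
    """Multiply percentages along each branch; yield terminal paths."""
    for name, (percent, children) in node.items():
        share = running * percent / 100.0
        label = prefix + name
        if children is None:
            yield label, share
        else:
            yield from path_percentages(children, label + " > ", share)

total = 0.0
for label, share in path_percentages(tree):
    total += share
    print(f"{share:5.1f}%  {label}")
print(f"{total:5.1f}%  total (should be 100)")
```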
Figure 10.6 Counterterrorism Attack Decision Tree
The most valuable or most probable outcome will have the highest percentage assigned to it, and the
least valuable or least probable outcome will have the lowest percentage assigned to it.
Potential Pitfalls
A Decision Tree is only as good as the reliability of the data,
completeness of the range of options, and validity of the qualitative
probabilities or values assigned to each option. A detailed Decision
Tree can present the misleading impression that the authors have
thought of all possible options or outcomes. For example, options
may be available that the authors of the analysis did not imagine,
just as there might be unintended consequences that the authors did
not anticipate.
Relationship to Other Techniques
A Decision Tree is similar structurally to Critical Path Analysis and
Program Evaluation and Review Technique (PERT) charts. Both of
these techniques, however, only show the activities and connections
that need to be undertaken to complete a complex task. A timeline
analysis (as is often done in support of a criminal investigation) is
essentially a Decision Tree drawn after the fact, showing only the
paths of actual events.
Origins of This Technique
This description of Decision Trees was taken from the Canadian
government’s Structured Analytic Techniques for Senior Analysts
course. The Intelligence Analyst Learning Program developed the
course, and the materials are used here with the permission of the
Canadian government. More detailed discussions of how to build
and use Decision Trees are readily available on the internet, for
example, at the MindTools website and at
https://medium.com/greyatom/decision-trees-a-simple-way-to-
visualize-a-decision-dc506a403aeb.
10.7 DECISION MATRIX
A Decision Matrix helps analysts identify the course of action that
best achieves specified goals or preferences.
When to Use It
The Decision Matrix technique should be used when a decision
maker has multiple options from which to choose, has multiple
criteria for judging the desirability of each option, and/or needs to
find the decision that maximizes a specific set of goals or
preferences. For example, a Decision Matrix can help choose among
various plans or strategies for improving intelligence analysis, select
one of several IT systems one is considering buying, determine
which of several job applicants is the right choice, or consider any
personal decision, such as what to do after retiring.
The matrix helps decision makers and analysts avoid the cognitive
traps of Premature Closure, Satisficing, and the Anchoring Effect. It
also helps analysts avoid falling prey to intuitive traps such as
Relying on First Impressions, Assuming a Single Solution, and
Overrating Behavioral Factors.
The Method
Create a Decision Matrix table. To do this, break down the decision problem into two main components
by making two lists—a list of options or alternatives for making a choice and a list of criteria to be used
when judging the desirability of the options. Then follow these steps:
Create a matrix with one column for each option. Write the name of each option at the head of one
of the columns. Add two more blank columns on the left side of the table.
Count the number of selection criteria, and then adjust the table so that it has that many rows plus
two more: one at the top to list the options and one at the bottom to show the scores for each
option. Try to avoid generating a large number of criteria: usually four to six will suffice. In the first
column on the left side, starting with the second row, write in the selection criteria down the left side
of the table, one per row. Listing them roughly in order of importance can sometimes add value but
doing so is not critical. Leave the bottom row blank. (Note: Whether you enter the options across
the top row and the criteria down the far-left column, or vice versa, depends on what fits best on the
page. If one of the lists is significantly longer than the other, it usually works best to put the longer
list in the left-side column.)
Assign weights based on the importance of each of the selection criteria. This can be done in
several ways, but the preferred way is to take 100 percent and divide these percentage points
among the selection criteria. Be sure that the weights for all the selection criteria combined add to
100 percent. Also, be sure that all the criteria are phrased in such a way that a higher weight is
more desirable. (Note: If this technique is being used by an intelligence analyst to support decision
making, this step should not be done by the analyst. The assignment of relative weights is up to the
decision maker.)
Work across the matrix one row at a time to evaluate the relative ability of each of the options to
satisfy each of the selection criteria. For example, assign ten points to each row and divide these
points according to an assessment of the degree to which each of the options satisfies each of the
selection criteria. Then multiply this number by the weight for that criterion. Figure 10.7 is an
example of a Decision Matrix with three options and six criteria.
Add the numbers calculated in the columns for each of the options. If you accept the judgments and
preferences expressed in the matrix, the option with the highest number will be the best choice.
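A worked sketch of the weighted scoring follows, with invented options, criteria, weights, and ratings.

```python
# Weights sum to 100; each criterion's 10 rating points are divided
# among the options row by row, per the steps above.

criteria_weights = {"Cost": 30, "Speed": 20, "Accuracy": 50}
ratings = {
    # criterion: rating points out of 10 for (Option 1, Option 2, Option 3)
    "Cost":     (5, 3, 2),
    "Speed":    (3, 4, 3),
    "Accuracy": (2, 3, 5),
}

options = ["Option 1", "Option 2", "Option 3"]
scores = [0] * len(options)
for criterion, weight in criteria_weights.items():
    for i, points in enumerate(ratings[criterion]):
        scores[i] += points * weight  # rating times the criterion's weight

for option, score in zip(options, scores):
    print(option, score)
# With these judgments, the highest total marks the preferred option;
# the matrix informs the trade-offs, it does not make the decision.
```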
When using this technique, many analysts will discover relationships or opportunities not previously
recognized. A sensitivity analysis may find that plausible changes in some values would lead to a
different choice. For example, the analyst might think of a way to modify an option in a way that makes it
more desirable or might rethink the selection criteria in a way that changes the preferred outcome. The
numbers calculated in the matrix do not make the decision. The matrix is just an aid to help the analyst
and the decision maker understand the trade-offs between multiple competing preferences.
Figure 10.7 Decision Matrix
Origins of This Technique
This is one of the most commonly used techniques for decision
analysis. Many variations of this basic technique have been called by
many different names, including decision grid, Multiple Attribute
Utility Analysis (MAUA), Multiple Criteria Decision Analysis (MCDA),
Multiple Criteria Decision Making (MCDM), Pugh Matrix, and Utility
Matrix. For a comparison of various approaches to this type of
analysis, see Panos M. Pardalos and Evangelos Triantaphyllou, eds.,
Multi-Criteria Decision Making Methods: A Comparative Study
(Dordrecht, Netherlands: Kluwer Academic Publishers, 2000).
10.8 FORCE FIELD ANALYSIS
Force Field Analysis is a simple technique for listing and assessing
all the forces for and against a change, problem, or goal.
In the world of business and politics, the technique can help develop
and refine strategies to promote a particular policy or ensure that a
desired outcome actually occurs. In such instances, it is often useful
to define the various forces in terms of key individuals who need to
be persuaded. For example, instead of listing budgetary restrictions
as a key factor, one would write down the name of the person who
controls the budget. Similarly, Force Field Analysis can help
diagnose what forces and individuals need to be constrained or
marginalized to prevent a policy from being adopted or an outcome
from happening.
Value Added
The primary benefit of Force Field Analysis is that it requires an analyst to consider the forces and
factors (and, in some cases, individuals) that influence a situation. It helps analysts think through the
ways various forces affect the issue and fosters recognition that such forces can be divided into two
categories: driving forces and restraining forces. By sorting the evidence into two categories, the analyst
can delve deeply into the issue and consider less obvious factors.
By weighing all the forces for and against an issue, analysts can better recommend strategies that
would be most effective in reducing the impact of the restraining forces and strengthening the effect of
the driving forces.
Force Field Analysis offers a powerful way to visualize the key elements of the problem by providing a
simple tally sheet for displaying the different levels of intensity of the forces individually and together.
With the data sorted into two lists, decision makers can more easily identify which forces deserve the
most attention and develop strategies to overcome the negative elements while promoting the positive
elements. Figure 10.8 is an example of a Force Field diagram.
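As a rough illustration, the tally can be scripted. The sketch below borrows a subset of the arguments and intensity weights from the Figure 10.8 description at the end of this chapter but is otherwise an invented rendering.

```python
# Scripted Force Field tally sheet; weights run from 1 (weak) to 5 (strong).

driving = {
    "Local auto salvage yards will remove cars for free": 5,
    "The City Council supports the plan": 4,
    "The public climate favors cleaning up the city": 2,
}
restraining = {
    "The owners of old cars feel threatened": 3,
    "Locating and disposing of cars will be expensive": 3,
    "The definition of 'abandoned cars' is unclear to the public": 1,
}

print("Driving total:", sum(driving.values()))
print("Restraining total:", sum(restraining.values()))
# The tally is a visualization aid, not a verdict: focus on the
# strongest restraining forces as candidates to reduce or work around.
for force, weight in sorted(restraining.items(), key=lambda kv: -kv[1]):
    print(f"Reduce ({weight}): {force}")
```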
An issue is held in balance by the interaction of two opposing sets of forces—those seeking to
promote change (driving forces) and those attempting to maintain the status quo (restraining
forces).
—Kurt Lewin, Resolving Social Conflicts (1948)
Figure 10.8 Force Field Analysis: Removing Abandoned Cars from City Streets
Source: Pherson Associates, LLC, 2019.
Force Field Analysis is a powerful tool for reducing the impact of several of the most common cognitive
biases and heuristics: Premature Closure, Groupthink, and the Availability Heuristic. It also is a useful
weapon against the intuitive traps of Relying on First Impressions, Expecting Marginal Change, and
Assuming a Single Solution.
10.9 PROS-CONS-FAULTS-AND-FIXES
The Method
The technique first requires a list of Pros and Cons about the new
idea or the choice between two alternatives. If there seems to be
excessive enthusiasm for an idea and a risk of acceptance without
critical evaluation, the next step is to look for “Faults.” A Fault is any
argument that a Pro is unrealistic, won’t work, or will have
unacceptable side effects. On the other hand, if there seems to be a
bias toward negativity or a risk of the idea being dropped too quickly
without careful consideration, the next step is to look for “Fixes.” A
Fix is any argument or plan that would neutralize or minimize a Con,
or even change it into a Pro. In some cases, it may be appropriate to
look for both Faults and Fixes before comparing the two lists and
finalizing a decision.
List the Pros in favor of the decision or choice. Think broadly and creatively, and list as many
benefits, advantages, or other positives as possible.
List the Cons, or arguments against what is proposed. The Cons usually will outnumber the Pros,
as most humans are naturally critical. It is often difficult to get a careful consideration of a new idea
because it is easier to think of arguments against something new than to imagine how the new idea
might work.
Review each list and consolidate similar ideas. If two Pros are similar or overlapping, consider
merging them to eliminate any redundancy. Do the same for any overlapping Cons.
If the choice is between two clearly defined options, go through the previous steps for the second
option. If there are more than two options, a technique such as the Decision Matrix may be more
appropriate than Pros-Cons-Faults-and-Fixes.
Decide whether the goal is to demonstrate that an idea will not work or show how best to make it
succeed.
If the goal is to challenge an initial judgment that an idea will not work, take the Cons and see if
they can be “Fixed.” How can their influence be neutralized? Can you even convert them to Pros?
Four possible strategies are to
Propose a modification of the Con that would significantly lower the risk of the Con being a
problem.
Identify a preventive measure that would significantly reduce the chances of the Con being a
problem.
Create a contingency plan that includes a change of course if certain indicators are observed.
Identify a need for further research to confirm the assumption that the Con is a problem.
If the goal is to challenge an initial optimistic assumption that the idea will work and should be
pursued, take the Pros, one at a time, and see if they can be “Faulted.” That means to try to figure
out how the Pro might fail to materialize or have undesirable consequences. This exercise is
intended to counter any wishful thinking or unjustified optimism about the idea. A Pro might be
Faulted in at least three ways:
Identify a reason why the Pro would not work or why the benefit would not be received.
Identify a need for further research or information gathering to confirm or refute the assumption
that the Pro will work or be beneficial.
A third option is to combine both approaches: to Fault the Pros and Fix the Cons.
Compare the Pros, including any Faults, against the Cons, including the Fixes. Weigh one against
the other and make the choice. The choice is based on your professional judgment, not on any
numerical calculation of the number or value of Pros versus Cons.
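A toy bookkeeping sketch follows, with invented entries; it simply keeps Faults attached to Pros and Fixes attached to Cons without summing any scores, consistent with the caution above.

```python
# Toy Pros-Cons-Faults-and-Fixes bookkeeping with invented entries.

pros_with_faults = {
    "Frees analyst time for deeper research":
        ["Fault: savings may be absorbed by new taskings"],
}
cons_with_fixes = {
    "Requires retraining the whole team":
        ["Fix: phase in training one unit at a time"],
    "Initial output quality may dip":
        ["Fix: contingency plan to revert if quality indicators drop"],
}

for pro, faults in pros_with_faults.items():
    print("Pro:", pro)
    for fault in faults:
        print("   ", fault)
for con, fixes in cons_with_fixes.items():
    print("Con:", con)
    for fix in fixes:
        print("   ", fix)
# The final comparison is a professional judgment; no scores are summed.
```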
Potential Pitfalls
Often when listing the Pros and Cons, analysts will assign weights to
each Pro and Con on the list and then re-sort the lists, with the Pros
or Cons receiving the most points at the top of the list and those
receiving the fewest points at the bottom. This can be a useful
exercise, helping the analyst weigh the balance of one against the
other, but the authors strongly recommend against mechanically
adding up the scores on each side and deciding that the list with the
most points is the right choice. Any numerical calculation can be
easily manipulated by simply adding more Pros or more Cons to
either list to increase its overall score. The best protection against
this practice is simply not to add up the points in either column.
Origins of This Technique
Pros-Cons-Faults-and-Fixes is Richards J. Heuer Jr.’s adaptation of
the Pros-Cons-and-Fixes technique described by Morgan D. Jones
in The Thinker’s Toolkit: Fourteen Powerful Techniques for Problem
Solving (New York: Three Rivers Press, 1998), 72–79. Jones
assumed that humans are “compulsively negative” and that
“negative thoughts defeat creative objective thinking.” Thus, his
technique focused only on Fixes for the Cons. The technique
described here recognizes that analysts and decision makers can
also be biased by overconfidence, in which case Faulting the Pros
may be more important than Fixing the Cons.
10.10 COMPLEXITY MANAGER
Complexity Manager helps analysts and decision makers understand
and anticipate changes in complex systems. As used here, the word
“complexity” encompasses any distinctive set of interactions that are
more complicated than even experienced analysts can think through
solely in their heads.4
When to Use It
As a policy support tool, Complexity Manager can help assess the
chances for success or failure of a new or proposed program or
policy and identify opportunities for influencing the outcome of any
situation. It is also useful in identifying what would have to change to
achieve a specified goal as well as the unintended consequences
from the pursuit of a policy goal.
1. Define the problem. State the problem (plan, goal, outcome) to be analyzed, including the time
period covered by the analysis.
2. Identify and list relevant variables. Use one of the brainstorming techniques described in chapter
6 to identify the significant variables (factors, conditions, people, etc.) that may affect the situation of
interest during the designated time period. Think broadly to include organizational or environmental
constraints that are beyond anyone’s ability to control. If the goal is to estimate the status of one or
more variables several years in the future, those variables should be at the top of the list. Group the
other variables in some logical manner with the most important variables at the top of the list.
3. Create a Cross-Impact Matrix. Create a matrix in which the number of rows and columns are each
equal to the number of variables plus one header row (see chapter 7). Leaving the cell at the top-
left corner of the matrix blank, enter all the variables in the cells in the row across the top of the
matrix and the same variables in the column down the left side. The matrix then has a cell for
recording the nature of the relationship between all pairs of variables. This is called a Cross-Impact
Matrix—a tool for assessing the two-way interaction between each pair of variables. Depending on
the number of variables and the length of their names, it may be convenient to use the variables’
letter designations across the top of the matrix rather than the full names.
When deciding whether to include a variable, or to combine two variables into one, keep in mind
that the number of variables has a significant impact on the complexity and the time required for an
analysis. If an analytic problem has five variables, there are 20 possible two-way interactions
between those variables. That number increases rapidly as the number of variables increases. With
10 variables, as in Figure 10.10, there are 90 possible interactions. With 15 variables, there are
210. Complexity Manager may be impractical with more than 15 variables.
4. Assess the interaction between each pair of variables. Use a diverse team of experts on the
relevant topic to analyze the strength and direction of the interaction between each pair of
variables. Enter the results in the relevant cells of the matrix. For each pair of variables, ask the
question: Does this variable affect the paired variable in a manner that will increase or decrease the
impact or influence of that variable?
When entering ratings in the matrix, it is best to take one variable at a time, first going down the
column and then working across the row. Note that the matrix requires each pair of variables to be
evaluated twice—for example, the impact of variable A on variable B and the impact of variable B
on variable A. To record what variables impact variable A, work down column A and ask yourself
whether each variable listed on the left side of the matrix has a positive or negative influence, or no
influence at all, on variable A. To record the reverse impact of variable A on the other variables,
work across row A to analyze how variable A impacts the variables listed across the top of the
matrix.
Analysts can record the nature and strength of impact that one variable has on another in two
different ways. Figure 10.10 uses plus and minus signs to show whether the variable being
analyzed has a positive or negative impact on the paired variable. The size of the plus or minus
sign signifies the strength of the impact on a three-point scale. The small plus or minus sign shows
a weak impact; the medium size a medium impact; and the large size a strong impact. If the
variable being analyzed has no impact on the paired variable, the cell is left empty. If a variable
might change in a way that could reverse the direction of its impact, from positive to negative or vice
versa, this is shown by using both a plus and a minus sign.
The completed matrix shown in Figure 10.10 is the same matrix you will see in chapter 11, when
the Complexity Manager technique is used to forecast the future of Structured Analytic Techniques.
The plus and minus signs work well for the finished matrix. When first populating the matrix,
however, it may be easier to use letters (P and M for plus and minus) to show whether each
variable has a positive or negative impact on the other variable with which it is paired. Each P or M
is then followed by a number to show the strength of that impact. A three-point scale is used, with 3
indicating a Strong Impact, 2 Medium, and 1 Weak.
After rating each pair of variables, and before doing further analysis, consider pruning the matrix to
eliminate variables that are unlikely to have a significant effect on the outcome. It is possible to
measure the relative significance of each variable by adding up the weighted values in each row
and column. The sum of the weights in each row is a measure of each variable’s impact on the
entire system. The sum of the weights in each column is a measure of how much each variable is
affected by all the other variables (a simple computational sketch follows these steps). Those variables
most impacted by the other variables should be monitored as potential indicators of the direction in
which events are moving or as potential sources of unintended consequences.
5. Analyze direct impacts. Document the impact of each variable, starting with variable A. For each
variable, provide further clarification of the description, if necessary. Identify all the variables that
have an impact on that variable with a rating of 2 or 3, and briefly explain the nature, direction, and,
if appropriate, the timing of this impact. How strong is it and how certain is it? When might these
effects be observed? Will the effects be felt only in certain conditions? Next, identify and discuss all
variables on which this variable has an effect with a rating of 2 or 3 (Medium or Strong Impact),
including the strength of the impact and how certain it is to occur. Identify and discuss the
potentially good or bad side effects of these impacts.
6. Analyze loops and indirect impacts. The matrix shows only the direct effect of one variable on
another. When you are analyzing the direct impacts variable by variable, there are several things to
look for and make note of. One is feedback loops. For example, if variable A has a positive impact
on variable B, and variable B also has a positive impact on variable A, this is a positive feedback
loop. Or there may be a three-variable loop, from A to B to C and back to A. The variables in a loop
gain strength from one another, and this boost may enhance their ability to influence other
variables. Another thing to look for is circumstances where the causal relationship between
variables A and B is necessary but not sufficient for something to happen. For example, variable A
has the potential to influence variable B, and may even be trying to influence variable B, but it can
do so effectively only if variable C is also present. In that case, variable C is an enabling variable
and takes on greater significance than it ordinarily would have.
All variables are either static or dynamic. Static variables are expected to remain unchanged during
the period covered by the analysis. Dynamic variables are changing or have the potential to
change. The analysis should focus on the dynamic variables, as these are the sources of surprise
in any complex system. Determining how these dynamic variables interact with other variables and
with each other is critical to any forecast of future developments. Dynamic variables can be either
predictable or unpredictable. Predictable change includes established trends or established policies
that are in the process of being implemented. Unpredictable change may be a change in leadership
or an unexpected change in policy or available resources.
7. Draw conclusions. Using data about the individual variables assembled in steps 5 and 6, draw
conclusions about the entire system. What is the most likely outcome, or what changes might be
anticipated during the specified time period? What are the driving forces behind that outcome?
What things could happen to cause a different outcome? What desirable or undesirable side effects
should be anticipated? If you need help to sort out all the relationships, it may be useful to sketch
out by hand a diagram showing all the causal relationships. A Concept Map (chapter 6) may be
useful for this purpose. If a diagram is helpful during the analysis, it may also be helpful to the
reader or customer to include such a diagram in the report.
8. Conduct an opportunity analysis. When appropriate, analyze what actions could be taken to
influence this system in a manner favorable to the primary customer of the analysis.
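For both the pruning arithmetic in step 4 and the loop-spotting in step 6, a minimal sketch follows, assuming an invented four-variable matrix with signed strengths; nothing here reproduces the ratings in Figure 10.10.

```python
# impact[i][j] is the effect of variable i on variable j, recorded as a
# signed strength from -3 to +3 (0 means no impact). All values invented.

variables = ["A", "B", "C", "D"]
impact = [
    [ 0, +3, +1,  0],
    [+2,  0,  0, -1],
    [ 0, +2,  0, +3],
    [-1,  0, +1,  0],
]
n = len(variables)

# Step 4 pruning aid: row sums measure how much a variable drives the
# system; column sums measure how much it is driven by the system.
for i, name in enumerate(variables):
    drives = sum(abs(x) for x in impact[i])
    driven = sum(abs(impact[j][i]) for j in range(n))
    print(f"{name}: drives {drives}, driven {driven}")

# Step 6 loop-spotting aid: flag two-variable feedback loops.
for i in range(n):
    for j in range(i + 1, n):
        if impact[i][j] > 0 and impact[j][i] > 0:
            print(f"Positive feedback loop: {variables[i]} <-> {variables[j]}")
        elif impact[i][j] and impact[j][i]:
            print(f"Mixed loop: {variables[i]} <-> {variables[j]}")
```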
Relationship to Other Techniques
The same procedures for creating a matrix and coding data can be
applied in using a Cross-Impact Matrix (chapter 7). The difference is
that the Cross-Impact Matrix technique is used only to identify and
share information about the cross-impacts in a group or team
exercise. The goal of Complexity Manager is to build on the Cross-
Impact Matrix to analyze the working of a complex system.
A hazard or opportunity gives rise to causes or trends on the left, each passing through preventive
barriers or accelerators; these lead to the central risk event or opportunity event, which leads to
consequences on the right, each passing through recovery barriers or amplifiers. Escalation factors,
with their own EF barriers or accelerators, attach to these controls. Causes are anticipatory and
consequences are reactive.
Back to Figure
Stimulus leads to response, which leads to different forms of instability. The stimulus is the set of
sources of grievance and conflict: domestic or international, intellectual, social, political, economic, and
military. The opposition's ability to articulate grievances or mobilize discontent tests the government's or
society's capacity to respond: legitimacy or leadership, resource availability or responsiveness,
institutional strength, and monopoly of coercive force. The response leads to grievance and conflict and
to different forms of instability. All and legitimate: peaceful political change. Elite and illegitimate:
conspiracy or coups d'état, internal war or insurgencies, and group-on-group violence. Mass and
illegitimate: turmoil, internal war or insurgencies, and group-on-group violence.
Back to Figure
A suspected terrorist is either arrested or not arrested. If arrested, the suspect is either interrogated or
released; if interrogated, either the plot to attack is described or there is no evidence of a plot. If the
suspect is not arrested, the suspect is either under no surveillance or under surveillance. If under
surveillance, either there is no evidence of plotting or the suspect meets with a known terrorist. If there
is a meeting, the options are to place a covert agent, who either discovers plans to attack or gets killed,
or to arrest all suspects, who are then detained in jail or released on bail.
Back to Figure
Note: The number value and the size of the type indicate the significance of each argument; the higher
the number, the larger the type. The arguments corresponding to each number value are as follows. 1:
The definition of "abandoned cars" is unclear to the public. 2: The public climate favors cleaning up the
city. It is difficult to locate abandoned cars. The Health Department has cited old and abandoned
vehicles as potential health hazards. 3: A procedure is needed to verify a car's status and notify owners.
Advocacy groups have expressed interest. The owners of old cars feel threatened. Locating and
disposing of cars will be expensive. 4: The City Council supports the plan. A location is needed to put
the abandoned cars once identified. 5: Local auto salvage yards will remove cars for free. The public
service director supports the plan.
Back to Figure
Reading the matrix: The cells in each row show the impact of the variable represented by that row on
each of the variables listed across the top of the matrix. The cells in each column show the impact of
each variable listed down the left side of the matrix on the variable represented by the column.
Combination of positive and negative means impact could go either direction. Empty cell equals no
impact.
The variables are listed as rows down the left side and, abbreviated by letter, as columns across the top
(A through I). Row A is Increased use of Structured Analytic Techniques; row B is Executive support for
collaboration and Structured Analytic Techniques; other rows include Availability of virtual technologies,
E, Availability of analytic tradecraft support, and H, Research on effectiveness of Structured Analytic
Techniques. Each cell rates the impact of the row variable on the column variable as weak, medium, or
strong and as positive, negative, or both; diagonal cells are nil.
The next step in Complexity Manager is to put these ten variables into a Cross-Impact Matrix. This is a
tool for the systematic description of the two-way interaction between each pair of variables. Each pair is
assessed using the following question: Does this variable affect the paired variable in a manner that will
contribute to increased or decreased use of Structured Analytic Techniques in 2030? The completed
matrix is shown in Figure 11.3.1. This is the same matrix that appears in chapter 10.
The goal of this analysis is to assess the likelihood of a substantial increase in the use of Structured
Analytic Techniques by 2030, while identifying any side effects that might be associated with such an
increase. That is why increased use of structured techniques is the lead variable, variable A, which
forms the first column and top row of the matrix. The letters across the top of the matrix are
abbreviations of the same variables listed down the left side.
To fill in the matrix, the authors started with column A to assess the impact of each of the variables listed
down the left side of the matrix on the frequency of use of structured analysis. This exercise provides an
overview of what likely are the most important variables that will impact positively or negatively on the
use of structured analysis. Next, the authors completed row A across the top of the matrix. This shows
the reverse impact—the impact of increased use of structured analysis on the other variables listed
across the top of the matrix. Here one identifies the second-tier effects. Does the growing use of
structured techniques affect any of these other variables in ways that one needs to be aware of?11
The remainder of the matrix was then completed one variable at a time, while identifying and making
notes on potentially significant secondary effects. A secondary effect occurs when one variable
strengthens or weakens another variable, which in turn has an effect on or is affected by Structured
Analytic Techniques.
11.3.2 Identifying Key Drivers
A rigorous analysis of the interaction of all the variables suggests
several conclusions about the future of structured analysis. The
analysis focuses on those variables that (1) are changing or that
have the potential to change and (2) have the greatest impact on
other significant variables.
The principal potential positive drivers of the system are the extent to
which (1) senior executives support a culture of collaboration and (2)
the work environment supports the development of virtual
collaborative communities and technologies. These two variables
provide strong support to structured analysis through their
endorsement of and support for collaboration. Structured analysis
reinforces them in turn by providing an optimal process through
which collaboration occurs.
Two other variables are likely to play a major role because they have
the most cross-impact on other variables as shown in the matrix.
These two variables represent opportunities either to facilitate the
change or obstacles that need to be managed. The two variables are
(1) the level of support for analytic tradecraft cells, on-the-job
mentoring, and facilitators to assist analysts and analytic teams in
using structured techniques12 and (2) the results of ongoing research
on the effectiveness of structured techniques.
The speed and ease of the change in integrating structured
techniques into the analytic process will be significantly
influenced by the availability of senior mentors and facilitators
who can identify which techniques to use and explain how to
use them correctly.
One no longer hears the old claim that there is no proof that the use
of Structured Analytic Techniques improves analysis. The
widespread use of structured techniques in 2030 is partially
attributable to the debunking of that claim. Several European Union
and other foreign studies involving a sample of reports prepared with
the assistance of several structured techniques and a comparable
sample of reports where structured techniques had not been used
showed that the use of structured techniques had distinct value.
Researchers interviewed the authors of the reports, their managers,
and the clients who received these reports. The studies confirmed
that reports prepared with the assistance of the selected structured
techniques were more thorough, provided better accounts of how the
conclusions were reached, and generated greater confidence in the
conclusions than did reports for which such techniques were not
used. The findings were replicated by several government
intelligence services that use the techniques, and the results were
sufficiently convincing to quiet most of the doubters.
11. For a more detailed explanation of how each variable was rated
in the Complexity Analysis matrix, send an email requesting the data
to think@globalytica.com.