MORE ADVANCE PRAISE FOR

STRUCTURED ANALYTIC
TECHNIQUES FOR INTELLIGENCE
ANALYSIS

The 3rd edition of this wonderful book brings some additional tools which are relevant and invaluable to any analyst in any field. The structure and framework ensure ease of use and are very practical—particularly the grouping of the techniques into “families.” The applicability of the techniques is easily understood and demonstrates the value of their use in collaborative analytical teams. This is a book that should be used for teaching, but analysts and managers alike should have it beside them when undertaking analysis or making decisions.
—Gillian Wilson, retired inspector, Victorian Police; life member,
Australian Institute of Professional Intelligence Officers

Pherson and Heuer’s third edition of Structured Analytic Techniques consolidates its place as the go-to source for teaching and learning SATs. In an increasingly complex security environment, this edition will help practitioners navigate the shoals of uncertainty, deception, and information overload for decision-makers.
—Dr. Patrick F. Walsh, associate professor, Charles Sturt University,
Australia

This is a must-read and essential reference guide for all intelligence analysts. In simple, clear language, Pherson and Heuer outline key structured techniques that, when applied, result in better-quality, higher-impact analysis. Kudos to the authors for providing an invaluable resource to the intelligence community.
—Scott Leeb, head of knowledge, Fragomen LLC; former senior
intelligence officer

Structured Analytic Techniques for Intelligence Analysis is a comprehensive and accessibly written text that will be of interest to novices and professionals alike. Its comprehensive coverage of analytic techniques and the analytic process makes this work essential reading for students of intelligence analysis. A comprehensive handbook on analytic techniques.
—Christopher K. Lamont, University of Groningen; associate professor, Tokyo International University

In these pages, we start to understand the role of cognitive bias, the value of words, the value of being. The authors bring you back to the base. If our instincts spur our brain to action, knowing the base helps you anticipate the ideas and recognize the mindsets. In the field of intelligence analysis, if you don’t understand this, you will never make sense of how others behave.
—Sabrina Magris, École Universitaire Internationale, Rome, Italy

Excellent publication for the study of intelligence analysis, structured analytical techniques, and their application in this increasingly dangerous environment. A must-read for anyone entering the intelligence community as an analyst, practitioner, stakeholder, or leader.
—Charles E. Wilson, University of Detroit Mercy
STRUCTURED ANALYTIC
TECHNIQUES FOR
INTELLIGENCE ANALYSIS
Third Edition

Randolph H. Pherson

Richards J. Heuer Jr.


FOR INFORMATION:

CQ Press

An Imprint of SAGE Publications, Inc.

2455 Teller Road

Thousand Oaks, California 91320

E-mail: order@sagepub.com

SAGE Publications Ltd.

1 Oliver’s Yard

55 City Road

London EC1Y 1SP

United Kingdom

SAGE Publications India Pvt. Ltd.

B 1/I 1 Mohan Cooperative Industrial Area

Mathura Road, New Delhi 110 044

India

SAGE Publications Asia-Pacific Pte. Ltd.

18 Cross Street #10-10/11/12

China Square Central

Singapore 048423

Copyright © 2021 by CQ Press, an Imprint of SAGE Publications, Inc. CQ


Press is a registered trademark of Congressional Quarterly Inc.

All rights reserved. Except as permitted by U.S. copyright law, no part of this
work may be reproduced or distributed in any form or by any means, or stored
in a database or retrieval system, without permission in writing from the
publisher.

All third party trademarks referenced or depicted herein are included solely for
the purpose of illustration and are the property of their respective owners.
Reference to these trademarks in no way indicates any relationship with, or
endorsement by, the trademark owner.
Printed in the United States of America

Library of Congress Cataloging-in-Publication Data

Names: Pherson, Randolph H., author. | Heuer, Richards J., author.

Title: Structured analytic techniques for intelligence analysis / by Randolph H. Pherson and Richards J.
Heuer Jr.

Description: Third edition. | Thousand Oaks, California : SAGE, CQ Press, [2021]

Identifiers: LCCN 2019033477 | ISBN 9781506368931 (spiral bound) | ISBN 9781506368917 (epub) |
ISBN 9781506368924 (epub) | ISBN 9781506368948 (pdf)

Subjects: LCSH: Intelligence service—United States. | Intelligence service—Methodology.

Classification: LCC JK468.I6 P53 2021 | DDC 327.12—dc23

LC record available at https://lccn.loc.gov/2019033477

This book is printed on acid-free paper.

Acquisitions Editor: Anna Villarruel

Editorial Assistant: Lauren Younker

Production Editor: Tracy Buyan

Copy Editor: Taryn Bigelow

Typesetter: C&M Digitals (P) Ltd.

Proofreader: Heather Kerrigan

Cover Artist: Adriana Gonzalez

Cover Designer: Janet Kiesel

Marketing Manager: Jennifer Jones


CONTENTS
Figures
Foreword by John McLaughlin
Preface
Chapter 1 • Introduction and Overview
1.1 Our Vision
1.2 Role of Structured Analytic Techniques
1.3 History of Structured Analytic Techniques
1.4 The Expanding Use of Structured Analytic Techniques
1.5 Selection of Techniques for This Book
1.6 Quick Overview of Chapters
Chapter 2 • The Role of Structured Techniques
2.1 Two Types of Thinking
2.2 Developing a Taxonomy of Structured Analytic
Techniques
2.3 Dealing with Cognitive Limitations
2.4 Matching Cognitive Limitations to Structured
Techniques
2.5 Combating Digital Disinformation
Chapter 3 • Choosing the Right Technique
3.1 The Six Families
3.2 Core Techniques
3.3 Selecting the Right Technique
3.4 Projects Using Multiple Techniques
3.5 Common Errors in Selecting Techniques
3.6 Making a Habit of Using Structured Techniques
Chapter 4 • Practitioner’s Guide to Collaboration
4.1 Social Networks and Analytic Teams
4.2 Dividing the Work
4.3 Value of Collaborative Processes
4.4 Common Pitfalls with Small Groups
4.5 Benefiting from Diversity
4.6 Advocacy versus Objective Inquiry
4.7 Leadership and Training
Chapter 5 • Getting Organized
5.1 Sorting
5.2 Ranking, Scoring, and Prioritizing
5.2.1 The Method: Ranked Voting
5.2.2 The Method: Paired Comparison
5.2.3 The Method: Weighted Ranking
5.3 Matrices
5.4 Process Maps
5.5 Gantt Charts
Chapter 6 • Exploration Techniques
6.1 Simple Brainstorming
6.2 Cluster Brainstorming
6.3 Nominal Group Technique
6.4 Circleboarding™
6.5 Starbursting
6.6 Mind Maps and Concept Maps
6.7 Venn Analysis
6.8 Network Analysis
Chapter 7 • Diagnostic Techniques
7.1 Key Assumptions Check
7.2 Chronologies and Timelines
7.3 Cross-Impact Matrix
7.4 Multiple Hypothesis Generation
7.4.1 The Method: Simple Hypotheses
7.4.2 The Method: Quadrant Hypothesis Generation
7.4.3 The Method: Multiple Hypotheses Generator®
7.5 Diagnostic Reasoning
7.6 Analysis of Competing Hypotheses
7.7 Inconsistencies Finder™
7.8 Deception Detection
7.9 Argument Mapping
Chapter 8 • Reframing Techniques
8.1 Cause and Effect Techniques
8.1.1 Outside-In Thinking
8.1.2 Structured Analogies
8.1.3 Red Hat Analysis
8.2 Challenge Analysis Techniques
8.2.1 Quadrant Crunching™
8.2.2 Premortem Analysis
8.2.3 Structured Self-Critique
8.2.4 What If? Analysis
8.2.5 High Impact/Low Probability Analysis
8.2.6 Delphi Method
8.3 Conflict Management Techniques
8.3.1 Adversarial Collaboration
8.3.2 Structured Debate
Chapter 9 • Foresight Techniques
9.1 Key Drivers Generation™
9.2 Key Uncertainties Finder™
9.3 Reversing Assumptions
9.4 Simple Scenarios
9.5 Cone of Plausibility
9.6 Alternative Futures Analysis
9.7 Multiple Scenarios Generation
9.8 Morphological Analysis
9.9 Counterfactual Reasoning
9.10 Analysis by Contrasting Narratives
9.11 Indicators Generation, Validation, and Evaluation
9.11.1 The Method: Indicators Generation
9.11.2 The Method: Indicators Validation
9.11.3 The Method: Indicators Evaluation
Chapter 10 • Decision Support Techniques
10.1 Opportunities Incubator™
10.2 Bowtie Analysis
10.3 Impact Matrix
10.4 SWOT Analysis
10.5 Critical Path Analysis
10.6 Decision Trees
10.7 Decision Matrix
10.8 Force Field Analysis
10.9 Pros-Cons-Faults-and-Fixes
10.10 Complexity Manager
Chapter 11 • The Future of Structured Analytic Techniques
11.1 Limits of Empirical Analysis
11.2 Purpose of Structured Techniques
11.3 Projecting the Trajectory of Structured Techniques
11.3.1 Structuring the Data
11.3.2 Identifying Key Drivers
11.4 Role of Structured Techniques in 2030
FIGURES

1.6 Six Families of Structured Analytic Techniques 11
2.1 System 1 and System 2 Thinking 19
2.3 Glossary of Cognitive Biases, Misapplied Heuristics, and Intuitive Traps 24
2.4 Matching Cognitive Limitations to the Six Families of Structured Techniques 27
3.3a Selecting the Right Structured Analytic Technique 39
3.3b When to Use Structured Analytic Techniques 40
3.6 The Five Habits of the Master Thinker 44
4.1a Traditional Analytic Team 51
4.1b Special Project Team 51
4.2 Functions of a Collaborative Website 54
4.6 Advocacy versus Inquiry in Small-Group Processes 59
4.7 Effective Small-Group Roles and Interactions 62
5.2a Paired Comparison Matrix 73
5.2b Weighted Ranking Matrix 75
5.3 Rethinking the Concept of National Security: A New Ecology 79
5.4 Commodity Flow Chart of Laundering Drug Money 82
5.5 Gantt Chart of Terrorist Attack Planning 85
6.1 Engaging All Participants Using the Index Card Technique 93
6.2 Cluster Brainstorming in Action 98
6.4 The Circleboarding™ Technique 103
6.5 Starbursting Diagram of a Lethal Biological Event at a Subway Station 105
6.6a Mind Map of Mind Mapping 107
6.6b Concept Map of Concept Mapping 108
6.7a Venn Diagram of Components of Critical Thinking 112
6.7b Venn Diagram of Invalid and Valid Arguments 113
6.7c Venn Diagram of Zambrian Corporations 115
6.7d Zambrian Investments in Global Port Infrastructure Projects 117
6.8a Social Network Analysis: The September 11 Hijackers 120
6.8b Social Network Analysis: September 11 Hijacker Key Nodes 123
6.8c Social Network Analysis 125
7.1 Key Assumptions Check: The Case of Wen Ho Lee 137
7.2 Timeline Estimate of Missile Launch Date 141
7.3 Cross-Impact Matrix 144
7.4.1 Simple Hypotheses 149
7.4.2 Quadrant Hypothesis Generation: Four Hypotheses on the Future of Iraq 151
7.4.3 Multiple Hypotheses Generator®: Generating Permutations 153
7.6a Creating an ACH Matrix 163
7.6b Evaluating Levels of Disagreement in ACH 163
7.9 Argument Mapping: Does North Korea Have Nuclear Weapons? 178
8.0 Mount Brain: Creating Mental Ruts 186
8.1.1a An Example of Outside-In Thinking 192
8.1.1b Inside-Out Analysis versus Outside-In Thinking 193
8.1.2 Two Types of Structured Analogies 195
8.1.3 Using Red Hat Analysis to Catch Bank Robbers 202
8.2.1.1a Classic Quadrant Crunching™: Creating a Set of Stories 207
8.2.1.1b Terrorist Attacks on Water Systems: Reversing Assumptions 208
8.2.1.1c Terrorist Attacks on Water Systems: Sample Matrices 208
8.2.1.1d Selecting Attack Plans 209
8.2.2 Premortem Analysis: Some Initial Questions 216
8.2.4 What If? Scenario: India Makes Surprising Gains from the Global Financial Crisis 223
8.2.5 High Impact/Low Probability Scenario: Conflict in the Arctic 228
8.2.6 Delphi Technique 233
9.0 Taxonomy of Foresight Techniques 252
9.4 Simple Scenarios 266
9.5 Cone of Plausibility 269
9.6 Alternative Futures Analysis: Cuba 271
9.7a Multiple Scenarios Generation: Future of the Iraq Insurgency 274
9.7b Future of the Iraq Insurgency: Using Spectrums to Define Potential Outcomes 275
9.7c Selecting Attention-Deserving and Nightmare Scenarios 276
9.8 Morphological Analysis: Terrorist Attack Options 279
9.9 Three Stages of Counterfactual Reasoning 281
9.11 Descriptive Indicators of a Clandestine Drug Laboratory 291
9.11.1 Using Indicators to Track Emerging Scenarios in Zambria 294
9.11.3a Indicators Evaluation 296
9.11.3b The INUS Condition 299
9.11.3c Zambria Political Instability Indicators 301
10.1 Opportunities Incubator™ 313
10.2 Bowtie Analysis 316
10.3 Impact Matrix: Identifying Key Actors, Interests, and Impact 320
10.4 SWOT Analysis 322
10.5 Political Instability Critical Path Analysis 325
10.6 Counterterrorism Attack Decision Tree 328
10.7 Decision Matrix 332
10.8 Force Field Analysis: Removing Abandoned Cars from City Streets 334
10.9 Pros-Cons-Faults-and-Fixes Analysis 337
10.10 Variables Affecting the Future Use of Structured Analysis 344
11.3.1 Variables Affecting the Future Use of Structured Analysis 355
FOREWORD
John McLaughlin
Distinguished Practitioner-in-Residence, Paul H. Nitze School of
Advanced International Studies, Johns Hopkins University

Former Deputy Director and Acting Director of Central Intelligence,


Central Intelligence Agency

As intensively as America’s Intelligence Community has been studied and critiqued, little attention has typically been paid to intelligence analysis. Most assessments focus on such issues as overseas clandestine operations and covert action, perhaps because they accord more readily with popular images of the intelligence world.

And yet, analysis has probably never been a more important part of
the profession—or more needed by policymakers. In contrast to the
bipolar dynamics of the Cold War, this new world is strewn with
failing states, proliferation dangers, regional crises, rising powers,
and dangerous non-state actors—all at play against a backdrop of
exponential change in fields as diverse as population and
technology.

To be sure, there are still precious secrets that intelligence collection must uncover—things that are knowable and discoverable. But this
world is equally rich in mysteries having to do more with the future
direction of events and the intentions of key actors. Such things are
rarely illuminated by a single piece of secret intelligence data; they
are necessarily subjects for analysis.

Analysts charged with interpreting this world would be wise to absorb the thinking in this book by Randolph Pherson and Richards J.
Heuer Jr. and in Heuer’s earlier work, Psychology of Intelligence
Analysis. The reasons are apparent if one considers the ways in
which intelligence analysis differs from similar fields of intellectual
endeavor.

Intelligence analysts must traverse a minefield of potential errors:

First, they typically must begin addressing their subjects where others have left off; in most cases the questions they get are
about what happens next, not about what is known.

Second, they cannot be deterred by lack of evidence. As Heuer pointed out in his earlier work, the essence of the analysts’
challenge is having to deal with ambiguous situations in which
information is never complete and arrives only incrementally—
but with constant pressure to arrive at conclusions.

Third, analysts must frequently deal with an adversary that actively seeks to deny them the information they need and is
often working hard to deceive them.

Finally, for all of these reasons, analysts live with a high degree
of risk—essentially the risk of being wrong and thereby
contributing to ill-informed policy decisions.

The risks inherent in intelligence analysis can never be eliminated, but one way to minimize them is through more structured and
disciplined thinking about thinking. On that score, I tell my students
at the Johns Hopkins School of Advanced International Studies that
the Heuer book is probably the most important reading I give them,
whether they are heading into the government or the private sector.
Intelligence analysts should reread it frequently. In addition,
Randolph Pherson’s work over the past decade and a half to
develop and refine a suite of Structured Analytic Techniques offers
invaluable assistance by providing analysts with specific techniques
they can use to combat mindsets, Groupthink, and all the other
potential pitfalls of dealing with ambiguous data in circumstances
that require clear and consequential conclusions.
The book you now hold augments Heuer’s pioneering work by
offering a clear and more comprehensive menu of more than sixty
techniques to build on the strategies he earlier developed for
combating perceptual errors. The techniques range from fairly simple
exercises that a busy analyst can use while working alone—the Key
Assumptions Check, Indicators Validation, or What If? Analysis—to
more complex techniques that work best in a group setting—Cluster
Brainstorming, Analysis of Competing Hypotheses, or Premortem
Analysis.

The key point is that all analysts should do something to test the
conclusions they advance. To be sure, expert judgment and intuition
have their place—and are often the foundational elements of sound
analysis—but analysts are likely to minimize error to the degree they
can make their underlying logic explicit in the ways these techniques
demand.

Just as intelligence analysis has seldom been more important, the stakes in the policy process it informs have rarely been higher.
Intelligence analysts these days therefore have a special calling, and
they owe it to themselves and to those they serve to do everything
possible to challenge their own thinking and to rigorously test their
conclusions. The strategies offered by Randolph Pherson and
Richards J. Heuer Jr. in this book provide the means to do precisely
that.
PREFACE
ORIGIN AND PURPOSE
The investigative commissions that followed the terrorist attacks of
2001 and the erroneous 2002 National Intelligence Estimate on
Iraq’s weapons of mass destruction clearly documented the need for
a new approach to how analysis is conducted in the U.S. Intelligence
Community. Attention focused initially on the need for “alternative
analysis”—techniques for questioning conventional wisdom by
identifying and analyzing alternative explanations or outcomes. This
approach was later subsumed by a broader effort to transform the
tradecraft of intelligence analysis by using what have become known
as Structured Analytic Techniques. Structured analysis involves a
step-by-step process that externalizes an individual analyst’s
thinking in a manner that makes it readily apparent to others, thereby
enabling it to be shared, built on, and critiqued by others. When
combined with the intuitive judgment of subject-matter experts, such
a structured and transparent process can significantly reduce the risk
of analytic error.

Our current high-tech, global environment increasingly requires collaboration among analysts with different areas of expertise and
different organizational perspectives. Structured Analytic Techniques
are ideal for this interaction. Each step in a technique prompts
relevant discussion, and, typically, this generates more divergent
information and more new ideas than any unstructured group
process. The step-by-step process of Structured Analytic Techniques
organizes the interaction among analysts in a small analytic group or
team in a way that helps avoid the multiple pathologies that often
degrade group or team performance.

Progress in the development and use of Structured Analytic Techniques has been steady since the publication of the first edition
of this book in 2011. By defining the domain of Structured Analytic
Techniques and providing a manual for using and testing these
techniques, the first edition laid the groundwork for continuing
improvement in how analysis is done within the U.S. Intelligence
Community and a growing number of foreign intelligence services.
Since then, the techniques have also made significant inroads into
academic curricula and the business world. New techniques that the
authors developed to fill gaps in what is currently available for
intelligence analysis and that have broad applicability elsewhere are
introduced in this book for the first time.

The second edition of the book added several techniques, including Venn Analysis, Cone of Plausibility, Decision Trees, and the Impact
Matrix and made significant revisions to Red Hat Analysis and the
process of evaluating the diagnosticity of Indicators. The second
edition also introduced the concepts of System 1 and System 2
Thinking (intuitive versus analytic approaches to thinking) and
described ways to make the use of core structured techniques a
daily habit for analysts.

In the third edition, we dropped several contrarian techniques and four critical thinking techniques that are described fully in a
companion book, Critical Thinking for Strategic Intelligence.1 That
allowed us to add several new techniques including Analysis by
Contrasting Narratives, Counterfactual Reasoning, Inconsistencies
Finder™, Opportunities Incubator™, Bowtie Analysis, Critical Path
Analysis, and two techniques for discovering key drivers. We
substantially expanded the discussion of cognitive biases,
misapplied heuristics, and intuitive traps and mapped which of the
sixty-six structured techniques described in this edition are most
effective in mitigating the impact of these cognitive limitations.

As the use of Structured Analytic Techniques becomes more widespread, we anticipate that the ways these techniques are used
will continue to change. Our goal is to keep up with these changes in
future editions, so we welcome your suggestions, at any time, for
updating this third edition or otherwise enhancing its utility. To
facilitate the use of these techniques, CQ Press/SAGE published the
second edition of Cases in Intelligence Analysis: Structured Analytic
Techniques in Action, with seventeen case studies and detailed
exercises and lesson plans for learning how to use and teach
twenty-eight of the Structured Analytic Techniques.
AUDIENCE FOR THIS BOOK
This book is for practitioners, managers, teachers, and students in
the intelligence, law enforcement, and homeland security
communities, as well as in academia, business, medicine, and
elsewhere in the private sector. Managers, policymakers, corporate
executives, strategic planners, action officers, and operators who
depend on input from analysts to help them achieve their goals will
also find it useful. Academics and consultants who specialize in
qualitative methods for dealing with unstructured data will be
interested in this pathbreaking book as well.

Many of the techniques described here relate to strategic intelligence, but almost all the techniques should be of interest to law
enforcement, counterterrorism, and competitive intelligence analysts,
as well as to business consultants, lawyers, doctors, and financial
planners with a global perspective. Many techniques developed for
these related fields have been adapted for use in intelligence
analysis, and now we are starting to see the transfer of knowledge
going in the other direction. Techniques such as Analysis of
Competing Hypotheses (ACH), Key Assumptions Check, Quadrant
Crunching™, and Indicators Validation and Evaluation developed
specifically for intelligence analysts are now being adapted for use in
other fields, such as law, medicine, and financial intelligence.
CONTENT AND DESIGN
The first four chapters describe structured analysis in general, how it
fits into the spectrum of methods used by analysts, how to select
which techniques are most suitable for your analytic project, and the
value of integrating these techniques into collaborative team
projects. The next six chapters describe when, why, and how to use
six families of structured techniques. The final chapter provides a
vision of how these techniques are likely to be used in the year 2030.

We designed the book for ease of use and quick reference. The
spiral binding allows analysts to have the book open while they
follow step-by-step instructions for each technique. In this edition, we
regrouped the techniques into six families based on the analytic
production process. Tabs separating each chapter contain a table of
contents for the selected chapter. For each family of techniques, we
provide an overarching description of that category and then a brief
summary of each technique covered in that chapter.
THE AUTHORS
Randolph H. Pherson is CEO of Globalytica, LLC; president of
Pherson Associates, LLC; and a founding director of the nonprofit
Forum Foundation for Analytic Excellence. He teaches advanced
analytic techniques and critical thinking skills to analysts in more
than two dozen countries, supporting major financial institutions,
global retailers, and security firms; facilitates Foresight workshops
for foreign governments, international foundations, and multinational
corporations; and consults with senior corporate and government
officials on how to build robust analytic organizations. Mr. Pherson
collaborated with Richards J. Heuer Jr. in developing and launching
use of ACH.

Mr. Pherson coauthored Critical Thinking for Strategic Intelligence with Katherine Hibbs Pherson, Cases in Intelligence Analysis:
Structured Analytic Techniques in Action with Sarah Miller Beebe,
the Analyst’s Guide to Indicators with John Pyrik, and guides for
analysts on writing, briefing, managing the production process, and
communicating intelligence analysis in the digital age. His most
recent book is How to Get the Right Diagnosis: 16 Tips for
Navigating the Medical System. He has published over a dozen
articles on structured techniques, collaboration, cognitive bias,
Digital Disinformation, Foresight analysis, and the Five Habits of the
Master Thinker.

Mr. Pherson completed a twenty-eight-year career in the Intelligence Community in 2000, last serving as National Intelligence Officer
(NIO) for Latin America. Previously at the Central Intelligence
Agency (CIA), he managed the production of intelligence analysis on
topics ranging from global instability to Latin America, served on the
Inspector General’s staff, and was chief of the CIA’s Strategic
Planning and Management Staff and Executive Assistant to the
Executive Director of the Agency. He is the recipient of the
Distinguished Intelligence Medal for his service as NIO and the
Distinguished Career Intelligence Medal. Mr. Pherson received his
BA from Dartmouth College and an MA in International Relations
from Yale University.

Richards J. Heuer Jr. is best known for his book Psychology of Intelligence Analysis and for developing and then guiding automation
of the ACH technique. Both are being used to teach and train
intelligence analysts throughout the intelligence community and in a
growing number of academic programs on intelligence or national
security.

After retiring from the CIA, Mr. Heuer was associated with the U.S.
Intelligence Community in various roles for more than five decades
until his death in August 2018. He wrote extensively on personnel
security, counterintelligence, deception, and intelligence analysis. Mr.
Heuer earned a BA in philosophy from Williams College and an MA
in international relations from the University of Southern California.
He also pursued graduate studies at the University of California,
Berkeley, and the University of Michigan.
ACKNOWLEDGMENTS
For this edition, the authors would like to acknowledge the thoughtful
comments and suggestions they received from managers of
intelligence analysis in the United States, United Kingdom, Canada,
Spain, and Romania. We thank Peter de Werd and Noel
Hendrickson for reviewing and offering useful critiques on the
sections describing Analysis by Contrasting Narratives and
Counterfactual Reasoning, respectively. We also deeply appreciate
the invaluable recommendations and edits provided by Rubén Arcos,
Abigail DiOrio, Alysa Gander, Cindy Jensen, Kristine Leach,
Penelope Mort Ranta, Mary O’Sullivan, Richard Pherson, Karen
Saunders, and Roy Sullivan as well as the insightful graphics design
support provided by Adriana Gonzalez.

For the second edition, the authors greatly appreciate the contributions made by Mary Boardman, Kathrin Brockmann and her
colleagues at the Stiftung Neue Verantwortung, Nick Hare and his
colleagues at the United Kingdom Cabinet Office, Mary O’Sullivan,
Katherine Hibbs Pherson, John Pyrik, Todd Sears, and Cynthia
Storer to expand and improve the chapters on analytic techniques,
as well as the graphic design and editing support provided by
Adriana Gonzalez and Richard Pherson.

Both authors recognize the large contributions many individuals made to the first edition, reviewing all or large portions of the draft
text. These include Professor J. Scott Armstrong at the Wharton
School, University of Pennsylvania; Sarah Miller Beebe, a former
CIA analyst who served on the National Security Council staff; Jack
Davis, a noted teacher and writer on intelligence analysis; and
Robert R. Hoffman, noted author of books on naturalistic decision
making.

Valuable comments, suggestions, and assistance were also received from many others during the development of the first and second
editions, including Todd Bacastow, Michael Bannister, Aleksandra
Bielska, Arne Biering, Jim Bruce, Hriar Cabayan, Ray Converse,
Steve Cook, John Donelan, Averill Farrelly, Stanley Feder, Michael
Fletcher, Roger George, Jay Hillmer, Donald Kretz, Terri Lange,
Darci Leonhart, Mark Lowenthal, Elizabeth Manak, Stephen Marrin,
William McGill, David Moore, Mary O’Sullivan, Emily Patterson,
Amanda Pherson, Katherine Hibbs Pherson, Steve Rieber, Grace
Scarborough, Alan Schwartz, Marilyn Scott, Gudmund Thompson,
Kristan Wheaton, and Adrian “Zeke” Wolfberg.

The ideas, interest, and efforts of all the above contributors to this
book are greatly appreciated, but the responsibility for any
weaknesses or errors rests solely on the shoulders of the authors.
DISCLAIMER
All statements of fact, opinion, or analysis expressed in this book are
those of the authors and do not reflect the official positions of the
CIA or any other U.S. government agency. Nothing in the contents
should be construed as asserting or implying U.S. government
authentication of information or agency endorsement of the authors’
views. This material has been reviewed by the CIA only to prevent
the disclosure of classified information.
NOTE
1. Katherine Hibbs Pherson and Randolph H. Pherson, Critical
Thinking for Strategic Intelligence, 2nd ed. (Washington, DC: CQ
Press/SAGE, 2015).
CHAPTER 1 INTRODUCTION AND
OVERVIEW

1.1 Our Vision [ 3 ]

1.2 Role of Structured Analytic Techniques [ 4 ]

1.3 History of Structured Analytic Techniques [ 5 ]

1.4 The Expanding Use of Structured Analytic Techniques [ 7 ]

1.5 Selection of Techniques for This Book [ 8 ]

1.6 Quick Overview of Chapters [ 10 ]

Analysis as practiced in the intelligence, law enforcement, and business communities is steadily evolving from a mental activity
done predominantly by a sole analyst to a collaborative team or
group activity.1 The driving forces behind this transition include the
following:

The growing complexity of international issues and the consequent requirement for multidisciplinary input to most analytic products.2

The need to share more information quickly across organizational boundaries.

The dispersion of expertise, especially as the boundaries between analysts, collectors, operators, and decision makers become blurred.
The need to identify and evaluate the validity of alternative
mental models.

The need to counter the use of social media to distribute Digital Disinformation or fake news.

This transition is being enabled by advances in technology, such as new collaborative networks, artificial intelligence, and blockchain as
well as the mushrooming growth of social networking practices
among the upcoming generation of analysts. The use of Structured
Analytic Techniques facilitates the transition by guiding the exchange
of information and reasoning among analysts in ways that identify
and eliminate a wide range of cognitive biases and other shortfalls of
intuitive judgment.
1.1 OUR VISION
This book defines the role and scope of Structured Analytic
Techniques as a distinct analytic approach that provides a step-by-
step process for dealing with the kinds of incomplete, ambiguous,
and sometimes deceptive information with which analysts must work.
Structured analysis is a mechanism by which internal thought
processes are externalized in a systematic and transparent manner
so that they can be shared, built on, and easily critiqued by others.
Each technique leaves a trail that other analysts and managers can
follow to see the basis for an analytic judgment. These techniques
are used by individual analysts but are perhaps best utilized in a
collaborative team effort in which each step of the analytic process
exposes participants to divergent or conflicting perspectives. This
transparency helps ensure that differences of opinion among
analysts are heard and seriously considered early in the analytic
process. Analysts tell us that this is one of the most valuable benefits
of any structured technique.

Structured analysis helps analysts ensure that their analytic framework—the foundation upon which they form their analytic
judgments—is as solid as possible. By helping break down a specific
analytic problem into its component parts and specifying a step-by-
step process for handling these parts, Structured Analytic
Techniques help organize the amorphous mass of data with which
most analysts must contend. Such techniques make our thinking
more open and available for review and critique by ourselves as well
as by others. This transparency enables the effective communication
at the working level that is essential for intraoffice and interagency
collaboration.

We call the various approaches described in this book “techniques” because they usually guide the analyst in thinking about a problem
rather than provide the analyst with a definitive answer, as one might
expect from a predictive tool. Structured techniques help analysts
think more rigorously about a problem; they do not solve it.
Structured Analytic Techniques, however, do form a methodology—a
set of principles and procedures for qualitative analysis of the kinds
of uncertainties that many analysts must deal with daily.
1.2 ROLE OF STRUCTURED ANALYTIC
TECHNIQUES
Structured Analytic Techniques are debiasing techniques. They do
not replace intuitive judgment. Their role is to question intuitive
judgments by identifying a wider range of options for analysts to
consider. For example, a Key Assumptions Check requires the
identification and consideration of additional assumptions. Analysis
of Competing Hypotheses requires identification of alternative
hypotheses, a focus on refuting rather than confirming hypotheses,
and a more systematic analysis of the evidence. All structured
techniques described in this book have a “Value Added” section that
describes how this technique contributes to better analysis.
Structured Analytic Techniques help mitigate cognitive biases,
misapplied heuristics, and intuitive traps that analysts often fall victim
to when relying only on expert-aided judgment. For many
techniques, the benefit is self-evident. None purports to always give
the correct answer; instead, they identify alternatives that merit
serious consideration.
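
The refutation logic that distinguishes a technique like ACH from simple confirmation can be illustrated with a short, purely hypothetical sketch in Python (the hypotheses, items of evidence, and ratings below are invented for illustration and are not drawn from the book): each item of evidence is rated against every hypothesis, and the hypothesis with the fewest inconsistencies—rather than the most confirmations—is treated as the strongest.

    # Minimal illustrative sketch of ACH-style scoring (hypothetical data).
    hypotheses = ["H1: Attack is being planned", "H2: Activity is routine"]

    # Each item of evidence is rated against every hypothesis:
    # "C" = consistent, "I" = inconsistent, "N" = neutral or ambiguous.
    ratings = {
        "Unusual procurement of materials": {"H1: Attack is being planned": "C",
                                             "H2: Activity is routine": "I"},
        "No change in communications":      {"H1: Attack is being planned": "I",
                                             "H2: Activity is routine": "C"},
        "Travel to known training site":    {"H1: Attack is being planned": "C",
                                             "H2: Activity is routine": "N"},
    }

    # ACH emphasizes refutation: count inconsistencies for each hypothesis
    # and give the most weight to the hypothesis with the fewest of them.
    inconsistencies = {h: 0 for h in hypotheses}
    for evidence, scores in ratings.items():
        for h, score in scores.items():
            if score == "I":
                inconsistencies[h] += 1

    for h, count in sorted(inconsistencies.items(), key=lambda kv: kv[1]):
        print(f"{count} inconsistencies -> {h}")

In this toy example each hypothesis carries one inconsistency, which signals that the available evidence does not yet discriminate between them and that more diagnostic evidence is needed—the kind of insight a structured matrix is designed to surface.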

No formula exists, of course, for always getting it right, but the use of
structured techniques can reduce the frequency and severity of error.
These techniques can help analysts deal with proven cognitive
limitations, sidestep some of the known analytic biases, and explicitly
confront the problems associated with unquestioned mental models
or mindsets. They help analysts think more rigorously about an
analytic problem and ensure that preconceptions and assumptions
are explicitly examined and, when possible, tested.3

The most common criticism of Structured Analytic Techniques is “I don’t have enough time to use them.” The experience of many
analysts shows that this criticism is not justified. All the techniques
will save an analyst time, on balance, when considering the entire
arc of the analytic production schedule. Anything new does take time
to learn; however, once learned, the incorporation of Structured
Analytic Techniques into the analytic process saves analysts time
over time. They enable individual analysts to work more efficiently,
especially at the start of a project, when the analyst may otherwise
flounder trying to figure out how to proceed. Structured techniques
also aid group processes by improving communication as well as
enhancing the collection and interpretation of evidence. And, in the
end, use of a structured technique results in a product in which the
reasoning behind the conclusions is more transparent and more
readily accepted than one derived from other methods. Transparent
reasoning expedites review by supervisors and editors while also
compressing the coordination process.4

Analytic methods are important, but method alone is far from sufficient to ensure analytic accuracy or value. Method must be
combined with substantive expertise and an inquiring and
imaginative mind. And these, in turn, must be supported and
motivated by the organizational environment in which the analysis is
done.
1.3 HISTORY OF STRUCTURED
ANALYTIC TECHNIQUES
The term “structured analytic techniques” was first used in the U.S.
Intelligence Community in 2005. The concept originated in the
1980s, when the eminent teacher of intelligence analysis, Jack
Davis, first began teaching and writing about what he called
“alternative analysis.”5 The term referred to the evaluation of
alternative explanations or hypotheses, better understanding of other
cultures, and analysis of events from the other country’s point of
view rather than by Mirror Imaging. In the mid-1980s, some initial
efforts were made to initiate the use of more alternative analytic
techniques in the Central Intelligence Agency’s Directorate of
Intelligence. Under the direction of Robert Gates, then CIA Deputy
Director for Intelligence, analysts employed several new techniques
to generate scenarios of dramatic political change, track political
instability, and anticipate military coups. Douglas MacEachin, Deputy
Director for Intelligence from 1993 to 1996, supported new standards
for systematic and transparent analysis that helped pave the path to
further change.6

The term “alternative analysis” became widely used in the late 1990s
after (1) Adm. David Jeremiah’s postmortem analysis of the U.S.
Intelligence Community’s failure to foresee India’s 1998 nuclear test,
(2) a U.S. congressional commission’s review of the Intelligence
Community’s global missile forecast in 1998, and (3) a report from
the CIA Inspector General that focused higher-level attention on the
state of the Directorate of Intelligence’s analytic tradecraft. The
Jeremiah report specifically encouraged increased use of what it
called “red team analysis.”

The beginning of wisdom is the definition of terms.
—Socrates, Greek philosopher
When the Sherman Kent School for Intelligence Analysis at the CIA
was created in 2000 to improve the effectiveness of intelligence
analysis, John McLaughlin, then Deputy Director for Intelligence,
tasked the school to consolidate techniques for doing what was then
referred to as “alternative analysis.” In response to McLaughlin’s
tasking, the Kent School developed a compilation of techniques that
the CIA’s Directorate of Intelligence started teaching in a course that
later evolved into the Advanced Analytic Tools and Techniques
Workshop. The Kent School subsequently opened the class to
analysts from the Defense Intelligence Agency and other elements of
the U.S. Intelligence Community.

The various investigative commissions that followed the surprise terrorist attacks of September 11, 2001, as well as the erroneous
analysis of Iraq’s possession of weapons of mass destruction,
cranked up pressure for more rigorous approaches to intelligence
analysis. For example, the Intelligence Reform Act of 2004 assigned
to the Director of National Intelligence (DNI) “responsibility for
ensuring that, as appropriate, elements of the intelligence community
conduct alternative analysis (commonly referred to as ‘red-team’
analysis) of the information and conclusions in intelligence analysis.”

Over time, analysts who misunderstood or resisted the call for more rigor interpreted alternative analysis as simply meaning an alternative to the normal way that analysis is done. For them, the
term implied that alternative procedures were needed only in
exceptional circumstances when an analysis is of critical importance.
Kent School instructors countered that the techniques were not
alternatives to traditional analysis but were central to good analysis
and should become routine—instilling rigor and structure into the
analysts’ everyday work process.

In 2004, when the Kent School decided to update its training materials based on lessons learned during the previous several
Roger Z. George were the primary drafters. As George observed at
the time, “There was a sense that the name ‘alternative analysis’
was too limiting and not descriptive enough. At least a dozen
different analytic techniques were all rolled into one term, so we
decided to find a name that was more encompassing and suited this
broad array of approaches to analysis.”8 Randy Pherson credits his
wife, Kathy, with creating the name “Structured Analytic Techniques”
during a dinner table conversation. George organized the techniques
into three categories: diagnostic techniques, contrarian techniques,
and imagination techniques. The term “Structured Analytic
Techniques” became official in June 2005, when the Kent School
formally approved the updated training materials.

The use of the term “alternative analysis,” however, persists in official directives. The DNI is tasked under the Intelligence Reform
Act of 2004 with ensuring that elements of the U.S. Intelligence
Community conduct alternative analysis, which it now describes as
the inclusion of alternative outcomes and hypotheses in analytic
products. We view “alternative analysis” as covering only a subset of
what now is regarded as Structured Analytic Techniques and
recommend avoiding use of the term “alternative analysis” to
forestall any confusion. We strongly endorse, however, the “analysis
of alternatives”—be they hypotheses or scenarios—as an essential
component of good analysis.
1.4 THE EXPANDING USE OF
STRUCTURED ANALYTIC TECHNIQUES
Intelligence community analysts in the United States and in foreign
intelligence services have used structured techniques for over a
decade, but the general use of these techniques by the typical
analyst is a relatively new phenomenon. Most analysts using the
techniques today were not exposed to them when they were college
students. The driving forces behind the development and use of
these techniques in the intelligence profession, and increasingly in
the private sector, are (1) an increased appreciation of cognitive
limitations that make intelligence analysis so difficult, (2) prominent
intelligence failures that have prompted reexamination of how
intelligence analysis is generated, (3) increased policy support and
technical support for intraoffice and interagency collaboration, and
(4) a desire by policymakers to make analytic conclusions more
transparent.

In the early 2000s, the Directorate of Intelligence’s senior management, which strongly supported using Structured Analytic
Techniques, created Tradecraft Cells in its analytic units to mentor
analysts in how to use structured techniques and to facilitate the
integration of the techniques into ongoing projects. The Federal
Bureau of Investigation (FBI), the Defense Intelligence Agency (DIA),
and the Department of Homeland Security (DHS) were the next
agencies to incorporate structured techniques formally into their
training programs followed by the National Security Agency (NSA),
the National Geospatial-Intelligence Agency (NGA), and the Office of
Naval Intelligence (ONI). Structured techniques are now used
throughout the entire U.S. Intelligence Community.

The U.S. Intelligence Community’s adoption of structured techniques spurred many academic institutions to incorporate training in the
techniques into their homeland security and intelligence studies
programs, which were quickly propagating across the United States.
Foreign universities also incorporated instruction on structured
techniques into their undergraduate and master’s degree programs,
including institutions in Spain, Canada, the United Kingdom,
Denmark, Germany, and Australia, followed by other universities on
six continents.

Publication of the first edition of this book in 2011, followed by an expanded second edition in 2015, and discussions of their utility in
various annual international conferences helped propagate the use
of Structured Analytic Techniques initially in the intelligence services
of the Five Eyes countries (United States, Canada, United Kingdom,
Australia, and New Zealand). Use of this book and the techniques
has expanded over the years to almost all European intelligence
services and other services around the world. The book is now being
used by analysts in intelligence services, businesses, universities,
and nongovernmental organizations in at least two dozen countries.9

Structured Analytic Techniques for Intelligence Analysis has been translated into Spanish, Chinese, and Korean. The publisher has
received inquiries about translations into several other languages. A
companion volume, Critical Thinking for Strategic Intelligence, has
also been translated into Chinese and Polish.
1.5 SELECTION OF TECHNIQUES FOR
THIS BOOK
The techniques described in this book are limited to those that meet
our definition of Structured Analytic Techniques, as discussed earlier
in this chapter. Although the focus is on techniques for strategic
intelligence analysis, many of the techniques described in this book
have wide applicability to tactical military analysis, law enforcement
intelligence analysis, homeland security, business consulting, the
medical profession, financial planning, cyber analysis, and complex
decision making in any field. The book focuses on techniques that
can be used by a single analyst working alone or, preferably, with a
small group or team of analysts. We excluded techniques that
require sophisticated computing or complex projects of the type
usually sent to an outside expert or company. Several promising
techniques recommended to us were not included for this reason.

From the several hundred techniques that might have been included
in this book, we selected a core group of sixty-six techniques that
appear to be most useful for the intelligence profession as well as
analytic pursuits in government, academia, and the private sector.10
We omitted techniques that tend to be used exclusively for a single
type of analysis in fields such as law enforcement or business
consulting.

This list is not static, and we expect it to increase and decrease as new techniques are identified and others are tested and found
wanting. In the second edition, we dropped two techniques and
added five new ones. In this edition, Devil’s Advocacy, Red Team
Analysis, Role Playing, and Virtual Brainstorming were dropped for
reasons explained in later chapters. A suite of techniques that relate
more to analytic production—Getting Started Checklist, Client
Checklist, AIMS (Audience, Issue, Message, and Storyline), and
Issue Redefinition—were also dropped because they are described
fully in Pherson and Pherson’s Critical Thinking for Strategic
Intelligence. Nine new techniques were added: Inconsistencies
Finder™, Key Uncertainties Finder™, Key Drivers Generation™,
Reversing Assumptions, Analysis by Contrasting Narratives,
Counterfactual Reasoning, Opportunities Incubator™, Bowtie
Analysis, and Critical Path Analysis.

Some training programs may have a need to boil down the list of
techniques to the essentials required for a given type of analysis. No
one list will meet everyone’s needs. However, we hope that having
one reasonably comprehensive list and lexicon of common
terminology available to the growing community of analysts now
employing Structured Analytic Techniques will help to facilitate
discussion and use of these techniques in projects involving
collaboration across organizational boundaries.

This collection of techniques builds on work previously done in the U.S. Intelligence Community. We also have included several
techniques developed and used by our British, Canadian, Spanish,
Dutch, and Australian colleagues. To select the most appropriate
techniques for the initial edition of this book, Richards J. Heuer Jr.
reviewed a large number of books and websites dealing with
intelligence analysis methodology, qualitative methods in general,
decision making, problem solving, competitive intelligence, law
enforcement intelligence, strategic foresight or futures research, and
social science research in general. In preparation for writing the third
edition, Pherson interviewed managers of intelligence programs in
over a dozen agencies and foreign intelligence services to identify
which techniques could be dropped and which should be added.
Given the immensity of this literature, there can be no guarantee that
nothing was missed.

Almost half of the techniques described in this edition have become “standard fare” in training materials used by the CIA, DIA, Office of
Intelligence and Analysis in the DHS, or other intelligence agencies.
Over half were newly created or adapted to the needs of intelligence
analysts by Richards J. Heuer Jr. or Randolph H. Pherson to fill
perceived gaps. Several of the techniques that were originally
created by Randolph H. Pherson, while teaching structured
techniques to intelligence analysts, students, and private-sector
clients, have since been revised, reflecting lessons learned when
applying the techniques to current issues.

Specific guidance is provided on how to use each technique, but this guidance is not written in stone. Many of the techniques can be
implemented in more than one way, and some techniques have
several different names. An experienced government analyst told
one of the authors that he seldom uses a technique the same way
twice. He adapts techniques to the requirements of the specific
problem, and his ability to do that effectively is a measure of his
experience.

In the popular literature, the names of some techniques are normally capitalized, but many are not. We have chosen to capitalize the
names of all techniques for consistency’s sake and to make them
stand out.
1.6 QUICK OVERVIEW OF CHAPTERS
Chapter 2 (“The Role of Structured Techniques”) defines the domain of Structured Analytic Techniques
by describing how it differs from three other major categories of intelligence analysis methodology. It
presents a taxonomy with six distinct categories or families of Structured Analytic Techniques. The
families are based on how each set of techniques contributes to a different phase of analytic tasks in the
intelligence production process. The chapter discusses how structured techniques can help analysts
avoid, overcome, or at least mitigate the cognitive biases, misapplied heuristics, and intuitive traps they
fall prey to every day. It concludes with a discussion of how perpetrators of Digital Disinformation
leverage these cognitive limitations to promote their agendas and how structured techniques can help
counter this phenomenon.

Chapter 3 (“Choosing the Right Technique”) describes the criteria we used for selecting techniques,
discusses which techniques might be learned first and used the most, and provides a guide for matching
techniques to analysts’ needs. The guide asks twelve questions about what the analyst wants or needs
to do. An affirmative answer to any question directs the analyst to the appropriate chapter(s), where the
analyst can quickly zero in on the most applicable technique(s). It concludes with a description of the
value of instilling five core habits of thinking into the analytic process.

Chapter 4 (“Practitioner’s Guide to Collaboration”) builds on our earlier observation that analysis done
across the global intelligence community is in a transitional stage from a mental activity performed
predominantly by a sole analyst to a collaborative team or group activity. The chapter discusses, among
other things, how to expand the analytic process to include rapidly growing social networks of area and
functional specialists who often work from several different geographic locations. It proposes that most
analysis be done in two phases: a divergent analysis or creative phase with broad participation by a
social network, followed by a convergent analysis phase and final report done by a small analytic team.

Chapters 5 through 10 each describe a different family of structured techniques, which taken together
cover sixty-six structured techniques (see Figure 1.6).11 Each of these chapters starts with a description
of the specific family and how techniques in that family help to mitigate known cognitive biases,
misapplied heuristics, or intuitive traps. A brief overview of each technique is followed by a detailed
discussion of each, including when to use it, the value added, description of the method, potential pitfalls
when noteworthy, relationship to other techniques, and origins of the technique.
Figure 1.6 Six Families of Structured Analytic Techniques

Readers who go through these six chapters of techniques from start to finish may perceive some
overlap. This repetition is for the convenience of those who use this book as a reference guide and seek
out individual sections or chapters. The reader seeking only an overview of the techniques can save
time by reading the introduction to each family of techniques, the brief overview of each technique, and
the full descriptions of only those specific techniques that pique the reader’s interest.

Highlights of the six chapters of techniques are as follows:

Chapter 5: Getting Organized. The eight techniques cover the basics, such as checklists, sorting,
ranking, and organizing your data.

Chapter 6: Exploration Techniques. The nine techniques include several types of brainstorming,
including Circleboarding™, Starbursting, and Cluster Brainstorming, which was called Structured
Brainstorming in previous editions. The Nominal Group Technique is a form of brainstorming that is
appropriate when there is concern that a brainstorming session might be dominated by a
particularly aggressive analyst or constrained by the presence of a senior officer. It also introduces
several mapping techniques, Venn Analysis, and Network Analysis.

Chapter 7: Diagnostic Techniques. The eleven techniques covered in this chapter include the
widely used Key Assumptions Check and Chronologies and Timelines. The Cross-Impact Matrix
supports group learning about relationships in a complex system. Several techniques fall in the
domain of hypothesis generation and testing (Multiple Hypothesis Generation, Diagnostic
Reasoning, Analysis of Competing Hypotheses [ACH], Argument Mapping, and Deception
Detection), including a new technique called the Inconsistencies Finder™, which is a simplified
version of ACH.
Chapter 8: Reframing Techniques. The sixteen techniques in this family help analysts break away
from established mental models by using Outside-In Thinking, Structured Analogies, Red Hat
Analysis, Quadrant Crunching™, and the Delphi Method to reframe an issue or imagine a situation
from a different perspective. What If? Analysis and High Impact/Low Probability Analysis are tactful
ways to suggest that the conventional wisdom could be wrong. Two important techniques
developed by the authors, Premortem Analysis and Structured Self-Critique, give analytic teams
viable ways to imagine how their own analysis might be wrong. The chapter concludes with a
description of a subset of six techniques grouped under the umbrella of Adversarial Collaboration
and an original approach to Structured Debate.

Chapter 9: Foresight Techniques. This family of twelve techniques includes four new techniques
for identifying key drivers, analyzing contrasting narratives, and engaging in Counterfactual
Reasoning. The chapter also describes five methods for developing scenarios and expands the
discussion of Indicators Validation and Evaluation by presenting several new techniques for
generating indicators.

Chapter 10: Decision Support Techniques. The ten techniques in this family include three new
Decision Support Techniques: Opportunities Incubator™, Bowtie Analysis, and Critical Path
Analysis. The chapter also describes six classic Decision Support Techniques, including Decision
Matrix, Force Field Analysis, and Pros-Cons-Faults-and-Fixes, all of which help managers,
commanders, planners, and policymakers make choices or trade-offs between competing goals,
values, or preferences. The chapter concludes with a description of the Complexity Manager, which
was developed by Richards J. Heuer Jr.

How can we know that the use of Structured Analytic Techniques does, in fact, improve the overall
quality of the analytic product? Chapter 11 (“The Future of Structured Analytic Techniques”) begins with
a discussion of two approaches to answer this question: logical reasoning and empirical research. The
chapter then employs one of the techniques in this book, Complexity Manager, to assess the prospects
for continued growth in the use of Structured Analytic Techniques. It asks the reader to imagine it is
2030 and answer the following questions based on an analysis of ten variables that could support or
hinder the growth of Structured Analytic Techniques during this time period: Will structured techniques
gain traction and be used with greater frequency by intelligence agencies, law enforcement, and the
business sector? What forces are spurring the increased use of structured analysis? What obstacles are
hindering its expansion?
NOTES
1. Vision 2015: A Globally Networked and Integrated Intelligence
Enterprise (Washington, DC: Director of National Intelligence, 2008).

2. National Intelligence Council, Global Trends 2025: A Transformed World (Washington, DC: U.S. Government Printing Office, November 2008).

3. Judgments in this and the next sections are based on our personal experience and anecdotal evidence gained in work or discussion with other experienced analysts. As we will discuss in chapter 11, there is a need for systematic research on these and other benefits believed to be gained by using Structured Analytic Techniques.

4. Again, these statements are our professional judgments based on discussions with working analysts using Structured Analytic Techniques. As discussed in chapter 11, we strongly recommend research by both academia and the intelligence community on the benefits and costs associated with all aspects of the use of Structured Analytic Techniques.

5. Information on the history of the terms “structured analytic techniques” and “alternative analysis” is based on information provided by Jack Davis, Randolph H. Pherson, and Roger Z. George, all of whom were key players in developing and teaching these techniques at the CIA.

6. See Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson Associates, LLC, Reston, VA, 2007), xvii–xix.

7. A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis, 2nd ed. (Washington, DC: Central Intelligence Agency, 2009), https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf

8. Personal communication to Richards Heuer from Roger Z. George, October 9, 2007.

9. This number was derived by examining the addresses of individuals or institutions purchasing the book from shop.globalytica.com and adding countries where workshops using the techniques have been taught.

10. Although the table of contents lists seventy techniques, three of them (Key Assumptions Check, Analysis of Competing Hypotheses, and Argument Mapping) are listed twice because they can be used to perform different functions and a fourth (Indicators Generation) is a compendium of techniques described elsewhere in the book.

11. Previous editions of this book listed eight categories of techniques based on the analytic function being performed: Decomposition and Visualization, Idea Generation, Scenarios and Indicators, Hypothesis Generation and Testing, Assessment of Cause and Effect, Challenge Analysis, Conflict Management, and Decision Support. In this edition, the techniques were re-sorted into six families that mirror the analytic production process to make it easier to locate a technique in the book.
CHAPTER 2 THE ROLE OF
STRUCTURED TECHNIQUES

2.1 Two Types of Thinking

2.2 Developing a Taxonomy of Structured Analytic Techniques

2.3 Dealing with Cognitive Limitations

2.4 Matching Cognitive Limitations to Structured Techniques

2.5 Combating Digital Disinformation

2.1 TWO TYPES OF THINKING
In the last thirty years, important gains have been made in psychological research on human judgment.
Dual process theory, positing two systems of decision making called System 1 and System 2, has
emerged as the predominant approach.1 The basic distinction between System 1 and System 2 is
intuitive versus analytical thinking.

System 1 Thinking is intuitive, fast, efficient, and often unconscious. It draws naturally on available
knowledge, experience, and often a long-established mental model of how people or things work in
a specific environment. System 1 Thinking requires little effort; it allows people to solve problems
and make judgments quickly and efficiently. Although it is often accurate, intuitive thinking is a
common source of cognitive biases and other intuitive mistakes that lead to faulty analysis. Three
types of cognitive limitations—cognitive bias, misapplied heuristics, and intuitive traps—are
discussed later in this chapter.

System 2 Thinking is analytic. It is slow, methodical, and conscious, the result of deliberate
reasoning. It includes all types of analysis, such as critical thinking and Structured Analytic
Techniques, as well as the whole range of empirical and quantitative methods.

The description of each Structured Analytic Technique in this book includes a discussion of which
cognitive biases, misapplied heuristics, and intuitive traps are most effectively avoided, overcome, or at
least mitigated by using that technique. The introduction to each family of techniques also identifies how
the techniques discussed in that chapter help counter one or more types of cognitive bias and other
common intuitive mistakes associated with System 1 Thinking.

Intelligence analysts have largely relied on intuitive judgment—a System 1 process—in constructing
their analyses. When done well, intuitive judgment—sometimes referred to as traditional analysis—
combines subject-matter expertise with basic thinking skills. Evidentiary reasoning, historical method,
case study method, and reasoning by analogy are examples of this category of analysis.2 The key
characteristic that distinguishes intuitive judgment from structured analysis is that intuitive judgment is
usually an individual effort in which the reasoning remains largely in the mind of the individual analyst
until it is written down in a draft report. Training in this type of analysis is generally acquired through
postgraduate education, especially in the social sciences and liberal arts, and often along with some
country or language expertise.

This chapter presents a taxonomy that defines the domain of System 2 Thinking. A taxonomy is a
classification of all elements of some body of information or knowledge. It defines the domain by
identifying, naming, and categorizing all the various objects in a specialized discipline. The objects are
organized into related groups based on some factor common to each object in the group.

The word “taxonomy” comes from the Greek taxis, meaning arrangement, division, or order, and nomos, meaning law. A classic example of a taxonomy is Carolus Linnaeus’s hierarchical classification of all living organisms by kingdom, phylum, class, order, family, genus, and species, which is widely used in the biological sciences. The periodic table of elements used by chemists is another example. A library catalog is also considered a taxonomy, as it starts with a list of related categories that are then progressively broken down into finer categories.

Development of a taxonomy is an important step in organizing knowledge and furthering the development of any discipline. Rob Johnston developed a taxonomy of variables that influenced intelligence analysis but did not go into depth on analytic techniques or methods. He noted that “a taxonomy differentiates domains by specifying the scope of inquiry, codifying naming conventions, identifying areas of interest, helping to set research priorities, and often leading to new theories. Taxonomies are signposts, indicating what is known and what has yet to be discovered.”3

Robert Clark has described a taxonomy of intelligence sources.4 He also categorized some analytic
methods commonly used in intelligence analysis, but not to the extent of creating a taxonomy. To the
best of our knowledge, no one has developed a taxonomy of analytic techniques for intelligence
analysis, although taxonomies have been developed to classify research methods used in forecasting,5
operations research,6 information systems,7 visualization tools,8 electronic commerce,9 knowledge
elicitation,10 and cognitive task analysis.11

After examining taxonomies of methods used in other fields, we found that there is no single right way to
organize a taxonomy—only different ways that are useful in achieving a specified goal. In this case, our
goal is to gain a better understanding of the domain of Structured Analytic Techniques, investigate how
these techniques contribute to providing a better analytic product, and consider how they relate to the
needs of analysts. The objective has been to identify various techniques that are currently available,
identify or develop additional potentially useful techniques, and help analysts compare and select the
best technique for solving any specific analytic problem. Standardization of terminology for Structured
Analytic Techniques will facilitate collaboration across agency and international boundaries during the
use of these techniques.

Figure 2.1 System 1 and System 2 Thinking


Source: Pherson Associates, LLC, 2019.

The taxonomy presented in Figure 2.1 distinguishes System 1, or intuitive thinking, from the four broad
categories of analytic methods used in System 2 Thinking. It describes the nature of these four
categories, one of which is structured analysis. The others are critical thinking, empirical analysis, and
quasi-quantitative analysis. This chapter describes the rationale for these four broad categories. In the
next chapter, we review the six categories or families of Structured Analytic Techniques.
2.2 DEVELOPING A TAXONOMY OF
STRUCTURED ANALYTIC TECHNIQUES
Intelligence analysts employ a wide range of methods to deal with an
even wider range of subjects. Although this book focuses on the field
of structured analysis, it is appropriate to identify some initial
categorization of all the methods to see where structured analysis
fits. Many researchers write of only two general approaches to
analysis, contrasting qualitative with quantitative, intuitive with
empirical, or intuitive with scientific. Others might claim that there are
three distinct approaches: intuitive, structured, and scientific. In our
taxonomy, we have sought to address this confusion by describing
two types of thinking (System 1 and System 2) and defining four
categories of System 2 Thinking.

The first step of science is to know one thing from another. This knowledge consists in their specific distinctions; but in order that it may be fixed and permanent, distinct names must be given to different things, and those names must be recorded and remembered.

—Carolus Linnaeus, Systema Naturae (1738)

Whether intelligence analysis is, or should be, an art or science is one of the long-standing debates in the literature on intelligence analysis. As we see it, intelligence analysis has aspects of both spheres. The range of activities that fall under the rubric of intelligence analysis spans the entire range of human cognitive abilities, and it is not possible to divide it into just two categories—art and science—or to say that it is only one or the other. The extent to which any part of intelligence analysis is either art or science is entirely dependent upon how one defines “art” and “science.”
The taxonomy described here posits four functionally distinct
methodological approaches to intelligence analysis. These
approaches are distinguished by the nature of the analytic methods
used, the type of quantification if any, the type of data that is
available, and the type of training that is expected or required.
Although each method is distinct, the borders between them can be
blurry.

Critical thinking. Critical thinking, as defined by longtime intelligence methodologist and practitioner Jack Davis, is the application of the processes and values of scientific inquiry to the special circumstances of strategic intelligence.12 Good critical thinkers will stop and reflect on who is the client, what is the question, where can they find the best information, how can they make a compelling case, and what is required to convey their message effectively. They recognize that this process requires checking key assumptions, looking for disconfirming data, and entertaining multiple explanations. Most students are exposed to critical-thinking techniques at some point in their education—from grade school to university—but few colleges or universities offer specific courses to develop critical thinking and writing skills.

Structured analysis. Structured Analytic Techniques involve a step-by-step process that externalizes the analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be reviewed, discussed, and critiqued piece by piece. For this reason, structured analysis usually becomes a collaborative effort in which the transparency of the analytic process exposes participating analysts to divergent or conflicting perspectives. We believe this type of analysis helps to mitigate some of the adverse effects of a single analyst’s cognitive limitations, an ingrained mindset, and the whole range of cognitive biases, misapplied heuristics, and intuitive traps. Frequently used techniques include Cluster Brainstorming, Foresight analysis, Indicators, Analysis of Competing Hypotheses, and Key Assumptions Check. Structured techniques are taught in undergraduate and graduate school programs as well as many intelligence service training courses and can be used by analysts who do not have a background in statistics, advanced mathematics, or the hard sciences.

Empirical analysis. When large stores of quantitative data or social media reporting are available, analysts can apply quantitative methods to study the available information or “Big Data.” Quantifiable empirical data differ so much from expert-generated data that the methods, and the types of problems the data are used to analyze, are also quite different. Econometric modeling is a common example of this method. With the mushrooming of data obtainable from social media providers and the internet of things, sophisticated algorithms can identify trends and test hypotheses. Empirical data are collected by various types of sensors and are used, for example, in analysis of weapons systems or public response to a new product placement. Training is generally obtained through graduate education in statistics, economics, cyber analysis, or the hard sciences.

Quasi-quantitative analysis. When analysts lack the empirical data needed to analyze an intelligence problem, one strategy is to fill the gaps using expert-generated data. Many methods rely on experts to rate key variables as High, Medium, Low, or Not Present, or by assigning a subjective probability judgment. Special procedures are used to elicit these judgments from experts, and the ratings usually are integrated into a larger model that describes a phenomenon, such as the vulnerability of a civilian leader to a military coup, the level of political instability, or the likely outcome of a legislative debate. This category includes methods such as Bayesian inference, dynamic modeling, and simulation. Training in the use of these methods is provided through graduate education in fields such as mathematics, information science, political science, operations research, or business.
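
As a rough illustration of the expert-generated-data idea just described, the Python sketch below converts several experts’ High/Medium/Low ratings of key variables into a single weighted score, for example for the level of political instability mentioned above. The variable names, weights, and numeric scale are assumptions of this sketch, not part of any specific method in this book.

```python
# Minimal sketch of a quasi-quantitative, expert-generated rating model.
# Variable names, weights, and the numeric scale are illustrative assumptions.

RATING_SCALE = {"High": 1.0, "Medium": 0.5, "Low": 0.1, "Not Present": 0.0}

def instability_score(expert_ratings, weights):
    """Combine experts' High/Medium/Low judgments on key variables
    into a single weighted score between 0 and 1."""
    score, total_weight = 0.0, sum(weights.values())
    for variable, weight in weights.items():
        # Average the numeric value of each expert's rating for this variable.
        values = [RATING_SCALE[r[variable]] for r in expert_ratings]
        score += weight * (sum(values) / len(values))
    return score / total_weight

weights = {"economic stress": 0.4, "elite fragmentation": 0.35, "protest activity": 0.25}
experts = [
    {"economic stress": "High", "elite fragmentation": "Medium", "protest activity": "Low"},
    {"economic stress": "High", "elite fragmentation": "High", "protest activity": "Medium"},
]
print(f"Composite instability score: {instability_score(experts, weights):.2f}")
```
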
No one of these four methods is better or more effective than
another. All are needed in various circumstances to optimize the
odds of finding the right answer. The use of multiple methods over
the course of a single analytic project should be the norm, not the
exception. For example, even a highly quantitative technical analysis
may entail assumptions about motivation, intent, or capability that
are best handled with critical thinking approaches and/or structured
analysis. A brainstorming technique might be used to identify the
variables to include in a dynamic model that uses expert-generated
data to quantify these variables.

Of these four methods, structured analysis is the “new kid on the block,” so it is useful to consider how it relates to System 1 Thinking. System 1 Thinking combines subject-matter expertise and intuitive judgment in an activity that takes place largely in an analyst’s head. Although the analyst may gain input from others, the analytic product is frequently perceived as the product of a single analyst, and the analyst tends to feel “ownership” of his or her analytic product. The work of a single analyst is particularly susceptible to the wide range of cognitive pitfalls described in Psychology of Intelligence Analysis, Critical Thinking for Strategic Intelligence, and throughout this book.13

Structured analysis, which is System 2 Thinking, follows a step-by-step process that can be used by an individual analyst, but we believe a group process provides more benefit. Structured Analytic Techniques guide the dialogue among analysts with common interests as they work step-by-step through an analytic problem. The critical point is that this approach exposes participants with various types and levels of expertise to alternative ideas, evidence, or mental models early in the analytic process and helps even experts avoid some common cognitive pitfalls. The structured group process that identifies and assesses alternative perspectives can also help to avoid Groupthink, the most common problem of small-group processes.
When used by a group or a team, structured techniques can become
a mechanism for information sharing and group learning that helps to
compensate for gaps or weaknesses in subject-matter expertise.
This is especially useful for complex projects that require a synthesis
of multiple types of expertise.
2.3 DEALING WITH COGNITIVE LIMITATIONS
As good as intuitive judgment often is, such judgment is still System 1 activity in the brain and is subject
to many different types of cognitive limitations. Potential causes of such biases and mental mistakes
include professional experience leading to an ingrained analytic mindset, training or education, the
nature of one’s upbringing, type of personality, a salient personal experience, or personal equity in a
decision.

In this chapter, we distinguish between three types of cognitive limitations (see Figure 2.3):

Cognitive biases are inherent thinking errors that people make in processing information. They prevent an analyst from accurately understanding reality even when all the data and evidence needed to form an accurate view are in hand.

Heuristics are experience-based techniques that can give a solution that is not guaranteed to be
optimal. The objective of a heuristic is to produce quickly a solution that is good enough to solve
the problem at hand. Analysts can err by overrelying on or misapplying heuristics. Heuristics help
an analyst generate a quick answer, but sometimes that answer will turn out to be wrong.

Intuitive traps are practical manifestations of commonly recognized cognitive biases or heuristics
that analysts in the intelligence profession—and many other disciplines—often fall victim to in their
day-to-day activities.

There is extensive literature on how cognitive biases and heuristics affect a person’s thinking in many
fields. Intuitive traps, however, are a new category of bias first identified by Randolph Pherson and his
teaching colleagues as they explored the value of using Structured Analytic Techniques to counter the
negative impact of cognitive limitations. Additional research is ongoing to refine and revise the list of
eighteen intuitive traps.

All cognitive biases, misapplied heuristics, and intuitive traps, except perhaps the personal equity bias, are more often the result of fast, unconscious, and intuitive System 1 Thinking than of thoughtful System 2 reasoning. System 1 Thinking—though often correct—is more susceptible to cognitive biases and mindsets as well as to insufficient knowledge and the inherent unknowability of the future. Structured Analytic Techniques—a type of System 2 Thinking—help identify and overcome the analytic biases inherent in System 1 Thinking.

Behavioral scientists have studied the impact of cognitive biases on analysis and decision making in
many fields, such as psychology, political science, medicine, economics, business, and education ever
since Amos Tversky and Daniel Kahneman introduced the concept of cognitive biases in the early
1970s.14 Richards Heuer’s work for the CIA in the late 1970s and the 1980s, subsequently followed by
his book Psychology of Intelligence Analysis, first published in 1999, applied Tversky and Kahneman’s
insights to problems encountered by intelligence analysts.15 Since the publication of Psychology of
Intelligence Analysis, other authors associated with the U.S. Intelligence Community (including Jeffrey
Cooper and Rob Johnston) have identified cognitive biases as a major cause of analytic failure at the
CIA.16
Figure 2.3 Glossary of Cognitive Biases, Misapplied Heuristics, and Intuitive Traps

This book is a logical follow-on to Psychology of Intelligence Analysis, which described in detail many of
the biases and heuristics that influence intelligence analysis.17 Since then, hundreds of cognitive biases
and heuristics have been described in the academic literature using a wide variety of terms. As Heuer
noted many years ago, “Cognitive biases are similar to optical illusions in that the error remains
compelling even when one is fully aware of its nature. Awareness of the bias, by itself, does not
produce a more accurate perception.”18 This is why cognitive limitations are exceedingly difficult to
overcome. For example, Emily Pronin, Daniel Y. Lin, and Lee Ross observed in three different studies
that people see the existence and operation of cognitive and motivational biases much more in others
than in themselves.19 This explains why so many analysts believe their own intuitive thinking (System
1) is sufficient.

Analysts in the intelligence profession—and many other disciplines—often fall victim to cognitive
biases, misapplied heuristics, and intuitive traps that are manifestations of commonly recognized
biases. Structured Analytic Techniques help analysts avoid, overcome, or at least mitigate their impact.
How a person perceives information is strongly influenced by factors such as experience, education,
cultural background, and what that person is expected to do with the data. Our brains are trained to
process information quickly, which often leads us to process data incorrectly or to not recognize its
significance if it does not fit into established patterns. Some heuristics, such as the fight-or-flight instinct
or knowing you need to take immediate action when you smell a gas leak, are helpful. Others are
nonproductive. Defaulting to “rules of thumb” while problem solving can often lead to inherent thinking
errors, because the information is being processed too quickly or incorrectly.

Cognitive biases, such as Confirmation Bias or Hindsight Bias, impede analytic thinking from the
very start.20

Misapplied heuristics, such as Groupthink or Premature Closure, could lead to a correct decision
based on a non-rigorous thought process if one is lucky. More often, they impede the analytic
process because they prevent us from considering a full range of possibilities.

Intuitive traps, such as Projecting Past Experiences or Overinterpreting Small Samples, are
mental mistakes practitioners make when conducting their business. A classic example is when a
police detective assumes that the next case he or she is working will be like the previous case or a
general prepares to fight the last war instead of anticipating that the next war will have to be fought
differently.

Unfortunately for analysts, these biases, heuristics, and traps are quick to form and extremely hard to
correct. After one’s mind has reached closure on an issue, even a substantial accumulation of
contradictory evidence is unlikely to force a reappraisal. Analysts often do not see new patterns
emerging or fail to detect inconsistent data. An even larger concern is the tendency to ignore or dismiss
outlier data as “noise.”

Structured Analytic Techniques help analysts avoid, overcome, or at least mitigate these common
cognitive limitations. Structured techniques help analysts do the following:

Reduce error rates.

Avoid intelligence and other analytic failures.

Embrace more collaborative work practices.

Ensure accountability.

Make the analysis more transparent to other analysts and decision makers.
2.4 MATCHING COGNITIVE LIMITATIONS TO STRUCTURED
TECHNIQUES

Figure 2.4 Matching Cognitive Limitations to the Six Families of Structured Techniques

In this book, we proffer guidance on how to reduce an analyst’s vulnerability to cognitive limitations. In
the overview of each family of Structured Analytic Techniques, we list two cognitive biases or misapplied
heuristics as well as two intuitive traps that the techniques in that family are most effective in countering
(see Figure 2.4). The descriptions of each of the sixty-six techniques include commentary on which
biases, heuristics, and traps that specific technique helps mitigate. In our view, most techniques help
counter cognitive limitations with differing degrees of effectiveness, and the matches we selected are
only illustrative of what we think works best. Additional research is needed to empirically validate the
matches we have identified from our experience teaching the techniques over the past decade and
exploring their relationship to key cognitive limitations.
2.5 COMBATING DIGITAL
DISINFORMATION
The growing use of social media platforms to manipulate popular
perceptions for partisan political or social purposes has made
democratic processes increasingly vulnerable in the United States
and across the world. Largely unencumbered by commercial or legal
constraints, international standards, or morality, proponents of Digital
Disinformation21 have become increasingly adept at exploiting
common cognitive limitations, such as Confirmation Bias,
Groupthink, and Judging by Emotion. History may show that we
have grossly underestimated how easy it has been to influence
popular opinion by leveraging cognitive biases, misapplied
heuristics, and intuitive traps.

Digital Disinformation is purposely intended to mislead the reader. Perpetrators of Digital Disinformation compose compelling and seemingly coherent narratives that usually dismiss inconsistent evidence and ignore basic rules of logic. The primary objective of digital deceivers is to provide incorrect information in a seemingly persuasive format that confirms the readers’ biases and either hardens mental mindsets or sows apathy or disbelief in the ability to know the truth.22 Uncritical readers will often believe they have “found the truth” when actually they are functioning as both victims and perpetrators of cognitive bias, misapplied heuristics, and intuitive traps.

Purposeful misinformation, conspiracy theories, deception, and active measures have been used by activists and nation-states to influence people for decades, if not centuries.23 Such efforts at perception management appear to have had greater impact in recent years because of the following:
The breadth and volume of misinformation have become staggering, owing to the power of social media platforms.

The speed of the spread of disinformation is breathtaking as stories can quickly go “viral,” spreading to millions of readers. A Massachusetts Institute of Technology study in Science documents that false rumors travel across the internet six times faster than factual stories.24

People appear to be increasingly seeking simple answers to complex problems. Social network platforms usually present information in simplified form, which makes the message more digestible but far less nuanced—and often inaccurate.25

The incentives for digital deceivers to leverage social media platforms to manipulate popular perceptions have also increased dramatically because of the following:

Millions of people can be reached almost instantaneously.

Few perpetrators are held accountable for their posts.

Perpetrators can micro-target their messages to those most easily swayed and open to persuasion.

Another underlying and often overlooked factor explaining the growing impact of Digital Disinformation is the susceptibility of individuals to false messaging. Perpetrators of conspiracy theories know what is most likely to “stick” in the minds of their audiences. This “stickiness” is usually attributable to the exploitation of human vulnerabilities that are manifestations of omnipresent, and well-ingrained, cognitive biases, misapplied heuristics, and intuitive traps.

Perpetrators of Digital Disinformation know that the best way to manipulate popular perceptions is to exploit well-ingrained cognitive limitations. They can anticipate when a person is likely to fall victim to a cognitive bias or to misapply a heuristic, and they leverage this knowledge to increase the impact of their messaging. Experts in false messaging, for example, are aware that people’s perceptions of data are strongly influenced by their past experiences, education, cultural values, and how they identify themselves. People with different backgrounds will perceive information differently.

Moreover, knowledge of someone’s social media profile greatly facilitates the process of identifying how best to package misinformation to reinforce that person’s thinking. With the explosive growth in the use of social media platforms and databases, the use of such micro-targeting strategies has proven increasingly effective in product marketing and more recently in political campaigns.

Two of the most powerful biases that perpetrators of misinformation exploit are Confirmation Bias—seeking only information that confirms your viewpoint—and Vividness Bias—focusing attention only on the most vivid possibility.26,27 Digital deceivers have also become masters of exploiting misapplied heuristics, such as the Anchoring Effect, Groupthink, and Satisficing. Intuitive traps that create vulnerabilities include Judging by Emotion, Presuming Patterns, and Overinterpreting Small Samples.

Recognizing one’s vulnerability to Digital Disinformation is insufficient for mitigating the threat. A more productive strategy is needed—one that involves the use of critical thinking strategies and Structured Analytic Techniques. People are less likely to be deceived if they make it a habit to evaluate the quality of the evidence used to support a claim and ask what other credible, alternative narratives could explain what has occurred. Four Structured Analytic Techniques that are particularly effective in helping counter the impact of Digital Disinformation are as follows:28

Key Assumptions Check. Making explicit and questioning the assumptions that guide an analyst’s interpretation of evidence and the reasoning underlying a judgment or conclusion.

Analysis of Competing Hypotheses. The evaluation of information against a set of alternative hypotheses to determine the consistency/inconsistency of each piece of data against each hypothesis and the rejection of hypotheses with much inconsistent data.

Premortem Analysis and Structured Self-Critique. A systematic process using brainstorming and checklist procedures to identify critical weaknesses in an argument and assess how a key analytic judgment could be spectacularly wrong.
NOTES
1. For further information on dual process theory, see the research
by Jonathan Evans and Keith Frankish, In Two Minds: Dual
Processes and Beyond (Oxford, UK: Oxford University Press, 2009);
and Pat Croskerry, “A Universal Model of Diagnostic Reasoning,”
Academic Medicine 84, no. 8 (August 2009).

2. Reasoning by analogy can also be a structured technique called Structured Analogies, as described in chapter 8.

3. Rob Johnston, Analytic Culture in the U.S. Intelligence Community (Washington, DC: CIA Center for the Study of Intelligence, 2005), 34.

4. Robert M. Clark, Intelligence Analysis: A Target-Centric Approach, 2nd ed. (Washington, DC: CQ Press, 2007), 84.

5. Forecasting Principles website, last accessed November 6, 2019, www.forecastingprinciples.com/files/pdf/methodsselectionchart.pdf

6. Russell W. Frenske, “A Taxonomy for Operations Research,” Operations Research 19, no. 1 (January–February 1971).

7. Kai R. T. Larson, “A Taxonomy of Antecedents of Information Systems Success: Variable Analysis Studies,” Journal of Management Information Systems 20, no. 2 (Fall 2003).

8. Ralph Lengler and Martin J. Eppler, “A Periodic Table of Visualization Methods,” n.d., www.visual-literacy.org/periodic_table/periodic_table.html

9. Roger Clarke, Appropriate Research Methods for Electronic Commerce (Canberra, Australia: Xamax Consultancy Pty Ltd., 2000), www.forecastingprinciples.com/files/pdf/methodsselectionchart.pdf

10. Robert R. Hoffman, Nigel R. Shadbolt, A. Mike Burton, and Gary Klein, “Eliciting Knowledge from Experts,” Organizational Behavior and Human Decision Processes 62 (May 1995): 129–158.

11. Robert R. Hoffman and Laura G. Militello, Perspectives on Cognitive Task Analysis: Historical Origins and Modern Communities of Practice (Boca Raton, FL: CRC Press/Taylor and Francis, 2008); Beth Crandall, Gary Klein, and Robert R. Hoffman, Working Minds: A Practitioner’s Guide to Cognitive Task Analysis (Cambridge, MA: MIT Press, 2006).

12. See Katherine Hibbs Pherson and Randolph H. Pherson, Critical Thinking for Strategic Intelligence, 2nd ed. (Washington, DC: CQ Press/SAGE, 2017), xxii.

13. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson Associates, LLC, Reston, VA, 2007).

14. Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1131.

15. Psychology of Intelligence Analysis was republished by Pherson Associates, LLC, in 2007, and can be purchased on its website at shop.globalytica.com.

16. Jeffrey R. Cooper, Curing Analytic Pathologies: Pathways to Improved Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 2005); Rob Johnston, Analytic Culture in the U.S. Intelligence Community: An Ethnographic Study (Washington, DC: CIA Center for the Study of Intelligence, 2005).

17. Heuer, Psychology of Intelligence Analysis.

18. Ibid., 112.

19. Emily Pronin, Daniel Y. Lin, and Lee L. Ross, “The Bias Blind
Spot: Perceptions of Bias in Self versus Others,” Personality and
Social Psychology Bulletin 28, no. 3 (2002): 369–381.
20. Definitions of these and other cognitive biases, misapplied
heuristics, and intuitive traps mentioned later in this chapter are
provided in Figure 2.3 on pages 24–25.

21. Efforts to purposefully mislead or misinform have also been described as “Fake News,” “False News,” or “Agenda-Driven News.” The phrase most often used in the public domain is Fake News, but the inaccurate use of this term to describe any critical news reporting has undermined its usefulness.

22. Rob Brotherton, “Five Myths about Conspiracy Theories,” Washington Post, January 17, 2019, https://www.washingtonpost.com/outlook/five-myths/five-myths-about-conspiracy-theories/2019/01/17/0ef1b840-1818-11e9-88fe-f9f77a3bcb6c_story

23. The term “active measures” refers to actions taken by the Soviet
Union, and later Russia, beginning in the 1920s to influence popular
perceptions through propaganda, false documentation, penetration
of institutions, persecution of political activists, and political violence,
including assassinations. For more information, see the testimony of
Gen. (ret.) Keith B. Alexander, Disinformation: A Primer in Russian
Active Measures and Influence Campaigns, United States Senate
Select Committee on Intelligence, March 30, 2017,
https://www.intelligence.senate.gov/sites/default/files/documents/os-
kalexander-033017.pdf.

24. Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of
True and False News Online,” Science 359, no. 6380 (2018): 1146–
1151.

25. Elisa Shearer and Jeffrey Gottfried, News Use across Social Media Platforms 2017 (Washington, DC: Pew Research Center, September 7, 2017), http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/

26. A fuller discussion of this issue can be found in Randolph H. Pherson and Penelope Mort Ranta, “Cognitive Bias, Digital Disinformation, and Structured Analytic Techniques,” Revista Română de studii de intelligence (Romanian Journal of Intelligence Studies) 21 (2019). The article was inspired in large part by observations made during the U.S. presidential election in 2016. Similar dynamics, however, have been observed in subsequent elections in France, Germany, and several other European states as well as the Brexit campaign in the United Kingdom.

27. A review of the cognitive biases and misapplied heuristics most often experienced by intelligence analysts can be found in Katherine Hibbs Pherson and Randolph H. Pherson, Critical Thinking for Strategic Intelligence, 2nd ed. (Washington, DC: CQ Press/SAGE, 2017), 55.

28. Randolph H. Pherson, Handbook of Analytic Tools and Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019), 5, 9, 19, 31, 43, 53.
Description of Figure 2.1 (System 1 and System 2 Thinking): In System 1 Thinking, judgment is intuitive. System 2 Thinking comprises critical thinking, structured analysis, quasi-quantitative analysis, and empirical analysis. Critical thinking is qualitative with known data and includes getting started, source validation, argumentation, and presentation. Structured analysis is qualitative with known and unknown data and includes exploration, diagnostic, reframing, and foresight techniques. Quasi-quantitative analysis is quantitative with known and unknown data and includes computer-based tools using expert-generated data. Empirical analysis is quantitative with known data and includes data-based computer tools and visualization techniques.
CHAPTER 3 CHOOSING THE
RIGHT TECHNIQUE

3.1 The Six Families

3.2 Core Techniques

3.3 Selecting the Right Technique

3.4 Projects Using Multiple Techniques

3.5 Common Errors in Selecting Techniques

3.6 Making a Habit of Using Structured Techniques

This chapter provides analysts with a practical guide to identifying the various techniques that are most likely to meet their needs. It also does the following:

Reviews the reorganization of the book, decreasing the number of families of techniques from eight to six.

Identifies a set of core techniques that are used frequently and should be part of every analyst’s toolkit. Instructors may want to review this list when deciding which techniques to teach.

Provides a framework for deciding which techniques to employ for a given problem or project.

Discusses the value of using multiple techniques for a single project.

Lists common mistakes analysts make when deciding which technique or techniques to use for a project.

Describes five habits of thinking that an analyst should draw upon when under severe time pressure to deliver an analytic product.
3.1 THE SIX FAMILIES
Considering that the U.S. Intelligence Community started focusing
on structured techniques to enhance the rigor of analysis, it is fitting
to categorize these techniques by the various ways they help
achieve this goal. Structured Analytic Techniques can mitigate some
human cognitive limitations, sidestep some of the well-known
analytic pitfalls, and address the problems associated with
unquestioned assumptions and outdated mental models. They can
ensure that assumptions, preconceptions, and mental models are
not taken for granted but are explicitly examined and tested. They
can support the decision-making process, and the use and
documentation of these techniques can facilitate information sharing
and collaboration.

A secondary goal when categorizing structured techniques is to correlate categories with different types of common analytic tasks. These often map to the basic analytic production process: getting started, collecting and organizing your information, developing your conceptual framework, conducting the analysis, challenging your conclusions, estimating future trends, and supporting use of the analysis by decision makers. In this book we have organized sixty-six techniques into six families: Getting Organized, Exploration Techniques, Diagnostic Techniques, Reframing Techniques, Foresight Techniques, and Decision Support Techniques. We have allocated about ten techniques to each family, but several techniques—such as the Key Assumptions Check, Outside-In Thinking, What If? Analysis, and Argument Mapping—fit comfortably in several categories because they serve multiple analytic functions.
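
To make the grouping concrete, here is a minimal, illustrative lookup in Python that pairs stages of the analytic production process named above with the six families. The pairings are a rough reading of this chapter, not a reproduction of the book’s selection guide.

```python
# Illustrative mapping of analytic production stages to the six families
# named in this chapter. The pairings are a rough reading of the text,
# not the book's own task-to-technique selection guide.
FAMILY_BY_STAGE = {
    "getting started and organizing information": "Getting Organized",
    "generating ideas and developing a conceptual framework": "Exploration Techniques",
    "conducting the analysis and testing hypotheses": "Diagnostic Techniques",
    "challenging conclusions and mental models": "Reframing Techniques",
    "estimating future trends": "Foresight Techniques",
    "supporting decision makers' use of the analysis": "Decision Support Techniques",
}

for stage, family in FAMILY_BY_STAGE.items():
    print(f"{stage} -> {family}")
```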

The six families of Structured Analytic Techniques are described in detail in chapters 5–10. The introduction to each chapter describes how that specific category of techniques helps to improve analysis.
3.2 CORE TECHNIQUES
The average analyst is not expected to know how to use every
technique in this book. All analysts should, however, understand the
functions performed by various types of techniques and recognize
the analytic circumstances in which it is advisable to use them. An
analyst can gain this knowledge by reading the introductions to each
of the technique chapters and the overviews of each technique.
Tradecraft or methodology specialists should be available to assist
when needed in the actual implementation of many of these
techniques. In the U.S. Intelligence Community, for example, the CIA
and several other agencies support the use of these techniques
through the creation of analytic tradecraft support cells or mentoring
programs.

All analysts should be trained to use the core techniques discussed here because they support several of the basic requirements of generating high-quality analysis. They are also widely applicable across many different types of analysis—strategic and tactical, intelligence and law enforcement, and cyber and business. Eight core techniques are described briefly in the following paragraphs.

Cluster Brainstorming (chapter 6).


A commonly used technique, Cluster Brainstorming (referred to as
Structured Brainstorming in previous editions of this book) is a
simple exercise employed at the beginning of an analytic project to
elicit relevant information or insight from a small group of
knowledgeable analysts. The group’s goal might be to identify a list
of such things as relevant variables, driving forces, a full range of
hypotheses, key players or stakeholders, and available evidence or
sources of information. Analysts can also use Cluster Brainstorming
to explore potential solutions to a problem, potential outcomes or
scenarios, or potential responses by an adversary or competitor to
some action or situation. Law enforcement analysts can use the
technique to brainstorm potential suspects or develop avenues of
investigation. Analysts should consider using other silent
brainstorming techniques or the Nominal Group Technique (chapter
6) as an alternative to Cluster Brainstorming when there is concern
that a senior officer or recognized expert might dominate a regular
brainstorming session or that participants may be reluctant to speak
up.

Key Assumptions Check (chapter 7).


One of the most frequently used techniques is the Key Assumptions
Check. It requires analysts to explicitly list and question the most
important working assumptions underlying their analysis. Any
explanation of current events or estimate of future developments
requires the interpretation of incomplete, ambiguous, or potentially
deceptive evidence. To fill in the gaps, analysts typically make
assumptions about such things as the relative strength of political
forces, another country’s intentions or capabilities, the way
governmental processes usually work in that country, the
trustworthiness of key sources, the validity of previous analyses on
the same subject, or the presence or absence of relevant changes in
the context in which the activity is occurring. It is important that
analysts explicitly recognize and question their assumptions.
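
As a rough illustration of what “explicitly list and question” can look like in practice, the Python sketch below records each working assumption with a rationale and a status label. The status labels and example assumptions are this sketch’s own, not drawn from the technique’s formal description in chapter 7.

```python
# A minimal, illustrative record structure for a Key Assumptions Check.
# The status labels and example assumptions are assumptions of this sketch.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str   # the working assumption, stated explicitly
    rationale: str   # why the analyst believes it
    status: str      # e.g., "supported", "caveated", "unsupported"

def unsupported(assumptions):
    """Return the assumptions that still lack supporting evidence and
    therefore deserve further questioning or collection."""
    return [a for a in assumptions if a.status == "unsupported"]

checklist = [
    Assumption("Key sources remain trustworthy", "Past reporting has been corroborated", "supported"),
    Assumption("The government decision process has not changed", "No reporting either way", "unsupported"),
]
for a in unsupported(checklist):
    print(f"Revisit: {a.statement}")
```
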

Analysis of Competing Hypotheses (chapter 7).


This technique requires analysts to start with a full set of plausible
hypotheses rather than with a single most likely hypothesis. Analysts
then take each item of relevant information, one at a time, and judge
its consistency or inconsistency with each hypothesis. The idea is to
refute hypotheses rather than confirm them. The most likely
hypothesis is the one with the least inconsistent information that
would argue against it, not the one with the most relevant information
that supports it. This process applies a key element of the scientific
method to intelligence analysis.
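
The scoring logic at the heart of this approach can be sketched in a few lines of Python: rate each item of evidence against each hypothesis, tally the inconsistencies, and favor the hypothesis with the least inconsistent evidence. The hypotheses, evidence items, and ratings below are invented for illustration and greatly simplify the full procedure.

```python
# Minimal sketch of the ACH scoring idea: count how much evidence is
# inconsistent with each hypothesis and keep the hypothesis with the
# LEAST inconsistent evidence. All labels and ratings are invented.

ratings = {
    # evidence item: {hypothesis: "C" (consistent), "I" (inconsistent), "N" (neutral)}
    "E1": {"H1": "C", "H2": "I", "H3": "C"},
    "E2": {"H1": "I", "H2": "I", "H3": "C"},
    "E3": {"H1": "C", "H2": "C", "H3": "N"},
}

def inconsistency_counts(matrix):
    counts = {}
    for evidence, row in matrix.items():
        for hypothesis, rating in row.items():
            counts[hypothesis] = counts.get(hypothesis, 0) + (rating == "I")
    return counts

counts = inconsistency_counts(ratings)
best = min(counts, key=counts.get)
print(counts)                  # {'H1': 1, 'H2': 2, 'H3': 0}
print("Least refuted:", best)  # H3
```
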
Premortem Analysis and Structured Self-Critique
(chapter 8).
This pair of easy-to-use techniques enables a small team of analysts
who have been working together on any type of analysis to
challenge effectively the accuracy of its own conclusions. Premortem
Analysis uses a form of reframing, in which restating the question or
problem from another perspective enables one to see it in a different
way and formulate different answers. For example, analysts could
place themselves months or years in the future and imagine that
they suddenly learn from an unimpeachable source that their original
estimate was wrong. Then imagine what could have happened to
cause the estimate to be wrong. Looking back to explain something
that has happened is much easier than looking into the future to
forecast what will happen.

With the Structured Self-Critique, analysts respond to a list of questions about a variety of factors, including sources of uncertainty, analytic processes, critical assumptions, diagnosticity of evidence, information gaps, and the potential for deception. Rigorous use of both techniques can help prevent a future need for a postmortem.

What If? Analysis (chapter 8).


In conducting a What If? Analysis, one imagines that an unexpected
event has happened and then, with the benefit of “hindsight,”
analyzes how it could have come about and considers the potential
consequences. This reframing approach creates an awareness that
prepares the analyst’s mind to recognize early signs of a significant
change. It can also enable decision makers to plan for
contingencies. In addition, a What If? Analysis can be a tactful way
of alerting a decision maker to the possibility that he or she may be
wrong.

Multiple Scenarios Generation (chapter 9).


One of the most commonly used Foresight analysis techniques,
Multiple Scenarios Generation, uses key drivers in a 2-×-2 matrix to
generate multiple explanations for how a situation may develop
when considerable uncertainty is present. The technique leverages
the knowledge and imagination of a diverse group of experts to
identify alternative future trajectories that both warn decision makers
of downside risks and illuminate new opportunities.
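
A minimal sketch of the 2-×-2 step follows: cross two key drivers, each with two contrasting endpoints, to produce four candidate scenarios. The driver names below are invented placeholders, not examples drawn from the book.

```python
# Minimal sketch of the 2-x-2 step in Multiple Scenarios Generation:
# cross two key drivers, each with two contrasting endpoints, to get
# four candidate scenarios. Driver names are invented placeholders.
from itertools import product

drivers = {
    "economic conditions": ("improving", "deteriorating"),
    "government cohesion": ("unified", "fragmented"),
}

scenarios = [dict(zip(drivers, combo)) for combo in product(*drivers.values())]
for i, s in enumerate(scenarios, 1):
    print(f"Scenario {i}: " + ", ".join(f"{d} {v}" for d, v in s.items()))
```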

Indicators Generation, Validation, and Evaluation (chapter 9).

Indicators are observable actions or events that can be generated
using a variety of structured techniques. They can be monitored to
detect or anticipate change. For example, analysts can use
Indicators to measure changes toward an undesirable condition,
such as political instability, a pending financial crisis, or a coming
attack. Indicators can also point toward a desirable condition, such
as economic reform or democratic institution building. The special
value of Indicators is that they create an awareness that prepares an
analyst’s mind to recognize the earliest signs of significant change
that might otherwise be overlooked. Indicators must be validated,
and the Indicators Evaluation process helps analysts assess the
diagnostic value of their Indicators.
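
The diagnostic-value idea can be illustrated with a small sketch: an indicator expected to appear under every scenario cannot discriminate among them, while one expected under only a single scenario is highly diagnostic. The indicators, scenarios, and scoring rule below are invented for illustration.

```python
# Minimal sketch of the Indicators Evaluation idea: an indicator expected
# under every scenario cannot discriminate among them; one expected under
# only a single scenario is highly diagnostic. All names are invented.

expected = {
    # indicator: set of scenarios under which the indicator would likely appear
    "troop movements near the border": {"attack", "coercive diplomacy"},
    "harsh rhetoric in state media": {"attack", "coercive diplomacy", "status quo"},
    "evacuation of nationals abroad": {"attack"},
}
scenarios = {"attack", "coercive diplomacy", "status quo"}

for indicator, appears_in in expected.items():
    # 0 = expected under every scenario, hence non-diagnostic
    score = len(scenarios) - len(appears_in)
    print(f"{indicator}: diagnosticity score {score}")
```
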
3.3 SELECTING THE RIGHT TECHNIQUE
Analysts must be able, with minimal effort, to identify and learn how to use the techniques that best
meet their needs and fit their styles. The selection guide provided in Figure 3.3a lists twelve tasks that
analysts perform and matches the task to several Structured Analytic Techniques that would maximize
their performance. The tasks are organized to conform generally with the analytic production process as
represented by the six families of techniques. For the purposes of the graphic, the Getting Organized
family was incorporated into the Exploration task.

Figure 3.3A Selecting the Right Structured Analytic Technique


Source: Pherson Associates, LLC, 2019.

Figure 3.3B When to Use Structured Analytic Techniques


Source: Pherson Associates, LLC, 2019.

To identify the structured techniques that would be most helpful in learning how to perform a task with
more rigor and imagination, analysts pick the statement that best describes their objectives and then
choose one or two of the techniques listed below the task. Analysts should refer to the appropriate
chapter in the book and first read the brief discussion of that family of techniques (which includes a short
description of each technique in the chapter) to validate their choice(s). The next step is to read the
section of the chapter that describes when, why, and how to use the chosen technique. For many
techniques, the information provided sufficiently describes how to use the technique. Some more
complex techniques require specialized training or facilitation support by an experienced user.

Another question often asked is, “When should I use the techniques?” Figure 3.3b provides a reference
guide for when to use thirty-three of the most used structured techniques.
3.4 PROJECTS USING MULTIPLE
TECHNIQUES
Many projects require the use of multiple techniques, which is why
this book includes sixty-six different techniques. Each technique may
provide only one piece of a complex puzzle; knowing how to put
these pieces together for a specific project is part of the art of
structured analysis. Separate techniques might be used for
organizing the data, evaluating ideas, and identifying assumptions.
There are also several techniques appropriate for generating and
testing hypotheses, drawing conclusions, challenging key findings,
and implementing new strategies.

Multiple techniques can be used to check the accuracy of and increase confidence in an analytic conclusion. Research shows that forecasting accuracy is increased by combining “forecasts derived from methods that differ substantially and draw from different sources of information.”1 This is a particularly appropriate function for the Delphi Method (chapter 8), which is a structured process for eliciting judgments from a panel of outside experts. If a Delphi panel produces results similar to the initial internal analysis, one can have significantly greater confidence in those results. If the results differ, further research may be appropriate to understand why and to evaluate the differences.
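
As a simple illustration of this point, the sketch below combines an internal probability estimate with a Delphi panel’s estimate and flags a large divergence for further research. The numbers and threshold are arbitrary placeholders, not part of the Delphi Method itself.

```python
# Small illustration of comparing an internal estimate with a Delphi panel:
# combine the two probability judgments and flag a large divergence.
# The probabilities and threshold are arbitrary placeholders.

def compare(internal_p, delphi_p, threshold=0.20):
    combined = (internal_p + delphi_p) / 2
    diverges = abs(internal_p - delphi_p) > threshold
    return combined, diverges

combined, diverges = compare(internal_p=0.70, delphi_p=0.40)
print(f"Combined estimate: {combined:.2f}")
if diverges:
    print("Estimates diverge substantially; investigate why before publishing.")
```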

A key lesson learned from mentoring analysts in the use of structured techniques is that major benefits can result—and major mistakes be avoided—if analysts use two different techniques to conduct the same analysis (for example, pairing Cluster Brainstorming with Diagnostic Reasoning or pairing Key Uncertainties Finder™ with Key Drivers Generation™). Two groups can either (1) attack the same problem independently applying the same structured technique or (2) work the same problem independently using a different but complementary technique. They then should share their findings with each other and meld their results into a single more comprehensive solution.
3.5 COMMON ERRORS IN SELECTING
TECHNIQUES
The value and accuracy of an analytic product depends in part upon
selection of the most appropriate technique or combination of
techniques for doing the analysis. Unfortunately, it is easy for
analysts to go astray when selecting the best method. Lacking
effective guidance, analysts are vulnerable to various influences:2

College or graduate-school recipe. Analysts are inclined to use the tools they learned in college or graduate school regardless of whether those tools are most appropriate for dealing with an intelligence problem.

Tool rut. Analysts are inclined to use whatever tool they already
know or have readily available. Psychologist Abraham Maslow
observed that “if the only tool you have is a hammer, it is
tempting to treat everything as if it were a nail.”3

Convenience shopping. The analyst, guided by the evidence that happens to be available, uses a method appropriate for that evidence, rather than seeking out the evidence that is really needed to address the intelligence issue. In other words, the evidence may sometimes drive the technique selection instead of the analytic need driving the evidence collection.

Time constraints. Analysts can easily be overwhelmed by their inboxes and the myriad tasks they must perform in addition to their analytic workload. The temptation is to avoid techniques that would “take too much time.” However, many useful techniques take relatively little time to perform, even as little as an hour or two. This ultimately helps analysts produce higher-quality and more compelling analysis than might otherwise be possible.
3.6 MAKING A HABIT OF USING STRUCTURED TECHNIQUES
Analysts sometimes express concern that they do not have enough time to use Structured Analytic
Techniques. The experience of most analysts and particularly managers of analysts is that this concern
is unfounded. If analysts stop to consider how much time it takes to research an issue and draft a report,
coordinate the analysis, walk the paper through the editing process, and get it approved and
disseminated, they will discover that the use of structured techniques typically speeds the process.

Many of the techniques, such as Key Assumptions Check, Circleboarding™, Inconsistencies Finder™, and Indicators Validation and Evaluation, take little time and substantially improve the rigor of the analysis.

Some take a little more time to learn, but, once learned, often save analysts considerable time over
the long run. Cluster Brainstorming, Analysis of Competing Hypotheses (ACH), and Red Hat
Analysis are good examples of this phenomenon.

Most Foresight Techniques, Premortem Analysis, and Structured Self-Critique take more time to
perform but offer major rewards for discovering both “unknown unknowns” and errors in the original
analysis that can be remedied.

When working on quick-turnaround items, such as a current situation report or an alert that must be
produced the same day, one can credibly argue that it is not possible to take time to use a structured
technique. When deadlines are short, gathering the right people in a small group to employ a structured
technique can prove to be impossible.

The best response to this valid observation is to encourage analysts to practice using core structured
techniques when deadlines are less pressing. In so doing, they ingrain new habits of thinking. If they,
and their colleagues, practice how to apply the concepts embedded in the structured techniques when
they have time, they will be more capable of applying these critical thinking skills instinctively when
under pressure. The Five Habits of the Master Thinker are described in Figure 3.6.4 Each habit can be
mapped to one or more Structured Analytic Techniques.

Key Assumptions.
In a healthy work environment, challenging assumptions should be commonplace, ranging from “Why do
you assume we all want pepperoni pizza?” to “Won’t higher oil prices force them to reconsider their
export strategy?” If you expect your colleagues to challenge your key assumptions on a regular basis,
you will become more sensitive to them yourself and will increasingly question if your assumptions are
well-founded.

Alternative Explanations.
When confronted with a new development, the first instinct of a good analyst is to develop a hypothesis
to explain what has occurred based on the available evidence and logic. A master thinker goes one step
further and immediately asks whether any alternative explanations should be considered. If envisioning
one or more alternative explanations is difficult, a master thinker will simply posit a single alternative: that
the initial or lead hypothesis is not true. Although at first glance these alternatives may appear much
less likely, as new evidence surfaces over time, one of the alternatives may evolve into the lead
hypothesis. Analysts who do not generate a set of alternative explanations at the start of a project but
rather quickly lock on to a preferred explanation will often fall into the trap of Confirmation Bias—
focusing on the data that are consistent with their explanation and ignoring or rejecting other data that
are inconsistent.
Inconsistent Data.
Looking for inconsistent data is probably the hardest of the five habits to master, but it is the one that
can reap the most benefits in terms of time saved when investigating or researching an issue. The best
way to train your brain to look for inconsistent data is to conduct a series of ACH or Inconsistencies
Finder™ exercises. Such practice helps analysts readily identify what constitutes compelling contrary
evidence. If an analyst encounters an item of data that is compellingly inconsistent with one of the
hypotheses (for example, a solid alibi), then that hypothesis can be quickly discarded. This will save the
analyst time by redirecting his or her attention to more likely solutions.

Figure 3.6 The Five Habits of the Master Thinker


Source: Pherson Associates, LLC, 2019.

Key Drivers.
Asking at the outset what key drivers best explain what has occurred or foretell what is about to happen
is a key attribute of a master thinker. If analysts quickly identify key drivers, the chance of surprise will
be diminished. An experienced analyst should know how to vary the weights of these key drivers (either
instinctively or by using techniques such as Multiple Scenarios Generation or Quadrant Crunching™) to
generate a set of credible alternative scenarios that capture the range of possible outcomes.

Context.
Analysts often get so engaged in collecting and sorting data that they miss the forest for the trees.
Learning to stop and reflect on the overarching context for the analysis is a key habit to learn. Most
analysis is done under considerable time pressure, and the tendency is to plunge in as soon as a task is
assigned. If the analyst does not take time to reflect on what the client is really seeking, the resulting
analysis could prove inadequate and much of the research a waste of time. Ask yourself: “What do they
need from me?” “How can I help them frame the issue?” and “Do I need to place their question in a
broader context?” Failing to do this at the outset can easily lead the analyst down blind alleys or require
reconceptualizing an entire paper after it has been drafted. Key structured techniques for developing
context include Starbursting, Mind Mapping, Outside-In Thinking, and Cluster Brainstorming.

Learning how to internalize the five habits will take a determined effort. Applying each core technique to
three to five real problems should implant the basic concepts firmly in any analyst’s mind. With every
repetition, the habits will become more ingrained and, over time, will become instinctive. Few analysts
can wish for more. If they master the habits, they will produce a superior product in less time.
NOTES
1. J. Scott Armstrong, “Combining Forecasts,” in Principles of
Forecasting, ed. J. Scott Armstrong (New York: Springer
Science+Business Media, 2001), 418–439.

2. The first three items in this list are from Craig S. Fleisher and
Babette E. Bensoussan, Strategic and Competitive Analysis:
Methods and Techniques for Analyzing Business Competition (Upper
Saddle River, NJ: Prentice Hall, 2003), 22–23.

3. Abraham Maslow, Psychology of Science (New York: Harper & Row, 1966). A similar quote is attributed to Abraham Kaplan: “Give a child a hammer and he suddenly discovers that everything he encounters needs pounding.”

4. For a fuller discussion of this topic, see Randolph H. Pherson, “Five Habits of the Master Thinker,” Journal of Strategic Security 6, no. 3 (Fall 2013), http://scholarcommons.usf.edu/jss.
Descriptions of Images and Figures
Back to Figure

The four stages in a project are getting started, finding and assessing information, building an argument, and conveying the
message. The types of exploration techniques used during the first
and the third stages are simple brainstorming, cluster brainstorming,
circleboarding, starbursting, mind maps, concept maps, and Venn
analysis. The diagnostic techniques used during the first and the
fourth stages are chronologies and timelines, and key assumptions
check, and during the second and the third stages are key
assumptions check, multiple hypothesis generation, diagnostic
reasoning, analysis of competing hypotheses, inconsistencies finder,
and deception detection. The reframing techniques used in the first
stage are outside-in thinking and structured analogies; in the second
and the third stages are classic quadrant crunching and red hat
analysis; in the third stage are premortem analysis and structured
self-critique, and in all the stages are high impact, low probability
analysis, and what if analysis. The foresight techniques used in the
first stage are key uncertainties finder and key drivers generation;
and in the second, third, and the fourth stages are multiple scenarios
generation, indicator generation, and validation and evaluation. The
decision support techniques used in the third stage are opportunities
incubator, SWOT analysis, impact matrix, and decision matrix; and in
the third and the fourth stages are force field analysis, and pros-
cons-faults-and-fixes.
CHAPTER 4 PRACTITIONER’S
GUIDE TO COLLABORATION

4.1 Social Networks and Analytic Teams [ 49 ]

4.2 Dividing the Work [ 53 ]

4.3 Value of Collaborative Processes [ 55 ]

4.4 Common Pitfalls with Small Groups [ 56 ]

4.5 Benefiting from Diversity [ 57 ]

4.6 Advocacy versus Objective Inquiry [ 58 ]

4.7 Leadership and Training [ 60 ]

The rapid growth of social networks across organizational boundaries and the increased geographic distribution of their
members are changing how analysis needs to be done within the
intelligence profession and even more so in business. Analysis in the
intelligence profession and other comparable disciplines is evolving
from being predominantly an activity done by a single analyst to a
collaborative group process. The increased use of Structured
Analytic Techniques is one of several factors spurring this transition
to more collaborative work products.

In this chapter, we identify three different groups that engage in analysis—two types of teams and a group described here as a
“social network.” We recommend that analysis be done in two
phases: (1) an initial, divergent analysis phase often conducted by a
geographically distributed social network and (2) a convergent
analysis phase done by a smaller analytic team.
The chapter provides some practical guidance on how to take
advantage of the collaborative environment while preventing or
avoiding the many well-known problems associated with small-group
processes. Many things change when the internal thought processes
of analysts are externalized in a transparent manner so that
evidence is shared early and differences of opinion are identified,
refined, and easily critiqued by others. The chapter then identifies
problems known to impair the performance of teams and small
groups and concludes with some practical measures for limiting the
occurrence of such problems.
4.1 SOCIAL NETWORKS AND ANALYTIC TEAMS
Teams and groups can be categorized in several ways. When the purpose of the group is to generate
an analytic product, it seems most useful to deal with three types: the traditional analytic team, the
special project team, and teams supported by social networks. Traditional teams are usually co-located
and focused on a specific task. Special project teams are most effective when their members are co-
located or working in a synchronous virtual world. Teams supported by social networks can operate
effectively in co-located, geographically distributed, and synchronous as well as asynchronous modes.
These three types of groups differ in leadership, frequency of face-to-face and virtual-world meetings,
breadth of analytic activity, and amount of time pressure under which they work.1

Traditional analytic team: This is the typical work team assigned to perform a specific task. It has
a leader appointed by a manager or chosen by the team, and all members of the team are
collectively accountable for the team’s product. The team may work jointly to develop the entire
product, or each team member may be responsible for a specific section of the work. Historically, in
the U.S. Intelligence Community, many teams were composed of analysts from a single agency,
and involvement of other agencies was through coordination during the latter part of the production
process rather than by collaborating from the beginning. This approach is now evolving because of
changes in policy and easier access to secure interagency communications and collaborative
software. Figure 4.1a shows how the traditional analytic team works. The core analytic team, with
participants usually working at the same office, drafts a paper and sends it to other members of the
community for comment and coordination. Ideally, the core team will alert other stakeholders in the
community of their intent to write on a specific topic; but, too often, such dialogue occurs much later,
when the author is seeking to coordinate the finished draft. In most cases, analysts must obtain
specific permissions or follow established procedures to tap the knowledge of experts outside the
office or outside the government.

Special project team: Such a team is usually formed to provide decision makers with near real-
time analytic support during a crisis or an ongoing operation. A crisis support task force or field-
deployed interagency intelligence team that supports a military operation exemplifies this type of
team. Members typically are in the same physical office space or are connected by video
communications. There is strong team leadership, often with close personal interaction among team
members. Because the team is created to deal with a specific situation, its work may have a
narrower focus than a social network or regular analytic team, and its duration may be limited.
There is usually intense time pressure, and around-the-clock operation may be required. Figure
4.1b is a diagram of a special project team.

Social networks: Experienced analysts have always had their own network of experts in their field
or related fields with whom they consult from time to time and whom they may recruit to work with
them on a specific analytic project. Social networks are critical to the analytic business. Members of
the network do the day-to-day monitoring of events, produce routine products as needed, and may
recommend the formation of a more formal analytic team to handle a specific project. This form of
group activity is now changing dramatically with the growing ease of cross-agency secure
communications and the availability of collaborative software. Social networks are expanding
exponentially across organization boundaries. The term “social network,” as used here, includes all
analysts in government or business working anywhere in the world on any issue. It can be limited to
a small group with special clearances or comprise a broad array of government, business,
nongovernmental organization (NGO), and academic experts. The network can be in the same
office, in different buildings in the same metropolitan area, or, increasingly, at multiple locations
around the globe.
Description

Figure 4.1A Traditional Analytic Team


Source: Pherson Associates, LLC, 2019.

Description

Figure 4.1B Special Project Team


Source: Pherson Associates, LLC, 2019.

The key problem that arises with social networks is the geographic distribution of their members. For
widely dispersed teams, air travel is often an unaffordable expense. Even within the Washington, D.C.,
metropolitan area, distance is a factor that limits the frequency of face-to-face meetings, particularly as
traffic congestion becomes a growing nightmare. From their study of teams in diverse organizations,
which included teams in the U.S. Intelligence Community, Richard Hackman and Anita Woolley came to
this conclusion:

Distributed teams do relatively well on innovation tasks for which ideas and solutions need to
be generated but generally underperform face-to-face teams on decision-making tasks.
Although decision-support systems can improve performance slightly, decisions made from
afar still tend to take more time, involve less exchange of information, make error detection and
correction more difficult, and can result in less participant satisfaction with the outcome than is
the case for face-to-face teams.2
In sum, distributed teams are appropriate for many, but not all, team tasks. Using them well requires
careful attention to team structure, a face-to-face launch when members initially come together, and
leadership support throughout the life of the team to keep members engaged and aligned with collective
purposes.3

Research on effective collaborative practices has shown that geographically and organizationally
distributed teams are most likely to succeed when they satisfy six key imperatives of effective
collaboration:

Mutual Trust. Know and trust one another; this usually requires that they meet face to face at least
once.

Mission Criticality. Feel a personal need to engage with the group to perform a critical task.

Mutual Benefit. Derive mutual benefits from working together.

Access and Agility. Connect with one another virtually on demand and easily add new members.

Incentives. Perceive incentives for participating in the group, such as saving time, gaining new
insights from interaction with other knowledgeable analysts, or increasing the impact of their
contribution.

Common Understanding. Share a common lexicon and understanding of the problem with agreed
lists of terms and definitions.4
4.2 DIVIDING THE WORK
Managing the geographic distribution of the social network can be addressed by dividing the analytic
task into two parts: (1) exploiting the strengths of the social network for divergent or creative analysis to
identify ideas and gather information and (2) forming a smaller analytic team that employs convergent
analysis to meld these ideas into an analytic product. When the draft is completed, it goes back for
review to all members of the social network who contributed during the first phase of the analysis, and
then back to the team to edit and produce the final paper.

Structured Analytic Techniques, web-based discussions, and other types of collaborative software
facilitate this two-part approach to analysis. The use of Exploration Techniques to conduct divergent
analysis early in the analytic process works well for a geographically distributed social network
communicating online. The products of these techniques can provide a solid foundation for the smaller
analytic team to do the subsequent convergent analysis. In other words, each type of group performs
the type of task for which it is best qualified. This process is applicable to most analytic projects. Figure
4.2 shows the functions that collaborative websites can perform.

A project leader informs a social network of an impending project and provides a tentative project
description, target audience, scope, and process to be followed. The leader also broadcasts the name
and internet address of the collaborative virtual workspace to be used and invites interested analysts
knowledgeable in that area to participate. Any analyst with access to the collaborative network is
authorized to add information and ideas to it. Any of the following techniques may come into play during
the divergent analysis phase as specified by the project leader:

Collaboration in sharing and processing data using other techniques, such as timelines, sorting,
networking, mapping, and charting, as described in chapters 5 and 6.

Some form of brainstorming, as described in chapters 6 and 9, to generate a list of driving forces,
variables, players, and so on.

Ranking or prioritizing the list, as described in chapter 5.

Description

Figure 4.2 Functions of a Collaborative Website


Source: Pherson Associates, LLC, 2019.

Putting the list into a Cross-Impact Matrix, as described in chapter 7, and then discussing and
recording in the web discussion stream the relationship, if any, between each pair of driving forces,
variables, or players in that matrix.

Developing a list of alternative explanations or outcomes (hypotheses), as described in chapter 7.

Developing a list of relevant information for consideration when evaluating generated hypotheses,
as described in chapter 7.

Doing a Key Assumptions Check, as described in chapter 7. This can take less time using a
synchronous collaborative virtual setting than when done in a face-to-face meeting; conducting
such a check can uncover the network’s thinking about key assumptions.

Most of these steps involve making lists, which can be done quite effectively in a virtual environment.
Making such input online in a chat room or asynchronous email discussion thread can be even more
productive than a face-to-face meeting. Analysts have more time to think about and write up their
thoughts. They can look at their contribution over several days and make additions or changes as new
ideas come to them.

Ideally, a project leader should oversee and guide the process. In addition to providing a sound
foundation for further analysis, this process enables the project leader to identify the best analysts for
inclusion in the smaller team that conducts the project’s second phase—making analytic judgments and
drafting the report. The project lead should select second-phase team members to maximize the
following criteria: level of expertise on the subject, level of interest in the outcome of the analysis, and
diversity of opinions and collaboration styles among members of the group. The action then moves from
the social network to a small, trusted team (preferably no larger than eight analysts) to complete the
project, perhaps using other techniques, such as Analysis of Competing Hypotheses, Red Hat Analysis,
or What If? Analysis. At this stage in the process, the use of virtual collaborative software is usually
more efficient than face-to-face meetings. Software used for exchanging ideas and revising text should
allow for privacy of deliberations and provide an audit trail for all work done.

The draft report is best done by a single person. That person can work from other team members’
inputs, but the report usually reads better if it is crafted in one voice. As noted earlier, the working draft
should be reviewed by those members of the social network who participated in the first phase of the
analysis.
4.3 VALUE OF COLLABORATIVE
PROCESSES
In our vision for the future, intelligence analysis increasingly
becomes a collaborative enterprise, with the focus shifting “away
from coordination of draft products toward regular discussion of data
and hypotheses early in the research phase.”5 This is a major
change from the traditional concept of intelligence analysis as largely
an individual activity with coordination as the final step in the
process. In this scenario, instead of reading a static, hard copy
paper, decision makers would obtain analysis of the topic of interest
by accessing a web-based knowledge database that was
continuously updated. The website might also include dropdowns
providing lists of key assumptions, critical information gaps, or
indicators; a Source Summary Statement; or a map showing how the
analytic line has shifted over time.

In a collaborative enterprise, Structured Analytic Techniques are the process by which collaboration occurs. Just as these techniques
provide structure to our individual thought processes, they can also
structure the interaction of analysts within a small team or group.
Because the thought process in these techniques is transparent,
each step in the technique prompts discussion within the team. Such
discussion can generate and evaluate substantially more divergent
information and new information than can a group that does not use
a structured process. When a team is dealing with a complex issue,
the synergy of multiple minds using structured analysis is usually
more effective than the thinking of a lone analyst. Structured Analytic
Techniques when paired with collaborative software can provide a
framework to guide interagency collaboration and coordination and
connect team members in different offices, agencies, parts of traffic-
congested metropolitan areas, and even around the world.

Team-based analysis can, of course, bring with it a new set of challenges equivalent to the cognitive biases and other pitfalls faced
by the individual analyst. However, using structured techniques that
guide interaction among members of a team or group can minimize
well-known group-process problems. A structured process helps
keep discussions from getting sidetracked and facilitates the
elicitation of alternative views from all team members.

Analysts have found that use of a structured process helps to depersonalize arguments when there are differences of opinion. This
is discussed further in the review of Adversarial Collaboration
techniques at the end of chapter 8. Moreover, today’s information
technology and social networking programs make structured
collaboration much easier than in the past.
4.4 COMMON PITFALLS WITH SMALL
GROUPS
As more analysis is done collaboratively, the quality of intelligence
products is increasingly influenced by the success or failure of small-
group processes. The various problems that afflict small-group
processes have been the subject of considerable research.6 One
might reasonably be concerned that more collaboration will create
more conflict and more interagency battles. However, as we explain
here, it turns out that the use of Structured Analytic Techniques
frequently helps analysts avoid many of the common pitfalls of the
small-group process.

Some group-process problems are obvious to anyone who has tried to arrive at decisions or judgments in a group meeting. Guidelines for
how to run meetings effectively are widely available, but many group
leaders fail to follow them.7 Key individuals are absent or late, and
participants are unprepared. Senior members or those with strong
personalities often dominate meetings, and some participants are
reluctant to speak up or to express their true beliefs. Discussion can
get stuck on several salient aspects of a problem, rather than
covering all aspects of the subject. Decisions are hard to reach and,
if reached, may not be implemented. Such problems are often
magnified when the meeting is conducted virtually, over telephones
or computers.

If you had to identify, in one word, the reason that the human
race has not achieved, and never will achieve, its full
potential, that word would be meetings.
Dave Barry, American humorist
Academic studies show that “the order in which people speak has a
profound effect on the course of a discussion. Earlier comments are
more influential, and they tend to provide a framework within which
the discussion occurs.”8 Once that framework is in place, discussion
tends to center on that framework, to the exclusion of other options.
This phenomenon is also easily observed when attending a panel
discussion at a conference. Whoever asks the first question or two in
the Q&A session often sets the agenda (or the analytic framework,
depending on the astuteness of the question) for the remainder of
the discussion.

Much research documents that the desire for consensus is an important cause of poor group decisions. Development of a group
consensus is usually perceived as success but often indicates
failure. Premature consensus is one of the more common causes of
suboptimal group performance. It leads to failure to identify or
seriously consider alternatives, failure to examine the negative
aspects of the preferred position, and failure to consider the
consequences that might follow if the preferred position is wrong.9
This phenomenon is what is commonly called Groupthink.

Academic researchers have documented other problems that are less obvious, but no less significant. Often, some reasonably
satisfactory solution is proposed on which all members can agree,
and the discussion is ended without further search to see if there
may be a better answer. Such a decision often falls short of the
optimum that might be achieved with further inquiry; it is an example
of the misapplied heuristic called Satisficing. Another phenomenon,
known as group “polarization,” leads in certain predictable
circumstances to a group decision that is more extreme than the
average group member’s view prior to the discussion. “Social
loafing” is the term used to describe the phenomenon that people
working in a group will often expend less effort than if they were
working to accomplish the same task on their own. In any of these
situations, the result is often an inferior product that suffers from a
lack of analytic rigor.
4.5 BENEFITING FROM DIVERSITY
Improvement of group performance requires an understanding of
these problems and a conscientious effort to avoid or mitigate them.
The literature on small-group performance is virtually unanimous in
emphasizing that groups make better decisions when their members
bring to the table a diverse set of ideas, opinions, and perspectives.
What Premature Closure, Groupthink, Satisficing, and the
polarization of group dynamics all have in common is a failure to
recognize assumptions, to work from a common lexicon, and to
adequately identify and consider alternative points of view.

Laboratory experiments have shown that even a single dissenting opinion, all by itself, makes a group’s decisions more nuanced and
its decision-making process more rigorous.10 “The research also
shows that benefits from dissenting opinions occur regardless of
whether or not the dissenter is correct. The dissent stimulates a
reappraisal of the situation and identification of options that
otherwise would have gone undetected.”11 To be effective, however,
dissent must be genuine—not generated artificially, a common pitfall
in applying Team A/Team B Analysis or the Devil’s Advocacy
technique.12

Small, distributed asynchronous groups are particularly good at generating and evaluating lists of assumptions, indicators, drivers,
potential explanations of current events, or potential outcomes. They
are also good for making lists of pros and cons on a given subject.
With the aid of distributed group-support software, the group can
categorize items on a list and prioritize, score, rank, scale, or vote on
them. For such tasks, a distributed, virtual asynchronous meeting
may be more productive than a traditional face-to-face meeting. That
is because analysts have more time to think about their input; they
can reflect on their contribution over several hours or days and make
additions or changes as additional ideas come to mind. If rank or
position of some group members is likely to have an undue
influence, group members can provide their input anonymously.

Briefly, then, the route to better analysis is to create small groups of analysts who are strongly encouraged by their leader to speak up
and express a wide range of ideas, opinions, and perspectives. The
use of Structured Analytic Techniques—and silent brainstorming
techniques in particular—will generally ensure that all participants
contribute to the process. These techniques prod all participants to
engage, and a more diverse set of ideas is put on the table. They
guide the dialogue among analysts as they share evidence and
alternative perspectives on the meaning and significance of the
evidence. Each step in the technique prompts relevant discussion
within the team. Such discussion can generate and evaluate
substantially more divergent information and new ideas than can a
group that does not use such a structured process.

The more heterogeneous the group, the lower the risk of Premature
Closure, Groupthink, Satisficing, and polarization. Use of a
structured technique also sets a clear step-by-step agenda for any
meeting where that technique is used. This makes it easier for a
group leader to keep a meeting on track to achieve its goal.13

The same procedures work either on classified systems or with outside experts on an unclassified network. Open-source information
has rapidly come to play a larger role in intelligence analysis than in
the past. Distributed asynchronous collaboration followed by
distributed synchronous collaboration that uses some of the basic
structured techniques is one of the best ways to tap the expertise of
a group of knowledgeable individuals. The Delphi Method, discussed
in chapter 8, is one well-known method for accomplishing the
asynchronous phase, and virtual collaboration systems are showing
increasing promise for optimizing work done in the synchronous
phase.
4.6 ADVOCACY VERSUS OBJECTIVE INQUIRY
The desired diversity of opinion is, of course, a double-edged sword, as it can become a source of
conflict that degrades group effectiveness.14 It is not easy to introduce true collaboration and teamwork
into a community with a history of organizational rivalry and mistrust. Analysts must engage in inquiry,
not advocacy, and they must be critical of ideas but not people.

In a task-oriented team environment, advocacy of a specific position can lead to emotional conflict and
reduced team effectiveness. Advocates tend to examine evidence in a biased manner, accepting at face
value information that seems to confirm their own point of view and critically evaluating any contrary
evidence. Advocacy is appropriate in a meeting of stakeholders that one is attending for the purpose of
representing a specific interest. It is also “an effective method for making decisions in a courtroom when
both sides are effectively represented, or in an election when the decision is made by a vote of the
people.”15 However, it is not an appropriate method of discourse within a team “when power is unequally
distributed among the participants, when information is unequally distributed, and when no clear rules of
engagement exist—especially about how the final decision will be made.”16 An effective resolution may
be found only through the creative synergy of alternative perspectives.

Figure 4.6 displays the differences between advocacy and the objective inquiry expected from a team
member or a colleague.17 When advocacy leads to emotional conflict, it can lower team effectiveness by
provoking hostility, distrust, cynicism, and apathy among team members. Such tensions are often
displayed when challenge techniques, such as Devil’s Advocacy and Team A/Team B Analysis, are
employed (a factor that argued strongly for dropping them from the third edition of this book). On the
other hand, objective inquiry, which often leads to cognitive conflict, can lead to new and creative
solutions to problems, especially when it occurs in an atmosphere of civility, collaboration, and common
purpose. Several effective methods for managing analytic differences are described at the end of
chapter 8.

Figure 4.6 Advocacy versus Inquiry in Small-Group Processes


Source: Pherson Associates, LLC, 2019.

We believe a team or group using Structured Analytic Techniques is less vulnerable to group-process
traps than a comparable group doing traditional analysis because the techniques move analysts away
from advocacy and toward inquiry. This idea has not yet been tested and demonstrated empirically, but
the rationale is clear. These techniques work best when an analyst is collaborating with a small group of
other analysts. Just as these techniques provide structure to our individual thought processes, they play
an even stronger role in guiding the interaction of analysts within a small team or group.18

Some techniques, such as the Key Assumptions Check, Analysis of Competing Hypotheses (ACH), and
Argument Mapping, help analysts gain a clear understanding of how and exactly why they disagree. For
example, many CIA and FBI analysts report that they use ACH to gain a better understanding of the
differences of opinion between them and other analysts or between analytic offices. The process of
creating an ACH matrix requires identification of the evidence and arguments being used and
ascertaining the basis for labeling items and arguments as either consistent or inconsistent with the
various hypotheses. Review of this matrix provides a systematic basis for identification and discussion
of differences between two or more analysts.

CIA and FBI analysts also note that jointly building an ACH matrix helps to depersonalize arguments
when differences of opinion emerge.19 One side might suggest evidence that the other had not known
about, or one side will challenge an assumption and a consensus will emerge that the assumption is
unfounded. In other words, ACH can help analysts, operators, and decision makers learn from their
differences rather than fight over them. Other structured techniques, including those discussed in the
section on Adversarial Collaboration in chapter 8, do this as well.
4.7 LEADERSHIP AND TRAINING
Considerable research on virtual teaming shows that leadership effectiveness is a major factor in the
success or failure of a virtual team.20 Although leadership usually is provided by a group’s appointed
leader, it can also emerge as a more distributed peer process. A trained facilitator can increase a team’s
effectiveness (see Figure 4.7). When face-to-face contact is limited, leaders, facilitators, and team
members must compensate by paying more attention than they might otherwise devote to the following
tasks:

Articulating a clear mission, goals, specific tasks, and procedures for evaluating results.

Defining measurable objectives with milestones and timelines for achieving them.

Establishing a common lexicon.

Identifying clear and complementary roles and responsibilities.

Building relationships with and among team members and with stakeholders.

Agreeing on team norms and expected behaviors.

Defining conflict resolution procedures.

Developing specific communication protocols and practices.21

As illustrated in Figure 4.7, the interactions among the various types of team participants—whether
analyst, leader, facilitator, or technologist—are as important as the individual roles played by each. For
example, analysts on a team will be most effective not only when they have subject-matter expertise or
knowledge that lends a new viewpoint, but also when the rewards for their participation are clearly
defined by their manager. Likewise, a facilitator’s effectiveness is greatly increased when the goals,
timeline, and general focus of the project are established with the leader in advance. When roles and
interactions are explicitly defined and functioning, the group can more easily turn to the more
challenging analytic tasks at hand.

As greater emphasis is placed on intra- and interoffice collaboration and more work is done through
computer-mediated communications, it becomes increasingly important that analysts be trained in the
knowledge, skills, and abilities required for facilitation and management of both face-to-face and virtual
meetings, with a strong emphasis on using silent brainstorming techniques and Adversarial
Collaboration during such meetings. Training is more effective when it occurs just before the skills and
knowledge must be used. Ideally, it should be fully integrated into the work process and reinforced with
mentoring. Good instructors should aspire to wear three different hats, acting in the roles of coaches,
mentors, and facilitators.

Multi-agency or intelligence community–wide training programs of this sort could provide substantial
support to interagency collaboration and the formation of virtual teams. Whenever a new interagency or
virtual team or a distributed global project team is formed, all members should have benefited from
training in understanding the pitfalls of group processes, performance expectations, standards of
conduct, differing collaboration styles, and conflict resolution procedures. Standardization of this training
across multiple organizations or agencies will accelerate the development of a shared analytic culture
and reduce the start-up time needed when launching a new interagency project or orchestrating the
work of a globally distributed group.
Description

Figure 4.7 Effective Small-Group Roles and Interactions


Source: Pherson Associates, LLC, 2019.
NOTES
1. This chapter was inspired by and draws on the research done by
the Group Brain Project at Harvard University. That project was
supported by the National Science Foundation and the CIA
Intelligence Technology Innovation Center. See in particular J.
Richard Hackman and Anita W. Woolley, “Creating and Leading
Analytic Teams,” Technical Report 5 (February 2007),
http://citeseerx.ist.psu.edu/viewdoc/download?
doi=10.1.1.456.4854&rep=rep1&type=pdf

2. Hackman and Woolley, “Creating and Leading Analytic Teams,” 8.

3. Ibid.

4. Randolph H. Pherson and Joan McIntyre, “The Essence of Collaboration: The IC Experience,” in Scientific Underpinnings of
“Collaboration” in the National Security Arena: Myths and Reality—
What Science and Experience Can Contribute to Its Success, ed.
Nancy Chesser (Washington, DC: Strategic Multi-Layer Assessment
Office, Office of the Secretary of Defense, Director of Defense
Research and Engineering/Rapid Reaction Technology Office, June
2009).

5. Vision 2015: A Globally Networked and Integrated Intelligence Enterprise (Washington, DC: Director of National Intelligence, 2008),
13.

6. For example, Paul B. Paulus and Bernard A. Nijstad, Group Creativity: Innovation through Collaboration (New York: Oxford
University Press, 2003).

7. J. Scott Armstrong, “How to Make Better Forecasts and Decisions: Avoid Face-to-Face Meetings,” Foresight 5 (Fall 2006).
8. James Surowiecki, The Wisdom of Crowds (New York: Doubleday,
2004), 184.

9. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than Individuals? The Potential Benefits of Dissent and Diversity for Group Creativity,” in Group Creativity, eds. Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University Press, 2003), 63–64.

10. Surowiecki, The Wisdom of Crowds, 183–184.

11. Nemeth and Nemeth-Brown, “Better Than Individuals?” 73.

12. Ibid., 76–78.

13. This paragraph and the previous paragraph express the authors’
professional judgment based on personal experience and anecdotal
evidence gained in discussion with other experienced analysts. As
discussed in chapter 11, there is a clear need for systematic
research on this topic and other variables related to the effectiveness
of Structured Analytic Techniques.

14. Frances J. Milliken, Caroline A. Bartel, and Terri R. Kurtzberg, “Diversity and Creativity in Work Groups,” in Group Creativity, eds.
Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University
Press, 2003), 33.

15. Martha Lagace, “Four Questions for David Garvin and Michael
Roberto,” Working Knowledge: Business Research for Business
Leaders (Harvard Business School weekly newsletter), October 15,
2001, http://hbswk.hbs.edu/item/3568.html

16. Ibid.

17. The table is from David A. Garvin and Michael A. Roberto, “What
You Don’t Know about Making Decisions,” Working Knowledge:
Business Research for Business Leaders (Harvard Business School
weekly newsletter), October 15, 2001,
http://hbswk.hbs.edu/item/2544.html.

18. This paragraph expresses our professional judgment based on personal experience and anecdotal evidence gained in discussion
with other experienced analysts. As we discuss in chapter 11, there
is a clear need for systematic research on this topic and other
variables related to the effectiveness of Structured Analytic
Techniques.

19. This information was provided by two senior educators in the U.S. Intelligence Community.

20. Jonathan N. Cummings, “Leading Groups from a Distance: How to Mitigate Consequences of Geographic Dispersion,” in Leadership
at a Distance: Research in Technologically-Supported Work, ed.
Susan Weisband (New York: Routledge, 2007).

21. Sage Freechild, “Team Building and Team Performance Management.” Originally online at www.phoenixrisingcoaching.com.
This article is no longer available online.
Descriptions of Images and Figures
Back to Figure

The broader social network, such as academia, nongovernmental organizations, and business, includes the governmental social network,
such as intelligence communities and government agencies. The
governmental social network includes the core analytic team. The
barrier separating each network signifies established rules of
engagement for cross-boundary interaction.

Back to Figure

The broader social network, such as academia, nongovernmental organizations, and business, includes the intelligence community and
governmental organizations. They are signals intelligence, open-
source intelligence, human intelligence, geospatial intelligence, and
measurement and signals intelligence. The intelligence community
includes the special project team, and has direct reach back with the
project team and the broader social network. The barrier between
the networks signifies established rules of engagement for cross-
boundary interaction.

Back to Figure

Websites can aid analytic collaboration by summarizing facts, capturing the analytic process, and tracking and validating
judgments, so that analysts can better share, understand, and
challenge judgments regardless of physical geography, time
elapsed, or analyst turnover. Summarizing facts: What is known;
How it is known; Confidence in it; Any factual changes and why.
Capturing the analytic process: Key assumptions; Conceptual
frameworks; Relevant models; Indicators; Scope; Known unknowns.
Tracking and validating judgments: Virtual library; Internal analytic
integrity; External transparency.

Back to Figure
Leader articulates goals, establishes team, and enforces
accountability. Facilitator identifies appropriate techniques, and leads
structured analytic sessions. Technologist builds and optimizes tools.
Analysts are subject-matter experts. Leader and facilitator agree on
project timeline, focus, and applicability of small-group process.
Leader and analysts clearly articulate individual performance
expectations, evaluation metrics, and rewards for constructive
participation. Analysts, facilitator, and technologists build and
maintain collaborative analytic workspace, identify technology needs.
All participants work from same key analytic question; establish
agreed team norms and expectations for communications, dispute
resolution, and allocation of individual responsibilities.
CHAPTER 5 GETTING ORGANIZED

5.1 Sorting [ 69 ]

5.2 Ranking, Scoring, and Prioritizing [ 71 ]

5.2.1 The Method: Ranked Voting [ 72 ]

5.2.2 The Method: Paired Comparison [ 73 ]

5.2.3 The Method: Weighted Ranking [ 74 ]

5.3 Matrices [ 76 ]

5.4 Process Maps [ 80 ]

5.5 Gantt Charts [ 82 ]

A significant challenge analysts face is managing the large volume of data that must be processed and evaluated to develop an analytic
line and generate a well-sourced, high-quality product. Most people
can keep only a limited amount of information in the forefront of their
minds. Imagine that a person must make a difficult decision, such as
what car to buy or where to go on vacation. A good first step is to
make a list of all the relevant information and arguments for or
against taking an action. When it comes time to make the decision,
however, the list may be so long that weighing all the pros against
the cons at the same time becomes a daunting task. As a result, the
person is likely to vacillate, focusing first on the pros and then on the
cons, favoring first one decision and then the other. Consider how much more difficult it would be for an analyst to think through a
complex intelligence problem with large volumes of data and many
interacting variables. The limitations of human thought make it
difficult, if not impossible, to do error-free analysis without the
support of some external representation of the parts of the problem
at hand.

Two common approaches for coping with this limitation of our working memory are as follows:

Decomposition, which is breaking down the problem or issue into its component parts so that each part can be considered separately

Visualization, which is capturing all the parts of the problem or issue in some organized manner, often a visual representation, that is designed to facilitate understanding how the various parts interrelate

All Structured Analytic Techniques employ these approaches, as the externalization of one’s thinking is part of the definition of structured
analysis. For some of the simpler techniques, however,
decomposing an issue to present the data in an organized manner is
the principal contribution they make to producing more effective
analysis.

The use of a simple checklist can be extremely productive in getting organized at the start of a project.1 In Critical Thinking for Strategic
Intelligence, Pherson and Pherson discuss how the use of checklists
and critical-thinking mnemonics such as the Knowing Your Client
Checklist, Getting Started Checklist, AIMS (Audience, Issue,
Message, and Storyline), and Issue Redefinition can aid the
process.2 These techniques can be combined to help analysts
conceptualize and launch a new project. Critical time can be saved if
an analyst can start off in the right direction and avoid having to
change course later.

In this chapter, we describe five basic approaches to organizing your data and getting started with your analysis.
Analysis is breaking information down into its component
parts. Anything that has parts also has a structure that relates
these parts to each other. One of the first steps in doing
analysis is to determine an appropriate structure for the
analytic problem, so that one can then identify the various
parts and begin assembling information on them. Because
there are many different kinds of analytic problems, there are
also many different ways to structure analysis.
—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999)
OVERVIEW OF TECHNIQUES
Sorting is a basic technique for organizing data in a manner that
often yields new insights. It is particularly effective during initial data
gathering and hypothesis generation.

Ranking, Scoring, and Prioritizing techniques are used to organize items on any list according to the item’s importance,
desirability, priority, value, or probability of happening.

Matrices are generic analytic tools for sorting and organizing data in
a manner that facilitates comparison and analysis. They are used to
analyze the relationships among any two sets of variables or the
interrelationships among a single set of variables. A matrix consists
of a grid with as many cells as needed for the problem under study.
Matrices are used so frequently to analyze a problem that we have
included several distinct techniques in this book.

Process Maps are used to identify and diagram each step in a complex process. Many different versions exist, including Event Flow
Charts, Activity Flow Charts, and Value Stream Maps. Analysts can
use the techniques to track the progress of plans or projects
undertaken by a business competitor, foreign government, a criminal
or terrorist group, or any other non-state actor.

Gantt Charts are a specific type of Process Map that uses a matrix
to chart the progression of a multifaceted process over a specific
time period. Process Maps and Gantt Charts were developed
primarily for use in business and the military, but they are also useful
to intelligence analysts.

Other comparable techniques for organizing and presenting data include various types of graphs, diagrams, and trees. We did not
discuss them in this book because they are well covered in other
works, and it was necessary to draw a line on the number of
techniques included here.
5.1 SORTING
Sorting is a basic technique for organizing a large body of data in a
manner that often yields new insights.
When to Use It
Sorting is effective when information elements can be broken out
into categories or subcategories for comparison with one another,
most often by using a computer program, such as a spreadsheet.
This technique is particularly useful during the initial data-gathering
and hypothesis-generation phases of analysis.
Value Added
Sorting large amounts of data into relevant categories that are
compared with one another can provide analysts with insights into
trends, similarities, differences, or abnormalities that otherwise would
go unnoticed. When you are dealing with transactional data in
particular (for example, geospatial information or transfers of goods
or money), it is helpful—if not essential—to sort the data first.
The Method
Follow these steps:

Review the categories of information to determine which category or combination of categories might show trends or abnormalities that would provide insight into the problem you are studying. Use a structured brainstorming technique, such as Cluster Brainstorming or Outside-In Thinking, to ensure you have generated a comprehensive list of categories.

Place the data into a spreadsheet or a database using as many fields (columns) as necessary to differentiate among the data types (dates, times, locations, people, activities, amounts, etc.).

List each of the facts, pieces of information, or hypotheses involved in the problem that are relevant to your sorting schema. (Use paper, whiteboard, movable self-stick notes, or other means for this.)

Review the listed facts, information, or hypotheses in the database or spreadsheet to identify key fields that may allow you to uncover possible patterns or groupings. Those patterns or groupings then illustrate the schema categories that deserve the most attention. For example, if an examination of terrorist activity shows that most attacks occur in hotels and restaurants but that the time of day of the attack varies, “Location” and “Time of Day” become key categories.

Choose a category and sort the data within that category. Look for any insights, trends, or oddities (see the sketch after this list). Good analysts notice trends; great analysts notice anomalies.

Review (or ask others to review) the sorted facts, information, or hypotheses to see if there are alternative ways to sort them. List any alternative sorting schema for your problem. One of the most useful applications for this technique is to sort according to multiple schemas and examine results for correlations between data and categories. But remember that correlation is not the same as causation.
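To make these steps concrete, here is a minimal sketch, not drawn from the book, of how the spreadsheet-style sorting and grouping might be done in Python with the pandas library. The miniature data set and the column names (date, time_of_day, location, casualties) are hypothetical stand-ins for the terrorist-activity example above.

# Minimal sketch: sort and group a small incident data set to surface
# patterns such as the "Location" and "Time of Day" example above.
import pandas as pd

incidents = pd.DataFrame({
    "date": ["2019-03-02", "2019-03-05", "2019-04-11", "2019-04-12"],
    "time_of_day": ["evening", "evening", "morning", "evening"],
    "location": ["hotel", "restaurant", "hotel", "hotel"],
    "casualties": [3, 1, 0, 2],
})

# Choose a category and sort the data within it, scanning for trends or oddities.
by_location = incidents.sort_values(["location", "date"])
print(by_location)

# Group by candidate key fields to see which combinations dominate.
counts = incidents.groupby(["location", "time_of_day"]).size()
print(counts.sort_values(ascending=False))

Sorting according to alternative schemas is then simply a matter of repeating the sort or grouping with different column combinations; any correlations that emerge are clues, not proof of causation.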
Examples
Example 1: Are a foreign adversary’s military leaders pro-United
States, anti-United States, or neutral on their attitudes toward U.S.
policy in the Middle East? To answer this question, analysts sort the
leaders by various factors that might give insight into the issue, such
as birthplace, native language, religion, level of professional
education, foreign military or civilian/university exchange training
(where/when), field/command assignments by parent service,
political influences in life, and political decisions made. Then
analysts review the information to see if any parallels exist among
the categories.

Example 2: Analysts review the data from cell-phone communications among five conspirators to determine the frequency
of calls, patterns that show who is calling whom, changes in patterns
of frequency of calls prior to a planned activity, dates and times of
calls, and subjects discussed.

Example 3: Analysts are reviewing all information related to an adversary’s weapons of mass destruction (WMD) program. Electronic intelligence reporting shows more than 300,000 emitter transmissions were collected over the past year alone. The analysts’ sorting of the data by type of emitter, dates of emission, and location shows varying increases and decreases of emitter activity with some minor trends identifiable. The analysts filter out all collections except those related to air defense. The remaining information is sorted by type of air defense system, location, and dates of activity.

Of note is a period when there is an unexpectedly large increase of activity in the air defense surveillance and early warning systems.
The analysts review relevant external events and find that a major
opposition movement outside the country held a news conference
where it detailed the adversary’s WMD activities, including locations
of the activity within the country. The air defense emitters for all
suspected locations of WMD activity, including several not included
in the press conference, increased to a war level of surveillance
within four hours of the press conference. The analysts review all air defense activity locations that showed the increase assumed to be related to the press conference and the WMD program and find two locations showing increased activity but not previously listed as WMD-related. These new locations are added to collection
planning to determine what relationship, if any, they had to the WMD
program.
Potential Pitfalls
Improper sorting can hide valuable insights as easily as it can
illuminate them. Standardizing the data being sorted is imperative.
Working with an analyst who has experience in sorting can help you
avoid this pitfall.
Origins of This Technique
Sorting is a long-established procedure for organizing data. The
description in this chapter is from Military Intelligence training
materials.
5.2 RANKING, SCORING, AND
PRIORITIZING
This section provides guidance for using three different ranking
techniques—Ranked Voting, Paired Comparison, and Weighted
Ranking. Combining an Exploration Technique, such as Cluster
Brainstorming, with a ranking technique is an effective way for an
analyst to start a new project. Brainstorming techniques are helpful
in developing lists of driving forces, variables for consideration,
indicators, possible scenarios, important players, historical
precedents, sources of information, questions to be answered, and
so forth. Such lists are even more useful once they are ranked,
scored, or prioritized to determine which items are most important,
most useful, most likely, or should be at the top of the priority list.
When to Use It
A ranking technique is often the next step following the use of a
structured brainstorming technique, such as Cluster Brainstorming,
Mind Maps, or Nominal Group Technique (see chapter 6). A ranking
technique is appropriate whenever there are too many items to rank
easily just by looking at the list, the ranking has significant
consequences and must be done as accurately as possible, or it is
useful to aggregate the opinions of a group of analysts.
Value Added
A Getting Organized technique is often used to develop lists of
critical events, key factors, variables to be considered, or important
players. By ranking, scoring, and prioritizing such lists, analysts can
determine which items are most important, most useful, most
probable, and require immediate action. Combining a brainstorming
technique with a ranking technique is an excellent way for an analyst
to provide a foundation for collaboration within or external to an
office.
The Method
Of the three methods discussed here, Ranked Voting is the easiest
and quickest to use, and it is often good enough. However, it is not
as accurate after you get past the top two or three ranked items,
because the group usually has not thought as much (and may not
care as much) about the lower-ranked items. Ranked Voting also
provides less information than either Paired Comparison or
Weighted Ranking. Ranked Voting shows only that one item is
ranked higher or lower than another; it does not show how much
higher or lower. Paired Comparison does provide this information;
Weighted Ranking provides even more: it specifies the criteria used in making the ranking, assigns weights to those criteria, and rates each item in the list against each criterion.
5.2.1 The Method: Ranked Voting
In a Ranked Voting exercise, members of the group individually rank
each item in order according to the member’s preference or what the
member regards as the item’s importance. Depending upon the
number of items or the specific analytic needs, the group can decide
to rank all the items or only the top three to five. The group leader or
facilitator passes out simple ballots listing all the items to be voted
on. Each member votes his or her order of preference. If a member
views two items as being of equal preference, the votes can be split
between them. For example, if two items are tied for second place,
each receives a 2.5 ranking (by taking the average of 2 and 3). Any
items that are not voted on fall to the bottom of the ranking list. After
members of the group have voted, the votes are added up. The item
with the lowest number is ranked number 1.
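
For readers who want to automate the tally, the counting rules just described are easy to script. The following Python sketch is a minimal, hypothetical illustration (the ballot data and item labels are invented); it assumes tied items already carry averaged ranks and treats unranked items as falling to the bottom of that member's ballot.

    from collections import defaultdict

    def aggregate_ranked_votes(ballots, items):
        """Tally Ranked Voting ballots.

        Each ballot maps an item to the rank that member gave it
        (1 = most preferred); tied items carry the averaged rank,
        e.g., 2.5 for a tie between second and third place.
        """
        totals = defaultdict(float)
        for ballot in ballots:
            # Items not voted on fall to the bottom of that ballot:
            # give them a rank one worse than the worst rank cast.
            unranked = max(ballot.values(), default=0) + 1
            for item in items:
                totals[item] += ballot.get(item, unranked)
        # The item with the lowest total is ranked number 1.
        return sorted(totals.items(), key=lambda pair: pair[1])

    # Hypothetical ballots from three analysts ranking four candidate drivers.
    ballots = [
        {"A": 1, "B": 2.5, "C": 2.5, "D": 4},
        {"B": 1, "A": 2, "D": 3},             # C left unranked
        {"A": 1, "D": 2, "B": 3, "C": 4},
    ]
    print(aggregate_ranked_votes(ballots, ["A", "B", "C", "D"]))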
5.2.2 The Method: Paired Comparison
Paired Comparison compares each item against every other item, and the analyst can assign a score to
show how much more important, preferable, or probable one item is than the others. This technique
provides more than a simple ranking, as it shows the degree of importance or preference for each item.
The list of items can then be ordered along a dimension, such as importance or preference, using an
interval-type scale.

Follow these steps to use the technique:

List the items to be compared. Assign a letter to each item.

Create a table with the letters across the top and down the left side, as in Figure 5.2a. The results
of the comparison of each pair of items are marked in the cells of this table. Note the diagonal line
of darker-colored cells. These cells are not used, as each item is never compared with itself. The
cells below this diagonal line are not used because they would duplicate a comparison in the cells
above the diagonal line. If you are working in a group, distribute a blank copy of this table to each
participant.

Looking at the cells above the diagonal row of darkened cells, compare the item in the row with the
one in the column. For each cell, decide which of the two items is more important (or preferable or
probable). Write the letter of the winner of this comparison in the cell, and score the degree of
difference on a scale from 0 (no difference) to 3 (major difference), as in Figure 5.2a.


Figure 5.2A Paired Comparison Matrix

Consolidate the results by adding up the total of all the values for each letter and put this number in
the “Score” column. For example, in Figure 5.2a, item B has one 3 in the first row, plus one 2 and
two 1s in the second row, for a score of 7.

Finally, it may be desirable to convert these values into a percentage of the total score. To do this,
divide the score for each individual item by the sum of all the scores (20 in the example). Item B,
with a score of 7, is ranked most important or most preferred. Item B received a score of 35 percent
(7 divided by 20), as compared with 25 percent for item D and only 5 percent each for items C and
E, which each received a score of only 1. This example shows how Paired Comparison captures the
degree of difference between each ranking.

To aggregate rankings received from a group of analysts, simply add the individual scores for each
analyst.
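
The consolidation arithmetic can also be scripted when the list of items is long. The Python sketch below is a minimal, hypothetical illustration: the judge function stands in for an analyst's pairwise choices, and the sample judgments are set to reproduce the scores in Figure 5.2a (item B scores 7, or 35 percent of the total of 20).

    from itertools import combinations

    def paired_comparison(items, judge):
        """Score items by Paired Comparison.

        judge(a, b) returns (winner, degree) for one pair, where degree
        runs from 0 (no difference) to 3 (major difference).  Each item's
        score is the sum of the degrees it won, also reported as a
        percentage of the total of all scores.
        """
        scores = {item: 0 for item in items}
        for a, b in combinations(items, 2):      # each pair is judged once
            winner, degree = judge(a, b)
            scores[winner] += degree
        total = sum(scores.values()) or 1
        return {item: (s, round(100 * s / total)) for item, s in scores.items()}

    # Hypothetical judgments mirroring the cell entries of Figure 5.2a.
    sample = {("A","B"): ("B",3), ("A","C"): ("C",1), ("A","D"): ("A",1),
              ("A","E"): ("A",1), ("A","F"): ("F",2), ("B","C"): ("B",1),
              ("B","D"): ("D",1), ("B","E"): ("B",2), ("B","F"): ("B",1),
              ("C","D"): ("D",1), ("C","E"): ("E",1), ("C","F"): ("F",1),
              ("D","E"): ("D",2), ("D","F"): ("D",1), ("E","F"): ("F",1)}
    print(paired_comparison("ABCDEF", lambda a, b: sample[(a, b)]))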
5.2.3 The Method: Weighted Ranking
In Weighted Ranking, a specified set of criteria is used to rank items. The analyst creates a table with
items to be ranked listed across the top row and criteria for ranking these items listed down the far-left
column (see Figure 5.2b). There are a variety of valid ways to conduct this ranking. We have chosen to
present a simple version of Weighted Ranking here because analysts usually are making subjective
judgments rather than dealing with hard numbers. As you read the following steps, refer to Figure 5.2b:

Create a table with one column for each item. At the head of each column, write the name of an
item or assign it a letter to save space.

Add two more blank columns on the left side of this table. Count the number of selection criteria,
and then adjust the table so that it has that number of rows plus three more, one at the top to list
the items and two at the bottom to show the raw scores and percentages for each item. In the first
column on the left side, starting with the second row, write in all the selection criteria down the left
side of the table. There is some value in listing the criteria roughly in order of importance, but that is
not critical. Leave the bottom two rows blank for the scores and percentages.

Now work down the second column, assigning weights to the selection criteria based on their
relative importance for judging the ranking of the items. Depending on how many criteria are listed,
take either 10 points or 100 points and divide these points among the selection criteria based on
what the analysts believe to be their relative importance in ranking the items. In other words, decide
what percentage of the decision should be based on each of these criteria. Be sure that the weights
for all the selection criteria combined add up to either 10 or 100, whichever is selected. The criteria
should be phrased in such a way that a higher weight is more desirable. For example, a proper
phrase would be “ability to adapt to changing conditions,” and an improper phrase would be
“sensitivity to changing conditions,” which does not indicate whether the entity reacts well or poorly.

Work across the rows to write the criterion weight in the left side of each cell.

Next, work across the matrix one row (selection criterion) at a time to evaluate the relative ability of
each of the items to satisfy that selection criterion. Use a ten-point rating scale, where 1 = low and 10
= high, to rate each item separately. (Do not spread the ten points proportionately across all the
items as was done to assign weights to the criteria.) Write this rating number after the criterion
weight in the cell for each item.

Again, work across the matrix one row at a time to multiply the criterion weight by the item rating for
that criterion, and enter this number for each cell, as shown in Figure 5.2b.
Figure 5.2B Weighted Ranking Matrix

Now add the columns for all the items. The result will be a ranking of the items from highest to
lowest score. To gain a better understanding of the relative ranking of one item as compared with
another, convert these raw scores to percentages. To do this, first add together all the scores in the
“Totals” row to get a total number. Then divide the score for each item by this total score to get a
percentage ranking for each item. All the percentages together must add up to 100 percent. In
Figure 5.2b, it is apparent that item B has the number one ranking (with 20.3 percent), while item E
has the lowest (with 13.2 percent).
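
If the matrix grows large, the weight-times-rating arithmetic can be scripted as well. The Python sketch below is a minimal, hypothetical illustration of the calculation described above; the criteria, weights, and ratings are invented and are not the values shown in Figure 5.2b.

    def weighted_ranking(items, criteria, weights, ratings):
        """Rank items by Weighted Ranking.

        weights maps each criterion to its weight (the weights sum to 10
        or 100); ratings maps (criterion, item) to a 1-10 rating.  Returns
        each item's raw score and its percentage of the total of all scores.
        """
        raw = {item: sum(weights[c] * ratings[(c, item)] for c in criteria)
               for item in items}
        total = sum(raw.values())
        return {item: (score, round(100 * score / total, 1))
                for item, score in raw.items()}

    # Hypothetical criteria, weights (summing to 100), and ratings.
    criteria = ["cost", "feasibility", "impact"]
    weights = {"cost": 20, "feasibility": 30, "impact": 50}
    ratings = {("cost", "A"): 6, ("cost", "B"): 8,
               ("feasibility", "A"): 7, ("feasibility", "B"): 5,
               ("impact", "A"): 4, ("impact", "B"): 9}
    print(weighted_ranking(["A", "B"], criteria, weights, ratings))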
Potential Pitfalls
When any of these techniques is used to aggregate the opinions of a
group of analysts, the rankings provided by each group member are
totaled and averaged. This means that the opinions of the outliers,
whose views are quite different from the others, are blended into the
average. As a result, the ranking does not show the range of
different opinions that might be present in a group. In some cases,
the identification of outliers with a minority opinion can be of great
value. Further research might show that the outliers are correct.
Relationship to Other Techniques
Some form of ranking, scoring, or prioritizing is commonly used with
Cluster Brainstorming, Mind Mapping, Nominal Group Technique,
and the Decision Matrix, all of which generate ideas that should be
evaluated or prioritized. Applications of the Delphi Method may also
generate ideas from outside experts that need to be evaluated or
prioritized.
Origins of This Technique
Ranking, Scoring, and Prioritizing are common analytic processes in
many fields. All three forms of ranking described here are based
largely on internet sources. For Ranked Voting, we referred to
http://en.wikipedia.org/wiki/Voting_system; for Paired Comparison,
http://www.mindtools.com; and for Weighted Ranking,
www.ifm.eng.cam.ac.uk/dstools/choosing/criter.html.
5.3 MATRICES
A matrix is an analytic tool for sorting and organizing data in a
manner that facilitates comparison and analysis. It consists of a
simple grid with as many cells as needed for the problem being
analyzed.

Some analytic topics or problems that use a matrix occur so frequently that they are handled in this book as separate techniques. For example:

Gantt Charts (this chapter) use a matrix to analyze the relationships between tasks to be accomplished and the time period for those tasks.

Paired Comparison (this chapter) uses a matrix to record the relationships between pairs of items.

Weighted Ranking (this chapter) uses a matrix to analyze the relationships between specific items and criteria for judging those items.

Analysis of Competing Hypotheses and the Inconsistencies Finder™ (chapter 7) use a matrix to analyze the relationships between relevant information and hypotheses.

Cross-Impact Matrix (chapter 7) uses a matrix to analyze the interactions among variables or driving forces that will determine an outcome. Such a Cross-Impact Matrix is part of Complexity Manager (chapter 10).

Indicators Evaluation (chapter 9) uses a matrix to evaluate the uniqueness or diagnostic power of indicators when compared across scenarios.
Opportunities Incubator™ (chapter 10) uses a matrix to inform
decision makers of which actions would be most effective in
preventing a negative scenario from occurring or fostering the
emergence of a good scenario.

Decision Matrix (chapter 10) uses a matrix to analyze the relationships between goals or preferences and decision options.

The Impact Matrix (chapter 10) uses a matrix to chart the impact
a new policy or decision is likely to have on key players and how
best to manage that impact.
When to Use It
Matrices are used to analyze the relationships between any two sets
of variables or the interrelationships between a single set of
variables. Among other things, matrices enable analysts to

compare one type of information with another,

compare pieces of information of the same type,

categorize information by type,

identify patterns in the information,

separate elements of a problem.

A matrix is such an easy and flexible tool to use that it should be one
of the first tools analysts think of when dealing with a large body of
data. One limiting factor in the use of matrices is that information
must be organized along only two dimensions.
Value Added
Matrices provide a visual representation of a complex set of data. By
presenting information visually, a matrix enables analysts to deal
effectively with more data than they could manage by juggling
various pieces of information in their head. The analytic problem is
broken down into component parts so that each part (that is, each
cell in the matrix) can be analyzed separately, while ideally
maintaining the context of the problem in its entirety.

A matrix can also help establish an analytic framework for understanding a problem, suggest a more rigorous structure for explaining a phenomenon, or generate a more comprehensive set of alternatives.
The Method
A matrix can be used in many different ways and for many different purposes. What matrices have in
common is that each has a grid with columns and rows for entering two sets of data for comparison.
Organize the category headings for each set of data in some logical sequence before entering the
headings for one set of data in the top row and the headings for the other set in the far-left column. Then
enter the data in the appropriate cells.
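
Analysts who like to rough out a matrix before polishing it in a graphics package can do so with a few lines of script. The Python sketch below is a minimal, hypothetical illustration: it builds an empty grid from two sets of category headings and fills two cells. The headings echo the two dimensions of Figure 5.3, discussed next, and the two cell entries are examples taken from that figure; everything else is left blank.

    mechanisms = ["Military force", "Policing/monitoring", "Collaboration"]
    threats = ["Nation-states", "Subnational actors", "Organizations",
               "Informal networks", "Systemic challenges"]

    # One cell for every (threat, mechanism) pair.
    matrix = {t: {m: "" for m in mechanisms} for t in threats}
    matrix["Nation-states"]["Military force"] = "Iraq 1990, 2003"
    matrix["Systemic challenges"]["Collaboration"] = "H1N1 monitoring 2009"

    # Print the grid with row and column headings.
    print("".join(f"{m:>22}" for m in [""] + mechanisms))
    for t in threats:
        print(f"{t:>22}" + "".join(f"{matrix[t][m]:>22}" for m in mechanisms))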

Figure 5.3, “Rethinking the Concept of National Security: A New Ecology,” is an example of a complex
matrix that not only organizes data but also tells its own analytic story.3 It shows how the concept of
national security has evolved over recent decades and suggests that the way we define national
security will continue to expand in the coming years. In this matrix, threats to national security are
arrayed along the vertical axis, beginning at the top with the most traditional actor, the nation-state. At
the bottom end of the spectrum are systemic threats, such as infectious diseases or threats that “have
no face.” The top row of the matrix presents the three primary mechanisms for dealing with threats:
military force, policing and monitoring, and collaboration. The cells in the matrix provide historic
examples of how the three different mechanisms of engagement have dealt with the five different
sources of threats. The top-left cell (dark color) presents the classic case of using military force to
resolve nation-state differences. In contrast, at the bottom-right corner, various actors deal with systemic
threats, such as the outbreak of a pandemic, by collaborating with one another.


Figure 5.3 Rethinking the Concept of National Security: A New Ecology


Source: Pherson Associates, LLC, 2019.

Classic definitions of national security focus on the potential for conflicts involving nation-states. The
top-left cell lists three military operations. In recent decades, the threat has expanded to include threats
posed by subnational actors as well as terrorist and other criminal organizations. Similarly, the use of
peacekeeping and international policing has become more common than in the past. This shift to a
broader use of the term “national security” is represented by the other five cells (medium color) in the
top left of the matrix. The remaining cells (light color) to the right and at the bottom of the matrix
represent how the concept of national security is continuing to expand as the world becomes
increasingly globalized.
By using a matrix to present the expanding concept of national security in this way, one sees that
patterns relating to how key players collect intelligence and share information vary along the two primary
dimensions. In the upper left of Figure 5.3, the practice of nation-states is to seek intelligence on their
adversaries, classify it, and protect it. As one moves diagonally across the matrix to the lower right, this
practice reverses. In the lower right of this figure, information is usually available from unclassified
sources and the imperative is to disseminate it to everyone as soon as possible. This dynamic can
create serious tensions at the midpoint, for example, when those working in the homeland security
arena must find ways to share sensitive national security information with state and local law
enforcement officers.
Origins of This Technique
The description of this basic and widely used technique is from
Pherson Associates, LLC, training materials. The national security
matrix was developed by Randolph H. Pherson.
5.4 PROCESS MAPS
Process Mapping is an umbrella term that covers a variety of
procedures for identifying and depicting the steps in a complex
procedure. It includes Flow Charts of various types (Activity Flow
Charts, Commodity Flow Charts, Causal Flow Charts), Relationship
Maps, and Value Stream Maps commonly used to assess and plan
improvements for business and industrial processes. Process Maps
or Flow Charts are diagrams that analysts use to track the step-by-
step movement of events or commodities to identify key pathways
and relationships involved in a complex system or activity. They
usually contain various types of boxes connected by arrows that illustrate the direction of flow through the process.
When to Use It
Flow Charts can track the flow of money and other commodities, the
movement of goods through production and delivery, the sequencing
of events, or a decision-making process. They help analysts
document, study, plan, and communicate complex processes with
simple, easy-to-understand diagrams. In law enforcement and
intelligence, they are used widely to map criminal and terrorist
activity.
Value Added
Flow Charts can identify critical pathways and choke points and
discover unknown relationships. In law enforcement, they help
analysts probe the meaning of various activities, identify alternative
ways a task can be accomplished, better understand how events are
related to one another, and focus attention on the most critical
elements of the process under study or investigation.
The Method
The process is straightforward.

Compile a list of specific activities, steps, or events related to the event under study.

Determine which must precede others, which must follow others, and which can operate independently.

Draw a chart using circles or boxes to show activities and arrows to illustrate sequencing.

Different shapes of boxes can designate different types of activities. For example, activities are often represented by a rectangular box, decisions by a diamond, and input/output by a parallelogram.4

The thickness of the arrows can illustrate the volume of information moving from one activity to another.

In a Process Map, the steps in the process are diagrammed sequentially with various symbols. Diagrams can be created with
readily available software such as Microsoft Visio. When
constructing or evaluating a Flow Chart, ask yourself whether it
makes complex issues easier to understand, is valid and
appropriately sourced, and provides titles and labels that are
accurate and clear.
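
A process can also be captured as simple data and rendered with an off-the-shelf graphing tool rather than drawn by hand. The Python sketch below is one hypothetical way to do this: it emits Graphviz DOT text that any Graphviz viewer will turn into a Flow Chart. The step names are loosely patterned on the drug-money example in Figure 5.4 below and carry no other significance.

    def to_dot(steps, edges):
        """Emit Graphviz DOT text for a simple Flow Chart.

        steps maps a step name to a node shape ("box" for activities,
        "diamond" for decisions, "parallelogram" for input/output);
        edges is a list of (from_step, to_step) pairs giving flow direction.
        """
        lines = ["digraph process {", "  rankdir=LR;"]
        for name, shape in steps.items():
            lines.append(f'  "{name}" [shape={shape}];')
        for src, dst in edges:
            lines.append(f'  "{src}" -> "{dst}";')
        lines.append("}")
        return "\n".join(lines)

    # Hypothetical commodity flow, loosely based on Figure 5.4.
    steps = {"Drug network leader": "box", "Drug mule": "box",
             "Offshore bank": "box", "Foreign corporation": "box"}
    edges = [("Drug network leader", "Drug mule"),
             ("Drug mule", "Offshore bank"),
             ("Offshore bank", "Foreign corporation"),
             ("Foreign corporation", "Drug network leader")]
    print(to_dot(steps, edges))   # paste the output into any Graphviz viewer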
Example
Figure 5.4 shows the process that drug traffickers use to launder their profits from the drugs they
distribute. The drug network leader provides the $1 million he receives from users to a drug mule who
deposits it in an offshore bank account. The money is deposited in the bank account of a foreign
corporation, which takes its 15 percent cut for laundering the money. The remaining $850,000 then is
transmitted back to the drug network leader.

Figure 5.4 Commodity Flow Chart of Laundering Drug Money


Origins of This Technique
The first-known “flow process charts” were presented to the
American Society of Mechanical Engineers in 1921.5 They quickly
became a tool of industrial engineering and were adopted by
computer programmers beginning in the mid-twentieth century to describe
computer algorithms. Information on how to use various types of
Process Mapping is available in Richard Damelio, The Basics of
Process Mapping (Florence, KY: Productivity Press, 2006). For user-
friendly software to create your own Process Map, visit
https://www.lucidchart.com/pages/process-mapping.
5.5 GANTT CHARTS
A Gantt Chart is a specific type of Process Map that was developed
to facilitate the planning, scheduling, and management of complex
industrial projects.
When to Use It
Intelligence analysts use Gantt Charts to track, understand, and
monitor the progress of activities of intelligence interest undertaken
by a foreign government, a criminal or terrorist group, or any other
non-state actor. For example, a Gantt Chart can be used for the
following:

Monitor progress in developing a new weapons system, preparations for a major military action, or the execution of any other major plan that involves a sequence of observable steps.

Identify and describe the modus operandi of a criminal or terrorist group, including the preparatory steps that such a group typically takes prior to a major action.

Describe and monitor the process of radicalization by which a normal youth may be transformed over time into a terrorist.

Trace how information or a report flows through an intelligence community’s collection and processing system until it arrives at the desk of the analyst—and how it might have been corrupted by a bad translation or an overgeneralization when the actual event or initial statement was inaccurately summarized along the way.
Value Added
The process of constructing a Gantt Chart helps analysts think
clearly about what someone else needs to do to complete a complex
project. When an adversary’s plan or process is understood well
enough to be diagrammed or charted, analysts can then answer
questions such as the following: What are they doing? How far along
are they? What do they still need to do? What resources will they
need to do it? How much time do we have before they have this
capability? Is there any vulnerable point in this process for stopping
or slowing activity?

The Gantt Chart is a visual aid for communicating this information to the client. If there is sufficient information, the analyst’s understanding of the process will lead to a set of indicators that can be used to monitor the status of an ongoing plan or project.
The Method
A Gantt Chart is a matrix that lists tasks in a project or steps in a
process down the far-left column, with the estimated time period for
accomplishing these tasks or steps in weeks, months, or years
across the top row. For each task or step, a horizontal line or bar
shows the beginning and ending of the time period for that task or
step. Professionals working with Gantt Charts use tools such as
Microsoft Project to draw the chart. Gantt Charts can also be made
with Microsoft Excel or by hand on graph paper.

Detailed guidance on creating a Gantt Chart is readily available from the sources described under “Origins of This Technique.”
Example
The U.S. Intelligence Community has considerable experience
monitoring terrorist groups. This example describes how an analyst
would create a Gantt Chart of a generic terrorist attack–planning
process (see Figure 5.5). The analyst starts by making a list of all the
tasks that terrorists must complete, estimating the schedule for when
each task will start and finish, and determining what resources are
needed for each task. Some tasks need to be performed in a
sequence, with each task somewhat completed before the next
activity can begin. These are called sequential, or linear, activities.
Other activities are not dependent upon completion of any other
tasks. These may be done at any time before or after a stage occurs.
These are called nondependent, or parallel, tasks.

Note whether each terrorist task is sequential or parallel. It is this sequencing of dependent and nondependent activities that is critical
in determining how long a project or process will take. The more
activities worked in parallel, the greater the chances of a project
being completed on time. The more tasks done sequentially, the
greater the chances of a single bottleneck delaying the entire project.

When entering tasks into the Gantt Chart, enter the sequential tasks
first in the required sequence. Ensure that those tasks don’t start
until the tasks on which they depend are completed. Then enter the
parallel tasks in an appropriate time frame toward the bottom of the
matrix so that they do not interfere with the sequential tasks on the
critical path to completion of the project.
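
Analysts without project-management software can prototype such a timeline with a few lines of script. The Python sketch below renders a crude text-based Gantt Chart; the task names and month offsets are hypothetical simplifications of the generic attack-planning process shown in Figure 5.5, not data from any real case.

    def text_gantt(tasks, horizon):
        """Render a rough text Gantt Chart.

        tasks is a list of (name, start, end) tuples, where start and end
        are months before the event (larger numbers are earlier); horizon
        is the total number of months displayed, earliest on the left.
        """
        for name, start, end in tasks:
            row = ["."] * horizon
            for month in range(end, start):        # fill the active span
                row[horizon - 1 - month] = "#"
            print(f"{name:<24}{''.join(row)}")

    # Hypothetical month offsets loosely following Figure 5.5.
    tasks = [
        ("Conceive idea, decide", 24, 22),
        ("Gather info, recruit", 22, 12),
        ("Training", 12, 6),
        ("Surveillance", 6, 0),
        ("Assemble material", 6, 1),
        ("Position and attack", 1, 0),
    ]
    text_gantt(tasks, horizon=24)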

Gantt Charts that map a generic process can also track data about a
more specific process as it is received. For example, the Gantt Chart
depicted in Figure 5.5 can be used as a template over which new
information about a specific group’s activities could be layered using
a different color or line type. Layering in the specific data allows an
analyst to compare what is expected with the actual data. The chart can then help identify and narrow gaps or anomalies in the data and even surface and challenge assumptions about what is expected or what is happening. Considering such possibilities can mean the difference between anticipating an attack and wrongly assuming that a lack of activity means a lack of intent. The
matrix illuminates the gap and prompts the analyst to consider
various explanations.
Origins of This Technique
The first Gantt Chart was devised in the mid-1890s by Karol Adamiecki, a Polish engineer who ran a
steelworks in southern Poland.6 Some fifteen years later, Henry Gantt, an American engineer and
project management consultant, created his own version of the chart. His chart soon became popular in
Western countries, and Henry Gantt’s name became associated with this type of chart. Gantt Charts
were a revolutionary advance in the early 1900s. During the period of industrial development, they were
used to plan industrial processes and still are in common use today. Information on how to create and
use Gantt Charts is available at www.ganttchart.com and https://www.projectmanager.com/gantt-chart.


Figure 5.5 Gantt Chart of Terrorist Attack Planning


Source: Based on “Gantt Chart” by Richard Damelio, The Basics of Process Mapping (Florence, KY: Productivity Press,
2006), www.ganttchart.com.
NOTES
1. Atul Gawande, “The Checklist,” New Yorker, December 10, 2007,
www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande.
Also see Marshall Goldsmith, “Preparing Your Professional
Checklist,” Business Week, January 15, 2008,
https://www.marshallgoldsmith.com/articles/preparing-your-
professional-checklist/

2. Katherine Hibbs Pherson and Randolph H. Pherson, “Part I: How Do I Get Started?” Critical Thinking for Strategic Intelligence, 2nd ed. (Washington, DC: CQ Press/SAGE, 2017).

3. A fuller discussion of the matrix can be found in Randolph H. Pherson, “Rethinking National Security in a Globalizing World: A New Ecology,” Revista Romănă de studii de intelligence (Romanian Journal of Intelligence Studies), 1–2 (December 2009).

4. “ISO 5807:1985,” International Organization for Standardization (February 1985), https://en.wikipedia.org/wiki/Flowchart#cite_note-16

5. Frank Bunker Gilbreth and Lillian Moller Gilbreth, “Process Charts” (presentation, American Society of Mechanical Engineers, 1921), https://en.wikipedia.org/wiki/Flowchart#cite_note-2

6. Gantt.com (2019), last accessed November 6, 2019, https://www.gantt.com/
Descriptions of Images and Figures
Figure 5.2A Paired Comparison Matrix

The cells in each row show the impact of the variable, represented
by that row, on each of the variables listed across the top of the
matrix. The cells in each column show the impact of each variable,
listed down the left side of the matrix, on the variable represented by
the column.

The matrix consists of eight columns and six rows. The columns are A, B, C, D, E, F, Score, and Percentage; the rows are A through F. Cells on and below the diagonal are not used. Reading the used cells left to right in each row, the values are as follows:

Row A: B,3; C,1; A,1; A,1; F,2; Score 2; 10 percent.

Row B: B,1; D,1; B,2; B,1; Score 7; 35 percent.

Row C: D,1; E,1; F,1; Score 1; 5 percent.

Row D: D,2; D,1; Score 5; 25 percent.

Row E: F,1; Score 1; 5 percent.

Row F: no cells above the diagonal; Score 4; 20 percent.
Figure 5.3 Rethinking the Concept of National Security: A New Ecology

The mechanisms of engagement, ranging from conflict to cooperation, are use of military force, policing or monitoring, and collaboration. Use of military force: threat or use of military force to impose or to defend the national interest. Policing or monitoring: peacekeeping, treaties, and similar arrangements that maintain order with penalties for noncompliance. Collaboration: self-enforcing agreements and voluntary standards. The sources of threat, ranging from nation-states to non-state actors, are nation-states, subnational actors, organizations, informal networks, and systemic challenges. Nation-states: countries, alliances, and ad hoc coalitions. Subnational actors: ethnic groups, guerrilla groups, and refugees. Organizations: transnational criminal organizations, terrorist groups, international businesses, and NGOs. Informal networks: antiglobalization protestors, currency speculators, computer hackers, and migrants. Systemic challenges: infectious diseases, natural disasters, and global warming.

Use of military force by nation-states: Iraq in 1990 and 2003, Afghanistan in 2002, and Somalia antipiracy operations. Use of military force by subnational actors: Kosovo in 1998, Afghanistan in 2009, and noncombatant evacuation operations. Use of military force by organizations: Afghanistan drug eradication, Mexican and Colombian drug wars, and Osama bin Laden in Pakistan. Use of military force by informal networks: information warfare, and U.S. maritime interception of Haitian and Cuban immigrants. Use of military force by systemic challenges: asteroid tracking, and the Guam brown tree snake quarantine.

Policing and monitoring by nation-states: EU peacekeeping in Bosnia, UN peacekeeping operations, and nonproliferation treaties. Policing and monitoring by subnational actors: Congo refugees, Somalia, and Gaza. Policing and monitoring by organizations: INTERPOL, security at the Olympics, and the World Trade Organization. Policing and monitoring by informal networks: international currency exchange protocols, and the International Criminal Court. Policing and monitoring by systemic challenges: the Montreal Protocol on CFCs, the World Health Organization, and the Kyoto Protocol.

Collaboration by nation-states: G-8 and G-20 summits, the CSCE Istanbul Summit in 1999, and the Biological Weapons Convention. Collaboration by subnational actors: Irish peace resolution and implementation, and the U.S. State Department relief web. Collaboration by organizations: INTERPOL information technology working parties and international relief such as the Red Cross. Collaboration by informal networks: global internet virus monitoring, international standards for genetic research, and internet domain names. Collaboration by systemic challenges: H1N1 pandemic monitoring in 2009, HIV/AIDS suppression, and volcano monitoring.

A spectrum ranges from collecting and protecting intelligence secrets by nation-states to collecting information and disseminating it broadly through collaboration. Use of military force by nation-states is the classic concept of national security. Use of military force by subnational actors and organizations, and policing and monitoring by nation-states, subnational actors, and organizations, are the enlarged focus of the past two decades. The rest are gaining importance in a globalizing world.

Figure 5.5 Gantt Chart of Terrorist Attack Planning

The Gantt Chart shows bars across different time periods before the day of attack for different activities. Intent, such as conceiving the idea, meetings, and deciding to act, takes place two years before the attack. Planning, such as gathering information, surveilling, recruiting, and making bombs, takes place between two years and one year before the day of attack. The time periods for the activities associated with preparation are as follows. Training: one year to six months before the day of attack. Surveilling: six months before the day of attack up to the day itself. Assembling the material: six months to between eight and four weeks before the day of attack. Issuing the attack order: on the day of attack. Attack, such as positioning and detonating the bomb, takes place on the day of the attack. The aftermath, such as fleeing the scene and claiming credit, takes place on the day of the attack or on the day after.
CHAPTER 6 EXPLORATION
TECHNIQUES

6.1 Simple Brainstorming [ 92 ]

6.2 Cluster Brainstorming [ 95 ]

6.3 Nominal Group Technique [ 98 ]

6.4 Circleboarding™ [ 101 ]

6.5 Starbursting [ 104 ]

6.6 Mind Maps and Concept Maps [ 106 ]

6.7 Venn Analysis [ 112 ]

6.8 Network Analysis [ 118 ]

New ideas, and the combination of old ideas in new ways, are
essential elements of effective analysis. Some structured techniques
are specifically intended for the purpose of exploring concepts and
eliciting ideas at various stages of a project, and they are the topic of
this chapter. Exploration Techniques help analysts create new
approaches and transform existing ideas, knowledge, and insights
into something novel but meaningful. They help analysts identify
potential new sources of information or reveal gaps in data or
thinking and provide a fresh perspective on long-standing issues.

Imagination is more important than knowledge. For while knowledge defines all we currently know and understand, imagination points to all we might yet discover and create.

—Albert Einstein
A continuing debate in the field of intelligence analysis is who makes
the “best” analyst: an individual with subject-matter expertise or a
creative generalist with good critical thinking skills. The reality is that
both do, but specialists with deep knowledge of a given subject tend
to focus primarily on their domain, and generalists, who often move
from one subject to another throughout their careers, can come
across as “facile.” Yet both types of analysts are called upon to
generate new or different thoughts, create new processes and
procedures, and help clients avoid surprise.

In one sense, all Structured Analytic Techniques are Exploration Techniques when used in collaborative group processes. A
structured process helps identify and explore differences in
perspective and different assumptions among team or group
members, and thus stimulates learning and new ideas. A group or
team using a Structured Analytic Technique is almost always more
effective than a single individual in generating new ideas and in
synthesizing divergent ideas.

All the Exploration Techniques embrace brainstorming as part of their process. The goal of brainstorming is to be innovative and
generate new ideas, but even this avowedly creative process
requires structure to be effective. People sometimes say they are
brainstorming when they sit with a few colleagues and try to come up
with new ideas, but such a freewheeling process rarely generates
the creative breakthroughs the group is seeking. Brainstorming is
most effective when it is structured. Experience has shown that
analysts should follow eight basic rules to optimize the value of a
brainstorming session:

Be specific about the purpose and the topic of the brainstorming session. Announce the topic beforehand and ask participants to come to the session with some ideas or to forward them to the facilitator before the session. Begin the session by writing down everyone’s initial inputs on a whiteboard or easel.

Never criticize an idea, no matter how weird, unconventional, or improbable it might sound. Instead, try to figure out how the idea might be applied to the task at hand.

Allow only one conversation at a time; ensure that everyone has an opportunity to speak.

Allocate enough time to complete the brainstorming session. It often takes one hour to set the rules of the game, get the group comfortable, and exhaust the conventional wisdom on the topic. Only then do truly creative ideas begin to emerge.

Engage all participants in the discussion; sometimes this might require “silent brainstorming” techniques, such as asking everyone to be quiet for a few minutes to write down their key ideas and then discuss as a group what everyone wrote down.

Include one or more “outsiders” in the group to avoid Groupthink and stimulate divergent thinking. One practical definition of an “outsider” is someone who is not processing the same information or information from the same generic sources as the members of the core group.

Write it down! Track the discussion for everyone to see by using a whiteboard, an easel, or self-stick notes.

Summarize key findings. Ask the participants to write down their key takeaway or the most important thing they learned on an index card as they depart the session. Then prepare a short summary and distribute the list to the participants (who may add items to the list) and to others interested in the topic (including supervisors and those who could not attend).

Exploration Techniques are helpful in combating many cognitive biases, which Daniel Kahneman says occur when analysts lack precision while making continuous assessments to supply quick answers to difficult problems.1
countering the impulse to answer complicated questions rapidly
(Mental Shotgun) and to select the first answer that seems adequate
(Satisficing), as distinct from assessing multiple options to find the
optimal or best answer. The term Satisficing was coined by Nobel
Prize winner Herbert Simon by combining the words satisfy and
suffice.2 The “satisficer” who does seek out additional information
may look only for information that supports the initial answer. A
better course of action is to use structured techniques to look broadly
at all the possibilities.

Exploration Techniques help analysts avoid the intuitive traps of assuming the same dynamic is in play when something appears to be in accord with past experiences (Projecting Past Experiences).
They also help guard against failing to factor something into the
analysis because the analyst lacks an appropriate category or “bin”
for that item of information (Lacking Sufficient Bins).
OVERVIEW OF TECHNIQUES
Simple Brainstorming is one of the most widely used analytic
techniques. It is usually conducted in small groups but can also be
done as an individual exercise.

Cluster Brainstorming is a form of silent brainstorming that uses self-stick notes, follows specific rules and procedures, and requires a
facilitator. It is often used to develop a comprehensive picture or map
of all the forces, factors, events, and players that relate to a given
topic or issue. It requires little training, and it is one of the most
frequently used structured techniques in the U.S. Intelligence
Community.

Nominal Group Technique serves much the same function as Cluster Brainstorming but uses a quite different approach. It is the
preferred technique when there is concern that a senior member or
outspoken member of the group may dominate the meeting, that
junior members may be reluctant to speak up, or that the meeting
may lead to heated debate. Nominal Group Technique encourages
equal participation by requiring participants to present ideas one at a
time in round-robin fashion until all participants feel that they have
run out of ideas.

Circleboarding™ is a method for capturing the journalist’s list of Who, What, How, When, Where, and Why to develop a
comprehensive picture of a topic. The technique adds a final
question, So What, to spur analysts to assess the impact of the
development and implications for the client.

Starbursting is a form of brainstorming that focuses on generating questions rather than answers. To help in defining the parameters of
a research project, use Starbursting to identify the questions that
need to be answered. Questions start with the words Who, What,
How, When, Where, and Why. Brainstorm each of these words, one
at a time, to generate as many questions as possible about the
research topic.

Venn Analysis is a visual technique that helps analysts explore the logic of arguments and illustrate sets of relationships in analytic
arguments. It is a powerful tool for showing the relative significance
of change over time and for illustrating large volumes of data in an
easily digested format.

Network Analysis is used to illustrate associations among individuals, groups, businesses, or other entities; the meaning of
those associations to the people involved; and the degrees and ways
in which those associations can be strengthened or weakened. It has
been highly effective in helping analysts identify and understand
patterns of organization, authority, communication, travel, and
financial transactions.
6.1 SIMPLE BRAINSTORMING
Simple Brainstorming is an individual or group process designed to
generate new ideas and concepts. A brainstorming session usually
exposes an analyst to a greater range of ideas and perspectives
than the analyst could generate alone. This broadening of views
typically results in a better analytic product. To be successful,
brainstorming must focus on a specific issue, and the results must be captured in a final written product.
When to Use It
Analysts most often use Simple Brainstorming at the beginning of a
project to identify a list of relevant variables, key drivers, alternative
scenarios, key players or stakeholders, available evidence or
sources of information, possible solutions to a problem, potential
outcomes or scenarios, or all the forces and factors relevant to a
given situation. Later in the process, it can be used to help break the
team out of an analytic rut, stimulate new investigative leads,
generate new research or collection requirements, and design new
lines of argument or theories of a case. In most instances, a
brainstorming session can be followed with Cross-Impact Analysis to
examine the relationship between each of the variables, players, or
other factors identified by the brainstorming.
Value Added
The goal of any type of brainstorming session is to generate as
many ideas as possible to expand the range of ideas and
perspectives to consider. Brainstorming can involve any number of
processes designed to enhance creativity but is most productive
when it is structured. Intelligence analysts have found brainstorming
particularly effective in helping mitigate the intuitive traps of giving
too much weight to first impressions, lacking an appropriate category
or bin for an item of information, allowing firsthand information to
have too much impact, and expecting change to be only marginal or
incremental.
The Method
Brainstorming is conducted in many different ways (see Figure 6.1 for an example of one technique).
The facilitator or group leader should present the focal question, explain and enforce the ground rules,
keep the meeting on track, stimulate discussion by asking questions, record the ideas, and summarize
the key findings. Participants should be encouraged to express every idea that pops into their heads.
Even ideas that are outside the realm of the possible may stimulate other ideas that are more feasible.
The group should have at least four and no more than twelve participants. Five to seven is an optimal
number; if there are more than twelve people, the group should be split into two sections to conduct
parallel brainstorming exercises. The following discussion describes several techniques you can use to
engage in individual or group brainstorming.

Word or Visual Storming.


This method consists of selecting a random word or picture—related to the issue or not—and using that
word or visual to “trigger” thoughts. Ideally, settle on a word or picture not related to the topic or issue
because the “word” may force you to stretch your mind. An easy way to select a word is to open a
dictionary or magazine and point your finger at the open page. Where your finger lands, that is your word (or use the picture on the page you opened); you could also pick a word or picture on the previous or next
page. After you select a word or picture, answer the following questions:

What is the meaning of the word or what words come to mind when you look at the picture?

What is the word or picture associated with?

What is happening in the picture?

How is the word used or what is a synonym for the word?

Figure 6.1 Engaging All Participants Using the Index Card Technique

Ask yourself how the word or picture could be related to the issue or problem. For example, when
considering who a newly-elected prime minister or president might appoint to a particularly tough
Cabinet job, the word “basketball” or a picture of a basketball court might generate ideas such as team
game, team player, assigned position, coaching, player development, winning, hoops and balls, time
outs, periods of play, or winners and losers. Each of those terms could trigger subsequent ideas about
the type of official who would be “ideal” for the job.

A freewheeling, informal group discussion session will often generate new ideas, but a structured group
process is more likely to succeed in helping analysts overcome existing mindsets, “empty the bottom of
the barrel of the obvious,” and come up with fresh insights and ideas.

Paper Recording.
Paper Recording is a silent technique that is a variant of Nominal Group Technique (discussed later in
this chapter). This group method requires each participant to write down one to four ideas on the topic
on a piece of paper initially distributed by the facilitator or organizer.

After jotting down their ideas, participants place their papers in a central “pick-up point.” They select
a piece of paper (not their own) from the “pick-up point” and add to the ideas on the paper.

The process continues until participants run out of ideas. Participants can hold up or “play” a
colored card to note when they have exhausted their ideas.

A facilitator harvests and records the ideas.


Relationship to Other Techniques
Any form of brainstorming is commonly combined with a wide variety
of other techniques. It is often an early step in analytic projects used
to identify ideas, variables, evidence, possible outcomes, suspects,
or hypotheses that are then processed by using other structured
techniques. Other forms of brainstorming described in this chapter
include Nominal Group Technique, Circleboarding™, Mind Maps,
and Venn Analysis.
Origins of This Technique
Brainstorming was a creativity technique used by advertising
agencies in the 1940s. It was popularized in a book by advertising
manager Alex Osborn, Applied Imagination: Principles and
Procedures of Creative Problem Solving (New York: Scribner’s,
1953). There are many versions of brainstorming. The techniques
described in this book are a combination of information in the
Handbook of Analytic Tools and Techniques, 5th ed. (Tysons, VA:
Pherson Associates, LLC, 2019) and training materials used
throughout the global intelligence community.
6.2 CLUSTER BRAINSTORMING
Cluster Brainstorming is a systematic, multistep process for
conducting group brainstorming that employs silent brainstorming
and self-stick notes. It allows group members to alternate between
divergent and convergent thinking, generating comprehensive lists of
new ideas. Cluster Brainstorming requires a facilitator, in part
because participants cannot talk during the brainstorming session.

In previous editions of this book, Cluster Brainstorming was referred to as Structured Brainstorming. The authors chose to change the
name of the technique for this edition because all forms of
brainstorming—ranging from Simple Brainstorming to Mind Maps to
Outside-In Thinking—require structure to be effective.
When to Use It
Cluster Brainstorming is used widely in the intelligence community
and increasingly in the private sector at the beginning of a project.
Many analysts also employ it later in the analytic process to pull the team
out of an analytic rut and to stimulate new thinking.
Value Added
The stimulus for creativity comes from two or more analysts
bouncing ideas off each other. A Cluster Brainstorming session
usually exposes an analyst to a greater range of ideas and
perspectives than the analyst could generate alone, and this
broadening of views typically results in a better analytic product. The
technique can help mitigate the impact of Groupthink, Premature
Closure, and accepting evidence as true because it helps create a
coherent picture (Evidence Acceptance Bias).

In addition, analysts have found it effective in helping them overcome the intuitive traps of failing to factor something into the
analysis because the analyst lacks an appropriate category or “bin”
for that item of information (Lacking Sufficient Bins), focusing on a
narrow range of alternatives representing only modest change
(Expecting Marginal Change), and continuing to hold to a judgment
when confronted with a mounting list of contradictory evidence
(Rejecting Evidence).
The Method
Many versions of Cluster Brainstorming are in common usage. The following process has worked well in
intelligence and law enforcement communities to generate key drivers for Multiple Scenarios
Generation, a set of alternative hypotheses when investigating a situation, or a list of key factors to
explain a behavior. The process is divided into two phases: a divergent thinking (creative) phase when
ideas are presented, and a convergent thinking (analytic) phase when these ideas are evaluated.

Pass out self-stick notes and marker-type pens to all participants. A group of ten to twelve
contributors works best. Tell those present that no one can speak except the facilitator during the
initial collection phase of the exercise.

Write the question you are brainstorming on a whiteboard or easel. The objective is to make the
question as open-ended as possible. For example, Cluster Brainstorming focal questions often
begin with “What are all the (things/forces and factors/circumstances) that would help explain . . .?”

Ask the participants to write down their responses to the question on self-stick notes. Participants
are asked to capture the concept with a few key words that will fit on the self-stick note. The
facilitator then collects the self-stick notes from the participants.

After an initial quiet time of two minutes, the facilitator begins to read the self-stick notes out loud.
The quiet time allows participants time to reflect and process before the facilitator starts talking
again. The facilitator puts all the self-stick notes on the wall or a whiteboard as he/she reads them
aloud (see Figure 6.2). All ideas are treated the same. Participants are urged to listen to and build
on one another’s ideas.

Usually there is an initial spurt of ideas followed by pauses as participants contemplate the
question. After five or ten minutes, expect a long pause of a minute or so. This slowing down
suggests that the group has “emptied the barrel of the obvious” and is now on the verge of coming
up with some fresh insights and ideas. Facilitators should not talk during this pause, even if the
silence is uncomfortable.

After a couple of long pauses, facilitators conclude this divergent stage of the brainstorming
process. They then ask a subset of the group to go up to the board and silently arrange the self-
stick notes into affinity groups (basically grouping the ideas by like concept). The group should
arrange the self-stick notes in clusters, not in rows or columns. The group should avoid putting the
self-stick notes into obvious groups like “economic” or “political.” Group members cannot talk while
they are doing this. If one self-stick note seems to “belong” in more than one group, make a copy
and place one self-stick note in each affinity group.

If the group has many members, those who are not involved in arranging the self-stick notes should
be asked to perform a different task. For example, the facilitator could make copies of several of the
self-stick notes that were considered outliers because the group at the board has not fit them into
any obvious group or the items evoked laughter when read aloud. One of these outliers is given to
each table or to each smaller group of participants, who then are asked to explain how that outlier
relates to the primary task.

When the group at the board seems to slow down organizing the notes, ask a second small group
to go to the board and review how the first group arranged the self-stick notes. The second group
cannot speak, but members are encouraged to continue to rearrange the notes into more coherent
groups. Usually the exercise should generate five to ten affinity groupings on the board.

When all the self-stick notes have been arranged, members of the group at the board can converse
among themselves to pick a word or phrase that best describes each affinity grouping.
Pay attention to outliers or self-stick notes that do not belong in a particular group. Such an outlier
could either be useless noise or, more likely, contain a gem of an idea that deserves further
elaboration as a theme. If outlier self-stick notes were distributed earlier in the exercise, ask each
group to explain how that outlier is relevant to the issue.

To identify the potentially most useful ideas, the facilitator or group leader should establish up to five
criteria for judging the value or importance of the ideas. If so desired, then use the Ranking,
Scoring, and Prioritizing technique, described in chapter 5, for voting on or ranking or prioritizing
ideas. Another option is to give each participant ten votes. Tell them to come up to the whiteboard
or easel with a marker and place their votes. They can place ten votes on a single self-stick note or
affinity group label, one vote each on ten self-stick notes or affinity group labels, or any combination
in between. Tabulate the votes.

Assess what you have accomplished and what areas will need more work or more brainstorming.
Then ask the group, “What do you see now that you did not see before?” Review the key ideas or
concepts as well as new areas that need more work or further brainstorming.

Set priorities based on the voting results, decide on the next steps for analysis, and develop an
action plan.

Figure 6.2 Cluster Brainstorming in Action


Source: Pherson Associates, LLC, 2019.
Relationship to Other Techniques
Cluster Brainstorming is also called Structured Brainstorming and
Divergent/Convergent Thinking. The authors prefer the term Cluster
Brainstorming because any brainstorming technique should be
structured to be effective.
Origins of This Technique
The description of Cluster Brainstorming is taken from the discussion
of “Cluster Brainstorming,” in Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019),
and training materials used throughout the global intelligence
community.
6.3 NOMINAL GROUP TECHNIQUE
Nominal Group Technique (NGT) is a process for generating and
evaluating ideas. It is a form of brainstorming, but NGT has always
had its own identity as a separate technique. The goals of Nominal
Group Technique and other forms of brainstorming are the same—
the generation of good, innovative, and viable ideas. NGT, however,
differs from Simple Brainstorming in several ways. Most importantly,
ideas are presented one at a time in round-robin fashion.
When to Use It
NGT prevents the domination of a discussion by a single person.
Use it whenever concerns arise that a senior officer or executive or
an outspoken member of the group or perhaps even experts in the
field will control the direction of the meeting by speaking before
anyone else. It is also appropriate to use NGT rather than other
structured brainstorming techniques if there is concern that some
members may not speak up, or the issue under discussion is
controversial and may provoke a heated debate. The index card
technique is another way to accomplish these objectives.

NGT is also effective when coordinating the initial conceptualization


of a problem before the research and writing stages begin. Like
brainstorming, NGT is commonly used to identify ideas
(assumptions, hypotheses, drivers, causes, variables, important
players) that can then be incorporated into other methods.
Value Added
NGT can be used both to generate ideas and to provide backup
support in a decision-making process where all participants are
asked to rank or prioritize the ideas generated. If it seems desirable,
all ideas and votes can be anonymous. Compared with Simple
Brainstorming, which usually seeks to generate the greatest possible
number of ideas—no matter how far out they may be—NGT may
focus on a limited list of carefully considered opinions.

The technique allows participants to focus on each idea as it is presented, rather than having to think simultaneously about
preparing their own ideas and listening to what others are proposing
—a situation that often happens with Simple or Cluster
Brainstorming. NGT encourages piggybacking on ideas that have
already been presented—in other words, combining, modifying, and
expanding others’ ideas.
The Method
An NGT session starts with the facilitator asking an open-ended
question such as “What factors will influence . . . ?” “How can we
learn if . . . ?” “In what circumstances might . . . happen?” “What
should be included or not included in this research project?” The
facilitator answers any questions and then gives participants a few
minutes to work privately to jot down their initial responses to the
focal question on index cards. The process then proceeds according
to these steps:

The facilitator calls on one person at a time to present one idea. As each participant presents his or her idea, the facilitator writes a summary description on a flip chart or whiteboard. This process continues in a round-robin fashion until all ideas have been exhausted. If individuals have run out of ideas, they “pass” when called upon for an idea, but they can participate again later if they have another idea when their turn comes up again. The facilitator can also be an active participant, announcing and then writing down his or her own ideas. There is no discussion until all ideas are presented; however, the facilitator can clarify ideas to avoid duplication.

When no new ideas are forthcoming, the facilitator initiates a group discussion to ensure there is a common understanding of what each idea means. The facilitator asks about each idea, one at a time, in the order presented, but does not allow any argument for or against the idea. At this time, the participants can expand or combine ideas, but no change can be made to an idea without the approval of the original presenter of the idea.

Voting to rank or prioritize the ideas as discussed in chapter 5 is optional, depending upon the purpose of the meeting. If adopted, voting is best done by secret ballot, although various voting procedures may be used depending upon the number of ideas and the number of participants. It usually works best to employ a ratio of one vote for every three ideas presented. For example, if the facilitator lists twelve ideas, each participant has four votes. The group can also decide to let participants give one idea more than one vote. In this case, someone could give three votes to one idea and another idea only one vote.

An alternative procedure is for each participant to write what he or she considers the best four ideas on an index card. One might rank the four ideas by giving 4 points for the best idea, 3 points for the next best, 2 points for the next best, and 1 point for the least favored idea. The cards are then passed to the facilitator for tabulation and announcement of the scores. In such circumstances, a second round of voting may be needed to rank the top ideas.
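As a minimal illustration of how the 4-3-2-1 card scores can be tabulated, the sketch below assumes each ballot lists a participant's four ideas from best to least favored; the idea labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical ballots: each card lists four ideas, best first (worth 4, 3, 2, 1 points).
ballots = [
    ["idea C", "idea A", "idea F", "idea B"],
    ["idea A", "idea C", "idea D", "idea F"],
    ["idea C", "idea F", "idea A", "idea E"],
]

scores = defaultdict(int)
for ballot in ballots:
    for points, idea in zip((4, 3, 2, 1), ballot):
        scores[idea] += points

# Announce the scores from highest to lowest; a second vote can break ties near the top.
for idea, total in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{idea}: {total} points")
```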
Relationship to Other Techniques
Analysts should consider other forms of brainstorming as well as
Nominal Group Technique to determine which technique is most
appropriate for the conditions in which it will be used.
Origins of This Technique
Nominal Group Technique was developed by A. L. Delbecq and A.
H. Van de Ven and first described in “A Group Process Model for
Problem Identification and Program Planning,” Journal of Applied
Behavioral Science VII (July–August, 1971): 466–491. The
discussion of NGT here is a synthesis of several sources: James M.
Higgins, 101 Creative Problem Solving Techniques: The Handbook
of New Ideas for Business, rev. ed. (Winter Park, FL: New
Management Publishing Company, 2006); “What Is Nominal Group
Technique?” American Society for Quality, http://www.asq.org/learn-
about-quality/idea-creation-tools/overview/nominal-group.html;
“Nominal Group Technique,”
http://syque.com/quality_tools/toolbook/NGT/ngt.htm; and “Nominal
Group Technique,” www.mycoted.com/Nominal_Group_Technique.
6.4 CIRCLEBOARDING™
Circleboarding™ captures the Who, What, How, When, Where, Why,
and So What of a topic. It visually depicts the known information
about differing aspects of a topic of concern or interest.
Circleboarding™ focuses on exploring the answers to the journalist’s
classic five W’s and H but adds a final question: So What? The So
What? question spurs the analyst to consider the impact of the event
or the implications of the topic and helps focus on the key message
to deliver to the client.
When to Use It
Use Circleboarding™ at the start of a project to gain a thorough,
group-wide understanding of a topic. After deciding on the topic,
assemble all researchers, investigators, or analytic contributors and
use Circleboarding™ to determine what is already known about each
aspect of the subject. A facilitator must marshal the group through
the process.

Circleboarding™ is also effective in generating indicators or a set of scenarios. Questions to consider when creating indicators include

Who could potentially emit an indicator?

What indicator would they emit?

How would this indicator manifest? How might we miss it?

When would we be most likely to see this indicator?

Where would we be most likely to see this indicator?

Why is this indicator important?

What do we want the indicators to tell us as a set?


Value Added
Circleboarding™ provides a systematic overview of a topic or issue.
The brainstorming process helps promote widespread, collaborative
understanding of the topic for all those involved in the project,
regardless of their specific area of research focus. The cooperative
approach of Circleboarding™ also encourages investigators to
combine ideas in new or different ways to imbue the analysis with
increased rigor.

The technique can stimulate discussion beyond just consolidating known information by identifying gaps or weak points in the group’s knowledge and encouraging discussion of stated and unstated assumptions and what further collection and research is needed to fill the gaps. Circleboarding™ can be a powerful tool for mitigating the impact of misapplied heuristics, including the tendency to provide quick and easy answers to difficult questions (Mental Shotgun), predict rare events based on weak evidence or evidence that easily comes to mind (Associative Memory), and select the first answer that appears “good enough” (Satisficing). Use of the technique can also diminish the impact of intuitive traps, such as Overinterpreting Small Samples, Ignoring the Absence of Information, and Projecting Past Experiences.

Circleboarding™ promotes thorough investigation of a topic and well-informed analysis. Detailed graphics can present the results of an exhaustive investigation in a crisp and easily digestible format.
The Method

Setup. Draw a circle and write the following words around the circle: Who, What, How, When,
Where, and Why (see Figure 6.4). The order follows how a declarative sentence is written in
English (who did what, for what reason). The What and the How are next to each other because
they often overlap conceptually. In the middle of the circle, write “So What?”

Define the Topic. Begin by asking the group to validate the discussion topic and agree on the
objectives of the session.

Seek Answers. Systematically work through each question, asking the group to “shout out” what
they know about the topic as it relates to that question. Write down the answers.

Figure 6.4 The Circleboarding™ Technique

Reflect. After going around the circle, ask the group to reflect on their responses and suggest
which of the questions appears most important to the analysis.

Prioritize Results. In some cases, highlight or prioritize the responses to the most important
questions.

So What? Initiate a final discussion to explore the So What? question in the center of the circle.

Final Product. Capture all the input on a single graphic and distribute it to participants and others
interested in the topic for comment.
Relationship to Other Techniques
Circleboarding™ is a simpler version of Starbursting (presented next
in this chapter) that focuses on exploring the answers to—rather
than generating questions related to—the journalist’s classic list of
queries: Who, What, How, When, Where, and Why. Ranking,
Scoring, and Prioritizing (chapter 5) can be helpful in prioritizing the
clusters of answers that result.
Origins of This Technique
Circleboarding™ was developed by Pherson Associates as an
alternative to Starbursting. The technique can be particularly helpful
for analysts working in law enforcement, medicine, and financial
intelligence (for example, money laundering investigations), who focus mostly on
generating tactical analytic products and are often pressed for time.
6.5 STARBURSTING
Starbursting is a form of brainstorming that focuses on generating
questions rather than eliciting ideas or answers. It uses the six
questions commonly asked by journalists: Who? What? How?
When? Where? and Why?
When to Use It
Use Starbursting to help define your research requirements and
identify information gaps. After deciding on the idea, topic, or issue
to be analyzed, brainstorm to identify the questions that need to be
answered by the research. Asking the right questions is a common
prerequisite to finding the right answer.
Value Added
Starbursting uses questions commonly asked by journalists (Who,
What, How, When, Where, and Why) to spur analysts and analytic
teams to ask the questions that will inevitably arise as they present
their findings or brief their clients. In thinking about the questions,
analysts will often discover new and different ways of combining
ideas.

Starbursting is useful in combating cognitive limitations analysts often experience, including focusing attention on a vivid scenario while ignoring other possibilities (Vividness Bias), the tendency to provide quick and easy answers to difficult questions (Mental Shotgun), and predicting rare events based on weak evidence or evidence that easily comes to mind (Availability Heuristic). It also can help analysts reduce the effect of several intuitive traps, including Relying on First Impressions, Assuming a Single Solution, and Projecting Past Experiences.
The Method
The term “Starbursting” comes from the image of a six-pointed star.
To create a Starburst diagram, begin by writing one of the following
six words at each point of the star: Who, What, How, When, Where,
and Why. Then start the brainstorming session, using one of these
words at a time to generate questions about the topic. Usually it is
best to discuss each question in the order provided, in part because
the order also approximates how English sentences are constructed.
The What and the How are next to each other because they often
overlap conceptually.

Often only three or four of the words are directly relevant to the
intelligence question. In cyber analysis, the Who is often difficult to
define. For some words (for example, When or Where), the answer
may be a given and not require further exploration.

Do not try to answer the questions as they are identified; just focus
on developing as many questions as possible. After generating
questions that start with each of the six words, ask the group either
to prioritize the questions to be answered or to sort the questions
into logical categories. Figure 6.5 is an example of a Starbursting
diagram. It identifies questions to be asked about a terrorist group
that launched a biological attack in a subway.
Relationship to Other Techniques
This Who, What, How, When, Where, and Why approach can be combined effectively with the Issue
Redefinition and Getting Started Checklist methods mentioned in the introduction to chapter 5 and
described more fully in Critical Thinking for Strategic Intelligence.3 Ranking, Scoring, and Prioritizing
(chapter 5) is helpful in prioritizing the questions to be worked on. Starbursting is also directly related to
several Diagnostic Techniques discussed in chapter 7, such as the Key Assumptions Check. In addition,
it can be used to order the analytic process used in Indicators Validation in chapter 9.


Figure 6.5 Starbursting Diagram of a Lethal Biological Event at a Subway Station


Source: A basic Starbursting diagram can be found at the MindTools website:
https://www.mindtools.com/pages/article/newCT_91.htm. The starburst in Figure 6.5 was created by the authors.
Origins of This Technique
Starbursting is one of many techniques developed to stimulate
creativity. The basic idea for the design of Figure 6.5 comes from the
MindTools website:
www.mindtools.com/pages/article/newCT_91.htm. Tips on when to
use this technique and help creating your own Starbursting diagram
can be found at https://www.designorate.com/starbursting-method/
and https://business.tutsplus.com/tutorials/starbursting-how-to-use-
brainstorming-questions-to-evaluate-ideas--cms-26952.
6.6 MIND MAPS AND CONCEPT MAPS
Mind Maps and Concept Maps are visual representations of how an
individual or a group thinks about a topic of interest. Such a diagram
has two basic elements: the ideas that are judged relevant to
whatever topic one is thinking about, and the lines that show and
briefly describe the connections among these ideas. The two
dominant approaches to creating such diagrams are Mind Mapping
and Concept Mapping (see Figures 6.6a and 6.6b). Other
approaches include cognitive, causal, and influence mapping as well
as idea mapping. There are many commercial and freely available
software products that create meaningful diagrams used by diverse
groups within the intelligence community.4
When to Use It
Whenever analysts think about a problem, develop a plan, or consider making even a simple decision,
they are putting a series of thoughts together. That series of thoughts can be represented visually with
words or images connected by lines that represent the nature of the relationships among them. Any type
of thinking, whether about a personal decision or analysis of an intelligence issue, can be diagrammed
in this manner. Such mapping is usually done for either of two purposes:

By an individual or a group to help sort out ideas and achieve a shared understanding of key
concepts. By getting the ideas down on paper or a computer screen, the individual or group is
better able to remember, critique, and modify them.

To facilitate the communication of a complex set of relationships to others. Examples are an intelligence report, a briefing or graphic prepared by an analyst for prosecutors to use in a jury trial, or notes jotted down during a class.

Mind Maps can help analysts brainstorm the various elements or key players involved in an issue and
how they might be connected. The technique stimulates analysts to ask questions such as, Are there
additional categories or “branches on the tree” that we have not considered? Are there elements of this
process or applications of this technique that we have failed to capture? Does the Mind Map suggest a
different context for understanding the problem?


Figure 6.6A Mind Map of Mind Mapping


Source: Illumine Training, “Mind Map,” www.mind-mapping.co.uk. Reproduced with permission of Illumine Training. With
changes by Randolph H. Pherson.

Figure 6.6B Concept Map of Concept Mapping


Source: R. R. Hoffman and J. D. Novak (Pensacola, FL: Institute for Human and Machine Cognition, 2009). Reproduced
with permission of the authors.

Analysts and students find that Mind Maps and Concept Maps are useful tools for taking notes during an
oral briefing or lecture. By developing a map as the lecture proceeds, the analyst or student can chart
the train of logic and capture all the data presented in a coherent map with all key elements on a single
page. The map also provides a useful vehicle for explaining to someone else in an organized way what
was presented in the briefing or lecture.

Concept Maps are also frequently used as teaching tools. Most recently, they have been effective in
developing “knowledge models,” in which large sets of complex Concept Maps are created and
hyperlinked together to represent analyses of complex domains or problems.
Value Added
Mapping facilitates the presentation or discussion of a complex body
of information. It is useful because it presents a considerable amount
of information that can generally be seen at a glance. Creating a
visual picture of the basic structure of a complex problem helps
analysts be as clear as possible in stating precisely what they want
to express. Diagramming skills enable analysts to stretch their
analytic capabilities.

Mind Maps and Concept Maps vary greatly in size and complexity
depending on how and why they are used. When used for structured
analysis, a Mind Map or a Concept Map is typically larger,
sometimes much larger, than the examples shown in this chapter.

Like any model, such a map is a simplification of reality. It does not necessarily try to capture all the nuances of a complex system of relationships. Instead, it provides an outline picture of the overall structure of a system of variables, showing which variables are relevant to a given problem or issue and how they are related to one another. Once you have this information, you are well on the way toward knowing what further research needs to be done and perhaps even how to organize the written report. For some projects, the diagram can be the analytical product or a key part of it.

When a group creates a Mind Map or Concept Map, its principal value may be the process the group went through to construct the map, not the map itself. When a group gets together to identify all the parts of a complex system and figure out how the parts relate to one another, the process often elicits new ideas, clarifies concepts, identifies relevant bodies of knowledge, and brings to the surface—and often resolves—differences of opinion at an early stage of a project before anything is put in writing. Although such a map may be a bare skeleton, the discussion will have revealed a great deal more information than can be shown in a single map.
The mapping process gives the group a shared experience and a
common basis for further discussion. Creating a map ensures that
the initial effort reflects the group’s definition of the problem, not
something done by one member of the group and then presented
after the fact for coordination by the others. Some mapping software
supports virtual collaboration so that analysts at different locations
can work on a map simultaneously and see one another’s work as it
progresses.5

After having defined the problem, the group should be better able to
identify what further research needs to be done and able to parcel
out additional work among the best-qualified members of the group.
The group should also be better able to prepare a report that
represents as fully as possible the collective wisdom of the group.

Mind Maps and Concept Maps help mitigate the impact of several
cognitive biases and misapplied heuristics, including the tendency to
provide quick and easy answers to difficult questions (Mental
Shotgun), to predict rare events based on weak evidence or
evidence that easily comes to mind (Associative Memory), and to
select the first answer that appears “good enough” (Satisficing).
They can also provide an antidote to intuitive traps such as
Overinterpreting Small Samples, Ignoring the Absence of
Information, Confusing Causality and Correlation, and overrating the
role of individual behavior and underestimating the importance of
structural factors (Overestimating Behavioral Factors).
The Method
Start a Mind Map or Concept Map with an inclusive question that
focuses on defining the issue or problem. Then follow these steps:

Make a list of concepts that relate in some way to the focal question.

Starting with the first dozen or so concepts, sort them into groupings within the diagram space in some logical manner. These groups may be based on things they have in common or on their status as either direct or indirect causes of the matter being analyzed.

Begin making links among related concepts, starting with the most general concepts. Use lines with arrows to show the direction of the relationship. The arrows may go in either direction or in both directions.

Choose the most appropriate words for describing the nature of each relationship. The lines might be labeled with words such as “causes,” “influences,” “leads to,” “results in,” “is required by,” or “contributes to.” Selecting good linking phrases is often the most difficult step.

While building all the links among the concepts and the focal
question, look for and enter cross-links among concepts.

Don’t be surprised if, as the map develops, you discover that you are now diagramming a different focus question from the one with which you started. This can be a good thing. The purpose of a focus question is not to lock down the topic but to get the process going.
Finally, reposition, refine, and expand the map structure as
appropriate.
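Because a Concept Map is essentially a set of concepts joined by labeled links, it can also be captured as a directed graph. The sketch below is a minimal illustration using the networkx library; the concepts and linking phrases are hypothetical, not drawn from the figures in this chapter.

```python
import networkx as nx

# Hypothetical concepts and linking phrases recorded as a labeled directed graph.
cmap = nx.DiGraph()
propositions = [
    ("economic sanctions", "contributes to", "economic pressure"),
    ("economic pressure", "influences", "regime behavior"),
    ("regime behavior", "is monitored by", "indicators"),
    ("indicators", "are required by", "warning analysis"),
]
for source, phrase, target in propositions:
    cmap.add_edge(source, target, label=phrase)

# Print each proposition in "concept -- linking phrase --> concept" form.
for source, target, data in cmap.edges(data=True):
    print(f"{source} -- {data['label']} --> {target}")
```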

Mind Mapping and Concept Mapping can be done manually, but mapping software makes movement of concepts and creation of links much easier and faster. Many different software programs are available for various types of mapping, and each has strengths and weaknesses. These products are usually variations of the main contenders: Mind Mapping and Concept Mapping. The two leading techniques differ in the following ways:

Mind Mapping has only one main or central idea, and all other
ideas branch off from it radially in all directions. The central idea
is preferably shown as an image rather than in words, and
images are used throughout the map. Around the central word,
draw the five or ten main ideas that relate to that word. Then
take each of these first-level words and again draw the five or
ten second-level ideas that relate to each of the first-level
words.6

A Concept Map has a more flexible form. It can have multiple hubs and clusters. It can also be designed around a central idea, but it does not have to be and often is not designed that way. It does not normally use images. A Concept Map is usually shown as a network, although it too can be shown as a hierarchical structure like Mind Mapping when that is appropriate. Concept Maps can be very complex and are often meant to be viewed on a large-format screen.
Relationship to Other Techniques
Mind and Concept Mapping can be used to present visually the
results generated by other techniques, especially the various types
of brainstorming and/or Cross-Impact Analysis, described in
chapters 6 and 7.
Origins of This Technique
Mind Mapping was originally developed as a fast and efficient way to
take notes during briefings and lectures. Concept Mapping was
originally developed as a means of mapping students’ emerging
knowledge about science; it has a foundation in the constructivist
theory of learning, which emphasizes that “meaningful learning
involves the assimilation of new concepts and propositions into
existing cognitive structures.”7

Mind Maps and Concept Maps have been given new life by the
development of software that makes them more useful and easier to
use. For information on Mind Mapping, see Tony and Barry Buzan,
The Mind Map Book (Essex, England: BBC Active, 2006).
Information on Concept Mapping is available at
http://cmap.ihmc.us/conceptmap.html. For an introduction to
mapping in general, visit http://www.inspiration.com/visual-
learning/mind-mapping.
6.7 VENN ANALYSIS
Venn Analysis is a visual technique that analysts can use to explore the logic of arguments. Venn
diagrams consist of overlapping circles and are commonly used to teach set theory in mathematics.
Each circle encompasses a set of objects; objects that fall within the intersection of circles are members
of both sets. A simple Venn diagram typically shows the overlap between two sets. Circles can also be
nested within one another, showing that one thing is a subset of another in a hierarchy. Figure 6.7a is an
example of a Venn diagram. It describes the various components of critical thinking and how they
combine to produce synthesis and analysis.

Venn diagrams can illustrate simple sets of relationships in analytic arguments; we call this process Venn Analysis. When applied to argumentation, it can reveal invalid reasoning. For example, in Figure 6.7b, the first diagram shows the flaw in the following argument: Cats are mammals, dogs are mammals; therefore, dogs are cats. As the diagram shows, dogs are not a subset of cats, nor are cats a subset of dogs; the two are distinct subsets that both belong to the larger set of mammals. Venn Analysis can also validate the soundness of an argument. For example, the second diagram in Figure 6.7b shows that the argument that cats are mammals and tigers are cats, and therefore tigers are mammals, is valid.
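The same subset logic can be checked mechanically. The sketch below is a minimal illustration using Python sets; the membership lists are hypothetical and stand in for the circles in Figure 6.7b.

```python
# Hypothetical membership lists standing in for the circles in the diagrams.
mammals = {"cat", "dog", "tiger", "whale"}
cats = {"cat", "tiger"}
dogs = {"dog"}
tigers = {"tiger"}

# Invalid argument: "Cats are mammals, dogs are mammals; therefore, dogs are cats."
print(cats <= mammals and dogs <= mammals)  # True: both premises hold
print(dogs <= cats)                         # False: the conclusion does not follow

# Valid argument: "Cats are mammals, tigers are cats; therefore, tigers are mammals."
print(tigers <= cats and cats <= mammals)   # True: the subset relations chain
print(tigers <= mammals)                    # True: so the conclusion holds
```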


Figure 6.7A Venn Diagram of Components of Critical Thinking


Figure 6.7B Venn Diagram of Invalid and Valid Arguments


When to Use It
Intelligence analysts use this technique to organize their thinking,
look for gaps in logic or data, and examine the quality of an
argument. The tool is also effective in determining how to satisfy a
narrow set of conditions when multiple variables must be considered
—for example, in determining the best time to launch a satellite.
Venn Analysis is also useful when an argument involves a portion of
something that is being compared with other portions or subsets.

One other application is to use Venn Analysis as a brainstorming tool to see if new ideas or concepts are needed to occupy every subset in the diagram. The technique can be done individually but works better when done in groups.
Value Added
Venn Analysis helps analysts determine if they have put like things in
the right groups and correctly identified what belongs in each subset.
The technique makes it easier to visualize arguments, often
revealing flaws in reasoning or spurring analysts to examine their
assumptions by making assumptions explicit when constructing the
diagram. Examining the relationships between the overlapping areas
of a Venn diagram helps put things in perspective and often prompts
new research or deeper inquiry. Care should be taken, however, not
to make the diagrams too complex by adding levels of precision that
may not be justifiable.

Venn Analysis protects analysts against seeing patterns in random events as systematic and part of a coherent world (Desire for Coherence and Uncertainty Reduction), accepting data as true because the data help create a more coherent story (Evidence Acceptance Bias), and offering quick and easy answers to difficult questions (Mental Shotgun). Venn Analysis also helps mitigate the impact of several intuitive traps, particularly Ignoring Inconsistent Evidence, Presuming Patterns, and failing to factor something into the analysis because the analyst lacks an appropriate category or “bin” for that item of information (Lacking Sufficient Bins).
The Method
Venn Analysis is an agile tool that can be applied in various ways. It
is a simple process but can spark prolonged debate. The following
steps show how Venn Analysis is useful when examining the validity
of an argument:

Represent the elements of a statement or argument as circles. Use large circles for large concepts or quantities.

Examine the boundaries of the circles. Are they well defined or “fuzzy”? How are these things determined, measured, or counted?

Consider the impact of time. Are the circles growing or shrinking? This is especially important when looking at trends.

Check the relative size and relationships between the circles. Are any assumptions being made?

Ask whether there are elements within the circles that should be
added.

Examine and compare overlapping areas. What is found in each zone? What is significant about the size of each zone? Are these sizes likely to change over time?
Example
Consider this analytic judgment: “The substantial investments state-owned companies in the fictional
country of Zambria are making in U.S. port infrastructure projects pose a threat to U.S. national
security.”

Using Venn Analysis, the analyst would first draw a large circle to represent the autocratic state of
Zambria, a smaller circle within it representing state-owned enterprises, and small circles within that
circle to represent individual state-owned companies. A mapping of all Zambrian corporations would
include more circles to represent other private-sector companies as well as a few that are partially state-
owned (see Figure 6.7c). Simply constructing this diagram raises several questions, such as What
percentage of companies are state-owned, partially state-owned, or privately owned? Do the
percentages correspond to the size of the circles? What is the definition of “state-owned”? What does
“state-owned” imply politically and economically in a nondemocratic state like Zambria? Do these
distinctions even matter? The answer to this last question would determine whether Zambrian
companies should be represented by one, two, or three circles.

The diagram should prompt the analyst to examine several assumptions implicit in the questions:

Does state-owned equate to state-controlled?

Should we assume that state-owned enterprises would act contrary to U.S. interests?

Would private-sector companies be less hostile or even supportive of free enterprise and U.S.
interests?

Figure 6.7C Venn Diagram of Zambrian Corporations

Each of these assumptions, made explicit by the Venn diagram, can now be challenged. For example,
the original statement may have oversimplified the problem. Perhaps the threat may be broader than
just state-owned enterprises. Could Zambria exert influence through its private companies or private-
public partnerships?

The original statement refers to investment in U.S. port infrastructure projects by companies in Zambria.
The overlap in the two circles in Figure 6.7d depicts which of these companies are investing in U.S. port
infrastructure projects. The Venn diagram shows that only private Zambrian companies and one private-
public partnership are investing in U.S. port infrastructure projects. Additional circles could be added
showing the level of investment in U.S. port infrastructure improvement projects by companies from
other countries. The large circle enveloping all the smaller circles would represent the entirety of foreign
investment in all U.S. infrastructure projects. By including this large circle, readers get a sense of what
percentage of total foreign investment in U.S. infrastructure is targeted on port improvement projects.

Figure 6.7d, however, raises several additional questions:

Can we assume Zambrian investors are the biggest investors in U.S. port infrastructure projects?
Who would be their strongest competitors? How much influence does their share of the investment
pie give them?

Can we estimate the overall amount of Zambrian investment in infrastructure projects and where it
is directed? If Zambrian state-owned companies are investing in other U.S. projects, are the
investments related only to companies doing work in the United States or also overseas?

Do we know the relative size of current port infrastructure improvement projects in the United States
as a proportion of all port improvement investments being made globally? What percentage of
foreign companies are working largely overseas in projects to improve port infrastructure or critical
infrastructure as more broadly defined? What percentage of these companies are functioning as
global enterprises or as state-owned or state-controlled companies?

Many analytic arguments highlight differences and trends, but it is important to put these differences into
perspective before becoming too engaged in the argument. By using Venn Analysis to examine the
relationships between the overlapping areas of the Venn diagram, analysts have a more rigorous base
from which to organize and develop their arguments. In this case, if the relative proportions are correct,
the Venn Analysis would reveal a more important national security concern: Zambria’s position as an
investor in U.S. port infrastructure improvement projects could give it significant economic as well as
military advantage.

Figure 6.7D Zambrian Investments in Global Port Infrastructure Projects


Relationship to Other Techniques
Venn Analysis is like other decomposition and visualization
techniques discussed in chapter 5. It is also useful as a graphical
way to conduct a Key Assumptions Check or assess whether
alternative hypotheses are mutually exclusive—two techniques
described in chapter 7.
Origins of This Technique
This application of Venn diagrams as a tool for intelligence analysis
was developed by John Pyrik for the Government of Canada. The
materials are used with the permission of the Canadian government.
For more discussion of this concept, see Peter Suber, Earlham
College, “Symbolic Logic,” at
http://legacy.earlham.edu/~peters/courses/log/loghome.htm; and Lee
Archie, Lander University, “Introduction to Logic,” at
http://philosophy.lander.edu/logic.
6.8 NETWORK ANALYSIS
Network Analysis is the review, compilation, and interpretation of
data to determine the presence of associations among individuals,
groups, businesses, or other entities; the meaning of those
associations to the people involved; and the degrees and ways in
which those associations can be strengthened or weakened.8 It is
the best method available to help analysts understand and identify
opportunities to influence the behavior of a set of actors about whom
information is sparse. In the fields of law enforcement and national
security, information used in Network Analysis usually comes from
informants or from physical or technical surveillance. These
networks are most often clandestine and therefore not visible to
open source collectors. Although software has been developed to
help collect, sort, and map data, it is not essential to many of these
analytic tasks. Social Network Analysis, which involves measuring
associations, does require software.

Analysis of networks is broken down into three stages, and analysts can stop at the stage that answers their questions:

Network Charting is the process of and associated techniques for identifying people, groups, things, places, and events of interest (nodes) and drawing connecting lines (links) between them according to various types of association. The product is often referred to as a Link Chart.

Network Analysis is the process and techniques that strive to make sense of the data represented by the chart by grouping associations (sorting) and identifying patterns in and among those groups.

Social Network Analysis (SNA) is the mathematical measuring of variables related to the distance between nodes and the types of associations to derive even more meaning from the chart, especially about the degree and type of influence one node has on another.
When to Use It
Network Analysis is used extensively in law enforcement,
counterterrorism analysis, and analysis of transnational issues, such
as narcotics and weapons proliferation, to identify and monitor
individuals who may be involved in illegal activity. It is often the first
step in an analysis as it helps the analyst map who is involved in an
issue, how much each individual matters, and how individuals relate
to one another. Network Charting (or Link Charting) is used literally
to “connect the dots” between people, groups, or other entities of
intelligence or criminal interest. Network Analysis puts these dots in
context, and Social Network Analysis helps identify hidden
associations and degrees of influence between the dots.
Value Added
Network Analysis is effective in helping analysts identify and
understand patterns of organization, authority, communication,
travel, financial transactions, or other interactions among people or
groups that are not apparent from isolated pieces of information. It
often identifies key leaders, information brokers, or sources of
funding. It can identify additional individuals or groups who need to
be investigated. If done over time, it can help spot change within the
network. Indicators monitored over time may signal preparations for
offensive action by the network or may reveal opportunities for
disrupting the network.

SNA software helps analysts accomplish these tasks by facilitating the retrieval, charting, and storage of large amounts of information. Software is not necessary for this task, but it is enormously helpful. The Social Network Analysis software included in many Network Analysis packages is essential for measuring associations.
The Method
Analysis of networks attempts to answer the question, “Who is related to whom and what is the nature of
their relationship and role in the network?” The basic Network Analysis software identifies key nodes
and shows the links between them. SNA software measures the frequency of flow between links and
explores the significance of key attributes of the nodes. We know of no software that does the
intermediate task of grouping nodes into meaningful clusters, though algorithms do exist and are used
by individual analysts. In all cases, however, you must interpret what is represented, looking at the chart
to see how it reflects organizational structure, modes of operation, and patterns of behavior.

Network Charting.
The key to good Network Analysis is to begin with a good chart. An example of such a chart is Figure
6.8a, which shows the terrorist network behind the attacks of September 11, 2001. It was compiled by
networks researcher Valdis E. Krebs using data available from news sources on the internet in early
2002.

Several tried and true methods for making good charts will help the analyst to save time, avoid
unnecessary confusion, and arrive more quickly at insights. Network charting usually involves the
following steps:

Identify at least one reliable source or stream of data to serve as a beginning point.

Identify, combine, or separate nodes within this reporting.

List each node in a database, association matrix, or software program.


Figure 6.8A Social Network Analysis: The September 11 Hijackers


Source: Valdis Krebs, “Connecting the Dots: Tracking Two Identified Terrorists,” Figure 3, Orgnet.com, last modified
2008, www.orgnet.com/tnet.html. Reproduced with permission of the author.

Identify interactions among individuals or groups.


List interactions by type in a database, association matrix, or software program.

Identify each node and interaction by some criterion that is meaningful to your analysis. These
criteria often include frequency of contact, type of contact, type of activity, and source of
information.

Draw the connections between nodes—connect the dots—on a chart by hand, using a computer
drawing tool, or using Network Analysis software. If you are not using software, begin with the
nodes that are central to your intelligence question. Make the map more informative by presenting
each criterion in a different color or style or by using icons or pictures. A complex chart may use all
these elements on the same link or node. The need for additional elements often happens when the
intelligence question is murky (for example, when “I know something bad is going on, but I don’t
know what”); when the chart is being used to answer multiple questions; or when a chart is
maintained over a long period of time.

Work out from the central nodes, adding links and nodes until you run out of information from the
good sources.

Add nodes and links from other sources, constantly checking them against the information you
already have. Follow all leads, whether they are people, groups, things, or events, regardless of
source. Make note of the sources.

Stop in these cases: When you run out of information, when all the new links are dead ends, when
all the new links begin to turn in and start connecting with one another like a spider’s web, or when
you run out of time.

Update the chart and supporting documents regularly as new information becomes available, or as
you have time. Just a few minutes a day will pay enormous dividends.

Rearrange the nodes and links so that the links cross over one another as little as possible. This is
easier if you are using software. Many software packages can rearrange the nodes and links in
various ways.

Cluster the nodes. Do this by looking for “dense” areas of the chart and relatively “empty” areas.

Draw shapes around the dense areas. Use a variety of shapes, colors, and line styles to denote
different types of clusters, your relative confidence in the cluster, or any other criterion you deem
important.

“Cluster the clusters,” if you can, using the same method.

Label each cluster according to the common denominator among the nodes it contains. In doing
this, you will identify groups, events, activities, and/or key locations. If you have in mind a model for
groups or activities, you may be able to identify gaps in the chart by what is or is not present that
relates to the model.

Look for “cliques”—a group of nodes in which every node is connected to every other node in the
group, though not to many nodes outside the group. These groupings often look like stars or
pentagons. In the intelligence world, they often turn out to be clandestine cells. (A simple
programmatic way to spot such groupings is sketched after these steps.)

Look in the empty spaces for nodes or links that connect two clusters. Highlight these nodes with
shapes or colors. These nodes are brokers, facilitators, leaders, advisers, media, or some other key
connection that bears watching. They also point to where the network is susceptible to disruption.

Chart the flow of activities between nodes and clusters. You may want to use arrows and time
stamps. Some software applications will allow you to display dynamically how the chart has
changed over time.
Analyze this flow. Does it always go in one direction or in multiple directions? Are the same or
different nodes involved? How many different flows are there? What are the pathways? By asking
these questions, you can often identify activities, including indications of preparation for offensive
action and lines of authority. You can also use this knowledge to assess the resiliency of the
network. If one node or pathway were removed, would there be alternatives already built in?

Continually update and revise as nodes or links change.
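As a minimal illustration of the charting and clique-spotting steps above, the sketch below records a small link chart as a graph and lists its maximal cliques using the networkx library; the names and link types are hypothetical.

```python
import networkx as nx

# Hypothetical link data: pairs of people and the type of association observed between them.
links = [
    ("Omar", "Tariq", "phone"), ("Omar", "Nadia", "travel"), ("Tariq", "Nadia", "phone"),
    ("Nadia", "Karim", "finance"), ("Karim", "Leila", "phone"),
    ("Leila", "Samir", "phone"), ("Karim", "Samir", "travel"),
]

chart = nx.Graph()
for person_a, person_b, link_type in links:
    chart.add_edge(person_a, person_b, link_type=link_type)

# A clique is a group in which every node is connected to every other node in the group.
for clique in nx.find_cliques(chart):
    if len(clique) >= 3:
        print("possible cell:", sorted(clique))
```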

Network Analysis.
Figure 6.8b is a modified version of the 9/11 hijacker network depicted in Figure 6.8a. It identifies the
different types of clusters and nodes discussed under Network Analysis. Cells appear as star-like or
pentagon-like line configurations; potential cells with such configurations are circled; and the large
diamond surrounds a cluster of cells. Brokers are shown as nodes overlaid with small hexagons. Note
the broker in the center. This node has connections to all but one of the other brokers. This is a senior
leader: Al-Qaeda’s former head of operations in Europe, Imad Eddin Barakat Yarkas.


Figure 6.8B Social Network Analysis: September 11 Hijacker Key Nodes


Source: Based on Valdis Krebs, “Connecting the Dots: Tracking Two Identified Terrorists,” Figure 3, Orgnet.com, last
modified 2008, www.orgnet.com/tnet.html. Reproduced with permission of the author. With revisions by Cynthia Storer.

Social Network Analysis.
Social Network Analysis requires a specialized software application. It is important, however, for analysts to familiarize themselves with the basic process and measures and the specialized vocabulary used to describe position and function within the network. The following three types of centrality are illustrated in Figure 6.8c:

Degree centrality: This is measured by the number of direct connections that a node has with other
nodes. In the network depicted in Figure 6.8c, Deborah has the most direct connections. She is a
“connector” or a “hub” in this network.

Betweenness centrality: Helen has fewer direct connections than does Deborah, but she plays a
vital role as a “broker” in the network. Without her, Ira and Janice would be cut off from the rest of
the network. A node with high “betweenness” has great influence over what flows—or does not flow
—through the network.

Closeness centrality: Frank and Gary have fewer connections than does Deborah, yet the pattern of
their direct and indirect ties allows them to access all the nodes in the network more quickly than
anyone else. They are in the best position to monitor all the information that flows through the
network.
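For analysts who want to see how these measures are computed, the sketch below is a minimal illustration using the networkx library on a small hypothetical network; the names and links are illustrative and are not taken from Figure 6.8c.

```python
import networkx as nx

# Hypothetical network of people and their direct connections.
g = nx.Graph()
g.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cal"), ("Ana", "Dee"),
    ("Ben", "Cal"), ("Cal", "Dee"), ("Dee", "Eli"), ("Eli", "Fay"),
])

measures = {
    "degree": nx.degree_centrality(g),           # share of direct connections (hubs)
    "betweenness": nx.betweenness_centrality(g), # how often a node sits on shortest paths (brokers)
    "closeness": nx.closeness_centrality(g),     # how quickly a node can reach everyone else
}
for name, values in measures.items():
    top = max(values, key=values.get)
    print(f"{name} centrality: highest is {top} ({values[top]:.2f})")
```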
Potential Pitfalls
This method is extremely dependent upon having at least one good
source of information. It is hard to know when information may be
missing, and the boundaries of the network may be fuzzy and
constantly changing, in which case it is difficult to determine whom to
include. The constantly changing nature of networks over time can
cause information to become outdated. You can be misled if you do
not constantly question the data being entered, update the chart
regularly, and look for gaps and consider their potential significance.

You should never rely blindly on the SNA software but strive to
understand how the application works. As is the case with any
software, different applications measure different things in different
ways, and the devil is always in the details.
Origins of This Technique
This is an old technique transformed by the development of sophisticated software programs for
organizing and analyzing large databases. Each of the following sources has made significant
contributions to the description of this technique: Valdis E. Krebs, “Social Network Analysis, An
Introduction,” www.orgnet.com/sna.html; Krebs, “Uncloaking Terrorist Networks,” First Monday 7, no. 4
(April 1, 2002), https://firstmonday.org/ojs/index.php/fm/article/view/941/863;
Robert A. Hanneman, “Introduction to Social Network Methods,” Department of Sociology, University of
California, Riverside,
http://faculty.ucr.edu/~hanneman/nettext/C1_Social_Network_Data.html#Populations; Marilyn B.
Peterson, Defense Intelligence Agency, “Association Analysis,” unpublished manuscript, n.d., used with
permission of the author; Cynthia Storer and Averill Farrelly, Pherson Associates, LLC; and Pherson
Associates training materials.


Figure 6.8C Social Network Analysis


Source: Pherson Associates, LLC, 2019.
NOTES
1. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar,
Straus and Giroux, 2011), 95–96.

2. Herbert A. Simon, “A Behavioral Model of Rational Choice,” Quarterly Journal of Economics LXIX (February 1955): 99–101.

3. Katherine Hibbs Pherson and Randolph H. Pherson, Critical Thinking for Strategic Intelligence (Washington, DC: CQ Press/SAGE), 25–26, 41–42.

4. See www.mind-mapping.org for a comprehensive compendium of information on all types of software that supports knowledge management and information organization in graphic form. Many of these software products are available at no cost.

5. Tanja Keller, Sigmar-Olaf Tergan, and John Coffey, “Concept Maps Used as a ‘Knowledge and Information Awareness’ Tool for Supporting Collaborative Problem Solving in Distributed Groups,” Proceedings of the Second International Conference on Concept Mapping, San José, Costa Rica, September 5–8, 2006.

6. Tony Buzan, The Mind Map Book, 2nd ed. (London: BBC Books,
1995).

7. Joseph D. Novak and Alberto J. Cañas, The Theory Underlying Concept Maps and How to Construct and Use Them, Technical Report IHMC Cmap Tools 2006-01 (Pensacola, FL: Florida Institute for Human and Machine Cognition, 2006).

8. Marilyn B. Peterson, Defense Intelligence Agency, “Association Analysis,” unpublished manuscript, n.d., used with the permission of the author.
Descriptions of Images and Figures
Figure 6.5 (Starbursting Diagram of a Lethal Biological Event at a Subway Station)

Who? Has any group threatened such an attack? Could it have occurred by accident?

What? Exactly what agent was used? Was it lethal enough to cause
that many casualties?

How? How was the agent dispersed? Did the perpetrators have to
be present?

Where? Does the choice of this site send a particular message? Did
the location help in dispersing the agent?

When? Was the time of day significant? Was the date on a particular
anniversary?

Why? What was the intent? To incite terror, cause mass casualties,
or cause economic disruption? Did the attack help advance a
political agenda?

Figure 6.6A (Mind Map of Mind Mapping)

Mind maps are used in learning, meetings, education, problem solving, planning, and presentations. Learning: Recall, such as
reviewing and summarizing. Noting, such as lectures, books,
entertainment, and in media such as television, radio, newspapers,
and computers. Meetings: Negotiations. 1 on 1s. Groups. Chairing.
Summaries, such as disagreements, minutes, actions, and
agreements. Education: Relationships. Place, such as year, term,
and lessons. Exploration. Lecturing, such as preparing and
delivering. Problem: Definition. Identification. Solving individually, in
a group, and brainstorming. Planning: Creativity. Time. Personal
activities, such as holidays and parties. Organization of knowledge,
such as study and integrate, and projects. Presentation:
Preparations. Written, such as reports, essays, articles, exams, and
projects. Oral, such as informal, formal, talks, speeches, debates,
seminars, and selling.

Figure 6.6B (Concept Map of Concept Mapping)

Concept maps were invented by Joseph Novak’s research team, and represent organized knowledge, beliefs, and feelings. They differ
from rote memorization, and are expressed in the terms of concepts
and propositions. They are context dependent, and are necessary
for effective teaching, effective learning, collaboration, and
knowledge sharing. Concepts are defined as perceived regularities
in objects and events, and are given as labels, that can be words,
such as risk, hypothesis, and motive, and symbols, such as 57
percent. Propositions may be cross-links to show interrelations. They
are structured in a hierarchical morphology. Interrelations and
hierarchical morphology facilitate creativity, which is needed to see
interrelations. Hierarchical morphology arranges context and major
concepts at the top, and detailed knowledge at the bottom.

Figure 6.7A (Venn Diagram of Components of Critical Thinking)

The Venn diagram consists of three intersecting ellipses, labeled logic, data and evidence, and issue and customer identification. The
intersection of logic, and issue and customer identification is framing
and argumentation. The intersection of logic, and data and evidence
is gaps and assumptions. The intersection of data and evidence, and
issue and customer identification is source credibility. The
intersection of all the ellipses is synthesis and analysis.

Figure 6.7B (Venn Diagram of Invalid and Valid Arguments)

In the first diagram, two ellipses, labeled cats and dogs, are within a
large ellipse, labeled mammals. Text reads, “Cats are mammals.
Dogs are mammals. Therefore, dogs are cats.”

In the second diagram, an ellipse, labeled tigers, is within another ellipse, labeled cats, which in turn is within a larger ellipse, labeled mammals. Text reads, “Cats are mammals. Tigers are cats. Therefore, tigers are mammals.”

Figure 6.8A (Social Network Analysis: The September 11 Hijackers)

The interconnections are among the hijackers in flight AA number 11 that crashed into WTC north, flight AA number 77 that crashed into the Pentagon, flight UA number 93 that crashed into a field in Pennsylvania, flight UA number 175 that crashed into WTC south, and their associates.

Figure 6.8B (Social Network Analysis: September 11 Hijacker Key Nodes)

The interconnections are among the hijackers in flight AA number 11 that crashed into WTC north, flight AA number 77 that crashed into the Pentagon, flight UA number 93 that crashed into a field in Pennsylvania, flight UA number 175 that crashed into WTC south, and their associates. They are grouped into four small clusters, three mega clusters, cells, and brokers, facilitators, or leaders.

Figure 6.8C (Social Network Analysis)

Deborah is connected to Aaron, Craig, Barbara, Everett, Gary, and Frank. Aaron is connected to Sarah, Craig, Frank, and Barbara. Craig is connected to Frank. Barbara is connected to Gary and Everett. Everett is connected to Gary. Gary is connected to Frank, Helen, and Hank. Frank is connected to Helen and Hank. Hank is connected to Nora. Helen is connected to Ira, who in turn is connected to Janice. Accompanying table shows the type of centrality with example. Degree: Deborah. Betweenness: Helen. Closeness: Frank and Gary.
CHAPTER 7 DIAGNOSTIC TECHNIQUES

7.1 Key Assumptions Check

7.2 Chronologies and Timelines

7.3 Cross-Impact Matrix

7.4 Multiple Hypothesis Generation

7.4.1 The Method: Simple Hypotheses

7.4.2 The Method: Quadrant Hypothesis Generation

7.4.3 The Method: Multiple Hypotheses Generator®

7.5 Diagnostic Reasoning

7.6 Analysis of Competing Hypotheses

7.7 Inconsistencies Finder™

7.8 Deception Detection

7.9 Argument Mapping

Analysis conducted by the intelligence, law enforcement, and business communities will never achieve the accuracy and predictability of a true science because the information with which analysts must work is typically incomplete, ambiguous, and potentially deceptive. The analytic process can, however, benefit from the lessons of science and adapt some of the elements of scientific reasoning.

The scientific process involves observing, categorizing, formulating hypotheses, and then testing those hypotheses. Generating and testing hypotheses is a core function of structured analysis. A possible explanation of the past or a judgment about the future is a hypothesis that needs to be tested by collecting and presenting evidence. This chapter focuses on several key techniques that support the Diagnostic Reasoning process, including challenging key assumptions about what the information reveals, developing Chronologies and Timelines, generating alternative hypotheses, and testing the validity of hypotheses and the quality of argumentation. Practice in using three of the techniques—Key Assumptions Check, Multiple Hypothesis Generation, and Analysis of Competing Hypotheses—will help analysts become proficient in the first three of the Five Habits of the Master Thinker (see chapter 3): challenging assumptions, generating alternative explanations, and identifying inconsistent evidence.

The generation and testing of hypotheses is a skill, and its subtleties do not come naturally. It is a form of reasoning that people can learn to use for dealing with high-stakes situations. What does come naturally is drawing on our existing body of knowledge and experience (mental model) to make an intuitive judgment.1 In most circumstances in our daily lives, this is an efficient approach that works most of the time. For intelligence analysis, however, it is not adequate, because intelligence issues are generally so complex, and the risk and cost of error are too great. Also, the situations are often novel, so the intuitive judgment shaped by past knowledge and experience may well be wrong.

Good analysis of a complex issue must start with a set of alternative hypotheses. Another practice that the experienced analyst borrows from the scientist’s toolkit involves the testing of alternative hypotheses. The truth of a hypothesis can never be proven beyond doubt by citing only evidence that is consistent with the hypothesis, because the same evidence may be and often is consistent with more than one hypothesis. Science often proceeds by refuting or disconfirming hypotheses. A hypothesis that cannot be refuted should be taken just as seriously as a hypothesis that seems to have a lot of evidence in favor of it. A single item of evidence that is shown to be inconsistent with a hypothesis can be grounds for rejecting that hypothesis. The most tenable hypothesis is often the one with the least evidence against it.

Analysts often test hypotheses by using a form of reasoning known as abduction, which differs from the two better known forms of reasoning, deduction and induction. Abductive reasoning starts with a set of facts. One then develops hypotheses that, if true, would provide the best explanation for these facts. The most tenable hypothesis is the one that best explains the facts. Because of the uncertainties inherent in intelligence analysis, conclusive proof or refutation of hypotheses is the exception rather than the rule.

Use of Diagnostic Techniques can provide a strong antidote to several cognitive biases. It can reduce the influence of Confirmation Bias by exposing analysts to new ideas and multiple permutations, and mitigate the impact of Evidence Acceptance Bias, which is accepting data as true because it helps create a more coherent story. Diagnostic Techniques also protect analysts against falling into the intuitive traps of Relying on First Impressions, Ignoring Inconsistent Evidence, and Projecting Past Experiences.

The first part of this chapter describes techniques for challenging key
assumptions, establishing analytic baselines, and identifying the
relationships among the key variables, drivers, or players that may
influence the outcome of a situation. These and similar techniques
allow analysts to imagine new and alternative explanations for their
subject matter.

The second section describes three techniques for generating hypotheses. Other chapters include additional techniques that can generate hypotheses but also serve a variety of other
purposes. These include Cluster Brainstorming, Nominal Group
Technique, and Venn Analysis (chapter 6); the Delphi Method and
Classic Quadrant Crunching™ (chapter 8); various forms of
Foresight analysis (chapter 9); and Critical Path Analysis and
Decision Trees (chapter 10).

This chapter concludes with a discussion of five techniques for testing hypotheses, detecting deception, and evaluating the strength
of an argument. These techniques spur the analyst to become more
sensitive to the quality of the data and the strength of the logic and to
look for information that not only confirms but can disconfirm the
hypothesis. One of these, Analysis of Competing Hypotheses (ACH),
was developed by Richards J. Heuer Jr. specifically for use in
intelligence analysis.
OVERVIEW OF TECHNIQUES
Key Assumptions Check is one of the most important and
frequently used techniques. Analytic judgment is always based on a
combination of evidence and assumptions—or preconceptions—that
influence how the evidence is interpreted. The Key Assumptions
Check is a systematic effort to make explicit and question the
assumptions (i.e., mental model) that guide an analyst’s thinking.

Chronologies and Timelines are used to organize data on events or actions. They are used whenever it is important to understand the
timing and sequence of relevant events or to identify key events and
gaps.

Cross-Impact Matrix is a technique that can be used after any form of brainstorming that identifies a list of variables relevant to an
analytic project. The results of the brainstorming session are put into
a matrix, which is used to guide a group discussion that
systematically examines how each variable influences all other
variables to which it is related in a particular problem context. The
group discussion is often a valuable learning experience that
provides a foundation for further collaboration. Results of cross-
impact discussions should be retained for future reference as a
cross-check after the analysis is completed.

Multiple Hypothesis Generation can be accomplished in many ways. This book describes three techniques—Simple Hypotheses, Quadrant Hypothesis Generation, and the Multiple Hypotheses Generator®. Simple Hypotheses is the easiest to use but not always
the best selection. Quadrant Hypothesis Generation is used to
identify a set of hypotheses when the outcome is likely to be
determined by just two driving forces. Multiple Hypothesis
Generation is used to identify a large set of possible hypotheses.
The latter two techniques are particularly useful in identifying sets of
Mutually Exclusive and Comprehensively Exhaustive (MECE)
hypotheses.

Diagnostic Reasoning applies hypothesis testing to the evaluation of significant new information. Such information is evaluated in the
context of all plausible explanations of that information, not just in the
context of the analyst’s well-established mental model. The use of
Diagnostic Reasoning reduces the risk of surprise, as it ensures that
an analyst will have considered some alternative conclusions.
Diagnostic Reasoning differs from the ACH technique in that it
evaluates a single item of evidence; ACH deals with an entire issue
involving multiple pieces of evidence and a more complex analytic
process.

Analysis of Competing Hypotheses is the application of Karl Popper's philosophy of science to the field of intelligence analysis.2
Popper was one of the most influential philosophers of science of the
twentieth century. He is known for, among other things, his position
that scientific reasoning should start with multiple hypotheses and
proceed by rejecting or eliminating hypotheses, tentatively accepting
only those hypotheses that cannot be refuted. This process forces
an analyst to recognize the full uncertainty inherent in most analytic
situations. ACH helps the analyst sort and manage relevant
information to identify paths for reducing that uncertainty.

The Inconsistencies Finder™ is a simplified version of ACH that helps analysts evaluate the relative credibility of a set of hypotheses
based on the amount of identified disconfirming information. It
provides a quick framework for identifying inconsistent data and
discovering the hypotheses that are most likely to be correct.

Deception Detection employs a set of checklists analysts can use to determine when to anticipate deception, how to determine if one is
being deceived, and what to do to avoid being deceived. It is also
useful for detecting the presence of Digital Disinformation or “Fake
News.” The possibility of deception by a foreign intelligence service,
economic competitor, or other adversary organization is a distinctive
type of hypothesis that can be included in any ACH analysis.
Information identified through the Deception Detection technique can
then be entered as relevant information in an ACH matrix.

Argument Mapping is a method that can be used to put a single hypothesis to a rigorous logical test. The structured visual
representation of the arguments and evidence makes it easier to
evaluate any analytic judgment. Argument Mapping is a logical
follow-on to an ACH analysis. It is a detailed presentation of the
arguments for and against a single hypothesis; ACH is a more
general analysis of multiple hypotheses. The successful application
of Argument Mapping to the hypothesis favored by the ACH analysis
would increase confidence in the results of both analyses.
7.1 KEY ASSUMPTIONS CHECK
Analytic judgment is always based on a combination of evidence and
assumptions, or preconceptions, which influence how the evidence
is interpreted.3 The Key Assumptions Check is a systematic effort to
make explicit and question the assumptions (the mental model) that
guide an analyst’s interpretation of evidence and reasoning about a
problem. Such assumptions are usually necessary and unavoidable
as a means to fill gaps in the incomplete, ambiguous, and
sometimes deceptive information with which the analyst must work.
They are driven by the analyst’s education, training, and experience,
plus the organizational context in which the analyst works.

An organization really begins to learn when its most cherished assumptions are challenged by
counterassumptions. Assumptions underpinning existing
policies and procedures should therefore be unearthed, and
alternative policies and procedures put forward based upon
counterassumptions.
—Ian I. Mitroff and Richard O. Mason, Creating a Dialectical Social
Science: Concepts, Methods, and Models (1981)

The Key Assumptions Check is one of the most common techniques used by intelligence analysts because they typically need to make
assumptions to fill information gaps. In the intelligence world, these
assumptions are often about another country’s intentions or
capabilities, the way governmental processes usually work in that
country, the relative strength of political forces, the trustworthiness or
accuracy of key sources, the validity of previous analyses on the
same subject, or the presence or absence of relevant changes in the
context in which the activity is occurring. Assumptions are often
difficult to identify because many sociocultural beliefs are held
unconsciously or so firmly that they are assumed to be truth and not
subject to challenge.
When to Use It
Any explanation of current events or estimate of future developments
requires the interpretation of evidence. If the available evidence is
incomplete or ambiguous, this interpretation is influenced by
assumptions about how things normally work in the country or
company of interest. These assumptions should be made explicit
early in the analytic process.

If a Key Assumptions Check is not done at the outset of a project, it can still prove extremely valuable if done during the coordination
process or before conclusions are presented or delivered. When a
Key Assumptions Check is done early in the process, it is often
desirable to review the assumptions again later—for example, just
before or just after drafting the report. The task is to determine
whether the assumptions still hold true or should be modified.

When tracking the same topic or issue over time, analysts should
consider reassessing their key assumptions on a periodic basis,
especially following a major new development or surprising event. If,
on reflection, one or more key assumptions no longer appear to be
well-founded, analysts should inform key policymakers or corporate
decision makers working that target or issue that a foundational
construct no longer applies or is at least doubtful.
Value Added
Preparing a written list of one’s working assumptions at the
beginning of any project helps the analyst do the following:

Identify the specific assumptions that underpin the basic analytic line.

Achieve a better understanding of the fundamental dynamics at play.

Gain a better perspective and stimulate new thinking about the issue.

Discover hidden relationships and links among key factors.

Identify what developments would call a key assumption into question.

Avoid surprises should new information render old assumptions invalid.

A Key Assumptions Check helps analysts mitigate the impact of heuristics that, when misapplied, can impede analytic thinking,
including the tendency to accept a given value of an assumption or
something unknown as a proper starting point for generating an
assessment (Anchoring Effect), reaching an analytic judgment
before sufficient information is collected and proper analysis
performed (Premature Closure), and judging the frequency of an
event by the ease with which instances come to mind (Availability
Heuristic). It also safeguards an analyst against several classic
mental mistakes, including the tendency to overdraw conclusions when presented with only a small amount of data (Overinterpreting Small Samples), to assume the same dynamic is in play when something appears to be in accord with past experiences (Projecting Past Experiences), and to fail to factor something into the analysis because the analyst lacks an appropriate category or "bin" for that item of information (Lacking Sufficient Bins).

Conducting a Key Assumptions Check gives analysts a better understanding of the suppositions underlying their key judgments or
conclusions. Doing so helps analysts establish how confident they
should be in making their assessment and disseminating their key
findings.
The Method
The process of conducting a Key Assumptions Check is relatively
straightforward in concept but often challenging to put into practice.
One challenge is that participating analysts must be open to the
possibility that they could be wrong. It helps to involve in the process several well-regarded analysts who are generally familiar with the topic but have no prior commitment to any set of assumptions about the issue. Engaging a facilitator is also highly recommended. Keep in
mind that many “key assumptions” turn out to be “key uncertainties.”
Randolph Pherson’s extensive experience as a facilitator of analytic
projects indicates that approximately one in every four key
assumptions collapses on careful examination.

The following are steps in conducting a Key Assumptions Check:

Gather a small group of individuals who are working the issue along with a few "outsiders." The primary analytic unit already is working from an established mental model, so the "outsiders" are needed to bring a different perspective.

Ideally, the facilitator should notify participants about the topic beforehand and ask them to bring to the meeting a list of assumptions they make about the topic. If they do not do this beforehand, start the meeting with a silent brainstorming session by asking each participant to write down several assumptions on an index card.

Collect the cards and list the assumptions on a whiteboard or easel for all to see.

Elicit additional assumptions. Work from the prevailing analytic line to identify additional arguments that support it. Use various devices to help prod participants' thinking:
Ask the standard journalistic questions. Who: Are we
assuming that we know who all the key players are? What:
Are we assuming that we know the goals of the key
players? How: Are we assuming that we know how they are
going to act? When: Are we assuming that conditions have
not changed since our last report or that they will not
change in the foreseeable future? Where: Are we assuming
that we know where the real action is going to be? Why:
Are we assuming that we understand the motives of the key
players?

Use of phrases such as “will always,” “will never,” or “would


have to be” suggests that an idea is not being challenged.
Perhaps it should be.

Use of phrases such as “based on” or “generally the case”


suggests the presence of a challengeable assumption.

When the flow of assumptions starts to slow down, ask,


“What else seems so obvious that one would not normally
think about challenging it?” If no one can identify more
assumptions, then there is an assumption that they do not
exist, which itself is an assumption subject to challenge.

After identifying a full set of assumptions, critically examine each assumption and ask,

Why am I confident that this assumption is correct?

In what circumstances might this assumption be untrue?

Could it have been true in the past but not any longer?

How much confidence do I have that this assumption is valid?

If it turns out to be invalid, how much impact would this have on the analysis?

Place each assumption in one of three categories:

Basically solid (S)

Correct with some caveats (C)

Unsupported or questionable—the “key uncertainties” (U)

Refine the list. Delete assumptions that do not hold up to scrutiny and add new ones that emerge from the discussion. If an assumption generates a lot of discussion, consider breaking it into two assumptions or rephrasing it to make the statement more explicit. Above all, emphasize those assumptions that would, if wrong, lead to changing the analytic conclusions.

Consider whether key uncertainties should be converted into intelligence collection requirements or research topics.

When concluding the analysis, remember that the probability of your analytic conclusion being accurate cannot be greater than the
weakest link in your chain of reasoning. Review your assumptions,
assess the quality of evidence and reliability of sources, and
consider the overall difficulty and complexity of the issue. Then make
a rough estimate of the probability that your analytic conclusion will
turn out to be wrong. Use this number to calculate the rough
probability of your conclusion turning out to be accurate. For
example, a three in four chance (75 percent) of being right equates
to a one in four chance (25 percent) of being wrong. This focus on
how and why we might be wrong is needed to offset the natural
human tendency toward reluctance to admit we might be wrong.
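The arithmetic behind this "weakest link" reasoning can be illustrated with a short script. The following Python sketch is purely illustrative; the assumption statements, probability estimates, and S/C/U category codes are hypothetical placeholders that an analyst would replace with the results of an actual Key Assumptions Check.

# Minimal sketch: bounding confidence in a conclusion by its key assumptions.
# The assumptions, probabilities, and S/C/U categories below are hypothetical.
assumptions = [
    ("The regime controls all launch facilities", 0.90, "S"),   # basically solid
    ("Source X reports accurately on missile tests", 0.75, "C"),  # correct with caveats
    ("No third country is supplying components", 0.50, "U"),    # key uncertainty
]

# The conclusion can be no more likely than the weakest link in the chain.
weakest = min(p for _, p, _ in assumptions)

# If the assumptions were independent, the joint probability would be lower still.
joint = 1.0
for _, p, _ in assumptions:
    joint *= p

print(f"Upper bound set by weakest assumption: {weakest:.0%}")
print(f"Rough joint probability if assumptions are independent: {joint:.0%}")
print(f"Chance of being wrong is therefore at least: {1 - weakest:.0%}")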
Figure 7.1 shows apparently flawed assumptions made in the Wen
Ho Lee espionage case during the 1990s and what further
investigation showed about these assumptions. A Key Assumptions
Check could have identified weaknesses in the case against Lee
much earlier.
Relationship to Other Techniques
The Key Assumptions Check is frequently paired with other techniques because assumptions play an
important role in all structured analytic efforts. It is important to get them right. For example, when an
assumption is critical to an analysis, and questions remain about the validity of that assumption, it may
be desirable to follow the Key Assumptions Check with a What If? Analysis. Imagine a future (or a
present) in which the assumption is wrong. What could have happened to make it wrong, how could that
have happened, and what are the consequences?

Figure 7.1 Key Assumptions Check: The Case of Wen Ho Lee
Source: Pherson Associates, LLC, 2019.

There is a particularly noteworthy interaction between Key Assumptions Check and ACH. Key
assumptions need to be included as “evidence” in an ACH matrix to ensure that the matrix is an
accurate reflection of the analyst’s thinking. Assumptions often emerge during a discussion of relevant
information while filling out an ACH matrix. This happens when an analyst assesses the consistency or
inconsistency of an item of evidence with a hypothesis and concludes that the designation is dependent
upon something else—usually an assumption. Classic Quadrant Crunching™ (chapter 8) and Simple
Scenarios, the Cone of Plausibility, and Reversing Assumptions (chapter 9) all use assumptions and
their opposites to generate multiple explanations or outcomes.
Origins of This Technique
Although assumptions have been a topic of analytic concern for a
long time, the idea of developing a specific analytic technique to
focus on assumptions did not occur until the late 1990s. The
discussion of Key Assumptions Check in this book is from Randolph
H. Pherson, Handbook of Analytic Tools and Techniques, 5th ed.
(Tysons, VA: Pherson Associates, LLC, 2019).
7.2 CHRONOLOGIES AND TIMELINES
A Chronology is a list that places events or actions in the order in
which they occurred, usually in narrative or bulleted format. A
Timeline is a graphic depiction of those events put in context of the
time of the events and the time between events. Both are used to
identify trends or relationships among the events or actions and, in
the case of a Timeline, among the events and actions as well as
other developments in the context of the overarching intelligence
problem.
When to Use It
Chronologies and Timelines aid in organizing events or actions. The
techniques are useful whenever analysts need to understand the
timing and sequence of relevant events. They can also reveal
significant events or important gaps in the available information. The
events may or may not have a cause-and-effect relationship.

Chronologies and Timelines are usually developed at the outset of an analytic task to ascertain the context of the activity under review.
They can be used in postmortems to break down the stream of
reporting, find the causes for analytic failures, and highlight
significant events after an intelligence or business surprise.
Chronologies and Timelines are also useful for organizing
information in a format that can be readily understood in a briefing or
when presenting evidence to a jury.
Value Added
Chronologies and Timelines help analysts identify patterns and
correlations among events. Analysts can use them to relate
seemingly disconnected events to the big picture; to highlight or
identify significant changes; or to assist in the discovery of trends,
developing issues, or anomalies. They can serve as a catchall for
raw data when the meaning of the data is not yet clear. Multiple-level
Timelines allow analysts to track concurrent events that may affect
one another.

The activities on a Timeline can lead an analyst to hypothesize the existence of previously unknown events. In other words, the series of
known events may make sense only if other previously unknown
events had occurred. The analyst can then look for other indicators
of those missing events.

Chronologies and Timelines are useful tools analysts can use to counter the impact of cognitive biases and heuristics, including
accepting data as true without assessing its credibility because it
helps “make the case” (Evidence Acceptance Bias), seeing patterns
in random events as systematic and part of a coherent story (Desire
for Coherence and Uncertainty Reduction), and providing quick and
easy answers to difficult problems (Mental Shotgun). They can also mitigate the impact of several intuitive traps, including giving too
much weight to first impressions or initial data that attracts our
attention at the time (Relying on First Impressions), not paying
sufficient attention to the impact of the absence of information
(Ignoring the Absence of Information), and discarding or ignoring
information that is inconsistent with what one would expect to see
(Ignoring Inconsistent Evidence).
The Method
Chronologies and Timelines are effective yet simple ways to order
incoming information when processing daily message traffic. A
Microsoft Word document or an Excel spreadsheet can log the
results of research and marshal evidence. Tools such as the Excel
drawing function or Analyst's Notebook can be helpful in drawing the
Timeline. Follow these steps:

When researching the problem, ensure that the relevant information is listed with the date or order in which it occurred. It is important to properly reference the data to help uncover potential patterns or links. Be sure to distinguish between the date the event occurred and the date the report was received.

Review the Chronology or Timeline by asking the following questions:

What are the temporal distances between key events? If "lengthy," what caused the delay? Are there missing pieces of data that should be collected to fill those gaps?

Was information overlooked that may have had an impact on, or be related to, the events?

Conversely, if events seem to have happened more rapidly than expected, or if some of the events do not appear to be related, is it possible that the analyst has information related to multiple event Timelines?

Does the Timeline have all the critical events that are
necessary for the outcome to occur?
When did the information become known to the analyst or a
key player?

Are there information or intelligence gaps?

Are there any points along the Timeline when the target is
particularly vulnerable to the collection of intelligence or
information or countermeasures?

What events outside this Timeline could have influenced the activities?

If preparing a Timeline, synopsize the data along a horizontal or vertical line. Use the space on both sides of the line to highlight
important analytic points. For example, place facts above the
line and points of analysis or commentary below the line.
Alternatively, contrast the activities of different groups,
organizations, or streams of information by placement above or
below the line. If multiple actors are involved, you can use
multiple lines, showing how and where they converge. For
example, multiple lines could be used to show (1) the target’s
activities, (2) open source reporting about the events, (3)
supplemental classified or proprietary information, and (4)
analytic observations or commentary.

Look for relationships and patterns in the data connecting persons, places, organizations, and other activities. Identify gaps or unexplained time periods, and consider the implications of the absence of evidence.

Prepare a summary chart detailing key events and key analytic points in an annotated Timeline.
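For analysts who prefer to keep the log in code rather than a spreadsheet, the bookkeeping can be sketched in a few lines. The events, dates, and sources in the following Python fragment are hypothetical; the point is simply to record the event date separately from the report date, sort on the former, and flag gaps worth examining.

# Minimal chronology sketch; all events, dates, and sources are hypothetical.
from datetime import date

events = [
    {"occurred": date(2019, 2, 14), "reported": date(2019, 2, 20),
     "source": "imagery", "event": "TEL observed at primary test facility"},
    {"occurred": date(2019, 3, 2), "reported": date(2019, 3, 3),
     "source": "open source", "event": "Notice to airmen closes offshore corridor"},
    {"occurred": date(2019, 1, 30), "reported": date(2019, 2, 25),
     "source": "liaison", "event": "Fuel shipments arrive at secondary facility"},
]

# Sort by when the event occurred, not by when it was reported.
chronology = sorted(events, key=lambda e: e["occurred"])

previous = None
for e in chronology:
    gap = (e["occurred"] - previous).days if previous else 0
    flag = "  <-- gap worth examining" if gap > 14 else ""
    print(f"{e['occurred']}  ({e['source']:>11})  {e['event']}{flag}")
    previous = e["occurred"]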
Potential Pitfalls
In using Timelines, analysts may assume, incorrectly, that events following earlier events were caused
by the earlier events. Also, the value of this technique may be reduced if the analyst lacks imagination in
identifying contextual events that relate to the information in the Chronology or Timeline.


Figure 7.2 Timeline Estimate of Missile Launch Date
Source: Pherson Associates, LLC, 2019.

Note: A TEL is a transporter, erector, and launcher for missiles.


Example
A team of analysts working on strategic missile forces knows what
steps are necessary to prepare for and launch a nuclear missile.
(See Figure 7.2.) The analysts have been monitoring a country they
believe is close to testing a new variant of its medium-range surface-
to-surface ballistic missile. They have seen the initial steps of a test
launch in mid-February and decide to initiate a concentrated watch
of the primary and secondary test launch facilities. Observed and
expected activities are placed into a Timeline to gauge the potential
dates of a test launch. The analysts can thus estimate when a
possible missile launch may occur and make decision makers aware
of indicators of possible activity.
Origins of This Technique
Chronologies and Timelines are well-established techniques used in
many fields. The information here is from M. Jones, “Sorting,
Chronologies, and Timelines,” The Thinker’s Toolkit (New York:
Three Rivers Press, 1998), chapter 6; and from Pherson Associates
training materials.
7.3 CROSS-IMPACT MATRIX
The Cross-Impact Matrix helps analysts deal with complex problems
when “everything is related to everything else.” By using this
technique, analysts and decision makers can systematically examine
how each factor in a context influences all other factors to which it
appears related.
When to Use It
The Cross-Impact Matrix is useful early in a project when a group is
still in a learning mode trying to sort out a complex situation.
Whenever a brainstorming session or other meeting is held to
identify all the variables, drivers, or players that may influence the
outcome of a situation, the next logical step is to use a Cross-Impact
Matrix to examine the interrelationships among each of these
variables. A group discussion of how each pair of variables interacts
can be an enlightening learning experience and a good basis on
which to build ongoing collaboration. How far one goes in completing
the matrix and producing a description of the effects associated with
each variable may vary depending upon the nature and significance
of the project. At times, just the discussion is adequate.

Analysis of cross-impacts is useful when the following occurs:

A situation is in flux, and analysts need to understand all the factors that might influence the outcome. This requires understanding how all the factors relate to one another, and how they might influence one another.

A situation is stable, and analysts need to identify and monitor all the factors that could upset that stability. This, too, requires understanding how the various factors might interact to influence one another.

A significant event has just occurred, and analysts need to understand the implications of the event. What other significant forces are influenced by the event, and what are the implications of this influence?
Value Added
When analysts are estimating or forecasting future events, they
consider the dominant forces and potential future events that might
influence an outcome. They then weigh the relative influence or
likelihood of these forces or events, often considering them
individually without regard to potentially important interactions. The
Cross-Impact Matrix provides a context for the discussion of these
interactions. This discussion often reveals that variables or issues
once believed to be simple and independent are interrelated. The
sharing of information during a discussion of each potential cross-
impact can provide an invaluable learning experience. For this
reason alone, the Cross-Impact Matrix is a useful tool that can be
applied at some point in almost any study that seeks to explain
current events or forecast future outcomes.

The Cross-Impact Matrix provides a structure for managing the complexity that makes most analysis so difficult. It requires that
analysts clearly articulate all assumptions about the relationships
among variables. Doing so helps analysts defend or critique their
conclusions by tracing the analytical argument back through a path
of underlying premises.

Use of the Cross-Impact Matrix is particularly effective in helping analysts avoid being influenced by heuristics such as stopping the
search for a cause when a seemingly satisfactory answer is found
(Premature Closure), selecting the first answer that appears “good
enough” (Satisficing), and seeing patterns in random events as
systematic and part of a coherent world (Desire for Coherence and
Uncertainty Reduction). It can also provide a powerful antidote to
several intuitive pitfalls, including overinterpreting conclusions from a
small sample of data (Overinterpreting Small Samples), giving too
much weight to first impressions or initial data that appears important
at the time (Relying on First Impressions), and continuing to hold to a
judgment when confronted with additional or contradictory evidence
(Rejecting Evidence).
The Method
Assemble a group of analysts knowledgeable on various aspects of the subject. The group brainstorms
a list of variables or events that would likely have some effect on the issue being studied. The project
coordinator then creates a matrix and puts the list of variables or events down the left side of the matrix
and the same variables or events across the top.

The group then fills out the matrix, considering, and then recording, the relationship between each
variable or event and every other variable or event. For example, does the presence of Variable 1
increase or diminish the influence of Variables 2, 3, 4, and so on? Or does the occurrence of Event 1
increase or decrease the likelihood of Events 2, 3, 4, and so forth? If one variable does affect the other,
the positive or negative magnitude of this effect can be recorded in the matrix by entering a large or
small + or a large or small – in the appropriate cell (or by making no marking at all if there is no
significant effect). The terminology used to describe the relationship between each pair of variables or
events is based on whether it is “enhancing,” “inhibiting,” or “unrelated.”

The matrix shown in Figure 7.3 has six variables, with thirty possible interactions. Note that the
relationship between each pair of variables is assessed twice, as the relationship may not be symmetric.
That is, the influence of Variable 1 on Variable 2 may not be the same as the impact of Variable 2 on
Variable 1. It is not unusual for a Cross-Impact Matrix to have substantially more than thirty possible
interactions, in which case careful consideration of each interaction can be time-consuming.


Figure 7.3 Cross-Impact Matrix

Analysts should use the Cross-Impact technique to focus on significant interactions between variables
or events that may have been overlooked, or combinations of variables that might reinforce one another.
Combinations of variables that reinforce one another can lead to surprisingly rapid changes in a
predictable direction. On the other hand, recognizing that there is a relationship among variables and
the direction of each relationship may be sufficient for some problems.

The depth of discussion and the method used for recording the results are discretionary. Each depends
upon how much you are learning from the discussion, which will vary from one application of this matrix
to another. If the group discussion of the likelihood of these variables or events and their relationships to
one another is a productive learning experience, keep it going. If key relationships are identified that are
likely to influence the analytic judgment, fill in all cells in the matrix and take good notes. If the group
does not seem to be learning much, cut the discussion short.
As a collaborative effort, team members can conduct their discussion online and periodically review their key findings there. As time permits, analysts can enter new information or edit previously entered
information about the interaction between each pair of variables. This record will serve as a point of
reference or a memory jogger throughout the project.
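One simple way to keep such a record electronically is a nested table keyed by the influencing and influenced variables. The Python sketch below is illustrative only; the variable names and ratings are hypothetical, and the scan for mutually reinforcing pairs reflects the point made above that reinforcing combinations can drive rapid change.

# Minimal cross-impact recording sketch; the variables and ratings are hypothetical.
# Ratings: +2 strongly enhancing, +1 enhancing, 0 unrelated, -1 inhibiting, -2 strongly inhibiting.
variables = ["unrest", "food prices", "crackdown"]
impact = {v: {w: 0 for w in variables if w != v} for v in variables}

impact["food prices"]["unrest"] = +2   # rising food prices strongly enhance unrest
impact["unrest"]["food prices"] = +1   # unrest disrupts supply and pushes prices up
impact["unrest"]["crackdown"] = +2     # unrest makes a crackdown more likely
impact["crackdown"]["unrest"] = -1     # a crackdown inhibits unrest (note: not symmetric)

# Flag mutually reinforcing pairs, which can drive surprisingly rapid change.
for v in variables:
    for w in variables:
        if v < w and impact[v][w] > 0 and impact[w][v] > 0:
            print(f"{v} and {w} reinforce each other "
                  f"({impact[v][w]:+d} / {impact[w][v]:+d})")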
Relationship to Other Techniques
Matrices as a generic technique with many types of applications are
discussed in chapter 5. The use of a Cross-Impact Matrix as
described here frequently follows some form of brainstorming at the
start of an analytic project. Elicit the assistance of other
knowledgeable analysts in exploring all the relationships among the
relevant factors identified in the brainstorming session. Analysts can
build on the discussion of the Cross-Impact Matrix by developing a
visual Mind Map or Concept Map of all the relationships.

See also the discussion of the Complexity Manager technique in chapter 10. An integral part of the Complexity Manager technique is
a form of Cross-Impact Analysis that takes the analysis a step further
toward an informed conclusion.
Origins of This Technique
The Cross-Impact Matrix technique was developed in the 1960s as
one element of a quantitative futures analysis methodology called
Cross-Impact Analysis. Richards J. Heuer Jr. became familiar with it
when the CIA was testing the Cross-Impact Analysis methodology.
He started using it as an intelligence analysis technique, as
described here, more than forty years ago. For simple instructions
for using the Cross-Impact Matrix and printable templates, go to
http://discoveryoursolutions.com/toolkit/cross_impact_matrix.html.
7.4 MULTIPLE HYPOTHESIS
GENERATION
In broad terms, a hypothesis is a potential explanation or conclusion
that is to be tested by collecting and analyzing evidence. It is a
declarative statement that has not been established as true—an
“educated guess” based on observation to be supported or refuted
by more observation or through experimentation.

A good hypothesis should satisfy the following criteria represented by the mnemonic STOP:

Statement, not a question.

Testable and falsifiable.

Observation- and knowledge-based.

Predicts anticipated results clearly.

Hypothesis Generation should be an integral part of any rigorous analytic process because it helps the analyst think broadly and
creatively about a range of possibilities and avoid being surprised
when common wisdom turns out to be wrong. The goal is to develop
a list of hypotheses that can be scrutinized and tested over time
against existing relevant information as well as new data that may
become available in the future. Analysts should strive to make the
hypotheses mutually exclusive and the list as comprehensively
exhaustive as possible—thereby satisfying the imperative that
hypotheses should be Mutually Exclusive and Comprehensively
Exhaustive (MECE).

There are many techniques used to generate hypotheses, including techniques discussed elsewhere in this book, such as Venn
Analysis, Cluster Brainstorming, several forms of Foresight analysis,
Classic Quadrant Crunching™, Starbursting, Delphi Method, and
Decision Trees. This section discusses techniques developed
specifically for hypothesis generation and then presents the method
for three different techniques: Simple Hypotheses, Quadrant
Hypothesis Generation, and the Multiple Hypotheses Generator®.
When to Use It
Gaining confidence in a hypothesis is not solely a function of accumulating evidence in its favor but also of showing that situations that could establish its falsity do not, in fact, happen. Analysts should
develop multiple hypotheses at the start of a project when the
following occurs:

the importance of the subject matter requires a systematic analysis of all alternatives,

many factors or variables are involved in the analysis,

a high level of uncertainty exists about the outcome,

analysts or decision makers hold competing views.

Simple Hypotheses is often used to broaden the spectrum of plausible hypotheses. It utilizes Cluster Brainstorming to create
potential hypotheses based on affinity groups. Quadrant Hypothesis
Generation works best when the problem has two key drivers. In
these circumstances, a 2-×-2 matrix is adequate for creating
hypotheses that reflect the situations posited in each quadrant. The
Multiple Hypotheses Generator® is particularly helpful when there is
a reigning lead hypothesis.
Value Added
Multiple Hypothesis Generation provides a structured way to
generate a comprehensive set of mutually exclusive hypotheses.
This can increase confidence that an important hypothesis has not
been overlooked. It can also help reduce cognitive biases, such as
seeking only the information that is consistent with the lead
hypothesis (Confirmation Bias), accepting a given value of
something or a lead hypothesis as a proper—or the only—starting
point for conducting an analysis (Anchoring Effect), stopping the
search for a cause when a seemingly satisfactory answer is found
before sufficient information and proper analysis can be performed
(Premature Closure), and seeing patterns in random events as
systematic and part of a coherent story (Desire for Coherence and
Uncertainty Reduction). When the techniques are used properly,
choosing a lead hypothesis becomes much less critical than making
sure that analysts have considered all possible explanations.

The techniques are particularly useful in helping intelligence analysts overcome some classic intuitive traps, such as the following:

Assuming a Single Solution. Most analysts quickly develop an initial lead hypothesis to explain the topic at hand and continue
to test the initial hypothesis as new information appears. A good
analyst will simultaneously consider a few alternatives. This
helps ensure that no diagnostic information is ignored because it
does not support the lead hypothesis. For example, when
individuals move large amounts of money from China to other
countries, financial analysts are likely to suspect that ill-gotten
monies are being laundered. On some occasions, however, the
funds could have been obtained through legitimate means; this
alternative hypothesis should be in play until it can be disproven
by the facts of the case.
Overinterpreting Small Samples. Analysts frequently are
pressed to “make a call” when there is insufficient data to
support the assessment. In such cases, a preferred strategy is
to offer several possible alternatives. The U.S. Intelligence
Community—and the world as a whole—would have been better
served if the National Intelligence Council in its Iraq WMD
National Intelligence Estimate had proffered three (and not just
the first two) hypotheses that (1) Iraq had a substantial WMD
program and the intelligence community had not yet found the
evidence, (2) Iraq had a more modest program but could readily
accelerate production in areas that had lain fallow, or (3) Iraq
had no WMD program and reporting to the contrary was
deception.

Projecting Past Experiences. When under pressure, analysts can select a hypothesis primarily because it avoids a previous error or replicates a past success. A prime example of this was the desire not to repeat the mistake of underestimating Saddam Hussein's WMD capabilities in the run-up to the second U.S. war with Iraq.

Relying on First Impressions. When pressed for time, analysts can also fall into the trap of giving too much weight to
first impressions or initial data that attracts their attention at the
time. Analysts are especially susceptible to this bias if they have
recently visited a country or have viewed a particularly vivid or
disturbing video.
7.4.1 The Method: Simple Hypotheses
To use the Simple Hypotheses method, define the problem and determine how the hypotheses will be
used at the beginning of the project. Hypotheses can be applied several ways: (1) in an Analysis of
Competing Hypotheses, (2) in some other hypothesis-testing project, (3) as a basis for developing
scenarios, or (4) as a means to draw attention to particularly positive or worrisome outcomes that might
arise. Figure 7.4.1 illustrates the process.

Gather together a diverse group to review the available information and explanations for the issue,
activity, or behavior that you want to evaluate. In forming this diverse group, consider including different
types of expertise for different aspects of the problem, cultural expertise about the geographic area
involved, different perspectives from various stakeholders, and different styles of thinking (left brain/right
brain, male/female). Then do the following:

Ask each member of the group to write down on an index card up to three alternative explanations
or hypotheses. Prompt creative thinking by using the following:

Applying theory. Drawing from the study of many examples of the same phenomenon.


Figure 7.4.1 Simple Hypotheses
Source: Globalytica, LLC, 2019.

Comparison with historical analogies. Comparing current events to historical precedents.

Situational logic. Representing all the known facts and an understanding of the underlying
forces at work at the given time and place.

Collect the cards and display the results on a whiteboard. Consolidate the list to avoid any
duplication.

Employ additional individual and group brainstorming techniques, such as Cluster Brainstorming, to
identify key forces and factors.

Aggregate the hypotheses into affinity groups and label each group.

Use problem restatement and consideration of the opposite to develop new ideas.
Update the list of alternative hypotheses. If the hypotheses will be used in Analysis of Competing
Hypotheses, strive to keep them mutually exclusive—that is, if one hypothesis is true, all others
must be false.

Have the group clarify each hypothesis by asking the journalist’s classic list of questions: Who,
What, How, When, Where, and Why?

Select the most promising hypotheses for further exploration.


7.4.2 The Method: Quadrant Hypothesis Generation
Use the four-quadrant technique to identify a basic set of hypotheses when two key driving forces are
likely to determine the outcome of an issue. The technique identifies four potential scenarios that
represent the extreme conditions for each of the two major drivers. It spans the logical possibilities
inherent in the relationship and interaction of the two driving forces, thereby generating options that
analysts otherwise may overlook.

Quadrant Hypothesis Generation is easier and quicker to use than the Multiple Hypotheses Generator®,
but it is limited to cases in which the outcome of a situation will be determined by two major driving
forces—and it depends on the correct identification of these forces. The technique is less effective when
more than two major drivers are present or when analysts differ over which forces constitute the two
major drivers.

The steps for using Quadrant Hypothesis Generation follow:

Identify two main drivers by using techniques such as Cluster Brainstorming or by surveying
subject-matter experts. A discussion to identify the two main drivers can be a useful exercise.

Construct a 2-×-2 matrix using the two drivers.

Think of each driver as a continuum from one extreme to the other. Write the extremes of each of
the drivers at the end of the vertical and horizontal axes.

Fill in each quadrant with the details of what the end state would be if shaped by the two extremes
of the drivers.

Develop signposts or indicators that show whether events are moving toward one of the
hypotheses. Use the signposts or indicators of change to develop intelligence collection strategies
or research priorities to determine the direction in which events are moving.


Figure 7.4.2 Quadrant Hypothesis Generation: Four Hypotheses on the Future of Iraq

Figure 7.4.2 shows an example of a Quadrant Hypothesis Generation chart. In this case, analysts have
been tasked with developing a paper to project possible futures for Iraq, focusing on the potential end
state of the government. The analysts have identified and agreed upon the two key drivers in the future
of the government: the level of centralization of the federal government and the degree of religious
control of that government. They develop their quadrant chart and lay out the four logical hypotheses
based on their decisions.

The four hypotheses derived from the quadrant chart can be stated as follows:

The final state of the Iraq government will be a centralized state and a secularized society.

The final state of the Iraq government will be a centralized state and a religious society.

The final state of the Iraq government will be a decentralized state and a secularized society.

The final state of the Iraq government will be a decentralized state and a religious society.
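The mechanics of spanning the two drivers can also be expressed compactly in code. The following Python sketch is illustrative only; it simply crosses the extremes of two drivers, here mirroring the Iraq example above, to produce the four candidate hypotheses.

# Minimal quadrant sketch: cross the extremes of two drivers to get four hypotheses.
from itertools import product

drivers = {
    "state structure": ["centralized", "decentralized"],
    "character of society": ["secularized", "religious"],
}

for structure, character in product(*drivers.values()):
    print(f"The final state of the government will be a {structure} state "
          f"and a {character} society.")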
7.4.3 The Method: Multiple Hypotheses Generator®
The Multiple Hypotheses Generator® is a technique for developing multiple alternatives for explaining an
issue, activity, or behavior. Analysts often can brainstorm a useful set of hypotheses without such a tool,
but the Multiple Hypotheses Generator® may give greater confidence than other techniques that
analysts have not overlooked a critical alternative or outlier. Analysts should employ the Multiple
Hypotheses Generator® to ensure that they have considered a broad array of potential hypotheses. In
some cases, they may have considerable data and want to ensure that they have generated a set of
plausible explanations consistent with all the data at hand. Alternatively, they may have been presented
with a hypothesis that seems to explain the phenomenon at hand and been asked to assess its validity.
The technique helps analysts rank alternative hypotheses from the most to least credible, focusing on
those at the top of the list as those deemed most worthy of attention.

To use this method:

Gather a diverse group to define the issue, activity, or behavior under study. Often, it is useful to ask
questions in the following ways:

What variations could be developed to challenge the lead hypothesis that . . .?

What are the possible permutations that would flip the assumptions contained in the lead
hypothesis that . . .?

Identify the Who, What, and Why for the lead hypothesis. Then generate plausible alternatives
for each relevant key component.

Review the lists of alternatives for each of the key components; strive to keep the alternatives on
each list mutually exclusive.

Generate a list of all possible permutations, as shown in Figure 7.4.3.


Figure 7.4.3 Multiple Hypotheses Generator®: Generating Permutations
Source: Globalytica, LLC, 2019.

Discard any permutation that simply makes no sense.

Evaluate the credibility of the remaining permutations by challenging the key assumptions of each
component. Some of these assumptions may be testable themselves. Assign a “credibility score” to
each permutation using a 1-to-5-point scale where 1 is low credibility and 5 is high credibility.

Re-sort the remaining permutations, listing them from most credible to least credible.

Restate the permutations as hypotheses, ensuring that each meets the criteria of a good
hypothesis.

Select from the top of the list those hypotheses most deserving of attention.
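The permutation and ranking steps lend themselves to a short script. In the Python sketch below, the key components, alternatives, discarded combination, and credibility scores are all hypothetical placeholders; the sketch only shows how Who/What/Why alternatives can be crossed, nonsensical permutations discarded, and the remainder re-sorted from most to least credible.

# Minimal permutation sketch; components, alternatives, and scores are hypothetical.
from itertools import product

components = {
    "who": ["a state actor", "a criminal group"],
    "what": ["stole the design data", "planted false design data"],
    "why": ["for profit", "for strategic advantage"],
}

permutations = list(product(*components.values()))

# Discard permutations that simply make no sense (analyst judgment, encoded by hand).
nonsensical = {("a criminal group", "planted false design data", "for strategic advantage")}
candidates = [p for p in permutations if p not in nonsensical]

# Credibility scores on a 1-to-5 scale; default of 3, with analyst overrides for a few.
overrides = {
    ("a state actor", "stole the design data", "for strategic advantage"): 5,
    ("a criminal group", "stole the design data", "for profit"): 4,
}
scored = [(p, overrides.get(p, 3)) for p in candidates]

# Re-sort from most to least credible and restate each permutation as a hypothesis.
for (who, what, why), score in sorted(scored, key=lambda s: s[1], reverse=True):
    print(f"[credibility {score}] {who.capitalize()} {what} {why}.")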
Potential Pitfalls
The value of this technique is limited by the ability of analysts to generate a robust set of alternative explanations. If group dynamics are flawed, the outcomes will be flawed. There is no guarantee that the correct hypothesis will emerge from this process or that analysts will recognize it as such, but the technique greatly increases the prospect that the correct hypothesis is included in the set under consideration.
Relationship to Other Techniques
The product of any Foresight analysis process can be thought of as
a set of alternative hypotheses. Quadrant Hypothesis Generation is
a specific application of the generic method called Morphological
Analysis, described in chapter 9. Alternative Futures Analysis uses a
similar quadrant chart approach to define four potential outcomes,
and Multiple Scenarios Generation uses the approach to define
multiple sets of four outcomes. Both of these techniques are also
described in chapter 9.
Origins of This Technique
The generation and testing of hypotheses is a key element of
scientific reasoning. The Simple Hypotheses approach and Quadrant
Hypothesis Generation are described in the Handbook of Analytic
Tools and Techniques, 5th ed. (Tysons, VA: Pherson Associates,
LLC, 2019) and Pherson Associates training materials. The
description of the Multiple Hypotheses Generator® can be found in
the fourth edition of the Handbook of Analytic Tools and Techniques
(Reston, VA: Pherson Associates, LLC, 2015).
7.5 DIAGNOSTIC REASONING
Diagnostic Reasoning is the application of hypothesis testing to a
new development, a single new item of information or intelligence, or
the reliability of a source. It differs from Analysis of Competing
Hypotheses (ACH) in that it is used to evaluate a single item of
relevant information or a single source; ACH deals with an entire
range of hypotheses and multiple items of relevant information.
When to Use It
Analysts should use Diagnostic Reasoning if they find themselves
making a snap intuitive judgment while assessing the meaning of a
new development, the significance of a new report, or the reliability
of a stream of reporting from a new source. Often, much of the
information used to support one’s lead hypothesis turns out to be
consistent with alternative hypotheses as well. In such cases, the
new information should not—and cannot—be used as evidence to
support the prevailing view or lead hypothesis.

The technique also helps reduce the chances of being caught by surprise. It ensures that the analyst or decision maker will have given
at least some consideration to alternative explanations. The
technique is especially important to use when an analyst—or
decision maker—is looking for evidence to confirm an existing
mental model or policy position. It helps the analyst assess whether
the same information is consistent with other reasonable conclusions
or with alternative hypotheses.
Value Added
The value of Diagnostic Reasoning is that it helps analysts balance
their natural tendency to interpret new information as consistent with
their existing understanding of what is happening—that is, the
analyst’s mental model. The technique prompts analysts to ask
themselves whether this same information is consistent with other
reasonable conclusions or alternative hypotheses. It is a common
experience to discover that much of the information supporting belief
in the most likely conclusion is of limited value because that same
information is consistent with alternative conclusions. One needs to
evaluate new information in the context of all possible explanations
of that information, not just in the context of a well-established
mental model.

The Diagnostic Reasoning technique helps the analyst identify the information that is essential to support a hypothesis and avoid the
mistake of focusing attention on one vivid scenario or explanation
while ignoring other possibilities or alternative hypotheses (Vividness
Bias). When evaluating evidence, analysts tend to assimilate new
information into what they currently perceive. Diagnostic Reasoning
protects them from the traps of seeking only the information that is
consistent with the lead hypothesis (Confirmation Bias) and selecting
the first answer that appears “good enough” (Satisficing).

Experience can handicap experts because they often hold tightly to timeworn models—and a fresh perspective can be helpful.
Diagnostic Reasoning helps analysts avoid the intuitive trap of
assuming the same dynamic is in play when something seems to
accord with an analyst’s past experiences (Projecting Past
Experiences). It also helps analysts counter the pitfall of continuing
to hold to an analytic judgment when confronted with a mounting list
of evidence that contradicts the initial conclusion (Rejecting
Evidence) and dismissing information at first glance without
considering all possible alternatives (Ignoring Inconsistent
Evidence).
The Method
Diagnostic Reasoning is a process that focuses on trying to refute
alternative judgments rather than confirming what you already
believe to be true. Here are the steps to follow:

When you receive a potentially significant item of information, make a mental note of what it seems to mean (i.e., an
explanation of why something happened or what it portends for
the future). Make a quick, intuitive judgment based on your
current mental model.

Define the focal question. For example, Diagnostic Reasoning brainstorming sessions often begin with questions like

Are there alternative explanations for the lead hypothesis (defined as . . .) that would also be consistent with the new information, new development, or new source of reporting?

Is there a reason other than the lead hypothesis that . . .?

Brainstorm, either alone or in a small group, the alternative


judgments that another analyst with a different perspective
might reasonably deem to have a chance of being accurate.
Make a list of these alternatives.

For each alternative, ask the following question: If this alternative were true or accurate, how likely is it that I would have seen, but possibly ignored, information that was consistent with this alternative explanation? Make a tentative judgment based on consideration of these alternatives. If the new information is equally consistent with each of the alternatives, the information has no diagnostic value and can be ignored. If the information is clearly inconsistent with one or more alternatives, those alternatives might be ruled out.

Following this mode of thinking for each of the alternatives, decide which alternatives need further attention and which can be dropped from consideration or put aside until new information surfaces.

Proceed by seeking additional evidence to refute the remaining alternatives rather than to confirm them.
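The logic of the last few steps can be illustrated with a brief script. The hypotheses and likelihood judgments in the Python sketch below are hypothetical; it only shows that an item of information that is roughly equally likely under every alternative carries no diagnostic weight, while an item that is clearly inconsistent with an alternative helps rule that alternative out.

# Minimal diagnosticity sketch; hypotheses and likelihood estimates are hypothetical.
# For each alternative: how likely would we be to see this new item of information
# if that alternative were true? (rough analyst judgments on a 0-1 scale)
likelihood_given = {
    "H1: routine military exercise": 0.6,
    "H2: preparation for a missile test": 0.7,
    "H3: deception aimed at foreign observers": 0.6,
}

values = list(likelihood_given.values())
spread = max(values) - min(values)

if spread < 0.2:
    print("Item is roughly equally likely under all alternatives: no diagnostic value.")
else:
    for hypothesis, likelihood in likelihood_given.items():
        if likelihood < 0.2:
            print(f"Item is clearly inconsistent with {hypothesis}: consider ruling it out.")
        else:
            print(f"{hypothesis} remains in play (likelihood {likelihood:.1f}).")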
Potential Pitfalls
When new information is received, analysts need to validate that the
new information is accurate and not deceptive or intentionally
misleading. It is also possible that none of the key information turns out to be diagnostic, or that not all of the relevant information will come to light.
Relationship to Other Techniques
Diagnostic Reasoning is an integral part of two other techniques:
Analysis of Competing Hypotheses and Indicators Validation and
Evaluation (chapter 9). It is presented here as a separate technique
to show that its use is not limited to those two techniques. It is a
fundamental form of critical reasoning that should be widely used in
intelligence analysis.
Origins of This Technique
Diagnostic Reasoning has been the principal method for medical
problem solving for many years. For information on the role of
Diagnostic Reasoning in the medical world, see the following
publications: Albert S. Elstein, “Thinking about Diagnostic Thinking:
A Thirty-Year Perspective,” Advances in Health Science Education,
published online by Springer Science+Business Media, August 11,
2009; and Pat Croskerry, “A Universal Model of Diagnostic
Reasoning,” Academic Medicine 84, no. 8 (August 2009).
7.6 ANALYSIS OF COMPETING
HYPOTHESES
ACH is an analytic process that identifies a complete set of
alternative hypotheses, systematically evaluates data that are
consistent or inconsistent with each hypothesis, and proceeds by
rejecting hypotheses rather than trying to confirm what appears to be
the most likely hypothesis. The process of rejecting rather than
confirming hypotheses applies to intelligence analysis the scientific
principles advocated by Karl Popper, one of the most influential
philosophers of science of the twentieth century.4

ACH starts with the identification of a set of mutually exclusive alternative explanations or outcomes called hypotheses. The analyst
assesses the consistency or inconsistency of each item of relevant
information with each hypothesis, and then selects the hypothesis
that best fits the relevant information. The scientific principle behind
this technique is to proceed by trying to refute as many reasonable
hypotheses as possible rather than to confirm what initially appears
to be the most likely hypothesis. The most likely hypothesis is then
the one with the least amount of inconsistent information—not the
one with an abundance of supporting relevant information.
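The core bookkeeping of an ACH matrix can be sketched briefly in code. In the Python fragment below, the hypotheses, items of relevant information, and consistency ratings are hypothetical placeholders; the sketch simply counts inconsistencies and ranks the hypotheses so that the one with the least inconsistent information comes out on top, as the technique prescribes.

# Minimal ACH bookkeeping sketch; hypotheses, evidence, and ratings are hypothetical.
# Ratings: "C" consistent, "I" inconsistent, "N" not applicable or neutral.
hypotheses = ["H1: part is for a civilian program",
              "H2: part is for a ballistic missile program",
              "H3: purchase is a deception operation"]

matrix = {
    "E1: end-user certificate names a civilian factory": ["C", "I", "C"],
    "E2: part tolerance exceeds civilian requirements":  ["I", "C", "C"],
    "E3: broker has prior links to procurement network": ["I", "C", "N"],
}

# Rank hypotheses by the number of inconsistencies, not by supporting evidence.
inconsistency = {h: 0 for h in hypotheses}
for ratings in matrix.values():
    for h, rating in zip(hypotheses, ratings):
        if rating == "I":
            inconsistency[h] += 1

for h, count in sorted(inconsistency.items(), key=lambda kv: kv[1]):
    print(f"{count} inconsistencies  {h}")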
When to Use It
ACH is appropriate for almost any analysis where there are
alternative explanations for what has happened, is happening, or is
likely to happen. Use it when the judgment or decision is so
important that you cannot afford to be wrong or when you need a
systematic approach to avoid being surprised by an unforeseen
outcome. ACH is particularly appropriate when dealing with
controversial issues and when analysts need to leave an audit trail to
show what relevant information they considered and how they
arrived at their analysis. If other analysts and decision makers
disagree with the analysts’ conclusions, an ACH matrix can help
identify the precise area of disagreement. Subsequent discussion
can then focus on the most important substantive differences.

ACH is most effective when there is a robust flow of data to absorb and evaluate. It is well suited for addressing questions about technical issues in the chemical, biological, radiological, and nuclear arenas, such as, “For which weapons system is this part most likely being imported?” or, “Which type of missile system is Country X importing or developing?” The technique is useful for managing
criminal investigations and determining which line of analysis is
correct. ACH is particularly helpful when an analyst must deal with
the potential for denial and deception, as it was initially developed for
that purpose.

The technique can be used by a single analyst but is most effective with a small team that can challenge team members’ evaluations of
the relevant information. It structures and facilitates the exchange of
information and ideas with colleagues in other offices or agencies.

An ACH analysis requires a modest commitment of time; it may take a day or more to build the ACH matrix. Once all the relevant
information has been collected, it may take several hours to work
through all the stages of the analytic process before writing up the
conclusions. Usually a facilitator or a colleague previously schooled
in the use of the technique helps guide analysts through the process,
especially if it is the first time they have used the methodology.
Value Added
Analysts are commonly required to work with incomplete,
ambiguous, anomalous, and sometimes deceptive data. In addition,
strict time constraints and the need to “make a call” often conspire
with natural human cognitive biases to cause inaccurate or
incomplete judgments. If the analyst is already generally
knowledgeable on the topic, a common procedure is to develop a
favored hypothesis and then search for relevant information to
confirm it. This is called Satisficing or going with the first answer that
seems to be supported by the evidence.

Satisficing is efficient because it saves time and often works. However, Confirmation Bias, which impels an analyst to look only for
information that is consistent with the favored or lead hypothesis or
widely accepted school of thought, is often at work in the
background, as the analyst has made no investment in protection
against surprise. Satisficing allows analysts to accept data as true
without assessing its credibility or questioning fundamental
assumptions because it helps create a more coherent story
(Evidence Acceptance Bias). If engaged in Satisficing, analysts often
bypass the analysis of alternative explanations or outcomes, which
should be fundamental to any complete analysis. As a result,
Satisficing fails to distinguish that much of the relevant information
seemingly supportive of the favored hypothesis is also consistent
with one or more alternative hypotheses. It often fails to recognize
the importance of what is missing (i.e., what should be observable if
a given hypothesis is true but is not there).

ACH improves the analyst’s chances of overcoming these challenges by requiring analysts to identify and then try to refute as
many reasonable hypotheses as possible using the full range of
data, assumptions, and gaps that are pertinent to the problem at
hand. The method for analyzing competing hypotheses takes time
and attention in the initial stages, but it pays big dividends in the end.
When analysts are first exposed to ACH and say they find it useful, it
is because the simple focus on identifying alternative hypotheses
and how they might be disproved prompts analysts to think seriously
about evidence, explanations, or outcomes in ways that had not
previously occurred to them.

The ACH process requires the analyst to assemble the collected information and organize it in a useful way, so that it can be readily
retrieved for use in the analysis. This is done by creating a matrix
with relevant information down the left side and hypotheses across
the top. Each item of relevant information is then evaluated as to
whether it is consistent or inconsistent with each hypothesis. The
results are then used to assess the evidentiary and logical support
for and against each hypothesis. This can be done manually, but it is
much easier and better to use an Excel spreadsheet or ACH
software designed for this purpose. Various ACH software
applications can be used to sort and analyze the data by type of
source and date of information, as well as by degree of support for or
against each hypothesis.

ACH helps analysts produce a better analytic product by

Maintaining a record of the relevant information and tracking how that information relates to each hypothesis.

Capturing the analysts’ key assumptions when coding the data and recording what additional information or collection requirements are needed.

Enabling analysts to present conclusions in a way that is organized and transparent as it documents how conclusions were reached.

Providing a foundation for identifying indicators that can then be monitored and validated to determine the direction in which events are heading.

Leaving a clear audit trail as to how the analysis was done, the conclusions reached, and how individual analysts may have differed in their assumptions or judgments.
ACH Software
ACH started as a manual method at the CIA in the mid-1980s. The
first professionally developed and tested ACH software was created
in 2005 by the Palo Alto Research Center (PARC), with federal
government funding and technical assistance from Richards J.
Heuer Jr. and Randolph Pherson. Randolph Pherson managed its
introduction into the U.S. Intelligence Community. The PARC
version, though designed for use by an individual analyst, was
commonly used by a co-located team of analysts. Members of such
groups reported,

The technique helped them gain a better understanding of the differences of opinion with other analysts or between analytic offices.

Review of the ACH matrix provided a systematic basis for identification and discussion of differences between participating analysts.

Reference to the matrix helped depersonalize the argumentation when there were differences of opinion.

A collaborative version of ACH called Te@mACH® was developed under the direction of Randolph Pherson for Globalytica, LLC, in
2010. It has most of the functions of the PARC ACH tool but allows
analysts in different locations to work on the same problem
simultaneously. They can propose hypotheses and enter data on the
matrix from multiple locations, but they must agree to work from the
same set of hypotheses and the same set of relevant information.
The software allows them to chat electronically about one another’s
assessments and assumptions, to compare their analysis with that of
their colleagues, and to learn what the group consensus was for the
overall problem solution.

Other government agencies, research centers, and academic institutions have developed versions of ACH. One version called
Structured Analysis of Competing Hypotheses, developed for
instruction at Mercyhurst College, builds on ACH by requiring deeper
analysis at some points.

The use of collaborative ACH tools ensures that all analysts are
working from the same database of evidence, arguments, and
assumptions, and that each member of the team has had an
opportunity to express his or her view on how that information relates
to the likelihood of each hypothesis. Such tools can be used both
synchronously and asynchronously and include functions such as a
survey method to enter data that protects against bias, the ability to
record key assumptions and collection requirements, and a filtering
function that allows analysts to see how each person rated the
relevant information.5
The Method
To retain five or seven hypotheses in working memory and note how each item of information fits into
each hypothesis is beyond the capabilities of most analysts. It takes far greater mental agility than the
common practice of seeking evidence to support a single hypothesis already believed to be the most
likely answer. The following nine-step process is at the heart of ACH and can be done without software.

1. Identify all possible hypotheses that should be considered. Hypotheses should be mutually exclusive; that is, if one hypothesis is true, all others must be false. The list of hypotheses should include a deception hypothesis, if that is appropriate. For each hypothesis, develop a brief scenario or “story” that explains how it might be true. Analysts should strive to create as comprehensive a list of hypotheses as possible.

2. Make a list of significant relevant information, which means everything that would help analysts evaluate the hypotheses, including evidence, assumptions, and the absence of things one would expect to see if a hypothesis were true. It is important to include assumptions as well as factual evidence, because the matrix is intended to be an accurate reflection of the analyst’s thinking about the topic. If the analyst’s thinking is driven by assumptions rather than hard facts, this needs to become apparent so that the assumptions can be challenged. A classic example of absence of evidence is the Sherlock Holmes story of the dog that did not bark in the night. The failure of the dog to bark was persuasive evidence that the guilty party was not an outsider but an insider whom the dog knew.

3. Create a matrix and analyze the diagnosticity of the information. Create a matrix with all hypotheses across the top and all items of relevant information down the left side. See Figure 7.6a for an example. Analyze the “diagnosticity” of the evidence and arguments to identify which points are most influential in judging the relative likelihood of the hypotheses. Ask, “Is this input Consistent with the hypothesis, is it Inconsistent with the hypothesis, or is it Not Applicable or not relevant?” This can be done by either filling in each cell of the matrix row-by-row or by randomly selecting cells in the matrix for analysts to rate. If it is Consistent, put a “C” in the appropriate matrix box; if it is Inconsistent, put an “I”; if it is Not Applicable to that hypothesis, put an “NA.” If a specific item of evidence, argument, or assumption is particularly compelling, put two “C’s” in the box; if it strongly undercuts the hypothesis, put two “I’s.”

When you are asking if an input is Consistent or Inconsistent with a specific hypothesis, a common
response is, “It all depends on . . .” That means the rating for the hypothesis is likely based on an
assumption. You should record all such assumptions when filling out the matrix. After completing the
matrix, look for any pattern in those assumptions, such as the same assumption being made when
ranking multiple items of information. After the relevant information has been sorted for diagnosticity,
note how many of the highly diagnostic Inconsistency ratings are based on assumptions. Consider how
much confidence you should have in those assumptions and then adjust the confidence in the ACH
Inconsistency Scores accordingly.

4. Review where analysts differ in their assessments and decide if the ratings need to be adjusted (see Figure 7.6b). Often, differences in how analysts rate an item of information can be traced back to different assumptions about the hypotheses when doing the ratings.

5. Refine the matrix by reconsidering the hypotheses. Does it make sense to combine two hypotheses into one, or to add a new hypothesis that was not considered at the start? If a new hypothesis is added, go back and evaluate all the relevant information for this hypothesis. Additional relevant information can be added at any time.
Figure 7.6A Creating an ACH Matrix


Figure 7.6B Evaluating Levels of Disagreement in ACH

6. Draw tentative conclusions about the relative likelihood of each hypothesis, basing your conclusions on an analysis regarding the diagnosticity of each item of relevant information. Proceed by trying to refute hypotheses rather than confirm them. Add up the number of Inconsistency ratings for each hypothesis and note the Inconsistency Score for each hypothesis. As a first cut, examine the total number of “I” and “II” ratings for each hypothesis. The hypothesis with the most Inconsistent ratings is the least likely to be true, and the hypothesis or hypotheses with the lowest Inconsistency Score(s) is tentatively the most likely hypothesis.

The Inconsistency Scores are broad generalizations, not precise calculations. ACH is a tool
designed to help the analyst make a judgment, but not to actually make the judgment for the
analyst. This process is likely to produce correct estimates more frequently than less systematic or
rigorous approaches, but the scoring system does not eliminate the need for analysts to use their
own good judgment. The “Potential Pitfalls” section below identifies several occasions when
analysts need to override the Inconsistency Scores.

7. Analyze the sensitivity of your tentative conclusion to see how dependent it is on a few critical items of information. For example, look for evidence that has a “C” for the lead hypothesis but an “I” for all other hypotheses. Evaluate the importance and credibility of those reports, arguments, or assumptions that garnered a “C.” Consider the consequences for the analysis if that item of relevant information were wrong or misleading or subject to a different interpretation. If all the evidence earns a “C” for each hypothesis, then the evidence is not diagnostic. If a different interpretation of any of the data would cause a change in the overall conclusion, go back and double-check the accuracy of your interpretation.

8. Report the conclusions. Consider the relative likelihood of all the hypotheses, not just the most likely one. State which items of relevant information were the most diagnostic, and how compelling a case they make in identifying the most likely hypothesis.

9. Identify indicators or milestones for future observation. Generate two lists: one focusing on future events or what additional research might uncover that would substantiate the analytic judgment, and a second that would suggest the analytic judgment is less likely to be correct or that the situation has changed. Validate the indicators and monitor both lists on a regular basis, remaining alert to whether new information strengthens or weakens your case.
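
To make the bookkeeping in steps 3 and 6 concrete, the following minimal Python sketch tallies Inconsistency Scores from a matrix of ratings. It is offered purely as an illustration and is not the PARC or Te@mACH® software described earlier; the hypotheses, items of relevant information, and ratings are hypothetical placeholders, and the one-point/two-point weighting simply mirrors the first-cut counting of “I” and “II” ratings described above.

```python
# Minimal ACH bookkeeping sketch: tally Inconsistency Scores from a matrix.
# Ratings: "CC"/"C" (consistent), "NA" (not applicable), "I"/"II" (inconsistent).
# Hypotheses, items, and ratings below are hypothetical placeholders.

WEIGHTS = {"I": 1, "II": 2}            # only Inconsistent ratings are scored
hypotheses = ["H1", "H2", "H3"]

# Each row: (item of relevant information, {hypothesis: rating})
matrix = [
    ("Item 1 (report from source A)",         {"H1": "C",  "H2": "I", "H3": "I"}),
    ("Item 2 (key assumption)",               {"H1": "NA", "H2": "C", "H3": "II"}),
    ("Item 3 (absence of expected activity)", {"H1": "I",  "H2": "C", "H3": "C"}),
]

scores = {h: 0 for h in hypotheses}
for item, ratings in matrix:
    for h, rating in ratings.items():
        scores[h] += WEIGHTS.get(rating, 0)   # "C", "CC", and "NA" add nothing

# The lowest Inconsistency Score marks the tentatively most likely hypothesis.
for h in sorted(hypotheses, key=lambda h: scores[h]):
    print(f"{h}: Inconsistency Score = {scores[h]}")
```

As the text stresses, such scores are an aid to judgment rather than a verdict; the analyst still reviews diagnosticity, assumptions, and sensitivity before drawing conclusions.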
Potential Pitfalls
A word of caution: ACH only works when all participating analysts
approach an issue with a relatively open mind. An analyst already
committed to a “right answer” will often find a way to interpret
relevant information to align with or make consistent with the
preexisting belief. In other words, as an antidote to Confirmation
Bias, ACH is like a flu shot. Getting the flu shot will usually keep you
from getting the flu, but it won’t make you well if you already have
the flu.

The Inconsistency Scores generated for each hypothesis are not the
product of a magic formula that tells you which hypothesis to believe.
The ACH software takes you through a systematic analytic process,
and the Inconsistency Score calculation that emerges is only as
accurate as your selection and evaluation of the relevant information.

Because it is more difficult to refute hypotheses than to find information that confirms a favored hypothesis, the generation and
testing of alternative hypotheses will often increase rather than
reduce the analyst’s level of uncertainty. Such uncertainty is
frustrating, but it is usually an accurate reflection of the true situation.
The ACH procedure has the offsetting advantage of focusing
attention on the few items of critical information that cause the
uncertainty or, if they were available, would alleviate it. ACH can
guide future collection, research, and analysis to resolve the
uncertainty and produce a more accurate judgment.

Analysts should be aware of five circumstances that can cause a divergence between an analyst’s own beliefs and the Inconsistency
Scores. In the first two circumstances described in the following list,
the Inconsistency Scores seem to be wrong when they are correct.
In the next three circumstances, the Inconsistency Scores may seem
correct when they are wrong. Analysts need to recognize these
circumstances, understand the problem, and adjust accordingly.
Assumptions or logical deductions omitted. If the scores in
the matrix do not support what you believe is the most likely
hypothesis, the matrix may be incomplete. Your thinking may be
influenced by assumptions or logical deductions not included in
the list of relevant information or arguments. If so, they should
be added so the matrix fully reflects everything that influences
your judgment on this issue. It is important for all analysts to
recognize the role that unstated or unquestioned (and
sometimes unrecognized) assumptions play in their analysis. In
political or military analysis, for example, conclusions may be
driven by assumptions about another country’s capabilities or
intentions. A principal goal of the ACH process is to identify
those factors that drive the analyst’s thinking on an issue so that
these factors can be questioned and, if appropriate, changed.

Insufficient attention to less likely hypotheses. If you think the scoring gives undue credibility to one or more of the less
likely hypotheses, it may be because you have not assembled
the relevant information needed to refute them. You may have
devoted insufficient attention to obtaining such relevant
information, or the relevant information may simply not be there.
If you cannot find evidence to refute a hypothesis, it may be
necessary to adjust your thinking and recognize that the
uncertainty is greater than you had originally thought.

Definitive relevant information. There are occasions when intelligence collectors obtain information from a trusted and well-
placed inside source. The ACH analysis can label the
information as having high credibility, but this is probably not
enough to reflect the conclusiveness of such relevant
information and the impact it should have on an analyst’s
thinking. In other words, in some circumstances, one or two
highly authoritative reports from a trusted source in a position to
know may support one hypothesis so strongly that they refute all
other hypotheses regardless of what other less reliable or less
definitive relevant information may show.
Unbalanced set of evidence. Evidence and arguments must
be representative of the entire problem. If there is considerable
evidence on a related but peripheral issue and comparatively
few items of evidence on the core issue, the Inconsistency
Score may be misleading.

Diminishing returns. As evidence accumulates, each new item of Inconsistent relevant information or argument has less impact
on the Inconsistency Scores than does the earlier relevant
information. For example, the impact of any single item is less
when there are fifty items than when there are only ten items. To understand this, consider what happens when you calculate the average of fifty numbers. Each number has equal weight; adding a fifty-first number will have less impact on the average than if you start with only ten numbers and add one more (see the short numeric sketch after this list).
Stated differently, the accumulation of relevant information over
time slows down the rate at which the Inconsistency Score
changes in response to new relevant information. Therefore, the
numbers may not reflect the actual amount of change in the
situation you are analyzing. When you are evaluating change
over time, it is desirable to delete the older relevant information
periodically, or to partition the relevant information and analyze
the older and newer relevant information separately.
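
The averaging analogy above can be made concrete with a brief numeric sketch (the numbers are purely illustrative, assuming ten or fifty equally weighted prior items):

```python
# Illustrative only: one new value shifts the average of 10 prior values far
# more than the average of 50 prior values, mirroring diminishing returns.
def shift_in_average(n_prior, prior_value=1.0, new_value=2.0):
    # All prior items are equal, so the old average equals prior_value.
    new_avg = (n_prior * prior_value + new_value) / (n_prior + 1)
    return new_avg - prior_value

print(round(shift_in_average(10), 3))   # 0.091 -> noticeable shift
print(round(shift_in_average(50), 3))   # 0.02  -> much smaller shift
```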

Some other caveats when using ACH include the following:

The possibility that none of the relevant information identified is diagnostic.

Not all relevant information is identified.

Some of the relevant information is inaccurate, deceptive, or misleading.

The ratings are subjective and therefore subject to human error.

When the analysis is performed by a group, the outcome can be biased by Groupthink or the absence of healthy group dynamics.
Relationship to Other Techniques
ACH is often used in conjunction with other techniques. For
example, Cluster Brainstorming, Nominal Group Technique, Multiple
Hypothesis Generation, or the Delphi Method can identify
hypotheses or relevant information for inclusion in the ACH analysis.
They can also help analysts evaluate the significance of relevant
information. Deception Detection may identify an opponent’s motive,
opportunity, or means to conduct deception or to identify past
deception practices; information about these factors should be
included in the list of ACH-relevant information. The Diagnostic
Reasoning technique is incorporated within the ACH method. The
final step in the ACH method identifies Indicators for monitoring
future developments.

The ACH matrix is intended to reflect all relevant information and arguments that affect one’s thinking about a designated set of
hypotheses. That means it should also include assumptions
identified by a Key Assumptions Check, discussed earlier in this
chapter. Conversely, rating the consistency of an item of relevant
information with a specific hypothesis is often based on an
assumption. When rating the consistency of relevant information in
an ACH matrix, the analyst should ask, “If this hypothesis is true,
would I see this item of relevant information?” A common thought in
response to this question is, “It all depends on. . . .” This means that,
however the consistency of that item of relevant information is rated,
that rating is likely based on an assumption—whatever assumption
the rating “depends on.” These assumptions should be recorded in
the matrix and then considered in the context of a Key Assumptions
Check.

The Delphi Method (chapter 8) can double-check the conclusions of an ACH analysis. In this process, outside experts are asked
separately to assess the probability of the same set of hypotheses
and to explain the rationale for their conclusions. If the two different
groups of analysts using different methods arrive at the same
conclusion, confidence in the conclusion increases. If they disagree,
their lack of agreement is also useful, as one can then seek to
understand the rationale for the different judgments.

ACH and Argument Mapping (described later in this chapter) are both used on the same types of complex analytic problems. They are
both systematic methods for organizing relevant information, but
they work in fundamentally different ways and are best used at
different stages in the analytic process. ACH is used during an early
stage to analyze a range of hypotheses to determine which is most
consistent with the broad body of relevant information. At a later
stage, when the focus is on developing, evaluating, or presenting the
case for a specific conclusion, Argument Mapping is the appropriate
method. Each method has strengths and weaknesses, and the
optimal solution is to use both.
Origins of This Technique
Richards Heuer originally developed the ACH technique at the CIA in
the mid-1980s as one part of a methodology for analyzing the
presence or absence of Soviet deception. It was described publicly
in his book, Psychology of Intelligence Analysis, first published in
1999;6 Heuer and Randolph Pherson helped the Palo Alto Research
Center gain funding from the federal government during 2004 and
2005 to produce the first professionally developed ACH software.
Randolph Pherson managed its introduction into the U.S.
Intelligence Community. Globalytica, LLC, with Pherson’s assistance,
subsequently developed a collaborative version of the software
called Te@mACH®. An example of an Analysis of Competing
Hypotheses can be found at https://www.cia.gov/library/center-for-
the-study-of-intelligence/csi-publications/books-and-
monographs/psychology-of-intelligence-analysis/art11.html.
7.7 INCONSISTENCIES FINDER™
The Inconsistencies Finder™ is a simpler version of Analysis of
Competing Hypotheses that focuses attention on relevant
information that is inconsistent with a hypothesis, helping to
disconfirm its validity.
When to Use It
The Inconsistencies Finder™ can be used whenever a set of
alternative hypotheses exists, or has recently been identified, and
analysts need to do the following:

Carefully weigh the credibility of multiple explanations, or alternative hypotheses, explaining what has happened, is happening, or is likely to happen.

Evaluate the validity of a large amount of data as it relates to each hypothesis.

Challenge their current interpretation of the evidence (or, alternatively, the interpretation of others).

Create an audit trail.
Value Added
The process of systematically reviewing the relevant information and
identifying which information or evidence is inconsistent with each
hypothesis helps analysts do the following:

Identify the most diagnostic information.

Focus on the disconfirming evidence.

Dismiss those hypotheses with compelling inconsistent information.

Flag areas of agreement and disagreement.

Highlight the potential for disinformation or deception.

Instead of building a case to justify a preferred solution or answer, the Inconsistencies Finder™ helps analysts easily dismiss those
hypotheses with compelling inconsistent information and focus
attention on those with the least disconfirming information. An
analytic case can then be built that supports this most likely
hypothesis—or hypotheses.

The technique is not an answer generator. It should be viewed as a thinking tool that helps you frame a problem more efficiently. Unlike
ACH, the technique does not help analysts identify the most
diagnostic information for making their case.

The Inconsistencies Finder™ aids the production of high-quality analysis in much the same way ACH mitigates cognitive biases and
intuitive traps by helping analysts do the following:
Avoid leaping to conclusions.

Move beyond “first impressions.”

Challenge preconceived ideas.

Uncover unknowns and uncertainties.


The Method
1. Create a matrix with all the hypotheses under consideration
listed in separate columns along the top of the matrix. Make a
list of all the relevant information (including significant evidence,
arguments, assumptions, and the absence of things) that would
be helpful in evaluating the given set of hypotheses. Put each
piece of information in a separate row down the left side of the
matrix.
2. Working in small teams, analyze each item for
consistency/inconsistency against the given hypotheses.
Review each piece of information against each hypothesis.
Analysts can move across the matrix row by row to evaluate
each hypothesis against all the relevant information moving from
column to column.

Place an “I” in the box that rates each item against each
hypothesis if you would not expect to see that item of
information if the hypothesis were true.

Place a “II” in the box if the presence of the information makes a compelling case that the hypothesis cannot be
true. For example, if a suspect had an unassailable alibi
proving he or she was at a different location at the time a
crime was committed, then he or she could not be the
perpetrator.

3. Add up all the “I’s” (Inconsistent ratings) in each hypothesis column. Assign one point to each “I” and two points to each “II.”
4. Rank order the credibility of the hypotheses based on the total
number of points or “I’s” that each hypothesis receives. The
higher the score, the less likely the hypothesis.
5. Assess if the “I’s” noted in each column make a compelling case
to dismiss that hypothesis. Work your way through the “I’s”
beginning with the hypothesis with the most “I’s” to the
hypothesis with the fewest or no “I’s.”
6. Identify the hypothesis(es) with the least Inconsistent
information and make a case for that hypothesis(es) being true.
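
The tally described in steps 3 through 5 can be sketched in a few lines of Python. The ratings below are hypothetical placeholders, and the point values simply mirror the one-point/two-point scheme in step 3; the sketch is an illustration, not part of any published tool.

```python
# Minimal Inconsistencies Finder(TM) tally: "" = no inconsistency, "I" = 1 point,
# "II" = 2 points. Rows and ratings are hypothetical placeholders.
rows = [
    {"H1": "",  "H2": "I", "H3": "II"},   # e.g., an unassailable alibi versus H3
    {"H1": "I", "H2": "",  "H3": "I"},
    {"H1": "",  "H2": "I", "H3": ""},
]

points = {"": 0, "I": 1, "II": 2}
totals = {h: sum(points[row[h]] for row in rows) for h in rows[0]}

# Higher total = less credible; any "II" flags a candidate for dismissal.
for h in sorted(totals, key=totals.get):
    compelling = any(row[h] == "II" for row in rows)
    print(h, totals[h], "(candidate for dismissal)" if compelling else "")
```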
7.8 DECEPTION DETECTION
Deception is an action intended by an adversary to influence the
perceptions, decisions, or actions of another to the advantage of the
deceiver. Deception Detection uses a set of checklists to help
analysts determine when to look for deception, discover whether
deception actually is present, and figure out what to do to avoid
being deceived. As Richards J. Heuer Jr. has argued, “The accurate
perception of deception in counterintelligence analysis is
extraordinarily difficult. If deception is done well, the analyst should
not expect to see any evidence of it. If, on the other hand, deception
is expected, the analyst often will find evidence of deception even
when it is not there.”7
When to Use It
Analysts should be concerned about the possibility of deception
when the following occurs:

The analysis hinges on a single critical piece of information or reporting.

Key information is received at a critical time—that is, when either the recipient or the potential deceiver has a great deal to gain or to lose.

Accepting new information would cause the recipient to expend or divert significant resources.

Accepting new information would require the analyst to alter a key assumption or key judgment.

The potential deceiver may have a feedback channel that illuminates whether and how the deception information is being processed and to what effect.

Information is received from a source whose bona fides are questionable.

The potential deceiver has a history of conducting deception.


Value Added
Most intelligence analysts know not to assume that everything that
arrives in their inbox is valid, but few know how to factor such
concerns effectively into their daily work practices. Considering the
deception hypothesis puts a major cognitive burden on the analyst. If
an analyst accepts the possibility that some of the information
received may be deceptive, then all evidence is open to question
and no valid inferences can be drawn from the reporting. This
fundamental dilemma can paralyze analysis unless the analyst uses
practical tools to determine when it is appropriate to worry about
deception, how best to detect deception in the reporting, and what to
do in the future to guard against being deceived.

It is very hard to deal with deception when you are really just
trying to get a sense of what is going on, and there is so
much noise in the system, so much overload, and so much
ambiguity. When you layer deception schemes on top of that,
it erodes your ability to act.
—Robert Jervis, “Signaling and Perception in the Information Age,” in
The Information Revolution and National Security (August 2000)

The measure of a good deception operation is how well it exploits the cognitive biases of its target audience. The deceiver’s strategy
usually is to provide some intelligence or information of value to the
person being deceived in the hope that he or she will conclude the
“take” is good enough and should be disseminated. As additional
information is collected, the Satisficing bias is reinforced and the
recipient’s confidence in the information or the source usually grows,
further blinding the recipient to the possibility that he or she is falling
prey to deception. The deceiver knows that the information being
provided is highly valued, although over time some people will begin
to question the bona fides of the source. Often, this puts the person
who developed the source or acquired the information on the
defensive, and the natural reaction is to reject any and all criticism.
This cycle is usually broken only by applying structured techniques
such as Deception Detection to force a critical examination of the
true quality of the information and the potential for deception.

Deception Detection is a useful tool analysts can employ to avoid cognitive biases and heuristics, such as seeking only the information
that is consistent with the lead hypothesis (Confirmation Bias),
accepting data as true without assessing its credibility because it
helps “make the case” (Evidence Acceptance Bias), and judging the
frequency of an event by the ease with which instances come to
mind (Availability Heuristic). It also safeguards an analyst against
several classic mental mistakes, including giving too much weight to
first impressions or initial data that appears important at the time
(Relying on First Impressions), assuming the same dynamic is in
play when something appears to be in accord with past experiences
(Projecting Past Experiences), and accepting or rejecting everything
someone says because the analyst strongly likes or dislikes the
person (Judging by Emotion).
The Method
Analysts should routinely consider the possibility that opponents or
competitors are attempting to mislead them or hide important
information. The possibility of deception cannot be rejected simply
because there is no evidence of it; if the deception is well done, one
should not expect to see evidence of it. Some circumstances in
which deception is most likely to occur are listed in the “When to Use
It” section. When such circumstances occur, the analyst, or
preferably a small group of analysts, should assess the situation
using four checklists that are commonly referred to by their
acronyms: MOM, POP, MOSES, and EVE (see box on pp. 173–174).

Analysts have also found the following “rules of the road” helpful in
anticipating the possibility of deception and dealing with it:8

Avoid overreliance on a single source of information.

Seek and heed the opinions of those closest to the reporting.

Be suspicious of human sources or human subsources who have not been seen or when it is unclear how or from whom they obtained the information.

Do not rely exclusively on what someone says (verbal intelligence); always look for material evidence (documents, pictures, an address, a phone number, or some other form of concrete, verifiable information).

Be suspicious of information that plays strongly to your own known biases and preferences.

Look for a pattern of a source’s reporting that initially appears to be correct but later and repeatedly turns out to be wrong, with the source invariably offering seemingly plausible, albeit weak, explanations to justify or substantiate the reporting.

At the onset of a project, generate and evaluate a full set of plausible hypotheses, including a deception hypothesis, if appropriate.

Know the limitations as well as the capabilities of the potential deceiver.
Relationship to Other Techniques
Analysts can combine Deception Detection with Analysis of
Competing Hypotheses to assess the possibility of deception. The
analyst explicitly includes deception as one of the hypotheses to be
analyzed, and information identified through the MOM, POP,
MOSES, and EVE checklists is included as evidence in the ACH
analysis.
Origins of This Technique
Deception—and efforts to detect it—has always been an integral part
of international relations. An excellent book on this subject is Michael
Bennett and Edward Waltz, Counterdeception Principles and
Applications for National Security (Boston: Artech House, 2007). The
description of Deception Detection in this book was previously
published in Randolph H. Pherson, Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019). A
concrete example of Deception Detection at work can be found at
https://www.apa.org/monitor/2016/03/deception.

Deception Detection Checklists

Motive, Opportunity, and Means (MOM)

Motive: What are the goals and motives of the potential deceiver?

Channels: What means are available to the potential deceiver to feed information to us?

Risks: What consequences would the adversary suffer if such a deception were revealed?

Costs: Would the potential deceiver need to sacrifice sensitive information to establish the credibility of the deception channel?

Feedback: Does the potential deceiver have a feedback mechanism to monitor the impact of the deception operation?

Past Opposition Practices (POP)

Does the adversary have a history of engaging in deception?

Does the current circumstance fit the pattern of past deceptions?

If not, are there other historical precedents?

If not, are there changed circumstances that would explain using this form of deception at this time?

Manipulability of Sources (MOSES)

Is the source vulnerable to control or manipulation by the potential deceiver?

What is the basis for judging the source to be reliable?

Does the source have direct access or only indirect access to the information?

How good is the source’s track record of reporting?

Evaluation of Evidence (EVE)

How accurate is the source’s reporting? Has the whole chain of evidence, including translations, been checked?

Does the critical evidence check out? Remember, the subsource can be more critical than the source.

Does evidence from one source of reporting (e.g., human intelligence) conflict with that coming from another source (e.g., signals intelligence or open-source reporting)?

Do other sources of information provide corroborating evidence?

Is the absence of evidence one would expect to see noteworthy?
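
Teams that want a running record of their answers to these four checklists sometimes find a simple structured format helpful. The sketch below is one illustrative way to capture such notes in Python; the abridged questions and the sample answer are hypothetical placeholders, and this is not part of any published Deception Detection tool.

```python
# Illustrative record of a team's MOM/POP/MOSES/EVE checklist answers.
checklists = {
    "MOM":   ["Motive?", "Channels?", "Risks?", "Costs?", "Feedback?"],
    "POP":   ["History of deception?", "Fits past pattern?",
              "Other precedents?", "Changed circumstances?"],
    "MOSES": ["Vulnerable to manipulation?", "Basis for reliability?",
              "Direct or indirect access?", "Track record?"],
    "EVE":   ["Accuracy and chain of evidence checked?", "Critical evidence checks out?",
              "Conflicts across collection sources?", "Corroboration?",
              "Noteworthy absence of evidence?"],
}

# One note and rough concern level per question (hypothetical entry shown).
answers = {("MOM", "Motive?"): ("Adversary gains if we divert resources", "high")}

for family, questions in checklists.items():
    for q in questions:
        note, concern = answers.get((family, q), ("not yet assessed", "unknown"))
        print(f"[{family}] {q} -> {note} (concern: {concern})")
```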
7.9 ARGUMENT MAPPING
Argument Mapping is a technique that tests a single hypothesis
through logical reasoning. An Argument Map starts with a single
hypothesis or tentative analytic judgment and then graphically
separates the claims and evidence to help break down complex
issues and communicate the reasoning behind a conclusion. It is a
type of tree diagram that starts with the conclusion or lead
hypothesis, and then branches out to reasons, evidence, and finally
assumptions. The process of creating the Argument Map helps
identify key assumptions and gaps in logic.

An Argument Map makes it easier for both the analysts and the
recipients of the analysis to clarify and organize their thoughts and
evaluate the soundness of any conclusion. It shows the logical
relationships between various thoughts in a systematic way and
allows one to assess quickly in a visual way the strength of the
overall argument. The technique also helps the analysts and
recipients of the report to focus on key issues and arguments rather
than focusing too much attention on minor points.
When to Use It
When making an intuitive judgment, use Argument Mapping to test
your own reasoning. Creating a visual map of your reasoning and
the evidence that supports this reasoning helps you better
understand the strengths, weaknesses, and gaps in your argument.
It is best to use this technique before you write your product to
ensure the quality of the argument and refine it if necessary.

Argument Mapping and Analysis of Competing Hypotheses (ACH) are complementary techniques that work well either separately or together. Argument Mapping is a detailed presentation of the argument for a single hypothesis; ACH is a more general analysis of multiple hypotheses. The ideal is to use both, as follows:

Before you generate an Argument Map, using ACH can be a helpful way to take a closer look at the viability of alternative hypotheses. After looking at alternative hypotheses, you can then select the best one to map.

After you have identified a favored hypothesis through ACH analysis, Argument Mapping helps check and present the rationale for this hypothesis.
Value Added
An Argument Map organizes one’s thinking by showing the logical
relationships between the various thoughts, both pro and con. An
Argument Map also helps the analyst recognize assumptions and
identify gaps in the available knowledge. The visualization of these
relationships makes it easier to think about a complex issue and
serves as a guide for clearly presenting to others the rationale for the
conclusions. Having this rationale available in a visual form helps
both the analyst and recipients of the report focus on the key points
rather than meandering aimlessly or going off on irrelevant tangents.

When used collaboratively, Argument Mapping helps ensure that a variety of views are expressed and considered, helping mitigate the
influence of Groupthink. The visual representation of an argument
also makes it easier to recognize weaknesses in opposing
arguments. It pinpoints the location of any disagreement, serves as
an objective basis for mediating a disagreement, and mitigates
against seeking quick and easy answers to difficult problems (Mental
Shotgun).

An Argument Map is an ideal tool for dealing with issues of cause and effect—and for avoiding the trap that correlation implies
causation (Confusing Causality and Correlation). By laying out all the
arguments for and against a lead hypothesis—and all the supporting
evidence and logic—it is easy to evaluate the soundness of the
overall argument.

The process also helps analysts counter the intuitive trap of Ignoring Base Rate Probabilities by encouraging the analyst to seek
out and record all the relevant facts that support each supposition.
Similarly, the focus on seeking out and recording all data that
support or rebut the key points of the argument makes it difficult for
the analyst to overdraw conclusions from a small sample of data
(Overinterpreting Small Samples) or to continue to hold to an
analytic judgment when confronted with a mounting list of evidence
that contradicts the initial conclusion (Rejecting Evidence).
The Method
An Argument Map starts with a hypothesis—a single-sentence
statement, judgment, or claim about which the analyst can, in
subsequent statements, present general arguments and detailed
evidence, both pro and con. Boxes with arguments are arrayed
hierarchically below this statement; these boxes are connected with
arrows. The arrows signify that a statement in one box is a reason to
believe, or not to believe, the statement in the box to which the arrow
is pointing. Different types of boxes serve different functions in the
reasoning process, and boxes use some combination of color-
coding, icons, shapes, and labels so that one can quickly distinguish
arguments supporting a hypothesis from arguments opposing it.
Figure 7.9 is a simple example of Argument Mapping, showing some
of the arguments bearing on the assessment that North Korea has
nuclear weapons.

These are the specific steps involved in constructing a generic Argument Map:

Write down the lead hypothesis—a single-sentence statement, judgment, or claim at the top of the argument tree.

Draw a set of boxes below this initial box and list the key reasons why the statement is true along with the key objections to the statement.

Use green lines to link the reasons to the primary claim or other conclusions they support.

Use green lines to connect evidence that supports the key reason. (Hint: State the reason and then ask yourself, “Because?” The answer should be the evidence you are seeking.)

Identify any counterevidence that is inconsistent with the reason. Use red lines to link the counterevidence to the reasons they contradict.

Identify any objections or challenges to the primary claim or key conclusions. Use red lines to connect the objections to the primary claim or key conclusions.

Identify any counterevidence that supports the objections or challenges. Use red lines to link the counterevidence to the objections or challenges it supports.

Specify rebuttals, if any, with orange lines. An objection, challenge, or counterevidence that does not have an orange-line rebuttal suggests a flaw in the argument.

Evaluate the argument for clarity and completeness, ensuring that red-lined opposing claims and evidence have orange-line rebuttals. If all the reasons can be rebutted, then the argument is without merit.
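
One way to see how these steps fit together is to record the map as a simple tree and then check the condition in the last two steps: that every red-lined objection or piece of counterevidence has an orange-line rebuttal. The minimal Python sketch below uses hypothetical placeholder statements and is not the bCisive software shown in Figure 7.9.

```python
# Minimal Argument Map as a tree: "reason"/"evidence" = green links,
# "objection" = red links, "rebuttal" = orange links. Statements are placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    text: str
    kind: str                      # "claim", "reason", "evidence", "objection", "rebuttal"
    children: List["Node"] = field(default_factory=list)

claim = Node("Lead hypothesis (single-sentence claim)", "claim", [
    Node("Key reason 1", "reason", [
        Node("Supporting evidence for reason 1", "evidence"),
        Node("Counterevidence against reason 1", "objection"),   # no rebuttal below it
    ]),
    Node("Objection to the claim", "objection", [
        Node("Rebuttal to the objection", "rebuttal"),
    ]),
])

def unrebutted(node: Node) -> List[str]:
    """Return objections or counterevidence with no rebuttal beneath them."""
    flagged = []
    if node.kind == "objection" and not any(c.kind == "rebuttal" for c in node.children):
        flagged.append(node.text)
    for child in node.children:
        flagged.extend(unrebutted(child))
    return flagged

print(unrebutted(claim))   # -> ['Counterevidence against reason 1'] (a flaw to address)
```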
Potential Pitfalls
Argument Mapping is a challenging skill. Training and practice are
required to use the technique properly and to gain its benefits.
Detailed instructions for effective use of this technique are available
at the website listed below under “Origins of This Technique.”
Assistance by someone experienced in using the technique is
necessary for first-time users. Commercial software and freeware
are available for various types of Argument Mapping. In the absence
of software, using a self-stick note to represent each box in an
Argument Map drawn on a whiteboard can be helpful, as it is easy to
move the self-stick notes around as the map evolves and changes.
Origins of This Technique
The use of Argument Mapping goes back to the early nineteenth century. In the early twentieth century,
John Henry Wigmore pioneered its use for legal argumentation. The availability of computers to create
and modify Argument Maps in the later twentieth century prompted broader interest in Argument
Mapping in Australia for use in a variety of analytic domains. The short description here is based on
material in the Austhink website: http://www.austhink.com/critical/pages/argument_mapping.html.


Figure 7.9 Argument Mapping: Does North Korea Have Nuclear Weapons?
Source: Diagram produced using the bCisive Argument Mapping software from Austhink, www.austhink.com.
NOTES
1. See the discussion in chapter 2 contrasting the characteristics of
System 1, or intuitive thinking, with System 2, or analytic thinking.

2. Karl Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959).

3. Stuart K. Card, “The Science of Analytical Reasoning,” in Illuminating the Path: The Research and Development Agenda for Visual Analytics, eds. James J. Thomas and Kristin A. Cook (Richland, WA: National Visualization and Analytics Center, Pacific Northwest National Laboratory, 2005), https://pdfs.semanticscholar.org/e6d0/612d677199464af131c16ab0fa657d6954f2.pdf

4. See Popper, The Logic of Scientific Discovery.

5. A more detailed description of Te@mACH® can be found on the Software tab at http://www.globalytica.com. The software is in the process of being rehosted in 2019.

6. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson Associates, LLC, Reston, VA, 2007).

7. Richards J. Heuer Jr., “Cognitive Factors in Deception and Counterdeception,” in Strategic Military Deception, eds. Donald C. Daniel and Katherine L. Herbig (New York: Pergamon Press, 1982).

8. Heuer, “Cognitive Factors in Deception and Counterdeception”; Michael I. Handel, “Strategic and Operational Deception in Historical Perspective,” in Strategic and Operational Deception in the Second World War, ed. Michael I. Handel (London: Frank Cass, 1987).
Descriptions of Images and Figures
Back to Figure

Data from the timeline are as follows. February 7: Range instrumentation radar first active. February 12: Airframes observed on ground transport.
February 13: TELs observed at suspected launch site. February 16:
Transporters observed at launch site. February 24: Telemetry first active.
February 28: Military establishes communication links. March 2: Missiles
observed on training pads. March 11: Propellant handling activity
observed. March 13: Azimuth markers observed on launch pads. March
18: Transporters moved to launch area. March 23: Airframe revealed.
April 1: Propellant loading underway. April 3: Navigational closure area
announced, and TEL and equipment transloading. April 5: Military assets
deployed to support launch. April 6: Missiles launched.

Back to Figure

The cells in each row show the impact of the variable represented by
that row on each of the variables listed across the top of the matrix. The
cells in each column show the impact of each variable listed down the
left side of the matrix on the variable represented by the column.

Variables 2 and 4 in the cross-impact matrix have the greatest effect on the other variables, while variable 6 has the most negative effect.

             Variable 1        Variable 2        Variable 3        Variable 4        Variable 5        Variable 6
Variable 1   Nil               Neutral           Positive          Neutral           Strong negative   Neutral
Variable 2   Neutral           Nil               Negative          Strong positive   Positive          Positive
Variable 3   Strong positive   Negative          Nil               Positive          Neutral           Negative
Variable 4   Neutral           Strong positive   Neutral           Nil               Positive          Negative
Variable 5   Strong negative   Positive          Neutral           Positive          Nil               Neutral
Variable 6   Negative          Positive          Strong negative   Negative          Negative          Nil

Back to Figure

An idea is generated by a group, which performs structured brainstorming to produce 1 to 3 alternative hypotheses. A list of possible alternative
hypotheses leads to idea evaluation and consolidation, such as
formation of affinity groups 1 through 4, and their related hypotheses.
The final list consists of hypotheses. Text for the group reads, “Is the
group sufficiently diverse?” Text for structured brainstorming reads,
“Prompt creativity by using situational logic, historical analogies, and
theory.” Text for the list of possible hypotheses reads, “Does this initial
list take into account all the key forces and factors?” Text for the affinity
group reads, “Create groups of similar hypotheses. Ask if the opposite
could be true to generate new ideas.” Text for the hypotheses list reads,
“Are the hypotheses mutually exclusive? Is the list comprehensive? Did
you clarify each hypothesis by asking who, what, how, when, where, and
why?”

Back to Figure
The illustration shows a quadrant. The top side is labeled centralized;
the right side is labeled religious; the bottom side is labeled
decentralized; and the left side is labeled secularized. H1: Centralized
state and secularized society. H2: Centralized state and religious society.
H3: Decentralized state and secularized society. H4: Decentralized state
and religious society.

Back to Figure

The matrix lists relevant information and hypotheses, buttons for rating credibility, and a column for listing notes, assumptions, and credibility justification. At the top of the matrix, a color legend shows the level of disagreement. The columns include a tool for reordering hypotheses from most to least likely. The rows include a tool for moving the most discriminating information to the top of the matrix. The cells include options for accessing chat and viewing analyst ratings. The number of inconsistencies in the hypotheses is also displayed.

Back to Figure

The contention is that North Korea has nuclear weapons. The objection to this is that North Korea does not have the technical capacity to produce enough weapons-grade fissile material. A rebuttal to this is that North Korea was provided key information and technology by Pakistan around 1997. The reasoning for the contention is that North Korea has exploded nuclear weapons in tests. The evidence for this is that North Korea claimed to have exploded test weapons in 2006 and 2009, and that there is seismological evidence of powerful explosions.
CHAPTER 8 REFRAMING TECHNIQUES

8.1 Cause and Effect Techniques

8.1.1 Outside-In Thinking

8.1.2 Structured Analogies

8.1.3 Red Hat Analysis

8.2 Challenge Analysis Techniques

8.2.1 Quadrant Crunching™

8.2.2 Premortem Analysis

8.2.3 Structured Self-Critique

8.2.4 What If? Analysis

8.2.5 High Impact/Low Probability Analysis

8.2.6 Delphi Method

8.3 Conflict Management Techniques

8.3.1 Adversarial Collaboration

8.3.2 Structured Debate

Students of the intelligence profession have long recognized that failure to challenge a consensus
judgment or a well-established mental model has caused most major intelligence failures. The
postmortem analysis of virtually every major U.S. intelligence failure since Pearl Harbor has identified an
analytic mental model or outdated mindset as a key factor contributing to the failure. Appropriate use of
the techniques in this chapter can, however, help mitigate a variety of common cognitive limitations and
improve the analyst’s odds of getting the analysis right.

This record of analytic failures has generated discussion about the “paradox of expertise.”1 Experts can
be the last to recognize the occurrence and significance of change. For example, few specialists on the
Middle East foresaw the events of the Arab Spring, few experts on the Soviet Union foresaw its
collapse, and almost every economist was surprised by the depth of the financial crisis in 2008. Political
analysts in 2016 also failed to forecast the United Kingdom vote to leave the European Union or Donald
Trump’s election to the presidency of the United States.

As we noted in chapter 2, an analyst’s mental model is everything the analyst knows about how things
normally work in a certain environment or a specific scientific field. It tells the analyst, sometimes
subconsciously, what to look for, what is important, and how to interpret what he or she sees. A mental
model formed through education and experience serves an essential function: it is what enables the analyst to routinely provide reasonably good intuitive assessments or estimates about what is happening or likely to happen.
What gets us into trouble is not what we don’t know, it’s what we know for sure that just ain’t so.
—Mark Twain, American author and humorist

The problem is that a mental model that has previously provided accurate assessments and estimates
for many years can be slow to change. New information received incrementally over time is easily
assimilated into one’s existing mental model, so the significance of gradual change over time is easily
missed. It is human nature to see the future as a continuation of the past. Generally, major trends and
events evolve slowly, and the future is often foreseen by skilled intelligence analysts. However, life does
not always work this way. The most significant intelligence failures have been failures to foresee
historical discontinuities, when history pivots and changes direction.

In the wake of Al-Qaeda’s attack on the United States on September 11, 2001, and the erroneous 2002
National Intelligence Estimate on Iraq’s weapons of mass destruction, the U.S. Intelligence Community
came under justified criticism. This prompted demands to improve its analytic methods to mitigate the
potential for future such failures. For the most part, critics, especially in the U.S. Congress, focused on
the need for “alternative analysis” or techniques that challenged conventional wisdom by identifying
potential alternative outcomes. Such techniques carried many labels, including challenge analysis,
contrarian analysis, and Red Cell/Red Team/Red Hat analysis.

U.S. intelligence agencies responded by developing and propagating techniques such as Devil’s
Advocacy, Team A/Team B Analysis, Red Team Analysis, and Team A/Team B Debate. Use of such
techniques had both pluses and minuses. The techniques forced analysts to consider alternative
explanations and explore how their key conclusions could be undermined, but the proposed techniques
have several drawbacks.

In a study evaluating the efficacy of Structured Analytic Techniques, Coulthart notes that techniques
such as Devil’s Advocacy, Team A/Team B Analysis, and Red Teaming are not quantitative and are
more susceptible to the biases found in System 1 Thinking.2 This observation reinforces our view that
this book should give more weight to techniques, such as Analysis of Competing Hypotheses or
Structured Self-Critique, that are based on the more formalized and deliberative rule processes
consistent with System 2 Thinking.

Another concern is that these techniques carry an emotional component. No one wants to be told
they did something wrong, and it is hard not to take such criticism personally. Moreover, when analysts
feel obligated to defend their positions, their key judgments and mindsets tend to become even more
ingrained.

One promising solution to this dilemma has been to develop and propagate Reframing Techniques that
hopefully accomplish the same objective while neutralizing the emotional component. The goal is to find
ways to look at a problem from multiple perspectives while avoiding the emotional pitfalls of an us-
versus-them approach.

A frame is any cognitive structure that guides the perception and interpretation of what one sees. A
mental model of how things normally work can be thought of as a frame through which an analyst sees
and interprets evidence. An individual or a group of people can change their frame of reference, and
thus challenge their own thinking about a problem, simply by changing the questions they ask or
changing the perspective from which they ask the questions. Analysts can use a Reframing Technique
when they need to generate new ideas, when they want to see old ideas from a new perspective, or
when they want to challenge a line of analysis.3 Reframing helps analysts break out of a mental rut by
activating a different set of synapses in their brain.

To understand the power of reframing and why it works, it is necessary to know a little about how the
human brain works. Scientists believe the brain has roughly 100 billion neurons, each analogous to a
computer chip capable of storing information. Each neuron has octopus-like arms called axons and
dendrites. Electrical impulses flow through these arms and are ferried by neurotransmitting chemicals
across the synaptic gap between neurons. Whenever two neurons are activated, the connections, or
synapses, between them are strengthened. The more frequently those same neurons are activated, the
stronger the path between them.

Once a person has started thinking about a problem one way, the same mental circuits or pathways are
activated and strengthened each time the person thinks about it. The benefit of this is that it facilitates
the retrieval of information one wants to remember. The downside is that these pathways become
mental ruts that make it difficult to see the information from a different perspective. When an analyst
reaches a judgment or decision, this thought process is embedded in the brain. Each time the analyst
thinks about it, the same synapses are triggered, and the analyst’s thoughts tend to take the same well-
worn pathway through the brain. Because the analyst keeps getting the same answer every time, she or
he will gain confidence, and often overconfidence, in that answer.

Another way of understanding this process is to compare these mental ruts to the route a skier will cut
looking for the best path down from a mountain top (see Figure 8.0). After several runs, the skier has
identified the ideal path and most likely will remain stuck in the selected rut unless other stimuli or
barriers force him or her to break out and explore new opportunities.

Fortunately, it is easy to open the mind to think in different ways. The techniques described in this
chapter are designed to serve that function. The trick is to restate the question, task, or problem from a
different perspective to activate a different set of synapses in the brain. Each of the applications of
reframing described in this chapter does this in a different way. Premortem Analysis, for example, asks
analysts to imagine themselves at some future point in time, after having just learned that a previous
analysis turned out to be completely wrong. The task then is to figure out how and why it might have
gone wrong. What If? Analysis asks the analyst to imagine that some unlikely event has occurred, and
then to explain how it could have happened along with the implications of the event.

These techniques are generally more effective in a small group than with a single analyst. Their
effectiveness depends in large measure on how fully and enthusiastically participants in the group
embrace the imaginative or alternative role they are playing. Just going through the motions is of limited
value. Practice in using Reframing Techniques—especially Outside-In Thinking, Premortem Analysis,
and Structured Self-Critique—will help analysts become proficient in the fifth habit of the Five Habits of
the Master Thinker: understanding the overarching context within which the analysis is being done.


Figure 8.0 Mount Brain: Creating Mental Ruts

In addition, appropriate use of Reframing Techniques can help mitigate a variety of common cognitive
limitations and improve the analyst’s odds of getting the analysis right. They are particularly useful in
minimizing Mirror Imaging or the tendency to assume others will act in the same way we would, given
similar circumstances. They guard against the Anchoring Effect, which is accepting a given value of
something unknown as a proper starting point for generating an assessment. Reframing Techniques are
especially helpful in countering the intuitive traps of focusing on a narrow range of alternatives
representing only modest change (Expecting Marginal Change) and continuing to hold to a judgment
when confronted with a mounting list of contradictory evidence (Rejecting Evidence).

This chapter discusses three families of Reframing Techniques:

Three techniques for assessing cause and effect: Outside-In Thinking, Structured Analogies, and
Red Hat Analysis

Six techniques for challenging conventional wisdom or the group consensus: Quadrant
Crunching™, Premortem Analysis, Structured Self-Critique, What If? Analysis, High Impact/Low
Probability Analysis, and the Delphi Method

Two techniques for managing conflict: Adversarial Collaboration and Structured Debate
OVERVIEW OF TECHNIQUES
Outside-In Thinking broadens an analyst’s thinking about the
forces that can influence an issue of concern. The technique
prompts the analyst to reach beyond his or her specialty area to
consider broader social, organizational, economic, environmental,
political, legal, military, technological, and global forces or trends that
can affect the topic under study.

Structured Analogies applies analytic rigor to reasoning by
analogy. This technique requires that the analyst systematically
compare the topic at hand with multiple potential analogies before
selecting the one for which the circumstances are most similar. Most
analysts are comfortable using analogies to organize their thinking or
make forecasts as, by definition, they contain information about what
has happened in similar situations in the past. People often
recognize patterns and then consciously take actions that were
successful in a previous experience or avoid actions that previously
were unsuccessful. However, analysts need to avoid the strong
tendency to fasten onto the first analogy that comes to mind—
particularly one that supports their prior view about an issue.

Red Hat Analysis is a useful technique for trying to perceive threats
and opportunities as others see them. Intelligence analysts
frequently endeavor to forecast the behavior of a foreign leader,
group, organization, or country. In doing so, they need to avoid the
common error of Mirror Imaging, which is the natural tendency of
analysts to assume that others think and perceive the world in the
same way they do. Business analysts can fall into the same trap
when projecting the actions of their competitors. Red Hat Analysis is
of limited value without significant understanding of the culture of the
target company or country and the decision-making styles of the
people involved.
Quadrant Crunching™ uses key assumptions and their opposites
as a starting point for systematically generating multiple alternative
outcomes. The technique forces analysts to rethink an issue from a
broad range of perspectives and systematically question all the
assumptions that underlie their lead hypothesis. It is most useful for
ambiguous situations for which little information is available. Two
versions of the technique have been developed: Classic Quadrant
Crunching™ to avoid surprise and Foresight Quadrant Crunching™
to develop a comprehensive set of potential alternative futures. For
example, analysts might use Classic Quadrant Crunching™ to
identify the many ways terrorists might attack a water supply. They
would use Foresight Quadrant Crunching™ to generate multiple
scenarios of how the conflict in Syria might evolve over the next five
years.

Premortem Analysis reduces the risk of analytic failure by
identifying and analyzing a potential failure before it occurs. Imagine
that several months or years have passed, and the just-completed
analysis has turned out to be spectacularly wrong. Then imagine
what could have caused it to be wrong. Looking back from the future
to explain something that has happened is much easier than looking
into the future to forecast what will happen. This approach to
analysis helps identify problems one has not foreseen.

Structured Self-Critique is a procedure that a small team or group
uses to identify weaknesses in its own analysis. All team or group
members don a hypothetical black hat and become critics rather
than supporters of their own analysis. From this opposite
perspective, they respond to a list of questions about sources of
uncertainty, the analytic processes used, critical assumptions,
diagnosticity of evidence, anomalous evidence, and information
gaps. They also consider changes in the broad environment in which
events are happening, alternative decision models, current cultural
expertise, and indicators of possible deception. Looking at the
responses to these questions, the team strengthens its analysis by
addressing uncovered faults and reassesses its confidence in its
overall judgment.
What If? Analysis is an important technique for alerting decision
makers to an event that could happen, or is already happening, even
if it may seem unlikely at the time. It is a tactful way of suggesting to
decision makers the possibility that their understanding of an issue
may be wrong. What If? Analysis serves a function like Foresight
analysis—it creates an awareness that prepares the mind to
recognize early signs of a significant change, and it may enable a
decision maker to plan for that contingency. The analyst imagines
that an event has occurred and then considers how the event could
have unfolded.

High Impact/Low Probability Analysis is used to sensitize analysts
and decision makers to the possibility that a low-probability event
might happen. It should also stimulate them to think about measures
that could deal with the danger or to exploit the opportunity if the
event occurs. The analyst assumes the event has occurred, and
then figures out how it could have happened and what the
consequences might be.

Delphi Method is a procedure for obtaining ideas, judgments, or
forecasts electronically from a geographically dispersed panel of
experts. It is a time-tested, extremely flexible procedure that can be
used on any topic to which expert judgment applies. The technique
can identify divergent opinions that challenge conventional wisdom
and double-check research findings. If two analyses from different
analysts who are using different techniques arrive at the same
conclusion, confidence in the conclusion is increased or at least
warranted. If the two conclusions disagree, this is also valuable
information that may open new avenues of research.

Adversarial Collaboration is an agreement between opposing
parties on how they will work together to resolve their differences,
gain a better understanding of why they differ, or collaborate on a
joint paper to explain the differences. Six approaches to
implementing Adversarial Collaboration are presented in this
chapter, including variations of three techniques described
elsewhere in this book—Key Assumptions Check, Analysis of
Competing Hypotheses, and Argument Mapping—and three new
techniques—Mutual Understanding, Joint Escalation, and the
Nosenko Approach.

Structured Debate is a planned debate of opposing points of view
on a specific issue in front of a “jury of peers,” senior analysts, or
managers. As a first step, each side writes up the best possible
argument for its position and passes this summation to the opposing
side. The next step is an oral debate that focuses on refuting the
other side’s arguments rather than further supporting one’s own
arguments. The goal is to elucidate and compare the arguments
against each side’s argument. If neither argument can be refuted,
perhaps both merit some consideration in the analytic report.
8.1 CAUSE AND EFFECT TECHNIQUES
Attempts to explain the past and forecast the future are based on an
understanding of cause and effect. Such understanding is difficult,
because the kinds of variables and relationships studied by the
intelligence analyst are, in most cases, not amenable to the kinds of
empirical analysis and theory development common in academic
research. The best the analyst can do is to make an informed
judgment, but such judgments depend upon the analyst’s subject-
matter expertise and reasoning ability and are vulnerable to various
cognitive pitfalls and fallacies of reasoning.

One of the most common causes of intelligence failures is the
unconscious assumption that other countries and their leaders will
act as we would in similar circumstances, a form of Mirror Imaging.
Two related pitfalls are the tendency to assume that the results of an
opponent’s actions are what the opponent intended and an analyst’s
reluctance to accept the reality that simple mistakes, accidents,
unintended consequences, coincidences, or small causes can have
large effects. Perceptions of causality are partly determined by
where one’s attention is directed; as a result, information that is
readily available, salient, or vivid is more likely to be perceived as
causal than information that is not. Cognitive limitations and common
errors in the perception of cause and effect are discussed in greater
detail in Richards J. Heuer Jr.’s Psychology of Intelligence Analysis
(Reston, VA: Pherson Associates, LLC, 2007).

I think we ought always to entertain our opinions with some
measure of doubt. I shouldn’t wish people dogmatically to
believe any philosophy, not even mine.
—Bertrand Russell, English philosopher
There is no single, easy technique for mitigating the pitfalls involved
in making causal judgments because analysts usually lack the
information they need to be certain of a causal relationship.
Moreover, the complex events that are the focus of intelligence
analysis often have multiple causes that interact with one another.

Psychology of Intelligence Analysis describes three principal
strategies that intelligence analysts use to make judgments to
explain the cause of current events or forecast what might happen in
the future:

Applying theory. Basing judgments on the systematic study of
many examples of the same phenomenon. Theories or models
often based on empirical academic research are used to explain
how and when certain types of events normally occur. Many
academic models are too generalized to be applicable to the
unique characteristics of most intelligence problems. Many
others involve quantitative analysis that is beyond the domain of
Structured Analytic Techniques as defined in this book.
However, a conceptual model that simply identifies relevant
variables and the diverse ways they might combine to cause
specific outcomes can be a useful template for guiding collection
and analysis of some common types of problems. Outside-In
Thinking can be used to explain current events or forecast the
future in this way.

Comparison with historical analogies. Combining an
understanding of the facts of a specific situation with knowledge
of what happened in similar situations either in one’s personal
experience or historical events. The Structured Analogies
technique adds rigor to this process for understanding what has
occurred and Analysis by Contrasting Narratives in chapter 9
provides insight into understanding how the future might evolve.

Situational logic. Making expert judgments based on the
known facts and an understanding of the underlying forces at
work at a given time and place. When an analyst is working with
incomplete, ambiguous, and possibly deceptive information,
these expert judgments usually depend upon assumptions
about capabilities, intent, or the normal workings of things in the
country of concern. Red Hat Analysis has proven highly
effective when seasoned analysts use it to anticipate the actions
of dictators or autocratic regimes.
8.1.1 Outside-In Thinking
Outside-In Thinking identifies the broad range of global, political,
environmental, technological, economic, or social forces and trends
that are outside the analyst’s area of expertise but that may
profoundly affect the issue of concern. Many analysts tend to think
from the inside out, focused on factors they are familiar with in their
specific area of responsibility. Outside-In Thinking reverses this
process as illustrated in Figure 8.1.1a. Whereas an analyst usually
works from the data at hand outward to explain what is happening,
Outside-In Thinking spurs the analyst to first consider how external
factors such as an adversary’s intent or new advances in artificial
intelligence (AI) could have an impact on the situation, thereby
enriching the analysis.
When to Use It
This technique is most useful in the early stages of an analytic process when analysts need to identify
all the critical factors that might explain an event or could influence how a situation will develop. It
should be part of the standard process for any project that analyzes potential future outcomes, for this
approach covers the broader environmental context from which surprises and unintended
consequences often come.


Figure 8.1.1A An Example of Outside-In Thinking


Source: Pherson Associates, LLC, 2019.

Outside-In Thinking also is useful when assembling a large database, and analysts want to ensure they
have not forgotten important fields in the database architecture. For most analysts, important categories
of information (or database fields) are easily identifiable early in a research effort, but invariably one or
two additional fields emerge after an analyst or group of analysts is well into a project. This forces the
analyst or group to go back and review all previous reporting and input the additional data. Typically, the
overlooked fields are in the broader environment over which the analysts have little control. By applying
Outside-In Thinking, analysts can better visualize the entire set of data fields early in the research effort.
Value Added
Most analysts focus on familiar factors within their field of specialty,
but we live in a complex, interrelated world where events in our little
niche of that world are often affected by forces in the broader
environment over which we have no control. The goal of Outside-In
Thinking is to help analysts see the entire picture, not just the part of
the picture with which they are already familiar.

Outside-In Thinking reduces the risk of missing important variables
early in the analytic process because of the tendency to focus on a
narrow range of alternatives representing only incremental or
marginal change in the current situation. It encourages analysts to
rethink a problem or an issue while employing a broader conceptual
framework. The technique is illustrated in Figure 8.1.1b. By casting
their net broadly at the beginning, analysts are more likely to see an
important dynamic or to include a relevant alternative hypothesis.
The process can provide new insights and uncover relationships that
were not evident from the intelligence reporting. In doing so, the
technique helps analysts think in terms that extend beyond day-to-
day reporting. It stimulates them to address the absence of
information and identify more fundamental forces and factors that
should be considered.
The Method

Generate a generic description of the problem or phenomenon under study.

Figure 8.1.1B Inside-Out Analysis versus Outside-In Thinking


Source: Pherson Associates, LLC, 2019.

Form a group to brainstorm all the key forces and factors that could affect the topic but over which
decision makers or other individuals can exert little or no influence, such as globalization, the
emergence of new technologies, historical precedent, and the growing role of social media.

Employ the mnemonic STEMPLES+ to trigger new ideas (Social, Technical, Economic, Military,
Political, Legal, Environmental, and Security, plus other factors such as Demographic, Religious, or
Psychological) and structure the discussion (a simple sketch of this step appears after the method steps).

Determine whether sufficient expertise is available for each factor or category.

Assess specifically how each of these forces and factors might affect the problem.

Ascertain whether these forces and factors have an impact on the issue at hand, basing your
conclusion on the available evidence.

Generate new intelligence collection tasking or research priorities to fill in information gaps.
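
A minimal sketch of how the brainstorming and expertise-check steps above could be captured is shown below. The STEMPLES+ category names come from the text; the dictionary structure, the example factors, and the expertise flags are illustrative assumptions rather than part of the technique.

```python
# Hypothetical sketch: organizing an Outside-In Thinking brainstorm around STEMPLES+.
# Category names are from the text; example factors and expertise flags are invented.

STEMPLES_PLUS = [
    "Social", "Technical", "Economic", "Military", "Political",
    "Legal", "Environmental", "Security",
    "Demographic", "Religious", "Psychological",  # the "+" factors mentioned in the text
]

def review_brainstorm(factors_by_category, expertise_by_category):
    """Flag STEMPLES+ categories with no identified forces or no available expertise."""
    for category in STEMPLES_PLUS:
        factors = factors_by_category.get(category, [])
        has_expert = expertise_by_category.get(category, False)
        if not factors:
            print(f"[gap] No external forces identified for: {category}")
        elif not has_expert:
            print(f"[tasking] Factors noted for {category} but no expertise on hand: {factors}")

# Example use with made-up inputs
factors = {
    "Technical": ["advances in AI", "spread of commercial drones"],
    "Economic": ["currency instability"],
}
expertise = {"Technical": True, "Economic": False}
review_brainstorm(factors, expertise)
```

The flagged gaps map directly to the final step of the method: categories with identified forces but no available expertise become candidates for new collection tasking or research priorities.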
Relationship to Other Techniques
Outside-In Thinking is essentially the same as a business analysis
technique that goes by different acronyms such as STEEP,
STEEPLED, PEST, or PESTLE. For example, PEST is an acronym
for Political, Economic, Social, and Technological; STEEPLED also
includes Legal, Ethical, and Demographic. Military intelligence
organizations often use the mnemonic PMESII, which stands for
Political, Military, Economic, Social, Information, and Infrastructure.4
All require the analysis of external factors that may have either a
favorable or unfavorable influence on an organization or the
phenomenon under study.
Origins of This Technique
This technique has been used in planning and management
environments to ensure identification of outside factors that might
affect an outcome. The Outside-In Thinking approach described here
is from Randolph H. Pherson, Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019).
8.1.2 Structured Analogies
Analogies compare two situations to elicit and provoke ideas, help solve problems, and suggest history-
tested indicators. An analogy can capture shared attributes or it can be based on similar relationships or
functions being performed (see Figure 8.1.2). The Structured Analogies technique applies increased
rigor to analogical reasoning by requiring that the issue of concern be compared systematically with
multiple analogies.


Figure 8.1.2 Two Types of Structured Analogies


Source: Pherson Associates, LLC, 2019.
When to Use It
In daily life, people recognize patterns of events or similar situations,
then consciously take actions that were successful in a previous
experience or avoid actions that were previously unsuccessful.
Similarly, analysts infer courses of action from past, similar
situations, often turning to analogical reasoning in unfamiliar or
uncertain situations where the available information is inadequate for
any other approach. However, analysts should consider the time
required for this structured approach and may choose to use it only
when the cost of being wrong is high.

Structured Analogies also is one of many structured techniques
analysts can use to generate a robust set of indicators. If history
reveals sets of events or actions that foretold what was about to
occur in the past, they could prove to be valuable lead indicators for
anticipating whether a similar event is likely to unfold in the future.

One of the most widely used tools in intelligence analysis is
the analogy. Analogies serve as the basis for constructing
many predictive models, are the basis for most hypotheses,
and rightly or wrongly, underlie many generalizations about
what the other side will do and how they will go about doing
it.
—Jerome K. Clauser and Sandra M. Weir, Intelligence Research
Methodology, Defense Intelligence School (1975)
Value Added
Reasoning by analogy helps achieve understanding by reducing the
unfamiliar to the familiar. In the absence of data required for a full
understanding of the current situation, analogical reasoning may be
the only forecasting option. Using the Structured Analogies
technique helps analysts avoid the tendency to fasten quickly on a
single analogy and then focus only on evidence that supports the
similarity of that analogy.

Structured Analogies is one technique for which there has been an
empirical study of its effectiveness. A series of experiments
compared Structured Analogies with unaided judgments in predicting
the decisions made in eight conflict situations. These were difficult
forecasting problems, and the 32 percent accuracy of unaided
experts was only slightly better than chance. In contrast, 46 percent
of the forecasts made by using the Structured Analogies process
described here were accurate. Among the experts who were
independently able to think of two or more analogies and who had
direct experience with their closest analogy, 60 percent of the
forecasts were accurate.5

Structured Analogies help analysts avoid the mistake of focusing
attention on one vivid scenario while ignoring other possibilities
(Vividness Bias), seeing patterns in random events as systematic
(Desire for Coherence and Uncertainty Reduction), and selecting the
first answer that appears “good enough” (Satisficing). It also helps
protect analysts from assuming the same dynamic is in play when, at
first glance, something seems to accord with their past experiences
(Projecting Past Experiences), assuming an event was more certain
to occur than actually was the case (Assuming Inevitability), and
believing that actions are the result of centralized patterns or
direction and finding patterns where they do not exist (Presuming
Patterns).
The Method
We recommend training in this technique before using it. Such a
training course is available at
http://www.academia.edu/1070109/Structured_Analogies_for_Forecasting.

Describe the issue and the judgment or decision that needs to
be made.

Identify experts who are familiar with the problem and have a
broad background that enables them to identify analogous
situations. Aim for at least five experts with varied backgrounds.

Brainstorm as many analogies as possible without focusing too
strongly on how similar they are to the current situation. Various
universities and international organizations maintain databases
to facilitate this type of research. For example, the
Massachusetts Institute of Technology (MIT) maintains its
Cascon System for Analyzing International Conflict, a database
of 85 post–World War II conflicts that are categorized and coded
to facilitate their comparison with current conflicts of interest.
The University of Maryland maintains the International Crisis
Behavior Project database covering 452 international crises
between 1918 and 2006. Each case is coded for eighty-one
descriptive variables.

Review the list of potential analogies and agree on which ones
should be examined further.

Develop a tentative list of categories for comparing the
analogies to determine which analogy is closest to the issue in
question. For example, the MIT conflict database codes each
case according to the following broad categories as well as finer
subcategories: previous or general relations, military-strategic,
international organization (United Nations, legal, public opinion),
ethnic, economic/resources, internal politics of the sides,
communication and information, and actions in disputed area.

Write an account of each selected analogy, with equal focus on
those aspects of the analogy that are similar and those that are
different from the situation in play. A sophisticated approach to
analogical reasoning examines the nature of the similarities and
traces these critical aspects back to root causes. A good
analogy goes beyond superficial similarity to examine deep
structure. Each write-up, distributed among the experts, can be
posted electronically so each member of the group can read and
comment on it.

Evaluate the analogies by asking each expert to rate the similarity of each to the issue of concern on
a scale of 0 to 10, where 0 = not at all similar and 10 = very similar (a short scoring sketch follows
these steps).

Relate the highest-ranked analogies to the issue of concern by
discussing the results of the evaluation and making a forecast
for the current issue. The forecast may be the same as the
outcome of the most-similar analogy. Alternatively, identify
several other possible outcomes based on the diverse outcomes
of analogous situations.

When appropriate, use the analogous cases to identify drivers
or policy actions that might influence the outcome of the current
situation.

If using Structured Analogies to generate indicators, consider the
highest-ranking analogies and ask if the previous actions and events
are happening now or have happened. Also ask what actions have
not happened or are not happening and assess the implications.
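
The rating and ranking steps above lend themselves to a simple tally. The short Python sketch below averages each expert's 0-to-10 similarity score and ranks the candidate analogies; the analogy names and scores are invented for illustration and are not drawn from the technique itself.

```python
# Minimal sketch of the evaluation step: each of (at least) five experts scores each
# candidate analogy from 0 (not at all similar) to 10 (very similar); analogies are
# then ranked by average score. All names and numbers below are placeholders.

from statistics import mean

ratings = {
    "Analogy A (1990s regional conflict)": [7, 8, 6, 7, 9],
    "Analogy B (post-coup transition)":    [4, 5, 6, 3, 5],
    "Analogy C (earlier border dispute)":  [8, 6, 7, 8, 7],
}

ranked = sorted(ratings.items(), key=lambda item: mean(item[1]), reverse=True)

for analogy, scores in ranked:
    print(f"{mean(scores):4.1f}  {analogy}")
```

The highest-ranked analogy, or the spread of outcomes across the top few, then feeds the forecast discussion described in the method.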
Potential Pitfalls
Noticing shared characteristics leads most people to an analogy, but
logical reasoning necessitates considering conditions, qualities, or
circumstances that are dissimilar between the two phenomena. A
sophisticated approach examines the nature of the similarities and
traces these critical aspects back to root causes. This should be
standard practice in all reasoning by analogy and especially in those
cases when one cannot afford to be wrong.

When resorting to an analogy, [people] tend to seize upon the
first that comes to mind. They do not research more widely.
Nor do they pause to analyze the case, test its fitness, or
even ask in what ways it might be misleading.
—Ernest R. May, “Lessons” of the Past: The Use and Misuse of
History in American Foreign Policy (1975)

Additionally, many analogies are used loosely and have a broad
impact on the thinking of both decision makers and the public at
large—for better or worse. One role for analysis is to take analogies
that are already being used by others and subject these analogies to
rigorous examination to prevent assumption-based decision making.
Origins of This Technique
Structured Analogies is described in greater detail in Kesten C.
Green and J. Scott Armstrong, “Structured Analogies for
Forecasting,” in International Journal of Forecasting (2007), and
www.forecastingprinciples.com/paperpdf/Structured_Analogies.pdf.

We recommend that analysts considering the use of this technique
read Richard D. Neustadt and Ernest R. May, “Unreasoning from
Analogies,” chapter 4 in Thinking in Time: The Uses of History for
Decision Makers (New York: Free Press, 1986). We also suggest
Giovanni Gavetti and Jan W. Rivkin, “How Strategists Really Think:
Tapping the Power of Analogy,” Harvard Business Review (April
2005).
8.1.3 Red Hat Analysis
Red Hat Analysis anticipates the behavior of another individual or
group by trying to replicate how they think. Intelligence analysts
frequently endeavor to forecast the actions of an adversary or a
competitor. In doing so, they must take care to avoid the common
error of Mirror Imaging, the natural tendency to assume that others
think and perceive the world in the same way as the analyst does.
Red Hat Analysis6 is a useful technique for trying to perceive threats
and opportunities as others see them. This technique alone,
however, is of limited value without significant understanding of the
culture of the other country or company and the decision-making
style of the people involved.

To see the options faced by foreign leaders as these leaders
see them, one must understand their values and assumptions
and even their misperceptions and misunderstandings.
Without such insight, interpreting foreign leaders’ decisions or
forecasting future decisions is often little more than partially
informed speculation. Too frequently, behavior of foreign
leaders appears “irrational” or “not in their own best interest.”
Such conclusions often indicate analysts have projected
American values and conceptual frameworks onto the foreign
leaders and societies, rather than understanding the logic of
the situation as it appears to them.
—Richards J. Heuer Jr., Psychology of Intelligence Analysis (2007)
When to Use It
The chances of a Red Hat Analysis being accurate are better when
one is trying to foresee the behavior of a specific person who has the
authority to make decisions. Authoritarian leaders as well as small,
cohesive groups, such as terrorist cells, are obvious candidates for
this type of analysis. In contrast, the chances of making an accurate
forecast about an adversary’s or a competitor’s decision are
appreciably lower when the decision is constrained by a legislature
or influenced by conflicting interest groups. In law enforcement, Red
Hat Analysis is useful in simulating the likely behavior of a criminal or
a drug lord.
Value Added
There is a great deal of truth in the maxim, “Where you stand
depends on where you sit.” Red Hat Analysis requires the analyst to
adopt—and make decisions consonant with—the culture of a foreign
leader, cohesive group, criminal, or competitor. This conscious effort
to imagine the situation as the target perceives it helps the analyst
gain a different and usually more accurate perspective on a problem
or issue. Reframing the problem in this way typically changes the
analyst’s perspective from that of an analyst observing and
forecasting an adversary’s behavior to that of a leader who must
make a difficult decision within that operational culture. This
reframing process often introduces new and different stimuli that
might not have been factored into a traditional analysis, such as a
target’s familial ties.

The technique introduces more human factors into the analysis, such
as “Who can I count on (e.g., do I have relatives, friends, or
business associates) to help me out?” “Is that operation within my
capabilities?” “What are my supporters expecting from me?” “Do I
really need to make this decision now?” and “What are the
consequences of making a wrong decision?”

In addition to protecting the analyst against the bias of Mirror
Imaging, Red Hat Analysis has proved to be effective in combating
the influence of accepting the given value of things unknown as
proper starting points for predicting someone’s future course of
action (Anchoring Effect) and predicting rare events based on weak
evidence or evidence that easily comes to mind (Associative
Memory).

Red Hat Analysis helps analysts guard against the common pitfall of
Overrating Behavioral Factors (also referred to as Fundamental
Attribution Error) by dampening the tendency to attribute the
behavior of other people, organizations, or governments to the
nature of the actor and to underestimate the influence of situational
factors. Conversely, people tend to see their own behavior as
conditioned almost entirely by the situation in which they find
themselves. We seldom see ourselves as a bad person, but we often
see malevolent intent in others.7

Red Hat Analysis also protects analysts from falling prey to the
practitioners’ traps of not addressing the impact of the absence of
information on analytic conclusions (Ignoring the Absence of
Information) and continuing to hold to a judgment when confronted
with a mounting list of contradictory evidence (Rejecting Evidence).
The Method

Gather a group of experts with in-depth knowledge of the
operating environment and target’s personality, motives, and
style of thinking. If possible, try to include people who are well-
grounded in the target’s culture, speak the same language,
share the same ethnic background, or have lived in the target’s
country.

Establish a baseline by presenting the experts with a situation or
a stimulus and ask them what they would do in this situation.
For example, you might ask for a response to this situation: “The
United States has just imposed sanctions on your country. How
would you react?” Or, “We are about to launch a new product.
How would you react if you were a competitor?” The reason for
first asking the experts how they would react is to assess
whether the adversary is likely to react differently than the
analyst would.8

After the experts have articulated how they would have
responded or acted, ask them to explain why they think they
would behave that way. Ask the experts to list what core values
or core assumptions were motivating their behavior or actions.
Again, this step establishes a baseline for assessing why the
adversary is likely to react differently than the analyst would.

Once they can explain in a convincing way why they chose to
act the way they did, ask the experts to put themselves in the
shoes of the target, adversary, or competitor and simulate how
the target would respond. At this point, the experts should ask
themselves, “Does our target share our values or motives or
methods of operation?” If not, then how would those differences
lead the target to act in ways the analysts might not have
anticipated before engaging in this exercise? To gain cultural
expertise that might otherwise be lacking, consider using the
Delphi Method to elicit the expertise of geographically
distributed experts.

In presenting the results, describe the considered alternatives
and the rationale for selecting the path the assembled
participants think the person or group is most likely to take.
Consider other less conventional means of presenting the
results of your analysis, such as the following:

Describing a hypothetical conversation in which the leader
and other players talk in first person.

Drafting a document (a set of instructions, military orders,
policy paper, or directives) that the target, adversary, or
competitor would likely generate.

Figure 8.1.3 shows how one might use Red Hat Analysis to catch
bank robbers.
Potential Pitfalls
Forecasting human decisions or the outcome of a complex organizational process is difficult in the best
of circumstances. For example, how successful would you expect to be in forecasting the difficult
decisions to be made by the U.S. president or even your local mayor? It is even more difficult when
dealing with a foreign culture with sizeable gaps in the available information. Mirror Imaging is hard to
avoid because, in the absence of a thorough understanding of the foreign situation and culture, your
own perceptions appear to be the only reasonable way to look at the problem.

A key first step in avoiding Mirror Imaging is to establish how you would behave and the reasons why.
After establishing this baseline, the analyst then asks if the adversary would act differently and why. Is
the adversary motivated by different stimuli, or does the adversary hold different core values? The task
of Red Hat Analysis then becomes illustrating how these differences would result in different policies or
behaviors.

A common error in our perceptions of the behavior of other people, organizations, or governments is to
fall prey to the heuristic of Overrating Behavioral Factors, which is likely to be even more common when
assessing the behavior of foreign leaders or groups. This error is especially easy to make when one
assumes that the target has malevolent intentions, but our understanding of the pressures on that actor
is limited.

Figure 8.1.3 Using Red Hat Analysis to Catch Bank Robbers


Source: Eric Hess, Senior Biometric Product Manager, MorphoTrak, Inc. From an unpublished paper, “Facial Recognition
for Criminal Investigations,” delivered at the International Association of Law Enforcement Intelligence Analysts, Las
Vegas, NV, 2009. Reproduced with permission.

Analysts should always try to see the situation from the other side’s perspective, but if a sophisticated
grounding in the culture and operating environment of their subject is lacking, they will often be wrong.
Recognition of this pitfall should prompt analysts to consider using words such as “possibly” and “could
happen” rather than “likely” or “probably” when reporting the results of Red Hat Analysis.
Relationship to Other Techniques
Red Hat Analysis differs from Red Team Analysis in that it can be
done or organized by any analyst—or more often a team of analysts
—who needs to understand or forecast an adversary’s behavior and
who has, or can gain access to, the required cultural expertise. Red
Cell and Red Team Analysis are challenge techniques usually
conducted by a permanent organizational unit staffed by individuals
well qualified to think like or play the role of an adversary. The goal
of Red Hat Analysis is to exploit available resources to develop the
best possible analysis of an adversary’s or competitor’s behavior.
The goal of Red Cell or Red Team Analysis is usually to challenge
the conventional wisdom of established analysts or an opposing
team.
Origins of This Technique
Red Hat, Red Cell, and Red Team Analysis became popular during
the Cold War when “red” symbolized the Soviet Union, but they
continue to have broad applicability. This description of Red Hat
Analysis is a modified version of that in Randolph H. Pherson,
Handbook of Analytic Tools and Techniques, 5th ed. (Tysons, VA:
Pherson Associates, 2019).
8.2 CHALLENGE ANALYSIS TECHNIQUES
Challenge analysis encompasses a set of analytic techniques that
are also called contrarian analysis, alternative analysis, red teaming,
and competitive analysis. What all of these have in common is the
goal of challenging an established mental model or analytic
consensus to broaden the range of possible explanations or
estimates that should be seriously considered. That this same
activity has been called by so many different names suggests there
has been some conceptual diversity about how and why these
techniques are used and what they might help accomplish. All of
them apply some form of reframing to better understand past
patterns or foresee future events.

These techniques enable the analyst, and eventually the intelligence
or business client, to evaluate events from a different or contrary
perspective—in other words, with a different mental model. A
surprising event is not likely to be anticipated if it has not been
imagined, which requires examining the world from a different
perspective.9

Former Central Intelligence Agency director Michael Hayden makes
a compelling logical case for consistently challenging conventional
wisdom. He has stated that “our profession deals with subjects that
are inherently ambiguous, and often deliberately hidden. Even when
we’re at the top of our game, we can offer policymakers insight, we
can provide context, and we can give them a clearer picture of the
issue at hand, but we cannot claim certainty for our judgments.” The
director went on to suggest that getting it right seven times out of ten
might be a realistic expectation.10

Hayden’s estimate of seven times out of ten is supported by a quick
look at verbal expressions of probability used in intelligence reports.
“Probable” seems to be the most common verbal expression of the
likelihood of an assessment or estimate. Unfortunately, there is no
consensus within the Intelligence Community on what “probable” and
other verbal expressions of likelihood mean when they are converted
to numerical percentages. For discussion here, we accept Sherman
Kent’s definition of “probable” as meaning “75% plus or minus
12%.”11 This means that analytic judgments described as “probable”
are expected to be correct roughly 75 percent of the time—and,
therefore, incorrect or off target about 25 percent of the time.

Logically, one might then expect that one of every four judgments
that intelligence analysts describe as “probable” will turn out to be
wrong. This perspective broadens the scope of what challenge
analysis might accomplish. It should not be limited to questioning the
dominant view to be sure it’s right. Even if the challenge analysis
confirms the initial probability judgment, it should go further to seek a
better understanding of the other 25 percent. In what circumstances
might there be a different assessment or outcome, what would that
be, what would constitute evidence of events moving in that
alternative direction, how likely is it, and what would be the
consequences?

As we will discuss in the next section on conflict management, an
understanding of these probabilities should reduce the frequency of
unproductive conflict between opposing views. Analysts who
recognize a one-in-four chance of being wrong should at least be
open to consideration of alternative assessments or estimates to
account for the other 25 percent.

This chapter describes three categories of challenge analysis
techniques: self-critique, critique of others, and solicitation of critique
by others:

Self-critique. Three techniques that help analysts challenge
their own thinking are Classic Quadrant Crunching™,
Premortem Analysis, and Structured Self-Critique. These
techniques spur analysts to reframe and challenge their analysis
in multiple ways. They can counteract the pressures for
conformity or consensus that often suppress the expression of
dissenting opinions in an analytic team or group. We adapted
Premortem Analysis from the business world and applied it to
the analytic process more broadly.

Critique of others. Analysts can use What If? Analysis or High
Impact/Low Probability Analysis to tactfully question the
conventional wisdom by making the best case for an alternative
explanation or outcome.

Critique by others. The Delphi Method is a structured process
for eliciting usually anonymous opinions from a panel of outside
experts. The authors have decided to drop from this edition of
the book two other techniques for seeking critique by others—
Devil’s Advocacy and Red Team Analysis. They can be
counterproductive because they add an emotional component to
the analytic process that further ingrains mindsets.
8.2.1 Quadrant Crunching™
Quadrant Crunching™ is a systematic procedure for identifying all
the potentially feasible combinations among several sets of
variables. It combines the methodology of a Key Assumptions Check
(chapter 7) with Multiple Scenarios Generation (chapter 9).

There are two versions of the technique: Classic Quadrant
Crunching™ helps analysts avoid surprise, and Foresight Quadrant
Crunching™ is used to develop a comprehensive set of potential
alternative futures. Both techniques spur analysts to rethink an issue
from a broad range of perspectives and systematically question all
the assumptions that underlie their lead hypothesis.

Classic Quadrant Crunching™ helps analysts avoid surprise by
examining multiple possible combinations of selected key
variables. Pherson Associates, LLC, initially developed the
technique in 2006 to help counterterrorism analysts and
decision makers discover all the ways international terrorists or
domestic radical extremists might mount an attack.

Foresight Quadrant Crunching™ was developed in 2013 by
Globalytica, LLC. It adopts the same initial approach of
Reversing Assumptions as Classic Quadrant Crunching™. It
then applies Multiple Scenarios Generation to generate a wide
range of comprehensive and mutually exclusive future scenarios
or outcomes of any type—many of which analysts had not
previously contemplated.
When to Use It
Both techniques are useful for dealing with highly complex and
ambiguous situations for which little data is available and the
chances for surprise are great. Analysts need training and practice
before using either technique, and we highly recommend engaging
an experienced facilitator, especially when this technique is used for
the first time.

Analysts can use Classic Quadrant Crunching™ to identify and
systematically challenge assumptions, explore the implications of
contrary assumptions, and discover “unknown unknowns.” By
generating multiple possible alternative outcomes for any situation,
Classic Quadrant Crunching™ reduces the chance that events could
play out in a way that analysts have not previously imagined or
considered. Analysts, for example, would use Classic Quadrant
Crunching™ to identify the different ways terrorists might conduct an
attack on the homeland or how business competitors might react to a
new product launch.

Analysts who use Foresight Quadrant Crunching™ can be more
confident that they have considered a broad range of possible
situations that could develop and have spotted indicators that signal
a specific scenario is starting to unfold. For example, an analyst
could use Foresight Quadrant Crunching™ to generate multiple
scenarios of how the conflict in Syria or Venezuela might evolve over
the next five years and gain a better understanding of the interplay of
key drivers in that region.
Value Added
Both techniques reduce the potential for surprise by providing a
structured framework with which the analyst can generate an array
of alternative options or mini-stories. Classic Quadrant Crunching™
requires analysts to identify and systematically challenge all their key
assumptions about how a terrorist attack might be launched or how
any other specific situation might evolve. By critically examining each
assumption and how a contrary assumption might play out, analysts
can better assess their level of confidence in their predictions and
the strength of their lead hypothesis. Foresight Quadrant
Crunching™ belongs to the family of Foresight Techniques; it is most
effective when there is a strong consensus that only a single future
outcome is likely.

Both techniques provide a useful platform for developing indicator
lists and for generating collection requirements. They also help
decision makers focus on what actions need to be undertaken today
to best prepare for events that could transpire in the future. By
reviewing an extensive list of potential alternatives, decision makers
are in a better position to select those that deserve the most
attention. They can then take the necessary actions to avoid or
mitigate the impact of unwanted or bad alternatives and help foster
more desirable ones. The techniques also are helpful in sensitizing
decision makers to potential “wild cards” (High Impact/Low
Probability developments) or “nightmare scenarios,” both of which
could have significant policy or resource implications.
The Method

8.2.1.1 The Method: Classic Quadrant Crunching™

Classic Quadrant Crunching™ is sometimes described as a Key Assumptions Check on steroids. It is
most useful when there is a well-established lead hypothesis that can be articulated clearly. Classic
Quadrant Crunching™ calls on the analyst to break down the lead hypothesis into its component parts,
identifying the key assumptions that underlie the lead hypothesis, or the dimensions that focus on Who,
What, How, When, Where, and Why. After the key dimensions of the lead hypothesis are articulated, the
analyst generates two or four examples of contrary dimensions.

For example, two contrary dimensions for a single attack would be simultaneous attacks and cascading
attacks. The various contrary dimensions are then arrayed in sets of 2-×-2 matrices. If four dimensions
are identified for a topic, the technique would generate six different 2-×-2 combinations of these four
dimensions (AB, AC, AD, BC, BD, and CD). Each of these pairs would be presented as a 2-×-2 matrix
with four quadrants. Participants then generate different stories or alternatives for each quadrant in each
matrix. If analysts create two stories for each quadrant in each of these 2-×-2 matrices, there will be a
total of forty-eight different ways the situation could evolve. Similarly, if six drivers are identified, the
technique will generate as many as 120 different stories to consider (see Figure 8.2.1.1a).
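
For readers who want to verify the arithmetic, the short sketch below reproduces the counts cited in the paragraph above: pairing n contrary dimensions yields C(n, 2) different 2-×-2 matrices, each matrix has four quadrants, and two stories per quadrant gives C(n, 2) × 4 × 2 stories. The single-letter dimension labels are placeholders, not part of the technique.

```python
# Story arithmetic for Classic Quadrant Crunching(TM): C(n, 2) matrices x 4 quadrants
# x stories per quadrant. Dimension labels are placeholders.

from itertools import combinations

def count_stories(dimensions, stories_per_quadrant=2):
    matrices = list(combinations(dimensions, 2))   # e.g., AB, AC, AD, BC, BD, CD
    return matrices, len(matrices) * 4 * stories_per_quadrant

for labels in ("ABCD", "ABCDEF"):
    matrices, total = count_stories(list(labels))
    print(f"{len(labels)} dimensions -> {len(matrices)} matrices, {total} candidate stories")

# Prints: 4 dimensions -> 6 matrices, 48 candidate stories
#         6 dimensions -> 15 matrices, 120 candidate stories
```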

The best way to have a good idea is to have a lot of ideas.
—Linus Pauling, American chemist

After a rich array of potential alternatives is generated, the analyst’s task is to identify which of the
various alternative stories are the most deserving of attention. The last step in the process is to develop
lists of indicators for each story and track them to determine which story is beginning to emerge.

The question, “How might terrorists attack a nation’s water system?” is useful for illustrating the Classic
Quadrant Crunching™ technique. State the conventional wisdom for the most likely way terrorists might
launch such an attack. For example, “Al-Qaeda or its affiliates will contaminate the water supply for a
large metropolitan area, causing mass casualties.”

Break down this statement into its component parts or key assumptions. For example, the
statement rests on five key assumptions: (1) a single attack, (2) involving the contamination of drinking
water, (3) conducted by an outside attacker, (4) against a major metropolitan area, (5) causing large
numbers of casualties.

Posit a contrary assumption for each key assumption. For example, what if there are multiple
attacks instead of a single attack?

Figure 8.2.1.1A Classic Quadrant Crunching™: Creating a Set of Stories


Source: Pherson Associates, LLC, 2019.

Identify two or four dimensions of that contrary assumption. For example, what are different ways a
terrorist group could launch a multiple attack? Two possibilities would be simultaneous attacks (as
in the September 2001 attacks on the World Trade Center and the Pentagon or the London
bombings in 2005) or cascading attacks (as in the sniper killings in the Washington, D.C., area in
October 2002).

Figure 8.2.1.1B Terrorist Attacks on Water Systems: Reversing Assumptions


Source: Pherson Associates, LLC, 2019.


Figure 8.2.1.1C Terrorist Attacks on Water Systems: Sample Matrices


Source: Pherson Associates, LLC, 2019.

Repeat this process for each of the key assumptions. Develop two or four contrary dimensions for
each contrary assumption. (See Figure 8.2.1.1b.)

Array pairs of contrary dimensions into sets of 2-×-2 matrices. In this case, ten different 2-×-2
matrices are the result. Two of the ten matrices are shown in Figure 8.2.1.1c.

For each cell in each matrix, generate one to three examples of how terrorists might launch an
attack. In some cases, such an attack might already have been imagined. In other quadrants, there
may be no credible attack concept. But several of the quadrants will usually stretch the analysts’
thinking, pushing them to consider the dynamic in new and different ways.

Figure 8.2.1.1D Selecting Attack Plans


Source: Pherson Associates, LLC, 2019.

Review all the attack plans generated; using a preestablished set of criteria, select those most
deserving of attention (a simple scoring sketch follows these steps). In this example, possible criteria
might be plans that are most likely to do the following:

Cause the most damage; have the most impact.

Be the hardest to detect or prevent.

Pose the greatest challenge for consequence management.

This process is illustrated in Figure 8.2.1.1d. In this case, three attack plans were deemed the most
likely. Attack plan 1 became Story A, attack plans 4 and 7 were combined to form Story B, and
attack plan 16 became Story C. It may also be desirable to select one or two additional attack plans
that might be described as “wild cards” or “nightmare scenarios.” These are attack plans that have a
low probability of being tried but are worthy of attention because their impact would be substantial if
they did occur. The figure shows attack plan 11 as a “nightmare scenario.”

Consider what decision makers might do to prevent bad stories from happening, mitigate their
impact, and deal with their consequences.

Generate a list of key indicators to help assess which, if any, of these attack plans is beginning to
emerge.
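
One simple way to implement the selection step referenced above is a weighted scoring matrix. The sketch below is only an illustration: the book does not prescribe any particular scoring scheme, and the plan scores and criterion weights shown here are invented. The attack-plan numbers echo the example in Figure 8.2.1.1d.

```python
# Hedged sketch of the selection step: score each candidate attack plan (1 = low,
# 5 = high) against the example criteria and rank by weighted total. Likelihood is
# tracked separately so that low-probability, high-impact plans can still be flagged
# as "nightmare scenarios." All numbers are invented for illustration.

CRITERIA = ["damage/impact", "hard to detect or prevent", "consequence-management challenge"]
WEIGHTS  = [0.4, 0.3, 0.3]   # assumed weights, not prescribed by the technique

plans = {
    "Attack plan 1":  [5, 4, 4],
    "Attack plan 4":  [4, 4, 3],
    "Attack plan 7":  [4, 3, 4],
    "Attack plan 11": [5, 5, 5],   # low likelihood but severe -- a candidate "nightmare scenario"
    "Attack plan 16": [3, 5, 3],
}

def weighted_score(scores):
    return sum(w * s for w, s in zip(WEIGHTS, scores))

for name, scores in sorted(plans.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{weighted_score(scores):.2f}  {name}")
```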

8.2.1.2 The Method: Foresight Quadrant Crunching™


Foresight Quadrant Crunching™ adopts much the same method as Classic Quadrant Crunching™, with
two major differences. In the first step, state the scenario that most analysts believe has the greatest
probability of emerging. When later developing the list of alternative dimensions, include the dimensions
contained in the lead scenario. By including the lead scenario, the final set of alternative scenarios or
futures should be comprehensive and mutually exclusive. The specific steps for Foresight Quadrant
Crunching™ are the following:

State what most analysts believe is the most likely future scenario.

Break down this statement into its component parts or key assumptions.

Posit a contrary assumption for each key assumption.

Identify one or three contrary dimensions of that contrary assumption.

Repeat this process for each of the contrary assumptions—a process like that shown in Figure
8.2.1.1b.

Add the key assumption to the list of contrary dimensions, creating either one or two pairs.

Repeat this process for each row, creating one or two pairs, including a key assumption and one or
three contrary dimensions.

Array these pairs into sets of 2-×-2 matrices, a process shown in Figure 8.2.1.1c.

For each cell in each matrix, generate one to three credible scenarios. In some cases, such a
scenario may already have been imagined. In other quadrants, there may be no scenario that
makes sense. But several of the quadrants will usually stretch the analysts’ thinking, often
generating counterintuitive scenarios.

Review all the scenarios generated—a process outlined in Figure 8.2.1.1d; using a preestablished
set of criteria, select those scenarios most deserving of attention. The difference is that with Classic
Quadrant Crunching™, analysts are seeking to develop a set of credible alternative attack plans to
avoid surprise. In Foresight Quadrant Crunching™, analysts are engaging in a new version of
Multiple Scenarios Generation analysis.
Relationship to Other Techniques
Both Quadrant Crunching™ techniques are specific applications of a
generic method called Morphological Analysis (described in chapter
9). They draw on the results of the Key Assumptions Check and can
contribute to Multiple Scenarios Generation. They are also useful in
identifying indicators.
Origins of This Technique
Classic Quadrant Crunching™ was developed by Randolph Pherson
and Alan Schwartz to meet a specific analytic need in the
counterterrorism arena. It was first published in Randolph H.
Pherson, Handbook of Analytic Tools and Techniques, 4th ed.
(Reston, VA: Pherson Associates, LLC, 2008). Foresight Quadrant
Crunching™ was developed by Globalytica, LLC, in 2013 as a new
method for conducting Foresight analysis.
8.2.2 Premortem Analysis
Premortem Analysis is conducted prior to finalizing an analysis or a
decision to assess how a key analytic judgment, decision, or plan of
action could go spectacularly wrong. The goal is to reduce the risk of
surprise and the subsequent need for a postmortem investigation of
what went wrong. It is an easy-to-use technique that enables a group
of analysts who have been working together on any type of future-
oriented analysis or project to challenge effectively the accuracy of
their own conclusions. It is a specific application of the reframing
method, in which restating the question, task, or problem from a
different perspective enables one to see the situation differently and
come up with different ideas.
When to Use It
Premortem Analysis should be used by analysts who can devote a
few hours to challenging their own analytic conclusions about the
future to see where they might be wrong. It is much easier to
influence people’s decisions before they make up their mind than
afterward when they have a personal investment in that decision. For
this reason, analysts should use Premortem Analysis and its
companion technique, Structured Self-Critique, just before finalizing
their key analytic judgments.

A single analyst may use the two techniques, but, like all Structured
Analytic Techniques, they are most effective when used by a small
group. For a team assessment, the process should be initiated as soon as the group starts to coalesce around a common position.

The concept of a premortem as an analytic aid was first used in the context of decision analysis by Gary Klein in his 1998 book, Sources
of Power: How People Make Decisions. He reported using it in
training programs to show decision makers that they typically are
overconfident that their decisions and plans will work. After the
trainees formulated a plan of action, they were asked to imagine that
it is several months or years in the future, and their plan has been
implemented but has failed. They were then asked to describe how it
might have failed, despite their original confidence in the plan. The
trainees could easily come up with multiple explanations for the
failure, but none of these reasons were articulated when the plan
was first proposed and developed.

This assignment provided the trainees with evidence of their overconfidence, and it demonstrated that the premortem strategy
can be used to expand the number of interpretations and
explanations that decision makers consider. Klein explains: “We
devised an exercise to take them out of the perspective of defending
their plan and shielding themselves from flaws. We tried to give them
a perspective where they would be actively searching for flaws in
their own plan.”12 Klein reported his trainees showed a “much higher
level of candor” when evaluating their own plans after exposure to
the premortem exercise, as compared with other, more passive
attempts at getting them to critique their initial drafts.13
Value Added
The primary goal of Premortem Analysis is to reduce the risk of
surprise and the subsequent need for a postmortem investigation.
The technique—and its companion, Structured Self-Critique—helps
analysts identify potential causes of error that they had overlooked.
Two creative processes are at work here:

The questions are reframed. This exercise typically elicits responses that are different from the original ones. Asking
questions about the same topic, but from a different perspective,
opens new pathways in the brain.

The method legitimizes dissent. For various reasons, individual egos and group dynamics can suppress dissenting
opinions, leading to premature consensus. In Premortem
Analysis, all the participants make a positive contribution to the
group goal by identifying weaknesses in the previous analysis.

An important cause of poor group decisions is the desire for consensus. This desire can lead to Groupthink or agreement with
majority views regardless of whether participants perceive them as
right or wrong. Attempts to improve group creativity and decision
making often focus on ensuring that the group considers a wide
range of information and opinions.14

Group members tend to go along with the group leader, with the first
group member to stake out a position, or with an emerging majority
viewpoint for many reasons. Most benign is the common rule of
thumb that when we have no firm opinion, we take our cues from the
opinions of others. We follow others because we believe (often
rightly) that they know what they are doing. Analysts may also be
concerned that others will critically evaluate their views, or that
dissent will come across as disloyalty or as an obstacle to progress
that will just prolong the meeting.

In a candid newspaper column written long before he became CIA director, Leon Panetta wrote that “an unofficial rule in the
bureaucracy says that to ‘get along, go along.’ In other words, even
when it is obvious that mistakes are being made, there is a hesitancy
to report the failings for fear of retribution or embarrassment. That is
true at every level, including advisers to the president. The result is a
‘don’t make waves’ mentality that . . . is just another fact of life you
tolerate in big organizations.”15

It is not bigotry to be certain we are right; but it is bigotry to be unable to imagine how we might possibly have gone wrong.
—G. K. Chesterton, English writer

A major benefit of Premortem Analysis is that it legitimizes dissent. The technique empowers team members who may have held back reservations or doubts because they lacked confidence to participate in a way that is now consistent with perceived group goals. If this change in perspective is handled well, each team member will know that they add value to the exercise by being critical of the previous judgment, not by supporting it. By employing both companion techniques, analysts can explore all the ways the analysis could turn out to be wrong: one is a totally unbounded approach and the other a highly structured mechanism. The first technique is a right-brained process called Premortem Analysis; the second is a left-brained technique, the Structured Self-Critique, which is discussed later in this chapter.
The Method
The best time to conduct a Premortem Analysis is shortly after a
group has reached a conclusion on an action plan but before any
serious drafting of the report. If the group members are not already
familiar with the Premortem Analysis technique, the group leader,
another group member, or a facilitator steps up and makes a
statement along the lines of the following: “Okay, we now think we
know the right answer, but we need to double-check this. To free up
our minds to consider other possibilities, let’s imagine that we have
made this judgment, our report has gone forward and been
accepted, and now, x months or x years later, we learn that our
analysis was wrong. Things have turned out much differently than
we expected. Now, working from that perspective in the future, let’s
put our imaginations to work and brainstorm what could have
possibly happened to cause our analysis to be spectacularly wrong.”

Ideally, the actual brainstorming session should be a separate meeting to give the participants time to think about what might have
happened to cause the analytic judgment to be wrong. They should
bring to the meeting a list of what might have gone differently than
expected. To set the tone for the brainstorming session, analysts
should be advised not to focus only on the hypotheses, assumptions,
and key evidence already discussed during their group meetings.
Rather, they should be encouraged to look at the situation from the
perspective of their own life experiences. They should think about
how fast the world is changing, how many of their organization’s
programs are unsuccessful or have unintended consequences, or
how difficult it is to see things from the perspective of a foreign
culture or a competitor. This type of thinking may bring a different
part of analysts’ brains into play as they are mulling over what could
have gone wrong with their analysis. Outside-In Thinking can also be
helpful for this purpose.
In the Premortem Analysis meeting, the group leader or a facilitator
writes the ideas presented on a whiteboard or flip chart. To ensure
that no single person dominates the presentation of ideas, the
Nominal Group Technique version of brainstorming is a good option.
With that technique, the facilitator goes around the room in round-
robin fashion, taking one idea from each participant until all have
presented every idea on their lists (see chapter 6). After all ideas are
posted on the board and made visible to all, the group discusses
what it has learned from this exercise, and what action, if any, the
group should take. This generation and initial discussion of ideas can
often occur in a single two-hour meeting, which is a small investment
of time to undertake a systematic challenge to the group’s thinking.

One expected result is an increased appreciation of the uncertainties inherent in any assessment of the future. Another outcome might be
identification of indicators that, if observed, would provide early
warning that events are not proceeding as expected. Such findings
may lead to modification of the existing analytic framework.

If the Premortem Analysis leads the group to reconsider and revise its analytic judgment, the questions shown in Figure 8.2.2 are a good
starting point. For a more thorough set of self-critique questions, see
the discussion of Structured Self-Critique, which involves changing
one’s role from advocate to critic of one’s previous analysis.

Premortem Analysis may identify problems, conditions, or alternatives that require rethinking the group’s original position. In such a case, Premortem Analysis has done its job by alerting the group that it has a problem, but it does not necessarily tell the group exactly what the problem is or how to fix it. That is beyond the scope of the technique: it does not systematically assess the likelihood of these things happening, evaluate multiple sources of analytic error, or make a comprehensive assessment of alternative courses of action. These tasks are better accomplished with the Structured Self-Critique.
Relationship to Other Techniques
If the Premortem Analysis identifies a significant problem, the natural
follow-up technique for addressing this problem is Structured Self-
Critique, described in the next section.
Origins of This Technique
Gary Klein originally developed the premortem concept to train managers to recognize their habitual
overconfidence in the success of their plans and decisions. The authors adapted the technique and
redefined it as an intelligence analysis technique called Premortem Analysis. For original references on
this subject, see Gary Klein, Sources of Power: How People Make Decisions (Cambridge, MA: MIT
Press, 1998); Klein, Intuition at Work: Why Developing Your Gut Instinct Will Make You Better at What
You Do (New York: Doubleday, 2002); and Klein, “Performing a Project PreMortem,” Harvard Business
Review (September 2007). An interactive group activity can be planned at
https://www.atlassian.com/team-playbook/plays/pre-mortem.


Figure 8.2.2 Premortem Analysis: Some Initial Questions


Source: Pherson Associates, LLC, 2019.
8.2.3 Structured Self-Critique
Structured Self-Critique is a systematic procedure that a small team
or group can use to identify weaknesses in its own analysis. All team
or group members don a hypothetical black hat and become critics
rather than supporters of their own analysis. From this opposite
perspective, they respond to a list of questions about sources of
uncertainty, the analytic processes used, critical assumptions,
diagnosticity of evidence, anomalous evidence, information gaps,
changes in the broad environment in which events are happening,
alternative decision models, availability of cultural expertise, and
indicators of possible deception. As it reviews responses to these
questions, the team reassesses its overall confidence in its own
judgment.

Begin challenging your own assumptions. Your assumptions are your windows on the world. Scrub them off every once in
a while, or the light won’t come in.
—Alan Alda, American actor
When to Use It
You can use Structured Self-Critique productively to look for
weaknesses in any analytic explanation of events or estimate of the
future. We specifically recommend using it in the following ways:

As the next step if the Premortem Analysis raises unresolved questions about any estimated future outcome or event.

As a double-check prior to the publication of any major product such as a National Intelligence Estimate or a corporate strategic plan.

As one approach to resolving conflicting opinions (as discussed in the next section on Adversarial Collaboration).

The amount of time required to work through the Structured Self-Critique will vary greatly depending upon how carefully the previous
analysis was done. The questions listed in the method later in this
section are just a prescription for careful analysis. To the extent that
analysts have already explored these same questions during the
initial analysis, the time required for the Structured Self-Critique is
reduced. If these questions are asked for the first time, the process
will take longer. As analysts gain experience with Structured Self-
Critique, they may have less need for certain parts of it, as they will
have internalized the method and used its questions during the initial
analysis (as they should have).
Value Added
When people are asked questions about the same topic but from a
different perspective, they often give different answers than the ones
they gave before. For example, if someone asks a team member if
he or she supports the team’s conclusions, the answer will usually
be “yes.” However, if all team members are asked to look for
weaknesses in the team’s argument, that member may give a
different response.

This change in the frame of reference is intended to change the group dynamics. The critical perspective should always generate
more critical ideas. Team members who previously may have
suppressed questions or doubts because they lacked confidence or
wanted to be good team players are now able to express those
divergent thoughts. If the change in perspective is handled well, all
team members will know that they win points with their colleagues
for being critical of the previous judgment, not for supporting it.
The Method
Start by reemphasizing that all analysts in the group are now
wearing a black hat. They have become critics, not advocates, and
their job is to find weaknesses in the previous analysis, not support
the previous analysis. The group then works its way through the
following topics or questions:

Sources of uncertainty. Identify the sources and types of uncertainty to set reasonable expectations for what the team
might expect to achieve. Should one expect to find (1) a single
correct or most likely answer, (2) a most likely answer together
with one or more alternatives that must also be considered, or
(3) many possible explanations or scenarios for future
development? To judge the uncertainty, answer these questions:

Is the question being analyzed a puzzle or a mystery? Puzzles have answers, and correct answers are attainable
if enough pieces of the puzzle surface. A mystery has no
single definitive answer; it depends upon the future
interaction of many factors, some known and others
unknown. Analysts can frame the boundaries of a mystery
only “by identifying the critical factors and making an
intuitive judgment about how they have interacted in the
past and might interact in the future.”16

How does the team rate the quality and timeliness of its
evidence?

Is there a greater-than-usual number of assumptions because of insufficient evidence or the complexity of the situation?
Is the team dealing with a relatively stable situation or with
a situation that is undergoing, or potentially about to
undergo, significant change?

Analytic process. Review whether the team did the following in the initial analysis: Did it identify alternative hypotheses and seek out information on these hypotheses? Did it identify key assumptions? Did it seek a broad range of diverse opinions by including analysts from other offices and agencies, academia, or the private sector in the deliberations? If the team did not take these steps, the odds of a faulty or incomplete analysis are higher. Either consider doing some of these things now or lower the team’s level of confidence in its judgment.

Critical assumptions. Presuming that the team has already identified key assumptions, the next step is to identify the one or
two assumptions that would have the greatest impact on the
analytic judgment if they turned out to be wrong. In other words,
if the assumption is wrong, the judgment will be wrong. How
recent and well documented is the evidence that supports each
such assumption? Brainstorm circumstances that could cause
each of these assumptions to be wrong and assess the impact
on the team’s analytic judgment if the assumption is wrong.
Would the reversal of any of these assumptions support any
alternative hypothesis? If the team has not previously identified
key assumptions, it should do a Key Assumptions Check.

Diagnostic evidence. Identify alternative hypotheses and the most diagnostic items of evidence that enable the team to reject
alternative hypotheses. For each item, brainstorm reasonable
alternative interpretations of this evidence that could make it
consistent with an alternative hypothesis. See Diagnostic
Reasoning in chapter 7.

Information gaps. Are there gaps in the available information, or is some of the information so dated that it may no longer be
valid? Is the absence of information readily explainable? How
should absence of information affect the team’s confidence in its
conclusions?

Missing evidence. Is there evidence that one would expect to see in the regular flow of intelligence or open-source reporting if
the analytic judgment is correct, but is not there?

Anomalous evidence. Is there any anomalous item of evidence that would have been important if it had been believed or if it
could have been related to the issue of concern, but was
rejected because it was not deemed important at the time or its
significance was not known? If so, try to imagine how this item
might be a key clue to an emerging alternative hypothesis.

Changes in the broad environment. Driven by technology and globalization, the world seems to be experiencing social,
technical, economic, environmental, and political changes at a
faster rate than ever before in history. Might any of these
changes play a role in what is happening or will happen? More
broadly, what key forces, factors, or events could occur
independently of the issue under study that could have a
significant impact on whether the analysis proves to be right or
wrong?

Alternative decision models. If the analysis deals with decision making by a foreign government or nongovernmental
organization (NGO), was the group’s judgment about foreign
behavior based on a rational actor assumption? If so, consider
the potential applicability of other decision models, specifically
that the action was or will be the result of bargaining between
political or bureaucratic forces, the result of standard
organizational processes, or the whim of an authoritarian
leader.17 If information for a more thorough analysis is lacking,
consider the implications of that for confidence in the team’s
judgment.

Cultural expertise. If the topic being analyzed involves a foreign or otherwise unfamiliar culture or subculture, does the
team have or has it obtained cultural expertise on thought
processes in that culture?18

Deception. Does another country, NGO, or commercial competitor about which the team is making judgments have a
motive, opportunity, or means for engaging in deception to
influence U.S. policy or to change your organization’s behavior?
Does this country, NGO, or competitor have a history of
engaging in denial, deception, or influence operations?

After responding to these questions, the analysts take off their black
hats and reconsider the appropriate level of confidence in the team’s
previous judgment. Should the initial judgment be reaffirmed or
modified?
Potential Pitfalls
The success of this technique depends in large measure on the
team members’ willingness and ability to make the transition from
supporters to critics of their own ideas. Some individuals lack the
intellectual flexibility to do this well. It must be clear to all members
that they are no longer performing the same function as before. They
should view their task as an opportunity to critique an analytic
position taken by some other group (themselves, but with a different
hat on).

To emphasize the different role analysts are playing, Structured Self-Critique meetings should be scheduled exclusively for this purpose.
The meetings should be led by a different person from the usual
leader, and, preferably, held at a different location. It will be helpful if
an experienced facilitator is available to lead the meeting(s). This
formal reframing of the analysts’ role from advocate to critic is an
important part of helping analysts see an issue from a different
perspective.
Relationship to Other Techniques
Structured Self-Critique was developed in large part as an alternative
to Devil’s Advocacy and Team A/Team B Analysis. The techniques
share the same objective, but Structured Self-Critique engages all
the members of the group in a team effort to find flaws in the
analysis as opposed to asking one person or group to criticize
another. When someone is designated to play the role of Devil’s
Advocate, that member will take one of the team’s critical judgments
or assumptions, reverse it, and then argue from that perspective
against the team’s conclusions. We believe it is more effective for the
entire team to don the hypothetical black hat and play the role of
critic. When only one team member—or a competing team—dons
the black hat and tries to persuade the authors of the analysis that
they are wrong, the authors almost always become defensive and
resist the need to make changes. Sometimes, Devil’s Advocates will
find themselves acting out a role that they do not actually agree with,
but their actions will still stir frictions within the group.
Origins of This Technique
Richards J. Heuer Jr. and Randolph Pherson developed Structured
Self-Critique. A simpler version of this technique appears in
Randolph H. Pherson, “Premortem Analysis and Structured Self-
Critique,” in Handbook of Analytic Tools and Techniques, 5th ed.
(Tysons, VA: Pherson Associates, LLC, 2019).
8.2.4 What If? Analysis
What If? Analysis posits that an event has occurred with the potential
for a major positive or negative impact and then, with the benefit of
“hindsight,” explains how this event could have come about and what
the consequences might be.
When to Use It
This technique should be in every analyst’s toolkit. It is an important
technique for alerting decision makers to an event that could
happen, even if it may seem unlikely at the present time. What If?
Analysis serves a function like Foresight analysis—it creates an
awareness that prepares the mind to recognize early signs of a
significant change, and it may enable the decision maker to plan for
that contingency. It is most appropriate when any of the following
conditions are present:

A mental model is well ingrained within the analytic or the client community that a certain event will not happen.

The issue is highly contentious, either within the analytic community or among decision makers, and no one is focusing on what actions need to be considered to deal with or prevent an untoward event.

Analysts perceive a need for others to focus on the possibility this event could happen and to consider the consequences if it does occur.

When analysts are too cautious in estimative judgments on threats, they brook blame for failure to warn. When too
aggressive in issuing warnings, they brook criticism for
“crying wolf.”
—Jack Davis, “Improving CIA Analytic Performance: Strategic
Warning,” Sherman Kent School for Intelligence Analysis (September
2002)
What If? Analysis is a logical follow-up after any Key Assumptions
Check that identifies an assumption critical to an important estimate
but about which there is some doubt. In that case, the What If?
Analysis would imagine that the opposite of this assumption is true.
Analysis would then focus on ways this outcome could occur and
what the consequences would be.
Value Added
Shifting the focus from asking whether an event will occur to
imagining that it has occurred and then explaining how it might have
happened opens the mind to think in different ways. What If?
Analysis shifts the discussion from “How likely is it?” to these
questions:

How could it possibly come about?

Could it come about in more than one way?

What would be the impact?

Has the possibility of the event happening increased?

The technique also gives decision makers

a better sense of what they might be able to do today to prevent an untoward development from occurring or leverage an opportunity to advance their interests and

a list of specific indicators to monitor and determine if a development may soon occur.

What If? Analysis is a useful tool for exploring unanticipated or unlikely scenarios that are within the realm of possibility and that
would have significant consequences should they come to pass.
Figure 8.2.4 is an example of this. It posits a dramatic development
—the emergence of India as a new international hub for finance—
and then explores how this scenario could occur. In this example, the
technique spurs the analyst to challenge traditional analysis and
rethink the underlying dynamics of the situation.
The Method

A What If? Analysis can be done by an individual or as a team project. The time required is about
the same as that for drafting a short paper. It usually helps to initiate the process with a
brainstorming session. Additional brainstorming sessions can be interposed at various stages of the
process.

Figure 8.2.4 What If? Scenario: India Makes Surprising Gains from the Global
Financial Crisis
Source: This example was developed by Ray Converse and Elizabeth Manak, Pherson Associates, LLC.

Begin by assuming that what could happen has occurred. Often it is best to pose the issue in the
following way: “The New York Times reported yesterday that . . .” Be precise in defining both the
event and its impact. Sometimes it is useful to posit the new contingency as the outcome of a
specific triggering event, such as a natural disaster, an economic crisis, a major political
miscalculation, or an unexpected new opportunity that vividly reveals a key analytic assumption is
no longer valid.

Develop at least one chain of argumentation—based on both evidence and logic—to explain how this outcome could have come about. In developing the scenario or scenarios, focus on what must occur at each stage of the process. Work backwards from the event to the present day. This is called “backwards thinking.” Try to envision more than one scenario or chain of argument (a simple sketch of recording such a backwards chain appears after these steps).

Generate and validate a list of indicators or “observables” for each scenario that would help analysts detect whether events are starting to play out in a way envisioned by that scenario.

Identify which scenarios deserve the most attention by taking into consideration the difficulty of implementation and the potential significance of the impact.

Assess the level of damage or disruption that would result from a negative scenario and estimate
how difficult it would be to overcome or mitigate the damage incurred.

For new opportunities, assess how well developments could turn out and what can be done to
ensure that such a positive scenario might occur.

Monitor the indicators on a periodic basis.

Report periodically on whether any of the proposed scenarios are beginning to emerge and why.
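As a purely illustrative aid, here is a minimal Python sketch of how a backwards chain of argumentation and its indicators might be recorded for a posited event such as the India scenario in Figure 8.2.4. The stages and indicators below are hypothetical placeholders, not content drawn from the figure.

from dataclasses import dataclass

@dataclass
class Stage:
    description: str   # what must occur at this stage of the process
    indicators: list   # observables that would show the stage is unfolding

# Posited event (stated as if it has already happened), with the chain of
# argumentation listed backwards from the event toward the present day.
posited_event = "India emerges as a new international hub for finance"
backwards_chain = [
    Stage("Major global banks relocate regional headquarters to Mumbai",
          ["announcements of headquarters moves", "surge in expatriate work visas"]),
    Stage("India liberalizes capital markets and foreign-ownership rules",
          ["draft legislation introduced", "regulatory approvals accelerate"]),
    Stage("Investors lose confidence in established financial centers",
          ["sustained capital outflows", "credit-rating downgrades"]),
]

# Reading the chain in reverse order gives the present-to-future narrative
# an analyst would test against incoming reporting.
for step_number, stage in enumerate(reversed(backwards_chain), start=1):
    print(f"Step {step_number}: {stage.description}")
    for indicator in stage.indicators:
        print(f"   indicator: {indicator}")
print(f"Posited outcome: {posited_event}")

Each stage's indicators feed directly into the validation and monitoring steps above.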
Relationship to Other Techniques
What If? Analysis is sometimes confused with the High Impact/Low
Probability Analysis technique, as each considers low-probability
events. However, only What If? Analysis uses the reframing
technique of positing that a future event has happened and then
works backwards in time to imagine how it could have happened.
High Impact/Low Probability Analysis requires new or anomalous
information as a trigger and then projects forward to what might
occur and the consequences if it does.
Origins of This Technique
Analysts and practitioners have applied the term “What If? Analysis”
to a variety of techniques for a long time. The version described here
is based on Randolph H. Pherson, “What If? Analysis,” in Handbook
of Analytic Tools and Techniques, 5th ed. (Tysons, VA: Pherson
Associates, LLC, 2019) and training materials from the Department
of Homeland Security, Office of Intelligence and Analysis.

A Cautionary Note

Scenarios developed using both the What If? Analysis and the High Impact/Low Probability Analysis techniques can
often contain highly sensitive data requiring a very limited
distribution of the final product. Examples are the following:
How might a terrorist group launch a debilitating attack on a
vital segment of the U.S. infrastructure? How could a coup be
launched successfully against a friendly government? What
could be done to undermine or disrupt global financial
networks? Obviously, if an analyst identifies a major
vulnerability that could be exploited by an adversary, extreme care must be taken to prevent that detailed description from falling into
the hands of the adversary. An additional concern is that the
more “brilliant” and provocative the scenario, the more likely it
will attract attention, be shared with others, and possibly leak
and be read by an adversary or competitor.
8.2.5 High Impact/Low Probability Analysis
High Impact/Low Probability Analysis provides decision makers with
early warning that a seemingly unlikely event with major policy and
resource repercussions might occur.
When to Use It
Analysts should use High Impact/Low Probability Analysis when they
want to alert decision makers to the possibility that a seemingly long-
shot development with a major policy or resource impact may be
more likely than previously anticipated. Events that would have
merited such treatment before they occurred include the devastation
caused by Hurricane Katrina to New Orleans in August 2005 or
Hurricane Maria, which struck Puerto Rico in September 2017—two
of the costliest natural disasters in the history of the United States. In
addition, the world would have benefited greatly if financial and
political analysts respectively had used structured techniques to
anticipate the global economic crisis in 2008 and the rapid rise of the
Islamic State (ISIS). A variation of this technique, High
Impact/Uncertain Probability Analysis, might be used to address the
potential impact of an outbreak of H5N1 (avian influenza) or applied
to a terrorist attack when intent is well established but there are
multiple variations on how it might occur.

A High Impact/Low Probability study most often is initiated when some new and often fragmentary information suggests that an
unanticipated event might be more likely to occur than thought
previously. For example, analysts should pass decision makers a tip-
off warning of a major information warfare attack or a serious
terrorist attack on a national holiday even though solid evidence is
lacking. The technique is also helpful in sensitizing analysts and
decision makers to the possible effects of low-probability events and in stimulating them to think early on about measures that could avoid
the danger or exploit the opportunity.

A thoughtful senior policy official has opined that most potentially devastating threats to U.S. interests start out being
evaluated as unlikely. The key to effective intelligence-policy
relations in strategic warning is for analysts to help policy
officials in determining which seemingly unlikely threats are
worthy of serious consideration.
—Jack Davis, “Improving CIA Analytic Performance: Strategic
Warning,” Sherman Kent School for Intelligence Analysis, September
2002
Value Added
The High Impact/Low Probability Analysis format allows analysts to
explore the consequences of an event—particularly one not deemed
likely by conventional wisdom—without having to challenge the
mainline judgment or to argue with others about the likelihood of an
event. In other words, this technique provides a tactful way of
communicating a viewpoint that some recipients might prefer not to
hear.

The analytic focus is not on whether something will happen but on taking it as a given that an event, which would have a major and
unanticipated impact, could happen. The objective is to explore
whether an increasingly credible case can be made for an unlikely
event occurring that could pose a major danger—or offer great
opportunities. The more nuanced and concrete the analyst’s
depiction of the plausible paths to danger, the easier it is for a
decision maker to develop a package of policies to protect or
advance the vital interests of his or her country or business.

High Impact/Low Probability Analysis helps protect analysts against some of the most common cognitive biases and misapplied
heuristics, including assuming that others would act in the same way
we would in similar circumstances (Mirror Imaging), accepting a
given value of something unknown as a proper starting point for
generating an assessment (Anchoring Effect), and ignoring conflicts
within a group due to a desire for consensus (Groupthink). Use of
the technique also helps counter the impact of several intuitive traps,
including not addressing the impact of the absence of information on
analytic conclusions (Ignoring the Absence of Information), failing to
factor something into the analysis because the analyst lacks an
appropriate category or “bin” for that item of information (Lacking
Sufficient Bins), and focusing on a narrow range of alternatives
representing marginal, not radical, change (Expecting Marginal
Change).
The Method
An effective High Impact/Low Probability Analysis involves these
steps:

Clearly describe the unlikely event.

Define the high-impact outcome precisely if this event occurs. Consider both the actual event and the secondary effects of the event.

Identify recent information or reporting that suggests the possibility of the unlikely event occurring may be increasing.

Postulate additional triggers that would propel events in this unlikely direction or factors that would greatly accelerate timetables, such as a botched government response, the rise of an energetic political challenger, a major terrorist attack, or a surprise electoral outcome.

Develop one or more plausible pathways to explain how this seemingly unlikely event could unfold. Focus on the specifics of what must happen at each stage of the process for the train of events to play out.

Generate and validate a list of indicators to help analysts and decision makers anticipate whether the scenarios are beginning to unfold.

Identify factors that would deflect a bad outcome or encourage a positive outcome.

Periodically review the indicators and report on whether the proposed scenarios may be emerging and why. Be alert to events so unlikely that they did not merit serious attention but are beginning to emerge.

The last step in the process is extremely important. Periodic reviews of indicators provide analysts with useful signals that alert them to
the possibility that prevailing mental models are no longer correct
and an event previously considered unlikely now merits careful
attention.
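The periodic-review step lends itself to a simple tracking aid. Below is a minimal, hypothetical Python sketch of an indicator checklist for a low-probability, high-impact scenario (loosely themed on the Arctic example discussed later in this section). The indicator names and the alert threshold are illustrative assumptions, not part of the technique as published.

from datetime import date

# Hypothetical warning indicators; all start as "not yet observed."
indicators = {
    "new military patrols in disputed waters": False,
    "sharp increase in hostile rhetoric": False,
    "unscheduled naval exercises announced": False,
    "commercial shipping rerouted": False,
}

ALERT_THRESHOLD = 2  # observed indicators needed before flagging a change

def record_observation(name, observed=True):
    """Mark an indicator as observed (or not) during the current review."""
    if name not in indicators:
        raise KeyError(f"Unknown indicator: {name}")
    indicators[name] = observed

def periodic_review(as_of):
    """Summarize the current indicator picture for decision makers."""
    observed = [name for name, seen in indicators.items() if seen]
    if len(observed) >= ALERT_THRESHOLD:
        status = "scenario may be emerging; report to decision makers"
    else:
        status = "no significant change"
    return f"{as_of.isoformat()}: {len(observed)}/{len(indicators)} indicators observed; {status}"

record_observation("sharp increase in hostile rhetoric")
record_observation("unscheduled naval exercises announced")
print(periodic_review(date.today()))

A real indicator list would be validated against the scenario and reviewed on a fixed schedule, as the method describes.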
Potential Pitfalls
Analysts need to be careful when communicating the likelihood of
unlikely events. The word “unlikely” can be interpreted as meaning anywhere from 1 percent to 25 percent
probability; “highly unlikely” may mean from 1 percent to 10
percent.19 Clients receiving an intelligence report that uses words of
estimative probability such as “very unlikely” will typically interpret
the report as consistent with their own prior thinking. If the report
says a terrorist attack against a specific foreign embassy within the
next year is highly unlikely, it is possible that the analyst may be
thinking of about a 10 percent possibility. A decision maker, however,
may see that as consistent with his or her own thinking and assume
the likelihood is less than 1 percent. Such a difference in likelihood
can make the difference in deciding whether to order expensive
contingency plans or enact proactive preventive countermeasures.
When an analyst is describing the likelihood of an unlikely event, it is
desirable to express the likelihood in numeric terms, either as a
range (such as less than 5 percent or 10 to 20 percent) or as bettor’s
odds (such as one chance in ten).
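A small worked example may help. The hypothetical Python sketch below converts the numeric likelihood an analyst has in mind into both a percentage range and approximate bettor's odds; the band width and the 10 percent example value are illustrative assumptions.

from fractions import Fraction

def as_bettors_odds(probability):
    """Express a probability such as 0.10 as approximate bettor's odds."""
    frac = Fraction(probability).limit_denominator(20)
    unit = "chance" if frac.numerator == 1 else "chances"
    return f"about {frac.numerator} {unit} in {frac.denominator}"

def as_range(probability, band=0.05):
    """Express a probability as a numeric range rather than a vague word."""
    low = max(0.0, probability - band)
    high = min(1.0, probability + band)
    return f"roughly {low:.0%} to {high:.0%}"

analyst_estimate = 0.10   # what the analyst actually means by "highly unlikely"
print(as_range(analyst_estimate))         # roughly 5% to 15%
print(as_bettors_odds(analyst_estimate))  # about 1 chance in 10

Either form communicates the roughly 10 percent likelihood far less ambiguously than the phrase "highly unlikely."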

Figure 8.2.5 shows an example of an unlikely event—the outbreak of conflict in the Arctic Ocean—that could have major geopolitical
consequences. Analysts can employ the technique to sensitize
decision makers to the possible effects of the melting of Arctic ice
and stimulate them to think about measures that could deal with the
danger.
Relationship to Other Techniques
High Impact/Low Probability Analysis is sometimes confused with What If? Analysis. Both deal with low-
probability or unlikely events.

What If? Analysis does not require new or anomalous information to serve as a trigger. It reframes
the question by positing that a surprise event has happened. It then looks backwards from that
surprise event to map several ways it could have happened. It also tries to identify actions that, if
taken in a timely manner, might have prevented it.

High Impact/Low Probability Analysis is primarily a vehicle for warning decision makers that
recent, unanticipated developments suggest that an event previously deemed highly unlikely may
occur. Extrapolating from recent evidence or information, it projects forward to discuss what could
occur and the consequences if the event does occur. It challenges the conventional wisdom.
Figure 8.2.5 High Impact/Low Probability Scenario: Conflict in the Arctic20
Source: This example was developed by Pherson Associates, LLC.
Origins of This Technique
The description here is based on Randolph H. Pherson, “High
Impact/Low Probability Analysis,” in Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019);
Globalytica, LLC, training materials; and Department of Homeland
Security, Office of Intelligence and Analysis training materials. Tools
needed to create your own chart can be found at
https://www.mindtools.com/pages/article/newPPM_78.htm.
8.2.6 Delphi Method
Delphi is a method for eliciting ideas, judgments, or forecasts from a
group of experts who may be geographically dispersed. It is different
from a survey in that there are two or more rounds of questioning.
After the first round of questions, a moderator distributes all the
answers and explanations of the answers to all participants, often
anonymously. The expert participants are then given an opportunity
to modify or clarify their previous responses, if so desired, based on
what they have seen in the responses of the other participants. A
second round of questions builds on the results of the first round,
drills down into greater detail, or moves to a related topic. The
technique allows flexibility by increasing the number of rounds of
questions.
When to Use It
The RAND Corporation developed the Delphi Method at the
beginning of the Cold War in the 1950s to forecast the impact of new
technology on warfare. It was also used to assess the probability,
intensity, or frequency of future enemy attacks. In the 1960s and
1970s, Delphi became widely known and used as a method for
futures research, especially forecasting long-range trends in science
and technology. Futures research is like intelligence analysis in that
the uncertainties and complexities one must deal with often preclude
the use of traditional statistical methods, so explanations and
forecasts must be based on the experience and informed judgments
of experts.

Over the years, Delphi has been used in a wide variety of ways, and
for an equally wide variety of purposes. Although many Delphi
projects have focused on developing a consensus of expert
judgment, a variant called Policy Delphi is based on the premise that
the decision maker is not interested in having a group make a
consensus decision, but rather in having the experts identify
alternative policy options and present all the supporting evidence for
and against each option. That is the rationale for describing Delphi
as a reframing technique. It can be used to identify a set of divergent
opinions, many of which may be worth exploring.

One group of Delphi scholars advises that the Delphi technique “can
be used for nearly any problem involving forecasting, estimation, or
decision making”—if the problem is not so complex or so new as to
preclude the use of expert judgment. These Delphi advocates report
using it for diverse purposes that range from “choosing between
options for regional development, to predicting election outcomes, to
deciding which applicants should be hired for academic positions, to
predicting how many meals to order for a conference luncheon.”21
Value Added
We believe the development of Delphi panels of experts on areas of
critical concern should be standard procedure for outreach to experts
outside an analyst’s organization, particularly in the intelligence
community because of its more insular work environment. In the
United States, the Office of the Director of National Intelligence
(ODNI) encourages intelligence analysts to consult with relevant
experts in academia, business, and NGOs in Intelligence Community
Directive No. 205, on Analytic Outreach, dated July 2008.

As an effective process for eliciting information from outside experts, Delphi has several advantages:

Outside experts can participate remotely, thus reducing time and travel costs.

Delphi can provide analytic judgments on any topic for which outside experts are available. That means it can be an
independent cross-check of conclusions reached in-house. If the
same conclusion is reached in two analyses using different
analysts and different methods, confidence in the conclusion is
increased. If the conclusions disagree, this is also valuable
information that may open a new avenue of research.

Delphi identifies any outliers who hold an unusual position. Recognizing that the majority is not always correct, researchers
can then focus on gaining a better understanding of the grounds
for any views that diverge significantly from the consensus. In
fact, identification of experts who have an alternative
perspective and are qualified to defend it might be the objective
of a Delphi project.

The process by which panel members are provided feedback from other experts and are given an opportunity to modify their
responses makes it easy for participants to adjust their previous
judgments in response to new evidence.

In many Delphi projects, the experts remain anonymous to other panel members so that no one can use his or her position of
authority, reputation, or personality to influence others.
Anonymity also facilitates the expression of opinions that go
against the conventional wisdom and may not otherwise be
expressed.

The anonymous features of the Delphi Method substantially reduce the potential for Groupthink. They also make it more
difficult for any participant with strong views based on his or her
past experiences and worldview to impose such views on the
rest of the group (Projecting Past Experiences). The
requirement that the group of experts engage in several rounds
of information sharing also helps mitigate the potential for
Satisficing.

The Delphi Method can help protect analysts from falling victim to
several intuitive traps, including giving too much weight to first
impressions or initial data, especially if they attract our attention and
seem important at the time (Relying on First Impressions). It also
protects against the tendency to continue holding to an analytic
judgment when confronted with a mounting list of evidence that
contradicts the initial conclusion (Rejecting Evidence).
The Method
In a Delphi project, a moderator or analyst sends a questionnaire to
a panel of experts usually in different locations. The experts respond
to these questions and are asked to provide short explanations for
their responses. The moderator collates the results from the first
questionnaire and sends the collated responses back to all panel
members, asking them to reconsider their responses based on what
they see and learn from the other experts’ responses and
explanations. Panel members may also be asked to answer another
set of questions. This cycle of question, response, and feedback
continues through several rounds using the same or a related set of
questions. It is often desirable for panel members to remain
anonymous so that they are not unduly influenced by the responses
of senior members. This method is illustrated in Figure 8.2.6.
Figure 8.2.6 Delphi Technique

Examples

To show how Delphi can be used for intelligence analysis, we have developed three illustrative
applications:

Evaluation of another country’s policy options. The Delphi project manager or moderator
identifies several policy options that a foreign country might choose. The moderator then asks a
panel of experts on the country to rate the desirability and feasibility of each option, from the other
country’s point of view, on a five-point scale ranging from “Very Desirable” or “Feasible” to “Very
Undesirable” or “Definitely Infeasible.” Panel members also identify and assess any other policy
options that should be considered and identify the top two or three arguments or items of evidence
that guided their judgments. A collation of all responses is sent back to the panel with a request for
members to do one of the following: reconsider their position in view of others’ responses, provide
further explanation of their judgments, or reaffirm their previous response. In a second round of
questioning, it may be desirable to list key arguments and items of evidence and ask the panel to
rate them on their validity and their importance, again from the other country’s perspective.

Analysis of alternative hypotheses. A panel of outside experts is asked to estimate the
probability of each hypothesis in a set of mutually exclusive hypotheses where the probabilities
must add up to 100 percent. This could be done as a stand-alone project or to double-check an
already completed Analysis of Competing Hypotheses (chapter 7). If two analyses using different
analysts and different methods arrive at the same conclusion, confidence in the conclusion can
increase. If the analyses disagree, that may also be useful to know, as one can then seek to
understand the rationale for the different judgments.

Warning analysis or monitoring a situation over time. The facilitator asks a panel of experts to
estimate the probability of a future event. This might be either a single event for which the analyst is
monitoring early warning indicators or a set of scenarios for which the analyst is monitoring
milestones to determine the direction in which events seem to be moving. A Delphi project that
monitors change over time can be managed in two ways. One is to have a new round of questions
and responses at specific intervals to assess the extent of any change. The other is what is called
either Dynamic Delphi or Real-Time Delphi, where participants can modify their responses at any
time as new events occur or as a participant submits new information.22 The probability estimates
provided by the Delphi panel can be aggregated to furnish a measure of the significance of change
over time. They can also be used to identify differences of opinion among the experts that warrant
further examination.
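For the alternative-hypotheses and monitoring examples above, the collation work can be sketched in a few lines of code. The following hypothetical Python example normalizes each panelist's estimates so they sum to 100 percent, computes the median and spread for each hypothesis, and flags divergent views for follow-up. The panelist names, hypotheses, numbers, and the 25-point outlier threshold are all illustrative assumptions.

from statistics import median

# panelist -> {hypothesis: estimated probability, in percent}
responses = {
    "expert_1": {"H1": 60, "H2": 30, "H3": 10},
    "expert_2": {"H1": 50, "H2": 40, "H3": 10},
    "expert_3": {"H1": 10, "H2": 20, "H3": 70},  # divergent view worth exploring
}

def normalize(estimates):
    """Scale one panelist's estimates so the mutually exclusive hypotheses sum to 100."""
    total = sum(estimates.values())
    return {h: 100 * v / total for h, v in estimates.items()}

def collate(responses, outlier_gap=25):
    """Summarize a panel round: median, spread, and outlier views per hypothesis."""
    normalized = {panelist: normalize(est) for panelist, est in responses.items()}
    hypotheses = next(iter(normalized.values())).keys()
    summary = {}
    for h in hypotheses:
        values = [normalized[p][h] for p in normalized]
        med = median(values)
        outliers = [p for p in normalized if abs(normalized[p][h] - med) > outlier_gap]
        summary[h] = {"median": med, "spread": max(values) - min(values), "outliers": outliers}
    return summary

for hypothesis, stats in collate(responses).items():
    print(hypothesis, stats)

The spread and outlier fields point the moderator toward the divergent views that, as noted above, may be the most valuable output of the exercise.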
Potential Pitfalls
A Delphi project involves administrative work to identify the experts,
communicate with panel members, and collate and tabulate their
responses through several rounds of questioning. The use of Delphi
by intelligence organizations can pose additional obstacles, such as
ensuring that the experts have appropriate security clearances or
requiring them to meet with the analysts in approved office spaces.
Another potential pitfall is that overenthusiastic use of the technique
can force consensus when it might be better to present two
competing hypotheses and the evidence supporting each position.
Origins of This Technique
The origin of Delphi as an analytic method was described above
under “When to Use It.” The following references were useful in
researching this topic: Murray Turoff and Starr Roxanne Hiltz,
“Computer Based Delphi Processes,” 1996,
http://web.njit.edu/~turoff/Papers/delphi3.html; and Harold A.
Linstone and Murray Turoff, The Delphi Method: Techniques and
Applications (Reading, MA: Addison-Wesley, 1975). A 2002 digital
version of Linstone and Turoff’s book is available online at
http://is.njit.edu/pubs/delphibook; see in particular the chapter by
Turoff on “The Policy Delphi”
(http://is.njit.edu/pubs/delphibook/ch3b1.pdf).

For more recent information on validity and optimal techniques for implementing a Delphi project, see Gene Rowe and George Wright,
“Expert Opinions in Forecasting: The Role of the Delphi Technique,”
in Principles of Forecasting, ed. J. Scott Armstrong (New York:
Springer Science+Business Media, 2001).

Several software programs are available for using the Delphi Method; for example, one can be found at
http://armstrong.wharton.upenn.edu/delphi2. Distributed decision
support systems now publicly available to support virtual teams
include some functions necessary for Delphi as part of a larger
package of analytic tools.
8.3 CONFLICT MANAGEMENT
TECHNIQUES
Challenge techniques support the identification and confrontation of
opposing views. That is, after all, their purpose. This raises two
important questions, however. First, how can confrontation be
managed so that it becomes a learning experience rather than a
battle between determined adversaries? Second, in an analysis of
any topic with a high degree of uncertainty, how can one decide if
one view is wrong or if both views have merit and need to be
discussed in an analytic report? This section offers a conceptual
framework and seven useful techniques for dealing with analytic
conflicts.

A widely distributed article in the Harvard Business Review stresses that improved collaboration among organizations or organizational
units with different interests can be achieved only by accepting and
actively managing the inevitable—and desirable—conflicts between
these units:

The disagreements sparked by differences in perspective, competencies, access to information, and strategic focus . . . actually generate much of the value that can come from
collaboration across organizational boundaries. Clashes
between parties are the crucibles in which creative
solutions are developed. . . . So instead of trying simply to
reduce disagreements, senior executives need to embrace
conflict and, just as important, institutionalize mechanisms
for managing it.23

The most common procedures for dealing with differences of opinion have been to force a consensus, minimize the differences, or—in the
U.S. Intelligence Community—add a dissenting footnote to an
estimate. We believe these practices are suboptimal, at best. We
hope they will become increasingly rare as our analytic communities
embrace greater collaboration early in the analytic process, rather
than endure mandated coordination at the end of the process after
all parties are locked into their positions. One of the principal benefits
of using Structured Analytic Techniques for intraoffice and
interagency collaboration is that these techniques identify differences
of opinion at the start of the analytic process. This gives time for the
differences to be at least understood, if not resolved, at the working
level before management becomes involved.

How one deals with conflicting analytic assessments or estimates depends, in part, upon one’s expectations about what is achievable.
Mark Lowenthal has written persuasively of the need to recalibrate
expectations of what intelligence analysis can accomplish.24 More
than in any other discipline, intelligence analysts typically work with
incomplete, ambiguous, and potentially deceptive evidence.
Combine this with the fact that intelligence analysts are seeking to
understand human behavior, which is difficult to predict even in our
own culture. It should not be surprising that intelligence analysis
sometimes turns out to be “wrong.”

Acceptance of the basic principle that it is okay for analysts to be uncertain, because they are dealing with uncertain matters, helps to
set the stage for appropriate management of conflicting views. In
some cases, one position will be refuted and rejected. In other
cases, two or more positions may be reasonable assessments or
estimates, usually with one more likely than the others. In such
cases, conflict is mitigated when it is recognized that each position
has some value in covering the full range of options.

Earlier in this chapter, we noted that an assessment or estimate that is properly described as “probable” has about a one-in-four chance
of being wrong. This has clear implications for appropriate action
when analysts hold conflicting views. If an analysis meets rigorous
standards yet conflicting views remain, decision makers are best
served by an analytic product that deals directly with the uncertainty
rather than minimizing or suppressing it. The greater the uncertainty,
the more appropriate it is to go forward with a product that discusses
the most likely assessment or estimate and gives one or more
alternative possibilities. Some intelligence services have even
required that an analysis or assessment cannot go forward unless
the analyst can demonstrate that he or she has considered
alternative explanations or scenarios.

Factors to be considered when assessing the amount of uncertainty include the following:

An estimate of the future generally has more uncertainty than an assessment of a past or current event.

Mysteries, for which there are no knowable answers, are far more uncertain than puzzles, for which an answer does exist if one could only find it.25

The more assumptions that are made, the greater the uncertainty. Assumptions about intent or capability, and whether they have changed, are especially critical.

Analysis of human behavior or decision making is far more uncertain than analysis of technical data.

The behavior of a complex dynamic system is more uncertain than that of a simple system. The more variables and stakeholders involved in a system, the more difficult it is to foresee what might happen.

If the decision is to go forward with a discussion of alternative assessments, the next step might be to produce any of the following:

A comparative analysis of opposing views in a single report. This calls for analysts to identify the sources and reasons for the uncertainty (e.g., assumptions, ambiguities, knowledge gaps), consider the implications of alternative assessments or estimates, determine what it would take to resolve the uncertainty, and suggest indicators for future monitoring that might provide early warning of a given alternative emerging.

An analysis of alternative scenarios, as described in chapter 9.

A What If? Analysis or High Impact/Low Probability Analysis, as described in this chapter.

A report that is clearly identified as a “second opinion” or “alternative perspective.”
8.3.1 Adversarial Collaboration
Adversarial Collaboration is an agreement between opposing parties
about how they will work together to resolve or at least gain a better
understanding of their differences. Adversarial Collaboration is a
relatively new concept championed by Daniel Kahneman, the
psychologist who along with Amos Tversky initiated much of the
research on cognitive biases described in Heuer’s Psychology of
Intelligence Analysis. Kahneman received a Nobel Prize in 2002 for
his research on behavioral economics, and he wrote an intellectual
autobiography in connection with this work in which he commented
as follows on Adversarial Collaboration:

One line of work that I hope may become influential is the development of a procedure of adversarial collaboration,
which I have championed as a substitute for the format of
critique-reply-rejoinder in which debates are currently
conducted in the social sciences. Both as a participant and
as a reader, I have been appalled by the absurdly
adversarial nature of these exchanges, in which hardly
anyone ever admits an error or acknowledges learning
anything from the other. Adversarial collaboration involves a
good-faith effort to conduct debates by carrying out joint
research—in some cases there may be a need for an
agreed arbiter to lead the project and collect the data.
Because there is no expectation of the contestants reaching
complete agreement at the end of the exercise, adversarial
collaboration will usually lead to an unusual type of joint
publication, in which disagreements are laid out as part of a
jointly authored paper.26

Kahneman’s approach to Adversarial Collaboration involves agreement on empirical tests for resolving a dispute and conducting
those tests with the help of an impartial arbiter. A joint report
describes the tests, states what both sides agree has been learned,
and provides interpretations of the test results on which they
disagree.27

Truth springs from argument amongst friends.


—David Hume, Scottish philosopher

Although differences of opinion on intelligence judgments can seldom be resolved through empirical research, the Adversarial
Collaboration concept can, nevertheless, be adapted to apply to the
work of analysts. Analysts—and their managers—can agree to use a
variety of techniques to reduce, resolve, more clearly define, or
explain their differences. These are grouped together here under the
overall heading of Adversarial Collaboration.
When to Use It
Adversarial Collaboration should be used only if both sides are open
to discussion of an issue. If one side is fully locked into its position
and has repeatedly rejected the other side’s arguments, this
technique is unlikely to be successful. Structured Debate is more
appropriate to use in these situations because it includes an
independent arbiter who listens to both sides and then decides.
Value Added
Adversarial Collaboration can help opposing analysts see the merit
of another group’s perspective. If successful, it will help both parties
gain a better understanding of what assumptions or evidence are
behind their opposing opinions on an issue and to explore the best
way of dealing with these differences. Can one side be shown to be
wrong, or should both positions be reflected in any report on the
subject? Can there be agreement on indicators to show the direction
in which events seem to be moving?

A key advantage of Adversarial Collaboration techniques is that they bring to the surface critical items of evidence, logic, and assumptions
that the other side had not factored into its own analysis. This is
especially true for evidence that is inconsistent with or unhelpful in
supporting either side’s lead hypothesis.
The Method
Six approaches to Adversarial Collaboration are described here.
What all have in common is the requirement to understand and
address the other side’s position rather than simply dismiss it. Mutual
understanding of the other side’s position is the bridge to productive
collaboration. These six techniques are not mutually exclusive; in
other words, one might use several of them for any specific project.

8.3.1.1 The Method: Key Assumptions Check

The first step in understanding what underlies conflicting judgments is a Key Assumptions Check, as described in chapter 7. Evidence is
always interpreted in the context of a mental model about how
events normally transpire in a given country or situation, and a Key
Assumptions Check is one way to make a mental model explicit. If a
Key Assumptions Check has not already been done, each side can
apply this technique and then share the results with the other side.

Discussion should then focus on the rationale for each assumption and suggestions for how the assumption might be either confirmed
or refuted. If the discussion focuses on the probability of Assumption
A versus Assumption B, it is often helpful to express probability as a
numerical range—for example, 65 percent to 85 percent for
probable. When analysts go through these steps, they sometimes
discover they are not as far apart as they thought. The discussion
should focus on refuting the other side’s assumptions rather than
supporting one’s own.
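
To make the numerical-range step concrete, here is a minimal Python sketch of comparing two verbal estimates once each is mapped to a range; the word-to-range table and the example comparisons are assumptions chosen purely for illustration, not a prescribed standard:

```python
# Illustrative only: the numeric ranges assigned to each word of estimative
# probability are assumptions for this sketch, not an official scale.
PROBABILITY_RANGES = {
    "remote": (0.01, 0.05),
    "unlikely": (0.20, 0.45),
    "even chance": (0.45, 0.55),
    "probable": (0.65, 0.85),
    "almost certain": (0.90, 0.99),
}

def ranges_overlap(word_a: str, word_b: str) -> bool:
    """Return True if two verbal estimates imply overlapping numeric ranges."""
    lo_a, hi_a = PROBABILITY_RANGES[word_a]
    lo_b, hi_b = PROBABILITY_RANGES[word_b]
    return lo_a <= hi_b and lo_b <= hi_a

# Analysts arguing over "probable" versus "even chance" can see exactly how
# large the numeric gap between their positions really is.
print(ranges_overlap("probable", "even chance"))    # False: a real gap to discuss
print(ranges_overlap("even chance", "unlikely"))    # True: closer than the words suggest
```

Seeing the ranges side by side often reveals whether the disagreement is about substance or only about vocabulary.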

8.3.1.2 The Method: Analysis of Competing Hypotheses

When opposing sides are dealing with a collegial difference of opinion, with neither side firmly locked into its position, Analysis of
Competing Hypotheses (ACH), described in chapter 7, may be a
good structured format for helping to identify and discuss
differences. One important benefit of ACH is that it pinpoints the
exact sources of disagreement. Both parties agree on a set of
hypotheses and then rate each item of evidence or relevant
information as consistent or inconsistent with each hypothesis. When
analysts disagree on these consistency ratings, the differences are
often quickly resolved. When not resolved, the differences often
point to previously unrecognized assumptions or to some interesting
rationale for a different interpretation of the evidence. One can also
use ACH to trace the significance of each item of relevant
information in supporting the overall conclusion.
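
As a rough illustration of the bookkeeping behind an ACH matrix, the Python sketch below tallies how much evidence argues against each hypothesis; the hypotheses, evidence items, and ratings are hypothetical and invented for the example:

```python
# Hypothetical ACH matrix: each item of relevant information is rated
# "C" (consistent), "I" (inconsistent), or "N" (neutral) against each hypothesis.
ratings = {
    "H1: deliberate deception": {"E1": "C", "E2": "I", "E3": "I", "E4": "N"},
    "H2: routine activity":     {"E1": "C", "E2": "C", "E3": "N", "E4": "C"},
    "H3: unauthorized action":  {"E1": "I", "E2": "C", "E3": "I", "E4": "I"},
}

def inconsistency_score(evidence_ratings: dict) -> int:
    """Count the items of evidence rated inconsistent with a hypothesis."""
    return sum(1 for rating in evidence_ratings.values() if rating == "I")

# ACH focuses on refutation: the hypothesis with the least evidence against it
# is the strongest survivor, not the one with the most supporting evidence.
for hypothesis, evidence in sorted(ratings.items(), key=lambda kv: inconsistency_score(kv[1])):
    print(f"{hypothesis}: {inconsistency_score(evidence)} inconsistent item(s)")
```

When two analysts fill in the same matrix and a cell differs (say, how E3 bears on H1), that cell is the precise point of disagreement to discuss.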

The use of ACH may not result in the elimination of all the
differences of opinion, but it can be a big step toward understanding
these differences and determining what might be reconcilable
through further intelligence collection or research. The analysts can
then make a judgment about the potential productivity of further
efforts to resolve the differences. ACH may not be helpful, however,
if two sides are already locked into their positions. It is all too easy in
ACH for one side to interpret the evidence and enter assumptions in
a way that deliberately supports its preconceived position. To
challenge a well-established mental model, other challenge or
conflict management techniques may be more appropriate.

8.3.1.3 The Method: Argument Mapping

Argument Mapping, which was described in chapter 7, maps the logical relationship between each element of an argument. Two
sides might agree to work together to create a single Argument Map
with the rationales both for and against a given conclusion. Such an
Argument Map will show where the two sides agree, where they
diverge, and why. The visual representation of the argument makes
it easier to recognize weaknesses in opposing arguments. This
technique pinpoints the location of any disagreement and could
serve as an objective basis for mediating a disagreement. An
alternative approach might be to create, compare, and discuss the
merits of alternative, contrasting Argument Maps.
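
A bare-bones way to hold a joint Argument Map in code is a small tree in which each node records the reasons for and against a claim; the conclusion and reasons below are hypothetical, invented only to show the structure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str
    supports: List["Claim"] = field(default_factory=list)  # reasons for the claim
    rebuts: List["Claim"] = field(default_factory=list)    # reasons against the claim

def render(claim: Claim, indent: int = 0, marker: str = "") -> None:
    """Print the shared map; [+] marks support, [-] marks a rebuttal."""
    print("  " * indent + marker + claim.text)
    for child in claim.supports:
        render(child, indent + 1, "[+] ")
    for child in claim.rebuts:
        render(child, indent + 1, "[-] ")

# Both sides attach their reasons to the same map, so agreement and divergence
# are visible in one place.
conclusion = Claim("Group X will attempt an attack within six months")
conclusion.supports.append(Claim("Recent procurement matches past pre-attack behavior"))
conclusion.rebuts.append(Claim("Key operatives are reported to be in custody"))
conclusion.rebuts[0].rebuts.append(Claim("Reporting on the arrests is single-source and dated"))

render(conclusion)
```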

8.3.1.4 The Method: Mutual Understanding

When analysts in different offices or agencies disagree, the disagreement is often exacerbated by the fact that they have a
limited understanding of the other side’s position and logical
reasoning. The Mutual Understanding approach addresses this
problem directly.

There are two ways to measure the health of a debate: the kinds of questions being asked and the level of listening.
—David A. Garvin and Michael A. Roberto, “What You Don’t Know
about Making Decisions,” Harvard Business Review (September 2001)

After an exchange of information on their positions, the two sides meet with a facilitator, moderator, or decision maker. Side 1 is
required to explain to Side 2 its understanding of Side 2’s position.
Side 1 must do this in a manner that satisfies Side 2 that its position
is appropriately represented. Then the roles are reversed, and Side
2 explains its understanding of Side 1’s position. This mutual
exchange is often difficult to do without carefully listening to and
understanding the opposing view and what it is based upon. Once
each side accurately understands and represents the other side’s
position, both sides can more collegially discuss their differences
rationally and with less emotion. Experience shows that this
technique normally prompts some movement of the opposing parties
toward common ground.28

8.3.1.5 The Method: Joint Escalation


When disagreement occurs within an analytic team, the
disagreement is often referred to a higher authority. This escalation
often makes matters worse. What typically happens is that a
frustrated analyst takes the problem up to his or her boss, briefly
explaining the conflict in a manner that is clearly supportive of the
analyst’s own position. The analyst then returns to the group armed
with the boss’s support. However, the opposing analyst(s) have also
gone to their bosses and come back with support for their solution.
Each analyst is then locked into what has become “my manager’s
view” of the issue. An already thorny problem has become even
more intractable. If the managers engage each other directly, both
will quickly realize they lack a full understanding of the problem and
must factor in what their counterparts know before trying to resolve
the issue.

This situation can be avoided by an agreement among team members, or preferably an established organization policy, that
requires joint escalation.29 The analysts should be required to
prepare a joint statement describing the disagreement and to
present it jointly to their superiors. This requires each analyst to
understand and address, rather than simply dismiss, the other side’s
position. It also ensures that managers have access to multiple
perspectives on the conflict, its causes, and various paths for
resolution.

Just the need to prepare such a joint statement discourages escalation and often leads to an agreement. The proponents of this
approach report their experience that “companies that require people
to share responsibility for the escalation of a conflict often see a
decrease in the number of problems that are pushed up the
management chain. Joint escalation helps create the kind of
accountability that is lacking when people know they can provide
their side of an issue to their own manager and blame others when
things don’t work out.”30

8.3.1.6 The Method: The Nosenko Approach


Yuri Nosenko was a Soviet intelligence officer who defected to the
United States in 1964. Whether he was a true defector or a Soviet
plant was a subject of intense and emotional controversy within the
CIA for more than a decade. In the minds of some, this historic case
is still controversial.

At a critical decision point in 1968, the leadership of the CIA’s Soviet Bloc Division set up a three-man team to review all the evidence and
make a recommendation for the division’s action in this case. The
amount of evidence is illustrated by the fact that just one single
report arguing that Nosenko was still under Soviet control was 1,000
pages long. The team consisted of one leader who was of the view
that Nosenko was a Soviet plant, one leader who believed that he
was a bona fide defector, and an experienced officer who had not
previously been involved but was inclined to think Nosenko might be
a plant.

The interesting point here is the ground rule that the team was
instructed to follow. After reviewing the evidence, each officer
identified those items of evidence thought to be of critical importance
in making a judgment on Nosenko’s bona fides. Any item that one
officer stipulated as critically important then had to be addressed by
the other two members.

It turned out that fourteen items were stipulated by at least one of the
team members and had to be addressed by the others. Each officer
prepared his own analysis, but they all had to address the same
fourteen issues. Their report became known as the “Wise Men”
report.

The team did not come to a unanimous conclusion. However, it was significant that the thinking of all three moved in the same direction.
When the important evidence was viewed from the perspective of
searching for the truth, rather than proving Nosenko’s guilt or
innocence, the case that Nosenko was a plant began to unravel. The
officer who had always believed that Nosenko was bona fide felt he
could now prove the case. The officer who was relatively new to the
case changed his mind in favor of Nosenko’s bona fides. The officer
who had been one of the principal analysts and advocates for the
position that Nosenko was a plant became substantially less
confident in that conclusion. There were now adequate grounds for
management to make the decision.

The ground rules used in the Nosenko case can be applied in any
effort to abate a long-standing analytic controversy. The key point
that makes these rules work is the requirement that each side must
directly address the issues important to the other side and thereby
come to understand the other’s perspective. This process guards
against the common propensity of analysts to make their own
arguments and then simply dismiss those of the other side as
unworthy of consideration.31
8.3.2 Structured Debate
Structured Debate is a planned debate between analysts or analytic
teams that hold opposing points of view on a specific issue. The
debate is conducted according to set rules and before an audience,
which may be a “jury of peers” or one or more senior analysts or
managers.
When to Use It
Structured Debate is called for when a significant difference of
opinion exists within or between analytic units or within the decision-
making community. It can also be used effectively when Adversarial
Collaboration has been unsuccessful or is impractical, and a choice
must be made between two opposing opinions or a decision to go
forward with a comparative analysis of both. Structured Debate
requires a significant commitment of analytic time and resources. A
long-standing policy issue, a critical decision that has far-reaching
implications, or a dispute within the analytic community that is
obstructing effective interagency collaboration would be grounds for
making this type of investment in time and resources.
Value Added
In the method proposed here, each side presents its case in writing
to the opposing side; then, both cases are combined in a single
paper presented to the audience prior to the debate. The oral debate
then focuses on refuting the other side’s position. Glib and
personable speakers can always make arguments for their own
position sound persuasive. Effectively refuting the other side’s
position is a different ball game, however. The requirement to refute
the other side’s position brings to the debate an important feature of
the scientific method: that the most likely hypothesis is the one with
the least evidence against it as well as good evidence for it. (The
concept of refuting hypotheses is discussed in chapter 7.)

He who knows only his own side of the case, knows little of
that. His reasons may be good, and no one may have been
able to refute them. But if he is equally unable to refute the
reasons on the opposite side, if he does not so much as
know what they are, he has no ground for preferring either
opinion.
—John Stuart Mill, On Liberty (1859)

The goal of the debate is to decide what to tell the client. If neither
side can effectively refute the other, then arguments for and against
both sides should be in the report. Customers of intelligence analysis
gain more benefit by weighing well-argued conflicting views than
from reading an assessment that masks substantive differences
among analysts or drives the analysis toward the lowest common
denominator. If participants routinely interrupt one another or pile on
rebuttals before digesting the preceding comment, the objective of
Structured Debate is defeated. The teams are engaged in emotional
conflict rather than constructive debate.
The Method
Start by defining the conflict to be debated. If possible, frame the
conflict in terms of competing and mutually exclusive hypotheses.
Ensure that all sides agree with the definition. Then follow these
steps:

An individual is selected, or a group identified, who can develop the best case for each hypothesis.

Each side writes up the best case from its point of view. This
written argument must be structured with an explicit
presentation of key assumptions, key pieces of evidence, and
careful articulation of the logic behind the argument.

Each side presents the opposing side with its arguments, and
the two sides are given time to develop counterarguments to
refute the opposing side’s position.

Next, conduct the debate phase in the presence of a jury of peers, senior analysts, or managers who will provide guidance after
listening to the debate. If desired, an audience of interested
observers might also watch the debate.

The debate starts with each side presenting a brief (maximum five minutes) summary of the argument for its position. The jury
and the audience are expected to have read each side’s full
argument.

Each side then presents to the audience its rebuttal of the other
side’s written position. The purpose here is to proceed in the
oral arguments by systematically refuting alternative hypotheses
rather than by presenting more evidence to support one’s own
argument. This is the best way to evaluate the strengths of the
opposing arguments.

After each side has presented its rebuttal argument, the other
side is given an opportunity to refute the rebuttal.

The jury asks questions to clarify the debaters’ positions or gain additional insight needed to pass judgment on the debaters’
positions.

The jury discusses the issue and passes judgment. The winner
is the side that makes the best argument refuting the other
side’s position, not the side that makes the best argument
supporting its own position. The jury may also recommend
possible next steps for further research or intelligence collection
efforts. If neither side can refute the other’s arguments, it may
be because both sides have a valid argument; if that is the case,
both positions should appear in any subsequent analytic report.
Relationship to Other Techniques
Structured Debate is like the Team A/Team B technique that has
been taught and practiced throughout the intelligence community.
Structured Debate differs from Team A/Team B Analysis in its focus
on refuting the other side’s argument. Use of the technique also
reduces the chances that emotions overshadow the process of
conflict resolution. The authors have dropped Team A/Team B
Analysis and Devil’s Advocacy from the third edition of this book
because they believe neither is an appropriate model for how
analysis should be conducted.32
Origins of This Technique
The history of debate goes back to the Socratic dialogues in ancient
Greece, and even earlier. Many different forms of debate have
evolved since then. Richards J. Heuer Jr. formulated the idea of
focusing the debate on refuting the other side’s argument rather than
supporting one’s own.
NOTES
1. Rob Johnston, Analytic Culture in the U.S. Intelligence Community
(Washington, DC: CIA Center for the Study of Intelligence, 2005), 64.

2. Stephen J. Coulthart, “Improving the Analysis of Foreign Affairs: Evaluating Structured Analytic Techniques” (PhD diss., University of Pittsburgh, 2015), http://d-scholarship.pitt.edu/26055/1/CoulthartSJ_ETD2015.pdf

3. Reframing is like the Problem Restatement technique Morgan Jones described in his book The Thinker’s Toolkit (New York: Three Rivers Press, 1995). Jones observed that “the moment we define a problem our thinking about it quickly narrows considerably.” We create a frame through which we view the problem and that tends to obscure other interpretations of the problem. A group can change that frame of reference, and challenge its own thinking, simply by redefining the problem.

4. For more information about this application, see the Applied Critical Thinking Handbook, 7.0 (Fort Leavenworth, KS: University of Foreign Military and Cultural Studies, 2015), https://fas.org/irp/doddir/army/critthink.pdf.

5. Kesten Green and J. Scott Armstrong, “Structured Analogies for Forecasting,” International Journal of Forecasting 23, no. 3 (July–September 2007): 365–376, www.forecastingprinciples.com/paperpdf/Structured_Analogies.pdf

6. This technique should not be confused with Edward de Bono’s Six Thinking Hats technique.

7. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson Associates, LLC, 2007), 134–138.

8. The description of how to conduct a Red Hat Analysis has been updated since publication of the first edition to capture insights provided by Todd Sears, a former Defense Intelligence Agency analyst, who noted that Mirror Imaging is unlikely to be overcome simply by sensitizing analysts to the problem. The value of a structured technique like Red Hat Analysis is that it requires analysts to think first about what would motivate them to act before articulating why a foreign adversary would act differently.

9. Peter Schwartz, The Art of the Long View (New York: Doubleday, 1991).

10. Paul Bedard, “CIA Chief Claims Progress with Intelligence Reforms,” U.S. News and World Report, May 16, 2008.

11. Donald P. Steury, ed., Sherman Kent and the Board of National Estimates: Collected Essays (Washington, DC: CIA Center for the Study of Intelligence, 1994), 133.

12. Gary Klein, Sources of Power: How People Make Decisions (Cambridge, MA: MIT Press, 1998), 71.

13. Gary Klein, Intuition at Work: Why Developing Your Gut Instinct Will Make You Better at What You Do (New York: Doubleday, 2002), 91.

14. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than Individuals? The Potential Benefits of Dissent and Diversity for Group Creativity,” in Group Creativity: Innovation through Collaboration, eds. Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University Press, 2003), 63.

15. Leon Panetta, “Government: A Plague of Incompetence,” Monterey County Herald, March 11, 2007, F1.

16. Gregory F. Treverton, “Risks and Riddles,” Smithsonian Magazine, June 2007.

17. For information about these three decision-making models, see Graham T. Allison and Philip Zelikow, Essence of Decision, 2nd ed. (New York: Longman, 1999).

18. For information on fundamental differences in how people think in different cultures, see Richard Nisbett, The Geography of Thought: How Asians and Westerners Think Differently and Why (New York: Free Press, 2003).

19. Richards J. Heuer Jr., Psychology of Intelligence Analysis (Washington, DC: CIA Center for the Study of Intelligence, 1999; reprinted by Pherson Associates, LLC, 2007), 155.

20. A more robust discussion of how conflict could erupt in the Arctic can be found in “Uncharted Territory: Conflict, Competition, or Collaboration in the Arctic?” accessible at shop.globalytica.com.

21. Kesten C. Green, J. Scott Armstrong, and Andreas Graefe, “Methods to Elicit Forecasts from Groups: Delphi and Prediction Markets Compared,” Foresight: The International Journal of Applied Forecasting (Fall 2007), www.forecastingprinciples.com/paperpdf/Delphi-WPlatestV.pdf

22. See Real-Time Delphi, www.realtimedelphi.org

23. Jeff Weiss and Jonathan Hughes, “Want Collaboration? Accept—and Actively Manage—Conflict,” Harvard Business Review, March 2005.

24. Mark M. Lowenthal, “Towards a Reasonable Standard for Analysis: How Right, How Often on Which Issues?” Intelligence and National Security 23, no. 3 (June 2008): 303–15, https://doi.org/10.1080/02684520802121190.

25. Treverton, “Risks and Riddles.”

26. Daniel Kahneman, Autobiography, 2002, available on the Nobel Prize website: https://www.nobelprize.org/prizes/economic-sciences/2002/kahneman/biographical/. For a pioneering example of a report on an Adversarial Collaboration, see Barbara Mellers, Ralph Hertwig, and Daniel Kahneman, “Do Frequency Representations Eliminate Conjunction Effects? An Exercise in Adversarial Collaboration,” Psychological Science 12 (July 2001).

27. Richards J. Heuer Jr. is grateful to Steven Rieber of the Office of the Director of National Intelligence, Office of Analytic Integrity and Standards, for referring him to Kahneman’s work on Adversarial Collaboration.

28. Richards J. Heuer Jr. is grateful to Jay Hillmer of the Defense Intelligence Agency for sharing his experience in using this technique to resolve coordination disputes.

29. Weiss and Hughes, “Want Collaboration?”

30. Ibid.

31. This discussion is based on Richards J. Heuer Jr., “Nosenko: Five Paths to Judgment,” in Inside CIA’s Private World: Declassified Articles from the Agency’s Internal Journal, 1955–1992, ed. H. Bradford Westerfield (New Haven, CT: Yale University Press, 1995).

32. The term Team A/Team B is taken from a historic analytic experiment conducted in 1976. A team of CIA Soviet analysts (Team A) and a team of outside critics (Team B) prepared competing assessments of the Soviet Union’s strategic military objectives. This exercise was characterized by entrenched and public warfare between long-term adversaries. In other words, the historic legacy of Team A/Team B is exactly the type of trench warfare between opposing sides that we need to avoid. The 1976 experiment did not achieve its goals, and it is not a model that most analysts familiar with it would want to follow. We recognize that some Team A/Team B exercises can be fruitful if carefully managed but believe the Reframing Techniques described in this chapter are better ways to proceed.
CHAPTER 9 FORESIGHT TECHNIQUES

9.1 Key Drivers Generation™

9.2 Key Uncertainties Finder™

9.3 Reversing Assumptions

9.4 Simple Scenarios

9.5 Cone of Plausibility

9.6 Alternative Futures Analysis

9.7 Multiple Scenarios Generation

9.8 Morphological Analysis

9.9 Counterfactual Reasoning

9.10 Analysis by Contrasting Narratives

9.11 Indicators Generation, Validation, and Evaluation

9.11.1 The Method: Indicators Generation

9.11.2 The Method: Indicators Validation

9.11.3 The Method: Indicators Evaluation

In the complex, evolving, uncertain situations that analysts and decision makers must deal with every
day, the future is not easily predictable. Some events are intrinsically of low predictability. The best the
analyst can do is to identify the driving forces that are likely to determine future outcomes and monitor
those forces as they interact to become the future. Scenarios are a principal vehicle for doing this.
Scenarios are plausible and provocative stories about how the future might unfold. When alternative
futures are clearly outlined, decision makers can mentally rehearse these futures and ask themselves,
“What should I be doing now to prepare for these futures?”

Anticipating future developments and implementing future-oriented policies is particularly challenging because of the increasing complexity of problems, the expanding number of stakeholders, and the
growing interdependence among various actors, institutions, and events. Senior officials in the
government and private sector expect analysts to alert them to emerging trends and unanticipated
developments such as the rapid rise of the Islamic State (ISIS), the migration crisis in Europe, the Brexit
vote in the United Kingdom, and the results of the 2016 presidential election in the United States.

Analysts can best perform this function by using Foresight Techniques—a family of imaginative
structured techniques that infuse creativity into the analytic process. The techniques help decision
makers better structure a problem and anticipate the unanticipated. Figure 9.0 lists eleven techniques
described in this and other chapters, suggesting the best circumstances for using each technique. When
the Foresight Techniques are matched with Indicators, they can help warn of coming dangers or expose
new ways of responding to opportunities.
Figure 9.0 Taxonomy of Foresight Techniques
Source: Pherson Associates, LLC, 2019.

The process of developing key drivers—and using them in combinations to generate a wide array of
alternative trajectories—forces analysts to think about the future in ways they never would have
contemplated if they relied only on intuition and their own expert knowledge. Generating a
comprehensive list of key drivers requires organizing a diverse team that is knowledgeable in a wide
variety of disciplines. A good guide for ensuring diversity is to engage a set of experts who can address
all the elements of STEMPLES+, that is, the Social, Technical, Economic, Military, Political, Legal,
Environmental, and Security dimensions of a problem plus possible additional factors such as
Demographics, Religion, or Psychology.

We begin this chapter with a discussion of two techniques for developing a list of key drivers. Key
Drivers Generation™ uses the Cluster Brainstorming technique to identify potential key drivers. The Key
Uncertainties Finder™ is an extension of the Key Assumptions Check. We recommend using both
techniques and melding their findings to generate a robust set of drivers. The authors’ decades of
experience in developing key drivers suggest that the number of mutually exclusive key drivers rarely
exceeds four or five. Practice in using these two techniques will help analysts become proficient in the
fourth of the Five Habits of the Master Thinker: identifying key drivers.

Scenarios provide a framework for considering multiple plausible futures by constructing alternative
trajectories or stories about how a situation could unfold. As Peter Schwartz, author of The Art of the
Long View, has argued, “The future is plural.”1 Trying to divine or predict a single outcome typically is a
disservice to senior policy officials, decision makers, and other clients. Generating several scenarios (for
example, those that are most likely, unanticipated, or most dangerous) can be more helpful because it
helps focus attention on the underlying forces and factors—or key drivers—that are most likely to
determine how the future will unfold. When High Impact/Low Probability scenarios are included, analysts
can use scenarios to examine assumptions, identify emerging trends, and deliver useful warning
messages. Foresight Techniques help analysts manage complexity and uncertainty by adding rigor to
the foresight process. They are based on the premise that generating numerous stories about how the
future will evolve will increase the practitioner’s sensitivity to outlier scenarios, reveal new opportunities,
and reduce the chances of surprise. By postulating different scenarios, analysts can identify the multiple
ways in which a situation might evolve. This process can help decision makers develop plans to take
advantage of whatever opportunities the future may hold—or, conversely, to avoid or mitigate risks.
It is vitally important that we think deeply and creatively about the future, or else we run the risk
of being surprised and unprepared. At the same time, the future is uncertain, so we must prepare
for multiple plausible futures, not just the one we expect to happen. Scenarios contain the stories
of these multiple futures, from the expected to the wildcard, in forms that are analytically
coherent and imaginatively engaging. A good scenario grabs us by the collar and says, “Take a
good look at this future. This could be your future. Are you going to be ready?”
— Peter Bishop, Andy Hines, and Terry Collins, “The Current State of Scenario Development,” Foresight (March 2007)

Foresight Techniques are most useful when a situation is complex or when the outcomes are too
uncertain to trust a single prediction. When decision makers and analysts first come to grips with a new
situation or challenge, a degree of uncertainty always exists about how events will unfold. At the point
when national policies or long-term corporate strategies are in the initial stages of formulation, Foresight
Techniques can have a strong impact on decision makers’ thinking.

One benefit of Foresight Techniques is that they provide an efficient mechanism for communicating
complex ideas, typically described in a scenario with a short and “catchy” label. These labels provide a
lexicon for thinking and communicating with other analysts and decision makers about how a situation
or a country is evolving. Examples of effective labels include “Red Ice,” which describes Russia’s
takeover of a melting Arctic Ocean, or “Glueless in Havana,” which describes the collapse of the Cuban
government and its replacement by the Russian mafia.

Scenarios do not predict the future, but a good set of scenarios bounds the range of possible futures for
which a decision maker may need to be prepared. Scenarios can be used as a strategic planning tool
that brings decision makers and stakeholders together with experts to envisage the alternative futures
for which they must plan.2

When analysts are thinking about scenarios, they are rehearsing the future so that decision makers can
be prepared for whatever direction that future takes. Instead of trying to estimate the most likely
outcome and being wrong quite often, scenarios provide a framework for considering multiple plausible
futures. Trying to divine or predict a single outcome is usually a fool’s errand. By generating several
scenarios, the decision makers’ attention is shifted to the key drivers that are most likely to influence
how a situation will develop.

Analysts have learned from experience that involving decision makers in a scenarios workshop is an
effective way to communicate the results of this technique and to sensitize them to important
uncertainties. Most participants find the process of developing scenarios much more useful than any
written report or formal briefing. Those involved in the process often benefit in several ways. Experience
has shown that scenarios can do the following:

Suggest indicators to monitor for signs that a specified future is becoming more or less likely.

Help analysts and decision makers anticipate what would otherwise be surprising developments by
forcing them to challenge assumptions and consider plausible “wild-card” scenarios or
discontinuous events.

Produce an analytic framework for calculating the costs, risks, and opportunities represented by
different outcomes.

Provide a means for weighing multiple unknown or unknowable factors and presenting a set of
plausible outcomes.

Stimulate thinking about opportunities that can be leveraged or exploited.


Bound a problem by identifying plausible combinations of uncertain factors.

When decision makers or analysts from different disciplines or organizational cultures are included on
the team, new insights invariably emerge as new relevant information and competing perspectives are
introduced. Analysts from outside the organizational culture of an analytic unit are likely to see a
problem in different ways. They are likely to challenge key working assumptions and established mental
models of the analytic unit and avoid the trap of expecting only incremental change. Involving decision
makers, or at least a few individuals who work in the office of the ultimate client or decision maker, can
bring invaluable perspective and practical insights to the process.

When analysts look well into the future, they usually find it extremely difficult to do a simple, straight-line
projection, given the high number of variables they need to consider. By changing the “analytic lens”
through which the future is viewed, analysts are also forced to reevaluate their assumptions about the
priority order of key factors driving the issue. By pairing the key drivers to create sets of mutually
exclusive scenarios, scenario techniques help analysts think about the situation from sometimes
counterintuitive perspectives, often generating several unexpected and dramatically different potential
future worlds.
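
The combinatorial core of this pairing step can be shown in a short Python sketch; the drivers and their endpoints below are invented for illustration, and in practice a workshop debates which pairings and cells are meaningful rather than enumerating them mechanically:

```python
from itertools import combinations, product

# Hypothetical key drivers, each expressed as a spectrum with two endpoints.
drivers = {
    "Economic growth":   ("strong", "stagnant"),
    "Popular support":   ("broad", "eroding"),
    "External pressure": ("increasing", "easing"),
}

# Every pair of drivers defines one 2-x-2 matrix; each cell of the matrix is a
# candidate scenario defined by one endpoint of each driver.
for (name_a, ends_a), (name_b, ends_b) in combinations(drivers.items(), 2):
    print(f"\nMatrix: {name_a} x {name_b}")
    for end_a, end_b in product(ends_a, ends_b):
        print(f"  Scenario: {name_a} = {end_a}, {name_b} = {end_b}")
```

Even three drivers yield three matrices and twelve candidate cells, which is why practitioners prune the output to the handful of scenarios that merit full narratives.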

By engaging in a multifaceted and systematic examination of an issue, analysts create a more comprehensive set of alternative futures. This enables them to maintain records about each alternative
and track the potential for change, thus gaining greater confidence in their overall assessment.

The amount of time and effort required depends upon the specific technique used. A single analyst can
use Reversing Assumptions, Simple Scenarios, What If? Analysis, and Cone of Plausibility without any
technical or methodological support, although a group effort typically yields more diverse and creative
results. Various forms of Brainstorming, Key Drivers Generation™, and Key Uncertainties Finder™
require a group and a facilitator but can be done in an hour or two. The time required for Foresight
Quadrant Crunching™, Alternative Futures Analysis, Multiple Scenarios Generation, Morphological
Analysis, and Counterfactual Reasoning varies, but these techniques usually require a team of experts
to spend several days working together on the project. Analysis by Contrasting Narratives and often
Multiple Scenarios Generation involve engaging decision makers directly in the process. How the
technique is applied will also vary depending on the topic and the target client. For this reason, we
strongly recommend engaging a facilitator who is knowledgeable about Foresight Techniques to save
time and ensure a high-quality product.

Criteria should be established for choosing which scenarios are the most important to bring to the
attention of the decision maker or ultimate client. The list should be tailored to the client’s needs and
should fully answer the focal question asked at the beginning of the exercise. Five criteria often used in
selecting scenarios follow:

Downside Risk. This criterion addresses the question most often asked: “How bad can it get?” The
response should be a credible scenario that has a reasonable chance of occurring and should
require the development of a contingency plan for avoiding, mitigating, or recovering if the selected
scenario comes to pass. A “nightmare scenario,” also described as a High Impact/Low Probability
scenario, is usually best portrayed in a tone box or text box in the paper and not as its own stand-
alone scenario.

Mainline Assessment. Most clients will usually ask, “What is most likely to happen?” The honest
answer is usually, “We do not really know; it depends on how the various key drivers play out in
influencing future developments.” Although the purpose of Foresight analysis is to show that
several scenarios are possible, scenarios can usually be ranked in order from most to least likely to
occur based on current trends and reasonable key assumptions. Providing a mainline scenario also
establishes a convenient baseline for conducting further analysis and deciding what actions are
critical.
New Opportunity. Every Foresight workshop should include pathways that show decision makers
how they can fashion a future or futures more to their liking. This can be accomplished by including
one or more scenarios that showcase new opportunities or by including a section describing how a
bad outcome can be mitigated or remedied. Every adversity comes with an opportunity, and the
Foresight processes discussed in this chapter can be just as effective in developing positive,
opportunities-based scenarios as in describing all the bad things that can happen.

Emerging Trend. Often when conducting a Foresight workshop, new factors will appear or new
trends will be identified that analysts or decision makers had previously ignored. These new trends,
relationships, or dynamics often are integral to or reflected in several of the scenarios and can be
collapsed into a single scenario that best illustrates the significance of—and opportunities
presented by—the new trend.

Recognizable Anchor. If the client does not believe any of the scenarios of the Foresight analysis
exercise are credible or consistent with the client’s experience or convictions, then the recipient will
likely disregard the entire process and ignore key findings or discoveries made. On the other hand,
recipients who find a scenario that resonates with their current worldview will anchor their
understanding of the exercise on that scenario and more easily understand the alternatives.

One of the greatest challenges in applying Foresight Techniques is to generate attention-deserving scenarios that are comprehensive, mutually exclusive, and optimally support the needs of the primary
client. A frequent question is, “How many scenarios should we create?” Past practice suggests that the
optimal number is four, because any other number has built-in drawbacks:

One scenario is a prediction that invariably will not come true.

Two scenarios suggest an artificial binary process.

Three scenarios introduce the Goldilocks effect, often implying that the “middle” scenario is an
appropriate compromise.

Five or more scenarios are usually too many for a decision maker to process cognitively.

Scenarios workshops will most likely fail if the group conducting the exercise is not highly diverse, with
representatives from a variety of disciplines, organizations, and even cultures to avoid the trap of
Groupthink. Foresight analysis can be a powerful instrument for overcoming well-known cognitive biases
and heuristics such as the Anchoring Effect, Groupthink, and Premature Closure. Scenarios techniques
also mitigate the intuitive traps of thinking of only one likely (and predictable) outcome instead of
acknowledging that several outcomes are possible (Assuming a Single Solution), focusing on a narrow
range of alternatives representing marginal—and not radical—change (Expecting Marginal Change),
and failing to factor something into the analysis because the analyst lacks an appropriate category or
“bin” for that item of information (Lacking Sufficient Bins).

Users of Foresight Techniques often find that members of the “futures group or study group” have difficulty thinking outside their comfort zone, resist instructions to look far into the future, or are reluctant to explore or suggest concepts that do not fall within their area of expertise. Techniques that have worked
well to pry participants out of such analytic pitfalls are to (1) define a time period for the estimate (such
as five or ten years) that one cannot easily extrapolate from current events, or (2) post a list of concepts
or categories (such as the STEMPLES+ list of Social, Technical, Economic, Military, Political, Legal,
Environmental, and Security, plus other factors such as Demographic, Religious, or Psychological) to
stimulate thinking about an issue from different perspectives. Analysts involved in the process should
have a thorough understanding of the subject matter and possess the conceptual skills necessary to
identify the key drivers and assumptions that are likely to remain valid throughout the period of the
assessment.
Identification and monitoring of indicators or signposts can provide early warning of the direction in
which the future is heading, but these early signs are not obvious.3 Indicators take on meaning only in
the context of a specific scenario with which they are associated. The prior identification of a scenario
and related indicators can create an awareness that prepares the mind to recognize early signs of
significant change. Change sometimes happens so gradually that analysts do not notice it, or they
rationalize it as not being of fundamental importance until it is too obvious to ignore. After analysts take
a position on an issue, they typically are slow to change their minds in response to new evidence. By
going on the record in advance to specify what actions or events would be significant and might change
their minds, analysts can avert this type of rationalization.

The time required to use Foresight Techniques (such as Multiple Scenarios Generation, Foresight
Quadrant Crunching™, or Counterfactual Reasoning) ranges from a few days to several months,
depending on the complexity of the problem, scheduling logistics, and the number of participants
involved in the process.4 Most of the techniques involve several stages of analysis and employ different
techniques to (1) identify key drivers, (2) generate permutations to reframe how the topic could evolve,
(3) convert them into scenarios, (4) establish indicators for assessing the potential for each proposed
alternative trajectory, and (5) use Decision Support Techniques to help policymakers and decision
makers shape an action agenda.

This chapter addresses the first three stages. Decision Support Techniques such as the Opportunities
Incubator™ and the Impact Matrix that can be used to implement stage 5 are described in the next
chapter. In a robust Foresight exercise, several weeks may pass between each of these stages to
provide time to effectively capture, reflect on, and refine the results of each session.
OVERVIEW OF TECHNIQUES
Key Drivers Generation™ should be used at the start of a
Foresight exercise to assist in the creation of key drivers. These key
drivers should be mutually exclusive, fundamental to the issue or
problem under study, and usually not obvious to the uninformed.

Key Uncertainties Finder™ should also be used at the start of a Foresight analysis exercise to assist in the creation of a list of key drivers. When possible, the results of the Key Uncertainties Finder™ process should be melded with the drivers generated by Key Drivers Generation™ to create a more robust and cross-checked list of key drivers.

Reversing Assumptions is a simple technique. The process is to assume that a key assumption is no longer valid and then explore the implications of this change by generating a new, alternative scenario.

Simple Scenarios is a quick and easy way for an individual analyst or a small group of analysts to generate multiple scenarios or trajectories. It starts with the current analytic line and then explores other alternatives, often employing the Cluster Brainstorming technique.

Cone of Plausibility works well with a small group of experts who define a set of assumptions and key drivers, establish a baseline scenario, and then modify the assumptions and drivers to create plausible alternative scenarios and wild cards.

Alternative Futures Analysis is a systematic and imaginative procedure that engages a group of experts in using a 2-×-2 matrix defined by two key drivers to generate a set of mutually exclusive scenarios. The technique often includes academics and decision makers and requires the support of a trained facilitator.

Multiple Scenarios Generation can handle a much larger number of scenarios than Alternative Futures Analysis. It also requires a facilitator, but the use of this technique can greatly reduce the chance that events could play out in a way that was not foreseen as a possibility.

Morphological Analysis is useful for dealing with complex, nonquantifiable problems for which little data are available and the chances for surprise are significant. It is a generic method for systematically identifying and considering all possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. Users need training and practice in this method, and a facilitator experienced in Morphological Analysis may be necessary.

Counterfactual Reasoning considers what might happen if an alternative possibility were to occur rather than attempting to determine if the lead scenario itself is probable. It is designed to answer the question “How could things have been different in the past and what does that tell us about what to expect in the future?” Use of an experienced facilitator is also highly recommended.

Analysis by Contrasting Narratives is a method for analyzing complex problems by identifying a set of narratives associated with entities involved in the problem.5 This includes the strategic narrative of the primary client of the analysis, the adversary, and a third party. The process involves having analysts and working-level decision makers collaborate to further their understanding of a problem.

Several techniques that can be used to generate scenarios also perform other functions. They are listed below and are described in other chapters:

Simple Brainstorming is a simple and well-established mechanism described in chapter 6 to stimulate creative thinking about alternative ways the future might unfold. The brainstorming session should be a structured process that follows specific rules.6 A downside risk for using brainstorming to generate scenarios is that there is no guarantee that the scenarios generated are mutually exclusive. The tendency is to draw heavily from past experiences and similar situations, thus falling victim to the Availability Heuristic.

Cluster Brainstorming is a silent brainstorming technique described in chapter 6 that can be used to generate scenarios. Participants generate ideas using self-stick notes that are arrayed in clusters, with each cluster suggesting a different scenario.

What If? Analysis (also referred to as Backwards Thinking) is scenarios analysis in reverse. Participants posit an outcome and then develop scenarios to explain what had to occur for that outcome to have happened. It has also been categorized as a Reframing Technique and is described in chapter 8.

Foresight Quadrant Crunching™ was developed in 2013 by Randolph Pherson as a variation on Multiple Scenarios Generation and the Key Assumptions Check. It adopts a different approach to generating scenarios by Reversing Assumptions, and it is described along with its companion technique, Classic Quadrant Crunching™, in chapter 8.

Indicators are used in Foresight analysis to provide early warning of some future event. They are often paired with scenarios to identify which of several possible scenarios is developing. They also are useful in measuring change toward an undesirable condition, such as political instability, or a desirable condition, such as economic reform. Analysts can use a variety of structured techniques to generate indicators, which should be validated using the five characteristics of a good indicator. Indicators Evaluation is a process that helps analysts assess the diagnostic power of an indicator.
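
The logic behind Indicators Evaluation (an indicator that would look the same under every scenario tells the decision maker nothing) can be sketched in a few lines of Python; the scenarios, indicators, and ratings here are hypothetical:

```python
# Hypothetical ratings: "+" means the indicator would likely be observed if that
# scenario were emerging, "-" means it would not.
indicators = {
    "New joint patrol agreements signed": {"Cooperation": "+", "Competition": "-", "Conflict": "-"},
    "Defense budgets rise modestly":      {"Cooperation": "+", "Competition": "+", "Conflict": "+"},
    "Forces deployed to disputed areas":  {"Cooperation": "-", "Competition": "+", "Conflict": "+"},
}

def is_diagnostic(ratings: dict) -> bool:
    """An indicator has diagnostic value only if the scenarios would not all look alike."""
    return len(set(ratings.values())) > 1

for name, ratings in indicators.items():
    label = "diagnostic" if is_diagnostic(ratings) else "non-diagnostic (drop or rework)"
    print(f"{name}: {label}")
```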
9.1 KEY DRIVERS GENERATION™
Key Drivers Generation™ uses the Cluster Brainstorming technique
to generate a list of candidate key drivers needed to conduct a
Foresight exercise.
When to Use It
Key Drivers Generation™ should be used at the start of a Foresight
exercise to assist in the creation of key drivers. A key driver is
defined as a basic force or factor, such as economic growth, popular
support, conflict versus cooperation, or globalization, that affects
behavior, performance, or strategy now or in the future. Items on the
list of key drivers should be Mutually Exclusive and Comprehensively
Exhaustive (MECE). A robust set of key drivers should be
fundamental to the issue or problem; the list often is not obvious to
the uninformed.
Value Added
Key Drivers Generation™ is one of two techniques that have proved
helpful in developing rigorous lists of key drivers. The other
technique is the Key Uncertainties Finder™, which adapts elements
of the Key Assumptions Check. Both techniques are particularly
effective in countering the impact of the Anchoring Effect, Availability
Heuristic, and Hindsight Bias. They also help mitigate the influence
of intuitive traps such as Assuming Inevitability, overrating the role of
internal determinants of behavior and underestimating the
importance of situational factors (Overrating Behavioral Factors),
and failing to accurately assess the likelihood of an event when
faced with statistical facts and ignoring prior probabilities or base
rates (Ignoring Base Rate Probabilities).
The Method
Key Drivers Generation™ follows specific rules and procedures to
stimulate new ideas and concepts, emphasizing the use of silence
and “kinetic brainstorming” with self-stick notes.

Stage I: Cluster Brainstorming Exercise (see more detailed


instructions in chapter 6)

Gather a small group of individuals who are working the issue


along with a few “outsiders” who bring an independent
perspective.

Pass out self-stick notes and marker pens.

Formulate the question and write it on a whiteboard or easel. A


Key Drivers Generation™ focal question usually begins with
“What are all the (things/forces and factors/circumstances) that
would help explain . . .?”

Ask the participants to write down their responses on self-stick
notes. The facilitator collects them and reads the responses out
loud. Only the facilitator speaks during this initial information-
gathering stage of the exercise.

The facilitator sticks the notes on the wall or whiteboard. As the
facilitator calls out the responses, participants are urged to build
on one another’s ideas.

After several pauses in idea generation, facilitators ask three to
five participants to arrange the self-stick notes into affinity
groups (basically grouping the ideas by like concept). Group
members do not talk while doing this.
If the topic is sufficiently complex, ask a second small group to
rearrange the notes into a more coherent pattern. They cannot
speak.

This group—or a third group—then picks a word that best
describes each grouping. This group can discuss how the self-
stick notes were arranged and what would constitute the best
label to assign to each affinity group.

Participants then discuss which affinity groups are the best
candidates for conversion to key drivers. Create a list of four to
six candidate drivers.

Stage II: Find the Key Drivers

Identify which affinity groups represent or suggest a critical
variable—something that is certain to influence how the situation
under consideration would evolve over time.

Make a list of four to six critical variables that would best serve
as key drivers to use in conducting a Foresight analysis.

If the group has also conducted a Key Uncertainties Finder™
exercise, examine both sets of key drivers and meld them into a
single list of four or five drivers.

Determine if the final list of key drivers meets the following
requirements:

Mutually exclusive—items do not overlap or are not variants
of the same issue.

Cover most, if not all, of the STEMPLES+ criteria (Social,
Technical, Economic, Military, Political, Legal,
Environmental, and Security, plus other factors such as
Demographic, Religious, or Psychological).

Revise the list as appropriate and add a new driver if a major
dimension of the problem is not covered by the list of drivers.
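
The STEMPLES+ coverage check in the final steps above lends itself to a quick mechanical aid. The following Python sketch is illustrative only and is not part of the technique; the driver names and the dimensions each driver is tagged with are hypothetical inputs an analytic team would supply. If a major dimension shows up as uncovered, that is the cue to revise the list or add a driver.

STEMPLES_PLUS = [
    "Social", "Technical", "Economic", "Military", "Political",
    "Legal", "Environmental", "Security", "Other",
]

# Hypothetical candidate drivers, each tagged with the dimensions the
# team judges it to cover.
candidate_drivers = {
    "Economic growth": ["Economic", "Social"],
    "Popular support for the government": ["Political", "Social"],
    "External military assistance": ["Military", "Political"],
    "Rule of law and judicial capacity": ["Legal", "Security"],
}

def coverage_gaps(drivers, dimensions=STEMPLES_PLUS):
    """Return the STEMPLES+ dimensions not covered by any candidate driver."""
    covered = {dim for dims in drivers.values() for dim in dims}
    return [dim for dim in dimensions if dim not in covered]

print("Uncovered dimensions:", coverage_gaps(candidate_drivers))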
9.2 KEY UNCERTAINTIES FINDER™
The Key Uncertainties Finder™ transforms the results of a Key
Assumptions Check exercise into a list of candidate key drivers
needed to conduct a Foresight exercise.
When to Use It
The Key Uncertainties Finder™ should be used at the start of a
Foresight exercise to assist in the creation of a list of key drivers. In
the business world, a key driver is defined as a basic force or factor
affecting performance. The definition used in intelligence analysis is
broader: basic forces and factors (economic growth, popular support,
conflict versus cooperation, globalization) that affect behavior,
performance, or strategy now or in the future. Key drivers are not
nations, regions, or labels, such as “Russia,” “cyber,” or “increased
military spending.” When compiling a list of key drivers, the list
should reflect the following characteristics:

Mutually Exclusive. Each key driver does not share the same
basic dynamic as another driver.

Fundamental. Each key driver affects performance or behavior
or strategy.

Nonobvious. At least one listed key driver illustrates a dynamic
that is not immediately obvious.
Value Added
The Key Uncertainties Finder™ adapts elements of the Key
Assumptions Check to generate a list of key drivers needed in a
Foresight analysis. It is one of two techniques that have proved
helpful in developing rigorous lists of key drivers for generating
alternative scenarios. The other technique is Key Drivers
Generation™, which builds on Cluster Brainstorming.

When conducting a Key Assumptions Check, some of the
unsupported assumptions often turn out to be key uncertainties—
things we initially thought to be true but could not be confirmed when critically examined. In most cases, these key uncertainties can also be described as critical variables in determining how a situation might evolve or what trajectory it takes over time.
The Method
Stage I: Conduct a Key Assumptions Check Exercise (see more
detailed instructions in chapter 7)

Gather a small group of individuals who are working the issue
along with a small number of “outsiders” who can come to the
table with an independent perspective.

Review the definition of a key assumption: a supposition that something is true in order for another condition or development to be
true. It can also be a fact or statement that analysts tend to take
for granted.

On a whiteboard or an easel, list all the key assumptions that
participants can generate about the topic.

After developing a complete list, go back and critically examine
each assumption.

Stage II: Find the Key Uncertainties

Identify the unsupported assumptions on the list; ask if they can
be characterized as key uncertainties.

Review the key uncertainties and ask if they could also be
described as critical variables. A critical variable should have
specific end points that bound the phenomenon along a
continuous spectrum.

Stage III: Convert the Key Uncertainties into Key Drivers


Make a list of the four to six key uncertainties that would best
serve as key drivers used in a Foresight exercise.

If the group has also conducted a Key Drivers Generation™
exercise, compare both sets of key drivers and meld them into a
single list of four to five drivers.

Determine if the final list of key drivers meets the following
requirements:

Mutually exclusive

Cover most, if not all, of the STEMPLES+ criteria (Social,
Technical, Economic, Military, Political, Legal,
Environmental, and Security, plus other factors such as
Demographic, Religious, or Psychological)

Revise the list as appropriate and add a new driver if a major
dimension of the problem is not covered by the list of drivers.
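
As a rough illustration of Stages II and III, the Python sketch below filters unsupported assumptions and keeps those that can be bounded by end points as candidate key drivers. It is a bookkeeping aid with entirely hypothetical assumptions, not part of the published method.

# Hypothetical worked example: each assumption is marked supported or not,
# and unsupported ones that can be bounded by end points become candidates.
assumptions = [
    {"text": "The regime retains the loyalty of the army",
     "supported": False,
     "end_points": ("full loyalty", "open defection")},
    {"text": "Oil prices remain above $60 per barrel",
     "supported": True,
     "end_points": None},
    {"text": "Neighboring states stay out of the conflict",
     "supported": False,
     "end_points": ("non-interference", "direct intervention")},
]

key_uncertainties = [a for a in assumptions if not a["supported"]]
candidate_drivers = [a for a in key_uncertainties if a["end_points"]]

for driver in candidate_drivers[:6]:  # the final list should hold four to six drivers
    print(f'{driver["text"]}: {driver["end_points"][0]} <-> {driver["end_points"][1]}')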
9.3 REVERSING ASSUMPTIONS
Reversing Assumptions challenges established mindsets by
reframing key elements of the problem. The technique involves
identifying a key assumption, assuming the reverse is true, and
assessing how the overall assessment would change if the key
assumption were not true.
When to Use It
Reversing Assumptions is a simple but highly effective technique
analysts and decision makers can use to explore the significance of
key assumptions and generate alternative scenarios. Individuals or a
group can use the technique without a facilitator. The technique
usually is employed at the start of a project but can prove just as
useful for testing working hypotheses and analytic judgments
throughout the production process.
The Method
The method is straightforward:

Make a list of key assumptions.

Identify one or more solid assumptions that underpin the
analysis.

Assume that—for whatever reason—the assumption is
incorrect, and the contrary assumption has turned out to be true.

Ask how reversing that key assumption would change the
expected outcome.

If the impact would be significant, ask if a credible case—no
matter how unlikely—could be made that the selected key
assumption could turn out to be untrue. Articulate the
circumstances under which this could happen.

Convert the case into an alternative scenario.

The process can be repeated for several key assumptions,
generating a set of plausible alternative scenarios.
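
A minimal sketch of how the loop above could be recorded follows; the assumption and reversal pairs are hypothetical, and the printed prompts simply restate the questions in the method.

# Hypothetical assumption/reversal pairs; the analytic work lies in
# answering the questions printed for each reversal.
assumption_reversals = {
    "The ceasefire will hold through the election":
        "The ceasefire collapses before the election",
    "The central bank can defend the currency peg":
        "The central bank is forced to abandon the peg",
}

for assumption, reversal in assumption_reversals.items():
    print(f"Assumption: {assumption}")
    print(f"  Reverse it: {reversal}")
    print("  How would the expected outcome change? Under what circumstances,")
    print("  however unlikely, could this happen? If significant, convert the")
    print("  case into an alternative scenario.\n")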
9.4 SIMPLE SCENARIOS
The Simple Scenarios technique is a brainstorming process
designed to generate multiple scenarios by manipulating the
strengths of a set of key drivers.
When to Use It
Simple Scenarios is a relatively straightforward technique. An
analyst working alone can use this technique as well as a group or a
team. There is no need for a coach or a facilitator to conduct the
process or exercise but having one available enriches the outcome.
Because the brainstorming process is relatively unstructured, there is no guarantee that all the scenarios generated will be mutually exclusive, so the results may not be optimal. Participants also may fall into the trap
of drawing too heavily from past experiences and similar situations.
The Method
Here are the steps for using this technique:

Clearly define the focal issue and the specific goals of the futures exercise.

Make a list of forces, factors, and events that are likely to influence the future using Simple
Brainstorming or Cluster Brainstorming techniques.

Organize the forces, factors, and events related to one another into five to ten affinity groups that
are the likely driving forces in determining how the focal issue will evolve.

Label each of these drivers and write a brief description of each. For example, analysts used the
technique to forecast the future of the fictional country of Caldonia, which was facing a chronic
insurgency and a growing threat from narcotics traffickers. Six drivers were identified using Cluster
Brainstorming.

Generate a matrix, as shown in Figure 9.4, with a list of drivers down the left side. The columns of
the matrix are used to describe scenarios. Each scenario is assigned a value for each driver. The
values are strong or positive (+), weak or negative (–), and blank if neutral or no change. In Figure
9.4, participants identified the following six key drivers and then rated them.

Government effectiveness. To what extent does the government exert control over all
populated regions of the country and effectively deliver services?

Economy. Does the economy sustain positive growth?

Civil society. Can nongovernmental and local institutions provide appropriate services and
security to the population?

Insurgency. Does the insurgency pose a viable threat to the government? Is it able to extend
its dominion over greater portions of the country?

Drug trade. Is there a robust drug-trafficking economy?

Foreign influence. Do foreign governments, international financial organizations, or
nongovernmental organizations provide military or economic assistance to the government?

Figure 9.4 Simple Scenarios


Source: Pherson Associates, LLC, 2019.

Generate at least four different scenarios—a best case, worst case, mainline, and at least one other
by assigning different values (+, 0, –) to each driver. In this case, four scenarios were created by
varying the impact of six drivers: “Fragmentation” represents the downside scenario, “Descent into
Order” the mainline assessment, “An Imperfect Peace” a new opportunity, and “Pockets of Civility”
focuses on the strength of civil society.

Reconsider the list of drivers. Is there a better way to conceptualize and describe the drivers? Are
there important forces that have not been included? Look across the matrix to see the extent to
which each driver discriminates among the scenarios. If a driver has the same value across all
scenarios, it is not discriminating and should be deleted.

Ask if the set of selected scenarios is complete. To stimulate thinking about other possible
scenarios, consider the key assumptions made in deciding on the most likely scenario. What if
some of these assumptions turn out to be invalid? If they are invalid, how might that affect the
outcome, and are such outcomes included in the available set of scenarios?

For each scenario, write a one-page story to describe what that future looks like and/or how it might
come about. The story should illustrate the interplay of the drivers.

For each scenario, describe the implications for the decision maker.

Generate and validate a list of indicators, or “observables,” for each scenario that would aid in
discovering that events are starting to play out in a way envisioned by that scenario.

Monitor the list of indicators on a regular basis.

Report periodically on which scenario appears to be emerging and why.
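
The driver-by-scenario matrix described in the steps above (and shown in Figure 9.4) can be mocked up in a few lines. The Python sketch below is illustrative only; the ratings are invented and do not reproduce the Caldonia matrix. A driver whose row shows the same value in every column is not discriminating and, per the steps above, should be dropped.

drivers = ["Government effectiveness", "Economy", "Civil society",
           "Insurgency", "Drug trade", "Foreign influence"]

# Invented ratings: "+" strong/positive, "-" weak/negative, " " neutral.
scenarios = {
    "Worst case": ["-", "-", "-", "+", "+", " "],
    "Mainline":   [" ", "+", " ", " ", "+", "+"],
    "Best case":  ["+", "+", "+", "-", "-", "+"],
    "Other":      ["-", " ", "+", " ", " ", "-"],
}

print(f"{'Driver':25}" + "".join(f"{name:>12}" for name in scenarios))
for i, driver in enumerate(drivers):
    print(f"{driver:25}" + "".join(f"{vals[i]:>12}" for vals in scenarios.values()))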


Origins of This Technique
Pherson Associates, LLC, developed the model of Simple Scenarios
in the late 1990s to support scenarios work done for the U.S. State
Department and the U.S. Intelligence Community.
9.5 CONE OF PLAUSIBILITY
Cone of Plausibility is a structured process using key drivers and
assumptions to generate a range of plausible alternative scenarios
that help analysts and decision makers imagine various futures and
their effects. The value of Cone of Plausibility lies in showcasing the
drivers that are shaping current and future events.
When to Use It
The Cone of Plausibility can be used to explore how well or how
poorly events might unfold, thereby bounding the range of
possibilities for the decision maker.7 It helps the decision maker
focus on scenarios that are plausible—and fall inside the cone—and
implausible—and fall outside the cone. Dramatic, but unlikely, or
“possible” scenarios that fall outside the cone can be displayed
separately in text boxes alongside the narrative. The technique also
is highly effective for strategic warning.
The Method
The steps in the technique are as follows (see Figure 9.5):

Convene a small group of experts with some diversity of background. Define the issue at hand and
set the time frame of the assessment. A common question to ask is, “What will X (e.g., a country,
regime, issue) look like in Y (e.g., two months, five years, twenty years)?”

Identify the drivers that are key factors or forces and thus most useful in defining the issue and
shaping the current environment. Analysts in various fields have created mnemonics to guide their
analysis of key drivers. One of the most common is PEST, which signifies Political, Economic,
Social, and Technological variables. Other analysts have combined PEST with legal, military,
environmental, psychological, or demographic factors to form abbreviations such as STEEP,
STEEPLE, or PESTLE.8 We recommend using STEMPLES+ (Social, Technical, Economic, Military,
Political, Legal, Environmental, and Security, plus other factors such as Demographic, Religious, or
Psychological).

Write the drivers as neutral statements that should be valid throughout the period of the
assessment. For example, write “the economy,” not “declining economic growth.” Be sure you are
listing true drivers and not just describing important players or factors relevant to the situation. The
technique works best when four to six drivers are generated.

Make assumptions about how the drivers are most likely to play out over the time frame of the
assessment. Be as specific as possible; for example, say, “The economy will grow 2–4 percent
annually over the next five years,” not simply that “The economy will improve.” Generate only one
assumption per driver.

Figure 9.5 Cone of Plausibility

Generate a baseline scenario from the list of key drivers and key assumptions. This is often a
projection from the current situation forward, adjusted by the assumptions you are making about
future behavior. The scenario assumes that the drivers and their descriptions will remain valid
throughout the period. Write the scenario as a future that has come to pass and describe how it
came about. Construct one to three alternative scenarios by changing an assumption or several of
the assumptions that you included in your initial list. Often it is best to start by looking at those
assumptions that appear least likely to remain true. Consider the impact that change is likely to
have on the baseline scenario and describe this new end point and how it came about. Also
consider what impact changing one assumption would have on the other assumptions on the list.

We recommend making at least one of these alternative scenarios an opportunities scenario,
illustrating how a positive outcome that is significantly better than the current situation could
plausibly be achieved. Often it is also desirable to develop a scenario that captures the full extent of
the downside risk.

Generate a possible wild-card scenario by radically changing the assumption that you judge as the
least likely to change. This should produce a High Impact/Low Probability scenario (see chapter 8)
that may not have been considered otherwise.
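
As a minimal sketch of the scenario-generation steps above, the Python below holds one assumption per driver and derives an alternative and a wild card by changing assumptions; the drivers and assumptions are hypothetical and purely illustrative.

# One assumption per driver (hypothetical), as the method requires.
baseline_assumptions = {
    "Economy": "grows 2-4 percent annually over the period",
    "Popular support": "remains broadly stable",
    "External assistance": "continues at current levels",
}

def variant(baseline, driver, new_assumption):
    """Copy the baseline and change the assumption for a single driver."""
    scenario = dict(baseline)
    scenario[driver] = new_assumption
    return scenario

baseline = baseline_assumptions
alternative = variant(baseline, "Economy", "contracts sharply")
wild_card = variant(baseline, "External assistance",
                    "is withdrawn entirely within a year")

for name, scenario in [("Baseline", baseline), ("Alternative", alternative),
                       ("Wild card", wild_card)]:
    print(name, scenario)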
Origins of This Technique
Cone of Plausibility is a well-established technique used by
intelligence analysts in several countries. It is a favorite of analysts
working in Canada and the United Kingdom. For additional insight
and visuals on the Cone of Plausibility, visit
https://prescient2050.com/the-cone-of-plausibility-can-assist-your-
strategic-planning-process/.
9.6 ALTERNATIVE FUTURES ANALYSIS
Alternative Futures Analysis is a systematic method for identifying
alternative trajectories by developing plausible but mind-stretching
“stories” based on critical uncertainties to inform and illuminate
decisions, plans, and actions today.
When to Use It
Alternative Futures Analysis and Multiple Scenarios Generation (the
next technique to be described) differ from the previously described
techniques in that they are usually larger projects that rely on a
group of experts, often including decision makers, academics, and
other outside experts. They use a more systematic process and
usually require the assistance of a knowledgeable facilitator.

Alternative Futures Analysis is limited in that participants define a
future world based on only two driving forces. Each driving force is a
spectrum with two extremes, and these drivers combine to make four
possible scenarios. Multiple Scenarios Generation has no such
limitation other than the practical limitations of time and complexity.

A team of experts can spend hours or days using the technique to
organize, brainstorm, and develop multiple futures. A large, multi-day
effort often demands the special skills of trained facilitators
knowledgeable in the mechanics of Alternative Futures Analysis. The
technique has proven highly effective in helping decision makers and
policymakers contemplate multiple futures, challenge their
assumptions, and anticipate surprise developments by identifying
“unknown unknowns.” “Unknown unknowns” are best defined as
those factors, forces, or players that one did not realize were
important or influential before commencing the exercise.
The Method
The steps in the Alternative Futures Analysis process are as follows:

Clearly define the focal issue and the specific goals of the futures exercise.

Brainstorm to identify the key forces, factors, or events most likely to influence how the issue will
develop over a specified time period.

If possible, group these various forces, factors, or events to form two critical drivers that are
expected to determine the future outcome. In the example on the future of Cuba (Figure 9.6), the
two key drivers are “Effectiveness of Government” and “Strength of Civil Society.” If there are more
than two critical drivers, do not use this technique—use the Multiple Scenarios Generation
technique, which can handle a larger number of drivers.

As shown in the Cuba example, define the two ends of the spectrum for each driver.

Draw a 2-×-2 matrix. Label the two ends of the spectrum for each driver.

Note that the square is now divided into four quadrants. Each quadrant represents a scenario
generated by a combination of the two drivers. Now give a name to each scenario and write it in the
relevant quadrant.

Generate a narrative story of how each hypothetical scenario might come to pass. Include a
hypothetical chronology of key dates and events for each scenario.

Describe the implications of each scenario, should it develop.

Figure 9.6 Alternative Futures Analysis: Cuba


Source: Pherson Associates, LLC, 2019.

Generate and validate a list of indicators, or “observables,” for each scenario that would help
determine whether events are starting to play out in a way envisioned by that scenario.

Monitor the list of indicators on a regular basis.


Report periodically on which scenario appears to be emerging and why.
Origins of the Technique
A team at the Royal Dutch Shell Company developed a robust
alternative futures methodology in the 1980s. In The Art of the Long
View, Peter Schwartz provides a detailed description of the process
and the power of the technique. Use of the technique usually
requires the assistance of a team of knowledgeable facilitators.
9.7 MULTIPLE SCENARIOS GENERATION
Multiple Scenarios Generation is a systematic method for
brainstorming multiple explanations of how a situation may develop
when considerable uncertainty and several underlying key drivers
are present.
When to Use It
Multiple Scenarios Generation is a useful technique for exploring the
many ways a situation might evolve, anticipating surprise
developments, and generating field requirements when dealing with
little concrete information and/or a highly ambiguous or uncertain
threat. In counterterrorism, analysts can use it to identify new
vulnerabilities, and to assess, anticipate, and prioritize possible
attacks and attack methods. It also can be an investigative tool,
providing an ideal framework for developing indicators and
formulating requirements for field collectors and researchers.
Value Added
The Multiple Scenarios Generation process helps analysts and
decision makers expand their imagination and avoid surprise by
generating large numbers of potential scenarios. This sensitizes
them to possible new outcomes and makes them more likely to
consider outlying data that suggest events are unfolding in a way not
previously imagined. The challenge for the analyst is to identify just
three or four major themes that emerge from the process. Thus, the
true value of the technique is to provide a palette of ideas from which
analysts can develop attention-deserving themes.
The Method
Multiple Scenarios Generation applies the collective knowledge and imagination of a group of experts to
identify a set of key drivers (forces, factors, or events) that are likely to shape an issue and arrays them
in different paired combinations to generate robust sets of potential scenarios.

Multiple Scenarios Generation is like Alternative Futures Analysis (described above) except that with
this technique, you are not limited to two critical drivers that generate four scenarios. By using multiple
2-×-2 matrices pairing every possible combination of multiple drivers, you can create many possible
scenarios. Doing so helps ensure nothing has been overlooked. Once generated, the scenarios can be
screened quickly without detailed analysis of each one. After becoming aware of the variety of possible
scenarios, analysts are more likely to pay attention to outlying data that would suggest that events are
playing out in a way not previously imagined.

Training and an experienced team of facilitators are needed to use this technique. Here are the basic
steps:

Clearly define the focal issue and the specific goals of the futures exercise.

Brainstorm to identify the key forces, factors, or events most likely to influence how the issue will
develop over a specified time period—often five or ten years.

Define the two ends of the spectrum for each driver.

Pair the drivers in a series of 2-×-2 matrices.

Develop a story or two for each quadrant of each 2-×-2 matrix.

From all the scenarios generated, select those most deserving of attention because they illustrate
compelling and challenging futures not yet under consideration.

Develop and validate indicators for each scenario that could be tracked to determine which
scenario is starting to develop.

Report periodically on which scenario appears to be emerging and why.

The technique is illustrated by exploring the focal question, “What is the future of the insurgency in
Iraq?” (See Figure 9.7a.) Here are the steps:

Convene a group of experts (including some creative thinkers who can challenge the group’s
mental model) to brainstorm the forces and factors that are likely to determine the future of the
insurgency in Iraq.

Select those factors or drivers whose outcome is the hardest to predict or for which analysts cannot
confidently assess how the driver will influence future events. In the Iraq example, three drivers
meet these criteria:

The role of neighboring states (e.g., Syria, Iran)

The capability of Iraq’s security services (military and police)

The political environment in Iraq


Define the ends of the spectrum for each driver. For example, the neighboring state could be stable
and supportive at one end and unstable and disruptive at the other end of the spectrum.

Pair the drivers in a series of 2-×-2 matrices, as shown in Figure 9.7a.

Develop a story or a couple of stories describing how events might unfold for each quadrant of each
2-×-2 matrix. For example, in the 2-×-2 matrix defined by the role of neighboring states and the
capability of Iraq’s security forces, analysts would describe how the insurgency would function in
each quadrant on the basis of the criteria defined at the far end of each spectrum. In the upper-left
quadrant, the criteria would be stable and supportive neighboring states but ineffective internal
security capabilities (see Figure 9.7b). In this “world,” one might imagine a regional defense
umbrella that would help to secure the borders. Another possibility is that the neighboring states
would have the Shiites and Kurds under control, with Sunnis, who continue to harass the Shia-led
central government, as the only remaining insurgents.

Figure 9.7A Multiple Scenarios Generation: Future of the Iraq Insurgency

Figure 9.7B Future of the Iraq Insurgency: Using Spectrums to Define Potential
Outcomes

Review all the stories generated and select those most deserving of attention. For example, which scenario
presents the greatest challenges to Iraqi and U.S. decision makers?


raises concerns that have not been anticipated?

surfaces new dynamics that should be addressed?

suggests new collection needs?

Select a few scenarios that might be described as “wild cards” (High Impact/Low Probability
developments) or “nightmare scenarios” (see Figure 9.7c).

Consider what decision makers might do to prevent bad scenarios from occurring or enable good
scenarios to develop.

Generate and validate a list of key indicators to help monitor which scenario story best describes
how events are beginning to play out.

Report periodically on which scenario appears to be emerging and why.
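
The pairing step can be made concrete with a short sketch. The Python below takes the three drivers from the Iraq example above, each with the two ends of its spectrum, and prints every 2-×-2 matrix and its four quadrants; the spectrum labels are illustrative assumptions. Each quadrant would then receive one or two stories, as in Figure 9.7b.

from itertools import combinations, product

# The three drivers from the Iraq example; spectrum end labels are assumed.
drivers = {
    "Role of neighboring states": ("stable and supportive", "unstable and disruptive"),
    "Capability of security services": ("effective", "ineffective"),
    "Political environment": ("inclusive", "fractured"),
}

for (d1, ends1), (d2, ends2) in combinations(drivers.items(), 2):
    print(f"\nMatrix: {d1}  x  {d2}")
    for end1, end2 in product(ends1, ends2):
        print(f"  Quadrant: {d1} = {end1}; {d2} = {end2}")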


Origins of This Technique
Multiple Scenarios Generation is described in Randolph H. Pherson, Handbook of Analytic Tools and
Techniques, 5th ed. (Tysons, VA: Pherson Associates, LLC, 2019). For information on other approaches
to scenarios analysis, see Randolph H. Pherson, “Leveraging the Future with Foresight Analysis,” The
International Journal of Intelligence, Security, and Public Affairs (Fall 2018), and Andy Hines, “The
Current State of Scenario Development: An Overview of Techniques,” Foresight 9, no. 1 (March 2007).
The Multiple Scenarios Generation illustrations are drawn from a report prepared by Alan Schwartz
(PolicyFutures, LLC), “Scenarios for the Insurgency in Iraq,” Special Report 174 (Washington, DC:
United States Institute of Peace, October 2006).

Figure 9.7C Selecting Attention-Deserving and Nightmare Scenarios


9.8 MORPHOLOGICAL ANALYSIS
Morphological Analysis is a method for systematically structuring and
examining all the possible relationships in a multidimensional, highly
complex, usually nonquantifiable problem space. The basic idea is to
identify a set of variables and then examine all possible
combinations of these variables.

Morphological Analysis is a generic method used in a variety of
disciplines. For intelligence analysis, it helps prevent surprise by
generating many feasible outcomes for any complex situation. This
exercise reduces the chance that events will play out in a way that
the analyst has not previously imagined and considered. Specific
applications of this method are Quadrant Crunching™ (described in
chapter 8), Multiple Scenarios Generation (in this chapter), and
Quadrant Hypothesis Generation (chapter 7). This technique needs
training and practice for its successful application, and a facilitator
with experience in Morphological Analysis is highly desirable.
When to Use It
Morphological Analysis is most useful for dealing with complex,
nonquantifiable problems for which little information is available and
the chances for surprise are great. It can be used, for example, to
identify possible variations of a threat, possible ways a crisis might
occur between two countries, possible ways a set of driving forces
might interact, or the full range of potential outcomes in any
ambiguous situation. Morphological Analysis is generally used early
in an analytic project, as it aims to identify all the possibilities, not to
drill deeply into any specific possibility.

Morphological Analysis is typically used for looking ahead; it can also
be used in an investigative context to identify the full set of possible
explanations for some event.
Value Added
By generating a comprehensive list of possible outcomes, analysts
are in a better position to identify and select those outcomes that
seem most credible or that most deserve attention. This list helps
analysts and decision makers focus on what actions need to be
undertaken today to prepare for events that could occur in the future.
Decision makers can then take the actions necessary to prevent or
mitigate the effect of bad outcomes and help foster better outcomes.
The technique can also sensitize analysts to High Impact/Low
Probability developments, or “nightmare scenarios,” which could
have significant adverse implications for influencing policy or
allocation of resources.

The product of Morphological Analysis is often a set of potential
noteworthy scenarios, with indicators of each, plus the intelligence
collection requirements or research directions for each scenario.
Another benefit is that Morphological Analysis leaves a clear audit
trail about how the judgments were reached.
The Method
Morphological Analysis works by applying two common principles of
creativity techniques: decomposition and forced association. Start by
defining a set of key parameters or dimensions of the problem; then
break down each of those dimensions further into relevant forms or
states or values that the dimension can assume—as in the example
described later in this section. Two dimensions can form a matrix
and three dimensions a cube. In more complicated cases, multiple
linked matrices or cubes may be needed to break the problem down
into all its parts.

The principle of forced association then requires that every element
be paired with and considered in connection with every other
element in the morphological space. How that is done depends upon
the complexity of the case. In a simple case, each combination may
be viewed as a potential scenario or problem solution and examined
from the point of view of its possibility, practicability, effectiveness, or
other criteria. In complex cases, there may be thousands of possible
combinations; computer assistance is required to handle large
numbers of combinations. With or without computer assistance, it is
often possible to quickly eliminate a large proportion of the
combinations as not physically possible, impracticable, or
undeserving of attention. This narrowing-down process allows the
analyst to concentrate only on those combinations that are within the
realm of the possible and most worthy of attention.
Example
Decision makers ask analysts to assess how a terrorist attack on the
water supply might unfold. In the absence of direct information about
specific terrorist planning for such an attack, a group of analysts
uses Cluster Brainstorming or Mind Mapping to identify the following
key dimensions of the problem: attacker, type of attack, target, and
intended impact. For each dimension, the analysts identify as many
elements as possible. For example, the group could be an outsider,
an insider, or a visitor to a facility; the target could be drinking water, wastewater, or storm sewer runoff. The analysts then
array this data into a matrix, illustrated in Figure 9.8, and begin to
create as many permutations as possible using different
combinations of the matrix boxes. These permutations allow the
analysts to identify and consider multiple combinations for further
exploration. One possible scenario is an outsider who carries out
multiple attacks on a treatment plant to cause economic disruption.
Another possible scenario is an insider who carries out a single
attack on drinking water to terrorize the population.
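
The decomposition and forced-association steps map naturally onto a small script. The Python sketch below uses the four dimensions from the example above; the element lists and the screening rule are illustrative assumptions, since the real screening is an analytic judgment rather than a formula.

from itertools import product

dimensions = {
    "attacker": ["outsider", "insider", "visitor to a facility"],
    "type of attack": ["single attack", "multiple attacks"],
    "target": ["drinking water", "wastewater", "storm sewer runoff"],
    "intended impact": ["terrorize the population", "economic disruption"],
}

# Forced association: every combination of one element per dimension.
combos = list(product(*dimensions.values()))
print(len(combos), "raw combinations to screen")

def worth_attention(combo):
    """Placeholder screening rule; analysts would apply their own criteria."""
    attacker, attack_type, target, impact = combo
    return not (attacker == "visitor to a facility" and attack_type == "multiple attacks")

screened = [c for c in combos if worth_attention(c)]
for combo in screened[:5]:
    print(combo)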

Analysts interested in using a computerized version of Morphological
Analysis should consult information produced by the Swedish
Morphology Society (www.swemorph.com). Their website has
detailed guidance and examples of the use of Morphological
Analysis for futures research, disaster risk management, complex
socio-technical problems, policy research, and other problems
comparable to those faced by intelligence analysts.
Origins of This Technique
The current form of Morphological Analysis was developed by astronomer Fritz Zwicky and described in
his book Discovery, Invention, Research through the Morphological Approach (Toronto: Macmillan,
1969). Basic information about this method is available from two well-known websites that provide
information on creativity tools: http://creatingminds.org and www.mindtools.com. For more advanced
information, see General Morphological Analysis: A General Method for Non-Quantified Modeling
(1998); Wicked Problems: Structuring Social Messes with Morphological Analysis (2008); and Futures
Studies Using Morphological Analysis (2009), all downloadable from the Swedish Morphology Society’s
website: http://www.swemorph.com. For further instruction on how to use Morphological Analysis, go to
https://www.ideaconnection.com/thinking-methods/morphological-analysis-00026.html.

Figure 9.8 Morphological Analysis: Terrorist Attack Options


Source: Pherson Associates, LLC, 2019.
9.9 COUNTERFACTUAL REASONING
Counterfactual Reasoning is the process of evaluating conditional
claims about possible changes and their consequences.9 The
changes can be either alternative past possibilities that did not
happen (but could have) or alternative future possibilities that are not
expected to happen (but could). The technique considers what would
or might happen if such changes (counter to the facts of what
happened or counter to what is expected to happen) were to occur.
But, it does not attempt to determine the extent to which the
alternative possibility itself is probable.
When to Use It
Counterfactual Reasoning should be used when conducting a
strategic assessment or supporting a strategic decision-making
process. The purpose of the method is to help analysts recognize
that any strategic analysis is grounded in a series of underlying
claims about alternative possibilities, their consequences, and the
relationships among them. The technique should be used when
analysts need to answer basic questions such as

How could things have been different in the past and what does
this tell us about what to do today?

How could things be different (than they are expected to be) in
the future and what can be done to facilitate good outcomes and
mitigate the impact of bad outcomes?

The method also provides a robust and systematic framework for
using other structured techniques such as What If? Analysis, High
Impact/Low Probability Analysis, Red Hat Analysis, and many
Foresight Techniques.
Value Added
The primary purpose of Counterfactual Reasoning is to ground the
analytic foundation of a strategic assessment by considering
alternative possibilities, their consequences, and the relationships
among them. Counterfactual Reasoning is essential to analysis and
strategy because all strategic assessment and/or decision making
presupposes counterfactual claims. Every strategic question has
embedded assumptions about the consequences of possible
challenges the decision maker might face. These assumptions can
be conceived of as counterfactuals of the form “If X were to occur,
then Y would (or might) follow.”

Counterfactual Reasoning provides analysts with a rigorous process:

Develop a detailed account of the causes, context, and
consequences of a specific possible change or alternative.

Integrate creative thinking into the analytic process while
simultaneously guarding against speculation by employing a
series of precise techniques.

Open the analyst and decision maker up to a range of new
possibilities to overcome deterministic biases such as Hindsight
Bias and the practitioners’ intuitive trap of Assuming Inevitability.
The Method
Counterfactual Reasoning explores a specific possible change in three progressive stages that ask the
following:

Stage 1: “When and where might this change plausibly come about?”

Stage 2: “When and where might this change cause broader uncertainties (and unexpected
consequences)?”

Stage 3: “When and where might this have long-term impact?”

If one conceives of the possible change as a narrative or story, then the stages correspond to
developing the beginning, middle, and end of the story (see Figure 9.9). The first two stages can
sometimes be counterintuitive and, as a result, they are often overlooked by analysts.

Figure 9.9 Three Stages of Counterfactual Reasoning


Stage One: “Convergent Scenario
Development”— The Beginning and Causes
of the Change
There is rarely just one way that a possible change can happen.
Instead, typically many paths could lead to the change. Whenever
analysts consider an “if . . . then” statement, they should avoid
assuming that they already know how the possibility (i.e., the “if”)
would come to be. Instead, they should develop and assess multiple
possible scenarios that start differently, but all “converge” on that
possibility. The most plausible of these scenarios will be the analyst’s
chosen “back story” for the possibility: the most reasonable way that
it could happen. It is what happens prior to the “if” in a standard
statement: “If X were to occur, then Y would (or might) follow.” It
constitutes the first part of a counterfactual conditional.

For an alternative possible future change, this scenario begins
at the present and charts a precise, postulated future sequence
of events that ends when the change happens or “If X were to
occur.”

For an alternative possible past change, the scenario begins at
the first place at which history is imagined to be different than it
was and then charts a precise sequence of events (which also
change history) that ends at the moment when the possibility (X)
fully comes to be.

Obviously, the ideal scenario back story for the possible change
would be the most plausible one. To develop a plausible account,
analysts should first identify the reasons why the change itself is (or
was) unlikely. Then, they should identify the reasons why the change
is (or was) still possible nonetheless. Using these reasons, analysts
should develop possible “triggering events” that could weaken the
reasons why the change is unlikely and/or strengthen the reasons
why the change is still possible. The resulting scenarios can be a
single such event, or a series of distinct events that lead stepwise or
converge to produce the change. Usually, the less time covered by the scenario, the better: the more recently the scenario's chain of events begins, the more likely it is to merit consideration by the decision maker.

Three key characteristics of good convergent scenarios are that they
(1) are shorter in temporal length, (2) have triggering events that are
themselves as plausible as possible, and (3) require fewer triggering events. Once constructed, the convergent scenarios
should be assessed relative to each other and rated in terms of
plausibility by applying the three criteria. The chosen scenario
represents the official assessed “back story” of the possible change
as the analysts consider it further.
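
The relative rating at the end of Stage One can be illustrated with a small sketch. The Python below assumes an analyst has already judged each candidate back story on the three characteristics; the scenarios and scores are hypothetical, and the ordering rule is only one possible reading of the criteria.

convergent_scenarios = [
    # months covered, trigger plausibility (1 low to 5 high), number of triggers
    {"name": "Border incident escalates",    "months": 6,  "plausibility": 4, "triggers": 2},
    {"name": "Leadership succession crisis", "months": 18, "plausibility": 3, "triggers": 1},
    {"name": "Economic collapse, then coup", "months": 24, "plausibility": 2, "triggers": 3},
]

def rank_key(s):
    # Shorter scenarios, more plausible triggers, and fewer triggers rank higher.
    return (s["months"], -s["plausibility"], s["triggers"])

for rank, s in enumerate(sorted(convergent_scenarios, key=rank_key), start=1):
    print(rank, s["name"])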
Stage Two: “Ripple Effect Analysis”— The
Middle and (Broader) Context of the Change
When analysts imagine a possible change to what happened in the
past or to what is expected to happen in the future, it is easy for
them to assume that its effects will be narrow. But, change often
extends well beyond the reach of what is anticipated, especially
when other actors start to respond to the initial change and thereby
create a broader “ripple effect” of unintended consequences. In
many narratives, it is in the “middle” of the story that everything gets
more complicated as the actors begin to experience the broader and
deeper consequences of what happened at the beginning. Analysts
have to explore these possible consequences before they can
develop a viable account of the longer-term impact of their imagined
change. This is what happens between the “if” and the “then” in a
standard statement: “If X were to occur, then Y would (or might)
follow.”

To search for possible “ripple effects,” analysts should first locate
several major actors, causal forces, or trends that are not major
players or factors in the scenario they developed for Stage 1 but are
likely to interact with the change after it begins to have impact.
Second, the analysts should identify what is currently expected of
those major actors (or what they actually did in the past). Third, the
analysts should consider how those actors might act differently
assuming all the changes imagined in Stage 1:

How do the events of that scenario create a “new world” for
these actors?

How might these actors respond in ways that are different from
what analysts currently expect them to do (or what they did in
the past)?
In what ways can the analysts no longer assume these actors
will maintain the “status quo” given the imagined changes?

The most significant possible alternative ways these actors might
respond represent “new uncertainties” or possible “unintended
consequences” that should be taken seriously. Note that analysts are
not (at this point) projecting what these actors would do but are only
identifying the significant ways in which they might plausibly act
differently. They are locating possible “ripple effects”—what analysts
can no longer assume will still happen.
Stage Three: “Divergent Scenario
Development”—The End and Consequences
of the Change
The task for the analyst in Stage 3 is to examine all possible
consequences of the change’s back story (Stage 1) and the further
changes that might follow it (Stage 2). A key tenet of the
methodology is that the evaluation of the longer-term consequences
of a change (what happens after the “then”) should be done only
after considering both how the change would come to be (what
happens before the “if”), and how it might generate new uncertainties
(what happens between the “if” and the “then”).

To identify the range of outcomes that follow (or “diverge”) from a
possible change, analysts should look for possible interactions
between the events of the convergent scenario (Stage 1) and the
possible new uncertainties generated by it (Stage 2). Divergent
scenarios are selected based on what would be useful for a decision
maker to consider when conducting a full-fledged strategic
assessment.

To this end, it is important to know what sorts of consequences the
decision maker believes are worth thinking about, what decisions the
outcomes are related to, and the nature of the connection between
the decision and the consequences. The analyst should distinguish
outcomes that occur in only one possible scenario from those
that occur across a wide range of possible scenarios. The former are
consequences that “might plausibly follow” and the latter are
consequences that “would probably follow.” Note that it is important
for analysts to always put the conclusions of Counterfactual
Reasoning in the form of a conditional claim such as “If X were to
occur, then Y would (or might) follow,” and never simply as “Y would
(or might) follow.”
Relationship to Other Techniques
Counterfactual Reasoning is implicit in most strategic assessments
because any proposed scenario or response to a situation presumes
that certain things would or might have to occur for that scenario to
play out. Several Structured Analytic Techniques mirror some of the
basic processes of Counterfactual Reasoning. The Key Assumptions
Check, for example, employs some of the principles of
Counterfactual Reasoning in that it prompts analysts to challenge
their underlying assumptions about what might have caused
something to occur in the past or would make it occur in the future.

What If? Analysis focuses its attention mostly on the same subject
as the first stage of Counterfactual Reasoning, emphasizing the
need to develop a “back story” to explain how a posited outcome
emerged. An analyst could (theoretically) use What If? Analysis to do
the first stage of Counterfactual Reasoning, or vice versa. Red Hat
Analysis also generates a specific back story by simulating how an
adversary would deal with a particular situation. Counterfactual
Reasoning, however, goes further by exploring the implications or
consequences of that scenario for the decision maker.

High Impact/Low Probability Analysis relates more to the third stage
of Counterfactual Reasoning by focusing on the consequences of a
specific—but low-probability—scenario. Similarly, the function of
most Foresight Techniques, such as Multiple Scenarios Generation
and the Cone of Plausibility, is to identify a set of plausible futures
and their consequences by applying a systematic process to
generate a set of comprehensive and mutually exclusive alternative
scenarios. An analyst could (theoretically) use one of these methods
to do the third stage of Counterfactual Reasoning (as long as they
integrate the outcomes of Stages 1 and 2), or vice versa.
Origins of This Technique
The origin of counterfactual thinking has philosophical roots and can
be traced back to early philosophers such as Aristotle and Plato. In
the seventeenth century, the German philosopher Gottfried Wilhelm
Leibniz argued that there could be an infinite number of alternate
worlds, so long as they were not in conflict with laws of logic.10 More
recently, counterfactual thinking has gained interest from a
psychological perspective. Daniel Kahneman and Amos Tversky
pioneered the study of counterfactual thought, showing that people
tend to think “if only” more often about exceptional events than about
normal events.11 Noel Hendrickson has pioneered the application of
counterfactual thinking to intelligence analysis and teaches the
technique as one of his four core courses in advanced reasoning
methods for intelligence analysts at James Madison University.12
9.10 ANALYSIS BY CONTRASTING
NARRATIVES
Analysis by Contrasting Narratives is a recently developed
methodology for analyzing complex problems by identifying the
narratives associated with entities involved in the problem.13 This
includes the strategic narrative associated with the primary client of
the analysis, be it an intelligence consumer or a corporate executive.
The process involves having analysts and decision makers work
collaboratively to further their understanding of a problem. This
melding of the expertise of both analysts and decision makers is also
an effective practice in Multiple Scenarios Generation workshops
and other Foresight exercises.
When to Use It
Analysis by Contrasting Narratives seeks to answer strategic-level
questions such as, “Who has what power, on what level, in what
setting, over what audience, to attribute what meaning of security, in
what form of discourse, which supports what interests, and motivates
what action, by whom?” The technique helps analysts understand
how a decision maker’s perception of a threat differs from that of an
adversary or from that of other geopolitical state or non-state actors.
It also focuses attention on the identities of an adversary, whether
they may be changing over time, and to what extent the decision
makers’ own statements and actions could have undercut their own
policy objectives.

The technique is most typically used on complex intelligence
problems characterized by sets of interacting events, themes, or
entities that are evolving in a dynamic social context. It is useful in
analyzing trends in international terrorism, weapons proliferation,
and cyber security, and complex counterintelligence, political
instability, and Digital Disinformation issues. The narratives can be
considered as separate case studies focused on distinct entities,
often at different levels (individual, group, or institution).
Value Added
Analysis by Contrasting Narratives engages analysts, working-level
policymakers and decision makers, and trusted external subject
matter experts in a joint effort to study a topic and develop working
narratives relating to a common issue. The narratives reflect the
perspectives of the key decision maker(s), the adversary or
competitor, and other stakeholders or entities associated with the
issue.

The method requires the development and interpretation of distinct
narratives by those who can remain critically distant and objective
while being sufficiently informed on the topic. It can broaden the
analytic spectrum at an initial stage to prompt further analysis and
drive additional collection efforts.

A key benefit of the technique is that it increases the diversity of
perspectives an analyst can bring to understanding the root causes
of an event and the circumstances surrounding it. By highlighting the
significance of differing narratives, analysts can consider multiple
interpretations as they reframe the intelligence issue or analytic
problem. As the developer of the methodology explains, “Rather than
telling truth to power, this thesis ‘Analysis by Contrasting Narratives’
argues that intelligence analysis should strive to consider the most
relevant truths to serve power.”14

The method recognizes that the world has become much more
complex. As analysis expands beyond traditional military, political,
and economic issues to environmental, social, technical, and cyber
domains, there is growing need for analytic techniques that involve
more adaptive sense making, flexible organizational structures,
direct engagement with the decision maker, and liaising with
nontraditional intelligence and academic partners.
Analysis by Contrasting Narratives differs from many traditional
forms of intelligence analysis in that it seeks to integrate the
perceptions of the policymaker or decision maker into the analysis of
the behavior of adversaries and other hostile entities. By engaging
the decision maker in the analytic process, the method can also
reflect and interactively assess the impact of a decision maker’s
actions on the problem at hand.
The Method
The methodology consists of two phases: (1) basic analytic
narratives are identified, and (2) the narratives are analyzed to
assess their development, looking for effects of statements and
actions across multiple social domains. A central focus lies with
articulations of difference, expressing (or critiquing) an “us-against-
them” logic to enable or legitimize particular security policies or
decisions.

Phase I: Developing Contrasting Narratives

Create a basic timeline that defines key events to be studied in
all the narratives.

Develop at least three narratives reflecting the perspective of (1)
those who have power or influence over how the issue plays out
—a macro narrative—and (2) entities that can reflect critically on
actions taken by key players but who lack power to act to
influence the situation—a micro narrative. Primary candidates
for macro narratives are the adversary or competitor in the
scenario and the decision maker and his or her coterie who are
the clients for the study.

For each narrative,

Construct a timeline, based on key words, of all significant
events.

Select texts (articles, public statements) produced by the
central actors that are related to the major events in the
timeline. For institutional actors, the texts usually are
generated by the leadership. In a personal narrative, the
texts would be produced by that person. It is possible for
new key events to be added to the basic timeline as the
analysis progresses.

Analyze the meaning constructed in texts regarding threats
and security through the type of grammar (e.g., declaring),
lexicon (e.g., metaphors, synonyms, stereotypes), and
visual aspects of signs and images used.

Examine the settings, involving ways of communicating (to
various audiences), social roles, identities, and power
relations.

Consider the background context, including cultural,
religious, and socio-political factors.

For macro narratives, focus on threat articulations in and
through statements and actions. For micro narratives, identify
commentary on these threat articulations to reveal internal
tensions and inconsistencies.

Phase II: Comparing and Contrasting the Narratives

Analyze and link the macro narratives. Ask, “To what extent are
statements and actions in one narrative reflected in another
narrative?”

Use the micro narratives to enhance, contrast, or highlight
additional events and factors of influence.

Explore:

What facilitating conditions and drivers, or factors and
events, account for the overall transformation of narratives
from beginning to end? How do these elements relate?
How do the beginning points of the analysis for each
narrative differ?

How do the analytic end states for each narrative differ?

How do statements and actions resonate with various
audiences? Is an audience formal (granting executive
powers) or moral (legitimizing statements and actions)? Are
the audiences institutionalized, or forming in and through
the narrative? Is their response hostile or supportive,
sympathetic, and understanding? Is the impact on the
audience fluctuating or gradual?

What key factors and events, such as material and
ideational circumstances, intentions, statements, and
actions, add or diminish momentum of threat articulations?

Overall, explain how courses of action are shaped and produce
multiple effects within and across social domains. For example,
ask,

Does ideology fuel the process of identification of others
and the self?

Are conflicts between entities a reflection of inherent
structural factors or perceptions of a physical threat to self?

Are people being portrayed in negative ways to suit
ideological, political, religious, or economic norms?

Analysis by Contrasting Narratives can be graphically displayed with
diagrams, creating one for each macro narrative. Key events for a
narrative are listed vertically in chronological order. On the left,
factors are depicted that add momentum to these events and the
overall security dynamic in the narrative. Factors removing
momentum are listed on the right. Each narrative has its own color
that is also used to highlight related factors in other narratives,
visualizing the extent to which narratives influence each other.
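
The display described above can be prototyped as a simple data structure. The Python sketch below uses invented events and factors and only mirrors the layout: key events in order, with momentum-adding factors on one side and momentum-removing factors on the other.

macro_narrative = {
    "name": "Adversary government (hypothetical)",
    "events": [
        {"date": "2020-03", "event": "Border closure announced",
         "adds_momentum": ["nationalist media campaign"],
         "removes_momentum": ["objections from trade lobby"]},
        {"date": "2020-07", "event": "Military exercises near the frontier",
         "adds_momentum": ["backing from key ally"],
         "removes_momentum": ["offer of international mediation"]},
    ],
}

for e in macro_narrative["events"]:
    print(f'{e["date"]}  {e["event"]}')
    print(f'  + {", ".join(e["adds_momentum"])}')
    print(f'  - {", ".join(e["removes_momentum"])}')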
Relationship to Other Techniques
Analysis by Contrasting Narratives incorporates elements of
Reframing Techniques such as Red Hat Analysis, Premortem
Analysis, and Structured Self-Critique and several challenge
approaches including Team A/Team B Analysis and Devil’s
Advocacy.

The method is like Red Hat Analysis in that both techniques aim
to widen cultural empathy and understanding of a problem. Red
Hat Analysis differs from Analysis by Contrasting Narratives,
however, in that it is more likely to assume that the opposing
sides view the conflict in much the same way.

Devil’s Advocacy involves having someone who did not
participate in the analysis challenge the analytic judgments and
the process that produced them whereas Analysis by
Contrasting Narratives is a more collaborative process. Rather
than posing a narrative and a counter-narrative, Analysis by
Contrasting Narratives seeks to understand narratives
associated with a multitude of actors.

The method differs from challenge techniques in that it engages
the decision maker as a participant in the analytic process rather
than presenting the decision maker with several options from
which to choose.
Origins of This Technique
The Analysis by Contrasting Narratives methodology was developed
by Peter de Werd, an assistant professor in intelligence and security
at the Netherlands Defence Academy. His PhD dissertation, “Critical
Intelligence: Analysis by Contrasting Narratives, Identifying and
Analyzing the Most Relevant Truths,” provides a detailed description
of both the theory underlying this new approach to analysis and the
method itself.15
9.11 INDICATORS GENERATION,
VALIDATION, AND EVALUATION
Indicators are a preestablished set of observable phenomena that
are periodically reviewed to track events, spot emerging trends,
validate a hypothesis, and warn of unanticipated change. An
indicator list is a preestablished set of observable or potentially
observable actions, conditions, facts, or events whose simultaneous
occurrence would argue strongly that a phenomenon is present or is
highly likely to occur. Indicators can be monitored to obtain tactical,
operational, or strategic warnings of some future development that, if
it were to occur, would have a major impact.

The central mission of intelligence analysis is to warn U.S. officials about dangers to national security interests and to alert them to perceived openings to advance U.S. policy objectives. Thus the bulk of analysts’ written and oral deliverables points directly or indirectly to the existence, characteristics, and implications of threats to and opportunities for U.S. national security.
—Jack Davis, “Strategic Warning: If Surprise Is Inevitable, What Role
for Analysis?” (January 2003)

The identification and monitoring of indicators are fundamental tasks of analysis, as they are the principal means of avoiding surprise.
When used in intelligence analysis, they usually are forward looking
and are often described as estimative, anticipatory, or foresight
indicators. In the law enforcement community, indicators are more
often used to assess whether a target’s activities or behavior are
consistent with an established pattern. These indicators look
backward and are often described as descriptive or diagnostic
indicators.
When to Use It
Indicators provide an objective baseline for tracking events, instilling
rigor into the analytic process, and enhancing the credibility of the
final product. The indicator list can become the basis for
investigating a situation or directing collection efforts and routing
relevant information to all interested parties. In the private sector,
indicators can track whether a new business strategy is working or
whether a low-probability scenario is developing that offers new
commercial opportunities.

Indicator lists can serve as a baseline for generating collection requirements or establishing research priorities. They can also be the basis for the analyst’s filing system to track developing events.

Descriptive or diagnostic indicators are best used to help the analyst assess whether there are grounds to believe that a specific action is taking place. They provide a systematic way to validate a hypothesis or help substantiate an emerging viewpoint. Figure 9.11 is an example of a list of descriptive indicators, in this case pointing to a clandestine drug laboratory.

A classic application of anticipatory indicators is to seek early warning of some undesirable event, such as a military attack or a
nuclear test by a foreign country. Today, indicators are often paired
with scenarios to identify which of several possible scenarios is
developing. Analysts also use indicators to measure change that
points toward an undesirable condition, such as political instability or
an economic slowdown, or toward a desirable condition, such as
economic reform or the potential for market growth. Analysts can use
this technique whenever they need to track a specific situation to
monitor, detect, or evaluate change over time.
Value Added
The human mind sometimes sees what it expects to see and can overlook the unexpected. Identification
of indicators prepares the mind to recognize early signs of significant change. Change often happens so
gradually that analysts do not see it, or they rationalize signs of change as not important until they are
too obvious to ignore. Once analysts take a position on an issue, they can be reluctant to change their
minds in response to new evidence. Analysts can avoid this type of rationalization by specifying
beforehand the threshold for what actions or events would be significant and might cause them to
change their minds.

Figure 9.11 Descriptive Indicators of a Clandestine Drug Laboratory


Source: Pamphlet from ALERT Unit, New Jersey State Police, 1990; republished in The Community Model, Counterdrug
Intelligence Coordinating Group, 2003.

Defining explicit criteria for tracking and judging the course of events makes the analytic process more
visible and available to scrutiny by others, thus enhancing the credibility of analytic judgments. Including
an indicator list in the finished product helps decision makers track future developments and builds a
more concrete case for the analytic conclusions.

Preparation of a detailed indicator list by a group of knowledgeable analysts is usually a good learning
experience for all participants. It can be a useful medium for an exchange of knowledge between
analysts from different organizations or those with different types of expertise—for example, analysts
who specialize in a country and those who are knowledgeable in a given field, such as military
mobilization, political instability, or economic development.

When analysts or decision makers are sharply divided over (1) the interpretation of events (for example,
political dynamics in Saudi Arabia or how the conflict in Syria is progressing), (2) the guilt or innocence
of a “person of interest,” or (3) the culpability of a counterintelligence suspect, indicators can help
depersonalize the debate by shifting attention away from personal viewpoints to more objective criteria.
Strong emotions are often defused and substantive disagreements clarified if both sides can agree in advance on a set of criteria that show developments are—or are not—moving in a particular direction or that a person’s behavior suggests guilt or innocence.

The process of developing indicators forces the analyst to reflect and explore all that might be required
for a specific event to occur. The process can also ensure greater objectivity if two sets of indicators are
developed: one pointing to a likelihood that the scenario will emerge and another showing that it is not
emerging.
Indicators help counteract Hindsight Bias because they provide a written record that more accurately
reflects what the analyst was thinking at the time rather than relying on that person’s memory. Indicators
can help analysts overcome the tendency to judge the frequency of an event by the ease with which
instances come to mind (Availability Heuristic) and the tendency to predict rare events based on weak
evidence or evidence that easily comes to mind (Associative Memory). Indicators also help analysts
avoid the intuitive traps of ignoring information that is inconsistent with what one wants to see (Ignoring
Inconsistent Evidence), continuing to hold to a judgment when confronted with a mounting list of
contradictory evidence (Rejecting Evidence), and assuming something is inevitable, for example, if the
indicators an analyst had expected to emerge are not actually realized (Assuming Inevitability).
9.11.1 The Method: Indicators Generation
Analysts can develop indicators in a variety of ways. The method can range from a simple process to a
sophisticated team effort. For example, with minimum effort, analysts can jot down a list of things they
would expect to see if a given situation were to develop as feared or foreseen. Or analysts could work
together to define multiple variables that would influence a situation and then rank the value of each
variable based on incoming information about relevant events, activities, or official statements.

When developing indicators, clearly define the issue, question, outcome, or hypothesis and then
generate a list of activities, events, or other observables that you would expect to see if that issue or
outcome emerged. Think in multiple dimensions using STEMPLES+ (Social, Technical, Economic,
Military, Political, Legal, Environmental, and Security, plus Demographic, Religious, Psychological, or
other factors) to stimulate new ways of thinking about the problem. Also consider analogous sets of
indicators from similar or parallel circumstances.

Indicators can be derived by applying a variety of Structured Analytic Techniques, depending on the
issue at hand and the frame of analysis.16 For example, analysts can use

Cluster Brainstorming or Mind Mapping to generate signposts. A signpost could be a new development, a specific event, or the announcement of a new policy or decision that would be likely to emerge as a scenario unfolds.

Circleboarding™ to identify all the dimensions of a problem. It prompts the analyst to explore the
Who, What, How, When, Where, Why, and So What of an issue.

Key Assumptions Check to surface key variables or key uncertainties that could determine how a
situation unfolds.

Gantt Charts or Critical Path Analysis to identify markers. Markers are the various stages in a
process (planning, recruitment, acquiring materials, surveillance, travel, etc.) that note how much
progress the group has made toward accomplishing the task. Analysts can identify one or more
markers for each step of the process and then aggregate them to create a chronologically ordered
list of indicators.

Decision Trees to reveal critical nodes. The critical nodes displayed on a Decision Tree diagram
can often prove to be useful indicators.

Models to describe emerging phenomena. Analysts can identify indicators that correspond to the
various components or stages of a model that capture the essence of dynamics such as political
instability, civil-military actions presaging a possible coup, or ethnic conflict. The more indicators
observed, the more likely that the phenomenon represented by the model is present.

Structured Analogies to flag what caused similar situations to develop. When historical or generic
examples of the topic under study exist, analysis of what created these analogous situations can be
the basis for powerful indicators of how the future might evolve.

When developing indicators, analysts should take time to carefully define each indicator. It is also
important to establish what is “normal” for that indicator.

Consider the indicators as a set. Are any redundant? Have you generated enough? The set should be
comprehensive, consistent, and complementary. Avoid the temptation of creating too many indicators;
collectors, decision makers, and other analysts usually ignore long lists.
After completing your list, review and refine it, discarding indicators that are duplicative and combining
those that are similar. See Figure 9.11.1 for a sample list of anticipatory or foresight indicators.

Figure 9.11.1 Using Indicators to Track Emerging Scenarios in Zambria


Source: Pherson Associates, LLC, 2019.
9.11.2 The Method: Indicators Validation
Good indicators possess five key characteristics: they should be
observable and collectible, valid, reliable, stable, and unique.17
Discard those that are found wanting. The first two characteristics
are required for every indicator. The third and fourth characteristics
are extremely important but cannot always be satisfied. The fifth
characteristic is key to achieving a high degree of diagnosticity for a
set of indicators but is the most difficult goal to achieve.

Observable and collectible. The analyst should have good reason to expect that the indicator can be observed and reported by a reliable source. To monitor change over time, an indicator needs to be collectible over time. It must be legal and not too costly to collect.

Valid. An indicator must be clearly relevant to the end state the analyst is trying to predict or assess. It must accurately measure the concept or phenomenon at issue.

Reliable. Data collection must be consistent when comparable methods are used. Those observing and collecting the information must observe the same thing. This requires precise definition of each indicator.

Stable. An indicator must be useful over time to allow comparisons and to track events. Ideally, the indicator should be observable early in the evolution of a development so that analysts and decision makers have time to react accordingly.

Unique. An indicator should measure only one thing and, in combination with other indicators, point only to the phenomenon being studied. Valuable indicators are those that are consistent with a specified scenario or hypothesis and inconsistent with alternative scenarios or hypotheses. Indicators Evaluation, described next, can be used to check the diagnosticity of indicators.

Remember that indicators must be tangibly defined to be objective and reliable. For example, “growing nervousness” or “intent to do harm” would fail the test, but “number of demonstrators” or “purchase of weapons” would pass.
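To illustrate how these screening criteria might be applied in practice, the following minimal Python sketch checks candidate indicators against the five characteristics. The candidate names, the checklist fields, and the rule that only the first two checks are mandatory mirror the discussion above, but the code itself is an illustrative assumption, not part of the published method.

```python
# Minimal sketch (hypothetical): screening candidate indicators against the five
# validation characteristics described above. Each candidate is marked True/False
# for each check; only "observable and collectible" and "valid" are treated as
# mandatory, matching the text.

MANDATORY = ("observable_and_collectible", "valid")

candidates = {
    "growing nervousness": {
        "observable_and_collectible": False, "valid": True,
        "reliable": False, "stable": False, "unique": False,
    },
    "number of demonstrators": {
        "observable_and_collectible": True, "valid": True,
        "reliable": True, "stable": True, "unique": False,
    },
}

def screen(checks: dict) -> tuple[bool, list[str]]:
    """Return (keep, shortfalls); keep is False when a mandatory check fails."""
    shortfalls = [name for name, passed in checks.items() if not passed]
    keep = all(checks[name] for name in MANDATORY)
    return keep, shortfalls

for indicator, checks in candidates.items():
    keep, shortfalls = screen(checks)
    verdict = "retain" if keep else "discard"
    print(f"{indicator}: {verdict} (weak on: {', '.join(shortfalls) or 'none'})")
```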
9.11.3 The Method: Indicators Evaluation
The best way to assess the diagnosticity of indicators used to distinguish among different scenarios is to employ the Indicators Evaluation methodology. Indicators Evaluation helps ensure the credibility of the analysis by identifying and dismissing non-diagnostic indicators, that is, those that would be present under multiple scenarios or hypotheses.

The ideal indicator is highly likely or consistent for the scenario or hypothesis to which it is assigned
and highly unlikely or inconsistent for all other alternatives.

A non-diagnostic indicator would be observed in every scenario or hypothesis, suggesting that it may not be particularly useful in determining whether a specific scenario or a particular hypothesis is true.

Most indicators fall somewhere in between.

Application of the Indicators Evaluation method helps identify the most-diagnostic indicators for each
scenario or hypothesis—which are most deserving of monitoring and collection (see Figure 9.11.3a).

Figure 9.11.3A Indicators Evaluation


Source: Pherson Associates, LLC, 2019.

Employing Indicators Evaluation to identify and dismiss non-diagnostic indicators can increase the
credibility of an analysis. By applying the method, analysts can rank order their indicators from most to
least diagnostic and decide how far up the list they want to draw the line in selecting the indicators used
in the analysis. In some circumstances, analysts might discover that most or all the indicators for a given
scenario are also consistent with other scenarios, forcing them to brainstorm a new and better set of
indicators for that scenario. If analysts find it difficult to generate independent lists of diagnostic
indicators for two scenarios, it may be that the scenarios are not sufficiently dissimilar, suggesting the
two scenarios should be combined.

Indicators Evaluation can help overcome mindsets by showing analysts how a set of indicators that
point to one scenario may also point to others. It can also show how some indicators, initially perceived
to be useful or diagnostic, may not be. By placing an indicator in a broader context against multiple
scenarios, the technique helps analysts focus on which one(s) are useful and diagnostic instead of
simply supporting a given scenario.
The Method
The first step is to fill out a matrix like that used for Analysis of Competing Hypotheses.

List the alternative scenarios along the top of the matrix (as is done for hypotheses in Analysis of
Competing Hypotheses).

List indicators generated for all the scenarios down the left side of the matrix (as is done with
relevant information in Analysis of Competing Hypotheses).

In each cell of the matrix, assess the status of that indicator against the noted scenario. Would you
rate the indicator as

Highly Likely to appear?

Likely to appear?

Could appear?

Unlikely to appear?

Highly Unlikely to appear?

Indicators developed for the home scenario should be either “Highly Likely” or “Likely.”

After assessing the likelihood of all indicators against all scenarios, assign a score to each cell. If
the indicator is “Highly Likely” in the home scenario as we would expect it to be, then other cells for
that indicator should be scored as follows for the other scenarios:

Highly Likely is 0 points

Likely is 1 point

Could is 2 points

Unlikely is 4 points

Highly Unlikely is 6 points

If the indicator is deemed “Likely” in the home scenario, then the cells for the other scenarios for
that indicator should be scored as follows:

Highly Likely is 0 points

Likely is 0 points

Could is 1 point

Unlikely is 3 points
Highly Unlikely is 5 points

Tally up the scores across each row; the indicators with the highest scores are the most diagnostic or discriminating (a minimal scoring sketch appears after this list).

Once this process is complete, re-sort the indicators for each scenario so that the most
discriminating indicators are displayed at the top and the least discriminating indicators at the
bottom.

The most discriminating indicator is “Highly Likely” to emerge in its scenario and “Highly
Unlikely” to emerge in all other scenarios.

The least discriminating indicator is “Highly Likely” to appear in all scenarios.

Most indicators will fall somewhere in between.

The indicators with the most “Highly Unlikely” and “Unlikely” ratings are the most discriminating.

Review where analysts differ in their assessments and decide if adjustments are needed in their
ratings. Often, differences in how an analyst rates an indicator can be traced back to different
assumptions about the scenario when the analysts were doing the ratings.

Decide whether to retain or discard indicators that have no “Unlikely” or “Highly Unlikely” ratings. In
some cases, an indicator may be worth keeping if it is useful when viewed in combination with a
cluster of indicators.

Develop additional, more diagnostic indicators if a large number of initial indicators for a given scenario have been eliminated.

Recheck the diagnostic value of any new indicators by applying the Indicators Evaluation technique
to them as well.
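The scoring sketch referenced above follows. It applies the two point scales just described to an illustrative matrix; the scenario names, indicators, and ratings are invented for demonstration and are not drawn from the text.

```python
# Minimal sketch (illustrative data): score the diagnosticity of indicators against
# alternative scenarios using the point scales described in the steps above. The
# home-scenario rating must be "Highly Likely" or "Likely" per the method.

SCORES = {
    "Highly Likely": {"Highly Likely": 0, "Likely": 1, "Could": 2,
                      "Unlikely": 4, "Highly Unlikely": 6},
    "Likely":        {"Highly Likely": 0, "Likely": 0, "Could": 1,
                      "Unlikely": 3, "Highly Unlikely": 5},
}

scenarios = ["Stable regime", "Military coup", "Regime collapse"]

indicators = [
    {"name": "Mobilization of reserves", "home": "Military coup",
     "ratings": {"Stable regime": "Highly Unlikely", "Military coup": "Highly Likely",
                 "Regime collapse": "Could"}},
    {"name": "Anti-government demonstrations", "home": "Regime collapse",
     "ratings": {"Stable regime": "Could", "Military coup": "Likely",
                 "Regime collapse": "Likely"}},
]

def diagnosticity(indicator: dict) -> int:
    """Sum the points earned against all non-home scenarios."""
    table = SCORES[indicator["ratings"][indicator["home"]]]
    return sum(table[indicator["ratings"][s]]
               for s in scenarios if s != indicator["home"])

# Re-sort so the most discriminating indicators appear at the top.
for ind in sorted(indicators, key=diagnosticity, reverse=True):
    print(f"{ind['name']}: {diagnosticity(ind)} points")
```

Sorting by the resulting totals reproduces the re-sorting step: indicators with the most “Unlikely” and “Highly Unlikely” ratings against other scenarios rise to the top.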

Analysts should think seriously before discarding indicators determined to be non-diagnostic. For
example, an indicator might not have diagnostic value on its own but be helpful when viewed as part of
a cluster of indicators. An indicator that a terrorist group “purchased guns” would not be diagnostic in determining which of the following scenarios were likely to happen: armed attack, hostage taking, or kidnapping. But knowing that guns had been purchased could be critical in pointing to an intent to commit an act of violence or even in warning of the imminence of the event. Figure 9.11.3b explores another reason
for not discarding a non-diagnostic indicator. It is called the INUS (Insufficient but
Nonredundant/Unnecessary but Sufficient) Condition.

A final argument for not discarding non-diagnostic indicators is that maintaining and publishing the list of
non-diagnostic indicators could prove valuable to collectors. If analysts initially believed the indicators
would be helpful in determining whether a specific scenario was emerging, then collectors and other
analysts working the issue, or a similar issue, might come to the same conclusion. For these reasons,
facilitators of the Indicators Validation and Evaluation techniques believe that the list of non-diagnostic
indicators should also be published to alert other analysts and collectors to the possibility that they might
also assume an indicator was diagnostic when it turned out on further inspection not to be.

If you take the time to develop a robust set or sets of anticipatory or foresight indicators (see Figure
9.11.3c), you must establish a regimen for monitoring and reviewing the indicators on a regular basis.
Analysts should evaluate indicators on a set schedule—every week or every month or every quarter—
and use preestablished criteria when doing so. When many or most of the indicators assigned to a given
scenario begin to “light up,” this should prompt the analyst to alert the broader analytic community and
key decision makers interested in the topic. A good set of indicators will give you advance warning of
which scenario is about to emerge and where to concentrate your attention. It can also alert you to
unlikely or unanticipated developments in time for decision makers to take appropriate action.

Figure 9.11.3B The INUS Condition18

Any indicator list used to monitor whether something has happened, is happening, or will happen
implies at least one alternative scenario or hypothesis—that it has not happened, is not happening, or
will not happen. Many indicators that a scenario or hypothesis is happening are just the opposite of
indicators that it is not happening; some are not. Some are consistent with two or more scenarios or
hypotheses. Therefore, an analyst should prepare separate lists of indicators for each scenario or
hypothesis. For example, consider indicators of an opponent’s preparations for a military attack where
there may be three hypotheses—no attack, attack, and feigned intent to attack with the goal of forcing a
favorable negotiated solution. Almost all indicators of an imminent attack are also consistent with the
hypothesis of a feigned attack. The analyst must identify indicators capable of diagnosing the difference
between true intent to attack and feigned intent to attack. The mobilization of reserves is such a
diagnostic indicator. It is so costly that it is not usually undertaken unless there is a strong presumption
that the reserves will be needed.

After creating the indicator list or lists, the analyst or analytic team should regularly review incoming
reporting and note any changes in the indicators. To the extent possible, the analyst or the team should
decide well in advance which critical indicators, if observed, will serve as early-warning decision points.
In other words, if a certain indicator or set of indicators is observed, it will trigger a report advising of
some modification in the analysts’ appraisal of the situation.
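A simple tracking aid can support the review regimen described above. The sketch below is a hypothetical illustration: the scenario names, indicator assignments, and the two-thirds alert threshold are assumptions chosen for the example, not prescribed values.

```python
# Minimal sketch (illustrative): a scheduled review pass over an indicator list,
# flagging a scenario for early warning when a preset share of its indicators has
# been observed in the current reporting cycle.

from datetime import date

ALERT_SHARE = 2 / 3  # assumed threshold, agreed well in advance as a decision point

scenario_indicators = {
    "Political instability": ["food shortages", "capital flight", "riots or strikes"],
    "Military coup": ["mobilization of reserves", "coup-plotting rumors",
                      "discontent over pay or benefits"],
}

# Indicators judged "observed" during this review cycle.
observed_this_cycle = {"food shortages", "capital flight"}

def review(as_of: date) -> None:
    """Print each scenario's status; alert when the observed share crosses the threshold."""
    for scenario, indicators in scenario_indicators.items():
        hits = [i for i in indicators if i in observed_this_cycle]
        share = len(hits) / len(indicators)
        status = "ALERT: notify decision makers" if share >= ALERT_SHARE else "monitor"
        print(f"{as_of} {scenario}: {len(hits)}/{len(indicators)} observed -> {status}")

review(date.today())
```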

Techniques for increasing the sophistication and credibility of an indicator list include the following:

Establishing a scale for rating each indicator.

Providing specific definitions of each indicator.

Rating the indicators on a scheduled basis (e.g., monthly, quarterly, annually).

Assigning a level of confidence to each rating.

Providing a narrative description for each point on the rating scale, describing what one would
expect to observe at that level.

Listing the sources of information used in generating the rating.

Figure 9.11.3c is an example of a complex indicators chart that incorporates the first three techniques
listed above.
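One way to hold these elements together is a small structured record for each indicator. The sketch below is a hypothetical format: the field names, the numeric scale, and the sample entry are illustrative assumptions rather than a prescribed layout.

```python
# Minimal sketch (hypothetical format): a structured record capturing the elements
# that add sophistication and credibility to an indicator list, including a defined
# scale, scheduled ratings, confidence levels, narrative anchors, and sources.

from dataclasses import dataclass, field

@dataclass
class IndicatorRating:
    period: str          # rating period, rated on a scheduled basis (e.g., quarterly)
    level: int           # rating on the agreed scale (1 = negligible ... 5 = critical)
    confidence: str      # confidence assigned to this rating: "low", "medium", "high"
    sources: list[str] = field(default_factory=list)  # reporting used for the rating

@dataclass
class Indicator:
    name: str
    definition: str                # precise definition of what is being measured
    scale_anchors: dict[int, str]  # narrative description of each point on the scale
    ratings: list[IndicatorRating] = field(default_factory=list)

food_shortages = Indicator(
    name="Food or energy shortages",
    definition="Documented shortfalls in staple food or fuel supplies in major cities",
    scale_anchors={1: "No reported shortages", 3: "Localized rationing",
                   5: "Sustained nationwide shortages"},
)
food_shortages.ratings.append(
    IndicatorRating(period="Q2", level=4, confidence="medium",
                    sources=["press reporting", "embassy cable"]))
```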

Figure 9.11.3C Zambria Political Instability Indicators


Potential Pitfalls
The quality of indicators is critical, as poor indicators lead to analytic
failure. For these reasons, analysts must periodically review the
validity and relevance of an indicator list. Narrowly conceived or
outdated indicators can reinforce analytic bias, encourage analysts
to discard new evidence, and lull consumers of information
inappropriately. Indicators can also prove to be invalid over time, or
they may turn out to be poor “pointers” to what they were supposed
to show. By regularly checking the validity of the indicators, analysts
may also discover that their original assumptions were flawed.
Finally, if an opponent learns what indicators are on your list, the
opponent may make operational changes to conceal what you are
looking for or arrange for you to see contrary indicators.
Relationship to Other Techniques
Indicators are closely related to many other techniques. Some form
of brainstorming is commonly used to draw upon the expertise of
various analysts to create indicators reflecting different perspectives
and different specialties. The development of alternative scenarios
should always involve the development and monitoring of indicators
that point toward a given scenario unfolding or evolving. What If?
Analysis and High Impact/Low Probability Analysis depend upon the
development and use of indicators. Indicators are often entered as
items of relevant information in Analysis of Competing Hypotheses,
as discussed in chapter 7.
Origins of These Techniques
The identification and monitoring of indicators of military attack is
one of the oldest forms of intelligence analysis. The discussion here
is based on Randolph H. Pherson and John Pyrik, Analyst’s Guide to
Indicators (Tysons, VA: Pherson Associates, LLC, 2018), and
Randolph H. Pherson, Handbook of Analytic Tools and Techniques,
5th ed. (Tysons, VA: Pherson Associates, LLC, 2019). Cynthia M.
Grabo’s book Anticipating Surprise: Analysis for Strategic Warning
(Lanham, MD: University Press of America, 2004) is a classic text on
the development and use of indicators. A useful compendium of
violent extremist mobilization indicators published by the U.S.
Director of National Intelligence (DNI) describes indicators in terms
of four criteria: diagnosticity, category of behavior, observability, and
time sensitivity. See Homegrown Violent Extremist Mobilization
Indicators, 2019 edition, accessible at
https://www.dni.gov/index.php/nctc-newsroom/nctc-
resources/item/1945-homegrown-violent-extremist-mobilization-
indicators-2019.

The Indicators Evaluation methodology was developed by Randolph Pherson, Grace Scarborough, Alan Schwartz, and Sarah Beebe, Pherson Associates, LLC. It was first published as the Indicators Validator® in Randolph H. Pherson, Handbook of Analytic Tools and Techniques, 3rd ed. (Reston, VA: Pherson Associates, LLC, 2008).
NOTES
1. Peter Schwartz, The Art of the Long View: Planning for the Future
in an Uncertain World (New York: Doubleday, 1991).

2. See, for example, Brian Nichiporuk, Alternative Futures and Army Force Planning: Implications for the Future Force Era (Santa Monica, CA: RAND Corporation, 2005).

3. A comprehensive review of how to generate, validate, and evaluate the diagnosticity of indicators can be found in Randolph H. Pherson and John Pyrik, Analyst’s Guide to Indicators (Tysons, VA: Pherson Associates LLC, 2018).

4. Randolph H. Pherson, “Leveraging the Future with Foresight Analysis,” The International Journal of Intelligence, Security, and Public Affairs 20, no. 2 (Fall 2018).

5. Peter Gijsbert de Werd, “Critical Intelligence: Analysis by Contrasting Narratives, Identifying and Analyzing the Most Relevant Truths” (PhD diss., Utrecht University, 2018), https://dspace.library.uu.nl/bitstream/handle/1874/373430/deWerd.pdf?sequence=1&isAllowed=y.

6. A fuller description of individual and group brainstorming techniques can be found in Randolph H. Pherson, Handbook of Analytic Tools and Techniques, 5th ed. (Tysons, VA: Pherson Associates LLC, 2019), 10–11.

7. The description of the Cone of Plausibility is taken from two government publications: (1) Quick Wins for Busy Analysts, DI Futures and Analytic Methods (DI FAM), Professional Head of Defence Intelligence Analysis, United Kingdom Ministry of Defence, and (2) Gudmund Thompson, Aide Memoire on Intelligence Analysis Tradecraft, Version 4.02, Chief of Defence Intelligence, Director General of Intelligence Production, Canada. These sources are used with the permission of the UK and Canadian governments, respectively.

8. STEEP stands for Social, Technological, Economic, Environmental, and Political; STEEP + 2 adds Psychological and Military; STEEPLE adds Legal and Ethics to the original STEEP list; STEEPLED further adds Demographics; and PESTLE stands for Political, Economic, Social, Technological, Legal, and Environmental.

9. This description of Counterfactual Reasoning is drawn largely from Noel Hendrickson, Reasoning for Intelligence Analysis (Lanham, MD: Rowman & Littlefield, 2018), as well as his earlier “Counterfactual Reasoning: A Basic Guide for Analysts, Strategists, and Decision Makers,” The Proteus Monograph Series 2, no. 5 (October 2008).

10. N. J. Roese and J. M. Olson, eds., What Might Have Been: The Social Psychology of Counterfactual Thinking (Hillsdale, NJ: Lawrence Erlbaum Associates, 1995).

11. D. Kahneman and A. Tversky, “The Simulation Heuristic,” in Judgment under Uncertainty: Heuristics and Biases, eds. D. Kahneman, P. Slovic, and A. Tversky (New York: Cambridge University Press, 1982).

12. Noel Hendrickson, “Critical Thinking in Intelligence Analysis,” International Journal of Intelligence and CounterIntelligence 21, no. 4 (September 2008).

13. This discussion of Analysis by Contrasting Narratives is taken from de Werd’s “Critical Intelligence.”

14. Ibid., 15.

15. Ibid.
16. A robust discussion of the processes analysts use to generate,
validate, and evaluate the diagnosticity of indicators can be found in
Pherson and Pyrik, Analyst’s Guide to Indicators.

17. A shorter description of Indicators Generation, Validation, and Evaluation can be found in Pherson, Handbook of Analytic Tools and Techniques, 5th ed., 48–50.

18. See J. L. Mackie, “Causes and Conditions,” American Philosophical Quarterly 2, no. 4 (October 1965), 245–264.
Descriptions of Images and Figures
Back to Figure

1 to 4 years. Simple situation: Brainstorming, reversing, and assumptions. Complex situation: Simple
scenarios, cone of plausibility, and morphological analysis. Primary objective: Avoiding surprises.
Anticipating the unanticipated.

5 to 10 years. Simple situation: Alternative futures analysis and what if analysis. Complex situation: Multiple scenarios generation, foresight quadrant crunching™, analysis by contrasting narratives, and counterfactual reasoning. Primary objective: Mapping the future. Finding opportunities.

Back to Figure

The present drivers and assumptions lead to the multiple scenarios after a period of stable regime. The
drivers and their corresponding assumptions are as follows. Economy: Growth likely 2 to 5 percent.
Popular support: Slowly eroding. Civil-Military relations: Growing tensions. Regional relations: Peaceful
and stable. Foreign relations: Substantial. The scenarios are as follows. Plausible Scenario 1: More inclusive policies and sound economic decisions bring stability. Baseline scenario: President under fire and struggling to retain control. Plausible scenario 2: Junior military officers stage successful coup. Wild-
card scenario: War breaks out with neighbor as regime collapses.

Back to Figure

The effectiveness of the government ranges from fully operational to marginalized. The strength of civil society ranges from nonexistent to robust. Fully operational and nonexistent: Keeping it all together. Fully operational and robust: Competing power centers. Marginalized and nonexistent: Glueless in Havana. Marginalized and robust: Drifting toward democracy.

Back to Figure

A. The role of neighboring states, example, Syria, Iran. B. The capability of Iraq’s security forces, such
as military and police. C. The political environment in Iraq. In the first scenario, the key drivers are A and
B. In the second scenario, the key drivers are A and C. In the third scenario, the key drivers are B and
C.

Back to Figure

The key definers are Iraqi security capability, ranging from ineffective to effective, and the helpfulness of neighbors, ranging from stable and supportive to unstable or disruptive. Neighboring states stable and supportive with ineffective security capability: Regional defensive umbrella secures borders. Insurgency is pure Sunni; internal political solution? Neighboring states stable and supportive with effective security capability: Militias integrated into new Iraqi Army. Jordan brokers deal; economic aid to Sunnis. Neighboring states unstable or disruptive with ineffective security capability: Syria collapses, influx of new fighters. Civil wars. Neighboring states unstable or disruptive with effective security capability: Insurgency fragments. Refugees flow into Iraq seeking safe haven.

Back to Figure

A. The role of neighboring states, example, Syria, Iran. B. The capability of Iraq’s security forces, such
as military and police. C. The political environment in Iraq. In the first scenario, the key drivers are A and
B. The Civil War is the nightmare scenario in the third quadrant. In the second scenario, the key drivers
are A and C. Sunni politics in the second quadrant deserve the most attention. In the third scenario, the
key drivers are B and C. New fighters in the third column deserve the most attention.

Back to Figure
The dimensions are group, type of attack, target, and impact. The first option is an outside group
planning multiple attacks on the treatment plant for disrupting economy. The second option is an insider
group planning a single type of attack on the drinking water for terrorizing the population. The third
option is a visitor group planning a threatening type of attack on wastewater to cause major casualties.

Back to Figure

The convergent scenario for X leads to the divergent scenario for Y through ripple effects: a possible change X (“if”) and its consequence Y (“then”). The convergent scenario is the break with prior history; the divergent scenario is the continuation to failure. Counterfactual: If X were to occur, then Y would or might occur. Stage 1. Convergent Scenario Development. Ask, “When and where might this change plausibly come about?” Assess the causes of the change and develop the story’s beginning. Stage 2. Ripple Effect Analysis. Ask, “When and where might this change cause broader uncertainties and unintended consequences?” Assess the context of the change or new uncertainties and develop the story’s middle. Stage 3. Divergent Scenario Development. Ask, “When and where might this change have long-term impact?” Assess the consequences of the change and develop the story’s end.

Back to Figure

A tabular representation of Zambria Political Instability Indicators lists the five main indicators, with sub-
indicators, and their corresponding concerns for the first, second, third, and fourth quarters of 2008 and
2009 and the first and second quarters of 2010.

Ratings are listed in order for the first through fourth quarters of 2008 and the first and second quarters of 2009.

Social change or conflict
Ethnic or religious discontent: Negligible, Negligible, Negligible, Negligible, Negligible, Negligible
Demonstrations, riots, strikes: Negligible, Negligible, Low, Low, Low, Moderate

Economic factors
General deterioration: Low, Low, Moderate, Moderate, Moderate, Low
Decreased access to foreign funds: Low, Moderate, Low, Low, Low, Low
Capital flight: Low, Low, Low, Moderate, Moderate, Low
Unpopular changes in economic policies: Low, Low, Low, Low, Strong, Strong
Food or energy shortages: Low, Low, Low, Low, Moderate, Substantial
Inflation: Low, Low, Moderate, Low, Low, Moderate

Opposition activities
Organizational capabilities: Low, Negligible, Negligible, Negligible, Negligible, Negligible
Opposition or conspiracy planning: Low, Low, Low, Low, Low, Low
Terrorism and sabotage: Low, Low, Low, Low, Low, Low
Insurgent armed attacks: Low, Low, Low, Low, Low, Low
Public support: Negligible, Negligible, Negligible, Negligible, Negligible, Negligible

Military attitude or activities
Threat to corporate military interests or dignity: Negligible, Negligible, Negligible, Negligible, Negligible, Negligible
Discontent over career loss, pay, or benefits: Negligible, Negligible, Negligible, Negligible, Negligible, Negligible
Discontent over government action or policies: Low, Low, Low, Low, Low, Low
Reports or rumors of coup plotting: Low, Low, Low, Low, Low, Low
External support for government: Low, Low, Low, Low, Low, Low
External support for opposition: Low, Low, Low, Low, Low, Low
Threat of military conflict: Low, Low, Low, Negligible, Negligible, Negligible

Regime actions or capabilities
Repression or brutality: Moderate, Low, Moderate, Moderate, Moderate, Moderate
Security capabilities: Negligible, Negligible, Negligible, Negligible, Low, Low
Political disunity or loss of confidence: Low, Low, Low, Low, Low, Low
Loss of legitimacy: Substantial, Moderate, Moderate, Moderate, Moderate, Moderate
CHAPTER 10 DECISION SUPPORT
TECHNIQUES

10.1 Opportunities Incubator™

10.2 Bowtie Analysis

10.3 Impact Matrix

10.4 SWOT Analysis

10.5 Critical Path Analysis

10.6 Decision Trees

10.7 Decision Matrix

10.8 Force Field Analysis

10.9 Pros-Cons-Faults-and-Fixes

10.10 Complexity Manager

Managers, commanders, planners, and other decision makers all make choices or trade-offs among competing goals, values, or
preferences. Because of limitations in human short-term memory, we
usually cannot keep all the pros and cons of multiple options in mind
at the same time. That causes us to focus first on one set of
problems or opportunities and then another. This often leads to
vacillation or procrastination in making a firm decision. Some
Decision Support Techniques help overcome this cognitive limitation
by laying out all the options and interrelationships in graphic form so
that analysts can test the results of alternative options while keeping
the problem as a whole in view. Other techniques help decision
makers untangle the complexity of a situation or define the
opportunities and constraints in the environment in which the choice
needs to be made.

The role of the analyst in the policymaking process is similar to that of the scout in relation to the football coach. The job of
the scout is not to predict in advance the final score of the
game, but to assess the strengths and weaknesses of the
opponent so that the coach can devise a winning game plan.
Then the scout sits in a booth with powerful binoculars, to
report on specific vulnerabilities the coach can exploit.
—Douglas MacEachin, CIA Deputy Director for Intelligence, 1993–
1995

It is usually not the analyst’s job to make the choices or decide on the trade-offs, but analysts can and should use Decision Support
Techniques to provide timely support to managers and decision
makers who must make these choices. To engage in this type of
client support, analysts must be aware of the operating environment
of decision makers and anticipate how they are likely to approach an
issue. Analysts also need to understand the dynamics of the
decision-making process to recognize when and how they can be
most useful. Most of the decision support techniques described here
are used in both government and industry.

By using such techniques, analysts can see a problem from the decision maker’s perspective. They can use these techniques
without overstepping the limits of their role as analysts because the
technique does not make the decision; it just structures all the
relevant information in a format that makes it easier for the decision
maker to make a choice.

The decision aids described in this chapter provide a framework for analyzing why or how a leader, group, organization, company, or
country has made, or is likely to make, a decision. If analysts can
describe an adversary’s or a competitor’s goals and preferences, it
will be easier to anticipate their actions. Similarly, when the decisions
are known, the technique makes it easier to infer the adversary’s or
competitor’s goals and preferences. Analysts can use these Decision
Support Techniques to help the decision maker frame a problem
instead of trying to predict the decision of a foreign government or
competitor. Often, the best support an analyst can provide is to
describe the forces that are most likely to shape a decision or an
outcome. Knowledge of these key drivers then gives the decision
maker a “head start” in trying to leverage the eventual outcome. (See
chapter 9 for a discussion of Foresight Techniques.)

Caution is in order, however, whenever one attempts to predict or explain another person’s decision, even if the person is of similar
background. People do not always act rationally or in their own best
interests. Their decisions are influenced by emotions and habits as
well as by what others might think and values of which others may
not be aware.

The same is true of organizations, companies, and governments. One of the most common analytic errors is to assume that an
organization, company, or government will act rationally or in its own
best interests. All intelligence analysts seeking to understand the
behavior of another country should be familiar with Graham Allison’s
analysis of U.S. and Soviet decision making during the Cuban
missile crisis.1 It documents three different models for how
governments make decisions—bureaucratic bargaining processes,
standard organizational procedures, and the rational actor model.

Even if an organization, company, or government is making a rational decision, analysts may get their analysis wrong. Foreign
entities typically view their own best interests quite differently from
the way analysts from different cultures, countries, or backgrounds
would see them. Also, organizations, companies, and governments
do not always have a clear understanding of their own best interests,
and often must manage a variety of conflicting interests.
Decision making and decision analysis are large and diverse fields of
study and research. The decision support techniques described in
this chapter are only a small sample of what is available, but they do
meet many of the basic requirements for intelligence and competitive
analysis.

By providing structure to the decision-making process, the Decision Support Techniques discussed in this chapter help analysts as well
as decision makers avoid the common cognitive limitations of
Premature Closure and Groupthink. Application of the techniques will
often surface new options or demonstrate that a previously favored
option is less optimal than originally thought. The natural tendency
toward Mirror Imaging is more likely to be kept in check when using
these techniques because they provide multiple perspectives for
viewing a problem and envisioning the interplay of complex factors.

Decision Support Techniques help analysts overcome several practitioners’ mental mistakes, including the intuitive trap of Overrating Behavioral Factors, in which the role of internal determinants of behavior (personality, attitudes, beliefs) is given more weight than external or situational factors (constraints, forces, incentives).
They also help counter the traps of overestimating the probability of
multiple independent events occurring for an event or attack to take
place (Overestimating Probability) and failing to factor something into
the analysis because an appropriate category or “bin” is lacking
(Lacking Sufficient Bins).

Techniques such as the Opportunities Incubator™, Bowtie Analysis, and Impact Matrix help decision makers decide how to implement
the key findings of a Foresight analysis to either mitigate the impact
of a bad scenario or increase the chances of a good scenario
occurring. SWOT Analysis (Strengths, Weaknesses, Opportunities,
and Threats) and Critical Path Analysis are basic tools of the
competitive analysis profession. Decision Trees and the Decision
Matrix use simple math to help analysts and decision makers
calculate the most probable or preferred outcomes. Force Field
Analysis, Pros-Cons-Faults-and-Fixes, and the Complexity Manager
can help decision makers understand the overarching context of a
problem and identify the key forces and factors at play.
OVERVIEW OF TECHNIQUES
Opportunities Incubator™. A systematic method for identifying
actions that can facilitate the emergence of positive scenarios and
thwart or mitigate less desirable outcomes.

Bowtie Analysis. A technique for mapping causes and consequences of a disruptive event. It is particularly effective in identifying opportunities for decision makers to avoid undesirable developments and promote positive outcomes.

Impact Matrix. A management tool for assessing the impact of a decision on the organization by evaluating what impact a decision is likely to have on all key actors or participants in that decision. It gives the analyst or decision maker a better sense of how the issue is most likely to play out or be resolved in the future.

SWOT Analysis (Strengths, Weaknesses, Opportunities, and Threats). A 2 × 2 matrix used to develop a plan or strategy to accomplish a specific goal. In using this technique, the analyst first lists the Strengths and Weaknesses in the organization’s ability to achieve a goal, and then balances them against lists of Opportunities and Threats in the external environment that would either help or hinder the organization from reaching the goal.

Critical Path Analysis. A modeling technique for identifying the critical stages required to move from a beginning to an end. It is also used for scheduling a set of project activities, commonly in conjunction with Program Evaluation and Review Technique (PERT) charts.

Decision Trees. A simple way to chart the range of options available to a decision maker, estimate the probability of each option, and show possible outcomes. The technique provides a useful landscape to organize a discussion and weigh alternatives but can also oversimplify a problem.
Decision Matrix. A simple but powerful device for making trade-offs
between conflicting goals or preferences. An analyst lists the
decision options or possible choices, the criteria for judging the
options, the weights assigned to each of these criteria, and an
evaluation of the extent to which each option satisfies each of the
criteria. This process will show the best choice—based on the values
the analyst or a decision maker puts into the matrix. By studying the
matrix, one can also analyze how the best choice would change if
the values assigned to the selection criteria were changed or if the
ability of an option to satisfy a specific criterion were changed. It is
almost impossible for an analyst to keep track of these factors
effectively without such a matrix, as one cannot keep all the pros and
cons in working memory at the same time. A Decision Matrix helps
the analyst see the whole picture.
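A weighted-score calculation of this kind is easy to sketch. In the example below the options, criteria, weights, and scores are invented for demonstration; only the mechanics of multiplying scores by weights and summing them reflect the description above.

```python
# Minimal sketch (illustrative values): a Decision Matrix as a weighted-score table.
# Each option is scored 1-5 against each criterion; weights reflect the decision
# maker's priorities and sum to 1.0.

criteria_weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}

options = {
    "Option A": {"cost": 4, "speed": 2, "risk": 5},
    "Option B": {"cost": 3, "speed": 5, "risk": 3},
}

def weighted_score(scores: dict) -> float:
    """Multiply each criterion score by its weight and sum the results."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

for name, scores in sorted(options.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")

# Changing a weight (for example, raising "risk" and lowering "cost") and rerunning
# shows how the best choice shifts with the values put into the matrix.
```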

Force Field Analysis. A technique that analysts can use to help the
decision maker identify the most effective ways to solve a problem or
achieve a goal—and whether it is possible to do so. The analyst
identifies and assigns weights to the relative importance of all the
factors or forces that either help or hinder a solution to the problem
or achievement of the goal. After organizing all these factors in two
lists, pro and con, with a weighted value for each factor, the analyst
or decision maker is in a better position to recommend strategies
that would be most effective in either strengthening the impact of the
driving forces or reducing the impact of the restraining forces.

Pros-Cons-Faults-and-Fixes. A strategy for critiquing new policy ideas. It is intended to offset the human tendency of analysts and
decision makers to jump to conclusions before conducting a full
analysis of a problem, as often happens in group meetings. The first
step is for the analyst or the project team to make lists of Pros and
Cons. If the analyst or team is concerned that people are being
unduly negative about an idea, he or she looks for ways to “Fix” the
Cons—that is, to explain why the Cons are unimportant or even to
transform them into Pros. If concerned that people are jumping on
the bandwagon too quickly, the analyst tries to “Fault” the Pros by
exploring how they could go wrong. Usually, the analyst will either
“Fix” the Cons or “Fault” the Pros, but will not do both. Of the various
techniques described in this chapter, this is one of the easiest and
quickest to use.

Complexity Manager. A simplified approach to understanding complex systems—the kind of systems in which many variables are
related to each other and may be changing over time. Government
policy decisions are often aimed at changing a dynamically complex
system. It is because of this dynamic complexity that many policies
fail to meet their goals or have unforeseen and unintended
consequences. Use the Complexity Manager to assess the chances
for success or failure of a new or proposed policy, identify
opportunities for influencing the outcome of any situation, determine
what would need to change in order to achieve a specified goal, or
recognize the potential for unintended consequences from the
pursuit of a policy goal.
10.1 OPPORTUNITIES INCUBATOR™
The Opportunities Incubator™ is a systematic method for identifying
actions that can facilitate positive outcomes and thwart or mitigate
less desirable outcomes.
When to Use It
The technique is most useful for assisting decision makers when
they are preparing for change or they want to shape how change will
occur.
Value Added
The Opportunities Incubator™ helps senior officials and decision
makers identify what actions would be most effective in preventing a
negative scenario from occurring or fostering the emergence of a
good scenario. The tool focuses attention on who is most affected by
a given scenario and who has the capability and intent to influence
an outcome.

The technique is helpful in mitigating the deleterious impact of cognitive biases such as Mirror Imaging, providing quick and easy answers to complex challenges (Mental Shotgun), and judging the
answers to complex challenges (Mental Shotgun), and judging the
desirability of a potential course of action by the ease with which the
policy option comes to mind (Availability Heuristic). It also helps
analysts avoid the pitfalls of failing to incorporate a policy option into
an action plan because the analyst lacks a category or “bin” for such
an option (Lacking Sufficient Bins), overestimating the probable
impact of multiple independent actions occurring (Overestimating
Probability), and giving too much credit to the role of behavioral
factors (personality, attitudes, beliefs) and underestimating the
impact of situational factors (constraints, time, incentives) on
accomplishing a stated objective (Overrating Behavioral Factors).
The Method
After developing a set of scenarios, assess each scenario separately using the following steps (see
Figure 10.1):

Describe the scenario, projected trajectory, or anticipated outcome in one sentence.

Determine your client’s perception of the scenario, projected trajectory, or anticipated outcome. Use
the following scale: Strongly Positive, Positive, Neutral, Negative, Strongly Negative.

Identify the primary actors in the scenario who have a stake in the projected trajectory or anticipated
outcome.

Assess how much each actor might care about the scenario’s projected outcome because of its
positive or negative (perceived or real) impact on the actor’s livelihood, status, prospects, and so
forth. This assessment considers how motivated the actor may be to act, not whether the actor is
likely to act or not. Use the scale: Very Desirable (DD), Desirable (D), Neutral (N), Undesirable (U),
Very Undesirable (UU).

Assess each actor’s capability or resources to respond to the scenario using a High, Medium, or
Low scale.

Assess each actor’s likely intent to respond to the scenario using a High, Medium, or Low scale.

Identify the actors who should receive the most attention based on the following tiers (a minimal tiering sketch appears after these steps):

1st: DD or UU Level of Interest rating plus High ratings in both Capability and Intent

2nd: High ratings in Capability and Intent

3rd: DD or UU Level of Interest rating plus a High rating in either Capability or Intent

4th: High rating in either Capability or Intent

5th: All other actors

Reorder the rows in the matrix so that the actors are listed from first to fifth tiers.

Record the two to three key drivers that would most likely influence or affect each actor or the
actor’s response.

Consider your client’s perception and determine how and when he or she might act to influence
favorably, counteract, or deter an actor’s response. From this discussion, develop a list of possible
actions the client can take.
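The tiering sketch referenced above follows. The actor names and ratings are invented; the ordering rules simply restate the five tiers listed in the steps.

```python
# Minimal sketch (illustrative data): assign actors to attention tiers using the
# Level of Interest (DD/D/N/U/UU), Capability, and Intent ratings described above.

actors = [
    {"name": "Actor 1", "interest": "UU", "capability": "High", "intent": "High"},
    {"name": "Actor 2", "interest": "D",  "capability": "High", "intent": "Medium"},
    {"name": "Actor 3", "interest": "N",  "capability": "Low",  "intent": "Low"},
]

def tier(actor: dict) -> int:
    """Return the attention tier (1 = highest) following the five rules above."""
    strong_interest = actor["interest"] in ("DD", "UU")
    high_cap = actor["capability"] == "High"
    high_int = actor["intent"] == "High"
    if strong_interest and high_cap and high_int:
        return 1
    if high_cap and high_int:
        return 2
    if strong_interest and (high_cap or high_int):
        return 3
    if high_cap or high_int:
        return 4
    return 5

# Reorder the matrix rows so first-tier actors appear at the top.
for actor in sorted(actors, key=tier):
    print(f"Tier {tier(actor)}: {actor['name']}")
```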
Figure 10.1 Opportunities Incubator™
Source: Globalytica, LLC, 2019.
Origins of This Technique
The Opportunities Incubator™ was developed by Globalytica, LLC,
to provide a structured process decision makers can use to
implement the key findings of a Foresight exercise. This technique
and the Impact Matrix are the most common decision support tools
used to conclude a Foresight exercise.
10.2 BOWTIE ANALYSIS
Bowtie Analysis is a technique for mapping causes and
consequences of a disruptive event to facilitate the management of
both risks and opportunities.

The technique was first developed for the oil and gas industry but
has evolved into a generic method for assisting decision makers in
proactively managing potential hazards and anticipated
opportunities. Analysts can use the method to enhance their
understanding of causal relationships through mapping both
anticipatory and reactive responses to a disruptive event.

The Bowtie’s logical flow can make the analysis of risks and
opportunities more rapid and efficient. The graphical “bowtie” display
also makes it easier for analysts to communicate the interaction and
relative significance of causes and consequences of a disruptive
event, whether it presents a hazard or an opportunity.
When to Use It
Bowtie Analysis is used when an organization needs to thoroughly
examine its responses to a potential or anticipated disruptive event.
Traditionally, industry has used it to establish more control over a
potential hazard and improve industrial safety, but it is also useful in
identifying a potential opportunity. Bowtie Analysis helps analysts
and decision makers understand the causal relationships among
seemingly independent events. Decision makers can use the
technique to do the following:

Evaluate their ability to control the occurrence of a risk event by identifying both measures to prevent the event from happening and responses for mitigating the harm it would do.

Explore ways to ensure good things happen and accelerate the timing and benefits of a positive event.

In both cases, the process helps analysts and decision makers recognize weaknesses and strengths in the risk prevention
structures and strategic planning processes of an adversary or their
organization. It can also be a part of a lessons-learned process to
assess why something happened in the past and identify
opportunities that can be leveraged in the future.
Value Added
Bowtie Analysis forces a thorough assessment of causes and
consequences of a disruptive event. It is adaptable to almost any
organization and scalable for any level of risk or opportunity. The
technique can be used either to evaluate possible causes and
consequences of a risk or opportunity event or to investigate an
organization’s overall risk and profitability management system. It
can also pinpoint elements of an organization’s risk management
system that need further development.

The method first evaluates the ability of an organization to prevent or control the occurrence of a risk event—or anticipate the emergence
of an opportunity. If there is the potential for control, then the
technique identifies the steps that can be taken to exercise that
control. If there is little ability to control, then the method evaluates
the potential impact of the event and describes what steps should be
taken to mitigate the resulting damage. Similarly, Bowtie Analysis
can help develop strategies for taking advantage of an upcoming
event to optimize positive benefits for the organization.

Bowtie graphics are an effective mechanism for conveying a risk or opportunity landscape because of their visual and logically flowing
presentation. They are quickly understood and conducive to
manipulation by decision makers as they consider options and
construct strategies to either mitigate harm or capitalize on the
potential for gain.

As with opportunities analysis, the technique is helpful in mitigating the impact of Mental Shotgun, the Availability Heuristic, and
Satisficing, which is selecting the first answer that appears “good
enough.” The technique also helps counter the intuitive traps of
Lacking Sufficient Bins, Overrating Behavioral Factors, and
Assuming a Single Solution.
The Method
A Bowtie Analysis is conducted using the following steps (see Figure 10.2; a minimal data-structure sketch follows the steps):

Risk or Opportunity Event. Identify a hazard (something in or around the organization of a decision maker that has the potential to cause damage) or an opportunity (something that has the
potential to bring positive benefits to the organization). If no specific hazard or opportunity comes
readily to mind, begin by brainstorming ideas, then choose the one with the greatest potential for
good or ill. Determine a possible event caused by the hazard or opportunity. This event is the Risk
or Opportunity Event. It represents either a loss of control of the hazard or a favorable outcome of
an opportunity. The event may be unprecedented, or it may have already occurred, in which case
the organization can look to the past for root causes.


Figure 10.2 Bowtie Analysis

Causes. Make a list of threats or trends, placing them on the left and drawing connecting lines from
each to the centered Risk/Opportunity Event. Threats/trends are potential causes of the
Risk/Opportunity Event. Be specific (i.e., “weather conditions” can be specified as “slippery road
conditions”) so that actionable preventive barriers in the case of threats, or accelerators for positive
trends, can be created in a later step.

Consequences. Make a similar list of consequences, placing them on the right and drawing connecting lines from each to the centered Risk/Opportunity Event. Consequences are results from the event. Continue to be specific. This will aid in identifying relevant recovery barriers or amplifiers for the consequences in a later step.

Preventive Barriers or Accelerators. Focus on the causes on the left of the Bowtie. If the causes
are threats, brainstorm barriers that would stop the threats from leading to the Risk Event. If the
causes are trends, brainstorm accelerators that would quicken the occurrence of the Opportunity
Event.

Recovery Barriers or Amplifiers. Similarly, focus on the consequences on the right of the Bowtie.
If the consequences are undesired, brainstorm barriers that would stop the Risk Event from leading
to the worst-case consequences. If the consequences are desired, brainstorm amplifiers that would
capitalize on the effects of the Opportunity Event.

Escalation Factor. Brainstorm escalation factors and connect each to a barrier, accelerator, or amplifier. An escalation factor (EF) is anything that may cause a barrier to fail (e.g., “forgetting to wear a seatbelt” is an escalation factor for a Risk Event because it impairs the effectiveness of the “wearing a seatbelt” recovery barrier) or enhances the positive effects of an accelerator or amplifier (e.g., “getting all green lights” is an escalation factor for an Opportunity Event because it increases the effectiveness of the amplifier “going the maximum speed limit”). An escalation factor barrier stops or mitigates the impact of the escalation factor and its effects, while an escalation factor accelerator or amplifier intensifies the escalation factor and its effects.
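To make the structure concrete, the sketch below captures the elements described in the steps above as a simple data model. It is written in Python; the class names and the icy-road example are illustrative assumptions, not part of the technique itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Barrier:
    description: str
    escalation_factors: List[str] = field(default_factory=list)  # things that may defeat or boost this barrier

@dataclass
class Cause:
    description: str                      # threat or trend on the left side of the Bowtie
    preventive_barriers: List[Barrier] = field(default_factory=list)

@dataclass
class Consequence:
    description: str                      # result on the right side of the Bowtie
    recovery_barriers: List[Barrier] = field(default_factory=list)

@dataclass
class Bowtie:
    event: str                            # the central Risk or Opportunity Event
    causes: List[Cause] = field(default_factory=list)
    consequences: List[Consequence] = field(default_factory=list)

# Hypothetical example: loss of vehicle control on an icy road
bowtie = Bowtie(
    event="Vehicle skids off the road",
    causes=[Cause("Slippery road conditions",
                  [Barrier("Reduce speed in winter weather")])],
    consequences=[Consequence("Driver injury",
                              [Barrier("Wearing a seatbelt",
                                       escalation_factors=["Forgetting to wear a seatbelt"])])],
)
```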
Potential Pitfalls
Specificity is necessary to create actionable barriers, accelerators,
and amplifiers in a Bowtie Analysis. However, a detailed Bowtie
Analysis can present the impression that the authors have thought of
all options or outcomes when they actually have not, as
unanticipated options are often available to decision makers and
unintended consequences result.
Relationship to Other Techniques
The Bowtie method is like Decision Tree analysis because both
methods analyze chains of events to illustrate possible future
actions. Bowtie Analysis also evaluates controls, or barriers, that an
organization has in place. The Opportunities Incubator™ is another
technique that can be used to facilitate positive outcomes and thwart
or mitigate less desirable outcomes. It structures the process
decision makers can use to develop strategies for leveraging or
mitigating the impact of key drivers that influence primary actors,
who are associated with differing levels of intent and capability.
Origins of This Technique
The University of Queensland, Australia, is credited with
disseminating the first Bowtie diagrams at a lecture on hazard
analysis in 1979. After the Piper Alpha offshore oil and gas platform
explosion in 1988, the oil and gas industry adopted the technique to
develop a systematic way of understanding the causal relationships
among seemingly independent events and asserting control over the
potentially lethal hazards in the industry. The versatile Bowtie
Analysis technique is now in widespread use throughout a variety of
industries, including chemicals, aviation, and health care. It is also
used by several intelligence services. Additional resources on Bowtie
Analysis can be found at
https://www.cgerisk.com/knowledgebase/The_bowtie_method.
10.3 IMPACT MATRIX
The Impact Matrix identifies the key actors involved in a decision,
their level of interest in the issue, and the impact of the decision on
them. It is a technique managers use to gain a better sense of how
well or how poorly a decision may be received, how it is most likely
to play out, and what would be the most effective strategies to
resolve a problem. Analysts can also use it to anticipate how
decisions will be made in another organization or by a foreign leader.
When to Use It
The best time for a manager to use this technique is when a major
new policy initiative is being contemplated or a mandated change is
about to be announced. The technique helps managers identify
where they are most likely to encounter both resistance and support.
Intelligence analysts can also use the technique to assess how the
public might react to a new policy pronouncement by a foreign
government or a new doctrine posted on the internet by a political
movement. Invariably, the technique will uncover new insights by
focusing in a systematic way on all possible dimensions of the issue.

The matrix template makes the technique easy to use. Most often,
an individual manager will apply the technique to develop a strategy
for how he or she plans to implement a new policy or respond to a
newly decreed mandate from superiors. Managers can also use the
technique proactively before they announce a new policy or
procedure. The technique can expose unanticipated pockets of
resistance or support, as well as individuals to consult before the
policy or procedure becomes public knowledge. A single intelligence
analyst or manager can also use the technique, although it is usually
more effective if done as a group process.
Value Added
The technique provides the user with a comprehensive framework
for assessing whether a new policy or procedure will be met with
resistance or support. A key concern is to identify any actor who will
be heavily affected in a negative way. Those actors should be
engaged early on or ideally before the policy is announced, in case
they have ideas on how to make the new policy more digestible. At a
minimum, they will appreciate that their views—either positive or
negative—were sought out and considered. Support can be enlisted
from those who will be strongly impacted in a positive way.

The Impact Matrix usually is most effective when used by a manager as he or she is developing a new policy. The matrix helps the manager identify who will be most affected, and he or she can consider whether this argues for either modifying the plan or modifying the strategy for announcing the plan.

The technique is helpful in reducing the impact of several of the most common cognitive biases and heuristics: Mirror Imaging, Mental Shotgun, and Groupthink. It also helps analysts avoid several common mental mindsets or intuitive traps, including Overrating Behavioral Factors, Lacking Sufficient Bins, and Overestimating Probability.
The Method
The Impact Matrix process involves the following steps (a template for using the Impact Matrix is
provided in Figure 10.3):

Identify all the individuals or groups involved in the decision or issue. The list should include me
(usually the manager); my supervisor; my employees or subordinates; my client(s), colleagues, or
counterparts in my office or agency; and counterparts in other agencies. If analyzing the decision-
making process in another organization, the “me” becomes the decision maker.

Rate how important this issue is to each actor or how much each actor is likely to care about it. Use
a three-point scale: Low, Moderate, or High. The level of interest should reflect how great an impact
the decision would have on such issues as each actor’s time, quality of work life, and prospects for
success.

Figure 10.3 Impact Matrix: Identifying Key Actors, Interests, and Impact

Categorize the impact of the decision on each actor as Mostly Positive (P), Neutral or Mixed (O), or
Mostly Negative (N). If a decision has the potential to be negative, mark it as negative. If in some
cases the impact on a person or group is mixed, then either mark it as neutral or split the group into
subgroups if specific subgroups can be identified.

Review the matrix after completion and assess the likely overall reaction to the policy or decision.

Develop an initial action plan.

Identify where the decision is likely to have a major negative impact and consider the utility of prior
consultations.

Identify where the decision is likely to have a major positive impact and consider enlisting the
support of key actors in helping make the decision or procedure work.
Finalize the action plan reflecting input gained from consultations.

Announce the decision and monitor reactions.

Reassess the action plan based on feedback received on a periodic basis.
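The bookkeeping behind the matrix can be illustrated with a short sketch. The following Python fragment uses invented actors and ratings to show how the interest and impact entries from the steps above can be tallied to flag who should be consulted early and whose support can be enlisted.

```python
# Minimal sketch of an Impact Matrix tally, assuming Python and invented actors.
# Interest is Low/Moderate/High; impact is P (mostly positive), O (neutral/mixed), N (mostly negative).
actors = [
    {"actor": "My supervisor",          "interest": "High",     "impact": "P"},
    {"actor": "My employees",           "interest": "High",     "impact": "N"},
    {"actor": "Counterparts elsewhere", "interest": "Moderate", "impact": "O"},
]

# Actors who deserve early consultation (high interest, mostly negative impact)
consult_first = [a["actor"] for a in actors
                 if a["interest"] == "High" and a["impact"] == "N"]
# Actors whose support can be enlisted (high interest, mostly positive impact)
enlist_support = [a["actor"] for a in actors
                  if a["interest"] == "High" and a["impact"] == "P"]

print("Consult before announcing:", consult_first)
print("Enlist as supporters:", enlist_support)
```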


Origins of This Technique
The Impact Matrix was developed by Mary O’Sullivan and Randolph
Pherson, Pherson Associates, LLC, and is taught in courses for mid-
level managers in the government, law enforcement, and business.
10.4 SWOT ANALYSIS
SWOT Analysis is commonly used by all types of organizations to
evaluate the Strengths, Weaknesses, Opportunities, and Threats
involved in any project or plan of action. The Strengths and
Weaknesses are internal to the organization; Opportunities and
Threats are characteristics of the external environment. It is a
frequently used tool in competitive analysis.
When to Use It
After setting a goal or objective, use SWOT as a framework for
collecting and organizing information in support of strategic planning
and decision making to achieve the goal or objective. Information is
collected to analyze the plan’s Strengths and Weaknesses and the
Opportunities and Threats present in the external environment that
might affect attainment of the goal.

SWOT is easy to use. It is usually a group process, but a single analyst can also use it effectively. It is particularly effective as a cross-functional team-building exercise at the start of a new project. Businesses and organizations of all types use SWOT so frequently that a Google search on “SWOT Analysis” turns up more than one million hits.
Value Added
SWOT can generate useful information with relatively little effort. It
brings information together in a framework that provides a good base
for further analysis. It often points to specific actions that can or
should be taken. Because the technique matches an organization’s
or plan’s Strengths and Weaknesses against the Opportunities and
Threats in the environment in which it operates, the plans or action
recommendations that develop from the use of this technique are
often highly practical.

SWOT helps analysts overcome, or at least reduce, the impact of seeking only the information that is consistent with the lead hypothesis, policy option, or business strategy (Confirmation Bias), accepting a given value of something as a proper starting point (Anchoring Effect), and Groupthink. It also helps analysts avoid the intuitive traps of Lacking Sufficient Bins, Overrating Behavioral Factors, and Overestimating Probability.
The Method

Define the objective.

Fill in the SWOT table by listing Strengths, Weaknesses, Opportunities, and Threats that are
expected to facilitate or hinder achievement of the objective (see Figure 10.4). The significance of
the attributes’ and conditions’ impact on achievement of the objective is far more important than the
length of the list. It is often desirable to list the items in each quadrant in order of their significance
or to assign them values on a scale of 1 to 5.

Identify possible strategies for achieving the objective. This is done by asking the following
questions:

How can we use each Strength?

How can we improve each Weakness?

How can we exploit each Opportunity?

How can we mitigate each Threat?

Figure 10.4 SWOT Analysis

An alternative approach is to apply “matching and converting” techniques. Matching refers to matching
Strengths with Opportunities to make the Strengths even stronger. Converting refers to matching
Opportunities with Weaknesses to convert the Weaknesses into Strengths.
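A minimal worksheet illustrating the four quadrants and the strategy questions listed above might look like the following Python sketch; all entries are hypothetical placeholders, not findings from any actual analysis.

```python
# A minimal SWOT worksheet sketch in Python; entries are invented placeholders.
swot = {
    "Strengths":     ["Experienced analytic staff"],
    "Weaknesses":    ["Limited language expertise"],
    "Opportunities": ["New data-sharing agreement"],
    "Threats":       ["Competitor expanding coverage"],
}

prompts = {
    "Strengths":     "How can we use this strength?",
    "Weaknesses":    "How can we improve this weakness?",
    "Opportunities": "How can we exploit this opportunity?",
    "Threats":       "How can we mitigate this threat?",
}

# Walk each quadrant and pair every item with its strategy question
for quadrant, items in swot.items():
    for item in items:
        print(f"{quadrant}: {item} -> {prompts[quadrant]}")
```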
Potential Pitfalls
SWOT is simple, easy, and widely used, but it has limitations. It
focuses on a single goal without weighing the costs and benefits of
alternative means of achieving the same goal. In other words, SWOT
is a useful technique if the analyst or group recognizes that it does
not necessarily tell the full story of what decision should or will be
made. There may be other equally good or better courses of action.

Another strategic planning technique, the TOWS Matrix, remedies one of the limitations of SWOT. The factors listed under Threats, Opportunities, Weaknesses, and Strengths are combined to identify multiple alternative strategies that an organization might pursue.2
Relationship to Other Techniques
The factors listed in the Opportunities and Threats quadrants of a
SWOT Analysis are the same as the outside or external factors the
analyst seeks to identify during Outside-In Thinking (chapter 8). In
that sense, there is some overlap between the two techniques.
Origins of This Technique
The SWOT technique was developed in the late 1960s at Stanford
Research Institute as part of a decade-long research project on why
corporate planning fails. It is the first part of a more comprehensive
strategic planning program. It has been so heavily used over such a
long period of time that several versions have evolved. Richards J.
Heuer Jr. selected the version he believed the most appropriate for
intelligence analysis. It comes from multiple internet sites, including
the following:
http://www.businessballs.com/swotanalysisfreetemplate.htm,
http://en.wikipedia.org/wiki/SWOT_analysis,
http://www.mindtools.com, http://www.valuebasedmanagement.net,
and http://www.mycoted.com. Pros and cons for using this
technique, along with interactive templates, can be found at
https://www.mindtools.com/pages/article/newTMC_05.htm.
10.5 CRITICAL PATH ANALYSIS
Critical Path Analysis is a modeling technique for identifying the
critical stages required to move from a beginning to an end point.
When to Use It
Critical Path Analysis is used for scheduling a set of project activities
often in conjunction with Program Evaluation and Review Technique
(PERT) charts.
Value Added
Critical Path Analysis uses a model to show the logical progression
of events, the key nodes (or intersections) in the process, and the
routes taken in getting from one state to others. Figure 10.5 provides
an example of Critical Path Analysis used to describe how a country
could move from a relatively stable state to different forms of political
instability.

Critical Path Analysis can assist analysts in reducing the influence of several cognitive biases and heuristics, including Satisficing, the Availability Heuristic, and Premature Closure. It also helps analysts resist the intuitive traps of Overinterpreting Small Samples, Relying on First Impressions, and Projecting Past Experiences.

More detailed project models show the various paths of critical activities required to achieve a planned end point of a project. These models can be used to calculate the earliest, most feasible time each step of the process can commence and the latest time it needs to be concluded to avoid making the project longer. The process allows the analyst to determine which activities are critical to achieving the outcome in the minimal amount of time possible and which can be delayed without causing the time frame to be extended. A project can have several parallel critical or near critical paths.
The Method

Define the task. Define the end point of the project.

Define the activities involved. List all the components or activities required to bring the project to
completion.

Calculate the time to do each task. Indicate the amount of time (duration) it will take to perform
the activity. This can be a set amount of time or a range from shortest to longest expected time.

Identify dependencies. Determine which activities are dependent on other activities and the
sequences that must be followed.

Identify pathways. Identify various ways or combinations of activities that would enable
accomplishment of the project.

Estimate time to complete. Calculate how much time would be required for each path taken. This
could be a set amount of time or a range.


Figure 10.5 Political Instability Critical Path Analysis

Identify the optimal pathway. Rank order the various pathways in terms of time required and
select the pathway that requires the least amount of time. Identify which activities are critical to
achieving this goal.

Identify key indicators. Formulate expectations (theories) about potential indicators at each stage
that could either expedite or impede progress toward achieving the end point of the project.

Generate a final product. Capture all the data in a final chart and distribute it to all participants for
comment.

Given the complexity of many projects, software is often used for project management and tracking.
Microsoft Project is purposely built for this task. There are also free charting tools like yEd that are
serviceable alternatives. Using this process, analysts can better recognize important nodes and
associated key indicators.
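The path enumeration and timing arithmetic described in the steps above can be sketched in a few lines of code. The following Python fragment uses an invented activity network; it simply lists every pathway from start to end and ranks them by total duration, as a stand-in for dedicated project-management software.

```python
# A small sketch, assuming Python and an invented activity network.
# Each edge is (from_node, to_node, duration).
edges = [
    ("Start", "A", 2), ("Start", "B", 4),
    ("A", "C", 3),     ("B", "C", 3),
    ("C", "End", 2),
]

def paths(node, graph, end="End"):
    """Enumerate every pathway from node to the end point."""
    if node == end:
        return [[(node, None, 0)]]
    result = []
    for frm, to, dur in graph:
        if frm == node:
            for rest in paths(to, graph, end):
                result.append([(frm, to, dur)] + rest)
    return result

# Rank the pathways by total time required, as described in the method
ranked = sorted(
    (sum(d for _, _, d in p), [step[0] for step in p]) for p in paths("Start", edges)
)
for total, route in ranked:
    print(f"{' -> '.join(route)}: {total} time units")
```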
10.6 DECISION TREES
Decision Trees establish chains of decisions and/or events that
illustrate a comprehensive range of possible future decision points.
They paint a landscape for the decision maker showing the range of
options available, the estimated value or probability of each option,
and the likely implications or outcomes of choosing each option.
When to Use It
Decision Trees can be used to do the following:

Aid decision making by explicitly comparing options.

Create a heuristic model of the decision-making process of the subject or adversary.

Map multiple competing hypotheses about an array of possible actions.

A Decision Tree can help a decision maker resolve a difficult problem, or assess what options an adversary or competitor might choose to implement. In constructing a Decision Tree, analysts need to have a rich understanding of the operating environment in which the decision is being made. This can include knowledge of motives, capabilities, sensitivities to risk, current doctrine, and cultural norms and values.
Value Added
Decision Trees are simple to create and easy to use and interpret. A
single analyst can create a Decision Tree but a group using
brainstorming techniques as discussed in chapter 6 typically yields
better results. Once the tree has been built, it can be posted on a
wall or website and adjusted over time as new information becomes
available. When significant new data are received that add new
branches to the tree or substantially alter the probabilities of the
options, these changes can be inserted into the tree and highlighted
with color to show the decision maker what has changed and how it
may have changed the previous line of analysis.

Both this technique and the Decision Matrix are useful for countering
the impact of cognitive biases and heuristics such as the Anchoring
Effect, Satisficing, and Premature Closure. They also help analysts
avoid falling into the intuitive traps of Relying on First Impressions,
Assuming a Single Solution, and Overrating Behavioral Factors.
The Method
Using a Decision Tree is a fairly simple process involving two steps: (1) building the tree and (2)
calculating the value or probability of each outcome represented on the tree (see Figure 10.6). Follow
these steps:

Draw a square on a piece of paper or whiteboard to represent a decision point.

Draw lines from the square representing a range of options that can be taken.

At the end of the line for each option, indicate whether further options are available (by drawing an oval followed by more lines) or designate an outcome (by drawing a circle followed by one or more lines describing the range of possibilities).

Continue this process along each branch of the tree until all options and outcomes are specified.

Once the tree has been constructed, do the following:

Establish a set of percentages (adding to 100) for each set of lines emanating from each oval.

Multiply the percentages shown along each critical path or branch of the tree and record these
percentages at the far right of the tree. Check to make sure all the percentages in this column
add to 100.

Figure 10.6 Counterterrorism Attack Decision Tree

The most valuable or most probable outcome will have the highest percentage assigned to it, and the
least valuable or least probable outcome will have the lowest percentage assigned to it.
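The percentage arithmetic is straightforward, as the following sketch shows. It is written in Python with invented probabilities for a two-level tree; multiplying along each branch and checking that the far-right column adds to 100 percent mirrors the steps above.

```python
# Minimal sketch, assuming Python and invented probabilities.
branches = {                  # first-level options and their probabilities
    "Bombing": 0.60,
    "Cyber attack": 0.40,
}
outcomes_given_branch = {     # second-level outcomes for each option
    "Bombing":      {"Succeeds": 0.30, "Fails": 0.70},
    "Cyber attack": {"Succeeds": 0.50, "Fails": 0.50},
}

# Multiply the percentages along each branch to get the end-state percentages
end_states = {}
for option, p_option in branches.items():
    for outcome, p_outcome in outcomes_given_branch[option].items():
        end_states[f"{option} -> {outcome}"] = p_option * p_outcome

for path, prob in sorted(end_states.items(), key=lambda kv: -kv[1]):
    print(f"{path}: {prob:.0%}")

assert abs(sum(end_states.values()) - 1.0) < 1e-9   # far-right column adds to 100%
```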
Potential Pitfalls
A Decision Tree is only as good as the reliability of the data,
completeness of the range of options, and validity of the qualitative
probabilities or values assigned to each option. A detailed Decision
Tree can present the misleading impression that the authors have
thought of all possible options or outcomes. For example, options
may be available that the authors of the analysis did not imagine,
just as there might be unintended consequences that the authors did
not anticipate.
Relationship to Other Techniques
A Decision Tree is similar structurally to Critical Path Analysis and
Program Evaluation and Review Technique (PERT) charts. Both of
these techniques, however, only show the activities and connections
that need to be undertaken to complete a complex task. A timeline
analysis (as is often done in support of a criminal investigation) is
essentially a Decision Tree drawn after the fact, showing only the
paths of actual events.
Origins of This Technique
This description of Decision Trees was taken from the Canadian
government’s Structured Analytic Techniques for Senior Analysts
course. The Intelligence Analyst Learning Program developed the
course, and the materials are used here with the permission of the
Canadian government. More detailed discussions of how to build
and use Decision Trees are readily available on the internet, for
example, at the MindTools website and at
https://medium.com/greyatom/decision-trees-a-simple-way-to-
visualize-a-decision-dc506a403aeb.
10.7 DECISION MATRIX
A Decision Matrix helps analysts identify the course of action that
best achieves specified goals or preferences.
When to Use It
The Decision Matrix technique should be used when a decision
maker has multiple options from which to choose, has multiple
criteria for judging the desirability of each option, and/or needs to
find the decision that maximizes a specific set of goals or
preferences. For example, a Decision Matrix can help choose among
various plans or strategies for improving intelligence analysis, select
one of several IT systems one is considering buying, determine
which of several job applicants is the right choice, or consider any
personal decision, such as what to do after retiring.

A Decision Matrix is not applicable to most intelligence analysis, which typically deals with evidence and judgments rather than goals and preferences. It can be used, however, for supporting a decision maker's consideration of alternative courses of action. Analysts can use the tool in a Red Hat Analysis to demonstrate the possible choices a leader might make when faced with a given situation or to help decision makers or clients see the potential effect of a policy choice or business decision. It can also be used at the conclusion of a Foresight workshop to identify optimal strategies for making good scenarios happen or mitigating the impact of bad scenarios.
Value Added
By deconstructing a decision into its component parts, the Decision
Matrix technique makes it easier to identify areas of disagreement or
hidden assumptions and determine their impact on the decision.
Listing all the options or possible choices, the criteria for judging the
options, the weights assigned to each of these criteria, and an
evaluation of the extent to which each option satisfies each of these
criteria makes the analytic process transparent. All the judgments
are available for anyone to see—and challenge—by looking at the
matrix.

Because it is so explicit, the matrix can play an important role in facilitating communication among those who are involved in, or affected by, the decision process. It can be easy to identify areas of disagreement and to determine whether such disagreements have any material impact on the decision. One can also see how sensitive a decision is to changes that might be made to the values assigned to the selection criteria or to the ability of an option to satisfy the criteria. If circumstances or preferences change, it is easy to go back to the matrix, make changes, and calculate the impact of the changes on the proposed decision.

The matrix helps decision makers and analysts avoid the cognitive
traps of Premature Closure, Satisficing, and the Anchoring Effect. It
also helps analysts avoid falling prey to intuitive traps such as
Relying on First Impressions, Assuming a Single Solution, and
Overrating Behavioral Factors.
The Method
Create a Decision Matrix table. To do this, break down the decision problem into two main components
by making two lists—a list of options or alternatives for making a choice and a list of criteria to be used
when judging the desirability of the options. Then follow these steps:

Create a matrix with one column for each option. Write the name of each option at the head of one
of the columns. Add two more blank columns on the left side of the table.

Count the number of selection criteria, and then adjust the table so that it has that many rows plus
two more: one at the top to list the options and one at the bottom to show the scores for each
option. Try to avoid generating a large number of criteria: usually four to six will suffice. In the first
column on the left side, starting with the second row, write in the selection criteria down the left side
of the table, one per row. Listing them roughly in order of importance can sometimes add value but
doing so is not critical. Leave the bottom row blank. (Note: Whether you enter the options across
the top row and the criteria down the far-left column, or vice versa, depends on what fits best on the
page. If one of the lists is significantly longer than the other, it usually works best to put the longer
list in the left-side column.)

Assign weights based on the importance of each of the selection criteria. This can be done in
several ways, but the preferred way is to take 100 percent and divide these percentage points
among the selection criteria. Be sure that the weights for all the selection criteria combined add to
100 percent. Also, be sure that all the criteria are phrased in such a way that a higher weight is
more desirable. (Note: If this technique is being used by an intelligence analyst to support decision
making, this step should not be done by the analyst. The assignment of relative weights is up to the
decision maker.)

Work across the matrix one row at a time to evaluate the relative ability of each of the options to
satisfy each of the selection criteria. For example, assign ten points to each row and divide these
points according to an assessment of the degree to which each of the options satisfies each of the
selection criteria. Then multiply this number by the weight for that criterion. Figure 10.7 is an
example of a Decision Matrix with three options and six criteria.

Add the numbers calculated in the columns for each of the options. If you accept the judgments and
preferences expressed in the matrix, the option with the highest number will be the best choice.

When using this technique, many analysts will discover relationships or opportunities not previously
recognized. A sensitivity analysis may find that plausible changes in some values would lead to a
different choice. For example, the analyst might think of a way to modify an option in a way that makes it
more desirable or might rethink the selection criteria in a way that changes the preferred outcome. The
numbers calculated in the matrix do not make the decision. The matrix is just an aid to help the analyst
and the decision maker understand the trade-offs between multiple competing preferences.
Figure 10.7 Decision Matrix
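The weighted-scoring arithmetic can be illustrated with a short sketch. The Python fragment below uses invented options, criteria, weights, and scores; it multiplies each score by its criterion weight and totals the columns, as described in the steps above.

```python
# A small sketch of the weighted-scoring arithmetic, assuming Python;
# the options, criteria, and numbers are invented for illustration.
criteria_weights = {           # weights expressed as percentages that total 100
    "Cost": 30, "Ease of use": 25, "Reliability": 25, "Vendor support": 20,
}
# For each criterion, 10 points are divided among the options.
scores = {
    "Cost":           {"System A": 5, "System B": 3, "System C": 2},
    "Ease of use":    {"System A": 2, "System B": 5, "System C": 3},
    "Reliability":    {"System A": 3, "System B": 3, "System C": 4},
    "Vendor support": {"System A": 4, "System B": 2, "System C": 4},
}

totals = {}
for criterion, weight in criteria_weights.items():
    for option, points in scores[criterion].items():
        totals[option] = totals.get(option, 0) + weight * points

best = max(totals, key=totals.get)
print(totals)                                   # weighted totals for each option
print("Best option given these judgments:", best)
```

As the method notes, the numbers do not make the decision; changing a weight or a score and rerunning the tally is a quick way to test how sensitive the result is to those judgments.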
Origins of This Technique
This is one of the most commonly used techniques for decision
analysis. Many variations of this basic technique have been called by
many different names, including decision grid, Multiple Attribute
Utility Analysis (MAUA), Multiple Criteria Decision Analysis (MCDA),
Multiple Criteria Decision Making (MCDM), Pugh Matrix, and Utility
Matrix. For a comparison of various approaches to this type of
analysis, see Panos M. Pardalos and Evangelos Triantaphyllou, eds.,
Multi-Criteria Decision Making Methods: A Comparative Study
(Dordrecht, Netherlands: Kluwer Academic Publishers, 2000).
10.8 FORCE FIELD ANALYSIS
Force Field Analysis is a simple technique for listing and assessing
all the forces for and against a change, problem, or goal.

Kurt Lewin, one of the fathers of modern social psychology, believed that all organizations are systems in which the present situation is a dynamic balance between forces driving for change and forces restraining change. For any change to occur, the driving forces must exceed the restraining forces, and the relative strength of these forces is what this technique measures. This technique is based on Lewin's theory.3
When to Use It
Force Field Analysis is useful in the early stages of a project or
research effort when the analyst is defining the issue, gathering data,
or developing recommendations for action. It requires that the
analyst clearly define the problem in all its aspects. The technique
aids in structuring the data and assessing the relative importance of
each of the forces affecting the issue. It can also help the analyst
overcome the natural human tendency to dwell on the aspects of the
data that are most comfortable. An individual analyst or a small team
can use this technique.

In the world of business and politics, the technique can help develop
and refine strategies to promote a particular policy or ensure that a
desired outcome actually occurs. In such instances, it is often useful
to define the various forces in terms of key individuals who need to
be persuaded. For example, instead of listing budgetary restrictions
as a key factor, one would write down the name of the person who
controls the budget. Similarly, Force Field Analysis can help
diagnose what forces and individuals need to be constrained or
marginalized to prevent a policy from being adopted or an outcome
from happening.
Value Added
The primary benefit of Force Field Analysis is that it requires an analyst to consider the forces and
factors (and, in some cases, individuals) that influence a situation. It helps analysts think through the
ways various forces affect the issue and fosters recognition that such forces can be divided into two
categories: driving forces and restraining forces. By sorting the evidence into two categories, the analyst
can delve deeply into the issue and consider less obvious factors.

By weighing all the forces for and against an issue, analysts can better recommend strategies that
would be most effective in reducing the impact of the restraining forces and strengthening the effect of
the driving forces.

Force Field Analysis offers a powerful way to visualize the key elements of the problem by providing a
simple tally sheet for displaying the different levels of intensity of the forces individually and together.
With the data sorted into two lists, decision makers can more easily identify which forces deserve the
most attention and develop strategies to overcome the negative elements while promoting the positive
elements. Figure 10.8 is an example of a Force Field diagram.

An issue is held in balance by the interaction of two opposing sets of forces—those seeking to
promote change (driving forces) and those attempting to maintain the status quo (restraining
forces).
—Kurt Lewin, Resolving Social Conflicts (1948)


Figure 10.8 Force Field Analysis: Removing Abandoned Cars from City Streets
Source: Pherson Associates, LLC, 2019.

Force Field Analysis is a powerful tool for reducing the impact of several of the most common cognitive
biases and heuristics: Premature Closure, Groupthink, and the Availability Heuristic. It also is a useful
weapon against the intuitive traps of Relying on First Impressions, Expecting Marginal Change, and
Assuming a Single Solution.
The Method

Define the problem, goal, trend, or change clearly and concisely.

Brainstorm to identify the forces that will most influence the issue. Consider such topics as needs, resources, costs, benefits, organizations, relationships, attitudes, traditions, and interests. Other forces and factors to consider are social and cultural trends, rules and regulations, policies, values, popular desires, and leadership to develop the full range of forces promoting and restraining the factors involved.

Make one list showing the forces or personalities “driving” change and a second list showing the forces or personalities “restraining” change.

Assign a value (the intensity score) to each driving or restraining force to indicate its strength. Give the weakest intensity scores a value of 1 (weak) and the strongest a value of 5 (strong). The same intensity score can be assigned to more than one force if the analyst considers the factors equal in strength. List the intensity scores in parentheses beside each item.

Examine the two lists to determine if any of the driving forces balance out or neutralize the restraining forces.

Devise a manageable course of action to strengthen the forces that lead to the preferred outcome and weaken the forces that would hinder the desired outcome.

Analysts should keep in mind that the preferred outcome may be either promoting a change or restraining a change. For example, if the problem is increased drug use or criminal activity, the analysis would focus on the factors that would have the most impact on restraining criminal activity or drug use. On the other hand, if the preferred outcome is improved border security, the analyst would highlight the drivers that would be most likely to promote border security if strengthened.
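A simple tally sheet can also be kept in code. The following Python sketch uses hypothetical forces and intensity scores; it sorts each list by intensity rather than summing the scores, consistent with the caution in the Potential Pitfalls discussion that follows.

```python
# A minimal tally-sheet sketch in Python; forces and 1-5 intensity scores are invented.
driving = [
    ("Public pressure to act", 4),
    ("New funding available", 3),
    ("Supportive leadership", 5),
]
restraining = [
    ("Budget approval cycle", 4),
    ("Staff resistance to change", 3),
    ("Legal constraints", 2),
]

def show(label, forces):
    """Print one side of the tally sheet, strongest forces first."""
    print(label)
    for name, intensity in sorted(forces, key=lambda f: -f[1]):
        print(f"  ({intensity}) {name}")

show("Driving forces:", driving)
show("Restraining forces:", restraining)
```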
Potential Pitfalls
When assessing the balance between driving and restraining forces, the authors caution against adding up the scores on each side and concluding that the side with the most points will win. Any
numerical calculation can be easily manipulated by simply adding
more forces or factors to either list to increase its overall score.
Origins of This Technique
Force Field Analysis is widely used in social science and business
research. (A Google search on the term brings up more than
seventy-one million hits.) This version of the technique is found in
Randolph H. Pherson, Handbook of Analytic Tools and Techniques,
5th ed. (Tysons, VA: Pherson Associates, LLC, 2019). To learn more
about the Decision Matrix techniques, visit https://asq.org/quality-
resources/decision-matrix.
10.9 PROS-CONS-FAULTS-AND-FIXES
Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy
ideas. It is intended to offset the human tendency of a group of
analysts and decision makers to jump to a conclusion before
completing a full analysis of the problem.
When to Use It
Making lists of pros and cons for any action is a common approach
to decision making. Finding “Faults” and “Fixes” distinguishes this
technique from a simple “Pros and Cons” approach. Use this
technique to make a quick appraisal of a new idea or a more
systematic analysis of a choice between two options.

One advantage of Pros-Cons-Faults-and-Fixes is its applicability to virtually all types of decisions. Of the various structured techniques for decision making, it is one of the easiest and quickest to use. It requires only a certain procedure for making the lists and discussing them with others to solicit divergent input.

In the business world, the technique can help discover potential vulnerabilities in a proposed strategy to introduce a new product or acquire a new company. By assessing how Pros can be “Faulted,” one can anticipate how competitors might react to a new corporate initiative; by assessing how Cons can be “Fixed,” potential vulnerabilities can be addressed, and major mistakes avoided early in the planning process.
Value Added
It is unusual for a new idea to meet with instant approval. What often
happens in meetings is that a new idea is brought up, one or two
people immediately explain why they don’t like it or believe it won’t
work, and the idea is then dropped. On the other hand, there are
occasions when just the opposite happens. A new idea is
immediately welcomed, and a commitment to support it is made
before the idea is critically evaluated. The Pros-Cons-Faults-and-
Fixes technique helps to offset this human tendency to jump to
conclusions.

The technique first requires a list of Pros and Cons about the new
idea or the choice between two alternatives. If there seems to be
excessive enthusiasm for an idea and a risk of acceptance without
critical evaluation, the next step is to look for “Faults.” A Fault is any
argument that a Pro is unrealistic, won’t work, or will have
unacceptable side effects. On the other hand, if there seems to be a
bias toward negativity or a risk of the idea being dropped too quickly
without careful consideration, the next step is to look for “Fixes.” A
Fix is any argument or plan that would neutralize or minimize a Con,
or even change it into a Pro. In some cases, it may be appropriate to
look for both Faults and Fixes before comparing the two lists and
finalizing a decision.

The Pros-Cons-Faults-and-Fixes technique does not tell an analyst whether the decision or strategy is “good” or not, nor does it help decide whether the Pros or the Cons have the strongest argument. That answer is still based on an analyst's professional judgment. The purpose of the technique is to offset any tendency to rush to judgment. It organizes the elements of the problem logically and helps ensure that the analyst considers both sides of a problem or issue systematically. Documenting the elements of a problem and taking the time to reflect whether all parties would view each element the same way helps the analyst and decision maker see things more clearly and become more objective and emotionally detached from the decision (see Figure 10.9).

The technique militates against classic biases and misapplied heuristics including Groupthink, Satisficing, and the Anchoring Effect. It also protects against Projecting Past Experiences, Overrating Behavioral Factors, and Overinterpreting Small Samples.
The Method
Start by clearly defining the proposed action or choice. Then follow these steps:

List the Pros in favor of the decision or choice. Think broadly and creatively, and list as many
benefits, advantages, or other positives as possible.

List the Cons, or arguments against what is proposed. The Cons usually will outnumber the Pros,
as most humans are naturally critical. It is often difficult to get a careful consideration of a new idea
because it is easier to think of arguments against something new than to imagine how the new idea
might work.

Figure 10.9 Pros-Cons-Faults-and-Fixes Analysis

Review each list and consolidate similar ideas. If two Pros are similar or overlapping, consider
merging them to eliminate any redundancy. Do the same for any overlapping Cons.

If the choice is between two clearly defined options, go through the previous steps for the second
option. If there are more than two options, a technique such as the Decision Matrix may be more
appropriate than Pros-Cons-Faults-and-Fixes.

Decide whether the goal is to demonstrate that an idea will not work or show how best to make it
succeed.

If the goal is to challenge an initial judgment that an idea will not work, take the Cons and see if
they can be “Fixed.” How can their influence be neutralized? Can you even convert them to Pros?
Four possible strategies are to

Propose a modification of the Con that would significantly lower the risk of the Con being a
problem.

Identify a preventive measure that would significantly reduce the chances of the Con being a
problem.

Create a contingency plan that includes a change of course if certain indicators are observed.

Identify a need for further research to confirm the assumption that the Con is a problem.
If the goal is to challenge an initial optimistic assumption that the idea will work and should be
pursued, take the Pros, one at a time, and see if they can be “Faulted.” That means to try to figure
out how the Pro might fail to materialize or have undesirable consequences. This exercise is
intended to counter any wishful thinking or unjustified optimism about the idea. A Pro might be
Faulted in at least three ways:

Identify a reason why the Pro would not work or why the benefit would not be received.

Identify an undesirable side effect that might accompany the benefit.

Identify a need for further research or information gathering to confirm or refute the assumption
that the Pro will work or be beneficial.

A third option is to combine both approaches: to Fault the Pros and Fix the Cons.

Compare the Pros, including any Faults, against the Cons, including the Fixes. Weigh one against
the other and make the choice. The choice is based on your professional judgment, not on any
numerical calculation of the number or value of Pros versus Cons.
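A short sketch can help keep the lists straight. The following Python fragment, with an invented proposal and invented entries, pairs each Pro with any Fault identified and each Con with any Fix, mirroring the lists described in the steps above.

```python
# A minimal bookkeeping sketch, assuming Python; the proposal and all entries are invented.
proposal = "Adopt a new case-management system"

pros = {
    "Faster searches across old cases": "Gains depend on data migration that may slip",   # Fault
    "Single repository for all analysts": None,                                           # no Fault found
}
cons = {
    "High up-front licensing cost": "Phase the rollout to spread cost over two budget years",  # Fix
    "Staff retraining required": None,                                                         # no Fix found
}

print(f"Proposal: {proposal}\n")
for pro, fault in pros.items():
    print(f"Pro: {pro}" + (f"\n  Fault: {fault}" if fault else ""))
for con, fix in cons.items():
    print(f"Con: {con}" + (f"\n  Fix: {fix}" if fix else ""))
```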
Potential Pitfalls
Often when listing the Pros and Cons, analysts will assign weights to
each Pro and Con on the list and then re-sort the lists, with the Pros
or Cons receiving the most points at the top of the list and those
receiving the fewest points at the bottom. This can be a useful
exercise, helping the analyst weigh the balance of one against the
other, but the authors strongly recommend against mechanically
adding up the scores on each side and deciding that the list with the
most points is the right choice. Any numerical calculation can be
easily manipulated by simply adding more Pros or more Cons to
either list to increase its overall score. The best protection against
this practice is simply not to add up the points in either column.
Origins of This Technique
Pros-Cons-Faults-and-Fixes is Richards J. Heuer Jr.’s adaptation of
the Pros-Cons-and-Fixes technique described by Morgan D. Jones
in The Thinker’s Toolkit: Fourteen Powerful Techniques for Problem
Solving (New York: Three Rivers Press, 1998), 72–79. Jones
assumed that humans are “compulsively negative” and that
“negative thoughts defeat creative objective thinking.” Thus, his
technique focused only on Fixes for the Cons. The technique
described here recognizes that analysts and decision makers can
also be biased by overconfidence, in which case Faulting the Pros
may be more important than Fixing the Cons.
10.10 COMPLEXITY MANAGER
Complexity Manager helps analysts and decision makers understand
and anticipate changes in complex systems. As used here, the word
“complexity” encompasses any distinctive set of interactions that are
more complicated than even experienced analysts can think through
solely in their heads.4
When to Use It
As a policy support tool, Complexity Manager can help assess the
chances for success or failure of a new or proposed program or
policy and identify opportunities for influencing the outcome of any
situation. It is also useful in identifying what would have to change to
achieve a specified goal as well as the unintended consequences
from the pursuit of a policy goal.

When trying to foresee future events, both the intelligence and business communities have typically dealt with complexity by doing the following:

Assuming that the future is unpredictable and generating alternative future scenarios and indicators that can be tracked to obtain early warning of which future is emerging.

Developing or contracting for complex computer models and simulations of how the future might play out. This practice is costly in time and money and often of limited practical value to the working analysts.

Making multiple assumptions and relying on the analyst's intuition or expert judgment to generate a best guess of how things will work out.

The use of Complexity Manager is a fourth approach that may be preferable in some circumstances, especially in cases of what one might call “manageable complexity.” It can help decision makers ask better questions and anticipate problems.

Complexity Manager is different from other methods for dealing with complexity, because we believe the average analyst who lacks advanced quantitative skills can use it. There is no need for programs such as Causal Loop Diagramming or Block-Flow Diagramming commonly used in System Dynamics Analysis.
Value Added
We all know that we live in a complex world of interdependent
political, economic, social, and technological systems in which each
event or change has multiple effects. These effects then affect other
elements of the system. Although we understand this, we usually do
not analyze the world in this way, because the multitude of potential
interactions is too difficult for the human brain to track
simultaneously. As a result, analysts often fail to foresee future
problems or opportunities that may be generated by current trends
and developments. Or they fail to foresee the undesirable side
effects of well-intentioned policies.5

Complexity Manager can often improve an analyst's understanding of a complex situation without the time delay and cost required to build a computer model and simulation. The steps in the Complexity Manager technique are the same as the initial steps required to build a computer model and simulation. These are identification of the relevant variables or actors, analysis of all the interactions between them, and assignment of rough weights or other values to each variable or interaction.

Scientists who specialize in the modeling and simulation of complex social systems report that “the earliest—and sometimes most significant—insights occur while reducing a problem to its most fundamental players, interactions, and basic rules of behavior,” and that “the frequency and importance of additional insights diminishes exponentially as a model is made increasingly complex.”6 In many cases the Complexity Manager is likely to provide much, although not all, of the benefit one could gain from computer modeling and simulation, but without the time lag and contract costs. However, if key variables are quantifiable with changes that are trackable over time, it would be more appropriate to use a quantitative modeling technique such as System Dynamics.
Complexity Manager, like most Structured Analytic Techniques, does
not itself provide analysts with answers. It enables analysts to find a
best possible answer by organizing in a systematic manner the
jumble of information about many relevant variables. It helps
analysts comprehend the whole problem, not just one part of the
problem at a time. Analysts can then apply their expertise to make
an informed judgment about the problem. This structuring of the
analyst’s thought process also provides the foundation for a well-
organized report that clearly presents the rationale for each
conclusion. This may also lead to some form of visual presentation,
such as a Concept Map or Mind Map, or a causal or influence
diagram.

It takes time to work through the Complexity Manager process, but it may save time in the long run. This structured approach helps analysts work efficiently without getting mired down in the complexity of the problem. Because it produces a better and more carefully reasoned product, it also saves time during the editing and coordination processes.

The Complexity Manager is helpful in reducing the influence of many cognitive biases and misapplied heuristics, among them Premature Closure, Mental Shotgun, and the Availability Heuristic. It also helps mitigate the impact of several intuitive traps including Relying on First Impressions, Overinterpreting Small Samples, and Overestimating Probability.
The Method
Complexity Manager requires the analyst to proceed through eight specific steps:

1. Define the problem. State the problem (plan, goal, outcome) to be analyzed, including the time
period covered by the analysis.
2. Identify and list relevant variables. Use one of the brainstorming techniques described in chapter
6 to identify the significant variables (factors, conditions, people, etc.) that may affect the situation of
interest during the designated time period. Think broadly to include organizational or environmental
constraints that are beyond anyone’s ability to control. If the goal is to estimate the status of one or
more variables several years in the future, those variables should be at the top of the list. Group the
other variables in some logical manner with the most important variables at the top of the list.
3. Create a Cross-Impact Matrix. Create a matrix in which the number of rows and columns are each
equal to the number of variables plus one header row (see chapter 7). Leaving the cell at the top-
left corner of the matrix blank, enter all the variables in the cells in the row across the top of the
matrix and the same variables in the column down the left side. The matrix then has a cell for
recording the nature of the relationship between all pairs of variables. This is called a Cross-Impact
Matrix—a tool for assessing the two-way interaction between each pair of variables. Depending on
the number of variables and the length of their names, it may be convenient to use the variables’
letter designations across the top of the matrix rather than the full names.

When deciding whether to include a variable, or to combine two variables into one, keep in mind
that the number of variables has a significant impact on the complexity and the time required for an
analysis. If an analytic problem has five variables, there are 20 possible two-way interactions
between those variables. That number increases rapidly as the number of variables increases. With
10 variables, as in Figure 10.10, there are 90 possible interactions. With 15 variables, there are
210. Complexity Manager may be impractical with more than 15 variables.

4. Assess the interaction between each pair of variables. Use a diverse team of experts on the
relevant topic to analyze the strength and direction of the interaction between each pair of
variables. Enter the results in the relevant cells of the matrix. For each pair of variables, ask the
question: Does this variable affect the paired variable in a manner that will increase or decrease the
impact or influence of that variable?

When entering ratings in the matrix, it is best to take one variable at a time, first going down the
column and then working across the row. Note that the matrix requires each pair of variables to be
evaluated twice—for example, the impact of variable A on variable B and the impact of variable B
on variable A. To record what variables impact variable A, work down column A and ask yourself
whether each variable listed on the left side of the matrix has a positive or negative influence, or no
influence at all, on variable A. To record the reverse impact of variable A on the other variables,
work across row A to analyze how variable A impacts the variables listed across the top of the
matrix.

Analysts can record the nature and strength of impact that one variable has on another in two
different ways. Figure 10.10 uses plus and minus signs to show whether the variable being
analyzed has a positive or negative impact on the paired variable. The size of the plus or minus
sign signifies the strength of the impact on a three-point scale. The small plus or minus sign shows
a weak impact; the medium size a medium impact; and the large size a strong impact. If the
variable being analyzed has no impact on the paired variable, the cell is left empty. If a variable
might change in a way that could reverse the direction of its impact, from positive to negative or vice
versa, this is shown by using both a plus and a minus sign.

The completed matrix shown in Figure 10.10 is the same matrix you will see in chapter 11, when
the Complexity Manager technique is used to forecast the future of Structured Analytic Techniques.
The plus and minus signs work well for the finished matrix. When first populating the matrix,
however, it may be easier to use letters (P and M for plus and minus) to show whether each
variable has a positive or negative impact on the other variable with which it is paired. Each P or M
is then followed by a number to show the strength of that impact. A three-point scale is used, with 3
indicating a Strong Impact, 2 Medium, and 1 Weak.

After rating each pair of variables, and before doing further analysis, consider pruning the matrix to
eliminate variables that are unlikely to have a significant effect on the outcome. It is possible to
measure the relative significance of each variable by adding up the weighted values in each row
and column. The sum of the weights in each row is a measure of each variable’s impact on the
entire system. The sum of the weights in each column is a measure of how much each variable is
affected by all the other variables. Those variables most impacted by the other variables should be
monitored as potential indicators of the direction in which events are moving or as potential sources
of unintended consequences.

5. Analyze direct impacts. Document the impact of each variable, starting with variable A. For each
variable, provide further clarification of the description, if necessary. Identify all the variables that
have an impact on that variable with a rating of 2 or 3, and briefly explain the nature, direction, and,
if appropriate, the timing of this impact. How strong is it and how certain is it? When might these
effects be observed? Will the effects be felt only in certain conditions? Next, identify and discuss all
variables on which this variable has an effect with a rating of 2 or 3 (Medium or Strong Impact),
including the strength of the impact and how certain it is to occur. Identify and discuss the
potentially good or bad side effects of these impacts.

6. Analyze loops and indirect impacts. The matrix shows only the direct effect of one variable on
another. When you are analyzing the direct impacts variable by variable, there are several things to
look for and make note of. One is feedback loops. For example, if variable A has a positive impact
on variable B, and variable B also has a positive impact on variable A, this is a positive feedback
loop. Or there may be a three-variable loop, from A to B to C and back to A. The variables in a loop
gain strength from one another, and this boost may enhance their ability to influence other
variables. Another thing to look for is circumstances where the causal relationship between
variables A and B is necessary but not sufficient for something to happen. For example, variable A
has the potential to influence variable B, and may even be trying to influence variable B, but it can
do so effectively only if variable C is also present. In that case, variable C is an enabling variable
and takes on greater significance than it ordinarily would have.


Figure 10.10 Variables Affecting the Future Use of Structured Analysis

All variables are either static or dynamic. Static variables are expected to remain unchanged during
the period covered by the analysis. Dynamic variables are changing or have the potential to
change. The analysis should focus on the dynamic variables, as these are the sources of surprise
in any complex system. Determining how these dynamic variables interact with other variables and
with each other is critical to any forecast of future developments. Dynamic variables can be either
predictable or unpredictable. Predictable change includes established trends or established policies
that are in the process of being implemented. Unpredictable change may be a change in leadership
or an unexpected change in policy or available resources.

7. Draw conclusions. Using data about the individual variables assembled in steps 5 and 6, draw
conclusions about the entire system. What is the most likely outcome, or what changes might be
anticipated during the specified time period? What are the driving forces behind that outcome?
What things could happen to cause a different outcome? What desirable or undesirable side effects
should be anticipated? If you need help to sort out all the relationships, it may be useful to sketch
out by hand a diagram showing all the causal relationships. A Concept Map (chapter 6) may be
useful for this purpose. If a diagram is helpful during the analysis, it may also be helpful to the
reader or customer to include such a diagram in the report.
8. Conduct an opportunity analysis. When appropriate, analyze what actions could be taken to
influence this system in a manner favorable to the primary customer of the analysis.
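
The bookkeeping described in steps 4 and 6 can be done by hand for ten variables, but it is also easy to automate. The following is a minimal sketch in Python, assuming the 0-to-3 rating scale implied above (where 2 and 3 denote Medium and Strong Impact); the three variables and their ratings are hypothetical, used only to illustrate the row sums, column sums, and loop detection.

```python
from itertools import permutations

# Minimal sketch of the Complexity Manager bookkeeping in steps 4 and 6.
# Ratings use a 0-to-3 scale (0 = no impact; 2 and 3 = Medium and Strong Impact).
# The variables and ratings below are hypothetical, for illustration only.
VARIABLES = ["A", "B", "C"]
RATINGS = {
    ("A", "B"): 3,  # hypothetical: A has a strong impact on B
    ("B", "A"): 2,  # hypothetical: B has a medium impact on A
    ("B", "C"): 3,
    ("C", "A"): 1,
}

def rating(source, target):
    """Impact of the source variable on the target variable (0 if no impact)."""
    return RATINGS.get((source, target), 0)

def row_and_column_sums():
    """Step 4: row sums measure each variable's impact on the whole system;
    column sums measure how much each variable is affected by the others."""
    rows = {v: sum(rating(v, o) for o in VARIABLES if o != v) for v in VARIABLES}
    cols = {v: sum(rating(o, v) for o in VARIABLES if o != v) for v in VARIABLES}
    return rows, cols

def feedback_loops():
    """Step 6: find two- and three-variable loops in which every link is rated
    Medium (2) or Strong (3). Each loop is reported once."""
    loops = []
    for a, b in permutations(VARIABLES, 2):
        if a < b and rating(a, b) >= 2 and rating(b, a) >= 2:
            loops.append((a, b))
    for a, b, c in permutations(VARIABLES, 3):
        if a == min(a, b, c) and rating(a, b) >= 2 and rating(b, c) >= 2 and rating(c, a) >= 2:
            loops.append((a, b, c))
    return loops

if __name__ == "__main__":
    rows, cols = row_and_column_sums()
    print("Impact on the system (row sums):", rows)
    print("Affected by the system (column sums):", cols)
    print("Loops with every link rated 2 or higher:", feedback_loops())
```

Variables with the largest column sums are the candidates to monitor as indicators or as sources of unintended consequences, as noted in step 4.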
Relationship to Other Techniques
The same procedures for creating a matrix and coding data can be
applied in using a Cross-Impact Matrix (chapter 7). The difference is
that the Cross-Impact Matrix technique is used only to identify and
share information about the cross-impacts in a group or team
exercise. The goal of Complexity Manager is to build on the Cross-
Impact Matrix to analyze the working of a complex system.

If the goal is to identify alternative scenarios and early warning of future directions of change, especially in a highly uncertain
environment, a form of Foresight analysis rather than Complexity
Manager would be more appropriate. Use a computerized modeling
system such as System Dynamics rather than Complexity Manager
when changes over time in key variables can be quantified or when
there are more than fifteen variables to be considered.7
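
For readers unfamiliar with what a quantitative alternative looks like, here is a deliberately tiny sketch, written in Python rather than a dedicated System Dynamics package, of two coupled variables updated over discrete time steps. The variable names, coefficients, and time horizon are illustrative assumptions, not part of Complexity Manager or of any model cited in this book.

```python
# Minimal, hypothetical sketch of a System Dynamics-style simulation:
# two coupled quantities updated over discrete time steps. All names and
# coefficients are illustrative assumptions, not data from this book.

def simulate(steps: int = 10):
    adoption = 0.2      # fraction of analysts using structured techniques
    support = 0.5       # level of executive support (scaled 0 to 1)
    history = []
    for t in range(steps):
        # Each variable's rate of change depends on the other (cross-impact),
        # which is what distinguishes this from a simple trend projection.
        d_adoption = 0.15 * support * (1 - adoption)   # support drives adoption
        d_support = 0.10 * adoption * (1 - support)    # visible adoption reinforces support
        adoption = min(1.0, adoption + d_adoption)
        support = min(1.0, support + d_support)
        history.append((t + 1, round(adoption, 3), round(support, 3)))
    return history

if __name__ == "__main__":
    for step, adoption, support in simulate():
        print(f"year {step}: adoption={adoption}, support={support}")
```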
Origins of This Technique
Richards J. Heuer Jr. developed Complexity Manager to fill an
important gap in structured techniques available to the average
analyst. It is a simplified version of older quantitative modeling
techniques, such as System Dynamics.
NOTES

1. See Graham T. Allison and Philip Zelikow, Essence of Decision: Explaining the Cuban Missile Crisis, 2nd ed. (New York: Addison-Wesley, 1999).

2. Heinz Weihrich, “The TOWS Matrix—A Tool for Situational Analysis,” Long Range Planning 15, no. 2 (April 1982): 54–66.

3. Kurt Lewin, Resolving Social Conflicts: Selected Papers on Group Dynamics (New York: Harper & Row, 1948).

4. Seth Lloyd, a specialist in complex systems, has listed thirty-two definitions of complexity. See Seth Lloyd, Programming the Universe (New York: Knopf, 2006).

5. Dietrich Dörner, The Logic of Failure (New York: Basic Books, 1996).

6. David S. Dixon and William N. Reynolds, “The BASP Agent-Based Modeling Framework: Applications, Scenarios, and Lessons Learned,” Proceedings of the 36th Annual Hawaii International Conference on System Sciences (February 2003), https://www.academia.edu/797988/The_BASP_Agent-Based_Modeling_Framework_Applications_Scenarios_and_Lessons_Learned_with_William_N._Reynolds. Also see Donella H. Meadows and J. M. Robinson, The Electronic Oracle: Computer Models and Social Decisions (New York: Wiley, 1985).

7. John Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World (New York: McGraw-Hill, 2000).
Descriptions of Images and Figures
Back to Figure

Hazard or opportunity, cause or trend consisting of preventive barrier or accelerator, escalation factor
consisting of EF barrier or accelerator for their recovery, and consequences with recovery barrier or
amplifier leads to risk event or opportunity event. Causes are anticipatory and consequences are
reactive.

Back to Figure

Stimulus leads to response, which leads to different forms of instability. The sources of grievances and
conflict are domestic or international, intellectual, social, political, economic, and military. This is the
stimulus. Opposition’s ability to articulate grievance or mobilize discontent leads to government or
society’s capacity to respond. This is the response. Responses include legitimacy or leadership,
resource availability or responsiveness, institutional strength, and monopoly of coercive force. The response leads to grievance and conflict, and different forms of instability. All and legitimate instability is peaceful political change. Elite and illegitimate instability are conspiracy or coups d'état, internal war or insurgencies, and group-on-group violence. Mass and illegitimate instability are turmoil, internal war or insurgencies, and group-on-group violence.

Back to Figure

A suspected terrorist is either arrested or not arrested. If arrested, they are either interrogated or released. If interrogated, the plot to attack is described, or there is no evidence of a plot. If the terrorist is not arrested, they are either under no surveillance or under surveillance. If they are under surveillance, there is either no evidence of plotting or they meet with a known terrorist. If there is a meeting, the options are to place a covert agent, who either discovers plans to attack or gets killed, or to arrest all suspects, who are then detained in jail or released on bail.

Back to Figure

Note: The number value and size of the type indicate the significance of each argument. With increase
in the number value, the size of the type increases. The arguments corresponding to each number value
are as follows. 1: The definition of “abandoned cars” is unclear to the public. 2: The public climate favors
cleaning up the city. It is difficult to locate abandoned cars. Health Department has cited old and
abandoned vehicles as potential health hazards. 3: A procedure is needed to verify a car’s status and
notify owners. Advocacy groups have expressed interest. The owners of old cars feel threatened.
Locating and disposing of cars will be expensive. 4: The City Council supports the plan. A location is
needed to put the abandoned cars once identified. 5: Local auto salvage yards will remove cars for free.
The public service director supports the plan.

Back to Figure

Reading the matrix: The cells in each row show the impact of the variable represented by that row on
each of the variables listed across the top of the matrix. The cells in each column show the impact of
each variable listed down the left side of the matrix on the variable represented by the column.
Combination of positive and negative means impact could go either direction. Empty cell equals no
impact.

The matrix lists ten variables, A through J, down the left side and repeats them (abbreviated by letter) across the top: A, Increased use of Structured Analytic Techniques; B, Executive support for collaboration and Structured Analytic Techniques; C, Availability of virtual technologies; D, Generational change; E, Availability of analytic tradecraft support; F, Change in budget for analysis; G, Change in client preferences for collaborative, digital products; H, Research on effectiveness of Structured Analytic Techniques; I, Analysts' perception of time pressure; J, Lack of openness to change among senior analysts or managers. Each cell rates the impact of the row variable on the column variable as Nil, Weak, Medium, or Strong, and as positive, negative, or a combination of positive and negative.
CHAPTER 11 THE FUTURE OF
STRUCTURED ANALYTIC
TECHNIQUES

11.1 Limits of Empirical Analysis

11.2 Purpose of Structured Techniques

11.3 Projecting the Trajectory of Structured Techniques

11.3.1 Structuring the Data

11.3.2 Identifying Key Drivers

11.4 Role of Structured Techniques in 2030

Since the term Structured Analytic Techniques was first introduced in 2005, a persistent and unresolved debate has centered on the
question of their effectiveness in generating higher-quality analysis.
Testing the value of these techniques in the U.S. Intelligence
Community has been done largely through the process of using
them. That experience has certainly been successful, but it has not
been enough to convince skeptics reluctant to change their long-
ingrained habits. Nor has it persuaded academics accustomed to
looking for hard, empirical evidence. Similar questions have arisen
regarding the use of Structured Analytic Techniques in business,
medicine, and other fields that consistently deal with probabilities
and uncertainties rather than hard data.
11.1 LIMITS OF EMPIRICAL ANALYSIS
A few notable studies have evaluated the efficacy of structured
techniques. A RAND study in 2016, for example, found that
intelligence publications using the techniques generally addressed a
broader range of potential outcomes and implications than did other
analyses, but that more controlled experiments were needed to
provide a complete picture of their contribution to intelligence
analysis.1

Coulthart, in his 2015 doctoral dissertation, “Improving the Analysis of Foreign Affairs: Evaluating Structured Analytic Techniques,”
evaluates the use of twelve core techniques in the U.S. Intelligence
Community.2 His study found moderate to strong evidence affirming
the efficacy of using Analysis of Competing Hypotheses,
Brainstorming, and Devil’s Advocacy. Other findings were that face-
to-face collaboration decreases creativity, weighting evidence
appears to be more valuable than seeking disconfirming evidence,
and conflict improves the quality of analysis.

Chang et al., in a 2018 article, “Restructuring Structured Analytic Techniques in Intelligence,” identify two potential problems that could
undercut the effectiveness of structured techniques.3 First,
Structured Analytic Techniques treat all biases as unipolar when, in
fact, many are bipolar. For example, analysts using structured
techniques to mitigate the impact of Confirmation Bias, which would
make them too confident in the soundness of their key judgments,
could trigger the opposing problem of under-confidence. Second,
many structured techniques involve the process of decomposing a
problem into its constituent parts. No one has tested, however,
whether the process of decomposition is adding or subtracting noise
from the analytic process. They suspect that, on balance,
decomposition is most likely to degrade the reliability of analytic
judgments. As they conclude—and the authors agree—more
sustained scientific research is needed to determine whether these
and other shortcomings pose problems when evaluating the utility of
structured techniques in improving analytic reasoning.

Efforts to conduct such qualitative studies, however, confront several obstacles not usually encountered in other fields of study. Findings
from empirical experiments can be generalized to apply to
intelligence analysis or any other specific field only if the test
conditions match the conditions in which the analysis is conducted.
Because so many variables can affect the research results, it is
extremely difficult to control for all, or even most, of them. These
variables include the purpose for which a technique is used,
implementation procedures, context of the experiment, nature of the
analytic task, differences in analytic experience and skill, and
whether the analysis is done by a single analyst or as a group
process. All of these variables affect the outcome of any experiment
that ostensibly tests the utility of a Structured Analytic Technique.
Many of these same challenges are present in, and should be factored into, efforts by intelligence organizations to formalize processes for evaluating the quality of papers produced by their analysts.

Two specific factors raise questions about the practical feasibility of valid empirical testing of Structured Analytic Techniques as used in
intelligence analysis. First, these techniques are commonly used as
a group process. That would require testing with groups of analysts
rather than individual analysts. Second, intelligence deals with
issues of high uncertainty. Former Central Intelligence Agency
director Michael Hayden wrote that because of the inherent
uncertainties in intelligence analysis, a record of 70 percent accuracy
is a good performance.4 If this is true, a single experiment testing the
use of a structured technique that leads to a wrong answer does not
prove the lack of effectiveness of the technique. Multiple repetitions
of the same experiment would be needed to evaluate how often the
analytic judgments were accurate.
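
The arithmetic behind this point is easy to illustrate. The sketch below is a back-of-the-envelope calculation, not a result from any study cited here; it assumes a hypothetical technique whose judgments are correct 70 percent of the time and uses the exact binomial distribution to show why one failed trial says little while twenty trials say much more.

```python
from math import comb

# Back-of-the-envelope illustration: even a technique that is right 70 percent
# of the time produces a wrong answer in a single trial 30 percent of the time,
# so one failed experiment says little. Repetition narrows the picture.
# The 0.7 accuracy figure is hypothetical, used only for illustration.

def prob_at_most(k: int, n: int, p: float) -> float:
    """Exact binomial probability of observing k or fewer correct judgments
    in n independent trials when the true accuracy is p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

if __name__ == "__main__":
    # A single wrong answer: probability 0.30 even for a genuinely useful technique.
    print("P(wrong in 1 trial | accuracy 0.7) =", round(1 - 0.7, 2))
    # With 20 repetitions, fewer than half correct would be strong evidence
    # against 70 percent accuracy, because that outcome is very unlikely.
    print("P(9 or fewer correct in 20 trials | accuracy 0.7) =",
          round(prob_at_most(9, 20, 0.7), 4))
```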

Many problems could largely be resolved if experiments were conducted with intelligence analysts using techniques as they are
used every day to analyze typical intelligence issues.5 But even if
such conditions were met, major obstacles to meaningful
conclusions would remain. Since many Structured Analytic
Techniques can be used for several purposes, research findings on
the effectiveness of these techniques can be generalized and
applied to the intelligence community only if the techniques are used
in the same way and for the same purpose as actually used by
intelligence analysts.

Philip Tetlock, for example, in his pathbreaking book, Expert Political Judgment, describes two experiments that show scenario
development may not be an effective analytic technique. The
experiments compared judgments on a political issue before and
after the test subjects prepared scenarios to try to gain a better
understanding of the issues.6 The experiments showed that the
predictions by both experts and nonexperts were more accurate
before generating the scenarios; in other words, the generation of
scenarios actually reduced the accuracy of their predictions. Several
experienced analysts have separately cited this finding as evidence
that scenario development may not be a useful method for
intelligence analysis.7

However, Tetlock's conclusions should not be generalized to apply to intelligence analysis, as his experiments tested scenarios as a
predictive tool. The intelligence community does not use scenarios
for prediction. Scenario development is best used to describe
several outcomes or futures that a decision maker should consider
because intelligence is unable to predict a single outcome with
reasonable certainty. For most decision makers, the most important
product generated by Foresight analysis is the identification of a set
of key drivers that will likely determine how the future will evolve.
These drivers then can be leveraged by the decision maker to
mitigate harmful scenarios and facilitate the emergence of beneficial
scenarios. Two other often-cited benefits are the discovery of
emerging trends and the identification of indicators and milestones
for each scenario. The indicators and milestones can then be
monitored to gain early warning of the direction in which events
seem to be heading. Tetlock’s experiments did not use scenarios in
this way.
11.2 PURPOSE OF STRUCTURED
TECHNIQUES
We believe the easiest way to assess the value of Structured
Analytic Techniques is to look at the purpose for which a technique is
used. Once that is established, the next step is to determine whether
it achieves that purpose, or some better way exists to achieve that
same purpose.

A key distinction in this debate is that Structured Analytic Techniques are designed primarily to help analysts think, not to predict what will
occur in the future. The authors often describe structured techniques
as “thinking tools” analysts can use to instill more rigor, structure,
and imagination in the analysis. Most analysts report that the
techniques help them avoid—or at least mitigate—the impact of
cognitive bias, misapplied heuristics, and intuitive traps, thereby
reducing error rates. Structured Analytic Techniques also spur
analysts to reframe issues and discover “unknown unknowns” that
they otherwise would have missed.

For these reasons, basing an analysis of the value of structured techniques on how accurately they can be used to predict the future
would be applying an incomplete and misleading standard for the
evaluation. The better questions to test would be, Did the analysts
correctly frame the issue? Was the analysis done with rigor and
transparency? Were incorrect mental mindsets identified and
corrected? Did the analysis explore both challenges and
opportunities for the decision maker? and Did use of the techniques
save the analysts time over time? Moreover, applying a standard of
predictive accuracy could be highly misleading if the analyst
accurately identified an emerging problem and policymakers took
action to prevent it from occurring. The function of a good warning
analyst is to alert decision makers to a developing problem in time
for them to prevent a prediction from becoming true.
This book has six chapters of techniques. Each Structured Analytic
Technique has what is called face validity, which means there is
reason to expect that it will help to mitigate or avoid a type of
problem that sometimes occurs when one is engaged in doing
analysis. The following paragraphs provide examples of face validity
or how structured techniques help analysts do a better job.

A great deal of research in human cognition during the past sixty years shows the limits of working memory and suggests that one can
manage a complex problem most effectively by breaking it down into
smaller pieces. That is, in fact, the dictionary definition of “analysis,”8
and that is what techniques that make lists, trees, matrices,
diagrams, maps, and models do. It is reasonable to expect,
therefore, that an analyst who uses such tools for organization or
visualization of information will do a more thorough job than an
analyst who does not.

Similarly, much empirical evidence suggests that the human mind tends to see what it is looking for and often misses what it is not
looking for (i.e., Confirmation Bias and Ignoring Inconsistent
Evidence). Given this cognitive limitation, it seems useful to develop
scenarios and indicators of possible future events for which
intelligence needs to provide early warning. These techniques can
help collectors target needed information. For analysts, they prepare
the mind to recognize the early signs of significant change.

“Satisficing” is the term Herbert Simon invented to describe the act of selecting the first identified alternative that appears “good enough”
rather than evaluating all the likely alternatives and identifying the
best one (see the introduction to chapter 6). Satisficing is a common
analytic shortcut that people use in making everyday decisions when
there are multiple possible answers. It saves a lot of time when
making judgments or decisions of little consequence, but it is ill-
advised when making judgments or decisions with significant
consequence for national security. It seems self-evident that an
analyst who deliberately identifies and analyzes alternative
hypotheses before reaching a conclusion is more likely to find a
better answer than an analyst who does not.

Given the necessary role that assumptions play when making intelligence judgments based on incomplete and ambiguous
information, an analyst who uses the Key Assumptions Check is
likely to do a better job than an analyst who makes no effort to
identify and validate assumptions. Extensive empirical evidence
suggests that reframing a question helps to unblock the mind. It
helps one to see other perspectives.

The empirical research on small-group performance is virtually unanimous in emphasizing that groups make better decisions when
their members bring to the table a diverse set of ideas, experiences,
opinions, and perspectives.9 Looking at these research findings, one
may conclude that the use of any structured technique in a group
process is likely to improve the quality of analysis, as compared with
analysis by a single individual using that technique or by a group that
does not use a structured process for eliciting divergent ideas or
opinions.

The experience of U.S. Intelligence Community analysts using the Analysis of Competing Hypotheses (ACH) software and similar
computer-aided analytic tools provides anecdotal evidence to
support this conclusion. One of the goals in using ACH is to gain a
better understanding of the differences of opinion with other analysts
or between analytic offices.10 The creation of an ACH matrix requires
step-by-step discussion of evidence and arguments being used and
deliberation about how these are interpreted as either consistent or
inconsistent with each of the hypotheses. This process takes time,
but many analysts believe it is time well spent; they say it saves
them time in the long run once they have learned the technique.

Our experience teaching ACH to intelligence analysts illustrates how structured techniques can elicit significantly more divergent
information when used as a group process. Intelligence and law
enforcement analysts consider this group discussion the most
valuable part of the ACH process. Use of structured techniques does
not guarantee a correct judgment, but this anecdotal evidence
suggests that these techniques make a significant contribution to
higher-quality analysis.
11.3 PROJECTING THE TRAJECTORY OF
STRUCTURED TECHNIQUES
Intelligence analysts and managers are continuously looking for
ways to improve the quality of their analysis. One of these paths is
the increased use of Structured Analytic Techniques. This book is
intended to encourage and support that effort.

This final chapter employs a new technique called Complexity Manager (chapter 10) to instill rigor in addressing a complex problem
—the future of Structured Analytic Techniques. Richards J. Heuer Jr.
developed the Complexity Manager as a simplified combination of
two long-established futures analysis methods, Cross-Impact Matrix
and System Dynamics. It is designed for analysts who have not been
trained in the use of advanced, quantitative techniques.

We apply the Complexity Manager technique specifically to address the following questions:

What is the prognosis for the use of Structured Analytic Techniques in 2030? Will the use of Structured Analytic
Techniques gain traction and be used with greater frequency by
intelligence agencies, law enforcement, the business sector, and
other professions? Or will its use remain at current levels? Or
will it atrophy?

What forces are spurring the increased use of structured analysis, and what opportunities are available to support these
forces?

What obstacles are hindering the increased use of structured analysis, and how might these obstacles be overcome?
In this chapter, we suppose that it is now the year 2030 and the use
of Structured Analytic Techniques is widespread. We present our
vision of what has happened to make this a reality and how the use
of structured techniques has transformed the way analysis is done—
not only in intelligence but across a broad range of professional
disciplines.
11.3.1 Structuring the Data
The analysis for this future of Structured Analytic Techniques case study starts with a brainstormed list
of variables that will influence—or be impacted by—the use of Structured Analytic Techniques in the
coming years. The first variable listed is the target variable, followed by nine other variables related to it.

A. Increased use of Structured Analytic Techniques
B. Executive support for collaboration and Structured Analytic Techniques
C. Availability of virtual collaborative technology platforms
D. Generational change of analysts
E. Availability of analytic tradecraft support and mentoring
F. Change in budget for analysis
G. Change in client preferences for collaborative, digital products
H. Research on effectiveness of Structured Analytic Techniques
I. Analysts’ perception of time pressure
J. Lack of openness to change among senior analysts and mid-level managers

The next step in Complexity Manager is to put these ten variables into a Cross-Impact Matrix. This is a
tool for the systematic description of the two-way interaction between each pair of variables. Each pair is
assessed using the following question: Does this variable affect the paired variable in a manner that will
contribute to increased or decreased use of Structured Analytic Techniques in 2030? The completed
matrix is shown in Figure 11.3.1. This is the same matrix that appears in chapter 10.


Figure 11.3.1 Variables Affecting the Future Use of Structured Analysis

The goal of this analysis is to assess the likelihood of a substantial increase in the use of Structured
Analytic Techniques by 2030, while identifying any side effects that might be associated with such an
increase. That is why increased use of structured techniques is the lead variable, variable A, which
forms the first column and top row of the matrix. The letters across the top of the matrix are
abbreviations of the same variables listed down the left side.

To fill in the matrix, the authors started with column A to assess the impact of each of the variables listed
down the left side of the matrix on the frequency of use of structured analysis. This exercise provides an
overview of what likely are the most important variables that will impact positively or negatively on the
use of structured analysis. Next, the authors completed row A across the top of the matrix. This shows
the reverse impact—the impact of increased use of structured analysis on the other variables listed
across the top of the matrix. Here one identifies the second-tier effects. Does the growing use of
structured techniques affect any of these other variables in ways that one needs to be aware of?11
The remainder of the matrix was then completed one variable at a time, while identifying and making
notes on potentially significant secondary effects. A secondary effect occurs when one variable
strengthens or weakens another variable, which in turn has an effect on or is affected by Structured
Analytic Techniques.
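
The scan for secondary effects can also be mechanized. The sketch below is illustrative only; it reuses the variable letters from the list above, but the ratings are invented, not the values recorded in Figure 11.3.1.

```python
# Hypothetical sketch of the secondary-effect scan described above: look for
# chains X -> Y -> A in which both links are rated Medium (2) or Strong (3),
# where A is the target variable (increased use of structured techniques).
# The ratings below are invented for illustration; the real values are in
# Figure 11.3.1.
TARGET = "A"
RATINGS = {
    ("B", "A"): 3,  # hypothetical: executive support encourages use of the techniques
    ("C", "B"): 2,  # hypothetical: virtual technologies reinforce executive support
    ("H", "B"): 2,  # hypothetical: research findings shape executive support
    ("F", "E"): 2,  # hypothetical: budget changes affect tradecraft support
    ("E", "A"): 3,  # hypothetical: tradecraft support encourages use of the techniques
}

def secondary_effects(target: str):
    """Return (X, Y, target) chains where X affects Y and Y affects the target,
    both with a rating of 2 or 3."""
    chains = []
    for (x, y), first_link in RATINGS.items():
        second_link = RATINGS.get((y, target), 0)
        if y != target and x != target and first_link >= 2 and second_link >= 2:
            chains.append((x, y, target))
    return chains

if __name__ == "__main__":
    for x, y, a in secondary_effects(TARGET):
        print(f"{x} -> {y} -> {a}")
```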
11.3.2 Identifying Key Drivers
A rigorous analysis of the interaction of all the variables suggests
several conclusions about the future of structured analysis. The
analysis focuses on those variables that (1) are changing or that
have the potential to change and (2) have the greatest impact on
other significant variables.

The principal potential positive drivers of the system are the extent to
which (1) senior executives support a culture of collaboration and (2)
the work environment supports the development of virtual
collaborative communities and technologies. These two variables
provide strong support to structured analysis through their
endorsement of and support for collaboration. Structured analysis
reinforces them in turn by providing an optimal process through
which collaboration occurs.

A third variable, the new generation of analysts accustomed to social networking, is strongly supportive of information sharing and
collaboration and therefore indirectly supportive of growth in the use
of Structured Analytic Techniques. The impact of the new generation
of analysts is important because it means time is not neutral. In other
words, with the new generation, time is now on the side of change.
The interaction of these three variables, all reinforcing one another
and moving in the same direction, signals that the future of
structured techniques is most likely to be positive.

Two other variables are likely to play a major role because they have
the most cross-impact on other variables as shown in the matrix.
These two variables represent opportunities either to facilitate the
change or obstacles that need to be managed. The two variables are
(1) the level of support for analytic tradecraft cells, on-the-job
mentoring, and facilitators to assist analysts and analytic teams in
using structured techniques12 and (2) the results of ongoing research
on the effectiveness of structured techniques.
The speed and ease of the change in integrating structured
techniques into the analytic process will be significantly
influenced by the availability of senior mentors and facilitators
who can identify which techniques to use and explain how to
use them correctly.

Ongoing research into the viability of structured techniques and the best ways to harness their potential could provide strong validation for their use or undercut the prima facie case for their use. Research that addresses some of the obstacles identified earlier in this chapter could be helpful in optimizing their use and in countering opposition from those who are hesitant to use the techniques.

The odds seem to favor continuing, fundamental change in how analysis is done. However, any change is far from guaranteed,
because the outcome depends on two assumptions, either of which,
if wrong, could preclude the desired outcome.

One assumption is that funding for analysis during the next decade will be adequate to provide an environment conducive to
the expanded use of Structured Analytic Techniques. Increased
training of managers as well as analysts in the proper use of
structured techniques is important, but the provision of online
programs and informal “brown bag” sessions to reinforce what
was taught as well as the availability of knowledgeable mentors
and facilitators is even more important. In addition, funding is
needed to establish and sustain analytic tradecraft and
collaboration support cells, facilitation support, mentoring
programs, and research on the effectiveness of Structured
Analytic Techniques.

A second assumption is that senior executives will have the wisdom to allocate the necessary personnel and resources to
create robust collaboration communities within and external to
their organizational units. A critical requirement is the
introduction and institutionalization of inventive and effective
incentives to foster the broader use of structured techniques in
support of their analysis.
11.4 ROLE OF STRUCTURED
TECHNIQUES IN 2030
Imagine it is now 2030. Our assumptions have turned out to be
accurate, and collaboration in the use of Structured Analytic
Techniques is widespread. What has happened to make this
outcome possible? How has it transformed the way analysis is done
in 2030? This is our vision of what could be happening by that date.

The use of analytic teams and virtual collaborative platforms has been growing rapidly over the past decade. Analysts working in
small groups, often from different locations, have increasingly
embraced digital collaborative systems as user-friendly vehicles to
produce joint papers with colleagues working on related topics in
other offices. Analysts in different geographic locations arrange to
meet from time to time, but most of the ongoing interaction is
accomplished using asynchronous and synchronous computer
applications and systems.

Analysts, with a click of the mouse or a simple voice command, can find themselves participating in a virtual meeting conferring with
experts from multiple geographic locations. They can post their
papers—or more likely a digital product—for others to review and
edit in their virtual world, call up an internet site that merits
examination, or project what they see on their own computer screens
so that others can view their presentation or how they are using a
specific software tool. Analysts or small teams can use virtual,
collaborative platforms to be mentored “on demand” by a senior
analyst on the use of a particular technique or by an instructor who
can teach a structured techniques workshop without requiring
anyone to leave his or her cubicle.

Structured Analytic Techniques have become a primary vehicle by which information is shared as analysts work together to deliver a
high-quality product. Analysts readily employ a basic set of
techniques and critical thinking skills at the beginning of most
projects to establish a shared foundation for their communication
and work together. They routinely use structured brainstorming
techniques to identify key drivers and relevant variables to be
tracked and considered, a Cross-Impact Matrix as a basis for
discussion and learning from one another about the relationships
between key variables, and a Key Assumptions Check to review and
critically assess the assumptions that will provide the foundation for
the analysis. They usually incorporate the results of these exercises
as dropdowns in their tablet presentations.

The techniques provide a common base of knowledge and understanding about a topic of interest. They also help reveal, at an
early stage of the production process, potential differences of
opinion, gaps in the available information, what graphics to use, and
where best to find the data and tap the expertise of people most
knowledgeable about various aspects of the project.

By 2030, most social media service providers have established large analytic units to vet what is posted on their sites and combat the
proliferation of Digital Disinformation or “Fake News.” Many of these
units have started to employ structured techniques to instill more
rigor into their analytic processes and anticipate new ways
perpetrators of Digital Disinformation could thwart their curation
processes.

By 2030, all the principal elements of the U.S. Intelligence Community, many foreign intelligence services, and a growing
number of business analysis units have created analytic tradecraft or
collaboration support cells—or support mechanisms—in their
analytic components. Academic institutions now routinely teach
courses on critical thinking, cognitive bias, Structured Analytic
Techniques, combating Digital Disinformation, and using structured
techniques to better exploit Big Data. Analysts with experience in
using structured techniques routinely help other analysts overcome
their uncertainty when using a technique for the first time. They help
others decide which techniques are most appropriate for their
particular needs, provide oversight when needed to ensure that a
technique is being used appropriately, and teach other analysts
through example and on-the-job training how to effectively facilitate
team or group meetings.

In 2030, the process for coordinating analytic papers and assessments is dramatically different. Formal coordination prior to
publication is now a formality. Collaboration among interested parties
takes place from the start as papers are initially conceptualized and
relevant information is collected and shared. Basic critical thinking
techniques such as the use of AIMS (Audience, Issue, Message,
and Storyline) to describe the key components of an analyst’s project
and the Getting Started Checklist are used regularly. Differences of
opinion are surfaced and explored early in the preparation of an
analytic product. Analytic techniques, such as Premortem Analysis
and Structured Self-Critique, have become a requirement to
bulletproof analytic products. Several Adversarial Collaboration
techniques have become ingrained into the culture as the most
effective mechanisms to resolve disagreements before final
coordination and delivery of an analytic product.

Exploitation of outside knowledge—especially cultural, environmental, and technical expertise—has increased significantly.
Outside-In Thinking, Structured Analogies, and the Delphi Method
are used extensively to obtain ideas, judgments, or forecasts
electronically from geographically dispersed panels of experts.
Almost all analytic units have a dedicated unit for conducting
Foresight analysis that (1) identifies key drivers to help frame basic
lines of analysis and (2) generates a set of alternative scenarios that
can be tracked using validated indicators to anticipate new
challenges and exploit new opportunities.

By 2030, the use of Structured Analytic Techniques has expanded across the globe. All U.S. intelligence agencies, all intelligence
services in Europe, and many services in other parts of the world
have incorporated structured techniques into their analytic process.
Over one hundred Fortune 500 companies with competitive
intelligence units routinely employ structured techniques, including
Foresight, Indicators, and Decision Support tools. A growing number
of hospitals have incorporated selected structured techniques,
including the Key Assumptions Check, Differential Diagnosis (their
version of Analysis of Competing Hypotheses), Indicators, and
Premortem Analysis into their analytic processes. Many businesses
have concluded that they can no longer afford multimillion-dollar
mistakes that would have been avoided by embracing competitive
intelligence processes in their business practices.

One no longer hears the old claim that there is no proof that the use
of Structured Analytic Techniques improves analysis. The
widespread use of structured techniques in 2030 is partially
attributable to the debunking of that claim. Several European Union
and other foreign studies involving a sample of reports prepared with
the assistance of several structured techniques and a comparable
sample of reports where structured techniques had not been used
showed that the use of structured techniques had distinct value.
Researchers interviewed the authors of the reports, their managers,
and the clients who received these reports. The studies confirmed
that reports prepared with the assistance of the selected structured
techniques were more thorough, provided better accounts of how the
conclusions were reached, and generated greater confidence in the
conclusions than did reports for which such techniques were not
used. The findings were replicated by several government
intelligence services that use the techniques, and the results were
sufficiently convincing to quiet most of the doubters.

The collective result of all these developments is an analytic climate in 2030 that produces more rigorous, constructive, and informative
analysis—a development that decision makers have noted and are
making use of as they face increasingly complex and interrelated
policy challenges. As a result, policymakers are increasingly
demanding analytic products that identify key drivers, consider
multiple scenarios, and challenge key assumptions and the
conventional wisdom. The key conclusions generated by techniques
such as Quadrant Crunching™ and What If? Analysis are commonly
discussed among analysts and decision makers alike. In some
cases, decision makers or their aides observe or participate in
Foresight workshops using structured techniques. These interactions
help both clients and analysts understand the benefits and limitations
of using collaborative processes to produce analysis that informs
and augments policy deliberations.

This vision of a robust and policy-relevant analytic climate in 2030 is achievable. But it is predicated on the willingness and ability of
senior managers in the intelligence, law enforcement, and business
communities to foster a collaborative environment that encourages
the use of Structured Analytic Techniques. Achieving this goal will
require a relatively modest infusion of resources for analytic
tradecraft centers, facilitators, mentors, and methodology
development and testing. It will also require patience and a
willingness to tolerate some mistakes as analysts become familiar
with the techniques, collaborative software, and working in a virtual,
digital landscape. We believe the outcome will be worth the risk
involved in charting a new analytic frontier.
NOTES

1. Stephen Artner, Richard S. Girven, and James B. Bruce, Assessing the Value of Structured Analytic Techniques in the U.S. Intelligence Community (Santa Monica, CA: RAND Corporation, 2016), https://www.rand.org/pubs/research_reports/RR1408.html

2. Stephen Coulthart, “Improving the Analysis of Foreign Affairs: Evaluating Structured Analytic Techniques” (PhD diss., University of Pittsburgh, 2015), http://d-scholarship.pitt.edu/26055/

3. Welton Chang et al., “Restructuring Structured Analytic Techniques in Intelligence,” Intelligence and National Security 33, no. 3 (2018): 337–56, https://doi.org/10.1080/02684527.2017.1400230

4. Paul Bedard, “CIA Chief Claims Progress with Intelligence Reforms,” U.S. News and World Report, May 16, 2008, www.usnews.com/articles/news/2008/05/16/cia-chief-claims-progress-with-intelligence-reforms.html

5. One of the best examples of research that does meet this comparability standard is Robert D. Folker, Jr., Intelligence Analysis in Theater Joint Intelligence Centers: An Experiment in Applying Structured Methods (Washington, DC: Joint Military Intelligence College, 2000).

6. Philip Tetlock, Expert Political Judgment (Princeton, NJ: Princeton University Press, 2005), 190–202.

7. These judgments have been made in public statements and in personal communications to the authors.

8. Merriam-Webster Online, www.m-w.com/dictionary/analysis

9. Charlan J. Nemeth and Brendan Nemeth-Brown, “Better Than Individuals? The Potential Benefits of Dissent and Diversity for Group Creativity,” in Group Creativity: Innovation through Collaboration, eds. Paul B. Paulus and Bernard A. Nijstad (New York: Oxford University Press, 2003), 63–64.

10. This information was provided by a senior U.S. Intelligence Community educator in December 2006 and has been validated subsequently on many occasions in projects done by government analysts.

11. For a more detailed explanation of how each variable was rated in the Complexity Analysis matrix, send an email requesting the data to think@globalytica.com.

12. The concept of analytic tradecraft support cells is explored more fully in Randolph H. Pherson, “Transformation Cells: An Innovative Way to Institutionalize Collaboration,” in Collaboration in the National Security Arena: Myths and Reality—What Science and Experience Can Contribute to Its Success, June 2009. It is part of a collection published by the Topical Strategic Multilayer Assessment (SMA), Multi-Agency/Multi-Disciplinary White Papers in Support of Counter-Terrorism and Counter-WMD, Office of Secretary of Defense/DDR&E/RTTO, http://www.hsdl.org/?view&did=712792.

Structured Analytic Techniques: Families and Linkages

The sixty-six Structured Analytic Techniques presented in this book can be used independently or in concert with other
techniques. The art and science of analysis is dynamic,
however, and we expect this list of techniques to continue to
change over time.

For ease of presentation, we have sorted the techniques into six groups, or families, mirroring when they most often are
used in the analytic production process. See chapter 3 for
guidance on how to select the proper technique(s).
The graphic on the opposing page illustrates the relationships
among the techniques. Mapping the techniques in this
manner highlights the mutually reinforcing nature of many of
the techniques.

Many of these techniques have value for more than one family. These “core” techniques relate to three or more
families and are highlighted in a light color. These techniques
are often cited by analysts in the intelligence and business
communities as tools they are most likely to use in their
analysis.

Structured Analytic Techniques that make use of indicators are designated by stars.
Descriptions of Images and Figures
Back to Figure

Reading the matrix: The cells in each row show the impact of the variable represented by that row on
each of the variables listed across the top of the matrix. The cells in each column show the impact of
each variable listed down the left side of the matrix on the variable represented by the column.
Combination of positive and negative means impact could go either direction.

Empty cell equals no impact.

Figure 11.3.1 presents the same matrix as Figure 10.10: the ten variables A through J listed in section 11.3.1 appear down the left side and across the top, and each cell rates the impact of the row variable on the column variable as Nil, Weak, Medium, or Strong, and as positive, negative, or a combination of positive and negative.
