DHS Cybersecurity Roadmap
November 2009
Contents
Executive Summary.................................................................................................................................................iii
Introduction...............................................................................................................................................................v
Acknowledgements..................................................................................................................................................ix
Current Hard Problems in INFOSEC Research
1. Scalable Trustworthy Systems ...................................................................................................................1
2. Enterprise-Level Metrics (ELMs) ...........................................................................................................13
3. System Evaluation Life Cycle....................................................................................................................22
4. Combatting Insider Threats ....................................................................................................................29
5. Combatting Malware and Botnets ..........................................................................................................38
6. Global-Scale Identity Management ........................................................................................................50
7. Survivability of Time-Critical Systems ..................................................................................................57
8. Situational Understanding and Attack Attribution ..............................................................................65
9. Provenance .................................................................................................................................................76
10. Privacy-Aware Security ...........................................................................................................................83
11. Usable Security ........................................................................................................................................90
Appendices
Appendix A. Interdependencies among Topics ..............................................................................................A1
Appendix B. Technology Transfer ..................................................................................................................... B1
Appendix C. List of Participants in the Roadmap Development..................................................................C1
Appendix D. Acronyms....................................................................................................................................... D1
Executive Summary
The United States is at a significant decision point. We must continue to defend our
current systems and networks and at the same time attempt to get out in front of
our adversaries and ensure that future generations of technology will position us to
better protect our critical infrastructures and respond to attacks from our adversaries.
The term system is used broadly to encompass systems of systems and networks.
This cybersecurity research roadmap is an attempt to begin to define a national R&D
agenda that is required to enable us to get ahead of our adversaries and produce
the technologies that will protect our information systems and networks into the
future. The research, development, test, evaluation, and other life cycle considerations required are far-reaching: from technologies that secure individuals and
their information to technologies that will ensure that our critical infrastructures
are much more resilient. The R&D investments recommended in this roadmap
must tackle the vulnerabilities of today and envision those of the future.
The intent of this document is to provide detailed research and development
agendas for the future relating to 11 hard problem areas in cybersecurity, for use
by agencies of the U.S. Government and other potential R&D funding sources.
The 11 hard problems are:
1. Scalable trustworthy systems (including system architectures and requisite
development methodology)
2. Enterprise-level metrics (including measures of overall system trustworthiness)
3. System evaluation life cycle (including approaches for sufficient assurance)
4. Combatting insider threats
5. Combatting malware and botnets
6. Global-scale identity management
7. Survivability of time-critical systems
8. Situational understanding and attack attribution
9. Provenance (relating to information, systems, and hardware)
10. Privacy-aware security
11. Usable security
For each of these hard problems, the roadmap identifies critical needs, gaps in
research, and a research agenda appropriate for near-, medium-, and long-term
attention.
DHS S&T assembled a large team of subject matter experts who provided input
into the development of this research roadmap. The content was developed over
the course of 15 months that included three regional multi-day workshops, two
virtual workshops for each topic, and numerous editing activities by the participants.
Introduction
Information technology has become pervasive in every way: from our phones and
other small devices to our enterprise networks to the infrastructure that runs our
economy. Improvements to the security of this information technology are essential
for our future. As the critical infrastructures of the United States have become more
and more dependent on public and private networks, the potential for widespread
national impact resulting from disruption or failure of these networks has also
increased. Securing the nation's critical infrastructures requires protecting not only
their physical systems but, just as important, the cyber portions of the systems on
which they rely. The most significant cyber threats to the nation are fundamentally
different from those posed by the script kiddies or virus writers who traditionally have plagued users of the Internet. Today, the Internet has a significant role
in enabling the communications, monitoring, operations, and business systems
underlying many of the nation's critical infrastructures. Cyberattacks are increasing in frequency and impact. Adversaries seeking to disrupt the nation's critical
infrastructures are driven by different motives and view cyberspace as a possible
means to have much greater impact, such as causing harm to people or widespread
economic damage. Although to date no cyberattack has had a significant impact on
our nation's critical infrastructures, previous attacks have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for
serious damage. The effects of a successful attack might include serious economic
consequences through impacts on major economic and industrial sectors, threats
to infrastructure elements such as electric power, and disruptions that impede the
response and communication capabilities of first responders in crisis situations.
The United States is at a significant decision point. We must continue to defend our
current systems and networks and at the same time attempt to get out in front
of our adversaries and ensure that future generations of technology will position
us to better protect our critical infrastructures and respond to attacks from our
adversaries. It is the opinion of those involved in creating this research roadmap that
government-funded research and development (R&D) must play an increasing role
to enable us to accomplish this goal of national and economic security. The research
topics in this roadmap, however, are relevant not only to the federal government
but also to the private sector and others who are interested in securing the future.
Historical background
The INFOSEC Research Council (IRC)
is an informal organization of government program managers who sponsor
information security research within the
U.S. Government. Many organizations
have representatives as regular members
of the IRC: Central Intelligence Agency,
Department of Defense (including the
Air Force, Army, Defense Advanced
Research Projects Agency, National
Reconnaissance Office, National Security Agency, Navy, and Office of the
Secretary of Defense), Department
of Energy, Department of Homeland
Security, Federal Aviation Administration, Intelligence Advanced Research
Projects Activity, National Aeronautics
and Space Administration, National
Institutes of Health, National Institute
of Standards and Technology, National
Science Foundation, and the Technical
Support Working Group. In addition,
the IRC is regularly attended by partner
organizations from Canada and the
United Kingdom.
The IRC developed the original Hard
Problem List (HPL), which was composed in 1997 and published in draft
form in 1999. The HPL defines desirable research topics by identifying a set
of key problems from the U.S. Government perspective and in the context of
IRC member missions. Solutions to
these problems would remove major
barriers to effective information security (INFOSEC). The Hard Problem
List was intended to help guide the
research program planning of the IRC
member organizations. It was also hoped
that nonmember organizations and
industrial partners would consider these
problems in the development of their
The area of cybersecurity and the associated research and development activities
have been written about frequently over
the past decade. In addition to both
the original IRC HPL in 1999 and the
revision in 2005, the following reports
have discussed the need for investment
in this critical area:
Toward a Safer and More Secure
Cyberspace
Federal Plan for Cyber Security
Prioritization
Hardening the Internet
Information Security
Cyberspace
Cyber Security Research and
Development Agenda
These reports can be found at http://
www.cyber.st.dhs.gov/documents.html
Current context
On January 8, 2008, the President
issued National Security Presidential Directive 54/Homeland Security
Presidential Directive 23, which formalized the Comprehensive National
Cybersecurity Initiative (CNCI) and a
series of continuous efforts designed to
establish a frontline defense (reducing
current vulnerabilities and preventing
intrusions), defending against the full
spectrum of threats by using intelligence
Document format
The intent of this document is to
provide detailed research and development agendas for the future relating to
11 hard problem areas in cybersecurity,
for use by agencies of the U.S. Government and anyone else that is funding
or doing R&D. It is expected that each
agency will find certain parts of the
document resonant with its own needs
and will proceed accordingly with some
Background
What are the potential threats?
Resources
Measures of success
What needs to be in place for test and evaluation?
To what extent can we test real systems?
What is the status of current research?
Topic 11, usable security, is different
from the others in its cross-cutting
nature. If taken seriously enough, it
can influence the success of almost all
the other topics. However, some sort
of transcendent usability requirements
need to be embedded pervasively in all
the other topics.
Future Directions
What approaches might be desirable?
References
[IRC2005]
[USAF-SAB07] United States Air Force Scientific Advisory Board, Report on Implications of Cyber Warfare. Volume 1:
Executive Summary and Annotated Brief; Volume 2: Final Report, August 2007. For Official Use Only.
Additional background documents (including the two most recent National Research Council study reports on cybersecurity)
can be found online (http://www.cyber.st.dhs.gov/documents.html).
Acknowledgements
The content of this research roadmap was developed over the course of 15 months
that included three workshops, two phone sessions for each topic, and numerous editing activities by the participants. Appendix C lists all the participants.
The Cyber Security program of the Department of Homeland Security (DHS)
Science and Technology (S&T) Directorate would like to express its appreciation to the participants for the considerable amount of time they dedicated to this effort.
DHS S&T would also like to acknowledge the support provided by the staff of SRI
International in Menlo Park, CA, and Washington, DC. SRI is under contract with
DHS S&T to provide technical, management, and subject matter expert support for
the DHS S&T Cyber Security program. Those involved in this effort include Gary
Bridges, Steve Dawson, Drew Dean, Jeremy Epstein, Pat Lincoln, Ulf Lindqvist,
Jenny McNeill, Peter Neumann, Robin Roy, Zach Tudor, and Alfonso Valdes.
Of particular note is the work of Jenny McNeill and Peter Neumann. Jenny
has been responsible for the organization of each of the workshops and phone
sessions and has worked with SRI staff members Klaus Krause, Roxanne Jones,
and Ascencion Villanueva to produce the final document. Peter Neumann
has been relentless in his efforts to ensure that this research roadmap represents the real needs of the community and has worked with roadmap
participants and government sponsors to produce a high-quality product.
FUTURE DIRECTIONS
Research relating to composability has
addressed some of the fundamental
problems and underlying theory. For
example, see [Neu2004] for a recent
consideration of past work, current practice, and R&D directions that might be
useful in the future. It contains numerous references to papers and reports
on composability. It also considers a
variety of techniques for compositions
of subsystems that can increase trustworthiness, as well as system and network
architectures and system development
practices that can yield greater trustworthiness. See also [Can2001], which
represents the beginning of work on
the notion of universal composability
applied to cryptography.
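The composition pitfalls discussed here can be made concrete with a toy sketch (illustrative only; the sanitizer and payload below are hypothetical examples, not drawn from the roadmap): a filter that appears effective in isolation fails to preserve its security property under composition with adversarial input.

```python
def strip_script(s: str) -> str:
    """Remove occurrences of the literal substring 'script' in a single
    left-to-right pass (a deliberately naive sanitizer)."""
    return s.replace("script", "")

# In isolation, the sanitizer appears to enforce its property:
assert strip_script("script") == ""

# But the property "output contains no 'script'" is not preserved under
# composition with adversarial input: removing the inner occurrence from a
# nested payload reassembles the forbidden token.
payload = "scrscriptipt"
print(strip_script(payload))  # -> 'script'
```

The point mirrors the text: a specific solution to a specific security problem can be individually correct, yet combining it with other transformations (or with attacker-chosen inputs) yields a system less secure than its components, which is why compositional theory and analysis tools are needed.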
However, there are gaps in our understanding of composability as it relates
to security, and to trustworthiness more
generally, primarily because we lack
precise specifications of most of the
important requirements and desired
properties. For example, we are often
good at developing specific solutions
to specific security problems, but we
do not understand how to apply and
combine these specific solutions to
produce trustworthy systems. We lack
methods for analyzing how even small
changes to systems affect their trustworthiness. More broadly, we lack a
good understanding of how to develop
and maintain trustworthy systems comprehensively throughout the entire life
cycle. We lack methods and tools for
decomposing high-level trustworthiness
goals into specific design requirements,
capturing and specifying security requirements, analyzing security requirements,
mapping higher-layer requirements into
lower-layer ones, and verifying system
trustworthiness properties. We do not
understand how to combine systems in
ways that ensure that the combination is
more, rather than less, secure and resilient than its weakest components. We
lack a detailed case history of past successes and failures in the development
of trustworthy systems that could help
us to elucidate principles and properties
of trustworthy systems, both in an overarching sense and in specific application
areas. We lack development tools and
languages that could enable separation
of functionality and trustworthiness
Better understanding of composability
Development of building blocks
Several threads could run through this
timeline; for example, R&D relating
to trustworthy isolation, separation,
and virtualization in hardware and
software; composability of designs and
implementations; analyses that could
greatly simplify evaluation of trustworthiness before putting applications into
operation; robust architectures that
provide self-testing, self-diagnosis,
self-reconfiguration, compromise resilience, and automated remediation; and
architectures that break the current
asymmetric advantage for attackers
(offense is cheaper than defense, at
present). The emphasis needs to be
on realistic, practical approaches to
developing systems that are scalable,
composable, and trustworthy.
The gaps in practice and R&D,
approaches, and potential benefits are
summarized in Table 1.1. The research
directions for scalable trustworthy
systems are intended to address these
gaps. Table 1.2 also provides a summary
of this section.
This topic area interacts strongly with
enterprise-level metrics (Section 2) and
evaluation methodology (Section 3) to
provide assurance of trustworthiness.
In the absence of such metrics and suitable evaluation methodologies, security
would be difficult to comprehend, and
the cost-benefit trade-offs would be
difficult to evaluate. In addition, all the
other topic areas can benefit from scalable trustworthy systems, as discussed
in Appendix A.
Table 1.1

Requirements
  Gaps in practice: Nonexistent, inconsistent, incomplete, nonscalable requirements
  Gaps in R&D: Orange Book/Common Criteria have inherent limitations
  Approaches: Canonical, composable, scalable trustworthiness requirements
  Potential benefits: Relevant developments; simplified procurement process

System architectures
  Gaps in practice: Inflexibility; constraints of flawed legacy systems
  Gaps in R&D: Evolvable architectures and a scalable theory of composability are needed
  Approaches: Scalably composable components and trustworthy architectures
  Potential benefits: Long-term scalable evolvability maintaining trustworthy operation

Development methodologies and software engineering
  Gaps in practice: Unprincipled systems, unsafe languages, sloppy programming practices
  Gaps in R&D: Principles not experientially demonstrated; good programming language theory widely ignored
  Approaches: Built-in assured, scalably composable trustworthiness

Analytic tools
  Gaps: Rigorously based composable tools; whole-system evaluations
  Approaches: Formal methods, hierarchical staged reasoning
  Potential benefits: Scalable incremental evaluations

Operational practices
  Gaps in practice: Enormous burdens on administrators
  Approaches: Dynamic self-diagnosis and self-healing
  Potential benefits: Simplified, scalable operational management
Challenges: Most of today's systems are built out of untrustworthy legacy systems using inadequate architectures, development
practices, and tools. We lack appropriate theory, metrics of trustworthiness and scalability, sound composable architectures, synthesis and
analysis tools, and trustworthy building blocks.
Goals: Sound foundations and supporting tools that can relate mechanisms to policies, attacks to mechanisms, and systems to
requirements, enabling facile development of composable TSoS that systematically enhance trustworthiness (i.e., making them more
trustworthy than their weakest components); documented TSoS developments, from specifications to prototypes to deployed systems.
MILESTONES (Incremental, Clean-Slate, and Hybrid Systems)
Near-term milestones: Alternative architectures; well-specified requirements; sound kernels/VMMs; mix-and-match systems; integration tools; evaluation strategies
Medium-term milestones: Use in infrastructures; integration experiments
Long-term milestones:
Test/evaluation: Identify measures of trustworthiness, composability, and scalability, and apply them to real systems.
Tech transfer: Publish composition methodologies for developing TSoS with mix-and-match components. Release open-source tools
for creating, configuring, and maintaining TSoS. Release open-source composable, trustworthy components. Publish successful, well-documented TSoS developments. Develop profitable business models for public-private TSoS development partnerships for critical
applications, and pursue them in selected areas.
Resources
Measures of success
Overall, the most important measure
of success would be the demonstrable
avoidance of the characteristic system
failures that have been so common in
the past (e.g., see [Neu1995]), just a few
of which are noted earlier in this section.
Properties that are important to the
designers of systems should be measured
in terms of the scale of systems that can
be shown to have achieved a specified
level of trustworthiness. As noted at the
beginning of this section, trustworthiness typically encompasses requirements
for security, reliability, survivability, and
many other system properties. Each
system will need to have its own set
of metrics for evaluation of trustworthiness, composability, and scalability.
Those metrics should mirror generic
requirements, as well as any requirements that are specific to the intended
applications. The effectiveness of any
References
[Can2001]
Ran Canetti. Universally composable security: A new paradigm for cryptographic protocols
(http://eprint.iacr.org/2000/067), 2005. An extended version of the paper from the 42nd
Symposium on Foundations of Computer Science (FOCS '01) began a series of papers
applying the notion of universal composability to cryptography. Much can be learned
from this work regarding the more general problems of system composability.
[Neu1995]
Peter G. Neumann. Computer-Related Risks, Addison-Wesley/ACM Press, New York, 1995. See also an
annotated index to online sources for the incidents noted here, as well as many more recent cases
(http://www.csl.sri.com/neumann/illustrative.html).
[Neu2004]
[Sal+2009]
J.H. Saltzer and F. Kaashoek. Principles of Computer System Design: An Introduction. Morgan Kaufmann, 2009. (Chapters
1-6; Chapters 7-11 are online at: http://ocw.mit.edu/ans7870/resources/system/index.htm).
How do we compare with our peers with respect to security?
How secure is this product or software that we are purchasing or deploying?
How does that product or software fit into the existing systems and
networks?
What is the marginal change in our security (for better or for worse), given
risks?
What combination of requirement specification, up-front architecture,
ENTERPRISE-LEVEL METRICS
Challenges
Needs
Developers
System procurers
Certified evaluations
User communities
Security implementation
FUTURE DIRECTIONS
Analysis
Analysis focuses on determining how
effectively the metrics describe and
predict the performance of the system.
The prediction should include both
current and postulated adversary
capabilities. There has been relatively
little work on enterprise-level analyses, because credible metrics and
foundational approaches for deriving enterprise-level evaluations
from more local evaluations have been
lacking.
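As a minimal illustration of why deriving enterprise-level evaluations from local ones needs a firmer foundation (the scores and aggregation rules below are hypothetical choices for illustration, not rules prescribed by the roadmap), two common ways of rolling component scores up to an enterprise score can rank the same two configurations in opposite order.

```python
# Two hypothetical enterprises, each described only by per-component
# security scores in [0, 1] (higher is better).
enterprise_a = [0.9, 0.9, 0.4]   # mostly strong, one weak component
enterprise_b = [0.7, 0.7, 0.7]   # uniformly mediocre

def weakest_link(scores):
    """Aggregate assuming serial dependence: the enterprise is only as
    secure as its weakest component."""
    return min(scores)

def average(scores):
    """Naive aggregate that ignores how components depend on one another."""
    return sum(scores) / len(scores)

print(weakest_link(enterprise_a), weakest_link(enterprise_b))  # 0.4 vs 0.7: B looks better
print(average(enterprise_a), average(enterprise_b))            # ~0.73 vs ~0.7: A looks better
```

Which aggregation rule is defensible depends on the dependency structure among components and on adversary capabilities, which is exactly the missing foundation the text identifies.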
Composition
Collection
Measures and metrics for security primitives
Appropriate uses of metrics
Measuring human-system interaction (HSI)
propensity to attempt a
particular attack, in response
to a defensive posture adopted
by the enterprise, needs to be
conducted.
cybersecurity recommendations
on public- and private-sector
enterprise-level systems.
Resources
Industry trends such as exposure to data
breaches are leading to the development
of tools to measure the effectiveness
of system implementations. Industry
mandates and government regulations
such as the Federal Information Security Management Act (FISMA) and
Sarbanes-Oxley require the government and private-sector firms to become
accountable in the area of IT security.
These factors will lead industry and
government to seek solutions for the
improvement of security metrics.
Government investment in R&D is still
required to address the foundational
questions that have been discussed,
such as adversary capabilities and threat
measurements.
Measures of success
References
[And2008]
R. Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, Indianapolis,
Indiana, 2008.
[Avi+2004]
A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and
secure computing. IEEE Transactions on Dependable and Secure Computing, 1(1):11-33, January-March 2004.
[Che2006]
E. Chew, A. Clay, J. Hash, N. Bartol, and A. Brown. Guide for Developing Performance Metrics for Information
Security. NIST Special Publication 800-80, National Institute of Standards and Technology, Gaithersburg,
Maryland, May 2006.
[CRA2003]
Four Grand Challenges in Trustworthy Computing: Second in a Series of Conferences on Grand Research
Challenges in Computer Science and Engineering. Computing Research Association, Washington, D.C.,
2006 (http://www.cra.org/reports/trustworthy.computing.pdf ).
[Jaq2007]
A. Jaquith. Security Metrics. Addison Wesley Professional, Upper Saddle River, New Jersey, 2007.
[IDA2006]
Institute for Defense Analysis. National Comparative Risk Assessment Pilot Project. Draft Final,
September 2006, IDA Document D-3309.
[McQ2008]
M.A. McQueen, W.F. Boyer, S. McBride, M. Farrar, and Z. Tudor. Measurable control system security through
ideal driven technical metrics. In Proceedings of the SCADA Scientific Security Symposium, January 2008.
[Met2008]
[NIS2009]
Information Security Training Requirements: A Role- and Performance-Based Model. NIST Special
Publication 800-16 Revision 1, National Institute of Standards and Technology, Gaithersburg, Maryland,
March 20, 2009 (http://csrc.nist.gov/publications/PubsDrafts.html).
[QoP2008]
4th Workshop on Quality of Protection (Workshop co-located with CCS-2008), October 2008 (http://
qop-workshop.org/)
[Swa+2003]
M. Swanson, N. Bartol, J. Sabato, J. Hash, and L. Graffo. Security Metrics Guide for Information Technology
Systems. NIST Special Publication 800-55, National Institute of Standards and Technology, Gaithersburg,
Maryland, July 2003.
System Evaluation Life Cycle
Challenges
Needs
System developers
FUTURE DIRECTIONS
reevaluations of successive
versions resulting from changes
in requirements, designs,
implementation, and experience
gained from uses of system
applications.
The following discussion considers the
individual phases.
Requirements
Establish a sounder basis for
how security requirements get
specified, evaluated, and updated
at each phase in the life cycle.
languages, constraints on or
subsets of existing languages,
and hardware design techniques
that express security properties,
enforce mandatory access
controls, and specify interfaces,
so that automated code analysis
can be used to extract what the
code means to do and what its
assumptions are.
Testing
Select and evaluate metrics for
evaluation of trustworthiness
requirements.
Select and use evaluation
Decommissioning
Develop end-of-life evaluation
representations such as
abstraction models to describe
threats, so that designers can
develop detailed specifications.
Developing user interfaces, tools,
standardize testing.
Developing understanding about
Resources
Academia and industry should collaborate to share data about traffic,
attacks, and network environments
and to jointly define standards and
a PREDICT-like repository
for attack data sharing. Yet a
third way is developing market
incentives for data sharing.
Fund joint academic/industry
Measures of success
One key milestone as a measure of
success will be the eventual adoption
by standards bodies such as NIST or
ISO of consistent frameworks, methodologies, and tools for system evaluation.
System developers will be able to choose
components from vendors based on
results obtained from well-known and
References
[Ade2008]
S. Adee. The hunt for the kill switch. IEEE Spectrum, 45(5):32-37, May 2008
(http://www.spectrum.ieee.org/may08/6171).
[DSB2005]
Defense Science Board Task Force on High Performance Microchip Supply, February 2005
(http://www.acq.osd.mil/dsb/reports/2005-02-HPMS_Report_Final.pdf ).
[How+2006]
M. Howard and S. Lipner. The Security Development Lifecycle. Microsoft Press, Redmond, Washington, 2006.
[ISO1999]
[NIS2008]
Security Considerations in the System Development Life Cycle. NIST Special Publication 800-64 Revision 2
(Draft), National Institute of Standards and Technology, Gaithersburg, Maryland, March 2008.
Combatting Insider Threats
FUTURE DIRECTIONS
Beacons in decoy (and real)
consideration, although it
typically depends on the specific
policies of each organization.
Existing access controls tend
to be inadequately fine-grained
be effectively integrated.
Definition
Potential Approaches
Deter, Protect
Predict, React
Deter
Fine-grained access controls
and correspondingly detailed
accountability need to have
adequate assurance. Audit logs
must be reduced to be correctly
interpreted, without themselves
leaking information.
Deterrence policies need to be
anonymous whistle-blowing,
engendering an atmosphere of
peer-level misuse detection and
monitoring.
Social, ethical, and legal issues, as
prevent overescalation of
privileges on a systemwide basis
(e.g., chained access that allows
unintended access to a sensitive
piece of data). However, note
that neither trust nor delegation
is a transitive operation.
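The observation that neither trust nor delegation is transitive can be sketched in a few lines (a hypothetical toy model, not a mechanism from the roadmap; the object and user names are invented): naively following chains of delegation grants exactly the kind of unintended chained access to sensitive data described here.

```python
# Direct grants: which users may read each object.
grants = {"salary_db": {"alice"}}

# Delegations: alice delegates her access to bob; bob delegates to carol.
delegations = {"alice": {"bob"}, "bob": {"carol"}}

def can_read_transitive(user, obj):
    """Naive check that follows delegation chains to any depth."""
    frontier, seen = list(grants.get(obj, ())), set()
    while frontier:
        u = frontier.pop()
        if u == user:
            return True
        if u not in seen:
            seen.add(u)
            frontier.extend(delegations.get(u, ()))
    return False

def can_read_direct(user, obj):
    """One-hop check: only a grantee or her direct delegates qualify."""
    allowed = set(grants.get(obj, ()))
    for g in grants.get(obj, ()):
        allowed |= delegations.get(g, set())
    return user in allowed

print(can_read_transitive("carol", "salary_db"))  # True: chained access alice never intended
print(can_read_direct("carol", "salary_db"))      # False: delegation treated as non-transitive
```

Treating delegation as non-transitive (or requiring explicit re-authorization at each hop) is one way to prevent the systemwide privilege escalation the text warns about, at the cost of extra administrative steps.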
Predict
Various predictive models
are needed; for example, for
indicators of risks of insider
misuse, dynamic precursor
indicators for such misuse, and
determining what is operationally
relevant (such as the potentially
likely outcomes).
Dynamic analysis techniques
Develop anti-tampering
approaches. (Protect)
Medium Term
Develop feature extraction and
machine learning mechanisms to
find outliers. (Detect)
Develop tools to exhaustively and
Research Initiatives, Benefits, and Time Frames
Inadequately fine-grained access controls: near term
Difficulties in remediation: longer term
effectiveness of deception
techniques. (Protect)
Incorporate integrity protection
indicators. (React)
Long Term
Establish effective methods
to apply the principle of least
privilege. (Protect)
Develop methods to address
Resources
Measures of success
Various metrics are needed with respect
to the ability of systems to cope with
insiders. Some will be generic; others
will be specific to given applications and
given systems. Metrics might consider
the extent to which various approaches
to authentication and authorization
References
[And2008]
[Bib1977]
K.J. Biba. Integrity Considerations for Secure Computer Systems. Technical Report MTR 3153,
The MITRE Corporation, Bedford, Massachusetts, June 1975. Also available from USAF
Electronic Systems Division, Bedford, Massachusetts, as ESD-TR-76-372, April 1977.
[Bis2002]
M. Bishop. Computer Security: Art and Science. Addison-Wesley Professional, Boston, Massachusetts, 2002.
[Bra2004]
Richard D. Brackney and Robert H. Anderson. Understanding the Insider Threat: Proceedings of a
March 2004 Workshop. RAND Corporation, Santa Monica, California, 2004
(http://www.rand.org/pubs/conf_proceedings/2005/RAND_CF196.pdf ).
[Cap2008.1]
D. Cappelli, T. Conway, S. Keverline, E. Kowalski, A. Moore, B. Willke, and M. Williams. Insider Threat
Study: Illicit Cyber Activity in the Government Sector, Carnegie Mellon University, January 2008
(http://www.cert.org/archive/pdf/insiderthreat_gov2008.pdf ).
[Cap2008.2]
D. Cappelli, E. Kowalski, and A. Moore. Insider Threat Study: Illicit Cyber Activity in the Information
Technology and Telecommunications Sector. Carnegie Mellon University, January 2008
(http://www.cert.org/archive/pdf/insiderthreat_it2008.pdf ).
[Dag2008]
[FSS2008]
Financial Services Sector Coordinating Council for Critical Infrastructure Protection and
Homeland Security, Research and Development Committee. Research Agenda for the Banking
and Finance Sector. September 2008 (https://www.fsscc.org/fsscc/reports/2008/RD_AgendaFINAL.pdf ). Challenge 4 of this report is Understanding the Human Insider Threat.
[HDJ2006]
IT Security: Best Practices for Defending Against Insider Threats to Proprietary Data, National Defense
Journal Training Conference, Arlington, Virginia. Homeland Defense Journal, 19 July 2006.
[IAT2008]
Information Assurance Technical Analysis Center (IATAC). The Insider Threat to Information
Systems: A State-of-the-Art Report. IATAC, Herndon, Virginia, February 18, 2008.
[Kee2005]
M. Keeney, D. Cappelli, E. Kowalski, A. Moore, T. Shimeall, and S. Rogers. Insider Threat
Study: Computer System Sabotage in Critical Infrastructure Sectors. Carnegie Mellon
University, May 2005 (http://www.cert.org/archive/pdf/insidercross051105.pdf ).
[Moo2008]
Andrew P. Moore, Dawn M. Cappelli, and Randall F. Trzeciak. The Big Picture of IT Insider Sabotage
Across U.S. Critical Infrastructures. Technical Report CMU/SEI-2008-TR-009, Carnegie Mellon
University, 2008 (http://www.cert.org/archive/pdf/08tr009.pdf ). This report describes the MERIT model.
[Neu2008]
Peter G. Neumann. Combatting insider misuse with relevance to integrity and accountability in elections
and other applications. Dagstuhl Workshop on Insider Threats, July 2008
(http://www.csl.sri.com/neumann/dagstuhl-neumann.pdf ). This position paper expands on the
fuzziness of trustworthiness perimeters and the context-dependent nature of the concept of insiders.
[Noo+2008]
Thomas Noonan and Edmund Archuleta. The Insider Threat to Critical Infrastructures. National
Infrastructure Advisory Council, April 2008
(http://www.dhs.gov/xlibrary/assets/niac/niac_insider_threat_to_critical_infrastructures_study.pdf ).
[Pfl2003]
[Ran04]
M.R. Randazzo, D. Cappelli, M. Keeney, and A. Moore. Insider Threat Study: Illicit Cyber Activity in the
Banking and Finance Sector, Carnegie Mellon University, August 2004
(http://www.cert.org/archive/pdf/bankfin040820.pdf ).
[Sto+08]
Salvatore Stolfo, Steven Bellovin, Shlomo Hershkop, Angelos Keromytis, Sara Sinclair, and Sean
Smith (editors). Insider Attack and Cyber Security: Beyond the Hacker. Springer, New York, 2008.
[Table: malware examples and life cycle, including compromised devices and tainted files delivered via e-mail attachment and the web]
[Table: challenges and needs of users, administrators, infrastructure systems, ISPs, and law enforcement]
(such as A/V software and system patching) are becoming less effective. For
example, malware writers have evolved
strategies such as polymorphism,
packing, and encryption to hide their
signature from existing A/V software.
There is also a window of vulnerability between the discovery of a new
malware variant and subsequent system
patches and A/V updates. Further,
malware authors also strive to disable
or subvert existing A/V software once
their malware has a foothold on the
target system. (This is the case with a
later version of Conficker, for example.)
A/V software may itself be vulnerable
to life cycle attacks that subvert it prior
to installation. Patching is a necessary
system defense that also has drawbacks.
For example, the patch can be reverse
engineered by the adversary to find
the original vulnerability. This may
allow the malware writers to refine
their attacks against the unpatched
systems. Much can be learned from
recent experiences with successive versions of Conficker.
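The signature-evasion problem described above can be made concrete with a toy sketch. Nothing here is from the roadmap: the payload, the hash-based "signature," and the XOR "packer" are invented stand-ins, but they illustrate why exact-signature matching fails once a payload is re-packed with a fresh key.

```python
import hashlib

def signature(payload: bytes) -> str:
    """A toy 'A/V signature': a hash of the malware body."""
    return hashlib.sha256(payload).hexdigest()

def pack(payload: bytes, key: bytes) -> bytes:
    """XOR-'pack' the payload; a real packer also prepends an unpacking stub."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

body = b"malicious-payload-v1"          # stand-in for a malware body
known_sigs = {signature(body)}          # the defender's signature database

variant = pack(body, b"\x5a\xa5\x3c\xc3")    # attacker re-packs with a fresh, nonzero key
assert signature(body) in known_sigs         # the original is caught
assert signature(variant) not in known_sigs  # the packed variant evades the match
```

A real packer would also mutate the unpacking stub from copy to copy, which is what defeats stub-based signatures as well.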
Specifically with respect to identity theft,
which is one potential consequence
of malware but may be perpetrated
by other means, there is an emerging
commercial market in identity theft
insurance and remediation. This implies
that some firms believe they have adequate metrics to quantify risk in this
case.
FUTURE DIRECTIONS
[Table: definition and potential approaches: prevent, protect, detect, analyze, react]
[Table: research initiatives, their benefits (prolonging the usefulness of virtualization as a defensive strategy, easing remediation, Internet-scale emulation, reducing attacker/defender asymmetry, attack tolerance), and time frames ranging from near to medium/long term]
Thin-client technology has been proposed in the past. In this model, the user's machine is stateless, and all files and applications are distributed on some network (the terminology "in the cloud" is occasionally used, although there are also parallels with traditional mainframe computing). If we can make the distributed resources secure, and that is itself a big question, the attacker's options against user assets are greatly reduced, and remediation is merely a question of restarting. The long-term research
challenges toward this secure cloud
computing paradigm are securing the
distributed resource base and making
this base available to the authenticated
and authorized user from any location,
supported by a dedicated, integrated
infrastructure.
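The stateless thin-client model described above can be sketched in a few lines. The store API and file names here are invented for illustration; the point is only that when the client holds no durable state, remediation after compromise reduces to a restart that re-fetches everything from the (assumed-secure) distributed store.

```python
class ThinClient:
    """Toy stateless client: all durable state lives in a remote store."""

    def __init__(self, remote_store: dict):
        self.remote_store = remote_store     # stands in for cloud storage
        self.session = {}                    # volatile local state only

    def open_file(self, name: str) -> str:
        """Fetch a file from the remote store into the volatile session."""
        self.session[name] = self.remote_store[name]
        return self.session[name]

    def restart(self) -> None:
        """Remediation: simply discard all local state."""
        self.session.clear()

store = {"report.txt": "quarterly numbers"}
client = ThinClient(store)
client.open_file("report.txt")
client.session["report.txt"] = "TAMPERED"    # malware corrupts local state
client.restart()                             # remediation by restart
assert client.open_file("report.txt") == "quarterly numbers"
```

The hard part, as the text notes, is the assumption baked into `remote_store`: securing the distributed resource base itself.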
Remediation of infected systems is
extremely difficult, and it is arguably
are required.
Not enough is being done in threat
analysis. In any case, the nature of
the threat changes over time. One
interesting avenue of research is economic analysis of adversary markets.
Attackers sell malware exploits (and
also networks of infected machines, or
botnets). The price fluctuations may
permit analysis of adversary trends and
may also enable definition of metrics as
to the effectiveness of defenses. Related
to the economic approach is research
into making malware economically less
attractive to adversaries (for example,
by much better damage containment,
increasing the effectiveness of attribution, limiting the number of systems
that can be targeted with a given exploit,
and changing existing laws/policies so
that the punishments reflect the true
societal cost of cybercrime).
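The price-fluctuation idea above can be made concrete with a toy trend metric. The exploit prices below are invented, and the interpretation (a rising price suggesting that defenses are raising attacker costs) is an illustrative assumption, not a finding of the roadmap.

```python
def linear_trend(prices):
    """Least-squares slope of price versus time index (price units per period)."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

monthly_exploit_price = [120, 150, 180, 240, 260]   # invented market data
slope = linear_trend(monthly_exploit_price)
print(f"price trend: {slope:+.1f} per month")       # prints price trend: +37.0 per month
```

Sustained tracking of such series, rather than a single slope, is what would let analysts infer adversary trends or define effectiveness metrics for defenses.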
in this domain range from privacy concerns and legal aspects of data sharing to the sheer volume of data itself. Research
in generating adequate metadata and
provenance is required to overcome
these hurdles.
Techniques to capture and analyze
malware and propagate defenses faster
are essential in order to contain epidemics. Longer-term research should focus
on inherently secure, monitorable, and
auditable systems. Threat analysis and
economic analysis of adversary markets
should be undertaken in pilot form in
the near term, and pursued more vigorously if they are shown to be useful.
Measures of success
We require baseline measurements of
the fraction of infected machines at any
time; success would be a reduction in
this fraction over time.
Some researchers currently track the
emergence of malware. In this way, they
are able to identify trends (for example,
the number of new malware samples per
month). A reversal of the upward trend
in malware emergence would indicate
success.
Time between malware capture and
propagation of defense (or, perhaps
more appropriately, implementation
of the defense on formerly vulnerable
systems) tracks progress in human and
automated response time.
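The three measures of success just discussed (infection fraction, malware-emergence trend, and capture-to-defense time) can each be expressed as a one-line computation. All figures below are invented for illustration.

```python
from datetime import datetime

def infection_fraction(infected, total):
    """Fraction of machines infected at a point in time; success is a decline."""
    return infected / total

def emergence_trend(samples_per_month):
    """Change in new-malware samples between the last two months;
    a negative value would indicate a reversal of the upward trend."""
    return samples_per_month[-1] - samples_per_month[-2]

def response_time_hours(captured, deployed):
    """Time between malware capture and deployment of a defense."""
    return (deployed - captured).total_seconds() / 3600

print(infection_fraction(5_000, 1_000_000))    # 0.005
print(emergence_trend([900, 1100, 1050]))      # -50: trend reversing
print(response_time_hours(datetime(2009, 6, 1, 8), datetime(2009, 6, 2, 2)))  # 18.0
```

The measurement challenge, of course, lies in the inputs: obtaining trustworthy counts of infected machines and new samples, not in the arithmetic.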
With reference to the repository, we may define a minimal set of exemplars [...]. What is the cost to identify new malware? Since spam is a primary botnet [...], what is the cost to identify malware propagators?
References
[Ant2008]
A.M. Antonopoulos. Georgia cyberwar overblown. Network World, August 19, 2008
(http://www.pcworld.com/businesscenter/article/150021/georgia_cyberwar_overblown.html).
[CAT2009]
Conference for Homeland Security 2009 (CATCH 09), Cybersecurity Applications and Technology,
March 3-4, 2009. The IEEE proceedings of this conference include relevant papers on detection and
mitigation of botnets, as well as correlation and collaboration in cross-domain attacks, from the University
of Michigan and Georgia Tech, as well as Endeavor, HBGary, Milcord, and Sonalyst (among others).
[Dai2006]
Dino Dai Zovi. Vitriol: Hardware virtualization rootkits. In Proceedings of the Black Hat
USA Conference, 2006.
[DET]
[Fra2007]
J. Franklin, V. Paxson, A. Perrig, and S. Savage. An inquiry into the nature and
causes of the wealth of Internet miscreants. Proceedings of ACM Computer and
Communications Security Conference, pp. 375-388, October 2007.
[GAO2007]
CYBERCRIME: Public and Private Entities Face Challenges in Addressing Cyber Threats. Report
GAO-07-705, U.S. Government Accountability Office, Washington, D.C., July 2007.
[Hal2006]
J.A. Halderman and E.W. Felten. Lessons from the Sony CD DRM episode. In
Proceedings of the 15th USENIX Security Symposium, August 2006.
[Hol2008]
[Kim2004]
Hyang-Ah Kim and Brad Karp. Autograph: Toward automated, distributed worm signature
detection. In Proceedings of the 13th USENIX Security Symposium, August 2004.
[IW2007]
L. Greenemeier. Estonian attacks raise concern over cyber nuclear winter. Information Week, May 24,
2007 (http://www.informationweek.com/news/internet/showArticle.jhtml?articleID=199701774).
[LAT2008]
[Mes2003]
Ellen Messmer. Welchia Worm Nails Navy Marine Corps. Network World Fusion, August 19,
2003 (http://pcworld.com/article/112090/welchia_worm_nails_navy_marine_corps.html).
[Pou2003]
Kevin Poulsen. Slammer worm crashed Ohio nuke plant network. SecurityFocus, August 19, 2003
(http://www.securityfocus.com/news/6767).
[Sha2004]
[Sha2008]
[Sch2005]
Bruce Schneier. Real story of the rogue rootkit. Wired, November 17, 2005 (http://
www.wired.com/politics/security/commentary/securitymatters/2005/11/69601).
[SRI2009]
[Thu2008]
R. Thurston. Coffee drinkers in peril after espresso overspill attack. SC Magazine, June 20, 2008 (http://
www.scmagazineuk.com/coffee-drinkers-in-peril-after-espresso-overspill-attack/article/111458).
[Vir]
[Vra+2005]
Shibboleth is a standards-based,
open-source software system for
single sign-on across multiple
websites. (See http://shibboleth.
internet2.edu.) Also of interest
are CardSpace, Liberty Alliance,
SAML, and InCommon (all of
which are federated approaches,
in active use, undergoing further
development, and evolving in
the face of various problems with
security, privacy, and usability).
The Homeland Security
Presidential Directive 12
(HSPD-12) calls for a common
identification standard for federal
employees and contractors. An
example of a solution in compliance
with HSPD-12 is the DoD
Common Access Card (CAC).
Various other approaches such as the
following could play a role but are not
by themselves global-scale identity
solutions. Nevertheless, they might
be usefully considered. OpenID provides transitive authentication, but only
minimal identification; however, trust is
inherently not transitive, and malicious
misuse is not addressed. Medical ID
is intended to be HIPAA compliant.
Enterprise Physical Access is representative of token-based or identity-based
physical access control systems. Stateless
identity and authentication approaches
include LPWA, the Lucent Personalized Web Assistant. OTP/VeriSign is
a symmetric key scheme. Biometrics
authentication, attribution,
accountability, revocation,
federation, usable user interfaces,
user-accessible conceptual
models, presentation, and
evaluations thereof ).
Policy-related research (e.g.,
privacy, administration,
revocation policies, international
implications, economic, social
and cultural mores, and policies
relating to the effective use of the
above mechanisms)
As is the case for the other topics, the
term research is used here to encompass the full spectrum of R&D, test,
evaluation, and technology transfer.
Legal, law enforcement, political,
international, and cultural issues are
cross-cutting for both of these bins and
need to be addressed throughout.
Mechanisms for enhancing global identity management (with some policy
implications) include the following:
cross-organization information
exposure requirements,
lightweight aliasing, and
unlinking.
Effective presentation of specific
(altering or withdrawing
attributes).
Avoidance of having to carry too
of cryptographically based
approaches, with respect
to integrity, spoofability,
revocation when compromised,
accountability, credential
renewals, problems that result
from system updates, and so on.
Identity management for
acceptance: usability,
interoperability, costs, sustainable
economic models; presentation
to users.
Accommodating international
Maintaining consistency of
[Table: definition and potential approaches: mechanisms, and policies (rules and procedures for enforcing identity-based controls, using relevant mechanisms)]
Resources
Short-term gains can be made, particularly in prototypes and in the policy
research items noted in the Background
section above. In particular, the intelligent use of existing techniques and
implementations would help. However,
serious effort needs to be devoted to
long-term approaches that address
inherent scalability, trustworthiness,
and resistance to cryptanalytic and systemic attacks, particularly in federated
Measures of success
Ideally, any system for identification,
authentication, and access control
should be able to support hundreds of
millions of users with identity-based or
role-based authentication. IDs, authentication, and authorization of privileges
may sometimes be considered separately,
but in any case must be considered
compatibly within a common context.
An identifier declares who a person is
and may have various levels of granularity and specificity. Who that person
is (along with the applicable roles and
other attributes, such as physical location) will determine the privileges to be
granted with respect to any particular
system policy. The system should be
able to handle millions of privileges
and a heavy churn rate of changes in
users, devices, roles, and privileges. In
addition, each user may have dozens of
distinct credentials across multiple organizations, with each credential having its
own set of privileges. It should be possible to measure or estimate the extent
to which incremental deployment of
new mechanisms and new policies could
be implemented and enforced. Revocation of privileges should be effective for
near-real-time use. Measurable metrics
need to encompass all these aspects of
global identity management. Overall,
it should be extremely difficult for any
national-level adversary to spoof a critical infrastructure system into believing
that anyone attempting access is anything other than the actual adversary or
adversaries.
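One of the measurable aspects above, revocation of privileges that is effective for near-real-time use, can be sketched with a toy registry. The registry API, the credential name, and the latency metric are all invented for illustration; a real deployment would measure the delay between a revocation event and its enforcement at every relying party.

```python
import time

class CredentialRegistry:
    """Toy revocation registry; real systems must propagate this state widely."""

    def __init__(self):
        self._revoked = {}                   # credential id -> revocation time

    def revoke(self, cred_id: str) -> None:
        self._revoked[cred_id] = time.monotonic()

    def check(self, cred_id: str) -> bool:
        """Is this credential still acceptable for access?"""
        return cred_id not in self._revoked

    def revocation_latency(self, cred_id: str) -> float:
        """Seconds between revocation and this enforcement check: the metric."""
        return time.monotonic() - self._revoked[cred_id]

reg = CredentialRegistry()
assert reg.check("alice@agency")             # valid before revocation
reg.revoke("alice@agency")
assert not reg.check("alice@agency")         # denied immediately afterward
print(f"latency: {reg.revocation_latency('alice@agency'):.3f}s")
```

At global scale, the hard problems are hidden inside `_revoked`: replicating it across federations, under churn, faster than an adversary can exploit a revoked credential.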
References
[FSS2008]
Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland
Security, Research and Development Committee. Research Agenda for the Banking and Finance
Sector. September 2008 (https://www.fsscc.org/fsscc/reports/2008/RD_Agenda-FINAL.pdf ).
[IDT2009]
8th Symposium on Identity and Trust on the Internet (IDtrust 2009), NIST, April 14-16, 2009
(http://middleware.internet2.edu/idtrust). The website contains proceedings of previous
years' conferences. The 2009 proceedings include three papers representing team members
from the I3P Identity Management project (which includes MITRE, Cornell, Georgia
Tech, Purdue, SRI, and the University of Illinois at Urbana-Champaign).
[Figure: spectrum of primary relevance for survivability of time-critical systems, from pacemakers, avionics, enterprise/sector critical transaction systems, and ad hoc emergency response systems to corporate office servers, home PCs, social networking websites, and batch processing systems]
failures, and accidents. Rather than enumerate a long list, we refer throughout
to all relevant adversities for which
survivability is required.
There is no one-size-fits-all
[Table: definition and potential approaches: protect, detect, react]
a challenge in monocultures,
whereas system maintenance
is problematic in diversified
and heterogeneous systems.
Techniques are needed
to determine appropriate
balances between diversity
and monoculture to achieve
survivability in time-critical
systems.
Considerable effort is being
devoted to developing
hypervisors and virtualization.
Perhaps these approaches could
be applied to integrating COTS
challenge-response, built-in
monitoring of critical functions,
detection of process anomalies).
Intrinsically auditable systems
appropriate timeliness.
Strategies for course of action
Near term:
- Realistic, comprehensive requirements
- Existing protocols
- Identification of time-critical components

Medium term:
- Detection
- Strategies for reaction
- Experimentation with
- Higher-speed intercommunications and coordination
- Development tools
- System models

Long term:
- Evaluatable metrics
- Establishment of trustworthy
Resources
Making progress on the entire set
of in-scope systems requires focused
research efforts for each of the underlying technologies and each type of critical
system, together with a research-coordinating function that can discern and
understand both the common and the
disparate types of solutions developed
by those working on specific systems.
An important role for the coordinating
function is to expedite the flow of ideas
and understanding among the focused
groups.
For a subject this broad and all-encompassing (it depends on security, reliability,
situational awareness and attack attribution, metrics, usability, life cycle
evaluation, combating malware and
insider misuse, and many other aspects),
it seems wise to be prepared to launch
multiple efforts targeting this topic area.
Measures of success
Success should be measured by the range
of environments over which the system
is capable of delivering adequate service
for top-priority tasks. These environments will vary by topology and spatial distribution; number, type, and location of compromised machines; and a broad
range of disruption strategies.
engaged.
Analytical models should be
References
[Avi+2004]
[DIS2003]
3rd DARPA Information Survivability Conference and Exposition (DISCEX-III 2003), 22-24
April 2003, Washington, DC, USA. IEEE Computer Society 2003, ISBN 0-7695-1897-4.
[Ell+1999]
R.J. Ellison, D.A. Fisher, R.C. Linger, H.F. Lipson, T. Longstaff, and N.R.
Mead. Survivable Network Systems: An Emerging Discipline. Technical Report
CMU/SEI-97-TR-013, Carnegie Mellon University, May 1999.
[Hai+2007]
Yacov Y. Haimes, Joost R. Santos, Kenneth G. Crowther, Matthew H. Henry, Chenyang Lian, and
Zhenyu Yan. Analysis of Interdependencies and Risk in Oil & Gas Infrastructure Systems. I3P Research
Report No. 11, June 2007 (http://www.thei3p.org/docs/publications/researchreport11.pdf ).
[Ker+2008]
Peter Kertzner, Jim Watters, Deborah Bodeau, and Adam Hahn. Process Control System Security
Technical Risk Assessment Methodology & Technical Implementation. I3P Research Report No.
13, March 2008 (http://www.thei3p.org/docs/publications/ResearchReport13.pdf ).
[Neu2000]
P.G. Neumann. Practical Architectures for Survivable Systems and Networks. SRI International,
Menlo Park, California, June 2000 (http://www.csl.sri.com/neumann/survivability.html).
whole)?
What (possibly rogue) infrastructure enables the attack?
How can we prevent, deter, and/or mitigate future similar occurrences?
Situational understanding includes the state of one's own system from a defensive
posture irrespective of whether an attack is taking place. It is critical to understand
system performance and behavior during non-attack periods, in that some attack
indicators may be observable only as deviations from normal behavior. This
understanding must also include the performance of systems under stresses that are not caused by attacks, such as a dramatic increase in normal traffic due to sudden
popularity of a particular resource.
Situational understanding also encompasses both the defender and the adversary.
The defender must have adversary models in order to predict adversary courses of
action based on the current defensive posture. The defender's system-level goals
are to deter unwanted adversary actions (e.g., attacking our information systems)
and induce preferred courses of action (e.g., working on socially useful projects as
opposed to developing crimeware, or redirecting attacks to a honeynet).
Attack attribution is defined as determining the identity or location of an attacker
or an attacker's intermediary. Attribution includes the identification of intermediaries, although an intermediary may or may not be a willing participant in an
There have been numerous widely publicized large-scale attacks launched for
a variety of purposes, but there is now a consensus that skilled nonstate actors are primarily going after
financial gain [GAO2007, Fra2007].
Click fraud, stock pump and dump,
and other manipulations of real-time
markets prove that it is possible to profit
from cybercrime without actually taking
down the systems that are attacked. In
this context, situational understanding
should clearly encompass law enforcement threat models and priorities, as
well as how financial gains can accrue.
[Table: challenges and needs of system administrators, service providers, law enforcement, civil government, and the military]
FUTURE DIRECTIONS
en.wikipedia.org/wiki/OODA_Loop).
By analogy to physical security systems,
reaction might be further broken out
into delay, response, and mitigation
steps. Some courses of action by the
defender might delay the adversary from
achieving the ultimate objective of the
attack. This buys time for the defender
to mount an effective response that
thwarts the adversary's goal. Another response might be to seek out additional information that will improve situational awareness. If an effective response is not possible, then mitigation of the consequences of the adversary's action is also a valuable course of action.
Many responses may require coordination across organizational boundaries,
and shared situational awareness will be
important in supporting such activities.
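The delay/response/mitigation breakdown above can be sketched as a simple course-of-action selector. The decision rules here are illustrative assumptions, not prescriptions from the roadmap: delay when understanding is insufficient, respond when an effective response exists, otherwise mitigate.

```python
def choose_course_of_action(attack_understood: bool,
                            response_available: bool) -> str:
    """Pick a defender course of action in the delay/respond/mitigate model."""
    if not attack_understood:
        # Buy time and collect additional information to improve awareness.
        return "delay"
    if response_available:
        # Mount a response that thwarts the adversary's goal.
        return "respond"
    # No effective response exists: limit the consequences instead.
    return "mitigate"

assert choose_course_of_action(False, False) == "delay"
assert choose_course_of_action(True, True) == "respond"
assert choose_course_of_action(True, False) == "mitigate"
```

A real decision loop would be iterative in the OODA sense, with "delay" feeding improved understanding back into the next pass.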
[Table: definition and sample solutions, including massive-scale analysis]
positives. Although response and reaction are not directly a part of situational
understanding, situational understanding is needed to enable response and
reaction, and situational understanding
may drive certain kinds of responses
(e.g., changing information collection
to improve attribution). Thus, advances
in reaction and response techniques
directly affect the kind of situational
awareness that is required.
Resources
Situational understanding requires collection or derivation of relevant data on
a diverse set of attributes. Some of the
attributes that support global situational
understanding and attack attribution
are discussed above relating to the kinds
of data to collect. A legal and policy
framework, including international
coordination, is necessary to enable
the collection and permit the exchange
of much of this information, since it
often requires crossing international
boundaries. In addition, coordination
across sectors may be needed in terms
of what information can be shared and
how to gather it in a timely way. Consider an attack that involves patient data
information systems within a hospital
in the United States, a military base in
Germany, and an educational institution
in France. All three institutions have
different requirements for what can and
cannot be shared or recorded.
Modifications to U.S. law and policy
may be needed to facilitate data sharing
and attack attribution research. As an
example, institutional review boards
(IRBs) play an important role in protecting individuals and organizations from
the side effects of experimentation that
Measures of success
We will measure progress in numerous
ways, such as decreased personnel hours
required to obtain effective situational
understanding; increased coverage of the
attack space; improved ability, based on mission impact, to triage the serious attacks from the less important, and those where immediate reaction is needed from those where an alternative approach is acceptable; improved
response and remediation time; and
timely attribution with sound forensics.
These all require reliable collection of
data on the diverse set of attributes listed
previously.
On the basis of these attributes, we
could define measures of success at
a high level within a given organization's stated security goals. For example,
an organization aimed primarily at
maintaining customer access to a particular service might measure success
by observing and tracking over time
such variables as the estimated number
of hosts capable of serving information
over some service, and the estimated
near-steady-state number or growth
trend of these machines.
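The availability metric just sketched can be expressed as a small computation: track the estimated number of hosts serving the organization's service and flag observation periods that fall below the near-steady-state baseline. The counts and the tolerance threshold below are invented for illustration.

```python
def availability_alerts(host_counts, tolerance=0.9):
    """Indices of observation periods where serving-host counts fall below
    tolerance * baseline, taking the first observation as the baseline."""
    baseline = host_counts[0]
    return [i for i, n in enumerate(host_counts) if n < tolerance * baseline]

daily_hosts = [200, 198, 202, 150, 199]      # day 3 shows a suspicious drop
print(availability_alerts(daily_hosts))      # [3]
```

In practice the baseline would itself be estimated (for example, a moving average over a growth trend) rather than fixed at the first observation.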
Success depends on timely identification
informational boundaries to
enable cooperative response?
Can we quickly quarantine
ultimate attribution?
References
[Fra2007]
J. Franklin, V. Paxson, A. Perrig, and S. Savage. An inquiry into the nature and
causes of the wealth of Internet miscreants. Proceedings of ACM Computer and
Communications Security Conference, pp. 375-388, October 2007.
[GAO2007]
CYBERCRIME: Public and Private Entities Face Challenges in Addressing Cyber Threats. Report
GAO-07-705, U.S. Government Accountability Office, Washington, D.C., July 2007.
[Hol2008]
T. Holz, C. Gorecki, K. Rieck, and F. Freiling. Measuring and detecting fast-flux service networks. In
Proceedings of the 15th Annual Network & Distributed System Security (NDSS) Symposium, February 2008.
[ICA2008]
Draft Initial Report of the GNSO Fast Flux Hosting Working Group. ICANN. December 8, 2008
(https://st.icann.org/pdp-wg-ff/index.cgi?fast_flux_pdp_wg).
[ISC]
[Phi]
PhishTank: http://www.phishtank.com.
PROVENANCE
annotation in scientific
computing. Chimera [Fos2002]
allows a user to define a
workflow, consisting of data sets
and transformation scripts. The
system then tracks invocations,
annotating the output with
information about the runtime
environment. The myGrid
system [Zha2004], designed
to aid biologists in performing
computer-based experiments,
allows users to model their
workflows in a Grid environment.
CMCS [Pan2003] is a toolkit for
chemists to manage experimental
data derived from fields such
as combustion research. ESSW
[Fre2005] is a data storage system
for earth scientists; the system
can track data lineage so that
errors can be traced, helping
maintain the quality of large
data sets. Trio [Wid2005] is a
data warehouse system that uses
data lineage to automatically
compute the accuracy of the
data. Additional examples can be
found in the survey by Bose and
Frew [Bos2005].
Provenance-aware storage
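The workflow-oriented systems surveyed above share a common core: each derived data set records its inputs and the transformation that produced it, so lineage can be traced back to raw sources. A minimal sketch of that idea follows; all names and the `Artifact` structure are invented, not drawn from any of the cited systems.

```python
class Artifact:
    """A data set together with a record of how it was derived."""

    def __init__(self, name, transform=None, inputs=()):
        self.name = name
        self.transform = transform      # the script/step that produced this data
        self.inputs = list(inputs)      # upstream Artifacts

    def lineage(self):
        """Names of all ancestors, so errors can be traced to their source."""
        seen = []
        for parent in self.inputs:
            seen.append(parent.name)
            seen.extend(parent.lineage())
        return seen

raw = Artifact("sensor_readings.csv")
clean = Artifact("cleaned.csv", transform="drop_outliers.py", inputs=[raw])
report = Artifact("report.pdf", transform="summarize.py", inputs=[clean])
print(report.lineage())   # ['cleaned.csv', 'sensor_readings.csv']
```

The research challenges in this section begin where the sketch ends: making such records tamper-evident, efficient at fine granularity, and meaningful across heterogeneous systems.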
FUTURE DIRECTIONS
of provenance. (R)
Efficiently representing
provenance. An extreme
goal would be to efficiently
represent provenance for every
bit, enabling bit-grained data
transformations, while requiring
a minimum of overhead in time
and space. (RMp)
Scale: the need for solutions that
(RM)
Intrinsic vs. extrinsic provenance
[Table: definition and potential approaches: representation, management, presentation, system engineering, secure implementation, social implications]
provenance. (M)
Resources
With respect to the extensive list of
research gaps noted above, resources will
be needed for research efforts, experimental testbeds, test and evaluation, and
technology transition.
Measures of success
One indicator of success will be the
ability to track the provenance of information in large systems that process and
transform many different, heterogeneous
types of data. The sheer number of different kinds of sensors and information
systems involved and, in particular, the
number of legacy systems developed
without any attention to maintenance
of provenance present major challenges
in this domain.
Red Teaming can give added analysis, for example, assessing the difficulty of
planting false content and subverting
provenance mechanisms.
Also, confidence-level indicators are
desirable, for example, assessing the
estimated accuracy of the information
or the probability that information
achieves a certain accuracy level.
More generally, analytic tools can evaluate (measure) metrics for provenance.
Cross-checking provenance with
archived file modifications in environments that log changes in detail could
identifiable information
connected with embarrassing or
insurance-relevant information
may be used to make life-critical
health care decisions.
An emergency responder system
profession.
Credit history and scoring, for
References
[Bos2005]
R. Bose and J. Frew. Lineage retrieval for scientific data processing: a survey. ACM Computing Surveys,
37(1):1-28, 2005.
[Fos2002]
I.T. Foster, J.-S. Voeckler, M. Wilde, and Y. Zhao. Chimera: A virtual data system for representing, querying,
and automating data derivation. In Proceedings of the 14th Conference on Scientific and Statistical Database
Management, pp. 37-46, 2002.
[Fre2005]
J. Frew and R. Bose. Earth System Science Workbench: A data management infrastructure for earth science
products. In Proceedings of the 13th Conference on Scientific and Statistical Database Management, p. 180,
2001.
[Hey2001]
A. Heydon, R. Levin, T. Mann, and Y. Yu. The Vesta Approach to Software Configuration Management.
Technical Report 168, Compaq Systems Research Center, Palo Alto, California, March 2001.
[LFS]
[OPM2007]
L. Moreau, J. Freire, J. Futrelle, R.E. McGrath, J. Myers, and P. Paulson. The Open Provenance Model.
Technical report, ECS, University of Southampton, 2007 (http://eprints.ecs.soton.ac.uk/14979/).
[Pan2003]
C. Pancerella et al. Metadata in the collaboratory for multi-scale chemical science. In Proceedings of the
2003 International Conference on Dublin Core and Metadata Applications, 2003.
[PAS]
[SPI2007]
M.M. Gioioso, S.D. McCullough, J.P. Cormier, C. Marceau, and R.A. Joyce. Pedigree management and
assessment in a net-centric environment. In Defense Transformation and Net-Centric Systems 2007. Proceedings
of the SPIE, 6578:65780H1-H10, 2007.
[TAP2009]
First Workshop on the Theory and Practice of Provenance, San Francisco, February 23, 2009 (http://www.
usenix.org/events/tapp09/).
[Wid2005]
J. Widom. Trio: A system for integrated management of data, accuracy, and lineage. In Proceedings of the
Second Biennial Conference on Innovative Data Systems Research, Pacific Grove, California, January 2005.
[Zha2004]
J. Zhao, C.A. Goble, R. Stevens, and S. Bechhofer. Semantically linking and browsing provenance logs for
e-science. In Proceedings of the 1st International Conference on Semantics of a Networked World, Paris, 2004.
audit logs
Control of secondary reuse
Remediation of incorrect information that is disclosed, especially if done
national security
Medical emergencies (for example, requiring information about allergic
PRIVACY-AWARE SECURITY
partnerships, and
collaborations need to selectively
reveal proprietary data to a
limited audience for purposes of
least privilege
appropriately
Protecting data in transmission
management
privacy: (http://www.research.
microsoft.com/jump/50709
and http://www.microsoft.com/
mscorp/twc/iappandrsa/research.
mspx)
briefing (http://www.cs.berkeley.
edu/~tygar/papers/ISAT-finalbriefing.pdf )
Naval Research Lab: Reputation
in Privacy Enhancing
Technologies (http://chacs.
nrl.navy.mil/publications/
chacs/2002/2002dingledinecfp02.pdf )
ITU efforts related to security,
program (http://www.dhs.gov/
xlibrary/assets/privacy/privacy_
rpt_advise.pdf )
UMBC Assured privacy
(http://freehaven.net/anonbib)
Statistics research community, as
[Pfi+2001])
FUTURE DIRECTIONS
Policy issues
Distinctions between individual
and group privacy are unclear.
communications privacy
[Table: definition and potential approaches, including specification frameworks]
Medium term:
- Anonymous credentials
- Role-based Access Control (RBAC)
- Attribute-based encryption
- Distributed RBAC: no central
- Measures of success for privacy

Long term:
- Private information retrieval (PIR)
- Multiparty communication
- Use of scale for privacy
- Resistance to active attacks for deanonymizing data
- Developing measures of privacy

Game changing:
- Limited data retention
- Any two databases should be
- Communications resistant to timing attacks
Resources
This topic is research-intensive, with
considerable needs for testbeds demonstrating effectiveness and for subsequent
technology transfer to demonstrate the
feasibility of the research. It will require
privacy.
Risk analysis: This has been
insurance.
Black market price of stolen
identity.
fedstats.gov)
Google Trends
PREDICT (e.g., network traffic
data; http://www.predict.org)
Medical research data
E-mail data (e.g., for developing
spam filters)
Possible experimental testbeds include
the following:
Isolated networks and their users
Virtual societies
References
[Bri+1997]
J. Brickell, D.E. Porter, V. Shmatikov, and E. Witchel. Privacy-preserving remote diagnostics. In Proceedings of CCS '07,
October 29-November 2, 2007.
[Pfi+2001]
A. Pfitzmann and M. Köhntopp. Anonymity, unobservability, and pseudonymity: A proposal for terminology.
In Designing Privacy Enhancing Technologies, pp. 1-9, Springer, Berlin/Heidelberg, 2001.
[Rab1981]
M. Rabin. How to exchange secrets by oblivious transfer. Technical Report TR-81, Aiken Computation
Laboratory, Harvard University, 1981.
[Shib]
Internet2 Shibboleth federated identity and access management project.
Many additional references can be found by browsing the URLs noted above in the text of this section.
USABLE SECURITY
Challenges
Needs
Nontechnical users
Occasional users
System administrators
System designers
Lack of security and/or usability emphasis in education and training
Design standards and documented best practices for usable security
System developers
Policy makers
CAPTCHA (Completely
Automated Public Turing test
to tell Computers and Humans
Apart) is a challenge-response
mechanism intended to ensure
that the respondent is a human
and not a computer. CAPTCHAs
are familiar to most web users
as distorted images of words or
other character sequences that
must be input correctly to gain
access to some service (such
as a free e-mail account). To
make a CAPTCHA effective
for distinguishing humans from
computers, solving it must be
difficult for computers but
relatively easy for humans. This
balance has proven difficult to
achieve, resulting in CAPTCHAs
that are either breakable by
computers or too difficult for
humans. Another challenge is
to produce CAPTCHAs that
accommodate users with special
needs.
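The challenge-response structure described above can be sketched as follows; the image-distortion step, which carries the actual human/computer distinction, is deliberately omitted, so this is only the surrounding protocol skeleton:

```python
import secrets
import string

def make_challenge(length: int = 6) -> str:
    # Random character sequence; a real CAPTCHA would render this as a
    # distorted image (the hard part) rather than serving it as plain text.
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify(expected: str, response: str) -> bool:
    # Case-insensitive, whitespace-tolerant comparison is a common
    # usability concession for human respondents.
    return expected.upper() == response.strip().upper()
```

The usability tension in the text shows up even here: every concession in `verify` (case, whitespace) makes the test easier for humans without helping automated solvers, whereas weakening the distortion helps both.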
Not accounting for cultural
Federated identity
management. Cross-domain
access is complex. Simplistic
approaches such as single sign-on can lead to trust violations.
Conversely, managing too
many passwords is unworkable.
More work is needed on access
cards such as the CAC system,
DoD's Common Access Card
(which combines authentication,
encryption of files and e-mail,
and key escrow) and other such
systems to identify security
vulnerabilities. In all such
systems, usability is critical.
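The single sign-on idea discussed above can be sketched with a signed assertion: an identity provider signs a user name and expiry under a key shared with the relying service, which verifies before granting access. This is an illustrative toy (the key and token format are invented for the sketch), not the CAC or any deployed federation protocol:

```python
import hashlib
import hmac
import time

SHARED_KEY = b"demo-key"  # hypothetical key shared by IdP and service

def issue_token(user, ttl=300):
    """Identity provider: sign the user name and an expiry timestamp."""
    expiry = str(int(time.time()) + ttl)
    msg = f"{user}|{expiry}"
    tag = hmac.new(SHARED_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return f"{msg}|{tag}"

def verify_token(token):
    """Service provider: return the user if the tag and expiry check out."""
    user, expiry, tag = token.rsplit("|", 2)
    msg = f"{user}|{expiry}"
    expected = hmac.new(SHARED_KEY, msg.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(tag, expected) and int(expiry) > time.time():
        return user
    return None
```

Even this toy exposes the trust-violation risk the text mentions: every service holding `SHARED_KEY` can mint tokens for any user, so key distribution, not cryptography, dominates the design.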
Overloading of security
security model.
Integration of biometrics with
FUTURE DIRECTIONS
security (E)
Tool development (T)
policies
security technology
usable security
Usable management of access
controls
Usable secure certificate services
Resistance to social engineering
industry
Medium term
Usable access control mechanisms
(such as a usable form of RBAC)
Usable authentication
Developing a common
Long term
Composability of usable
Resources
Designing and implementing systems
with usable security is an enormously
challenging problem. It will necessitate
embedding requirements for usability
in considerable detail throughout the
development cycle, reinforced by extensive evaluation of whether it was done
adequately. If those requirements are
incomplete, it could seriously impair
the resulting usability. Thus, significant
resources (people, processes, and software development) need to be devoted
to this challenge.
Measures of success
Meaningful metrics for usable security
must be established, along with
generic principles of metrics. These
must then be instantiated for specific
systems and interfaces. We need to
measure whether and to what extent
increased usability leads to increased
security, and to be able to find sweet
spots on the usability and security
curves. Usable security is not a black-and-white issue. It must also consider
returns on investment.
We do not have metrics that allow
direct comparison of the usability of
two systems (e.g., we cannot say definitively that system A is twice as usable as
system B), but we do perhaps have some
well-established criteria for what constitutes a good usability evaluation. One
possible approach would be to develop
a usable solution for one of the exemplar
problems and demonstrate both that
users understand it and that its adoption
reduces the incidence or severity of the
associated attack. For example, demonstrate that a better anti-phishing scheme
reduces the frequency with which users
follow bogus links. Admittedly, this
would demonstrate success on only a
single problem, but it could be used to
show that progress is both possible and
demonstrable, something that many
people might not otherwise believe is
true about usable security.
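The anti-phishing example above lends itself to a simple quantitative check: compare the rate at which users follow bogus links before and after deploying the scheme, and test whether the drop exceeds what chance would explain. A sketch with hypothetical counts (the numbers are invented for illustration):

```python
import math

def relative_reduction(rate_before, rate_after):
    """Fraction by which the bogus-link follow rate dropped."""
    return (rate_before - rate_after) / rate_before

def z_statistic(clicks1, users1, clicks2, users2):
    """Two-proportion z-test: is the observed drop larger than chance?"""
    p1, p2 = clicks1 / users1, clicks2 / users2
    pooled = (clicks1 + clicks2) / (users1 + users2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / users1 + 1 / users2))
    return (p1 - p2) / se

# Hypothetical trial: 80/400 users clicked before, 40/400 after.
# z > 1.96 would indicate significance at the 5% level.
z = z_statistic(80, 400, 40, 400)
```

This is exactly the shape of evidence the paragraph calls for: a measured incidence reduction on one exemplar problem, with a defensible claim that the change is real.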
References
[Cra+2005]
L.F. Cranor and S. Garfinkel, editors. Security and Usability: Designing Secure Systems That People Can Use.
O'Reilly Media, Inc., Sebastopol, California, 2005
(http://www.oreilly.com/catalog/securityusability/toc.html).
[Joh2009]
Linda Johansson. Trade-offs between Usability and Security. Master's thesis in computer
science, Linköping Institute of Technology, Department of Electrical Engineering,
LiTH-ISY-EX-3165, 2001 (http://www.accenture.com/xdoc/sv/locations/sweden/
pdf/Trade-offs%20Between%20Usiability%20and%20Security.pdf ).
[SOU2008]
[Sun+09]
[Whi+1999]
Alma Whitten and J.D. Tygar. Why Johnny can't encrypt: A usability evaluation of PGP 5.0.
In Proceedings of the 8th USENIX Security Symposium, Washington, D.C., August 23-26, 1999,
pp. 169-184 (http://www.usenix.org/publications/library/proceedings/sec99/whitten.html).
Appendix A
Appendix A. Interdependencies Among Topics
This appendix considers the interdependencies among the 11 topic areas, namely,
which topics can benefit from advances in the other topic areas and which topics
are most vital to other topics. Although it is in general highly desirable to separate
different topic areas in a modular sense with regard to R&D efforts, it is also desirable to explicitly recognize their interdependencies and take advantage of them
synergistically wherever possible.
These interdependencies are summarized in Table A.1.
Table A.1: Table of Interdependencies
[Table A.1: an 11-by-11 matrix rating each row topic X's contribution (H, M, or L) to each column topic Y, over the topics 1: Scalable Trustworthiness, 2: Enterprise Metrics, 3: Evaluation Life Cycle, 4: Combatting Insider, 5: Combatting Malware, 6: Global ID Management, 7: System Survivability, 8: Situational Awareness, 9: Provenance, 10: Privacy-Aware Security, 11: Usable Security.]
Almost every topic area has some potential influence and/or dependence on the
success of the other topics, as summarized in the table. The extent to which
topic X can contribute to topic Y is
represented by the letter H, M, or L,
indicating that topic X can make a high,
medium, or low contribution to the
success of Y. These ratings, of course, are
very coarse and purely qualitative. On
the other hand, any finer-grained ratings
are not likely to be useful in this context.
The purpose of the table is merely to
illustrate the pervasive nature of some
relatively strong interdependencies.
A preponderance of H in a row indicates
that the corresponding row topic is of
fundamental importance to other topics.
That is, it can contribute strongly to the
success of most other topics.
Examples: rows 1 (SCAL: all H),
2 (METR: 9 H), 3 (EVAL: 8 H).
The preponderance of H in a column
indicates that the corresponding column
topic is a primary beneficiary of the
other topics.
Examples: columns 11 (USAB: 10 H),
8 (SITU: 8 H), 4 (INSI: 7 H),
5 (MALW: 7 H).
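The row and column readings described above are easy to mechanize. A sketch using a small set of hypothetical H/M/L ratings (invented for illustration, not the actual Table A.1 values):

```python
# ratings[x][y] is topic X's contribution to topic Y, as in Table A.1.
# These entries are hypothetical examples only.
ratings = {
    "SCAL": {"METR": "H", "INSI": "H", "USAB": "H"},
    "METR": {"SCAL": "H", "INSI": "M", "USAB": "H"},
    "INSI": {"SCAL": "L", "METR": "M", "USAB": "H"},
}

def high_count_by_row(r):
    """Topics of fundamental importance: many H entries in their row."""
    return {x: sum(v == "H" for v in row.values()) for x, row in r.items()}

def high_count_by_column(r):
    """Primary beneficiaries: many H entries in their column."""
    cols = {}
    for row in r.values():
        for y, v in row.items():
            cols[y] = cols.get(y, 0) + (v == "H")
    return cols
```

With the real table, the row counts reproduce the "fundamental importance" reading and the column counts the "primary beneficiary" reading given in the examples above.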
Not surprisingly, the table is not symmetric. However, there are numerous
potential synergies here, such as the
following:
Scalable Trustworthy Systems
Privacy-aware security: Of
Topic 2: Enterprise-Level
Metrics (ELMs)
What capabilities from other topic areas
are required for effective progress in this
topic area?
Each of the other topic areas is expected
to define local metrics relevant to its
own area. Those local metrics are likely
to influence the enterprise-level metrics.
How does progress in this area support
advances in others?
Proactive establishment of sensible
enterprise-level metrics would naturally tend to drive refinements of the
local metrics.
Survivability of time-critical systems
manageability of configurations
and remediation of potentially
dangerous system configurations
are included in the design and
operation of those systems.
Scalable trustworthy systems:
Privacy-aware security: As
Reference
[Neu2006]
Peter G. Neumann. Holistic systems. ACM SIGSOFT Software Engineering Notes 31(6):4-5, November 2006.
Appendix B
Appendix B. Technology Transfer
This appendix considers approaches for transitioning the results of R&D on the
11 topic areas into deployable systems and into the mainstream of readily available
trustworthy systems.
B.1 Introduction
R&D programs, including cyber security R&D, consistently have difficulty in
taking the research through a path of development, testing, evaluation, and transition into operational environments. Past experience shows that transition plans
developed and applied early in the life cycle of the research program, with probable transition paths for the research products, are effective in achieving successful
transfer from research to application and use. It is equally important, however, to
acknowledge that these plans are subject to change and must be reviewed often.
It is also important to note that different technologies are better suited for different technology transition paths; in some instances, the choice of the transition
path will mean success or failure for the ultimate product. Guiding principles for
transitioning research products involve lessons learned about the effects of time/
schedule, budgets, customer or end-user participation, demonstrations, testing and
evaluation, product partnerships, and other factors.
A July 2007 Department of Defense Report to Congress on Technology Transition
noted evidence that a chasm exists between the DoD S&T communities focused
on demonstration of a component and/or breadboard validation in a relevant
environment and acquisition of a system prototype demonstration in an operational environment. DoD is not the only government agency that struggles with
technology transition. That chasm, commonly referred to as the "valley of death",
can be bridged only through cooperative efforts and investments by research and
development communities as well as acquisition communities.
In order to achieve the full potential of R&D, technology transfer needs to be a
key consideration for all R&D investments. This requires the federal government
to move past working models in which most R&D programs support only limited
operational evaluations/experiments, most R&D program managers consider their
job done with final reports, and most research performers consider their job done
with publications. Government-funded R&D activities need to focus on the real end
goal, namely technology transfer, which follows transition. Current R&D Principal
Investigators (PIs) and Program Managers (PMs) are not rewarded for technology
transfer. Academic PIs are rewarded for publications, not technology transfer. The
government R&D community needs to reward government program managers
and PIs for transition progress.
There are at least five canonical transition paths for research funded by the
Federal Government. These transition paths are affected by the nature of the
technology, the intended end-user, participants in the research program, and
(Industry)
Department/Agency to Academia to Industry (Start-up)
Department/Agency to Open
TECHNOLOGY TRANSFER
B.3 Topic-Specific
Considerations
In this section, certain issues that are
specific to each of the 11 topics are
considered briefly.
Topic 1: Scalable Trustworthy Systems
8. Actual system completed and qualified through test and demonstration.
Technology has been proven to work in its final form and under
expected conditions. In almost all cases, this TRL represents the end of
true system development. Examples include developmental test and
evaluation of the system in its intended weapon system to determine if
it meets design specifications.
9. Actual system proven through successful mission operations. Actual application of the technology in its final form and under
mission conditions, such as those encountered in operational test
and evaluation. Examples include using the system under operational
mission conditions.
Appendix C
Appendix C. List of Participants in the Roadmap Development
We are very grateful to many people who contributed to the development of this roadmap for cybersecurity research,
development, test, and evaluation. Everyone who participated in at least one of the five workshops is listed here.
Deb Agarwal
Bob Hutchinson
William H. Sanders
Tom Anderson
Cynthia Irvine
Mark Schertler
Paul Barford
Markus Jakobsson
Fred Schneider
Steven M. Bellovin
David Jevans
Kent Seamons
Terry Benzel
Richard Kemmerer
John Sebes
Gary Bridges
Carl Landwehr
Frederick T. Sheldon
KC Claffy
Karl Levitt
Ben Shneiderman
Ben Cook
Jun Li
Pete Sholander
Lorrie Cranor
Pat Lincoln
Robert Simson
Rob Cunningham
Ulf Lindqvist
Dawn Song
David Dagon
Teresa Lunt
Joe St Sauver
Claudiu Danilov
Doug Maughan
Sal Stolfo
Steve Dawson
Jenny McNeill
Paul Syverson
Drew Dean
Miles McQueen
Kevin Thompson
Jeremy Epstein
Wayne Meitzler
Gene Tsudik
Sonia Fahmy
Jennifer Mekis
Zach Tudor
Rich Feiertag
Jelena Mirkovic
Al Valdes
Stefano Foresti
Ilya Mironov
Deb Frincke
John Mitchell
Jim Waldo
Simson Garfinkel
John Muir
Nick Weaver
Mark Graff
Deirdre Mulligan
Rick Wesson
Josh Grosh
Clifford Neuman
Greg Wigton
Minaxi Gupta
Peter Neumann
Bill Woodcock
Tom Haigh
David Nicol
Bill Worley
Carl Hauser
Chris Papadopoulos
Stephen Yau
Jeri Hessman
Vern Paxson
James Horning
Peter Reiher
James Hughes
Robin Roy
Appendix D
Appendix D. Acronyms
A/V  antivirus
AMI
BGP  Border Gateway Protocol
C2  command and control
CAC  Common Access Card
CAPTCHA  Completely Automated Public Turing test to tell Computers and Humans Apart
CASSEE
CERTs  Computer Emergency Response Teams
CMCS
COTS  commercial off-the-shelf
CUI  controlled unclassified information
CVS  Concurrent Versions System
DAC  discretionary access control
DARPA  Defense Advanced Research Projects Agency
DDoS  distributed denial of service
DETER
DHS  Department of Homeland Security
DKIM  DomainKeys Identified Mail
DNS  Domain Name System
DNSSEC  Domain Name System Security Extensions
DoS  denial of service
DRM  digital rights management
ESSW
EU  European Union
FIPS  Federal Information Processing Standards
FISMA  Federal Information Security Management Act
GPS  Global Positioning System
HDM  Hierarchical Development Methodology
HIPAA  Health Insurance Portability and Accountability Act
HSI  human-system interaction
HVM
I&A  identification and authentication
I3P  Institute for Information Infrastructure Protection
IDA
IDE  integrated development environment
IDS  intrusion detection system
INL  Idaho National Laboratory
IPS  intrusion prevention system
IPsec  Internet Protocol Security
IPv4  Internet Protocol version 4
IPv6  Internet Protocol version 6
IRB  institutional review board
ISP  Internet service provider
IT  information technology
LPWA  Lucent Personalized Web Assistant
MAC  mandatory access control
MIT  Massachusetts Institute of Technology
MLS  multilevel security
MTBF  mean time between failures
NIST  National Institute of Standards and Technology
NOC  network operations center
OODA  observe, orient, decide, act
OS  operating system
OTP  one-time password
P2P  peer-to-peer
P3P  Platform for Privacy Preferences
PDA  personal digital assistant
PGP  Pretty Good Privacy
PII  personally identifiable information
PIR  private information retrieval
PKI  public key infrastructure
PL  programming language
PMAF
PSOS  Provably Secure Operating System
PREDICT  Protected Repository for the Defense of Infrastructure Against Cyber Threats
QoP  Quality of Protection
RBAC  role-based access control
RBN  Russian Business Network
RFID  radio-frequency identification
ROM  read-only memory
SBU  sensitive but unclassified
SCADA  supervisory control and data acquisition
SCAP  Security Content Automation Protocol
SIEM  security information and event management
SOHO  small office/home office
SPF  Sender Policy Framework
SQL  Structured Query Language
SRS  Self-Regenerative Systems
SSL  Secure Sockets Layer
T&E  test and evaluation
TCB  trusted computing base
TCP/IP  Transmission Control Protocol/Internet Protocol
TLD  top-level domain
TPM  Trusted Platform Module
TSoS
UI  user interface
UIUC  University of Illinois at Urbana-Champaign
USB  Universal Serial Bus
US-CERT  United States Computer Emergency Readiness Team
VM  virtual machine
VMM  virtual machine monitor