
HOMELAND SECURITY OPERATIONAL ANALYSIS CENTER (HSOAC) RESEARCH REPORT

AN FFRDC OPERATED BY RAND UNDER CONTRACT WITH DHS

DANIEL M. GERSTEIN, ERIN N. LEIDY

Emerging Technology and Risk Analysis
Artificial Intelligence and Critical Infrastructure

The National Institute of Standards and Technology's (NIST's) Computer Security Resource Center defines artificial intelligence (AI) as "[a] branch of computer science devoted to developing data processing systems that performs functions normally associated with human intelligence, such as reasoning, learning, and self-improvement."1 However, no single authoritative AI taxonomy exists because the key technologies, which come from several fields, have each developed specific taxonomies for AI technologies. For example, in "Adoption of Artificial Intelligence in Smart Cities: A Comprehensive Review," the authors identify six key AI technologies used in smart cities: machine learning (including deep learning and predictive analysis), natural language processing (NLP) (including translation, information extraction and classification, and clustering), computer speech (including speech to text and text to speech), computer vision (including image recognition and machine vision), expert systems, and robotics.2 This list is similar to those developed for other applications in which AI is being incorporated but is tailored for smart cities and applies to AI-enabled critical infrastructure.

At a more granular level, tools supporting AI technology development include large-language models (LLMs), neural networks, supervised and unsupervised training, and computational linguistics. Other enabling and converging technologies, such as cyber, big data, and the Internet of Things (IoT), have also become inextricably linked to AI research and development (R&D). These lists and the components that directly relate to and support AI will likely expand as use cases continue to be identified, AI technologies continue to evolve, and operational AI systems are fielded.

In this report, we draw from the literature on smart cities. As such, we consider early AI-embedded capabilities in a city's critical infrastructure to be the initial stages of developing a smart city. Inherent in this assumption is that these infrastructure components are vital to the safe and effective functioning of society, the safety of the population, and the functioning of the economy. In making this assumption, we focus on critical infrastructure because purpose-built smart cities are unlikely to be widespread within the time horizon of this report (i.e., ten years). In preparing this report, we recognize that there are many uses of AI and that our assessment pertains only to critical infrastructure and smart cities within a ten-year time horizon.
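As one concrete illustration of the machine learning (predictive analysis) category listed above, consider the kind of task a smart city might embed in its infrastructure monitoring: flagging anomalous sensor readings against a learned baseline. The sketch below is purely illustrative and not drawn from the report; the sensor values, threshold, and function names are our own assumptions.

```python
import statistics

def fit_baseline(readings):
    """Learn a simple statistical baseline (mean, spread) from normal readings."""
    return statistics.fmean(readings), statistics.pstdev(readings)

def flag_anomalies(readings, baseline, threshold=3.0):
    """Return readings whose z-score against the baseline exceeds the threshold."""
    mean, stdev = baseline
    return [x for x in readings if stdev and abs(x - mean) / stdev > threshold]

# Illustrative data: water-pressure readings from a hypothetical city sensor.
normal = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.4]
baseline = fit_baseline(normal)
print(flag_anomalies([50.2, 49.9, 73.5, 50.1], baseline))  # flags 73.5
```

Production systems would use far richer models, but even this toy version shows the pattern common to the AI-enabled monitoring discussed throughout this report: train on historical data, then score live data against what was learned.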
KEY FINDINGS

■ Artificial intelligence (AI) is transformative technology and will likely be incorporated broadly across society—including in critical infrastructure—as occurred with the internet. We assessed that AI will likely be affected by many of the same factors as other information age technologies, such as cybersecurity, protecting intellectual property, ensuring key data protections, and protecting proprietary methods and processes.

■ The AI field contains numerous technologies that will be incorporated into AI systems as they become available. As a result, AI science and technology maturity will be based on key dependencies in several essential technology areas, including high-performance computing, advanced semiconductor development and manufacturing, robotics, machine learning, natural language processing, and the ability to accumulate and protect key data.

■ To place AI in its current state of maturity, it is useful to delineate three AI categories: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). By the end of the ten-year period of this analysis, the technology will very likely still only have achieved ANI.

■ AI will present both opportunities and challenges for critical infrastructure and the eventual development of purpose-built smart cities. Developing and monitoring systems will benefit from the use of AI-enabled designs that can increase situational awareness and reduce vulnerabilities. However, threat actors will find increased attack surfaces, especially regarding Internet of Things applications that are embedded in critical infrastructure sectors and systems.

■ The ChatGPT-4 rollout in March 2023 provides an interesting case study for how these AI technologies—in this case, large-language models—are likely to mature and be integrated into society. The initial rollout highlighted that there were accuracy and consistency problems, forcing developers to quickly make changes to respond to the issues. This cycle—development, deployment, identification of shortcomings and other areas of potential use, and rapid updating of AI systems—will likely be a feature of AI.

The smart city example provides a road map for how AI-enabled applications are likely to be incorporated into such areas as education; health care; energy; environment, waste, and hazard management; agriculture and irrigation; privacy and security; mobility and transportation; and localized disaster and risk management.3 This list of technologies and applications indicates the degree to which AI will likely become pervasive in all fields. By way of a comparison, we assessed that AI will become as ubiquitous as the internet, mobile cellular devices, and GPS technology. This ubiquity will present both opportunities and challenges for AI use. AI has already increased productivity and created efficiencies in numerous critical infrastructure areas. More broadly, AI could provide the ability to explore and analyze large datasets to develop greater understanding of the natural world, allow interactions across the globe, and enhance the human experience through the promise of personalized medicine.

Yet many also provide admonitions about AI that focus on catastrophic scenarios in which humans lose agency to AI systems. In such scenarios, humans become subservient to the machines they created, or the AI systems have values and goals that are antithetical to those of humanity. As more uses of AI are incorporated into everyday life, such scenarios could become more likely and should be considered and prudently addressed; however, the more mundane uses of AI could also have important adverse consequences that should be considered. Two examples from deployed AI systems include a facial recognition system that performed differently for different demographics,4 and the LLM ChatGPT, which hallucinated court cases in 2023.5 Additionally, generative AI applications could be used to interfere with the 2024 U.S. elections.6

AI R&D efforts have some key dependencies. They rely on vast amounts of data, which are a critical component of training AI systems. Without access to such data, confidence in the AI systems' output and decisions should be suspect. In highlighting the importance of inputs for use in big data analysis (and, for our purposes, AI), variations on the V characteristics of volume, velocity, value, variety, veracity, validity, volatility, and visualization have been used to describe the essential data inputs.7 Should AI training data not consider these characteristics, the chances of miscues and misuse of AI increase.

In seeking to ensure the U.S. Department of Defense's responsible "design, development, and deployment of AI," the Defense Innovation Board identified five principles for AI technologies: that they should be responsible, equitable, traceable, reliable, and governable.8 We assessed that these principles have applicability in many domains and use cases of AI, including for use in AI systems supporting critical infrastructure. More recently, President Joseph R. Biden issued an "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" on October 30, 2023.9 The executive order (EO) establishes "new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more."10

In our analysis, we consider four attributes in assessing the technology: technology availability (TAV), and risks and scenarios (RS), which we have divided into threat, vulnerability, and consequence. The RS considered in this analysis pertain to AI use affecting critical infrastructure. The use cases could be for monitoring and controlling either critical infrastructure or adversaries employing AI for use in illicit activities and nefarious acts directed at critical infrastructure. The RS have been provided by the study sponsors from the U.S. Department of Homeland Security (DHS) Science and Technology Directorate and the Office of Policy. We compared these four attributes across three periods for critical infrastructure applications (see Figure 1).

Overall, we assessed that AI technology will continue to mature during the ten-year horizon of this analysis. The size of the global AI market is expected to grow from $86.9 billion (in 2022) to $407 billion by 2027, with further growth projected by the end of the study period considered.11

We also assess that AI opportunities and challenges will be cumulative. Increasingly, AI will be incorporated into more applications that affect economic prosperity, national security, and societal well-being. In terms of homeland security, we assessed that people's lives, cities, and critical infrastructure will increasingly incorporate AI. The number of threat actors with access to these capabilities should also be expected to grow as the technology proliferates, more use cases are identified, and the full power of AI technology becomes evident. As AI technologies continue to be developed and mature, the power of the technology will expand, sometimes in predictable trajectories and at other times in an unpredictable and often discontinuous (or disruptive) manner.

Technology Description and Scenarios for Consideration

Much of the societal knowledge about AI has come in the form of sound bites. Futurists have highlighted the opportunities and warned of the threats of AI. Movies have sensationalized the dangers of out-of-control AI-enabled technology. Meanwhile, AI has continued to proliferate

FIGURE 1
Artificial Intelligence Risk Assessment for Critical Infrastructure

[Radar chart comparing the TAV average, threat, vulnerability, and consequence ratings on a 0.0–5.0 scale across the short term (up to 3 years), medium term (3–5 years), and long term (5 to < 10 years).]

NOTE: The emerging technology risk assessment scale is 0 to < 2 = low impact or not likely feasible, 2 to < 4 = moderate impact or possible, and 4 to 5 = high impact or likely feasible.
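The scale in the note, together with the four axes shown in the figure, can be captured in a small helper. This sketch mirrors the methodology's roll-up of averaging the threat, vulnerability, consequence, and TAV average ratings; the function names and sample values are ours, not the report's.

```python
def classify(score):
    """Map a 0-5 emerging-technology score to the impact bands in the note."""
    if not 0 <= score <= 5:
        raise ValueError("score must be between 0 and 5")
    if score < 2:
        return "low impact or not likely feasible"
    if score < 4:
        return "moderate impact or possible"
    return "high impact or likely feasible"

def overall_risk(threat, vulnerability, consequence, tav_average):
    """Roll the four Figure 1 axes into one score by simple averaging."""
    return (threat + vulnerability + consequence + tav_average) / 4

score = overall_risk(threat=3.0, vulnerability=4.0, consequence=2.5, tav_average=3.5)
print(round(score, 2), "->", classify(score))  # 3.25 -> moderate impact or possible
```

Repeating this calculation for each of the three time periods would reproduce the kind of comparison the radar chart displays.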

throughout various fields and purposes, mostly in the past 20 years.12 This includes uses in critical infrastructure applications, such as fraud detection in the banking industry and monitoring and control of industrial control systems.13 We examined AI applications as they pertain to the 16 DHS critical infrastructure sectors. However, we do not seek to provide a detailed history, suggest a single authoritative definition, or capture the many AI use cases likely to be discovered for use in critical infrastructure–related applications.

This report was also written during the early stages of AI development, as humankind continues to learn more about the uses and misuses of AI. The study considers only a ten-year time horizon, long before "AI would integrate new technologies, into a 'singularity,'" which is a "hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization."14

To place AI into its current state of maturity, it is useful to delineate three AI categories: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). The global Millennium Project describes these categories: "Artificial narrow intelligence is often better and faster than humans in, for example, driving trucks, playing games, and medical diagnostics. Artificial general intelligence is [the] hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. Artificial super intelligence sets its own goals independent of human awareness and understanding."15 Today, the technology has achieved only ANI, though some argue that several applications have demonstrated early AGI.16

Methodology

We developed a framework for assessing the risks of emerging technologies. The assessment included an evaluation of the TAV and potential RS for which a technology could be used. The TAV evaluation covered five areas: science and technology maturity; use case, demand, and market forces; resources committed; policy, legal, ethical, and regulatory impediments; and technology accessibility.17 The RS evaluation covered threat, vulnerability, and consequence. The ratings for the TAV and RS categories ranged from 1 to 5, where 1 corresponds to many challenges and 5 to very few, if any, challenges. The five TAV areas were averaged and included in the emerging technology risk assessment. To allow the comparisons of emerging technologies in our assessment of the consequences, we rated impacts according to the likely affected level (national, regional, or local), potential mortality and morbidity, and likely economic and societal disruption. By averaging the threat, vulnerability, consequence, and TAV average calculated previously, an emerging technology risk for a particular scenario could be assessed as low (0 < 2), moderate (2 < 4), or high (4 to 5).

These assessments were repeated for three periods: short term (up to three years), medium term (three to five years), and long term (five to ten years). This allowed us to assess individually and collectively how the TAV and RS would be affected over time. In the assessment, we considered how the threat, vulnerability, and consequence evolved over time, as well as whether any preparedness, mitigation, and response activities had been undertaken that could reduce the risk.

Technology Availability Assessment

The TAV assessment was conducted without regard to the specific RS. Those factors were addressed in the subsequent RS assessment. We separated these assessments in this way to isolate the effects of the changes in technology over the ten-year study time frame.

There are several interesting points of departure for this TAV assessment. First, the many technologies that comprise AI mature along different time scales, making it challenging to identify an exact technology readiness level (TRL) for the field of AI. As a result, a more productive approach to examining AI availability is to associate the TRLs of the individual AI technologies with the use cases that are likely to be developed. Second, given the

potential of AI growth to become "uncontrollable and irreversible, resulting in unforeseeable changes to human civilization,"18 the development of the technology and associated guardrails should be considered simultaneously to ensure that this runaway condition does not occur. Third, developers, users, and even society should expect that AI applications will have vulnerabilities and challenges that are inherent in the associated technologies used in AI applications—we assessed that this effect will be cumulative. As an example, data science, the cyber domain, and the IoT have vulnerabilities that will combine and converge within AI-enabled platforms. These effects will need to be mitigated to ensure the safe use of AI.

AI is already being used in applications directly related to critical infrastructure. Examples include health care (e.g., diagnosis of patients and predicting patient outcomes), finance (e.g., detecting fraud and improving customer service), transportation (e.g., developing self-driving vehicles and predictive maintenance), and manufacturing (e.g., optimizing production and improving quality).19 Despite these uses, we assessed that these are narrow AI uses that reflect areas in which technology maturity has been achieved. However, these remain early use cases, which are likely to become more sophisticated as AI technology areas and the individual component technologies continue to mature.

Science and Technology Maturity

The AI field contains numerous technologies that will be incorporated into AI systems as they become available. We could also decompose these technology fields into lower levels. As a result, the progress in AI is based on key dependencies in several essential technology areas, including high-performance computing, advanced semiconductor development and manufacturing, robotics, machine learning, NLP, and data science (including accumulating and protecting key data). Specific applications also mature according to their own timelines. One analysis from ContextualAI indicates that handwriting analysis, speech recognition, image recognition, reading comprehension, and language understanding have all surpassed human performance, while other tasks—such as common-sense completion, grade school math, and code generation—have achieved approximately 85 to 98 percent of human performance.20 One could argue that AI in those narrow applications (in which it has already exceeded human performance) has achieved AGI-like capabilities.

All of these fields, regardless of where they fit in the overall AI development ecosystem, will mature according to their own timelines, and each of the different use cases incorporating these technologies will also mature along their own timelines. As such, seeking to identify a single AI TRL would not be fruitful. However, understanding the individual TRLs of the key supporting technologies within a specific use case, such as critical infrastructure, will be essential for predicting how AI will mature.

Understanding AI's relationships to these other key technologies will be essential and will likely affect the long-term prospects for AI systems. For example, debates are occurring about intellectual property (IP) protections relevant to AI systems. AI systems require vast amounts of data. Some experts have spoken about ingesting the cumulative totals of all of the books ever written within an AI platform when conducting supervised learning.21 However, the creators of this content—including authors and artists—have raised concerns about not being compensated for their role in producing the inputs for the AI platforms.22 This creates a challenging dilemma in that AI developers might decide to forgo key potential inputs that have IP rights protections. However, doing so likely excludes important work that could be helpful in training AI models. How this issue is addressed has implications for the AI systems that are developed. This also has implications for the policy, legal, ethical, and regulatory impediments to developing both AI foundation and transformer models.

AI neural networks use these foundation and transformer models, which are pretrained for NLP and computer vision using large volumes of data and computing power.23 Foundation models provide a starting point for developing more-advanced and -complex models, while transformer models use a neural network structure and unsupervised learning.24 Both models require vast data inputs that consider the V characteristics of volume, velocity, value, variety, veracity, validity, volatility, and visualization; failure to do so increases the chances of miscues and misuses of AI.25

The ChatGPT-4 rollout in March 2023 provides an interesting case study for how these AI technologies—in this case, LLMs—are likely to mature and be exposed to society. Previous versions of the software had been in development for years, largely for R&D purposes. With ChatGPT-4, the software captivated the interest of broader society. Even casual users were intrigued by the new AI platform. However, the initial rollout highlighted that there were accuracy and consistency problems within the platform, forcing the developers to quickly

make changes to respond to the issues that were being identified.26 The same type of early prototyping is likely to occur with regard to critical infrastructure because individual systems that together will comprise a future AI architecture are themselves AI-enabled technologies. The cycle—development, deployment, identification of shortcomings and other areas of potential use, and rapid updating of AI systems—will likely be a feature of AI. Not surprisingly, this approach is common to many technologies, such as the internet, social media, and now AI. For critical infrastructure, this will mean that subsystems that rely on big data, cyber networks, high-performance computing, and the IoT will be incorporated before the limitations are fully understood; it also means that the stakes are likely to be very high should an AI-enabled system fail catastrophically, especially in a critical infrastructure sector.

Mitigating such challenges implies the codevelopment of the technology and appropriate mitigations. The values and goals of the AI platforms must be clearly understood, and limitations developed to prevent deviation from human expectations and norms. We discuss this in more detail in the next section about use cases. We also assess that a better understanding of the use of unsupervised learning must be developed; as unsupervised learning is allowed to occur in AI systems, the traceability of the systems becomes more challenging. The AI platforms should also incorporate the use of generative adversarial networks (GANs). GANs are made of two neural networks: the generator and discriminator. Generators "produce realistic fake data from a random seed," and discriminators learn to "distinguish the fake data from realistic data."27 This is what allows AI systems "to learn from a set of training data and generate new data with the same characteristics as the training data."28 It also provides a capability to assess the veracity of the AI, as well as identify deepfakes and other adulterations of the input data.

The development of these AI systems and countermeasures remains challenging. As an example, NATO conducted a test to determine whether AI could help critical infrastructure withstand cyberattacks. The goal was "to test and measure AI's efficiency in collecting data and assisting teams in responding to cyberattacks against critical systems and services, along with highlighting the need for tools that improve collaboration between humans and machines to reduce cyber risk."29 The findings highlighted that AI "can act without human intervention, can help identify critical infrastructure cyberattack patterns and network activity, and detect malware to enable enhanced decision-making about defensive responses" while also warning that "numerous key government entities are flying blind on critical infrastructure security, having failed to implement most recommendations related to protecting critical infrastructure since 2010."30

Additionally, these same AI capabilities—including GANs—which can provide advances in cybersecurity for defenders, can also be used to develop advanced capabilities for cyber attackers. AI systems could be trained to adapt their behavior to encourage defenders to reach erroneous conclusions or even bait defender systems before attacking using different tactical approaches.31 They could also develop new malware versions that are more likely to bypass antivirus scanners,32 conduct reconnaissance of cyber networks, or identify opportunities for and conduct social engineering attacks to penetrate networks.33

Use Case, Demand, and Market Forces

Slowly, in the past 20 years, use of AI has increased. In many cases, users do not even realize that they are interacting with the technology. Consider Siri and Alexa, which have continued to add increasingly sophisticated AI to their systems. Forbes reports that more than half of U.S. mobile phone users use voice search daily. More recently, there has been a growing interest in technologies that

employ AI. ChatGPT-4 had more than 1 million users in Development of AI, especially in these early stages,
the first five days that it was deployed.34 requires access to key technologies that are currently
Industry has also become interested in continuing to limited by resource availability. As a Washington Post
proliferate AI capabilities, driven largely by the need for October 2023 account offered, “To build AI at any kind
efficiencies. We assessed that these market forces are of meaningful scale, any developer is going to have core
likely to continue to drive this increasing AI use. For exam- dependencies on resources that are largely concentrated
ple, one survey reports that “97% of business owners in only a few firms.”40
believe that ChatGPT will benefit their businesses.” This Four areas in particular demonstrate the costs
same survey said that “64% of business owners believe AI and resources required for developing AI technolo-
has the potential to improve customer relationships” and gies: specialized equipment (e.g., semiconductors and
that “over 60% of business owners believe AI will increase high-performance computers), data, infrastructure, and
productivity.”35 human capital. Semiconductors are a key component
An increasingly large market for AI technologies of AI systems, and the costliest and hardest-to-develop
exists. One report indicates that three factors are driving semiconductors continue to be a resource constraint.41
the increase in size of the total addressable market: the High-performance computing is also in short supply but
increasing availability of data, the falling cost of computing continues to be essential to handling the extremely large
power, and a growing talent pool. Key industries cited in volumes of data that will be necessary to gain confidence
the report include health care, manufacturing, retail, and in the AI systems being developed.
finance.36 Data are an essential component of AI, and confi-
The same report from Grand View Research identi- dence in the AI models grows as the amount of data used
fies the 2022 market as being worth $136.55 billion and in developing and training the models increases. The
projected an expansion of the compound annual growth Washington Post article referenced earlier adds, “Training
rate of 37.3 percent from 2023 to 2030, resulting in a ‘generative’ AI systems like chatbots and image genera-
revenue forecast of $1.811 trillion in 2030. The report tors is hugely expensive. The technology behind them has
also expanded the number of sectors that will specifically to crunch through trillions of words and images before it
benefit from AI to include health care, finance, law, retail, can produce humanlike text and photorealistic pictures
advertising and media, automotive and transportation, from simple prompts,” requiring “thousands of special-
agriculture, and manufacturing.37 In comparing corporate ized computer chips sitting in huge data centers that use
adoption of AI in China and the United States, a Forbes enormous amounts of energy.”42
survey found that 58 percent of companies in China were On the confluence of the high-performance comput-
deploying AI, while only 25 percent of U.S.-based com- ing increases and data capacity, we have seen that “for a
panies were using AI.38 Still, the United States appears to model trained on one trillion data points, a model trained
have an advantage in technology development, especially in 2021 required ~16,500x less compute than a model
in basic and applied research and early development.39 trained in 2012.”43 Two LLMs, Alphabet’s Pathways Lan-
Of course, with any new technology with such dis- guage Model 2 and Meta’s Large Language Model Meta
ruptive potential, there are likely going to be concerns. AI, both developed in 2023, used 3 trillion and 1 trillion
Some people have expressed unease about the potential data points, respectively.44 Infrastructure, including labo-
loss of jobs to AI technologies. Others have expressed ratories for AI R&D, fabrication plants for manufacturing
concerns about becoming overly dependent on AI. Still semiconductors, and the skilled workforce necessary to
others have expressed concerns about not adhering develop the AI systems, represents key technology areas
to key AI principles and the potential for loss of human that will be essential to AI development. The entry costs
agency. for many of these essential feeder technologies are often
quite high, which limits access to technology develop-
ment capabilities.
Resources Committed Human capital provides a key barrier to entry and
AI presents an interesting dichotomy between the cost to remains in short supply, despite recent growth in the AI
develop the technology and the cost to use the technol- workforce. AI researchers tend to be a homogeneous
ogy. This will become more pronounced as the science cohort; “[s]ixty-two percent were concentrated in 10 elite
and technology maturity continues and more AI systems universities and top companies,” and 70 percent of
and use cases are identified and become available. U.S.-based AI researchers “were foreign-born or foreign-
8 \\ EMERGING TECHNOLOGY AND RISK ANALYSIS: ARTIFICIAL INTELLIGENCE AND CRITICAL INFRASTRUCTURE

Despite concerns about AI, there are few policy,


legal, ethical, and regulatory impediments to its
development.
educated.”45 Men made up 94 percent of U.S.-based AI researchers. The “most-published and highest-h-indexed researchers” worked in companies, rather than in academia.46

The resource dichotomy comes in comparing the relatively high cost and complexity of developing AI systems and the relatively low cost and ease of use of interacting with AI systems. The CEO of OpenAI stated that “GPT-4 cost over $100 million to train,”47 but the product is available online with little or no payment required, depending on the task to be undertaken. We assessed that these trends are likely to continue.

Policy, Legal, Ethical, and Regulatory Impediments

Despite concerns about AI, there are few policy, legal, ethical, and regulatory impediments to its development. Many experts, government officials, industry leaders, policy experts, and concerned members of the public have spoken about the emerging dangers of unfettered AI being introduced into the wild, but there is little to impede its development. In the past several years, numerous documents have described the growing global and national pervasiveness of AI and made recommendations for managing the technology.

Emphasizing the importance of AI to the future of the U.S. economy and national security, on February 11, 2019, President Donald Trump issued an EO directing federal agencies to ensure that the United States maintains its leadership position in AI. Among its objectives, the EO intends to “[e]nsure that technical standards . . . reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.” The EO directs the Secretary of Commerce, through NIST, to issue “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.”48 This plan was prepared with broad public- and private-sector input.49

A December 2020 EO, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” directs federal agencies to inventory their AI use cases and share their inventories with other government agencies and the public. The EO describes nine “Principles for Use of AI in Government,” which require that when “designing, developing, acquiring, and using AI in the Federal Government, agencies shall adhere to the following Principles:
(a) Lawful and respectful of our Nation’s values. . . .
(b) Purposeful and performance-driven. . . .
(c) Accurate, reliable, and effective. . . .
(d) Safe, secure, and resilient. . . .
(e) Understandable. . . .
(f) Responsible and traceable. . . .
(g) Regularly monitored. . . .
(h) Transparent. . . .
(i) Accountable.”50

A NIST AI risk management framework (RMF) was released on January 26, 2023, and “is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”51 The NIST AI RMF Playbook serves as the accompanying document to the AI RMF and contains four functions—govern, map, measure, and manage—that provide (voluntary) guidance for developing specific actions and outcomes.52 The National Artificial Intelligence Advisory Committee year 1 report from May 2023 offers further guidance on AI, stating, “direct and intentional action is required to realize AI’s benefits, reduce potential risks, and guarantee equitable distribution of its benefits across our society.” It goes on to emphasize the importance of responsible governance, beginning with “standards and best practices.”53 Still, there is little that can be considered anything beyond guidance and voluntary measures to improve the development and use of AI.

The Biden administration announced new actions that will further promote responsible American innovation in AI and protect people’s rights and safety.54 An AI Bill of Rights includes five principles “for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values”: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.55 As with other initiatives, these principles are guidance and do not reflect mandates for AI development or use.

As highlighted earlier, President Biden issued the EO, which establishes “new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”56 This EO potentially represents an important inflection point in AI development, management, and use; however, as with all such policy documents, implementation of the EO and follow-through will be key.

Export controls serve to restrict the proliferation of key AI technologies. One source comments, “The technology underpinning AI is heavily regulated when it is exported from the US. The data used to train AI also could be considered ‘technical data’ under the Export Administration Regulations (EAR) and International Trade in Arms Regulations (ITAR), especially if the data [are] proprietary (some data sources, including publicly available information, educational information, and fundamental research, generally are not subject to export restrictions). Running afoul of these regulations can result in criminal and civil sanctions, including jail time, fines, and being prohibited from exporting.”57 The Committee on Foreign Investment in the United States (CFIUS) also “establishes mandatory declaration requirements for foreign investment transactions involving critical technologies.” These regulations limit sharing AI technologies.58 A recent expansion of the restrictions could halt the flow of advanced Nvidia semiconductors to China—Nvidia is concerned because the chips were designed to be export-compliant and one-fifth of Nvidia’s revenue comes from Chinese sales.59

Regarding international AI concerns, the United Nations Security Council conducted its first debate on AI on July 23, 2023. In his remarks, Secretary-General António Guterres focused on the “opportunity to consider the impact of artificial intelligence on peace and security—where it is already raising political, legal, ethical and humanitarian concerns.”60 He has already spoken forcefully about lethal autonomous weapon systems on the battlefield.

The European Union (EU) should likely be considered the most stringent governing body in developing policy, legal, ethical, and regulatory measures regarding AI. For example, the 2018 EU general data protection regulation (GDPR) relates to data that would be useful in developing AI foundation and transformer models. The EU is also proposing a European law on AI—the EU AI Act—which develops three risk categories: “(a) systems that create an unacceptable risk, which will be banned, (b) systems that are high risk—that will be regulated, and (c) other applications, which are left unregulated.”61 Debates continue regarding “exact criterion and specifics of the law.”62 Despite the continued debate, some are already indicating that this act could serve as a model for others to follow.

Industry has also talked about the need for a broad understanding that responsible AI development will likely need assistance by way of laws, policies, and regulations. At President Biden’s request, seven U.S.-based AI companies (including Amazon.com, Inc., Alphabet, Inc., and Meta Platforms, Inc.) “promised to allow outside teams to probe for security defects and risks to consumer privacy” as an initial measure.63 Google’s Recommendations for Regulating AI states directly, “[W]hile self-regulation is vital, it is not enough. Balanced, fact-based guidance from governments, academia and civil society is also needed to establish boundaries, including in the form of regulation.”64 It is noteworthy that other industries have already established policies, regulations, standards, norms, and ethics, either nationally or internationally.65

Congress has also begun to deliberate on AI issues but appears to be in the early stages, with concrete measures likely to come well in the future. If the cyber legislation and social media deliberations are any indications, progress will take time because balancing protections, risk, stakeholder prerogatives, and innovation will likely prove challenging. Finally, the American Bar Association’s Legal Rebels podcast summarizes the overall progress toward specific AI regulation or legislation as follows: “At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation. However, there are existing laws and regulations that touch upon certain aspects of AI, such as privacy, security, and anti-discrimination.”66

Despite all of the activities surrounding AI, there are relatively few impediments that would hinder development or use of AI systems other than those caused by export control provisions, resource shortfalls, or knowledge.
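The EU AI Act’s three-tier structure described above can be sketched as a simple triage rule. This is a hypothetical illustration only—the example use cases assigned to each tier below are assumptions for demonstration, not the Act’s actual annex listings:

```python
# Hypothetical triage of AI use cases into the proposed EU AI Act's three
# risk categories: unacceptable risk (banned), high risk (regulated), and
# other applications (left unregulated). Example use cases are assumptions.

UNACCEPTABLE = {"government social scoring"}
HIGH_RISK = {"critical infrastructure control", "credit scoring"}

def eu_ai_act_tier(use_case: str) -> str:
    """Return the regulatory treatment implied by the Act's three tiers."""
    if use_case in UNACCEPTABLE:
        return "banned"
    if use_case in HIGH_RISK:
        return "regulated"
    return "unregulated"
```

The point of the sketch is the default: anything not explicitly enumerated as unacceptable or high risk falls through to the unregulated tier, which is where much of the ongoing debate about the Act’s “exact criterion and specifics” resides.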
Technology Accessibility

In the last category of TAV, we assessed that it is unlikely that average people could develop advanced AI technology given the need for sophisticated semiconductors, high-performance computing, and the volume of data necessary to conduct supervised and unsupervised training.67 However, as identified in three other TAV categories—use case demand and market forces; resources committed; and policy, legal, ethical, and regulatory impediments—there is little that would slow the use of the technology once the initial foundation or transformer models have been developed. Consider that voice and video open-source generators have already been used in making convincing deepfakes.68 The adoption of ChatGPT-4 serves as a recent example of how rapidly the AI technology could proliferate if users sense that there is a potential advantage to incorporating the technologies. In fact, ChatGPT-4 allows users to customize the AI platform for specific applications. This will occur despite concerns that GANs can be used to develop attacks against critical infrastructure.

In a November 2023 speech at the Carnegie Endowment, Arati Prabhakar (director of the Office of Science and Technology Policy at the White House and former DARPA director) illustrated the rapidly increasing availability of AI along with its decreasing costs. Prabhakar stated that in early 2023, a powerful AI model could cost $100 million to develop, but with advancements—including better engineering, more efficient algorithms, and more open-source models out in the world—the landscape has changed dramatically: “Many organizations and companies are developing powerful general-purpose models for under $1 million, and if you look at what people are able to do, customizing models that are based on open source, those can be turned into specific, fairly powerful, narrower applications [with] . . . a few hours of effort on commodity hardware—just off-the-shelf computing. That’s $100, not $100 million.”69

The growing use of AI for national security purposes, including for monitoring critical infrastructure systems, suggests that proliferation of AI systems will continue. Furthermore, we assessed that critical infrastructure and eventually smart cities by their nature will require the use of AI technologies to gain the benefits of a “municipality that uses information and communication technologies (ICT) to increase operational efficiency, share information with the public and improve both the quality of government services and citizen welfare.”70

Overall Technology Availability

AI applications will continue to become integrated across new markets and use cases, leading to increased TAV. We assessed that the applications will likely continue to become more sophisticated and provide greater capabilities for users. Applications, such as querying LLMs to provide information about how to conduct an attack on critical infrastructure, will also mature, thus providing more details that could be useful in perpetrating such an attack. We assessed that applications—such as using AI to identify cyber insecurities in networks—will also continue to be integrated into cybersecurity protocols. Such capabilities will be dual use, in that they could also be used to identify cyber vulnerabilities for the purpose of gaining access to networks for a variety of illicit purposes. Once these AI systems are in use, changes to their inputs—through supervised training with erroneous data or through unsupervised training (which makes traceability more challenging)—are likely to increase.

Table 1 provides our assessment of the potential for AI TAV for critical infrastructure applications in the short term, medium term, and long term. To reiterate, our findings from this TAV assessment do not necessarily indicate that the technology will be used for illicit purposes—this possibility is addressed in the RS discussion. However, they do indicate that TAV barriers have been considerably lowered and that the potential for employing AI capabilities for illicit purposes will grow.

TABLE 1
Artificial Intelligence Technology Availability for Critical Infrastructure Applications

                              Science and   Use Case,                      Policy, Legal,
                              Technology    Demand, and     Resources      Ethical, and Regulatory   Technology       TAV
Scenario                      Maturity      Market Forces   Committed      Impediments               Accessibility    Average
Short term (up to 3 years)    2             2.5             3              5                         2                2.9
Medium term (3–5 years)       2.5           3               4              5                         2.5              3.4
Long term (5–10 years)        3             3.5             5              4                         3                3.7

NOTE: The emerging technology risk assessment scale is 0 to < 2 = low impact or not likely feasible, 2 to < 4 = moderate impact or possible, and 4 to 5 = high impact or likely feasible.

In our evaluation of the individual TAV categories, we assessed that science and technology maturity will continue to increase. To date, many of the AI applications available to users have been more like demonstrations or prototypes. Overall, we assessed that AI science and technology maturity will continue to increase as the R&D in individual AI technologies matures. However, we also assess that it is extremely unlikely that widespread AGI will be available by the end of the ten-year period under consideration; some narrow AI applications might suggest that AGI has been achieved, but these cases will be limited and likely result in a consensus that general intelligence has not yet been achieved.

As AI continues to be employed, additional use cases, demand, and market forces will be identified, and the technology will become more readily available during the ten-year study time frame. For example, AI applications are likely to be identified across even more critical infrastructure sectors and applications. To date, many of the envisioned illicit use cases have focused on using AI to gather information (e.g., asking an LLM to provide a formula for a toxin). However, we assessed that there will be a shift toward using AI to compile information about taking actions, such as penetrating critical infrastructure. In terms of both science and technology maturity and use cases, demand, and market forces, we assessed that ChatGPT-4’s experience will recur for other breakthroughs in the technology. To reiterate, the systems will be made available in early stages of maturity, and users will effectively serve as testers of the applications. Once errors are discovered, developers will undoubtedly upgrade the AI platforms. However, this approach would not be acceptable in many critical infrastructure applications, in which high reliability and risk aversion are important considerations in the development and fielding of new capabilities. For such applications, integration and testing will be required to ensure that full functionality, safety, and failure modes are well understood and within established limits.

The resource dichotomy will remain. We assessed that developing complex AI systems will be outside the capability of most users who might seek to develop new AI systems, but using AI systems for a variety of licit and illicit purposes is likely to become more prevalent. Policy, legal, ethical, and regulatory impediments will not present significant barriers to the employment of AI technologies. Given the ratings for the other four elements of TAV, we assessed that the final category—technology accessibility—would go from low to moderate by the medium term and remain moderate through the end of the ten-year period under consideration.

The Risk Assessment

This risk assessment focuses on the threat, vulnerability, and consequence to critical infrastructure of employing AI technologies for either licit or illicit purposes.71 DHS’s Cybersecurity and Infrastructure Security Agency (CISA) plays a pivotal role in critical infrastructure because it coordinates with the 16 sectors across an array of areas, including regulations and standards and preparedness and response.72

We will not conduct an exhaustive analysis across all 16 sectors, but rather will identify the types of threat, vulnerability, and consequence that are possible. As a reminder, in this report, we draw from the literature on smart cities and consider early AI-embedded capabilities

in a city’s critical infrastructure to be the initial stages of developing a smart city. Inherent in the assumption is that these infrastructure components are vital to the safe and effective functioning of society, the safety of the population, and the functioning of the economy. In making this assumption, we focus on AI-enabled critical infrastructure because purpose-built smart cities are unlikely to be widespread within the time horizon of this report (i.e., ten years).

Although our analysis does not pertain exclusively to smart cities, we use the smart city concept as an example because such cities ordinarily contain a majority of the critical infrastructure sectors and the digital capabilities that could use AI capabilities for monitoring and control and that could be targeted by adversarial AI systems.73 Smart cities build upon the IoT, which is composed of networked sensors and actuators that offer situational awareness for a city and its critical infrastructure, as well as the ability to remotely control and manage major functional parts of the city.74

Our RS analysis builds on the previous TAV section, in which we concluded that AI deployment remains in the early stages for many use cases but has achieved important penetration in other areas, including some critical infrastructure sectors, such as the financial services sector. In considering AI’s likely effect on risk, one source offers, “AI is expected to play a foundational role across our most critical infrastructures,” but the source also goes on to raise several important questions about AI’s use, including “whether it’s capable of taking on such vital tasks, collaborative enough to cooperate with humans and trustworthy enough to prove its transparency, reliability and dependability.”75

Threat

AI threats to critical infrastructure could emanate from several possible sources. AI could be used in developing and monitoring critical infrastructure, which could provide benefits for optimizing designs, improving efficiency, and ensuring safety. However, using AI in this way also has the potential to create threats if inputs used in the design of the infrastructure—say, a bridge—had been either incorrectly used or tampered with.

In this way, AI provides advances for both attackers and defenders. A Microsoft report states, “AI technologies can provide automated interpretation of signals generated during attacks, effective threat incident prioritization, and adaptive responses to address the speed and scale of adversarial actions. The methods show great promise for swiftly analyzing and correlating patterns across billions of data points to track down a wide variety of cyber threats.”76 Still, GAN technology could also be used to effectively perpetrate an attack.77

The increasing incorporation of IoT capabilities increases vulnerability to cyberattacks. As one source notes, “AI impacts the power grid system through its capacity to absorb usage pattern data and deliver precise calculations of prospective demand, making it a prime technology for grid management.”78 However, it also increases the digital footprint and entry points for hackers to target in a cyberattack. This increased attack surface creates other potential vulnerabilities that could be exploited. AI systems could be used to conduct network reconnaissance, develop network penetration plans, and even conduct attacks, all without human intervention.79

AI systems—including using GANs and machine learning—have developed malware that is effective at evading cyber defenses, allowing the enhanced malware to bypass antivirus scanners.80 Research has also demonstrated how GANs have been effective at targeting individuals for social engineering attacks capable of penetrating networks based on their social media presence; the findings demonstrated that “threat actors can enhance the effectiveness of their phishing attacks by using AI as a malicious tool.”81

Insider threats represent another category of activity that could be used in targeting critical infrastructure during multiple stages of AI, from development to use of the technology. During development of AI systems, insiders could create insecurities and back doors that could be targeted for exploitation. Data from supervised or unsupervised learning could have been adulterated in the development phase, resulting in incorrect training of the foundation or transformer models. This could cause errant signals to be produced regarding sensor information and even result in operators taking unnecessary or dangerous actions.

The degree of autonomy granted to AI systems also represents a possible threat. In a smart grid, for example, the industrial control systems could alert the operator that the system is malfunctioning and call for action to be taken. A human operator or even an AI–human operator team might do further interrogation of the system to validate the fault signal, whereas an AI operator could decide to take an immediate action based on its incorrect training. Either of the cases could present issues—the human and AI–human operator team might be too slow to act, and the AI operator could act quickly but be taking erroneous actions.
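The alert-validation trade-off in the smart-grid example can be sketched as a simple decision gate. This is an illustrative sketch only—the report does not specify an implementation, and the action names, confidence threshold, and return values here are assumptions:

```python
# Illustrative human-in-the-loop gate for an AI operator in a smart grid.
# Assumption: high-consequence actions require human confirmation before
# execution, mirroring the report's concern about validating fault signals
# rather than letting an AI operator act immediately on erroneous training.

HIGH_CONSEQUENCE = {"open_breaker", "shed_load", "island_substation"}

def dispatch(action: str, ai_confidence: float, human_confirmed: bool) -> str:
    """Decide whether a recommended action executes, escalates, or is held."""
    if action in HIGH_CONSEQUENCE and not human_confirmed:
        # Interrogate the system first: a human (or AI-human team) validates.
        return "escalate_to_operator"
    if ai_confidence < 0.9:
        # Low-confidence recommendations are held for review rather than run.
        return "hold_for_review"
    return "execute"
```

For example, `dispatch("shed_load", 0.99, False)` escalates to the operator even at high model confidence, trading speed for the validation step the report describes; routine, low-consequence actions proceed without that delay.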
The threats discussed previously represent a small subset of possible concerns resulting from AI-generated activities targeting critical infrastructure. Finally, we assessed that threats will continue to be cumulative. By this, we signal that all technologies that contribute to AI development and use—such as the ability to accumulate and protect key data, high-performance computing, semiconductor development and manufacturing, robotics, machine learning, and NLP—could be targeted and should be considered as having potential vulnerabilities. This includes the enabling and converging technologies, such as cyber, big data, and IoT.

Vulnerability

In the “Threat” section, we allude to potential vulnerabilities; in this section, we provide a conceptual approach to considering critical infrastructure vulnerabilities. In this approach, we begin from the premise that an important driver of the vulnerability is the use case for the AI systems. Here we assessed that the scale, scope, and complexity of the applications are important to the vulnerability caused by the AI system. For our purposes, scale refers to the size of the affected population, while scope refers to the number of applications into which an AI system is incorporated.

AI systems confined in a defined system will likely not be as concerning as more-general applications that cut across numerous areas and applications. Consider medical applications in the field of oncology, in which AI-based technology is being used to characterize tumors; such abilities have proven quite useful.82 Given that the treatment protocols for the diagnosis rely on human–machine teaming, few would likely voice objections to this use of AI. However, as the scope and scale of the applications increase, it is likely that concerns will surface.

A transportation sector that includes self-driving cars and long-haul trucks empowered by AI introduces vulnerabilities that some have argued raise too many issues, including loss of agency for those riding in the vehicles, liability and insurance concerns should an accident occur, and the regulation and standardization of the vehicles.83 These same concerns could be magnified if an entire sector were to turn to AI, such as the financial services sector. AI has already been incorporated into the traditional financial services sector for protecting financial cyber networks, fraud detection, loan decisions, and newer financial technology applications, such as mobile banking, peer-to-peer payment services, automated portfolio managers, and trading platforms.84 A specific example of AI use applicable to the financial sector is the prevention and mitigation of distributed denial-of-service attacks.85

As the scope and scale become more encompassing and the uses more generalizable, additional concerns are likely to surface. For example, AI platforms capable of planning and controlling smart cities or even critical infrastructure (such as the electrical grid) could be employed to develop attack plans against this critical infrastructure. The use of AI to target the U.S. election system—be it deepfakes and misinformation or gerrymandering congressional districts—provides another example of a scope and scale that would be expansive and likely a cause of great concern. AI has also been contemplated for use in developing advanced biological and chemical weapons that could be used against critical infrastructure or populations.86

Complexity also leads to concerns. We assessed that as AI systems become more complex and approach AGI, it will become increasingly difficult for humans to understand and evaluate the outcomes of AI-enabled processes. Here, incorporating the Defense Innovation Board’s five principles for AI technologies—that they be responsible, equitable, traceable, reliable, and governable—is essential for reducing AI vulnerabilities.87 Additionally, the use of GANs, protocols for the validation and verification of AI models, and reducing vulnerabilities associated with technologies that contribute to AI development and use will be critical.
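The scale, scope, and complexity framing above can be operationalized as a rough screening score. This is an illustrative sketch only—the report does not define a quantitative formula, and the unweighted mean used here is an assumption; the bands simply reuse the report’s 0-to-5 assessment scale (0 to < 2 low, 2 to < 4 moderate, 4 to 5 high):

```python
# Hypothetical screening score for an AI use case's vulnerability based on
# the report's framing: scale (size of the affected population), scope
# (number of applications the AI system is incorporated into), and
# complexity. Each input is scored 0-5; the combination rule is assumed.

def vulnerability_band(scale: float, scope: float, complexity: float) -> str:
    """Map three 0-5 inputs onto the report's low/moderate/high bands."""
    score = (scale + scope + complexity) / 3  # unweighted mean, still 0-5
    if score < 2:
        return "low"
    if score < 4:
        return "moderate"
    return "high"
```

A confined medical-imaging application (small scale, narrow scope) screens low under this rule, while a sector-wide deployment such as AI across financial services pushes all three inputs, and the band, upward—matching the qualitative argument in the text.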
Finally, to mitigate vulnerabilities associated with AI, integration and testing will be imperative during all phases of development and deployment to ensure that full functionality, safety, and failure modes are well understood and within established limits for any critical infrastructure applications.

Consequence

The consequences associated with AI use cases will also be affected by the scope, scale, and complexity of the applications. Critical infrastructure projects and sectors that function across a wide scope and large scale should be expected to have serious consequences. Critical infrastructure that affects entire sectors or perhaps even crucial parts of a single sector has the potential for greater impacts should it be attacked by AI-enabled threats or should AI design systems fail. Although it was not caused by an AI-enabled system, the 2021 Colonial Pipeline Company ransomware attack provides an example of how the targeting of even a single company responsible for an important segment of a sector can have disproportionately adverse consequences. The attack caused a “shutdown of nearly half of the gasoline and jet fuel supply delivered to the East Coast.”88 It also demonstrated the risks associated with even narrowly targeted critical infrastructure attacks. Some industry experts called the attack a clarion call for energy cybersecurity.89

Complexity in developing and protecting critical infrastructure is also an important factor to consider in examining potential consequences of AI use and misuse. One AI expert offers the following warning about the increasing complexity of AI: “With super intelligent machines the problem with making them more and more intelligent is that we don’t know how to specify their objectives correctly.” The expert went on, “[t]hen we have a machine that’s more intelligent, more powerful than human beings that’s doing what we asked for but not what we really want.”90 The warning is that humans will lose the ability to describe the system requirements of the very AI-enabled systems that are being developed.

Relating this back to consequences, we posit that severe adverse outcomes could occur if AI systems reach erroneous conclusions, regardless of the reason for the mistake. Considering how to evaluate the outcomes and recommendations of AI platforms proves to be a useful exercise. Here we can consider two hypotheses. Hypothesis 1 is that AI works as intended, and hypothesis 2 is that AI does not work as intended. For both hypotheses, one can choose either to act on the AI recommendations or not to act on the recommendations. Acting on the outcome in hypothesis 1 provides a good outcome because the AI works as intended and its recommendations are followed. However, acting on the outcome in hypothesis 2 could result in an adverse, potentially catastrophic outcome. To avoid such negative consequences, humans must retain control and understanding of the AI systems. The greater the consequences, the more important this becomes for ensuring the appropriate uses of AI technology and preventing catastrophic outcomes.

Regarding consequences, we assessed that the ChatGPT-4 difficulties when that technology was first rolled out suggest that moving deliberately to field AI systems is a useful approach, especially if they directly interface with the public. Additionally, ensuring that appropriate guardrails are in place such that the AI reflects appropriate goals and objectives will be an important outcome to achieve. Building public trust in AI systems will be essential for taking advantage of the opportunities and mitigating the challenges that are likely to result from the incorporation of these new capabilities.

Emerging Technology Risk Assessment

AI remains in a relatively early state of TAV (and science and technology maturity in particular) for a majority of the critical infrastructure applications, with contributing technologies maturing along discrete timelines. As a result, we assessed that advances in AI systems will occur both incrementally as these individual technologies mature and, at times, in a discontinuous manner—as occurred with ChatGPT-4. AI represents transformative technology and will likely be incorporated broadly across society—including in critical infrastructure—as occurred with the internet. We assessed that AI will likely be affected by many of the same factors as other emerging technologies, such as ensuring adequate cybersecurity, protecting IP, ensuring key data protections, and protecting proprietary methods and processes. Table 2 provides our AI risk assessment for critical infrastructure.

The threat landscape is likely to be defined by incremental advances resulting from threat actors and critical infrastructure stakeholders engaging in a constant competition for supremacy. We also assess that early rollout of the technology before appropriate validation and verification has been conducted could present a significant target for adversaries and a loss of confidence that could doom additional use cases.

The vulnerability and consequence will both be directly related to the use cases for the AI systems—
TABLE 2
Artificial Intelligence Risk Assessment for Critical Infrastructure

Scenario                      TAV Average   Threat   Vulnerability   Consequence   Average
Short term (up to 3 years)    2.9           3        5               3             3.5
Medium term (3–5 years)       3.4           3.5      4.5             3             3.6
Long term (5–10 years)        3.7           4        4               3.5           3.8

NOTE: The emerging technology risk assessment scale is 0 to < 2 = low impact or not likely feasible, 2 to < 4 = moderate impact or possible, and 4 to 5 = high impact or likely feasible.

specifically, the scope, scale, and complexity of the AI platforms. Narrower use cases that are more confined or limited and have less potential for large-scale, catastrophic outcomes will likely receive less scrutiny. We assessed that, as the technology matures and more use cases are identified for critical infrastructure applications, the technology will proliferate. Early attention to vulnerability and consequences during the ten-year horizon for this study could have long-term benefits in planning for an AGI future.

Despite concerns about AI threat, vulnerability, and consequence, it is not possible to consider a future in which AI is not embedded into critical infrastructure applications. Threat actors will likely incorporate AI to assist with using the technology for illicit purposes, penetrating networks and critical infrastructure control systems, and even directly attacking critical infrastructure. An important countermeasure to adversary use of AI will be having friendly AI capabilities, such as GANs, which can assist in identifying network vulnerabilities and intrusions much more rapidly than humans can today.

As a result, our overall AI risk assessment regarding critical infrastructure identifies continuing increases in the TAV, threat, and consequence. Still, we assessed that the vulnerability is likely to be reduced because key preparedness, response, and mitigation activities are likely to be undertaken, which would keep the RS moderate throughout the ten-year period. However, should this assumption not be valid and the vulnerabilities not be reduced, the overall RS in the long term would go from moderate to high impact.

U.S. Department of Homeland Security Equities

The entire department will be affected by AI, either through its incorporation of AI into the department's missions or as nefarious actors seek to incorporate AI into their planning and operations. Secretary of Homeland Security Alejandro N. Mayorkas created an Artificial Intelligence Task Force to integrate, leverage, apply and partner on AI issues "to advance critical homeland security missions."91

Attempting to elucidate all uses of AI likely to be incorporated across DHS would not be possible. The corollary would be for the department to highlight all areas in which the internet has been incorporated into departmentwide operations. There will likely be important equities across the entire department that provide opportunities and challenges for DHS. The Biden administration's October 2023 AI EO contains numerous requirements for (1) areas requiring DHS action that cut across the U.S.

government, and (2) ensuring that AI is being appropriately considered and managed in key DHS internal operational and management areas.

The use of AI-enabled technologies is not new to DHS, and they have already been incorporated into a wide variety of applications, including facial recognition and predictive analysis using big data applications for assisting law enforcement officers. For example, DHS's Homeland Security Investigations department and the Science and Technology Directorate want to employ AI technology to discover, stop, and prosecute child sexual abuse cases.92

Given the likely future ubiquity of AI, we should expect that DHS stakeholders will also be incorporating AI capabilities into their operations. As an example, New York City released a plan for responsible AI on October 16, 2023, called the "New York City Artificial Intelligence Action Plan." The plan provides a "framework for city agencies to carefully evaluate AI tools and associated risks, help city government employees build AI knowledge and skills, and support the responsible implementation of these technologies to improve quality of life for New Yorkers."93

Conclusion

AI is a transformative technology and will likely be incorporated broadly across society—including in critical infrastructure—as occurred with the internet. As DHS states in its AI strategy document, AI has already been used in several key critical infrastructure sectors, including "manufacturing, financial services, transportation, healthcare, energy, and food and agriculture."94 Figure 2 provides our overall risk assessment, by time frame.

We assessed that protecting AI development and security will likely be affected by many of the same factors as those affecting other emerging technologies, such as ensuring cybersecurity, protecting IP, ensuring key data protections, and protecting proprietary methods and processes. Failing to do so could create potentially catastrophic vulnerabilities in the country's critical infrastructure.

We further assess that the AI field contains numerous technologies that will be incorporated into AI critical infrastructure–related systems as they become available. As a result, AI science and technology maturity will be based on key dependencies in several essential technology areas, including high-performance computing, advanced semiconductor development and manufacturing, robotics, machine learning, and NLP, and on the ability to accumulate and protect key data.

Despite significant excitement about AI, particularly in the past decade or so, the technology remains in a relatively early state of maturity. This has significant implications for setting the conditions for the responsible development and use of AI today and gaining a better

FIGURE 2
Artificial Intelligence Risk Assessment for Critical Infrastructure

[Radar chart plotting the TAV average, threat, vulnerability, and consequence scores on a 0.0–5.0 scale for the short term (up to 3 years), medium term (3–5 years), and long term (5 to < 10 years).]

NOTE: The emerging technology risk assessment scale is 0 to < 2 = low impact or not likely feasible, 2 to < 4 = moderate impact or possible, and 4 to 5 = high impact or likely feasible.

understanding of how to manage development and use of AI in the future. Recent government documents have been important in this regard, but the implementation of these strategies and directives will be key to ensuring that adequate guardrails and protections are in place.

Finally, we expect that critical infrastructure will continue to see significant proliferation of AI capabilities to improve the effectiveness and efficiency of current U.S. infrastructure. This too presents challenges and opportunities that will need to be carefully managed. Doing so will present significant challenges given that the private sector owns approximately 85 percent of critical infrastructure, and as owners seek to balance the need for gaining efficiencies with the need for ensuring security.95 We do recognize that AI is a rapidly moving technology field and is likely to be incorporated in some applications and use cases more rapidly than in others. In this regard, we assessed that AI incorporation will be slower and more methodical in critical infrastructure applications.

Notes

1 NIST, Computer Security Resource Center, "Artificial Intelligence."
2 Herath and Mittal, "Adoption of Artificial Intelligence in Smart Cities."
3 Herath and Mittal, "Adoption of Artificial Intelligence in Smart Cities."
4 Initially, this assertion was made; however, NIST reevaluated the results and concluded that facial recognition algorithms performed similarly across demographic groups (NIST, "NIST Evaluates Face Recognition Software's Accuracy for Flight Boarding").
5 Olavsrud, "9 Famous Analytics and AI Disasters."
6 Norden and Ramachandran, "Artificial Intelligence and Election Security."
7 Bansal, "Characteristics of Big Data."
8 U.S. Department of Defense, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense, p. 2.
9 EO 14110, "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."
10 White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."
11 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
12 Abdulla and Chahal, Voices of Innovation.
13 Highest-h-indexed researchers refers to the most-cited researchers (Abdulla and Chahal, Voices of Innovation).
14 Kuusi and Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence."
15 Kuusi and Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence."
16 Roser, "The Brief History of Artificial Intelligence."
17 This framework is described in Chapter 6 (pp. 143–162) and Appendix B (pp. 301–304) of Gerstein, The Story of Technology.
18 Kuusi and Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence."
19 Gülen, "The Building Blocks of AI."
20 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
21 Andersen, "What Happens When AI Has Read Everything?"
22 Authors Guild, "Survey Reveals 90 Percent of Writers Believe Authors Should Be Compensated for the Use of Their Books in Training Generative AI."
23 Goyal, Varshney, and Rozsa, "What Is Generative AI, What Are Foundation Models, and Why Do They Matter?"
24 Goyal, Varshney, and Rozsa, "What Is Generative AI, What Are Foundation Models, and Why Do They Matter?"
25 Bansal, "Characteristics of Big Data."
26 Wilson, "ChatGPT Gets Its Biggest Update So Far."
27 Wood, "Generative Adversarial Network."
28 Yasar and Lewis, "Generative Adversarial Network (GAN)."
29 Hill, "NATO Tests AI's Ability to Protect Critical Infrastructure Against Cyberattacks."
30 Hill, "NATO Tests AI's Ability to Protect Critical Infrastructure Against Cyberattacks."
31 Whyte, "Beyond 'Bigger, Faster, Better,'" p. 26.
32 Kolosnjaji et al., "Adversarial Malware Binaries."
33 Bahnsen et al., "DeepPhish."
34 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
35 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
36 Grand View Research, Artificial Intelligence Market Size and Share Report, 2030.
37 Grand View Research, Artificial Intelligence Market Size and Share Report, 2030.
38 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
39 Lawler, "How the U.S. Is Trying to Stay Ahead of China in the AI Race."
40 De Vynck, "How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence."
41 Batra et al., Artificial Intelligence Hardware.
42 De Vynck, "How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence."
43 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
44 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
45 Roser, "The Brief History of Artificial Intelligence."
46 "AI is paired with human operators across functions like equipment calibration, robotics controls in assembly plants, predictive maintenance, and more" (Microsoft, "Autonomy for Industrial Control Systems").
47 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
48 EO 13859, "Maintaining American Leadership in Artificial Intelligence."
49 NIST, U.S. Leadership in AI.
50 EO 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government."
51 NIST, "AI RMF Development."
52 NIST, NIST AI RMF Framework. This reference includes specific tables that contain recommendations and outcomes. NIST, "5 AI RMF Core."

53 National Artificial Intelligence Advisory Committee, National Artificial Intelligence Advisory Committee (NAIAC), p. 3.
54 White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."
55 White House, "Blueprint for an AI Bill of Rights."
56 White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."
57 Bloomberg Law, "Uses and Risks of AI in Export Controls Compliance."
58 U.S. Department of the Treasury, "The Committee on Foreign Investment in the United States (CFIUS)."
59 Lane, "Nvidia Could Be Affected by Expanded US Export Controls on AI Chips to China."
60 United Nations, "Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence."
61 Talagala, "The AI Act."
62 Talagala, "The AI Act."
63 Rozen, "AI Leaders Are Calling for More Regulation of the Tech. Here's What That May Mean in the US."
64 Google, Recommendations for Regulating AI, p. 1.
65 Gerstein, Tech Wars.
66 Li, "What Could AI Regulation in the US Look Like?"
67 Smith, "What Large Models Cost You—There Is No Free AI Lunch."
68 Bond, "It Takes a Few Dollars and 8 Minutes to Create a Deepfake. And That's Only the Start."
69 Carnegie Endowment of International Peace, "The Future of AI Regulation."
70 Shea and Burns, "Smart City."
71 For this risk assessment, the analysis of threat is focused on the capabilities and intentions of the actor; the vulnerabilities are weaknesses to be exploited; and the consequences are the outcomes of the event occurrence (Cox, "Some Limitations of 'Risk = Threat × Vulnerability × Consequence' for Risk Analysis of Terrorist Attacks").
72 The 16 critical infrastructure sectors are chemical; commercial facilities; communications; critical manufacturing; dams; defense industrial base; emergency services; energy; financial services; food and agriculture; government facilities; health care and public health; information technology; nuclear reactors, materials, and waste; transportation systems; and water and wastewater (Cybersecurity and Infrastructure Security Agency, "Critical Infrastructure Sectors").
73 "Several major characteristics are used to determine a city's smartness. These characteristics include a technology-based infrastructure; environmental initiatives; a high-functioning public transportation system; and a confident sense of urban planning and humans to live and work within the city and utilize its resources" (Shea and Burns, "Smart City").
74 Sakhnini et al., "AI and Security of Critical Infrastructure."
75 Abdallat, "Can We Trust Critical Infrastructure to Artificial Intelligence?"
76 Microsoft, Microsoft Digital Defense Report.
77 Kolosnjaji et al., "Adversarial Malware Binaries."
78 Abdallat, "Can We Trust Critical Infrastructure to Artificial Intelligence?"
79 Leenen and Meyer, "Artificial Intelligence and Big Data Analytics in Support of Cyber Defense."
80 Kolosnjaji et al., "Adversarial Malware Binaries."
81 Bahnsen et al., "DeepPhish."
82 Kumar et al., "Artificial Intelligence in Disease Diagnosis."
83 Spray, "8 Biggest Challenges with Self-Driving Cars."
84 Columbia University, Fu Foundation School of Engineering and Applied Science, "What Is Financial Technology (FinTech)?"
85 Frackiewicz, "The Role of Artificial Intelligence in DDoS Mitigation."
86 Urbina et al., "Dual Use of Artificial-Intelligence-Powered Drug Discovery."
87 U.S. Department of Defense, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense.
88 Vasquez, "How the Colonial Pipeline Hack Galvanized a Nation at Risk."
89 Vasquez, "How the Colonial Pipeline Hack Galvanized a Nation at Risk."
90 Lebans, "The Threat from AI Is Not That It Will Revolt, It's That It'll Do Exactly as It's Told."
91 DHS, "Secretary Mayorkas Announces New Measures to Tackle A.I., PRC Challenges at First State of Homeland Security Address."
92 Kelley, "DHS Looks to AI to Help Solve Child Abuse Cases."
93 City of New York, "Mayor Adams Releases First-of-Its-Kind Plan for Responsible Artificial Intelligence Use in NYC Government."
94 DHS, U.S. Department of Homeland Security Artificial Intelligence Strategy, p. ii.
95 Federal Emergency Management Agency, Strategic Foresight Initiative, "Critical Infrastructure."

References

Abdallat, A. J., "Can We Trust Critical Infrastructure to Artificial Intelligence?" Forbes, July 1, 2022.

Abdulla, Sara, and Husanjot Chahal, Voices of Innovation: An Analysis of Influential AI Researchers in the United States, Center for Security and Emerging Technology, July 2023.

Andersen, Ross, "What Happens When AI Has Read Everything?" The Atlantic, January 18, 2023.

Authors Guild, "Survey Reveals 90 Percent of Writers Believe Authors Should Be Compensated for the Use of Their Books in Training Generative AI," press release, May 15, 2023.

Bahnsen, Alejandro Correa, Ivan Torroledo, Luis David Camacho, and Sergio Villegas, "DeepPhish: Simulating Malicious AI," in 2018 APWG Symposium on Electronic Crime Research (eCrime), 2018.

Bansal, Sumeet, "Characteristics of Big Data," blog post, AnalytixLabs, June 17, 2021.

Batra, Gaurav, Zach Jacobson, Siddarth Madhav, Andrea Queirolo, and Nick Santhanam, Artificial Intelligence Hardware: New Opportunities for Semiconductor Companies, McKinsey and Company, December 2018.

Bloomberg Law, "Uses and Risks of AI in Export Controls Compliance," Practical Guidance, undated.

Bond, Shannon, "It Takes a Few Dollars and 8 Minutes to Create a Deepfake. And That's Only the Start," NPR, March 23, 2023.

Carnegie Endowment of International Peace, "The Future of AI Regulation: A Conversation with Arati Prabhakar," video, November 14, 2023. As of January 30, 2024:
https://www.youtube.com/watch?v=3uovOOUL4zg

City of New York, "Mayor Adams Releases First-of-Its-Kind Plan for Responsible Artificial Intelligence Use in NYC Government," press release, October 16, 2023.

Columbia University, Fu Foundation School of Engineering and Applied Science, "What Is Financial Technology (FinTech)? A Beginner's Guide," Columbia Engineering Boot Camps blog, undated.

Cox, Louis Anthony, Jr., "Some Limitations of 'Risk = Threat × Vulnerability × Consequence' for Risk Analysis of Terrorist Attacks," Risk Analysis, Vol. 28, No. 6, December 2008.

Cybersecurity and Infrastructure Security Agency, "Critical Infrastructure Sectors," webpage, undated. As of February 15, 2024:
https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors

De Vynck, Gerrit, "How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence," Washington Post, September 30, 2023.

DHS—See U.S. Department of Homeland Security.

Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," Executive Office of the President, February 11, 2019.

Executive Order 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," Executive Office of the President, December 3, 2020.

Executive Order 14110, "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," Executive Office of the President, October 30, 2023.

Federal Emergency Management Agency, Strategic Foresight Initiative, "Critical Infrastructure: Long-Term Trends and Drivers and Their Implications for Emergency Management," June 2011.

Frackiewicz, Marcin, "The Role of Artificial Intelligence in DDoS Mitigation," TechnoSpace2, August 3, 2023.

Gerstein, Daniel M., The Story of Technology: How We Got Here and What the Future Holds, Prometheus Books, 2019.

Gerstein, Daniel M., Tech Wars: Transforming U.S. Technology Development, Praeger, September 2022.

Google, Recommendations for Regulating AI, undated.

Goyal, Manish, Shobhit Varshney, and Eniko Rozsa, "What Is Generative AI, What Are Foundation Models, and Why Do They Matter?" blog post, IBM, March 8, 2023.

Grand View Research, Artificial Intelligence Market Size and Share Report, 2030, June 2023.

Gülen, Kerem, "The Building Blocks of AI," webpage, Dataconomy, April 3, 2023. As of January 30, 2024:
https://dataconomy.com/2023/04/03/basic-components-of-artificial-intelligence/

Haan, Katherine, and Rob Watts, "24 Top AI Statistics and Trends in 2024," Forbes Advisor, April 25, 2023.

Henshall, Will, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down," Time, November 6, 2023.

Herath, H. M. K. K. M. B., and Mamta Mittal, "Adoption of Artificial Intelligence in Smart Cities: A Comprehensive Review," International Journal of Information Management Data Insights, Vol. 2, No. 1, April 2022.

Hill, Michael, "NATO Tests AI's Ability to Protect Critical Infrastructure Against Cyberattacks," CSO, January 5, 2023.

Kelley, Alexandra, "DHS Looks to AI to Help Solve Child Abuse Cases," NextGov/FCW, October 2, 2023.

Kolosnjaji, Bojan, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli, "Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables," in 2018 26th European Signal Processing Conference, September 2018.

Kumar, Yogesh, Apeksha Koul, Ruchi Singla, and Muhammad Fazal Ijaz, "Artificial Intelligence in Disease Diagnosis: A Systematic Literature Review, Synthesizing Framework and Future Research Agenda," Journal of Ambient Intelligence and Humanized Computing, Vol. 14, No. 7, 2023.

Kuusi, Osmo, and Sirkka Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence—Reviewing the Results of the International Work/Technology 2050 Study," World Futures Review, Vol. 14, No. 1, 2022.

Lane, Terry, "Nvidia Could Be Affected by Expanded US Export Controls on AI Chips to China," Investopedia, October 16, 2023.

Lawler, Dave, "How the U.S. Is Trying to Stay Ahead of China in the AI Race," Axios, June 29, 2023.

Lebans, Jim, "The Threat from AI Is Not That It Will Revolt, It's That It'll Do Exactly as It's Told," CBC Radio, April 24, 2020.

Leenen, Louise, and Thomas Meyer, "Artificial Intelligence and Big Data Analytics in Support of Cyber Defense," in Information Resources Management Association, Research Anthology on Artificial Intelligence Applications in Security, IGI Global, November 2020.

Li, Victor, "What Could AI Regulation in the US Look Like?" Legal Rebels podcast, American Bar Association, June 14, 2023.

Microsoft, "Autonomy for Industrial Control Systems," white paper, April 2020.

Microsoft, "Applications for Artificial Intelligence in Department of Defense Cyber Missions," On the Issues blog, May 3, 2022.

National Artificial Intelligence Advisory Committee, National Artificial Intelligence Advisory Committee (NAIAC): Year 1, May 2023.

National Institute of Standards and Technology, NIST AI RMF Playbook, undated.

National Institute of Standards and Technology, "5 AI RMF Core," webpage, undated. As of January 30, 2024:
https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Core_And_Profiles/5-sec-core#tab:govlongtblr

National Institute of Standards and Technology, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, August 9, 2019.

National Institute of Standards and Technology, "NIST Evaluates Face Recognition Software's Accuracy for Flight Boarding: Agency's New Test Concerns Checking in Passengers and Documenting Their Exit from a Country," news release, July 13, 2021.

National Institute of Standards and Technology, Computer Security Resource Center, "Artificial Intelligence," webpage, updated July 20, 2023. As of January 30, 2024:
https://csrc.nist.gov/Topics/Technologies/artificial-intelligence

National Institute of Standards and Technology, "AI RMF Development," webpage, updated January 2, 2024. As of January 30, 2024:
https://www.nist.gov/itl/ai-risk-management-framework/ai-rmf-development

NIST—See National Institute of Standards and Technology.

Norden, Lawrence, and Gowri Ramachandran, "Artificial Intelligence and Election Security," Brennan Center for Justice, October 5, 2023.

Olavsrud, Thor, "9 Famous Analytics and AI Disasters," CIO, September 22, 2023.

Roser, Max, "The Brief History of Artificial Intelligence: The World Has Changed Fast—What Might Be Next?" Our World In Data, December 6, 2022.

Rozen, Courtney, "AI Leaders Are Calling for More Regulation of the Tech. Here's What That May Mean in the US," Washington Post, July 27, 2023.

Sakhnini, Jacob, Hadis Karimipour, Ali Dehghantanha, and Reza M. Parizi, "AI and Security of Critical Infrastructure," in Kim-Kwang Raymond Choo and Ali Dehghantanha, eds., Handbook of Big Data Privacy, Springer, 2020.

Shea, Sharon, and Ed Burns, "Smart City," TechTarget, webpage, July 2020. As of February 5, 2024:
https://www.techtarget.com/iotagenda/definition/smart-city

Smith, Craig S., "What Large Models Cost You—There Is No Free AI Lunch," Forbes, September 8, 2023.

Spray, Aaron, "8 Biggest Challenges with Self-Driving Cars," HotCars, April 19, 2021.

Talagala, Nisha, "The AI Act: Three Things to Know About AI Regulation Worldwide," Forbes, June 29, 2022.

United Nations, "Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence," press release, July 18, 2023.

Urbina, Fabio, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, "Dual Use of Artificial-Intelligence-Powered Drug Discovery," Nature Machine Intelligence, Vol. 4, 2022.

U.S. Department of Defense, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense, October 31, 2019.

U.S. Department of Homeland Security, U.S. Department of Homeland Security Artificial Intelligence Strategy, December 3, 2020.

U.S. Department of Homeland Security, "Secretary Mayorkas Announces New Measures to Tackle A.I., PRC Challenges at First State of Homeland Security Address," press release, April 21, 2023.

U.S. Department of the Treasury, "The Committee on Foreign Investment in the United States (CFIUS)," webpage, undated. As of February 9, 2024:
https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius

Vasquez, Christian, "How the Colonial Pipeline Hack Galvanized a Nation at Risk," EnergyWire, May 9, 2022.

White House, "Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," webpage, undated. As of January 30, 2024:
https://www.whitehouse.gov/ostp/ai-bill-of-rights/

White House, "Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans' Rights and Safety," fact sheet, May 4, 2023.

White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," fact sheet, October 30, 2023.

Whyte, Christopher, "Beyond 'Bigger, Faster, Better:' Assessing Thinking About Artificial Intelligence and Cyber Conflict," Cyber Defense Review, Vol. 8, No. 3, 2023.

Wilson, Mark, "ChatGPT Gets Its Biggest Update So Far—Here Are 4 Upgrades That Are Coming Soon," TechRadar, November 7, 2023.

Wood, Thomas, "Generative Adversarial Network," DeepAI, webpage, undated. As of February 16, 2024:
https://deepai.org/machine-learning-glossary-and-terms/generative-adversarial-network

Yasar, Kinza, and Sarah Lewis, "Generative Adversarial Network (GAN)," TechTarget, webpage, undated. As of February 5, 2024:
https://www.techtarget.com/searchenterpriseai/definition/generative-adversarial-network-GAN

ABBREVIATIONS

AGI   artificial general intelligence
AI    artificial intelligence
ANI   artificial narrow intelligence
DHS   U.S. Department of Homeland Security
EO    executive order
EU    European Union
GAN   generative adversarial network
IoT   Internet of Things
IP    intellectual property
LLM   large-language model
NIST  National Institute of Standards and Technology
NLP   natural language processing
R&D   research and development
RMF   risk management framework
RS    risks and scenarios
TAV   technology availability
TRL   technology readiness level
// 21

About the Authors

Daniel M. Gerstein works at the RAND Corporation. From 2011 to 2014, he served as the Acting Under Secretary and Deputy Under Secretary of the DHS Science and Technology Directorate. He has a doctorate in biodefense.

Erin N. Leidy is a technical analyst at RAND. Some of her areas of interest are in emerging technology, science policy, cybersecurity, and technology implementation. She has an M.S. in technology policy.
About This Report

This report is part of a series of analyses on the effects of emerging technologies on U.S. Department of Homeland Security (DHS) missions and capabilities. As part of this series, the Homeland Security Operational Analysis Center (HSOAC) was charged with developing a technology and risk assessment methodology for evaluating emerging technologies and understanding their implications within a homeland security context. The methodology and analyses provide a basis for DHS to better understand the emerging technologies and the risks they present.

This research was sponsored by the DHS Science and Technology Directorate and conducted by the Management, Technology, and Capabilities Program of RAND Homeland Security Research Division (HSRD), which operates HSOAC.

The Homeland Security Act of 2002 (Public Law 107-296, § 305, as codified at 6 U.S.C. § 185) authorizes the Secretary of Homeland Security, acting through the Under Secretary for Science and Technology, to establish one or more federally funded research and development centers (FFRDCs) to provide independent analysis of homeland security issues. RAND operates HSOAC as an FFRDC for DHS under contract 70RSAT22D00000001.

The HSOAC FFRDC provides the government with independent and objective analyses and advice in core areas important to the department in support of policy development, decisionmaking, alternative approaches, and new ideas on issues of significance. HSOAC also works with and supports other federal, state, local, tribal, and public- and private-sector organizations that make up the homeland security enterprise. HSOAC's research is undertaken by mutual consent with DHS and is organized as a set of discrete tasks. This report presents the results of research and analysis conducted under 70RSAT22FR0000125, Emerging Technology and Risk Analysis. The results presented in this report do not necessarily reflect official DHS opinion or policy. For more information on HSRD, see www.rand.org/hsrd.

For more information on this publication, visit www.rand.org/t/RR-A2873-1. This research was published in 2024. Approved for public release; distribution is unlimited.

RR-A2873-1
