Emerging Technology and Risk Analysis
Artificial Intelligence and Critical Infrastructure
The National Institute of Standards and Technology’s (NIST’s) Computer Security Resource Center defines artificial intelligence (AI) as “[a] branch of computer science devoted to developing data processing systems that performs functions normally associated with human intelligence, such as reasoning, learning, and self-improvement.”1 However, no single authoritative AI taxonomy exists because the key technologies, which come from several fields, have each developed specific taxonomies for AI technologies. For example, in “Adoption of Artificial Intelligence in Smart Cities: A Comprehensive Review,” the authors identify six key AI technologies used in smart cities: machine learning (including deep learning and predictive analysis), natural language processing (NLP) (including translation, information extraction and classification, and clustering), computer speech (including speech to text and text to speech), computer vision (including image recognition and machine vision), expert systems, and robotics.2 This list is similar to those developed for other applications in which AI is being incorporated but is tailored for smart cities and applies to AI-enabled critical infrastructure.

At a more granular level, tools supporting AI technology development include large-language models (LLMs), neural networks, supervised and unsupervised training, and computational linguistics. Other enabling and converging technologies, such as cyber, big data, and the Internet of Things (IoT), have also become inextricably linked to AI research and development (R&D). These lists and the components that directly relate to and support AI will likely expand as use cases continue to be identified, AI technologies continue to evolve, and operational AI systems are fielded.

In this report, we draw from the literature on smart cities. As such, we consider early AI-embedded capabilities in a city’s critical infrastructure to be the initial stages of developing a smart city. Inherent in this assumption is that these infrastructure components are vital to the safe and effective functioning of society, the safety of the population, and the functioning of the economy. In making this assumption, we focus on critical infrastructure because purpose-built smart cities are unlikely to be widespread within the time horizon of this report (i.e., ten years). In preparing this report, we recognize that there are many uses of AI and
sible, equitable, traceable, reliable, and governable.8 We assessed that these principles have applicability in many domains and use cases of AI, including for use in AI systems supporting critical infrastructure. More recently, President Joseph R. Biden issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” on October 30, 2023.9 The executive order (EO) establishes “new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”10

In our analysis, we consider four attributes in assessing the technology: technology availability (TAV) and risks and scenarios (RS), which we have divided into threat, vulnerability, and consequence. The RS considered in this analysis pertain to AI use affecting critical infrastructure. The use cases could be for monitoring and controlling either critical infrastructure or adversaries employing AI for use in illicit activities and nefarious acts directed at critical infrastructure. The RS have been provided by the study sponsors from the U.S. Department of Homeland Security (DHS) Science and Technology Directorate and the Office of Policy. We compared these four attributes across three periods for critical infrastructure applications (see Figure 1).

Overall, we assessed that AI technology will continue to mature during the ten-year horizon of this analysis. The size of the global AI market is expected to grow from $86.9 billion (in 2022) to $407 billion by 2027, with further growth projected by the end of the study period considered.11

We also assess that AI opportunities and challenges will be cumulative. Increasingly, AI will be incorporated into more applications that affect economic prosperity, national security, and societal well-being. In terms of homeland security, we assessed that people’s lives, cities, and critical infrastructure will increasingly incorporate AI. The number of threat actors with access to these capabilities should also be expected to grow as the technology proliferates, more use cases are identified, and the full power of AI technology becomes evident. As AI technologies continue to be developed and mature, the power of the technology will expand, sometimes in predictable trajectories and at other times in an unpredictable and often discontinuous (or disruptive) manner.

Technology Description and Scenarios for Consideration

Much of the societal knowledge about AI has come in the form of sound bites. Futurists have highlighted the opportunities and warned of the threats of AI. Movies have sensationalized the dangers of out-of-control AI-enabled technology. Meanwhile, AI has continued to proliferate
FIGURE 1
Artificial Intelligence Risk Assessment for Critical Infrastructure
[Chart comparing Threat, Vulnerability, Consequence, and TAV Average ratings on a 0.0-to-5.0 scale across three periods: short term (up to 3 years), medium term (3–5 years), and long term (5 to < 10 years).]
NOTE: The emerging technology risk assessment scale is 0 to < 2 = low impact or not likely feasible, 2 to < 4 = moderate impact or possible, and 4 to 5 = high impact or likely feasible.
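The figure's scale and averaging step can be expressed compactly. The sketch below (Python, with hypothetical ratings rather than the report's assessed values) averages the four axis ratings shown in Figure 1 into an overall score and maps it to the bands defined in the note:

```python
def risk_band(score: float) -> str:
    """Map a 0-5 emerging technology risk score to the bands in the note."""
    if not 0.0 <= score <= 5.0:
        raise ValueError("score must be between 0 and 5")
    if score < 2.0:
        return "low impact or not likely feasible"
    if score < 4.0:
        return "moderate impact or possible"
    return "high impact or likely feasible"

# Hypothetical ratings for one period (illustrative only, not the report's data)
ratings = {"threat": 3.0, "vulnerability": 3.5, "consequence": 2.5, "tav_average": 3.0}
overall = sum(ratings.values()) / len(ratings)

print(overall, "->", risk_band(overall))  # 3.0 -> moderate impact or possible
```

The boundary handling (scores of exactly 2 and 4 fall into the higher band) follows the "0 to < 2" and "2 to < 4" ranges given in the note.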
throughout various fields and purposes, mostly in the past 20 years.12 This includes uses in critical infrastructure applications, such as fraud detection in the banking industry and monitoring and control of industrial control systems.13 We examined AI applications as they pertain to the DHS 16 critical infrastructure sectors. However, we do not seek to provide a detailed history, suggest a single authoritative definition, or capture the many AI use cases likely to be discovered for use in critical infrastructure–related applications.

This report was also written during the early stages of AI development, as humankind continues to learn more about the uses and misuses of AI. The study considers only a ten-year time horizon, long before “AI would integrate new technologies, into a ‘singularity,’” which is a “hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.”14

To place AI into its current state of maturity, it is useful to delineate three AI categories: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). The global Millennium Project describes these categories: “Artificial narrow intelligence is often better and faster than humans in, for example, driving trucks, playing games, and medical diagnostics. Artificial general intelligence is [the] hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. Artificial super intelligence sets its own goals independent of human awareness and understanding.”15 Today, the technology has achieved only ANI, though some argue that several applications have demonstrated early AGI.16

Methodology

We developed a framework for assessing the risks of emerging technologies. The assessment included an evaluation of the TAV and potential RS for which a technology could be used. The TAV evaluation covered five areas: science and technology maturity; use case, demand, and market forces; resources committed; policy, legal, ethical, and regulatory impediments; and technology accessibility.17 The RS evaluation covered threat, vulnerability, and consequence. The ratings for the TAV and RS categories ranged from 1 to 5, where 1 corresponds to many challenges and 5 to very few, if any, challenges. The five TAV areas were averaged and included in the emerging technology risk assessment. To allow the comparisons of emerging technologies in our assessment of the consequences, we rated impacts according to the likely affected level (national, regional, or local), potential mortality and morbidity, and likely economic and societal disruption. By averaging the threat, vulnerability, consequence, and TAV average calculated previously, an emerging technology risk for a particular scenario could be assessed as low (0 to < 2), moderate (2 to < 4), or high (4 to 5).

These assessments were repeated for three periods: short term (up to three years), medium term (three to five years), and long term (five to ten years). This allowed us to assess individually and collectively how the TAV and RS would be affected over time. In the assessment, we considered how the threat, vulnerability, and consequence evolved over time, as well as whether any preparedness, mitigation, and response activities had been undertaken that could reduce the risk.

Technology Availability Assessment

The TAV assessment was conducted without regard to the specific RS. Those factors were addressed in the subsequent RS assessment. We separated these assessments in this way to isolate the effects of the changes in technology over the ten-year study time frame.

There are several interesting points of departure for this TAV assessment. First, the many technologies that comprise AI mature along different time scales, making it challenging to identify an exact technology readiness level (TRL) for the field of AI. As a result, a more productive approach to examining AI availability is to associate the TRLs of the individual AI technologies with the use cases that are likely to be developed. Second, given the
potential of AI growth to become “uncontrollable and irreversible, resulting in unforeseeable changes to human civilization,”18 the development of the technology and associated guardrails should be considered simultaneously to ensure that this runaway condition does not occur. Third, developers, users, and even society should expect that AI applications will have vulnerabilities and challenges that are inherent in the associated technologies used in AI applications—we assessed that this effect will be cumulative. As an example, data science, the cyber domain, and the IoT have vulnerabilities that will combine and converge within AI-enabled platforms. These effects will need to be mitigated to ensure the safe use of AI.

AI is already being used in applications directly related to critical infrastructure. Examples include health care (e.g., diagnosis of patients and predicting patient outcomes), finance (e.g., detecting fraud and improving customer service), transportation (e.g., developing self-driving vehicles and predictive maintenance), and manufacturing (e.g., optimizing production and improving quality).19 Despite these uses, we assessed that these are narrow AI uses that reflect areas in which technology maturity has been achieved. However, these remain early use cases, which are likely to become more sophisticated as AI technology areas and the individual component technologies continue to mature.

Science and Technology Maturity

The AI field contains numerous technologies that will be incorporated into AI systems as they become available. We could also decompose these technology fields into lower levels. As a result, the progress in AI is based on key dependencies in several essential technology areas, including high-performance computing, advanced semiconductor development and manufacturing, robotics, machine learning, NLP, and data science (including accumulating and protecting key data). Specific applications also mature according to their own timelines. One analysis from ContextualAI indicates that handwriting analysis, speech recognition, image recognition, reading comprehension, and language understanding have all surpassed human performance, while other tasks—such as common-sense completion, grade school math, and code generation—have achieved approximately 85 to 98 percent of human performance.20 One could argue that AI in those narrow applications (in which it has already exceeded human performance) has achieved AGI-like capabilities.

All of these fields, regardless of where they fit in the overall AI development ecosystem, will mature according to their own timelines, and each of the different use cases incorporating these technologies will also mature along their own timelines. As such, seeking to identify a single AI TRL would not be fruitful. However, understanding individual TRLs of the key supporting technologies within a specific use case, such as critical infrastructure, will be essential for predicting how AI will mature.

Understanding AI’s relationships to these other key technologies will be essential and likely affect the long-term prospects for AI systems. For example, debates are occurring about intellectual property (IP) protections relevant to AI systems. AI systems require vast amounts of data. Some experts have spoken about ingesting the cumulative totals of all of the books ever written within an AI platform when conducting supervised learning.21 However, the creators of this content—including authors and artists—have raised concerns about not being compensated for their role in producing the inputs for the AI platforms.22 This creates a challenging dilemma in that AI developers might decide to forgo key potential inputs that have IP rights protections. However, doing so likely excludes important work that could be helpful in training AI models. How this issue is addressed has implications for the AI systems that are developed. This also has implications for the policy, legal, ethical, and regulatory impediments to developing both AI foundation and transformer models.

AI neural networks use these foundation and transformer models, which are pretrained for NLP and computer vision using large volumes of data and computing power.23 Foundation models provide a starting point for developing more-advanced and -complex models, while transformer models use a neural network structure and unsupervised learning.24 Both models require vast data inputs that consider the V characteristics of volume, velocity, value, variety, veracity, validity, volatility, and visualization; failure to do so increases the chances of miscues and misuses of AI.25

The ChatGPT-4 rollout in March 2023 provides an interesting case study for how these AI technologies—in this case, LLMs—are likely to mature and be exposed to society. Previous versions of the software had been in development for years, largely for R&D purposes. In ChatGPT-4, the software captivated the interest of broader society. Even casual users were intrigued by the new AI platform. However, the initial rollout highlighted that there were accuracy and consistency problems within the platform, forcing the developers to quickly
employ AI. ChatGPT-4 had more than 1 million users in the first five days that it was deployed.34

Industry has also become interested in continuing to proliferate AI capabilities, driven largely by the need for efficiencies. We assessed that these market forces are likely to continue to drive this increasing AI use. For example, one survey reports that “97% of business owners believe that ChatGPT will benefit their businesses.” This same survey said that “64% of business owners believe AI has the potential to improve customer relationships” and that “over 60% of business owners believe AI will increase productivity.”35

An increasingly large market for AI technologies exists. One report indicates that three factors are driving the increase in size of the total addressable market: the increasing availability of data, the falling cost of computing power, and a growing talent pool. Key industries cited in the report include health care, manufacturing, retail, and finance.36

The same report from Grand View Research identifies the 2022 market as being worth $136.55 billion and projected an expansion of the compound annual growth rate of 37.3 percent from 2023 to 2030, resulting in a revenue forecast of $1.811 trillion in 2030. The report also expanded the number of sectors that will specifically benefit from AI to include health care, finance, law, retail, advertising and media, automotive and transportation, agriculture, and manufacturing.37 In comparing corporate adoption of AI in China and the United States, a Forbes survey found that 58 percent of companies in China were deploying AI, while only 25 percent of U.S.-based companies were using AI.38 Still, the United States appears to have an advantage in technology development, especially in basic and applied research and early development.39

Of course, with any new technology with such disruptive potential, there are likely going to be concerns. Some people have expressed unease about the potential loss of jobs to AI technologies. Others have expressed concerns about becoming overly dependent on AI. Still others have expressed concerns about not adhering to key AI principles and the potential for loss of human agency.

Resources Committed

AI presents an interesting dichotomy between the cost to develop the technology and the cost to use the technology. This will become more pronounced as the science and technology maturity continues and more AI systems and use cases are identified and become available.

Development of AI, especially in these early stages, requires access to key technologies that are currently limited by resource availability. As an October 2023 Washington Post account offered, “To build AI at any kind of meaningful scale, any developer is going to have core dependencies on resources that are largely concentrated in only a few firms.”40

Four areas in particular demonstrate the costs and resources required for developing AI technologies: specialized equipment (e.g., semiconductors and high-performance computers), data, infrastructure, and human capital. Semiconductors are a key component of AI systems, and the costliest and hardest-to-develop semiconductors continue to be a resource constraint.41 High-performance computing is also in short supply but continues to be essential to handling the extremely large volumes of data that will be necessary to gain confidence in the AI systems being developed.

Data are an essential component of AI, and confidence in the AI models grows as the amount of data used in developing and training the models increases. The Washington Post article referenced earlier adds, “Training ‘generative’ AI systems like chatbots and image generators is hugely expensive. The technology behind them has to crunch through trillions of words and images before it can produce humanlike text and photorealistic pictures from simple prompts,” requiring “thousands of specialized computer chips sitting in huge data centers that use enormous amounts of energy.”42

On the confluence of the high-performance computing increases and data capacity, we have seen that “for a model trained on one trillion data points, a model trained in 2021 required ~16,500x less compute than a model trained in 2012.”43 Two LLMs, Alphabet’s Pathways Language Model 2 and Meta’s Large Language Model Meta AI, both developed in 2023, used 3 trillion and 1 trillion data points, respectively.44 Infrastructure, including laboratories for AI R&D, fabrication plants for manufacturing semiconductors, and the skilled workforce necessary to develop the AI systems, represents key technology areas that will be essential to AI development. The entry costs for many of these essential feeder technologies are often quite high, which limits access to technology development capabilities.

Human capital provides a key barrier to entry and remains in short supply, despite recent growth in the AI workforce. AI researchers tend to be a homogeneous cohort; “[s]ixty-two percent were concentrated in 10 elite universities and top companies,” and 70 percent of U.S.-based AI researchers “were foreign-born or foreign-
all people from these threats—and uses technologies in ways that reinforce our highest values”: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.55 As with other initiatives, these principles are guidance and do not reflect mandates for AI development or use.

As highlighted earlier, President Biden issued the EO, which establishes “new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”56 This EO potentially represents an important inflection point in AI development, management, and use; however, as with all such policy documents, implementation of the EO and follow-through will be key.

Export controls serve to restrict the proliferation of key AI technologies. One source comments, “The technology underpinning AI is heavily regulated when it is exported from the US. The data used to train AI also could be considered ‘technical data’ under the Export Administration Regulations (EAR) and International Trade in Arms Regulations (ITAR), especially if the data [are] proprietary (some data sources, including publicly available information, educational information, and fundamental research, generally are not subject to export restrictions). Running afoul of these regulations can result in criminal and civil sanctions, including jail time, fines, and being prohibited from exporting.”57 The Committee on Foreign Investment in the United States (CFIUS) also “establishes mandatory declaration requirements for foreign investment transactions involving critical technologies.” These regulations limit sharing AI technologies.58 A recent expansion of the restrictions could halt the flow of advanced Nvidia semiconductors to China—Nvidia is concerned because the chips were designed to be export-compliant and one-fifth of Nvidia’s revenue comes from Chinese sales.59

Regarding international AI concerns, the United Nations Security Council conducted its first debate on AI on July 23, 2023. In his remarks, Secretary-General António Guterres focused on the “opportunity to consider the impact of artificial intelligence on peace and security—where it is already raising political, legal, ethical and humanitarian concerns.”60 He has already spoken forcefully about lethal autonomous weapon systems on the battlefield.

The European Union (EU) should likely be considered the most stringent governing body in developing policy, legal, ethical, and regulatory measures regarding AI. For example, the 2018 EU General Data Protection Regulation (GDPR) relates to data that would be useful in developing AI foundation and transformer models. The EU is also proposing a European law on AI—the EU AI Act—which develops three risk categories: “(a) systems that create an unacceptable risk, which will be banned, (b) systems that are high risk—that will be regulated, and (c) other applications, which are left unregulated.”61 Debates continue regarding the “exact criterion and specifics of the law.”62 Despite the continued debate, some are already indicating that this act could serve as a model for others to follow.

Industry has also talked about the need for a broad understanding that responsible AI development will likely need assistance by way of laws, policies, and regulations. At President Biden’s request, seven U.S.-based AI companies (including Amazon.com, Inc., Alphabet, Inc., and Meta Platforms, Inc.) “promised to allow outside teams to probe for security defects and risks to consumer privacy” as an initial measure.63 Google’s Recommendations for Regulating AI states directly, “[W]hile self-regulation is vital, it is not enough. Balanced, fact-based guidance from governments, academia and civil society is also needed to establish boundaries, including in the form of regulation.”64 It is noteworthy that other industries have already established policies, regulations, standards, norms, and ethics, either nationally or internationally.65

Congress has also begun to deliberate on AI issues but appears to be in the early stages, with concrete measures likely to come well in the future. If the cyber legislation and social media deliberations are any indication, progress will take time because balancing protections, risk, stakeholder prerogatives, and innovation will likely prove challenging. Finally, the American Bar Association’s Legal Rebels podcast summarizes the overall progress toward specific AI regulation or legislation as follows: “At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation. However, there are existing laws and regulations that touch upon certain aspects of AI, such as privacy, security, and anti-discrimination.”66

Despite all of the activities surrounding AI, there are relatively few impediments that would hinder the development or use of AI systems other than those caused by export control provisions, resource shortfalls, or knowledge.
TABLE 1
Artificial Intelligence Technology Availability for Critical Infrastructure Applications
[Table columns: Scenario; Science and Technology Maturity; Use Case, Demand, and Market Forces; Resources Committed; Policy, Legal, Ethical, and Regulatory Impediments; Technology Accessibility; TAV Average.]
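Per the methodology described earlier, the table's final column is the mean of the five TAV category ratings for each period. A minimal sketch in Python, using hypothetical ratings on the report's 1-to-5 scale (these are illustrative values, not the report's assessed ratings):

```python
# Hypothetical 1-5 ratings for one scenario's five TAV categories
# across the three assessment periods (illustrative values only).
tav_ratings = {
    "short term":  {"maturity": 3, "use_case_demand": 3, "resources": 2,
                    "policy_legal_ethical_regulatory": 4, "accessibility": 2},
    "medium term": {"maturity": 4, "use_case_demand": 4, "resources": 3,
                    "policy_legal_ethical_regulatory": 4, "accessibility": 3},
    "long term":   {"maturity": 4, "use_case_demand": 5, "resources": 3,
                    "policy_legal_ethical_regulatory": 4, "accessibility": 3},
}

# TAV average per period: the mean of the five category ratings.
tav_average = {period: sum(cats.values()) / len(cats)
               for period, cats in tav_ratings.items()}

print(tav_average)  # {'short term': 2.8, 'medium term': 3.6, 'long term': 3.8}
```

This per-period TAV average is then combined with the threat, vulnerability, and consequence ratings in the overall emerging technology risk calculation.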
lowered and that the potential for employing AI capabilities for illicit purposes will grow.

In our evaluation of the individual TAV categories, we assessed that science and technology maturity will continue to increase. To date, many of the AI applications available to users have been more like demonstrations or prototypes. Overall, we assessed that AI science and technology maturity will continue to increase as the R&D in individual AI technologies matures. However, we also assess that it is extremely unlikely that widespread AGI will be available by the end of the ten-year period under consideration; some narrow AI applications might suggest that AGI has been achieved, but these cases will be limited and likely result in a consensus that general intelligence has not yet been achieved.

As AI continues to be employed, additional use cases, demand, and market forces will be identified, and the technology will become more readily available during the ten-year study time frame. For example, AI applications are likely to be identified across even more critical infrastructure sectors and applications. To date, many of the envisioned illicit use cases have focused on using AI to gather information (e.g., asking an LLM to provide a formula for a toxin). However, we assessed that there will be a shift toward using AI to compile information about taking actions, such as penetrating critical infrastructure. In terms of both science and technology maturity and use cases, demand, and market forces, we assessed that ChatGPT-4’s experience will recur for other breakthroughs in the technology. To reiterate, the systems will be made available in early stages of maturity, and users are likely to be used to test the applications. Once errors are discovered, developers will undoubtedly upgrade the AI platforms. However, this approach would not be acceptable in many critical infrastructure applications, in which high reliability and risk aversion are important considerations in the development and fielding of new capabilities. For such applications, integration and testing will be required to ensure that full functionality, safety, and failure modes are well understood and within established limits.

The resource dichotomy will remain. We assessed that developing complex AI systems will be outside the capability of most users who might seek to develop new AI systems, but using AI systems for a variety of licit and illicit purposes is likely to become more prevalent. Policy, legal, ethical, and regulatory impediments will not present significant barriers to the employment of AI technologies. Given the ratings for the other four elements of TAV, we assessed that the final category—technology accessibility—would go from low to moderate by the medium term and remain moderate through the end of the ten-year period under consideration.

The Risk Assessment

This risk assessment focuses on the threat, vulnerability, and consequence to critical infrastructure of employing AI technologies for either licit or illicit purposes.71 DHS’s Cybersecurity and Infrastructure Security Agency (CISA) plays a pivotal role in critical infrastructure because it coordinates with the 16 sectors across an array of areas, including regulations and standards and preparedness and response.72

We will not conduct an exhaustive analysis across all 16 sectors, but rather will identify the types of threat, vulnerability, and consequence that are possible. As a reminder, in this report, we draw from the literature on smart cities and consider early AI-embedded capabilities
in a city’s critical infrastructure to be the initial stages of developing a smart city. Inherent in the assumption is that these infrastructure components are vital to the safe and effective functioning of society, the safety of the population, and the functioning of the economy. In making this assumption, we focus on AI-enabled critical infrastructure because purpose-built smart cities are unlikely to be widespread within the time horizon of this report (i.e., ten years).

Although our analysis does not pertain exclusively to smart cities, we use the smart city concept as an example because such cities ordinarily contain a majority of the critical infrastructure sectors and the digital capabilities that could use AI capabilities for monitoring and control and that could be targeted by adversarial AI systems.73 Smart cities build upon the IoT, which is composed of networked sensors and actuators that offer situational awareness for a city and its critical infrastructure, as well as the ability to remotely control and manage major functional parts of the city.74

Our RS analysis builds on the previous TAV section, in which we concluded that AI deployment remains in the early stages for many use cases but has achieved important penetration in other areas, including some critical infrastructure sectors, such as the financial services sector. In considering AI’s likely effect on risk, one source offers, “AI is expected to play a foundational role across our most critical infrastructures,” but the source also goes on to raise several important questions about AI’s use, including “whether it’s capable of taking on such vital tasks, collaborative enough to cooperate with humans and trustworthy enough to prove its transparency, reliability and dependability.”75

Threat

AI threats to critical infrastructure could emanate from

for swiftly analyzing and correlating patterns across billions of data points to track down a wide variety of cyber threats.”76 Still, GAN technology could also be used to effectively perpetrate an attack.77

The increasing incorporation of IoT capabilities increases vulnerability to cyberattacks. As one source notes, “AI impacts the power grid system through its capacity to absorb usage pattern data and deliver precise calculations of prospective demand, making it a prime technology for grid management.”78 However, it also increases the digital footprint and entry points for hackers to target in a cyberattack. This increased attack surface creates other potential vulnerabilities that could be exploited. AI systems could be used to conduct network reconnaissance, develop network penetration plans, and even conduct attacks, all without human intervention.79

AI systems—including using GANs and machine learning—have developed malware that is effective at evading cyber defenses, allowing the enhanced malware to bypass antivirus scanners.80 Research has also demonstrated how GANs have been effective at targeting individuals for social engineering attacks capable of penetrating networks based on their social media presence; the findings demonstrated that “threat actors can enhance the effectiveness of their phishing attacks by using AI as a malicious tool.”81

Insider threats represent another category of activity that could be used in targeting critical infrastructure during multiple stages of AI, from development to use of the technology. During development of AI systems, insiders could create insecurities and back doors that could be targeted for exploitation. Data from supervised or unsupervised learning could have been adulterated in the development phase, resulting in incorrect training of the foundation or transformer models. This could cause errant signals to be produced regarding sensor information and even result in operators taking unnecessary or
several possible sources. AI could be used in developing dangerous actions.
and monitoring critical infrastructure, which could provide The degree of autonomy granted to AI systems also
benefits for optimizing designs, improving efficiency, and represents a possible threat. In a smart grid, for example,
ensuring safety. However, using AI in this way also has the the industrial control systems could alert the operator
potential to create threats if inputs used in the design of that the system is malfunctioning and call for action to be
the infrastructure—say, a bridge—had been either incor- taken. A human operator or even an AI–human opera-
rectly used or tampered with. tor team might do further interrogation of the system to
In this way, AI provides advances for both attackers validate the fault signal, whereas an AI operator could
and defenders. A Microsoft report states, “AI technologies decide to take an immediate action based on its incorrect
can provide automated interpretation of signals gener- training. Either of the cases could present issues—the
ated during attacks, effective threat incident prioritization, human and AI–human operator team might be too slow
and adaptive responses to address the speed and scale to act and the AI operator could act quickly but be taking
of adversarial actions. The methods show great promise erroneous actions.
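The autonomy trade-off described above, acting immediately versus validating first, can be sketched as a small decision gate. This is an illustrative sketch only: the names (`FaultAlert`, `respond`) and the confidence threshold are assumptions for the example, not part of any fielded control system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaultAlert:
    """A hypothetical fault signal surfaced by an industrial control system."""
    sensor_id: str
    confidence: float  # model confidence that a real fault exists, 0.0-1.0

def respond(alert: FaultAlert, human_confirmed: Optional[bool],
            autonomy_threshold: float = 0.99) -> str:
    """Return "act", "dismiss", or "escalate" for a fault alert.

    A fully autonomous AI operator would act on its own (possibly incorrect)
    training; this gate acts without a human only when model confidence is
    very high, and otherwise escalates for human validation.
    """
    if human_confirmed is True:
        return "act"        # operator validated the fault signal
    if human_confirmed is False:
        return "dismiss"    # operator judged the signal spurious
    if alert.confidence >= autonomy_threshold:
        return "act"        # high-confidence automated response
    return "escalate"       # route to a human or AI-human team for review
```

The design choice mirrors the text: escalation trades speed for the chance to catch a signal produced by incorrect training.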
The threats discussed previously represent a small subset of possible concerns resulting from AI-generated activities targeting critical infrastructure. Finally, we assessed that threats will continue to be cumulative. By this, we signal that all technologies that contribute to AI development and use—such as the ability to accumulate and protect key data, high-performance computing, semiconductor development and manufacturing, robotics, machine learning, and NLP—could be targeted and should be considered as having potential vulnerabilities. This includes the enabling and converging technologies, such as cyber, big data, and IoT.

Vulnerability

In the "Threat" section, we allude to potential vulnerabilities; in this section, we provide a conceptual approach to considering critical infrastructure vulnerabilities. In this approach, we begin from the premise that an important driver of the vulnerability is the use case for the AI systems. Here we assessed that the scale, scope, and complexity of the applications are important to the vulnerability caused by the AI system. For our purposes, scale refers to the size of the affected population, while scope refers to the number of applications into which an AI system is incorporated.

AI systems confined in a defined system will likely not be as concerning as more-general applications that cut across numerous areas and applications. Consider medical applications in the field of oncology, in which AI-based technology is being used to characterize tumors; such abilities have proven quite useful.82 Given that the treatment protocols for the diagnosis rely on human–machine teaming, few would likely voice objections to this use of AI. However, as the scope and scale of the applications increase, it is likely that concerns will surface.

A transportation sector that includes self-driving cars and long-haul trucks empowered by AI introduces vulnerabilities that some have argued raise too many issues, including loss of agency for those riding in the vehicles, liability and insurance concerns should an accident occur, and the regulation and standardization of the vehicles.83 These same concerns could be magnified if an entire sector were to turn to AI, such as the financial services sector. AI has already been incorporated into the traditional financial services sector for protecting financial cyber networks, fraud detection, loan decisions, and newer financial technology applications, such as mobile banking, peer-to-peer payment services, automated portfolio managers, and trading platforms.84 A specific example of AI use applicable to the financial sector is the prevention and mitigation of distributed denial-of-service attacks.85

As the scope and scale become more encompassing and the uses more generalizable, additional concerns are likely to surface. For example, AI platforms capable of planning and controlling smart cities or even critical infrastructure (such as the electrical grid) could be employed to develop attack plans against this critical infrastructure. The use of AI to target the U.S. election system—be it deepfakes and misinformation or gerrymandering congressional districts—provides another example of a scope and scale that would be expansive and likely a cause of great concern. AI has also been contemplated for use in developing advanced biological and chemical weapons that could be used against critical infrastructure or populations.86

Complexity also leads to concerns. We assessed that as AI systems become more complex and approach AGI, it will become increasingly difficult for humans to understand and evaluate the outcomes of AI-enabled processes. Here, incorporating the Defense Innovation Board's five principles for AI technologies—that they be responsible, equitable, traceable, reliable, and governable—is essential for reducing AI vulnerabilities.87 Additionally, the use of GANs, protocols for the validation and verification of AI models, and the reduction of vulnerabilities associated with technologies that contribute to AI development and use will be critical.
Finally, to mitigate vulnerabilities associated with AI, integration and testing will be imperative during all phases of development and deployment to ensure that full functionality, safety, and failure modes are well understood and within established limits for any critical infrastructure applications.

Consequence

The consequences associated with AI use cases will also be affected by the scope, scale, and complexity of the applications. Critical infrastructure projects and sectors that function across a wide scope and large scale should be expected to have serious consequences. Critical infrastructure that affects entire sectors or perhaps even crucial parts of a single sector has the potential for greater impacts should it be attacked by AI-enabled threats or should AI design systems fail. Although it was not caused by an AI-enabled system, the 2021 Colonial Pipeline Company ransomware attack provides an example of how the targeting of even a single company responsible for an important segment of a sector can have disproportionately adverse consequences. The attack caused a "shutdown of nearly half of the gasoline and jet fuel supply delivered to the East Coast."88 It also demonstrated the risks associated with even narrowly targeted critical infrastructure attacks. Some industry experts called the attack a clarion call for energy cybersecurity.89

Complexity in developing and protecting critical infrastructure is also an important factor to consider in examining potential consequences of AI use and misuse. One AI expert offers the following warning about the increasing complexity of AI: "With super intelligent machines the problem with making them more and more intelligent is that we don't know how to specify their objectives correctly." The expert went on, "[t]hen we have a machine that's more intelligent, more powerful than human beings that's doing what we asked for but not what we really want."90 The warning is that humans will lose the ability to describe the system requirements of the very AI-enabled systems that are being developed.

Relating this back to consequences, we posit that severe adverse outcomes could occur if AI systems reach erroneous conclusions, regardless of the reason for the mistake. Considering how to evaluate the outcomes and recommendations of AI platforms proves to be a useful exercise. Here we can consider two hypotheses. Hypothesis 1 is that AI works as intended, and hypothesis 2 is that AI does not work as intended. For both hypotheses, one can choose either to act on the AI recommendations or not to act on the recommendations. Acting on the outcome under hypothesis 1 provides a good outcome because the AI works as intended and its recommendations are followed. However, acting on the outcome under hypothesis 2 could result in an adverse, potentially catastrophic outcome. To avoid such negative consequences, humans must retain control and understanding of the AI systems. The greater the consequences, the more important this becomes for ensuring the appropriate uses of AI technology and preventing catastrophic outcomes.

Regarding consequences, we assessed that the ChatGPT-4 difficulties when that technology was first rolled out suggest that moving deliberately to field AI systems is a useful approach, especially if they directly interface with the public. Additionally, ensuring that appropriate guardrails are in place such that the AI reflects appropriate goals and objectives will be an important outcome to achieve. Building public trust in AI systems will be essential for taking advantage of the opportunities and mitigating the challenges that are likely to result from the incorporation of these new capabilities.

Emerging Technology Risk Assessment

AI remains in a relatively early state of TAV (and science and technology maturity in particular) for a majority of the critical infrastructure applications, with contributing technologies maturing along discrete timelines. As a result, we assessed that advances in AI systems will occur both incrementally as these individual technologies mature and, at times, in a discontinuous manner—as occurred with ChatGPT-4. AI represents transformative technology and will likely be incorporated broadly across society—including in critical infrastructure—as occurred with the internet. We assessed that AI will likely be affected by many of the same factors as other emerging technologies, such as ensuring adequate cybersecurity, protecting IP, ensuring key data protections, and protecting proprietary methods and processes. Table 2 provides our AI risk assessment for critical infrastructure.

The threat landscape is likely to be defined by incremental advances resulting from threat actors and critical infrastructure stakeholders engaging in a constant competition for supremacy. We also assess that early rollout of the technology before appropriate validation and verification has been conducted could present a significant target for adversaries and a loss of confidence that could doom additional use cases.

The vulnerability and consequence will both be directly related to the use cases for the AI systems—
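The two-hypothesis exercise in the Consequence discussion reduces to a 2x2 outcome matrix. The sketch below fills in the two cells the text does not spell out (not acting on a correct recommendation, and not acting on an erroneous one) as labeled assumptions rather than the report's findings.

```python
# The report's two hypotheses (AI works as intended / does not) crossed with
# the two choices (act on the recommendation / do not act) give four outcomes.
OUTCOMES = {
    ("works", "act"):    "good outcome: correct recommendation followed",
    ("works", "ignore"): "missed benefit: correct recommendation unused",   # assumption
    ("fails", "act"):    "adverse, potentially catastrophic outcome",
    ("fails", "ignore"): "harm avoided: erroneous recommendation caught",   # assumption
}

def outcome(ai_works_as_intended: bool, act_on_recommendation: bool) -> str:
    """Look up the outcome for one hypothesis/action pair."""
    key = ("works" if ai_works_as_intended else "fails",
           "act" if act_on_recommendation else "ignore")
    return OUTCOMES[key]
```

The asymmetry between the two "act" cells is the report's point: because the operator cannot know in advance which hypothesis holds, human control and understanding are what keep the ("fails", "act") cell from being reached.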
TABLE 2
Artificial Intelligence Risk Assessment for Critical Infrastructure
Columns: Scenario | TAV Average | Threat | Vulnerability | Consequence | Average
government, and (2) ensuring that AI is being appropriately considered and managed in key DHS internal operational and management areas.

The use of AI-enabled technologies is not new to DHS, and they have already been incorporated into a wide variety of applications, including facial recognition and predictive analysis using big data applications for assisting law enforcement officers. For example, DHS's Homeland Security Investigations department and the Science and Technology Directorate want to employ AI technology to discover, stop, and prosecute child sexual abuse cases.92

Given the likely future ubiquity of AI, we should expect that DHS stakeholders will also be incorporating AI capabilities into their operations. As an example, New York City released a plan for responsible AI on October 16, 2023, called the "New York City Artificial Intelligence Action Plan." The plan provides a "framework for city agencies to carefully evaluate AI tools and associated risks, help city government employees build AI knowledge and skills, and support the responsible implementation of these technologies to improve quality of life for New Yorkers."93

Conclusion

AI is transformative technology and will likely be incorporated broadly across society—including in critical infrastructure—as occurred with the internet. As DHS states in its AI strategy document, AI has already been used in several key critical infrastructure sectors, including "manufacturing, financial services, transportation, healthcare, energy, and food and agriculture."94 Figure 2 provides our overall risk assessment, by time frame.

We assessed that protecting AI development and security will likely be affected by many of the same factors as those affecting other emerging technologies, such as ensuring cybersecurity, protecting IP, ensuring key data protections, and protecting proprietary methods and processes. Failing to do so could create potentially catastrophic vulnerabilities in the country's critical infrastructure.

We further assess that the AI field contains numerous technologies that will be incorporated into AI critical infrastructure–related systems as they become available. As a result, AI science and technology maturity will be based on key dependencies in several essential technology areas, including high-performance computing, advanced semiconductor development and manufacturing, robotics, machine learning, and NLP, and on the ability to accumulate and protect key data.

Despite significant excitement about AI, particularly in the past decade or so, the technology remains in a relatively early state of maturity. This has significant implications for setting the conditions for the responsible development and use of AI today and gaining a better
FIGURE 2
Artificial Intelligence Risk Assessment for Critical Infrastructure
Radar chart with axes TAV Average, Threat, Vulnerability, and Consequence (scale 0.0 to 5.0), plotted for three time frames: short term (up to 3 years), medium term (3–5 years), and long term (5 to < 10 years).
NOTE: The emerging technology risk assessment scale is 0 to < 2 = low impact or not likely feasible, 2 to < 4 = moderate impact or possible, and 4 to 5 = high impact or likely feasible.
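The figure's 0-to-5 scale can be expressed directly in code. A minimal sketch: the banding follows the figure note, while the simple mean used to combine threat, vulnerability, and consequence is an assumption for illustration, since the report does not state its aggregation rule.

```python
def risk_band(score: float) -> str:
    """Map a 0-5 score to the report's assessment bands (per the figure note)."""
    if not 0.0 <= score <= 5.0:
        raise ValueError("score must be between 0 and 5")
    if score < 2.0:
        return "low impact or not likely feasible"
    if score < 4.0:
        return "moderate impact or possible"
    return "high impact or likely feasible"

def average_risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine the three components with a simple mean (illustrative only)."""
    return (threat + vulnerability + consequence) / 3.0
```

For example, component scores of 3, 4, and 5 would average to 4.0 and fall in the "high impact or likely feasible" band.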
Notes
1 NIST, Computer Security Resource Center, "Artificial Intelligence."
2 Herath and Mittal, "Adoption of Artificial Intelligence in Smart Cities."
3 Herath and Mittal, "Adoption of Artificial Intelligence in Smart Cities."
4 Initially, this assertion was made; however, NIST reevaluated the results and concluded that facial recognition algorithms performed similarly across demographic groups (NIST, "NIST Evaluates Face Recognition Software's Accuracy for Flight Boarding").
5 Olavsrud, "9 Famous Analytics and AI Disasters."
6 Norden and Ramachandran, "Artificial Intelligence and Election Security."
7 Bansal, "Characteristics of Big Data."
8 U.S. Department of Defense, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense, p. 2.
9 EO 14110, "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."
10 White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."
11 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
12 Roser, "The Brief History of Artificial Intelligence."
13 Highest-h-indexed researchers refers to the most-cited researchers (Abdulla and Chahal, Voices of Innovation).
14 Kuusi and Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence."
15 Kuusi and Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence."
16 Roser, "The Brief History of Artificial Intelligence."
17 This framework is described in Chapter 6 (pp. 143–162) and Appendix B (pp. 301–304) of Gerstein, The Story of Technology.
31 Whyte, "Beyond 'Bigger, Faster, Better,'" p. 26.
32 Kolosnjaji et al., "Adversarial Malware Binaries."
33 Bahnsen et al., "DeepPhish."
34 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
35 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
36 Grand View Research, Artificial Intelligence Market Size and Share Report, 2030.
37 Grand View Research, Artificial Intelligence Market Size and Share Report, 2030.
38 Haan and Watts, "24 Top AI Statistics and Trends in 2024."
39 Lawler, "How the U.S. Is Trying to Stay Ahead of China in the AI Race."
40 De Vynck, "How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence."
41 Batra et al., Artificial Intelligence Hardware.
42 De Vynck, "How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence."
43 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
44 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
45 Abdulla and Chahal, Voices of Innovation.
46 "AI is paired with human operators across functions like equipment calibration, robotics controls in assembly plants, predictive maintenance, and more" (Microsoft, "Autonomy for Industrial Control Systems").
47 Henshall, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down."
48 EO 13859, "Maintaining American Leadership in Artificial Intelligence."
49 NIST, U.S. Leadership in AI.
50 EO 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government."
51 NIST, "AI RMF Development."
52 NIST, NIST AI RMF Framework. This reference includes specific tables that contain recommendations and outcomes. NIST, "5 AI RMF Core."
53 National Artificial Intelligence Advisory Committee, National Artificial Intelligence Advisory Committee (NAIAC), p. 3.
54 White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."
55 White House, "Blueprint for an AI Bill of Rights."
56 White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."
57 Bloomberg Law, "Uses and Risks of AI in Export Controls Compliance."
58 U.S. Department of the Treasury, "The Committee on Foreign Investment in the United States (CFIUS)."
59 Lane, "Nvidia Could Be Affected by Expanded US Export Controls on AI Chips to China."
60 United Nations, "Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence."
61 Talagala, "The AI Act."
62 Talagala, "The AI Act."
63 Rozen, "AI Leaders Are Calling for More Regulation of the Tech. Here's What That May Mean in the US."
64 Google, Recommendations for Regulating AI, p. 1.
65 Gerstein, Tech Wars.
66 Li, "What Could AI Regulation in the US Look Like?"
67 Smith, "What Large Models Cost You—There Is No Free AI Lunch."
68 Bond, "It Takes a Few Dollars and 8 Minutes to Create a Deepfake. And That's Only the Start."
69 Carnegie Endowment of International Peace, "The Future of AI Regulation."
70 Shea and Burns, "Smart City."
71 For this risk assessment, the analysis of threat is focused on the capabilities and intentions of the actor; the vulnerabilities are weaknesses to be exploited; and the consequences are the outcomes of the event occurrence (Cox, "Some Limitations of 'Risk = Threat × Vulnerability × Consequence' for Risk Analysis of Terrorist Attacks").
72 The 16 critical infrastructure sectors are chemical; commercial facilities; communications; critical manufacturing; dams; defense industrial base; emergency services; energy; financial services; food and agriculture; government facilities; health care and public health; information technology; nuclear reactors, materials, and waste; transportation systems; and water and wastewater (Cybersecurity and Infrastructure Security Agency, "Critical Infrastructure Sectors").
73 "Several major characteristics are used to determine a city's smartness. These characteristics include a technology-based infrastructure; environmental initiatives; a high-functioning public transportation system; and a confident sense of urban planning and humans to live and work within the city and utilize its resources" (Shea and Burns, "Smart City").
74 Sakhnini et al., "AI and Security of Critical Infrastructure."
75 Abdallat, "Can We Trust Critical Infrastructure to Artificial Intelligence?"
76 Microsoft, Microsoft Digital Defense Report.
77 Kolosnjaji et al., "Adversarial Malware Binaries."
78 Abdallat, "Can We Trust Critical Infrastructure to Artificial Intelligence?"
79 Leenen and Meyer, "Artificial Intelligence and Big Data Analytics in Support of Cyber Defense."
80 Kolosnjaji et al., "Adversarial Malware Binaries."
81 Bahnsen et al., "DeepPhish."
82 Kumar et al., "Artificial Intelligence in Disease Diagnosis."
83 Spray, "8 Biggest Challenges with Self-Driving Cars."
84 Columbia University, Fu Foundation School of Engineering and Applied Science, "What Is Financial Technology (FinTech)?"
85 Frackiewicz, "The Role of Artificial Intelligence in DDoS Mitigation."
86 Urbina et al., "Dual Use of Artificial-Intelligence-Powered Drug Discovery."
87 U.S. Department of Defense, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense.
88 Vasquez, "How the Colonial Pipeline Hack Galvanized a Nation at Risk."
89 Vasquez, "How the Colonial Pipeline Hack Galvanized a Nation at Risk."
90 Lebans, "The Threat from AI Is Not That It Will Revolt, It's That It'll Do Exactly as It's Told."
91 DHS, "Secretary Mayorkas Announces New Measures to Tackle A.I., PRC Challenges at First State of Homeland Security Address."
92 Kelley, "DHS Looks to AI to Help Solve Child Abuse Cases."
93 City of New York, "Mayor Adams Releases First-of-Its-Kind Plan for Responsible Artificial Intelligence Use in NYC Government."
94 DHS, U.S. Department of Homeland Security Artificial Intelligence Strategy, p. ii.
95 Federal Emergency Management Agency, Strategic Foresight Initiative, "Critical Infrastructure."

References
Abdallat, A. J., "Can We Trust Critical Infrastructure to Artificial Intelligence?" Forbes, July 1, 2022.
Abdulla, Sara, and Husanjot Chahal, Voices of Innovation: An Analysis of Influential AI Researchers in the United States, Center for Security and Emerging Technology, July 2023.
Andersen, Ross, "What Happens When AI Has Read Everything?" The Atlantic, January 18, 2023.
Authors Guild, "Survey Reveals 90 Percent of Writers Believe Authors Should Be Compensated for the Use of Their Books in Training Generative AI," press release, May 15, 2023.
Bahnsen, Alejandro Correa, Ivan Torroledo, Luis David Camacho, and Sergio Villegas, "DeepPhish: Simulating Malicious AI," in 2018 APWG Symposium on Electronic Crime Research (eCrime), 2018.
Bansal, Sumeet, "Characteristics of Big Data," blog post, AnalytixLabs, June 17, 2021.
Batra, Gaurav, Zach Jacobson, Siddarth Madhav, Andrea Queirolo, and Nick Santhanam, Artificial Intelligence Hardware: New Opportunities for Semiconductor Companies, McKinsey and Company, December 2018.
Bloomberg Law, "Uses and Risks of AI in Export Controls Compliance," Practical Guidance, undated.
Bond, Shannon, "It Takes a Few Dollars and 8 Minutes to Create a Deepfake. And That's Only the Start," NPR, March 23, 2023.
Carnegie Endowment of International Peace, "The Future of AI Regulation: A Conversation with Arati Prabhakar," video, November 14, 2023. As of January 30, 2024: https://www.youtube.com/watch?v=3uovOOUL4zg
City of New York, "Mayor Adams Releases First-of-Its-Kind Plan for Responsible Artificial Intelligence Use in NYC Government," press release, October 16, 2023.
Columbia University, Fu Foundation School of Engineering and Applied Science, "What Is Financial Technology (FinTech)? A Beginner's Guide," Columbia Engineering Boot Camps blog, undated.
Cox, Louis Anthony, Jr., "Some Limitations of 'Risk = Threat × Vulnerability × Consequence' for Risk Analysis of Terrorist Attacks," Risk Analysis, Vol. 28, No. 6, December 2008.
Cybersecurity and Infrastructure Security Agency, "Critical Infrastructure Sectors," webpage, undated. As of February 15, 2024: https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors
De Vynck, Gerrit, "How Big Tech Is Co-Opting the Rising Stars of Artificial Intelligence," Washington Post, September 30, 2023.
DHS—See U.S. Department of Homeland Security.
Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," Executive Office of the President, February 11, 2019.
Executive Order 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," Executive Office of the President, December 3, 2020.
Executive Order 14110, "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," Executive Office of the President, October 30, 2023.
Federal Emergency Management Agency, Strategic Foresight Initiative, "Critical Infrastructure: Long-Term Trends and Drivers and Their Implications for Emergency Management," June 2011.
Frackiewicz, Marcin, "The Role of Artificial Intelligence in DDoS Mitigation," TechnoSpace2, August 3, 2023.
Gerstein, Daniel M., The Story of Technology: How We Got Here and What the Future Holds, Prometheus Books, 2019.
Gerstein, Daniel M., Tech Wars: Transforming U.S. Technology Development, Praeger, September 2022.
Google, Recommendations for Regulating AI, undated.
Goyal, Manish, Shobhit Varshney, and Eniko Rozsa, "What Is Generative AI, What Are Foundation Models, and Why Do They Matter?" blog post, IBM, March 8, 2023.
Grand View Research, Artificial Intelligence Market Size and Share Report, 2030, June 2023.
Gülen, Kerem, "The Building Blocks of AI," webpage, Dataconomy, April 3, 2023. As of January 30, 2024: https://dataconomy.com/2023/04/03/basic-components-of-artificial-intelligence/
Haan, Katherine, and Rob Watts, "24 Top AI Statistics and Trends in 2024," Forbes Advisor, April 25, 2023.
Henshall, Will, "4 Charts That Show Why AI Progress Is Unlikely to Slow Down," Time, November 6, 2023.
Herath, H. M. K. K. M. B., and Mamta Mittal, "Adoption of Artificial Intelligence in Smart Cities: A Comprehensive Review," International Journal of Information Management Data Insights, Vol. 2, No. 1, April 2022.
Hill, Michael, "NATO Tests AI's Ability to Protect Critical Infrastructure Against Cyberattacks," CSO, January 5, 2023.
Kelley, Alexandra, "DHS Looks to AI to Help Solve Child Abuse Cases," NextGov/FCW, October 2, 2023.
Kolosnjaji, Bojan, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli, "Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables," in 2018 26th European Signal Processing Conference, September 2018.
Kumar, Yogesh, Apeksha Koul, Ruchi Singla, and Muhammad Fazal Ijaz, "Artificial Intelligence in Disease Diagnosis: A Systematic Literature Review, Synthesizing Framework and Future Research Agenda," Journal of Ambient Intelligence and Humanized Computing, Vol. 14, No. 7, 2023.
Kuusi, Osmo, and Sirkka Heinonen, "Scenarios from Artificial Narrow Intelligence to Artificial General Intelligence—Reviewing the Results of the International Work/Technology 2050 Study," World Futures Review, Vol. 14, No. 1, 2022.
Lane, Terry, "Nvidia Could Be Affected by Expanded US Export Controls on AI Chips to China," Investopedia, October 16, 2023.
Lawler, Dave, "How the U.S. Is Trying to Stay Ahead of China in the AI Race," Axios, June 29, 2023.
Lebans, Jim, "The Threat from AI Is Not That It Will Revolt, It's That It'll Do Exactly as It's Told," CBC Radio, April 24, 2020.
Leenen, Louise, and Thomas Meyer, "Artificial Intelligence and Big Data Analytics in Support of Cyber Defense," in Information Resources Management Association, Research Anthology on Artificial Intelligence Applications in Security, IGI Global, November 2020.
Li, Victor, "What Could AI Regulation in the US Look Like?" Legal Rebels podcast, American Bar Association, June 14, 2023.
Microsoft, "Autonomy for Industrial Control Systems," white paper, April 2020.
Microsoft, "Applications for Artificial Intelligence in Department of Defense Cyber Missions," On the Issues blog, May 3, 2022.
National Artificial Intelligence Advisory Committee, National Artificial Intelligence Advisory Committee (NAIAC): Year 1, May 2023.
National Institute of Standards and Technology, NIST AI RMF Playbook, undated.
National Institute of Standards and Technology, "5 AI RMF Core," webpage, undated. As of January 30, 2024: https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Core_And_Profiles/5-sec-core#tab:govlongtblr
National Institute of Standards and Technology, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, August 9, 2019.
National Institute of Standards and Technology, "NIST Evaluates Face Recognition Software's Accuracy for Flight Boarding: Agency's New Test Concerns Checking in Passengers and Documenting Their Exit from a Country," news release, July 13, 2021.
National Institute of Standards and Technology, Computer Security Resource Center, "Artificial Intelligence," webpage, updated July 20, 2023. As of January 30, 2024: https://csrc.nist.gov/Topics/Technologies/artificial-intelligence
National Institute of Standards and Technology, "AI RMF Development," webpage, updated January 2, 2024. As of January 30, 2024: https://www.nist.gov/itl/ai-risk-management-framework/ai-rmf-development
NIST—See National Institute of Standards and Technology.
Norden, Lawrence, and Gowri Ramachandran, "Artificial Intelligence and Election Security," Brennan Center for Justice, October 5, 2023.
Olavsrud, Thor, "9 Famous Analytics and AI Disasters," CIO, September 22, 2023.
Roser, Max, "The Brief History of Artificial Intelligence: The World Has Changed Fast—What Might Be Next?" Our World In Data, December 6, 2022.

Rozen, Courtney, "AI Leaders Are Calling for More Regulation of the Tech. Here's What That May Mean in the US," Washington Post, July 27, 2023.

Sakhnini, Jacob, Hadis Karimipour, Ali Dehghantanha, and Reza M. Parizi, "AI and Security of Critical Infrastructure," in Kim-Kwang Raymond Choo and Ali Dehghantanha, eds., Handbook of Big Data Privacy, Springer, 2020.

Shea, Sharon, and Ed Burns, "Smart City," TechTarget, webpage, July 2020. As of February 5, 2024: https://www.techtarget.com/iotagenda/definition/smart-city

Smith, Craig S., "What Large Models Cost You—There Is No Free AI Lunch," Forbes, September 8, 2023.

Spray, Aaron, "8 Biggest Challenges with Self-Driving Cars," HotCars, April 19, 2021.

Talagala, Nisha, "The AI Act: Three Things to Know About AI Regulation Worldwide," Forbes, June 29, 2022.

United Nations, "Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence," press release, July 18, 2023.

Urbina, Fabio, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, "Dual Use of Artificial-Intelligence-Powered Drug Discovery," Nature Machine Intelligence, Vol. 4, 2022.

U.S. Department of Defense, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense, October 31, 2019.

U.S. Department of Homeland Security, U.S. Department of Homeland Security Artificial Intelligence Strategy, December 3, 2020.

U.S. Department of Homeland Security, "Secretary Mayorkas Announces New Measures to Tackle A.I., PRC Challenges at First State of Homeland Security Address," press release, April 21, 2023.

U.S. Department of the Treasury, "The Committee on Foreign Investment in the United States (CFIUS)," webpage, undated. As of February 9, 2024: https://home.treasury.gov/policy-issues/international/the-committee-on-foreign-investment-in-the-united-states-cfius

Vasquez, Christian, "How the Colonial Pipeline Hack Galvanized a Nation at Risk," EnergyWire, May 9, 2022.

White House, "Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People," webpage, undated. As of January 30, 2024: https://www.whitehouse.gov/ostp/ai-bill-of-rights/

White House, "Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans' Rights and Safety," fact sheet, May 4, 2023.

White House, "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence," fact sheet, October 30, 2023.

Whyte, Christopher, "Beyond 'Bigger, Faster, Better:' Assessing Thinking About Artificial Intelligence and Cyber Conflict," Cyber Defense Review, Vol. 8, No. 3, 2023.

Wilson, Mark, "ChatGPT Gets Its Biggest Update So Far—Here Are 4 Upgrades That Are Coming Soon," TechRadar, November 7, 2023.

Wood, Thomas, "Generative Adversarial Network," DeepAI, webpage, undated. As of February 16, 2024: https://deepai.org/machine-learning-glossary-and-terms/generative-adversarial-network

Yasar, Kinza, and Sarah Lewis, "Generative Adversarial Network (GAN)," TechTarget, webpage, undated. As of February 5, 2024: https://www.techtarget.com/searchenterpriseai/definition/generative-adversarial-network-GAN
ABBREVIATIONS
AGI artificial general intelligence
AI artificial intelligence
ANI artificial narrow intelligence
DHS U.S. Department of Homeland Security
EO executive order
EU European Union
GAN generative adversarial network
IoT Internet of Things
IP intellectual property
LLM large-language model
NIST National Institute of Standards and Technology
NLP natural language processing
R&D research and development
RMF risk management framework
RS risks and scenarios
TAV technology availability
TRL technology readiness level
RR-A2873-1