Managing Artificial Intelligence-Specific Cybersecurity Risks in The Financial Services Sector
Managing Artificial Intelligence-Specific Cybersecurity Risks in The Financial Services Sector
Intelligence-Specific
Cybersecurity Risks in the
Financial Services Sector
U.S. Department of the Treasury
March 2024
Executive Summary
In response to Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and
Use of Artificial Intelligence, this report focuses on the current state of artificial intelligence
(AI)-related cybersecurity and fraud risks in financial services, including an overview of
current AI use cases, trends of threats and risks, best-practice recommendations, and
challenges and opportunities. The report’s findings are based on 42 in-depth interviews
conducted in late 2023. The interview participants include representatives from the
financial services sector, information technology (IT) firms, data providers, and anti-fraud/
anti-money laundering (AML) companies. The U.S. Department of the Treasury (Treasury)
recognizes both the importance of responsible technological innovation across the financial
sector and the increasing opportunities surrounding AI systems in the financial sector.
Emerging technologies, however, often come with risks. While the focus of this report is on
cybersecurity and fraud, Treasury also recognizes that the use of AI in financial services has
important implications beyond these topics and will continue to study these implications.
Financial institutions have used AI systems in connection with their operations, and
specifically to support their cybersecurity and anti-fraud operations, for years. Early
adopters in the financial services sector may be revisiting AI’s potential strategic value and
considering new use cases in light of the recent rate of change and rapid developments
in AI technology. Thus far, many financial institutions have incorporated AI-related risks
into their existing risk management frameworks, especially those related to information-
technology, model, compliance, and third-party risk management.
Some of the financial institutions that Treasury met with reported that existing risk
management frameworks may not be adequate to cover emerging AI technologies,
such as Generative AI, which emulates input data to generate synthetic content. Hence,
financial institutions appear to be moving slowly in adopting expansive use of emerging
AI technologies. Interview participants generally agreed that the safe adoption of AI
technologies requires cross-enterprise collaboration among model, technology, legal,
compliance, and other teams, which can present its own challenges. Further, in the case
of cybersecurity and anti-fraud AI usage, interviewees largely agreed that effectively
managing risks requires collaboration across the financial services sector. Applying
appropriate risk management principles to AI development is critical from a cybersecurity
perspective, as data poisoning, data leakage, and data integrity attacks can take place
at any stage of the AI development and supply chain. AI systems are more vulnerable to
these concerns than traditional software systems because of the dependency of an AI
system on the data used to train and test it.
Like other critical infrastructure sectors, the financial services sector is increasingly
subject to costly cybersecurity threats and cyber-enabled fraud. As access to advanced
AI tools becomes more widespread, it is likely that, at least initially, cyberthreat actors
utilizing emerging AI tools will have the advantage by outpacing and outnumbering their
The Bank Policy Institute (BPI) and the American Bankers Association (ABA) are both making
efforts to close the fraud information-sharing gap across the banking sector. The ABA’s
initiative is specifically aimed at closing the fraud data gap for smaller financial institutions.
Treasury’s Financial Crimes Enforcement Network (FinCEN) and core providers might also
be well positioned to play a critical role in supporting efforts to ensure that smaller financial
institutions are benefitting from the advancements in AI technology development for
countering fraud. If smaller financial institutions are not supported in closing this critical
Additionally, the importance of data for AI technology and the complexity of AI technology
development would very likely increase financial institutions’ reliance on third-party
providers of data and technology. As a result, it is very likely that often overlooked
third-party risk considerations such as data integrity and data provenance will emerge
as significant concerns for third-party risk management. Emerging AI solutions may
challenge traditional expectations regarding financial institutions’ ownership of data,
models, and insights. Additionally, the current trend of adopting AI solutions through
multiple intermediaries and service providers complicates oversight and transparency. It
is becoming increasingly challenging to accurately understand data flows and the use of
AI solutions, thus inhibiting understanding and verification of those AI systems’ fidelity of
insights and decision making.
1. Introduction..................................................................................7
1.1 Background and Purpose of this Report.............................................................7
1.2 Report Organization.............................................................................................7
1.3 Scope and Methodology......................................................................................8
1.4 What do we mean by artificial intelligence?.......................................................9
1.5 Financial Services Sector Profile.......................................................................10
1.6 Cybersecurity and Fraud Trends.......................................................................10
Glossary.......................................................................................... 49
Within 150 days of the date of this order, the Secretary of the Treasury shall issue
a public report on best practices for financial institutions to manage AI-specific
cybersecurity risks.
This report provides an overview of the state of AI use in the financial services sector for
cybersecurity purposes and discusses its implications for financial institutions based on
42 interviews with industry stakeholders. While this report is focused on the use of AI in
cybersecurity and the risks that are attendant to that use, many of the findings and best
practices may have applicability to other AI use cases. This report does not address many
other issues related to AI and financial services, including those related to consumer
and investor protection, disparate impact, financial stability, and financial regulatory
questions. Treasury expects to continue to study the impact of AI on financial services in
the coming months and years. The observations reflect the participating stakeholders’
perception of the state of AI and is not intended as an authoritative assessment of
actual AI usage or terminology. Potential follow-on workstreams will address aligning
perceptions toward a common understanding of the current state and enhancements to AI
cybersecurity and risks.
Following the release of Treasury’s report Financial Services Sector’s Adoption of Cloud
Services1 in February 2023, Treasury launched a public-private partnership dedicated
to bolstering regulatory and private sector cooperation.2 This partnership—the Cloud
Executive Steering Group (CESG)—is chaired by leaders in the financial sector with
expertise in financial sector cybersecurity. The CESG provides a forum for convening
financial sector AI stakeholders across the member agencies of the Financial Stability
Oversight Council (FSOC),3 the Financial and Banking Information Infrastructure
Committee (FBIIC),4 and the Financial Services Sector Coordinating Council (FSSCC).5
To develop this report, Treasury, with assistance from CESG’s private sector members
and several FSSCC committees, met with a broad array of organizations and financial
firms to learn from practitioners about the use and implications of AI in the financial
services sector. This included 42 in-depth discussions with representatives from financial
institutions of all sizes, financial sector trade associations, cybersecurity and anti-fraud
service providers that include AI features in their products and services, consulting firms
assisting financial institutions in the development of AI, regulatory advocacy groups,
1 U.S. Department of the Treasury, The Financial Service Sector’s Adoption of Cloud Services (Feb. 2023), https://
home.treasury.gov/system/files/136/Treasury-Cloud-Report.pdf.
2 U.S. Department of the Treasury, U.S. Department of the Treasury Kicks Off Public-Private Executive Steering
Group to Address Cloud Report Recommendations (May 25, 2023), https://home.treasury.gov/news/press-
releases/jy1503.
3 Established in 2010 under the Dodd-Frank Wall Street Reform and Consumer Protection Act, the FSOC’s
mission is to identify risks to U.S. financial stability, promote market discipline, and respond to emerging
threats to the stability of the U.S. financial system. For more about FSOC see https://www.fsoc.gov.
4 Chartered under the President’s Working Group on Financial Markets, the FBIIC is charged with improving
coordination and communication among financial regulators, promoting public-private partnerships within
the financial sector, and enhancing the resiliency of the financial sector infrastructure overall. For more about
the FBIIC see https://www.fbiic.gov/.
5 Established in 2002, the FSSCC is an industry-led, non-profit organization that coordinates critical
infrastructure and homeland security activities within the financial services industry. FSSCC members consist
of financial trade associations, financial utilities, and the most critical financial firms. For more about the
FSSCC see https://fsscc.org/.
The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C.
9401(3): a machine-based system that can, for a given set of human-defined
objectives, make predictions, recommendations, or decisions influencing real or
virtual environments. Artificial intelligence systems use machine- and human-
based inputs to perceive real and virtual environments; abstract such perceptions
into models through analysis in an automated manner; and use model inference to
formulate options for information or action.
Treasury found that there is no uniform agreement among the participants in the study
on the meaning of “artificial intelligence.” While the above definition is broad, it may still
not cover all the different concepts associated with the term “artificial intelligence.” In
fact, one participant said they do not use “artificial intelligence” and instead prefer to
call these systems “augmented intelligence” to emphasize that users of these systems
still need to be the ultimate decision-makers. Participants also expressed frustration with
understanding various AI product and service offerings because vendors described their AI
systems in ways that do not always align with how financial institutions understand them
to work, especially for Generative AI.
6 Generative is capitalized throughout the report to help clearly distinguish between different AI technologies.
Financial institutions are generally organized and regulated based on the services that the
institutions provide. Collectively, these organizations form the backbone of the nation’s
financial system and are a vital component of the global economy. These organizations
are tied together through a network of electronic systems with innumerable entry points.
Financial institutions face an evolving and dynamic set of risks, including operational,
liquidity, credit, legal, and reputational risks. Each financial institution has unique security
and resilience needs, resources, and plans depending on the functions it performs and its
approach to risk management.8
Losses from fraud also continue to rise every year. According to Juniper Research, online
payment fraud is expected to cumulatively surpass $362 billion by 2028.12 Separately,
7 See U.S. Department of the Treasury, U.S. Department of Homeland Security, and FSSCC, Financial Services
Sector-Specific Plan (2015), https://www.cisa.gov/sites/default/files/publications/nipp-ssp-financial-
services-2015-508.pdf.
8 Ibid.
9 Verizon, 2023 Data Breach Investigations Report (Jun. 2023), https://www.verizon.com/business/resources/
Te7c/reports/2023-data-breach-investigations-report-dbir.pdf.
10 IBM, Cost of Data Breach Report 2023 (Jul. 2023), https://www.ibm.com/downloads/cas/E3G5JMBP.
11 Ibid.
12 Juniper Research, Online Payment Fraud: Market Forecasts, Emerging Threats & Segment Analysis 2023-2028
(Jun. 2023), https://www.juniperresearch.com/research/fintech-payments/fraud-identity/online-payment-
fraud-research-report/.
13 FBI, Business Email Compromise: The $50 Billion Scam (Jun. 2023), https://www.ic3.gov/Media/Y2023/
PSA230609.
14 Verizon, 2023 Data Breach Investigations Report (Jun. 2023), https://www.verizon.com/business/resources/
Te7c/reports/2023-data-breach-investigations-report-dbir.pdf.
15 The Federal Reserve FedPayments Improvement, Payments Fraud Insights July 2023: Mitigating Synthetic
Identity Fraud in the U.S. Payment System (Jul. 2023), https://fedpaymentsimprovement.org/wp-content/
uploads/frs-synthetic-identity-payments-fraud-white-paper-july-2020.pdf.
Some financial institutions reported that they started using AI for cybersecurity several years
ago. Various types of cybersecurity tools that financial institutions typically use to mitigate
cybersecurity risks now incorporate AI, making the institutions reportedly more agile than
they were in the past. Financial institutions provided examples of incorporating advanced
anomaly-detection and behavior-analysis AI methods into existing endpoint protection,
intrusion detection/prevention, data-loss prevention, and firewall tools. AI-driven tools
are replacing or augmenting the legacy, signature-based threat detection cybersecurity
approach of many financial institutions. AI tools can help detect malicious activity that
manifests without a specific, known signature. This capability has become critical in the
face of more sophisticated, dynamic cyberthreats that may leverage legitimate system
administration tools, for example, to avoid triggering signature detection.
Despite views on its potential, participants in our outreach stated that they are taking a
cautious and risk-based approach to integrating Generative AI into their cybersecurity
and anti-fraud operations. Most participants representing firms reported that they
are proceeding with caution on Generative AI and are trying to address Generative AI
Specifically, many financial institution representatives believe that their existing practices
align with aspects of the National Institute of Standards and Technology (NIST) AI Risk
Management Framework (RMF), which was released in January 2023. However, while
interview participants representing financial institutions stated that they have implemented
the core elements of the NIST RMF (govern, map, measure, manage), many also noted
that it is challenging to establish practical and enterprise-wide policies and controls for
emerging technologies like Generative AI. Discussion participants noted that while their
risk management programs should map and measure the distinctive risks presented by
technologies such as large language models (LLM), these technologies are new and can be
challenging to evaluate, benchmark, and assess in terms of their cybersecurity.
Differences in financial institutions’ proprietary data and adoption of cloud services will
likely influence the adoption of AI by each institution. Most representatives of financial
institutions stated that they are trying to leverage their own data for use cases that are
unique to their firm while relying on vendors for more general applications. As computing
capacity and analytic tools become more readily available through cloud service
providers, more institutions may develop their own models. Institutions that have already
adopted cloud services, or those with large amounts of proprietary data, will likely be
able to take advantage of these AI tools sooner than those that have not. In particular,
institutions that have already migrated some of their systems and data into cloud
computing platforms will likely be able to take advantage of these developments sooner
Data poisoning, data leakage, and data integrity attacks can occur at any stage of the AI
development and supply chain. AI systems are more vulnerable to these concerns than
traditional software systems because of the dependency of an AI system on the data used
16 Other issues specifically implicated by AI, but not necessarily relevant to cybersecurity and anti-fraud use cases,
include biases, unethical uses, and false outputs that are becoming more noteworthy with the increasing use of
foundational models such as LLMs and Generative AI. This report generally does not address these challenges.
Financial-sector firms may be able to address these kinds of issues by leveraging model risk management (MRM)
practices common to the banking industry. MRM may help manage the risks surrounding the use of these models
by highlighting the need for transparency and risk management of the models themselves. However, it remains
to be seen whether additional guidance or risk management practices are needed to manage these types of
risks. Additionally, financial institutions that participated in our study reported keeping humans involved in the
development of AI models and eventual applicability of AI models’ outputs as they work to resolve model-risk
concerns. How any of these issues manifest for institutions, however, largely depends on how AI systems are
deployed. Some AI systems, for example, may not be appropriate for high-assurance applications that require
explanation of the outputs to ensure repeatable, unbiased decision making.
Currently, firms do not share fraud data with each other to the extent that would be
needed to train anti-fraud AI models. Similar to cybersecurity tools, fraud detection and
prevention technologies have evolved from traditional rule-based engines, deny lists,
and device fingerprints to more advanced ML-based systems. However, the accuracy of
ML-based systems in identifying and modeling fraudulent behavioral patterns correlates
directly with the scale, scope (variety of datasets), and quality of data available to
firms. Unlike data on cybersecurity, there is reportedly little fraud information sharing
across the financial sector, which limits the ability to aggregate fraud data for use by AI
systems. Most financial institutions Treasury spoke with expressed the need for better
collaboration in this domain, particularly as fraudsters themselves have been using AI and
ML technologies. Sharing of fraud data would support the development of sophisticated
fraud detection tools and better identification of emerging trends or risks.
However, while collecting, analyzing, and sharing fraud data can improve detection, it
also raises privacy concerns that need to be managed through robust data protection and
privacy practices. The collection, storage, and processing of sensitive financial information
or other personal data poses risks. Transaction histories and personal behaviors are
examples of sensitive financial information that may be used as inputs in AI systems.
Although beyond the scope of this report, financial sector representatives have stated that
the historical data used to train fraud-detection models could contain biases, such as the
overrepresentation of certain demographics in anti-fraud cases.18 Participating firms noted
that data-anonymization techniques and algorithmic transparency could help to mitigate
some of these issues.
17 U.S. Department of the Treasury, Assessing the Impact of New Entrant Non-bank Firms on Competition in
Consumer Finance Markets (Nov. 2022), https://home.treasury.gov/system/files/136/Assessing-the-Impact-of-
New-Entrant-Nonbank-Firms.pdf.
18 Chen, Changye, et al, Ethical perspectives on AI hazards to humans: A review (Dec. 2023), https://www.ncbi.nlm.
nih.gov/pmc/articles/PMC10695628/.
• Social Engineering: Financial institutions have cited an increase in the scope and
scale of sophisticated social-engineering techniques. Threat actors can utilize LLMs to
conduct more targeted phishing, business email compromise, and other fraud. Using
Generative AI systems, threat actors can more realistically misrepresent themselves
as reflecting a variety of backgrounds, languages, statuses, and genders. These
attacks can further be tailored for high-value customers using other data points, such
as social media posts or messages to enhance their mimicry.
• Vulnerability Discovery: Threat actors can use advanced AI-based tools that are
typically used for cyber defense by developers and testers to discover vulnerabilities
and identify weaknesses in an institution’s IT network and application security
measures. This potential reduction in vulnerability discovery time, combined with
potential reduction in exploitation time, could give cyber threat actors an upper hand
against cybersecurity practitioners by outpacing patching or intrusion detection.
The few major providers of Generative AI models today have stated that they have
implemented safeguards to prevent malicious use of their products. However, there
are documented instances of researchers bypassing these safeguards through carefully
crafted prompt engineering techniques.19
• Data Poisoning: Threat actors corrupt the training data or model weights directly to
impair the training process or gain a desired output of a model.
19 Zou, A., et al, Universal and transferable adversarial attacks on aligned language models (Dec. 2023), https://
arxiv.org/abs/2307.15043.
20 Vassilev, A., Oprea, A., Fordyce, A. and Andersen, H., Adversarial Machine Learning: A Taxonomy and
Terminology of Attacks and Mitigations (Jan. 2024), NIST Trustworthy and Responsible AI, National Institute of
Standards and Technology, https://www.nist.gov/publications/adversarial-machine-learning-taxonomy-and-
terminology-attacks-and-mitigations.
24 The Federal Reserve FedPayments Improvement, Payments Fraud Insights July 2023: Mitigating Synthetic
Identity Fraud in the U.S. Payment System (Jul. 2023), https://fedpaymentsimprovement.org/wp-content/
uploads/frs-synthetic-identity-payments-fraud-white-paper-july-2020.pdf.
25 Wakefield, Financial Jeopardy: Companies Losing Fight Against Synthetic Fraud (Oct. 2023), https://www.
deduce.com/resource/wakefield-research-report/.
26 Ibid.
27 Ibid.
28 FinCEN Financial Trend Analysis, Identity-Related Suspicious Activity: 2021 Threats and Trends (Jan. 2022),
https://www.fincen.gov/sites/default/files/shared/FTA_Identity_Final508.pdf.
Various existing laws, regulations, and supervisory guidance are applicable to financial
institutions’ use of AI. Although existing laws, regulations, and supervisory guidance may
not expressly address AI, the principles contained therein can help promote safe, sound, and
fair implementation of AI. The focus of this report is on cybersecurity and fraud protection,
as they relate to AI and financial services, and thus, this report does not comprehensively
list the regulatory frameworks and issues that could potentially apply to the use of AI in
financial services. This report also does not weigh in on whether there are regulatory gaps or
issues that may necessitate additional legislation, regulation, or guidance.
Key examples of risk management and control principles common across financial sector
laws, regulations, and supervisory guidance that are applicable to the use of AI regarding
cybersecurity and fraud issues include:
• Model Risk Management: Appropriate governance and controls over the use of
AI and other tools is an important aspect of managing risks. Supervisory guidance
on model risk management has principles applicable to managing risks from AI,
including assessing conceptual soundness, confirming underlying data, considering
model complexity and transparency, assessing performance, and evaluating
implementation.33 Regardless of whether an AI tool or service is formally considered a
model within the context of model risk management, appropriate risk management,
including validation and testing, helps ensure AI tools and services operate as
intended. Ongoing performance monitoring helps assess model implementation
and whether the model is performing as intended. Similar considerations apply to
29 See, e.g., Interagency Guidelines Establishing Standards for Safety and Soundness (12 CFR Part 30, App. A
(OCC); 12 CFR Part 208, App. D–1 (FRB), 12 CFR. Part 364 App. A (FDIC)); Interagency Guidelines Establishing
Information Security Standards, 12 CFR 30, Appendix B (OCC), 12 CFR 208, Appendix D-2 (FRB, for state
member banks), 12 CFR §§ 211.24 (FRB, for uninsured state-licensed branches or agencies of foreign banks),
12 CFR 225, Appendix F (FRB, for bank holding companies), 12 CFR 362, Appendix B (FDIC), Computer-Security
Incident Notification Requirements for Banking Organizations and Their Bank Service Providers, (codified at 12
CFR part 53 (OCC); 12 CFR 225, subpart N (FRB); 12 CFR 304, subpart C (FDIC); Interagency Guidance on Third-
Party Relationships: Risk Management (Board SR 23-4, FDIC FIL 29-2023, and OCC Bulletin 2023-17); 17 C.F.R.
240.15c3-5 (risk management controls for brokers or dealers with market access); 17 C.F.R. 242.1000-1007
(Regulation Systems Compliance and Integrity); 15 U.S.C 78o(g) (prevention of misuse of material nonpublic
information by brokers or dealers); 17 C.F.R. 248.1-30 (Regulation S-P: Privacy of Consumer Financial
Information and Safeguarding Personal Information).
30 See 12 CFR 30, appendix D, OCC Guidelines Establishing Heightened Standards for Certain Large Insured
National Banks, Insured Federal Savings Associations, and Insured Federal Branches; and 12 CFR 52, Enhanced
Prudential Standards issued by the Federal Reserve Board; and Federal Reserve SR Letter 20-24: Sound
Practices to Strengthen Operational Resilience (Nov. 2, 2020).
31 See, e.g., 17 CFR 39 subpart C, establishing heightened risk management and cybersecurity requirements for
derivatives clearing organizations that have been designated as systemically important; and 12 CFR part 234,
establishing, among other things, operational risk management standards for certain systemically important
financial market utilities supervised by the FRB.
32 OCC Bulletin 2017-43, New, Modified, or Expanded Bank Products and Services: Risk Management Principles.
33 OCC Bulletin 2011-12, Sound Practices for Model Risk Management: Supervisory Guidance on Model Risk
Management; FDIC Financial Institution Letter (FIL)-22-2017, Adoption of Supervisory Guidance on Model;
Federal Reserve SR Letter 11-7, Supervisory Guidance on Model Risk Management.
34 See, e.g., 17 CFR 39.13 and 17 CFR 39.36, setting forth risk management considerations for systemically
important derivatives clearing organizations.
35 See, e.g., 17 CFR Part 242 Subpart ECFRe106e84e67e2bc9, Regulation SCI—Systems Compliance and Integrity.
36 See FFIEC IT Examination Handbook InfoBase - III.A Data Governance and Data Management for relevant
expectations available here: https://ithandbook.ffiec.gov/it-booklets/architecture-infrastructure-and-
operations/iii-common-aio-risk-management-topics/iiia-data-governance-and-data-management/.
37 FRB, FDIC, and OCC, Interagency Guidance on Third-Party Relationships: Risk Management (Jun. 2023),
https://www.federalregister.gov/documents/2023/06/09/2023-12340/interagency-guidance-on-third-party-
relationships-risk-management.
38 See, e.g., 17 CFR. 39.18(d) concerning retention of responsibility in the event of outsourcing.
FBIIC partner agencies also advised of two additional topics not directly related to cyber,
anti-fraud, or operational resilience.
The potential for further benefits as AI gains more widespread adoption could be significant.
However, as with the implementation of any new technology or businesses process,
unintended risks and consequences may occur if effective governance and controls are not
implemented. Risks and consequences may be magnified as adoption of AI technologies
becomes more widespread. Treasury, U.S. prudential regulators, and other U.S. government
agencies also participate in a range of international fora concerning the regulation of AI in
financial services that are more specific to client-facing products, including at the Financial
Stability Board (FSB),47 the Basel Committee on Bank Supervision (BCBS), 48 and the
Organisation for Economic Co-operation and Development (OECD).49
42 See SEC’s Fact Sheet, Conflicts of Interest and Predictive Analytics (Jul. 2023), https://www.sec.gov/files/34-
97990-fact-sheet.pdf; and SEC Press Release, SEC Proposes New Requirements to Address Risks to Investors
from Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and
Investment Advisers, https://www.sec.gov/news/press-release/2023-140.
43 See SEC, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and
Investment Advisers, https://www.sec.gov/files/rules/proposed/2023/34-97990.pdf (Proposing Release, July
26, 2023). The proposal would also require a firm to have written policies and procedures reasonably designed
to achieve compliance with the proposed rules and to make and maintain books and records related to these
requirements. See also SEC, Cybersecurity Risk Management for Investment Advisers, Registered Investment
Companies, and Business Development Companies, https://www.sec.gov/files/rules/proposed/2022/33-11028.
pdf (Proposing Release, Feb. 9, 2022) and Outsourcing by Investment Advisers, https://www.sec.gov/files/rules/
proposed/2022/ia-6176.pdf (Proposing Release, Oct. 26, 2022).
44 FSOC, Annual Report 2023, https://home.treasury.gov/system/files/261/FSOC2023AnnualReport.pdf.
45 FRB, OCC, FDIC, CFPB, and NCUA, Request for Information and Comment on Financial Institutions' Use of Artificial
Intelligence, Including Machine Learning (Mar. 2021), https://www.federalregister.gov/documents/2021/03/31/2021-06607/
request-for-information-and-comment-on-financial-institutions-use-of-artificial-intelligence.
46 FHFA, Advisory Bulletin 2022-02, Artificial Intelligence/Machine Learning Risk Management (Feb. 2022), https://www.fhfa.
gov/SupervisionRegulation/AdvisoryBulletins/Pages/Artifical-Intelligence-Machine-Learning-Risk-Management.aspx.
47 FSB, Artificial Intelligence and Machine Learning in Financial Services (Nov. 2017), https://www.fsb.org/2017/11/
artificial-intelligence-and-machine-learning-in-financial-service.
48 BIS, Newsletter on artificial intelligence and machine learning (Mar. 2022), https://www.bis.org/publ/bcbs_nl27.htm.
49 OECD, Artificial Intelligence, Machine Learning and Big Data in Finance (Oct. 2021), https://www.oecd.org/
finance/financial-markets/Artificial-intelligence-machine-learning-big-data-in-finance.pdf.
Generally, the three lines of defense for risk management are the business line, corporate
risk management, and auditing risk controls. In such a structure, the business line is first
responsible for managing risk associated with its AI systems and its business offerings.
The second line provides compliance management systems and risk management
structures to support the business lines managing AI-specific risk and enables risk-related
communications and decisions about AI system use to be elevated to management.
The third line audit assures the right monitoring, controls, and reporting are in place to
manage the context-specific risks posed by AI.
In the absence of such a risk management structure, the NIST RMF suggests having a
principles-based approach in which senior leadership determines the overall goals, values,
policies, and risk tolerance within the organization and aligns the technical aspects of AI
risk management to these goals. Regardless of the approach, NIST recommends that AI
risk management governance should be a continual and intrinsic requirement over an
AI system’s lifespan and organizational hierarchy. Furthermore, transparency is required
to improve human review processes and establish accountability in AI systems’ teams.
50 Basel Committee on Banking Supervision, Principles for the Sound Management of Operational Risk (Jun.
2011), https://www.bis.org/publ/bcbs195.pdf.
51 Ibid.
Regardless of the location within the governance structure, financial institutions are
advised to integrate AI plans into their enterprise risk management functions and connect
them with other parts of the organization to address the multifaceted risks posed by AI.
The most common integration within enterprise risk management occurs across model
risk, technology risk, cybersecurity risk, and third-party risk management, which are all
core functions associated with the implementation and use of AI systems. AI-specific
integration at the enterprise level should include representation from legal, compliance,
data science, marketing, and other business functions like operations, product
management, and design, depending on the organization.
In this model, typically, the corporate data lead integrates data across the organization
through participation in a cross-functional AI risk management team to streamline data
requirements across the firm and to capitalize on data opportunities for AI systems. This
includes both internal use of AI systems and vendor solutions that have embedded AI.
Depending on the corporate structure of a financial institution, the corporate data lead
may not be always called the Chief Data Officer (CDO) but may be empowered with
similar responsibilities. This role will likely continue to evolve as financial institutions
adopt more AI systems and AI systems become embedded in more products. With careful
implementation, communication, and leadership support, the CDO can drive innovation
and manage data as a business asset.55
In Treasury’s own AI adoption strategy, Treasury has broadly leveraged the Chief Data
Officer role to empower both technology transformation and AI development. For
example, Treasury established a CDO position for the Office of Terrorism and Financial
Intelligence (TFI), which includes FinCEN, the Office of Foreign Assets Control (OFAC), the
Office of Intelligence and Analysis (OIA), the Office of Terrorist Financing and Financial
Crimes (TFFC), and Treasury Executive Office for Asset Forfeiture (TEOAF). The TFI CDO was
empowered to create an executive-level data governance board to drive decision making
on cloud adoption strategies, federated data lake development, and AI use cases with a
focus on scalability, accessibility, access control, and protection of data required to enable
the future development of AI models.
In 2023, this multi-year effort led by TFI CDO resulted in the launch of the CACHE platform,
a consolidated data platform for open source, subscription, and proprietary Treasury
data sources. Treating data as the foundation for AI development coupled with the
55 Eastwood, Brian. Chief data officers don’t stay in their roles long. Here’s why. (Sept. 2022), MIT Sloan School of
Management, https://mitsloan.mit.edu/ideas-made-to-matter/chief-data-officers-dont-stay-their-roles-long-
heres-why.
Some requests that financial institutions should consider asking their vendors include:
• Notify the financial institution if the third party makes changes or updates to
products or services that use AI systems.
• Disclose the scope of AI system use in their products or services and notify them of
material changes.
• Describe the model and data lifecycles, when an AI system is significant to a product
or service.
• Explain the impact the AI systems could have on the financial institution’s customers
and how the financial institution can explain this impact to their customers.
56 FS-ISAC, Financial Services and AI: Leveraging the Advantages, Managing the Risks (Feb. 2024), https://www.
fsisac.com/knowledge/ai-risk.
• App-based passkeys
Financial institutions should be wary of disabling any of these factors, like geolocation or
device fingerprinting.
The use case for AI systems should account for the risk tolerance associated with current
Generative AI shortcomings. If a higher level of explainability59 is appropriate for a use
case, Generative AI may not currently be a viable option. If a use case is intended to
have anti-bias assurances, it may be appropriate to train AI models only on data that is
prepared with anti-bias standards. Financial institutions should take these considerations
into account when determining how to use Generative AI and how to fit these concerns
within the risk tolerances of both the particular use case and the overall risk appetite of
their firms.
Secure-by-design principles are not only applicable to AI systems but have become more
essential as AI systems’ risk mitigation solutions are much more demanding. AI systems
are increasingly more complex and minor changes to the system may require changes in
the models, which may need retraining or retuning. Additionally, AI systems’ integration
into an organization’s IT systems requires investment in training multiple teams across
that organization. The process of applying frequent patches for security vulnerabilities
after integration is more complex. Therefore, it is paramount that cybersecurity and
other risks are considered in the design and development phases of AI systems. Financial
institutions are encouraged to communicate their security thresholds and requirements to
59 See NIST, The Four Principles of Explainable AI, NISTIR 8312, https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.
IR.8312.pdf.
60 See UK National Cyber Security Center, et al, Guidelines for Secure AI System Development (Nov. 2023), https://
www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf.
Careful consideration of terminology may help address the current lack of clarity around
measuring and identifying risks, especially with the rapid adoption of Generative AI. As noted
in the introduction, terminology can have implications for the common understanding
of AI technology and its associated risks as well. For instance, one firm uses “augmented
intelligence” to shift the responsibility to the user by emphasizing that the system is
augmenting the user’s intelligence, rather than having its own intelligence. Similarly, the
use of “hallucination” to describe false outputs by Generative AI suggests these Generative
AI systems intend meaning in their outputs when what they are supplying is probabilistic
semantics.61 This anthropomorphism may misleadingly imply intention and create a false
sense of trust in a system. For this reason, one firm said they use “prompt” and “response”
rather than “question” and “answer,” indicating an attempt to neutralize the language
associated with these systems to retain the primacy of human agency.
As a first effort, Treasury has included a glossary section in this report, based on NIST’s
AI RMF and Adversarial Machine Learning documents. Treasury intends to collaborate
with FBIIC partner agencies and FSSCC members to develop a common lexicon of AI
terminologies most relevant to financial institutions.
61 Stening, Tanner, What are AI chatbots actually doing when they ‘hallucinate’? Here’s why experts don’t like
the term (Feb. 2024), Northeastern Global News, https://news.northeastern.edu/2023/11/10/ai-chatbot-
hallucinations/.
With the tight intersection between AI and cloud services, financial institutions that have
already moved data and services to the cloud may have an advantage when it comes
to leveraging AI in a safe and sound manner. This gap is not insurmountable if firms can
transition to the cloud, although firms already in the cloud will have had more time to
experiment and refine their AI systems. These early adopters may be able to shorten the
AI ramp-up time for later cloud adopters. Some smaller financial institutions noted issues
with getting their own data from their core providers.
Treasury will seek to facilitate conversations between FSSCC members and critical core
providers, along with their direct and indirect oversight agencies and consumer advocate
organizations, to better understand how core providers are working toward developing AI-
enhanced capabilities for financial institutions in the cybersecurity and anti-fraud space.
In addition, Treasury will explore opportunities to collaborate with trade associations and
other stakeholders to extend smaller institutions’ access to AI capabilities.
Cybersecurity information sharing in the financial sector, especially through the FS-ISAC,
has matured over the years, but little progress has been made to enhance data sharing
related to fraud. The ABA is working to design, develop, and pilot a new information-
sharing exchange focused on fraud and other illicit finance activities.62 The U.S.
Government, with its collection of historical fraud reports, may be able to assist with this
effort to contribute to a data lake of fraud data that would be available to train AI, with
appropriate and necessary safeguards.
62 American Banker, ABA to launch information-sharing exchange to help banks fight fraud (Nov. 2023), https://
www.americanbanker.com/news/aba-to-launch-information-sharing-exchange-to-help-banks-fight-fraud.
Treasury will work with FBIIC and FSSCC to map major existing and anticipated regulatory
regimes relevant to financial sector firms and their vendors in the cybersecurity and
fraud space. This effort will explore potentially enhancing coordination across regulators
with the goal of fostering responsible AI advancements, while addressing risk, and
understanding applicable regulatory regimes. The coordination actions could include
the recommendation to establish AI-specific coordinating groups, as allowable, to assess
enhancing shared standards and regulatory coordination options.
The financial sector’s maturity with both AI and risk management, including enterprise
risk management, could help inform AI governance models applied to other industries.
Treasury will assist NIST’s U.S. AI Safety Institute (USAISI) to establish a financial sector-
specific working group under the new AI consortium construct with the goal of extending
the AI RMF toward a financial sector specific profile.
63 U.S. Department of the Treasury, Treasury Announces Enhanced Fraud Detection Process Using AI Recovers
$375M in Fiscal Year 2023 (Feb. 2024), https://home.treasury.gov/news/press-releases/jy2134.
Financial institutions have stated that they would benefit from the development of best
practices concerning the mapping of data supply chains and data standards. This mapping
would enable firms to understand restrictions and user rights throughout the training data
supply chain and AI model-output data chain while at the same time enabling privacy and
other data protection concerns. Relatedly, financial institutions stated that they would
benefit from a standardized description, similar to a nutrition label, for vendor-provided
Generative AI systems to clearly identify what data was used to train a model, where it
came from, and how any data submitted to the model will be incorporated.
Treasury will work with the financial sector, NIST, the National Telecommunications and
Information Administration (NTIA), and the Cybersecurity and Infrastructure Security
Agency (CISA) to identify whether such recommendations should be explored further. To
the extent feasible, this line of effort can potentially build upon the collaborative methods
used for the Software Bill of Materials (SBOM) workstreams led by NTIA and CISA covering
traditional software components. The best practices could recommend repeatable
methods to label incoming and outgoing data, including sharing a firm’s data with
external partners or ingesting data to train a model.
Financial institutions report that they are currently limiting Generative AI systems to use
cases where lower explainability levels may be deemed by the financial institution as
sufficient because they are less likely to have to provide any insight into how the outputs
were generated. However, going forward, more sensitive use cases that raise concerns like
safety, privacy, and consumer protection may increase the need for explainability.
Treasury will work with the financial sector, NIST, and relevant R&D programs to further
study the implication of black box AI explainability for cyber and fraud-related issues for
the financial services sector.
Further, the talent gap is not limited to those building and deploying AI systems. The
technical competency gap also exists across the core teams managing AI risk, such as
those in the legal and compliance fields. Financial institutions would benefit from role-
specific AI training offerings to help educate those critical enabling roles outside of IT.
Closing this gap will be the key to the safe and effective use of AI by financial institutions
regardless of where the AI systems originate.
AI tools may be applied to use cases across a financial institution, including business
lines, enterprise IT, customer service, compliance, and human capital. An important
aspect of managing any technology risk is ensuring that all employees and vendors are
properly trained in technologies they may encounter. AI is no exception to this, especially
Generative AI, which may have even greater potential to create or magnify errors.
Financial institutions should consider providing training for all employees on how to
understand and properly use AI tools, as well as recognize malicious activities where AI
may be involved.
Robust enterprise digital identity solutions may help financial institutions combat fraud
and insider threats and strengthen cybersecurity. However, digital identity systems,
solutions, and credentials differ in their technologies, governance, and security and,
therefore, offer different levels of assurance regarding their accuracy, privacy, and overall
effectiveness. In addition, although outside the scope of this report, digital identities may
implicate concerns about fairness and inclusion, and may create challenges for certain
individuals or populations.
To further inform our efforts to encourage responsible innovation, Treasury will continue
to monitor the development of digital identity solutions and how they are being used
by government and the financial services industry. Over the past several years, Treasury
has leveraged its interagency and public-private partnerships to share information and
explore best practices for mitigating the threats financial institutions face from gaps
and vulnerabilities in identity processes—including the potential use of digital identity
solutions and infrastructure to strengthen anti-money laundering and countering the
financing of terrorism (AML/CFT) compliance and facilitate financial inclusion and equity.
Treasury is also tracking and, as appropriate, supporting the efforts by federal and state
agencies to develop standards and to mitigate risks in these emerging systems and
solutions. These include the following:
64 See NIST, Digital Identity Guidelines (Dec. 2022), 800-63-4 IDP, https://csrc.nist.gov/pubs/sp/800/63/4/ipd.
• The NIST National Cybersecurity Center of Excellence is working with the U.S.
Department of Homeland Security Science and Technology Directorate (DHS
S&T) and the private sector, including financial institutions, on customer identity
verification use cases to convert mDL standard into code and creating a toolkit for
relying parties. DHS S&T is also testing onboarding and authenticating solutions’
ability to detect fake documents, selfie matching, and liveness testing.
65 See Draft NIST, EU-US TTC WG-1 Digital Identity Mapping Exercise Report (Dec. 2023), https://www.nist.gov/
system/files/documents/2023/12/22/EU-US%20TTC%20WG1_Digital_Identity_Mapping_Report_Final%20
Draft%20for%20Comment_22122023.pdf.
AI presents opportunities for the financial sector, but also significant potential risks,
particularly for consumers and historically marginalized groups. Applications of AI in
consumer financial services, insurance underwriting, fraud detection, and other areas of
financial services have the potential to perpetuate or amplify existing inequities. Use of AI
in these contexts also raises questions regarding consumer privacy and data security.
Treasury and financial regulators are also considering other potential risks and benefits
associated with the use of AI by financial institutions. As discussed in the report in the
context of cyber risk management, longstanding principles for sound risk management,
including model risk management and third-party risk management, are critical to
addressing the risks of AI more broadly. Treasury continues to monitor the development
and use of these tools and consider risks to financial institutions and the financial system.
To better understand the use of AI by financial institutions and its impact on consumers
and investors, as well as the sufficiency of existing regulatory frameworks, Treasury is
exploring opportunities for deeper engagement with the public.
66 See U.S. Department of the Treasury, 2023 FSOC Annual Report, https://home.treasury.gov/system/files/261/
FSOC2023AnnualReport.pdf.
EXECUTIVE SUMMARY
In an increasingly digitized financial ecosystem, the role of artificial intelligence (AI) is both
pivotal and multifaceted.67 Many financial institutions already use AI-powered solutions
to manage cybersecurity and fraud risks to their core product and service offerings. Most
institutions are now assessing novel AI technologies to enhance core business, customer,
and risk management activities. The integration of AI offers the sector increased efficiency,
precision, and adaptability, as well as the potential to bolster the resiliency of institutions’
systems, data, and services. This opportunity is set against the reality of a complex and
persistent threat landscape that is also adopting AI for malicious purposes.
This white paper aims to provide policymakers with an understanding of opportunities and
risks, particularly cybersecurity-risks, of AI deployment within the financial sector. To achieve
this, the paper first examines the current and anticipated use cases of cybersecurity and
fraud AI solutions within the sector. The paper then focuses on the novel risks introduced
alongside the evolution of AI technologies, including how adversaries are leveraging
these advancements to create new threats. Finally, the white paper offers ideas on how
policymakers can encourage AI innovation while effectively managing risk across the sector.
INTRODUCTION
The Financial Services Sector Coordinating Council’s Research and Development
Committee convened a series of discussions in the Fall of 2023 to discuss how advances
in AI (including the release of AI tools such as ChatGPT) could impact cybersecurity, fraud
prevention, third party risk management, and governance within the financial services
sector.68 These discussions were organized to inform the U.S. Treasury Department,
which was tasked to write a report by the Biden Administration’s Executive Order on Safe,
Secure, and Trustworthy Artificial Intelligence (EO).69
67 The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based
system that can, for a given set of human-defined objectives, make predictions, recommendations, or
decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-
based inputs to perceive real and virtual environments; abstract such perceptions into models through
analysis in an automated manner; and use model inference to formulate options for information or action.
68 The FSSCC is composed of 50 financial trade associations, financial market utilities, and the most critical financial
firms. The FSSCC coordinates across sector participants to enhance the resiliency of the financial services sector,
one of the nation’s critical infrastructure sectors. The FSSCC proactively promotes an all-hazards approach to drive
preparedness through its collaboration with the U.S. Government for the benefit of consumers, the Financial Services
Sector, and the national economy. Additional details are available on the FSSCC website: https://www.fsscc.org.
69 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-
secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
The recent introduction of widely available GenAI technologies has created new
opportunities and risks. At this time, only a few larger financial institutions have deployed
limited GenAI solutions in support of enterprise cybersecurity. GenAI is expected to
significantly transform the cybersecurity ecosystem, enabling cybersecurity professionals
to process data and gain deeper insights in shorter cycle times. The use of GenAI will
facilitate the automation of analyzing threat actor behaviors and streamlining alerts,
investigations, and responses. Importantly, it should also serve as a countermeasure
against AI-driven attacks, allowing for a more robust defense mechanism in the ever-
evolving cybersecurity landscape.
Financial institutions described GenAI use cases that include both assisting cybersecurity
professionals in comprehending malicious code, as well as aiding internal developers in
identifying and mitigating vulnerabilities in their own code. One institution developed
an AI tool to assist security analysts in the initial classification of suspicious emails.
While this tool is designed to scale up the analysts’ work by reducing the time spent
on false positives, the institution highlighted the essential role of human verification.
The effectiveness of AI underscores the necessity for rapid and safe implementation of
AI technologies in the financial sector. By doing so, institutions not only leverage AI’s
effectiveness in reducing the frequency and impact of fraud but also stay one step ahead of
fraudsters and cybercriminals who are increasingly using similar advanced technologies.
The clear success of AI in enhancing operational processes and safeguarding financial
operations affirms its vital role in the ongoing battle against financial fraud and cyber
threats. Furthermore, some institutions perceive integrating GenAI technology could
significantly amplify current uses of AI for fraud detection and prevention. GenAI’s advanced
algorithms and learning capabilities can adapt to evolve in response to the ever-changing
tactics of fraudsters, providing more dynamic and proactive approach to identifying and
mitigating fraudulent activities. The potential synergy of GenAI with existing AI systems
promises a more robust and resilient defense against significant threats.
The evolution of LLM-generated content and deepfake creation services are of concern
to the financial sector, especially smaller and less well-resourced institutions. These
technologies not only lower the barrier to entry but also complicate authenticity
verification measures. Concerns also extend to prompt injections into various forms
of LLMs, with the speed of patching varying depending on deployment methods like
stateless LLMs.
AI’s capacity to conceal threats within multimodal content, such as images, combined
with the prevalence of hallucinations or AI-generated misinformation, presents emergent
challenges. While AI providers are implementing guardrails to deter this activity, security
researchers have demonstrated that it is not difficult to circumvent the guardrails,
necessitating robust, layered defenses. Current capabilities may not be fully equipped to
address these novel threats, necessitating enhancements in both technical capabilities
and control processes.
While the overall malicious activity levels might appear static, the reality could be more
nuanced. The intensification of the challenge lies in detecting and understanding the
exploitation of models by sophisticated actors. This is particularly true for GenAI, which
is increasingly being used to create and disseminate misinformation at an alarming pace,
often bypassing traditional detection methods. The potential for nation state actors to
exploit GenAI for misinformation campaigns poses a significant and evolving threat.
Furthermore, the advent of open-source AI introduces additional risks like software supply
chain security vulnerabilities and data/model poisoning. Cybercriminal groups are using
modified versions of open-source models, such as “WormGPT,” to circumvent model
restrictions, leading to more sophisticated phishing campaigns with fewer typos and
improved formatting. Beyond phishing via email, the rise of social engineering attacks
through text and voice technologies poses new challenges.
Recognizing that existing controls for phishing campaigns are vital, financial institutions
are also aware of the need to evolve their defenses. This includes keeping pace with
cybercriminals’ advancements by leveraging novel AI technologies. By reinforcing
technological defenses and the human element of cybersecurity, financial institutions aim
to adapt to these emerging trends and better protect against the dynamic nature of AI-
facilitated cyber threats.
Following the release of the NIST AI Risk Management Framework (RMF) in January
2023, many financial institutions have commenced self-assessments of their practices
against the framework.70 The integration of practices noted in AI RMF, along with the
adoption of MITRE ATT&CK for Learning Systems (ATLAS), OWASP LLM Top 10 and Machine
Learning Security Top 10, is becoming common practice, albeit with variations in how
comprehensively each institution embeds these guidelines. Most institutions identified
utilizing the Office of the Comptroller of the Currency’s Model Risk Management guidance
to drive their underlying controls related to model risk. Some members perceive that
their existing practices may already align with many aspects of these frameworks,
though implemented in different guises through existing risk management policies and
frameworks.
Some institutions are strengthening technical controls and initiating risk management
programs, specifically tailored to address the distinctive risks presented with GenAI,
particularly focusing on LLMs like “co-pilot” LLMs. These models are notably opaque,
presenting challenges in terms of auditability and security observability. Their inherent
complexity introduces risks beyond traditional biases, including hallucinations, toxicity,
data poisoning, and difficulties in verifying accuracy. Members shared varying approaches
including developing AI-specific policies and conducting thorough assessments of model
risk, model security risk, and validation practices within risk and cybersecurity areas to
focus on adversarial tactics related to LLMs.
Many institutions noted their reliance on a “human in the loop” approach, but this raises
concerns about the varying experience levels of reviewers and the potential for them
to develop a false sense of correctness in the models’ outputs. Some institutions will
continue to place accountability for accuracy on the users and reviewers through related
training to have a balanced approach between technology and human oversight. As
AI technology advances, evolving risk management practices to effectively detect and
mitigate these complex and emerging risks is crucial.
Some institutions shared challenges pertaining to third party risk management, with
variability in the depth of information provided by vendors regarding their testing
70 https://www.nist.gov/itl/ai-risk-management-framework.
There is a growing consensus among institutions that there is a need to map various
AI-related frameworks together, potentially by mapping informative references to the
NIST AI RMF or creating a sector-specific profile of it. This effort highlights a significant
opportunity for standardization in managing AI risks within the financial sector. By
doing so, financial institutions can adopt a cohesive approach, integrating different
methodologies and best practices into a clearer and more unified strategy. Such a strategy
would be instrumental in mitigating the evolving threats in the financial sector, ensuring
that institutions are not only compliant with established guidelines but also at the
forefront of risk management in the AI landscape.
Finally, institutions emphasized that regulatory risks are a critical concern in the adoption
of new AI technologies. Institutions are mindful of the balance between deploying
effective new technologies while appropriately managing associated risks. However,
regulatory uncertainty can slow the deployment of new technologies even when effective
controls are in place.
• Advanced Fraud Detection Mechanisms: The sector must continue to prioritize the
adoption of AI models for fraud prevention. Staying ahead of technologically adept
fraudsters requires proactive, AI-enhanced fraud detection methods, particularly
leveraging GenAI for early identification and mitigation of fraud.
• Robust Risk Management Strategies: Aligning with frameworks like NIST AI RMF
is critical. Financial institutions must strengthen their risk management protocols,
focusing on emerging risks from the increased availability of AI, especially GenAI
models, which includes data positioning and model biases.
Accenture IBM
Bitsight Moody’s
Google Cloud
Anomaly Detection – The identification of observations, events or data points that deviate
from what is usual, standard, or expected, making them inconsistent with the rest of data.71
API – Application program interface. A system access point or library function that has a
well-defined syntax and is accessible from application programs or user code to provide
well-defined functionality.72
Business Email Compromise – A scam targeting businesses with foreign suppliers and/
or businesses regularly performing wire transfer payments. These sophisticated scams
are carried out by fraudsters compromising email accounts through social engineering or
computer intrusion techniques to conduct unauthorized transfer of funds.73
Community Bank – Community banks are those that provide traditional banking services
in their local communities.74 The Federal Deposit Insurance Corporation (FDIC) lays out a
test to determine if a bank qualifies as a community bank, which involves, among other
things, a limited geographic profile, a limited asset size, and no more than 50% of assets in
a certain specialty, such as industrial loan companies.
Core Provider – Companies that provide financial technology and services across the
financial services sector in support of core banking services. Core banking services include
taking deposits, making loans, and facilitating payments.
Data Integrity – The property that data has not been altered in an unauthorized manner.
Data integrity covers data in storage, during processing, and while in transit.75
Data Integrity Attacks – An integrity attack targets the integrity of an ML model’s output,
resulting in incorrect predictions performed by an ML model.76
Data Leakage – May occur during inference attacks, where a threat actor gains access to
confidential data through model inversion and programmatic querying of the model.
Data Loss Prevention – A system’s ability to identify, monitor, and protect data in use
(e.g., endpoint actions), data in motion (e.g., network actions), and data at rest (e.g.,
data storage) through deep packet content inspection, and contextual security analysis
Data Poisoning – Poisoning attacks occur in the training phase by introducing corrupted
data. An example would be slipping numerous instances of inappropriate language into
conversation records, so that a chatbot interprets these instances as common enough
parlance to use in its own customer interactions.79
Generative AI – The class of AI models that emulate the structure and characteristics
of input data in order to generate derived synthetic content. This content can include
images, videos, audio, text, and other digital content.83
77 Committee on National Security Systems (CNSS), National Instruction on Classified Information Spillage (Jun.
2021), CNSSI 1001, https://www.cnss.gov/CNSS/issuances/Instructions.cfm.
78 Ibid.
79 NIST, Adversarial Machine Learning (footnote 72).
80 NIST, Guide for Security-Focused Configuration Management of Information Systems (Aug. 2011), NIST SP 800-
128, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-128.pdf.
81 NIST, Four Principles (footnote 59).
82 NIST, Guide to Industrial Control Systems (ICS) Security (May 2015), NIST SP 800-82 Rev. 2, https://nvlpubs.nist.
gov/nistpubs/SpecialPublications/NIST.SP.800-82r2.pdf.
83 See E.O. 14110 of Oct 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88
FR 75191, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-
development-and-use-of-artificial-intelligence. Federal Register:: Safe, Secure, and Trustworthy Development
and Use of Artificial Intelligence.
High Assurance Applications – Grounds for justified confidence that a claim has been or will
be achieved.85 In the context of software applications, a high assurance AI application or
system reflects a high degree of confidence in the output or results produced by that system.
Intrusion Detection System – A security service that monitors and analyzes network or
system events for the purpose of finding and providing real-time or near real-time warning
of attempts to access system resources in an unauthorized manner.
Intrusion Prevention System – Software that has all the capabilities of an intrusion
detection system and can also attempt to stop possible incidents.
Nutrition Label – Similar to a Software Bill of Materials (SBOM), a nutrition label for an AI
system would clearly document what data was used to train a model, where it came from,
and how any data submitted to the model will be incorporated. Attributes of a nutrition
label could include data quality score, personally identifiable information score, and
toxicity score.
Prompt – Natural language text describing the task that an AI should perform.
84 BIS, Global systemically important banks: assessment methodology and the additional loss absorbency
requirement (Nov. 23), https://www.bis.org/bcbs/gsib.
85 NIST, Engineering Trustworthy Secure Systems (Nov. 22), NIST SP 800-160v1r1, https://nvlpubs.nist.gov/
nistpubs/SpecialPublications/NIST.SP.800-160v1r1.pdf.