Ai Technology

Download as pdf or txt
Download as pdf or txt
You are on page 1of 29

LLM AI Security &

Governance Checklist
From the OWASP Top 10
for LLM Applications Team
Revision History
Revision Date Author(s) Description
0.1 2023-11-01 Sandy Dunn initial draft
0.5 2023-12-06 Sandy Dunn, public draft
OWASP LLM
Apps Team

Version: 0.5
Published: December 6, 2023

The information provided in this document does not, and is not intended to, constitute legal advice.
All information is for general informational purposes only.

This document contains links to other third-party websites. Such links are only for convenience
and OWASP does not recommend or endorse the contents of the third-party sites.
1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1 Responsible and Trustworthy Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Who is This For? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Why a Checklist? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 Large Language Model Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


2.1 LLM Threat Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Artificial Intelligence Security and Privacy Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Incorporate LLM Security and governance with Existing, Established Practices and Controls10
2.4 Fundamental Security Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5 Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.6 Vulnerability and Mitigation Taxonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3 Determining LLM Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12


3.1 Deployment Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

4 Check List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1 Adversarial Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 AI Asset Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3 AI Security and Privacy Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.4 Establish Business Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.5 Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.6 Legal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.7 Regulatory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.8 Using or Implementing Large Language Model Solutions . . . . . . . . . . . . . . . . . . . . . . . . 18

5 Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

A Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Overview

Every internet user and business should prepare for the impact of a surge in powerful generative
artificial intelligence (GenAI) applications. GenAI holds enormous promise and opportunities for
discovery, efficiency, and driving corporate growth across many industries and disciplines. However,
as with any strong new technology, it introduces new challenges to security and privacy.

Artificial Intelligence, Machine Learning, Large Language Models, and Diffusion Models have been
in development and the focus of academic research for many years. Recent improvements in
training data availability, computer power, GenAI capacity, and the release of solutions such as
ChatGPT, ElevenLabs, Midjourney, along with their broader availability outside of what previously
was a relatively isolated and specialized field, have led to its eruptive growth. These advances in
artificial intelligence (AI) emphasize the importance of organizations developing plans to manage
their engagement and use of AI within their organization.

• Artificial intelligence is a broad term that encompasses all fields of computer science that
enable machines to accomplish tasks that would normally require human intelligence. Machine
learning and generative AI are two subcategories of AI.

• Machine learning is a subset of AI that focuses on creating algorithms that can learn from
data. Machine learning algorithms are trained on a set of data, and then they can use that data
to make predictions or decisions about new data.

• Generative AI is a type of machine learning that focuses on creating new data. Often, GenAI
relies on the use of large language models to perform the tasks needed to create the new data.

• A large language model (LLM) is a type of AI program that uses machine learning to perform
natural language processing (NLP) tasks. LLMs are trained on large data sets to understand,
summarize, generate, and predict new content.
The diagram below shows the relationship of LLM to the field of AI generally:

Figure 1.1: Image of LLM relationship within the field of Artificial Intelligence

Organizations will face new challenges defending and managing GenAI solutions. Additionally, there
is significant potential for accelerated threats from threat actors who will use GenAI to augment
attack techniques.

Many applications within a business employ artificial intelligence applications, such as human
resource hiring, SPAM detection for email, behavioral analytics for SIEM, and MDR apps. The primary
focus of this document is on Large Language Model applications, which can produce content.
Responsible and Trustworthy Artificial Intelligence
As challenges and benefits of Artificial Intelligence emerge - and regulations and laws are passed -
the principles and pillars of responsible and trustworthy AI usage are evolving from idealistic objects
and concerns to established standards.

The OWASP AI Security and Privacy Guide working group is monitoring these changes and addressing
the broader and more challenging considerations for all aspects of artificial intelligence.

Figure 1.2: Image credit Montreal AI Ethics Institute


Who is This For?
Executive, technology, cybersecurity, privacy, compliance, and legal leaders must pay close attention
to the fast GenAI technological transformation and devise a strategy to benefit from opportunities
while fighting against threats and managing risks.

This checklist is designed to assist these technology and business leaders in quickly understanding
the risks and benefits of using LLM, allowing them to focus on developing a comprehensive list of
essential areas and tasks required to defend and protect the organization as they create a Large
Language Model strategy.

Scenarios presented here include those that pertain to internal use of models released commercially
or those that are open sourced, as well as scenarios for organizations that consume LLM services
provided by third-parties. Resources from MITRE Engenuity, OWASP, and others are referenced.

The diagram below shows how these resources can be used to create a threat informed defense
strategy.

Figure 1.3: Image of integrating LLM Security with OWASP and MITRE resources

It is the hope of the OWASP Top 10 for LLM Applications team that this list will help organizations
improve their existing defensive techniques and develop techniques to address the new threats that
come from using this exciting technology.
Why a Checklist?
Checklists can help with strategy development by ensuring thoroughness, clarifying goals, fostering
consistency, and allowing for focused, deliberate effort, all of which may result in fewer oversights.
Following the list can build confidence in a path to secure adoption while sparking ideas for future
business cases moving forward. Itś a very forward and very practical way to achieve continuous
improvement.

Not Comprehensive While this document is intended to support organizations in developing an


initial LLM strategy in a rapidly changing technical, legal, and regulatory environment, it does not
cover every use case or obligation. Organizations should extend assessments and practices beyond
the scope of the provided checklist as required for their use case or jurisdiction.
Large Language Model Challenges

Large Language models face a number of serious and unique issues. One of the most important is
that while working with LLMs, the control and data planes cannot be strictly isolated or separable.
Another significant challenge is that LLMs are nondeterministic by design, yielding a different
outcome when prompted or requested. It is not always a challenge, but LLMs employ semantic
search rather than keyword search. The key distinction between the two is that the model’s algorithm
prioritizes the terms in its response. This is a significant departure from how consumers have
traditionally used technology, and it has an impact on the consistency and reliability of the findings.
Hallucinations, emerging from the gaps and training flaws in the data the model is trained on, are
the result of this method.

There are methods to improve reliability and reduce the attack surface for jailbreaking, model
tricking, and hallucinations, but there is a trade-off between restrictions and utility in both cost and
functionality.

LLM use and applications increase an organization’s attack surface. Some risks associated with
LLMs are unique, but many are familiar issues, such as the known software bill of materials (SBOM),
supply chain, data loss protection (DLP), and authorized access. There are also increased risks not
directly related to GenAI, but GenAI increases the efficiency, capability, and effectiveness of attacks.

Adversaries are increasingly harnessing LLM and Generative AI tools to refine and expedite traditional
methods. These enhanced techniques allow them to effortlessly craft new malware, potentially
embedded with novel zero-day vulnerabilities or designed to evade detection. They can also generate
sophisticated, unique, or tailored phishing schemes. The creation of convincing deep fakes, whether
video or audio, further facilitates their social engineering ploys. Additionally, these tools enable them
to execute intrusions and develop innovative hacking utilities. It is very likely that in the future, more
“tailored” and compound use of AI technology by criminal actors will demand specific responses
and dedicated solutions for appropriate defense schemas.
LLM Threat Categories

Figure 2.1: Image of types of AI threats

Artificial Intelligence Security and Privacy Training


Employees throughout organizations benefit from training to understand artificial intelligence,
generative artificial intelligence, and the future potential consequences of building, buying, or utilizing
LLMs. Training for permissible use and security awareness should target all employees as well as
be more specialized for certain positions such as human resources, legal, developers, data teams,
and security teams.

Fair use policies and healthy interaction are key aspects that, if incorporated from the very start,
will be a cornerstone to the success of future AI cybersecurity awareness campaigns. This will
necessarily imply the user’s knowledge of the basic rules for interaction as well as the ability to
separate good behavior from bad or unethical behavior.

Incorporate LLM Security and governance with Existing, Established Practices


and Controls
While AI and generated AI add a new dimension to cybersecurity, resilience, privacy, and meeting
legal and regulatory requirements, the best practices that have been around for a long time are still
the best way to find risks, test them, fix them, and lower them.

• The management of artificial intelligence systems is integrated with existing organizational


practices.

• Apply existing privacy, governance, and security practices.


Fundamental Security Principles
LLM capabilities introduce a different type of attack and attack surface. LLMs are vulnerable
to complex business logic bugs, such as prompt injection, insecure plugin design, and remote
code execution. Existing best practices are the best way to solve these issues. An internal product
security team that understands secure software review, architecture, data governance, and third-party
assessments The cybersecurity team should also check how strong the current controls are to find
problems that could be made worse by LLM, like voice cloning, impersonation, or getting around
captchas.

Accounting for the specific skills and competences developed in the last few years around machine
learning, NLP and NLU, deep Learning and lately, LLMs and GenAI, it is advised to have skilled
professionals with practice, knowledge, or experience in these fields to side with security teams in
adopting, at best, and even shaping new potential analyses and responses to those issues.

Risk
Reference to risk uses the ISO 31000 definition: Risk = "effect of uncertainty on objectives." LLM
risks included in the checklist include a targeted list of LLM risks that address adversarial, safety,
legal, regulatory, reputation, financial, and competitive risks.

Vulnerability and Mitigation Taxonomy


Established methods of vulnerability classification and threat sharing are in early development, such
as Oval, STIX, threat sharing, and vulnerability classification. The checklist anticipates calibrating
with existing, established, and accepted standards, such as CVE classification.
Determining LLM Strategy

The acceleration of LLM applications has raised the visibility of all artificial intelligence applications’
organizational use. Recommendations for policy, governance, and accountability should be considered
holistically.

The immediate LLM threats are the use of online tools, browser plugins, third-party applications, the
extended attack surface, and ways attackers can leverage LLM tools to facilitate attacks.

Figure 3.1: Image of steps of LLM implementation


Deployment Strategy
The scopes range from leveraging public consumer applications to training proprietary models on
private data. Factors like use case sensitivity, capabilities needed, and resources available help
determine the right balance of convenience vs. control. But understanding these five model types
provides a framework for evaluating options.

Figure 3.2: Image of options for deployment strategy


Check List

Adversarial Risk
Adversarial Risk includes competitors and attackers.

□ Scrutinize how competitors are investing in artificial intelligence. Although there are risks in AI
adoption, there are also business benefits that may impact future market positions.
□ Threat Model: how attackers may accelerate exploit attacks against the organization,
employees, executives, or users.
□ Threat models potential attacks on customers or clients through spoofing and generative AI.
□ Investigate the impact of current controls, such as password resets, which use voice
recognition.
□ Update the Incident Response Plan and playbooks for LLM incidents.

AI Asset Inventory
An AI asset inventory should apply to both internally developed and external or third-party solutions.

□ Catalog existing AI services, tools, and owners. Designate a tag in asset management for
specific inventory.
□ Include AI components in the Software Bill of Material (SBOM), a comprehensive list of all the
software components, dependencies, and metadata associated with applications.
□ Catalog AI data sources and the sensitivity of the data (protected, confidential, public)
□ Establish if pen testing or red teaming of deployed AI solutions is required to determine the
current attack surface risk.
□ Create an AI solution onboarding process.
□ Ensure skilled IT admin staff is available either internally or externally, in accordance to the
SBoM

AI Security and Privacy Training


□ Train all users on ethics, responsibility, and legal issues such as warranty, license, and copyright.
□ Update security awareness training to include GenAI related threats. Voice cloning and image
cloning, as well as in anticipation of increased spear phishing attacks
□ Any adopted GenAI solutions should include training for both DevOps and cybersecurity for
the deployment pipeline to ensure AI safety and security assurances.
Establish Business Cases
Solid business cases are essential to determining the business value of any proposed AI solution,balancing
risk and benefits, and evaluating and testing return on investment. There are an enormous number
of potential use cases; a few examples are provided.

□ Enhance customer experience


□ Better operational efficiency
□ Better knowledge management
□ Enhanced innovation
□ Market Research and Competitor Analysis
□ Document creation, translation, summarization, and analysis

Governance
Corporate governance in LLM is needed to provide organizations with transparency and accountability.
Identifying AI platform or process owners who are potentially familiar with the technology or the
selected use cases for the business is not only advised but also necessary to ensure adequate
reaction speed that prevents collateral damages to well established enterprise digital processes.

□ Establish the organizationś AI RACI chart (who is responsible, who is accountable, who should
be consulted, and who should be informed)
□ Document and assign AI risk, risk assessments, and governance responsibility within the
organization.
□ Establish data management policies, including technical enforcement, regarding data
classification and usage limitations. Models should only leverage data classified for the
minimum access level of any user of the system. For example, update the data protection
policy to emphasize not to input protected or confidential data into nonbusiness-managed
tools.
□ Create an AI Policy supported by established policy (e.g., standard of good conduct, data
protection, software use)
□ Publish an acceptable use matrix for various generative AI tools for employees to use.
□ Document the sources and management of any data that the organization uses from the
generative LLM models.
Legal
Many of the legal implications of AI are undefined and potentially very costly. An IT, security, and
legal partnership is critical to identifying gaps and addressing obscure decisions.

□ Confirm product warranties are clear in the product development stream to assign who is
responsible for product warranties with AI.
□ Review and update existing terms and conditions for any GenAI considerations.
□ Review AI EULA agreements. End-user license agreements for GenAI platforms are very
different in how they handle user prompts, output rights and ownership, data privacy,
compliance and liability, privacy, and limits on how output can be used.
□ Review existing AI-assisted tools used for code development. A chatbotś ability to write code
can threaten a companyś ownership rights to its own product if a chatbot is used to generate
code for the product. For example, it could call into question the status and protection of the
generated content and who holds the right to use the generated content.
□ Review any risks to intellectual property. Intellectual property generated by a chatbot could
be in jeopardy if improperly obtained data was used during the generative process, which is
subject to copyright, trademark, or patent protection. If AI products use infringing material, it
creates a risk for the outputs of the AI, which may result in intellectual property infringement.
□ Review any contracts with indemnification provisions. Indemnification clauses try to put the
responsibility for an event that leads to liability on the person who was more at fault for it or
who had the best chance of stopping it. Establish guardrails to determine whether the provider
of the AI or its user caused the event, giving rise to liability.
□ Review liability for potential injury and property damage caused by AI systems.
□ Review insurance coverage. Traditional (D&O) liability and commercial general liability
insurance policies are likely insufficient to fully protect AI use.
□ Identify any copyright issues. Human authorship is required for copyright. An organization
may also be liable for plagiarism, propagation of bias, or intellectual property infringement if
LLM tools are misused.
□ Ensure agreements are in place for contractors and appropriate use of AI for any development
or provided services.
□ Restrict or prohibit the use of generative AI tools for employees or contractors where
enforceable rights may be an issue or where there are IP infringement concerns.
□ Assess and AI solutions used for employee management or hiring could result in disparate
treatment claims or disparate impact claims.
□ Make sure the AI solutions do not collect or share sensitive information without proper consent
or authorization.
Regulatory
The EU AI Act is anticipated to be the first comprehensive AI law but will apply in 2025 at the
earliest. The EUś General Data Protection Regulation (GDPR) does not specifically address AI but
includes rules for data collection, data security, fairness and transparency, accuracy and reliability,
and accountability, which can impact GenAI use. In the United States, AI regulation is included within
broader consumer privacy laws. Ten US states have passed laws or have laws that will go into effect
by the end of 2023.

Federal organizations such as the US Equal Employment Opportunity Commission (EEOC), the
Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the US
Department of Justiceś Civil Rights Division (DOJ) are closely monitoring hiring fairness.

□ Determine State specific compliance requirements.


□ Determine compliance requirements for restricting electronic monitoring of employees and
employment-related automated decision systems (Vermont)
□ Determine compliance requirements for consent for facial recognition and the AI video analysis
required (Illinois, Maryland)
□ Review any AI tools in use or being considered for employee hiring or management.
□ Confirm the vendorś compliance with applicable AI laws and best practices.
□ Ask and document any products using AI during the hiring process. Ask how the model was
trained, how it is monitored, and track any corrections made to avoid discrimination and bias.
□ Ask and document what accommodation options are included.
□ Ask and document whether the vendor collects confidential data.
□ Ask how the vendor or tool stores and deletes data and regulates the use of facial recognition
and video analysis tools during pre-employment.
□ Review other organization-specific regulatory requirements with AI that may raise compliance
issues. The Employee Retirement Income Security Act of 1974, for instance, has fiduciary duty
requirements for retirement plans that a chatbot might not be able to meet.
Using or Implementing Large Language Model Solutions
□ Threat Model: LLM components and architecture trust boundaries.
□ Data Security: Verify how data is classified and protected based on sensitivity, including
personal and proprietary business data. (How are user permissions managed, and what
safeguards are in place?)
□ Access Control: Implement least privilege access controls and implement defense-in-depth
measures
□ Training Pipeline Security: Require rigorous control around training data governance, pipelines,
models, and algorithms.
□ Input and Output Security: Evaluate input validation methods, as well as how outputs are
filtered, sanitized, and approved.
□ Monitoring and Response: Map workflows, monitoring, and responses to understand
automation, logging, and auditing. Confirm audit records are secure.
□ Include application testing, source code review, vulnerability assessments, and red teaming in
the production release process.
□ Consider vulnerabilities in the LLM model solutions (Rezilion OSFF Scorecard).
□ Look into the effects of threats and attacks on LLM solutions, such as prompt injection, the
release of sensitive information, and process manipulation.
□ Investigate the impact of attacks and threats to LLM models, including model poisoning,
improper data handling, supply chain attacks, and model theft.
□ Supply Chain Security: Request third-party audits, penetration testing, and code reviews for
third-party providers. (both initially and on an ongoing basis)
□ Infrastructure Security: How often does the vendor perform resilience testing? What are their
SLAs in terms of availability, scalability, and performance?
□ Update incident response playbooks and include an LLM incident in tabletop exercises.
□ Identify or expand metrics to benchmark generative cybersecurity AI against other approaches
to measure expected productivity improvements.
Resources

OWASP Resources Using LLM solutions expands an organization’s attack surface and presents new
challenges, requiring special tactics and defenses. It also poses problems that are similar to known
issues, and there are already established cybersecurity procedures and mitigations. Integrating LLM
cybersecurity with an organization’s established cybersecurity controls, processes, and procedures
allows an organization to reduce its vulnerability to threats. How they integrate with each other is
available at the OWASP Integration Standards.

OWASP Resource Description Why It Is Recommended & Where


To Use It
OWASP SAMM Software Assurance Provides an effective and
Maturity Model measurable way to analyze and
improve an organization’s secure
development lifecycle. SAMM
supports the complete software
lifecycle. It is interative and
risk-driven, enabling organizations
to identify and prioritize gaps in
secure software development
so resources for improving
the process can be dedicated
where efforts have the greatest
improvement impact.
OWASP AI Security and OWASP Project with a The OWASP AI Security and Privacy
Privacy Guide goal of connecting Guide is a comprehensive list of
worldwide for an the most important AI security and
exchange on AI security, privacy considerations. It is meant
fostering standards to be a comprehensive resource for
alignment, and driving developers, security researchers,
collaboration. and security consultants to verify
the security and privacy of AI
systems.
OWASP AI Exchange OWASP AI Exchange is The AI Exchange is the primary
the intake method for the intake method used by OWASP to
OWASP AI Security and drive the direction of the OWASP AI
Privacy Guide. Security and Privacy Guide.
OWASP Resource Description Why It Is Recommended & Where
To Use It
OWASP Machine OWASP Machine The OWASP Machine
Learning Security Learning Security Learning Security Top 10 is a
Top 10 Top 10 security issues community-driven effort to collect
of machine learning and present the most important
systems. security issues of machine learning
systems in a format that is easy
to understand by both a security
expert and a data scientist. This
project includes the ML Top 10
and is a live working document
that provides clear and actionable
insights on designing, creating,
testing, and procuring secure and
privacy-preserving AI systems. It
is the best OWASP resource for
AI global regulatory and privacy
information.
OpenCRE OpenCRE (Common Use this site to search for
Requirement standards. You can search by
Enumeration) is standard name or by control type.
the interactive
content-linking platform
for uniting security
standards and guidelines
into one overview.
OWASP Threat Modeling A structured, formal Learn everything about Threat
process for threat Modeling which is a structured
modeling of an representation of all the
application information that affects the
security of an application.
OWASP CycloneDX OWASP CycloneDX Modern software is assembled
is a full-stack Bill using third-party and open source
of Materials (BOM) components. They are glued
standard that provides together in complex and unique
advanced supply chain ways and integrated with original
capabilities for cyber risk code to achieve the desired
reduction. functionality. An SBOM provides
an accurate inventory of all
components which enables
organizations to identify risk,
allows for greater transparency,
and enables rapid impact analysis.
EO 14028 provided minimum
requirements for SBOM for federal
systems.
OWASP Resource Description Why It Is Recommended & Where
To Use It
OWASP Software A community-driven Use SCVS to develop a common
Component Verification effort to establish a set of activities, controls, and
Standard (SCVS) framework for identifying best-practices that can reduce
activities, controls, and risk in a software supply chain
best practices can help in and identify a baseline and path
identifying and reducing to mature software supply chain
risk in a software supply vigilance.
chain.
OWASP API Security API Security focuses APIs are a foundational element
Project on strategies and of connecting applications, and
solutions to understand mitigating misconfigurations or
and mitigate the vulnerabilities is mandatory to
unique vulnerabilities protect users and organizations.
and security risks Use for security testing and red
of Application teaming the build and production
Programming Interfaces environments.
(APIs)
OWASP Application Application Security Cookbook for web application
Security Verification Verification Standard security requirements, security
Standard ASVS (ASVS) Project provides testing, and metrics. Use to
a basis for testing web establish security user stories and
application technical security use case release testing.
security controls
and also provides
developers with a list of
requirements for secure
development.
OWASP Threat and An action oriented view This matrix allows a company to
Safeguard Matrix to safeguard and enable overlay its major threats with the
(TaSM) the business NIST Cyber Security Framework
Functions (Identify, Protect, Detect,
Respond, & Recover) to build a
robust security plan. Use it as a
dashboard to track and report on
security across the organization.
Defect Dojo An open source Use Defect Dojo to reduce the
vulnerability time for logging vulnerabilities
management tool that with templates for vulnerabilities,
streamlines the testing imports for common vulnerability
process by offering scanners, report generation, and
templating, report metrics.
generation, metrics, and
baseline self-service
tools.
Table 5.1: OWASP Resources
OWASP Top 10 for Large Language Model Applications

Figure 5.1: Image of OWASP Top 10 for Large Language Model Applications
OWASP Top 10 for Large Language Model Applications Visualized

Figure 5.2: Image of OWASP Top 10 for Large Language Model Applications Visualized
MITRE Resources The increased frequency of LLM threats emphasizes the value of a resilience-first
approach to defending an organization’s attack surface. Existing TTPS are combined with new
attack surfaces and capabilities in LLM Adversary threats and mitigations. MITRE maintains a
well-established and widely accepted mechanism for coordinating opponent tactics and procedures
based on real-world observations.

Coordination and mapping of an organization’s LLM Security Strategy to MITRE ATT&CK and MITRE
ATLAS allows an organization to determine where LLM Security is covered by current processes
such as API Security Standards or where security holes exists.

MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a framework, collection
of data matrices, and assessment tool that was made by the MITRE Corporation to help organizations
figure out how well their cybersecurity works across their entire digital attack surface and find holes
that had not been found before. It is a knowledge repository that is used all over the world. The
MITRE ATT&CK matrix contains a collection of strategies used by adversaries to achieve a certain
goal. In the ATT&CK Matrix, these objectives are classified as tactics. The objectives are outlined in
attack order, beginning with reconnaissance and progressing to the eventual goal of exfiltration or
impact.

MITRE ATLAS, which stands for "Adversarial Threat Landscape for Artificial Intelligence Systems," is
a knowledge base that is based on real-life examples of attacks on machine learning (ML) systems
by bad actors. ATLAS is based on the MITRE ATT&CK architecture, and its tactics and procedures
complement those found in ATT&CK.

MITRE Resource Description Why It Is Recommended & Where


To Use It
MITRE ATT&CK Knowledge base of The ATT&CK knowledge base
adversary tactics and is used as a foundation for the
techniques based on development of specific threat
real-world observations models and methodologies.
Map existing controls within the
organization to adversary tactics
and techniques to identify gaps or
areas to test.
MITRE AT&CK Create or extend ATT&CK Host and manage a customized
Workbench data in a local knowledge copy of the ATT&CK knowledge
base base. This local copy of the
ATT&CK knowledge base can be
extended with new or updated
techniques, tactics, mitigation
groups, and software that is
specific to your organization.
MITRE Resource Description Why It Is Recommended & Where
To Use It
MITRE ATLAS MITRE ATLAS Use it to map known ML
(Adversarial Threatvulnerabilities and map checks
Landscape forand controls for proposed projects
Artificial-Intelligence or existing systems.
Systems) is a knowledge
base of adversary
tactics, techniques,
and case studies for
machine learning (ML)
systems based on
real-world observations,
demonstrations from ML
red teams and security
groups, and the state
of the possible from
academic research
MITRE ATT&CK Powered ATT&CK Powered Suit is Add to your browser to quickly
Suit a browser extension that search for tactics, techniques,
puts the MITRE ATT&CK and more without disrupting your
knowledge base at your workflow.
fingertips.
The Threat Report Automates TTP Mapping TTPs found in CTI reports
ATT&CK Mapper (TRAM) Identification in CTI to MITRE ATT&CK is difficult,
Reports error prone, and time-consuming.
TRAM uses LLMs to automate this
process for the 50 most common
techniques. Supports Juypter
notebooks.
Attack Flow v2.1.0 Attack Flow is a Attack Flow helps visualize how
language for describing an attacker uses a technique, so
how cyber adversaries defenders and leaders understand
combine and sequence how adversaries operate and
various offensive improve their own defensive
techniques to achieve posture.
their goals.
MITRE Caldera A cyber security platform Plugins are available for Caldera
(framework) designed that help to expand the core
to easily automate capabilities of the framework and
adversary emulation, provide additional functionality,
assist manual red-teams, including agents, reporting,
and automate incident collections of TTPs and others
response.
CALDERA plugin: A plugin developed for This plugin provides TTPs defined
Arsenal adversary emulation of in MITRE ATLAS to interface with
AI-enabled systems. CALDERA.
MITRE Resource Description Why It Is Recommended & Where
To Use It
Atomic Red Team Library of tests mapped Use to validate and test controls
to the MITRE ATT&CK in an environment. Security teams
framework. can use Atomic Red Team to
quickly, portably, and reproducibly
test their environments. You can
execute atomic tests directly from
the command line; no installation
is required.
MITRE CTI Blueprints Automates Cyber Threat CTI Blueprints helps Cyber Threat
Intelligence reporting. Intelligence (CTI) analysts create
high-quality, actionable reports
more consistently and efficiently.
Table 5.2: MITRE Resources
AI Vulnerability Repositories

Name Description
AI Incident Database A repository of articles about different times AI has
failed in real-world applications and is maintained by a
college research group and crowds sourced.
OECD AI Incidents Monitor (AIM) Offers an accessible starting point for comprehending
the landscape of AI-related challenges.
Three of the leading companies tracking AI Model vulnerabilities
Huntr Bug Bounty : ProtectAI Bug bounty platform for AI/ML
AI Vulnerability Database (AVID) : Garak Database of model vulnerabilities
AI Risk Database: Robust Intelligence Database of model vulnerabilities
Table 5.3: AI Vulnerability Repositories
AI Procurement Guidance

Name Description
World Economic Forum: Adopting AI The standard benchmarks and assessment criteria for
Responsibly: Guidelines for Procurement of procuring Artificial systems are in early development.
AI Solutions by the Private Sector: Insight The procurement guidelines provide organizations
Report June 2023 with a baseline of considerations for the end-to-end
procurement process.
Use this guidance to augment an organization’s existing
Third Party Risk Supplier and Vendor procurement
process.
Table 5.4: AI Procurement Guidance
Team

Thank you to the OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist
Contributors.

Checklist Contributors
Sandy Dunn Heather Linn John Sotiropoulos
Steve Wilson Fabrizio Cilli Aubrey King
Bob Simonoff David Rowe Rob Vanderveer
Emmanual Guilherme Junior Andrea Succi Jason Ross
Table A.1: OWASP LLM AI Security & Governance Checklist
v.0.5 Team

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy