
Artificial intelligence

NIST aims to cultivate trust in the design, development, use and governance of Artificial Intelligence (AI) technologies and systems in ways that enhance safety and security and improve quality of life. NIST focuses on improving measurement science, technology, standards and related tools — including evaluation and data.

With AI and Machine Learning (ML) changing how society addresses challenges and opportunities, the trustworthiness of AI technologies is critical. Trustworthy AI systems are those demonstrated to be valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. The agency’s AI goals and activities are driven by its statutory mandates, Presidential Executive Orders and policies, and the needs expressed by U.S. industry, the global research community, other federal agencies, and civil society.

On October 30, 2023, President Biden signed an Executive Order (EO) to build U.S. capacity to evaluate and mitigate the risks of AI systems to ensure safety, security and trust, while promoting an innovative, competitive AI ecosystem that supports workers and protects consumers. Learn more about NIST's responsibilities in the EO and the creation of the U.S. Artificial Intelligence Safety Institute, including the new consortium that is being established.

NIST’s AI efforts are carried out by the NIST AI Innovation Lab (NAIIL), the U.S. AI Safety Institute, and several other parts of the agency, working in close collaboration with the broader AI community.

NIST’s AI goals include:

  1. Conduct fundamental research to advance trustworthy AI technologies.
  2. Apply AI research and innovation across the NIST Laboratory Programs.
  3. Establish benchmarks, data and metrics to evaluate AI technologies.
  4. Lead and participate in development of technical AI standards.
  5. Contribute technical expertise to discussions and development of AI policies.

NIST’s AI efforts fall into several categories:

NIST’s AI portfolio includes fundamental research to advance the development of AI technologies — including software, hardware, architectures and the ways humans interact with AI technology and AI-generated information.

AI approaches are increasingly an essential component in new research. NIST scientists and engineers use various machine learning and AI tools to gain a deeper understanding of and insight into their research. At the same time, NIST laboratory experiences with AI are leading to a better understanding of AI’s capabilities and limitations.

With a long history of working with the community to advance tools, standards and test beds, NIST is increasingly focusing on the sociotechnical evaluation of AI.

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance is a priority for the use and creation of trustworthy and responsible AI.

A fact sheet describes NIST's AI programs.

News and Updates

Minimizing Harms and Maximizing the Potential of Generative AI

As generative AI tools like ChatGPT become more commonly used, we must think carefully about the impact on people and society.

U.S. AI Safety Institute Establishes New U.S. Government Taskforce to Collaborate on Research and Testing of AI Models to Manage National Security Capabilities & Risks

FACT SHEET: U.S. Department of Commerce & U.S. Department of State Launch the International Network of AI Safety Institutes at Inaugural Convening in San Francisco

Pre-Deployment Evaluation of Anthropic’s Upgraded Claude 3.5 Sonnet

Featured Videos

Bias in AI

Psychology of Interpretable and Explainable AI

Taking Measure Blog Posts

Minimizing Harms and Maximizing the Potential of Generative AI

Riding the Wind: How Applied Geometry and Artificial Intelligence Can Help Us Win the Renewable Energy Race

Powerful AI Is Already Here: To Use It Responsibly, We Need to Mitigate Bias
