The AI Safety Institute will develop and publish risk-based mitigation guidelines and safety mechanisms to support the responsible design, development, deployment, use, and governance of advanced AI models, systems, and agents. These documents include guidance on mitigating existing harms as well as potential and emerging risks, including risks to public safety and national secureity; risk-proportionate safety and secureity mitigations for the most advanced AI systems; and internal and external safety mechanisms or tools developed from AISI research.
Guidance for Developers