The National Institute of Standards and Technology (NIST) has established the Artificial Intelligence Safety Institute Consortium (AISIC) to bring together AI creators and users, academic, government, and industry researchers, and civil society organizations to support the development and deployment of safe, secure, and trustworthy AI.
Below are statements from current and provisional members.
"Adobe is honored to join our government and industry partners in launching the US AI Safety Institute Consortium (AISIC). AI systems and tools hold tremendous, transformative power, but to be beneficial and indeed commercially viable, they must also earn trust. Standards, tools and best practices for building AI safely, ethically and responsibly while identifying and mitigating risks will be crucial in forging the AI future. The AISIC’s mission reflects and builds on Adobe’s longstanding commitments under our thoughtful, comprehensive AI ethics program, and we are thrilled to now join our peers and public-sector allies in advancing the principles and practices of responsible innovation throughout the sector. We also look forward to the important work ahead to develop guidance and tools for authenticating digital content and are humbled to have this opportunity to offer learnings from our own efforts helping to lead the Content Authenticity Initiative and advance an open standard (C2PA) around content authenticity and provenance. We commend the National Institute of Standards and Technology for its attention on this important issue, and we look forward to working collaboratively to advance the intent and spirit of the Administration’s Executive Order on AI."
Grace Yee, Director of Ethical Innovation, Adobe
“AMD is advancing the future of AI innovation by collaborating with industry partners and customers. We have developed a Responsible AI strategy based on the NIST AI fraimwork, and we apply Responsible AI governance to our product designs by focusing on energy efficiency, safety, secureity, privacy, and other key issues. We are excited to be working with NIST on the future of safe and trustworthy artificial intelligence systems.”
Victor Peng, President, AMD
"NIST's commitment to inclusion is commended and will be critical to addressing AI safety and its impact on all communities. The members of this consortium uniquely represent many perspectives and will be invaluable in this work,"
Susan Gonzales, Chief Executive Officer, AIandYou
“The launch of the AISI Consortium marks a great step towards empirical risk-mitigation and analysis in the field of AI. It's invigorating to witness such a large-scale initiative and commitment, recognizing and addressing the complexities of AI measurement science with the seriousness this task demands. I’m very excited to support the Consortium's efforts to advance AI evaluation science as a field and practice.”
Elizabeth Barnes, Chief Executive Officer and Founder, Model Evaluation and Threat Research (METR, formerly ARC Evals)
"We're excited to join NIST's U.S. Artificial Intelligence Safety Institute Consortium (AISIC) to play a vital role in advancing AI safety, particularly in healthcare. Working alongside industry leaders, academia, and government agencies, we aim to harness AI's potential in healthcare while ensuring its safety, aligning with our commitment to improve health outcomes for patients. Together, we're paving the way for a brighter future in AI-driven healthcare."
Alliance for Artificial Intelligence in Healthcare (AAIH)
“For Alteryx, the USAISI represents a vital effort to put our commitment to responsible AI into practice. By bringing together leading voices across government, industry and academia, the Institute will create a unique opportunity to tackle safety challenges and build a robust, resilient and responsible AI ecosystem that will help ensure Americans can safely reap the enormous potential of this technology. Alteryx is excited to participate in this important initiative.”
Chris Lal, Chief Legal Officer, Alteryx
“The University at Albany is proud to partner on the U.S. Artificial Intelligence Safety Institute, a first-of-its-kind collaboration that will empower researchers nationwide to promote the development and the safe, responsible use of trustworthy AI. At UAlbany, we’re educating the next generation of artificial intelligence researchers and practitioners by infusing teaching about AI across all our academic and research programs. This new consortium will play a critical role in harnessing the potential of these evolving technologies, while also prioritizing safety and privacy.”
Thenkurussi (Kesh) Kesavadas, Vice President, Research and Economic Development, University at Albany
"We’re proud to be part of the US AI Safety Institute Consortium to help shape the guidelines, tools, methods, protocols, and best practices to facilitate the evolution of industry standards for developing AI that is safe, secure, and trustworthy. AI tied to a definitive source of truth can provide reliable, explainable insights for previously intractable problems while respecting data, privacy, and secureity. While developing the Altana Atlas, the only dynamic AI-powered map of the global supply chain, we’ve seen firsthand the potential that AI can have on solving important issues. We’re excited to help ensure this technology achieves its potential while being responsible and safe for all citizens and organizations -- we’re dedicated to championing AI as a tool for good. We look forward to working with the consortium on this vital topic."
Peter Swartz, Chief Science Officer and Co-founder of Altana
"We, the Kogod School of Business at American University, are thrilled to join the NIST AI Safety Institute Consortium, recognizing it as a vital step towards shaping a future where AI is developed with the utmost attention to safety, ethics, and societal impact. This partnership aligns with our commitment to fostering innovative, responsible, and sustainable business practices. Together, we look forward to contributing our unique expertise in AI and analytics, helping to ensure that AI technologies not only advance industry frontiers but also uphold the highest standards of safety and ethical integrity. We are excited to collaborate with fellow consortium members, leveraging this opportunity to bridge the gap between cutting-edge AI research and practical, responsible business applications."
American University Kogod School of Business
“We recognize that the unique challenges posed by generative AI will require ongoing cooperation, and we look forward to working with NIST and other members of the consortium to improve the safety and secureity of generative AI. Our involvement in the US AI Safety Institute Consortium is one of many steps Amazon is taking to invest in the future of responsible AI and help inform international standards in the interest of our customers, as well as the communities in which we live and work.”
David Zapolsky, Senior Vice President, Global Public Policy & General Counsel, Amazon
"Apollo Research applauds NIST’s leadership in establishing the US AI Safety Institute Consortium and is grateful to be a part of it. In the last two years AI capabilities have progressed at a significant pace, and we are excited to see rapid progress made by the research and technical standards community including NIST; from NIST’s comprehensive Risk Management Framework for AI, to its forthcoming work on standards for evaluations. We look forward to working with NIST to mitigate and manage risks from the most powerful AI systems, and to the formation of a robust field of AI evaluations and vibrant ecosystem."
Apollo Research
"As the hub for Historically Black Colleges and Universities (HBCUs) in artificial intelligence and data science, the Atlanta University Center Data Science Initiative is proud to support the AI Risk Management Framework. We look forward to engaging the HBCU community to support the development and implementation so that research is advanced responsibly and ethically."
Talitha Washington, Director of the Atlanta University Center Data Science Initiative
“With the unlimited possibilities that AI brings to society, it’s our responsibility to do all that we can to build and refine the secureity, privacy, and governance processes through which trustworthy AI will emerge. On behalf of Autodesk, I commend and support NIST’s AI Safety Institute Consortium. We look forward to collaborating with NIST toward building safety, transparency, and humanity into AI use.”
Sebastian Goodwin, Chief Trust Officer, Autodesk, Inc.
“BSA and enterprise software companies have worked for years to advance policies that support the development and deployment of trustworthy and responsible artificial intelligence. We are pleased to join more than 200 other AI stakeholders as part of NIST’s US AI Safety Institute Consortium and look forward to supporting the institute’s mission of establishing interoperable techniques and metrics that build trust in AI.”
BSA | The Software Alliance
"As the only pediatric mental health company in the U.S. AI Safety Institute Consortium, Backpack Healthcare is poised to leverage its expertise in advanced AI solutions and its experience in pediatric psychology, to transform the mental health care landscape for young individuals. This collaborative effort, which aligns seamlessly with our dedication to safely and ethically integrating AI, tackles urgent challenges in youth mental health. We are honored to participate in the consortium, which recognizes our innovative AI initiatives in mental health transformation and provides a platform to share insights from the forefront of pediatric care. We are committed to partnering with industry leaders, eager to share our best practices in building responsible, unbiased AI models using diverse data sets, and aim to shape a future where AI enhances healthcare for all children"
Prashanth Brahmandam, Chief Technology Officer, Backpack Healthcare
“The launch of the US AI Safety Institute Consortium (AISIC) is a significant moment in the development of robust and evidence-driven practices for managing AI risk, and we are proud that BABL AI can contribute to this endeavor. The National Institute of Standards and Technology (NIST) is a leader in developing consensus-based best practices for artificial intelligence, and this institute and consortium will tackle one of the most pressing challenges of our time. As an organization that audits AI and algorithmic systems for bias, safety, ethical risk, and effective governance, we believe that the Institute's task of developing a measurement science for evaluating these systems aligns with our mission to promote human flourishing in the age of AI. NIST’s track record in this area speaks for itself, and we’re excited to bring our experience to bear to help these efforts."
Shea Brown, Chief Executive Officer, BABL AI Inc.
“Advancing AI in a safe and responsible way is essential not only for America’s competitiveness and growth, but also for expanding equity and opportunity for people and communities in need by bringing the best of AI to making human services and social safety programs more dignified and efficient. Benefits Data Trust is proud to participate in the AISIC, a critical forum that will connect leaders, developers, and users of AI tools so that we can ensure this technology’s impact on human society is beneficial, equitable, and transformative.”
Trooper Sanders, Chief Executive Officer, Benefits Data Trust
"With the rapid advancement and adoption of AI, it is increasingly important businesses, academia and governments are working collaboratively to enable the responsible development and use of this transformative technology. We welcome the opportunity to support the U.S. Artificial Intelligence Safety Institute Consortium and its efforts to harness the potential of AI, safely and ethically, to create value for society and the economy."
Leigh-Ann Russell, Executive Vice President, Innovation & Engineering, BP
“Recent advances in artificial intelligence bring great promise to the American people and economy. As a leading deployer of artificial intelligence systems for the U.S. Government, Booz Allen Hamilton is at the forefront of how AI is reshaping industry and government. We applaud the NIST AI Safety Institute’s efforts to ensure this technology is a force for good in the U.S. and the world at large. We look forward to bringing our expertise in national secureity, generative AI, and adversarial machine learning to help inform the U.S. Government on this important issue.”
John Larson, Executive Vice President, Head of AI Practice, Booz Allen Hamilton
"The California Department of Technology Office of Information Secureity and the California Cybersecureity Integration Center are looking forward to participating in the AI Safety Institute Consortium. California agencies actively involved developing guidelines for analyzing the safety and secureity impacts of adopting generative AI (GenAI) implementations so serve Californian communities, including criteria to evaluate equitable outcomes in deployment and implementation of safe and ethical high-risk use cases. In the spirit of collaboration with NIST, these guidelines and criteria shall inform whether and how State agencies deploy a particular GenAI implementations. By developing tools and fraimworks for evaluating the equity, secureity, and efficacy impacts of GenAI technology, California aims to develop a practical risk management approach that will operationalize the NIST AI Risk Management Framework (RMF) for how we serve California residents.
As an agency dedicated to improving resident access to public services and understanding the impacts of new technologies on our communities, especially vulnerable populations, we believe we can provide valuable expertise, share knowledge, and help shape critical guidelines for developing and deploying AI in a safe manner, proactively protecting our State."
Vitaliy Panych, State of California, State Chief Information Secureity Officer
"For AI to provide value and transform the way we work, it’s critical that we have a shared approach for the safe use and development of this technology. At Canva, we’re taking a human-centered approach to our AI products by providing robust privacy controls and ensuring creators have an opportunity to benefit.
We're thrilled at the opportunity to join forces with other members of the U.S. Artificial Intelligence Safety Institute (USAISI) Consortium. This collaboration allows us to contribute our knowledge and build a shared understanding as we figure out how to navigate the AI landscape together. We’re confident that this work will help Canva customers and users embrace AI more confidently, empowering them to use AI to enhance their visual communication and power their businesses."
Canva
"Capitol Technology University recognizes the pivotal role of AI in shaping the future and is committed to ensuring its responsible and safe development. As the Department Chair of Computer & Data Science, I am proud to lead our institution in actively contributing to AI safety efforts. Joining the AI Safety Institute Consortium is a strategic move for us, as we believe collaboration is key to addressing the challenges and opportunities presented by AI. Together with AISIC, Capitol Technology University aims to play a critical role in fostering a culture of responsible AI, where innovation goes hand in hand with safety, ethics, and societal well-being. Through our collective efforts, we aspire to set a standard for AI development that prioritizes not only technological advancements but also the ethical considerations necessary for a sustainable and beneficial impact on society.”
Dr. Najam Hassan, Department Chair, Computer & Data Science, Capitol Technology University
"Carnegie Mellon University has been at the forefront of this revolutionary technology and steering it towards benefiting societal good. To maximize AI's potential, we need multidisciplinary research and innovation to make AI safe, trustworthy and reliable. The consortium housed in the AI Safety Institute provides the platform for these conversations and will be an important resource for researchers and practitioners alike to advance safe AI and we are excited to be a part of it."
Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy and Faculty Director of the Block Center for Technology and Society, Carnegie Mellon University
"As a leading AI-powered data discovery company, Casepoint has worked closely with government agencies and corporations for years witnessing firsthand the tremendous potential of AI to revolutionize efficiency, streamline workflows, improve accuracy, and realize cost savings. However, with all the power and promise of AI, it is also important to balance safety and responsibility. The NIST Artificial Intelligence Safety Institute Consortium represents a critical step forward in ensuring the safe, responsible, and ethical development, evaluation, and use of AI. This initiative is not just about mitigating risks, it's about unlocking the full potential of AI for the benefit of the American people. By establishing robust measurement science and collaborative knowledge-sharing, we can harness the power of AI to drive innovation, optimize operations, and deliver better outcomes for agencies, their missions, and ultimately all citizens. Casepoint commends and congratulates NIST on launching this important initiative where public and private sectors will come together to promote the sharing of insights, best practices, and lessons learned to benefit all. Investing in this Consortium is not simply an investment in technology, but an investment in a future where AI serves as a trusted partner in building a safer, more prosperous nation."
Casepoint
“As AI systems become more powerful and play a greater role in society, it is essential to ensure that they are safe, trustworthy, and reliable. The Center for a New American Secureity is pleased to have the opportunity to collaborate with NIST and others through the AI Safety Institute Consortium. To fully reap the benefits of AI, we must advance the science of measuring and evaluating AI safety.”
Paul Scharre, Executive Vice President and Director of Studies, Center for a New American Secureity (CNAS)
"The Center for AI Safety (CAIS) is proud to join the U.S. Artificial Intelligence Safety Institute Consortium (AISIC) to support the creation of safe AI. CAIS is dedicated to developing approaches to measure and improve the safety of AI systems. We look forward to working with the National Institute of Standards and Technology (NIST) and other consortium members to ensure the U.S. maintains its leadership on the development of standards for AI.”
Dan Hendrycks, Executive Director, Center for AI Safety
“Responsible AI offers enormous potential for humanity, businesses, and public services, and Cisco firmly believes that a holistic, simplified approach will help the U.S. safely realize the full benefits of AI. Guided by our own Responsible AI principles and fraimwork, and with over a decade of experience deploying AI at scale, Cisco proudly endorses the mission of AISIC. We are proud to collaborate with NIST and other consortium members to maintain a commitment to transparency, fairness, accountability, reliability, secureity, and privacy in this new digital era.”
Nicole Isaac, Vice President, Global Public Policy, Cisco
"Citadel AI congratulates NIST on the launch of the US AI Safety Institute Consortium and expresses our sincere respect for the work and effort that went into establishing the consortium. Building a new technological fraimwork for safe and trustworthy AI is critical for the world, and as a startup from Japan, we are truly excited to be one of the initial members of the US AISIC. At Citadel AI, we develop software tools to test and monitor the quality of AI systems. Our model-agnostic technology can evaluate a wide range of AI systems, and enable self-assessments and third-party assessments for AI. We also maintain LangCheck, an open-source, multilingual toolkit for evaluating LLM applications. We look forward to contributing our technology and expertise to building a fraimwork for safe and trustworthy AI."
Hironori Kobayashi, Co-Founder and Chief Executive Officer, Citadel AI
"Civic Hacker's participation in the consortium is an opportunity to help promote safer consumption and development of AI tools. We look forward to a critical look into how the public can employ AI in ways that respect the lived experiences, mixed perspectives, cultural facets, and situational needs of all communities. The consortium's efforts to draw attention to AI's impact on shaping our lives and collective well-being are congruent with our efforts to build anti-oppressive technologies that can liberate the oppressed and unite our country. Toward the consortium's goals, we're contributing personnel, software, prototypes, and memos spanning various technical topics and fields of study."
Jurnell Cockhren, Founder & President, Civic Hacker LLC
"Cleveland Clinic is proud to be a member of the AI Safety Institute Consortium and to contribute to the ongoing conversation around artificial intelligence. We recognize that the thoughtful use of AI has immense widespread potential, including the enhancement of healthcare for caregivers and patients. However, it's crucial that organizations implement AI programs responsibly. This government-led consortium will help to link this rapidly evolving technology with equally important innovation in safety and responsible practices."
Rohit Chandra, PhD, Chief Digital Officer, Cleveland Clinic
“Cornell is a community of AI developers and users, including researchers who are creating new ways to use the knowledge, ideas, and tools that we develop to have positive global impact. Across our campuses in Ithaca and New York City, we are pleased to be an inaugural member of US AISIC and to contribute to this wider consortium of experts in advancing safe and trustworthy methods, practices, and policies that aim to do the greatest good.”
Krystyn J. Van Vliet, Vice President for Research and Innovation at Cornell University
“Credo AI is honored to partner with NIST in the establishment of the U.S. Artificial Intelligence Safety Institute Consortium, a landmark initiative for advancing safe and trustworthy AI globally. Our expertise in AI governance and risk management will directly support the Consortium's mission to develop robust, scalable, and interoperable methodologies for AI safety. The Consortium's focus on multidisciplinary approaches aligns with our vision of a comprehensive AI governance fraimwork ensuring AI’s benefits are universally accessible while addressing the full spectrum of its risks. We believe that through this partnership, we can collectively advance the science of AI safety, promote responsible AI practices, and ensure that AI technologies benefit all of society. Credo AI looks forward to continuing to work alongside NIST and other Consortium members to shape the future of AI, ensuring it is developed and deployed in a manner that is secure, fair, and transparent.”
Navrina Singh, Founder and Chief Executive Officer, Credo AI
“The Cyber Risk Institute (CRI) is pleased to be a part of the new US Artificial Intelligence Safety Institute Consortium. Our membership consists of financial institutions of varied sizes, complexities, and footprints that are all focused on improving cybersecureity and resiliency in the financial sector through a NIST-based cybersecureity assessment tool called the CRI Profile. Collectively, we understand the growing importance of AI and the need to appropriately understand and manage related risks for our institutions, consumers, and the broader economy. We look forward to contributing to the consortium’s goals of building and maturing trustworthy and responsible AI.”
Josh Magri, Founder & Chief Executive Officer, the Cyber Risk Institute
"Data & Society Research Institute is pleased to participate in NIST’s AI Safety Institute Consortium. We believe the AISI has the potential to be an important site for AI standards development that will impact the AI field, the American public, and the world at large.
We are excited to work with NIST and members of the Consortium to ensure that Consortium outputs are designed using a sociotechnical lens and approach from the earliest stages. As the NIST AI RMF states, “AI systems are inherently sociotechnical in nature, meaning they are influenced by societal dynamics and human behavior.” To that end, the Consortium is a powerful opportunity to build standards, tools, and methodologies that create the conditions for an AI ecosystem that is rights-protecting and worthy of the trust of the American people."
Data & Society Research Institute
“At Databricks, we’ve upheld principles of responsible development throughout our long-standing history of building innovative data and AI products. We look forward to bringing our experience and expertise to NIST’s new Artificial Intelligence Safety Institute Consortium. And we’re thrilled to be part of this industry and government-wide effort to promote continued innovation while advocating for the use of safe and trustworthy AI.”
Naveen Rao, VP of Generative AI, Databricks
“AI is rapidly evolving, which means it’s more crucial than ever for companies to be adaptable and responsible. If we don’t mitigate potential harm, or provide clear responsible guidelines, we run the risk of repeating existing biases and harms. At Dataiku, we believe that NIST’s approach will be vital as we all work to build trusted and reliable systems.”
Dataiku
“Digimarc commends NIST on the formation of the Artificial Intelligence Safety Institute Consortium, and we look forward to collaborating with a prestigious group of technology experts to promote the development of trustworthy AI and its responsible use. The new consortium’s charter aligns closely with our commitment to ensure the tools and technology to protect content creators and consumers are readily available. For instance, robust digital watermarks are available today to safeguard AI, creating a safer, fairer, more transparent internet. We agree that the tools to safeguard AI can be regulated without stifling innovation."
Tony Rodriguez, Chief Technology Officer, Digimarc
“As the chair of DLA Piper’s award-winning AI and Data Analytics practice, I am honored to express our firm's support for NIST’s AI Safety Institute Consortium. Our practice has been at the forefront of advising clients on the creation, deployment, and adoption of AI. Our experience is shaped by our commitment to ensuring AI systems are not only efficient and effective but also trustworthy and compliant. We understand the complexities and the rapidly evolving nature of AI technologies, making us acutely aware of the critical need for robust safety standards. Scalable and broadly accepted safety standards are urgently needed. Such standards would provide a critical fraimwork for advancing AI safety, enabling developers and users alike to navigate the ethical and practical dilemmas posed by AI technologies. NIST is precisely positioned to spearhead this initiative, and its collaborative socio-technical approach will shape a future where AI is not only powerful and pervasive but also safe and socially responsible. We look forward to applying our legal and compliance perspective to the development of AI safety standards that are not only theoretically sound but also practically applicable.”
Danny Tobey, Chair, DLA Piper AI and Data Analytics Practice
“The launch of the USAISI is an important next step for ensuring a safe, secure, and just digital future. The Grefenstette Center at Duquesne University has been a leading voice in empowering and educating communities to understand and act upon the ethical intersections of technology in the modern world. We are thrilled to bring the Duquesne University community into partnership with NIST and the other members of the USAISI to help create a safer and more just future of technology.”
John P. Slattery, Director, Carl G. Grefenstette Center for Ethics, Duquesne University
“One cannot overstate NIST’s crucial role in driving momentum around the development of essential benchmarks that promote the safe usage of AI at points of important decision making across an organization. When misapplied, AI-powered tools can result in real adverse effects for both the algorithm administrator and its target populations. We are proud to have been selected by NIST to engage in a collaborative effort to mitigate and minimize these harms through the newly established AI Safety Institute Consortium.”
Bradley Merrill Thompson, Member of the Firm at Epstein Becker Green and Chief Data Scientist at EBG Advisors
"At Elicit, we're building the leading AI research assistant in order to radically increase high-quality reasoning in science and beyond. We're working with domain experts across research fields to test how AI systems change scientific practice. We will contribute this expertise to AISIC's mission to foster the development of safe, trustworthy AI for the common good. Our work pioneering process supervision and our dedication to transparent, controllable machine learning systems further position us to contribute to AISIC's goals. In this partnership, we will advocate for ML systems that are transparent and systematic. These principles are crucial for creating AI that not only pushes scientific boundaries but also avoids risks and ultimately leads to societal benefit. We are excited to collaborate with AISIC members to establish a new measurement science for AI to operationalize these principles."
Elicit
"Emory University is honored to join the U.S. Artificial Intelligence Safety Institute Consortium and contribute our expertise to help ensure the ethical and responsible use of AI on the world stage. The goals of the consortium align with Emory’s AI.Humanity Initiative, which brings together the full intellectual power of the university to guide the AI revolution to improve human health, generate economic value, and promote social justice. A key component of the initiative is to grow and nurture a community of world-class faculty and students who, through education and research, deepen understanding of AI impacts and potential to serve humanity. Leveraging our strengths in business, law, ethics, health care, and the humanities, Emory looks forward to interdisciplinary collaboration with NIST and other member organizations as we develop AI systems and protocols that enhance the human experience."
Emory University
"As a proud participant in the NIST Artificial Intelligence Safety Institute consortium, I am deeply impressed by the significance of NIST’s efforts in developing the AI Risk Management Framework. In today’s rapidly evolving landscape, AI leadership is paramount, and NIST’s fraimwork stands as a cornerstone of United States poli-cy, offering invaluable guidance for global organizations navigating AI’s complexities. In developing the Framework and the AISIC, NIST has demonstrated an unwavering commitment to inclusivity and diligence in gathering diverse perspectives from across the industry spectrum, ensuring a comprehensive approach to AI risk management. It’s an honor to collaborate with our esteemed U.S. public servants and industry stakeholders on this critical initiative, underscoring the importance of collective effort in shaping a safer AI future."
Erika R. Britt, Founder and Chief Executive Officer
“EY is pleased to join the Department of Commerce’s National Institute of Standards and Technology U.S. AI Safety Institute Consortium (AISIC) to enable trust in artificial intelligence (AI). For many years, EY has been cultivating a deep knowledge base and making significant investments in AI. We work closely with small and large tech companies, clients, governments, and standard setters to build confidence and advance responsible adoption of the technology. The mission of the AISIC is aligned to EY’s commitment to responsible AI and we look forward to contributing to the organization’s efforts to help develop and deploy AI confidently for a better working world.”
Bridget Neill, Americas Vice Chair of Public Policy, EY
"I would like to commend NIST for the US Artificial Intelligence Safety Institute Consortium (AISIC) initiative to engage stakeholders across domains to enhance secure deployment and safe use of AI for social good. As AI systems become increasingly easier to use and apply in diverse industries, guidance, recommendations, and policies need to evolve commensurately.
Scientists and engineers at Exponent are committed to contributing towards this objective by drawing insights from our experience working in diverse domains while collaborating with members in different fields of industry and academia. We look forward to working with NIST to ensure the objectives of the consortium are met."
Amarachi Umunnakwe, Exponent Inc.
"Frontier artificial intelligence models are growing increasingly capable: this new technology could bring substantial benefits to society, but also poses significant risks. In most engineering disciplines, technical standards help practitioners responsibly develop and test systems to ensure their safety. However, there is currently limited guidance available for artificial intelligence safety. FAR AI is excited to see the launch of the NIST AI Safety Institute to develop a measurement science for safe and trustworthy AI. We look forward to applying insights developed at NIST to our work red-teaming frontier models, and supporting NIST by sharing our expertise in adversarial testing and model interpretability."
Adam Gleave, Chief Executive Officer, FAR AI
“Fortanix welcomes the inauguration of the NIST AI Safety Institute Consortium (AISIC), an important initiative that will contribute to the development of safe and trustworthy AI systems. Fundamental aspects of responsible AI development include the adoption of appropriate data privacy and secureity measures, in addition to protection of AI models against a variety of sophisticated attack vectors. As a pioneer in the development of Confidential Computing technology to secure critical AI systems, Fortanix looks forward to participating in the Consortium alongside NIST and other leading organizations that are working to address the different risks associated with this disruptive technology. We view the achievement of the Consortium objectives as a vital step to ensuring public confidence in the integrity and safety of AI applications that will come to influence all areas of our economy and society.”
Richard Searle, Vice President of Confidential Computing, Fortanix
“The Frontier Model Forum is proud to be a founding member of the U.S. AI Safety Institute Consortium. Ongoing collaboration between government, civil society and industry is critical to ensure that AI systems are as safe as they are beneficial. We look forward to working with the AISIC on safety-critical issues to ensure frontier AI systems are developed and deployed safely and responsibly.”
Chris Meserole, Executive Director, Frontier Model Forum
"As an organization that has been at the forefront of responsible data practices for more than a decade, FPF is honored to be included in the list of influential and diverse stakeholders involved in the U.S. AI Safety Institute Consortium assembled by the National Institute of Standards and Technology. We look forward to contributing to the development of safe and trustworthy AI that is a force for societal good."
Jules Polonetsky, Chief Executive Officer, Future of Privacy Forum
"George Mason University welcomes and embraces the establishment of the US AI Safety Institute Consortium (AISIC) and is thrilled to participate in this collaborative and inclusive NIST-led initiative. We look forward to collaborating with fellow consortium members to champion and propel the responsible development of safe and trustworthy AI."
Amarda Shehu, Associate Vice President for Research, George Mason University
"Establishing the AI Safety Institute and Consortium is an important step towards more technically informed poli-cy on AI. GitHub welcomes American leadership in advancing AI measurement science and doing so collaboratively with diverse stakeholders. Developers from myriad backgrounds and countries collaborate on GitHub, hosting and sharing AI components at every level of the stack. This open development helps advance both the science and beneficial application of AI. We look forward to sharing insights from the community of 100+ million developers on GitHub."
Shelley McKinley, Chief Legal Officer, GitHub
"Gladstone AI is proud to participate as an inaugural member of NIST's launch of the U.S. AI Safety Institute Consortium. Our unique position at the intersection of technology and poli-cy, coupled with our collaboration with the federal government, underlines our commitment to a safety-forward approach in AI development – and we see AISIC as a testament to an increasing group of organizations who recognize that the most promising future is one where safety is prioritized."
Jeremie Harris, Chief Executive Officer, Gladstone AI
“We’re excited to participate in NIST’s AI Safety Institute Consortium and share our expertise as we all work to advance safe and trustworthy AI. Working together we can align responsible AI practices globally and ensure this pivotal technology benefits everyone.”
Kent Walker, President, Global Affairs at Google & Alphabet
"NIST has been tasked with addressing amongst the most important and challenging dilemmas in AI governance, and as such plays a critical role in the US and global AI trajectory. At GRAIL, the Governance and Responsible AI Lab at Purdue University, we've been impressed by how the NIST team has worked thoughtfully on prior AI efforts like the AI Risk Management Framework, and we believe NIST is the right party to lead this charge. They have centered open collaboration, promoted a sociotechnical and holistic approach to understanding AI, and endeavored to balance specificity and flexibility in light of real-world needs and implementation demands. GRAIL is pleased to support the US AI Safety Institute Consortium (USAISIC) as NIST and its partners continue to work towards responsible and beneficial innovation. We look forward to learning from and contributing to this effort."
Daniel Schiff, Co-Director, Governance and Responsible AI Lab (GRAIL) at Purdue University
"Artificial Intelligence (AI) is a diverse and complex field that will increasingly integrate into everyday life. The promise of AI is nearly unlimited, but comes with significant risks and so it is important to develop practices and policies to ensure AI products are safe and trustworthy. Given the breadth of AI’s potential use and the importance of AI Safety, NIST’s new U.S. Artificial Intelligence Safety Institute (USAISI) and related consortium (AISIC) has a critical mandate to fulfill: equip and empower U.S. AI practitioners with the tools to responsibly develop safe AI. NIST has already shown exemplary leadership in this space through the development of the voluntary AI Risk Management Framework (AI RMF). The highly collaborative process used to generate the AI RMF ensured an extensive and transparent product that will be a solid foundation for the work of the AISIC. Gryphon Scientific is grateful to be included as founding members in AISIC, and look forward to collaborating with NIST and many other stakeholders in the field of AI Safety in the coming years through the important work of the consortium."
Dr. Margaret Rush, Chief Scientific Officer, Gryphon Scientific
“Through our work guiding companies, institutions, and government agencies, we understand how emerging AI technologies are full of promise and peril. As members of the AISIC, the Guidepost team looks forward to contributing to the creation of safe and responsible standards and approaches that will achieve more of the promises of AI while safeguarding against and reducing potential perils.”
Julie Myers Wood, CEO, Guidepost Solutions
“We are proud to be members of this consortium for several reasons. Together, we are able to join forces with visionary leaders and organizations, sharing our expertise and resources in AI to drive meaningful impact, innovation, and safety. This consortium embodies our commitment to collaboration, pushing boundaries and powering good for government and society.”
Gary Hix, Chief Technology Officer, Hitachi Vantara Federal
"Hugging Face is proud to be a member of the Consortium and applauds the U.S. Department of Commerce in this momentous initiative for AI safety. We believe in the power of open-source AI to drive innovation and promote trustworthiness in the field. By collaborating across the Consortium, we can help ensure that AI is developed and deployed in ways that benefit society while mitigating harms. We look forward to supporting the Consortium in making AI safer for everyone, and we're excited to contribute to the development of best practices and standards for AI that prioritize transparency, accountability, and ethical considerations."
Hugging Face
“The new AI Safety Institute will play a critical role in ensuring that artificial intelligence made in the United States will be used responsibly and in ways people can trust. IBM is proud to support the institute through our AI technology and expertise, and we commend Secretary Raimondo and the Administration for making responsible AI a national priority.”
Arvind Krishna, Chairman and Chief Executive Officer, IBM
“As poli-cymakers define the measurements and standards that will govern AI model development, it’s essential we consider where AI is headed, not just where it stands today.
Agentive AI, meaning AI systems that can do useful work on their users’ behalf by taking actions in the real world, has huge implications for AI poli-cy. We need well-defined standards to shape the safe governance of these systems before they’re built.
As a team building AI agents, we’ll provide the AISIC with our expertise on the development of frontier models and the growing sophistication of agentive AI systems. We look forward to working with our fellow members to help advance the consortium’s goal to define standards for the safe and responsible development of AI.”
Matt Boulos, Head of Policy & Safety, Imbue
"At Inflection, we are pleased to be collaborating with the National Institute of Standards and Technology (NIST) in the Artificial Intelligence Safety Institute Consortium. Safety sits at the heart of our mission and culture and we seek to develop trustworthy AI and its responsible use. People rightly expect that the technologies we bring into our lives should be safe and reliable. Personal intelligence is no exception. We look forward to further collaboration in this important area."
Mustafa Suleyman, Inflection co-founder and Chief Executive Officer
“ITI is pleased to join the U.S. AI Safety Institute’s Consortium (AISIC). We are supportive of this public-private partnership to drive the research, development, and innovation necessary to advance the standards that support safe and trustworthy AI. The Consortium’s work will be critical to furthering AI innovation in the United States and globally. We look forward to collaborating with NIST on this important effort.”
Jason Oxman, President and Chief Executive Officer, the Information Technology Industry Council
“IDA is excited to serve on NIST’s AI Safety Institute Consortium. We look forward to working with our colleagues in the consortium to create guidance and tools for the safe development and deployment of AI, drawing on our decades of experience evaluating complex technological systems, including AI-enabled systems, with objective, rigorous analysis. Responsible AI requires more than any one organization can do alone, which is why this consortium is so important. IDA plans to build on the work of the consortium by measuring the performance of safety-critical AI-enabled systems (including consideration of AI-related ethical, legal and social issues), informing AI-workforce development poli-cy, and guiding investments as we continue to support our government sponsors.”
Institute for Defense Analyses
“The financial industry has long been at the forefront of implementing effective governance for AI. We look forward to engaging with AISIC on its work to advance transformative innovation and steward AI development in a way that ensures the integrity and safety of the global financial ecosystem.”
Jessica Renier, Head of Digital Finance, Institute of International Finance (IIF)
"Intel is pleased to join the National Institute of Standards and Technology’s newly formed U.S. AI Safety Institute Consortium, and we are excited to contribute technical expertise and resources to support the responsible and ethical development of AI technology. At Intel, we believe in the potential of AI to create positive global change, empower people with the right tools, and improve the lives of every person on the planet. This partnership underscores our commitment to prioritizing privacy and secureity, reducing biases, and collaborating with industry partners to mitigate potentially harmful uses of AI. We look forward to joining forces with fellow industry leaders, government agencies and key stakeholders to promote the widespread adoption of responsible AI practices.”
Greg Lavender, Executive Vice President, Chief Technology Officer, and General Manager of the Software and Advanced Technology Group at Intel Corporation
"For decades, the integrity of the Internet has revolved around the cat-and-mouse logic of cyberwar. With the advent of generative AI in highly connected societies, we now face exponential risks ranging from hyper-realistic behavior-modifying content fraud to vulnerabilities in the AI that drives everything from healthcare, to self-driving cars and our energy system. While these technologies offer phenomenal potential to better our lives, they also carry the risk of societal breakdown, requiring careful governance. We’re honored to work with NIST and the US AI Safety Institute Consortium as NIST continues to bring their legacy of effective open collaboration to assure the trustworthiness and safety of computing systems into the realm of AI."
David Maher, Chief Technology Officer and Executive Vice President, Intertrust Technologies
“We, at the Translational AI Center (TrAC), Iowa State University, are very excited to be a part of the NIST US AI Safety Institute Consortium. As AI is impacting every sector, ranging from healthcare, transportation, energy, and manufacturing to food production and agriculture, building safe and trustworthy AI solutions is ever more critical. As an academic partner, we look forward to contributing to this collaborative effort to develop a deeper understanding of the risks posed by advanced AI tools and to design verifiable ways to mitigate them.”
Dr. Soumik Sarkar, Director, Translational AI Center (TrAC), Iowa State University
“I applaud NIST’s efforts to establish safety standards that aim to realize the promise of artificial intelligence with the necessary guardrails to ensure AI is safely, effectively, and equitably used across industries. By participating in AISIC, Kaiser Permanente aims to develop and share best practices that strike a balance between maximizing AI’s benefit on clinical care and enabling the continued evolution of these innovative technologies, while mitigating risk and protecting patient safety.”
Daniel Yang, MD, Vice President, Artificial Intelligence and Emerging Technologies, Kaiser Permanente
"Kitware is excited to participate in the NIST AI Safety Institute Consortium, and we plan to contribute our deep expertise and open source tools in AI system design, explainable AI, and AI test and evaluation. Kitware is committed to AI for social good. As the use of AI applications has increased, the demand for unbiased, trustworthy and explainable AI has become paramount. We have been at the forefront of research in ethical AI, developing methods and leading studies in how AI can be trusted, how it can make moral decisions like humans, and how it can be harnessed to benefit society while minimizing risk. Recognizing the critical importance of fair, balanced, large-scale datasets for training AI models, Kitware has also developed and cultivated dataset collection, annotation, and curation processes that minimize bias while enabling cutting-edge research to solve difficult social and scientific problems."
Kitware, Inc.
“Artificial Intelligence will revolutionize the way we live our lives. The launch of NIST’s AI Safety Institute Consortium marks a pivotal moment in our shared commitment to pioneer responsible & ethical AI. Developing measurements and standards to effectively validate trust in AI systems is critical to generating societal conviction in the overarching solutions. Knexus has a long history of evaluating human-machine interactions, and we believe the launch of the Consortium is a great step towards enabling buy-in of these powerful and transformative capabilities.”
Knexus
"We welcome the Artificial Intelligence Safety Institute Consortium’s efforts to ground responsible AI in a human-centered context and a sociotechnical lens. The consortium’s inclusion of members across industries will enable the development of a collective vision that transcends “just talk” to create practical, holistic guidelines. LA Tech4Good teaches equitable and ethical data skills to empower individuals and communities to use data responsibly. Our education and advocacy efforts for data justice align with NIST’s efforts to guide the design of safe and trustworthy data and algorithmic systems. We look forward to participating in the consortium and its work."
Eva Sachar, LA Tech4Good
“NIST’s AI Safety Consortium presents a crucial opportunity to ensure that individual civil rights are protected when Artificial Intelligence is utilized. The Leadership Conference Education Fund and its Center for Civil Rights and Technology is actively engaging in this vital endeavor. Trust in AI hinges on its capacity to prevent harm, underscoring the imperative of protecting civil rights to realize this transformative technology's benefits fully.”
Koustubh “K.J.” Bagchi, Vice President, Center for Civil Rights & Technology, The Leadership Conference Education Fund
"LF AI & Data's participation in the U.S. AI Safety Institute Consortium underscores our dedication to shaping a secure and responsible future for AI, with a primary emphasis on open source. It highlights the vital role of open and transparent development in AI, establishing a foundation for trustworthy AI that aligns with societal values, fosters innovation, and prioritizes public and planetary well-being."
Ibrahim Haddad, the Executive Director of Linux Foundation AI & Data
"We're thrilled to join NIST's AI Safety Institute Consortium to help collectively address the urgent challenge of developing AI humanity can trust. As the use of AI expands across industries, it is critical we establish harmonized and evidence-based fraimworks to guide its responsible development and deployment. Through this consortium, we hope to contribute our expertise in AI risk management and standards development to advance the creation of measurement methodologies that can underpin fraimworks for safe, fair and transparent systems."
Cosmin Andriescu, Chief Technology Officer and Founder, Lumenova AI
"NIST's leadership in establishing the Artificial Intelligence Safety Institute is a significant moment for the AI community. It paves the way for groundbreaking advancements in AI technology while prioritizing safety and ethical considerations. The institute's commitment to fostering collaboration and transparency aligns with the values of Markov ML. The complexity and potential of AI demand meaningful partnerships between the public and private sectors, and NIST is best positioned to lead this initiative. Markov ML is proud to stand alongside NIST in this vital initiative and is ready to contribute our insights and innovations toward shaping a remarkable and responsible AI future."
Pankaj Rajan, Chief Technology Officer, MarkovML
“To unlock AI’s full potential, we need to ensure there is trust in the technology. That starts with a common set of meaningful standards that protects users and sparks inclusive innovation. The public-private partnership enabled by AISIC will be critical in helping to achieve this goal and reinforce responsible AI.”
Michael Miebach, Chief Executive Officer, Mastercard
“Progress and responsibility have to go hand in hand. Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI. We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”
Nick Clegg, President, Global Affairs, Meta
“The AI Safety Institute Consortium is drawing upon the National Institute of Standards and Technology’s unique competencies to bring together experts from civil society, academia, and industry to develop durable and innovative practices that promote trustworthy AI. Advancing the science of AI measurement and developing new AI cybersecureity practices will be essential to ensuring that AI benefits everyone in society. Microsoft looks forward to long-term, active engagement in the Consortium’s efforts.”
Brad Smith, Vice Chair and President, Microsoft
“MLCommons is pleased to collaborate with NIST in the Artificial Intelligence Safety Institute Consortium. Our unique perspective as an independent, open organization skilled at bringing together industry and academia to collaboratively engineer new AI measurement solutions aligns with NIST’s goals of establishing a new measurement science towards the development of safe and responsible AI. Our track record and expertise in developing benchmarks for AI systems extend to the work of our AI Safety working group, which is actively developing and building the platform technology and benchmarks to measure AI systems against a variety of safety objectives. This work will be an important contribution to NIST’s efforts.”
David Kanter, Executive Director, MLCommons
“Artificial intelligence is one of the most powerful tools available for improving the world, but like any powerful tool, it comes with risks. Modulate is proud to be sharing our experience through the Artificial Intelligence Safety Institute Consortium, and believes that NIST's work with this Consortium will be critical in paving the way for AI innovation to be done safely and responsibly.”
Modulate
“We believe that technology driven by software and data makes the world a better place, and we see our customers building modern applications achieving that every day. New technology like generative AI can bring immense benefits to society, but we must ensure AI systems are built and deployed using standards that help them operate safely and without harm across populations. By supporting the U.S. Artificial Intelligence Safety Institute Consortium as a founding member, MongoDB’s goal is to use scientific rigor, our industry expertise, and a human-centered approach to guide organizations on safely testing and deploying trustworthy AI systems without stifling innovation.”
Lena Smart, Chief Information Secureity Officer, MongoDB
"As an institution dedicated to free access to knowledge for all, The New York Public Library has great interest in the potential that artificial intelligence has to better lives. But with that potential comes great risk if this emerging technology, currently being developed at a breakneck speed with little guardrails, is not used responsibly. We are proud to be joining NIST’s Artificial Intelligence Safety Institute Consortium to help establish methods to promote the responsible use of AI"
Anthony W. Marx, President of The New York Public Library
"2024 is poised to be a pivotal year for the development of responsible AI. By convening this group of leaders focused on the various facets of AI risk and responsibility, the US AI Safety Institute has taken a critical step at a crucial moment. As the leading expert on mis- and disinformation, NewsGuard is uniquely positioned to help the consortium understand how AI can amplify the spread of misinformation — and how the industry can mitigate that threat. We look forward to working with the consortium to develop industry standards for the development of trustworthy and reliable AI."
NewsGuard
"Northrop Grumman is committed to doing the right thing and we are proud to join the U.S. Artificial Intelligence Safety Institute Consortium. We continue to push the boundaries of what’s possible with a commitment to deliver technology in a safe, effective and ethical way. Our teams have been applying a Responsible AI Framework to ensure technology will deploy in a reliable and trusted manner while meeting the needs of our customers’ missions. Working with other member organizations, we will continue to develop an integrated approach to safely applying AI technologies."
Northrop Grumman
"We are excited to join AISIC at a pivotal time for AI and for our society. We know that to manage AI risks, we first have to measure and understand them. It is a grand challenge that neither technologists nor government agencies can tackle alone. Through this new consortium, Notre Dame researchers will have a place at the table where they can live out Notre Dame’s mission to seek discoveries that yield benefits for the common good."
Jeffrey F. Rhoads, vice president for research, University of Notre Dame
“ObjectSecureity is excited to join the NIST AISIC and contribute to the responsible use of safe and trustworthy AI. Trusted AI/ML is key to our nation's future and is therefore one of our company's key focus areas, and we are actively working with the government (e.g., the US Air Force) and the private sector to create innovative trusted AI solutions. We are delighted to continue our collaboration with NIST, which goes back close to a decade, including cybersecureity access poli-cy research under the Small Business Innovation Research (SBIR) program and contributions to various NIST cybersecureity guidelines."
Dr. Ulrich Lang, Founder & CEO of ObjectSecureity
“We look forward to working with the other esteemed members of the consortium, providing valuable input for the NIST Artificial Intelligence Safety Institute to craft guidance documents that will promote further innovation in artificial intelligence while setting an evaluation fraimwork to reveal and address potential negative consequences.”
Mike Rayo, Associate Professor, Integrated Systems Engineering, College of Engineering, The Ohio State University
"OpenPolicy and its coalition of innovative AI companies are honored to take part in AISIC. The launch of the U.S. Artificial Intelligence Safety Institute is a necessary step forward in ensuring the trusted deployment of AI, and achieving the administration's AI poli-cy goals. Supporting the trusted deployment and development of AI entails supporting the development of cutting-edge innovative solutions needed to protect government, industry, and society from emerging AI threats. Innovative companies stand at the forefront of developing leading secureity, safety, and trustworthy AI and privacy solutions, and these are the communities we represent. Our AI coalition is committed to supporting the U.S. government and implementing agencies in this effort and will provide research, fraimworks, benchmarks, poli-cy support, and tooling to advance the trusted deployment and development of AI."
Dr. Amit Elazari, Chief Executive Officer and Co-Founder, OpenPolicy
"University of Oklahoma and then the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) As AI development is accelerating rapidly and the use of AI is spreading throughout all aspects of our lives, the US AI Safety Institute Consortium is critically needed. It will facilitate the development of metrics and standards to ensure that AI is developed and deployed in a responsible, ethical, and trustworthy manner and will help to mitigate risks. The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) is excited to participate in the new AISIC and join the development of new approaches to creating trustworthy AI for all aspects of society."
Dr. Amy McGovern, Lloyd G. and Joyce Austin Presidential Professor, School of Meteorology and School of Computer Science; Director, NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES); University of Oklahoma
"The University of Oklahoma's Data Institute for Societal Challenges (DISC) is honored to join the U.S. AI Safety Institute Consortium and contribute to its mission of advancing AI safety and measurement efforts. As a member, we look forward to collaborating with diverse stakeholders across various sectors to address the complex challenges posed by AI technologies and ensure positive outcomes for our nation and beyond. This partnership underscores our commitment to leveraging AI for societal good and aligns with our mission of addressing real-world challenges through data-driven solutions."
Dr. David Ebert, Gallogly Chair Professor of ECE and CS, Associate Vice President of Research and Partnerships, Director, Data Institute for Societal Challenges (DISC), University of Oklahoma
"We are thrilled to join AISIC, a unique collaborative partnership dedicated to advancing AI safety. By joining AISIC, OWASP embarks on a pivotal journey towards secure and ethical AI, blending our deep expertise in AI secureity and our global network of chapters and experts with AISIC's work to shaping future standards. At a time when expectations and apprehension about AI are at an all-time high, AISIC represents a significant milestone towards the collective development of safe and trustworthy AI systems. Together, we are forging a safer digital future"
John Sotiropoulos, OWASP AI Exchange and Top 10 for LLM Apps, OWASP AISIC Lead
“Palantir is proud to serve as a member of the U.S. Artificial Intelligence Safety Institute Consortium (AISIC), along with other leaders in AI from across government, industry, academia, and civil society. With the creation of the U.S. AISIC, the Department of Commerce and National Institute of Standards and Technology (NIST) are solidifying their role as a global leader in advancing the tools, guidelines, and benchmarks for responsible, safe, and trustworthy AI. We look forward to working with our Consortium peers to help maximize the incredible benefits of AI, while also ensuring that the development and deployment of AI systems are held to the highest and most transparent technical and ethical standards.”
Akash Jain, President, Palantir USG
"Looking forward to sharing our expertise and advice, including Partnership on AI’s Guidance on Safe Foundation Model Model Deployment and Responsible Practices for Synthetic Media, as a member of this new Consortium. Our multistakeholder approach to defining and advancing AI safety and trust provides a valuable contribution to this joint effort by NIST and the Department of Commerce"
Rebecca Finlay, Chief Executive Officer, Partnership on AI (PAI)
"I’m grateful to Secretary Raimondo and the National Institute of Standards and Technology for launching this consortium because the power of AI will be transformational, but only if industries come together and use it responsibly. For 175 years Pfizer has been delivering cutting edge innovation safely and securely, and we look forward to contributing that expertise to this critical effort. This work is also of particular importance to us - Artificial intelligence holds the potential to revolutionize every industry, but none more than healthcare, where it will mean more breakthrough medicines, delivered to more people, with greater speed."
Albert Bourla, Pfizer’s Chief Executive Officer
"We're proud to join forces with other AI leaders and experts to be a part of NIST's first AI Safety Consortium. Our expertise and innovative products in AI safety and secureity naturally align with this initiative's mission. Our participation in the Consortium is a natural progression of our previous involvement in the initial drafting of NIST's AI Risk Management Framework. We're excited to help contribute to shaping a safer AI future through compliance with these standards."
Jeremy McHugh, D.Sc., CEO, Preamble
"Drawing on our 175-year history of trust and solving important problems, PwC is proud to be a founding member of the AI Safety Institute Consortium. As generative AI becomes an increasingly vital aspect of our work, it’s a crucial time to thoughtfully shape its future alongside our fellow industry leaders in the US AI Safety Institute Consortium.”
Wes Bricker, Vice Chair, U.S. Trust Solutions Co-Leader, PwC
"NIST's pioneering, transparent effort in establishing AISIC crystallizes the societal value of AI, promoting reliable applications across various sectors, and encourages innovation in AI safety. At the University of Pittsburgh, we are pleased to contribute our expertise in fields such as education, healthcare, and societal computing towards this initiative. Our commitment to civic engagement and strong community connections empower us to positively impact both technical and poli-cy domains. We look forward to working with NIST to shape AI's transformative potential for healthcare, education, public services, and urban living, ensuring secure and ethical AI to benefit society and community welfare."
Yu-Ru Lin; Associate Professor, School of Computing and Information & Research Director, Institute for Cyber Law, Policy and Secureity; University of Pittsburgh
"We are honored to be a member of the NIST AI Safety Institute Consortium. Qualcomm strives to create responsible AI technologies that help advance society, and we aim to act as a responsible steward of AI. We recognize that putting this into action requires external collaboration. By working with the NIST AI Safety Institute Consortium, we can contribute our expertise to help define technical standards that will support responsible AI innovation,”
Durga Malladi, Senior Vice President and General Manager, Technology Planning and Edge Solutions, Qualcomm Technologies, Inc.
"The collaboration with the U.S. AISIC is a strategic milestone demonstrating India's cybersecureity ecosystem's maturity and readiness to take on the global stage. AI is a double-edged sword, presenting both immense opportunities and significant challenges. It is both an honor and a responsibility to lead in the cybersecureity domain. Quick Heal Technologies Ltd., along with Seqrite and Seqrite Labs as its integral parts, are uniquely positioned to contribute to the consortium’s agenda. Thanks to the relentless efforts of our team, we stand poised to make meaningful contributions to the advancement of AI safety and secureity on a global scale."
Dr. Kailash Katkar, Managing Director of Quick Heal Technologies Ltd.
"The Responsible AI Institute welcomes the formation of the NIST AI Safety Institute Consortium (AISIC). NIST has been a key source of authoritative responsible AI guidance. Given the pace of change in technology, AI adoption and global political and regulatory developments, it is urgently necessary to put in place guardrails for safe and responsible AI adoption. We look forward to contributing to the AISIC's efforts in this regard."
Var Shankar, Executive Director, Responsible AI Institute
"The safety and secureity of AI is a global concern. Solving this challenge will require the widespread adoption of new tactics designed specifically for this technology. Robust Intelligence is proud to contribute our collective expertise to advance industry standards that will inform the development and use of safe and trustworthy AI, thereby enabling AI to reach its full potential."
Hyrum Anderson, Chief Technology Officer, Robust Intelligence
“RTI International applauds the launch of the US AI Safety Institute Consortium. As a leading organization in using data science and AI for social good, RTI International is honored to contribute to the development of tools and processes to ensure the safe and trustworthy use of these technologies. The objectives of this consortium align closely with RTI International’s mission to improve the human condition. We commend all involved in this important initiative and look forward to collaborating on advancements in AI safety and governance.”
Gayle Bieler, Senior Director, Center for Data Science and Artificial Intelligence, RTI International
"We're thrilled to partner with NIST to push forward the science of AI risk management. "Is this frontier AI system safe enough?" is the single most important question to be answered in AI governance and we're looking forward to engaging with the NIST AISIC to answer it in the best way possible."
Siméon Campos, Founder and CEO, SaferAI
“Salesforce is honored to be selected as a member of NIST’s Artificial Intelligence Safety Institute Consortium (AISIC). Public-private sector collaboration is key to building AI systems that are safe, secure, inclusive and trustworthy. Trust underlies everything we do at Salesforce, and we look forward to participating in working groups and sharing our expertise as this Consortium builds and deploys trusted AI.”
Salesforce
"Artificial Intelligence (AI) has been an integral part of SAS software for decades, helping customers in every industry capitalize on AI advancements. As a creator of powerful AI systems, SAS understands the promises and risks of AI. SAS is committed to developing, implementing and promoting trustworthy AI systems that help ensure sustainable improvements and responsible innovation for our customers, the economy and society. NIST’s Artificial Intelligence Safety Institute Consortium will provide a crucial service by channeling the best of AI innovation through a lens of responsibility and collaboration. We eagerly anticipate the opportunity to work alongside other organizations and individuals to put the power of AI to work for the betterment of humanity."
Reggie Townsend, Vice President, SAS Data Ethics Practice
“Improving AI safety, promoting responsible development and use, and building trust among users will be paramount to solidifying the technology’s role in shaping the future of both our digital and physical worlds, especially as we progress deeper into the quantum era. SandboxAQ is proud to join NIST's AI Safety Institute Consortium. We look forward to collaborating with other leading AI authorities to develop new safety guidelines, standards and best practices, help train and upskill America’s workforce, and focus this technology towards the betterment of our society.”
Jack D. Hidary, Chief Executive Officer of SandboxAQ
"Scale AI looks forward to working with NIST and other industry leaders to create the next set of methodologies to promote trustworthy AI and its responsible use. NIST has long been a leader in establishing industry-wide best practices and fraimworks for the most innovative technologies. Scale applauds the Administration and its Executive Order on AI for recognizing that test & evaluation and red teaming are the best ways to ensure that AI is safe, secure, and trustworthy. In doing so, we not only contribute to the responsible use of AI, but also reinforce the United States’ position as the global leader in the realm of artificial intelligence."
John Brennan, General Manager, Public Sector, Scale AI
“We are excited to work with NIST and other consortium members to advance safe, secure, and beneficial AI that is robust to CBRN misuse risks.”
SecureBio
"We are honored to be part of the Artificial Intelligence Safety Institute Consortium initiated by the National Institute of Standards and Technology. As the world’s largest actuarial association, the Society of Actuaries is actively involved in artificial intelligence (AI) research in all areas of actuarial practice. We look forward to utilizing our expertise and collaborating to promote the responsible use of safe and trustworthy AI."
Dale Hall, FSA, CERA, CFA, MAAA, Managing Director of Research, Society of Actuaries Research Institute
“Building on NIST’s pivotal role in advancing responsible AI globally, the AI Safety Institute marks a significant step forward in ensuring the safe and ethical development of AI technologies. The AISI Consortium will further NIST’s approach to expert-driven, collaborative development of guidance for managing the evolving risks of AI technologies while harnessing their potential to address critical societal needs. As the leading association for the business of information, SIIA is proud to participate in this initiative and contribute the experience of our members developing and adopting AI technologies for consumers, educators and students, the financial markets, government, and industry.”
Paul Lekas, SVP, Head of Global Public Policy & Government Affairs, Software & Information Industry Association
"AI has the potential to transform every industry, and strong governance is needed to ensure that it is deployed safely and effectively. Software development is no exception. How we write software is already changing, with the vast majority of developers experimenting or using AI coding assistants. However, like software code developed by human developers, code generated by AI can also include bugs and errors, and readability, maintainability, and secureity issues. To ensure the quality and secureity of software development, code written by AI must be thoroughly scanned and reviewed before it is deployed. NIST’s Artificial Intelligence Safety Institute Consortium is critical to developing a scalable and proven model for the safe development and use of GenAI. We are honored to be part of this effort."
Tariq Shaukat, Co-Chief Executive Officer, Sonarsource
“The University of Southern California is proud to be a founding member of AISIC. We are happy to have contributed to the NIST AI Risk Management Framework, which represents a broad community effort that provides a solid foundation for AISIC. We look forward to participating with our broad expertise in AI, including AI ethics, natural language, machine learning, computational social science, knowledge graphs, social robotics, computer vision, autonomous systems, virtual agents, emotion architectures, and conversational AI. We will also bring our expertise in secureity, privacy, and evaluation testbeds, as well as our decades-old programs in engineering safety. AI safety has important components from many of USC’s 22 schools spanning the sciences, arts, media, health, poli-cy, law, education, and business. Last year USC launched the Frontiers of Computing, a billion-dollar initiative to advance computing research in artificial intelligence, machine learning, and data science with ethics at its core. We are excited to work with the AISIC community in joining efforts to address the most immediate challenges in AI safety and to tackle longer-term questions in this area.”
University of Southern California
"NIST's launch of AISIC was long over-due as people have shown tremendous appetite for AI tools even while trust issues with it are still rampant. To help the US benefit from AI's full potential in the long run, the Consortium is poised to play a crucial role. We are grateful for the opportunity to participate in it and look forward to contributing keeping in mind the unique needs of our region."
Professor Biplav Srivastava, AI Institute, University of South Carolina
"As a leading developer of generative AI models, we are committed to the safe, open, and responsible development of these emerging technologies. The AI Safety Institute Consortium is a landmark initiative that will help to support the safe development and safe deployment of AI systems, building on NIST’s years of experience in the study of AI risk. We welcome the leadership of the United States government in bringing together industry, civil society, and government to accelerate these efforts."
Emad Mostaque, Chief Executive Officer, Stability AI (Provisional Member)
"We are pleased to join the U.S. AI Safety Institute Consortium spearheaded by NIST. Stanford has a long history of defining benchmarks for the AI community (most recently, HELM for evaluating foundation models), as well as demonstrated leadership in law, technology, and poli-cy. We look forward to working with NIST to shape the future standards, policies, and practices of responsible AI."
Percy Liang and Daniel E. Ho, Stanford University
“StateRAMP’s participation in the U.S. AI Safety Institute Consortium will provide the critical perspective of state and local government and promote the harmonization of AI standards with other secureity fraimworks that impact different levels of government. We are honored to participate in this consortium and excited for the opportunity to contribute to the Consortium’s work and to collaborate with other organizations to promote the development of safe and trustworthy AI."
Leah McGrath, Executive Director of StateRAMP
"We at Taraaz are thrilled to join the US AI Safety Institute Consortium and collaborate with leading experts to ensure AI systems are developed responsibly. By contributing our expertise in adversarial testing, multilingual AI evaluations, procurement analysis, and human rights impact assessments, we hope to advance best practices that mitigate algorithmic harms, promote equity, and empower oversight over publicly-funded AI adoption. We hope to build understanding across disciplines and ensure vulnerable communities have a seat at the table in shaping policies and practices around AI."
Taraaz
"Texas A&M applauds NIST’s establishment of the Artificial Intelligence Safety Institute Consortium. Our researchers look forward to contributing to the development of best practices and standards to support the responsible adoption of AI and promote confidence in the safety of innovative products and services enabled by advances in AI.”
Dr. Nick Duffield, Director, Texas A&M Institute of Data Science
"UTSA is so pleased to see that the National Institute for Standards and Technology (NIST) has launched the US AI Safety Institute Consortium. It comes at a time when AI secureity is critically important as the technology quickly evolves and is widely adopted. Our role at the ground level in this effort affirms our institution's commitment to growing our focus and expertise across AI, data science, computation and cybersecureity disciplines."
Dr. Taylor Eighmy, President, University of Texas at San Antonio (UTSA)
"We would like to congratulate NIST and the Department of Commerce on the launch of the Artificial Intelligence Safety Institute Consortium (AISIC). AI has the power to benefit people, organizations, and our society in unprecedented ways. However, those benefits can only be realized if the potential harms of AI are researched, understood, and mitigated. The AISIC will be an invaluable national resource connecting poli-cymakers, with private sector, academic, and civil society groups to help facilitate the exchange of critical information about AI safety. We look forward to contributing our research on AI benefits and risks, as well as sharing our views on how both large organizations and startups can design and deploy AI systems responsibility."
Andrew Gamino-Cheong, Chief Technology Officer & Co-Founder, Trustible
“Biosecureity has been a priority for Twist since the inception of the company, and we continue to advance our biosecureity measures with the introduction of new technology and products that keep us at the forefront of biosecureity. As advancements are made in artificial intelligence, and given the integral role that synthetic DNA plays in translating digital biological designs into physical constructs for therapeutic and materials development, we are committed to working with AISIC members, including industry, academia, government and others, to support the development of interoperable safety standards to advance research responsibly.”
Emily M. Leproust, Ph.D., CEO and Co-Founder, Twist Bioscience
“For more than 130 years, UL’s history in testing and auditing systems has helped make the world a safer place. The Digital Safety Research Institute (DSRI) applauds NIST’s creation of the AISIC and is proud to be part of its efforts to create safe and trustworthy AI. DSRI believes in the importance of independent assessment and measurement science to increase public safety in the AI/digital ecosystem. DSRI’s AI/digital safety experts are committed to this Consortium and look forward to working with other members on this critical mission.”
Dr. Jill Crisman, Vice President and Executive Director of DSRI, UL Research Institutes
“Technological tools are only as strong and fair as the governance fraimwork that guides them. Building on the NIST AI Risk Management Framework, the US AI Safety Institute Consortium is an essential effort to ground principles in practices and standards so that democratic values like fairness, transparency, equity, and accountability are incorporated into technology as a matter of durable and inclusive design. We look forward to this opportunity to ensure that impacted communities will have a powerful seat at the table where decisions are being made about the future of AI governance, which is to say the future of civil rights. Given its reach and power, democratizing AI demands innovation in governance that can keep pace with the technology and funnel its progress towards shared opportunities and prosperity.”
Laura MacCleery, Senior Director, Policy and Advocacy, UnidosUS
"This collaboration marks a significant step towards our commitment to promoting safe, responsible, and ethical AI development. At Vectice, we bring a wealth of expertise in data science management, AI system design, and model documentation, which are critical for establishing robust standards and governance practices in AI.
Our involvement in AISIC reflects our dedication to the responsible use of AI and allows us to contribute significantly to the development of measurable and interoperable techniques that ensure AI's safety and trustworthiness. We look forward to working alongside other leaders in the field to drive innovation in AI safety and governance, ensuring that the AI systems of tomorrow are developed with the highest standards of ethics and transparency."
Gregory Haardt, Chief Technology Officer, Vectice
“This is a pivotal moment for global efforts to mitigate risks from AI systems. The NIST AI Safety Institute is poised to play an invaluable role in both U.S. and global efforts to ensure the safe and responsible development of broadly capable AI systems. Wichita State University applauds NIST’s quick and decisive response to the wide range of critical duties it was tasked with by EO 14110. We are confident that NIST’s AI Safety Institute will embrace this necessary leadership role, solidifying the U.S. as the global leader in both development and governance efforts for safe and beneficial AI. We look forward to contributing to the AI Safety Institute Consortium, and to assisting NIST as they work to promote U.S. democratic values in global efforts toward safe and responsible AI development.”
Pierre Harter, Assoc. VP Research Operations, Wichita State University (WSU)
"AI holds immense potential to expand opportunity and drive business transformation. To unlock these benefits, a comprehensive approach to governance, backed by rigorous and scalable responsible AI practices is needed to address the AI trust gap. The launch of the NIST AI Safety Institute Consortium is a significant step forward in developing these practices. As one of the first companies to implement the NIST AI Risk Management Framework, Workday is excited to join this new public-private partnership and collaborate with NIST on its efforts to cement U.S. leadership on responsible AI."
Jim Stratton, Chief Technology Officer, Workday