
White House Launches AI Safety Consortium

The National Group Will Develop Guidelines for AI Safety, Security and Red-Teaming
NIST launched the Artificial Intelligence Safety Institute Consortium on Feb 8, 2024. (Image: White House)

The White House is recruiting more than 200 artificial intelligence companies, stakeholders and a wide array of civil society organizations for the first-ever U.S. consortium dedicated to AI safety.


The Artificial Intelligence Safety Institute Consortium will develop guidelines for red-teaming, safety evaluations and other security measures, according to an announcement published Thursday by the Department of Commerce. The new coalition, housed under the National Institute of Standards and Technology's AI Safety Institute, aims to serve as a liaison between AI developers and federal agencies. It will also work to develop collaborative research and security guidelines for advanced AI models.

Cybersecurity experts, lawmakers and legal scholars previously raised concerns that AI developers lack comprehensive regulations, standards or even a set of best practices to follow when developing advanced models that can pose significant risks to national security and public health (see: G7 Unveils Rules for AI Code of Conduct - Will They Stick?). The consortium will provide a "critical forum" for the public and private sectors to work together in developing AI safeguards and security standards, according to Bruce Reed, White House deputy chief of staff.

"To keep pace with AI, we have to move fast and make sure everyone - from the government to the private sector to academia - is rowing in the same direction," Reed said in a statement.

The inaugural cohort of AISIC members includes a long list of nonprofits, universities, research groups and major corporations such as Amazon, Adobe, Google, Microsoft, Meta, Salesforce and Visa. It also features prominent academic AI hubs, including the University at Buffalo's Institute for AI and Data Science and the University of South Carolina's AI Institute, as well as leading AI developers such as ChatGPT maker OpenAI.

The White House issued an executive order in October invoking the Defense Production Act and requiring AI developers to share the results of red-teaming and other safety evaluations with the federal government (see: White House Issues Sweeping Executive Order to Secure AI). According to NIST, AISIC is the "largest collection of test and evaluation teams established to date" and will focus on "establishing the foundations for a new measurement science in AI safety."

AISIC will be tasked with establishing a space where AI stakeholders can share knowledge and data, and it will seek to improve information sharing among members of the consortium. The group will also recommend measures to facilitate "the cooperative development and transfer of technology and data between and among consortium members."


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



