By Jason Nelson
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has reached agreements with top AI developers OpenAI and Anthropic establishing formal collaborations with its U.S. AI Safety Institute (AISI), the agency said on Thursday.
The institute will “receive access to major new models from each company prior to and following their public release,” according to the announcement. Anthropic develops Claude, while OpenAI offers ChatGPT.
The arrangement will allow the institute to evaluate the capabilities and safety risks of the respective AI models.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” agency director Elizabeth Kelly said in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
Anthropic and OpenAI did not immediately respond to requests for comment from Decrypt.
“Looking forward to doing a pre-deployment test on our next model with the U.S. AISI,” Anthropic co-founder Jack Clark wrote on Twitter. “Third-party testing is a really important part of the AI ecosystem and it's been amazing to see governments stand up safety institutes to facilitate this.”
“We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models,” OpenAI co-founder and CEO Sam Altman wrote on Twitter. “For many reasons, we think it's important that this happens at the national level. [The] U.S. needs to continue to lead!”
The issue of AI safety has permeated every level of the industry, with many leading experts and executives leaving OpenAI over concerns about its practices—in some cases forming rival companies centered around cautious development.
Governments are also concerned. The Biden Administration launched the AISI in October 2023, after President Biden issued a sweeping executive order aimed at reining in artificial intelligence development.
In February, the Biden Administration announced the first members of what it called the AISI Consortium (AISIC). The AISIC included several high-profile AI firms, including OpenAI, Anthropic, Google, Apple, NVIDIA, Microsoft, and Amazon.
The U.S. AISI said it will share its findings on the OpenAI and Anthropic models with its counterpart, the U.K. AI Safety Institute.