OpenAI and Anthropic agree to share their models with the US AI Safety Institute

OpenAI and Anthropic have agreed to share AI models — before and after release — with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the US AI Safety Institute, wrote in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

The first-of-its-kind agreements take the form of formal but non-binding Memoranda of Understanding. The agency will receive access to each company’s “major new models” ahead of and following their public release, and it describes the arrangement as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with its UK counterpart, the UK AI Safety Institute.

The US AI Safety Institute didn’t name any other AI companies in its announcement. Engadget emailed Google, which began rolling out updated chatbot and image generator models this week, for comment on its omission. We’ll update this story if we hear back.

The agreements come as federal and state regulators try to establish AI guardrails while the rapidly advancing technology is still nascent. On Wednesday, the California State Assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or that are trained using more than a set threshold of computing power. The bill also requires AI companies to build in kill switches that can shut down models if they become “unwieldy or uncontrollable.”

Unlike the non-binding agreements with the federal government, the California bill would have some teeth for enforcement: it gives the state’s attorney general license to sue AI developers that don’t comply, especially during threat-level events. However, the bill still requires one more process vote, as well as the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-anthropic-agree-to-share-their-models-with-the-us-ai-safety-institute-191440093.html?src=rss
