The artificial intelligence (AI) startup Anthropic laid out a “targeted” framework on Monday, proposing a series of transparency rules for the development of frontier AI models.
The framework seeks to establish “clear disclosure requirements for safety practices” while remaining “lightweight and flexible,” the company underscored in a news release.
“AI is advancing rapidly,” it wrote. “While industry, governments, academia, and others work to develop agreed-upon safety standards and comprehensive evaluation methods (a process that could take months to years), we need interim steps to ensure that very powerful AI is developed securely, responsibly, and transparently.”
Anthropic’s proposed rules would apply only to the largest developers of frontier models, the most advanced AI systems.
They would require developers to create and publicly release a secure development framework detailing how they assess and mitigate unreasonable risks. Developers would also be obligated to publish a system card summarizing their testing and evaluation procedures.
“Transparency requirements for Secure Development Frameworks and system cards could help give policymakers the evidence they need to determine if further regulation is warranted, as well as provide the public with important information about this powerful new technology,” the company added.
The AI firm’s proposed framework comes on the heels of the defeat last week of a provision in President Trump’s tax and spending bill that initially sought to ban state AI regulation for 10 years.
Anthropic CEO Dario Amodei came out against the measure last month, calling it “far too blunt an instrument” to mitigate the risks of the rapidly evolving technology. The AI moratorium was ultimately stripped out of the reconciliation bill before it passed the Senate.
The company’s framework drew praise from the AI advocacy group Americans for Responsible Innovation (ARI), which credited Anthropic with “moving the debate from whether we should have AI regulations to what those regulations should be.”
“We’ve heard many CEOs say they want regulations, then shoot down anything specific that gets proposed, so it’s nice to see a concrete plan coming from industry,” Eric Gastfriend, executive director at ARI, said in a statement.
“Anthropic’s framework advances some of the basic transparency requirements we need, like releasing plans for mitigating risks and holding developers accountable to those plans,” he continued. “Hopefully this brings other labs to the table in the conversation over what AI regulations should look like.”