The dispute between Anthropic and the Department of Defense is quickly becoming a broader test of how far the government can go in policing AI companies’ policies—and how much support those companies can rally from the wider research community.
A sizable group of top AI researchers had already signed a public letter backing Anthropic. Now 37 of them have taken a more formal step, signing an amicus brief filed with the court Monday.
The filing underscores how the clash is evolving from a narrow contract dispute into something bigger: a test of whether the government can effectively blacklist an American AI company for setting limits on how its technology is used. The outcome could shape how much independence AI companies have to impose safety guardrails, especially when those limits collide with national security priorities.
The group behind the amicus brief includes Google chief scientist Jeff Dean, along with 19 researchers from OpenAI and 10 from Google DeepMind. The researchers filed the brief in their personal capacities, not as representatives of their respective companies.
The brief is intended to support Anthropic’s lawsuit against the government. Anthropic is suing over harms stemming from the Pentagon’s designation of the company as a “supply chain risk,” a label normally reserved for companies in adversary countries. The designation means the AI company can no longer do business with the government or its contractors.
The Defense Department (or the Department of War, as it now calls itself) was angered by Anthropic’s refusal to drop its policies against the use of its AI in autonomous weapons targeting and in synthesizing data from the mass surveillance of U.S. citizens.
In the suit filed Monday in a federal district court in San Francisco, Anthropic called the DoD’s designation “unprecedented and unlawful” and alleged that the government is retaliating against the company for exercising its First Amendment rights. Anthropic believes it could lose “hundreds of millions of dollars” in business.
The amicus brief argues that the Pentagon’s move could affect not just Anthropic but the broader AI industry.
“We wanted to make sure we were arming the court with an understanding of the industry’s perspective,” Nicole Schniedman, a Protect Democracy attorney whose name appears atop the brief, tells Fast Company. “It’s critical [that] the brief acknowledges that the use of this authority by the defense department is extraordinarily concerning; it is unprecedented to label a domestic [company] a supply chain risk for taking a stand on safety guard rails.”
The brief was filed on the researchers’ behalf by the AI for Democracy Action Lab at the nonprofit Protect Democracy, which describes itself as a “nonpartisan, anti-authoritarianism group.”
Schniedman characterized the group of signees as a “convergence of different stakeholders who both saw the urgency and just what’s at stake . . . with this escalation and threat tactics that Anthropic has been encountering, and what it means for our democracy to have a private company that is putting forward pretty widely aligned-on industry best practices and guard rails around two very high-risk and concerning applications of AI.”
The industry support for Anthropic seems to be expanding. Microsoft filed a separate amicus brief in support of Anthropic with the court on Tuesday. The tech giant urged the federal court to grant Anthropic the temporary restraining order it requested, which would delay the DoD’s “supply chain risk” designation while the court hears the case.
Microsoft, Google, and Amazon AWS, the three biggest cloud services providers, have all said they will continue distributing Anthropic models through their platforms, though not for defense-related work.
Schniedman says the Defense Department has yet to clearly explain why it considers Anthropic a national security threat. Defense Secretary Pete Hegseth’s announcement of the Pentagon’s intent, posted on X, made no attempt at a legal argument; neither did the formal letter he sent to Anthropic last week making the supply chain risk designation official. Earlier on the day of the announcement, President Donald Trump said in an angry Truth Social post that government agencies should “cease all use of Anthropic’s technology,” though he stopped short of calling the company a security threat.
As more AI companies and researchers line up in support of Anthropic, the chance of a major rift between the tech industry and the Trump administration increases. Many tech industry titans—people like Marc Andreessen, David Sacks, Elon Musk, Sundar Pichai, Tim Cook, and Jensen Huang—supported Trump’s bid for reelection in 2024 and have continued their support, including financial support, during his second term. In return, they expected four years of minimal government oversight as the industry rolled out trillions of dollars in AI infrastructure and services.
Perhaps the Trump administration thought that, since Anthropic CEO Dario Amodei didn’t fund Trump’s campaign or attend his inauguration, it was OK to label the company “woke” and then set out to seriously harm its business. After all, other Trump-supporting AI companies like OpenAI, xAI, and Google were ready to provide their AI models to the Pentagon. OpenAI signed its new Pentagon contract just days after Anthropic was ejected.
Still, the administration’s treatment of Anthropic has now drawn in major AI researchers, cloud providers, and some of the industry’s largest companies. What might have been a narrow contract dispute is starting to look more like a test of how much leverage the government has over the companies building the next generation of AI systems.