
On Tuesday, a San Francisco federal court became the backdrop for a heated hearing in which Anthropic, creator of the Claude AI models, sought an emergency injunction to halt a federal blacklisting orchestrated by the Trump administration.
The conflict centers on a fundamental disagreement over how to use artificial intelligence in warfare. Anthropic alleges the government is retaliating against it because it refused to allow its AI technology to be used for “fully autonomous lethal weapons” or the “mass surveillance of Americans.” In response, the Pentagon took the historic step of designating the U.S.-based company a “supply chain risk,” the first time an American firm had ever received such a label.
Inside the Anthropic and Pentagon lawsuit
During the hearing, U.S. District Judge Rita Lin didn’t mince words, suggesting that the Pentagon’s move “looks like an attempt to cripple” Anthropic. The judge pressed government lawyers on the discrepancy between official policy and the administration’s social media rhetoric.
The government’s stance is that Anthropic’s “corporate red lines” make it an unreliable partner. According to the Department of Justice, the Pentagon fears that Anthropic could theoretically “sabotage” or disable its technology mid-operation if the company decided a military action crossed its ethical boundaries. Anthropic’s counsel, Michael Mongan, countered that this is an unlawful overreach. He stated the government is “leveraging its powers to punish a major American company for the sin of expressing its views.”
Billions at stake and “computers at war”
Anthropic argues that without an injunction, it faces “reputational and economic harm” totaling billions of dollars. The ban doesn’t just affect direct government contracts: it also forces major contractors like Amazon, Microsoft, and Palantir to certify that they aren’t using Claude in any military-related work.
Ironically, the military is already deeply dependent on the very technology it is trying to ban. Reports indicate that the U.S. government’s classified networks currently integrate Claude, and that the military is also using it for target analysis in ongoing operations, such as the conflict with Iran. Disentangling these systems would be a massive, months-long undertaking that officials acknowledge would cause significant disruption.
U.S. government reaching out to Anthropic rivals
Faced with Anthropic’s stance, the Trump administration is pivoting toward rival firms like OpenAI and xAI, both of which have reportedly agreed to permit “all legal uses” of their technology. Meanwhile, the court must decide whether a private company has a First Amendment right to set safety guardrails on its products.
Judge Lin has taken the arguments under advisement. If she grants the injunction, Anthropic can continue its government work while the broader lawsuit proceeds. If she denies it, the “supply chain risk” label sticks, potentially forcing one of America’s leading AI labs out of the federal market entirely.
The post AI at War: Anthropic Fights the Pentagon’s ‘Unprecedented’ Blacklist in Court appeared first on Android Headlines.