TLDR
- A federal judge in San Francisco issued a preliminary injunction blocking the Trump administration’s ban on government use of Anthropic’s Claude AI
- Judge Rita Lin called the ban “classic illegal First Amendment retaliation”
- The dispute started after Anthropic refused to allow Claude to be used for lethal autonomous weapons or mass surveillance
- Anthropic held 32% of the enterprise AI market as of 2025, ahead of OpenAI at 25%
- The injunction is paused for seven days to allow the government time to appeal
A US federal judge has temporarily blocked the Trump administration’s ban on government use of Anthropic’s AI technology, pausing measures that the company said could cost it billions in lost revenue.
BREAKING: Anthropic has been GRANTED a preliminary injunction re: Pentagon 'supply chain risk' designation by Judge Rita Lin in California but is allowing a stay for one week https://t.co/1xk41AB5zQ
— Hadas Gold (@Hadas_Gold) March 26, 2026
Judge Rita Lin of the US District Court for the Northern District of California issued the preliminary injunction on Thursday. She put the order on hold for seven days to give the government a chance to appeal.
The case centers on a deal struck in July 2025 between Anthropic and the Pentagon. The contract would have made Claude the first frontier AI model approved for use on classified networks.
Negotiations broke down in February 2026. The Pentagon wanted to renegotiate the deal, demanding that Anthropic allow military use of Claude “for all lawful purposes,” with no restrictions.
Anthropic refused. The company said its technology should not be used for lethal autonomous weapons or mass domestic surveillance of Americans.
On February 27, President Trump ordered all federal agencies to stop using Anthropic products. He wrote on Truth Social that the company had made a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
The Defense Department then designated Anthropic as a national security supply chain risk. Anthropic sued in federal court in San Francisco on March 9, arguing that Defense Secretary Pete Hegseth had overstepped his authority.
Judge Questions the Government’s Reasoning
A 90-minute court hearing took place in San Francisco on March 24. Judge Lin questioned government lawyers on whether Anthropic was being punished for publicly criticizing the Pentagon.
In her ruling, Lin said the ban did not appear to be directed at national security interests. “If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude,” she wrote.
She added that the measures appeared “designed to punish Anthropic” and called them “arbitrary, capricious, and an abuse of discretion.”
During the hearing, an Anthropic attorney noted that the Pentagon can review any AI model before deploying it, and that Anthropic has no way to stop a deployed model from working, change how it operates, or see how the military is using it.
What Both Sides Argued
A government lawyer argued that Anthropic destroyed trust during contract negotiations by trying to dictate Pentagon policies. The lawyer said the government was concerned about the risk of “future sabotage” from Anthropic.
Judge Lin rejected that reasoning. She said the Justice Department had no “legitimate basis” to conclude that Anthropic’s stance on restrictions could lead it to become a saboteur.
Anthropic held 32% of the enterprise AI market as of 2025, ahead of OpenAI at 25%, according to Menlo Ventures. A government-wide ban would have put that position at risk.
Anthropic said it was “grateful to the court for moving swiftly.” The company also filed a separate complaint in an appellate court in Washington, DC, focused on procurement law.
The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California.