TL;DR
- The Pentagon gave Anthropic a formal “supply chain risk” designation, effective immediately.
- The label bars government contractors from using Anthropic’s Claude AI in Pentagon work.
- Claude was reportedly being used for military operations in Iran and Venezuela.
- Anthropic CEO Dario Amodei says the company will challenge the designation in court.
- The designation is typically reserved for foreign adversaries, like China’s Huawei.
The U.S. Department of Defense has officially labeled Anthropic a supply chain risk, a designation that blocks government contractors from using the company’s Claude AI in Pentagon-related work.
“The Defense Department has officially told Anthropic it is a supply-chain risk and will be cut off from partners who work with the Pentagon, following through on a threat to escalate its battle with the artificial-intelligence company.”
— The Wall Street Journal (@WSJ), March 6, 2026
The move is effective immediately and puts Anthropic in rare company. The label has historically been used against foreign adversaries, most famously China’s Huawei.
Anthropic CEO Dario Amodei confirmed the designation in a public statement. He said the scope is narrow and only applies to direct Pentagon contract work, not all use of Claude by companies that also hold military contracts.
“It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War,” Amodei wrote.
Despite the restriction, Claude is deeply embedded in the U.S. military’s systems. According to sources familiar with the matter, Claude has been used to support military operations in both Iran and Venezuela, including analyzing intelligence and assisting with operational planning.
Removing it will be difficult. One analyst described the process as “painful,” given how widely the technology has been integrated.
Why the Dispute Happened
The conflict between Anthropic and the Pentagon has been building for months. At the center of it is a disagreement over safeguards.
Anthropic has refused to allow Claude to power autonomous weapons or mass domestic surveillance. The Pentagon argued it should be able to use the technology as needed, as long as it complies with U.S. law.
The dispute went public earlier this year and escalated this week. An internal Anthropic memo, originally written last Friday and published Wednesday by The Information, added fuel to the fire. In it, Amodei suggested Pentagon officials disliked Anthropic partly because “we haven’t given dictator-style praise to Trump.”
Amodei apologized for the memo’s publication. Anthropic investors were reportedly working to contain the fallout.
Pentagon Chief Technology Officer Emil Michael posted on X Thursday night, stating there is no active Department of Defense negotiation with Anthropic.
What Happens Next
Amodei said in his statement that Anthropic and the Pentagon had been discussing ways the company could continue working with the military without dropping its safety restrictions. No agreement has been reached.
Amodei confirmed Anthropic plans to challenge the designation in court.
Microsoft reviewed the designation and concluded that Claude can still be available to its customers through platforms like Microsoft 365, GitHub, and its AI Foundry — except for Department of War contracts.
Palantir’s Maven Smart System, which provides militaries with intelligence analysis and weapons targeting, had built multiple workflows using Anthropic’s Claude.
Amazon, a major investor in Anthropic, had not commented at the time of reporting.