TLDR
- Trump orders six-month phaseout of Anthropic AI in agencies
- Pentagon labels Anthropic a supply chain risk
- Dispute centers on lawful use and AI safeguards
- Anthropic says it will challenge the designation in court
President Donald Trump has directed all federal agencies to stop using Anthropic’s artificial intelligence technology. The order follows a dispute between the AI firm and the Pentagon over the military use of its systems.
The White House confirmed that agencies must begin a six-month phaseout of Anthropic products. The decision came after negotiations between Anthropic and the Department of Defense failed to reach an agreement before a set deadline.
Pentagon Ethics Dispute Over AI Access
The conflict began during contract talks between Anthropic and the Department of Defense. The Pentagon requested that its contractors be allowed to use Anthropic’s AI tools for “all lawful purposes.” Anthropic declined to revise certain safeguards in its terms of service. The company restricts the use of its Claude model for mass domestic surveillance and fully autonomous weapons.
Defense Secretary Pete Hegseth said the company’s refusal created concerns for national security operations. He announced that Anthropic would be designated a “supply chain risk.” Such a designation would bar military contractors and suppliers from engaging in commercial activity with Anthropic.
The label has rarely been applied to American companies. Anthropic stated that the designation would be “legally unsound” and unprecedented. The company added that it would challenge any such action in court.
Trump Orders Immediate Halt Across Agencies
Trump announced the directive on Truth Social. He wrote that every federal agency must “IMMEDIATELY CEASE” using Anthropic technology. “We don’t need it, we don’t want it, and will not do business with them again,” Trump stated. He also warned of civil and criminal consequences if the company failed to cooperate during the transition.
The president said agencies such as the Department of Defense would have six months to complete the phaseout. Contractors working on military projects may also be required to discontinue Anthropic systems.
Anthropic said it had not received direct communication from the White House regarding the final decision, but noted it would work to enable a smooth transition if required. The dispute became public after months of private discussions. Anthropic said the proposed contract language would undermine its stated protections through broadly worded legal terms.
Political Response and AI Industry Context
Lawmakers criticized the administration’s decision. Senator Elizabeth Warren accused the administration of pressuring the company to remove safeguards, saying the action could strip away “common sense guardrails” related to surveillance and autonomous weapons. Senator Ed Markey described the designation as an attempt to damage an American firm.
Anthropic was awarded a Pentagon contract last summer valued at up to $200 million. Other companies, including OpenAI and Google, received similar frontier AI contracts. Anthropic describes itself as a Public Benefit Corporation. It states that its mission is to develop advanced AI systems for the benefit of humanity. Generative AI models such as Claude can produce text, code, and images.
These systems are used in commercial and government settings. The dispute centers on how AI tools should be used in defense work. The administration maintains that lawful access is necessary for military operations. Anthropic maintains that certain uses conflict with its policies. As federal agencies begin the phaseout process, the legal and contractual questions remain unresolved.