TL;DR
- The Pentagon’s CTO said Anthropic’s Claude AI has built-in policy preferences that could compromise military effectiveness.
- Anthropic became the first American company ever labeled a supply chain risk by the Defense Department.
- Defense contractors must now certify they do not use Claude in Pentagon-related work.
- Anthropic sued the Trump administration Monday, calling the move “unprecedented and unlawful” and warning hundreds of millions in contracts are at risk.
- Despite the ban, Palantir CEO Alex Karp confirmed his company is still using Claude for U.S. military operations.
The Pentagon designated Anthropic as a supply chain risk earlier this month, making it the first American company to receive that label. The designation has historically been used against foreign adversaries.
Defense Department CTO Emil Michael explained the decision Thursday in an interview on CNBC’s “Squawk Box.” He said Claude’s built-in “constitution” — a document Anthropic uses to shape the model’s behavior — creates policy preferences that could affect how the AI performs in military settings.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said.
Anthropic published the most recent version of Claude’s constitution in January 2026. The company says it plays a “crucial role” in training its models and “directly shapes Claude’s behavior.”
The supply chain designation means defense contractors and vendors must now certify they are not using Claude in any work they do for the Pentagon.
Michael said the decision was “not meant to be punitive.” He also noted that the U.S. government accounts for only a “tiny fraction” of Anthropic’s overall revenue.
Anthropic was founded in 2021 by researchers who left OpenAI. It has built a strong enterprise business, including early contracts with the Defense Department.
Anthropic pushed back hard on the Pentagon’s move. On Monday, the company filed a lawsuit against the Trump administration, calling the supply chain designation “unprecedented and unlawful.”
In the filing, Anthropic said it is being harmed “irreparably” and that hundreds of millions of dollars in contracts are now in doubt.
Pentagon Denies Active Outreach to Companies
Michael dismissed Anthropic’s claims that the government was actively contacting companies and warning them not to use Claude. He called those claims “rumors.”
“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” Michael said.
He also acknowledged that the transition away from Claude will take time. The DOD has a transition plan in place, he said, noting that removing deeply integrated AI tools is more complex than deleting a desktop application.
Claude Still in Use for Military Operations
Despite the designation, Claude is still being used in some military contexts. CNBC previously reported that the AI was used to support U.S. military operations in Iran.
Palantir CEO Alex Karp confirmed Thursday that his company, one of the largest defense contractors in the U.S., is still using Claude.
Echoing his earlier comments, Michael said the Pentagon cannot "just rip out" Anthropic's technology overnight.