TLDR
- OpenAI CEO Sam Altman admitted the Pentagon deal was rushed and “looked opportunistic and sloppy”
- OpenAI is revising the deal to clarify its AI tools won’t be used for domestic surveillance of U.S. citizens
- The Pentagon confirmed OpenAI’s tools won’t be used by intelligence agencies like the NSA
- The deal came hours after Trump banned federal agencies from using Anthropic’s AI tools
- Altman publicly called for Anthropic to be offered the same contract terms as OpenAI
OpenAI Revises Pentagon Deal After Backlash, Altman Admits Rushed Rollout
OpenAI CEO Sam Altman has admitted the company’s recently announced deal with the U.S. Department of Defense was handled poorly. He shared what he described as an internal memo on X, saying the company “shouldn’t have rushed” to announce the agreement.
Here is re-post of an internal post:
We have been working with the DoW to make some additions in our agreement to make our principles very clear.
1. We are going to amend our deal to add this language, in addition to everything else:
"• Consistent with applicable laws,…
— Sam Altman (@sama) March 3, 2026
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” Altman wrote.
The deal was announced last Friday, just hours after President Donald Trump directed federal agencies to stop using Anthropic’s AI tools. It also came hours before the U.S. carried out strikes on Iran.
The timing drew immediate backlash online. Many users reportedly deleted ChatGPT and switched to Anthropic’s Claude app following the announcement.
OpenAI is now working with the Pentagon to revise the contract terms. The changes aim to make the company’s principles clearer in the formal agreement.
One key addition states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” The Pentagon also affirmed that OpenAI’s tools will not be used by intelligence agencies such as the NSA.
Any future services to those agencies would require a separate contract modification, according to Altman.
How the Anthropic Dispute Set the Stage
The situation follows a breakdown in talks between Anthropic and the Defense Department. Anthropic had sought guarantees that its tools would not be used for domestic surveillance or to develop autonomous weapons without human oversight.
Defense Secretary Pete Hegseth said Friday that Anthropic would be designated a supply-chain threat after negotiations collapsed. For months, government officials had reportedly criticized Anthropic for being too focused on AI safety.
The dispute became public after it emerged that Anthropic’s Claude AI had been used by the U.S. military during a January raid to capture Venezuelan President Nicolás Maduro. Anthropic did not publicly object to that use at the time.
Anthropic was, in fact, the first AI lab to deploy models on the Defense Department’s classified network, under an initial deal signed last year.
Altman Calls for Equal Treatment of Anthropic
Altman also used his post to address the fallout for Anthropic directly. He said he had spoken to officials over the weekend and pushed back against the supply-chain threat designation.
“I reiterated that Anthropic should not be designated as a supply chain risk, and that we hope the Department of Defense offers them the same terms we’ve agreed to,” he wrote.
Anthropic was founded in 2021 by former OpenAI employees who left over disagreements about the company’s direction.
It has positioned itself as a safety-focused AI company. The Pentagon has not yet responded publicly to Altman’s call for equal terms.