TLDR
- The U.S. military used Anthropic’s Claude AI in airstrikes on Iran hours after Trump banned federal agencies from using it
- Claude was used for intelligence assessments, target identification, and battle simulations
- Trump ordered a six-month phase-out of Anthropic’s tools, calling the company a “supply chain risk”
- OpenAI struck a deal with the Pentagon to deploy its models in classified networks shortly after
- Anthropic refused Pentagon demands for unrestricted use of its AI, citing ethical limits on surveillance and autonomous weapons
The U.S. military carried out airstrikes on Iran using Anthropic’s Claude AI on Saturday, just hours after President Trump ordered federal agencies to stop using the technology.
JUST IN: 🇺🇸🇮🇷 US used Anthropic's Claude AI for its military operations during strikes on Iran, WSJ reports.
— Watcher.Guru (@WatcherGuru) March 1, 2026
U.S. Central Command used Claude for intelligence assessments, target identification, and battle simulations during the operation, according to people familiar with the matter. The strikes were conducted alongside Israel.
Iran’s Supreme Leader Ayatollah Ali Khamenei was killed in the strikes. Iranian state media confirmed his death after he was targeted at his office in Tehran. Iran declared a 40-day mourning period in response.
Trump had ordered all federal agencies to “immediately cease” using Anthropic’s tools the day before. He called the company “leftwing nut jobs” and said it was putting “American lives” at risk.
The Pentagon also designated Anthropic a “supply chain risk” and announced a six-month plan to phase out its systems. Defense Secretary Pete Hegseth said American warfighters would not be “held hostage by the ideological whims of Big Tech.”
The dispute between Anthropic and the Pentagon had been building for months. The Pentagon demanded unrestricted use of Claude for any “lawful” military purpose. Anthropic refused, saying it would not remove safeguards against domestic surveillance or strikes carried out without human involvement.
Anthropic said on Friday that “no amount of intimidation or punishment from the Department of War will change our position.”
Claude had already been embedded in military operations through partnerships with Palantir and Amazon Web Services. It was also used during the January operation that led to the capture of Venezuelan President Nicolás Maduro in Caracas.
OpenAI Moves In
Hours after the Pentagon cut ties with Anthropic, OpenAI CEO Sam Altman announced a deal to deploy OpenAI’s models within the Defense Department’s classified network.
Altman said the agreement reflected OpenAI’s principles, including prohibitions on domestic mass surveillance and requirements for human oversight of weapons use. He also called on the Pentagon to offer the same terms to all AI companies.
OpenAI declined to say whether its services would directly replace Anthropic’s work for the department.
The Pentagon had previously signed multi-year AI contracts worth up to $200 million each with several companies, including Anthropic, OpenAI, Google, and Elon Musk’s xAI.
Funding and Rivalry
OpenAI announced a $110 billion funding round on Friday, valuing the company at $730 billion. Anthropic had raised $30 billion earlier in February.
Both companies are pushing toward initial public offerings, possibly this year. Anthropic CEO Dario Amodei previously worked at OpenAI before leaving in 2020 over concerns about safety being deprioritized.
AI experts say it will take months to fully replace Claude across military systems, given how deeply the technology is integrated with partners like Palantir.