TLDR
- Anthropic accused three Chinese AI firms — DeepSeek, Moonshot, and MiniMax — of running “distillation attacks” on its Claude AI model.
- The firms allegedly created 24,000 fake accounts and generated over 16 million exchanges with Claude to train their own models.
- They used commercial proxy services to bypass Anthropic’s restrictions on Claude access in China.
- MiniMax drove the most traffic, accounting for over 13 million of the 16 million exchanges.
- Anthropic is calling on the AI industry, cloud providers, and lawmakers to coordinate a response.
Anthropic has accused three Chinese AI companies of running coordinated campaigns to extract knowledge from its Claude AI model using a technique called “distillation.”
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax.
These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
— Anthropic (@AnthropicAI) February 23, 2026
The three firms named are DeepSeek, Moonshot AI, and MiniMax. Anthropic says the attacks took place across roughly 24,000 fraudulent accounts and produced over 16 million exchanges with Claude.
Distillation is a training method where a smaller, weaker AI model learns by studying the outputs of a larger, stronger one. Anthropic says distillation is a legitimate practice when a lab applies it to its own models, but illicit when competitors use it to extract another lab’s capabilities without authorization.
“Distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost,” Anthropic wrote in its blog post on Sunday.
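As a rough illustration of the mechanics described above (not Anthropic’s or the accused labs’ actual code; `query_teacher` and the prompts are invented stand-ins), a distillation pipeline queries a stronger “teacher” model and stores its responses as supervised training data for a smaller “student” model:

```python
import json

def query_teacher(prompt: str) -> str:
    # Stand-in for an API call to a large "teacher" model.
    return f"teacher answer for: {prompt}"

def build_distillation_dataset(prompts):
    """Collect (prompt, teacher_output) pairs as fine-tuning data."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = build_distillation_dataset(
    ["Explain recursion.", "Write a sorting function."]
)

# Serialize to JSONL, a common format for supervised fine-tuning of a
# student model on the teacher's outputs.
jsonl = "\n".join(json.dumps(record) for record in dataset)
print(len(dataset))
```

Scaled up to millions of carefully targeted prompts, the same loop can transfer narrow capabilities from the teacher to the student, which is what makes the technique attractive as a shortcut.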
Anthropic’s terms of service block commercial access to Claude in China. To get around this, the three firms allegedly routed their traffic through commercial proxy services, operating networks of tens of thousands of Claude accounts simultaneously.
Once inside, the firms sent large volumes of carefully crafted prompts designed to pull specific skills out of Claude. The responses were then used to train their own models or as data for reinforcement learning.
The attacks focused on Claude’s most advanced features, including agentic reasoning, tool use, coding, data analysis, and computer vision.
MiniMax Led the Traffic
Of the three companies, MiniMax was the biggest offender by volume. Anthropic says MiniMax alone accounted for more than 13 million of the 16 million total exchanges.
DeepSeek, Moonshot AI, and MiniMax are all based in China and each carry multi-billion dollar valuations. None of the three responded to requests for comment.
Anthropic said it identified the firms through IP address correlation, request metadata, infrastructure indicators, and tips from industry partners who spotted the same actors on other platforms.
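Anthropic does not describe its detection pipeline in detail, but the IP-correlation signal it mentions can be sketched roughly as follows (all account IDs and addresses below are invented): many distinct accounts connecting from the same address is one indicator of coordinated, fraudulent account creation.

```python
from collections import defaultdict

# Hypothetical access logs as (account_id, ip_address) pairs. In practice
# these would come from request metadata; the values here are made up.
logs = [
    ("acct_001", "203.0.113.5"),
    ("acct_002", "203.0.113.5"),
    ("acct_003", "198.51.100.7"),
    ("acct_004", "203.0.113.5"),
]

# Group accounts by source IP.
accounts_by_ip = defaultdict(set)
for account, ip in logs:
    accounts_by_ip[ip].add(account)

# Flag IPs shared by multiple accounts as candidates for review.
suspicious = {ip: accts for ip, accts in accounts_by_ip.items()
              if len(accts) > 1}
print(suspicious)
```

A real system would combine this with the other signals Anthropic lists, such as request metadata and infrastructure indicators, since a shared IP alone (e.g., a corporate NAT) is not proof of abuse.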
Anthropic Is Not Alone
Anthropic is not the first US AI company to raise this issue. OpenAI sent an open letter to US lawmakers earlier this month, claiming it had seen activity “indicative of ongoing attempts by DeepSeek to distill frontier models.”
OpenAI first flagged distillation concerns in early 2025, after DeepSeek released a model that users found closely resembled ChatGPT.
Anthropic says it will respond by improving its detection systems, tightening access controls, and sharing threat intelligence with other companies.
The company is also calling for a coordinated response from the wider AI industry, cloud providers, and policymakers. “No company can solve this alone,” Anthropic wrote.
Analysts contacted by CNBC noted that the line between legitimate and illicit distillation can be blurry, and that nuance is needed when evaluating these claims.