Anthropic has publicly accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of conducting large-scale “distillation attacks” on its Claude models.
In a blog post titled “Detecting and preventing distillation attacks,” Anthropic detailed what it described as industrial-scale efforts to illicitly extract Claude’s capabilities. The companies allegedly created approximately 24,000 fraudulent accounts, bypassing its terms of service and regional restrictions, since Claude is not officially available in China.
These accounts generated more than 16 million exchanges (prompts and responses) with Claude. The technique involved distillation: training the companies’ own models on Claude’s outputs to transfer advanced capabilities such as agentic reasoning, tool use, and coding.
Anthropic emphasized that distillation itself is a legitimate method, one labs routinely use to create smaller versions of their own models, but called this usage “illicit” because it violated Anthropic’s terms of service, involved fraud, and aimed to shortcut independent development.
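At a technical level, distillation trains a smaller “student” model to imitate a larger “teacher” model’s soft outputs (probability distributions) rather than hard labels. The following is a minimal, hypothetical sketch using toy one-dimensional logistic models — it illustrates the general idea only, not any lab’s actual pipeline:

```python
import math
import random

def teacher(x):
    # Toy "teacher": a fixed logistic model whose soft outputs the student imitates.
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

def student_prob(w, b, x):
    # Toy "student": a logistic model with trainable parameters w, b.
    return 1 / (1 + math.exp(-(w * x + b)))

def distill(n_steps=2000, lr=0.5):
    random.seed(0)
    w, b = 0.0, 0.0
    xs = [random.uniform(-3, 3) for _ in range(200)]  # queries sent to the teacher
    for _ in range(n_steps):
        gw = gb = 0.0
        for x in xs:
            t = teacher(x)                 # soft target harvested from teacher output
            p = student_prob(w, b, x)
            gw += (p - t) * x              # gradient of cross-entropy w.r.t. w
            gb += (p - t)                  # gradient of cross-entropy w.r.t. b
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

w, b = distill()
# After training, the student closely tracks the teacher's soft outputs.
```

The key point is that the student never sees the teacher’s weights or training data; querying the teacher’s outputs at scale is enough to transfer its behavior, which is why API-level access alone can enable this.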
Anthropic also highlighted national security risks: distilled models could lack safety guardrails, such as restrictions on bioweapons or cyberattacks, and if open-sourced, such capabilities could spread uncontrollably. It linked this to broader policy arguments, reinforcing the case for U.S. export controls on AI chips, since limited compute access hinders both direct training and large-scale distillation.
This follows similar accusations from OpenAI earlier in February 2026, which claimed DeepSeek and others had distilled its models. Distillation is a standard technique in the field, pioneered years ago and widely used, but the scale, the use of fake accounts, and the alleged terms-of-service violations cross into prohibited territory for proprietary APIs like Claude’s.
Critics on platforms like Reddit and X point out the irony: many frontier models, including Claude, were trained on vast amounts of public data, often raising copyright questions, yet the same companies now cry foul when their outputs are used similarly.
Some view it as geopolitical posturing: Anthropic and other U.S. firms pushing back against rapid advances in Chinese open-source models that challenge closed Western frontier models. No immediate responses from the accused companies were widely reported in initial coverage, though the claims align with ongoing U.S.-China AI tensions.
Anthropic stated it is investing in stronger defenses, such as detection and rate-limiting, and called for industry-wide coordination, including with cloud providers and policymakers. OpenAI made similar accusations against the Chinese AI company DeepSeek, focusing on “distillation” techniques used to replicate U.S. frontier models.
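Anthropic has not published the details of these defenses. One common building block for throttling abusive API usage is a token-bucket rate limiter; the sketch below is a generic illustration of that pattern, not Anthropic’s actual implementation:

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter (illustrative only).

    Each account gets a bucket that refills at `rate` tokens per second,
    up to `capacity`. A request is allowed only if enough tokens remain,
    which caps sustained throughput while permitting short bursts.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Per-account throttling alone would not stop an attacker spreading traffic across tens of thousands of accounts, which is presumably why the post pairs rate-limiting with fraud detection and cross-provider coordination.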
OpenAI sent a memo to the U.S. House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party, often called the House Select Committee on China, accusing DeepSeek of ongoing efforts to “free-ride” on capabilities developed by OpenAI and other U.S. frontier labs through distillation.
DeepSeek allegedly used distillation, training its own models on outputs from more advanced U.S. models like OpenAI’s, to replicate advanced capabilities at lower cost and greater speed. OpenAI reported detecting new, obfuscated methods to evade restrictions, including:

- Accounts linked to DeepSeek employees circumventing access limits.
- Use of obfuscated third-party routers and other masking techniques to hide sources.
- Programmatic code developed by DeepSeek staff to access models and harvest outputs for distillation.

OpenAI described this activity as part of broader, persistent efforts tied to China and, occasionally, Russia, continuing despite its defenses against terms-of-service violations.
OpenAI Highlighted Risks
Distilled models often lack safety guardrails against misuse, such as for bioweapons or cyberattacks, threatening U.S. technological leadership and national security. The accusations built on earlier suspicions from 2025, when DeepSeek’s R1 model launched with outputs strikingly similar to OpenAI’s, prompting reviews of potential improper distillation.
Unlike Anthropic’s broader accusations, OpenAI did not name Moonshot AI or MiniMax in its public disclosures. Its focus remained primarily on DeepSeek, with references to “other U.S. frontier labs” implying possible wider targeting.
These claims align with escalating U.S.-China AI tensions, including debates over export controls on advanced chips, which critics argue distillation circumvents by leveraging API outputs instead of direct training compute. Community reactions again highlight the irony: U.S. labs trained on vast public data, raising copyright issues, yet now decry similar use of their API outputs.
OpenAI framed this as both a business and a security threat, noting that free or low-cost distilled models could undercut subscription-based Western frontier models.