Anthropic’s claim that 16 million Claude interactions were siphoned through 24,000 fake accounts reframes the AI race as a battle over inference access, safeguards, and chip supply — not just model training.
U.S. artificial intelligence firm Anthropic has accused three Chinese AI developers (DeepSeek, Moonshot AI, and MiniMax) of orchestrating what it describes as a coordinated, large-scale “distillation” campaign to extract capabilities from its Claude AI system.
Anthropic said the three labs created more than 24,000 fraudulent accounts that generated over 16 million interactions with Claude. The queries, it alleged, were designed to systematically replicate some of Claude’s most advanced features, including agentic reasoning, tool use, and coding — areas considered differentiators among frontier AI systems.
The accusations land amid heightened geopolitical tension over artificial intelligence, particularly as Washington reassesses export controls on advanced semiconductors to China and as Chinese AI labs close the performance gap with U.S. counterparts.
Distillation as a competitive shortcut
Distillation is a common technique in machine learning in which a large, high-performing model acts as a “teacher” to train a smaller “student” model. Within a single company, it is used to compress models for lower-cost deployment while retaining much of their capability.
Across companies, however, the method becomes controversial. By querying a rival’s model at scale and using the responses as training data, a competitor can approximate performance without replicating the comprehensive research, compute expenditure, or alignment work that went into the original system.
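In classical, single-company distillation, the student is trained to match the teacher's full output distribution rather than a single hard label. A minimal, generic sketch of that loss appears below; the logits, vocabulary size, and temperature are illustrative, and this is the textbook formulation, not Anthropic's or any other lab's pipeline. Cross-company "distillation" via an API works on returned text rather than logits, but the principle of training a student to imitate a teacher's outputs is the same.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperatures yield softer
    # distributions that expose more of the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the standard training signal in knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over a tiny three-token vocabulary.
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(teacher, student)
```

The loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge, which is why high-volume access to a teacher's outputs is valuable training data.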
Anthropic said the alleged campaigns varied in focus and scale. It tracked more than 150,000 exchanges linked to DeepSeek that appeared aimed at strengthening foundational reasoning and alignment, including workarounds for policy-sensitive prompts. Moonshot AI allegedly generated more than 3.4 million exchanges focused on agentic reasoning, tool integration, coding, data analysis, and computer vision. MiniMax accounted for roughly 13 million exchanges targeting agentic coding and orchestration, with Anthropic claiming it observed traffic being redirected toward the latest Claude release shortly after launch.
The scale matters: 16 million interactions represent not casual usage but what Anthropic characterizes as industrialized extraction.
DeepSeek, in particular, has drawn scrutiny since releasing its open-source R1 reasoning model last year, which analysts said approached the performance of leading U.S. frontier labs at a fraction of the cost. The company is reportedly preparing DeepSeek V4, a new model said to outperform both Claude and OpenAI’s ChatGPT in certain coding benchmarks. Earlier this month, OpenAI also accused DeepSeek in a memo to U.S. lawmakers of using distillation techniques to mimic its systems.
Export controls and compute leverage
The dispute is unfolding alongside debate over access to high-end chips. Last month, the Trump administration allowed U.S. firms, including Nvidia, to export advanced AI processors such as the H200 to China, loosening earlier restrictions.
Anthropic linked the alleged distillation campaigns to computing power. “The scale of extraction… requires access to advanced chips,” the company said in a blog post.
It argued that export controls serve a dual function: limiting direct model training and constraining the compute needed for high-volume distillation.
This framing shifts the chip debate. Historically, export controls focused on preventing Chinese firms from training large frontier models from scratch. Anthropic’s claim suggests that even if direct training is restricted, access to sufficient inference compute could enable large-scale replication via API querying.
Policy analysts say this complicates enforcement. Distillation occurs through legitimate product interfaces — paid or public APIs — rather than through overt hacking. That creates a grey zone between normal usage and systematic extraction.
National security dimension
Anthropic also framed the issue as one of security. The company said U.S. developers build safeguards into frontier systems to prevent misuse in areas such as bioweapons design or malicious cyber operations.
“Models built through illicit distillation are unlikely to retain those safeguards,” Anthropic wrote, warning that dangerous capabilities could proliferate if protections are stripped out during replication.
It pointed to the possibility of authoritarian governments deploying advanced AI for offensive cyber operations, disinformation campaigns, and mass surveillance — risks that increase if such systems are open-sourced without embedded safety layers.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, told TechCrunch the allegations were unsurprising.
“It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact,” he said. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
Anthropic said it will continue investing in defensive measures to make distillation harder to execute and easier to detect, while calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”
Such coordination could include tighter API rate limits, improved anomaly detection, contractual enforcement mechanisms, and shared threat intelligence among AI labs. Cloud providers — which host the infrastructure underpinning both U.S. and Chinese AI workloads — may also face pressure to monitor and flag high-volume extraction patterns.
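One of the simpler defenses mentioned above, volume-based anomaly detection, can be illustrated with a sliding-window monitor that flags accounts exceeding a per-window query budget. This is a toy sketch under assumed names and thresholds, not any provider's actual system; real detection would also weigh query content, account linkage, and traffic shape.

```python
import time
from collections import defaultdict, deque

class ExtractionMonitor:
    """Toy sliding-window monitor that flags accounts whose query volume
    exceeds a threshold -- a simplified stand-in for the anomaly
    detection an AI provider might layer on top of API rate limits."""

    def __init__(self, window_seconds=3600, max_queries=1000):
        self.window = window_seconds
        self.max_queries = max_queries
        self.history = defaultdict(deque)  # account_id -> query timestamps

    def record(self, account_id, now=None):
        # Log one query for the account; return True if the account has
        # now exceeded its budget within the sliding window.
        now = time.time() if now is None else now
        q = self.history[account_id]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        return len(q) > self.max_queries
```

A detector like this catches a single noisy account; the campaign Anthropic describes, spread across 24,000 accounts, is precisely the kind of distributed traffic that per-account limits miss, which is why the company calls for shared threat intelligence across labs and cloud providers.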
The broader stakes extend beyond one company. If frontier AI capabilities can be replicated rapidly through sustained querying, the competitive moat built on research expenditure and chip access narrows. In that scenario, advantage may hinge less on breakthrough architecture and more on distribution control, access management, and compute governance.
At the same time, aggressive restrictions carry trade-offs. Limiting chip exports could affect U.S. semiconductor revenues and accelerate domestic Chinese chip development. Restricting API access could constrain legitimate global customers and developers.
Anthropic’s allegations therefore crystallize a central tension in the global AI race: openness versus control. The tools that make AI widely usable (APIs, cloud access, and scalable inference) also create vectors for replication. As Chinese labs close the performance gap with U.S. peers, the contest increasingly revolves not just around building the most advanced model, but around protecting it.