Anthropic has delivered one of the most direct and urgent warnings yet from a leading AI lab, stating that the United States can still lock in a meaningful 12- to 24-month advantage in frontier AI capabilities over China — but only if it moves immediately to close critical loopholes in chip exports and prevent advanced model distillation.
In a detailed post published Thursday, Anthropic outlined two starkly different scenarios for the global AI industry in 2028. In one, the US and its allies successfully tighten controls, preserving technological superiority. In the other, continued leaks in hardware and knowledge allow China to rapidly close the gap or even pull ahead in key areas.
How China Is Closing the Gap
Anthropic highlighted two main vectors accelerating China’s progress:
- Persistent weaknesses in chip export controls, which allow Chinese entities to access advanced computing hardware through loopholes, smuggling, or transshipment via third countries despite existing restrictions.
- Distillation attacks, a technique in which Chinese labs use powerful Western “teacher” models (such as Anthropic’s Claude) to train smaller, more efficient “student” models. This process enables rapid capability transfer with far less compute than originally required.
The company stressed the narrow window of opportunity: “If the US and its allies act now to address both issues, it may be possible to lock in a 12-24 month lead in frontier capabilities.”
It added a blunt note of urgency: “The window of opportunity to lock in that lead will not necessarily remain open for long.”
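The distillation technique described above can be illustrated with a toy sketch. This is a hypothetical, minimal example, not Anthropic's description of any real attack: a fixed linear “teacher” classifier produces temperature-softened probabilities, and a “student” is trained only on those soft outputs, never on the original labels or training pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer targets."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear model over 8 features, 3 classes.
X = rng.normal(size=(256, 8))
W_teacher = rng.normal(size=(8, 3))
teacher_probs = softmax(X @ W_teacher, T=2.0)  # soft labels queried from teacher

# "Student" trained purely on the teacher's soft outputs via cross-entropy.
W_student = np.zeros((8, 3))
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_student, T=2.0)
    # Gradient of cross-entropy w.r.t. logits is (student - teacher) probs.
    grad = X.T @ (p - teacher_probs) / len(X)
    W_student -= lr * grad

# Fraction of inputs where the student's top prediction matches the teacher's.
agreement = (softmax(X @ W_student).argmax(axis=1)
             == teacher_probs.argmax(axis=1)).mean()
```

The point of the sketch is that the student recovers most of the teacher's behavior from query access alone, which is why the article treats API-level access to frontier “teacher” models as a capability-transfer channel.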
Why Maintaining the Lead Matters
Anthropic framed technological superiority as essential not just for economic or military advantage, but for the safe development of AI itself. A close “neck-and-neck” race, the company argued, would create dangerous incentives for both sides to rush model releases while cutting corners on safety testing and alignment research.
“A neck-and-neck race between American and Chinese AI labs could make industry and government-led safety and governance efforts more difficult,” it said.
This concern aligns with Anthropic’s founding mission, which emphasizes constitutional AI and responsible development. In a tight competition, the pressure to deploy ever-more-powerful systems faster could outweigh caution, raising risks of unintended consequences, misuse, or loss of control.
The post also carried a direct message about protecting hard-won advantages: “Our past success means that our present task is largely to avoid squandering our advantage: to decide not to make it easier for the CCP to catch up.”
Policy Recommendations
Anthropic called for immediate policy actions, including:
- Strengthening and expanding chip export controls
- Significantly increasing enforcement budgets and resources
- Developing specific measures to detect and prevent large-scale distillation of frontier models
The warning comes at a sensitive geopolitical moment. It was published on the same day President Donald Trump met with Chinese leader Xi Jinping in Beijing, Trump’s first visit to China since 2017, accompanied by a powerful delegation of American tech executives, including Elon Musk, Tim Cook, and Jensen Huang.
The juxtaposition highlights the tension between commercial interests (market access and revenue in China) and national security imperatives. While companies like Nvidia continue to seek controlled sales to China, Anthropic’s intervention underscores the cost of overly permissive policies.
However, not all experts agree with Anthropic’s assessment of the gap. In April, former ByteDance engineer Zhang Chi, now at Peking University, argued that China is actually falling further behind due to chronic shortages of high-quality training data and restricted access to the most advanced chips.
Nevertheless, Anthropic’s perspective carries significant weight given its deep technical expertise and front-row seat in the frontier AI race.
Maintaining even a modest multi-year lead could have profound consequences. It would give the US and its allies more time to shape global AI norms, standards, and safety frameworks aligned with democratic values.
Militarily, it would preserve advantages in autonomous systems, intelligence analysis, and cyber capabilities. Economically, it would help ensure that the enormous productivity gains and new industries created by advanced AI are disproportionately captured by open societies.
Conversely, some analysts believe that losing the lead could accelerate authoritarian applications of AI, complicate efforts to manage existential risks, and shift the global balance of power. A true arms-race dynamic would likely reduce overall safety investment across the industry.
Anthropic’s call to action is therefore seen as a reflection of a growing consensus among some frontier labs that the era of relatively open AI development is ending. The challenge for policymakers is to implement targeted, enforceable controls without stifling American innovation or triggering unintended escalations.