DeepSeek, the Chinese artificial intelligence lab whose low-cost model unsettled global markets last year, has broken with industry convention by withholding early access to its forthcoming V4 model from major U.S. chipmakers, two sources familiar with the matter told Reuters.
Instead of providing pre-release builds to Nvidia and Advanced Micro Devices, DeepSeek gave domestic suppliers, including Huawei Technologies, several weeks of lead time to optimize for the model. The decision is more than a technical adjustment; it signals a strategic realignment within China’s AI ecosystem.
AI developers typically collaborate closely with hardware vendors ahead of major model launches. Pre-release access allows chipmakers to fine-tune compilers, memory management systems, and runtime libraries to maximize throughput and efficiency. That process is particularly critical for large foundation models, where marginal performance gains can significantly affect deployment economics.
DeepSeek’s previous releases involved cooperation with Nvidia’s engineering teams. The absence of such collaboration for V4 marks a notable shift.
From a narrow commercial perspective, the immediate revenue impact on Nvidia and AMD may be limited.
“The impact to Nvidia and AMD for general data accelerators is minimal — most enterprises are not running DeepSeek, which serves as a benchmarking model more than anything else,” said Ben Bajarin, CEO of research firm Creative Strategies.
He noted that advances in AI development tools are shortening optimization cycles “from months to weeks.”
Yet optimization windows are not merely operational conveniences. They help shape ecosystem dominance. When a model is tuned early for a specific architecture, it can reinforce software-hardware lock-in, steering future deployments toward that platform.
By giving Huawei and other domestic chipmakers a head start, DeepSeek effectively directs performance alignment toward China’s indigenous silicon stack. This is particularly significant as Chinese AI developers work to reduce dependence on U.S. accelerators amid tightening export controls.
The move arrives as U.S.-China technology tensions deepen. A senior Trump administration official told Reuters that DeepSeek’s latest model was trained on Nvidia’s most advanced Blackwell chips within mainland China — a claim that, if substantiated, could raise compliance questions under U.S. export restrictions. Licenses for cutting-edge training processors remain tightly controlled.
According to the official, DeepSeek may seek to remove technical signatures revealing reliance on U.S. hardware and publicly assert that Huawei chips were used in training. It remains unclear whether DeepSeek secured approval to acquire inference-focused chips such as Nvidia’s H20 or AMD’s MI308, which were allowed to resume limited shipments to China last year.
DeepSeek’s rise has been rapid and consequential. Since its breakout in January 2025, its models have been downloaded more than 75 million times on Hugging Face. Over the past year, Chinese open-source models collectively surpassed those from any other country on the platform, reshaping the global competitive landscape.
Open-source traction matters strategically. Models distributed widely through platforms like Hugging Face can become de facto standards for benchmarking, experimentation, and downstream development. If such models are optimized primarily for domestic Chinese hardware, that alignment could gradually shift developer preferences and infrastructure investment patterns.
This dynamic also intersects with export control debates in Washington. The U.S. government has sought to limit China’s access to advanced AI training chips while allowing some inference-oriented processors to ship. The rationale is to constrain China’s ability to build frontier models while preserving certain commercial ties.
However, if Chinese labs are increasingly aligning software with domestic chips — and potentially obfuscating the role of U.S. hardware in training — the effectiveness of hardware-centric export controls could erode. Software optimization and distributed training techniques can mitigate hardware disadvantages over time, especially if supported by state-backed investment.
China’s broader policy objective appears clear: build a vertically integrated AI stack encompassing chips, models, and applications. DeepSeek reinforces that ambition by prioritizing domestic chipmakers in its model development cycle.
The near-term financial consequences for Nvidia and AMD may be modest, particularly if DeepSeek remains more influential as a benchmarking reference than as a dominant enterprise platform. The longer-term significance lies in ecosystem alignment. Early access determines which hardware architectures are favored, which toolchains are refined, and which supply chains are reinforced.
As several Chinese AI firms prepare new model releases this month, the pattern suggests a coordinated acceleration in domestic capability building. DeepSeek’s decision to exclude U.S. chipmakers from early optimization access reflects not just competitive maneuvering but a recalibration of technological allegiance.