Nvidia has privately notified Chinese clients of its intention to initiate shipments of its advanced H200 AI chips before the Lunar New Year holiday, which commences on February 17, 2026, according to three sources with direct knowledge of the matter who spoke to Reuters.
The initial batches would be fulfilled from existing inventory, comprising 5,000 to 10,000 chip modules—translating to roughly 40,000 to 80,000 individual H200 processors—potentially valued in the hundreds of millions given current market pricing.
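As a rough sanity check on those figures, the sketch below converts the reported module counts into GPU counts and back-solves the per-unit price implied by a hypothetical batch value. The eight-GPUs-per-module factor follows from the report's own numbers; the $500 million total is an illustrative assumption, not a reported figure.

```python
# Back-of-envelope arithmetic on the reported initial batch.
# Module counts come from the report; the 8-GPUs-per-module factor is implied
# by the report's own 40,000-80,000 figure, and the $500 million batch value
# is a hypothetical assumption used only to illustrate per-unit pricing.
GPUS_PER_MODULE = 8

modules = (5_000, 10_000)
gpus = tuple(m * GPUS_PER_MODULE for m in modules)   # (40_000, 80_000)

hypothetical_batch_value = 500_000_000  # USD, assumption for illustration
implied_price_per_gpu = tuple(hypothetical_batch_value / g for g in gpus)

print(f"GPUs shipped: {gpus[0]:,} to {gpus[1]:,}")
print(f"Implied price per GPU at a $500M batch: "
      f"${implied_price_per_gpu[1]:,.0f} to ${implied_price_per_gpu[0]:,.0f}")
```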
In parallel, the U.S. semiconductor leader has outlined plans to ramp up H200 production capacity, with new order slots expected to open in the second quarter of 2026 to accommodate sustained demand.
However, substantial uncertainties remain: Beijing has yet to grant import approvals for the H200, and the proposed timeline could slip depending on regulatory decisions by Chinese authorities.
“The whole plan is contingent on government approval,” one source emphasized. “Nothing is certain until we get the official go-ahead.”
Nvidia and China’s Ministry of Industry and Information Technology (MIIT), which oversees semiconductor imports, did not immediately respond to requests for comment.
The prospective shipments represent a pivotal milestone, marking the first legal deliveries of the H200 to China since President Donald Trump’s early December announcement permitting such sales under a revamped U.S. export framework.
Trump’s policy imposes a 25% fee on transactions, collected by the U.S. government, while mandating Commerce Department vetting of “approved customers” to mitigate national security risks.
This stands in stark contrast to the Biden administration’s stringent bans on advanced AI chips to China, enacted in October 2022 and expanded thereafter, which prohibited exports of any processors matching or exceeding the Nvidia A100’s capabilities, citing fears of bolstering Beijing’s military AI advancements.
The Trump administration’s rationale posits that controlled sales will keep Chinese entities “addicted” to U.S. technology, diminishing incentives for domestic rivals like Huawei to accelerate their own developments, thereby preserving American leadership in AI.
An inter-agency review of license applications, involving the Commerce, State, Energy, and Defense Departments, was initiated last week to evaluate these sales, with a 30-day window for agency input before a final presidential decision.
U.S. lawmakers have demanded transparency in this process, urging disclosure of license reviews to ensure accountability.
The H200, a cornerstone of Nvidia’s previous-generation Hopper architecture, boasts 141 GB of high-bandwidth HBM3e memory, 4.8 TB/s bandwidth, and superior tensor performance, making it ideal for large-scale AI training and inference.
It significantly outperforms the H20, Nvidia’s export-compliant variant for China, by an estimated six times in key metrics, offering a vital boost for applications in generative AI and data centers.
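To make that comparison concrete, the sketch below sets the article's H200 memory figures alongside commonly cited H20 specifications. The H20 numbers and both compute figures are assumptions drawn from public spec sheets rather than from the report, so the ratios are indicative only.

```python
# Rough H200 vs. H20 comparison illustrating the "roughly six times" gap.
# H200 memory figures are from the article; the compute numbers and all H20
# figures are commonly cited public specs, used here as assumptions.
specs = {
    "H200": {"hbm_gb": 141, "bw_tb_s": 4.8, "bf16_tflops_dense": 989},  # compute assumed
    "H20":  {"hbm_gb": 96,  "bw_tb_s": 4.0, "bf16_tflops_dense": 148},  # all assumed
}

compute_ratio = specs["H200"]["bf16_tflops_dense"] / specs["H20"]["bf16_tflops_dense"]
memory_ratio = specs["H200"]["hbm_gb"] / specs["H20"]["hbm_gb"]

print(f"Dense BF16 compute: ~{compute_ratio:.1f}x")  # ~6.7x
print(f"HBM capacity:       ~{memory_ratio:.1f}x")   # ~1.5x
```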
Although the H200 has been eclipsed by the newer Blackwell and upcoming Rubin lines, which now dominate Nvidia's production priorities, its resulting scarcity has not diminished its appeal in global markets.
Major Chinese firms, including Alibaba Group, ByteDance (parent of TikTok), Tencent, and Baidu, have voiced keen interest, positioning the H200 as a game-changer for their AI ambitions amid a domestic market projected to exceed $100 billion in AI spending by 2027.
Beijing’s response, however, remains guarded: Emergency internal meetings this month have explored countermeasures, such as mandating bundled purchases where each H200 must be paired with a specified ratio of indigenous chips to foster local innovation.
China’s drive for semiconductor self-sufficiency is intensifying, though Huawei’s Ascend 910C, with 12,032 TPP (total processing performance) and 3.2 TB/s of memory bandwidth, still falls short of the H200’s 15,840 TPP and 4.8 TB/s, per industry benchmarks.
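For context, TPP in the U.S. export-control rules is generally calculated as a chip's peak non-sparse operations per second multiplied by the bit length of the operation. The short sketch below back-solves the implied BF16 throughput from the TPP figures above and restates the compute and bandwidth gaps; the formula and the derived TFLOPS values are stated as assumptions for illustration, not confirmed specifications.

```python
# TPP ("total processing performance", the U.S. export-control metric) is
# generally computed as peak non-sparse TOPS multiplied by the operation's
# bit length. The TFLOPS values below are back-solved from the TPP figures
# cited above as a consistency check, not independently confirmed specs.
def tpp(peak_tflops: float, bit_length: int = 16) -> float:
    return peak_tflops * bit_length

h200_bf16 = 15_840 / 16         # ~990 TFLOPS implied by 15,840 TPP
ascend_910c_bf16 = 12_032 / 16  # ~752 TFLOPS implied by 12,032 TPP

print(f"Compute gap (TPP): {tpp(h200_bf16) / tpp(ascend_910c_bf16):.2f}x")  # ~1.32x
print(f"Bandwidth gap:     {4.8 / 3.2:.1f}x")                               # 1.5x
```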
Upcoming models like the Ascend 960, slated for 2027, aim to bridge this gap, supported by state subsidies exceeding $50 billion in 2025 alone.
In the interim, grey-market smuggling has provided limited access to restricted Nvidia hardware, with U.S. enforcers recently dismantling networks involving over $160 million in illicit shipments.
Critics in Washington, including former officials and bipartisan lawmakers, decry the policy as a “disastrous” concession that could erode U.S. advantages, potentially fueling China’s military AI programs.
Market sentiment has been buoyed, with Nvidia shares rising intermittently on the news, though gains have been capped by lingering uncertainties.
How this resolves will depend on swift approvals in both Washington and Beijing, testing Trump’s approach of monetizing U.S. tech exports while managing the escalating U.S.-China AI rivalry.



