Zhao Haijun warned that companies are attempting to build “10 years’ worth” of AI data center capacity in one to two years, raising the risk that large portions of new infrastructure could sit idle.
Zhao Haijun, co-chief executive of Semiconductor Manufacturing International Corporation (SMIC), has cautioned that the breakneck pace of global AI data center construction could outstrip practical demand, echoing a costly experience from China’s recent past.
“Companies would love to build 10 years’ worth of data center capacity within one or two years,” Bloomberg cited Zhao as saying on a recent earnings call. “As for what exactly these data centers will do, that has not been fully thought through.”
His comments land at a moment when AI infrastructure spending is accelerating at a historic scale, with hyperscalers and governments racing to secure computing capacity amid fierce competition in generative AI.
Infrastructure Race Meets Demand Uncertainty
Artificial intelligence is widely expected to transform industries ranging from pharmaceuticals to finance. Yet the speed at which that transformation translates into consistent, monetizable workloads remains uncertain.
Developers of frontier models, including Alphabet, Meta Platforms, OpenAI, and xAI, argue they can absorb virtually unlimited computing resources. Training and deploying large language models requires massive clusters of GPUs, high-speed interconnects, and advanced cooling systems. Computing demand for inference at scale adds a further layer of sustained infrastructure needs.
However, frontier labs are only one segment of the market. Enterprise AI adoption, industrial automation, and sector-specific AI services must scale meaningfully to justify trillions in capital expenditure.
According to Moody’s Ratings, spending on AI-related infrastructure could surpass $3 trillion over the next five years. In 2026 alone, capital expenditures by Alphabet, Amazon Web Services, Meta, and Microsoft are projected to approach $650 billion. In China, Alibaba Group, Tencent, and ByteDance are expanding AI capacity aggressively.
The capital intensity of these investments raises fundamental questions about utilization rates, return on invested capital, and the durability of projected demand curves. If AI adoption progresses unevenly across sectors, newly built facilities could operate below optimal capacity for extended periods.
Lessons from China’s “Eastern Data, Western Computing” Initiative
Zhao’s warning draws on China’s earlier cloud and AI infrastructure push under the “Eastern Data, Western Computing” initiative. During the early 2020s, developers constructed large data centers in western provinces where electricity was cheaper, intending to serve economically stronger eastern regions.
While the strategy reduced energy costs, geographic distance increased network latency. For latency-sensitive applications such as financial transactions, real-time analytics, and certain AI workloads, this constraint reduced the appeal of these facilities.
Many projects were also predicated on the expectation that state-owned enterprises and government agencies would anchor demand. In practice, projected usage failed to materialize at scale. Some facilities reportedly operated at only 20% to 30% of their designed capacity.
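The economics behind those utilization figures can be made concrete with a toy payback model. All numbers below are hypothetical assumptions for illustration, not figures reported in the article: the point is that a facility with meaningful fixed operating costs can fail to recover its capital at all when utilization sits in the 20% to 30% range.

```python
# Toy model (illustrative only): how utilization affects the payback
# period of a data center build. Every figure here is a hypothetical
# assumption, not a number from the article.

def payback_years(capex: float, revenue_at_full_util: float,
                  utilization: float, annual_opex: float) -> float:
    """Years needed to recover capex at a flat utilization rate."""
    net_annual = revenue_at_full_util * utilization - annual_opex
    if net_annual <= 0:
        # Revenue does not even cover fixed operating costs:
        # the facility never pays itself back.
        return float("inf")
    return capex / net_annual

# Hypothetical $1B facility: $300M/yr revenue at full utilization,
# $100M/yr fixed operating cost.
full_util = payback_years(1e9, 3e8, 1.00, 1e8)  # 5.0 years
low_util  = payback_years(1e9, 3e8, 0.25, 1e8)  # inf (never recovers)
print(full_util, low_util)
```

Under these assumed numbers, full utilization recovers the investment in five years, while 25% utilization (within the reported 20% to 30% band) leaves annual revenue below fixed operating costs, so the payback period is effectively infinite.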
Despite weak utilization, construction continued into 2024 and 2025, according to Reuters, prompting concerns about capital discipline. Authorities have since imposed restrictions to prevent overbuilding and are exploring mechanisms to improve resource allocation.
China’s Ministry of Industry and Information Technology is reportedly considering a centralized cloud platform to pool idle computing resources nationwide and distribute them via a unified network. Yet the technical complexity is significant. Data centers rely on diverse hardware configurations, GPU generations, networking topologies, and software stacks. High-performance AI training workloads often require tightly integrated clusters, limiting the fungibility of generic compute capacity.
Strategic Imperatives, Financial Risk, and the Semiconductor Supply Chain
The current global AI buildout is shaped not only by commercial ambition but also by strategic competition. Governments view AI leadership as tied to economic growth, defense capability, and geopolitical influence. That urgency incentivizes capacity expansion even in the absence of fully mature end-use cases.
This means high stakes for chipmakers. As co-chief executive of SMIC, Zhao oversees China’s leading semiconductor foundry. Data center expansion directly influences demand for advanced processors, memory, and packaging technologies. Sustained high utilization would support fabrication volumes and justify capital expenditure. A slowdown triggered by excess capacity could reverberate through the semiconductor supply chain.
There is also a structural distinction between short-term training demand and long-term inference demand. Training frontier models requires concentrated bursts of compute, while inference workloads scale with user adoption. If consumer and enterprise uptake lags, inference demand may not fully offset the upfront investment.
A comparison to high-speed rail or highway systems is instructive. Infrastructure can precede usage by years, particularly when planners anticipate structural shifts in economic activity. Yet infrastructure financed by private capital faces stricter return thresholds than state-backed projects do.
Zhao’s remarks do not dismiss AI’s transformative potential. Instead, they highlight execution risk in a capital cycle unfolding at unprecedented speed. The question is not whether AI will reshape industries, but whether the timing and scale of infrastructure investment align with the pace of real economic absorption.