The global artificial intelligence buildout is no longer straining only cutting-edge GPUs. It is now tightening the supply of the more traditional computing backbone that underpins data centers, cloud services, and enterprise IT.
Fresh warnings from Intel and AMD to Chinese customers about server CPU shortages underscore how the AI infrastructure race is cascading through the entire semiconductor supply chain, driving up prices, extending delivery times, and complicating expansion plans for some of the world’s largest technology firms.
According to people familiar with the matter, who spoke to Reuters, Intel and AMD have recently notified customers in China that supplies of server central processing units are constrained, with Intel cautioning that delivery lead times for some products could stretch as long as six months. The shortages have already pushed prices for Intel’s server CPUs in China up by more than 10% on average, although the impact varies depending on contract terms and customer scale.
China accounts for more than 20% of Intel’s global revenue and hosts some of the largest cloud computing and data center operators in the world. Any sustained disruption to CPU availability risks slowing deployments across sectors ranging from AI model training and inference to e-commerce, fintech, and government digital infrastructure.
The most severe constraints are affecting Intel’s fourth- and fifth-generation Xeon processors, which remain widely used across Chinese data centers. Sources say Intel has begun rationing deliveries as it grapples with a growing backlog of unfulfilled orders, with some customers facing waits of up to half a year.
AMD, which has steadily expanded its footprint in the server market, has also informed Chinese clients of supply constraints. While its situation appears less acute than Intel’s, delivery lead times for some AMD server CPUs have reportedly been pushed out to eight to ten weeks, signaling that capacity pressures are spreading across the industry.
These developments are being reported for the first time by Reuters and point to a broader structural issue rather than a short-term hiccup. The AI investment wave has triggered a surge not only in demand for specialized accelerators but also for the CPUs that coordinate workloads, manage data flows, and support complex, multi-tenant data center environments.
AI infrastructure strains the full stack
While Nvidia’s GPUs have dominated headlines as the most visible bottleneck in AI hardware, industry participants say CPUs have quietly become another pressure point. Modern AI systems still rely heavily on server CPUs for preprocessing data, orchestrating GPU workloads, handling inference pipelines, and running non-AI applications alongside training clusters.
The rise of agentic AI systems is intensifying this trend. Unlike earlier chatbot-style applications, agentic systems perform multi-step tasks, interact continuously with software tools, and operate around the clock. These workloads are significantly more CPU-intensive, increasing the number of processors required per deployment and amplifying demand just as supply is tightening.
Memory constraints are compounding the problem. Prices for memory chips have continued to climb, particularly in China, as suppliers prioritize AI-optimized products. Distributors say that when memory prices began rising sharply late last year, customers rushed to secure CPUs earlier than planned to avoid mismatched system builds or higher overall costs. That front-loading of orders further depleted available CPU inventories.
Manufacturing limits on both sides
The root causes of the shortages differ between Intel and AMD, but converge in outcome. Intel has struggled to ramp up production of its latest server chips amid persistent manufacturing yield challenges, limiting how quickly it can meet surging demand. AMD, meanwhile, relies on Taiwan Semiconductor Manufacturing Co., which has prioritized capacity for AI accelerators and advanced-node chips, leaving less room for high-volume server CPU production.
Intel acknowledged the tight conditions in a statement, saying the rapid adoption of AI has driven strong demand for what it described as “traditional compute.” The company said inventory levels are expected to hit their lowest point in the first quarter but added that it is addressing the situation aggressively and expects supply to improve from the second quarter and through 2026, suggesting constraints could linger for months.
AMD reiterated comments made during its earnings call that it has boosted supply capabilities and remains confident in its ability to meet global demand, citing its supplier agreements and relationship with TSMC. Even so, the reported delays indicate that the fabless model offers limited insulation when the entire advanced semiconductor ecosystem is under strain.
Market dynamics amplify the impact
The shortages come against the backdrop of a shifting competitive landscape in server CPUs. Intel’s global market share has fallen from over 90% in 2019 to about 60% in 2025, while AMD’s share has risen from roughly 5% to more than 20%, according to a UBS report. In a tighter, more balanced market, disruptions at either supplier can have outsized effects, as customers have fewer surplus alternatives.
In China, major buyers include server manufacturers and cloud providers such as Alibaba and Tencent, which are racing to expand AI services while navigating U.S. export controls that restrict access to the most advanced accelerators. As GPUs become harder to source, CPUs have grown even more strategically important, making shortages particularly disruptive for long-term planning.
Taken together, the warnings from Intel and AMD highlight a critical shift in the AI boom. What began as a scramble for GPUs is evolving into a system-wide supply challenge spanning CPUs, memory, and manufacturing capacity. This means higher costs, longer deployment timelines, and tougher prioritization decisions for AI developers and cloud operators.