Morgan Stanley is recalibrating the narrative around artificial intelligence infrastructure, arguing that the industry is moving into a phase where coordination, not just computation, defines competitive advantage.
The shift, driven by the emergence of autonomous or “agentic” AI systems, is expected to redirect capital flows across the semiconductor ecosystem and deepen demand for components that had taken a back seat during the initial GPU-led surge.
In a note released Sunday, the bank said the transition from generative models to systems capable of executing multi-step tasks is changing the locus of technical strain inside data centers.
“As AI transitions from generation to autonomous action, the computing bottleneck is shifting towards CPU and memory, driving a step-change in general-purpose compute intensity,” Morgan Stanley said, adding that demand for graphics processing units (GPUs) remains strong.
Early AI workloads were dominated by large-scale training and inference, processes that rely heavily on parallel processing power delivered by GPUs. That dynamic is now evolving. As AI systems begin to plan, sequence actions, and interact with multiple tools and datasets, the burden is shifting toward CPUs and memory subsystems.
Morgan Stanley describes CPUs as increasingly functioning as the “control layer” in these environments. Rather than simply supporting workloads, they are now responsible for orchestrating task execution, managing dependencies between processes, and coordinating interactions between models and external systems. This architectural role becomes more pronounced as AI systems grow more autonomous, with workflows that resemble distributed computing pipelines rather than single-pass inference tasks.
The bank estimates that agentic AI could add between $32.5 billion and $60 billion to the data-center CPU market by 2030, expanding a segment already valued at more than $100 billion. That projection suggests a structural broadening of the AI investment cycle, moving beyond the concentrated demand that has defined the current boom.
The memory market is expected to experience a similar uplift. Autonomous systems tend to retain context over longer durations, store intermediate states, and repeatedly access large datasets. This increases reliance on high-bandwidth memory and advanced storage architectures, tightening supply in segments that are already constrained. Morgan Stanley notes that this could enhance pricing power for manufacturers operating in these bottleneck areas.
Companies such as Micron Technology, Samsung Electronics, and SK hynix are positioned to benefit from that shift, particularly as demand for DRAM and high-bandwidth memory accelerates. These firms have already been central to supplying advanced memory used alongside AI accelerators, but the next phase could deepen their exposure as memory becomes a limiting factor in system performance.
On the processing side, the report broadens the field of potential beneficiaries. Nvidia remains dominant in accelerators, but Morgan Stanley’s framing suggests that its long-term advantage may increasingly depend on how effectively it integrates CPUs and system-level software into its platforms. Advanced Micro Devices is similarly positioned, with a portfolio spanning both GPUs and CPUs, allowing it to capture value across multiple layers of the stack.
Meanwhile, Intel and Arm Holdings could see renewed relevance. Intel’s entrenched position in server CPUs aligns directly with the anticipated increase in general-purpose compute demand, while Arm’s architecture continues to gain traction in data centers due to its power efficiency and scalability, factors that become more critical as workloads grow more complex and persistent.
Further upstream, manufacturing constraints remain a defining variable. Taiwan Semiconductor Manufacturing Company is expected to remain a central beneficiary as demand rises across both advanced logic and memory chips. At the same time, ASML retains its strategic importance as the sole supplier of extreme ultraviolet lithography systems required for cutting-edge chip production. Any sustained expansion in AI-driven demand inevitably feeds into capital expenditure cycles for these firms.
What emerges from Morgan Stanley’s analysis is a more layered view of the AI economy. The first wave concentrated value in a narrow group of GPU suppliers, driven by the urgency to build training capacity. The next phase appears more diffuse, with incremental spending spreading across CPUs, memory, networking, and fabrication.
There is also a subtext. As AI systems become more autonomous, reliability, latency, and coordination efficiency begin to matter as much as raw processing power. That raises the stakes for system architecture and integration, areas where incumbents with broad product ecosystems may hold an advantage over more specialized players.
For investors, the implication is not a rotation away from GPUs but an expansion of the opportunity set. The infrastructure required to support agentic AI is more complex, more interdependent, and potentially more capital-intensive.