Nvidia Corp. has reportedly pressed Samsung Electronics to hasten deliveries of its sixth-generation high-bandwidth memory (HBM4) chips, even before completing full reliability and quality evaluations, according to Chosun.
The move signals a high-stakes scramble for advanced memory that underscores the shifting power dynamics in the global AI ecosystem.
Industry sources indicate Samsung is finalizing inspections for mass production and shipments starting in February 2026, but Nvidia’s request to bypass detailed testing reflects urgency driven by intensifying competition from rivals like AMD and Google in AI accelerator design.
This marks a notable role reversal from the HBM3E era, when Samsung’s supply hinged on passing Nvidia’s rigorous qualifications; now, within a single generation, Nvidia is prioritizing speed over exhaustive verification, treating HBM4 as sufficiently vetted. The company plans to integrate HBM4 into its next-generation Rubin AI accelerators, which demand unprecedented bandwidth and capacity for massive AI workloads.
The collaboration extends beyond basic supply: Nvidia and Samsung are synchronizing production timelines, with HBM4 modules slated for immediate use in Rubin performance demonstrations ahead of the official GTC 2026 unveiling. The partnership tightens Korea-U.S. supply ties for top-tier AI silicon, but rushing shipments risks exposing problems in reliability, thermal management, or yield consistency, the same challenges that have plagued prior HBM ramps.
The urgency highlights a profound shift in the AI supply chain, in which Korean memory giants Samsung and SK Hynix now control a critical bottleneck. Once viewed as subordinate suppliers, these firms have ascended to “super subcontractor” status, dictating terms in a market where HBM scarcity directly affects AI chip launches, data center expansions, and competitive positioning.
Without sufficient HBM, even Nvidia’s most advanced GPUs falter, since it is memory bandwidth that sustains the computational intensity demanded by frontier AI models.
Market forecasts underscore this dominance. Counterpoint Research projects SK Hynix capturing 54% of the global HBM4 market in 2026, with Samsung at 28%—together holding over 80% share. UBS anticipates SK Hynix securing approximately 70% of Nvidia’s HBM4 needs for the Rubin platform, while Samsung aims for over 30%. In the broader HBM market for Q3 2025, SK Hynix led with 53% revenue share, followed by Samsung at 35% and Micron at 11%.
Financial projections reflect the “memory supercycle” boom. Morgan Stanley forecasts Samsung’s 2026 operating profit at 245 trillion won ($180 billion)—nearly six times its 43.6 trillion won in 2025—while SK Hynix is expected to hit 179 trillion won, up from 47.2 trillion won.
Combined, the duo could exceed 200 trillion won in profits, by some estimates. SK Hynix’s Q4 2025 operating profit surged 137% to 19.2 trillion won ($13.5 billion), beating forecasts, while Samsung’s memory division recorded 24.9 trillion won for FY2025. Both firms have already sold out their 2026 memory output, entering a phase of severe supply constraints and elevated margins.
Capital expenditures are ramping up accordingly. SK Hynix plans over 30 trillion won in 2026, up from the mid-20-trillion-won range in 2025, with 90% allocated to DRAM and HBM; Samsung anticipates exceeding 40 trillion won, focused on HBM output, Pyeongtaek expansion, and its Texas fab. The investment surge responds to insatiable AI demand, with Nvidia CEO Jensen Huang warning that massive memory requirements are straining supply chains.
Broader shortages in commodity DRAM and NAND, exacerbated as capacity is diverted to HBM, have driven explosive price increases: consumer DRAM is up roughly 750%, from $1.35 in January 2025 to $11.50, while NAND has climbed from $2.18 to $9.46. These dynamics strengthen the Korean firms’ bargaining power, which KB Securities’ Kim Dong-won has likened to TSMC’s foundry dominance.
KAIST professor Kim Jeong-ho noted that memory’s evolution toward customized products amplifies the suppliers’ influence: “In the future AI era, memory will dominate the industry.”
SK Hynix shares surged 23% in a week amid speculation of Nvidia HBM4 breakthroughs.
The shift cements the Korean memory makers’ control over the AI industry’s key bottleneck. As demand continues to outpace supply into 2027, their leverage is expected to reshape alliances, pricing, and innovation timelines.