Nvidia reported fiscal fourth-quarter results on Wednesday that topped Wall Street expectations, propelled by explosive growth in its data center division. Shares rose about 2% in extended trading following the announcement.
According to CNBC, the company posted adjusted earnings per share of $1.62, ahead of the $1.53 expected by analysts polled by LSEG. Revenue reached $68.13 billion, exceeding estimates of $66.21 billion and marking a 73% increase from $39.3 billion a year earlier.
The numbers underscore Nvidia’s central role in the global AI infrastructure buildout. More than 91% of the company’s total revenue now comes from its data center unit, which houses its artificial intelligence accelerators and associated networking components.
Data center revenue totaled $62.3 billion in the quarter, ahead of StreetAccount estimates of $60.69 billion and up 75% year over year. Net income nearly doubled to $43 billion, or $1.76 per share, compared with $22.1 billion, or 89 cents per share, in the same quarter last year.
Guidance signals sustained AI demand
For the fiscal first quarter, Nvidia forecast revenue of $78 billion, plus or minus 2%, well above analyst expectations of $72.6 billion. The company said its outlook does not assume data center revenue from China, signaling that growth projections are anchored in other regions amid ongoing geopolitical and export restrictions.
The guidance reinforces Nvidia’s position as the primary beneficiary of AI capital expenditures. So far in 2026, Nvidia shares are up 5%, outperforming the broader Nasdaq, which is down 0.4%. Among other trillion-dollar companies, only Apple has posted gains this year, and those are modest by comparison.
Hyperscaler spending drives momentum
Investors had early visibility into AI infrastructure momentum when the four largest U.S. cloud providers — Alphabet, Amazon, Meta, and Microsoft — reported quarterly results and outlined aggressive capital expenditure plans. Based on company forecasts and analyst projections, the combined capex for 2026 could approach $700 billion as hyperscalers expand AI data centers.
In CFO commentary, Nvidia said hyperscalers remained its largest customer category, accounting for just over 50% of data center revenue. That concentration underscores both the durability of demand and the strategic importance of a handful of buyers in shaping Nvidia’s revenue trajectory.
Networking becomes a breakout growth engine
Within the data center segment, Nvidia’s networking business posted $10.98 billion in quarterly revenue, up 263% year over year. The surge reflects growing adoption of NVLink interconnect technology and Spectrum-X Ethernet switches, which enable large clusters of GPUs to operate as unified AI supercomputers. New deals with Meta contributed to the strength.
The rapid growth in networking highlights a structural shift: AI workloads increasingly depend not only on raw compute but also on high-bandwidth, low-latency interconnects. As models scale into trillions of parameters, data transfer between GPUs becomes a critical bottleneck, elevating the value of Nvidia’s integrated hardware stack.
Gaming steady, but no longer the growth driver
Nvidia’s gaming division, once its primary revenue engine, generated $3.7 billion in revenue, up 47% year over year but down 13% sequentially. Analysts have speculated that the company may delay launching a new consumer GPU this year due to memory constraints, prioritizing high-margin AI accelerators such as rack-scale systems built around its 72-GPU Grace Blackwell architecture.
Global memory shortages have emerged as a risk factor. High-bandwidth memory (HBM), essential for AI accelerators, remains supply-constrained, forcing chipmakers to allocate production toward enterprise AI demand rather than consumer graphics.
Vera Rubin on deck
Investor focus is increasingly turning to Nvidia’s next-generation rack-scale system, Vera Rubin, the successor to Grace Blackwell. CFO Colette Kress said the company shipped its first Vera Rubin samples to customers this week and remains on track for production shipments in the second half of the year.
Vera Rubin is expected to deliver 10 times more performance per watt than Grace Blackwell, a critical metric as power constraints become a defining challenge for global data centers. Energy efficiency is now a competitive differentiator, as hyperscalers grapple with grid limitations and sustainability targets.
Nvidia said it is expanding manufacturing beyond Asia into the United States and Latin America to strengthen supply chain resiliency and reduce geographic concentration risk.
“These moves are expected to strengthen our supply chain, add resiliency and redundancy, and meet the growing demand for AI infrastructure,” the company said in its filing.
It added that scaling production will depend on the capacity of local manufacturing ecosystems to ramp output on time.
The shift reflects broader geopolitical pressures and export controls that have reshaped semiconductor supply chains. Nvidia’s decision to exclude China data center revenue from forward guidance further signals sensitivity to regulatory constraints.
In the automotive segment, which includes chips for autonomous vehicles and robotics, Nvidia reported $604 million in revenue, up 6% year over year but below StreetAccount expectations of $654.8 million. The modest growth contrasts sharply with the data center surge and suggests that AI-driven demand remains concentrated in cloud infrastructure rather than edge deployment.
Strategic investments and capital risk
Beyond product revenue, Nvidia disclosed that it invested $17.5 billion over the year in private companies and infrastructure funds, primarily supporting early-stage AI startups. The company acknowledged in its annual filing that those investments “may not become profitable in the near term, or at all.”
The strategy positions Nvidia not only as a hardware supplier but also as a financial backer of the broader AI ecosystem. However, it introduces balance sheet risk, particularly if venture-backed AI firms struggle to monetize at the pace implied by current infrastructure spending.
Nvidia has also taken a large stake in Intel, further entangling it in the competitive and strategic dynamics of the semiconductor industry.
A defining cycle for AI infrastructure
The quarter reinforces Nvidia’s dominance at a pivotal moment in the AI investment cycle. Hyperscaler capex remains elevated, next-generation systems promise significant efficiency gains, and networking has emerged as a high-growth adjacency.
The open question for investors is sustainability. If AI adoption continues at its current pace, Nvidia’s vertically integrated stack — from GPUs to interconnects to rack-scale systems — positions it to capture disproportionate value. If enterprise ROI slows or capital markets tighten, the scale of infrastructure commitments could come under scrutiny.
For now, Nvidia’s results suggest that AI infrastructure demand remains robust — and that the company continues to sit at the center of the global buildout.