Broadcom Deepens AI Bet With Google and Anthropic Partnerships as Compute Arms Race Accelerates

Broadcom has tightened its grip on the infrastructure layer of the artificial intelligence boom, unveiling fresh long-term agreements with Google LLC and Anthropic PBC that underscore the scale at which frontier AI companies are now building.

In a securities filing released Monday, Broadcom said it has agreed to develop and supply future generations of Google’s custom artificial intelligence chips through 2031, while also expanding a separate arrangement that will give Anthropic access to roughly 3.5 gigawatts of TPU-based computing capacity beginning in 2027.

The market’s immediate reaction was telling. Broadcom shares rose about 3 per cent in extended trading as investors interpreted the agreements as another sign that the company is emerging as one of the biggest beneficiaries of the AI infrastructure build-out, second only to Nvidia Corporation in strategic importance.

Google’s Tensor Processing Unit, or TPU, the company’s in-house alternative to Nvidia’s GPUs, sits at the heart of the deal. Broadcom has been a crucial design and supply partner in that effort for years, helping turn Google’s chip blueprints into production-scale silicon.

The latest agreement significantly extends that relationship. Under the long-term arrangement, Broadcom will not only help produce future TPU generations but will also supply networking and other hardware components used in Google’s next-generation AI racks through the end of the decade.

For Google, this is a strategic push to reduce dependence on Nvidia’s costly and supply-constrained graphics processors, while strengthening the economics of its cloud and AI offerings. TPU sales have increasingly become a core growth engine for Google Cloud as the company seeks to prove that its massive AI capital expenditure is translating into recurring enterprise revenue.

The more consequential signal, however, may lie in Anthropic’s expanded compute commitment. A 3.5-gigawatt allocation is enormous by data center standards. To put it in perspective, this is power capacity on the scale of multiple hyperscale campuses and enough to support the training and inference demands of frontier foundation models serving millions of users and enterprise clients globally.

The deal suggests Anthropic is preparing for an aggressive next phase of model scaling. Only last month, Broadcom chief executive Hock Tan told investors that the company had already made “a very good start” in 2026 by delivering about 1 gigawatt of compute for Anthropic through Google’s TPUs.

“For 2027, this demand is expected to surge in excess of 3 gigawatts of compute,” Tan said.

Monday’s filing now effectively formalizes that projection. The numbers also point to the financial scale of the AI race. Analysts at Mizuho had earlier estimated Broadcom could generate $21 billion in AI-related revenue from Anthropic in 2026, rising to $42 billion in 2027 if deployment proceeds as expected.

While no dollar figure was disclosed in the filing, those projections illustrate how AI infrastructure is fast becoming a tens-of-billions-of-dollars business for chipmakers outside Nvidia’s ecosystem.

Anthropic itself said the expanded partnership reflects the extraordinary growth in demand for its Claude models. The company disclosed that its run-rate revenue has climbed from roughly $9 billion at the end of 2025 to over $30 billion in 2026, a pace that helps explain why it is locking in multi-gigawatt capacity years in advance.

This also sharpens the competitive picture in the AI model wars. Anthropic and OpenAI are increasingly competing not just on model performance, but on privileged access to compute. OpenAI is simultaneously working with Broadcom on custom silicon while also securing large GPU commitments from Advanced Micro Devices, Inc., and cloud partners such as Microsoft Corporation and Amazon.com, Inc.

In effect, the race is no longer only about algorithms. It is about who can secure the electricity, chips, networking, and data-center footprint necessary to train ever-larger models. That is why Monday’s announcement matters beyond Broadcom’s stock price. It reinforces the idea that AI leadership is increasingly being determined by infrastructure lock-ins signed years ahead of deployment.

Broadcom is positioning itself at the center of that stack, powering Google’s silicon ambitions while simultaneously enabling one of the fastest-growing AI labs in the world.
