Google is pushing deeper into the battle for control of the artificial intelligence infrastructure stack, announcing that its most powerful custom chip to date will soon be available for broad public use as it intensifies efforts to win over AI companies and large enterprise customers.
The search giant said on Thursday that the seventh generation of its Tensor Processing Unit (TPU), known as Ironwood, will be released to customers in the coming weeks. The chip was first unveiled in April and has since been tested by select partners for deployment. Its wider availability marks a significant step in Google’s long-running attempt to reduce the industry’s dependence on Nvidia and position its cloud platform as a serious alternative for the most demanding AI workloads.
Built entirely in-house, Ironwood is designed to handle the full spectrum of modern AI tasks, from training massive foundation models to running real-time applications such as chatbots and autonomous AI agents. Google says the chip can be scaled aggressively, with up to 9,216 Ironwood TPUs linked together in a single pod, a configuration the company claims eliminates data bottlenecks that often slow down large-scale AI systems.
According to Google, this architecture gives customers “the ability to run and scale the largest, most data-intensive models in existence,” a clear pitch to AI labs and enterprises struggling with the cost and complexity of training and deploying next-generation models.
The move comes as Google, Microsoft, Amazon, and Meta pour unprecedented sums into building the infrastructure that will underpin the AI economy. So far, much of the boom has been powered by Nvidia’s graphics processing units, which dominate the market for training and inference. Google’s TPUs fall into the category of custom silicon, purpose-built chips that can deliver advantages in performance per dollar, energy efficiency, and tighter integration with cloud software.
TPUs are not new. Google has been developing them for roughly a decade, initially for internal use and later as a selling point for Google Cloud. Ironwood, however, represents a major leap. The company says it is more than four times faster than its predecessor, a gain that matters as model sizes and computational demands continue to rise.
Major customers are already committing at scale. Google disclosed that AI startup Anthropic plans to use up to one million Ironwood TPUs to run its Claude model, a sign that leading AI developers are increasingly willing to diversify away from Nvidia-only infrastructure. Such deals also strengthen the strategic ties between Google and fast-growing AI labs that need vast amounts of compute to compete.
Ironwood’s launch is part of a broader push to make Google Cloud cheaper, faster, and more flexible as it goes head-to-head with Amazon Web Services and Microsoft Azure, both of which still command larger shares of the cloud market. Alongside the new chip, Google is rolling out software and pricing upgrades aimed at improving performance and lowering costs for customers running AI workloads.
The strategy appears to be gaining traction. In its earnings report last week, Google said third-quarter cloud revenue rose 34% year on year to $15.15 billion. While Google Cloud's revenue still trails that of its larger rivals, its growth rate over the same period sat between Microsoft Azure's 40% and AWS's 20%. Google also said it has signed more billion-dollar cloud contracts in the first nine months of 2025 than in the previous two years combined, underscoring rising demand from large customers.
That surge in interest is forcing Google to spend heavily. The company raised the upper end of its capital expenditure forecast for the year to $93 billion, up from $85 billion, reflecting massive investments in data centers, chips, and networking equipment needed to support AI demand.
“We are seeing substantial demand for our AI infrastructure products, including TPU-based and GPU-based solutions,” chief executive Sundar Pichai told analysts on the earnings call.
He described AI infrastructure as one of the main drivers of Google’s growth over the past year and said the company expects demand to remain strong as it continues to invest.
Ironwood’s public release highlights how the AI race is no longer just about models and software, but about who controls the underlying hardware and cloud platforms. Google is signaling that it intends to challenge Nvidia’s dominance directly by betting on custom silicon at scale, while also trying to narrow the gap with Amazon and Microsoft in cloud computing.



