Google Cloud VP Says Company Must Double Compute Capacity Biannually to Keep Pace With AI Services
During an all-hands meeting on November 6, Amin Vahdat, the vice president of Google Cloud responsible for AI infrastructure, laid out a staggering internal mandate: the company must double its compute capacity every six months to keep pace with the voracious demand for artificial intelligence services.

In a presentation viewed by CNBC titled “AI Infrastructure,” Vahdat displayed a slide that underscored the exponential stakes of the current technology arms race.

“Now we must double every 6 months,” the slide read, projecting a trajectory that would require a “1000x” increase in capacity within the next four to five years. Vahdat described the competition in AI infrastructure as “the most critical and also the most expensive part of the AI race,” a sentiment that aligns with the company’s massive financial escalation.

The internal disclosure comes just a week after Alphabet, Google’s parent company, reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year. The company now anticipates spending between $91 billion and $93 billion this year alone, with a “significant increase” projected for 2026. This aggressive outlay is part of a broader trend among the “hyperscaler” giants—including Microsoft, Amazon, and Meta—which collectively expect to pour more than $380 billion into capital expenditures this year.

However, Vahdat emphasized to employees that the strategy is not merely about brute-force spending.

“Google’s job is of course to build this infrastructure, but it’s not to outspend the competition, necessarily,” he said. While acknowledging that the company is “going to spend a lot,” he clarified that the ultimate goal is to engineer infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”

To achieve this efficiency, Google is leaning heavily on custom silicon. Vahdat highlighted the public launch of the company’s seventh-generation Tensor Processing Unit (TPU), codenamed “Ironwood,” which Google claims is nearly 30 times more power-efficient than its first Cloud TPU from 2018. He also pointed to a strategic advantage provided by DeepMind, whose research offers a roadmap for what future AI models will require. The objective, according to Vahdat, is to deliver 1,000 times more capability and compute power for “essentially the same cost and increasingly, the same power, the same energy level”—a feat he admitted “won’t be easy.”

The meeting also featured Alphabet CEO Sundar Pichai, who fielded questions alongside CFO Anat Ashkenazi regarding the sustainability of this spending spree. Addressing an employee’s question about the “zeitgeist” of a potential AI bubble and market skepticism regarding the return on investment, Pichai offered a pragmatic defense of the company’s aggressive posture. He reiterated his long-held view that in platform shifts of this magnitude, the risk of underinvesting far outweighs the risk of overinvesting.

“I actually think for how extraordinary the cloud numbers were, those numbers would have been much better if we had more compute,” Pichai said, referencing the cloud unit’s recent 34% annual revenue growth to over $15 billion, with a backlog swelling to $155 billion.

He argued that Google’s balance sheet and diverse business model make it “better positioned to withstand, you know, misses, than other companies.”

Pichai provided concrete examples of how hardware constraints are already throttling product distribution. He cited the video generation tool Veo, noting that while the launch was exciting, the company could not roll it out to as many users within the Gemini app as it wished “because we are at a compute constraint.” This bottleneck underscores the urgency of Vahdat’s six-month doubling mandate.

The internal dialogue at Google mirrors the broader anxiety in the market. The bubble conversation intensified this week after Nvidia’s earnings report: despite CEO Jensen Huang rejecting the bubble premise and reporting 62% revenue growth with strong guidance, markets reacted negatively, with Nvidia shares sliding 3.2% and dragging the Nasdaq down 2.2%. Pichai acknowledged these external jitters, telling employees that 2026 will be “intense” and that “there will be no doubt ups and downs.”

Looking toward the financial horizon, CFO Anat Ashkenazi addressed concerns that capital expenditures are accelerating faster than operating income. She framed the spending as a critical opportunity to migrate more customers from physical data centers into the cloud, asserting that “the opportunity in front of us is significant and we can’t miss that momentum.”

With the launch of the new Gemini 3 model and the race against OpenAI tightening, Mountain View is preaching that the only way out is through—specifically, through a massive, unprecedented expansion of silicon and steel.
