OpenAI said Wednesday it is partnering with Google Cloud to secure additional computing power for its popular AI services, including ChatGPT and its developer-facing API, marking a significant shift in the company’s cloud infrastructure strategy as it scrambles to meet soaring demand for generative AI tools.
The move broadens OpenAI’s roster of cloud providers beyond Microsoft, its most prominent backer and infrastructure partner since 2019. Google now joins Microsoft, CoreWeave, and Oracle as suppliers of computing capacity for OpenAI’s growing suite of AI products. According to the company, ChatGPT and its API will now run on Google Cloud infrastructure in the United States, the United Kingdom, Japan, the Netherlands, and Norway.
This diversification follows rising strain on OpenAI’s capacity. In April, CEO Sam Altman made a blunt public appeal, posting on X: “if anyone has GPU capacity in 100k chunks we can get asap please call!”—a clear sign that the company was hitting the limits of its compute power amid explosive user demand and rapid product expansion. The company relies heavily on Nvidia’s advanced graphics processing units (GPUs) to power its large language models.
The Google partnership marks a key win for Google Cloud, which has been playing catch-up with Amazon Web Services and Microsoft Azure in the cloud services market. It also reinforces Google’s deepening play in AI infrastructure, where it already hosts Anthropic—a leading OpenAI rival founded by former OpenAI employees—and continues to advance its own models like Gemini.
OpenAI’s new arrangement also reflects shifting dynamics in its relationship with Microsoft. Microsoft’s multi-billion-dollar investment once came with exclusivity over OpenAI’s cloud workloads, but that arrangement has evolved: in January, Microsoft confirmed it had moved to a “right of first refusal” model, meaning OpenAI can now turn to other vendors when it needs more capacity. Microsoft still holds exclusive rights to OpenAI’s APIs and integrates the underlying models into its own products, including Microsoft 365 Copilot, the Azure OpenAI Service, and GitHub Copilot.
In recent months, OpenAI has aggressively expanded its compute partnerships. In March, it signed a $12 billion deal with CoreWeave, a specialized AI cloud provider backed by Nvidia. That five-year agreement is meant to bolster OpenAI’s infrastructure with GPU-dense data centers tailored for training and serving large AI models. Oracle, another infrastructure partner, announced last year that it was working with Microsoft and OpenAI to let OpenAI workloads run on Oracle Cloud Infrastructure (OCI) via Microsoft’s Azure platform.
Google Cloud, which once lagged in major AI hosting deals, has now become a direct beneficiary of the generative AI gold rush. Its data centers are designed to handle massive AI workloads and include Google’s own Tensor Processing Units (TPUs), which rival Nvidia’s GPUs in AI performance. While OpenAI is not expected to rely on Google’s TPUs in this arrangement, the move still places Google alongside Microsoft as one of the key players supporting ChatGPT’s growth.
Industry analysts say OpenAI’s multi-cloud strategy is part of a broader trend among AI firms trying to reduce risk, avoid vendor lock-in, and ensure continuity amid GPU shortages. It also underscores the arms race among hyperscalers—Microsoft, Google, Amazon, and Oracle—each vying to dominate the infrastructure layer of the AI economy.
The deal enhances Google’s reputation as a credible host for mission-critical AI services, not just for its in-house teams but also for the broader ecosystem of AI companies competing for scale and global reach.