AI chipmaker Cerebras Systems has struck a landmark agreement with OpenAI to supply up to 750 megawatts of computing power through 2028.
The deal marks one of the most consequential infrastructure bets yet in the race to scale artificial intelligence, underscoring both OpenAI’s escalating demand for specialized AI infrastructure and Cerebras’ ambition to emerge as a serious alternative to Nvidia in the booming AI hardware market.
Valued at more than $10 billion, according to people familiar with the matter, the agreement is not just a supply contract. It is a strategic signal that the market for AI compute is entering a more fragmented and competitive phase, with Nvidia’s long-standing dominance now facing sustained pressure from specialized challengers.
At its core, the deal gives OpenAI access to vast amounts of dedicated inference and training capacity at a time when demand for AI computing power is exploding. Generative AI models are becoming larger, more capable, and more expensive to run. For OpenAI, which serves hundreds of millions of users and an expanding roster of enterprise clients, securing predictable, long-term compute has become as critical as model innovation itself.
Sachin Katti, who works on compute infrastructure at OpenAI, framed Cerebras’ role as complementary rather than disruptive to existing suppliers.
“Cerebras adds a dedicated low-latency inference solution to our platform,” he wrote, pointing to faster responses and more natural interactions as immediate benefits.
The emphasis on inference is notable. While much public attention focuses on training massive models, inference (the process of running models in real time for users) is increasingly the bottleneck as adoption surges.
The agreement represents a major diversification milestone for Cerebras. Until recently, the company relied heavily on G42, a United Arab Emirates–based AI firm, which accounted for 87% of its revenue in the first half of 2024. Landing OpenAI as a marquee customer immediately reshapes that concentration risk and gives Cerebras something it has long sought: a second anchor client with global scale and credibility.
Andrew Feldman, Cerebras’ co-founder and chief executive, has been candid about the company’s strategy.
“The way you have three very large customers is start with one very large customer, and you keep them happy, and then you win the second one,” he told CNBC.
In that sense, OpenAI is not just another customer; it is a strategic partner. It is validation of Cerebras’ thesis that purpose-built AI processors, rather than general-purpose GPUs, can play a central role in the next phase of AI deployment.
Cerebras’ technology stands apart from the GPU-centric approach that has powered much of the AI boom. The company builds wafer-scale processors, effectively turning an entire silicon wafer into a single, massive chip optimized for AI workloads. This architecture allows for extremely fast data movement and low latency, attributes that are particularly attractive for large language models serving real-time user requests. That specialization positions Cerebras as a direct challenger to Nvidia’s dominance, even as Nvidia continues to sell vast quantities of chips to cloud giants like Amazon and Microsoft, which then rent that capacity to AI developers by the hour.
The timing of the deal also matters. Nvidia’s ascent to a $5 trillion market capitalization in October underscored how central GPUs have become to the AI economy. But that concentration has raised concerns among customers about supply constraints, pricing power, and strategic dependence on a single vendor.
OpenAI’s move to deepen its relationship with Cerebras can be read as a hedge against those risks, alongside its continued use of Nvidia and Advanced Micro Devices chips.
This is not a sudden partnership. OpenAI and Cerebras have been in technical discussions for years and worked together to ensure that OpenAI’s gpt-oss open-weight models ran smoothly on Cerebras hardware. Feldman said those conversations culminated in a term sheet signed just before Thanksgiving.
The roots go back even further. Internal emails revealed during litigation between Sam Altman and Elon Musk show that OpenAI evaluated Cerebras’ technology as early as 2017. In 2018, Musk attempted to acquire Cerebras, an effort Feldman said was tied to Musk’s ambitions at Tesla.
The scale of the new commitment will likely accelerate Cerebras’ global footprint. The company already operates data centers in the United States and abroad, and Feldman has indicated that expansion will continue under the OpenAI agreement. That build-out comes as governments and regulators increasingly scrutinize where AI infrastructure is located and who controls it, adding a geopolitical dimension to what might otherwise look like a purely commercial deal.
The announcement also lands against the backdrop of Cerebras’ complicated journey toward the public markets. The company filed confidentially for an initial public offering in September 2024, revealing rapid revenue growth, with second-quarter revenue nearing $70 million, up sharply from about $6 million a year earlier. Losses, however, also widened, with a net loss of nearly $51 million. The absence of major investment banks from the prospectus and the use of a non–Big Four auditor raised eyebrows. Cerebras withdrew the filing a month later after closing a $1.1 billion funding round that valued the company at $8.1 billion, saying its disclosures were already outdated.
Feldman has said a revised filing will better capture the company’s improved business and its strategy in a fast-moving AI landscape, though he declined to give a timeline. The OpenAI deal, while not disclosed in detail, strengthens the narrative Cerebras is likely to present to future investors: long-term revenue visibility, blue-chip customers, and a clear role in the AI infrastructure stack.
More broadly, the agreement underscores a shift underway across the AI industry. As models proliferate and use cases expand, the focus is moving from raw innovation to reliability, cost control, and scale. AI developers are no longer content to rely on a single hardware supplier or a single cloud partner. They are assembling portfolios of compute options, blending GPUs, specialized accelerators, and custom silicon to optimize performance and economics.
In that context, OpenAI’s partnership with Cerebras is less about replacing Nvidia and more about reshaping the balance of power in AI computing. It suggests a future in which no single company controls the pipes that feed the world’s most advanced models, and where specialized hardware players have a real chance to carve out enduring roles alongside the industry’s giants.