Artificial intelligence leader OpenAI has announced a major partnership with Broadcom to design and produce its own computer chips, in a move that underscores the company’s growing ambition to control every layer of the AI supply chain and reduce dependence on Nvidia’s dominant hardware.
The partnership, revealed Monday, will enable OpenAI to develop and deploy up to 10 gigawatts of custom AI accelerators, a massive amount of computing capacity roughly equivalent to the output of ten nuclear reactors. The project aims to power the company’s sprawling AI data centers that support products such as ChatGPT and Sora, and advance OpenAI’s long-term goal of creating superintelligent AI systems.
In a statement, OpenAI said that building its own chips allows it to “embed what it’s learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence.”
Under the deal, Broadcom will be responsible for deploying racks of custom-built hardware beginning in the second half of 2026, with full rollout expected to be completed by the end of 2029, according to details shared in the announcement.
OpenAI co-founder and CEO Sam Altman described the collaboration as a foundational step toward ensuring the company has the infrastructure it needs to sustain its rapid pace of innovation.
Part of a Wider Infrastructure Strategy
The Broadcom deal follows OpenAI’s six-gigawatt partnership with AMD and a 10-gigawatt deal with Nvidia, both announced earlier this year. Collectively, these agreements represent one of the most aggressive infrastructure buildouts in the AI industry, reflecting OpenAI’s intent to diversify its chip supply and secure guaranteed access to the computing power required to train and deploy massive AI models.
These arrangements became possible only after OpenAI modified its exclusive cloud computing agreement with Microsoft, which had previously limited its flexibility in sourcing compute from other vendors.
The move also places OpenAI alongside major technology firms such as Meta, Google, and Microsoft, all of which are investing heavily in custom AI chips to alleviate dependence on Nvidia’s graphics processing units (GPUs), which have become the gold standard for AI workloads.
Analysts say the push toward in-house chip design is both a strategic and economic response to the global shortage of AI chips, which has constrained supply chains and driven up costs. Though custom silicon projects have not yet threatened Nvidia’s market dominance, they have significantly benefited companies like Broadcom, which designs specialized components for AI systems.
Market Reaction
Investors welcomed the news, sending Broadcom shares up 9% on Monday, rebounding from a market selloff last Friday. Shares of other chipmakers also saw modest gains as optimism returned to semiconductor markets amid easing U.S.-China trade tensions.
Over the weekend, President Donald Trump sought to ease concerns about escalating geopolitical frictions with Beijing, writing on Truth Social that it “will all be fine,” in reference to trade talks and supply chain stability.
The timing of OpenAI’s Broadcom deal reflects broader efforts across the U.S. tech industry to fortify chip manufacturing alliances and secure long-term supply resilience in the face of unpredictable global dynamics.
Beyond timing, OpenAI’s decision to move into chip design marks a strategic milestone that could reshape the economics of AI development. By vertically integrating its hardware stack, the company gains the ability to tailor chip architecture to the unique demands of its AI models, potentially improving performance efficiency while cutting operational costs.
The 10-gigawatt infrastructure target places OpenAI’s ambitions on par with some of the world’s largest cloud operators. Analysts note that such capacity could support next-generation multimodal systems and large-scale simulation environments central to OpenAI’s pursuit of artificial general intelligence (AGI).
While Nvidia remains the undisputed leader in AI computing, the combined effect of OpenAI’s deals with AMD, Nvidia, and now Broadcom signals a new phase of diversification that may, over time, dilute Nvidia’s market control.
As the race for computing power accelerates, AI infrastructure has rapidly become one of the most lucrative segments of the technology industry, with smaller chip design firms and data center specialists also benefiting from the surge in demand.
For OpenAI, the partnership with Broadcom is not just about building chips; it is about building independence, ensuring that the next generation of AI innovation will not be limited by the availability of hardware.