OpenAI Reportedly Seeks Alternative Following Growing Dissatisfaction with Nvidia’s Chips

OpenAI is reportedly growing dissatisfied with some of Nvidia’s latest artificial intelligence chips, marking a notable shift in the balance of power shaping the global AI boom and introducing fresh uncertainty into a partnership that has, until now, symbolized the sector’s rapid rise.

The development is also emerging as a meaningful stress test for the chipmaker’s dominance, and a sign that the next phase of the AI boom may be shaped less by who trains the biggest models and more by who can run them most efficiently.

According to Reuters, eight sources familiar with the matter say the ChatGPT maker has been exploring alternatives to Nvidia hardware for parts of its computing needs since last year, with a particular focus on inference. Inference is the stage where trained AI models generate responses to user prompts, power applications, and handle real-time workloads. It is also where costs scale rapidly as usage grows, making chip efficiency and pricing critical.

Nvidia remains the clear leader in chips used to train large AI models, an area where its GPUs and software ecosystem have become deeply entrenched across the industry. But inference has increasingly become a separate and more commercially sensitive battlefield. As AI tools move from experimental deployments into everyday consumer and enterprise use, inference workloads now account for a rising share of total computing demand.

Sources say OpenAI believes some of Nvidia’s newer chips are optimized more heavily for training than for the kind of high-volume, always-on inference workloads that now dominate its operations. That perception has pushed OpenAI to assess other options, including specialized inference chips and alternative suppliers that promise better performance per watt or lower operating costs.

The shift underscores a broader challenge facing leading AI developers. Running models at scale is expensive, energy-intensive, and difficult to optimize. For OpenAI, whose products serve hundreds of millions of users, even marginal gains in efficiency can translate into significant savings. Inference chips, unlike training hardware, must balance speed, cost, and power consumption, particularly as regulators, customers, and investors scrutinize energy use more closely.
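To see why marginal gains matter, consider a deliberately simplified back-of-envelope sketch. Every figure below (the blended serving cost per million tokens, the daily token volume, the size of the efficiency gain) is a hypothetical assumption chosen for illustration, not a number reported about OpenAI:

```python
# Hypothetical back-of-envelope: why small inference-efficiency gains
# compound at scale. All inputs are illustrative assumptions, not
# OpenAI's actual figures.

cost_per_million_tokens = 0.50      # assumed blended serving cost, USD
tokens_per_day = 1_000_000_000_000  # assumed 1 trillion tokens served per day
efficiency_gain = 0.05              # assumed 5% cheaper inference on alternative hardware

daily_cost = tokens_per_day / 1_000_000 * cost_per_million_tokens
daily_savings = daily_cost * efficiency_gain
annual_savings = daily_savings * 365

print(f"Daily serving cost: ${daily_cost:,.0f}")      # $500,000
print(f"Daily savings (5%): ${daily_savings:,.0f}")   # $25,000
print(f"Annual savings:     ${annual_savings:,.0f}")  # about $9.1 million
```

Under these assumed numbers, a single 5% improvement in serving efficiency is worth roughly $9 million a year, and the savings scale linearly with volume, which is why inference pricing and performance per watt carry so much weight for a provider operating at this scale.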

This search for alternatives comes at a sensitive moment for Nvidia. The company has built its position not just on powerful hardware, but on an integrated stack that includes software, developer tools, and tight relationships with top AI labs. Any sign that its most high-profile customer is hedging its bets, even partially, introduces uncertainty about how durable that dominance will be as AI matures.

It also complicates a parallel set of discussions between the two companies. In September, Nvidia said it intended to invest as much as $100 billion in OpenAI, a deal that would give the chipmaker an equity stake while providing OpenAI with capital to secure scarce advanced chips. The talks highlighted how interdependent the two companies have become, with OpenAI relying on Nvidia’s supply and Nvidia benefiting from OpenAI’s scale and influence in shaping AI workloads.

OpenAI’s exploration of alternatives does not amount to a break with Nvidia. Sources stress that Nvidia remains central to OpenAI’s training infrastructure and will continue to supply a significant portion of its computing needs. Switching inference infrastructure at scale is technically complex and would likely happen gradually, if at all. Still, the move sends a signal across the AI sector that even Nvidia’s largest customers are unwilling to rely on a single supplier indefinitely.

The implications extend beyond the two companies. Inference is widely expected to become the dominant source of AI chip demand over the coming decade, as models are deployed across search, productivity software, customer service, and consumer devices. That creates an opening for rivals and for new chip designs tailored specifically to inference workloads, rather than the brute-force training that has defined the AI arms race so far.

The moment underscores Nvidia’s need to defend its leadership while adapting to a market that is shifting from building models to running them cheaply and reliably at scale. It also reflects OpenAI’s strategic effort to control costs, reduce dependency, and maintain flexibility as AI usage continues to grow.
