Microsoft’s plan to challenge Nvidia’s dominance in AI hardware has encountered a significant setback. A new report from The Information reveals that the company’s much-hyped AI accelerator chip, codenamed Braga, has been delayed by at least six months and is now expected to enter mass production in 2026.
More troubling for Microsoft, the chip is projected to lag far behind Nvidia’s Blackwell GPU in performance, undermining its ambition to compete on silicon and reduce dependency on external suppliers.
The report portrays a project struggling with internal disarray. Braga, part of Microsoft’s larger effort to produce custom silicon for its Azure cloud and AI infrastructure, was supposed to start rolling out in data centers in 2025. However, unexpected design revisions, pressure from OpenAI for added features, and a wave of employee exits have reportedly slowed progress. On some project teams, as many as one in five members has left, creating a vacuum of engineering talent at a critical phase.
Sources close to the project say the chip’s instability during simulation testing is one reason the launch had to be pushed back. Even so, rather than relaxing internal deadlines, Microsoft reportedly pressed on, leading to burnout and a disjointed development cycle.
The delay has opened a wider gap between Microsoft and Nvidia, which has already launched its next-generation Blackwell chips that are redefining the industry benchmark for AI performance. Microsoft now finds itself on the back foot, struggling to catch up in a fast-moving industry where AI model development and infrastructure evolution happen in months, not years.
Microsoft has invested in three chips so far—Braga, Braga-R, and Clea—targeted for data center deployment between 2025 and 2027. But with Braga’s delay, there is growing skepticism over whether the tech giant can meet its hardware roadmap. A separate chip initially meant for training AI models was reportedly scrapped earlier this year, narrowing Microsoft’s focus solely to inference tasks.
This setback could have broader market implications. A competitive Microsoft AI chip was expected to be a disruptive force for Nvidia, whose GPUs power nearly every major generative AI service today. Had Microsoft succeeded in scaling its hardware in-house, it would have sharply reduced its reliance on Nvidia GPUs, potentially denting Nvidia’s future revenue. The scenario mirrors Intel’s experience: Intel suffered major revenue losses when Apple began producing its own M-series chips and no longer needed Intel processors in Macs.
But with Microsoft now facing delays and a performance gap, Nvidia appears to have a longer runway. CEO Jensen Huang has downplayed concerns about competition, stating: “What’s the point of building an ASIC if it’s not going to be better than the one you can buy?”
Microsoft’s challenges now seem to underscore his point.
The company’s earlier chip, the Maia 100, unveiled in 2023, has mostly remained in internal testing. Designed before the explosion of large language models, the Maia chip is geared more toward image processing than generative AI. Sources inside the company say Maia has not powered any of Microsoft’s flagship AI services, such as Copilot or OpenAI integrations.
If current trends continue, Clea, the final chip in Microsoft’s roadmap, may become the company’s first truly competitive offering. It’s not expected until 2027, which could leave Microsoft trailing Nvidia—and others—for at least another two years. That’s a lifetime in AI.
Until then, Microsoft will likely remain a major Nvidia customer, funneling billions into Nvidia GPUs for Azure and its enterprise AI services. The delay not only affects Microsoft’s cost structure but also stalls its ability to compete aggressively with Google, Amazon, and Meta, all of which are developing custom silicon of their own to control performance and spending at scale.
In an era where AI infrastructure is seen as the next frontier of computing, Microsoft’s struggles underscore how difficult it is, even for the world’s most powerful tech firms, to move away from Nvidia’s grip.