Anthropic in Early Stages of Exploring Possibilities of Designing its Own AI Chips

Anthropic is in the very early stages of exploring the possibility of designing its own AI chips. The company hasn’t committed to the idea, formed a dedicated team, or settled on any specific architecture. It could still decide to continue solely buying chips from existing suppliers.

Sources described the discussions as preliminary, driven by the chronic shortage of high-end AI accelerators needed to train and run ever-larger models. Anthropic currently relies on a diversified mix of hardware: NVIDIA GPUs, including recent use of Blackwell for at least one major model like Mythos; Google's TPUs via a major expansion on Google Cloud, potentially up to ~1 million TPUs in partnership with Broadcom; and Amazon's Trainium and Inferentia chips through its primary cloud and training partnership on AWS, including the massive Project Rainier cluster.

This multi-vendor strategy provides resilience, but surging demand for Claude, with Anthropic's annualized revenue reportedly tripling to a $30B+ run rate, is straining supply and driving up costs.

Designing in-house silicon could give Anthropic more control over performance, power efficiency, and long-term economics, reducing what some call the "Nvidia tax" on margins and availability. This move would not be isolated: other frontier labs and hyperscalers are pursuing similar paths, with Meta and OpenAI already running custom chip projects.


Google (TPUs), Amazon (Trainium/Inferentia), and Microsoft (Maia) have long invested in custom AI silicon. Partnerships like Anthropic's Broadcom-backed TPU expansion show the company is already leaning into semi-custom designs before going fully in-house. Designing a competitive AI chip from scratch is extremely expensive (hundreds of millions of dollars) and technically demanding, and success isn't guaranteed: NVIDIA still dominates thanks to its CUDA software ecosystem, scale, and iterative hardware improvements.

Many attempts at custom AI accelerators have underperformed or been abandoned. If Anthropic moves forward, it could lower long-term compute costs, optimize hardware specifically for Claude's architecture and safety-focused training methods, and further diversify away from any single supplier. However, execution risks are high, and it would take years to reach production scale.

For now, the report signals strategic caution amid explosive AI growth rather than an imminent break from NVIDIA or its cloud partners. This fits the ongoing vertical integration push in AI: labs realizing that software model performance is increasingly bottlenecked by hardware access and cost. The compute race is shifting from who has the most GPUs toward who can build or control the best silicon stack.

We’ll likely see more such explorations as inference and training demands continue to outpace supply. Custom chips could reduce long-term dependence on expensive Nvidia GPUs and ease shortages. Optimization for Claude’s architecture might improve training and inference efficiency, power usage, and performance per watt, lowering the massive compute bills that frontier labs face.

Custom silicon would also give Anthropic greater control over hardware tailored to safety-focused or model-specific needs, potentially accelerating development cycles. However, success is far from guaranteed: designing a competitive AI accelerator can cost roughly $500 million upfront, plus years of engineering, manufacturing (likely via TSMC or a similar foundry), and software ecosystem building.

Execution risk is high, and failure could waste significant capital. Near term, Anthropic continues diversifying through deals such as its expanded Google TPU capacity with Broadcom, scaling toward multi-gigawatt levels, and CoreWeave for Nvidia-based cloud compute.
