In a move to expand beyond its traditional mobile processor business, Qualcomm has announced two new artificial intelligence chips — the AI200 and AI250 — designed to rival Nvidia’s dominance in the fast-growing AI hardware market.
The company said the AI200 will launch in 2026, followed by the AI250 in 2027, marking its most ambitious push yet into high-performance computing. Both chips are built on Qualcomm’s Hexagon neural processing unit (NPU) architecture, the same technology that powers AI features in its Snapdragon chips for smartphones and laptops.
Unlike Nvidia’s flagship GPUs, which are designed for both training and inference, Qualcomm’s new chips are focused solely on AI inference — the process of running already-trained models efficiently in data centers or edge computing systems.
According to CNBC, Qualcomm’s AI processors can be deployed inside large data racks, with up to 72 chips operating as a single computer, mirroring Nvidia’s and AMD’s multi-GPU configurations.
The AI200 chip will feature 768GB of RAM, optimized for inference workloads such as generative AI applications, voice assistants, and multimodal reasoning tasks. Qualcomm said the next-generation AI250 will deliver a “generational leap in efficiency,” offering “much lower power consumption” while maintaining high processing power — a critical advantage as energy costs rise across AI data centers.
Qualcomm’s expansion into AI datacenter hardware has already attracted major partnerships. Humain, the AI firm backed by Saudi Arabia’s Public Investment Fund (PIF), announced plans to adopt both the AI200 and AI250 chips to power its AI datacenters across the Middle East. The collaboration is part of a broader effort by Saudi Arabia to position itself as a regional AI hub, with large-scale investments in generative AI and cloud infrastructure.
The partnership underscores Qualcomm’s growing importance in AI infrastructure beyond consumer electronics, extending its reach into sovereign computing and national AI strategy projects.
Taking Aim at Nvidia’s Stronghold
The move comes as Nvidia continues to dominate global AI hardware, controlling more than 80% of the market for AI chips used in data centers. Qualcomm’s approach — emphasizing energy-efficient inference rather than high-cost training — is expected to carve out a niche in a sector increasingly focused on cost-effective scalability.
By leveraging the low-power, high-efficiency design expertise honed over years of building mobile processors, Qualcomm aims to position the AI200 and AI250 as attractive alternatives for enterprises and governments seeking AI infrastructure with lower operating costs.
Qualcomm’s decision to focus on inference chips also aligns with the next wave of AI deployment, in which companies are shifting from training massive models to running and scaling them efficiently across devices and servers.
The launch represents a major strategic shift for the company, which has traditionally derived most of its revenue from smartphone and telecommunications chips. The company’s growing investment in AI semiconductors signals its intention to diversify revenue streams and participate in what many analysts call the “AI compute boom.”
The AI200 and AI250 chips are expected to help Qualcomm secure a foothold in a market long dominated by Nvidia and AMD — a move that may redefine the global semiconductor market.