When AMD chief executive Lisa Su stepped onto the CES 2026 stage in Las Vegas, she was not just unveiling a new generation of chips. She was trying to recalibrate how the industry thinks about scale.
Artificial intelligence, she said, is growing so fast that familiar yardsticks for computing power no longer apply. The future, in her telling, belongs to a unit so large it still sounds theoretical: the yottaflop.
Su told the audience that keeping pace with AI over the next five years will require more than 10 yottaflops of compute. She paused mid-speech to underline how unfamiliar that number is.
“How many of you know what a yottaflop is?” she asked, inviting a show of hands. When none appeared, she explained it herself.
“A yottaflop is a one followed by 24 zeros. So 10 yottaflops is 10,000 times more compute than we had in 2022,” she said.
At its core, a flop is a single floating-point operation, the basic unit of numerical work a chip performs. A computer capable of performing one billion such operations per second is said to deliver a gigaflop. A yottaflop represents one septillion calculations every second. At that scale, scientists say, computers could theoretically run atom-level simulations of entire planets, workloads that today sit firmly in the realm of speculation.
What makes Su’s projection striking is not just the size of the number, but the speed at which the industry is approaching it. In 2022, global AI compute was estimated at roughly one zettaflop, or 10²¹ operations per second. By 2025, Su said, that figure had already surged beyond 100 zettaflops. The jump from zettaflops to yottaflops is not a smooth curve. It is a steep climb that compresses decades of historical progress into a few years.
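The arithmetic behind those claims is straightforward to check. A brief sketch, using the round figures cited in Su's keynote (approximate public estimates, not AMD data):

```python
# Back-of-envelope check of the scale jump Su described.
# All inputs are the round figures quoted in the keynote.
ZETTA = 10**21   # zettaflop: 10^21 operations per second
YOTTA = 10**24   # yottaflop: 10^24 operations per second

compute_2022 = 1 * ZETTA      # ~1 zettaflop of global AI compute in 2022
compute_2025 = 100 * ZETTA    # Su's figure: beyond 100 zettaflops by 2025
target = 10 * YOTTA           # the five-year target: 10 yottaflops

print(target / compute_2022)  # 10000.0 -> the "10,000x over 2022" claim
print(target / compute_2025)  # 100.0   -> still a 100x climb from 2025 levels
```

Even granting the 100x surge already logged between 2022 and 2025, the remaining distance to 10 yottaflops is another two orders of magnitude.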
“There’s just never, ever been anything like this in the history of computing,” Su told the conference.
To grasp the magnitude, Su compared her forecast with the most powerful machine currently in operation. The US Department of Energy’s El Capitan supercomputer, which tops global rankings today, would need to be scaled up roughly 5.6 million times to reach 10 yottaflops. Even the vast data centers being built by cloud giants fall dramatically short of that benchmark.
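That multiplier is easy to reproduce. A minimal sketch, assuming El Capitan's publicly reported TOP500 score of roughly 1.742 exaflops (the precise baseline Su used is not stated, which accounts for the small gap with her ~5.6 million figure):

```python
# Sanity-check on the "millions of El Capitans" comparison.
# Assumption: El Capitan benchmarks at ~1.742 exaflops (TOP500 Rmax);
# Su's exact baseline figure was not disclosed.
EXA = 10**18
el_capitan = 1.742 * EXA   # ~1.742 exaflops
target = 10 * 10**24       # 10 yottaflops

print(round(target / el_capitan / 1e6, 1))  # ~5.7 million El Capitans
```

Whichever benchmark number one plugs in, the answer lands in the millions, which is the point of the comparison.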
The implication is that AI’s next phase is no longer limited by software ingenuity alone. It is colliding with hard physical constraints. Power consumption has already become a central issue. Training large AI models and running them at scale requires enormous amounts of electricity, and that demand is placing visible strain on the US power grid. Data center operators are competing for capacity, while utilities warn that generation and transmission upgrades are struggling to keep up.
Scaling compute by several more orders of magnitude would require a parallel transformation of energy infrastructure. More power plants, stronger grids, advanced cooling systems, and new approaches to efficiency would all be necessary. In that sense, the yottaflop challenge extends far beyond chipmakers. It touches energy policy, industrial planning, and national infrastructure strategy.
There is also an economic dimension. The cost of building and operating yottaflop-scale systems will be immense. As computing becomes more concentrated in a handful of hyperscale players, questions around access, pricing, and competition are likely to intensify. Smaller firms and research institutions may find themselves locked out of the most advanced AI capabilities unless new models for shared infrastructure emerge.
Against this backdrop, Su used the CES keynote to position AMD as a key supplier for what comes next. She unveiled the company’s next generation of AI accelerators, including the MI455 GPU, underscoring AMD’s push deeper into the data-center market. The company is increasingly targeting customers building massive AI systems, including OpenAI, as it seeks to close the gap with Nvidia in high-performance AI hardware.
The timing comes as AI is moving from experimentation into industrial-scale deployment. Governments are embedding it into national strategies, companies are baking it into core products, and scientific research is leaning on it for breakthroughs. That shift is driving demand for compute at a scale that was barely discussed a few years ago.
Su’s message at CES was less a distant forecast than a warning shot. If AI continues on its current trajectory, the world will be forced to rethink how computing power is built, powered, and governed. The yottaflop, once a mathematical curiosity, is rapidly becoming the next benchmark.