The next phase of the artificial intelligence boom will be defined less by who builds the biggest data centers and more by who figures out how to use far less power to achieve comparable results, according to former Facebook chief privacy officer Chris Kelly.
Speaking on CNBC’s Squawk Box on Tuesday, Kelly said the industry’s current obsession with scale is colliding with economic reality, as the cost of power, chips, and infrastructure rises sharply alongside mounting pressure on already stretched electricity grids.
“We run our brains on 20 watts. We don’t need gigawatt power centers to reason,” Kelly said. “I think that finding efficiency is going to be one of the key things that the big AI players look to.”
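The gap Kelly is gesturing at is easy to put in numbers. A rough, illustrative calculation, using a generic one-gigawatt facility rather than any specific project from the broadcast:

```python
# Illustrative scale comparison: a human brain runs on roughly 20 W,
# while planned AI campuses are measured in gigawatts.
brain_watts = 20          # approximate power draw of a human brain
campus_watts = 1e9        # one gigawatt of data center capacity

# How many brains' worth of power one gigawatt represents
brain_equivalents = campus_watts / brain_watts
print(f"1 GW ~ {brain_equivalents / 1e6:.0f} million brain-equivalents")  # ~50 million
```

By that crude yardstick, a single gigawatt-scale campus draws as much power as tens of millions of human brains.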
Kelly, who also served as Facebook’s general counsel, said companies that deliver breakthroughs in reducing data center and compute costs will ultimately emerge as the winners of the AI race. In his view, the arms race to build ever-larger facilities packed with high-end GPUs is becoming increasingly difficult to justify, even for the industry’s best-funded players.
That warning comes at a time when spending on AI infrastructure is accelerating at an unprecedented pace. The global data center market has seen more than $61 billion in infrastructure dealmaking in 2025 alone, according to S&P Global, as hyperscalers rush to lock down land, power connections, and long-term equipment supply.
OpenAI sits at the center of that expansion. The company has made more than $1.4 trillion in AI-related commitments over the coming years, spanning massive partnerships with Nvidia, Oracle, and data center operator CoreWeave. Much of that capital is earmarked for training and running increasingly sophisticated models that demand enormous computing power.
Yet the scale of these projects has intensified concerns about energy consumption. In September, Nvidia and OpenAI announced a project involving at least 10 gigawatts of data center capacity. Running continuously, that much capacity would consume roughly as much electricity in a year as about 8 million U.S. households, and it is close to New York City’s peak summer electricity demand in 2024, according to the New York Independent System Operator.
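Those comparisons hold up to a back-of-envelope check. The sketch below assumes the capacity runs at full load year-round and that an average U.S. household uses roughly 10,500 kWh a year, an order-of-magnitude assumption for illustration rather than a figure from the article:

```python
# Back-of-envelope check on the household comparison, assuming 10 GW
# of data center capacity running continuously.
capacity_gw = 10
hours_per_year = 8_760

annual_gwh = capacity_gw * hours_per_year   # 87,600 GWh, about 87.6 TWh
annual_kwh = annual_gwh * 1e6               # convert GWh to kWh

# Assumed average U.S. household consumption (illustrative estimate,
# not sourced from the article).
household_kwh_per_year = 10_500

households = annual_kwh / household_kwh_per_year
print(f"~{households / 1e6:.1f} million household-equivalents")  # ~8.3 million
```

At full load, 10 gigawatts works out to roughly 88 terawatt-hours a year, which is why the household comparison lands near 8 million.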
As utilities struggle to keep up with surging demand from AI facilities, questions are growing about where the power will come from, how quickly new generating capacity can be brought online, and whether grids can remain reliable. For AI developers, electricity is increasingly a strategic constraint, alongside access to advanced chips.
Cost concerns have been sharpened further by developments in China. In December 2024, Chinese startup DeepSeek released a free, open-source large language model that it said was developed for under $6 million, a figure that stood in stark contrast to the vast sums associated with U.S. AI projects. While industry experts have questioned whether that figure captures the full cost of development, the episode reinforced the idea that advanced AI may not always require massive budgets and sprawling infrastructure.
Kelly said these dynamics are likely to propel Chinese firms into a more prominent position in the next phase of AI development. He pointed to President Donald Trump’s recent decision to approve the sale of Nvidia’s H200 chips to China, a move that could significantly expand the country’s access to cutting-edge compute hardware.
“I think you’re going to see a number of Chinese players come to the fore,” Kelly said, adding that open-source models, particularly from China, could give users access to basic levels of compute as well as generative and agentic AI at far lower cost than proprietary systems.
Such a shift would carry wide implications for the global AI landscape. If efficiency gains and open-source approaches reduce the need for enormous capital outlays, the industry could become less concentrated among a small group of cash-rich U.S. firms. At the same time, pressure to curb power consumption could accelerate changes in chip design, model architecture, and software optimization, pushing developers to prioritize smarter, leaner systems over brute-force scale.
Kelly was pointing to a growing tension at the heart of the AI boom. While capital continues to pour into data centers and infrastructure, the limits imposed by energy supply and cost are becoming harder to ignore. As the industry matures, he suggested, the defining question will no longer be who can build the biggest machines, but who can deliver intelligence more efficiently, sustainably, and at a fraction of today’s power bill.