It Takes 20 Years of Food & Water to Develop a Human: Altman Pushes Back on AI Water, Energy Consumption Claims

Altman drew a sharp line between what he called exaggerated per-query water claims and the very real macro-scale energy buildout AI will require, arguing the infrastructure challenge is about power generation — not gallons per prompt.


At a moment when artificial intelligence is reshaping industries and straining infrastructure planning, Sam Altman is confronting one of the most persistent criticisms head-on: the environmental cost of AI.

Speaking on the sidelines of the India AI Impact Summit in an interview with The Indian Express, the OpenAI chief executive dismissed viral claims that ChatGPT consumes gallons of water per query as “completely untrue” and “totally insane,” arguing that such figures bear “no connection to reality.”


The remarks land amid intensifying scrutiny of data center expansion, resource use, and AI’s long-term sustainability.

The Water Narrative — and What It Misses

Concerns about AI’s water footprint stem largely from how data centers are cooled. Many traditional facilities rely on evaporative cooling systems that draw significant volumes of water to regulate temperatures for densely packed servers.

Yet the link between a single AI query and water consumption is not direct. Water use occurs at the infrastructure level — in cooling systems and, in some regions, in power generation itself — rather than at the level of an individual prompt.

Cooling technology is also evolving. Hyperscale operators are deploying closed-loop liquid systems, advanced air cooling, and even water-free designs in some new builds. Efficiency gains per compute unit have improved steadily, though rising overall demand may offset those gains.

A recent projection by water technology firm Xylem and Global Water Intelligence estimated that water drawn for cooling could more than triple over the next quarter-century as global computing expands. That forecast reflects aggregate growth, not per-interaction intensity.

Altman’s pushback suggests he views the viral framing — “gallons per query” — as a distortion that conflates systemic resource use with marginal consumption.

Energy: The Real Constraint

The one concern Altman acknowledged as legitimate is electricity demand.

“Not per query, but in total — because the world is using so much AI … and we need to move towards nuclear or wind and solar very quickly,” he said.

The distinction he is drawing is fundamental to understanding AI’s environmental calculus.

AI systems consume energy at two primary stages:

  1. Training: the compute-intensive process of building large models, often requiring massive parallel processing over weeks or months.
  2. Inference: the ongoing use of trained models to generate outputs in response to user inputs.

Training can require substantial bursts of energy, but inference — especially once hardware and software are optimized — is far less energy-intensive per transaction. The challenge lies in scale. Billions of inferences across millions of users translate into persistent demand on grids.
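The training-versus-inference distinction can be made concrete with back-of-envelope arithmetic. The sketch below uses entirely hypothetical figures (the training cost, per-query energy, and query volume are illustrative assumptions, not measurements) to show how a tiny marginal cost per query can rival a one-off training run once usage reaches billions of requests per day:

```python
# Back-of-envelope sketch: why tiny per-query energy still adds up at scale.
# All figures below are hypothetical placeholders, not measurements.

TRAINING_ENERGY_MWH = 1_000        # assumed one-off training cost for a large model
ENERGY_PER_QUERY_WH = 0.3          # assumed marginal energy per inference, in watt-hours
QUERIES_PER_DAY = 1_000_000_000    # assumed one billion queries per day

daily_inference_mwh = QUERIES_PER_DAY * ENERGY_PER_QUERY_WH / 1_000_000  # Wh -> MWh
days_to_match_training = TRAINING_ENERGY_MWH / daily_inference_mwh

print(f"Daily inference energy: {daily_inference_mwh:.0f} MWh")
print(f"Days of inference to equal one training run: {days_to_match_training:.1f}")
```

Under these assumed numbers, aggregate inference overtakes the entire training budget in a matter of days, which is exactly the "challenge of scale" the paragraph above describes.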

According to a May report from the International Monetary Fund, global data center electricity consumption had already reached levels comparable to those of Germany or France by 2023, shortly after the debut of ChatGPT.

That comparison underscores how quickly AI has shifted data centers from background infrastructure to frontline energy consumers.

The Human Brain Analogy

Altman also addressed comparisons drawn by Bill Gates, who has suggested that the human brain’s efficiency implies AI systems could become dramatically more energy-efficient over time.

Altman argued that many comparisons overlook the energy embedded in human development.

“It takes like 20 years of life, and all the food you eat before that time, before you get smart,” he said.

He suggested the more appropriate benchmark is energy consumed per response once a model is trained — and by that metric, he believes AI may already be competitive.

The analogy has sparked debate. Critics argue that equating human cognition with computational systems risks flattening ethical distinctions. Sridhar Vembu of Zoho Corporation publicly criticized the equivalence, saying he does not want to see technology equated with human beings.

Beyond philosophy, the exchange highlights a deeper issue: AI efficiency is often discussed without standardized metrics. Measuring energy per inference, per token generated, or per model lifecycle produces very different narratives.
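The metrics point can be illustrated directly. In the sketch below, one hypothetical workload (all three input figures are illustrative assumptions) is reported three ways; each number is arithmetically consistent with the others, yet each supports a very different headline:

```python
# Sketch: the same hypothetical workload reported under three different metrics.
# All numbers are illustrative assumptions, not real measurements.

total_energy_wh = 500_000_000      # assumed energy over a model's deployment, in Wh
total_inferences = 1_000_000_000   # assumed number of responses served
tokens_per_inference = 500         # assumed average output length

per_inference_wh = total_energy_wh / total_inferences
per_token_wh = per_inference_wh / tokens_per_inference
lifecycle_kwh = total_energy_wh / 1_000

print(f"Per inference: {per_inference_wh} Wh")      # reads as negligible
print(f"Per token:     {per_token_wh} Wh")          # reads as even smaller
print(f"Lifecycle:     {lifecycle_kwh:,.0f} kWh")   # reads as enormous
```

Without an agreed-upon denominator, "AI is efficient" and "AI is an energy hog" can both be true statements about the same system.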

Infrastructure, Investment, and Political Friction

The debate is unfolding as governments and corporations commit billions to new data center capacity. AI has become a strategic priority, intertwined with economic competitiveness and national security.

To accommodate growth, some governments are accelerating approval processes for new power generation — including nuclear, solar, and wind. Environmental advocates caution that rapid buildouts could complicate climate commitments if fossil fuels fill short-term supply gaps.

Local resistance is also mounting. In San Marcos, Texas, the city council recently rejected a proposed $1.5 billion data center after sustained public opposition over concerns about grid strain and rising electricity costs.

These disputes reveal a widening tension between national AI ambitions and local resource constraints. Data centers are capital-intensive, geographically concentrated, and highly visible infrastructure projects.

One of the central questions is whether technological efficiency can outpace demand growth.

Historically, improvements in chip design and software optimization have reduced energy use per computation. However, AI workloads are expanding so rapidly that total consumption continues to climb — a classic case of the rebound effect, where efficiency gains stimulate additional usage.
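The rebound effect described above is easy to model. The loop below assumes a 30% annual efficiency gain per computation against a workload that doubles each year (both rates are hypothetical); total energy use still climbs:

```python
# Sketch of the rebound effect: per-computation efficiency improves,
# but workload growth outpaces it. Both annual rates are hypothetical.

energy_per_op = 1.0   # arbitrary units
ops = 1.0
totals = []
for year in range(5):
    totals.append(energy_per_op * ops)  # total energy consumed this year
    energy_per_op *= 0.7                # assumed 30% efficiency gain per year
    ops *= 2.0                          # assumed workload doubles per year

print([round(t, 3) for t in totals])
```

Because demand grows by 2x while efficiency improves only by 1/0.7 ≈ 1.43x, total consumption rises roughly 40% per year despite steadily better hardware and software.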

Altman’s call for accelerated nuclear and renewable deployment implicitly acknowledges that efficiency alone will not solve the energy equation. Expanded generation capacity appears inevitable if AI adoption continues at current rates.
