A new international assessment has delivered a stark verdict on the state of AI risk management, warning that the companies building the world’s most powerful systems are not prepared to control them.
The study, conducted by AI-safety specialists at the nonprofit Future of Life Institute, found that the eight most influential players in the sector “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand.”
The AI Safety Index evaluated leading U.S., Chinese, and European firms across a range of risk categories, with a particular focus on catastrophic harm, existential threats, and the long-term control problem surrounding artificial general intelligence. U.S. companies ranked highest overall, led by Anthropic, with OpenAI and Google DeepMind following. Chinese firms clustered at the bottom of the table, with Alibaba Cloud placed just ahead of DeepSeek. Yet the broader picture was grim for everyone: no company achieved better than a D on existential-risk preparedness, and Alibaba Cloud, DeepSeek, Meta, xAI, and Z.ai all received an F.
The warning at the heart of the report argues that the industry’s pursuit of ever-larger and more capable systems is not being matched by an equivalent investment in safety architecture.
“Existential safety remains the sector’s core structural failure,” the report stated, adding that the gap between escalating AGI ambitions and the absence of credible control plans is increasingly alarming.
The authors wrote that “none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” even as companies race to build superhuman systems.
The report urges developers to release more details on their internal safety evaluations and to strengthen guardrails that address near-term harms, including risks such as AI-induced delusions known as “AI psychosis.” That recommendation ties into a broader conversation playing out across governments and research institutions about the need for clearer, more enforceable standards as frontier AI models grow in power.
In an interview accompanying the report, UC Berkeley computer scientist Stuart Russell delivered one of the most pointed rebukes yet from a veteran of the field.
“AI CEOs claim they know how to build superhuman AI, yet none can show how they’ll prevent us from losing control – after which humanity’s survival is no longer in our hands,” he said.
Russell argued that if companies insist on developing systems that could meaningfully exceed human capability, then the burden of proof must rise to match that risk.
“I’m looking for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements,” he said. “Instead, they admit the risk could be one in 10, one in five, even one in three, and they can neither justify nor improve those numbers.”
Companies named in the index responded by stressing their existing safety programmes. A representative for OpenAI said the firm was working with external specialists to “build strong safeguards into our systems, and rigorously test our models.” Google said its Frontier Safety Framework includes protocols for detecting and mitigating severe risks in advanced models, with the company pledging to evolve that framework as capabilities grow.
“As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities,” a spokesperson said.
The findings land at a moment when governments, regulators, and the research community are locked in an unresolved debate about how fast the world is moving toward AGI, and how close today’s frontier systems are to thresholds that would require nuclear-grade governance.
National security agencies have begun to warn about the potential misuse of next-generation models in cyberwarfare and biological threats. At the same time, companies continue to roll out faster, more powerful models to keep up with competitors, a pace some researchers say has made cautious development nearly impossible.
The Index suggests that, in the absence of binding regulation, the incentives inside the industry still tilt decisively toward capability over caution. The report’s authors said that this imbalance, combined with the absence of independent oversight with real enforcement power, leaves the world exposed to both near-term and long-term risks.
The result is an industry caught between two accelerating forces: an economic race to build the most capable systems on earth, and a regulatory apparatus struggling to catch up. The report warns that unless that gap narrows, the next wave of AI breakthroughs could arrive with fewer guardrails than the moment demands.