Global AI Leaders Earn Failing Marks On Catastrophic-Risk Management As Watchdog Warns Of Widening Safety Gap

A new international assessment has delivered a stark verdict on the state of AI risk management, warning that the companies building the world’s most powerful systems are not prepared to control them.

The study, conducted by AI-safety specialists at the nonprofit Future of Life Institute, found that the eight most influential players in the sector “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand.”

The AI Safety Index evaluated leading U.S., Chinese, and European firms across a range of risk categories, with a particular focus on catastrophic harm, existential threats, and the long-term control problem surrounding artificial general intelligence. U.S. companies ranked the highest overall, led by Anthropic. OpenAI and Google DeepMind followed. Chinese firms were clustered at the bottom of the table, with Alibaba Cloud placed just ahead of DeepSeek. Yet the broader picture was grim for everyone. No company achieved better than a D on existential-risk preparedness, and Alibaba Cloud, DeepSeek, Meta, xAI, and Z.ai all received an F.

The warning at the heart of the report argues that the industry’s pursuit of ever-larger and more capable systems is not being matched by an equivalent investment in safety architecture.

“Existential safety remains the sector’s core structural failure,” the report stated, adding that the gap between escalating AGI ambitions and the absence of credible control plans is increasingly alarming.

The authors wrote that “none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” even as companies race to build superhuman systems.

The report urges developers to release more details on their internal safety evaluations and to strengthen guardrails that address near-term harms, including risks such as AI-induced delusions known as “AI psychosis.” That recommendation ties into a broader conversation playing out across governments and research institutions about the need for clearer, more enforceable standards as frontier AI models grow in power.

In an interview accompanying the report, UC Berkeley computer scientist Stuart Russell delivered one of the most pointed rebukes yet from a veteran of the field.

“AI CEOs claim they know how to build superhuman AI, yet none can show how they’ll prevent us from losing control – after which humanity’s survival is no longer in our hands,” he said.

Russell argued that if companies insist on developing systems that could meaningfully exceed human capability, then the burden of proof must rise to match that risk.

“I’m looking for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements,” he said. “Instead, they admit the risk could be one in 10, one in five, even one in three, and they can neither justify nor improve those numbers.”

Companies named in the index responded by stressing their existing safety programmes. A representative for OpenAI said the firm was working with external specialists to “build strong safeguards into our systems, and rigorously test our models.” Google said its Frontier Safety Framework includes protocols for detecting and mitigating severe risks in advanced models, with the company pledging to evolve that framework as capabilities grow.

“As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities,” a spokesperson said.

The findings land at a moment when governments, regulators, and the research community are locked in an unresolved debate about how fast the world is moving toward AGI, and how close today’s frontier systems are to thresholds that would require nuclear-grade governance.

National security agencies have begun to warn about the potential misuse of next-generation models in cyberwarfare and biological threats. At the same time, companies continue to roll out faster, more powerful models to keep up with competitors, a pace some researchers say has made cautious development nearly impossible.

The Index suggests that, in the absence of binding regulation, the incentives inside the industry still tilt decisively toward capability over caution. The report’s authors said that this imbalance, combined with the absence of independent oversight with real enforcement power, leaves the world exposed to both near-term and long-term risks.

The result is an industry caught between two accelerating forces: an economic race to build the most capable systems on earth, and a widening regulatory gap that struggles to catch up. The report warns that unless that gap narrows, the next wave of AI breakthroughs could arrive with fewer guardrails than the moment demands.

EU Opens Sweeping Antitrust Probe into Google’s AI Data Practices

Google is facing fresh regulatory heat in Europe after the European Commission on Tuesday launched a formal antitrust investigation into the company’s use of online content to train and power its artificial intelligence services.

The probe marks one of the bloc’s most significant moves yet to scrutinize how a dominant American tech platform gathers and deploys data in the emerging AI economy.

The Commission said it is examining whether Google violated EU competition rules by drawing on content from web publishers and creators on YouTube in ways that could distort competition, impose unfair terms on content owners, or give the company an artificial advantage over rival AI developers. Regulators want to know whether Google is using material it does not adequately compensate for, and whether publishers have any meaningful ability to refuse without jeopardizing their visibility on Google Search.

Teresa Ribera, the EU’s commissioner for competition, framed the inquiry as part of a wider effort to ensure that Europe’s shift toward AI does not erode core market principles.

“AI is bringing remarkable innovation and many benefits for people and businesses across Europe, but this progress cannot come at the expense of the principles at the heart of our societies,” Ribera said.

She added that regulators are investigating whether Google has applied unfair terms to publishers and creators while disadvantaging other AI model developers in a way that breaches EU rules.

The initial focus rests heavily on Google’s AI Overviews and AI Mode — both of which can ingest and summarize publisher content as they respond to user queries. EU officials said they will assess how much of this material is derived from news sites, independent creators, and video uploads, whether it is properly licensed or compensated, and whether Google has used its dominance in search to force smaller publishers into arrangements that deprive them of leverage.

A Google spokesperson pushed back sharply. “This complaint risks stifling innovation in a market that is more competitive than ever,” the company told CNBC in its first response.

The spokesperson added that Google would continue working with news organizations and creative industries “as they transition to the AI era,” arguing that Europeans should not be denied access to new technologies.

The probe puts Google under even deeper scrutiny just months after the EU fined the company nearly 3 billion euros for allegedly distorting competition in the ad tech sector. Google’s global head of regulatory affairs, Lee-Anne Mulholland, said in September that the decision was “wrong” and confirmed the company would appeal. She insisted the firm does not act anti-competitively, pointing to a growing field of ad tech rivals offering alternative services.

Those earlier penalties were part of a broader effort by Brussels to limit the power of large digital platforms — an effort that has gained urgency with the rapid rise of AI. Regulators across the bloc have warned that the economic and informational clout of companies like Google, Meta, Apple, and Amazon risks becoming even more entrenched as generative AI systems are trained on vast troves of digital content.

For the EU, the fear is that control of those datasets and the models built on them could shape everything from advertising markets to news distribution.

The latest probe is occurring against a backdrop of escalating conflicts between the EU and U.S. tech leaders. Just days ago, the Commission fined Elon Musk’s platform X 120 million euros for failures tied to ad transparency requirements and what regulators described as “deceptive design” around its verification system. Musk reacted by calling for the European Union to be abolished altogether, prompting criticism and counterattacks from Republican officials in Washington.

The bloc also opened an antitrust investigation into Meta last week, targeting the company’s new policy giving AI developers access to WhatsApp metadata. Regulators said that the arrangement may break EU competition rules by granting preferential access to Meta’s ecosystem.

The Commission’s pressure campaign signals an increasingly assertive posture toward AI-related activity. European officials are not only enforcing existing antitrust frameworks but also preparing for the next regulatory phase under the Digital Markets Act and the AI Act — both intended to blunt the structural advantages big platforms hold in data access, distribution, and algorithmic scale.

Google now sits at the center of this storm. The company is trying to push aggressively into generative AI after OpenAI’s rapid rise and the escalating competition from Anthropic, Meta, and a swarm of new enterprise model developers. Training world-class models requires staggering volumes of data, and that has placed companies like Google in a difficult position: they depend on publisher content to remain competitive, but those publishers increasingly depend on regulators to counterbalance Google’s dominance.

The new investigation will test whether Google’s AI products are built on an unfair foundation or whether they simply represent the natural progression of a competitive market. For publishers and creators, it poses a larger question about survival in a landscape where their work fuels AI systems that can summarize, repackage, and potentially displace their original content.

Whatever the outcome, Google now faces one of the most consequential regulatory tests of its AI strategy — a test that may significantly shape how its models are trained.

Binance Expands USD1 Stablecoin Integration with New Spot Trading Pairs

Binance, the world’s largest cryptocurrency exchange by trading volume, has significantly boosted support for USD1—the USD-pegged stablecoin issued by World Liberty Financial (WLFI), a project backed by the Trump family.

This move aims to enhance liquidity and adoption of USD1 as a bridge between traditional finance and decentralized ecosystems. Trading is now live for BNB/USD1, ETH/USD1, SOL/USD1, and BTC/USD1. These pairs allow users to trade major assets like BNB, Ethereum, Solana, and Bitcoin directly against USD1.

All users enjoy 0% fees on USD1/USDT and USD1/USDC pairs starting December 11, 2025 (UTC+8). VIP levels 2–9 and spot liquidity providers get zero fees on the new major pairs (BNB/USD1, BTC/USD1, ETH/USD1, SOL/USD1).

From December 11, 2025 (17:00 UTC+8), USD1 will be available as a unified margin asset in Binance Futures’ multi-assets mode. All backing assets for Binance-Peg BUSD (B-Token) will automatically convert to USD1 at a 1:1 ratio, completing within one week of the announcement. This effectively sunsets BUSD collateral in favor of USD1.

USD1, launched earlier in 2025, is backed 1:1 by U.S. dollar deposits, short-term Treasuries, and cash equivalents, with custody via BitGo Trust Company for regulatory compliance. It’s designed for institutional use and cross-chain compatibility with Ethereum, Solana, and BNB Chain.

The stablecoin’s market cap has grown to around $2.7 billion, and this Binance integration—without listing fees—positions it as a compliant alternative to USDT and USDC, especially amid U.S. regulatory shifts under the Trump administration.

Community reaction on X has been enthusiastic, with WLFI co-founder Donald Trump Jr. calling it a “rewrite of global finance rules” and users highlighting the zero-fee perks for high-volume trading.

Analysts note this could accelerate USD1’s role in DeFi payments and arbitrage, though it’s unavailable in restricted regions like the U.S. due to ongoing compliance hurdles.

CZ Foresees a 2026 Crypto “Supercycle” Driven by Institutions

Changpeng Zhao (CZ), Binance’s co-founder and former CEO, shared an optimistic outlook at the Bitcoin MENA conference in Abu Dhabi on December 9, 2025.

He predicted that the traditional four-year Bitcoin halving cycle may be disrupted, potentially giving way to a prolonged “supercycle” in 2026—characterized by sustained growth rather than sharp corrections.

Unlike past retail-driven cycles, this one is led by Wall Street via Bitcoin ETFs, corporate treasuries (e.g., MicroStrategy), and sovereign accumulations (e.g., El Salvador, Bhutan). CZ emphasized that ETF inflows and regulated custody could provide “enough force to offset the four-year cycle.”

U.S. Federal Reserve rate cuts, quantitative easing, and pro-crypto policies under President Trump, including lighter regulations and national Bitcoin reserves, could fuel a longer bull phase. CZ likened Bitcoin’s potential yearly gains to gold’s historical 60% annual returns.

While CZ avoided exact price targets, he suggested Bitcoin could hit $500K–$1M this cycle, driven by tokenization of assets and crypto’s integration into payments. He contrasted his multi-chain view with Bitcoin maximalists like Michael Saylor, advocating for broader crypto adoption.

After his U.S. pardon earlier in 2025, CZ positioned himself as an independent advisor, noting governments now seek his input on regulation and reserves. This aligns with broader analyst forecasts, like Bernstein’s $150K Bitcoin target for 2026 and CoinMarketCap’s Q1 2026 bull run prediction based on macro indicators.

On X, the clip of CZ’s remarks went viral, with users debating if it signals a “broken cycle” or just hype—though sentiment leans bullish, tying into USD1’s Trump-linked momentum. These developments underscore 2025’s theme of institutional crypto maturation.

As always, crypto markets are volatile; DYOR and consider risks like regulatory changes. If you’re trading USD1 pairs, the zero-fee window starts soon—monitor Binance announcements for updates.

L.xyz Reveals LXYZ Token Presale for Its Hybrid AMM + Order Book Trading Ecosystem on Solana

L.xyz has officially revealed the presale for its native token LXYZ, a key milestone in the construction of its advanced trading ecosystem built on the Solana blockchain. The platform is introducing a hybrid model that combines automated market maker (AMM) liquidity with a centralized-style order book. This approach aims to improve precision, reduce slippage, and deliver fast execution for a wide range of traders.

The L.xyz team is focused on creating an exchange where speed, stability, and flexibility come together. Solana provides the foundation for this vision, offering high throughput, low transaction fees, and rapid finality. Traders using L.xyz will have access to spot markets, futures markets, and high leverage tools that reach up to 100x on select pairs.

A Dual Engine Designed for Better Trading

The hybrid AMM and order book model allows L.xyz to offer liquidity depth similar to centralized venues while keeping the entire system non-custodial. Liquidity pools ensure that users can execute trades at any time, even during volatile conditions. At the same time, the order book provides more control for traders who prefer structured entries, limit orders, and stop orders.

This structure is designed to support professional strategies and fast reactions to market changes. With Solana handling transaction processing in milliseconds, traders can engage with the market more efficiently than on traditional blockchain-based DEXs.
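
To make the dual-engine idea concrete, here is a minimal, purely hypothetical Python sketch of routing a trade between an AMM pool and an order book. L.xyz has not published its matching engine; the class names, pool sizes, and prices below are invented for illustration only.

```python
# Hypothetical illustration only: route a sell order to whichever venue
# (constant-product AMM pool or best resting order-book bid) returns more
# of the quote asset. Not L.xyz code; all names and numbers are invented.

from dataclasses import dataclass


@dataclass
class AmmPool:
    base_reserve: float   # tokens of the asset being sold held by the pool
    quote_reserve: float  # tokens of the asset being received

    def quote_out(self, amount_in: float) -> float:
        """Constant-product (x * y = k) output for a given input, ignoring fees."""
        k = self.base_reserve * self.quote_reserve
        return self.quote_reserve - k / (self.base_reserve + amount_in)


@dataclass
class BookLevel:
    price: float  # quote tokens offered per base token
    size: float   # base tokens this level can absorb


def route_sell(amount_in: float, pool: AmmPool, best_bid: BookLevel) -> str:
    """Compare the AMM quote with filling against the best bid; pick the better venue."""
    amm_out = pool.quote_out(amount_in)
    book_out = min(amount_in, best_bid.size) * best_bid.price
    return "order_book" if book_out > amm_out else "amm"


if __name__ == "__main__":
    pool = AmmPool(base_reserve=10_000, quote_reserve=500_000)  # ~50 quote per base
    bid = BookLevel(price=49.50, size=2_000)
    print(route_sell(100, pool, bid))  # prints "amm" for these made-up numbers
```

A production engine would also handle partial fills split across both venues, fees, and slippage limits; the point of the sketch is only that a hybrid design can quote both liquidity sources and execute wherever the trader gets the better price.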

LXYZ Token Presale Opens Access to Core Ecosystem Features

The LXYZ token is central to how users will participate in governance, staking, liquidity programs, and future cross-chain utilities. The presale includes 40 percent of the total supply, equal to 200 million tokens, divided into ten structured phases. Each phase offers 20 million tokens with a price increase of 0.05 USD from the previous phase.

This phased model gives early supporters access to the token at transparent and predictable prices. Presale tokens include a lock-up period and vesting structure to maintain long term stability within the ecosystem.
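
As a rough illustration of the phase arithmetic described above, the sketch below prices all ten phases from an assumed phase-one price. The announcement excerpted here does not state the actual starting price, so the 0.10 USD figure is a placeholder, not an official number.

```python
# Presale arithmetic sketch: ten phases of 20 million LXYZ each, with the
# price stepping up by 0.05 USD per phase. ASSUMED_PHASE1_PRICE_USD is a
# placeholder; the announcement excerpted here does not state the real value.

PHASES = 10
TOKENS_PER_PHASE = 20_000_000     # 10 x 20M = 200M tokens, 40% of total supply
PRICE_STEP_USD = 0.05
ASSUMED_PHASE1_PRICE_USD = 0.10   # hypothetical starting price

for phase in range(1, PHASES + 1):
    price = ASSUMED_PHASE1_PRICE_USD + (phase - 1) * PRICE_STEP_USD
    proceeds = price * TOKENS_PER_PHASE
    print(f"Phase {phase:2d}: {price:.2f} USD per token, "
          f"{proceeds:,.0f} USD if the phase sells out")
```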

Tools Created for Both Active Traders and Liquidity Providers

L.xyz offers a set of features intended to support traders who need speed and accuracy as well as users who prefer passive income opportunities.

Key features include:

• Spot, margin, and futures markets
• Risk-management tools, including stop-loss and liquidation monitoring
• Liquidity mining programs that reward users with LXYZ tokens
• Staking options for long-term participants
• Real-time charting and analytics for market analysis

The platform will continue expanding with AI-based tools, automated strategies, cross-chain swap functionality, and multi-chain liquidity pools. These future capabilities are outlined in the L.xyz roadmap and position the exchange for broader ecosystem growth.

Governance Led by the Community

L.xyz is structured as a fully non-custodial exchange. Users keep full control of their assets through Solana-compatible wallets. Governance decisions will be made by LXYZ token holders, who can vote on listings, fee adjustments, feature upgrades, and proposals submitted through the DAO.

This governance model ensures that the direction of L.xyz reflects the interests of its community rather than a centralized authority.

Building Toward a High Performance Trading Future

With its hybrid trading engine, focus on low-latency execution, and long-term roadmap, L.xyz is working to create a high-performance environment for both retail and institutional-level traders. The presale provides the first opportunity for users to engage with the ecosystem and contribute to its development.

As adoption grows, the platform is designed to evolve into a comprehensive, fast, and reliable trading hub powered by the Solana blockchain.

Telegram: T.me/ldotxyz

X: X.com/ldotxyz

Aware Super’s new CIO warns of “orange lights” in AI financing

Australia’s Aware Super has entered the new year with a guarded but confident view of the global artificial intelligence boom, as its newly appointed chief investment officer, Simon Warner, flags emerging fragilities in the way some AI ventures are now being financed.

Warner, who took over leadership of the A$210 billion fund’s investment team last week, said the AI sector’s economic model is shaping up to be the defining financial market risk of 2026, even as earnings growth from the dominant players continues to justify steep valuations.

Warner told Reuters that the extraordinary rise in AI infrastructure spending — from data centers to large language models — had until recently been underwritten by the most stable source available in capital markets: retained earnings from companies with long track records of profitability. That created a sense of comfort for institutional investors who viewed the boom as self-funded rather than debt-driven or reliant on speculative capital.

He noted that the tone has shifted. Over the past six months, a trickle of more exotic financing structures has begun to appear, raising concerns about how some companies are bankrolling their AI expansions. Warner described these new arrangements as “circular financing” and “conduit financing,” mechanisms that move money through more complex channels rather than straight from a company’s balance sheet. In his words, “nothing red, but definitely orange,” signaling caution without alarm.

The shift comes at a time when AI-driven stock gains have become a major force in global markets. The Magnificent Seven — Microsoft, Apple, Alphabet, Nvidia, Meta, Amazon, and Tesla — have carried a significant share of U.S. equity performance, helping buoy investor wealth and household demand. Warner said that the relationship between tech valuations, capital expenditure, and the broader U.S. economy has created a delicate interdependence. If even one of those pillars falters, he warned, financial markets could quickly feel the shock.

The urge to scale up AI capacity has led to unprecedented spending across the industry. Meta disclosed in October that it secured a $27 billion financing deal from Blue Owl Capital for what will be its largest data center project globally. The company has been racing to expand its generative AI capabilities, which require enormous power, cooling, and real estate footprints. For many analysts, that financing arrangement underscored how the sector’s capital needs have grown beyond what routine cash flow can comfortably support.

Microsoft, which remains Aware Super’s second-largest listed holding in its balanced fund, has also been pouring billions into new data center clusters and advanced chips to support its partnership with OpenAI. Alongside Microsoft, Aware holds stakes in Nvidia, Apple, Alphabet, and Meta, giving Warner a direct view into how the world’s most influential companies are navigating the cost of staying ahead in AI.

Even with the “orange lights” flashing, Warner said the earnings trajectory of these firms still validates their lofty valuations, though he acknowledged that fatigue in capital expenditure could eventually threaten those valuations. He suggested that investors who have been wary of how long the spending boom can last are right to stay vigilant.

The concern is not about an imminent downturn but about whether the current pace of investment can be sustained without creating vulnerabilities. A significant pullback in spending by any of the big technology names could ripple through markets, tightening liquidity and altering the wealth effects that have propped up U.S. consumer sentiment. Warner described this possibility as a dynamic worth watching closely.

For global funds like Aware Super, the AI sector remains both an engine of returns and a source of latent risk — a combination that demands deep scrutiny as the industry moves into a more mature, capital-intensive phase. Warner’s early remarks as CIO position him as someone determined to track the fine print beneath the sector’s explosive growth, wary of the structures now emerging around what has become the world’s most expensive technological race.