France Pushes for Stricter MiCA Limits on Dollar Stablecoins

The Banque de France has requested that European Union regulators strengthen the Markets in Crypto-Assets (MiCA) framework regarding stablecoin oversight. French authorities maintain that while MiCA provides a foundational legal infrastructure, it fails to address the emerging threats posed by stablecoins originating from outside European jurisdiction.

Dollar-backed stablecoins (primarily USDT) remain the primary concern. Regulatory reports show that roughly 98% of stablecoins in circulation worldwide are backed by the U.S. dollar, a situation regulators view as entrenching a permanent dependence on foreign currency in digital finance. If European users continue to rely on private dollar-based tokens for their daily payments, the prominence of the euro within the financial ecosystem could erode, a shift that could undermine the euro’s long-term standing against the dollar.

Deputy Governor Denis Beau of the Banque de France has declared that MiCA provides only partial protection against the dangers linked to widespread asset usage. Therefore, French authorities want to establish precise regulatory changes to enhance monitoring capabilities and reduce dependence on non-European stablecoins.

One of the main proposed changes involves restricting the use of non-euro stablecoins in certain payment scenarios. Regulators are particularly focused on monitoring large-scale or systemic transactions, where dependence on foreign tokens could expose the European financial system to external shocks. By restricting the payment functions of cryptocurrencies, authorities seek to maintain their supervisory power over the eurozone.

A second major proposal aims to strengthen requirements for reserve holdings. The Banque de France calls for European stablecoin issuers to maintain their reserves in euro currency instead of U.S. dollars. Implementing this practice would decrease currency mismatch risks and promote the development of euro-backed stablecoins across digital markets.

The proposals also include new reporting measures intended to give supervisors a more accurate view of stablecoin activity. France has already moved at the national level, establishing rules that require individuals to report digital asset holdings over €5,000 kept in self-managed wallets.

The push is also intended to address financial stability concerns. Stablecoins are designed to hold a fixed value, but that peg depends on their reserve assets and the trustworthiness of their issuers. A failure or loss of confidence in a major issuer could trigger large-scale redemptions and disrupt markets. European regulators believe these risks are significantly higher when issuers operate outside EU borders, as this complicates oversight and emergency handling.

Therefore, the EU aims to reduce dependency on foreign digital currencies through its MiCA legislation, as the framework encourages local market development, supporting euro-based stablecoin creation and digital euro implementation.

The proposed changes seek to reshape the European crypto environment, adding euro-backed assets to a market currently dominated by dollar tokens.

While these measures — if adopted — could lead to greater stability and clearer regulation, they may also present operational difficulties for market players and raise questions regarding innovation and market competitiveness.

OpenAI in Talks to Commit Up to $1.5bn to Private Equity JV, Signaling a New Phase in the Battle for Corporate AI

OpenAI is preparing a sweeping push into the corporate technology market through a new joint venture structure that blends venture-scale ambition with private equity capital discipline, in what industry insiders see as one of the most aggressive moves yet to lock in enterprise adoption of artificial intelligence.

The initiative, known internally as DeployCo, is expected to be valued at around $10 billion when its first funding round closes in early May, according to people familiar with the matter cited by the Financial Times. OpenAI will anchor the vehicle with an initial $500 million equity investment, with total commitments potentially rising to $1.5 billion over time.

At its core, DeployCo is designed to do something OpenAI has not previously attempted at this scale: industrialize the distribution of its workplace AI tools through private equity networks that control large swathes of the global corporate economy. Rather than relying on individual enterprise contracts, the structure is intended to embed AI deployment decisions at the portfolio level, where operational changes can be executed across multiple companies simultaneously.

The backers reflect that ambition. Private equity firms, including TPG, Bain Capital, Advent International, Brookfield, and Goanna Capital, are expected to invest roughly $4 billion into the venture.

What has drawn particular attention in financial circles is the structure of returns. According to the report, investors are being offered a guaranteed annual return of 17.5% over five years, a rare feature in high-growth technology partnerships where returns are typically contingent on performance. OpenAI will also have the option to inject an additional $1 billion at a later stage, while retaining super-voting shares that give it effective control over DeployCo’s direction.
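To put that guarantee in perspective, 17.5% a year compounds into a large obligation over five years. A back-of-the-envelope check (assuming annual compounding, which the report does not specify):

```python
# Back-of-the-envelope: what a guaranteed 17.5% annual return implies
# over five years, assuming annual compounding (the report does not
# specify the compounding convention).
RATE = 0.175
YEARS = 5

multiple = (1 + RATE) ** YEARS          # growth multiple on invested capital
cumulative_pct = (multiple - 1) * 100   # total return over the period

print(f"Growth multiple after {YEARS} years: {multiple:.2f}x")
print(f"Cumulative return: {cumulative_pct:.0f}%")

# Applied to the reported ~$4bn of private equity commitments, the
# implied end-of-period value would be roughly:
committed_bn = 4.0
print(f"Implied value on ${committed_bn:.0f}bn committed: ${committed_bn * multiple:.1f}bn")
```

Under those assumptions the commitments would have to roughly 2.2x over five years, which is the sustainability question raised later in the article.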

The design underlines both opportunity and urgency: the enterprise AI market has become the central battleground for the next phase of industry competition, as growth in consumer-facing products begins to normalize and attention shifts to long-term corporate integration. The challenge for OpenAI is no longer model capability alone, but distribution, ensuring its tools are embedded deeply enough into business workflows to become indispensable.

That competition is already well underway. Rival Anthropic has gained traction in enterprise environments, particularly with its Claude models, which have found early adoption in coding, compliance, and knowledge management tasks. Reuters reported earlier this year that both firms have been actively courting private equity groups, recognizing their influence over procurement decisions and operational strategy across large corporate portfolios.

DeployCo is a direct response to that dynamic. Private equity firms do not merely finance companies; they often shape restructuring, cost optimization, and technology adoption across entire portfolios. OpenAI is effectively attempting to bypass fragmented enterprise sales cycles and instead secure systemic adoption across multiple businesses at once by embedding AI tools at that level.

The approach also reflects a shift in how AI monetization is evolving. Early gains in the sector were driven by consumer applications and developer ecosystems. The next phase is increasingly about integration into core business systems, from finance, supply chain management, and legal operations to customer service, where efficiency gains can be measured in cost savings and productivity improvements rather than user growth alone.

However, guaranteed returns of 17.5% introduce financial obligations that could become difficult to sustain if enterprise adoption lags expectations or if corporate spending on AI slows. The arrangement effectively ties OpenAI’s expansion strategy to a capital structure that resembles private credit more than traditional venture funding, adding a layer of pressure not usually associated with software deployment.

The move also highlights how tightly interwoven capital markets have become with AI infrastructure. Private equity firms are increasingly acting as intermediaries in technological adoption, bridging the gap between software providers and legacy industries that are still working through digital transformation cycles.

For OpenAI, the venture is as much about speed as scale. Enterprise adoption is often slow, fragmented, and dependent on internal procurement cycles. DeployCo attempts to compress that timeline by centralizing decision-making across investment portfolios, turning AI rollout into a coordinated operational directive rather than a series of individual corporate experiments.

The broader backdrop is a market in transition. Companies are under pressure to demonstrate measurable returns from AI investments, moving beyond experimentation toward embedded, productivity-linked use cases. That shift is beginning to separate vendors that can deliver integration at scale from those still focused on standalone products.

In that environment, DeployCo represents a wager that, if successful, could give OpenAI a structural advantage in enterprise penetration that rivals would struggle to replicate. If it falters, the financial guarantees and capital commitments could weigh heavily on the company.

Altman Escalates AI Governance Clash, Accuses Anthropic of ‘Fear-Based Marketing’ of Mythos in Deepening Battle Over Frontier Model Control

The competition over frontier artificial intelligence has moved beyond product rivalry into an open dispute over narrative control, safety authority, and who gets to define the boundaries of access.

OpenAI CEO Sam Altman has accused rival Anthropic of deliberately amplifying existential fears to market its newest model, Claude Mythos, while simultaneously restricting access to a tightly selected group of corporate partners.

Speaking on the “Core Memory” podcast hosted by Ashlee Vance, Altman characterized the messaging around Anthropic’s rollout as strategically alarmist.

“It is clearly incredible marketing to say, ‘We have built a bomb. We were about to drop it on your head. We will sell you a bomb shelter for $100 million to run across all your stuff, but only if we pick you as a customer,’” he said.

The framing points to a deeper ideological divide in Silicon Valley over whether frontier AI systems should be broadly distributed under controlled safeguards or concentrated within a limited set of vetted institutions.

Anthropic has opted for the latter approach with Claude Mythos. The company has withheld a public release, citing heightened cybersecurity capabilities within the model, particularly its ability to identify system vulnerabilities that could be misused. Instead, it introduced a restricted access framework known as Project Glasswing.

Under that programme, only 11 organizations were granted access, including Google, Microsoft, Amazon Web Services, Nvidia, and JPMorgan Chase. The selection spans cloud infrastructure, semiconductor manufacturing, and financial services, effectively placing frontier model access within a narrow layer of global digital infrastructure providers.

The rationale is rooted in the containment risk for Anthropic. As models become more capable of autonomous reasoning and system-level analysis, the potential for dual-use exploitation increases, particularly in cybersecurity contexts. Restricting access, in this view, becomes a precondition for controlled experimentation.

Altman rejects the implication that such restrictions are neutral or purely safety-driven. He argues that they also function as a form of narrative positioning that consolidates authority over AI deployment decisions.

“There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people,” he said. “You could justify that in a lot of different ways, and some of it’s real like there are going to be legitimate safety concerns. But if what you want is like, ‘We need control of AI, just us, because we’re the trustworthy people,’ I think the fear-based marketing is probably the most effective way to justify that.”

The disagreement reflects a broader structural tension emerging in the AI sector: whether governance should be decentralized through broad access and iterative safeguards, or centralized through controlled deployment to a small set of institutions deemed capable of managing systemic risk.

OpenAI, under Altman’s leadership, has generally pursued a more distributed model, releasing models to the public with layered safety constraints, usage monitoring, and incremental capability expansion. Even so, Altman acknowledged that not all systems would be broadly released.

“There will be very dangerous models that will have to be released in different ways,” he said. “The goal here is to benefit everybody and also to, I don’t say market this in a way but like get the world to come on this journey with us, and to say, ‘We are going to give you more powerful technology, there’s going to be responsibility that goes along with that.’”

“We are going to try to help set up the world for as much success as we can,” he added.

The contrast between the two approaches has sharpened as model capabilities accelerate. Anthropic’s Claude Mythos reportedly demonstrates heightened competence in identifying cybersecurity weaknesses, a capability that raises both defensive and offensive implications. In restricted-release environments such as Project Glasswing, those risks are managed through controlled exposure, limiting both the user base and the operational context.

Critics of restricted deployment models believe that they risk concentrating power in a small set of corporations, effectively turning frontier AI into an infrastructure layer governed by private gatekeepers rather than broadly accessible tools. Proponents counter that premature mass deployment could amplify misuse risks before sufficient containment mechanisms are established.

The rivalry has also become increasingly personal. Anthropic’s chief executive, Dario Amodei, previously held senior roles at OpenAI before founding the competing firm, embedding institutional memory and philosophical divergence into the competition itself.

Altman suggested that the broader discourse around AI risk has intensified tensions within the industry.

“I think the doomerism talk hasn’t helped. I think the way certain other labs talk about us hasn’t helped,” he said, adding, “I think the way Anthropic talks about OpenAI doesn’t help.”

Beyond corporate positioning, the dispute indicates an unresolved policy vacuum. Governments have yet to establish consistent global frameworks for frontier AI deployment, leaving major labs to effectively define their own governance regimes. In that environment, safety arguments, commercial strategy, and institutional trust become difficult to separate.

What is emerging is not just a product race, but a contest over legitimacy: who is authorized to build, release, and constrain systems that are increasingly embedded in critical infrastructure, financial networks, and cybersecurity operations. As capability thresholds continue to rise, the divide between open deployment and controlled access is likely to deepen, turning today’s rhetorical conflict into a defining fault line in the governance of advanced artificial intelligence.

X Launches Custom Timelines, Pinnable Topic Feeds Alongside the For You and Following Tabs

X just launched Custom Timelines, a major new feature that lets you pin hyper-personalized, topic-specific feeds directly to your Home tab. Instead of relying only on the general For You or Following tabs, you can now select from more than 75 topics, with more on the way, and pin a dedicated timeline for that niche.

Examples include art, finance, sports, tech, or whatever you’re deeply into. Each Custom Timeline is powered by Grok: Grok understands the content of virtually every post on X and combines that understanding with X’s existing personalization algorithm. The result is a feed tailored to you around that single topic, and it gets even better if you already engage with that subject a lot. This makes it easier to dive deep into specific interests without the unrelated content that sometimes floods the main timeline.

Early access is currently limited to Premium subscribers on iOS, with an Android rollout coming soon. The launch was announced by X’s Head of Product, Nikita Bier, as one of the platform’s biggest changes in a while. It feels like a smart evolution: more control over what you see, less noise, and deeper engagement in the niches that matter to people. Grok’s role in classifying content at scale is what makes the personalization actually work at this level.

Grok’s content classification on X is the core AI capability that powers features like Custom Timelines, the personalized For You feed, and ranking in the Following tab. Grok doesn’t just scan for keywords or hashtags. It semantically understands the meaning, context, nuance, and intent of virtually every post on X — including text, images, videos, threads, replies, and memes.

For each post, Grok performs real-time analysis to determine its main topics (for example, whether it is about Formula 1 or AI ethics), its sub-niches (it can distinguish deep sub-topics), how substantive, original, or engaging the content is, and signals such as sarcasm, humor, misinformation risk, or cross-topic connections. This understanding is then combined with X’s traditional personalization engine (your past likes, replies, follows, dwell time, and so on) to decide what shows up where.

When you pin one of the 75+ available topics, such as Art, Finance, Tech, Sports, or Memes, Grok filters the entire firehose of X posts, classifies which posts belong to your chosen topic using its semantic model rather than rigid rules, and then personalizes the feed further: posts you’re more likely to engage with, based on your history in that niche, rank higher.
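That classify-then-rank pipeline can be sketched in miniature. Everything below is invented for illustration, including the topic labels and the engagement weights; it is not X’s or Grok’s actual system, which uses a semantic classifier rather than precomputed labels:

```python
# Illustrative sketch of a classify-then-rank feed pipeline, loosely
# modeled on the description above. All data and scoring here are
# invented for demonstration purposes.

def build_topic_timeline(posts, topic, engagement_history):
    """Filter posts to one topic, then rank by a per-user engagement score."""
    # Step 1: topic filter. A real system would classify each post
    # semantically; here posts carry a ready-made topic label.
    on_topic = [p for p in posts if p["topic"] == topic]

    # Step 2: personalization. Posts resembling what the user has
    # engaged with before rank higher.
    def score(post):
        return engagement_history.get(post["subtopic"], 0.0)

    return sorted(on_topic, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "finance", "subtopic": "stablecoins"},
    {"id": 2, "topic": "sports",  "subtopic": "f1"},
    {"id": 3, "topic": "finance", "subtopic": "gold"},
]
history = {"gold": 0.9, "stablecoins": 0.4}  # hypothetical engagement weights

timeline = build_topic_timeline(posts, "finance", history)
print([p["id"] for p in timeline])  # the gold post outranks the stablecoin post
```

The two-stage shape (hard topic filter, then soft personalized ranking) is the key design idea: the filter keeps the feed on-topic, while the ranking makes it improve as you interact with that subject.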

The result is a clean, dedicated timeline focused only on that topic, one that gets smarter and more precise the more you interact with that subject. As the official description puts it, the feature is powered by Grok’s understanding of every post combined with the algorithm’s personalization, meaning every timeline is made just for you, and it works even better when it’s a topic you already engage with.

Grok helps recommend posts beyond the accounts you follow by classifying content and matching it to your inferred interests. Even posts from accounts you do follow are now ranked by Grok’s predictions of what you’ll find engaging rather than purely chronologically. Topic snooze, content moderation signals, spam and low-quality filtering, and future prompt-based feed adjustments all lean on this same classification layer.

Traditional social media algorithms rely heavily on engagement metrics (likes, retweets, views) plus basic topic tagging. Grok shifts toward genuine understanding: it handles nuance, context, sarcasm, and evolving slang; it scales to hundreds of millions of posts daily; and it reduces noise in niche feeds because it is comprehending meaning rather than just matching keywords.

The system is still evolving. X has been moving toward a purely AI-driven algorithm, with Grok playing a central and growing role in ranking and filtering. In short, Grok acts as X’s intelligent content layer, reading and categorizing posts at massive scale so the platform can serve hyper-relevant, topic-specific experiences instead of one generic algorithmic soup.

Global Central Banks Hold Approximately 38,666 Metric Tonnes of Gold Amid Uncertainty in the Middle East

Global central banks collectively hold approximately 38,666 metric tonnes of gold, roughly 17–18% of all gold ever mined throughout human history. That total has grown steadily thanks to net buying by many emerging-market central banks in recent years, though a few, such as Turkey, have sold in certain periods.

Estimates from the World Gold Council and similar sources put total above-ground gold stocks at around 216,000–220,000 tonnes as of late 2025/early 2026; about two-thirds of that has been mined since 1950 thanks to modern technology. Dividing 38,666 by ~216,265 tonnes (a common recent above-ground stock figure) gives roughly 17.9%, so the “roughly 17%” claim holds up well, with minor variations depending on exact reporting dates and whether “all gold ever mined” strictly means above-ground stocks.
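The percentage is easy to verify from the figures quoted above:

```python
# Verify the "roughly 17-18%" claim using the figures quoted above.
CENTRAL_BANK_TONNES = 38_666
ABOVE_GROUND_TONNES = 216_265   # a common recent above-ground stock estimate

share = CENTRAL_BANK_TONNES / ABOVE_GROUND_TONNES * 100
print(f"Central bank share of above-ground gold: {share:.1f}%")  # ~17.9%
```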

Central banks are indeed major players in the gold market: they provide a floor for demand, especially during geopolitical uncertainty, inflation concerns, or de-dollarization efforts by some nations. In 2025 they net bought around 863 tonnes; forecasts for 2026 hover around 800–850 tonnes, still well above historical averages.

Top holders include the United States (~8,133 tonnes), Germany, Italy, France, Russia, and China, which has been adding steadily. Many emerging economies are increasing their share as a hedge. The rest of the above-ground stock is split roughly as follows: jewelry accounts for ~44–45%; bars, coins, and ETFs (investment) for ~23%; and industrial, decorative, and other uses make up the remainder.

This distribution underscores gold’s dual role as both a monetary asset (central bank reserves) and a cultural and commodity one (jewelry, especially in places like India and China). The figure also highlights a long-term shift: after decades of selling or stability, many central banks have become consistent net buyers since around 2010, viewing gold as a neutral reserve asset with no counterparty risk.

The 38,666-tonne number is a solid snapshot, though exact totals are updated quarterly via IMF and World Gold Council data. Central bank accumulation acts as a structural demand floor: net purchases (863 tonnes in 2025, with forecasts of around 800–850 tonnes for 2026) remove physical supply from the market more or less permanently, since central banks are long-term holders rather than traders.

This tightens the supply-demand balance, putting upward pressure on gold prices and contributing to rallies; gold hit record highs above $5,000/oz in recent periods amid strong official buying. Official demand also reduces downside volatility during corrections, as central banks often provide consistent bids when private demand weakens.

Large buys can cause short-term price spikes and increased volatility, while their presence signals confidence, encouraging other investors to follow. In a market where annual mine supply is only ~4,700–5,000 tonnes, central banks have accounted for 15–26% of demand in recent years, making them a dominant force.
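As a rough sanity check on that dominance, compare the reported 2025 official purchases against the annual mine supply figures quoted above. Note that total demand also includes recycled gold, so this is a bounding approximation, not the 15–26%-of-demand figure itself:

```python
# Rough check: 2025 official net purchases as a share of annual mine
# supply, using the figures quoted above. Total demand also includes
# recycled gold, so this is an approximation, not a demand share.
NET_PURCHASES_2025 = 863            # tonnes
MINE_SUPPLY_RANGE = (4_700, 5_000)  # tonnes per year

low = NET_PURCHASES_2025 / MINE_SUPPLY_RANGE[1] * 100
high = NET_PURCHASES_2025 / MINE_SUPPLY_RANGE[0] * 100
print(f"Official buying equals {low:.0f}-{high:.0f}% of annual mine supply")
```

That central banks alone absorb nearly a fifth of each year’s newly mined gold is what makes them a structural, rather than marginal, source of demand.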

Gold has overtaken U.S. Treasuries in value within central bank reserves for the first time in decades: gold holdings of ~$3.87 trillion versus valuation-adjusted USD assets of ~$3.73 trillion in early 2026 data. The U.S. dollar’s share of global reserves has declined to ~56–57% as of 2025, partly as emerging-market central banks diversify into gold to hedge against geopolitical risks, sanctions, and potential dollar weaponization.

Gold serves as a neutral asset with no counterparty risk; it carries no issuer liability, unlike fiat currencies or bonds. This reflects broader de-dollarization trends, though the dollar remains dominant and gold acts as a hedge rather than a full replacement. Many central banks, especially in emerging economies such as China, India, Poland, Brazil, and Turkey, cite inflation protection, crisis performance, and portfolio diversification as motivations.

Surveys show 43–70% of central banks plan further increases, and 95% expect global gold reserves to rise. Countries are also reducing reliance on foreign currencies and custodians, for example by repatriating gold from the New York Fed or the Bank of England. Gold likewise hedges against inflation, debt, and uncertainty: with global debt exceeding $300 trillion and persistent inflation concerns in some regions, it helps preserve reserve value during economic stress or currency debasement.

There is also potential long-term pressure on dollar dominance. While not an immediate threat, sustained shifts could influence borrowing costs, capital flows, and the dollar’s role in trade and finance if confidence erodes further. Gold’s share of total foreign reserves is ~17%, up in value terms relative to GDP, signaling its renewed relevance post-Bretton Woods.

Persistent official demand supports higher average prices and a structurally elevated floor, even if private investment (ETFs) fluctuates. Analysts link this to forecasts of strong gold performance into 2026. Heavy buying often coincides with geopolitical tensions, high debt levels, or doubts about traditional reserve assets.

Unlike the 1990s–2000s, when some banks sold, the trend has reversed to net buying for 15+ years, with no surveyed banks planning reductions. The 38,666-tonne hoard underscores gold’s evolution from “barbarous relic” to strategic reserve asset in a multipolar world. It provides price support, accelerates diversification away from dollar-heavy portfolios, and reflects caution about fiat risks, but it also highlights ongoing global uncertainties rather than a complete overhaul of the system.