
JP Morgan’s JPM Coin Accelerates Expansion to Canton Network

J.P. Morgan, through its blockchain unit Kinexys (formerly Onyx), announced plans to issue its JPM Coin, also referred to as JPMD, a USD-denominated tokenized deposit, natively on the Canton Network.

JPM Coin is a bank-issued digital token representing U.S. dollar deposits held at J.P. Morgan. It enables institutional clients to make near-instant, 24/7 peer-to-peer payments and transfers on blockchain infrastructure, while maintaining the security and backing of traditional bank deposits. It is designed for wholesale and inter-institutional use rather than retail.

The Canton Network is a privacy-enabled, public Layer 1 blockchain developed by Digital Asset. It is built specifically for institutional finance, allowing synchronized, atomic settlement across different applications and participants while preserving privacy. Key participants and users include major institutions such as Goldman Sachs, BNP Paribas, HSBC, Broadridge, and others. It already handles significant volume, including over $350 billion daily in U.S. Treasury repo settlements in related ecosystems, and supports tokenized assets and regulated digital money.

The issuance will be native to Canton, not just bridged or wrapped. Institutions on Canton will be able to issue, transfer, and redeem JPMD near-instantly in a secure, interoperable environment. The rollout is phased throughout 2026, with the initial focus on building the technical and business frameworks for issuance, transfer, and redemption.
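To make the issue/transfer/redeem lifecycle concrete, here is a minimal, purely illustrative sketch of a tokenized-deposit ledger. The class, its method names, and its invariants are hypothetical and do not represent JPMorgan’s or Canton’s actual implementation; the point is only that tokens are minted 1:1 against deposits, moved peer-to-peer, and burned back into conventional deposits.

```python
# Illustrative toy model of a tokenized-deposit lifecycle:
# issue (mint against a bank deposit), transfer (peer-to-peer),
# and redeem (burn back into a conventional deposit).
# Hypothetical sketch; NOT JPMorgan's or Canton's implementation.

class TokenizedDepositLedger:
    def __init__(self):
        self.balances = {}      # institution -> token balance
        self.total_issued = 0   # should always equal backing deposits

    def issue(self, institution, amount):
        """Mint tokens 1:1 against dollars deposited at the issuing bank."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[institution] = self.balances.get(institution, 0) + amount
        self.total_issued += amount

    def transfer(self, sender, receiver, amount):
        """Peer-to-peer transfer; fails entirely if the sender is underfunded."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def redeem(self, institution, amount):
        """Burn tokens and release the underlying bank deposit."""
        if self.balances.get(institution, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[institution] -= amount
        self.total_issued -= amount
```

The key property the sketch illustrates is that the token supply always mirrors the backing deposits: issuance and redemption move `total_issued`, while transfers only reshuffle balances.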

Broader availability, including pilots with select clients, depends on testing and regulatory factors; some reports reference pilot activity or related integration steps potentially starting in 2025 or early 2026. The stated goals are to enhance efficiency, unlock liquidity, enable 24/7 real-time settlement of digital cash alongside tokenized assets, and support interoperable regulated digital money across financial markets. The move builds on JPM Coin’s prior expansion.

This move is part of a broader trend of institutional tokenization and blockchain adoption in traditional finance. Canton is gaining traction as shared infrastructure for major players, as seen in recent activity such as HSBC’s tokenized deposits and the upcoming DTCC Treasury tokenization. JPMorgan’s existing involvement as a participant in Canton applications, including prior JPM Coin integrations, makes this a natural extension.

As of April 2026, the integration is still in the planning and phased implementation stage, with no full production launch reported yet, but it reflects growing momentum for programmable digital payments in institutional settings. It enables near-instant, 24/7 peer-to-peer transfers and atomic settlement of digital cash alongside tokenized assets, reducing settlement times, counterparty risk, and operational friction compared to traditional systems.

The integration unlocks liquidity by allowing seamless movement of bank-backed USD deposits across Canton participants, including Goldman Sachs, HSBC, BNP Paribas, and Broadridge. Institutions on Canton can issue, transfer, and redeem JPM Coin directly in a synchronized, privacy-enabled environment.

Canton’s sub-transaction privacy supports confidential trades and settlements among competitors, making regulated digital money more viable for sensitive wholesale finance use cases. The move strengthens Canton as infrastructure for tokenized real-world assets (RWAs) and payments, and builds on JPM Coin’s existing volume and prior expansions, accelerating institutional blockchain use for payments, collateral, and risk management.

For J.P. Morgan, the move positions the bank as a leader in bank-issued digital money on public and permissioned chains, with potential for new revenue from higher transaction volumes, lower costs via automation, and expanded client services in a multi-chain setup. It also signals growing comfort with on-chain regulated cash.

The integration reinforces the shift toward programmable, interoperable digital finance in TradFi, and could influence regulatory views and encourage more banks to issue similar tokenized deposits, contributing to overall tokenization momentum. The move is seen as a pragmatic step bridging traditional banking rails with blockchain without compromising security or compliance. Full effects will unfold as the 2026 phases progress.

OpenAI’s $852bn Valuation Faces Investor Scrutiny as Enterprise Pivot Tests AI Leadership

OpenAI’s towering $852 billion valuation is coming under sharper examination from some of its own backers as the company recalibrates its growth strategy, shifting deeper into enterprise software and coding tools in a bid to counter rising competitive pressure from Anthropic and a reinvigorated Google.

The concerns, first reported by the Financial Times, come barely a month after OpenAI completed what is widely seen as the largest fundraising round in Silicon Valley history, raising $122 billion in an oversubscribed deal that cemented its status as one of the world’s most valuable private technology companies.

The central question now confronting investors is not whether OpenAI can raise capital, but whether its strategic direction can justify a valuation approaching $1 trillion as it moves toward a potential public offering later this year.

At the heart of the debate is the company’s shifting product roadmap. According to the report, OpenAI has redrawn its product strategy twice in the past six months, first in response to pressure from Google and more recently to defend market share against Anthropic, whose Claude ecosystem has been gaining traction, particularly in enterprise and developer workflows.

For some investors, that pace of strategic revision is beginning to raise questions about focus.

“You have ChatGPT, a 1 billion-user business growing 50-100% a year, what are you doing talking about enterprise and code?” an early backer told the FT.

“It’s a deeply unfocused company.”

That quote captures the core tension around OpenAI’s current positioning. On one side is ChatGPT, a consumer product that has become one of the fastest-growing platforms in technology history, with a user base and growth profile that most companies would be reluctant to divert attention from. On the other is the enterprise market, where revenue is typically stickier, margins can be higher, and investor appetite ahead of an IPO often hinges on recurring business contracts rather than consumer engagement metrics.

The shift suggests OpenAI is increasingly prioritizing the latter, and not merely as a defensive response.

Private market investors and future public shareholders tend to place a premium on predictable enterprise revenue streams, especially in software and infrastructure businesses. Consumer usage can drive brand dominance, but enterprise contracts are often what support sustained multiple expansion in public markets.

That makes the pivot toward code assistants, API integrations, enterprise agents, and workflow products financially rational, even if it risks diluting focus in the near term. The timing is especially sensitive because Anthropic is reportedly growing at an accelerated pace. Some industry watchers now expect Anthropic’s revenue growth to overtake OpenAI’s within the next few months, an assessment that has intensified pressure on OpenAI to defend its position in corporate AI deployments.

This matters because the revenue mix between the two companies is evolving differently. OpenAI still retains enormous consumer dominance through ChatGPT, while Anthropic has built significant momentum in enterprise coding, research, and developer-heavy use cases. That divergence is increasingly shaping investor perception ahead of possible IPO filings.

The competitive threat from Google adds another layer. Google’s renewed push through Gemini and enterprise AI tooling means OpenAI is now defending leadership on two fronts: consumer mindshare and enterprise monetization.

In that context, the product roadmap revisions may be viewed less as indecision and more as rapid adaptation in an industry where leadership positions can change within quarters.

Still, investor unease is clearly building.

At an $852 billion valuation, expectations are extraordinarily high. The market is no longer pricing OpenAI as simply the creator of ChatGPT. It is pricing the company as a long-term AI platform leader with durable monetization, enterprise scale, and eventual public-market readiness. That explains why even modest signs of strategic uncertainty attract outsized scrutiny.

OpenAI has strongly pushed back on the suggestion that investors are losing confidence. Chief Financial Officer Sarah Friar said the idea that backers are not supportive of the company’s strategy “defies the facts,” according to the report.

In a statement to Reuters, an OpenAI spokesperson reinforced that position, saying the $122 billion raise was “oversubscribed, completed in record time and backed by a broad set of leading global investors, reflecting strong conviction in both our direction, current business momentum, and long-term value.”

The broader insight is that OpenAI has entered a new phase where the debate is no longer about whether generative AI is transformational but about which business model best captures that transformation: mass consumer adoption, enterprise integration, or a hybrid approach.

For a company valued at $852 billion, every product decision is now being judged not only on innovation merit but on its implications for revenue durability, competitive moat, and IPO optics. That is why the scrutiny from its own investors may prove as consequential as the competitive threat from rivals.

The $40 billion accidental risk: Inside the Claude Code leak

@ChaofanShou (The researcher who first flagged the leak): “Claude code source code has been leaked via a map file in their npm registry! This is massive. You can see every single internal prompt and tool definition.”

Two weeks ago, X (formerly Twitter) erupted when an AI researcher posted the tweet above. It would eventually gain over 34 million views, with X users mobilising meet-ups to analyse the source code, others already forking and porting it, and skeptics wondering whether this was really an accident, since it seemed too good to be true.

Back story

The last day of March 2026 was the day Anthropic lost control of over 500,000 lines of pure TypeScript from its flagship developer tool, Claude Code. This was a classic case of a high-tech powerhouse being humbled by a low-tech mistake. What makes the event particularly striking and embarrassing is not just the scale: it wasn’t a sophisticated hack at all, but a simple human error in the npm packaging process that accidentally included a debugging sourcemap for Claude Code v2.1.88, supplying the techies enough tacos to feast on. Expectedly, within hours, the internet had de-obfuscated and reconstructed over 512,000 lines of TypeScript, effectively handing the world the blueprint for Anthropic’s flagship AI agent.

The Anatomy of the Leak: A bad day for the Claude crew

The leak exposed the inner workings of how Claude interacts with a user’s local file system, its “kairos” autonomous background agent mode, and even “Undercover Mode”, a feature that allowed Anthropic employees to contribute to public repositories while masking the AI’s involvement. Even though the model weights, the core brain, are still safe behind Anthropic’s servers, the scaffolding, that is, how the AI agent thinks, plans, and executes commands, is now public. The leak originated from a misconfigured release that unintentionally included a source map (.map) file, which linked to a full archive of Claude Code’s internal TypeScript codebase. Within hours, the code spread across thousands of GitHub repositories, making containment virtually impossible.
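The mechanism here is mundane. A JavaScript source map is just JSON, and when a bundler embeds the `sourcesContent` field, the original pre-compilation sources ship inside the shipped artifact. A minimal sketch of how anyone could recover sources from such a file (the file names below are hypothetical, not Anthropic’s actual artifact):

```python
import json

def extract_sources(map_text):
    """Recover original source files embedded in a JavaScript source map.

    Source maps are plain JSON. When a bundler populates
    `sourcesContent`, the full original source travels alongside the
    compiled build, which is how a stray .map file in an npm package
    can expose an entire TypeScript codebase.
    """
    source_map = json.loads(map_text)
    files = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    # Pair each listed source path with its embedded content.
    return dict(zip(files, contents))

# Hypothetical example of what a leaking .map file looks like:
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/index.ts"],
    "sourcesContent": ["const greeting: string = 'hello';"],
})
```

No de-minification is needed when `sourcesContent` is present; the original files fall straight out of the JSON, which is why containment after publication is effectively impossible.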

Importantly, Anthropic clarified that no user data or model weights were exposed. However, the leak did reveal internal architecture, hidden features, and product roadmap signals—effectively giving competitors a rare look under the hood of a top AI system. 

Implications for Anthropic

This incident creates a complex mix of risks and opportunities which will continue to unfold in the coming days. First beneficiaries here are the competitors. That’s not a hard guess.
The competitive exposure occasioned by this incident is served on a platter. The leak provides rivals with insights into Anthropic’s agent architecture, tooling strategy, and upcoming features.  It’s also a shortcut for any startup trying to build a rival coding assistant as they can now access how anthropic handles long-running tasks, error recovery and tool-calling logic. These vulnerabilities can also be weaponised against them. With the client-side logic exposed, bad actors can now find “jailbreaks” more easily. They can see exactly how the agent validates permissions, making it easier to craft malicious repositories that trick Claude into exfiltrating data or running unauthorised shell commands. Too bad but that’s only the pre-amble. That is to say that, the problem is not for the anthropic company alone, real-world security threats beyond their intellectual property loss has now been introduced as some leaked versions have already been repackaged with malware like Vidar (an information stealer) and Ghostsocks. This is a significant supply chain risk.

The company’s reputation as a leader in AI safety and security has also taken a massive hit. Coming at a time when Anthropic is already navigating tensions with the U.S. government over national security risks, this “human error” provides ammunition to critics who argue that AI labs cannot yet be trusted with sensitive defense-related deployments, inviting tighter scrutiny and regulation. Multiple leaks within a short period is not a positive sign; it undermines the company’s security record and raises significant concerns about operational discipline among partners, collaborators, and investors.

Ironically, in all of this, one piece of good news is that the leak has great potential to promote innovation across the industry, accelerating the advancement of the AI ecosystem in general. Developers now better understand how advanced AI agents are orchestrated, lowering the barrier to entry.

Anthropic can regain its edge: Recommendations

To turn this disaster into a pivot that can help it remain competitive and credible, Anthropic needs to move decisively. This is a call to double down on operational security and transparency. The mistake was entirely preventable: implementing stricter CI/CD checks, artifact scanning, and “fail-closed” deployment pipelines is a great place to start.
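As one illustration of what a fail-closed artifact scan might look like, here is a hedged sketch: a pre-publish check that refuses to release a package if any file matches a deny-list of suffixes. The deny-list and function name are hypothetical, not Anthropic’s actual pipeline.

```python
# Hypothetical fail-closed pre-publish check: CI blocks the release
# if the packaged file list contains anything on the deny-list.
# A sketch of the kind of scan that would have caught a debugging
# sourcemap before it reached the public npm registry.

BLOCKED_SUFFIXES = (".map", ".env", ".pem")  # illustrative deny-list

def audit_package_files(filenames):
    """Return the files that should never ship in a published package.

    CI should fail closed: if this list is non-empty, the publish
    step aborts rather than proceeding with a warning.
    """
    return [name for name in filenames if name.endswith(BLOCKED_SUFFIXES)]
```

In practice the file list would come from the packaging tool’s own manifest (for npm, the output of a dry-run pack), so the check audits exactly what would be uploaded rather than the working tree.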

In addition, the real moat in AI is increasingly shifting toward proprietary data, training techniques, and model performance, not just tooling. Anthropic should lean into strengthening Claude’s core intelligence, shifting from code advantage (scaffolding) to model advantage. As unpleasant as this incident has been, rather than fight the inevitability of leaks, Anthropic should prioritise selective transparency. Since its internal “Undercover Mode” has sparked a trust deficit, the company could open-source controlled components and position itself as a leader in responsible transparency, similar to how some companies leverage open systems strategically. Such a move would boost developer trust in its operations. Furthermore, clear communication, rapid patching, and visible improvements in security processes will be key to retaining the confidence of enterprise users and developers. To neutralise any advantage gained from the leak, the planned roadmap for product differentiation must be accelerated. The leak revealed ambitious features with internal codenames like “Capybara” and “Numbat”; Anthropic must ship these earlier than originally scheduled and smarter than originally intended, so that it stays ahead of the race and the old logic becomes obsolete. As an extra step, tightening the bond between the Claude Code CLI and secure hardware environments (like trusted execution environments) can make the leaked software logic useless for anyone trying to run it on unverified systems.

Final Thoughts

The Claude Code leak is more than just a technical mishap—it’s a signal moment for the AI industry. It highlights a paradox: the more powerful and complex AI systems become, the more fragile their operational layers can be. The leak is a bruising reminder that in the AI race, the “agentic” wrapper is just as valuable as the model itself. Anthropic still has strong fundamentals, but in a race where trust, speed, and innovation matter equally, incidents like this can shift momentum quickly. Anthropic’s “edge” no longer lies in how they built the tool, but in how fast they can evolve it beyond the version currently sitting in 8,000 GitHub mirrors. The next few months will determine whether this leak becomes a temporary setback—or a defining turning point.

Novo Nordisk Turns to OpenAI in High-Stakes Bid to Accelerate Drug Discovery and Regain Weight-Loss Lead

In yet another move that underscores how artificial intelligence is rapidly becoming a significant source of leverage in pharmaceuticals, Novo Nordisk has struck a far-reaching partnership with OpenAI to accelerate drug discovery, streamline clinical development, and sharpen operational efficiency across its global business.

The Danish drugmaker said Tuesday that the alliance is designed to help “bring new and better treatment options to patients faster,” a goal that carries enormous commercial and medical significance as the company battles to reclaim lost ground in the lucrative obesity and diabetes market.

The partnership is seen as a calculated response to intensifying pressure from Eli Lilly and Company, which has steadily eroded Novo’s early lead in the GLP-1 weight-loss segment. What is at stake is not simply innovation prestige, but leadership in a market projected to remain one of the most profitable corners of global healthcare for years.

Novo said the partnership will enable it to use advanced AI systems to analyze highly complex datasets, identify promising therapeutic candidates, and compress the time it takes for a medicine to move from early research to patient use.

“There are millions of people living with obesity and diabetes who need treatment options, and we know there are therapies still waiting to be discovered that could change their lives,” said Novo CEO Mike Doustdar. “Integrating AI in our everyday work gives us the ability to analyze datasets at a scale that was previously impossible, identify patterns we could not see, and test hypotheses faster than ever.”

That statement goes to the heart of the current transformation in life sciences. Drug discovery has historically been a long, expensive, and failure-prone process, often taking more than a decade and billions of dollars from molecule screening to regulatory approval. AI’s promise lies in reducing attrition rates early in the pipeline by spotting molecular patterns, biomarker relationships, and trial signals that conventional methods may miss.

OpenAI chief executive Sam Altman framed the partnership in broader industry terms.

“AI is reshaping industries and in life sciences, it can help people live better, longer lives,” Altman said. “This collaboration with Novo Nordisk will help them accelerate scientific discovery, run smarter global operations, and redefine the future of patient care.”

However, the scope of the partnership extends well beyond laboratory research. According to the company, OpenAI’s capabilities will also be deployed in manufacturing, supply chain optimization, distribution, and commercial operations, suggesting that Novo is seeking productivity gains across its entire value chain rather than limiting AI to drug screening alone.

In pharmaceuticals, the real bottlenecks often emerge not only in discovery but also in trial design, patient recruitment, site selection, regulatory documentation, and production scaling. These are precisely the areas where AI can generate faster near-term returns.

Industry experts have repeatedly noted that while the idea of AI “discovering the next miracle drug” captures headlines, the more immediate commercial upside often lies in operational acceleration.

Clinical trial optimization, for instance, remains one of the most time-consuming stages of development. AI can help identify patient populations, predict dropout risks, improve site matching, and streamline protocol design, shaving months off timelines and potentially saving hundreds of millions of dollars.

That matters enormously for Novo Nordisk at this moment as the company has been under mounting pressure to defend its position in the obesity market, where its flagship brands Wegovy and Ozempic once gave it a commanding first-mover advantage. That lead has narrowed as Eli Lilly expands aggressively with rival therapies and newer oral formulations.

Novo is now trying to claw back market share through its Wegovy pill and a next-generation pipeline that investors are watching closely. The market’s immediate reaction suggests investors view the OpenAI partnership with interest. Shares rose about 2.8% shortly after the opening bell, with some reports showing gains above 3% in premarket trade.

There is also a longer strategic arc here. This latest deal builds on Novo’s prior AI initiatives, including its collaboration with Nvidia and the use of Denmark’s Gefion sovereign AI supercomputer to accelerate biomedical research. Gefion has already been positioned as a core pillar of Denmark’s AI research infrastructure, and Novo’s latest move suggests the company is layering best-in-class generative AI capabilities on top of that compute backbone.

This points to a larger industry trend where the pharmaceutical race is increasingly becoming a contest of data infrastructure, computing power, and model sophistication.

The partnership thus goes beyond simply discovering drugs faster to restoring competitive momentum, protecting margins, accelerating time-to-market, and ensuring that Novo’s next blockbuster reaches patients before rivals do.

Oracle Leads as Tech Shares Rebound, Software Surge Signals Potential Return to Record Highs

After a punishing start to 2026 that wiped billions of dollars off technology valuations and briefly pushed the sector back toward pre-ChatGPT pricing levels, Wall Street is beginning to see the outlines of a renewed rally.

Software stocks, led by Oracle’s double-digit jump, rebounded sharply on Monday, a move being interpreted not as a fleeting bounce but as the opening phase of a broader technology recovery.

The rally comes at a moment when investor sentiment toward growth stocks is undergoing a rapid reassessment. For much of the first quarter, technology shares were battered by geopolitical instability, particularly the Iran conflict and the resulting energy-price shock, alongside persistent concerns that the AI boom had left valuations overstretched. That correction, however, has now compressed multiples to levels many strategists consider historically attractive.

At the forefront of Monday’s rebound was Oracle, whose shares surged as much as 12%, making it one of the strongest performers in both software and the broader technology sector. The stock’s advance helped propel the software complex to its best day in roughly a year, reinforcing the view that institutional investors are beginning to rotate back into beaten-down AI and enterprise software names.

The significance of Oracle’s move goes beyond a single-session price spike. The company has become a proxy for investor confidence in enterprise AI infrastructure spending. Its cloud expansion, large contracted revenue backlog, and positioning in database and data-center infrastructure make it a bellwether for whether legacy software companies can successfully monetize AI rather than be displaced by it.

What makes the current setup especially compelling for bulls is the sharp compression in valuations.

“Tech valuations are now LOWER than they were when ChatGPT was announced,” Adam Kobeissi, founder of The Kobeissi Letter, said. “As the Iran War drives markets lower, AI is only getting bigger. Record highs are on the horizon.”

That assessment has found support among other market voices. Apollo chief economist Torsten Slok has similarly noted that sector multiples have fallen from around 40 times earnings to roughly 20 times, effectively erasing much of the premium built during the AI frenzy of 2023 through 2025.

This reset is crucial because, unlike previous speculative bubbles, earnings expectations have not collapsed alongside prices.

According to BlackRock, the technology sector is still projected to deliver approximately 43% earnings growth in 2026, a remarkably strong figure that suggests the selloff was driven more by macro fear and profit-taking than by a deterioration in corporate fundamentals.

That divergence between falling valuations and stable earnings estimates is one reason strategists increasingly describe the current moment as a buy-the-dip opportunity rather than the start of a prolonged bear market.

Daniel Newman, chief executive of Futurum, described the present environment as “a historically opportune moment” to re-enter the AI trade.

The AI story itself remains the central pillar of the bull case. Even as war-driven volatility pressured markets, the underlying capital expenditure cycle tied to artificial intelligence has continued to expand. Hyperscalers and enterprise software companies are still investing massive sums in compute, cybersecurity, data storage, and AI tools. This suggests that the secular demand story remains intact even as share prices undergo cyclical corrections.

Importantly, investors are becoming more selective. Rather than buying technology indiscriminately, capital is increasingly flowing toward companies viewed as essential infrastructure providers or those seen as least vulnerable to AI disintermediation. This includes names such as Microsoft, Alphabet, Meta, Amazon, and NVIDIA, all of which remain deeply embedded in the AI ecosystem through cloud services, chips, platforms, or advertising engines.

Cybersecurity is another area drawing heightened attention. As geopolitical risks intensify and AI tools raise the stakes around enterprise security, firms such as CrowdStrike, Palo Alto Networks, and Zscaler are increasingly being viewed as structural beneficiaries of the next technology upswing.

Another major catalyst now in focus is earnings season. First-quarter corporate results are expected to begin shortly, and analysts broadly expect technology to once again lead profit growth across the S&P 500. This matters because earnings will determine whether the recent rally develops into a sustained re-rating.

Analysts believe that if large-cap tech companies deliver resilient revenue growth and strong AI monetization metrics, investors may gain the confidence needed to push the sector back toward all-time highs.

Goldman Sachs has already pointed to “secular growth” stocks, many of them concentrated in technology, as best positioned for the next expansion phase. That view is reinforced by the market’s gradual shift away from purely macro-driven selling toward stock-specific fundamental analysis.

The broader narrative, therefore, is one of transition: the first quarter was defined by de-risking, war headlines, and multiple compression, while the second quarter is expected to be increasingly defined by earnings validation, AI monetization, and renewed institutional positioning.

Monday’s software rally may prove to be the first meaningful signal that investors believe the valuation reset is complete and that the next leg of the AI-driven tech cycle is beginning. Analysts note that if earnings confirm that thesis, the sector could indeed be laying the groundwork for a fresh run toward record territory.