
The Token Illusion: Why AI’s Explosive Demand May Be Mispriced—and How Anthropic Is Positioning for a Reset

The artificial intelligence boom is being measured in tokens, billed in tokens, and increasingly justified by tokens. Yet beneath the surge in usage metrics lies a growing concern across the industry: the core signal used to validate hundreds of billions of dollars in infrastructure investment may be overstating real economic demand.

Tokens, the fragments of text that make up prompts and responses, have become the de facto unit of AI consumption. Every interaction with systems built by Anthropic or OpenAI translates into token flow, and at scale, those flows are immense. Simple chat interactions consume modest volumes, but agentic systems, capable of coding, browsing, and executing multi-step workflows, multiply usage dramatically, often running continuously in the background.

At current pricing, that consumption translates directly into revenue potential. Anthropic charges $5 per million input tokens and $25 per million output tokens on its latest models. Multiply that across enterprise deployments and autonomous agents, and the numbers appear to support the industry’s vast capital expenditure on data centers, chips, and energy infrastructure.
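To make those rates concrete, here is a minimal back-of-the-envelope sketch. The $5 and $25 per-million-token prices come from the article; the chat and agent workload sizes below are purely illustrative assumptions, not reported figures.

```python
# Hypothetical cost model at the quoted per-token rates.
# Workload sizes are illustrative assumptions, not Anthropic data.

INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single model call at the quoted per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A simple chat turn: small prompt, modest reply.
chat = request_cost(input_tokens=500, output_tokens=800)

# A hypothetical agentic workflow: a large context re-sent across 25 steps.
agent = sum(request_cost(40_000, 2_000) for _ in range(25))

print(f"chat turn: ${chat:.4f}")        # → $0.0225
print(f"25-step agent run: ${agent:.2f}")  # → $6.25
```

Even at these rough numbers, the gap between a single chat turn and a continuously running agent spans orders of magnitude, which is why agentic workloads dominate the revenue math.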

But the reliability of that signal is increasingly under scrutiny. Inside large organizations, token usage is becoming a performance metric. Meta and Shopify have introduced internal tracking systems that rank employees by how much AI they consume. Nvidia chief executive Jensen Huang has gone further, suggesting he would be “deeply alarmed” if a highly paid engineer were not generating substantial AI compute spend.

Such benchmarks create a predictable distortion. When consumption is rewarded, optimization follows. Engineers and teams begin to maximize token usage rather than output quality, effectively turning AI into a budget line to be spent rather than a tool to be optimized.

Ali Ghodsi, chief executive of Databricks, has described how easily that system can be gamed. Re-running queries, duplicating workloads, or looping processes can drive up token consumption with little incremental value. The metric inflates, the bill rises, but productivity does not necessarily follow.

This disconnect is becoming visible at the executive level. Jen Stave, executive director of the Harvard Business School AI Institute, says many CIOs and CTOs are struggling to construct a credible return-on-investment framework for AI. The challenge is not adoption, since tools are being deployed widely, but attribution: companies can measure what they spend on AI, yet they cannot consistently measure what they gain.

That gap has implications that extend beyond enterprise budgets. It calls into question the demand assumptions underpinning the industry’s infrastructure buildout. Data centers require years to plan and construct, meaning today’s investment decisions are based on forecasts that may not fully account for behavioral distortions in usage.

Anthropic’s chief executive, Dario Amodei, has framed this uncertainty in operational terms, describing a “cone of uncertainty” around demand. Build too little capacity and risk losing customers; build too much and face underutilized assets and delayed revenue.

“If you’re off by a couple years, that can be ruinous,” he said, highlighting the asymmetry of the risk.

Anthropic’s response has been to tighten the link between usage and revenue. The company is moving decisively toward per-token billing, abandoning the flat-rate subscription structures that defined the early phase of AI adoption. That shift is both defensive and diagnostic: it protects margins while generating clearer data on how much customers truly value different types of AI workloads.

The transition has already exposed inefficiencies. Anthropic recently curtailed access to third-party tools that were routing heavy, continuous workloads through consumer subscription plans. In some cases, users paying $200 per month were generating usage that would have cost thousands under a metered model. The arbitrage highlighted a fundamental mismatch between pricing design and actual usage patterns.
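The scale of that arbitrage can be sketched with a simple comparison, using the per-million-token rates cited above. The monthly token volumes here are hypothetical assumptions chosen to illustrate how a flat plan can be underwater by thousands of dollars.

```python
# Illustrative comparison: flat $200/month plan vs. metered billing.
# Rates come from the article; the usage volumes are assumptions.

FLAT_MONTHLY = 200.00
INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def metered_cost(input_tokens: int, output_tokens: int) -> float:
    """What the same usage would cost under per-token billing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A continuous agent pushing a hypothetical 2B input and 300M output
# tokens in a month, all routed through a flat consumer subscription.
heavy = metered_cost(2_000_000_000, 300_000_000)

print(f"metered equivalent: ${heavy:,.0f}")  # → $17,500
print(f"flat plan charged:  ${FLAT_MONTHLY:,.0f}")
```

Under assumptions like these, the provider absorbs nearly the full metered cost, which is exactly the mismatch that metered billing is designed to close.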

Enterprise contracts are undergoing a similar overhaul. Legacy seat-based pricing, with bundled usage allowances, is being replaced by hybrid structures that combine per-user fees with direct billing for token consumption. The result is a model that scales revenue with compute demand but also forces customers to confront the true cost of their AI usage.

Competitors are converging on the same realization. At OpenAI, ChatGPT head Nick Turley has acknowledged that unlimited plans may be economically untenable, likening them to offering unlimited electricity in an environment where consumption can scale without constraint. The analogy is instructive: as AI shifts from occasional interaction to continuous operation, it behaves less like software and more like infrastructure.

From the financial side, the consequences are already visible. Ramp reports that AI spending across its customer base has increased thirteenfold in a year, yet budgeting frameworks remain immature. Companies are spending heavily without a clear sense of optimal allocation, a dynamic that is sustainable only as long as capital remains abundant.

That dynamic introduces a structural tension. Providers benefit from higher token consumption, but long-term adoption depends on efficiency and demonstrable value. If customers begin to optimize for cost rather than usage, revenue growth tied purely to volume could slow.

Some companies are beginning to anticipate that shift. Salesforce is experimenting with “agentic work units,” an attempt to measure AI output rather than input. The concept reframes the value equation: instead of tracking how much compute is consumed, it asks what work is actually completed.

The distinction is likely to become central as leading AI firms approach public markets. Both Anthropic and OpenAI are widely expected to pursue IPOs, where investor scrutiny will focus less on headline growth and more on the quality and sustainability of that growth. Token counts alone will not suffice; markets will demand evidence that usage translates into durable economic value.

In that environment, pricing strategy becomes a signal. Anthropic’s move toward metered billing may produce slower, more disciplined growth figures, but it also yields cleaner data and more predictable unit economics. OpenAI’s broader reach and more aggressive scaling may generate larger top-line numbers, but with greater ambiguity around how much of that demand is structural versus inflated.

The broader risk is that the industry has entered a phase where activity is being mistaken for demand. If a portion of token consumption is driven by internal incentives, experimental overuse, or poorly optimized workflows, then the true baseline for AI demand may be lower than current projections suggest.

Should that correction materialize, its effects would cascade through the system. Infrastructure investments could face underutilization, pricing models would tighten further, and companies reliant on volume growth would be forced to recalibrate.

In that scenario, the advantage shifts to those who priced for reality rather than momentum. The companies that survive will not be those that generated the most tokens, but those that understood which tokens mattered—and were paid accordingly.

African Born AI Startup Lua Secures $5.8M Seed Funding to Power Next-Gen AI Agent Workforces for Businesses

African-born AI startup Lua has secured $5.8 million in seed funding to build an operating system for human-AI agent collaboration in the workplace.

The seed round marks a significant milestone in the company’s mission to redefine how businesses operate in the age of automation.

The round was led by Africa-focused growth fund Norrsken22, with participation from Flourish Ventures, 20VC, P1 Ventures, Phosphor Capital, Y Combinator, and prominent angel investors including Henri Stern (CEO of Privy), Kaz Nejatian (CEO of Opendoor), and Med Benmansour (CEO of Nuitee).

Announcing the funding round, the company wrote in a post on LinkedIn:

“Today we’re announcing our $5.8m seed round. When we launched our developer platform in October, we set out to give companies real ownership over their agent outcomes. A way for any business to build the org chart of the future, where humans and agents collaborate seamlessly and agents are managed with the same intentionality as your human team.

“Lua is where companies come to build a truly compounding agent workforce. And it’s starting to get noticed. In Q1, agents on the platform grew 10x, we shipped our first open source releases, and momentum started building globally.”

Also commenting on the funding round, Lexi Novitske, General Partner at Norrsken22, said:

“We are thrilled to support Lua. The founders fundamentally understand how agent and human workforces need to collaborate to get work done.”

Lua will use the funding to continue to build out its developer community and the Lua Implementation Network, a growing community of independent partners deploying Lua agent workforces in their own markets around the world.

Investors highlighted Lua’s potential to become the foundational layer for AI agent adoption in businesses worldwide, particularly as companies seek practical ways to integrate autonomous agents into daily operations without losing control or incurring massive complexity.

Norrsken22’s leadership of the round underscores growing confidence in Africa-rooted talent building globally relevant AI infrastructure.

Lua’s platform enables businesses, both technical and non-technical, to rapidly build, deploy, and manage teams of integrated AI agents.

By providing infrastructure, model orchestration, and channel integrations, Lua allows companies to focus on their core business logic while giving them full ownership over their AI outcomes.

The company positions its solution as a full-stack “agent OS” that supports natural language interfaces and one-click deployments, making advanced AI workforce management accessible beyond big tech.

Founded by Lorcan O’Cathain and Stefan Kruger, Lua officially launched its developer platform in October 2025 and has already demonstrated impressive early traction, including rapid revenue growth and deployments with clients across Africa, Asia, the United States, and Europe. Notable early adopters include African fintechs such as Turaco and Umba.

Since launching its agent developer platform, Lua has grown revenue close to 30% week-on-week. In February 2026 alone, more agents were built on Lua than in the entire cumulative period since launch.
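For a sense of what roughly 30% week-on-week growth implies if sustained, here is a small compounding sketch; the 12-week horizon and the indexed starting value are illustrative assumptions.

```python
# A 30% week-on-week growth rate, compounded, snowballs quickly.
# Starting revenue is indexed to 1.0; the horizon is illustrative.

def compound(start: float, weekly_rate: float, weeks: int) -> float:
    """Value after compounding `weekly_rate` growth for `weeks` weeks."""
    return start * (1 + weekly_rate) ** weeks

multiple = compound(1.0, 0.30, 12)
print(f"revenue multiple after 12 weeks: {multiple:.1f}x")  # → 23.3x
```

Sustained over a full year the same rate would imply a multiple in the hundreds of thousands, which is why growth figures like this are only meaningful over short windows.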

As the AI agent space heats up, Lua enters with a clear value proposition: making AI workforces practical, integrable, and owned by the businesses that deploy them.

With strong backers, experienced founders, and proven early momentum, the company is well-positioned to capture a slice of the expanding market for agentic AI tools.

Notably, Lua’s announcement comes at a moment when enterprises are moving beyond simple chatbots toward coordinated teams of specialized AI agents that can handle complex, multi-step workflows.

Cursor, the Breakneck AI Coding Startup, Lines Up $2 Billion+ Round at $50 Billion Valuation

Cursor is closing in on one of the largest venture checks of the year, with talks underway for a funding round that would bring in at least $2 billion and value the four-year-old company at roughly $50 billion before the new money, according to people familiar with the discussions quoted by TechCrunch.

Returning backers Thrive Capital and Andreessen Horowitz are expected to lead the deal, while Battery Ventures is in line to join as a new investor. Nvidia, already a partner, is also poised to write another check. The round is already oversubscribed, though terms could still shift before closing.

If completed, the financing would nearly double Cursor’s post-money valuation from just six months ago, when it raised $2.3 billion at $29.3 billion. That kind of jump in such a short window underscores the ferocious investor appetite for tools that promise to reshape how software gets written.

Despite stiff competition from heavyweights, including Anthropic’s Claude Code and OpenAI’s updated Codex, Cursor’s revenue has kept climbing at a startling pace. The company is projecting an annualized revenue run rate above $6 billion by the end of 2026, implying it expects to more than triple its current trajectory in the next ten months.

As recently as February, its annualized revenue had already crossed $2 billion, having doubled in just three months, with roughly 60 percent coming from enterprise customers.

Cursor, originally known as Anysphere, was founded in 2022 by four MIT students: Michael Truell, Sualeh Asif, Arvid Lunnemark, and Aman Sanger. What started as an ambitious student project has exploded into one of the fastest-growing AI companies on record, now used daily by a significant portion of Fortune 500 companies and generating hundreds of millions of lines of code for enterprises.

A key turning point came last November with the launch of Cursor’s proprietary Composer model. Before that, like many AI coding tools reliant on third-party large language models, Cursor was running at negative gross margins — the cost of inference simply outstripped what it could charge. Composer, combined with smarter routing to more affordable models such as China’s Kimi, has flipped the economics.

The company has now reached slight overall gross margin profitability, the people said. On enterprise deals, margins are solidly positive, though it still loses money on many individual developer subscriptions.

That shift is strategic as much as financial. By depending less on outside model providers, especially Anthropic, whose Claude Code has become Cursor’s most direct rival, the startup is working to protect itself from being commoditized or displaced by the very companies supplying its underlying intelligence.

The rapid move toward in-house capabilities also points to a broader reality in the AI coding space: raw model performance is becoming more democratized, so sustainable advantage increasingly hinges on tight product integration, superior user experience, enterprise-grade reliability, and defensible data moats.

Cursor appears to be winning on the product front, with developers praising its speed, context awareness, and ability to handle large codebases in ways that feel almost collaborative.

The fundraising comes amid intense competition, but also explosive demand. AI-assisted coding is moving quickly from experimental sidekick to standard developer workflow. Companies are racing to adopt these tools not just to boost individual productivity but to accelerate entire engineering organizations. Cursor’s ability to command premium pricing from large enterprises, even while still refining its margins on the consumer side, shows it is carving out a meaningful position in that transition.

For investors, the $50 billion pre-money mark represents a bold wager that Cursor can maintain its momentum as the field matures. The participation of Nvidia adds another layer of validation, given the chipmaker’s central role in powering the AI infrastructure that makes tools like Cursor possible.

With the round still in flux and terms not finalized, the exact size and final valuation could shift. But the early interest already signals that Cursor has graduated from promising startup to one of the defining companies in the AI developer tooling boom. Founded by MIT students barely four years ago, it now sits at the center of a transformation that could reshape how software is built for decades to come — if it can keep executing at the pace it has set so far.

Bitcoin Surged Past $78,000 as Iran Reopens Strait of Hormuz, then Dropped to $75,000

The price of Bitcoin surged past the $78,000 mark as global markets reacted to easing geopolitical tensions following Iran’s decision to reopen the Strait of Hormuz, a critical oil transit route. It has since dropped to about $75,000.

The news, amplified by President Donald Trump, triggered an immediate risk-on rally across global markets, with oil prices plunging and equities surging.

The move signaled a temporary de-escalation in the Middle East, boosting investor confidence and driving renewed momentum into risk assets like cryptocurrencies.

Geopolitical Breakthrough Eases Tensions

The Strait of Hormuz, a narrow chokepoint through which roughly 20% of the world’s oil flows, had been a major flashpoint amid escalating regional conflicts involving Iran, Israel, Hezbollah in Lebanon, and the United States.

Weeks of disruptions had driven oil prices higher and injected uncertainty into financial markets.

Iranian Foreign Minister Seyed Abbas Araghchi posted on X that, “in line with the ceasefire in Lebanon, the passage for all commercial vessels through the Strait of Hormuz is declared completely open for the remaining period of ceasefire.”

President Trump quickly echoed the development, declaring the waterway “fully open and ready for full passage,” while noting that the U.S. naval blockade on Iranian vessels remains in place until broader negotiations progress.

The announcement follows a fragile 10-day ceasefire between Israel and Hezbollah, offering temporary de-escalation after prolonged tensions that had raised fears of wider conflict and supply disruptions.

Market Reaction: Oil Crashes, Risk Assets Soar

Traders wasted no time interpreting the news as a reduction in geopolitical risk premium.

Bitcoin jumped from around the $74,000–$76,000 level earlier in the day, briefly testing $78,000 and reaching as high as $78,287 on some exchanges.

At the time of writing, BTC was trading at $77,329, up over 3% in 24 hours. Crude oil plummeted nearly 10–13%, with U.S. benchmarks dropping sharply as fears of supply disruptions evaporated.

Stocks rallied, with major indices posting strong gains as investors rotated into risk assets.

Crypto analysts noted that Bitcoin behaved more like a high-beta risk asset than “digital gold” in this instance, surging alongside equities on de-escalation hopes rather than acting as a safe haven.

Why This Move Matters for Bitcoin

This isn’t the first time Bitcoin has reacted sharply to Middle East headlines. Earlier ceasefire signals had already lifted BTC from lower levels, but today’s Hormuz-specific announcement provided a clear, headline-driven catalyst.

Key factors driving the surge:

Lower oil prices, which reduce inflationary pressures and free up capital for riskier investments.

Reduced global uncertainty, which encourages capital flows into growth assets like crypto and tech stocks.

Short squeeze dynamics, as liquidations in the crypto derivatives market amplified the upside move.

However, caution remains. The ceasefire is short-term (ending around April 22), and the U.S. blockade on Iranian shipping continues.

Some traders have expressed skepticism, calling it a potential “bull trap” or warning that renewed tensions could reverse gains quickly.

Outlook

The reopening of the Strait of Hormuz highlights Bitcoin’s growing integration with traditional macro narratives.

While fundamentals like adoption and halving cycles matter long-term, short-term price action continues to be heavily influenced by geopolitics, energy markets, and risk sentiment.

Analysts are now watching the $76,000–$78,000 resistance zone that has capped rallies since early February. A decisive weekly close above $78,000 could open the door toward $80,000 or higher in the short term.

Conversely, any signs of ceasefire breakdown or renewed threats to the Strait could see Bitcoin give back gains rapidly, given its sensitivity to headline risk.

Netflix Rewires Discovery With TikTok-Style Feed, Doubles Down on AI Across Production and Ads

Netflix is moving to reshape how audiences find and consume content, introducing a TikTok-style vertical video feed inside its app while embedding artificial intelligence more deeply across recommendations, content creation, and advertising.

The short-form feed, set to launch this month after a year of testing, marks a notable shift in the company’s interface design. It allows users to scroll through clips drawn from films, series, and video podcasts, effectively turning discovery into a continuous, algorithm-driven experience. While the format mirrors mechanics popularized by TikTok, the strategic intent is distinct. Netflix is not attempting to become a social platform; it is trying to compress the time between opening the app and choosing what to watch.

That problem has become more acute as Netflix’s catalogue expands. With hundreds of millions of users and a growing mix of long-form and episodic content, the platform faces what executives have long described as a “decision bottleneck.” The vertical feed is designed to address this by surfacing high-signal content snippets that can trigger immediate viewing decisions, reducing reliance on static thumbnails and traditional browsing rows.

Co-CEO Gregory Peters framed the initiative as an evolution of Netflix’s recommendation engine, which has been central to its strategy for two decades.

“We still see tremendous room to make it better by leveraging newer technologies,” he said, pointing to advances in AI model architectures that allow faster iteration and more precise personalization.

In effect, the feed becomes both a product feature and a data engine, continuously refining user preferences based on micro-interactions such as scroll speed, replays, and skips.

This is where the integration with ChatGPT-powered search becomes more consequential. Introduced last year, the conversational search tool allows users to describe what they want in natural language. Combined with a vertical feed, Netflix is building a multi-layered discovery system, one that blends passive consumption with active querying. The convergence suggests a broader ambition to move beyond recommendation as a static output toward a more adaptive, real-time system.

Beyond the interface, the company is advancing AI deeper into the production pipeline. Co-CEO Ted Sarandos positioned generative AI as an augmentation tool for creators rather than a replacement.

“It takes a great artist to make great art,” he said, emphasizing that AI’s role is to improve tools and workflows.

Practically, this could reshape cost structures in areas such as visual effects, localization, dubbing, and pre-production design, where automation can compress timelines and reduce overhead.

Netflix’s acquisition of Interpositive, an AI-focused company co-founded by Ben Affleck, provides a clearer signal of intent. Unlike general-purpose generative AI tools, Interpositive’s technology is tailored specifically for filmmaking, suggesting Netflix is investing in proprietary systems that could differentiate its production capabilities. Early interest from creators indicates that adoption may be driven not just by cost savings, but by expanded creative possibilities.

The commercial implications extend to advertising. Netflix expects to generate about $3 billion in ad revenue this year, and AI is central to that target. More advanced recommendation systems can be repurposed for ad targeting, enabling dynamic ad insertion, personalized formats, and improved measurement. This effectively turns the platform’s core strength, user data and engagement patterns, into a monetization engine beyond subscriptions.

Financial results provide context for the scale of these bets. First-quarter revenue rose 16.2% year-on-year to $12.25 billion, while profit surged 83% to $5.28 billion. The margin expansion indicates that Netflix is beginning to benefit from operating leverage, where incremental revenue growth outpaces cost increases. The company ended 2025 with 325 million paying subscribers, maintaining its position as the largest global streaming platform.

Recent price increases in the United States are expected to further lift revenue, though they also introduce the risk of higher churn in a market where competition remains intense. Rivals continue to invest heavily in both content and technology, making differentiation in user experience increasingly important.

Netflix’s pivot toward short-form discovery comes at a time when viewing habits are shifting, particularly among younger audiences who are accustomed to algorithmic feeds and shorter content cycles. By integrating a vertical feed into a long-form platform, Netflix is attempting to bridge two consumption models, capturing the engagement dynamics of short video without abandoning its core identity.

However, the strategy carries execution risks. A feed-driven interface could fragment attention, encouraging sampling over sustained viewing. It also raises questions about content positioning, particularly for high-budget productions that rely on immersive storytelling rather than quick-hit engagement. Balancing these dynamics will be critical.

Governance changes add another layer to the transition. Co-founder Reed Hastings is set to leave the board this summer, closing a chapter in the company’s leadership evolution. The shift places greater operational responsibility on the current executive structure as Netflix navigates this next phase.

Together, the introduction of a vertical video feed and the expansion of AI across the business point to a company recalibrating its model around speed, personalization, and efficiency.