Amazon Opens the Gates to OpenAI and Anthropic’s AI Coding Tools


Amazon is formally expanding access to Anthropic’s Claude Code and OpenAI’s Codex across its corporate workforce, marking one of the clearest signs yet that even the world’s largest cloud companies are increasingly relying on outside artificial intelligence systems to accelerate software development.

The rollout, announced internally by Amazon Vice President of Software Builder Experience Jim Haughwout, signals a major shift in the company’s AI strategy. Rather than relying primarily on its internally developed Kiro coding platform, Amazon is now embracing a broader ecosystem of AI coding assistants as competition intensifies across the technology industry.

The move also underscores how generative AI has rapidly evolved from an experimental tool into a foundational layer of software engineering infrastructure inside large corporations.

“To help you invent more for customers, we are expanding the agentic AI tools available to you,” Haughwout wrote in a memo to staff obtained by Business Insider.

According to the note, Claude Code is being made available immediately to Amazon employees, while OpenAI’s Codex will begin rolling out company-wide on May 12. Both systems will operate through Amazon Bedrock and Amazon Web Services infrastructure, allowing the company to maintain centralized control over computing resources, data handling and security compliance.

“Both run on Bedrock, where all inference runs,” Haughwout wrote. “Both will have easy install for all Amazon builders.”
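The Bedrock routing Haughwout describes means builders reach outside models through a common AWS API rather than each vendor's own endpoint. As a rough, hypothetical sketch (the model ID and helper function below are illustrative, not Amazon's actual setup), a request body for an Anthropic model behind Bedrock's InvokeModel API might be assembled like this:

```python
import json

# Illustrative sketch only: MODEL_ID and build_request are hypothetical,
# but the payload shape follows the Anthropic Messages format that
# Bedrock's InvokeModel API expects for Claude-family models.
MODEL_ID = "anthropic.claude-example"  # placeholder, not a real model ID


def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Serialize a single-turn chat request for InvokeModel's `body` field."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)


# The serialized body would then be passed, along with MODEL_ID, to a
# bedrock-runtime client, e.g. client.invoke_model(modelId=..., body=...).
print(build_request("Write a unit test for my parser", max_tokens=256))
```

Centralizing calls this way is what lets Amazon keep inference, billing, and data handling inside AWS even when the underlying model belongs to a rival lab.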

The decision reflects a growing reality inside Silicon Valley: companies can no longer afford to restrict engineers to internally developed AI systems if rival tools are proving more effective in boosting productivity.

For months, Amazon engineers had reportedly pushed for wider access to Claude Code, which many developers viewed as more capable for certain programming tasks than Amazon’s own Kiro platform. Until recently, use of Claude Code for production work reportedly required special approvals, creating frustration among teams trying to accelerate development cycles.

The broader rollout indicates that Amazon leadership ultimately concluded that limiting access to best-performing AI tools posed a greater risk than opening the ecosystem.

An Amazon spokesperson confirmed the company is now “standardizing” access to Claude Code and Codex across its workforce.

“At Amazon, we’ve long held there’s no one-size-fits-all approach to how our teams innovate,” the spokesperson said. “Our builders are using Kiro for agentic coding, and now with both Claude Code and Codex running on AWS, we are making additional tools available as well.”

The spokesperson added that Kiro remains heavily used internally, saying roughly 83% of Amazon engineers still “primarily” use the company’s in-house platform.

Still, the significance of the announcement extends far beyond developer preferences. It highlights how the AI arms race is reshaping alliances across the technology sector, where competitors increasingly depend on one another for infrastructure, models, and compute power.

Amazon has spent aggressively to deepen relationships with both Anthropic and OpenAI in recent months. In February, the company announced a major partnership with OpenAI involving investments of up to $50 billion. In return, OpenAI agreed to expand use of Amazon’s Trainium AI chips and collaborate with AWS on customized AI services and infrastructure.

Amazon has also significantly expanded its backing of Anthropic. In April, the company pledged up to an additional $25 billion investment in the startup, on top of the $8 billion it had already committed. Anthropic also agreed to purchase $100 billion worth of Amazon Trainium chips over time.

Those arrangements are widely viewed as part of Amazon’s broader effort to challenge Nvidia’s dominance in AI computing while cementing AWS as the backbone of enterprise AI deployment. The rollout of Claude Code and Codex through Bedrock serves that strategy directly. Even when employees use outside models, Amazon still keeps workloads, inference activity, and cloud consumption within its own infrastructure environment.

That distinction is increasingly important as cloud providers battle for long-term control of what many analysts view as the next major computing platform.

The shift also exposes how AI coding assistants are transforming software engineering itself. Modern AI systems are no longer limited to generating snippets of code. They can now debug applications, write documentation, conduct testing, suggest architectural changes, and autonomously complete increasingly complex engineering workflows.

That evolution has triggered growing anxiety across the technology workforce. Several AI executives and investors have warned that coding automation could eventually reduce demand for entry-level programmers and junior developers. Amazon, however, appears to be positioning the technology more as a force multiplier than a replacement mechanism.

The company continues to hire engineers aggressively while integrating AI deeper into development workflows. AWS CEO Matt Garman recently said Amazon was hiring “just as many software developers as we ever had,” while arguing that future engineers would need broader systems thinking and problem-solving capabilities rather than narrow coding expertise.

The internal rollout also comes as investors increasingly scrutinize whether the hundreds of billions of dollars being poured into AI infrastructure are producing measurable productivity gains.

Amazon’s integration of Claude Code and Codex across its engineering organization is expected to help accelerate software deployment, reduce development bottlenecks, and improve operational efficiency at a time when hyperscalers are racing to justify unprecedented capital expenditures on AI infrastructure.

More broadly, the move signals that the future AI landscape may not be dominated by single-model ecosystems. Instead, major technology firms appear to be moving toward multi-model environments where companies combine proprietary systems with specialized external tools, while competing fiercely for control of the cloud infrastructure layer underneath.

In that contest, Amazon is making clear that it is less concerned about whose model engineers use than about ensuring the entire AI economy ultimately runs on AWS.

Burry Turns on GameStop’s $56bn eBay Bid, Moves to Sell Off His Entire Stake


Michael Burry, one of GameStop’s most closely watched backers, is signaling a potential exit from the stock after the company’s shock attempt to acquire eBay, warning that the proposed takeover risks burying the retailer under unsustainable debt while exposing the limits of CEO Ryan Cohen’s transformation strategy.

The investor, whose early bullish bet on GameStop during the meme-stock frenzy turned him into a cult figure among retail traders, said Monday he may dramatically reduce or completely sell his position following news of the company’s $56 billion offer for eBay.

“I may not last the week with my GameStop position fully intact,” Burry wrote in a Substack post. “I will certainly sell to an extent, perhaps all or some but alas, no, not none.”

The remarks mark one of the sharpest public criticisms yet of Cohen’s efforts to reinvent GameStop from a struggling brick-and-mortar gaming retailer into a broader digital commerce and technology platform.

GameStop announced over the weekend that it had submitted a $125-per-share offer for eBay, valuing the online marketplace at roughly $56 billion. The company said the proposed acquisition would be financed through a combination of existing cash and third-party funding, including a “highly-confident” financing letter from TD Securities for as much as $20 billion.

For Burry, however, the issue is not simply the size of the transaction. It is what the deal says about GameStop’s strategic direction at a moment when financial markets are becoming increasingly hostile toward highly leveraged corporate expansion.

After years in which ultra-low interest rates fueled aggressive mergers and speculative growth strategies, investors are now rewarding balance-sheet discipline, stable cash generation, and exposure to structural themes such as artificial intelligence infrastructure and cloud computing. Against that backdrop, Burry argued that GameStop’s pursuit of eBay looks less like innovation and more like a conventional retail consolidation play with potentially dangerous financing risks.

Despite previously praising Cohen as a rare capital allocator and even likening him to Warren Buffett, Burry said the eBay strategy “could not be more pedestrian.”

“Ryan cannot be after fat to cut, if only because no amount of cut fat makes this deal work,” Burry wrote.

His criticism centers heavily on leverage. Burry warned that the announced $56 billion figure is likely only an opening proposal and that any final agreement could require substantially more financing, pushing GameStop into what he described as distress-level debt territory.

“That also means that the deal would probably carry much more leverage, ‘to a level of debt that borders on distressed and tends to strip competitiveness and innovation from such-stricken companies,’” he wrote.

The concern reflects broader anxiety on Wall Street over whether companies attempting large acquisitions can maintain flexibility in a high-interest-rate environment where interest expenses quickly erode profitability and strategic maneuverability.

Burry’s skepticism is particularly notable because GameStop’s turnaround under Cohen had previously been built around preserving liquidity, cutting costs, and maintaining optionality. The retailer accumulated a significant cash position after capital raises during the meme-stock boom, giving management flexibility to explore acquisitions and diversification opportunities without immediately endangering the balance sheet.

The proposed eBay deal, however, would radically alter that equation. Analysts say the acquisition could transform GameStop into a broader marketplace player spanning collectibles, electronics, refurbished products, and peer-to-peer commerce. Yet it would also pit the company more directly against dominant e-commerce giants, including Amazon, while exposing it to slowing discretionary consumer spending and intensifying online retail competition.

Burry suggested Cohen is pursuing the wrong battlefield altogether.

“If Ryan really wanted to compete with Amazon, he would have acquired Wayfair (70% of its own last mile deliveries and warehouses all over) along with a cash flow machine and a bunch of float,” he wrote.

That argument highlights a deeper divide emerging in corporate America. Increasingly, investors are placing higher value on logistics infrastructure, cloud ecosystems, and AI-driven operational efficiency rather than pure marketplace scale. Companies with proprietary distribution networks and data advantages are seen as better positioned to defend margins and maintain customer loyalty in an increasingly automated economy.

Burry appeared to argue that eBay’s marketplace model lacks those strategic moats, especially when financed with large amounts of debt.

He acknowledged the attractiveness of the collectibles and secondhand goods market, which naturally overlaps with GameStop’s legacy business in used games and gaming hardware. But he argued the acquisition structure itself could cripple the company’s ability to compete effectively.

“If GameStop wants to do it with billions of interest expense and all manner covenants restricting its movements, it will not be breaking new ground,” Burry wrote. “It will be trotting in well-worn ruts on the road to capitalist Hell.”

The unusually blunt warning comes as speculative enthusiasm returns to portions of the technology and retail market, fueled partly by AI optimism and renewed appetite for growth assets. Yet Burry’s comments underscore a growing divide between investors chasing transformational narratives and those focused on valuation discipline and cash-flow durability.

His criticism also reinforces a broader point increasingly echoed across Wall Street: even companies with strong brands and ambitious leadership can become poor investments if the price of expansion becomes too high.

Musk Reached Out to OpenAI for Settlement Days Before His Suit Against Altman Went to Trial – Court Filing Shows


The legal war between Elon Musk and OpenAI took a sharper turn over the weekend after court filings revealed Musk allegedly warned OpenAI president Greg Brockman that he and CEO Sam Altman would become “the most hated men in America” if they refused to settle the case before trial.

The disclosure emerged in a filing submitted Sunday by lawyers representing Altman and Brockman in the closely watched California civil trial, which has increasingly evolved into a broader fight over the future control, governance, and economics of artificial intelligence.

According to the filing, Musk contacted Brockman around April 25, just two days before trial proceedings began, to test whether OpenAI’s leadership was interested in reaching a settlement.

Brockman reportedly replied by proposing that both parties withdraw their claims. The filing says Musk then responded: “By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.”

Lawyers for OpenAI’s executives are now seeking to have the exchange admitted into evidence, arguing it demonstrates Musk’s underlying motivation in the lawsuit and supports their claim that the litigation is tied to competitive tensions in the AI industry.

The filing argues the message “tends to prove motive and bias,” particularly that Musk’s objective is to damage a rival AI company and its leadership.

The courtroom battle has become one of the most consequential disputes in the global AI industry because it sits at the intersection of corporate control, nonprofit governance, investor influence, and the commercial race to dominate generative AI.

Musk sued OpenAI, Altman, and Brockman in 2024, accusing the company’s leadership of betraying its original nonprofit mission. He claims he donated roughly $38 million to help establish OpenAI in 2015 on the understanding that the organization would develop artificial intelligence for the benefit of humanity rather than private financial gain.

Musk’s argument is centered on OpenAI’s evolution from a nonprofit research lab into a hybrid structure dominated by a rapidly expanding for-profit arm now valued at more than $800 billion. The xAI founder has repeatedly described the transition as a “bait-and-switch,” accusing OpenAI executives of commercializing technology originally developed under a public-interest mandate.

Taking the witness stand last week, Musk told the court: “Essentially, they’re trying to steal a charity, and we’re trying to stop them.”

The case has become even more politically and commercially sensitive because Musk now directly competes with OpenAI through his AI company, xAI, creator of the Grok chatbot.

That competitive overlap has allowed OpenAI’s legal team to frame the lawsuit as part ideological dispute, part corporate warfare. The tensions also expose the increasingly fractured alliances among Silicon Valley’s most influential AI figures.

Musk was one of OpenAI’s original co-founders alongside Altman, Brockman, Ilya Sutskever, and others. But relationships deteriorated after Musk departed the organization in 2018 following disagreements over governance, strategy, and commercial direction. Since then, OpenAI’s rise has fundamentally reshaped the technology industry and triggered one of the largest infrastructure and investment booms in Silicon Valley history.

The company’s launch of ChatGPT in late 2022 ignited a global race among technology firms to build increasingly powerful AI systems. Microsoft, Amazon, Google, and Meta have collectively committed hundreds of billions of dollars toward AI infrastructure, data centers, and specialized chips.

OpenAI’s partnership with Microsoft has been particularly contentious for Musk, who has criticized the company’s deep commercial ties and its shift toward proprietary products. The latest filing also suggests the trial may delve deeper into private internal communications from OpenAI’s formative years.

Brockman is expected to testify on Monday and could face questioning about diary entries written before Musk’s departure from the company. One entry cited in the filing states: “His story will correctly be that we weren’t honest with him in the end about still wanting to do the for-profit just without him.”

That statement could prove important because it appears to acknowledge internal discussions about transitioning toward a for-profit structure while Musk remained involved. Legal experts say the trial could have implications far beyond the parties involved. A ruling against OpenAI could complicate governance structures increasingly used by AI companies seeking to balance nonprofit missions with enormous capital requirements.

The case also highlights growing tensions within the AI sector over whether frontier AI systems should remain mission-driven public-interest projects or evolve into conventional profit-maximizing corporations competing for dominance in a trillion-dollar industry.

The trial comes as scrutiny intensifies globally over the concentration of power in artificial intelligence, especially among a small group of technology executives and firms controlling the world’s most advanced AI models, computing infrastructure, and data resources.

Jim Cramer Says AI and Data Center Boom Is Rewiring Markets Beyond Oil and Geopolitical Shocks, Recommends Types of Stocks to Own


CNBC host Jim Cramer said investors should resist the temptation to retreat from equities during geopolitical market selloffs, arguing that the real long-term opportunity lies in companies driving the rapid transformation toward an AI- and compute-powered economy.

Speaking Monday on CNBC’s “Mad Money,” Cramer said the latest market volatility tied to renewed Middle East tensions does not alter what he sees as the dominant structural force shaping markets: the explosive growth of artificial intelligence, cloud infrastructure, and data-center computing.

“What you really would need to own are the companies that actually dominate the new economy,” Cramer said, pointing to technology and infrastructure firms tied to AI workloads and digital computing demand.

His remarks came after the Dow Jones Industrial Average fell more than 1% while oil prices and U.S. Treasury yields surged amid fears that tensions in the Middle East could escalate further and disrupt global energy markets.

International benchmark Brent crude has surged above $110 a barrel in recent weeks as traders price in the risk of prolonged instability around the Strait of Hormuz, one of the world’s most critical energy chokepoints. Rising oil prices have simultaneously pushed inflation concerns back into focus, complicating expectations for central bank interest-rate cuts.

Cramer acknowledged that geopolitical shocks still matter to investors, particularly through their effect on oil prices, inflation, and borrowing costs. However, he argued that the modern U.S. economy is increasingly being driven by computing infrastructure rather than traditional industrial cycles.

“This economy is a computer-driven economy,” he said. “We run on compute.”

That viewpoint points to a broader shift underway across Wall Street, where investors have increasingly separated AI-linked technology companies from the rest of the economy. While many sectors remain vulnerable to high interest rates and slowing growth, AI infrastructure spending has continued to accelerate at an extraordinary pace.

Major technology firms, including Amazon, Microsoft, Alphabet, and Meta, are collectively expected to spend hundreds of billions of dollars this year on AI data centers, networking systems, and specialized chips. That spending boom has become one of the strongest forces supporting equity markets in 2026, particularly across semiconductor, cloud, and infrastructure-related stocks.

Cramer singled out Amazon as a company well-positioned to weather economic pressure because of its scale, logistics capabilities, and dominance in cloud computing through Amazon Web Services.

He argued that Amazon’s business model benefits both from AI infrastructure demand and from consumer behavior during periods of economic strain, as households often shift spending toward lower-cost retailers when inflation rises.

“Higher interest rates can fell many a company,” Cramer said. “But if you want to guess who’ll be the last man standing, you could do a lot worse than betting on Amazon.”

His comments align with a growing Wall Street narrative that hyperscalers and AI infrastructure providers have become increasingly insulated from cyclical slowdowns because AI spending is now viewed as strategically unavoidable.

Executives from major cloud providers have repeatedly indicated that AI infrastructure investment is no longer discretionary. Instead, it is increasingly treated as foundational spending necessary to remain competitive.

That dynamic has fueled a powerful rally in technology stocks this year. The Nasdaq Composite recently recorded its strongest monthly performance since the early stages of the Covid pandemic, driven largely by surging demand tied to AI infrastructure, semiconductors, and data-center expansion.

Chipmakers including Nvidia, AMD, and Broadcom have emerged as major beneficiaries of the spending wave, while data-center suppliers, networking firms, and cloud operators have also seen valuations climb sharply.

Cramer suggested that investors focusing too heavily on short-term geopolitical volatility may be missing the scale of the technological transition underway.

“The computer-driven economy doesn’t care much about oil or interest rates,” he said. “The drive of computers is going in one direction, higher.”

Still, some analysts caution that the sector’s resilience could eventually be tested if energy costs continue rising sharply or if sustained inflation forces central banks to keep interest rates elevated longer than expected.

The AI boom itself is also contributing to rising electricity demand globally, increasing pressure on energy infrastructure and creating new vulnerabilities tied to power supply and commodity costs.

Nevertheless, many investors continue treating AI infrastructure as one of the few areas of the global economy still delivering strong and durable growth despite mounting geopolitical uncertainty.

Cramer’s comments capture the increasingly dominant belief on Wall Street that the next phase of economic expansion will be driven less by traditional consumer cycles and more by computing power, AI systems, and the infrastructure supporting them.

For investors, that has turned ownership of AI-linked companies from a speculative trade into what many now view as a core long-term market strategy.

Palantir Says AI Tokens Are the New Coal While Warning Against “Slop”, Raises Guidance After Strong Quarter


Palantir Technologies is leaning hard into the surging economics of AI usage, with its executives declaring that tokens, the basic units of text that large language models process and charge for, have become the new fuel powering enterprise AI adoption.

On its first-quarter earnings call Monday, Chief Technology Officer Shyam Sankar highlighted record token consumption on the company’s Artificial Intelligence Platform (AIP), noting that rapidly falling costs are paradoxically driving higher usage rather than curbing it.

“Tokens are the new coal,” Sankar said. “AIP is the train.”

He invoked the Jevons paradox to explain the phenomenon: “When the Victorians built more efficient steam engines, everyone assumed coal consumption would fall. Instead, it skyrocketed.”

This dynamic is playing out in real time at Palantir. As the cost per token drops, customers are deploying AI more aggressively across their operations, burning through far more tokens than expected. For Palantir, this translates into accelerating platform usage and stronger revenue momentum.
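The dynamic Sankar describes can be sketched with simple arithmetic: if a price cut triggers a large enough jump in usage, total spend rises rather than falls. All figures below are invented for illustration; nothing here reflects Palantir's actual pricing.

```python
# Hypothetical Jevons-paradox arithmetic for AI tokens: a 5x price cut
# paired with a 20x jump in usage quadruples total spend. All numbers
# are made up for illustration.

def total_spend(price_per_million_tokens: float, tokens_in_millions: float) -> float:
    """Dollars spent at a given unit price and usage volume."""
    return price_per_million_tokens * tokens_in_millions


spend_before = total_spend(10.0, 1_000)          # $10/M tokens, 1B tokens
spend_after = total_spend(10.0 / 5, 1_000 * 20)  # price falls 5x, usage grows 20x

print(spend_before)  # 10000.0
print(spend_after)   # 40000.0 -- cheaper tokens, yet 4x the total bill
```

The same mechanism the Victorians saw with coal: efficiency gains lower the unit cost, which unlocks new workloads and drives aggregate consumption higher.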

However, Sankar was quick to draw a firm boundary. Palantir is deliberately positioning itself as a “no slop zone,” pushing back against the industry tendency toward flashy but low-value AI experiments.

“More tokens means more slop,” Sankar said, criticizing the practice of “tokenmaxxing” — maximizing usage for its own sake rather than driving measurable business outcomes.

This disciplined approach appears to be resonating with customers. Palantir posted robust first-quarter results and significantly raised its full-year outlook, reinforcing its status as one of the clearest winners in the enterprise and government AI software market.

Strong Results Across the Board

For the quarter ended March 31, Palantir reported revenue of $1.63 billion, beating Wall Street expectations of $1.54 billion. Adjusted earnings per share came in at 33 cents, ahead of the 28 cents forecast.

Growth was particularly impressive on the commercial side, with U.S. commercial revenue exploding 133% year-over-year to $595 million. U.S. government revenue rose 84% to $687 million, helping drive overall revenue up 85% to $1.63 billion. U.S. revenue doubled to $1.28 billion.
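As a quick consistency check on the segment figures above, the two U.S. lines sum to the reported U.S. total:

```python
# Reported segment figures for the quarter, in millions of dollars
us_commercial = 595  # U.S. commercial revenue
us_government = 687  # U.S. government revenue

us_total = us_commercial + us_government
print(us_total)  # 1282 -> matches the ~$1.28 billion U.S. revenue reported
```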

CEO Alex Karp struck a bullish tone in his letter to shareholders. He said: “The United States remains the center, the constant core, of our business. And that business is erupting.”

Buoyed by demand, Palantir raised its fiscal 2026 revenue guidance to $7.65–$7.66 billion, up from the prior range of $7.18–$7.20 billion. It also increased its U.S. commercial revenue target to more than $3.22 billion. For the second quarter, the company expects revenue between $1.797 billion and $1.801 billion, comfortably above consensus estimates of $1.68 billion.

Palantir’s government business continues to benefit from the growing role of AI in modern warfare. Its Maven AI system, which supports data analysis and real-time targeting decisions, became an official “program of record” for the Pentagon in March — a designation that locks in long-term funding and institutional adoption across the U.S. military. The company also recently secured a $300 million contract with the U.S. Department of Agriculture.

On the commercial front, enterprises are increasingly turning to Palantir’s platforms to integrate disparate data sources and automate complex operational decisions. The combination of sticky government contracts and accelerating commercial wins has created a powerful dual growth engine.

While many AI companies are chasing raw usage volume, Palantir is betting that customers ultimately want reliable, high-impact systems rather than experimental demos. By focusing on practical deployments and warning against “AI slop,” the company is trying to build a reputation for quality and ROI in a market increasingly flooded with hype.

This approach seems to be paying off. Palantir’s ability to deliver tangible results for large organizations, particularly in complex environments like defense, intelligence, and heavy industry, sets it apart from pure-play AI model providers and more generic software vendors.

Palantir has come a long way from its early days, focused primarily on government intelligence work. It is now firmly established as a major player in the enterprise AI space, with a growing commercial book and a defensible position in high-stakes national security applications.

As token consumption continues to surge and AI becomes more deeply embedded in organizational workflows, Palantir’s “no slop zone” philosophy could prove to be a significant competitive advantage, helping it win and retain customers who are growing weary of flashy but ineffective AI experiments.