
Tesla Set to Launch Ambitious In-House AI Chip Manufacturing Project


Tesla is set to launch its ambitious in-house AI chip manufacturing project, known as the Terafab, imminently. Elon Musk announced via X that the “Terafab Project launches in 7 days,” which points to a launch on March 21, 2026.

Tesla aims to build a “gigantic” semiconductor fabrication facility (fab) to produce custom AI chips in-house. This addresses supply constraints from external foundries like TSMC and Samsung, which Musk has said won’t meet Tesla’s massive future demand for AI compute. The project is described as vertically integrated, combining logic processing, memory, and advanced packaging.

Projections include an initial target of 100,000 wafer starts per month, with potential scaling to much higher volumes; annual production of 100–200 billion AI and memory chips; and an estimated cost of around $20 billion, though some analysts suggest it could reach hundreds of billions long-term.

The chips will primarily power Tesla’s autonomous driving technology (Full Self-Driving software), its Robotaxi and Cybercab fleet, Optimus humanoid robots, and Dojo supercomputing for AI training. The fab will likely sit at or near Giga Texas in Austin (a North Campus expansion), though the exact groundbreaking site has not been officially confirmed.

Musk first floated the idea of a massive in-house fab in late 2025, emphasizing the need for vertical integration to avoid bottlenecks in AI chip supply. Tesla has reportedly begun hiring for the Terafab in Austin, with roles spanning factory design, construction, and production ramp-up. This marks a concrete step forward.

This move positions Tesla to reduce reliance on third-party manufacturers and accelerate its AI ecosystem including Dojo supercomputers and next-gen chips like AI5/AI6. It’s being hailed as potentially Tesla’s “Gigafactory moment” for AI—bold, high-risk, and transformative if executed successfully.

xAI’s AI hardware plans center on building the world’s most powerful and rapidly scalable AI compute infrastructure to train and run frontier models like Grok. Unlike Tesla’s focus on in-house chip fabrication (e.g., Terafab for massive AI chip production), xAI prioritizes hyperscale GPU clusters, dedicated power solutions, and emerging custom silicon design—while heavily relying on Nvidia GPUs for now.

This approach emphasizes speed of deployment, vertical integration in compute and power, and long-term efficiency to outpace competitors in the race toward superintelligence.

The Core of xAI’s Hardware Strategy

xAI’s flagship is the Colossus supercomputer cluster in Memphis, Tennessee, built in a repurposed factory shell. It’s described as the world’s largest AI training system by scale and coherence. The initial build launched in 2024 with 100,000 Nvidia H100 GPUs deployed in just 122 days, far faster than industry norms.

The cluster then doubled to 200,000 GPUs (a mix of H100/H200) in 92 days. By 2025–2026, it had evolved into Colossus 1 (230,000 GPUs, including early Blackwell GB200s) and Colossus 2 (gigawatt-scale, targeting 500,000+ Blackwell GPUs such as GB200/GB300).

Reports indicate 450,000–550,000+ GPUs active, with Colossus 2 operational as the first gigawatt-scale coherent AI training cluster; power draw is roughly 1 GW, with upgrades to 1.5–2 GW planned soon. The full Memphis campus, including expansions like the “MACROHARD” and “MACROHARDRR” buildings, targets ~2 GW total capacity and 1 million+ GPUs.
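As a rough back-of-envelope check on those figures (an assumption, not a reported number), dividing facility power by GPU count gives an all-in per-GPU power budget, including cooling and networking overhead:

```python
# Back-of-envelope sketch using the figures reported above; the per-GPU
# budget is a derived estimate, not an official specification.
facility_power_w = 1e9   # ~1 GW, the reported Colossus 2 power draw
gpu_count = 500_000      # the targeted Blackwell GPU count

per_gpu_w = facility_power_w / gpu_count
print(f"All-in power budget: ~{per_gpu_w:.0f} W per GPU")  # ~2000 W
```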

The system features massive memory bandwidth (194 PB/s at 200k GPUs), high-speed Nvidia Spectrum-X Ethernet networking, and liquid cooling for efficiency. Power comes from a primary 1.2 GW natural gas plant plus the grid, Tesla Megapacks, and potentially solar; xAI is treating energy as the emerging bottleneck after chips.

This “Gigafactory of Compute” enables simultaneous training of multiple Grok models and powers Grok’s advancements. xAI is also developing its own AI accelerators to reduce reliance on external suppliers, hiring custom silicon engineers since mid-2025 to co-design “from silicon to software compilers to models.”

Rumored efforts include inference-optimized chips and training accelerators, along with deals and discussions with foundries like TSMC and Samsung, plus Broadcom for large custom ASICs. The goals: optimize for Grok workloads, improve power efficiency and performance over off-the-shelf GPUs, and handle extreme scale.

Meanwhile, xAI continues massive Nvidia purchases, with billions spent on H100/H200/Blackwell GPUs and plans for large-scale orders from Nvidia and AMD. Elon Musk has praised Nvidia while noting that xAI, SpaceX, and Tesla will buy heavily from them, and he targets xAI having more AI compute than everyone else combined within ~5 years, with roadmaps to 1M+ GPUs and far beyond.

This includes potential international hyperscale builds, such as a Saudi Arabia partnership for nationwide Grok deployment with new GPU data centers, and exploration of space-based/orbital data centers via SpaceX synergies for solar-powered, low-cost compute that bypasses Earth’s energy limits.

xAI has raised tens of billions to fuel GPU buys, data center builds, and power plants, with an emphasis on owning infrastructure outright rather than leasing. Its hardware push is aggressive and execution-focused, turning compute bottlenecks into advantages through speed, scale, and partial vertical integration.

It’s tightly coupled to advancing Grok toward superintelligence, with energy and custom chips as next frontiers.

Stripe Collaborates with Tempo to Launch Machine Payments Protocol (MPP)


Stripe, in collaboration with Tempo, the payments-focused Layer 1 blockchain it co-developed with Paradigm, has announced the Machine Payments Protocol (MPP) as Tempo’s mainnet officially went live.

Tempo is a high-throughput, low-cost blockchain purpose-built for stablecoin payments and high-frequency transactions; think sub-second finality, predictable fees, and support for tens of thousands of TPS. It has no native gas token—instead, fees settle in major stablecoins. The mainnet opens public RPC endpoints for developers to build on, following a public testnet phase that included partners like Mastercard, Visa, UBS, and Klarna.
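For developers, connecting could look like any JSON-RPC integration. Below is a minimal sketch using web3.py, under the assumption that Tempo exposes an EVM-style JSON-RPC interface; the endpoint URL is a placeholder, not an official one:

```python
# Minimal connectivity sketch; assumes an EVM-style JSON-RPC endpoint.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.tempo.example"))  # hypothetical URL
if w3.is_connected():
    print("latest block:", w3.eth.block_number)
```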

Machine Payments Protocol (MPP)

MPP is an open, rail-agnostic standard for autonomous “machine-to-machine” and AI agent payments. It enables AI agents and software services to programmatically request, authorize, and settle payments without human intervention.

Key features include: A “sessions” primitive: agents pre-authorize a spending limit, then stream continuous micropayments for API calls, data access, compute, or ongoing services without needing an on-chain transaction per interaction; settlements can aggregate many small actions (a minimal sketch follows below).

Multi-rail support: MPP starts with stablecoins on Tempo but extends to fiat rails like cards via Stripe/Visa, Bitcoin Lightning via Lightspark, and more; it’s designed to be extensible beyond any single blockchain or payment system. As AI agents become more autonomous, they need seamless ways to pay for resources (data, tools, services) across the internet, and MPP standardizes this to avoid fragmented billing systems.
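To make the sessions idea concrete, here is a minimal client-side sketch assuming a pre-authorized cap and batched settlement; the PaymentSession class and its methods are hypothetical illustrations, not an MPP SDK:

```python
# Hypothetical sessions-style ledger: enforces a pre-authorized cap while
# accumulating micro-charges, then settles many actions in one batch.
from dataclasses import dataclass, field

@dataclass
class PaymentSession:
    cap_usd: float                                 # one-time pre-authorized limit
    spent_usd: float = 0.0
    pending: list = field(default_factory=list)    # unsettled micro-charges

    def charge(self, amount_usd: float, memo: str) -> None:
        if self.spent_usd + amount_usd > self.cap_usd:
            raise RuntimeError("session spending cap exceeded")
        self.spent_usd += amount_usd
        self.pending.append((amount_usd, memo))    # no settlement yet

    def settle(self) -> float:
        """Aggregate many sub-cent charges into a single settlement."""
        total = sum(amount for amount, _ in self.pending)
        self.pending.clear()
        return total

session = PaymentSession(cap_usd=5.00)
for i in range(1000):
    session.charge(0.001, f"api-call-{i}")         # thousands of sub-cent actions
print(f"Settling ${session.settle():.2f} in one batch")
```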

Stripe’s blog post, co-authored with Tempo, calls it “an open standard, internet-native way for agents to pay.” Developers can integrate MPP support using Stripe’s existing APIs, like PaymentIntents, in just a few lines of code. This positions Tempo as a settlement layer for an emerging “AI-native” economy, bridging traditional fintech with crypto, stablecoins, and agentic AI use cases.
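As an illustration of the merchant side, here is a hedged sketch using Stripe’s standard Python SDK; the PaymentIntent call below is ordinary Stripe usage, while anything MPP-specific would come from Stripe’s MPP documentation and is deliberately omitted:

```python
# Merchant-side sketch with Stripe's real Python SDK; standard PaymentIntent
# usage only. MPP-specific options are not shown here.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

# Charge an agent 50 cents for a metered request. In an MPP integration the
# authorization would come from the agent's pre-approved session rather than
# an interactive checkout flow.
intent = stripe.PaymentIntent.create(
    amount=50,        # amount in the smallest currency unit (cents)
    currency="usd",
    description="agent metered API usage",
)
print(intent.id, intent.status)
```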

It’s a major step in making programmable, autonomous payments practical at scale, and an exciting moment for the intersection of AI and payments. To recap, MPP lets AI agents (autonomous software entities) make programmatic payments for services, resources, or goods without constant human intervention.

It uses a simple, HTTP-based flow: an agent requests a resource → the service returns an HTTP 402 “Payment Required” response with payment details → the agent authorizes, often via pre-approved sessions → payment settles instantly on Tempo/stablecoins, cards via Stripe/Visa, Lightning/Bitcoin, etc. → access is granted.
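A minimal client-side sketch of that flow in Python, assuming a hypothetical MPP-enabled endpoint; the header name and settlement helper are illustrative placeholders, not the published spec:

```python
# Client-side sketch of the HTTP 402 flow; endpoint, header name, and
# settlement helper are hypothetical illustrations.
import requests

RESOURCE_URL = "https://api.example.com/v1/search"  # hypothetical MPP-enabled service

def settle_payment(payment_details: dict) -> str:
    """Authorize and settle via a pre-approved session (stubbed here)."""
    # A real agent would settle on Tempo, a card rail, or Lightning,
    # then return a proof-of-payment token.
    return "example-payment-proof-token"

resp = requests.get(RESOURCE_URL)
if resp.status_code == 402:                  # "Payment Required"
    details = resp.json()                    # price, accepted rails, payee, etc.
    proof = settle_payment(details)
    # Retry the request carrying proof of payment (header name is illustrative).
    resp = requests.get(RESOURCE_URL, headers={"X-Payment-Proof": proof})

resp.raise_for_status()
print(resp.json())
```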

This unlocks agentic commerce at scale, especially for high-frequency, low-value transactions (micropayments, streaming payments) that traditional billing can’t handle efficiently. Here are prominent, real-world or immediately live examples from the launch announcements, integrations, and early ecosystem: Pay-per-use API access and inference — Agents pay for individual LLM calls, data queries, or tool invocations on demand.

No need for API keys/accounts; just a wallet. Services like OpenAI, Anthropic, Google and others in the MPP directory can charge per request. This enables agents to dynamically switch models or access premium endpoints without setup friction.

Agents spin up headless browsers or run research tasks, paying per session or per query. Browserbase (browser infrastructure) already supports MPP for per-session billing. Parallel.ai integrates for web search, content extraction, and multi-hop research—agents pay per use with no account required.

Agents can also handle real-world tasks requiring payment: Postalform lets agents fund and send physical mail and letters, and early demos include agents ordering food delivery from a sandwich shop in NYC via integrated services. For ongoing work, an agent pre-authorizes a spending cap once, then streams tiny payments as it consumes resources.

This is ideal for agents running complex workflows that rack up thousands of sub-cent interactions. Agents can likewise pay for datasets, premium content, or analytics, powering autonomous research agents that crawl, synthesize, and pay for access across fragmented sources. And agents can shop, book travel, or handle logistics on behalf of users.

Examples include paying for flights/hotels via APIs, ordering products, or even coordinating physical delivery. MPP’s multi-rail support makes this seamless across web2/web3. Agents pay for compute/testing infra, code execution environments, or specialized tools without human-gated signups.

This lowers barriers for agent swarms collaborating on tasks. On the infrastructure side, Tempo handles tens of thousands of TPS with sub-second finality and predictable fees in stablecoins (no gas volatility), and a one-time approval for bounded spending enables autonomous micropayments and streaming.

Sellers add MPP support via Stripe’s APIs in a few lines of code, inheriting fraud tools and reporting, and the launch includes 100+ compatible services, making discovery plug-and-play. This is still early (agent payments are nascent), but with partners like Visa, Mastercard, Shopify, and OpenAI, MPP is positioned as practical infrastructure for an “AI-native economy.”

Developers can start building via public Tempo RPCs. The vision: agents as true economic actors, paying each other and services fluidly.

Phantom Receives First-of-its-Kind No-action Letter from CFTC 

[Image: Signage outside the U.S. Commodity Futures Trading Commission (CFTC) in Washington, D.C. REUTERS/Andrew Kelly]

Phantom, the popular self-custodial crypto wallet especially dominant in the Solana ecosystem, has received a first-of-its-kind no-action letter from the U.S. Commodity Futures Trading Commission (CFTC).

This relief means the CFTC’s Market Participants Division will not recommend enforcement action against Phantom for failing to register as an introducing broker or as an associated person of one when providing certain features.

Phantom can now act as a non-custodial software interface that connects users directly to CFTC-regulated entities such as registered futures commission merchants (FCMs), introducing brokers (IBs), and designated contract markets (DCMs). This allows users to view market data, track positions, and submit orders for regulated derivatives, including event contracts, perpetual contracts, and other CFTC-regulated products.

Importantly, Phantom remains non-custodial—it never takes control of user funds, and trades execute directly through the registered partners and exchanges. The no-action position is conditional.

Phantom must: provide clear user disclosures on risks, potential conflicts of interest, and derivatives trading specifics; maintain compliance policies for marketing and communications; and keep records of derivatives-related activities.

This setup treats Phantom’s role as a passive front-end tool rather than an active broker intermediary. Phantom described this as “first-of-its-kind” relief, creating a potential regulatory template for other self-custodial wallets to integrate with traditional regulated derivatives markets without heavy registration burdens.

It signals evolving U.S. regulatory clarity for crypto wallets bridging to TradFi derivatives, potentially boosting adoption, user access, and integration between crypto and regulated financial products.

Phantom’s relief via CFTC Letter No. 26-09 from the Market Participants Division is explicitly described as first-of-its-kind. It establishes a potential regulatory template or pathway for non-custodial wallet software to act as a passive interface—viewing data, tracking positions, and submitting orders to registered FCMs, IBs, or DCMs—without triggering IB registration, as long as conditions like disclosures, compliance policies, and record-keeping are met.

No follow-on or parallel relief has been issued or reported for other major wallets, like MetaMask, Coinbase Wallet, Trust Wallet, or Solana competitors such as Backpack. Recent CFTC no-action letters from late 2025 into early 2026 focus on unrelated areas, such as digital asset collateral acceptance by FCMs (Letter 25-40, covering BTC/ETH/stablecoins as margin).

Other recent letters cover CPO/CTA registration exemptions for certain private fund managers (an interim “QEP Exemption” restoration), event contract reporting simplifications, and cross-border swap rules. None of these address wallet-to-derivatives interfaces or IB registration for self-custodial software.

Industry commentary highlights this as a breakthrough for self-custody and potential TradFi-crypto integration, with some noting it could encourage other wallets to pursue similar requests. However, no evidence suggests any have received or publicly applied for identical relief yet—likely because the Phantom decision is brand new.


This could set a precedent, making it easier for other non-custodial wallets to seek comparable no-action positions in the future, especially under a more crypto-accommodating CFTC stance. Wallets interested in adding regulated derivatives features might now reference Phantom’s letter in their own requests to the CFTC.

Interim relief modeled on the former “QEP Exemption,” rescinded in 2012, allows certain SEC-registered managers to avoid CPO/CTA registration for pools offered only to qualified eligible persons (QEPs), pending rulemaking. This reduces duplicative burdens and reflects responsiveness to industry requests.

Cross-border swaps simplification: narrowed scope of certain swap requirements for non-U.S. persons using unified “U.S. person” and “guarantee” definitions, easing compliance without new filings. Separately, relief from swap data reporting and recordkeeping applies to specific binary and bounded event contracts on designated platforms.

These build on earlier crypto-related guidance, like spot market anti-fraud jurisdiction over BTC and ETH and tokenized collateral explorations. The Phantom letter references prior “TSV Letters” for technology service vendors providing order-entry facilitation without IB status.

Phantom’s relief extends similar logic to modern self-custodial wallet software, with conditions like pre-existing user relationships with registered entities, no custody, clear disclosures, and no direct order solicitation.

U.S. Federal Reserve Holds its Benchmark Fed Funds Rate at a Range of 3.5% to 3.75%


The Federal Reserve announced that it is holding its benchmark federal funds rate steady at a target range of 3.5% to 3.75%.

This marks the second consecutive meeting in 2026 where the FOMC has paused rate changes, following three 25-basis-point cuts late in 2025 (September, October, and December). The decision was made by an 11-1 vote, with Governor Stephen Miran dissenting in favor of a 25-basis-point cut.

The Fed cited solid economic expansion, a labor market showing some softening with job gains described as low, persistent inflation above the 2% target, and heightened uncertainty largely due to the ongoing U.S.-Israeli war with Iran, which has driven surges in oil prices and broader economic risks.

In its updated Summary of Economic Projections, including the “dot plot” of individual policymakers’ rate expectations, the median forecast remains unchanged from December 2025: officials still anticipate just one 25-basis-point rate cut in 2026, which would bring the target range to approximately 3.25%–3.5% by year-end.

Projections for subsequent years also held steady, with another single cut expected in 2027, bringing rates toward 3.0%–3.25% by end-2027, and stability around 3.1% in the longer run. Key updates to other projections include slightly higher GDP growth for 2026, with the median now 2.4%, up from 2.3% in December.

Unemployment is seen steady at 4.4% for 2026, then dipping to 4.3% in 2027. Inflation forecasts ticked higher due to energy price pressures: PCE inflation at 2.7% for 2026 (up from 2.4%) and core PCE also at 2.7% (up from 2.5%). Fed Chair Jerome Powell emphasized caution in post-meeting remarks, noting progress on inflation but not as rapid as hoped, with risks from geopolitical factors potentially delaying easing.

The ongoing 2026 Iran war, which began February 28 with U.S.-Israeli airstrikes on Iranian targets, including the killing of Supreme Leader Ali Khamenei, has triggered one of the largest oil supply shocks in decades. Iran sits on the Strait of Hormuz, the chokepoint for roughly 20% of global seaborne oil and LNG, and has retaliated with missile/drone attacks, threats to sink tankers, and strikes on Gulf energy infrastructure.

This has effectively halted or severely disrupted tanker traffic, damaged Iranian export terminals (Kharg Island, which handles ~90% of Iran’s crude), and spread risk to nearby producers. As a pre-war baseline (late 2025/early 2026), Brent crude hovered in the mid-$60s to low $70s.

The immediate reaction (late Feb/early March) was +7–13% in single sessions, quickly pushing Brent into the high $70s–low $80s, with a brief spike past $82, then $91, then over $100 within days/weeks. Overall, prices are up roughly 50% or more from the pre-war baseline, with Brent now trading in the $108–116 range.
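For transparency, the overall percentage rise follows directly from the figures above:

```python
# Percent change from the pre-war Brent baseline to the current trading
# range, using only the figures quoted in the text.
baseline_low, baseline_high = 65, 72    # mid-$60s to low $70s, $/bbl
current_low, current_high = 108, 116    # current Brent range, $/bbl

pct_min = (current_low / baseline_high - 1) * 100   # most conservative pairing
pct_max = (current_high / baseline_low - 1) * 100   # most aggressive pairing
print(f"overall rise: ~{pct_min:.0f}% to ~{pct_max:.0f}%")  # ~50% to ~78%
```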

Goldman Sachs noted an added ~$14/bbl “risk premium” just from Hormuz uncertainty. Roughly a fifth of global oil flows have stopped or been rerouted, with Iran’s own exports (pre-war ~3–3.5 mbpd) largely offline and secondary effects hitting Iraq and Gulf shipping. Insurers pulled coverage, rates skyrocketed, and tankers diverted or idled.

Even short disruptions trigger long-term hedging and speculative buying. Higher energy costs feed directly into PCE, as the Fed highlighted at its March 18 meeting, with U.S. gasoline up ~$0.43/gallon in a week and UK petrol/diesel rising 4–8p/litre.


Governments, including the U.S., are releasing emergency strategic reserves, while spare capacity and alternative pipelines partially offset losses. If the war drags on, analysts warn of sustained $100–120+ levels, or worse if Hormuz stays closed for weeks; a quick ceasefire could see prices fall 20–30% rapidly.

The knock-on effects: higher input costs for airlines, shipping, and manufacturing; stock-market volatility; and added pressure on central banks, which explains the Fed’s cautious “one-cut” 2026 outlook. In short, the war has already added a massive risk premium and a physical supply crunch that is visibly showing up at the pump and in inflation forecasts.

Markets remain highly sensitive to every new strike or diplomatic signal, which is exactly why the Fed cited “heightened uncertainty” from this conflict in its latest statement. The situation is fluid; any de-escalation or further escalation will move prices sharply. Powell indicated the Fed remains data-dependent and prepared to adjust if conditions shift, though no hikes are currently projected.

Markets had largely priced in a hold, with futures reflecting low odds of near-term cuts and some debate about whether even one cut materializes in 2026 given the uncertainties. The next FOMC meeting is scheduled for April 28-29, 2026.

Uber and Nvidia Expand Partnership to Accelerate Robotaxi Adoption


Uber and Nvidia have expanded their partnership to roll out robotaxis, autonomous Level 4 vehicles, on Uber’s ride-hailing network.

The rollout starts in Los Angeles and San Francisco in the first half of 2027, initially with data-collection vehicles and safety drivers, transitioning to fully driverless operations.

It will scale to 28 cities globally by 2028, spanning North America, Europe, Australia, and Asia. The vehicles will be powered by Nvidia’s DRIVE Hyperion autonomous vehicle platform and Alpamayo, a new reasoning-based AI model designed to handle complex, unpredictable real-world scenarios like construction zones or erratic pedestrians using chain-of-thought logic.

This builds on an earlier collaboration where Uber aims to integrate Nvidia’s tech for a large-scale Level 4-ready mobility network, potentially involving hundreds of thousands of vehicles long-term. Uber’s strategy emphasizes a “multi-player” ecosystem, partnering with various automakers and AV developers rather than building everything in-house.

Other Nvidia partners in autonomous driving include automakers like BYD, Hyundai, Nissan, Stellantis, Lucid, and Mercedes-Benz, and ride-hailing players like Lyft, Bolt, and Grab. The news boosted Uber’s stock and reinforced Nvidia’s push into full-stack autonomous driving software beyond just chips.

This positions Nvidia as a key enabler in the robotaxi space, making advanced AV tech more accessible to multiple operators and potentially accelerating global adoption. No specific list of all 28 cities has been detailed yet beyond the initial LA/SF launches.

Nvidia’s Alpamayo is a family of open-source AI models, tools, simulation frameworks, and datasets specifically designed to accelerate the development of safe, reasoning-based autonomous vehicles (AVs), particularly targeting Level 4 autonomy where the vehicle can handle all driving tasks in specific conditions without human intervention.

Announced at CES 2026 on January 5, 2026, with further expansions and mentions at GTC 2026 in March, Alpamayo was positioned by Nvidia as a major advancement in “physical AI,” often described as the “ChatGPT moment” for autonomous driving and robotics.

It addresses key challenges in the industry, especially the “long-tail” problem: rare, unpredictable edge cases (erratic pedestrians, unusual construction zones, or complex urban interactions) that cause traditional perception-planning AV systems to fail or require frequent human takeovers.

Alpamayo isn’t just a single model; it’s an ecosystem. Alpamayo 1, the initial flagship at ~10 billion parameters, is a Vision-Language-Action (VLA) model that processes multimodal inputs (primarily camera video, with support for fusing lidar, radar, and other sensors) and outputs driving actions like trajectory planning.

Unlike earlier end-to-end models that map inputs directly to actions, it incorporates chain-of-thought (CoT) reasoning, or a “Chain of Causation.” The model explicitly “thinks” step by step before deciding, generating human-readable reasoning traces: “The pedestrian is jaywalking unpredictably → I should slow down and yield → Adjust trajectory to maintain safe distance.”

This makes decisions more interpretable, safer, and easier to debug and validate for regulators. Later iterations add improved steerability, interactive reasoning, and better handling of real-time control. Supporting tools include physical AI AV datasets: massive, open, multi-sensor real-world driving data for training.

AlpaSim provides open-source, realistic closed-loop simulators for testing reasoning in edge cases without real-world risk, and the models integrate with Nvidia’s DRIVE Hyperion hardware platform for deployment in vehicles. Traditional systems rely on separate perception → prediction → planning modules, often rule-based or with limited adaptability to novel situations.

Alpamayo instead uses an end-to-end trainable reasoning VLA that mimics human-like judgment: perceive the scene, reason causally about risks and options, then act precisely, generating feasible trajectories via diffusion-based decoders. This enables better generalization to unseen scenarios, higher safety through explainable decisions, and faster iteration for developers, especially partners who don’t want to build everything from scratch (a conceptual sketch follows).
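As promised, here is a purely conceptual sketch of that perceive → reason → act loop; none of these names come from Nvidia’s actual API, and the fixed waypoints stand in for Alpamayo’s diffusion-based trajectory decoder:

```python
# Conceptual sketch of a reasoning VLA pipeline; all names are hypothetical
# placeholders, not Nvidia's API.
from dataclasses import dataclass

@dataclass
class DrivingDecision:
    reasoning_trace: list[str]              # human-readable chain of causation
    trajectory: list[tuple[float, float]]   # (x, y) waypoints

def plan(scene_description: str) -> DrivingDecision:
    # 1. Perceive: encode multimodal inputs (camera, optionally lidar/radar).
    # 2. Reason: generate explicit, inspectable steps before acting.
    trace = [
        f"Observed: {scene_description}",
        "Risk: pedestrian path may intersect ego lane",
        "Decision: slow down, yield, and keep a safe lateral gap",
    ]
    # 3. Act: decode a feasible trajectory (a diffusion decoder in Alpamayo;
    #    a fixed slow-and-yield path here for illustration).
    waypoints = [(0.0, 0.0), (1.0, 4.5), (2.0, 8.0)]  # decelerating profile
    return DrivingDecision(trace, waypoints)

decision = plan("pedestrian jaywalking unpredictably ahead")
print("\n".join(decision.reasoning_trace))
print("waypoints:", decision.trajectory)
```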

In the Uber-Nvidia collaboration, Alpamayo powers the AI stack for Level 4 robotaxis launching in LA/SF in 2027 and scaling to 28 cities by 2028. It complements DRIVE Hyperion hardware, allowing operators like Uber to deploy reasoning-capable AVs more quickly.

Early real-world tests and simulations show strong performance in complex scenarios, though like most emerging AV tech, it may still require safety drivers or occasional interventions during initial rollouts. Alpamayo represents Nvidia’s push to democratize advanced autonomy via open models, shifting from pure hardware (chips) to full-stack software that enables “thinking” vehicles.