
Nvidia Unveils Computing Platforms for Orbital Data Centers as AI Race Extends into Space

Nvidia on Monday unveiled a new generation of computing platforms designed for orbital data centers, marking a significant step toward deploying artificial intelligence infrastructure in space as demand for computing power continues to surge.

The announcement came during the company’s GTC 2026 conference, where CEO Jensen Huang described space-based computing as the next frontier for AI systems that increasingly require massive processing capacity close to where data is generated.

“Space computing, the final frontier, has arrived,” Huang said during the event. “As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated.”

The move marks a major shift as the AI industry explores unconventional solutions to meet the soaring demand for computing resources driven by generative AI, robotics, and autonomous systems.

A New Computing Platform Built For Space

At the center of the announcement is Nvidia’s Vera Rubin Space-1 Module, a computing platform designed specifically for use in satellites and orbital infrastructure. The module integrates Nvidia’s IGX Thor and Jetson Orin processors and is engineered to operate in size-, weight-, and power-constrained environments, conditions that are critical for space missions where hardware must be compact, energy-efficient, and resilient.

According to the company, the platform will support space missions being developed by several industry partners, including Axiom Space, Starcloud, and Planet Labs. These systems are expected to enable satellites to process data directly in orbit, reducing the need to transmit massive volumes of raw data back to Earth before analysis.

Such an approach could transform how Earth observation, communications, and deep-space exploration missions operate. Traditionally, satellites have collected data and transmitted it back to ground-based data centers for processing.

Nvidia’s approach aims to bring AI computing closer to the data source, allowing satellites to analyze information in real time. This capability could allow satellites to filter and process imagery, track weather systems, monitor infrastructure, or detect anomalies without waiting for instructions from Earth.

For example, Earth-observation satellites could use onboard AI to identify natural disasters, track deforestation, or analyze agricultural activity instantly, dramatically reducing response times. The development also aligns with a broader trend in computing known as edge AI, where processing occurs near the point where data is generated rather than in distant centralized servers.

The Engineering Challenges

Although the technology is promising, significant engineering hurdles remain before orbital data centers become widespread. One of the most difficult challenges is cooling high-performance computing systems in the vacuum of space.

“In space, there’s no convection, there’s just radiation,” Huang said during his keynote address.

“And so we have to figure out how to cool these systems out in space, but we’ve got lots of great engineers working on it.”

Cooling is a major issue because traditional data centers rely on air or liquid circulation to remove heat generated by processors. In space, heat must instead be dissipated through radiation, requiring new thermal designs and materials.
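To get a feel for the scale of the problem, the Stefan-Boltzmann law gives the radiator area needed to shed a given heat load purely by radiation. Below is a back-of-the-envelope Python sketch; the 100 kW load and 300 K radiator temperature are illustrative assumptions, not figures from Nvidia:

```python
STEFAN_BOLTZMANN = 5.670374419e-8  # W / (m^2 * K^4)

def radiator_area_m2(heat_watts: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject a given heat load purely by
    radiation (Stefan-Boltzmann law), ignoring absorbed sunlight and
    the view factor to deep space -- a best-case estimate."""
    return heat_watts / (emissivity * STEFAN_BOLTZMANN * temp_k**4)

# A hypothetical 100 kW compute module radiating at 300 K needs
# on the order of a couple hundred square meters of radiator.
print(round(radiator_area_m2(100_000, 300.0)))
```

Even under these generous assumptions, the required radiator area dwarfs the computing hardware itself, which is why new thermal designs and materials are central to the engineering effort.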

The Search For Power Beyond Earth

The push toward space-based computing is partly driven by the rapidly escalating energy demands of artificial intelligence. The construction of massive AI data centers on Earth has already been linked to rising electricity consumption and strain on power grids in several regions.

By contrast, satellites and orbital facilities could potentially harness virtually unlimited solar energy without the land and infrastructure constraints faced by terrestrial data centers. Technology companies are increasingly studying whether space could provide a long-term solution to the energy demands of large-scale computing.

In November, Google announced Project Suncatcher, an initiative exploring the feasibility of deploying computing infrastructure powered by solar energy in orbit.

The concept is also gaining traction among companies involved in space launch and satellite infrastructure. Last month, Elon Musk’s AI startup xAI was acquired by SpaceX in a deal valued at $1.25 trillion, a move widely interpreted as part of a strategy to build AI-powered computing systems in orbit.

SpaceX is one of Nvidia’s largest customers for AI chips, supplying hardware used to train and operate advanced AI models. Earlier this year, SpaceX also asked the Federal Communications Commission for approval to launch as many as one million satellites intended to support orbital AI infrastructure.

With generative AI models becoming larger and more computationally demanding, traditional data centers on Earth may struggle to keep pace with the scale of future workloads. Against this backdrop, space has emerged as an alternative. But space-based computing remains an ambitious concept facing significant technical and regulatory challenges.

Nvidia Expands Autonomous Driving Push With Hyundai, Nissan, BYD, and Geely as Robotaxi Race Intensifies

Nvidia is expanding its presence in the autonomous vehicle industry through new partnerships with major global automakers, including Hyundai Motor, Nissan Motor, Isuzu Motors, China’s electric vehicle giant BYD, and automaker Geely, as the U.S. chipmaker seeks to position its technology at the center of the next phase of self-driving vehicle development.

The new agreements involve Nvidia’s Drive Hyperion platform, a system that combines specialized chips, software, simulation tools, and artificial intelligence models designed to enable advanced driver-assistance and autonomous driving functions.

The platform is built to support “Level 4” autonomous vehicles — a stage where cars can operate without human intervention under specific conditions or within defined geographic zones.

The partnerships were announced at the company’s annual Nvidia GTC developer conference in San Jose, where CEO Jensen Huang framed the latest agreements as evidence that the long-anticipated autonomous driving revolution may finally be approaching a turning point.

“We’ve been working on self-driving cars for a long time. The ChatGPT moment of self-driving cars has arrived,” Huang said during the keynote address. “We now know we could successfully autonomously drive cars, and today, we are announcing four new partners for Nvidia’s robotaxi-ready platform. … The number of robotaxi-ready cars in the future are going to be incredible.”

The new partnerships underline the push by Nvidia to become the computing backbone for the autonomous vehicle industry rather than building vehicles itself.

Drive Hyperion forms part of the company’s broader end-to-end autonomous vehicle platform that includes data-center computing for training AI models, large-scale simulation systems that replicate real-world driving conditions, and powerful in-vehicle processors that act as the “brain” of autonomous cars.

By providing the underlying computing architecture, Nvidia allows automakers to focus on vehicle manufacturing and consumer experience while outsourcing much of the complex AI processing required to operate self-driving systems. This approach mirrors the company’s role in the artificial intelligence boom, where its processors power many of the world’s largest AI data centers used to train and run advanced models.

Level 4 Autonomy Remains The Industry’s Critical Milestone

Autonomous vehicles are typically classified across six levels of automation. Most vehicles available to consumers today operate at Level 2, meaning the system can assist with steering, braking, and acceleration, but drivers must remain attentive and ready to take control at all times.

Level 4 represents a major technological leap. At this stage, vehicles can drive themselves without human supervision in predefined conditions such as urban districts, dedicated lanes, or mapped city routes.

While fully autonomous vehicles capable of operating anywhere without human intervention — known as Level 5 autonomy — remain a long-term ambition, many companies view Level 4 robotaxis as the first realistic commercial breakthrough. Companies such as Alphabet’s self-driving subsidiary Waymo already operate robotaxi fleets in several U.S. cities, where vehicles can transport passengers without a driver in certain designated areas.

Advances in artificial intelligence are increasingly viewed as the catalyst that could unlock the long-promised autonomous driving revolution.

Modern AI systems are able to process vast amounts of sensor data from cameras, radar, and lidar to identify objects, predict movement, and make split-second driving decisions. These capabilities have improved dramatically in recent years as machine-learning models have become larger and more sophisticated, trained using enormous datasets collected from real-world driving.

Many industry observers believe the same AI breakthroughs powering generative AI systems are also accelerating progress in autonomous vehicle technology. This convergence explains Huang’s reference to a “ChatGPT moment” for self-driving cars — a comparison suggesting the technology may be approaching a phase of rapid adoption similar to the explosive growth of generative AI tools.

For automakers, partnerships with technology companies such as Nvidia have become increasingly important as vehicles evolve into complex computing systems. Modern cars already contain hundreds of software-controlled features, and autonomous driving systems require enormous processing power to analyze road conditions in real time.

Automakers, including Hyundai and Nissan, are under pressure to accelerate their autonomous vehicle programs as competition intensifies across both the traditional automotive sector and the technology industry. Chinese companies such as BYD and Geely are also rapidly advancing their autonomous driving capabilities as China seeks to become a global leader in next-generation vehicle technologies.

By aligning with Nvidia’s platform, these companies gain access to advanced computing systems without needing to build them entirely from scratch.

The newly announced partnerships add to a growing list of companies already working with Nvidia’s autonomous driving technology. Existing customers using the Drive Hyperion platform include autonomous technology firms such as Aurora Innovation and Nuro. Other companies integrating Nvidia technology into their mobility platforms include Sony Group, Uber Technologies, Stellantis — the parent company of Jeep — and electric vehicle manufacturer Lucid Group.

This growing ecosystem suggests Nvidia is attempting to establish its platform as a common operating system for autonomous vehicles across multiple manufacturers and markets.

A Multitrillion-Dollar Opportunity

Many analysts believe autonomous mobility could eventually become one of the largest new markets in transportation. Robotaxis, autonomous delivery vehicles, and self-driving logistics fleets could reshape urban transportation networks and reduce the cost of mobility services.

Industry executives and Wall Street analysts frequently estimate that the long-term market for autonomous vehicles and related services could reach several trillion dollars globally.

For Nvidia, capturing even a portion of that market would represent a major new revenue stream beyond its already dominant position in artificial intelligence chips.

However, the path toward fully autonomous vehicles has been marked by setbacks and expensive failures. Several companies that once promised rapid deployment of robotaxis have struggled to overcome technical, regulatory, and safety challenges.

General Motors’ autonomous driving subsidiary Cruise was once considered one of the leaders in the sector alongside Waymo. However, Cruise shut down its robotaxi operations in 2024 following a high-profile incident in San Francisco in which one of its vehicles dragged a pedestrian. General Motors had invested more than $10 billion in the project before ultimately abandoning the program.

Meanwhile, companies including Tesla continue to pursue alternative strategies for autonomous driving, relying primarily on camera-based AI systems. Technology firms such as Amazon are also exploring the sector through their autonomous vehicle unit Zoox.

Autonomous Vehicles As Nvidia’s Next Growth Frontier

While Nvidia’s explosive growth in recent years has been driven largely by demand for artificial intelligence chips, the automotive sector represents one of the company’s most significant long-term expansion opportunities.

Vehicles are increasingly becoming high-performance computing platforms, requiring powerful processors to run advanced software systems ranging from driver-assistance features to entertainment platforms and connected services.

If robotaxi networks eventually scale globally, the computing infrastructure inside those vehicles, along with the data centers supporting them, is expected to become one of the next major battlegrounds in the technology industry.

Nvidia could establish a lasting foothold in the future of mobility by embedding its chips and software deep inside those systems. The company's processors already underpin much of the world's AI infrastructure, and extending that dominance into autonomous transportation could transform the chipmaker from a supplier of chips into a foundational platform provider for mobility.

Nvidia Sees $1tn AI Chip Opportunity by 2027 as Race Shifts Toward Real-Time Inference

Nvidia said the revenue opportunity for its artificial intelligence chips could reach at least $1 trillion by 2027, as the company pivots more aggressively toward the rapidly expanding market for real-time AI computing.

The forecast was outlined by CEO Jensen Huang during the company’s annual Nvidia GTC developer conference in San Jose, California, where the chipmaker introduced new processors and system designs aimed at accelerating AI responses for large-scale applications.

“The inference inflection has arrived,” Huang said during the keynote address delivered in a packed arena with more than 18,000 attendees, highlighting how the AI industry is transitioning from building models to deploying them widely.

The estimate represents a sharp increase from the roughly $500 billion AI infrastructure opportunity Nvidia previously projected for 2026, reinforcing investor expectations that global demand for AI computing capacity will continue expanding at a rapid pace.

Shares of Nvidia — currently the world’s most valuable publicly traded company with a market capitalization exceeding $4.3 trillion — briefly rose following the announcement before closing about 1.6% higher.

AI industry entering deployment phase

The company’s updated forecast indicates a broader shift taking place across the artificial intelligence industry. Over the past two years, the biggest technology companies have spent hundreds of billions of dollars acquiring computing hardware to train increasingly sophisticated AI models. Now the focus is rapidly moving toward inference — the process where trained models generate responses, make predictions, or execute tasks in real time for users.

This stage of AI computing is expected to be even larger in scale than the training phase because it involves serving millions or potentially billions of user queries daily across applications such as chatbots, search engines, productivity tools, and autonomous systems.

Companies including OpenAI and Anthropic are rapidly expanding AI services to support growing user bases, while technology firms such as Meta Platforms are integrating AI assistants into social media platforms used by billions of people. As a result, demand for hardware capable of delivering AI responses instantly — with minimal latency — is rising sharply.

While Nvidia dominates the market for chips used to train large language models, inference computing has become a more competitive arena. Large technology companies have begun designing their own specialized processors optimized for running AI models efficiently at scale.

Meta, for example, has invested heavily in developing custom AI chips for its internal infrastructure, while several cloud providers are also building proprietary silicon.

To maintain its leadership, Nvidia unveiled a new architecture designed specifically for high-performance inference workloads. The company’s upcoming Vera Rubin AI chip will perform a stage known as “prefill,” in which human prompts are converted into machine-readable tokens that AI systems can process.

The second stage of the process, called “decode,” will be accelerated by chips from startup Groq, which specializes in extremely fast AI inference.

Nvidia licensed Groq’s technology in a deal valued at $17 billion last year, underscoring how critical inference performance has become for the next phase of AI deployment. The company aims to reduce the time it takes AI systems to generate responses — a key metric for applications ranging from digital assistants to automated customer service tools — by combining its own processors with Groq’s specialized hardware.

Even as competitors develop alternative AI chips, Nvidia continues to benefit from a powerful ecosystem built around its programming platform, CUDA. The CUDA software environment allows developers to design algorithms optimized for Nvidia hardware, creating a large installed base that reinforces the company’s dominance.

“The installed base is what attracts developers who then create the new algorithms that achieve the breakthrough technologies,” Huang said.

“We are in every cloud. We’re in every computer company. We serve just about every single industry.”

That ecosystem advantage has made it difficult for competing chip architectures to gain widespread adoption, even when they offer specialized performance improvements.

Next Generation Of AI Processors

Huang also introduced a new processor called the Feynman AI chip, named after the Nobel Prize-winning physicist Richard Feynman. The chip forms part of Nvidia’s long-term roadmap for advancing AI computing capabilities, with future processors expected to deliver significantly higher performance for both training and inference workloads.

The company is also investing heavily in technologies that enable faster communication between processors in large AI data centers. Analysts expect Nvidia to elaborate further on its recent $2 billion investments in optical networking companies Lumentum and Coherent Corp., which manufacture laser-based components used to transmit data between chips using beams of light.

Optical interconnects are increasingly essential as AI systems grow to include tens of thousands of processors operating simultaneously within massive data center clusters.

Beyond technology companies, national governments are also becoming major buyers of AI infrastructure. Countries, including Saudi Arabia, are investing heavily in sovereign AI systems designed to support national data processing, research, and digital services. These projects often rely on Nvidia’s processors, reinforcing the company’s role as a foundational supplier to the global AI ecosystem.

At the same time, AI hardware has become a key element of technological competition between the United States and China, with export controls limiting the sale of some advanced chips to Chinese companies. Despite those geopolitical tensions, Nvidia continues to release open-source AI software tools, positioning itself as a central platform provider for developers worldwide.

Nvidia’s $1 trillion revenue opportunity forecast highlights how the AI infrastructure boom is expected to extend well beyond the initial wave of model development.

As artificial intelligence becomes embedded across industries — from healthcare and finance to logistics and manufacturing — the amount of computing power required to support those systems will expand dramatically. Analysts say the next stage of growth will be driven by applications that require real-time AI responses, including autonomous vehicles, intelligent robotics, and advanced digital assistants.

If those technologies scale globally, demand for AI computing infrastructure could grow far beyond current projections. The transition from model training to real-time inference is being interpreted as the next major phase of growth, especially for Nvidia, which has already become the most valuable company in the world, largely due to the AI boom.

CNN Fear & Greed Index Stands at 20 Which Falls Under Extreme Fear Category 

As of the most recent data, the CNN Fear & Greed Index stands at 20, which falls squarely in the Extreme Fear category. The index ranges from 0 to 100: 0–24 Extreme Fear, 25–44 Fear, 45–55 Neutral, 56–75 Greed, and 76–100 Extreme Greed.
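The banding above is simple enough to encode directly. Here is a minimal Python sketch; the `fg_category` helper is illustrative, not part of any CNN API:

```python
def fg_category(score: float) -> str:
    """Map a CNN Fear & Greed reading (0-100) to its sentiment band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 24:
        return "Extreme Fear"
    if score <= 44:
        return "Fear"
    if score <= 55:
        return "Neutral"
    if score <= 75:
        return "Greed"
    return "Extreme Greed"

print(fg_category(20))  # prints "Extreme Fear"
```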

At 20, the market is already deep into Extreme Fear, not merely “on the verge” of entering it. This sentiment gauge, maintained by CNN, aggregates seven equally weighted indicators of investor psychology and market conditions: stock price momentum, stock price strength (new highs/lows), stock price breadth (advancing/declining volume), the put/call ratio, junk bond demand, market volatility (VIX), and safe haven demand.

The low reading reflects broad pessimism, likely driven by recent market declines. For context, the S&P 500 closed at 6,632.19 on March 13, down ~0.61% on the day, following a drop from 6,672.62 the prior session and higher levels earlier in the week.

The index had been higher recently—around 21 the day before, 25 a week prior, and 37 a month ago—indicating a sharp slide into deeper fear territory over the past couple of weeks. Extreme Fear readings historically suggest capitulation or oversold conditions, where stocks may be undervalued due to panic selling (contrarian investors sometimes view it as a potential buying opportunity).

However, it can also signal ongoing downward pressure if fundamentals or external factors continue weighing on sentiment. The VIX (volatility index) was recently around 27, elevated but not at panic extremes. Market conditions can shift quickly, so this reflects the snapshot as of mid-March 2026.

The VIX, officially known as the CBOE Volatility Index (ticker: ^VIX), is a real-time market index created and maintained by the Chicago Board Options Exchange (CBOE). It measures the market’s expectation of 30-day forward-looking volatility in the S&P 500 Index (SPX), derived from the prices of SPX options.

Often called the “fear gauge” or “fear index”, it reflects investor sentiment and perceived risk in the U.S. stock market—higher VIX levels indicate greater expected turbulence (fear/panic), while lower levels suggest calm and complacency. The index was introduced by the CBOE in 1993, originally based on at-the-money options on the S&P 100 Index (OEX).

In 2003, the methodology was updated to use a broad range of out-of-the-money SPX options, giving a more accurate, model-independent measure of implied volatility. Since then, it has become the world’s premier benchmark for equity market volatility. Related products launched later include VIX futures (2004), VIX options, mini-VIX futures, and various volatility ETPs/ETFs (e.g., VXX, UVXY).

The VIX does not track historical volatility of past stock movements. Instead, it captures implied volatility — the volatility level “implied” by current option prices. In simple terms: Option prices rise when traders expect bigger future swings. The VIX aggregates these option prices into a single number representing the expected annualized standard deviation of the S&P 500 over the next 30 calendar days.

A VIX of 20 means the market expects the S&P 500 to move up or down by about 20% on an annualized basis, or roughly ±1.26% per trading day (20% ÷ √252 ≈ 1.26%, often rounded to ~1.2% for rough math). The VIX expresses volatility in percentage terms (not points).
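The annualized-to-daily conversion above is just a square-root-of-time rule, assuming independent daily returns. A quick Python sketch:

```python
import math

def expected_daily_move(annualized_vol_pct: float, trading_days: int = 252) -> float:
    """Convert an annualized volatility reading (e.g., a VIX level in
    percent) to a rough one-day expected move, using square-root-of-time
    scaling under the assumption of independent daily returns."""
    return annualized_vol_pct / math.sqrt(trading_days)

print(round(expected_daily_move(20.0), 2))  # prints 1.26
```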

The current methodology (post-2003) is model-free and uses a wide range of SPX options: it focuses on options expiring in the near term (weighted to target exactly 30 days to expiration), uses both weekly and monthly SPX options, and includes out-of-the-money puts and calls, not just at-the-money options.

The calculation interpolates between two expiration cycles to achieve a constant 30-day horizon. The core idea is a variance swap replication formula, simplified as: σ² = (2/T) × Σ [ (ΔK/K²) × e^(rT) × Q(K) ] − (1/T) × (F/K₀ − 1)², where σ = VIX/100 (volatility as a decimal), T = time to expiration (in years), K = strike price, ΔK = interval between strikes, Q(K) = mid-quote price of the option at strike K, F = forward index level derived from option prices, r = risk-free rate, and K₀ = first strike below the forward level.
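The single-expiration variance sum can be sketched in Python. This is a toy illustration only: the official CBOE calculation blends two expirations to a constant 30-day horizon and applies precise strike-selection rules, and the example quotes below are invented:

```python
import math

def vix_single_expiry(quotes: dict, T: float, r: float, F: float) -> float:
    """Toy model-free VIX-style calculation for one expiration.
    quotes: strike K -> mid option price Q(K); T: years to expiry;
    r: risk-free rate; F: forward index level."""
    strikes = sorted(quotes)
    # K0: first strike at or below the forward level
    k0 = max(k for k in strikes if k <= F)
    total = 0.0
    for i, k in enumerate(strikes):
        # Delta-K: spacing between neighboring strikes
        if i == 0:
            dk = strikes[1] - strikes[0]
        elif i == len(strikes) - 1:
            dk = strikes[-1] - strikes[-2]
        else:
            dk = (strikes[i + 1] - strikes[i - 1]) / 2
        total += (dk / k**2) * math.exp(r * T) * quotes[k]
    variance = (2 / T) * total - (1 / T) * (F / k0 - 1) ** 2
    return 100 * math.sqrt(variance)  # take the root, express in percent

# Invented quotes for three strikes, 30 days out, zero rates:
print(round(vix_single_expiry({90: 1.0, 100: 2.0, 110: 1.0},
                              30 / 365, 0.0, 100.0), 1))
```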

Taking the square root of that annualized variance and multiplying by 100 gives volatility in percent. In practice, you don’t need to compute it manually; CBOE publishes the real-time VIX value during market hours. Readings below roughly 15–20 signal low volatility and complacency, often seen in bull markets (e.g., long stretches in 2017 or the mid-2020s).

The historical average is around 19–20 since inception, but the index tends to spend long stretches at low levels and spike sharply during stress. The VIX exhibits strong mean reversion: spikes are usually temporary, and volatility tends to fall back over time. Contrarians view very low VIX readings (below ~12–15) as a warning sign of complacency, while very high readings can mark capitulation.

Traders buy VIX futures, options, or ETFs when expecting volatility spikes (the VIX moves inversely to the stock market in most cases), or speculate on whether implied volatility is too high or too low relative to expected realized volatility (the volatility risk premium often keeps VIX futures in contango, benefiting short-volatility strategies in calm periods).

As of the most recent close, the VIX settled at 27.19, down slightly from prior levels, with an intraday range of 24.67–28.47. This places it in an elevated zone, consistent with recent market uncertainty and the Fear & Greed Index dipping into Extreme Fear territory.

Bitcoin Surges Past $74,000 as Geopolitical Tensions Rise, Market Eyes Further Upside

Bitcoin climbed above the $74,000 mark on Monday, reaching an intraday high of $74,471 as rising geopolitical tensions in the Middle East fueled renewed momentum in the cryptocurrency market.

The rally comes as investors increasingly look to digital assets as alternative stores of value during periods of global uncertainty.

According to a market note from QCP Capital, the market may be heading toward “a late-quarter plot twist,” as both Bitcoin and Ethereum began the week with strong upward momentum. While Bitcoin broke through a key resistance level above $74,000, Ethereum followed closely behind, trading near $2,700.

Market analysts note that Bitcoin is showing early signs of recovery after successfully defending a major confluence support zone. The strong reaction from this level suggests buyers have stepped in aggressively to absorb selling pressure, potentially laying the groundwork for a broader bullish reversal.

The recent price surge has also triggered significant liquidations in the derivatives market. Over the past 24 hours, short positions across the crypto market totaling approximately $300 million were wiped out. Data from Coinglass shows that Bitcoin futures open interest rose by about 6% during the same period, climbing to $49.2 billion.

Commenting on the trend, Coinglass noted that the simultaneous rise in both price and open interest has historically preceded periods of heightened volatility. “New fuel is building again,” the firm said, suggesting that the market may be preparing for another major move.

Technical analysts have also pointed to Bitcoin’s consolidation around the 200-week exponential moving average (EMA) and the weekly fair value gap between $70,000 and $76,000 as key signals that market dynamics may be shifting. According to crypto analytics platform Cryptorphic, the current price action indicates a transition from accumulation and absorption into the early stages of a potential trend reversal.
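For reference, the exponential moving average the analysts cite is computed with the standard smoothing convention alpha = 2/(span + 1). A minimal sketch, where span = 200 over weekly closing prices would correspond to the 200-week EMA:

```python
def ema(values: list, span: int) -> list:
    """Exponential moving average, seeded with the first value.
    alpha = 2 / (span + 1) is the convention most charting tools use."""
    alpha = 2 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        # Each point blends the new value with the running average
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out
```

Because recent prices carry exponentially more weight than older ones, the EMA reacts faster than a simple moving average, which is why long-horizon EMAs are popular as trend-support levels.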

Bitcoin’s rally also coincides with strong institutional demand. The world’s largest cryptocurrency recently benefited from significant purchases by Michael Saylor through his firm MicroStrategy, alongside continued inflows into spot Bitcoin exchange-traded funds. Analysts at Laser Digital, a digital asset unit backed by Nomura, highlighted these factors as key drivers of the latest price momentum.

Meanwhile, Chris Beauchamp, chief market analyst at IG Group, noted that Bitcoin appears to be carving out its own niche amid broader market volatility.

“Everything else seems to live or die based on oil prices,” Beauchamp said. “Bitcoin has been immune to that. It’s been finding its own little haven niche.”

Notably, crypto analyst Michael van de Poppe believes the rally could extend further, suggesting that stronger performance from Ethereum may help propel Bitcoin toward the $80,000 level.

However, market commentator Ted Pillows warned that Bitcoin may face heavy resistance between $75,000 and $76,000. According to him, the asset could briefly break above $76,000 before reversing sharply and falling back below $60,000.

Similarly, Arthur Hayes, former CEO of BitMEX, recently cautioned that persistent macroeconomic and geopolitical instability could trigger a deeper correction, potentially pushing Bitcoin below the $60,000 mark.

Outlook

Despite short-term uncertainty, the broader outlook for Bitcoin remains cautiously bullish. Analysts say the cryptocurrency appears to be breaking out of a prolonged compression phase, which could signal the formation of a higher-timeframe base.

If Bitcoin manages to maintain strength above the $74,000 level, the next major target for bulls would be around $80,600, a price zone that previously served as a breakdown point. A successful push beyond that level could open the door for further gains.

For now, investors remain focused on whether the cryptocurrency can sustain its momentum amid geopolitical developments and evolving macroeconomic conditions.