Uber and Nvidia have expanded their partnership to roll out robotaxis: Level 4 autonomous vehicles operating on Uber’s ride-hailing network.
The rollout starts in Los Angeles and San Francisco in the first half of 2027, initially with data-collection vehicles and safety drivers, before transitioning to fully driverless operations.
It will scale to 28 cities globally by 2028, spanning North America, Europe, Australia, and Asia. The vehicles will be powered by Nvidia’s DRIVE Hyperion autonomous vehicle platform and Alpamayo, a new reasoning-based AI model designed to handle complex, unpredictable real-world scenarios like construction zones or erratic pedestrians using chain-of-thought logic.
This builds on an earlier collaboration where Uber aims to integrate Nvidia’s tech for a large-scale Level 4-ready mobility network, potentially involving hundreds of thousands of vehicles long-term. Uber’s strategy emphasizes a “multi-player” ecosystem, partnering with various automakers and AV developers rather than building everything in-house.
Other Nvidia partners in autonomous driving include automakers such as BYD, Hyundai, Nissan, Stellantis, Lucid, and Mercedes-Benz, as well as ride-hailing players like Lyft, Bolt, and Grab. The news boosted Uber’s stock and reinforces Nvidia’s push into full-stack autonomous driving software, beyond just chips.
This positions Nvidia as a key enabler in the robotaxi space, making advanced AV tech more accessible to multiple operators and potentially accelerating global adoption. No specific list of all 28 cities has been detailed yet beyond the initial LA/SF launches.
Nvidia’s Alpamayo is a family of open-source AI models, tools, simulation frameworks, and datasets specifically designed to accelerate the development of safe, reasoning-based autonomous vehicles (AVs), particularly targeting Level 4 autonomy where the vehicle can handle all driving tasks in specific conditions without human intervention.
Announced at CES 2026 on January 5, 2026, with further expansions and mentions at GTC 2026 in March, Alpamayo was positioned by Nvidia as a major advancement in “physical AI,” often described as the “ChatGPT moment” for autonomous driving and robotics.
It addresses key challenges in the industry, especially the “long-tail” problem: rare, unpredictable edge cases such as erratic pedestrians, unusual construction zones, or complex urban interactions that cause traditional perception-planning AV systems to fail or require frequent human takeovers.
Alpamayo isn’t just a single model; it’s an ecosystem. Alpamayo 1, the initial flagship at roughly 10 billion parameters, is a Vision-Language-Action (VLA) model that processes multimodal inputs (primarily camera video, with support for fusing lidar, radar, and other sensors) and outputs driving actions such as trajectory planning.
Unlike earlier end-to-end models that map inputs directly to actions, it incorporates chain-of-thought (CoT) reasoning, or “Chain of Causation.” The model explicitly “thinks” step by step before deciding, generating human-readable reasoning traces such as: the pedestrian is jaywalking unpredictably → I should slow down and yield → adjust trajectory to maintain a safe distance (a toy sketch of such a trace follows below).
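To make the idea concrete, here is a minimal, purely illustrative Python sketch of what a human-readable reasoning trace paired with a planned trajectory might look like as data. The class names and fields are assumptions for illustration only, not Nvidia’s actual Alpamayo interfaces.

```python
# Hypothetical sketch: a chain-of-thought driving decision as a data structure.
# Names and fields are illustrative assumptions, not Nvidia's actual API.
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    observation: str   # what the model noticed in the scene
    inference: str     # what it concluded about risk or intent
    action_hint: str   # how that conclusion should shape the plan

@dataclass
class DrivingDecision:
    steps: list[ReasoningStep] = field(default_factory=list)
    waypoints: list[tuple[float, float]] = field(default_factory=list)  # (x, y) in metres

    def explain(self) -> str:
        # Render the trace as readable text for debugging, validation, or audit.
        return " -> ".join(
            f"{s.observation}: {s.inference}, so {s.action_hint}" for s in self.steps
        )

decision = DrivingDecision(
    steps=[
        ReasoningStep("pedestrian near kerb", "crossing unpredictably", "slow down and yield"),
        ReasoningStep("adjacent lane clear", "lateral shift is safe", "widen the gap to the pedestrian"),
    ],
    waypoints=[(0.0, 0.0), (4.5, 0.3), (8.8, 0.9)],
)
print(decision.explain())
```

The value of a structure like this is that every planned trajectory carries the reasoning that produced it, which is exactly the property that makes such decisions auditable.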
This makes decisions more interpretable, safer, and easier to debug and validate for regulators. Later iterations are enhanced versions with improved steerability, interactive reasoning, and better handling of real-time control. Supporting tools include Physical AI AV datasets: massive, open multi-sensor real-world driving data for training.
AlpaSim: open-source, realistic closed-loop simulators for testing reasoning in edge cases without real-world risk; and integration with Nvidia’s DRIVE Hyperion hardware platform for deployment in vehicles. Traditional systems rely on separate perception → prediction → planning modules, often rule-based or with limited adaptability to novel situations.
Alpamayo uses an end-to-end trainable reasoning VLA that mimics human-like judgment: perceive the scene, reason causally about risks and options, then act precisely, generating feasible trajectories via diffusion-based decoders. This enables better generalization to unseen scenarios, higher safety through explainable decisions, and faster iteration for developers, especially partners who don’t want to build everything from scratch.
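To contrast that with the modular pipeline described above, the following is a minimal Python sketch of an end-to-end reasoning flow under stated assumptions: every class, method, and tensor shape is a hypothetical stand-in for illustration, not the real Alpamayo or DRIVE Hyperion API, and the “diffusion” step is reduced to toy arithmetic.

```python
# Hypothetical sketch of an end-to-end reasoning VLA inference loop.
# All names and shapes are illustrative assumptions, not Nvidia's actual interfaces.
import numpy as np

class ReasoningVLASketch:
    # Perceive the scene, reason step by step, then decode a trajectory.

    def encode(self, camera_frames: np.ndarray) -> np.ndarray:
        # Stand-in for a vision encoder that turns frames into scene tokens.
        return camera_frames.mean(axis=(1, 2, 3))

    def reason(self, scene_tokens: np.ndarray) -> str:
        # Stand-in for chain-of-thought generation conditioned on the scene.
        return ("pedestrian is crossing unpredictably -> "
                "slow down and yield -> keep a safe lateral distance")

    def decode_trajectory(self, scene_tokens: np.ndarray, trace: str,
                          steps: int = 5) -> np.ndarray:
        # Stand-in for a diffusion-style decoder: start from noisy waypoints
        # and iteratively refine them toward a smooth, feasible path (toy maths).
        target = np.column_stack([np.linspace(0.0, 8.0, steps), np.zeros(steps)])
        waypoints = target + np.random.randn(steps, 2)
        for _ in range(10):
            waypoints = waypoints + 0.3 * (target - waypoints)
        return waypoints

    def drive(self, camera_frames: np.ndarray) -> tuple[str, np.ndarray]:
        tokens = self.encode(camera_frames)
        trace = self.reason(tokens)
        return trace, self.decode_trajectory(tokens, trace)

model = ReasoningVLASketch()
frames = np.zeros((4, 224, 224, 3))  # four dummy camera frames
trace, trajectory = model.drive(frames)
print(trace)
print(trajectory.round(2))
```

The point of the sketch is the ordering (encode, reason, then decode) and the fact that the reasoning trace is a first-class output alongside the trajectory, rather than the hand-off between separate perception, prediction, and planning modules.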
In the Uber-Nvidia collaboration, Alpamayo powers the AI stack for the Level 4 robotaxis launching in LA/SF in 2027 and scaling to 28 cities by 2028. It complements DRIVE Hyperion hardware, allowing operators like Uber to deploy reasoning-capable AVs more quickly.
Early real-world tests and simulations show strong performance in complex scenarios, though, like most emerging AV tech, it may still require safety drivers or occasional interventions during initial rollouts. Alpamayo represents Nvidia’s push to democratize advanced autonomy via open models, shifting from pure hardware (chips) to full-stack software that enables “thinking” vehicles.



