
NVIDIA Releases Alpamayo AI Models for AV Simulation

Jan 8, 2026

Open AI models, simulation frameworks, and datasets launch to support autonomous vehicle development, enabling reasoning-based training, testing, and validation in simulation for safer decision making before deployment
Image: NVIDIA

LAS VEGAS, NV (CES 2026), Jan 8, 2026 – NVIDIA introduced the NVIDIA Alpamayo family of open AI models, simulation tools, and datasets for autonomous vehicle development. The release supports reasoning-based autonomous driving by enabling developers to train, test, and validate vehicle behavior in simulation before deployment.

AVs struggle with rare and complex driving scenarios that fall outside standard training data. Addressing these “long-tail” cases requires models that can reason about cause and effect across perception, decision-making, and control.

The Alpamayo family applies reasoning-based vision-language-action (VLA) models to autonomous driving, enabling step-by-step evaluation of new and rare scenarios. This approach improves decision explainability and safety validation, with system safeguards provided by the NVIDIA Halos safety framework.

“The ChatGPT moment for physical AI is here – when machines begin to understand, reason and act in the real world,” said Jensen Huang, founder and CEO of NVIDIA. “Robotaxis are among the first to benefit. Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions – it’s the foundation for safe, scalable autonomy.”

A Complete, Open Ecosystem for Reasoning-Based Autonomy

Alpamayo provides an open ecosystem that combines models, simulation tools, and datasets for AV development. Developers can use Alpamayo models as teacher models, then fine-tune and distill them into the foundational layers of their AV software stacks.
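The teacher-student workflow described above rests on knowledge distillation: a large teacher model's softened output distribution supervises a smaller runtime student. The sketch below illustrates the standard distillation objective (temperature-scaled softmax plus KL divergence) with toy per-maneuver logits; the function names and values are illustrative assumptions, not part of any Alpamayo API.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the softened teacher distribution to the
    student distribution -- the standard knowledge-distillation objective
    in which a large teacher supervises a smaller runtime student."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# Toy per-maneuver logits (e.g. keep-lane / slow-down / stop) -- made up for illustration.
teacher = [2.0, 0.5, -1.0]
aligned_student = [2.1, 0.4, -0.9]
poor_student = [-1.0, 0.5, 2.0]

# A student that matches the teacher incurs a lower distillation loss.
assert distillation_loss(teacher, aligned_student) < distillation_loss(teacher, poor_student)
```

In practice the student would be trained by minimizing this loss (often mixed with a hard-label term) over the teacher's outputs on driving data, yielding a compact model suitable for in-vehicle deployment.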

At CES, NVIDIA is releasing:

  • Alpamayo 1: A chain-of-thought reasoning VLA model for AV research, released on Hugging Face. The 10-billion-parameter model processes video input to produce driving trajectories and step-by-step reasoning traces that explain the logic behind every decision. Developers can distill Alpamayo 1 into smaller runtime versions for vehicle systems or use it to build AV development tools such as reasoning-based evaluators and auto-labeling pipelines. NVIDIA plans future models with increased parameter counts, improved reasoning capabilities, expanded input and output modalities, and commercial licensing options.

  • AlpaSim: An open-source simulation framework, available on GitHub, that supports end-to-end AV development. It combines realistic sensor models, configurable traffic dynamics, and scalable closed-loop testing to enable faster validation and policy updates.

  • Physical AI Open Datasets: Open driving datasets for AVs comprising over 1,700 hours of data from varied environments and conditions, available on Hugging Face. The data captures rare and complex edge cases that are critical for training and evaluating reasoning-focused AV architectures.
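Closed-loop testing of the kind AlpaSim provides differs from open-loop log replay in that the policy's actions feed back into the simulated world at every step. The toy one-dimensional rollout below sketches that feedback loop; all names and dynamics are hypothetical assumptions for illustration, not the AlpaSim API.

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    position: float  # metres along the lane
    speed: float     # metres per second

def brake_policy(state: EgoState, obstacle_at: float) -> float:
    """Toy driving policy: brake at 4 m/s^2 once within 40 m of the obstacle."""
    gap = obstacle_at - state.position
    return -4.0 if gap < 40.0 else 0.0  # commanded acceleration, m/s^2

def run_closed_loop(policy, obstacle_at=100.0, dt=0.1, steps=300):
    """Minimal closed-loop rollout: each action the policy emits changes
    the state that the policy sees on the next step."""
    state = EgoState(position=0.0, speed=15.0)
    for _ in range(steps):
        accel = policy(state, obstacle_at)
        state.speed = max(0.0, state.speed + accel * dt)
        state.position += state.speed * dt
        if state.position >= obstacle_at:
            return False  # reached the obstacle: validation failure
    return True  # stopped short of the obstacle: validation pass

# The braking policy passes; a policy that never brakes fails.
assert run_closed_loop(brake_policy)
assert not run_closed_loop(lambda state, obstacle_at: 0.0)
```

In a real framework the scalar state would be replaced by rendered sensor data and traffic agents, but the structure (act, advance the world, observe, repeat) is the same, which is what makes closed-loop testing expose failures that log replay cannot.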

AV Industry Supports Alpamayo

Companies and research groups including Lucid, JLR, Uber, and Berkeley DeepDrive are exploring Alpamayo to develop reasoning-based AV stacks that target Level 4 autonomy.

“The shift toward physical AI highlights the growing need for AI systems that can reason about real-world behavior, not just process data,” said Kai Stepper, vice president of ADAS and autonomous driving at Lucid Motors. “Advanced simulation environments, rich datasets and reasoning models are important elements of the evolution.”

“Open, transparent AI development is essential to advancing autonomous mobility responsibly,” said Thomas Müller, executive director of product engineering at JLR. “By open-sourcing models like Alpamayo, NVIDIA is helping to accelerate innovation across the autonomous driving ecosystem, giving developers and researchers new tools to tackle complex real-world scenarios safely.”

“Handling long-tail and unpredictable driving scenarios is one of the defining challenges of autonomy,” said Sarfraz Maredia, global head of autonomous mobility and delivery at Uber. “Alpamayo creates exciting new opportunities for the industry to accelerate physical AI, improve transparency and increase safe level 4 deployments.”

“Alpamayo 1 enables vehicles to interpret complex environments, anticipate novel situations and make safe decisions, even in scenarios not previously encountered,” said Owen Chen, senior principal analyst at S&P Global. “The model’s open-source nature accelerates industry-wide innovation, allowing partners to adapt and refine the technology for their unique needs.”

“The launch of the Alpamayo portfolio represents a major leap forward for the research community,” said Wei Zhan, co-director of Berkeley DeepDrive. “NVIDIA’s decision to make this openly available is transformative as its access and capabilities will enable us to train at unprecedented scale – giving us the flexibility and resources needed to push autonomous driving into the mainstream.”

Source: NVIDIA

About NVIDIA

NVIDIA, founded in 1993 and headquartered in Santa Clara, CA, designs graphics processing units, systems on chips, and networking hardware, and develops software platforms such as CUDA for accelerated computing. Its products serve industries including gaming, data centers, autonomous vehicles, professional visualization, robotics, health care, and energy. The company introduced the GPU in 1999 and later expanded into accelerated computing and AI infrastructure. In gaming, its GPUs support high-performance rendering, while in AI and high-performance computing, its systems provide the infrastructure for training and deploying large-scale models. NVIDIA also develops tools for robotics and autonomous driving.