
Closing the Circle, NVIDIA and Ansys

Sep 11, 2025

John Linford, Head of Product, CAE/EDA at NVIDIA, talks about Blackwell, GPUs, and surrogate models at Simulation World 2025.

John Linford, Head of Product, CAE/EDA at NVIDIA, delivered the keynote address at Ansys’ one-day Simulation World Silicon Valley conference. He did not waste time on pleasantries. He was immediately all about accelerating the pace of design and simulation with AI.

AI runs quite well on the semiconductor chips produced by NVIDIA, specifically GPUs, and the company is making a killing, riding the wave better than any other chip maker, including AMD, which was also at the conference. NVIDIA really doesn’t need Linford to promote GPUs, yet here he is.

“Using GPU devices and scaling up, we can do many hundreds of design cycles in a day,” he said, adding, “The next generation of simulation is AI-powered. This is where things start to really get exciting.”

Linford was talking about nothing less than a transformation in how industries—from chipmaking to automotive to aerospace—design, test, and validate their products. The hardware that will make it all possible, on which the AI software runs blazingly fast, is NVIDIA’s Blackwell platform, the company’s latest computing architecture, which blends GPU multithreaded horsepower, CPU design, networking fabric, and software libraries into what amounts to a complete industrial AI toolbox.

Engineers Get a Chainsaw

Traditionally, simulation meant grinding through physics equations on powerful computers. That is still the case for the most part, but NVIDIA is promoting a hybrid of sorts: AI surrogate models working in tandem with physics-based solvers. The result: tens of thousands of design cycles in a day instead of dozens.

“It’s mind-boggling,” Linford said. “Designers have this chainsaw that they can just take through their design space and rapidly cut through the design possibilities and find the absolute best solution for the product they’re trying to build.”

Surrogates approximate solutions based on AI training, dramatically shortening time-to-result. They don’t replace physics, Linford clarified, but rather become “an ingredient in the digital twin.”
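To make the pattern concrete, here is a minimal sketch in PyTorch, with a toy function standing in for the expensive solver (an illustration only, not NVIDIA’s or Ansys’ actual tooling): a small network learns from a few hundred solver runs, then screens 100,000 candidate designs in a moment, so only the most promising go back to full physics.

```python
# Minimal surrogate-model sketch (illustrative; not NVIDIA/Ansys code).
# A toy function stands in for an expensive CFD/FEA solver.
import torch
import torch.nn as nn

def physics_solver(x):
    # Pretend this takes minutes per design; here it is instant.
    return torch.sin(3 * x[:, :1]) + 0.5 * x[:, 1:2] ** 2

# A few hundred "expensive" solver runs become training data.
X_train = torch.rand(512, 2)          # 512 designs, 2 parameters each
y_train = physics_solver(X_train)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(X_train), y_train)
    loss.backward()
    opt.step()

# The cheap surrogate now screens 100,000 candidate designs at once;
# only the best few would go back to the full physics solver.
candidates = torch.rand(100_000, 2)
with torch.no_grad():
    scores = surrogate(candidates)
best = candidates[scores.argmin()]
print("Most promising design:", best.tolist())
```

The chainsaw metaphor lives in that last step: the design space is cut down wholesale before any expensive solve happens.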

“You can have a digital twin of a car that hasn’t been created yet, but you can’t have a digital twin of a dragon. Not a real thing.” Surrogates, he said, are “ingredients in the digital twin, not the twin itself.”

A surrogate can reveal the behavior of a system that does not yet exist. A digital twin, on the other hand, represents an existing system, Linford explained.

The distinction highlights the blending of imagination and precision in NVIDIA’s vision. Companies can simulate designs at unprecedented scale, but the outcomes still depend on grounding models in real-world physics.

Blackwell Singing at the Edge of AI

At the heart of NVIDIA’s simulation push is Blackwell, the company’s computing platform unveiled at last year’s developer conference, GTC 2024. Linford described it as “CPUs, GPUs, high-performance networks, all tied together with massive amounts of cables—a huge, huge computing system.”

NVIDIA Blackwell GB200 NVL72 is an AI/simulation supercomputer. Image: NVIDIA.

Blackwell comes in various configurations. At the high end sits the GB200 NVL72, a rack-scale system with 72 GPUs linked by the NVLink Switch System, creating a shared memory pool accessible by any CPU or GPU thread in the rack. Linford called it “absolutely massive, best-in-class performance” for both AI and simulation. “You can’t beat it.”

A quick check of the NVIDIA website, however, reveals the GB300 NVL72, with 1.5x more AI performance.

But not every company needs—or can afford—the entire rack. The GB200 NVL72 is estimated to cost $3 million. NVIDIA will sell the GB200 Grace Blackwell NVL4 Superchip by itself for companies wishing to create their own AI computing systems, as well as a desktop version, the DGX Spark.

“A supercomputer that sits on your desk,” says NVIDIA CEO Jensen Huang, announcing the NVIDIA DGX Spark, powered by the Grace Blackwell superchip, at GTC 2025.

CUDA-X: The Software Glue

Known for its hardware, NVIDIA can make a case for being a software company as well. Linford emphasized that the company co-designs both hardware and software, with the latter centered around CUDA-X, a suite of domain-specific libraries.

“If you’re solving Maxwell or Navier-Stokes equations, there’s a library for you where you can program in terms of forces and vectors, tensors instead of bits and bytes,” he explained. There’s a library for GPU-accelerated fast Fourier transform (FFT) implementations, too. CUDA-X does away with the complexity of GPU programming, letting scientists and engineers focus on their domain expertise.
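To picture that level of abstraction, here is a hedged sketch using CuPy, a third-party Python library whose FFT routines dispatch to NVIDIA’s cuFFT (CuPy is an illustration choice here, not part of CUDA-X itself):

```python
# One array-level call; cuFFT handles kernels, memory layout, and batching.
# (CuPy is a third-party wrapper used here for illustration.)
import cupy as cp

signal = cp.random.standard_normal(1 << 20)   # 1M-point signal, on the GPU
spectrum = cp.fft.fft(signal)                 # GPU FFT via cuFFT, one line
print(spectrum[:4])
```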

NVIDIA has built roughly 900 of these libraries, spanning fields from lithography to fluid dynamics. Tensor cores, the specialized engines inside GPUs, allow CUDA-X to deliver not only raw performance but also energy efficiency. By mixing FP16 and FP64 precision in clever ways, CUDA-X can accelerate high-accuracy calculations by fourfold while using one-sixth the energy.
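The mixed-precision pattern is visible even from user-level code. The sketch below uses PyTorch’s autocast to route a matrix multiply through FP16 tensor cores; it illustrates the programming model only, not NVIDIA’s specific precision-mixing scheme, and the fourfold and one-sixth figures are Linford’s claims, not something this snippet measures.

```python
# Mixed precision on tensor cores, sketched with PyTorch autocast.
# Requires a CUDA-capable GPU. Illustrative pattern only.
import torch

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b          # matmul runs on FP16 tensor cores
print(c.dtype)         # torch.float16; the inputs stayed FP32 in memory
```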

“This is one of the reasons NVIDIA actually uses these tools to design our own chips,” Linford said. “We can build our chips on Blackwell and do it four times faster with one-sixth of the energy cost. No brainer, right?”

Scaling Without the Headache

One of the consistent challenges in simulation is scaling software across multiple nodes. Performance can plummet once calculations spill across distributed systems. NVIDIA has tried to solve this with low-latency interconnects and CUDA-X libraries that automatically scale.

Programmers, Linford said, “shouldn’t have to worry about sending bits and bytes over the NVLink network. They simply invoke that CUDA kernel, and it scales up across all the GPUs available.”

The difference is stark: an x86 system with eight GPUs must switch to InfiniBand to scale beyond a single node, resulting in a performance dip. But with the GB200 NVL72’s 72-GPU shared memory, scaling remains linear.
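For contrast, here is a sketch of the manual per-device bookkeeping a programmer would otherwise write, done with CuPy (again an illustration; per Linford, the CUDA-X scaling libraries handle this distribution internally):

```python
# Manual multi-GPU work-splitting with CuPy -- the bookkeeping that
# auto-scaling libraries hide. Illustrative only.
import cupy as cp

n_gpus = cp.cuda.runtime.getDeviceCount()
partials = []
for dev in range(n_gpus):
    with cp.cuda.Device(dev):                     # pin work to one GPU
        shard = cp.random.standard_normal(1_000_000)
        partials.append(float(shard.sum()))       # per-device partial result
print("Reduced across", n_gpus, "GPUs:", sum(partials))
```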

Industry Partnerships

NVIDIA’s strategy isn’t just to build for itself. The company has long partnered with software vendors like Synopsys and Ansys, accelerating their electronic design automation (EDA) and computer-aided engineering (CAE) tools. Linford noted that Synopsys applications have seen “30x performance improvement” across GPU generations.

Chip thermal analysis with Ansys Electronics Desktop, which includes Icepak. Image from Ansys video.

Working with Ansys, NVIDIA has used GPU acceleration for thermal and flow simulations of its own chips prior to fabrication. Linford showed examples where Ansys Fluent and Icepak helped NVIDIA validate cooling strategies, with visualization rendered in Omniverse.

A Virtuous Cycle

The collaboration between NVIDIA and Synopsys is cyclical: NVIDIA builds faster GPUs, EDA vendors harness them, and NVIDIA uses the resulting tools to design its next chips.

“We actually use Blackwell internally to design the next generation of this platform,” he said. “The cycle just keeps going.”

“When we invest in Synopsys to accelerate their platform, we’re investing in ourselves,” Linford said.

The Stakes

The stakes for NVIDIA are high. The success of GPUs has been fueled by gaming and AI training, but industrial simulation is a different market—one defined by conservative adoption cycles and long-standing incumbents.

Yet the payoff could be enormous. If Blackwell and CUDA-X cement NVIDIA’s role as the standard platform for simulation, the company won’t just power AI startups—it will sit at the core of industries as diverse as automotive crash testing, semiconductor verification, and even medical device design.

For Linford, the message at Simulation World was clear: the future of design is iterative, AI-assisted, and GPU-powered.