
AI-Driven Supercomputing’s Next Chapter

Nov 24, 2025

Why FP64 precision and hybrid architectures still dominate scientific HPC.


In an interview with The Register, Nvidia's VP & GM of Hyperscale and HPC, Ian Buck, explains that within the next year or two, scientific computing workloads will routinely incorporate AI-driven techniques. He expects adoption to mirror the rise of GPU-accelerated systems on the Top500 supercomputer list.

Buck emphasizes that AI isn’t a substitute for simulation; it’s a complementary tool that helps researchers home in on promising candidates for expensive simulations. For instance, AI can predict which alloy compositions are most likely to succeed, reducing the simulation workload by orders of magnitude.
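The surrogate-filtering workflow described above can be sketched in a few lines. Everything here is illustrative: the composition grid, the `surrogate_score` stand-in for a trained model, and the toy `expensive_simulation` are all hypothetical, not anything Nvidia or Buck has published.

```python
import random

random.seed(0)

# Hypothetical alloy candidates: fraction of element A (rest is element B).
candidates = [i / 1000 for i in range(1000)]

def surrogate_score(x):
    """Cheap AI-style surrogate (a stand-in for a trained model):
    estimates alloy quality from composition in microseconds."""
    return -(x - 0.37) ** 2 + 0.01 * random.random()

def expensive_simulation(x):
    """Stand-in for a full FP64 physics simulation (normally hours of compute)."""
    return -(x - 0.40) ** 2

# Surrogate pass: rank all 1,000 candidates cheaply, keep the top 1%.
shortlist = sorted(candidates, key=surrogate_score, reverse=True)[:10]

# Only the shortlist gets the expensive simulation: ~100x fewer runs.
best = max(shortlist, key=expensive_simulation)
print(f"simulated {len(shortlist)} of {len(candidates)} candidates")
```

The point is the ratio: the surrogate is evaluated 1,000 times, the expensive simulation only 10 times, which is where the orders-of-magnitude reduction comes from.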

To make this shift effective, Nvidia and other vendors must build architectures that deliver both double-precision FP64 (64-bit floating point) compute for traditional simulation and ultra-low-precision formats (FP4, FP8) tuned for modern AI inference. Buck insists, “In order to build a great supercomputer, it has to be great at simulation; it has to be great at AI; and it also has to be a quantum supercomputer.”
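Why simulation still needs FP64 while inference tolerates FP8 comes down to rounding error over long accumulations. The sketch below is a crude illustration, not real FP8 arithmetic: the `quantize` helper (a hypothetical name) only truncates the mantissa, whereas real low-precision formats also restrict the exponent range.

```python
import math

def quantize(x, mantissa_bits):
    """Round x to a reduced-precision mantissa (a crude stand-in for
    low-precision formats like FP8; real formats also limit the exponent)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# Accumulate 10,000 increments of 1e-4 (true sum: 1.0), once in full
# FP64 and once quantized to an 8-bit mantissa after each addition.
fp64_sum = 0.0
low_sum = 0.0
for _ in range(10_000):
    fp64_sum += 1e-4
    low_sum = quantize(low_sum + 1e-4, 8)

print(f"FP64: {fp64_sum:.4f}   8-bit mantissa: {low_sum:.4f}")
```

The FP64 sum lands essentially on 1.0, while the low-precision sum stalls far below it: once the running total is large enough, each 1e-4 increment rounds away entirely. Short inference pipelines rarely hit this regime, but long-running simulations do, which is why FP64 throughput stays on the roadmap.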

Nvidia’s roadmap includes the “Rubin” family of accelerators, some optimized for inference and others retaining FP64 throughput. Recent contracts show this strategy gaining traction: over 80 new supercomputing deals in the past year represent 4,500 exaFLOPS of AI compute, including the upcoming Horizon system (launching 2026), which combines FP64 and AI compute capabilities.

The scientific computing landscape is evolving. Engineers and researchers must prepare for hybrid systems that blend simulation, AI, and quantum adjacency, while expecting hardware that preserves precision and embraces novel inference formats.