
LIVINGSTON, NJ, Jan 12, 2026 – CoreWeave announced plans to add NVIDIA Rubin technology to its AI cloud platform, expanding support for agentic AI, reasoning, and large-scale inference workloads. CoreWeave expects to deploy the NVIDIA Rubin platform in the second half of 2026, positioning it among the first cloud providers to offer the technology as AI systems continue to scale.
CoreWeave built its AI cloud platform to support multiple generations of technology. The design allows customers to align specific systems with evolving workload needs. Integrating the NVIDIA Rubin platform expands available performance, efficiency, and scale for enterprises, AI labs, and startups running production AI workloads.
“The NVIDIA Rubin platform represents an important advancement as AI evolves toward more sophisticated reasoning and agentic use cases,” said Michael Intrator, co-founder, chairman, and chief executive officer, CoreWeave. “Enterprises come to CoreWeave for real choice and the ability to run complex workloads reliably at production scale. With CoreWeave Mission Control™ as our operating standard, we can bring new technologies like Rubin to market quickly and enable our customers to deploy their innovations at scale with confidence.”
“CoreWeave’s speed, scale, and ingenuity make them an essential partner in this new era of computing. With Rubin, we’re pushing the boundaries of AI – from reasoning to agentic AI – and CoreWeave is helping turn that potential into production as one of the first to deploy it later this year,” said Jensen Huang, founder and chief executive officer, NVIDIA. “Together, we’re not just deploying infrastructure – we’re building the AI factories of the future.”
The NVIDIA Rubin platform targets compute-intensive workloads, including agentic AI, drug discovery, genomics, climate simulation, and fusion energy modeling. It supports mixture-of-experts models that require sustained compute. On CoreWeave, Rubin enables AI builders to train, deploy, and scale these workloads with performance and operational flexibility.
CoreWeave was the first cloud provider to offer NVIDIA GB200 NVL72 instances and the NVIDIA Grace Blackwell Ultra NVL72 platform. Its AI software stack speeds deployment while maintaining performance and reliability.
CoreWeave will deploy NVIDIA Rubin through its Mission Control operating standard for training, inference, and agentic AI workloads. Mission Control combines security, operations, and observability into a unified system designed for transparent operations. Integrated with the NVIDIA Reliability, Availability, and Serviceability (RAS) Engine, CoreWeave Mission Control provides real-time diagnostics across fleet, rack, and cabinet levels, helping customers track system health and available production capacity.
CoreWeave built the Rack Lifecycle Controller to manage power, liquid cooling, and network integration. The Kubernetes-native orchestrator treats an NVIDIA Vera Rubin NVL72 rack as one programmable system. It coordinates provisioning, power operations, and hardware validation, confirming production readiness before customer workloads run.
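The controller's gating behavior, advancing a rack through provisioning, power operations, and hardware validation before admitting customer workloads, can be sketched as a simple state machine. This is an illustrative model only, assuming an ordered lifecycle; the class, state, and method names below are hypothetical and do not reflect CoreWeave's actual API:

```python
from enum import Enum, auto


class RackState(Enum):
    PROVISIONING = auto()  # rack being integrated: power, liquid cooling, network
    POWERED = auto()       # power operations complete
    VALIDATED = auto()     # hardware checks passed
    READY = auto()         # confirmed production-ready; accepting workloads
    FAILED = auto()        # a lifecycle step failed; rack held out of service


class RackLifecycle:
    """Hypothetical sketch: treats one rack as a single programmable unit
    that must pass each lifecycle phase, in order, before serving workloads."""

    def __init__(self, rack_id: str):
        self.rack_id = rack_id
        self.state = RackState.PROVISIONING

    def power_on(self) -> None:
        assert self.state is RackState.PROVISIONING
        self.state = RackState.POWERED

    def validate_hardware(self, checks_passed: bool) -> None:
        assert self.state is RackState.POWERED
        self.state = RackState.VALIDATED if checks_passed else RackState.FAILED

    def mark_ready(self) -> None:
        assert self.state is RackState.VALIDATED
        self.state = RackState.READY

    def accepts_workloads(self) -> bool:
        # Customer workloads run only after the full sequence succeeds.
        return self.state is RackState.READY


rack = RackLifecycle("vr-nvl72-rack-01")  # hypothetical rack identifier
rack.power_on()
rack.validate_hardware(checks_passed=True)
rack.mark_ready()
print(rack.accepts_workloads())  # True
```

In a Kubernetes-native design, such a lifecycle would typically be modeled as a custom resource whose controller reconciles the rack toward the desired phase; the linear state machine above captures only the ordering guarantee described in the announcement.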
“Workloads like drug discovery, climate modeling, and advanced robotics demand both cutting-edge compute and the ability to run it reliably at scale,” said Dan O’Brien, president and COO, The Futurum Group. “The NVIDIA Rubin platform expands what is possible, and platforms like CoreWeave are what make those capabilities available in practice. That combination is what accelerates real progress.”
Integrating NVIDIA Rubin into the CoreWeave Cloud platform shifts customer focus from infrastructure management to AI development. Combined with CoreWeave’s software stack, NVIDIA Rubin supports training, inference, and agentic AI for intelligent applications.
CoreWeave is expanding its platform strategy to consolidate production-scale AI tools on a single cloud. The platform combines high-performance compute, multi-cloud compatible storage, and software to develop, test, and deploy AI systems. Capabilities such as Serverless RL extend this approach. Performance and operational execution are reflected in MLPerf results and Platinum rankings in SemiAnalysis ClusterMAX 1.0 and 2.0.
Source: CoreWeave
About CoreWeave

CoreWeave, established in 2017 and headquartered in Livingston, NJ, specializes in cloud-based GPU infrastructure tailored for AI and machine learning workloads. The company operates 32 data centers across the United States and Europe with over 250,000 GPUs, primarily sourced from NVIDIA, and serves clients in AI development, financial modeling, healthcare, and media production. In 2024, CoreWeave reported revenue of $1.92 billion, reflecting a significant increase in demand for AI-optimized cloud services, and as of December 2024 it employed approximately 823 people globally. The company’s client base includes major technology firms, with Microsoft accounting for 62% of its 2024 revenue.