
NVIDIA, CoreWeave Scale 5 GW AI Factory Capacity

Jan 27, 2026

The technology providers are expanding their partnership to build more than five gigawatts of AI factories by 2030, scaling data center infrastructure, software, and platforms to support AI workloads
Image: NVIDIA

NVIDIA and CoreWeave are expanding their partnership to accelerate construction of more than 5 gigawatts of AI factories by 2030. The move supports AI adoption and strengthens CoreWeave’s role as a cloud platform built on NVIDIA infrastructure. The companies are aligning infrastructure, software, and platform layers to scale AI data center capacity.

The companies intend to:   

  • Build AI factories developed and operated by CoreWeave using NVIDIA’s computing platform technology to meet customer demand.
  • Leverage NVIDIA’s financial strength to accelerate CoreWeave’s procurement of land, power, and shell capacity for AI factories.
  • Test and validate CoreWeave’s AI-native software and reference architecture, including SUNK and CoreWeave Mission Control. The goal is to support interoperability and integrate these tools into NVIDIA’s reference architectures for cloud partners and enterprise customers.
  • Deploy multiple generations of NVIDIA infrastructure across CoreWeave’s platform through early adoption of NVIDIA computing architectures, including the NVIDIA Rubin platform, NVIDIA Vera CPUs, and NVIDIA BlueField storage systems.

“AI is entering its next frontier and driving the largest infrastructure buildout in human history,” said Jensen Huang, founder and CEO of NVIDIA. “CoreWeave’s deep AI factory expertise, platform software and unmatched execution velocity are recognized across the industry. Together, we’re racing to meet extraordinary demand for NVIDIA AI factories – the foundation of the AI industrial revolution.”

“From the very beginning, our collaboration has been guided by a simple conviction: AI succeeds when software, infrastructure and operations are designed together,” said Michael Intrator, co-founder, chairman and CEO of CoreWeave. “NVIDIA is the leading and most requested computing platform at every phase of AI – from pre-training to post-training – and Blackwell provides the lowest cost architecture for inference. This expanded collaboration underscores the strength of demand we are seeing across our customer base and the broader market signals as AI systems move into large-scale production.”

Source: NVIDIA

About NVIDIA

NVIDIA, founded in 1993 and headquartered in Santa Clara, CA, designs graphics processing units (GPUs), systems on chips, networking hardware, and software platforms such as CUDA. Its products serve industries including gaming, data centers, autonomous vehicles, professional visualization, robotics, health care, and energy. The company introduced the GPU in 1999 and later expanded into accelerated computing and AI infrastructure. In gaming, its GPUs support high-performance rendering, while in AI and high-performance computing, its systems provide the infrastructure for training and deploying large-scale models. NVIDIA also develops tools for robotics and autonomous driving.

About CoreWeave

CoreWeave, established in 2017 and headquartered in Roseland, NJ, specializes in cloud-based GPU infrastructure tailored for AI and machine learning workloads. The company operates 32 data centers across the United States and Europe with over 250,000 GPUs, primarily sourced from NVIDIA. It serves clients in AI development, financial modeling, healthcare, and media production. As of December 2024, CoreWeave employed approximately 800 people globally.