Nvidia's next-generation AI server architecture, Vera Rubin, will ship with liquid cooling as a standard component when it enters production in the second half of 2026. Not an upgrade option. Not a premium SKU. Standard. At GTC, Nvidia named four cold plate suppliers and released standardized liquid cooling specifications for the platform. Asia Vital Components and Cooler Master were two of the four. The cooling infrastructure cost per Rubin system exceeds $57,000.
This is the moment the data center cooling industry has been building toward since Nvidia's thermal design power hit 700 watts with the H100. Liquid cooling was recommended for Blackwell. For Rubin, it is the baseline.
Previous Nvidia architectures left cooling as an integration exercise for server OEMs. Dell, HPE, Lenovo, and Supermicro each designed their own thermal solutions, sourced their own cold plates, and built their own manifold assemblies. The result was a fragmented cooling supply chain where specifications varied by vendor, compatibility was not guaranteed across platforms, and operators running mixed fleets had to manage multiple cooling architectures within the same facility.
Rubin changes that. Nvidia has centralized the cold plate specification. The four named suppliers are manufacturing to Nvidia's thermal requirements, not the OEMs'. That means every Rubin server, regardless of which system integrator builds the surrounding chassis, will use cold plates that conform to the same interface dimensions, flow rate requirements, and thermal performance targets.
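A minimal sketch of what centralizing the specification means in practice: one reference document every supplier manufactures against, and a mechanical qualification check instead of a per-OEM negotiation. The field names and structure below are hypothetical, for illustration only, not Nvidia's published spec.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ColdPlateSpec:
    """A single reference spec all suppliers build to.
    Fields are hypothetical, for illustration only."""
    interface_width_mm: float                # mounting footprint the chassis must accept
    interface_height_mm: float
    rated_flow_lpm: float                    # coolant flow the plate is rated for
    max_pressure_drop_kpa: float             # pressure budget at rated flow
    max_thermal_resistance_k_per_kw: float   # plate-to-coolant thermal resistance

def qualifies(candidate: ColdPlateSpec, reference: ColdPlateSpec) -> bool:
    """A supplier's plate qualifies if it matches the interface exactly
    and meets or beats every performance target."""
    return (
        candidate.interface_width_mm == reference.interface_width_mm
        and candidate.interface_height_mm == reference.interface_height_mm
        and candidate.rated_flow_lpm >= reference.rated_flow_lpm
        and candidate.max_pressure_drop_kpa <= reference.max_pressure_drop_kpa
        and candidate.max_thermal_resistance_k_per_kw
            <= reference.max_thermal_resistance_k_per_kw
    )
```

The point is the single `reference` object. Before Rubin, each OEM in effect maintained its own version of it.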
For operators, this simplifies procurement. A CDU qualified for Rubin cold plates works across Rubin servers from any OEM. Manifold connections standardize. Spare parts inventory consolidates. The cooling layer becomes interoperable in a way it has never been for GPU servers.
For cold plate manufacturers outside the named four, the math is harder. Nvidia's standardization creates a preferred supplier list. The volumes behind Rubin are enormous. Any cold plate vendor not on that list is competing for the scraps of non-Nvidia workloads or hoping to be added in a future qualification cycle. The market just consolidated around four names, and the specification authority shifted from the OEMs to the GPU company.
A cooling infrastructure cost exceeding $57,000 for a single Rubin system reframes the economics of data center thermal management. A Blackwell-era DGX system's cooling components (cold plates, manifolds, quick-disconnect fittings, and a proportional share of the CDU) ran in the $15,000 to $25,000 range depending on configuration and vendor. Rubin roughly triples that.
The increase reflects both higher thermal loads and more sophisticated cooling architecture. Rubin's GPU modules run hotter and pack more compute into the same footprint, requiring cold plates with higher flow rates, tighter thermal tolerances, and more robust connection hardware. The CDU share per system rises because each server demands more cooling capacity from the facility-level distribution infrastructure.
At $57,000 per system, a deployment of 10,000 Rubin systems faces a cooling hardware bill approaching $570 million before accounting for facility piping, heat rejection, and building modifications. That is the kind of number that makes cooling vendors very happy and makes CFOs reconsider their infrastructure budgets.
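A quick sanity check on those figures, using only the numbers quoted above; the 10,000-system cluster size is illustrative.

```python
# Back-of-envelope cooling economics from the article's own figures.
blackwell_cooling_usd = (15_000 + 25_000) / 2   # midpoint of the quoted DGX range
rubin_cooling_usd = 57_000                      # per-system figure cited at GTC

print(f"Per-system increase: {rubin_cooling_usd / blackwell_cooling_usd:.1f}x")
# -> 2.9x, the "roughly triples" claim

systems = 10_000                                # illustrative deployment size
print(f"Cluster cooling hardware: ${systems * rubin_cooling_usd / 1e6:.0f}M")
# -> $570M, before facility piping, heat rejection, and building modifications
```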
At GTC, Jensen Huang stated that "power delivery and liquid cooling will become core elements requiring co-design" for future AI infrastructure. Delta Electronics, meanwhile, is developing 800V DC power systems alongside liquid cooling solutions for next-generation AI data centers. The convergence is intentional. Cooling and power are no longer independent infrastructure layers. They are becoming a single integrated system.
This tracks with how thermal design power has scaled. The H100 at 700 watts could be cooled with well-designed air systems in some configurations. Blackwell at 1,000 watts made liquid cooling the obvious choice. Rubin pushes higher still. At these power levels, the electrical delivery system and the thermal management system must be designed together: essentially every watt of power delivered becomes a watt of heat to remove, and both systems share the same physical space inside the rack.
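A rough sense of why the threshold sits where it does. The governing relation is Q = ṁ · c_p · ΔT: heat removed equals mass flow times specific heat times temperature rise. Solving it for water versus air, with an assumed 130 kW rack load and 10 °C coolant rise (illustrative numbers, not published Rubin figures), shows the gap.

```python
# Every watt delivered must be carried away by coolant: Q = m_dot * c_p * dT.
# The 130 kW rack load and 10 K rise are assumptions for illustration only.

RACK_POWER_W = 130_000          # assumed rack thermal load
CP_WATER = 4186.0               # J/(kg*K), specific heat of water
CP_AIR = 1005.0                 # J/(kg*K), specific heat of air
AIR_DENSITY = 1.2               # kg/m^3 at room temperature

delta_t = 10.0                  # allowed coolant temperature rise, K

water_kg_s = RACK_POWER_W / (CP_WATER * delta_t)
print(f"Water: {water_kg_s * 60:.0f} L/min")    # ~186 L/min (1 kg of water ~ 1 L)

air_kg_s = RACK_POWER_W / (CP_AIR * delta_t)
print(f"Air:   {air_kg_s / AIR_DENSITY:.1f} m^3/s")  # ~10.8 m^3/s of airflow
```

Per cubic meter and kelvin, water carries roughly 3,500 times the heat that air does. That single ratio is the case for liquid cooling at these densities.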
Co-design means the cooling vendor and the power vendor need to be in the same room during facility planning. It means the CDU placement, pipe routing, and heat rejection architecture affect and are affected by the busway layout, transformer placement, and power distribution topology. Companies that can deliver both, like Delta, Schneider Electric, and Vertiv, have a structural advantage over pure-play cooling or pure-play power suppliers.
Rubin ships in H2 2026. The cold plate suppliers are manufacturing now. The CDU vendors are scaling production. The facility designers are drawing up mechanical rooms sized for thermal loads that would have seemed absurd three years ago.
The optional era of liquid cooling lasted roughly two GPU generations. Hopper made it advisable. Blackwell made it practical. Rubin makes it mandatory. Every Rubin-based server that ships from the second half of 2026 forward will require liquid cooling infrastructure. No exceptions. No air-cooled fallback.
For the cooling industry, this is the demand certainty that justifies aggressive capacity expansion. The question is no longer whether operators will adopt liquid cooling. The GPU company decided for them. The only remaining questions are which vendors capture the volume, how fast the supply chain scales to meet it, and whether $57,000 per system is the new floor or just a waypoint on the cost curve.
Nvidia writes the thermal roadmap. The cooling industry builds to it. That relationship just became explicit.