Every cooling technology decision being made in data centers right now traces back to a single forcing function: how many watts NVIDIA's next GPU generates. The H100 runs at 700 watts. The Blackwell B200 pushes 1,000 watts. Rubin, the next generation, is expected to climb higher. Each step up the power ladder compounds the thermal load per rack, per row, per facility.
At 700 watts per GPU, a single eight-accelerator server draws more than 5 kW before counting CPUs, memory, and networking; stack several such servers in a rack and the heat load exceeds what conventional air cooling can manage. At 1,000 watts, the math gets worse. A dense AI training cluster running Blackwell hardware pushes rack densities past 100 kW, and some configurations approach 120 kW. Next-generation designs on engineering whiteboards are targeting 200 to 250 kW per rack.
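The rack arithmetic is worth making explicit. The sketch below is a back-of-envelope estimate, not a vendor spec: the server counts and the 35% overhead fraction (CPUs, memory, NICs, power-conversion losses) are illustrative assumptions.

```python
def rack_load_kw(gpus_per_server, watts_per_gpu, servers_per_rack,
                 overhead_fraction=0.35):
    """Estimate rack heat load in kW.

    overhead_fraction adds CPUs, memory, NICs, and power-conversion
    losses on top of raw GPU draw (assumed ~35% here).
    """
    gpu_watts = gpus_per_server * watts_per_gpu * servers_per_rack
    return gpu_watts * (1 + overhead_fraction) / 1000

# Hopper-era rack: four 8x 700 W servers -- right at the air-cooling edge
print(rack_load_kw(8, 700, 4))    # ~30 kW

# Blackwell-era dense rack: 72 GPUs at 1,000 W in one rack
print(rack_load_kw(72, 1000, 1))  # ~97 kW before networking gear
```

Nearly every watt drawn becomes heat the cooling system must remove, so rack power draw and rack thermal load are effectively the same number.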
Air cooling tops out around 25 to 30 kW per rack. That ceiling has barely moved in years, and it will not move meaningfully in the future: the physics of convective heat transfer through air set a hard limit. Fans, baffles, and raised-floor optimization can push air cooling toward its theoretical maximum, but they cannot break through it.
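That hard limit follows from the convection equation itself: the heat air can carry away is Q = m·cp·ΔT (mass flow times specific heat times temperature rise). A minimal sketch, using standard room-condition air properties and an assumed 15 K inlet-to-outlet temperature budget, shows how fast the required airflow becomes impractical:

```python
# Standard properties of air near room conditions.
CP_AIR = 1005.0      # specific heat, J/(kg*K)
RHO_AIR = 1.2        # density, kg/m^3
M3S_TO_CFM = 2118.88 # cubic meters/second -> cubic feet/minute

def airflow_cfm(q_watts, delta_t_k=15.0):
    """Volumetric airflow needed to remove q_watts of heat at a
    given inlet-to-outlet temperature rise (15 K assumed here)."""
    mass_flow = q_watts / (CP_AIR * delta_t_k)   # kg/s, from Q = m*cp*dT
    return (mass_flow / RHO_AIR) * M3S_TO_CFM

print(round(airflow_cfm(30_000)))   # ~30 kW rack: roughly 3,500 CFM
print(round(airflow_cfm(120_000)))  # ~120 kW rack: roughly 14,000 CFM
```

Airflow scales linearly with heat load, so a 120 kW rack needs four times the air of a 30 kW one through the same rack cross-section, which is where fan power, acoustics, and static pressure stop cooperating. Liquid sidesteps the problem because water carries roughly 3,500 times more heat per unit volume than air.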
The gap between what GPUs produce and what air can remove widens with every product generation. NVIDIA is not the only company driving this. AMD's MI300X and Intel's Gaudi 3 both push power envelopes beyond what air cooling can handle. But NVIDIA commands the AI accelerator market, so its roadmap sets the thermal requirements the entire cooling supply chain must respond to.
Non-AI workloads across the global data center fleet total approximately 38 gigawatts. AI workloads are expected to hit 44 GW in 2026. The crossover point, where AI thermal load exceeds everything else combined, is arriving this year. Cooling infrastructure built for the 8 to 15 kW racks of the previous era cannot serve the 85 to 250 kW racks of the current one without fundamental redesign.
The cooling industry is, in effect, building to NVIDIA's spec. When Jensen Huang announces a new chip architecture, the thermal management implications ripple through CDU manufacturers, cold plate suppliers, and facility designers within weeks. The vendors who can design, qualify, and ship cooling hardware matched to the next GPU generation before that generation reaches volume production will own the upgrade cycle. The ones who arrive late will find that someone else already has the purchase order.