Framing the cooling debate as liquid versus air is the wrong unit of analysis. The right question is: at what rack density does air cooling lose the ability to manage thermal load safely, and what does that threshold mean for the majority of installed data center capacity still running on CRAH units and raised floors? The answers clarify the operating reality more than any market projection.
Traditional data centers managing non-AI compute workloads operate at 5 to 20 kW per rack: server-class CPUs, storage arrays, networking equipment, virtualized infrastructure. High-performance computing for scientific simulation, financial modeling, and engineering workloads runs 20 to 40 kW per rack. Room-level air cooling with precision CRAH units, hot-aisle and cold-aisle containment, and economizer-assisted heat rejection handles these loads adequately, with an industry-average PUE in the 1.55 to 1.60 range. The global installed fleet holds hundreds of gigawatts of air-cooled capacity running the world's non-AI compute workloads. None of that infrastructure needs to change, and claims that air cooling is obsolete ignore the majority of what data centers actually run.
AI inference racks run 85 to 130 kW per cabinet today. AI training clusters targeting next-generation GPU architectures are designed for 250 to 600 kW per rack. At those densities, liquid cooling is not a preference; it is a requirement. Air cooling tops out around 40 kW per rack, the point at which the physics of rejecting heat through a 600 mm wide hot aisle at realistic air velocities break down. Moving beyond that threshold requires unacceptably high fan power, unacceptably high server inlet temperatures, or both.
The constraint is fundamental, not something better engineering can design around. Air has a specific heat capacity of approximately 1.005 kJ/kg·K and a density at standard conditions of roughly 1.2 kg/m³. Water has a specific heat capacity of 4.18 kJ/kg·K and a density of 1,000 kg/m³, so water carries approximately 3,500 times more heat per unit volume for the same temperature rise. Achieving the volumetric airflow needed to absorb 130 kW of server exhaust heat while keeping supply temperatures below server inlet limits requires air velocities that create acoustic and vibration problems, and fan power that scales with the cube of velocity. At 130 kW, the fan power required to move enough air through a conventional raised-floor hot aisle exceeds what is operationally viable. The physics close the argument.
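To put numbers on that gap, here is a minimal sketch of the flow comparison, assuming a 15 K allowable coolant temperature rise (an illustrative figure, not one stated above):

```python
# Back-of-envelope comparison of the air vs. water volumetric flow needed
# to absorb a given rack heat load. Property values match the figures in
# the text; the 15 K allowable temperature rise is an assumption.

RHO_AIR = 1.2        # kg/m^3 at standard conditions
CP_AIR = 1.005       # kJ/(kg*K)
RHO_WATER = 1000.0   # kg/m^3
CP_WATER = 4.18      # kJ/(kg*K)

def volumetric_flow_m3_per_s(heat_kw: float, rho: float, cp: float, delta_t_k: float) -> float:
    """Flow needed so that rho * cp * delta_T * V_dot equals the heat load."""
    return heat_kw / (rho * cp * delta_t_k)

heat_kw = 130.0      # per-rack load from the text
delta_t = 15.0       # assumed allowable coolant temperature rise, K

air_flow = volumetric_flow_m3_per_s(heat_kw, RHO_AIR, CP_AIR, delta_t)
water_flow = volumetric_flow_m3_per_s(heat_kw, RHO_WATER, CP_WATER, delta_t)

print(f"Air:   {air_flow:.2f} m^3/s  (~{air_flow * 2118.9:,.0f} CFM)")
print(f"Water: {water_flow * 1000:.2f} L/s (~{water_flow * 60000:.0f} L/min)")
print(f"Volumetric heat-capacity ratio: {(RHO_WATER * CP_WATER) / (RHO_AIR * CP_AIR):,.0f}x")
```

At that temperature rise, the air side works out to roughly fifteen thousand cubic feet per minute through a single rack, which is where the cube law on fan power makes the number operationally untenable; the water side is on the order of a hundred liters per minute.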
The real near-term operating model for most facilities is neither pure air nor pure liquid. Rear-door heat exchangers mounted on existing racks capture server exhaust heat before it enters the hot aisle, handling 30 to 120 kW depending on system design, while existing air cooling infrastructure manages residual loads and non-AI equipment throughout the rest of the white space. Direct-to-chip liquid-cooled servers handle GPU thermal loads directly through cold plates and CDU loops. CRAH units continue managing the IT space around the liquid-cooled equipment.
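As a rough sketch of that division of labor, the arithmetic below assumes a 90 percent rear-door capture fraction and a hypothetical rack mix; neither figure comes from the discussion above or from any vendor specification.

```python
# Sketch of the hybrid heat split: a rear-door heat exchanger (RDHx) captures
# most of a rack's exhaust heat at the rack, and the existing CRAH plant
# absorbs whatever escapes plus the legacy air-cooled load.

def residual_room_load_kw(rack_kw: float, rdhx_capture_fraction: float) -> float:
    """Heat (kW) left for the room-level air system after rear-door capture."""
    if not 0.0 <= rdhx_capture_fraction <= 1.0:
        raise ValueError("capture fraction must be between 0 and 1")
    return rack_kw * (1.0 - rdhx_capture_fraction)

# Hypothetical row: eight 90 kW AI racks with rear doors capturing ~90% of
# exhaust heat, alongside twenty legacy 10 kW air-cooled racks.
ai_racks, ai_kw, capture = 8, 90.0, 0.90
legacy_racks, legacy_kw = 20, 10.0

room_load = ai_racks * residual_room_load_kw(ai_kw, capture) + legacy_racks * legacy_kw
print(f"Load on existing CRAH units: {room_load:.0f} kW")  # 72 + 200 = 272 kW
```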
Systems like the Belden-OptiCool rear-door heat exchanger are designed specifically for this hybrid middle ground: they mount on existing racks, leave server hardware untouched, and bring liquid cooling into an air-cooled facility without requiring manifolded floor plumbing or mechanical plant redesign. The hybrid architecture is not a transitional compromise on the way to a fully liquid-cooled facility. For the majority of operators managing facilities built before 2022 with raised floors, legacy CRAH units, and mixed workload populations, the hybrid model is the permanent operating architecture for the rest of this decade.
Single-phase direct-to-chip liquid cooling is expected to support the next several generations of GPU architectures. The cooling community has spent considerable effort debating when single-phase heat transfer limits will force the industry to two-phase refrigerant systems, and the honest answer is that single-phase water-based cooling at appropriate flow rates and cold plate geometries can manage 400 kW per rack with current CDU technology. The physical limit is real but further out than the two-phase advocacy suggests.
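A quick sanity check on that claim: the coolant flow a CDU loop must deliver scales linearly with load and inversely with the allowable loop temperature rise. The 10 K rise used below is an assumed design point, not a figure from the text.

```python
# Required single-phase water flow per rack at an assumed 10 K loop rise.

CP_WATER = 4.18  # kJ/(kg*K)

def coolant_flow_l_per_min(heat_kw: float, delta_t_k: float) -> float:
    """Water flow (treating water as ~1 kg/L) needed to absorb heat_kw at delta_t_k rise."""
    kg_per_s = heat_kw / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0

for rack_kw in (130, 250, 400, 600):
    print(f"{rack_kw} kW rack -> ~{coolant_flow_l_per_min(rack_kw, 10.0):.0f} L/min at a 10 K rise")
```

Even at 400 kW the answer is a few hundred liters per minute per rack, which is demanding but within the range of pumped single-phase loops; the limit the two-phase advocates point to is real, just not imminent.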
Two-phase cooling at the chip level works and is deployed in specialized facilities. Immersion in single-phase and two-phase dielectric fluids is a production-grade approach for AI-native greenfield builds. But two-phase systems introduce refrigerant handling requirements, hydraulic balancing complexity across multi-rack loops, and workforce skill requirements that make them the appropriate choice for purpose-built AI facilities, not for retrofits of existing infrastructure. The operators managing current AI workloads in facilities designed for a different thermal era are running single-phase hybrid architectures by necessity, and the products that serve them are the ones that meet those facilities where they actually are.
The productive question is not "liquid or air?" It is: what is the peak rack density my facility needs to support, and what cooling architecture covers that density with the least disruption to the existing mechanical plant?
Below 40 kW: air cooling with containment optimization and economizer upgrades handles the load at reasonable operating cost. 40 to 80 kW: rear-door heat exchangers on selected high-density racks, leaving the existing CRAH infrastructure in place for the rest of the white space. 80 to 130 kW: direct-to-chip cold plates with CDU loops serving the GPU racks directly, CRAH units handling residual heat and non-AI equipment. Above 200 kW per rack: purpose-built liquid-cooled infrastructure from the mechanical plant up, or containerized systems that arrive pre-integrated.
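Encoded literally, those tiers reduce to a first-pass screen. The thresholds below are the ones named above; the unaddressed 130 to 200 kW band is flagged rather than assigned, since the text does not assign it.

```python
# First-pass mapping from peak rack density to the least-disruptive cooling
# architecture, using the tiers stated in the text.

def cooling_architecture(peak_rack_kw: float) -> str:
    """Map a facility's peak rack density to a cooling approach per the stated tiers."""
    if peak_rack_kw < 40:
        return "air cooling with containment optimization and economizer upgrades"
    if peak_rack_kw <= 80:
        return "rear-door heat exchangers on high-density racks; existing CRAH elsewhere"
    if peak_rack_kw <= 130:
        return "direct-to-chip cold plates with CDU loops; CRAH handles residual heat"
    if peak_rack_kw <= 200:
        return "between the stated tiers; evaluate direct-to-chip vs. purpose-built liquid"
    return "purpose-built liquid-cooled infrastructure or pre-integrated containerized systems"

for kw in (15, 60, 110, 350):
    print(f"{kw:>3} kW/rack -> {cooling_architecture(kw)}")
```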
The cooling industry sells liquid cooling solutions by implying air cooling is finished. The operators managing 50 MW of installed air-cooled capacity alongside a 5 MW GPU cluster know the framing is wrong. Liquid cooling is mandatory for AI rack densities. Air cooling is adequate for everything below 40 kW. Most facilities will run both architectures for the rest of this decade, and the most useful products are the ones that bridge between them cleanly without requiring a full mechanical plant overhaul.