One number. Forty-five degrees Celsius.
That is the cooling water inlet temperature at which NVIDIA says its Vera Rubin processors can operate without performance degradation. Jensen Huang stated it directly: "With 45 degrees Celsius, no water chillers are necessary for data centers." If you have been watching the trajectory of liquid cooling adoption, you know why that sentence matters. Chillers are among the largest capital expenditures in a data center mechanical plant. They are also among the largest consumers of energy and water. Removing them from the loop changes the facility economics at every level: construction cost, operational cost, water consumption, and physical footprint.
This is the single most consequential cooling infrastructure specification change in 2026. The operators who understand it are already redesigning their mechanical plants around that number.
Traditional data center cooling relies on chilled water loops operating at 7 to 15°C supply temperatures. Producing that cold water requires vapor-compression chillers running continuously, consuming electricity to drive compressors, and rejecting heat through cooling towers or dry coolers. Cooling accounts for up to 40% of a typical data center's electricity consumption, and chillers are a dominant share of that load.
A 45°C supply temperature requirement changes the entire thermal rejection strategy. At that temperature, cooling towers and dry coolers can reject heat directly to the atmosphere using the wet-bulb or dry-bulb temperature differential without mechanical refrigeration, across most climate zones and for most of the calendar year. The mechanical plant simplifies radically. Fewer moving parts. No refrigerant circuits. Lower maintenance burden. Lower failure risk on the highest-density, most critical compute in the facility.
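To see why, run the numbers on approach temperature. The sketch below is a back-of-envelope check, not a sizing calculation; the 5°C dry-cooler approach and the ambient design temperatures are assumptions for illustration, not vendor figures.

```python
# Back-of-envelope check: can a dry cooler meet the water supply setpoint
# without mechanical refrigeration? A dry cooler can only deliver water a few
# degrees ABOVE ambient dry-bulb -- that gap is the "approach".

DRY_COOLER_APPROACH_C = 5.0  # assumed approach for a reasonably sized dry cooler

def achievable_supply_temp_c(ambient_dry_bulb_c: float) -> float:
    """Coolest water a dry cooler can deliver at a given ambient temperature."""
    return ambient_dry_bulb_c + DRY_COOLER_APPROACH_C

for setpoint_c, label in [(45.0, "45 C inlet spec"),
                          (15.0, "conventional 15 C chilled water")]:
    for ambient_c in (20.0, 30.0, 38.0):  # mild day, warm day, hot-climate design day
        ok = achievable_supply_temp_c(ambient_c) <= setpoint_c
        print(f"{label}: ambient {ambient_c:.0f} C -> "
              f"{'free cooling works' if ok else 'chiller required'}")
```

Even a 38°C design day clears a 45°C setpoint with margin to spare. A 15°C chilled-water loop never does, which is exactly why it needs a compressor.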
The direct energy savings are measurable. Direct-to-chip cooling uses 31% less power than traditional air cooling. Vertiv benchmarks put liquid cooling energy reduction at up to 25% of facility consumption. AWS reported 46% reduced energy consumption alongside 12% increased compute performance after shifting to liquid-cooled infrastructure. At 50 megawatts of IT load, the projected annual savings reach approximately $4 million, assuming data centers spend between $1.9 and $2.8 million per megawatt annually.
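A rough sense of where savings on that order come from can be recovered from PUE alone. The PUE values and electricity price in this sketch are illustrative assumptions, not figures from the benchmarks above; with these inputs the result happens to land near the same $4 million.

```python
# Illustrative annual energy-cost delta from a cooling-driven PUE improvement.
# All inputs are assumptions chosen for a round 50 MW example.

IT_LOAD_MW = 50.0
PUE_CHILLED_WATER = 1.30   # assumed baseline with a chiller plant
PUE_DIRECT_TO_CHIP = 1.15  # assumed chiller-free direct-to-chip design
PRICE_USD_PER_KWH = 0.06   # assumed blended industrial electricity rate
HOURS_PER_YEAR = 8760

def annual_facility_cost_usd(pue: float) -> float:
    facility_mw = IT_LOAD_MW * pue  # total draw including cooling overhead
    return facility_mw * 1000 * HOURS_PER_YEAR * PRICE_USD_PER_KWH

savings = (annual_facility_cost_usd(PUE_CHILLED_WATER)
           - annual_facility_cost_usd(PUE_DIRECT_TO_CHIP))
print(f"Annual savings: ${savings/1e6:.1f}M")  # ~$3.9M with these assumptions
```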
That math gets more favorable when you factor in chiller capital costs, which typically range from $500 to $1,500 per ton of cooling capacity, plus the associated electrical infrastructure, controls, and ongoing maintenance. A 50 MW AI data center needs on the order of 14,000 tons of heat rejection, since one refrigeration ton covers roughly 3.5 kW. The chiller plant alone represents tens of millions of dollars in equipment that simply does not need to exist.
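The sizing arithmetic is straightforward once you convert megawatts to refrigeration tons. The per-ton prices below are the range quoted above; the rest is a conversion factor.

```python
# Chiller plant sizing and equipment cost for a 50 MW IT load.
# 1 refrigeration ton = 12,000 BTU/hr, roughly 3.517 kW of heat rejection.

IT_LOAD_KW = 50_000
KW_PER_TON = 3.517
COST_PER_TON_USD = (500, 1_500)  # typical installed chiller cost range quoted above

tons_required = IT_LOAD_KW / KW_PER_TON  # ~14,200 tons, before redundancy margin
low = tons_required * COST_PER_TON_USD[0]
high = tons_required * COST_PER_TON_USD[1]

print(f"Cooling capacity: {tons_required:,.0f} tons")
print(f"Chiller capital:  ${low/1e6:.0f}M to ${high/1e6:.0f}M")  # ~$7M to $21M
```

Add N+1 redundancy, pumps, switchgear, and controls, and the installed figure climbs well past that equipment-only range.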
The water story is equally consequential. NVIDIA claims direct-to-chip liquid cooling achieves a 300x improvement in water efficiency compared to air cooling. That figure reflects the difference between evaporative cooling towers consuming large volumes of water to reject heat at scale versus a closed-loop or near-closed-loop liquid system that circulates the same water through the facility repeatedly with minimal evaporative loss.
This matters because water availability is already a hard constraint for data center siting in many markets. The hyperscalers are facing permitting resistance, municipal water allocation limits, and state-level reporting requirements that did not exist five years ago. A cooling architecture that dramatically reduces consumptive water use removes one of the most common site selection blockers. At 45°C operating temperature, the facility can use higher approach temperatures in its heat rejection equipment, which reduces or eliminates the need for evaporative cooling towers entirely in some climates, moving to dry coolers that reject heat purely through convection and do not consume process water.
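The evaporative side of that comparison is simple physics: rejecting heat by evaporation consumes water in proportion to the latent heat of vaporization. The sketch below counts only evaporation; drift and blowdown add more on top.

```python
# Minimum water evaporated to reject heat through a cooling tower.
# Latent heat of vaporization of water is roughly 2,260 kJ/kg at tower conditions.

HEAT_REJECTED_MW = 50.0        # roughly the IT load for a 50 MW facility
LATENT_HEAT_KJ_PER_KG = 2_260.0

kg_per_second = (HEAT_REJECTED_MW * 1_000) / LATENT_HEAT_KJ_PER_KG  # kW / (kJ/kg) = kg/s
litres_per_day = kg_per_second * 86_400                             # 1 kg of water ~ 1 litre

print(f"Evaporative loss: {kg_per_second:.1f} kg/s "
      f"(~{litres_per_day/1e6:.1f} million litres per day)")
# ~22 kg/s, ~1.9 million litres/day -- a closed loop on dry coolers evaporates essentially none
```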
The architecture driving this change is the Blackwell NVLink rack scale platform. The GB200 NVL72 delivers 25x energy efficiency gains over its predecessor for AI inference workloads. The GB300 NVL72 extends that to 30x. These are gains measured at the compute level, meaning the useful AI work performed per watt of power consumed. More work per watt means less heat generated per unit of AI output. The thermal density per rack goes up because the absolute power draw increases, but the heat generated per inference, per training step, or per token is dramatically lower.
The implication for cooling design is that you are building a higher-density facility that produces more AI throughput per square foot but needs extremely capable thermal management at the rack level. Air cooling cannot reach the rack densities these systems require. The GPU clusters in Blackwell-class deployments run at 30 to 100 kilowatts per rack or more. No CRAC unit, in-row cooler, or rear-door heat exchanger handles that load adequately. Direct-to-chip liquid is not optional for these systems. It is a prerequisite.
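That claim is a heat-balance statement, Q = ṁ·cp·ΔT, not a vendor talking point. The rack power and temperature rises below are illustrative assumptions.

```python
# Flow required to remove 100 kW from one rack, air versus water.
# Q = m_dot * cp * delta_T  ->  m_dot = Q / (cp * delta_T)

RACK_POWER_W = 100_000

# Air: cp ~ 1,005 J/(kg*K), density ~ 1.2 kg/m^3, assume a 15 K inlet-to-outlet rise
air_kg_s = RACK_POWER_W / (1_005 * 15)
air_m3_s = air_kg_s / 1.2
air_cfm = air_m3_s * 2_118.88           # cubic metres per second to cubic feet per minute

# Water: cp ~ 4,186 J/(kg*K), assume a 10 K rise through the cold plates
water_kg_s = RACK_POWER_W / (4_186 * 10)
water_lpm = water_kg_s * 60             # ~1 litre per kg

print(f"Air:   {air_cfm:,.0f} CFM through a single rack")  # ~11,700 CFM
print(f"Water: {water_lpm:.0f} litres per minute")          # ~140 L/min
```

Pushing roughly twelve thousand CFM through a single rack is not a workable fan-and-containment problem. Moving about 140 litres per minute of warm water through cold plates is routine plumbing.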
NVIDIA is not alone at elevated water temperatures. Accelsius operates its two-phase cooling systems at 51 to 55°C, pushing the threshold even higher and enabling heat reuse scenarios in district heating or industrial processes. LiquidStack, Supermicro, and Lenovo's Neptune platform are all positioned in the 45°C-plus operating range. The market around this temperature band is forming fast.
The liquid cooling market nearly doubled in 2025 to approximately $3 billion and is projected to reach roughly $7 billion by 2029. That growth is coming entirely from AI infrastructure. The traditional enterprise data center market is not driving it. Hyperscalers building GPU clusters are driving it.
And yet only 12% of data centers are fully liquid-cooled today. Among operators, 56% cite high upfront costs as the primary barrier, 53% believe air cooling is adequate for their current workloads, and 29% point to a lack of standardization across vendors and platforms.
That adoption gap is closing, and not gradually. IBM demonstrated hot-water cooling as early as 2012 on its zEnterprise EC12 mainframe systems, circulating water at up to 60°C through cold plates attached directly to processor modules. The technology worked. The industry moved on. The difference now is that the AI compute platforms leave no alternative. You cannot air-cool a 72-GPU NVLink rack at 100 kilowatts. The physics close that door completely.
The operators who are still designing AI data centers around chilled water plants are building the wrong building. The capital expenditure for that mechanical plant, the energy to run it, and the water it consumes are all costs that a chiller-free 45°C direct-to-chip design avoids. The savings are not marginal. At scale, they represent hundreds of millions of dollars over a facility's operating life.
The redesign required is not trivial. Facility managers who have spent careers managing chilled water systems, CRAC unit layouts, and raised-floor air distribution need to learn a different discipline. Piping design, leak detection, coolant chemistry, and manifold infrastructure are not the same engineering domain as traditional HVAC-based cooling. The workforce transition is real, and it takes time.
But the direction is not ambiguous. Jensen Huang gave the industry a specification: 45°C inlet water, no chillers required. The hardware that runs on that specification is shipping now. The facilities being commissioned on 2026 and 2027 timelines that still include vapor-compression chiller plants are already behind the curve.
Fifty-nine percent of data centers plan to implement liquid cooling within five years. The ones moving on that timeline with chiller-free mechanical designs will come out structurally ahead. The ones that treat it as a later-phase upgrade will spend the intervening years paying for infrastructure they are going to remove anyway.
Forty-five degrees. That is the number. Build around it.