Water · March 31, 2026

Liquid Cooling Was Supposed to Solve the Water Problem. The Data Says Otherwise.

[Image: Cooling towers at a large data center facility]
Cooling towers remain the dominant heat rejection mechanism even in liquid-cooled facilities. The loop still ends at evaporation.

The liquid cooling market is booming. Global liquid cooling revenue sits at $5.52 billion in 2025 and is projected to reach $15.75 billion by 2030, nearly tripling in five years. Every major hyperscaler is accelerating deployment. Direct-to-chip, immersion, two-phase. The vendor ecosystem is expanding faster than procurement teams can evaluate it. And the implicit promise behind all of it, the premise that justified the capital expenditure and the operational complexity, is that liquid cooling reduces water consumption compared to legacy air-cooled systems.

That premise is wrong. The data says so directly.

According to projections published by the Environmental and Energy Study Institute and covered by The Register in March 2026, data centers will require somewhere between 697 million and 1.45 billion gallons of water per day by 2030. For scale: New York City's entire daily water supply runs approximately one billion gallons. The AI buildout is on a trajectory to consume an equivalent volume every single day, just for cooling and power generation. And the shift from air to liquid cooling is driving that number up, not down.

This is the water math problem nobody in the liquid cooling industry wants to talk about plainly. So let us talk about it.

[Chart: Global Data Center Water Demand Projection, Gallons Per Day by 2030. Bars: 2025 ~300M; 2030 low 697M; 2030 high 1.45B. Reference line: NYC daily supply. Sources: EESI / The Register. Figures rounded; low/high reflect the modeling range.]

The Loop That Does Not Close

Direct-to-chip cooling is the most widely deployed liquid cooling approach in new AI builds. A coolant distribution unit (CDU) circulates water or a water-glycol mix through cold plates mounted directly on processors. Heat transfers from chip to coolant. Then the hot coolant has to go somewhere.

In the vast majority of deployments, it goes to a cooling tower.

The cooling tower evaporates water into the atmosphere to reject the heat load. That evaporative loss is consumptive. It does not return to the source. Direct-to-chip reduces the air volume you need to move through the facility, which cuts the load on computer room air handlers and reduces fan energy. It does not eliminate the evaporative cooling step. It just moves the rejection point from inside the building to the mechanical yard outside.
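
For a sense of scale, here is a rough order-of-magnitude sketch of tower makeup water per unit of heat rejected, using textbook figures rather than any vendor's data: the latent heat of vaporization of water (about 2.26 MJ per kg) sets the evaporation rate, and blowdown adds a margin on top. The 25% blowdown factor below is an assumption that varies with cycles of concentration.

# Rough estimate of cooling-tower makeup water for a given heat load.
# Assumed figures: latent heat of vaporization ~2.26 MJ/kg; blowdown adds
# roughly 25% on top of evaporation at typical cycles of concentration.
LATENT_HEAT_MJ_PER_KG = 2.26   # approximate, at typical tower temperatures
KWH_TO_MJ = 3.6
BLOWDOWN_FACTOR = 1.25         # assumption; depends on cycles of concentration

def makeup_water_liters(heat_kwh: float) -> float:
    """Liters of tower makeup water needed to reject heat_kwh of heat."""
    evaporated_kg = heat_kwh * KWH_TO_MJ / LATENT_HEAT_MJ_PER_KG
    return evaporated_kg * BLOWDOWN_FACTOR  # 1 kg of water is ~1 liter

# A 100 kW rack rejecting its full load through a tower for one hour:
print(round(makeup_water_liters(100), 1), "liters per hour")  # ~199 liters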

Immersion cooling compounds this. Single-phase immersion systems, where servers sit submerged in dielectric fluid, can achieve higher rack densities and better chip-level thermal management than direct-to-chip. But the dielectric fluid still needs to reject its heat load to a secondary water loop, which typically routes to a heat exchanger and then to a cooling tower. Two-phase immersion, where the fluid boils and condenses in a closed cycle, can theoretically achieve a tighter loop, but commercial deployments at hyperscale density still require secondary water-side heat rejection in most climates. The physics are unforgiving.

The only genuinely zero-water path is dry cooling: air-cooled chillers or dry coolers that reject heat directly to ambient air without evaporation. These work. They exist. They are deployed today in northern European facilities where ambient temperatures stay low enough to make them viable year-round. In Phoenix, in Singapore, in northern Virginia during a July heat event, dry cooling either fails to meet the thermal load or drives energy consumption to economically unacceptable levels. The wet-bulb advantage of evaporative systems is not academic. It is the difference between a PUE of 1.3 and a PUE of 1.8 at 40°C ambient.
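
To make that penalty concrete, a back-of-the-envelope comparison of annual facility energy at the two PUE values. The 100 MW IT load and the $70/MWh price below are illustrative assumptions, not figures from the sources cited here.

# Illustrative comparison of annual facility energy at two PUE values.
# All inputs are assumptions for the sketch, not measured figures.
IT_LOAD_MW = 100          # assumed IT load
HOURS_PER_YEAR = 8760
PRICE_PER_MWH = 70.0      # assumed $/MWh, varies widely by market

def annual_energy_mwh(pue: float) -> float:
    """Total facility energy scales linearly with PUE for a fixed IT load."""
    return IT_LOAD_MW * pue * HOURS_PER_YEAR

for pue in (1.3, 1.8):
    mwh = annual_energy_mwh(pue)
    print(f"PUE {pue}: {mwh:,.0f} MWh/yr, ~${mwh * PRICE_PER_MWH / 1e6:,.1f}M")
# The 0.5 PUE gap on a 100 MW IT load is ~438,000 MWh and ~$30M per year.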

The 72% Nobody Mentions in the Sales Deck

Direct water use at the facility is the number operators measure, report, and occasionally brag about reducing. It is also the smaller part of the problem. By 2030, approximately 72% of total data center water consumption will be indirect, occurring at the power plants generating the electricity that flows through the facility's meter.

Thermoelectric power generation, whether coal, natural gas, or nuclear, uses water for cooling. A lot of it. A natural gas combined-cycle plant withdraws roughly 300 to 400 gallons of water per megawatt-hour generated, consuming (evaporating) around 150 to 200 of those gallons. A coal plant withdraws more. For a 100-megawatt data center running at full load, the upstream water footprint of generation runs well into the hundreds of millions of gallons a year, before a single drop touches a cooling tower on the facility side.
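
Working the arithmetic through with the combined-cycle figures above. This is a sketch; a coal-heavy or once-through-cooled grid mix pushes the numbers substantially higher.

# Upstream water from electricity generation for a 100 MW facility at full load.
# Per-MWh figures are the combined-cycle gas estimates quoted above.
FACILITY_MW = 100
HOURS_PER_YEAR = 8760
WITHDRAWAL_GAL_PER_MWH = (300, 400)   # withdrawn at the plant
CONSUMPTION_GAL_PER_MWH = (150, 200)  # evaporated, not returned

annual_mwh = FACILITY_MW * HOURS_PER_YEAR  # 876,000 MWh

low_c, high_c = (annual_mwh * g / 1e6 for g in CONSUMPTION_GAL_PER_MWH)
low_w, high_w = (annual_mwh * g / 1e6 for g in WITHDRAWAL_GAL_PER_MWH)
print(f"Consumed upstream:  {low_c:.0f}-{high_c:.0f} million gallons/yr")
print(f"Withdrawn upstream: {low_w:.0f}-{high_w:.0f} million gallons/yr")
# Consumed: ~131-175 million gallons; withdrawn: ~263-350 million gallons.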

When an operator announces a "zero water" cooling system, read the fine print. They mean zero water on site. The upstream water bill still exists. It just belongs to the utility, not the operator, so it disappears from the sustainability report.

Oracle announced closed-loop cooling for its AI data centers in February 2026. Closed-loop is real. The primary coolant circuit does not evaporate. But it still rejects heat somewhere, and that somewhere typically involves ambient air or secondary water. Gizmodo flagged the water scarcity risk as an emerging growth constraint for the AI industry on March 6, noting that location decisions are increasingly constrained not just by power availability but by water rights, aquifer access, and municipal supply agreements.

Where the Pressure Is Building

The water constraint is already reshaping site selection. Data center developers working in the American Southwest face water rights issues that were not part of the conversation three years ago. Communities in the Great Lakes basin, once assumed to be water-rich and therefore permitting-friendly, are imposing moratoriums and demanding cumulative impact assessments before approving new facilities.

The pressure is different in Europe. The EU Taxonomy for sustainable activities now includes water consumption thresholds for data center classification. A facility that cannot demonstrate responsible water use cannot access green financing or achieve taxonomy alignment, which blocks a meaningful portion of institutional capital. That is a direct financial consequence, not an environmental externality.

The liquid cooling market will continue expanding regardless. The thermal physics of AI compute demand it. A rack dense enough to house NVIDIA GB200 NVL72 configurations cannot be air-cooled at any reasonable cost. The density question is settled. What is not settled is the water accounting that comes with it.

What Vendors Need to Confront

Cooling vendors selling direct-to-chip and immersion systems face a question they have not had to answer seriously until now: what is the consumptive water use per kilowatt of heat rejected across the full system boundary, including the cooling tower or dry cooler on the back end?

The answer varies by deployment climate, ambient wet-bulb temperature, tower sizing, and blowdown management. But it is not zero. It has never been zero. The marketing language around "waterless cooling" almost always means waterless at the chip level, not waterless at the system level. Buyers are sophisticated enough to know the difference now. Regulators and community boards are learning fast.

The vendors who get ahead of this will build systems with better metering, tighter tower management, and real per-rack water consumption reporting integrated into DCIM dashboards. The vendors who do not will face procurement questions they cannot answer cleanly, in a market where water availability is increasingly a harder constraint than power availability in the locations that matter most.
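
What that per-rack reporting could look like, stripped to its core. The rack names, readings, and heat-share attribution method below are assumptions for illustration, not any particular DCIM product's API.

# Minimal sketch of per-rack water reporting: liters of tower makeup water
# attributed to each rack, plus a rack-level WUE (liters per IT kWh).
# Rack names, readings, and the heat-share attribution are illustrative.
from dataclasses import dataclass

@dataclass
class RackReading:
    name: str
    it_energy_kwh: float      # IT energy over the reporting interval
    heat_share: float         # fraction of total tower heat load attributed

def per_rack_water(readings, total_makeup_liters):
    """Attribute tower makeup water to racks by heat share and compute WUE."""
    report = []
    for r in readings:
        liters = total_makeup_liters * r.heat_share
        wue = liters / r.it_energy_kwh if r.it_energy_kwh else float("nan")
        report.append((r.name, round(liters, 1), round(wue, 2)))
    return report

readings = [
    RackReading("rack-a01", it_energy_kwh=2400.0, heat_share=0.55),
    RackReading("rack-a02", it_energy_kwh=1900.0, heat_share=0.45),
]
for name, liters, wue in per_rack_water(readings, total_makeup_liters=7800.0):
    print(f"{name}: {liters} L, WUE {wue} L/kWh")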

The water math is solvable. Dry cooling at scale, advanced heat recovery, purpose-built facilities in northern climates with ambient-assist capability for most of the year. None of these are theoretical. All of them require honest accounting first. The industry owes buyers that accounting. Most of the industry is not delivering it yet.