Microsoft, Google, Amazon, and Meta have committed over $300 billion in combined capital expenditure for 2025 data center buildouts. That number keeps climbing. Every quarter, each company revises its spend upward, adding campuses, reserving power, breaking ground on facilities that will consume hundreds of megawatts apiece. The hyperscale arms race has no off switch.
The cooling infrastructure to support those facilities does not exist yet. And the gap between what is being built and what can be thermally managed is becoming the binding constraint on the entire AI buildout.
This is the problem the capex announcements never mention. You can pour concrete. You can order GPUs. You can negotiate power purchase agreements with utilities that are themselves scrambling to add generation capacity. But cooling systems have lead times, labor requirements, and water dependencies that do not compress just because a CFO approved a bigger budget.
The International Energy Agency projects global data center electricity consumption will approach 1,000 TWh by 2030. That is more than double today's consumption. It would make data centers one of the largest single categories of electricity demand on earth, comparable to the entire electricity consumption of Japan today.
Hyperscalers alone are planning over 50 GW of new capacity across North America, Europe, and Asia-Pacific. Every gigawatt of IT load requires a corresponding gigawatt-scale thermal rejection system. Cooling towers. Chillers. Coolant distribution units. Rear-door heat exchangers. Immersion tanks. The specifics vary by architecture. The thermal load does not negotiate.
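The reasoning behind that one-to-one scaling is simple: essentially every watt delivered to IT equipment leaves the building as heat, so the thermal plant must move that heat continuously. A back-of-envelope sketch of what "gigawatt-scale" means in coolant terms (the 10 °C loop temperature rise is an illustrative assumption, not a design spec):

```python
# Back-of-envelope: nearly every watt delivered to IT equipment
# leaves as heat, so heat rejection scales one-to-one with IT load.
# The 10 C loop temperature rise is an illustrative assumption.

WATER_CP = 4186  # specific heat of water, J/(kg*K)

def coolant_flow_m3_per_hr(it_load_mw, delta_t_c=10.0):
    """Water flow needed to carry it_load_mw of heat at a delta_t_c
    temperature rise across the loop, from Q = m_dot * cp * dT."""
    heat_w = it_load_mw * 1e6                # ~all IT power becomes heat
    m_dot_kg_s = heat_w / (WATER_CP * delta_t_c)
    return m_dot_kg_s * 3600 / 1000          # kg/s -> m^3/hr (1 kg ~ 1 L)

# One gigawatt of IT load:
print(f"{coolant_flow_m3_per_hr(1000):,.0f} m^3/hr of water")  # ~86,000
```

Roughly 86,000 cubic meters of water per hour circulating through the loop, for every gigawatt of IT load. That is the scale the rejection hardware has to handle, whatever form it takes.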
Water consumption for data center cooling is now the fastest-growing industrial water use category in multiple U.S. states. A single hyperscale campus running evaporative cooling can consume 3 to 5 million gallons per day. Texas alone faces projected data center water demand of up to 161 billion gallons annually by 2030, a figure that has municipal water planners and agricultural interests pushing back hard.
Coolant distribution units, the hardware that moves liquid coolant to server racks in direct liquid cooling deployments, carry lead times of 6 to 9 months right now. That is for standard configurations. Custom builds for high-density AI clusters can stretch past a year. The manufacturers building CDUs are running at or near capacity. Vertiv's acquisition of ThermoKey signals how seriously the largest thermal vendors are treating the heat rejection bottleneck. They are buying capacity because they cannot build it fast enough organically.
The skilled labor problem is worse. Commissioning a liquid cooling system in a live data center is not the same as installing a traditional chilled water plant. The technicians who can pressure-test manifolds, validate flow rates across hundreds of server nodes, and troubleshoot leak detection systems in a facility running tens of millions of dollars in GPUs are in short supply. We covered the commissioning risk for 100 MW AI data centers earlier this year. The conclusion stands: the people who know how to do this work are booked 12 months out.
Every delay in cooling commissioning is a delay in revenue generation. A $2 billion data center that sits idle for three months because the thermal plant is not validated is burning $15 to $20 million per month in carrying costs. The hyperscalers know this. Their procurement teams are locking up cooling vendor capacity a year or more in advance, which pushes smaller operators and colocation providers further back in the queue.
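Where the monthly figure comes from: the carrying cost on idle capital is dominated by cost of capital, depreciation, and fixed operating expenses. A rough sketch, with the blended annual rates as illustrative assumptions:

```python
# Rough carrying-cost sketch for an idle facility. The $2B capital
# cost matches the article; the annual rates are illustrative
# assumptions covering cost of capital, depreciation, and fixed opex.

def monthly_carry_usd(capex_usd, annual_rate):
    """Monthly carrying cost at a blended annual rate."""
    return capex_usd * annual_rate / 12

capex = 2e9  # the article's $2B facility
for rate in (0.09, 0.12):
    print(f"{rate:.0%}: ${monthly_carry_usd(capex, rate)/1e6:.0f}M/month")
# 9%: $15M/month; 12%: $20M/month
```

A blended rate anywhere in the 9 to 12 percent range lands exactly in the $15 to $20 million band. And that is before counting the opportunity cost of GPUs depreciating in crates.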
Energy gets the headlines. Water creates the permitting fights.
In Wisconsin, a county board imposed a moratorium on new data center development after residents and local officials raised alarms about water table impacts. That was not an environmental group filing a lawsuit. That was local elected officials saying no. The backlash pattern is replicating across the Midwest and Sun Belt, in communities that welcomed the tax revenue from data centers until they saw the water bills.
The math is straightforward. Evaporative cooling towers reject heat by evaporating water, at approximately 1.8 liters per kilowatt-hour of heat rejected. A 100 MW facility running at 80% load with a conventional evaporative cooling plant will consume roughly 300 million gallons of water per year. Scale that across the 50+ GW pipeline and the aggregate freshwater demand becomes a resource planning crisis that most municipal water authorities are not staffed to manage.
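The per-kilowatt-hour figure is grounded in physics, not vendor marketing: water's latent heat of vaporization is about 2.26 MJ/kg, and one kilowatt-hour is 3.6 MJ, so evaporating roughly 1.6 liters removes a kilowatt-hour of heat even before drift and blowdown losses push real towers toward 1.8. The annual arithmetic, as a sketch:

```python
# Annual water consumption for an evaporatively cooled facility.
# The 1.8 L/kWh figure and 80% load factor match the article.

L_PER_KWH = 1.8        # evaporative water use per kWh of heat rejected
GAL_PER_L = 1 / 3.785  # US gallons per liter

def annual_water_gal(it_load_mw, load_factor=0.8):
    kwh_per_year = it_load_mw * 1000 * load_factor * 8760  # hours/year
    return kwh_per_year * L_PER_KWH * GAL_PER_L

print(f"{annual_water_gal(100)/1e6:.0f} million gallons/year")
# ~333, consistent with the article's "roughly 300 million"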
Mindy Lubber, writing in Forbes, framed the question correctly: whether data center growth becomes a net positive depends entirely on sustainability of design. But "sustainability" in the cooling context is not an ESG talking point. It is an engineering constraint. You either have enough water or you do not. You either have a thermal rejection system that works without freshwater or you are dependent on a resource that communities are increasingly unwilling to share.
Direct liquid cooling, both cold plate and single-phase immersion, eliminates or drastically reduces freshwater consumption. Closed-loop liquid cooling systems reject heat through dry coolers or hybrid adiabatic systems that use a fraction of the water that evaporative towers require. The immersion cooling market is projected to reach $14 billion by 2034. That growth rate reflects real demand, driven primarily by AI chip power densities that air cooling simply cannot handle.
NVIDIA's GB200 NVL72 racks dissipate over 120 kW per rack. At those densities, air cooling is physically insufficient. The thermal resistance between the chip junction and the ambient air is too high to maintain safe operating temperatures with fans alone. Liquid is mandatory. This is not a preference. It is thermodynamics.
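The thermodynamics fits in one line: junction temperature equals ambient plus power times the junction-to-ambient thermal resistance, T_j = T_amb + P · R_θ. A sketch of why that forces the liquid transition, with illustrative resistance values (the R_θ numbers and the 1,200 W chip power are assumptions chosen to show the regime change, not published specs):

```python
# Why air fails at AI chip densities: T_junction = T_ambient + P * R_theta.
# Thermal resistances below are illustrative assumptions, not specs.

def junction_temp_c(power_w, r_theta_c_per_w, ambient_c=35.0):
    return ambient_c + power_w * r_theta_c_per_w

chip_power = 1200  # watts, order of magnitude for a top-end AI accelerator

air_r = 0.06       # C/W, aggressive air heatsink (assumption)
liquid_r = 0.02    # C/W, direct-to-chip cold plate (assumption)

print(f"air:    {junction_temp_c(chip_power, air_r):.0f} C")    # ~107 C
print(f"liquid: {junction_temp_c(chip_power, liquid_r):.0f} C") # ~59 C
```

At typical maximum junction temperatures around 90 to 100 °C, the air-cooled case is over the line before the facility is even warm. Cutting the thermal resistance by a factor of three is what the cold plate buys you.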
But the transition from air-cooled facilities to liquid-cooled facilities involves more than swapping out hardware. The piping infrastructure, the water treatment systems (even closed-loop systems require glycol management and corrosion inhibition), the monitoring and leak detection layers, the structural reinforcement for immersion tanks that weigh tens of thousands of pounds when filled. All of it adds time, cost, and complexity to a buildout timeline that hyperscalers are already trying to compress.
The semiconductor supply chain got all the attention in 2023 and 2024. GPU allocations. TSMC capacity. CoWoS packaging bottlenecks. Those constraints are easing. NVIDIA is shipping. AMD is shipping. The silicon is flowing.
The constraint now is everything downstream of the chip. Power generation. Power delivery. And thermal management. Of those three, cooling has the longest lead times, the least mature supply chain for next-generation architectures, and the most exposure to local political and environmental opposition.
$300 billion in capex does not automatically produce $300 billion worth of operational compute. It produces $300 billion worth of construction activity that must be thermally commissioned before a single training run begins. The companies writing those checks understand this. Their thermal engineering teams are the most sought-after hires in the industry right now. The vendors supplying cooling equipment are fielding more RFPs than they can respond to.
The buildout is happening. The question is whether the cooling infrastructure scales in parallel or whether every new campus announcement adds another 12 to 18 months of thermal commissioning risk to the timeline. Right now, the gap is widening. The capex keeps accelerating. The CDU lead times are not shrinking. The labor pool is not growing. And the communities that control the water permits are paying closer attention than they were a year ago.
The hyperscalers will figure this out eventually. They have the money and the motivation. But "eventually" is doing a lot of work in that sentence. The thermal bottleneck is here now, and every quarter of delay between facility construction completion and cooling system commissioning is a quarter of stranded capital that even Microsoft and Google would prefer to avoid.