Technology · March 30, 2026

From 2 kW to 130 kW Per Rack in 40 Years. Liquid Cooling Is No Longer Optional.

In the 1980s, a standard data center rack pulled 2 to 3 kilowatts. Engineers cooled them with raised floors and perforated tiles. The math was simple. The air moved slowly. Nobody lost sleep over thermal management.

That era is dead.

Nvidia's GB200 NVL72 rack draws 120 to 130 kilowatts. A single rack. That is a 40x increase in thermal load per square foot compared to those 1980s cabinets, and it represents a physics problem that no amount of chilled air can solve. According to Network World, the average rack density across the industry doubled from 8 kW to 17 kW in just two years. McKinsey projects that number will hit 30 kW by 2027. And 30 kW is the average. The frontier racks, the ones running training clusters for the next generation of foundation models, are already four to five times that.

The GPU Power Curve Broke the Old Playbook

Follow the wattage. Nvidia's A100, released in 2020, consumed 400 watts per chip. The H100 pushed that to 700 watts. The B200 crossed the 1,000-watt threshold, and the GB300 NVL72 configuration pushes per-GPU draw to roughly 1,400 watts. Nvidia's upcoming Vera Rubin architecture, slated for the second half of 2026, is expected to climb higher still.
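A back-of-envelope tally shows why a 72-GPU rack lands in the 120 to 130 kW range. The GPU count below follows from the NVL72 name; the per-GPU, per-CPU, and overhead wattages are illustrative assumptions rather than Nvidia's published bill of materials.

    # Rough rack-power estimate for a 72-GPU, NVL72-class rack.
    # All wattages below are assumptions for illustration.
    N_GPUS = 72              # GPUs per rack (from the NVL72 name)
    GPU_WATTS = 1_200        # assumed per-GPU draw in a GB200-class config
    N_CPUS = 36              # assumed host CPUs, one per two GPUs
    CPU_WATTS = 300          # assumed per-CPU draw
    OVERHEAD_WATTS = 20_000  # assumed switches, NICs, pumps, power conversion losses

    rack_kw = (N_GPUS * GPU_WATTS + N_CPUS * CPU_WATTS + OVERHEAD_WATTS) / 1_000
    print(f"Estimated rack load: {rack_kw:.0f} kW")  # ~117 kW with these assumptions

The GPUs alone account for the bulk of the load; everything else in the rack only widens the gap between what air can remove and what the silicon emits.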

Each generation doubles thermal output while demanding tighter temperature tolerances. The chips run hotter, the transistors sit closer together, and the margin for cooling failure shrinks. Air cooling works fine up to about 20 kW per rack, according to JLL's density threshold analysis. Beyond that, you need liquid. Beyond 175 kW, you need full immersion. There is no real debate here: past those thresholds, air simply cannot carry heat away fast enough.
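Expressed as a simple rule of thumb, those thresholds sort racks into three regimes. The tier labels below are a simplification of the JLL figures cited above; rear-door heat exchangers and hybrid designs blur the boundaries in practice.

    def cooling_tier(rack_kw: float) -> str:
        """Map rack density to a cooling approach, using the thresholds
        cited above (~20 kW air-cooling ceiling, ~175 kW immersion crossover)."""
        if rack_kw <= 20:
            return "air (hot/cold aisle containment)"
        if rack_kw <= 175:
            return "direct-to-chip liquid (cold plates)"
        return "full immersion"

    for kw in (8, 17, 30, 130, 200):
        print(f"{kw:>4} kW/rack -> {cooling_tier(kw)}")

Run against the densities in this article, the industry-average rack of two years ago sits comfortably in the air tier, today's average is approaching the boundary, and the frontier AI racks are already two tiers past it.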

The performance gap backs this up. Fan-driven air cooling achieves roughly 50 watts per square meter per degree Celsius at the chip surface. Cold plate liquid cooling hits approximately 15,000 watts per square meter per degree. That is roughly a 300x improvement in heat flux for the same surface area and temperature difference. Water conducts heat about 25 times better than stationary air, and when you combine that conductivity advantage with the surface area of a well-designed cold plate, you get a cooling system that operates in a different physical regime entirely.
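The comparison follows from Newton's law of cooling: heat flux per unit area is the transfer coefficient times the temperature difference. The 30-degree rise assumed below is arbitrary; the 300x ratio holds regardless, because both systems are evaluated at the same delta.

    # Newton's law of cooling per unit area: q = h * dT,
    # with h in watts per square meter per degree.
    h_air = 50            # forced-air coefficient at the chip surface (cited above)
    h_cold_plate = 15_000 # cold-plate liquid cooling coefficient (cited above)
    dT = 30               # assumed surface-to-coolant temperature rise, in degrees

    q_air = h_air * dT            # heat removed per square meter, air
    q_liquid = h_cold_plate * dT  # heat removed per square meter, cold plate
    print(f"air:        {q_air:,.0f} W/m^2")
    print(f"cold plate: {q_liquid:,.0f} W/m^2  ({q_liquid / q_air:.0f}x)")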

The Hyperscalers Have Already Decided

Google has deployed liquid cooling across more than 2,000 TPU pods over the past seven years, maintaining 99.999% uptime. When Google moved from the air-cooled TPU v2 to the liquid-cooled TPU v3, it doubled chip density per rack: twice the compute in the same footprint. That density advantage compounds at scale.

Microsoft announced in December 2024 that every data center designed from August 2024 onward will use a closed-loop liquid cooling system that consumes zero water through evaporation. Each facility will avoid more than 125 million liters of annual water consumption. The company cut its water usage effectiveness from 0.49 liters per kilowatt-hour in 2021 to 0.30 in 2024. These new facilities, coming online from late 2027, eliminate evaporative cooling entirely.
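A rough sanity check connects those WUE figures to the savings claim. The 30 MW IT load below is an assumed facility size, not a Microsoft number, but at that scale the 2021-era WUE implies annual water consumption in the same ballpark as the 125 million liters Microsoft says each new facility avoids.

    # WUE (water usage effectiveness) = liters of water consumed per kWh of IT energy.
    # The facility size is an assumption for illustration, not a Microsoft figure.
    it_load_mw = 30
    annual_kwh = it_load_mw * 1_000 * 8_760  # kW times hours per year

    for label, wue in [("2021 fleet (0.49 L/kWh)", 0.49),
                       ("2024 fleet (0.30 L/kWh)", 0.30),
                       ("closed-loop design (no evaporative use)", 0.0)]:
        litres = annual_kwh * wue
        print(f"{label}: {litres / 1e6:,.0f} million liters/year")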

Meta showed off a 140 kW liquid-cooled rack at the 2024 OCP Global Summit, built on its ORv3 high-power rack platform. The company committed $800 million to a liquid-cooled data center in Indiana. Its ORv3 roadmap targets 200 kW per rack and beyond.

These are not pilot programs. Google, Microsoft, and Meta are rebuilding their entire infrastructure stacks around liquid cooling as the default. Air cooling is the legacy system now.

The Market Numbers Tell the Same Story

The data center liquid cooling market hit nearly $3 billion in 2025, roughly double the 2024 figure, according to Dell'Oro Group. The firm projects it will approach $7 billion by 2029. Technavio pegs the AI-specific liquid cooling segment at a 31.4% compound annual growth rate through 2029.

Only 45% of data centers rely purely on air cooling today, down from 48% in 2024. And 59% of operators plan to implement some form of liquid cooling within the next five years. Direct-to-chip cooling commands 47% of the AI data center liquid cooling segment. The installed base is shifting fast.

The U.S. Department of Energy estimates that cooling accounts for up to 40% of total data center energy consumption. Single-phase immersion systems can push power usage effectiveness (PUE) down to 1.02 to 1.10. Two-phase immersion gets to 1.01 to 1.03 while supporting 150 to 250+ kW rack densities. For operators running AI workloads at scale, those efficiency gains translate directly into margin.
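The margin argument is easiest to see as a PUE comparison. The 100 MW IT load and the 1.5 air-cooled baseline below are illustrative assumptions (the baseline is roughly the industry-average PUE reported in recent surveys), and the immersion figures are mid-range picks from the spans above.

    # PUE = total facility energy / IT energy, so non-IT overhead = (PUE - 1) * IT energy.
    # IT load and the air-cooled baseline are assumptions for illustration.
    it_mw = 100
    hours_per_year = 8_760

    for label, pue in [("air-cooled baseline", 1.5),
                       ("single-phase immersion", 1.06),
                       ("two-phase immersion", 1.02)]:
        overhead_gwh = it_mw * (pue - 1) * hours_per_year / 1_000
        print(f"{label} (PUE {pue}): {overhead_gwh:,.0f} GWh/year of non-IT energy")

At these assumed numbers, the difference between the baseline and two-phase immersion is hundreds of gigawatt-hours a year of cooling and distribution overhead that never has to be bought.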

The Technology Is Not Without Complications

Two-phase immersion cooling, the approach that delivers the lowest PUE numbers and handles the highest rack densities, has a supply chain problem. The fluorocarbon fluids that made two-phase systems work are PFAS compounds. 3M discontinued its Novec product line and exited PFAS manufacturing entirely by the end of 2025. The last day to place a Novec order was March 31, 2025. Regulatory pressure from the EPA and pending EU restrictions have made PFAS-based fluids a liability.

Shell stepped into the gap in May 2025, becoming the first immersion fluid provider to receive official certification from Intel for its 4th and 5th generation Xeon processors. Shell's fluids are PFAS-free and biodegradable. Intel issued a warranty rider for chips cooled in Shell's fluids. That certification was a critical missing piece for enterprise adoption of immersion cooling.

But the PFAS question has not fully resolved. Operators who built infrastructure around 3M's fluids face replacement costs and requalification cycles. And hydrocarbon-based alternatives, while safer from a regulatory perspective, do not match the thermal performance of fluorocarbons in two-phase applications. The industry is solving this problem, but slowly.

What Comes Next

Lenovo has been shipping direct water-cooled servers under its Neptune brand since 2012, using roof-mounted passive radiators that eliminate the need for chillers in cooler climates. Nvidia's Vera Rubin platform is designed to accept supply water at 45 degrees Celsius, warm enough to eliminate mechanical chillers entirely and enable waste heat recapture.

Google discussed 1-megawatt IT racks at the 2025 OCP EMEA Summit. One megawatt per rack. If that timeline holds, even today's liquid cooling infrastructure will need another generational leap.

The trajectory is clear. GPU power draws are climbing at a rate that makes each new chip generation a cooling crisis for operators still running air. The 30 kW average rack density that McKinsey forecasts for 2027 is already below what Nvidia, Meta, and Google are deploying today. The market has moved past the question of whether liquid cooling will become standard. The only remaining question is how fast the retrofit cycle can move, and who captures the revenue as it does.