Guide · March 30, 2026

How Data Center Cooling Works in 2026: The Complete Guide

Every server burns. Every GPU, every CPU, every NVMe drive converts electricity into heat as a byproduct of computation. In a single rack packed with Nvidia H100 servers, heat output can exceed 70 kilowatts. Multiply by hundreds of racks in a hyperscale facility and you have a thermal problem that rivals a small industrial plant. Cooling is what keeps the whole thing from melting itself.

That is not a metaphor. Semiconductors throttle and eventually fail when junction temperatures cross critical thresholds. DRAM develops bit errors. SSDs degrade. The entire economic argument for building a $2 billion data center collapses if the cooling system cannot move heat out of the building as fast as the IT equipment generates it.

Cooling systems consume 30 to 40 percent of a facility's total electricity. In a 100 MW campus, that means 30 to 40 megawatts just to reject heat. The energy cost alone makes cooling the single largest lever operators have for controlling operating expenses. Get it wrong and you bleed money. Get it right and you unlock density, efficiency, and margin that competitors cannot match.

This guide covers every major cooling architecture in production or deployment today, the economics behind each, and the regulatory and supply chain forces reshaping the industry in real time. If you build, operate, invest in, or sell into data centers, this is the technical and commercial map.

Temperature Standards and PUE: The Numbers That Govern Everything

ASHRAE TC 9.9 sets the thermal envelope for data processing environments. The recommended inlet air temperature range for Class A1 through A4 equipment is 18 to 27 degrees Celsius (64.4 to 80.6 degrees Fahrenheit). That is the band where hardware manufacturers guarantee full performance and warranty coverage. The allowable range extends higher, with some equipment classes tolerating inlet temperatures up to 35 degrees Celsius and, in A4 environments, up to 45 degrees Celsius. Running at the high end of allowable saves cooling energy. It also compresses safety margins and demands more precise airflow management.
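
For teams scripting environmental monitoring, the envelope is simple to encode. Below is a minimal sketch; the allowable ranges are the commonly published figures for classes A1 through A4 and should be verified against the current TC 9.9 edition before operational use.

```python
# ASHRAE TC 9.9 thermal envelope check (illustrative values; confirm
# against the current edition before relying on them operationally).
RECOMMENDED = (18.0, 27.0)  # degrees C, applies to classes A1-A4

# Allowable inlet ranges per equipment class (commonly published figures).
ALLOWABLE = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}

def classify_inlet(temp_c: float, equipment_class: str = "A1") -> str:
    """Return whether an inlet temperature is recommended, allowable, or out of range."""
    lo, hi = ALLOWABLE[equipment_class]
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "recommended"
    if lo <= temp_c <= hi:
        return "allowable"
    return "out of range"

print(classify_inlet(26.5))        # recommended
print(classify_inlet(33.0, "A2"))  # allowable
print(classify_inlet(33.0, "A1"))  # out of range
```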

Power Usage Effectiveness (PUE) remains the primary efficiency metric. A PUE of 1.0 would mean every watt entering the facility goes directly to IT equipment. Impossible in practice. The Uptime Institute's 2025 Global Data Center Survey reported a weighted average PUE of 1.54, virtually unchanged for the sixth consecutive year. The industry average has been stuck in the 1.55 to 1.59 band since 2020. Facilities built in the last five years average around 1.45. The best hyperscale campuses operate between 1.03 and 1.10. Google's reported fleet average sits at 1.10. Meta's Prineville campus has reported 1.08.
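
The metric's appeal is its arithmetic: total facility power divided by IT power. A quick sketch using illustrative numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# A 100 MW campus drawing 35 MW for cooling plus 5 MW for power
# distribution losses and lighting, on top of a 60 MW IT load:
it = 60_000       # kW
overhead = 35_000 + 5_000
print(f"PUE = {pue(it + overhead, it):.2f}")  # PUE = 1.67
```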

The stagnation matters. Legacy facilities with raised floors, CRAC units built in the 2000s, and minimal containment drag the average up. New construction keeps pushing PUE lower. But the installed base turns over slowly. Uptime found that facilities larger than 1 MW and less than 15 years old average around 1.48 globally. In the Middle East, Africa, and Latin America, average PUE readings exceed 1.7. The global number hides enormous regional and vintage variation.

Hot Aisle, Cold Aisle, and Containment

The simplest improvement any data center can make costs almost nothing to understand and relatively little to implement. Servers draw cool air from one side and exhaust hot air from the other. Line them up so all intakes face the same aisle and all exhausts face the opposite aisle. Cold aisle. Hot aisle. Keep the two air streams from mixing and you can cut cooling energy by 20 to 30 percent compared to an open floor plan where hot and cold air recirculate freely.

Containment systems take this further. Physical barriers (curtains, panels, or rigid enclosures) seal either the cold aisle or the hot aisle to prevent recirculation entirely. Hot aisle containment with a plenum ceiling can support 20 to 25 kW per rack with a standard two-tile cold aisle, and 30 kW or more with a three-tile cold aisle. Cold aisle containment with end-of-aisle doors achieves similar results. Neither approach is superior in all cases. The choice depends on fire suppression requirements, ceiling height, and the facility's air handling architecture.

Containment is table stakes in 2026. Any operator running 10 kW or more per rack without containment is wasting energy and likely running hot spots that shorten hardware life. The ROI is measured in months, not years.

Air Cooling: The Workhorse and Its Limits

Computer Room Air Conditioning (CRAC) units use a direct expansion refrigerant cycle to cool air. Computer Room Air Handler (CRAH) units use chilled water from a central plant. Both blow conditioned air into the data hall, typically through a raised floor plenum or overhead ductwork. CRACs are simpler and self-contained. CRAHs are more efficient at scale because they decouple refrigeration from air distribution and allow the central plant to optimize across multiple air handlers.

Raised floor plenums dominated data center design for decades. Conditioned air pressurizes the space below the raised floor and enters the cold aisle through perforated tiles. The approach works well at moderate densities but develops problems at higher loads: uneven tile airflow, bypass air escaping through cable cutouts, and insufficient static pressure to deliver adequate volume to high-density rows. Hard floor designs with overhead or in-row cooling units have gained traction in new builds because they eliminate the plenum's inefficiencies and deliver air directly where it is needed.

In-row cooling units sit between racks in the row, drawing hot exhaust air directly from the hot aisle and returning cooled air to the cold aisle. The short air path improves efficiency and responsiveness. Rear-door heat exchangers (RDHx) mount on the back of individual racks, using chilled water coils to absorb heat from the server exhaust before it enters the room. RDHx units can handle 30 to 40 kW per rack depending on water temperature and flow rate, making them a useful bridge technology between pure air cooling and full liquid cooling.

The ceiling on air cooling in a well-designed facility is roughly 25 to 30 kW per rack. Push beyond that and the volume of air required, the fan energy to move it, and the noise generated all become impractical. A single Nvidia DGX B200 system consumes over 14 kW. A GB200 NVL72 rack draws 120 kW. Air cooling cannot touch these workloads. Period.
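
The sensible-heat equation makes the ceiling concrete. Required airflow scales linearly with heat load at a fixed air temperature rise, and the volumes become absurd quickly. A rough sketch, assuming standard air properties and a 12 K rise across the servers:

```python
RHO_AIR = 1.2    # kg/m^3, air density near sea level at ~20 C
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3s(heat_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry heat_kw at a given air temperature rise."""
    return heat_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_k)

for rack_kw in (10, 30, 120):
    flow = airflow_m3s(rack_kw, delta_t_k=12.0)
    cfm = flow * 2118.88  # 1 m^3/s = 2118.88 cubic feet per minute
    print(f"{rack_kw:>4} kW rack: {flow:5.2f} m^3/s  (~{cfm:,.0f} CFM)")
```

Under these assumptions, a 120 kW rack needs close to 18,000 CFM pushed through a single rack footprint, which is why GB200-class hardware skips air entirely.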

Direct-to-Chip Liquid Cooling: The 2026 Default for AI

A cold plate is a metal block, typically copper or aluminum, machined with internal microchannels through which liquid coolant flows. The plate bolts directly onto a GPU or CPU, absorbing heat through conduction at the chip surface and carrying it away through a closed loop to a coolant distribution unit (CDU). The CDU transfers heat from the facility water loop to a secondary loop connected to the building's heat rejection system. The liquid touching the chip is typically a water-glycol mixture. Nothing exotic. Nothing regulated.
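
The physics favor liquid by a wide margin: water's volumetric heat capacity is roughly 3,500 times that of air. A sketch of the loop-side flow a dense rack needs, assuming plain water properties (a glycol mix shifts density and specific heat somewhat):

```python
RHO_WATER = 998.0  # kg/m^3
CP_WATER = 4186.0  # J/(kg*K)

def coolant_flow_lpm(heat_kw: float, delta_t_k: float) -> float:
    """Coolant flow in liters per minute to absorb heat_kw at a given coolant temperature rise."""
    mass_flow = heat_kw * 1000.0 / (CP_WATER * delta_t_k)  # kg/s
    return mass_flow / RHO_WATER * 1000.0 * 60.0           # L/min

# A 120 kW rack with a 10 K rise across the cold plates:
print(f"{coolant_flow_lpm(120, 10):.0f} L/min")  # ~172 L/min
```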

Cold plates are commercially mature, broadly compatible with existing server form factors, and run on fluids that no regulatory body on earth is coming for. Direct-to-chip cold plate cooling commands 47 percent of the AI data center liquid cooling segment. It is the technology Nvidia specifies for its GB200 NVL72 and the forthcoming Vera Rubin platform. It retrofits into standard 19-inch racks. It scales linearly.

The vendor landscape consolidated fast in 2025 and 2026. Ecolab agreed to acquire CoolIT Systems for $4.75 billion in March 2026. CoolIT has been in liquid cooling for 25 years. Its CDUs and cold plates serve six of the world's top ten supercomputers. Expected revenue over the next 12 months: approximately $550 million. The deal signals that industrial water treatment giants see data center cooling as their next growth vertical.

Eaton completed its $9.5 billion acquisition of Boyd Thermal in March 2026. Boyd Thermal has forecast 2026 sales of $1.7 billion, of which $1.5 billion is in liquid cooling. Eaton CEO Craig Arnold called it a "grid-to-chip" solution, combining Eaton's power distribution infrastructure with Boyd's thermal components. At 22.5 times estimated 2026 EBITDA, the multiple tells you exactly how much the market values cooling capability right now.

Schneider Electric acquired a controlling interest in Motivair in February 2025 and unveiled a combined liquid cooling portfolio in early 2026 spanning CDUs from 105 kW to 2.5 MW, cold plates, rear-door heat exchangers, and chillers. Motivair's CDU technology powers six of the world's top ten supercomputers. Its fourth U.S. manufacturing facility opened in Buffalo, New York in June 2025.

Immersion Cooling: Single-Phase vs. Two-Phase

In immersion cooling, servers are submerged in a dielectric fluid that absorbs heat directly from all components simultaneously. No fans. No cold plates. The entire board is in contact with the coolant.

Single-phase immersion uses fluids that remain liquid throughout the process. Hydrocarbon-based or synthetic oils absorb heat through convection. The warm fluid circulates to a heat exchanger where it transfers heat to a facility water loop. Single-phase systems are simpler, use PFAS-free fluids, and held 80.9 percent of the data center immersion cooling market in 2024. That share is growing.

Two-phase immersion works differently. The fluid boils at the chip surface, absorbing enormous amounts of heat through the phase change from liquid to vapor. The vapor rises, condenses on a coil, and drips back down. The thermal transfer efficiency is superior to any other cooling method available. PUE values of 1.02 to 1.03 have been demonstrated. But the fluids that made two-phase work were PFAS-based, and the supply chain for those fluids is gone.
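
The advantage is a latent-heat story: boiling moves far more energy per kilogram of fluid than sensible heating can. A rough comparison, assuming a latent heat near 100 kJ/kg and a liquid specific heat near 1.1 kJ/(kg·K), figures in the range of engineered dielectric fluids (actual properties vary by product):

```python
H_FG = 100_000.0   # J/kg, assumed latent heat of vaporization
CP_FLUID = 1100.0  # J/(kg*K), assumed specific heat of the dielectric liquid
DELTA_T = 10.0     # K, sensible temperature rise in a single-phase loop

heat_kw = 100.0  # tank load

two_phase_kg_s = heat_kw * 1000.0 / H_FG                     # mass boiled off per second
single_phase_kg_s = heat_kw * 1000.0 / (CP_FLUID * DELTA_T)  # mass circulated per second

print(f"Two-phase:    {two_phase_kg_s:.1f} kg/s of vapor generated")
print(f"Single-phase: {single_phase_kg_s:.1f} kg/s of fluid circulated")
print(f"Ratio: ~{single_phase_kg_s / two_phase_kg_s:.0f}x more fluid movement without boiling")
```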

3M announced in December 2022 that it would cease all PFAS manufacturing by the end of 2025. The last day to order Novec fluids was March 31, 2025. 3M was facing over 4,000 lawsuits and a $12.5 billion settlement with more than 11,000 U.S. public water systems over PFAS contamination. The EPA designated PFOA and PFOS as hazardous substances under CERCLA in April 2024. Microsoft, Meta, and Google all walked away from two-phase immersion research. The liability math was simple: deploying forever chemicals in facilities with 20 to 30 year lifespans creates cleanup exposure that no CFO would approve.

Chemours developed Opteon 2P50, an HFO-based alternative with zero ozone depletion potential and a global warming potential of 10, targeting commercial production in 2026 through a manufacturing deal with Navin Fluorine. Shell secured Intel certification for its single-phase immersion fluid, unlocking preferred-vendor status in 2026 cloud tenders. But the EU's PFAS restriction proposal covers over 10,000 substances, and ECHA's final opinions are expected by end of 2026. Any vendor building a two-phase product around fluorinated chemistry is building on regulatory ground that may shift within 18 months.

The immersion vendors that survived the PFAS crisis are single-phase players. Trane Technologies completed its acquisition of LiquidStack in early 2026, adding immersion and direct-to-chip capabilities to Trane's plant-level thermal infrastructure. GRC (Green Revolution Cooling) continues to expand its single-phase immersion platform, partnering with Samsung C&T, LG Electronics, and SK Enmove. Submer raised $55.5 million, launched data center design and construction business units, and signed a 1 GW AI data center MOU with the government of Madhya Pradesh, India.

Coolant Distribution Units: The Plumbing No One Talks About

A CDU is the bridge between the IT cooling loop and the facility rejection loop. It matches flow rates, manages pressure, monitors coolant quality, and controls temperature setpoints. Without a CDU, liquid cooling does not work. The unit receives warm coolant from cold plates or immersion tanks, passes it through a heat exchanger to transfer heat to the building's chilled or condenser water system, and returns cooled fluid to the IT equipment.

CDU capacity ranges from under 100 kW for edge and small deployments to over 2.5 MW for hyperscale racks. Dell'Oro Group has flagged the CDU market as potentially approaching saturation in terms of vendor count, with dozens of manufacturers now competing. The technology itself is not complex. But reliability, serviceability, and integration with building management systems separate serious products from commodity hardware. Redundancy matters. A CDU failure in a liquid-cooled environment is a thermal emergency measured in minutes, not hours.
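
That "minutes, not hours" claim falls directly out of the loop's thermal mass. A back-of-envelope sketch, assuming water properties, a hypothetical 200-liter loop, and 20 K of headroom before chips throttle:

```python
CP_WATER = 4186.0  # J/(kg*K)

def ride_through_s(loop_liters: float, headroom_k: float, load_kw: float) -> float:
    """Seconds until the loop's coolant warms through its headroom with no heat rejection."""
    thermal_mass_j_per_k = loop_liters * CP_WATER  # ~1 kg per liter of water
    return thermal_mass_j_per_k * headroom_k / (load_kw * 1000.0)

# A 120 kW rack served by a loop holding 200 liters, with 20 K of
# headroom between normal supply temperature and throttling:
seconds = ride_through_s(200, 20, 120)
print(f"~{seconds / 60:.1f} minutes before intervention is mandatory")  # ~2.3 minutes
```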

Nvidia Vera Rubin and the 45 Degree Celsius Inflection

Jensen Huang stood on stage in early 2026 and said something that sent HVAC company stock prices down in real time: Vera Rubin NVL72 racks would run on 45 degree Celsius warm water, and no chillers would be needed.

The physics are straightforward. Silicon junctions throttle around 100 degrees Celsius. With a 45 degree supply temperature, you still have enough delta-T to keep chips well within operating range. And 45 degree water can be cooled to ambient using nothing more than dry coolers or adiabatic assist systems. No mechanical refrigeration. No chiller plant. No compressor energy. The cooling power draw drops dramatically.
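
A simplified version of that delta-T budget, using an assumed cold-plate thermal resistance and chip power (real values depend on plate design, flow rate, and the silicon itself):

```python
SUPPLY_C = 45.0        # warm-water supply temperature
COOLANT_RISE_K = 10.0  # coolant temperature rise through the cold plate
R_JC = 0.025           # K/W, assumed junction-to-coolant thermal resistance
CHIP_W = 1200.0        # W, assumed GPU power in the Blackwell/Rubin class

# Worst case: the junction sits above the warmest coolant it sees.
t_junction = SUPPLY_C + COOLANT_RISE_K + R_JC * CHIP_W
print(f"Estimated junction temperature: {t_junction:.0f} C")  # ~85 C, below the ~100 C throttle point
```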

This is a genuine architectural shift. Most existing liquid-cooled deployments, including Nvidia's own Blackwell GB200 systems, run on 35 to 40 degree supply water. Dropping the chiller requirement at 45 degrees opens warm-water cooling in nearly every climate on earth. The exception: when ambient temperature approaches or exceeds 45 degrees Celsius, which happens in Phoenix, Dubai, and parts of India during peak summer. Those locations still need adiabatic assist or trim chillers for peak days. Everywhere else, the chiller plant disappears.

Vera Rubin racks are expected to arrive in data centers in fall 2026. They will be 100 percent liquid cooled. The thermal design sets the direction for every GPU platform that follows.

Hybrid Cooling: The Reality for the Next Decade

Pure liquid-cooled facilities exist. They are new builds designed from the ground up for AI training workloads. They represent a fraction of global data center capacity.

The vast majority of the world's data centers were built for 5 to 15 kW racks with raised floors and CRAC units. These facilities do not disappear because Nvidia released a new GPU. They get retrofitted. Hybrid cooling architectures that combine existing air handling with supplemental liquid cooling loops have become the standard approach for retrofit projects. A facility running mostly general-purpose compute at 8 to 12 kW per rack might add two rows of liquid-cooled cabinets at 60 kW per rack for an AI training cluster. The air system handles the legacy load. The liquid system handles the GPUs. Both reject heat to the same central plant.

Retrofit costs run $2 to $3 million per megawatt, with potential energy savings of 40 percent for AI workloads. Liquid cooling systems carry a 25 to 40 percent premium over traditional CRAC units. The ROI depends entirely on the density and utilization of the liquid-cooled racks. At 60 kW or above with sustained GPU utilization, payback periods compress to under three years. At lower densities or intermittent workloads, the economics get harder to justify.
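
The energy half of that equation is easy to sketch. A minimal example, assuming a legacy PUE of 1.5 improved to 1.2 on the retrofitted rows and an illustrative $0.08 per kWh:

```python
HOURS_PER_YEAR = 8760

def annual_savings(it_mw: float, pue_before: float, pue_after: float,
                   usd_per_kwh: float) -> tuple[float, float]:
    """Energy and dollars saved per year from a PUE improvement on a given IT load."""
    it_kwh = it_mw * 1000 * HOURS_PER_YEAR
    saved_kwh = it_kwh * (pue_before - pue_after)  # overhead energy eliminated
    return saved_kwh, saved_kwh * usd_per_kwh

kwh, usd = annual_savings(1.0, 1.5, 1.2, 0.08)
print(f"{kwh / 1e6:.1f} GWh/yr saved, ~${usd:,.0f}/yr per MW of IT load")
```

Whether savings like these pay back a $2 to $3 million per megawatt retrofit in three years or ten depends on the density, utilization, and power price plugged in, which is why the economics hinge on sustained GPU load.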

Retrofit projects now account for 58 percent of the HVAC services market in data centers, and that share is growing. Thirty-seven percent of operators report space constraints and structural limitations when upgrading legacy facilities. The piping, the weight of CDUs, the need for secondary containment under liquid-cooled rows: these are real engineering problems that slow deployment timelines.

Hybrid will be the dominant model for years. The installed base of air-cooled facilities is enormous. The transition is happening rack by rack, row by row, not building by building.

Heat Rejection: The New Bottleneck

Every watt of heat removed from a chip must eventually leave the building. Heat rejection is the final link in the thermal chain, and in 2026 it has become the binding constraint for many new deployments.

Evaporative cooling towers are the most thermally efficient heat rejection method. Water evaporates, absorbing heat, and the cooled water returns to the system. A large cooling tower can reject hundreds of megawatts of heat. The catch: water consumption. A 1 MW data center using evaporative cooling can consume over 25 million liters (roughly 6.6 million gallons) of water per year. In water-stressed regions, that volume is increasingly unavailable or politically untenable.
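
Water Usage Effectiveness (WUE), liters consumed per kilowatt-hour of IT energy, makes figures like that checkable. A sketch across an assumed WUE range typical of evaporative systems (actual values vary with climate and cycles of concentration):

```python
def annual_water_liters(it_mw: float, wue_l_per_kwh: float) -> float:
    """Annual cooling water use from IT load and water usage effectiveness."""
    return it_mw * 1000 * 8760 * wue_l_per_kwh

for wue in (1.8, 2.9):
    liters = annual_water_liters(1.0, wue)
    print(f"WUE {wue}: {liters / 1e6:.0f}M liters/yr (~{liters / 3.785e6:.1f}M gallons)")
```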

Dry coolers use finned coils and fans to transfer heat from water to ambient air with no evaporation. Zero water consumption. But dry coolers lose effectiveness as ambient temperature rises. In a Phoenix summer, the approach temperature between coolant and ambient narrows, fan speeds increase following a cube law (double the speed, eight times the power), and the cooling capacity of the system degrades precisely when it is needed most.
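
The cube law deserves to be seen in numbers. A sketch of the fan affinity relationship (flow scales with speed, pressure with its square, power with its cube):

```python
def fan_power_kw(base_kw: float, base_rpm: float, new_rpm: float) -> float:
    """Fan affinity law: shaft power scales with the cube of fan speed."""
    return base_kw * (new_rpm / base_rpm) ** 3

# A dry-cooler fan drawing 5 kW at 600 RPM:
for rpm in (600, 750, 900, 1200):
    print(f"{rpm} RPM: {fan_power_kw(5.0, 600, rpm):.1f} kW")
# 600 RPM: 5.0 kW, 750: 9.8 kW, 900: 16.9 kW, 1200: 40.0 kW
```

Doubling fan speed to chase a shrinking approach temperature costs eight times the power, which is the dry cooler's summer problem in one line.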

Vertiv announced its agreement to acquire ThermoKey in March 2026, specifically to expand its heat rejection portfolio. ThermoKey, founded in 1991 in Italy, manufactures heat exchangers, dry coolers, and air-cooled condensers. The acquisition is expected to close in Q2 2026. Vertiv CEO Giordano Albertazzi framed heat rejection as the part of the thermal chain where capacity constraints are tightest. He is right. You can build liquid cooling loops and CDUs relatively quickly. Manufacturing and deploying the outdoor heat rejection equipment to match those loops takes longer, requires more physical space, and in many jurisdictions faces permitting delays.

Heat rejection is where the thermal chain meets the physical world: land, water, ambient temperature, noise ordinances, and local regulation. No amount of clever engineering inside the data hall matters if you cannot get the heat out of the building.

Water Consumption and the Regulatory Reckoning

A UC Riverside study led by Shaolei Ren, published in March 2026, found that U.S. data centers will need 697 million to 1.45 billion gallons of additional daily peak water capacity within four years. That is roughly equivalent to the daily water supply of New York City. The infrastructure investment required: $10 billion to $58 billion.

The study's most important finding is not the annual average. It is the peak. Daily water demand from evaporative cooling can spike 6 to 10 times higher than average, with some planned facilities exceeding a 30-fold spike. Municipal water systems are engineered for residential demand curves, not for industrial loads that swing wildly with ambient temperature and compute utilization. Ren's team argued that "water is a hidden and even more binding constraint" than power in many communities.
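
The peak-to-average gap is easy to illustrate. A sketch for a hypothetical 100 MW campus, applying the study's 6x to 10x spike multipliers to an assumed average WUE of 1.8 L/kWh:

```python
def daily_water_gal(it_mw: float, wue_l_per_kwh: float, peak_multiplier: float = 1.0) -> float:
    """Daily water demand in gallons, scaled by a peak-day multiplier."""
    liters = it_mw * 1000 * 24 * wue_l_per_kwh * peak_multiplier
    return liters / 3.785

avg = daily_water_gal(100, 1.8)
print(f"Average day: {avg / 1e6:.1f}M gallons")
for mult in (6, 10):
    print(f"{mult}x peak day: {daily_water_gal(100, 1.8, mult) / 1e6:.1f}M gallons")
```

An average-day draw just above a million gallons becomes an eleven-million-gallon peak day, and the municipal system has to be built for the peak.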

Individual large data centers can withdraw over 1 million gallons daily, with some facilities allocated up to 8 million gallons per day. AI-driven data centers consumed approximately 17 billion gallons of water in 2023, with projections showing usage surging to 68 billion gallons by 2028.

The regulatory response is accelerating. Moratorium bills have been introduced in 11 states in 2026, though they face industry resistance. Dozens of municipalities have imposed local construction pauses without waiting for state action. More than 300 data center bills have been filed across 30-plus states in just the first six weeks of 2026, marking a shift from incentive-based policies to regulatory oversight. Minnesota established separate water permitting requirements for data centers in 2025. More states are following.

The industry's response has been uneven. Microsoft has unveiled zero-water data center designs for desert climates. Google publishes water usage data by facility. Many operators disclose nothing. The gap between best practice and standard practice is wide, and regulators are losing patience with voluntary disclosure frameworks that produce vague annual averages instead of peak demand data.

Market Numbers: Where the Money Is Going

The data center liquid cooling market roughly doubled in 2025, reaching close to $3 billion in manufacturer revenue, with Dell'Oro Group projecting the market to approach $7 billion by 2029. Alex Cordovil, research director at Dell'Oro, put it directly: "Liquid cooling has crossed a critical threshold. What was once treated as an optional efficiency upgrade is now a functional requirement for large-scale AI deployments."

Dell'Oro had previously forecast cumulative liquid cooling revenue topping $15 billion over five years in a July 2024 report, before revising to the roughly $7 billion annual run rate by 2029 in its January 2026 update. GPU thermal design power is projected to exceed 4,000 watts by 2029, making liquid cooling structurally essential.

The total data center cooling market, including air and liquid, is valued at $10.8 billion in 2025 and projected to reach $25.1 billion by 2031, a 15.1 percent CAGR per Mordor Intelligence.

Adoption surveys tell the demand story. Fifty-nine percent of data center operators plan to implement liquid cooling within five years, according to a November 2025 S&P Global 451 Research survey. Only 45 percent of facilities now run purely on air cooling, down from 48 percent in 2024. The tipping point is behind us.

The M&A activity alone confirms the trajectory. Eaton paid $9.5 billion for Boyd Thermal. Ecolab paid $4.75 billion for CoolIT. Trane acquired LiquidStack. Schneider took a controlling stake in Motivair. Vertiv moved to acquire ThermoKey. In the span of 18 months, the five largest thermal infrastructure deals in data center history all closed. That is not a trend. That is a market repricing the value of cooling at a fundamental level.

What Comes Next

The trajectory is set. Air cooling will continue to serve general-purpose compute at moderate densities. It is not going away. But every new AI training cluster, every GPU-dense inference deployment, every rack above 30 kW will require some form of liquid cooling. The question is no longer whether to adopt liquid cooling. The question is which architecture, which vendor, and how fast you can get capacity online.

Direct-to-chip cold plates will remain the dominant liquid cooling technology through at least 2028. They are mature, they retrofit, and they are what Nvidia specifies. Immersion cooling will grow in niches where its zero-fan, full-component coverage advantages justify the operational complexity. Two-phase immersion will survive in a narrow band of ultra-high-density applications, constrained by fluid availability and regulatory uncertainty.

The real bottlenecks are downstream. Heat rejection capacity. Water availability. Permitting timelines. The chip-level thermal problem is solved. The building-level and community-level thermal problems are just getting started. Every operator planning a liquid-cooled deployment in 2026 should be spending as much time on their heat rejection strategy and water sourcing as they spend on CDU specifications. The rack is the easy part. Getting the heat out of the building and out of the neighborhood is where the hard engineering begins.