Resources
Data center cooling has its own language. CDUs, cold plates, PUE, PFAS, two-phase immersion. Whether you are speccing your first liquid-ready facility or getting up to speed on the thermal management buildout, this is the reference we wish existed when we started covering the industry.
Adiabatic cooling
A cooling method that reduces air temperature through water evaporation. Warm outside air passes through wetted media or a fine mist, and the evaporation absorbs heat from the air. Used in economizer systems and dry coolers to reduce mechanical chiller load. Effective in hot, dry climates. Consumes water, which makes it a target of water-use regulations in states like Arizona and Virginia.
Air handler (AHU)
A large, centralized unit that conditions and circulates air through ductwork to the data hall. Contains fans, filters, heating coils, and cooling coils. Common in legacy facilities and perimeter cooling designs. Being displaced in high-density builds by in-row cooling and liquid cooling systems that reject heat closer to the source.
ASHRAE
The standards body (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) that sets recommended and allowable operating temperature and humidity ranges for data center equipment. ASHRAE's TC 9.9 committee publishes the thermal guidelines that most operators use to set supply air temperatures and evaluate cooling strategies. The recommended envelope of 18-27°C (64.4-80.6°F), widened to that range in the 2008 guidelines and retained in later editions, enabled wider free cooling adoption. ASHRAE also publishes standards for liquid cooling connections and facility water quality.
Blanking panel
A filler panel installed in unused rack unit positions to prevent hot exhaust air from recirculating to the cold aisle. A simple, cheap component that has an outsized impact on cooling efficiency. Missing blanking panels in a single rack can raise inlet temperatures by 5-10°F and force CRACs to work harder than necessary.
CDU (coolant distribution unit)
The central hub of a liquid cooling loop. A CDU receives heated coolant returning from servers, transfers that heat to a facility water loop through a heat exchanger, and pumps cooled fluid back to the racks. CDUs regulate flow rate, temperature, and pressure across the secondary cooling loop. They sit between the IT hardware and the facility's heat rejection infrastructure. Lead times for CDUs in 2025-2026 range from 16 to 30+ weeks depending on the vendor, making them one of the longest-lead components in the liquid cooling supply chain.
Chiller
A mechanical refrigeration system that produces chilled water for cooling distribution. Data center chillers typically use vapor compression cycles with scroll, screw, or centrifugal compressors. Chillers are the largest single energy consumers in traditional air-cooled facilities, often accounting for 30-40% of total cooling energy. Air-cooled chillers reject heat to outdoor air. Water-cooled chillers reject heat through cooling towers, which adds water consumption to the equation.
Cold aisle / hot aisle
The standard airflow management layout in data centers. Server racks face each other in alternating rows, creating cold aisles (where chilled supply air enters the front of servers) and hot aisles (where heated exhaust exits the rear). Containment systems seal either the cold or hot aisle to prevent mixing. This layout is the foundation of air-cooled data center design. At rack densities above 20-25kW, even well-contained hot aisle / cold aisle setups begin to struggle.
Cold plate
A metal block, typically copper or aluminum, that mounts directly to a heat-generating component like a CPU or GPU. Coolant flows through internal channels machined into the plate, absorbing heat through conduction. Cold plates are the contact point in direct-to-chip liquid cooling systems. They currently dominate the liquid cooling market with roughly 47% share. Nvidia's GB200 NVL72 and the upcoming Vera Rubin platforms ship with cold plate liquid cooling as the standard thermal solution.
Containment
Physical barriers that separate hot exhaust air from cold supply air in a data center. Cold aisle containment encloses the cold aisle with doors and a roof. Hot aisle containment seals the hot aisle and routes exhaust directly to return plenums. Proper containment can improve cooling efficiency by 20-40% over open floor plans. A practical prerequisite for running an air-cooled facility above 10kW per rack without wasting energy.
Cooling tower
An evaporative heat rejection device that cools facility water by exposing it to outdoor air. Water cascades over fill media while fans draw air across it, and evaporation removes heat. Cooling towers are the dominant heat rejection method for large data centers. They are also the primary driver of data center water consumption. A single 1MW data center can consume 25-50 million liters of water per year through cooling tower evaporation, depending on climate and PUE.
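A back-of-the-envelope check on that figure, sketched in Python. The inputs (all heat rejected by evaporation, a latent heat of roughly 2,400 kJ/kg, a 30% blowdown allowance) are illustrative assumptions, not measured values:

```python
# Rough estimate of cooling tower water use for a 1MW IT load.
# Assumptions (illustrative): all heat is rejected by evaporation,
# latent heat of vaporization ~2,400 kJ/kg at tower conditions,
# and blowdown/drift add ~30% on top of the evaporated water.

it_load_kw = 1_000             # 1MW of IT load
hours_per_year = 8_760
latent_heat_kj_per_kg = 2_400  # approx. latent heat of water near tower temps
blowdown_factor = 1.3          # extra water discharged to control mineral buildup

heat_rejected_kj = it_load_kw * hours_per_year * 3_600          # kJ per year
evaporated_liters = heat_rejected_kj / latent_heat_kj_per_kg    # 1 kg of water ~ 1 liter
total_liters = evaporated_liters * blowdown_factor

print(f"Evaporation: {evaporated_liters / 1e6:.1f} million liters/year")
print(f"With blowdown: {total_liters / 1e6:.1f} million liters/year")
# Roughly 13 million liters evaporated and 17 million with blowdown: the same
# order of magnitude as the figures quoted above. Real facilities also run
# humidification and higher cycles of concentration, which push totals higher.
```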
CRAC / CRAH (computer room air conditioner / computer room air handler)
Floor-mounted units that cool data center air. A CRAC uses a built-in refrigeration compressor to cool air directly. A CRAH uses chilled water from a central plant and blows air across a cooling coil. CRAHs are more energy efficient and common in larger facilities. Both typically distribute cold air through a raised floor plenum. At rack densities above 15-20kW, perimeter CRACs and CRAHs cannot deliver enough airflow to keep pace with the thermal load.
Delta T (ΔT)
The temperature difference between the supply and return of a cooling medium. In air cooling, Delta T is the difference between cold aisle supply air and hot aisle return air. In liquid cooling, it is the difference between coolant entering and leaving the cold plates or immersion tank. A higher Delta T means more heat is being captured per unit of coolant flow, which generally means the system is working efficiently. Typical Delta T targets for liquid cooling loops range from 10-20°C.
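The relationship behind this is Q = ṁ · cp · ΔT. A short sketch, using an assumed 100kW load and water as the coolant, shows why raising Delta T cuts the flow a loop has to move:

```python
# How much water flow does it take to carry a given heat load at a given Delta T?
# Q = m_dot * cp * delta_T  ->  m_dot = Q / (cp * delta_T)

heat_load_kw = 100            # assumed rack/loop heat load
cp_water = 4.186              # specific heat of water, kJ/(kg*K)

for delta_t in (10, 15, 20):  # typical liquid-loop Delta T targets, in degrees C
    flow_kg_per_s = heat_load_kw / (cp_water * delta_t)
    flow_l_per_min = flow_kg_per_s * 60   # ~1 liter per kg for water
    print(f"Delta T {delta_t:>2} C -> {flow_l_per_min:.0f} L/min")

# Delta T of 10 C needs ~143 L/min; 20 C needs ~72 L/min. Doubling Delta T
# halves the flow (and pumping energy) needed to move the same heat.
```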
Dielectric fluid
An electrically non-conductive liquid used in immersion cooling. Servers are submerged directly in the fluid, which absorbs heat from all components simultaneously. Common dielectric fluids include synthetic hydrocarbons, mineral oils, and fluorinated fluids (fluorocarbons). Fluorinated fluids offer superior thermal performance and are used in most two-phase immersion systems, but they face mounting regulatory pressure under PFAS restrictions in the EU and several U.S. states.
Direct-to-chip (DTC) cooling
A liquid cooling approach where coolant is piped directly to cold plates mounted on the hottest components in a server, typically CPUs and GPUs. The rest of the server remains air-cooled. DTC is the most widely adopted form of liquid cooling in production data centers. It handles the highest heat-generating components while requiring less infrastructure change than full immersion. Facilities running DTC still need supplemental air cooling for memory, storage, VRMs, and networking components.
Dry cooler
An air-to-fluid heat exchanger that rejects heat from a liquid loop to outdoor air without using water evaporation. Glycol or water flows through a coil, and fans blow ambient air across it. Dry coolers consume zero water but are limited by ambient air temperature. When outdoor air is warmer than the required supply temperature, a dry cooler alone cannot do the job. Often paired with adiabatic assist (water spray) for peak conditions, creating a hybrid dry cooler.
Economizer
A system that uses outside air or water conditions to cool the data center without running mechanical refrigeration. Airside economizers bring filtered outdoor air directly into the facility when conditions permit. Waterside economizers use cool outdoor air to chill facility water through a heat exchanger or cooling tower, bypassing the chiller. Economizer hours, the number of hours per year when free cooling is available, vary dramatically by climate. A facility in Stockholm might achieve 8,000+ economizer hours. A facility in Phoenix might get 2,000.
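Estimating economizer hours is essentially counting hours in a climate file. A minimal sketch, using synthetic temperatures and an assumed 3°C heat-exchanger approach rather than real site data:

```python
import random

# Synthetic stand-in for a year of hourly dry-bulb temperatures (degrees C).
# A real analysis would use TMY or weather-station data for the site.
random.seed(0)
hourly_temps_c = [random.gauss(12, 9) for _ in range(8_760)]

approach_c = 3.0  # assumed heat-exchanger approach temperature

def economizer_hours(required_supply_c):
    """Hours per year when outdoor air plus approach can meet the supply setpoint."""
    return sum(1 for t in hourly_temps_c if t + approach_c <= required_supply_c)

print("Air cooling    (supply 24 C):", economizer_hours(24.0), "hours")
print("Liquid cooling (supply 40 C):", economizer_hours(40.0), "hours")
# The warmer the acceptable supply temperature, the more of the year free
# cooling covers, which is why warm-water liquid loops rarely need chillers.
```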
Free cooling
Operating a data center cooling system without mechanical refrigeration by leveraging cold outdoor air or water temperatures. Free cooling reduces energy consumption and operating cost. It is the primary reason hyperscalers build in Nordic countries, the Pacific Northwest, and other cool climates. The ASHRAE expanded temperature envelope (up to 27°C supply) increased free cooling hours across most geographies. At liquid cooling temperatures, where supply water can run 35-45°C, free cooling hours increase further because the heat rejection threshold is higher.
Glycol
A chemical additive (typically propylene glycol or ethylene glycol) mixed with water in cooling loops to lower the fluid's freezing point. Used in outdoor cooling loops and secondary loops exposed to cold ambient conditions. Glycol reduces heat transfer efficiency compared to pure water, so the concentration is kept as low as practical, usually 20-40% depending on the minimum expected temperature.
Heat exchanger
A device that transfers heat between two fluid loops without mixing them. In liquid cooling, heat exchangers sit inside CDUs and transfer heat from the server-side coolant loop to the facility-side water loop. Plate heat exchangers and shell-and-tube designs are the most common in data center applications. Heat exchanger sizing directly determines how much thermal load a cooling system can handle.
Heat reuse
Capturing waste heat from data center cooling systems and redirecting it for productive use: district heating, industrial processes, agriculture, or building HVAC. Liquid cooling makes heat reuse viable because it captures heat at 40-60°C, warm enough to be useful. Air-cooled facilities exhaust heat at 30-35°C, which is too low for most reuse applications without a heat pump. Germany's Energy Efficiency Act mandates waste heat reuse for new data centers starting in 2026. Stockholm already heats 30,000 apartments from data center waste heat. Heat reuse is becoming a revenue line for operators and a permitting advantage in communities skeptical of data center energy consumption.
Hot spot
A localized area within a data center or server where temperatures exceed design thresholds. Hot spots occur when cooling capacity cannot keep up with the thermal load in a specific zone. Common causes include poor airflow management, missing blanking panels, overloaded racks, or cable obstructions. In air-cooled environments, a single 40kW rack in a row of 10kW racks creates a hot spot that the perimeter CRAH was never sized to handle.
Hybrid cooling
A cooling strategy that combines two or more methods in the same facility. The most common hybrid approach pairs direct-to-chip liquid cooling for CPUs and GPUs with air cooling for everything else: memory, SSDs, VRMs, NICs, and fans. Some facilities combine liquid-cooled compute racks with traditional air-cooled storage and networking racks in the same data hall. Hybrid cooling is the de facto standard for new AI-capable facilities because no single cooling method optimally handles every component in a modern server.
Immersion cooling
A cooling method where servers or other IT equipment are fully submerged in a thermally conductive, electrically non-conductive dielectric fluid. The fluid absorbs heat from all components simultaneously, eliminating the need for fans, heat sinks, and traditional airflow management. Two variants exist: single-phase immersion (fluid stays liquid) and two-phase immersion (fluid boils on contact with hot components, then condenses and drips back). Immersion cooling can achieve PUE as low as 1.02-1.03 and handles rack densities above 100kW. Despite the thermal performance advantages, immersion holds roughly 8-12% of the liquid cooling market. The adoption gap comes down to workforce training, facility design changes, and maintenance procedures that operators have not standardized.
In-row cooling
Cooling units installed between server racks within a row, rather than along the perimeter of the data hall. In-row units place cooling capacity closer to the heat source, reducing the distance air must travel and improving efficiency. They use chilled water or refrigerant to cool air drawn from the hot aisle and deliver it directly to the cold aisle. More responsive to local thermal loads than perimeter CRAHs, and common in medium-density deployments (10-25kW per rack).
Liquid cooling
Any cooling method that uses a liquid medium to absorb and transport heat away from IT equipment. The three primary forms are direct-to-chip (cold plates on CPUs/GPUs), rear-door heat exchangers (liquid-cooled doors on standard racks), and immersion cooling (full submersion in dielectric fluid). Liquid carries roughly 3,000 times more heat per unit volume than air. At rack densities above 40kW, liquid cooling transitions from a preference to a physical requirement. Goldman Sachs projects 76% of AI servers will be liquid-cooled by 2026.
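The "roughly 3,000 times" figure comes from volumetric heat capacity (density times specific heat). A quick check with textbook properties at room conditions:

```python
# Volumetric heat capacity = density * specific heat: how much heat a cubic
# meter of coolant carries per degree of temperature rise.

water_density = 1_000     # kg/m^3
water_cp      = 4.186     # kJ/(kg*K)
air_density   = 1.2       # kg/m^3 at roughly room conditions
air_cp        = 1.005     # kJ/(kg*K)

water_vol_heat = water_density * water_cp   # ~4,186 kJ/(m^3*K)
air_vol_heat   = air_density * air_cp       # ~1.2 kJ/(m^3*K)

print(f"Water carries ~{water_vol_heat / air_vol_heat:,.0f}x more heat per unit volume than air")
# Prints ~3,471x, the basis for the 'roughly 3,000 times' figure.
```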
Manifold
A distribution component that splits a single coolant supply line into multiple branches feeding individual servers or racks, and collects return flow back into a single line. Manifolds sit at the rack level or row level and are critical for balancing flow rates across all connected cold plates. Leaking manifold connections are one of the top operational concerns cited by facility managers evaluating liquid cooling for the first time.
Open Compute Project (OCP)
An open-source hardware initiative founded by Facebook (Meta) in 2011. OCP publishes open specifications for data center hardware, including rack designs, server form factors, and cooling standards. The OCP Advanced Cooling Solutions subproject is developing standardized liquid cooling interfaces to reduce vendor lock-in and accelerate adoption. OCP's liquid cooling specifications are increasingly referenced in hyperscale procurement requirements.
PFAS (per- and polyfluoroalkyl substances)
A class of synthetic chemicals known as "forever chemicals" because they do not break down in the environment. Fluorinated dielectric fluids used in two-phase immersion cooling contain PFAS compounds. The EU's proposed PFAS restriction, if enacted, would ban the manufacture and import of most fluorinated cooling fluids. Several U.S. states have enacted or proposed their own restrictions. This regulatory trajectory is the single largest risk factor for the two-phase immersion cooling market. Vendors including 3M (which announced it would exit PFAS manufacturing by the end of 2025) and Solvay are directly affected. The industry is exploring fluorine-free alternatives, but none match the thermal performance of fluorocarbons in two-phase applications.
Plenum
An enclosed space used for air distribution. In raised-floor data centers, the space beneath the raised floor tiles serves as a supply air plenum, distributing chilled air from CRAHs to perforated tiles in the cold aisle. Above the racks, a ceiling plenum can collect hot return air. Plenum design, including depth, obstructions from cabling and piping, and tile placement, directly affects airflow distribution and cooling effectiveness.
PUE (power usage effectiveness)
The ratio of total facility energy to IT equipment energy. A PUE of 1.0 would mean every watt entering the facility goes to compute, with zero overhead. That is physically impossible. Traditional air-cooled facilities run a PUE of 1.5 to 1.8. Well-optimized air-cooled hyperscale facilities achieve 1.1-1.2. Direct-to-chip liquid cooling brings PUE to the 1.05-1.15 range. Two-phase immersion cooling can reach 1.02-1.03. The global average PUE for data centers is approximately 1.58. PUE is the most widely cited efficiency metric in the industry, though it does not capture water consumption, embodied carbon, or total environmental impact.
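The metric itself is a single division. A small example with assumed meter readings for a 1MW IT load:

```python
# PUE = total facility energy / IT equipment energy, over the same period.
# The readings below are assumed values for illustration.

it_energy_kwh = 8_760_000    # a 1MW IT load running for a full year
cooling_kwh   = 3_100_000    # chillers, CRAHs, pumps, fans
other_kwh     = 400_000      # UPS losses, lighting, offices

pue = (it_energy_kwh + cooling_kwh + other_kwh) / it_energy_kwh
print(f"PUE = {pue:.2f}")    # 1.40 for this example
# Every point above 1.0 is overhead: at 1.40, 40% extra energy is spent
# on top of every kWh the servers consume.
```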
Rack density
The total power consumption (and corresponding thermal load) of a single server rack, measured in kilowatts (kW). Traditional enterprise racks run 5-10kW. Standard cloud and colocation racks range from 10-20kW. AI training racks using Nvidia A100/H100 GPUs run 30-70kW. Nvidia's GB200 NVL72 racks draw 120-130kW. The Vera Rubin generation is expected to push higher. Rack density is the forcing function behind the transition to liquid cooling. Air cooling physically cannot handle densities above approximately 40kW per rack without exotic and increasingly impractical airflow engineering.
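A back-of-the-envelope density estimate for an H100-class rack. The server count and per-server overhead below are assumptions for illustration, not a vendor spec:

```python
# Rough rack density estimate for an assumed 8-GPU H100 server configuration.
gpus_per_server   = 8
gpu_tdp_w         = 700        # Nvidia H100 SXM TDP
server_overhead_w = 4_000      # assumed: CPUs, memory, NICs, fans, power conversion losses
servers_per_rack  = 4

server_power_w = gpus_per_server * gpu_tdp_w + server_overhead_w
rack_power_kw  = servers_per_rack * server_power_w / 1_000

print(f"~{server_power_w / 1_000:.1f}kW per server, ~{rack_power_kw:.0f}kW per rack")
# Roughly 9.6kW per server and 38kW per rack, already near the practical
# limit of air cooling before top-of-rack networking is added.
```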
Raised floor
An elevated floor system in a data center consisting of removable tiles supported on pedestals, creating a plenum underneath for air distribution. Chilled air is pushed under the raised floor by CRAHs and delivered to the cold aisle through perforated tiles. Raised floors were the dominant data center design for decades. Newer facilities are increasingly moving to overhead air distribution or liquid cooling, which eliminates the need for a floor plenum entirely.
Rear-door heat exchanger (RDHx)
A liquid cooling device that replaces the rear door of a standard server rack with a heat exchanger coil. Hot exhaust air from the servers passes through the coil, which is fed with chilled or ambient-temperature water. The heat transfers to the liquid loop, and the air exits the rear door at a neutral or near-neutral temperature. RDHx systems can handle 30-50kW per rack without modifying the servers themselves. They are a common retrofit path for facilities that want to increase density without deploying direct-to-chip or immersion infrastructure.
Redundancy (N, N+1, 2N)
The level of backup capacity built into cooling infrastructure. N means exactly enough capacity to handle the load with no backup. N+1 adds one additional unit beyond what is required, so if any single component fails, the system continues operating. 2N provides a fully redundant, parallel cooling system. The redundancy level directly affects both capital cost and availability guarantees. Most colocation SLAs require at minimum N+1 cooling redundancy. Hyperscale operators increasingly design for 2N on the liquid cooling side because a leak or CDU failure in a liquid loop affects more IT capacity than a single CRAH failure in an air-cooled facility.
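Sizing to a redundancy level is simple arithmetic. A sketch with an assumed load and unit capacity:

```python
import math

# How many cooling units does each redundancy level require?
# Load and unit capacity are assumed values for illustration.
thermal_load_kw  = 1_200      # total heat load to reject
unit_capacity_kw = 300        # capacity of one CDU or CRAH

n = math.ceil(thermal_load_kw / unit_capacity_kw)   # bare minimum: 4 units here

print(f"N   = {n} units")      # no margin: any failure drops capacity below the load
print(f"N+1 = {n + 1} units")  # survives one unit failure
print(f"2N  = {2 * n} units")  # a fully redundant, parallel second system
```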
Sensible cooling
Cooling that reduces the temperature of air without changing its moisture content. The opposite of latent cooling, which removes humidity. Data center cooling is overwhelmingly sensible cooling. Servers produce dry heat. The goal is to lower the air (or liquid) temperature, not to dehumidify. This is why data center cooling loads are calculated primarily in terms of sensible heat ratio (SHR), typically 0.95-1.0.
Supply temperature
The temperature of cooling medium (air or liquid) delivered to the IT equipment. For air cooling, ASHRAE recommends 18-27°C (64.4-80.6°F) at the server inlet. For liquid cooling, supply temperatures vary widely by system design. Direct-to-chip systems typically supply coolant at 25-45°C. Warm-water cooling architectures push supply temperatures to 40-50°C, which is high enough to reject heat with dry coolers and eliminate chillers entirely. Higher supply temperatures mean more free cooling hours and lower energy bills.
TDP (thermal design power)
The maximum amount of heat a processor generates under sustained workload, measured in watts. TDP defines the cooling capacity required for a given chip. Nvidia's H100 GPU has a TDP of 700W. The B200 reaches 1,000W. Vera Rubin is expected to exceed that. TDP is the number that cooling system designers start from when sizing cold plates, CDUs, and heat rejection capacity. When GPU TDP doubles, the entire cooling chain must be re-specced.
Thermal conductivity
A material's ability to conduct heat, measured in watts per meter-kelvin (W/m·K). Copper (386 W/m·K) and aluminum (205 W/m·K) are used in cold plates because they transfer heat rapidly from the chip surface to the coolant. Water has a thermal conductivity of 0.6 W/m·K, low compared to metals but roughly 25 times higher than air (0.025 W/m·K). This physical gap is why liquid cooling captures and moves heat so much more effectively than air cooling.
Two-phase cooling
A cooling method where the working fluid undergoes a phase change from liquid to vapor to absorb heat. In two-phase immersion cooling, dielectric fluid boils on contact with hot components. The vapor rises to a condenser, releases heat, returns to liquid, and drips back into the tank. The phase change absorbs significantly more energy than simple liquid convection (latent heat vs. sensible heat), which is why two-phase systems achieve the highest heat removal rates. The trade-off: most two-phase fluids are fluorinated compounds subject to PFAS regulations.
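A rough comparison of latent versus sensible heat pickup, using approximate property values for a representative fluorinated fluid (check a specific fluid's datasheet before relying on them):

```python
# Latent vs. sensible heat pickup per kilogram of a representative two-phase
# dielectric fluid. Property values are approximate and illustrative only.

latent_heat_kj_per_kg = 88     # heat absorbed by boiling 1 kg of the fluid
cp_kj_per_kg_k        = 1.1    # specific heat of the same fluid in its liquid phase
delta_t_k             = 10     # sensible temperature rise in a single-phase loop

sensible_kj_per_kg = cp_kj_per_kg_k * delta_t_k   # ~11 kJ per kg
ratio = latent_heat_kj_per_kg / sensible_kj_per_kg

print(f"Boiling absorbs ~{ratio:.0f}x more heat per kg than a {delta_t_k} K sensible rise")
# About 8x for this fluid, which is the thermodynamic edge behind two-phase immersion.
```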
Warm-water cooling
A liquid cooling approach that uses supply water temperatures of 35-50°C, significantly warmer than traditional chilled water (7-12°C). At these temperatures, heat can be rejected to the outdoors using dry coolers or cooling towers without running mechanical chillers for most or all of the year. Warm-water cooling dramatically reduces energy consumption and enables heat reuse for district heating and industrial processes. IBM pioneered warm-water cooling with systems like the SuperMUC supercomputer at Germany's Leibniz Supercomputing Centre. The approach is gaining traction in new liquid-cooled AI facilities where operators want to eliminate chiller infrastructure entirely.
WUE (water usage effectiveness)
The ratio of annual water consumption (in liters) to IT equipment energy (in kWh). WUE measures how much water a data center uses per unit of compute. Lower is better. Air-cooled facilities using cooling towers typically report WUE of 1.0-2.0 L/kWh. Facilities using dry coolers or liquid cooling with closed-loop heat rejection can achieve WUE approaching 0. WUE is becoming a reporting requirement under EU sustainability regulations and is increasingly scrutinized in water-stressed permitting jurisdictions.
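Computing the metric from meter data takes one line. The readings below are assumed values for a 1MW IT load:

```python
# WUE = annual site water consumption (liters) / annual IT energy (kWh).
# Assumed meter readings for illustration:

annual_water_liters = 15_800_000   # cooling tower makeup plus humidification
annual_it_kwh       = 8_760_000    # a 1MW IT load running all year

wue = annual_water_liters / annual_it_kwh
print(f"WUE = {wue:.2f} L/kWh")    # 1.80 L/kWh for this example
# A dry-cooler or closed-loop design pushes the numerator, and WUE, toward zero.
```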
Cooling Methods
Every cooling method deployed in data centers today, with the specs and trade-offs that matter.
Perimeter air cooling (raised floor)
The legacy standard. CRAHs or CRACs along the perimeter push chilled air through a raised floor plenum to cold aisles. Still running in the majority of data centers worldwide. Effective up to 10-15kW per rack with proper containment. Struggles above 20kW. The installed base is enormous, but no one is building new AI-capable facilities with this design.
In-row cooling
Cooling units placed between racks, shortening the air path and improving responsiveness to local heat loads. Works well for medium-density deployments and mixed environments. Common in enterprise and colocation facilities that need more capacity than perimeter units can deliver but are below the liquid cooling threshold.
Rear-door heat exchanger (RDHx)
Liquid-cooled door replaces the standard rear door on a server rack. Hot exhaust passes through a water coil and exits at neutral temperature. No server modifications required. Retrofittable into existing facilities. Handles moderate-to-high densities and reduces or eliminates the need for room-level air conditioning. A practical bridge for operators increasing density without overhauling their cooling infrastructure.
Direct-to-chip liquid cooling
Coolant piped directly to cold plates mounted on CPUs and GPUs. The dominant liquid cooling method in production AI data centers. Handles the highest heat-generating components while leaving the rest of the server air-cooled. Supported natively by Nvidia's GB200 and Vera Rubin platforms. Requires CDUs, manifolds, and facility plumbing infrastructure, but does not require custom server enclosures or dielectric fluids.
Single-phase immersion cooling
Servers submerged in a non-conductive liquid that remains in a liquid state throughout. Heat transfers from all components to the fluid, which circulates to a heat exchanger. Eliminates fans, heat sinks, and airflow management. Handles extreme densities. Uses hydrocarbon-based or synthetic fluids that are not subject to PFAS restrictions. Servicing requires lifting hardware out of the tank and letting it drain, which adds complexity compared to air-cooled or DTC environments.
Two-phase immersion cooling
Servers submerged in a dielectric fluid engineered to boil at a low temperature. The fluid vaporizes on contact with hot components, rises to a condenser, releases heat, returns to liquid. The phase change absorbs far more energy per unit mass than single-phase convection. Achieves the lowest PUE of any cooling method in production. The constraint: most two-phase fluids are fluorinated compounds facing regulatory restrictions under PFAS legislation. This is the technology with the best thermodynamics and the most uncertain regulatory future.
Key Numbers
The reference numbers that show up in RFPs, spec sheets, earnings calls, and facility design conversations. Bookmark this.
Rack density by workload

| Workload | Typical Density | Cooling Required |
|---|---|---|
| Enterprise / storage | 5-10 kW | Air cooling |
| Cloud / colocation | 10-20 kW | Air cooling with containment |
| High-performance compute | 20-40 kW | In-row or RDHx |
| AI inference | 30-50 kW | RDHx or direct-to-chip |
| AI training (H100/B200) | 50-100 kW | Direct-to-chip liquid cooling |
| AI training (GB200 NVL72) | 120-130 kW | Direct-to-chip liquid cooling (required) |
| Next-gen AI (Vera Rubin) | 200-250 kW (projected) | Liquid cooling (mandatory) |

GPU thermal design power

| GPU | TDP | Cooling |
|---|---|---|
| Nvidia A100 | 400W | Air or liquid |
| Nvidia H100 | 700W | Air or liquid |
| Nvidia B200 | 1,000W | Liquid recommended |
| Nvidia GB200 NVL72 | 1,400W (per GPU tray) | Liquid required |

PUE by cooling method

| Cooling Method | PUE Range |
|---|---|
| Legacy air (perimeter CRAC) | 1.5 - 1.8 |
| Optimized air (in-row, containment) | 1.2 - 1.4 |
| Hyperscale air (economizers) | 1.08 - 1.2 |
| Rear-door heat exchanger | 1.15 - 1.3 |
| Direct-to-chip liquid | 1.05 - 1.15 |
| Single-phase immersion | 1.02 - 1.08 |
| Two-phase immersion | 1.02 - 1.03 |

Temperature reference points

| Parameter | Range |
|---|---|
| ASHRAE recommended inlet (air) | 18-27°C (64-81°F) |
| ASHRAE allowable inlet (air) | 5-45°C (41-113°F) |
| Chilled water supply (traditional) | 7-12°C (45-54°F) |
| Direct-to-chip coolant supply | 25-45°C (77-113°F) |
| Warm-water cooling supply | 35-50°C (95-122°F) |
| Liquid cooling return (typical) | 40-65°C (104-149°F) |
| Heat reuse viable threshold | >40°C (>104°F) |
Go Deeper
The complete technical and commercial map. Air cooling, liquid cooling, immersion, heat rejection, PFAS, warm-water architectures, water regulation, and every market number that matters. 13 sections. 31 sources.
Read the Guide