Technology · April 21, 2026

Schneider Electric Maps the Full Liquid Cooling Stack From 50kW to 1MW Per Rack. The Transition Is Already Underway.

Everyone in the cooling industry knows the math on air cooling broke years ago. What replaces it is not a single architecture but a spectrum of thermal management approaches, each with distinct plumbing, distinct risk profiles, and distinct density ceilings. At a technical session hosted by AeD India Centre, organized by Ravi Kochak and Sanjeev Sharma of Schneider Electric, Sujit Kamble laid out the full progression. Kamble, a General Manager at Schneider Electric and an HVAC specialist with deep deployment experience, walked more than 15 industry professionals through the cooling architecture that carries a data center from 50kW per rack to 1MW per rack, across three tiers with three completely different sets of engineering constraints.

The Air Ceiling Is 50kW. Period.

Kamble put the number plainly. Air cooling, including rear door heat exchangers, supports up to 50kW per rack. That is the absolute ceiling when you have optimized airflow management, hot aisle containment, and high-density CRAC/CRAH configurations. For context, a single NVIDIA GB200 NVL72 rack pulls roughly 120kW. A DGX SuperPOD rack can exceed 130kW. Air cooling does not get you within shouting distance of modern AI training infrastructure.

Kamble stated the 50kW figure without hedging and without a footnote about future innovations in air distribution, and that matters coming from an engineer tracking rack density trends at the world's largest data center cooling vendor. The air chapter is closed.

Direct-to-Chip: 80% of Heat at the Source

Direct-to-chip liquid cooling is where most of the industry is heading first. Cold plates mounted directly to the processor die carry coolant across microchannel structures, pulling heat away before it ever reaches the surrounding air volume. Kamble noted that this approach removes up to 80% of heat at the source. The remaining 20%, generated by memory, VRMs, storage, and networking components, still dissipates into the ambient environment and requires supplemental air handling. So you get a hybrid. Liquid on the processors. Air on everything else.
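To make the split concrete, here is a minimal sketch using the roughly 120kW GB200-class rack cited earlier and the 80% capture figure from the session. The numbers are illustrative, not a Schneider specification.

```python
# Back-of-the-envelope split of rack heat between the liquid loop and the
# residual air handling, using the ~80% direct-to-chip capture figure from
# the session. The 120 kW rack power is the GB200 NVL72 figure quoted
# earlier in this piece; swap in your own rack budget.

def heat_split(rack_kw: float, liquid_capture: float = 0.80) -> tuple[float, float]:
    """Return (kW removed by cold plates, kW left for air handling)."""
    liquid_kw = rack_kw * liquid_capture
    air_kw = rack_kw - liquid_kw
    return liquid_kw, air_kw

liquid, air = heat_split(120.0)
print(f"Cold plates: {liquid:.0f} kW, residual air load: {air:.0f} kW")
# Cold plates: 96 kW, residual air load: 24 kW
# Note: even the "residual" 24 kW is roughly half of the 50 kW air ceiling,
# so the supplemental air system is not trivial.
```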

The critical detail Kamble highlighted is the collaborative design requirement. Cold plates are designed in collaboration with chip manufacturers. The cold plate geometry, the microchannel dimensions, the mounting pressure, and the thermal interface material all have to match the die package. NVIDIA, AMD, and Intel each have specific thermal specifications. A cold plate designed for one GPU does not simply transfer to another. This co-design relationship between the cooling vendor and the silicon vendor is where Schneider's deployment work becomes relevant, because the company operates across multiple chip platforms simultaneously.

CDU Specifications Tell the Real Story

A coolant distribution unit is the heart of any direct-to-chip or immersion deployment. It sits between the facility water loop and the IT equipment loop, managing flow rates, temperatures, pressures, and fluid quality. Kamble provided specific numbers. Schneider's CDUs filter down to 25 microns, handle up to 2.5MW of thermal capacity, carry a 20+ year design lifespan, and include continuous fluid property monitoring.

The 25-micron filtration number deserves attention. Microchannel cold plates have internal passages as narrow as 50 to 100 microns, so a single particle at 30 microns can lodge in a channel restriction, reduce flow, create a local hot spot, and degrade or kill a processor. Filtration works alongside continuous monitoring of glycol concentration, conductivity, and pH, catching degradation in real time before it reaches the cold plate. Coolant chemistry drifts, and in a system where the fluid is in direct thermal contact with servers worth more than $1 million each, quarterly sampling is not a strategy. Operators moving from air-cooled facilities, where the worst contamination scenario is dust on a heat sink fin, into liquid-cooled environments need to internalize the difference in consequences.
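As a sketch of what continuous fluid monitoring means in practice, the snippet below checks a coolant sample against alarm bands. The monitored properties come from the session; the specific thresholds are placeholder values, not Schneider's published limits.

```python
# Illustrative fluid-health check for a CDU loop. Glycol concentration,
# conductivity, and pH are the properties named in the session; the alarm
# bands below are placeholder values for the sketch, not vendor limits.

from dataclasses import dataclass

@dataclass
class CoolantSample:
    glycol_pct: float          # % glycol by volume
    conductivity_us_cm: float  # microsiemens per centimeter
    ph: float

def fluid_alarms(s: CoolantSample) -> list[str]:
    alarms = []
    if not 20.0 <= s.glycol_pct <= 30.0:       # placeholder band
        alarms.append(f"glycol {s.glycol_pct:.1f}% out of band")
    if s.conductivity_us_cm > 10.0:            # placeholder ceiling
        alarms.append(f"conductivity {s.conductivity_us_cm:.1f} uS/cm high")
    if not 7.0 <= s.ph <= 9.5:                 # placeholder band
        alarms.append(f"pH {s.ph:.2f} out of band")
    return alarms

print(fluid_alarms(CoolantSample(glycol_pct=18.5, conductivity_us_cm=4.2, ph=8.1)))
# ['glycol 18.5% out of band']
```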

The 20-year lifespan claim is also worth parsing. A CDU rated for two decades is designed to outlast multiple server refresh cycles. That makes the CDU a capital infrastructure asset, one that stays in place while the IT equipment turns over around it. This changes procurement decisions, maintenance planning, and total cost of ownership modeling in ways that most operators are still working through.
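To make the TCO point concrete, here is a rough sketch of the amortization logic. The CDU price and the refresh cadence are assumptions for illustration, not figures from the session; only the 20-year design life comes from the presentation.

```python
# Rough illustration of why a 20-year CDU changes TCO math: the unit spans
# several IT refresh cycles, so its capital cost is spread across every
# server generation it serves. Dollar figure and refresh interval are
# hypothetical inputs for the sketch.

CDU_LIFESPAN_YEARS = 20      # design life cited in the session
SERVER_REFRESH_YEARS = 5     # assumed refresh cadence
CDU_CAPEX = 250_000          # hypothetical CDU cost, USD

generations_served = CDU_LIFESPAN_YEARS // SERVER_REFRESH_YEARS
capex_per_generation = CDU_CAPEX / generations_served
print(f"One CDU serves ~{generations_served} server generations "
      f"(~${capex_per_generation:,.0f} of CDU capex per generation)")
# One CDU serves ~4 server generations (~$62,500 of CDU capex per generation)
```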

Two-Phase and Full Immersion: The 1MW Tier

At the top of Kamble's density spectrum sits two-phase and full immersion cooling. Servers submerged entirely in dielectric fluid, with thermal capacity reaching up to 1MW per rack. This is the far end of the density curve. No fans. No air handling. Every component, every connector, every stick of memory sits in a bath of engineered fluid that absorbs heat through direct contact and, in two-phase systems, through the phase change from liquid to vapor.

The engineering advantages at 1MW per rack are obvious. The practical challenges are equally well documented. Serviceability. Fluid cost and management. Compatibility testing across every component. Connector corrosion profiles over years of submersion. The industry has been circling immersion for a decade without reaching mass adoption, and Kamble's presentation positioned it correctly: as the high end of a spectrum, not as the default path forward.

PUE Drops from 1.5 to 1.1. Here Is Why.

Kamble cited PUE as low as 1.1 for liquid-cooled facilities versus 1.5 for traditional air-cooled data centers. That 0.4 delta sounds modest until you do the multiplication. At 100MW of IT load, a PUE of 1.5 means 50MW of overhead power going to cooling, lighting, and distribution. A PUE of 1.1 means 10MW of overhead. That is 40MW of saved power. At $0.06 per kWh, that is roughly $21 million per year in energy costs eliminated.
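Spelled out, the arithmetic uses only the figures quoted above and the definition of PUE (total facility power equals PUE times IT power):

```python
# The PUE arithmetic from the paragraph above. Electricity price and IT load
# match the figures in the text; everything else follows from the PUE
# definition: total facility power = PUE x IT power.

IT_LOAD_MW = 100.0
PRICE_PER_KWH = 0.06
HOURS_PER_YEAR = 8760

def overhead_mw(pue: float, it_mw: float = IT_LOAD_MW) -> float:
    """Non-IT (cooling, lighting, distribution) power implied by a PUE."""
    return it_mw * (pue - 1.0)

saved_mw = overhead_mw(1.5) - overhead_mw(1.1)   # 50 MW - 10 MW = 40 MW
annual_savings = saved_mw * 1000 * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"Power avoided: {saved_mw:.0f} MW, ~${annual_savings/1e6:.1f}M per year")
# Power avoided: 40 MW, ~$21.0M per year
```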

The reason liquid cooling drives PUE down so aggressively is thermodynamic. Water has roughly 3,500 times the volumetric heat capacity of air. Moving the same amount of heat requires dramatically less fluid volume and dramatically less fan energy. In a direct-to-chip system, the pump energy to circulate coolant through cold plates and a CDU is a fraction of what rows of CRAC units and overhead fans consume in an air-cooled hall. Eliminate the fans, eliminate the massive air handlers, eliminate the raised floor pressure requirements, and the parasitic power load drops by an order of magnitude.
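A quick back-of-the-envelope comparison shows where the fan energy goes. For the same heat load and the same coolant temperature rise, the volumetric flow you must move scales inversely with volumetric heat capacity; the fluid properties below are standard textbook values at roughly room temperature, and the heat load and temperature rise are arbitrary example inputs.

```python
# Same heat, same temperature rise: compare the volumetric flow of air vs.
# water needed to carry it, from Q = rho * cp * Vdot * dT.

HEAT_KW = 100.0    # heat to remove, kW (example value)
DELTA_T = 10.0     # coolant temperature rise, K (assumed)

# (density kg/m^3, specific heat J/(kg*K)) at ~room temperature
AIR = (1.2, 1005.0)
WATER = (997.0, 4182.0)

def flow_m3_per_s(heat_kw: float, dt: float, fluid: tuple[float, float]) -> float:
    rho, cp = fluid
    return heat_kw * 1000.0 / (rho * cp * dt)

air_flow = flow_m3_per_s(HEAT_KW, DELTA_T, AIR)
water_flow = flow_m3_per_s(HEAT_KW, DELTA_T, WATER)
print(f"Air: {air_flow:.2f} m^3/s, water: {water_flow*1000:.2f} L/s, "
      f"ratio ~{air_flow/water_flow:,.0f}x")
# Air: 8.29 m^3/s, water: 2.40 L/s, ratio ~3,457x
```

That roughly 3,500-to-1 flow ratio is the same figure cited above, and it is why pump energy in a cold plate loop stays small while fan energy in an air-cooled hall does not.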

The India Context

This session took place in India, hosted by AeD India Centre, and that geography matters. India's AI data center buildout is accelerating at a pace that outstrips legacy infrastructure planning. Ambient temperatures in key Indian data center markets regularly exceed 40 degrees Celsius during summer months, which compresses the already limited free cooling hours available to air-cooled facilities. Liquid cooling is not optional in these conditions at modern densities. It is physics.

Schneider operates testing labs including one in Bangalore, which positions the company to validate cooling architectures against the specific ambient conditions, water quality profiles, and power infrastructure realities of the Indian market. The workforce training gap remains a concern, but the hardware validation infrastructure is in place.

The Read

Everything Schneider presented here ships today. The 50kW air ceiling, the 80% heat capture rate for direct-to-chip, the 1MW immersion tier, the CDU specifications. The transition is a procurement decision.

The majority of new AI-focused builds over the next 18 months will deploy direct-to-chip as the primary cooling method, with supplemental air handling for the residual 20%, while immersion remains confined to specialty deployments and operators who have already committed to the platform. The CDU becomes the long-lived infrastructure backbone, the piece that stays when the servers turn over. Filtration, fluid monitoring, the risk calculus of million-dollar servers in liquid contact: these separate the operators who are ready from those who are not. The density curve keeps climbing. Nobody is waiting.