The liquid cooling market is growing at more than 30% annually. The pool of facilities teams that have actually commissioned a liquid cooling system is growing at nowhere near that rate. That gap between market trajectory and operational readiness is the single biggest risk in the AI infrastructure buildout right now, and Schneider Electric just published a detailed guide that makes the problem impossible to ignore.
The company's best practices document for deploying liquid-cooled servers in AI data centers reads less like a product pitch and more like a field manual written by people who have watched too many commissioning projects go sideways. Schneider argues that direct-to-chip cooling is the recommended architecture for AI and HPC workloads, ahead of immersion and rear-door heat exchangers. They are picking a side, and they are doing it with the weight of their Motivair acquisition behind them.
Schneider's headline hardware is the MCDU-70, a modular coolant distribution unit that delivers 2.5 MW of cooling capacity per unit. Daisy-chain enough of them and you scale beyond 10 MW. That is not a cooling system for a single rack or even a single row. That is a cooling system for a building. The Motivair acquisition, which closed in 2024, gave Schneider the engineering bench to build this class of hardware. They are now deploying it against the same customers they already sell power distribution and building management systems to. The cross-sell opportunity is obvious. The execution risk is real.
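For a sense of the scale involved, here is a rough sizing sketch in Python. The only figure taken from the guide is the 2.5 MW per-unit capacity; the 90% derating and the single N+1 spare are illustrative assumptions, not Schneider's numbers.

```python
import math

# Back-of-envelope CDU count for a given IT load. The derating factor and the
# single spare unit (N+1) are illustrative assumptions; only the 2.5 MW
# per-unit capacity comes from the article.

def cdu_count(it_load_mw: float, unit_capacity_mw: float = 2.5,
              derate: float = 0.9, spares: int = 1) -> int:
    """Number of CDUs needed to cool it_load_mw of IT power."""
    usable_mw = unit_capacity_mw * derate          # usable capacity per unit
    return math.ceil(it_load_mw / usable_mw) + spares

print(cdu_count(10.0))  # a 10 MW hall: ceil(10 / 2.25) + 1 = 6 units
```

Even a simple count like this makes the point: a multi-megawatt AI hall is a small fleet of CDUs, each one a piece of mechanical plant in its own right.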
A 2.5 MW CDU is a serious piece of infrastructure. It requires mechanical room space, piping runs, water treatment, and a team that understands fluid dynamics at scale. Most data center operators have spent their entire careers managing air handlers and CRACs. Over 70% of global data center capacity sits in buildings designed for air cooling. Asking those same teams to commission and maintain a high-pressure liquid loop is like asking a residential electrician to wire a substation.
The Schneider guide lays out several operational requirements that sound straightforward on paper but are brutal in practice. First: constant differential pressure control. This means the cooling loop must maintain stable pressure regardless of how many servers are drawing coolant at any given moment. In an air-cooled environment, you adjust fan speeds. In a liquid-cooled environment, pressure fluctuations can cause cavitation in pumps, uneven flow distribution across cold plates, and in the worst case, thermal throttling on GPUs that cost $30,000 each.
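To make the control problem concrete, here is a minimal sketch of the idea, not Schneider's controls implementation: a PI loop that trims pump speed so the supply/return pressure difference holds at setpoint as server flow demand changes. The setpoint, gains, limits, and sample time are all invented for illustration.

```python
# Minimal sketch of constant differential-pressure control. As more cold
# plates open their flow paths, measured dP sags and the controller raises
# pump speed to hold the loop steady. All values are illustrative.

class DifferentialPressureController:
    def __init__(self, setpoint_kpa: float, kp: float = 0.8, ki: float = 0.2):
        self.setpoint = setpoint_kpa
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, measured_dp_kpa: float, dt_s: float = 1.0) -> float:
        """Return a pump speed command in percent (0-100) for the next interval."""
        error = self.setpoint - measured_dp_kpa
        self.integral += error * dt_s
        command = self.kp * error + self.ki * self.integral
        return max(0.0, min(100.0, 50.0 + command))  # bias around 50% speed

ctrl = DifferentialPressureController(setpoint_kpa=150.0)
print(ctrl.update(measured_dp_kpa=138.0))  # dP sagging -> speed rises above 50%
```

The point is not the controller itself, which is textbook, but that this loop now sits between a pump and a rack of GPUs, and tuning it badly shows up as throttled silicon rather than a warm aisle.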
Second: redundant pumps. Every CDU needs backup pumps that can take over without interruption. A pump failure in an air-cooled facility means a hot spot. A pump failure in a liquid-cooled facility means coolant stops flowing to silicon. The thermal mass of a cold plate buys you seconds, not minutes.
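A back-of-envelope energy balance shows why the margin is measured in seconds. Every number below is an illustrative assumption, not a measured value.

```python
# Rough ride-through estimate if coolant flow stops: how long the cold plate's
# thermal mass can absorb the GPU's heat before the die reaches its throttle
# point. All figures are illustrative assumptions.

mass_kg = 0.5          # copper cold plate mass
c_copper = 385.0       # J/(kg*K), specific heat of copper
delta_t_k = 20.0       # temperature headroom before the GPU throttles
gpu_power_w = 700.0    # sustained board power under load

ride_through_s = (mass_kg * c_copper * delta_t_k) / gpu_power_w
print(f"{ride_through_s:.1f} s of margin")   # ~5.5 s: seconds, not minutes
```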
Third: dual power supplies with UPS protection on every CDU. Schneider is explicit about this. The coolant distribution unit is now as critical as the UPS itself. If the CDU loses power, the servers lose cooling, and the thermal runaway happens faster than any operator can respond manually. This is a fundamental shift in how facilities teams need to think about power architecture: the CDU has become Tier 1 infrastructure, on par with the switchgear and the UPS.
Schneider's guide also pushes operators to partner early with IT vendors, cooling specialists, and system integrators. This is corporate language for a blunt reality: you cannot do this alone. The IT team needs to coordinate with the mechanical team, the mechanical team needs to coordinate with the server OEM, and the server OEM needs to coordinate with the chip vendor on cold plate specifications. In an air-cooled world, these teams operate in silos. In a liquid-cooled world, a mismatch between the cold plate flow rate and the CDU output pressure means the entire deployment fails acceptance testing.
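That acceptance-test failure mode can be reduced to a single compatibility check: does the CDU deliver the aggregate flow the cold plates need at the loop's total pressure drop? The toy linear pump curve and every figure below are assumptions for illustration, not vendor data.

```python
# Sketch of the flow/pressure mismatch described above. The pump-curve model
# and all numbers are illustrative assumptions, not vendor specifications.

def cdu_flow_at_dp(dp_kpa: float, max_flow_lpm: float = 4000.0,
                   max_dp_kpa: float = 400.0) -> float:
    """Toy linear pump curve: available flow falls as required dP rises."""
    return max(0.0, max_flow_lpm * (1.0 - dp_kpa / max_dp_kpa))

def passes_acceptance(servers: int, lpm_per_server: float,
                      loop_dp_kpa: float) -> bool:
    required_lpm = servers * lpm_per_server
    available_lpm = cdu_flow_at_dp(loop_dp_kpa)
    return available_lpm >= required_lpm

# 1,200 servers at 1.5 L/min each against a 250 kPa loop drop:
print(passes_acceptance(1200, 1.5, 250.0))  # False: 1,800 L/min needed, 1,500 available
```

It is a five-line calculation, but only if the IT team, the server OEM, and the mechanical contractor are all working from the same flow and pressure numbers, which is exactly the coordination Schneider is asking for.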
The system integrator role is particularly interesting. In traditional data center construction, the integrator handles structured cabling and rack installation. In a liquid-cooled deployment, the integrator is responsible for pipe routing, leak detection, water treatment, and commissioning a pressurized fluid system. These skills come from industrial process engineering, an entirely different labor pool than IT infrastructure. The talent pool is small. The demand is growing at 30% a year.
One section of the Schneider guide that will get less attention than it deserves covers tying liquid cooling strategies to ESG goals and regional regulations. In the EU, the Energy Efficiency Directive already requires data centers above 500 kW to report power usage effectiveness (PUE) and water usage effectiveness (WUE). In parts of the American Southwest, water permits for data center cooling are being denied or restricted. Liquid cooling does not eliminate water usage, but it can reduce it dramatically compared to evaporative cooling towers, and the heat rejection temperatures are high enough to feed waste heat into district heating systems.
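The two reporting metrics are simple ratios, following The Green Grid's standard definitions. The input figures below are made up for illustration.

```python
# PUE and WUE as the EU directive asks operators to report them, computed per
# The Green Grid definitions. All input figures are illustrative.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water usage effectiveness, in liters per kWh of IT equipment energy."""
    return site_water_liters / it_kwh

# A year of operation for a hypothetical 10 MW IT load:
it_energy = 10_000 * 8760            # kWh over the year
facility_energy = it_energy * 1.25   # overhead from cooling and power conversion
water = 150_000_000                  # liters drawn for heat rejection

print(pue(facility_energy, it_energy))   # 1.25
print(wue(water, it_energy))             # ~1.7 L/kWh
```

The math is trivial; the instrumentation and record-keeping behind the inputs are where operators actually struggle.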
Schneider is positioning liquid cooling not just as a thermal management solution but as a regulatory compliance tool. That framing will resonate with CFOs and general counsel in ways that pure engineering arguments never will. When a $500 million data center project faces permitting delays because of water usage concerns, a cooling architecture that cuts consumption by 40% becomes a scheduling tool, not just an engineering choice.
The hardest paragraph in the Schneider guide is the one that says what everyone in the industry already knows. Most facilities teams have never commissioned liquid cooling before. They have never pressurized a coolant loop. They have never tested a leak detection system under load. They have never managed the water chemistry required to prevent corrosion in copper cold plates. And they are being asked to deploy these systems at a pace that gives them no margin for learning on the job.
Schneider can publish all the best practices documents it wants. The bottleneck is people. It has been for two years, and the gap keeps widening. The liquid cooling market is growing at 30% annually. The workforce that can deploy and maintain these systems is growing at a fraction of that rate. Until that gap closes, every best practice in this guide is theoretical for the majority of operators in the market.
The companies that figure out training, hiring, and retention for liquid cooling operations will win the next decade of data center infrastructure. The ones that treat this as a procurement exercise will spend the next five years explaining thermal incidents to their board.