Most data centers running AI inference today were built for 8 to 15 kW racks. The CRAH units are sized for that. The raised floors are sized for that. The electrical distribution, the fire suppression zoning, the chilled water loops were all commissioned for a thermal world that no longer exists, and now operators are being asked to run 60, 80, sometimes 120 kW per rack in the same white space with cooling infrastructure that was never designed to absorb that kind of heat density. Every new GPU generation widens the gap.
Belden Inc. and OptiCool Technologies announced a partnership to deliver an integrated rack-level system that combines infrastructure, power distribution, connectivity, and two-phase rear-door heat exchangers. The target: 120 kW per rack, with a claimed 85% reduction in cooling energy consumption versus traditional air-based methods and a PUE as low as 1.02. They showed it at Data Center World, booth #437.
The rear-door heat exchanger sits where the name says it does. It mounts on the back of the rack, in the path of the server exhaust air, and captures heat before it enters the hot aisle or return plenum. OptiCool's implementation uses a refrigerant as the working fluid instead of water, and the two-phase cycle means the refrigerant enters as a liquid, absorbs thermal energy from the exhaust air stream, undergoes a phase change to vapor, and carries that latent heat away to a condenser or heat rejection unit outside the IT space. The latent heat a refrigerant absorbs per unit mass so far exceeds what sensible heating of water can accomplish at comparable flow rates that the phase change alone is what makes 120 kW through a single door plausible.
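The magnitude of that latent-heat advantage is easy to check with back-of-envelope numbers. The sketch below uses representative property values, not anything published by OptiCool: water's specific heat of about 4.186 kJ/(kg·K) with an assumed 10 K temperature rise across the door coil, and a latent heat of vaporization of roughly 200 kJ/kg, typical of common refrigerants.

```python
# Illustrative comparison of two-phase refrigerant vs single-phase water
# absorbing a 120 kW rack load. Property values are representative
# assumptions, not OptiCool design figures.

RACK_LOAD_KW = 120.0

# Single-phase water: sensible heat only, assuming a 10 K rise across
# the door coil. Q = m_dot * c_p * dT, so m_dot = Q / (c_p * dT).
CP_WATER = 4.186      # kJ/(kg*K)
DELTA_T = 10.0        # K, assumed coil temperature rise
water_flow = RACK_LOAD_KW / (CP_WATER * DELTA_T)   # kg/s

# Two-phase refrigerant: latent heat dominates. Q = m_dot * h_fg.
H_FG = 200.0          # kJ/kg, typical latent heat of vaporization
refrigerant_flow = RACK_LOAD_KW / H_FG             # kg/s

print(f"Water mass flow:       {water_flow:.2f} kg/s")
print(f"Refrigerant mass flow: {refrigerant_flow:.2f} kg/s")
print(f"Flow-rate ratio:       {water_flow / refrigerant_flow:.1f}x")
```

Under these assumptions, water needs nearly five times the mass flow of the refrigerant to move the same heat, which is why a single-phase water coil at 120 kW runs into pump power and pressure-drop problems that a two-phase loop sidesteps.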
At that density, a single rack produces roughly 409,000 BTU/hr of waste heat. A conventional CRAH unit trying to manage that through room-level air circulation would need massive airflow volumes, and the mixing losses between hot and cold aisles would be punishing. The rear-door approach intercepts the heat at the source, before it ever reaches the room, which is a fundamentally different thermal architecture than what most facilities were designed around.
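Those figures check out with standard conversions. The sketch below verifies the BTU/hr number and estimates the airflow a room-level system would need, using the standard sea-level air constant in Q[BTU/hr] = 1.08 × CFM × ΔT[°F] and an assumed 20 °F supply-to-return rise.

```python
# Back-of-envelope check on the waste-heat and airflow claims.
# 1 kW = 3412.14 BTU/hr; the 1.08 factor is the standard constant
# for air at sea level; the 20 F rise is an assumption.

KW_TO_BTU_HR = 3412.14

rack_kw = 120.0
heat_btu_hr = rack_kw * KW_TO_BTU_HR           # ~409,000 BTU/hr

delta_t_f = 20.0                                # assumed air temperature rise
cfm_needed = heat_btu_hr / (1.08 * delta_t_f)   # airflow to absorb that heat

print(f"Waste heat: {heat_btu_hr:,.0f} BTU/hr")
print(f"Airflow at {delta_t_f:.0f} F rise: {cfm_needed:,.0f} CFM")
```

Nearly 19,000 CFM for one rack, before accounting for any hot/cold-aisle mixing losses, is the scale of the airflow problem the rear-door approach avoids by never letting the heat reach the room.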
Direct-to-chip liquid cooling requires plumbing changes inside the server: cold plates on CPUs and GPUs, manifolds, quick-disconnect fittings, drip containment. For operators running leased colocation space or managing facilities with mixed-vintage equipment, that level of server-side modification can be a non-starter. A rear-door heat exchanger leaves the servers untouched. Cooling attaches to the rack. The compute stays clean.
Belden is a $2.5 billion infrastructure company that has spent decades selling cables, connectivity, racks, and power distribution units, and with this partnership the company has folded cooling directly into its bill of materials. That changes the procurement model.
Historically, operators buying liquid cooling had to coordinate between the rack vendor, the PDU supplier, the structured cabling team, and the cooling system integrator, juggling four different vendors, four different timelines, and four different commissioning processes. The Belden-OptiCool system ships as an integrated unit with rack, power, connectivity, and thermal management in a single SKU available through channel partners. For a brownfield retrofit where the operator needs racks deployed in existing white space without redesigning the mechanical plant, that integration saves weeks of coordination and eliminates finger-pointing between vendors when something does not work.
Then there is the distribution angle. Belden sells through a large network of VARs and distributors. OptiCool on its own is a smaller, specialized thermal company. Routing through Belden's channel gives OptiCool access to procurement teams that already have Belden on their approved vendor lists, and in enterprise data center procurement, being on the AVL is half the battle.
The cooling industry has been arguing about modalities for three years now. Immersion keeps losing to cold plates in actual deployments. Direct-to-chip liquid cooling is winning the hyperscale contracts. NVIDIA's Rubin architecture makes liquid cooling the default for next-generation GPU clusters. Rear-door heat exchangers occupy a specific niche in that hierarchy: they are simultaneously the highest-density air-side cooling solution and the lowest-disruption liquid-side cooling solution available.
Room-level air cooling with CRAH units tops out around 15 to 20 kW per rack before the airflow physics collapse, while full immersion or direct-to-chip systems can handle 100 kW and above but require server-level modifications and specialized fluid management. The rear-door heat exchanger sits between those two poles, extending the life of an air-cooled facility by capturing rack exhaust heat with a liquid loop without touching the servers themselves.
The 120 kW claim from OptiCool is aggressive for an RDHx. Most rear-door units on the market handle 30 to 60 kW comfortably, and getting to 120 kW with a rear-door unit means the refrigerant loop, the heat exchanger surface area, and the condenser capacity all need to be sized well beyond what the industry has traditionally deployed. Two-phase is what makes it plausible. Single-phase water-based RDHx systems would struggle to reject that much thermal load through a single door-mounted coil without enormous flow rates and pressure drops.
A PUE of 1.02 means that for every watt consumed by IT equipment, only 0.02 watts go to cooling and other overhead. Essentially zero. The industry average PUE sits around 1.55 to 1.60, a well-run hyperscale facility typically achieves 1.10 to 1.20, and Google reports a fleet-wide average of about 1.10, so reaching 1.02 with a rack-level refrigerant system implies that the compressor work, the condenser fan power, and the pump energy for the refrigerant loop are vanishingly small relative to the IT load.
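The arithmetic makes the gap concrete. PUE is total facility power divided by IT power, so the overhead a given PUE permits for one 120 kW rack is IT load × (PUE − 1). The scenario labels below are illustrative groupings of the figures cited above, not vendor data.

```python
# Overhead power implied by each PUE for a single 120 kW rack.
# PUE = total facility power / IT power, so overhead = IT * (PUE - 1).
# Scenario values are the figures discussed in the text.

IT_KW = 120.0

scenarios = {
    "claimed two-phase RDHx": 1.02,
    "well-run hyperscale":    1.15,
    "industry average":       1.55,
}

for label, pue in scenarios.items():
    overhead_kw = IT_KW * (pue - 1.0)
    print(f"{label:24s} PUE {pue:.2f} -> {overhead_kw:5.1f} kW overhead")
```

At 1.02, the entire refrigerant loop, compressors, condenser fans, pumps, and all, gets a budget of just 2.4 kW against a 120 kW rack; an average facility spends 66 kW on the same job.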
Two-phase systems can get there because they exploit the latent heat of vaporization, with the refrigerant absorbing heat by changing phase in a process that requires no pump energy for the transition itself. The circulation can be driven by thermosiphon effects or very low-power pumps, and if the condenser rejects heat to an outdoor ambient loop with favorable temperature differentials, the parasitic power draw stays minimal. But whether that 1.02 holds at full 120 kW load across a range of ambient conditions is the question that matters for real deployments. PUE numbers measured at partial load or at 15 degrees C ambient are a different story than PUE at full rack density in a Phoenix summer.
The liquid cooling supply chain is stretched, with every major cooling vendor quoting 16 to 24 week lead times on custom CDU and manifold assemblies. By integrating into Belden's existing manufacturing and distribution infrastructure, OptiCool potentially sidesteps some of that bottleneck. Belden already has the rack fabrication, the PDU integration, and the logistics network. Adding a heat exchanger door to an existing rack production line is a fundamentally different supply chain problem than standing up an entirely new cooling product from scratch.
The refrigerant choice also simplifies things. Water-based systems need leak detection, corrosion inhibitors, water treatment, and integration with facility chilled water plants. Refrigerant-based systems are self-contained closed loops that reject heat through an air-cooled or liquid-cooled condenser while keeping the primary loop sealed at the factory. For a channel-distributed product, that sealed-loop architecture is far easier to deploy than a system requiring field plumbing to a facility water supply.
This partnership is a bet on the brownfield market, aimed squarely at operators who cannot rip out their raised floors and install manifolded liquid cooling loops, at colocation providers who need to offer AI-ready racks without a two-year mechanical retrofit, and at enterprise data centers sitting on 20 MW of existing capacity that need to run GPU clusters in it next quarter.
Belden folding cooling into the integrated rack product is a signal that the industry's procurement model for thermal management is changing. Cooling is becoming a rack feature. The compression of the supply chain from four vendors to one purchase order is where the real value sits for operators under time pressure.
The 120 kW number at 1.02 PUE needs to be validated under real operating conditions across a full year of ambient temperature variation. If those numbers hold, this is one of the most deployable high-density cooling solutions on the market for existing facilities. If they hold only under ideal conditions, it is still a strong 60 to 80 kW solution with a simpler deployment model than the competition. The rear-door heat exchanger just became harder to ignore.