Operations · April 8, 2026

Seventy Percent of Global Data Center Capacity Is Already Built. Liquid Cooling Has to Work Around It.

More than 70% of global data center capacity sits in existing buildings. Most of that floor space was designed for air cooling at rack densities of 5–15 kW. The AI buildout is not happening in new greenfield facilities that operators get to spec from scratch. It is happening in buildings that were never designed for 100 kW per rack, and the cooling retrofit problem that creates is the mainstream challenge in the industry, not the edge case.

The brownfield modernization argument used to be a cost question: retrofitting is cheaper than building new. The AI cycle made it a speed question. Permitting and constructing a new hyperscale facility takes 3–5 years in most markets. A GPU platform iterates on an 18-month cycle. Operators who wait for greenfield capacity before deploying liquid cooling are handing someone else two to three silicon generations of competitive lead time.

The Brownfield Scale Problem

That 70%-plus installed base is where the scale problem lives, and a substantial portion of it is underutilized at current air-cooled densities. Retrofit costs for liquid cooling run 40–60% of original construction cost for a full facility conversion. Rack-by-rack phased deployment reduces that upfront capital exposure and preserves cash flow during the transition period.
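The capital-exposure difference is easiest to see with rough numbers. Below is a minimal sketch assuming a hypothetical facility with an $80M original construction cost, a full conversion at the midpoint of the 40–60% range, and a phased program converting the floor in eight quarterly tranches; the dollar figures and phase count are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope comparison of upfront capital exposure:
# full-facility liquid cooling conversion vs. rack-by-rack phasing.
# All inputs are illustrative assumptions, not figures from this article.

original_build_cost = 80_000_000          # hypothetical original construction cost ($)
retrofit_fraction = 0.50                  # midpoint of the 40-60% full-conversion range
full_conversion_cost = original_build_cost * retrofit_fraction

phases = 8                                # convert the floor in 8 quarterly tranches (assumption)
phase_cost = full_conversion_cost / phases

print(f"Full conversion, committed up front: ${full_conversion_cost:,.0f}")
print(f"Phased program, committed per quarter: ${phase_cost:,.0f}")

# Cumulative exposure over the first year of a phased program (4 quarters)
first_year_exposure = phase_cost * 4
print(f"Phased exposure after year one: ${first_year_exposure:,.0f} "
      f"({first_year_exposure / full_conversion_cost:.0%} of the full-conversion bill)")
```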

What Brownfield Retrofit Actually Requires

Adding liquid cooling to an existing air-cooled facility is not a drop-in operation. Three systems have to change in sync: power distribution, cooling infrastructure, and monitoring and controls. Upgrading one without the others produces the failure modes operators run into most often: a CDU installation that trips circuit breakers because the power density math was done against the old spec sheet, or a hybrid environment where half the racks are liquid-cooled and the building management system has no visibility into either loop.
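The breaker-trip failure mode is plain arithmetic that still gets skipped. Here is a minimal sketch of the check, assuming a hypothetical mechanical branch circuit at 415 V three-phase; the breaker rating, existing load, CDU pump draw, and power factor are all illustrative assumptions.

```python
import math

# Failure mode from the text: a CDU lands on a mechanical branch circuit whose
# breaker was sized against the facility's original (air-cooled) spec sheet.
# Every number here is an illustrative assumption, not a measured value.

BREAKER_RATING_A = 100      # existing mechanical branch breaker
VOLTAGE_V = 415             # three-phase line-to-line voltage
POWER_FACTOR = 0.9

def amps(load_kw: float) -> float:
    """Line current (A) for a three-phase load of load_kw kilowatts."""
    return load_kw * 1000 / (math.sqrt(3) * VOLTAGE_V * POWER_FACTOR)

existing_load_kw = 40       # air handlers and controls already on the circuit
cdu_pump_set_kw = 18        # new CDU pump set plus controls (assumed)

before = amps(existing_load_kw)
after = amps(existing_load_kw + cdu_pump_set_kw)
limit = 0.8 * BREAKER_RATING_A   # common practice: keep continuous load under 80% of rating

print(f"Before CDU: {before:.0f} A, after CDU: {after:.0f} A, continuous limit {limit:.0f} A")
if after > limit:
    print("The circuit that carried the air handlers will not carry the CDU as well.")
```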

Rack-by-rack deployment is the approach that works. It lets operators stand up liquid cooling in a defined zone of the facility, validate the CDU integration, and expand incrementally as demand justifies it. A hybrid air-and-liquid environment is the correct transition state for a facility moving from legacy workloads to AI-dense compute, not a permanent compromise. Trying to convert an entire floor in a single campaign creates commissioning risk that most facilities teams are not staffed to manage.

The Power-Cooling Interdependency

Brownfield liquid cooling retrofits expose a problem that operators often underestimate: the power distribution system in a facility designed for 10 kW per rack cannot support 100 kW per rack without upgrades that run well ahead of the cooling work. Breaker sizing, PDU capacity, busway ratings: every component in the power path was engineered for a density that AI compute has long since exceeded.
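How far out of range the legacy power path sits becomes obvious when you run the per-rack current at both densities. A minimal sketch, again assuming 415 V three-phase distribution and a 0.95 power factor; the component ratings are typical-looking placeholders, not any specific product spec, and a real survey would also check the shared busway against the aggregate load.

```python
import math

VOLTAGE_V = 415      # three-phase line-to-line voltage (assumed)
POWER_FACTOR = 0.95

def rack_current_a(rack_kw: float) -> float:
    """Line current per rack for a three-phase feed."""
    return rack_kw * 1000 / (math.sqrt(3) * VOLTAGE_V * POWER_FACTOR)

# Illustrative ratings for the legacy per-rack power path (placeholders, not a real spec).
legacy_path = {
    "rack branch breaker": 32,    # A
    "rack PDU": 63,               # A
}

for density in (10, 100):
    amps = rack_current_a(density)
    print(f"\n{density} kW per rack -> {amps:.0f} A per rack")
    for component, rating in legacy_path.items():
        status = "ok" if amps <= 0.8 * rating else "UNDERSIZED"
        print(f"  {component} ({rating} A): {status}")
```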

The operators who execute brownfield retrofits successfully treat power and cooling as a single scope, not two sequential projects. CDU placement, coolant distribution manifold routing, and the electrical capacity to run the pump sets and control systems all have to be planned together. Facilities that scope them separately typically discover the conflict during commissioning, which is the most expensive time to find it.

Speed to Revenue vs. Capital Efficiency

The financial case for brownfield deployment is not that it is cheap. It is that it is fast and capital-efficient relative to the alternative. An existing facility with available power has infrastructure value (grid interconnect, site permits, operational staff) that a greenfield build cannot replicate on a short timeline. A brownfield operator who can deploy liquid cooling in an existing cage and take a colocation customer in six months is competing against a greenfield developer who will not have a building permit for two years.
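The speed argument can be put in rough numbers. The sketch below compares billable months inside a five-year window, taking the six-month brownfield figure above and the 3–5 year permit-and-construct range cited earlier as the greenfield cases; the window length and scenario framing are illustrative assumptions.

```python
# Rough time-to-revenue comparison: brownfield retrofit vs. greenfield build.
# Timelines follow the ranges cited in the text; the comparison window is an assumption.

HORIZON_MONTHS = 60                  # five-year comparison window

scenarios = {
    "brownfield retrofit": 6,        # first billable month (six-month figure above)
    "greenfield, fast case": 36,     # 3 years to permit and construct
    "greenfield, slow case": 60,     # 5 years to permit and construct
}

for name, first_revenue_month in scenarios.items():
    billable_months = max(0, HORIZON_MONTHS - first_revenue_month)
    print(f"{name}: first revenue at month {first_revenue_month}, "
          f"{billable_months} billable months inside the {HORIZON_MONTHS}-month window")
```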

The tradeoff is ceiling. A purpose-built AI facility can be designed around 150+ kW per rack from the floor slab up. A brownfield retrofit will hit structural or power limits before that density. Operators need to know their ceiling before they spec the liquid cooling architecture, because a CDU and manifold layout that works at 80 kW per rack may not scale to 130 kW without additional facility work. The brownfield advantage is speed and capital efficiency at current densities, not infinite scalability.
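The reason an 80 kW manifold layout does not automatically stretch to 130 kW is heat-balance arithmetic: required coolant flow scales linearly with heat load at a fixed loop temperature rise. A minimal sketch using water-like coolant properties and an assumed 10 °C supply-to-return rise; real CDU and manifold sizing also depends on the specific coolant, pressure drop, and cold plate design.

```python
# Required coolant flow per rack from the heat balance Q = m_dot * cp * dT.
# Water-like coolant properties and a 10 degC loop temperature rise are assumptions.

CP_KJ_PER_KG_K = 4.18      # specific heat of water
DENSITY_KG_PER_L = 1.0     # approximate density of water
DELTA_T_K = 10.0           # supply-to-return temperature rise across the rack

def flow_lpm(rack_kw: float) -> float:
    """Coolant flow (litres per minute) needed to carry rack_kw of heat at DELTA_T_K."""
    kg_per_s = rack_kw / (CP_KJ_PER_KG_K * DELTA_T_K)
    return kg_per_s / DENSITY_KG_PER_L * 60

for rack_kw in (80, 130):
    print(f"{rack_kw} kW per rack -> about {flow_lpm(rack_kw):.0f} L/min per rack")
```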

The Real Bottleneck

For most operators, the constraint on brownfield liquid cooling is not the cooling technology. Cold plates and CDUs are commercially available. The constraint is the workforce to design, install, and commission hybrid thermal environments at the pace the market is moving. Facilities engineers who have spent careers managing chilled-water air handlers are now being asked to integrate direct-to-chip liquid loops, balance CDU flow rates, and commission mixed cooling architectures. That skill set does not exist in volume yet. The gap between what the technology can do and what the workforce can execute is the actual bottleneck in brownfield modernization, and it compounds with every quarter that demand outpaces training.