The number that should be keeping cooling engineers awake is not $750 billion. It is 17 gigawatts.
According to a BloombergNEF report published this month, combined capital expenditure from the 14 largest publicly traded data center operators is nearing $750 billion in 2026, up from roughly $450 billion in 2025. That is a two-thirds jump in a single fiscal year. Analyst expectations for FY2027 climbed 56% between August 2025 and February 2026 alone; the consensus keeps getting revised upward faster than anyone can track. As of September 2025, more than 23 GW of data center IT capacity was under construction globally. About 75% of that capacity is coming online in the United States.
Do the math. Seventy-five percent of 23 GW is approximately 17.25 GW. That is the amount of new US data center capacity that needs to be cooled, staffed, powered, and commissioned within the next 24 to 36 months. The cooling industry's manufacturing base was sized for a world where annual new capacity additions ran between 5 and 8 GW. That world no longer exists.
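The arithmetic is simple enough to write down. The sketch below, in Python purely for illustration, uses only the figures already cited: 23 GW under construction, a 75% US share, the 24-to-36-month commissioning window, and the 5-to-8 GW annual band the manufacturing base was sized for.

```python
# Back-of-envelope check using only the figures cited in this article.
under_construction_gw = 23.0        # global IT capacity under construction, Sept 2025
us_share = 0.75                     # portion coming online in the United States
historical_band_gw_per_yr = (5, 8)  # annual additions the cooling base was sized for

us_capacity_gw = under_construction_gw * us_share
print(f"New US capacity to cool: {us_capacity_gw:.2f} GW")            # ~17.25 GW

# Spread over the 24-to-36-month commissioning window, the US build alone
# implies an annual rate at or above the entire historical band.
for months in (24, 36):
    rate = us_capacity_gw / (months / 12)
    print(f"Implied US rate over {months} months: {rate:.1f} GW/yr")  # 8.6 and 5.8 GW/yr
```

The US build alone lands at 5.8 to 8.6 GW per year, at or above the whole 5-to-8 GW band, before counting the remaining quarter of global construction or the shift in rack density described next.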
Capacity numbers in gigawatts are misleading if you think about them the way you would have in 2019. Back then, a typical data center rack drew somewhere between 7 and 15 kilowatts. The thermal load per megawatt of IT capacity was manageable with traditional CRAC units, hot-aisle containment, and a chiller plant designed around ambient air. That math is obsolete.
Modern AI training clusters built around NVIDIA's H100 and B200 GPUs are drawing 80 to 130 kilowatts per rack. Some configurations push past that. A single rack at 130 kW generates as much heat as roughly nine to nineteen legacy racks combined, concentrated in a footprint roughly the size of one of them. Air cooling at those densities is not just inefficient; it is physically insufficient. The airflow volumes and temperature deltas required stop being practical before you ever reach the chips' thermal limits. You need liquid at the chip, not air at the ceiling.
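Put the per-rack figures side by side and the gap is stark. A minimal comparison, assuming nothing beyond the ranges quoted above:

```python
# Per-rack heat load comparison, using the ranges quoted in this article.
legacy_rack_kw = (7, 15)     # typical rack draw circa 2019
ai_rack_kw = 130             # high-end H100/B200-class training rack

# Heat from one 130 kW rack, expressed in legacy-rack equivalents.
vs_dense_legacy = ai_rack_kw / legacy_rack_kw[1]   # 130 / 15 ≈ 8.7
vs_light_legacy = ai_rack_kw / legacy_rack_kw[0]   # 130 / 7  ≈ 18.6
print(f"One 130 kW rack ≈ {vs_dense_legacy:.0f} to {vs_light_legacy:.0f} legacy racks of heat")
```

The total watts matter less than where they sit: all of that heat has to leave the silicon through the footprint of a single rack.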
That shift in rack density means the thermal challenge embedded in 17 GW of new US capacity is very different from what 17 GW of legacy capacity would have posed: roughly the same total heat, but concentrated in far fewer racks at far higher flux, where air-based designs no longer work. The cooling industry cannot use prior build cycles as a guide. Every benchmark derived from pre-GPU-cluster deployments underestimates the problem.
The constraint is not awareness. Every major cooling vendor, every systems integrator, every hyperscale procurement team knows liquid cooling is coming. The constraint is manufacturing throughput, lead times, and skilled labor, in that order.
Coolant distribution units are the first hard wall. The CDU is the thermal heart of a direct-to-chip liquid cooling system: it circulates dielectric or water-based coolant between the chip-level cold plates and the facility's heat rejection system. Lead times for CDUs from major vendors ran 26 to 40 weeks through much of 2025. At the scale implied by 17 GW of new construction, that queue does not clear without significant capacity expansion from manufacturers. Building new CDU manufacturing lines takes time and capital that the industry has not yet fully committed.
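To get a feel for what that queue means in units, here is an order-of-magnitude estimate. The per-CDU capacity and the direct-to-chip share below are assumptions chosen purely for illustration, not figures from the report; real CDUs range from roughly 100 kW in-rack units to multi-megawatt row-level skids.

```python
# Order-of-magnitude CDU demand implied by the US build-out.
# The per-CDU capacity and liquid-cooled share are illustrative
# assumptions, NOT figures from the BloombergNEF report.
new_us_capacity_mw = 17_250      # ~17.25 GW of new US IT capacity
liquid_cooled_share = 0.6        # assumed fraction served by direct-to-chip cooling
mw_per_cdu = 1.0                 # assumed nominal capacity of a row-level CDU

cdu_units = new_us_capacity_mw * liquid_cooled_share / mw_per_cdu
print(f"Implied CDU demand: roughly {cdu_units:,.0f} units")   # ~10,000 units
```

Even if those assumptions are off by a factor of two in either direction, the unit count sits far beyond what a 26-to-40-week lead time suggests existing lines were built to deliver.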
Cold plate supply is the second constraint. Cold plates are precision-machined copper or aluminum components that sit directly on GPU dies. The tolerances are tight. The surface finish requirements are exacting. Machining capacity is concentrated in a small number of contract manufacturers, most of them serving the data center and high-performance computing markets simultaneously, and demand from both is accelerating at once.
Chiller capacity is the third. Facility-level heat rejection still runs through water-cooled chillers in most large deployments. The chiller industry, dominated by Carrier, Trane Technologies, and Johnson Controls, is a mature manufacturing sector with long lead times baked in. A large custom chiller plant for a 50 MW facility can carry a 52-week lead time. At the current pace of construction, chiller procurement needs to happen before architectural drawings are finalized.
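Working backwards from a commissioning date makes the sequencing problem concrete. In the sketch below, only the 52-week lead time comes from the figures above; the target date and the on-site installation window are hypothetical.

```python
# Working backwards from a hypothetical commissioning date.
# Only the 52-week chiller lead time is cited in this article; the target
# date and installation window are illustrative assumptions.
from datetime import date, timedelta

target_commissioning = date(2028, 6, 1)   # hypothetical go-live for a 50 MW facility
chiller_lead = timedelta(weeks=52)        # cited lead time for a large custom chiller plant
install_window = timedelta(weeks=16)      # assumed rigging, piping, and commissioning time

order_by = target_commissioning - (chiller_lead + install_window)
print(f"Chiller purchase order needed by: {order_by}")   # 2027-02-11
```

For a mid-2028 go-live, the chiller order lands in early 2027, which in practice means procurement runs ahead of the drawings.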
Then there is labor. Commissioning a large-scale liquid cooling deployment requires technicians who understand both the fluid systems and the IT infrastructure. That combination is rare. The training pipelines do not exist at the scale needed. This is the constraint that cannot be solved by writing a purchase order.
BNEF tracked hyperscaler leases from neoclouds worth over $100 billion in just the six months up to March 2026. That figure represents forward commitments, not current deployments. The capacity being contracted today starts coming online 18 to 36 months from now. Cooling infrastructure for those buildings needs to be ordered now. For many of those projects, it is already late.
The equity markets have expressed some skepticism about the pace of AI infrastructure spending. Concerns about bubble dynamics and demand sustainability are real. But the lease data tells a different story than the share prices. Operators are not slowing down. The commitments are made. The concrete is being poured. The question for the cooling industry is not whether 17 GW of new US capacity needs thermal management. It does. The question is who has the manufacturing capacity, the lead time advantage, and the trained workforce to capture the work.
When demand grows faster than supply in a capital-intensive manufacturing sector, pricing does not stay flat. CDU pricing has already moved. Cold plate pricing has moved. The vendors who locked in long-term supply agreements with hyperscalers in 2024 and early 2025 did so at rates that look favorable in retrospect. The vendors entering the market now are quoting into a different environment.
For operators trying to source cooling infrastructure for projects breaking ground in the second half of 2026, the options narrow quickly. They can pay elevated prices to vendors with available capacity. They can accept longer lead times and push commissioning dates. Or they can make design concessions, deploying lower-density configurations than their AI workloads actually require, which has its own downstream costs in compute efficiency and operating expense.
None of those are good options. All of them are real.
The $750 billion capex surge is real. The 23 GW under construction is real. The bottleneck in CDU lead times, cold plate machining, chiller procurement, and commissioning labor is equally real, and considerably less discussed. The build is happening at full speed. The cooling industry now has to decide whether it can keep up.