March 31, 2026

The March 2026 Construction Pipeline Is the Largest Cooling Procurement Opportunity in a Generation

Aerial view of large-scale data center campus under construction
North American vacancy rates held at approximately 1% for the second consecutive year as hyperscaler construction hit a historic pace. Source: Data Center Knowledge / S&P Global.

One percent.

That is the North American data center vacancy rate right now, and it has been for two consecutive years. Every megawatt of capacity that comes online gets absorbed before the cooling systems inside it have finished commissioning. The pipeline is not running ahead of demand. Demand is running ahead of the pipeline, and the pipeline itself is the largest in industry history.

According to Data Center Knowledge's March 2026 construction update, S&P Global tracked 113 completed data center transactions in 2025 totaling $69 billion. That includes a single $40 billion acquisition of Aligned Data Centers. The transaction volume alone tells you where capital is flowing. And capital is flowing faster than the thermal engineering community can spec, source, and ship the hardware to support it.

Nearly two-thirds of new capacity is now being developed outside the traditional hubs of Northern Virginia, Phoenix, Dallas, and Silicon Valley. Those markets are still active, but they are constrained by power availability and community opposition. The new buildout is spreading into secondary and tertiary markets where power infrastructure, land, and political appetite for development still exist. That geographic diffusion creates a procurement challenge: cooling vendors who built distribution and field service density around the major hubs now need coverage in places they have never staffed.

6 GW From a Single Agreement

AMD and Meta announced a $100 billion agreement to supply up to 6 gigawatts of AI capacity, with AMD Instinct MI450 GPUs slated for deployment in late 2026. Six gigawatts. To put that in context, the entire U.S. data center market consumed roughly 17 GW of power in 2023. This single AMD-Meta arrangement represents more than a third of that historical baseline, concentrated in a narrow deployment window.
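The "more than a third" framing is simple arithmetic against the 2023 baseline cited above:

```python
# Share of the 2023 U.S. data center load represented by the AMD-Meta deal.
amd_meta_gw = 6.0    # agreement capacity, per the announcement
us_2023_gw = 17.0    # approximate 2023 U.S. data center consumption cited above

share = amd_meta_gw / us_2023_gw
print(f"{share:.1%} of the 2023 baseline")  # -> 35.3%
```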

The MI450 is a high-density accelerator. High-density accelerators produce heat fluxes that air cooling cannot handle at scale. The thermal answer is direct liquid cooling: cold plates on the GPU trays, CDUs in the rack or row, and facility-level coolant distribution tying back to dry coolers or cooling towers. Every watt of AI compute AMD ships to Meta's facilities needs a corresponding cooling decision made 12 to 18 months earlier. At 6 GW, that procurement window is open right now.
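The scale of those cooling decisions starts at the rack. A minimal sketch of the coolant flow a single direct-liquid-cooled rack demands, using an assumed rack power and loop temperature rise (neither figure is from the AMD-Meta announcement):

```python
# Back-of-envelope coolant flow for one direct-liquid-cooled AI rack.
# Rack power and loop delta-T below are illustrative assumptions.
rack_power_w = 120_000    # assumed ~120 kW AI training rack
delta_t_c = 10.0          # assumed coolant temperature rise across the cold plates
cp_water = 4186.0         # J/(kg*K), specific heat of water
rho_water = 1000.0        # kg/m^3, density of water

# Q = m_dot * cp * dT, solved for mass flow
mass_flow_kg_s = rack_power_w / (cp_water * delta_t_c)
vol_flow_l_s = mass_flow_kg_s / rho_water * 1000   # kg/s of water ~ L/s
vol_flow_gpm = vol_flow_l_s * 15.85                # L/s -> US gallons per minute

print(f"{vol_flow_l_s:.2f} L/s ({vol_flow_gpm:.1f} GPM) per rack")
```

At these assumed numbers, every rack needs roughly 2.9 L/s of coolant continuously; multiply by thousands of racks per gigawatt and the facility-level distribution piping the article describes stops being optional.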

Selected March 2026 Hyperscaler Capacity Commitments (GW)

- AMD / Meta agreement: 6.0 GW
- Duke Energy data center contracts: 4.5 GW
- Meta Indiana campus: 1.0 GW

The Foxconn Site and What Retrofit Cooling Actually Costs

Microsoft is building 15 new data centers on the former Foxconn manufacturing site in Mount Pleasant, Wisconsin, with taxable construction value exceeding $13 billion. This is one of the more instructive projects in the current pipeline, not because of its scale, though the scale is extraordinary, but because of what a manufacturing-to-data-center conversion demands from the thermal infrastructure team.

The Foxconn site was built for electronics manufacturing. Its slab loads, power distribution topology, and mechanical infrastructure were designed around assembly line equipment, not server rack density. Converting that envelope to support hyperscale AI workloads means tearing out cooling assumptions from the ground up. The existing mechanical rooms, pipe chases, and utility connections are starting points, not assets. Cooling engineers working on this project are not upgrading an existing system. They are designing a new one inside an old shell, with all the constraints that come from fixed structural bays, existing utility easements, and a construction schedule driven by Microsoft's AI compute deployment timelines rather than by what the thermal design team would prefer.

This is not unique to Wisconsin. As more hyperscalers acquire non-traditional sites to find available land and power in constrained markets, retrofit cooling projects will become a larger share of the total pipeline. That is a different procurement and engineering motion than greenfield construction, and the vendors who understand how to execute it quickly will take disproportionate share of those contracts.

Duke Energy's Contract Portfolio Is a Leading Indicator

Duke Energy's total data center contracts grew from 3 GW to 4.5 GW. That 50% expansion in committed load represents facilities that are either under construction or in advanced planning. A utility does not sign a 4.5 GW contract portfolio without corresponding interconnection agreements and construction timelines. The cooling procurement for the facilities behind those contracts is happening now or will happen in the next 6 to 12 months.

Watch utility contract announcements as a proxy for cooling demand timing. They are one of the most reliable leading indicators available in the public record. When a utility reports a jump in contracted data center load, cooling infrastructure orders follow within 2 to 3 quarters. Duke's 1.5 GW increase is a signal, not background noise.

Meta's Indiana Campus: 1 GW at $10 Billion

Meta is building a 1 GW campus in Lebanon, Indiana, with a $10 billion investment commitment. Lebanon is not a traditional hyperscaler market. It was selected because it has available land, utility infrastructure capable of supporting the load, and a state government that has been actively courting data center investment. This is the geographic diffusion story in concrete form.

A 1 GW campus running AI workloads at the density Meta deploys requires cooling infrastructure that would have been considered extreme for an entire metropolitan market five years ago. The cooling plant alone for a facility this size, including CDUs, primary and secondary coolant loops, dry coolers, and the controls infrastructure to manage it all, represents a capital expenditure in the hundreds of millions of dollars. Cooling is not a line item on this project. It is a primary engineering discipline.
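A rough sizing pass shows why the plant reaches that capital scale. The PUE, per-unit dry cooler capacity, and $/MW figures below are illustrative assumptions, not Meta project data:

```python
# Rough heat-rejection and capex sizing for a 1 GW AI campus.
# PUE, dry cooler capacity, and $/MW are illustrative assumptions only.
it_load_mw = 1000.0
pue = 1.2                        # assumed facility PUE
facility_mw = it_load_mw * pue   # nearly all electrical input becomes heat
heat_reject_mw_th = facility_mw  # thermal MW the plant must reject

dry_cooler_mw_each = 1.5         # assumed thermal capacity per dry cooler unit
units = heat_reject_mw_th / dry_cooler_mw_each

cooling_capex_per_mw = 0.35e6    # assumed cooling-plant $/MW of IT load
capex = it_load_mw * cooling_capex_per_mw

print(f"~{units:.0f} dry cooler units, ~${capex / 1e6:.0f}M cooling plant")
```

Under these assumptions the campus needs on the order of 800 heat-rejection units and a cooling plant in the mid-hundreds of millions of dollars, consistent with the range above.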

The Supply Chain Math

At current liquid cooling attachment rates, the projects announced and under construction in March 2026 represent somewhere between $15 billion and $20 billion in cooling hardware over the next 36 months. That estimate includes CDUs, cold plates, rear-door heat exchangers, primary coolant distribution piping, dry coolers and cooling towers, and the facility-level controls systems that tie them together. It does not include engineering services, commissioning, or ongoing maintenance contracts, which add meaningful revenue on top of hardware.
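One way to reproduce an estimate in that band is to apply assumed attachment rates and hardware $/MW to the pipeline. Every input here is an illustrative assumption, not a figure from the article:

```python
# Hedged sketch: reproducing a cooling-hardware estimate in the $15-20B range.
# All inputs are illustrative assumptions, not sourced figures.
pipeline_gw = 15.0              # assumed IT load announced + under construction
liquid_attach = 0.55            # assumed share of capacity on direct liquid cooling
capex_per_mw_liquid = 1.6e6     # assumed $/MW: CDUs, cold plates, piping, controls
capex_per_mw_air = 0.8e6        # assumed $/MW: air-side and heat-rejection hardware

pipeline_mw = pipeline_gw * 1000
tam = pipeline_mw * (liquid_attach * capex_per_mw_liquid
                     + (1 - liquid_attach) * capex_per_mw_air)
print(f"${tam / 1e9:.1f}B in cooling hardware")
```

With these inputs the sketch lands at $18.6 billion, inside the article's band. The point is the sensitivity: the estimate moves more with the liquid cooling attachment rate and $/MW assumptions than with the pipeline size itself.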

The constraint is not demand. Demand is obvious and documented. The constraint is manufacturing throughput at the tier-one CDU and cold plate vendors, lead times on custom thermal interface materials for next-generation GPU trays, and the engineering talent to execute thermal design at the project scale these facilities require.

Google's 150 MW geothermal PPA with Ormat Technologies and NV Energy in Nevada points toward where the broader power procurement conversation is heading. Hyperscalers are locking in non-carbon power sources both for ESG commitments and because traditional grid capacity in high-demand markets is simply not available at the scale they need. Geothermal provides firm, dispatchable power that solar and wind cannot. The thermal engineering intersection is interesting: facilities powered by geothermal often have access to moderate-temperature geothermal fluid that can be used for free cooling or heat rejection, significantly reducing mechanical cooling loads if the plant is designed to take advantage of it.

atNorth's SWE04 mega site in Sollefteå, Sweden adds the Northern European dimension. The Nordics remain one of the few regions where ambient conditions, hydroelectric power, and proximity to subsea fiber routes converge in a way that still allows air-assisted and immersion cooling architectures at scale. Sweden's buildout is a reminder that the pipeline is global even when the headline deals are American.

SpaceX's acquisition of xAI and the advancing plans for space-based data centers are worth monitoring separately. The thermal physics of space-based computing are genuinely different: radiative cooling replaces convective cooling entirely, and the engineering challenges are unsolved at commercial scale. That is a 5 to 10 year problem. The 36-month problem is the 6 GW AMD-Meta agreement and the 15 other facilities breaking ground this quarter.

The pipeline is here. The procurement window is open. The vendors who move now will set lead times and lock in engineering relationships ahead of the cohort that waits for the market to look even more obvious. At 1% vacancy, there is no ambiguity left about the direction of demand.