Energy · April 1, 2026

The Grid Can't Keep Up. Power Constraints Are Now the Primary Bottleneck in Data Center Development.

Capital is available. Land is available. Cooling technology is available. The thing that is not available — in sufficient quantity, in the right markets, on any timeline shorter than half a decade — is grid-connected electrical power. Grid interconnection for a new large data center in a constrained market takes 4 to 5 years at minimum. In markets with severe interconnection queues, energization can take 10 years from initial application to first commercial power.

The DOE estimates that 100 GW of new peak generating capacity will be required by 2030, and data centers account for approximately half of that demand. The five largest data center operators are projected to invest up to $700 billion in US-based facilities in 2026 alone. The capital is moving faster than the grid can accommodate it.

Grid interconnection timeline

New data center grid interconnection, favorable site: 4–5 years
Interconnection with required grid upgrades: 6–10 years in some markets
DOE new generating capacity requirement by 2030: 100 GW
Data center share of that requirement: approximately 50 GW
Top 5 operator US data center capex projected for 2026: up to $700 billion

"Powered land" (parcels with existing electrical infrastructure) has become the most valuable site characteristic in data center real estate.

Why the Queue Exists

Grid interconnection in the US requires a series of studies by the regional Independent System Operator and the local utility to confirm that the proposed load can be absorbed without destabilizing the grid. Each study takes months, and each study reveals upgrade requirements that trigger additional studies. Projects sit in the queue while earlier-filed projects complete their study processes. A project that enters the queue in 2026 may not receive a final interconnection agreement until 2031 or later in markets like PJM and ERCOT, which have the densest data center concentration and the longest queues.

The queues are not clearing. Every hyperscaler capex announcement adds new interconnection requests to already-congested markets. The interconnection process was designed for one or two large industrial loads per year in a given service territory. It is receiving dozens simultaneously.
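A back-of-envelope sketch makes the compounding visible. The model below assumes a first-come-first-served queue; the throughput, arrival rate, study duration, and starting backlog are hypothetical illustrations chosen to roughly match the 2026-to-2031 timeline above, not published ISO figures.

```python
# Toy FIFO model of an interconnection queue. Every parameter is an assumed
# illustration, not published ISO data.

COMPLETIONS_PER_YEAR = 8.0  # study pipelines the territory finishes per year (assumed)
ARRIVALS_PER_YEAR = 12.0    # new large-load requests filed per year (assumed)
STUDY_YEARS = 2.0           # serial studies once a project reaches the front (assumed)

def years_to_agreement(backlog_at_entry: float) -> float:
    """Drain every project ahead of you, then complete your own studies."""
    return backlog_at_entry / COMPLETIONS_PER_YEAR + STUDY_YEARS

backlog = 24.0  # projects already queued at entry in 2026 (assumed)
for year in range(2026, 2031):
    print(f"{year} entrant: ~{years_to_agreement(backlog):.1f} years to agreement")
    backlog += ARRIVALS_PER_YEAR - COMPLETIONS_PER_YEAR  # queue grows by 4 per year
```

With arrivals outpacing completions, each cohort waits longer than the last: the 2026 entrant sees roughly five years, the 2030 entrant roughly seven, and nothing in the model ever clears the backlog.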

What Operators Are Doing

Operators facing 10-year grid timelines are pursuing parallel strategies. Self-supply through on-site generation (natural gas turbines, small modular reactors, fuel cells) bypasses the interconnection queue entirely for baseload capacity, though it introduces different regulatory, permitting, and cost variables. Site selection has shifted toward locations with existing electrical infrastructure: former industrial sites with utility substations, retired power plants adjacent to transmission lines, municipalities with stranded generating capacity left behind by industrial departures.

The premium for "powered land" (parcels that already have electrical infrastructure capable of serving data center loads) has made former steel mills, paper mills, and aluminum smelters legitimately valuable data center sites. Ecolab explicitly cited retrofitting "defunct steel mills" and former cryptocurrency mining operations as part of its CoolIT acquisition strategy. The cooling infrastructure at these sites must be built from scratch; the power infrastructure already exists, which is what makes them viable.

The Cooling Implication

Power constraints make cooling efficiency a competitive requirement, not just an operational preference. A facility that draws 30 MW for cooling on top of a 100 MW compute allocation is spending 23% of its 130 MW total draw on thermal management. Every percentage point of cooling efficiency improvement, whether through liquid cooling, heat recovery, AI-optimized setpoints, or warm-water architectures, converts directly into additional compute capacity that fits within the same power envelope.
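The arithmetic generalizes, as the minimal sketch below shows. The 100 MW compute / 30 MW cooling split is the example above; the fixed 130 MW envelope and the overhead sweep are illustrative assumptions.

```python
# Cooling-share arithmetic under a fixed grid allocation. The 100 MW compute /
# 30 MW cooling split comes from the example above; the overhead sweep is an
# illustrative assumption.

def cooling_share(compute_mw: float, cooling_mw: float) -> float:
    """Fraction of total facility draw spent on thermal management."""
    return cooling_mw / (compute_mw + cooling_mw)

def compute_within_envelope(grid_mw: float, cooling_per_compute_mw: float) -> float:
    """Compute that fits a fixed grid allocation when each MW of compute
    drags cooling_per_compute_mw MW of cooling along with it."""
    return grid_mw / (1 + cooling_per_compute_mw)

print(f"cooling share: {cooling_share(100, 30):.0%}")  # 23% of a 130 MW draw

# Fixed 130 MW interconnection: lower cooling overhead means more compute
# inside the same envelope, with no new grid capacity required.
for overhead in (0.30, 0.25, 0.20, 0.15):
    print(f"overhead {overhead:.0%}: {compute_within_envelope(130, overhead):.1f} MW compute")
```

Dropping cooling overhead from 30% to 15% fits roughly 13 MW of additional compute inside the same interconnection.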

Nvidia's 45°C warm-water architecture eliminates chillers entirely. The chiller plant at a 100 MW facility can draw 8 to 12 MW. Eliminating it through warm-water cooling converts 8 to 12 MW of power budget from thermal management to compute. In a market where additional grid capacity takes a decade to procure, that conversion is worth more than the cooling infrastructure capex it replaces.
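The chiller case works out the same way, under the same assumed 130 MW envelope. The 8 to 12 MW chiller range and the 30 MW baseline cooling draw follow the figures above; the assumption that the remaining pump and heat-rejection load is unchanged is mine.

```python
# Warm-water scenario: fixed grid allocation, chiller plant eliminated.
# The 130 MW envelope, 30 MW baseline cooling draw, and 8-12 MW chiller
# range follow the article's figures. Assumes the non-chiller cooling load
# (pumps, dry coolers) is unchanged.

GRID_MW = 130.0
BASE_COOLING_MW = 30.0

for chiller_mw in (8.0, 12.0):
    remaining_cooling = BASE_COOLING_MW - chiller_mw
    compute = GRID_MW - remaining_cooling  # freed chiller power serves compute
    print(f"remove {chiller_mw:.0f} MW chiller: compute rises 100.0 -> {compute:.1f} MW")
```

Eight to twelve megawatts of reclaimed compute, inside an interconnection that would take years to enlarge, is the concrete form of that conversion.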