Energy · March 31, 2026

US Data Centers Burn 176 TWh a Year. Between 53 and 70 TWh of That Goes Purely to Cooling.

[Photo: High-voltage power transmission lines feeding a large data center campus]
Data centers now account for 4.4% of total US electricity consumption, with cooling representing the single largest non-compute load category.

The conversation about data center energy consumption focuses almost entirely on the wrong number.

The headline figure is 176 terawatt-hours. That is how much electricity US data centers consume annually as of 2026, according to analysis from the International Energy Agency. It represents 4.4% of total US electricity consumption. It gets cited in congressional testimony, in utility earnings calls, in headlines about whether AI is destroying the grid. The number is real. But it obscures the number that actually matters to this industry.

Between 53 and 70 terawatt-hours of that 176 TWh goes purely to cooling. Not compute. Not networking. Not lighting or power conversion losses. Cooling alone. At 30 to 40 percent of total consumption, thermal management is the single largest non-compute load category in the US data center sector, and it is the one load category where the efficiency delta between current practice and best available technology is largest.

53 to 70 TWh. That is more electricity than Portugal consumed in all of 2024. The cooling systems keeping US data centers operational draw more power annually than many sovereign nations.

US Data Center Electricity Breakdown — 176 TWh Total (2026 est.)

  Compute & networking (~55%):      ~96.8 TWh
  Cooling (30–40%):                  53–70 TWh
  Other (power, lighting, misc.):   ~17.6 TWh

Sources: IEA Energy and AI Report; industry-standard PUE benchmarks.
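For readers who want to check the arithmetic, a minimal Python sketch reproducing the breakdown above. The total and the category shares are the figures cited in this article; nothing else is assumed.

# Back-of-envelope breakdown of US data center electricity use (2026 est.).
TOTAL_TWH = 176                  # IEA estimate, US data centers, 2026

COMPUTE_SHARE = 0.55             # compute & networking, approximate
COOLING_SHARE = (0.30, 0.40)     # cooling, low and high bounds
OTHER_SHARE = 1 - COMPUTE_SHARE - sum(COOLING_SHARE) / 2   # residual, ~10%

cooling_low, cooling_high = (TOTAL_TWH * s for s in COOLING_SHARE)
print(f"Compute & networking: ~{TOTAL_TWH * COMPUTE_SHARE:.1f} TWh")  # ~96.8
print(f"Cooling: {cooling_low:.0f}-{cooling_high:.0f} TWh")           # 53-70
print(f"Other: ~{TOTAL_TWH * OTHER_SHARE:.1f} TWh")                   # ~17.6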

What 4.4% Actually Means on the Grid

The IEA's figure of 4.4% of total US electricity consumption is a present-tense snapshot. The trajectory is the problem. Researchers at the Harvard Kennedy School's Belfer Center have documented that data center electricity demand is growing at 15 to 20 percent annually. The Electric Power Research Institute projects that data centers could represent between 9 and 17 percent of US electricity consumption by 2030. That range is enormous, and both ends of it are alarming.
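A straight compound-growth extrapolation shows how those projections connect. In the sketch below, the 176 TWh base and the 15 to 20 percent growth rates come from the figures above; the roughly 4,100 TWh of total US consumption, held flat, is an illustrative assumption, not a figure from the sources. Even this naive extrapolation lands near the bottom of EPRI's range; the upper scenarios assume growth steeper than the trailing rate.

# Compound growth from the 2026 base at 15-20% per year, four years out.
# ASSUMPTION: total US consumption held flat at ~4,100 TWh for illustration.
BASE_TWH = 176          # US data center consumption, 2026
US_TOTAL_TWH = 4_100    # assumed total US grid consumption (illustrative)
YEARS = 4               # 2026 -> 2030

for rate in (0.15, 0.20):
    projected = BASE_TWH * (1 + rate) ** YEARS
    print(f"{rate:.0%}/yr -> {projected:.0f} TWh by 2030 "
          f"(~{projected / US_TOTAL_TWH:.1%} of an unchanged grid)")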

Globally, the picture scales proportionally. The IEA projects that global data center electricity consumption will exceed 500 TWh in 2026, approximately 2% of total global electricity consumption. The global pipeline holds roughly 550 planned projects totaling 125 GW of capacity. All of that capacity will carry the same 30 to 40 percent cooling overhead unless the industry actively changes the underlying technology.

Retail electricity prices in the United States have risen 42% since 2019. Data centers operating at scale are already paying between $1.9 million and $2.8 million per megawatt annually in energy costs. That is before the grid congestion premiums that operators in capacity-constrained markets are absorbing. The cooling fraction of that operating cost is not a rounding error. At $2.4 million per MW per year with 35% going to cooling, a 100 MW facility spends roughly $84 million annually just keeping equipment from overheating.
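The arithmetic behind that last sentence is worth making explicit. A short sketch using the midpoints of the ranges cited above; the 100 MW facility is hypothetical:

# Annual cooling spend for a hypothetical 100 MW facility,
# using the midpoints of the ranges cited above.
COST_PER_MW_YEAR = 2.4e6   # USD; midpoint of $1.9M-$2.8M per MW per year
COOLING_FRACTION = 0.35    # midpoint of the 30-40% cooling share
FACILITY_MW = 100

total_cost = COST_PER_MW_YEAR * FACILITY_MW      # $240M per year
cooling_cost = total_cost * COOLING_FRACTION     # ~$84M per year
print(f"Total energy cost:  ${total_cost / 1e6:.0f}M/year")
print(f"Cooling cost alone: ${cooling_cost / 1e6:.0f}M/year")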

The PUE Gap and What It Actually Represents

Power Usage Effectiveness (PUE) is the standard metric for data center energy efficiency. A PUE of 1.0 means every watt delivered to the facility goes directly to IT equipment. A PUE of 2.0 means one watt goes to IT and another watt goes to overhead, primarily cooling. Typical air-cooled data centers operate between PUE 1.6 and 2.0. Well-optimized facilities with advanced air management push toward 1.4. Liquid-cooled facilities using direct-to-chip systems routinely achieve PUE between 1.05 and 1.2.
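In formula terms, PUE is total facility energy divided by IT equipment energy, which means the overhead fraction of facility power is 1 − 1/PUE. A minimal sketch:

# PUE = total facility energy / IT equipment energy.
# Everything above 1.0 is overhead: cooling, power conversion, lighting.
def overhead_fraction(pue: float) -> float:
    """Share of total facility energy that never reaches IT equipment."""
    return 1 - 1 / pue

for pue in (2.0, 1.6, 1.4, 1.15, 1.05):
    print(f"PUE {pue:.2f} -> {overhead_fraction(pue):.0%} "
          f"of facility power is overhead")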

That gap, 0.4 to 0.8 PUE points across a 176 TWh sector, is tens of terawatt-hours of electricity per year. Apply it to the full US data center load and the numbers become very concrete. Moving from an industry average PUE of 1.58 to 1.15 across 176 TWh of annual consumption would reduce total electricity demand by roughly 48 TWh. Forty-eight terawatt-hours per year. That is comparable to taking 4.5 million US homes off the grid.
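The 48 TWh figure follows from holding IT load constant while lowering the overhead, as the sketch below shows. The household average of roughly 10.7 MWh per year is an assumed figure, close to the US norm, used only for the homes comparison.

# Grid savings from an industry-wide PUE improvement, IT load held constant.
TOTAL_TWH = 176
PUE_CURRENT = 1.58
PUE_TARGET = 1.15
HOME_MWH_PER_YEAR = 10.7   # assumed average US household consumption

it_load = TOTAL_TWH / PUE_CURRENT     # ~111.4 TWh of actual IT load
new_total = it_load * PUE_TARGET      # ~128.1 TWh at the better PUE
savings = TOTAL_TWH - new_total       # ~47.9 TWh per year

homes = savings * 1e6 / HOME_MWH_PER_YEAR   # TWh -> MWh, then homes
print(f"Savings: ~{savings:.0f} TWh/year, ~{homes / 1e6:.1f} million US homes")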

The efficiency argument for liquid cooling has historically been framed as a rack density argument: you need liquid cooling because your GPUs run too hot for air. That framing is accurate but narrow. The more persuasive argument, the one that resonates with utility commissions, policymakers, and hyperscale CFOs facing multi-billion-dollar energy procurement contracts, is the grid argument. Liquid cooling at scale is one of the few available levers that can meaningfully reduce data center electricity demand without reducing compute output. No other intervention comes close in magnitude of impact.

Why the Industry Has Been Slow to Move

The cooling technology existed before the urgency did. Companies like CoolIT Systems, Asetek, and Vertiv have been building direct-to-chip liquid cooling products for years. The hyperscalers knew the physics. Google's first direct liquid cooling deployments date back nearly a decade. Meta has been using custom cold-plate assemblies in its AI training infrastructure since at least 2021.

The reason adoption lagged in the broader market was inertia, not ignorance. Air cooling infrastructure was already installed. Facilities management teams understood it. Maintenance contracts were in place. The incremental cost of ripping out CRAC units and plumbing coolant loops through existing raised-floor environments was real, and the business case required energy prices and rack densities that, until recently, most enterprise customers had not reached.

Both of those barriers are now gone. Rack densities in AI-configured deployments have crossed the physical threshold where air cooling fails regardless of cost. Energy prices have moved far enough that the operating expense savings from liquid cooling produce payback periods measured in months, not years. The industry did not lack conviction; it lacked a forcing function. The forcing function arrived with the H100, and everything after that is a matter of execution speed.

The Grid Is Now a Stakeholder

The most consequential shift in the data center energy conversation is not happening inside hyperscale campuses. It is happening in state utility commissions, in Department of Energy working groups, and in the interconnection queues where data center developers are waiting years for transmission capacity.

When a single data center operator is signing power agreements that add multiple gigawatts to regional grid loads, the grid operator becomes a direct stakeholder in how efficiently that power is used. Several state public utility commissions have already begun requiring demand-side efficiency documentation as part of large load interconnection applications. The question of whether a facility uses air cooling at PUE 1.6 or liquid cooling at PUE 1.15 may soon have regulatory weight, not just commercial weight.

That is a different kind of pressure than the industry has faced before. Cooling vendors who can document PUE outcomes at scale, who can show audited performance data from commissioned facilities, will have an advantage that goes beyond the thermal argument. They will have a regulatory argument. In a permitting environment where grid capacity is the binding constraint on data center growth, that matters.

53 to 70 TWh consumed purely by cooling. Growing at 15 to 20 percent per year. Against a backdrop of 42% higher electricity prices since 2019 and a grid that is already straining under the load. The liquid cooling industry does not need to make the case that its technology works. The physics settled that. The case it needs to make now is that it can deploy fast enough, at scale, to actually move the number. That is the harder problem, and the next 36 months will settle it.