Intel

Long-form analysis, market context, and editorial takes on what's shaping the data center cooling industry.

Germany Says Your Data Center Has to Heat the Neighborhood. The Deadline Is July 1.

On July 1, 2026, a law takes effect in Germany that no other major data center market has attempted at this scale. Any new data center commissioned on or after that date must reuse at least 10% of its waste heat. By 2027, the threshold rises to 15%. By 2028, 20%. Miss the target and the fines start at EUR 50,000, climbing to EUR 100,000 depending on the violation.

The law is called the Energieeffizienzgesetz. The German Parliament passed it in September 2023. It took effect in November of that year. And for most of the time since, the data center industry has treated it like something that would sort itself out before the deadline arrived. The deadline arrived.

This is a premium deep dive, available as a free preview. Subscribe to read the full analysis.

The Fluid That Made Two-Phase Immersion Cooling Work Just Became a Liability Worth $12.5 Billion

On December 20, 2022, 3M announced it would stop manufacturing all PFAS chemicals by the end of 2025. That single decision vaporized the supply chain for two-phase immersion cooling in data centers. The fluids that made the technology possible (Novec 7100, Novec 649, Fluorinert FC-72) are gone. The last day to place a new Novec order was March 31, 2025, and manufacturing lines shut down by the end of the year.

3M did not make this call because it found a better product. It made the call because it was staring down more than 4,000 lawsuits and a $12.5 billion settlement with more than 11,000 U.S. public water systems alleging PFAS contamination in drinking water.

This is a premium deep dive, available as a free preview. Subscribe to read the full analysis.

72% of Data Center Water Consumption Happens Somewhere You Can't See It

The number that dominates the data center water debate is wrong. Not wrong in the sense of inaccurate. Wrong in the sense of incomplete. When people talk about data center water consumption, they picture cooling towers on a rooftop evaporating thousands of gallons an hour. That is real. That happens. And it accounts for roughly 28% of the total water footprint.

The other 72% happens off-site. At the power plants generating the electricity that these facilities consume around the clock. Bluefield Research published these figures in late February 2026 in a report titled "The Water-Power Nexus." The numbers reframe the entire conversation.

This is a premium deep dive, available as a free preview. Subscribe to read the full analysis.

The Global Water Table Is Collapsing. Data Centers Are Drinking Faster.

A United Nations report published in March 2025 introduced a term that should unsettle anyone building cooling infrastructure: "global water bankruptcy." The language is deliberate. According to the UN University's Institute for Water, Environment and Health, the world has moved past water stress and past water crisis into a condition where accumulated damage to freshwater systems has become, in many regions, irreversible. Half of the planet's large lakes have lost water since the early 1990s. Seventy percent of major aquifers are in long-term decline. Dozens of major rivers no longer reach the sea year-round.

Against that backdrop, the data center industry is scaling water consumption at a rate that would have been unthinkable five years ago. A large hyperscale data center consumes roughly 300,000 gallons of water per day. Facilities running dense AI workloads push that figure to 5 million gallons daily. Google reported approximately 6 billion gallons consumed by its data centers in 2024. Microsoft hit 1.69 billion gallons in fiscal year 2024, a 34% year-over-year increase driven almost entirely by AI infrastructure expansion.
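
To see how those figures relate, here is a quick back-of-envelope conversion. It uses only the numbers quoted above; the facility-count equivalences are illustrative, not reported figures:

```python
# Rough scale check on the reported water figures (illustrative arithmetic only).
TYPICAL_SITE_GAL_PER_DAY = 300_000       # large hyperscale facility
AI_DENSE_SITE_GAL_PER_DAY = 5_000_000    # facility running dense AI workloads
GOOGLE_2024_GALLONS = 6_000_000_000      # Google's reported 2024 fleet consumption

fleet_daily_avg = GOOGLE_2024_GALLONS / 365
print(f"Google fleet average: {fleet_daily_avg:,.0f} gal/day")
print(f"= about {fleet_daily_avg / TYPICAL_SITE_GAL_PER_DAY:.0f} typical hyperscale sites")
print(f"= about {fleet_daily_avg / AI_DENSE_SITE_GAL_PER_DAY:.1f} AI-dense sites")
```

Run the arithmetic and Google's fleet averages roughly 16 million gallons a day, the equivalent of about 55 typical hyperscale sites or three AI-dense ones.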

This is a premium deep dive, available as a free preview. Subscribe to read the full analysis.

120 Kilowatts Per Rack and Rising: Why Liquid Cooling Became Mandatory for AI Infrastructure

There is a clean line in the thermal management timeline. Before 2023, air cooling worked for the overwhelming majority of data center deployments. Standard server racks generated 7 to 15 kilowatts of heat, and a well-designed hot-aisle/cold-aisle configuration with precision air handlers could manage that load without breaking a sweat. Literally.

Then AI training clusters showed up at 40, 60, 85 kilowatts per rack. And now the next generation of GPU-dense cabinets is pushing past 120 kW, with designs on the drawing board targeting 200 to 250 kW. Air cooling hits a hard physical ceiling around 25 to 30 kW per rack. The liquid cooling market has responded accordingly. BIS Research pegs the global market at $3.93 billion in 2024, growing to $22.57 billion by 2034. Goldman Sachs projects that 76% of AI servers will be liquid-cooled by the end of 2026, up from 15% in 2024. A fivefold increase in adoption in two years.

This is a premium deep dive, available as a free preview. Subscribe to read the full analysis.

Nevada Banned Evaporative Cooling for Data Centers. Other States Are Watching.

In 2025, Nevada became the first U.S. state to ban evaporative cooling in new data center construction. The decision was not symbolic. Nevada sits in the driest region of the country, relies heavily on the Colorado River system (which has been in sustained decline for over two decades), and had watched a cluster of hyperscale data center proposals land on its doorstep, each one requesting municipal water allocations that would have supplied thousands of homes.

The state said no. Not to data centers entirely. To the specific cooling technology that consumes the most water. California followed with SB 58, a disclosure law requiring data centers above a certain capacity to report their water consumption publicly. Several other states have water reporting mandates in various stages of committee work. The European Commission announced minimum water performance standards for data centers that will take effect in 2026. The regulatory direction is clear. The only variable is speed.

This is a premium deep dive, available as a free preview. Subscribe to read the full analysis.

Free to Read

The Water-Power Tradeoff That Data Center Operators Keep Getting Wrong

A growing number of data center operators have started swapping water-cooled systems for air-cooled alternatives, claiming sustainability wins. The math tells a different story. Air cooling eliminates on-site water use, sure. But it doubles or triples electricity consumption, pushing the water burden upstream to power plants that need their own cooling loops to generate that extra juice. The problem doesn't vanish. It moves.
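
The shift is easy to sketch. The numbers below are assumptions chosen for illustration (a consumptive water intensity of roughly 0.5 gal/kWh for thermal generation, plausible PUE and WUE values), not measurements from any facility:

```python
# Back-of-envelope water footprint for a 10 MW IT load under two cooling designs.
# Every intensity figure here is an assumption for illustration.
IT_KW = 10_000
HOURS = 8760
GRID_GAL_PER_KWH = 0.5  # assumed consumptive water use of thermal power generation

def water_footprint(pue: float, onsite_gal_per_kwh: float) -> tuple[float, float]:
    """Return (on-site, upstream) gallons per year for a given cooling design."""
    onsite = IT_KW * HOURS * onsite_gal_per_kwh          # evaporative loss at the site
    upstream = IT_KW * pue * HOURS * GRID_GAL_PER_KWH    # water embedded in electricity
    return onsite, upstream

for label, pue, wue in [("evaporative", 1.2, 0.5), ("air-cooled ", 1.5, 0.0)]:
    onsite, upstream = water_footprint(pue, wue)
    print(f"{label}: on-site {onsite/1e6:5.1f}M gal/yr, upstream {upstream/1e6:5.1f}M gal/yr")
```

Under these assumptions the upstream term dominates either way, which is the same picture Bluefield's 28/72 split paints: the "zero water" design still consumes tens of millions of gallons a year, just at someone else's meter.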

Colocation vacancy rates have cratered to 2.3%, down from 9.8% in 2020. The construction pipeline grew tenfold over the same period. Every new facility that comes online has to make a fundamental call on how it manages heat, and that decision ripples through local water tables and power grids for decades. A large data center drinks roughly what a town of 50,000 people does in a day. Regulators in multiple states have started blocking projects over that kind of draw.

The operators getting this right tend to match their cooling architecture to their actual scale. Hyperscalers running 100+ MW loads are exploring on-site power generation, including hydrogen fuel cells that produce water as a byproduct. Facilities in central Ohio are already piloting private microgrids built around this concept. Mid-tier and edge deployments, meanwhile, are finding that modern evaporative cooling towers can hit the efficiency marks without the electricity penalty. And micro data centers, anything from a large closet to a shipping container, remain firmly in air-cooling territory, where even the smallest cooling tower would offer ten times more capacity than needed.

True sustainability means refusing to solve one problem by creating another. The operators who claim green credentials while tripling their grid draw are playing an accounting trick, not running an efficient facility.

Adaptive Cooling, Immersion Bets, and the Vendors Shaping What Comes Next

Schneider Electric ships cooling units packed with IoT sensors that run predictive maintenance cycles before failures happen. Iceotope has built immersion cooling platforms that work across traditional, hyperscale, and edge environments, pushing PUE numbers into territory that air-cooled facilities cannot touch. Stulz leans on free cooling and precise humidity control to shave CO2 output. Three very different approaches from three companies that agree on one thing: the old way of blowing cold air through server rows has a ceiling, and the industry is about to hit it.

The next shift is adaptive cooling, systems that use AI to learn a facility's thermal behavior in real time and adjust output to match actual load. Most data centers today over-cool by a significant margin because their control systems react to worst-case thresholds rather than live conditions. Adaptive systems eliminate that cushion, and the energy savings compound across thousands of racks.
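
A minimal sketch of the control idea, with a simple proportional correction standing in for the learned thermal model. The gains, setpoints, and load trace below are all hypothetical, not any vendor's implementation:

```python
# Fixed worst-case cooling vs. adaptive output that tracks live conditions.
# A real adaptive system learns a facility-specific thermal model; a
# proportional controller stands in for it here. All numbers are illustrative.
WORST_CASE_KW = 500.0     # fixed design point: always cool for the hottest hour
TARGET_INLET_C = 27.0     # target server inlet temperature
GAIN_KW_PER_DEG = 25.0    # hypothetical controller gain

def adaptive_output_kw(inlet_c: float, it_load_kw: float) -> float:
    """Cooling output matched to live IT load, corrected by inlet temperature error."""
    return max(it_load_kw + GAIN_KW_PER_DEG * (inlet_c - TARGET_INLET_C), 0.0)

# A sample day where actual load runs well under the worst-case design point.
for hour, (inlet_c, load_kw) in enumerate([(24.5, 310), (25.8, 355), (27.9, 420), (26.2, 380)]):
    out = adaptive_output_kw(inlet_c, load_kw)
    print(f"h{hour}: fixed {WORST_CASE_KW:.0f} kW, adaptive {out:.0f} kW, saved {WORST_CASE_KW - out:.0f} kW")
```

The savings in any single hour look modest; the point is that they recur every hour the facility is not at its worst-case design condition, which is most of them.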

Digital Realty introduced direct liquid cooling across 170 data centers worldwide in 2024, signaling that the colocation giants see liquid as table stakes rather than a premium add-on. Edge computing adds another dimension. Smaller facilities in distributed locations create opportunities for cooling designs that would never make sense at hyperscale, from geothermal loops to ambient-air setups in northern climates.

The vendors winning contracts right now are the ones who can deliver across all three tiers: hyperscale, colo, and edge. Single-product companies are getting boxed out.

500 Megawatts in an Indiana Cornfield: The Physical Cost of the AI Buildout

Pull up satellite imagery of New Carlisle, Indiana, from 2023 and you see farmland. Pull it up today and you see seven rectangular data centers with 23 more permitted. A single campus there already draws over 500 megawatts, enough to power several hundred thousand homes. When the full build finishes, the load will exceed what two cities the size of Atlanta consume.

The Atlantic's Matteo Wong reported from these sites, including Memphis, where a new data center megaproject sits downwind from an active natural-gas plant in a neighborhood already dealing with pollution from decades of industrial use. KeShaun Pearson, who runs the nonprofit Memphis Community Against Pollution, told Wong the area's air already tastes like soot and asphalt. Another facility won't improve things.

The numbers at a national level tell the same story. U.S. data centers consumed 176 terawatt-hours in 2023, roughly 4.4% of total national electricity. Globally, the figure hit 415 TWh in 2024 and is projected to double to 945 TWh by 2030. AI-related capital spending accounted for 92% of GDP growth in the first half of 2025, and the tech sector has ballooned from 22% to a third of the S&P 500 since ChatGPT launched. That concentration of economic activity in a single sector, built on a single resource constraint, should make anyone in infrastructure planning pay attention.

Cooling is the bottleneck inside the bottleneck. Forty percent of a data center's electricity goes to thermal management. At the densities AI training requires, the cooling problem scales faster than the compute problem.

67% Energy Savings Are on the Table. Most Data Centers Leave Them There.

A comprehensive review published in the International Journal of Refrigeration examined every major cooling optimization technology available to data centers today. The headline finding: advanced cooling architectures can cut energy consumption by up to 67.2% compared to conventional setups. The industry average PUE, according to Uptime Institute's 2024 survey, sits at 1.56. State-of-the-art facilities report 1.06. That gap represents billions of kilowatt-hours left on the table every year.
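
The size of that gap is easy to verify. A quick calculation for a hypothetical 50 MW IT load (the facility size is an assumption; the PUE values are the ones quoted above):

```python
# Annual overhead energy at the survey-average PUE vs. state of the art.
# PUE = total facility energy / IT energy, so overhead = IT energy * (PUE - 1).
IT_LOAD_MW = 50   # hypothetical facility size
HOURS = 8760

def overhead_gwh(pue: float) -> float:
    """Non-IT (cooling, distribution) energy in GWh per year."""
    return IT_LOAD_MW * (pue - 1) * HOURS / 1000

avg, best = overhead_gwh(1.56), overhead_gwh(1.06)
print(f"overhead at PUE 1.56: {avg:.0f} GWh/yr")            # ~245 GWh
print(f"overhead at PUE 1.06: {best:.0f} GWh/yr")           # ~26 GWh
print(f"gap for one 50 MW site: {avg - best:.0f} GWh/yr")   # ~219 GWh
```

One mid-size site at the average PUE wastes on the order of 219 million kWh a year relative to the state of the art; multiply across the fleet and the billions-of-kilowatt-hours framing checks out.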

The research breaks down where the waste lives. In a typical data center, only 30% of electricity actually reaches the servers doing useful work. The thermal management stack (air conditioning, chillers, humidifiers) consumes 45%. The rest goes to power distribution and overhead. Those proportions have been roughly stable for years, which means the industry has been building new capacity without fixing the fundamental inefficiency of how it cools existing capacity.

Liquid cold plates, immersion tanks, heat pipes, and thermosiphon-based systems all showed significant PUE improvements in the review. AWS reported a 46% drop in mechanical cooling energy after deploying a custom liquid solution, bringing its global PUE to 1.15. Vertiv's data shows that moving to 75% liquid cooling in a hybrid facility cuts total site power consumption by 15.5%.

Microprocessor thermal design power is expected to blow past 700 watts this year. Air cooling tops out around 280 watts. The arithmetic on when liquid becomes mandatory has already been done. The only question is how many operators will wait until they have no other option.

$64 Billion in Data Center Projects Blocked or Delayed by the People Next Door

Community opposition has stalled or killed $64 billion worth of U.S. data center projects. That figure comes from Good Jobs First, which has been tracking the growing collision between hyperscale ambitions and local resistance since the buildout accelerated in 2024.

The opposition is not coming from environmentalists alone. Homeowners worried about property values. Farmers who do not want to sell to a land agent working for an unnamed tech company. Municipal leaders watching their water tables drop. School boards wondering why a $2 billion facility pays almost nothing in property taxes thanks to abatement deals negotiated behind closed doors.

New York's S.9144, introduced in early 2026, would impose a three-year statewide pause on permits for data centers drawing 20 megawatts or more. The bill has not passed its originating chamber, but the fact that it was introduced in one of the country's most important data center markets says something about where the political winds are blowing.

Moratorium bills have been introduced in 11 states across 14 separate pieces of legislation in 2026. None have passed yet. But the pattern is consistent: proposals are getting more specific, the sponsors are getting more serious, and the public comment periods are getting louder.

Over 300 data center bills were filed across more than 30 states in the first six weeks of 2026 legislative sessions. The industry spent years operating in an incentive-friendly regulatory environment. That environment is shifting. Operators who plan multi-year construction timelines without accounting for community opposition are building schedule risk into every project.

Why Immersion Cooling Keeps Losing to Cold Plates

Immersion cooling can hit a PUE of 1.02. Direct-to-chip liquid cooling lands around 1.15 to 1.20. On raw thermal efficiency, immersion wins. It has won that comparison for years. And yet direct-to-chip holds 47% of the liquid cooling market while immersion sits at roughly $270 million, growing at 25% CAGR toward a projected $2.54 billion by 2032.

The gap between what works in a lab and what ships in volume comes down to three things that have nothing to do with thermodynamics.

Server compatibility is the first. Direct-to-chip cold plates mount onto existing CPU and GPU packages inside standard server chassis. Dell, HPE, and Lenovo all offer factory-integrated DTC options. Immersion requires purpose-built or heavily modified servers. Standard components with standard connectors and standard cable routing do not survive submersion in dielectric fluid.

Workforce readiness is the second. Most data center operations teams have spent their careers managing air-cooled environments. DTC adds manifolds, hoses, and coolant distribution units. Immersion asks a maintenance technician to pull a server out of a tank of fluid, let it drain, service it, and resubmerge it.

Retrofit economics is the third. DTC fits into existing rack infrastructure with manageable modifications. Immersion requires tanks, fluid inventory, specialized containment, and a fundamentally different floor layout. For the majority of operators adding liquid cooling to facilities that were built for air, DTC is the path that does not require gutting the room.

The Liquid Cooling Supply Chain Race Has Three Frontrunners and a Hundred Chasers

The data center liquid cooling market hit $5.52 billion in December 2025. It is projected to reach $15.75 billion by 2030. That kind of growth rate attracts everyone. The question is who can actually manufacture at the scale the buildout demands.

Schneider Electric moved first among the industrial conglomerates. Their acquisition of Motivair in February 2025 gave them a dedicated liquid cooling portfolio: coolant distribution units, ChilledDoor rear-door heat exchangers, dynamic cold plates, and chillers. Since the acquisition, Motivair has opened a fourth production facility and is tripling global manufacturing capacity across plants in Buffalo, Italy, and India.

Vertiv has been in the cooling business longer than most of its competitors have existed. Their rear-door heat exchangers and CDU product lines are specified by default at several major colocation providers.

Eaton's $9.5 billion acquisition of Boyd Thermal was the largest pure-play cooling deal in data center history. Boyd brings manufacturing depth in heat exchangers, cold plates, and thermal interface materials. The combination mirrors Schneider's strategy: own enough of the cooling and power stack to sell integrated solutions.

The constraint is not demand. Demand is running ahead of supply across CDUs, cold plates, and rear-door heat exchangers. Lead times have stretched. The vendors who can ship on schedule will capture market share regardless of whose product benchmarks better on a spec sheet.

NVIDIA's Watt Roadmap Is Writing the Cooling Industry's Business Plan

Every cooling technology decision being made in data centers right now traces back to a single forcing function: how many watts NVIDIA's next GPU generates. The H100 runs at 700 watts. The Blackwell B200 pushes 1,000 watts. Rubin, the next generation, is expected to climb higher. Each step up the power ladder compounds the thermal load per rack, per row, per facility.

Air cooling tops out around 25 to 30 kW per rack. That ceiling has not moved meaningfully in years, and it will not: the physics of convective heat transfer through air set a hard limit.
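
That limit falls straight out of the sensible-heat equation, Q = m_dot * cp * dT. A rough sizing check (the 20 K cold-aisle-to-hot-aisle rise is an assumed design value):

```python
# Airflow required to carry rack heat away in air: Q = m_dot * cp * dT.
CP_AIR_KJ = 1.006   # specific heat of air, kJ/(kg*K)
RHO_AIR = 1.2       # air density, kg/m^3
DELTA_T = 20.0      # assumed allowable cold-aisle to hot-aisle rise, K

def airflow_m3s(rack_kw: float) -> float:
    """Volumetric airflow needed to absorb rack_kw at a DELTA_T temperature rise."""
    mass_flow_kg_s = rack_kw / (CP_AIR_KJ * DELTA_T)
    return mass_flow_kg_s / RHO_AIR

for kw in (15, 30, 120):
    flow = airflow_m3s(kw)
    print(f"{kw:4d} kW rack: {flow:4.2f} m^3/s (~{flow * 2119:,.0f} CFM)")
```

A 120 kW rack needs more than 10,000 CFM through a single cabinet, and fan power rises roughly with the cube of fan speed. That is the wall the paragraph above describes.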

Non-AI workloads across the global data center fleet total approximately 38 gigawatts. AI workloads are expected to hit 44 GW in 2026. The crossover point, where AI thermal load exceeds everything else combined, is arriving this year.

The cooling industry is, in effect, building to NVIDIA's spec. When Jensen Huang announces a new chip architecture, the thermal management implications ripple through CDU manufacturers, cold plate suppliers, and facility designers within weeks. The vendors who can design, qualify, and ship cooling hardware matched to the next GPU generation before that generation reaches volume production will own the upgrade cycle.

Frore Systems Just Raised $143 Million at a $1.64 Billion Valuation for Chip-Level Cooling

Frore Systems closed a funding round that valued the company at $1.64 billion. The $143 million raise was led by MVP Ventures, with participation from Fidelity and Qualcomm Ventures.

The company makes solid-state cooling devices. No fans. No moving fluids. Frore's AirJet technology uses piezoelectric membranes that vibrate at ultrasonic frequencies to create localized airflow directly over a chip surface. The data center application extends the same principle to GPU and CPU packages where targeted, high-velocity airflow can supplement or replace broader cooling architectures.

A $1.64 billion valuation for a cooling component company is notable on its own. For context, the entire immersion cooling segment is projected at $2.54 billion by 2032. Frore is valued at 65% of that projected market before its data center product is widely deployed.

Whether AirJet technology scales to the thermal loads of a 1,000-watt Blackwell GPU remains to be demonstrated at production volume. Frore does not need to replace liquid cooling. It needs to prove that a chip-level supplement adds enough thermal headroom to justify the per-unit cost. At $1.64 billion, the market is betting it can.

Zero-Water Cooling Pilots Are Launching in Phoenix and Mt. Pleasant. The Results Will Set the Standard.

Two facilities opening in 2026 will answer the question that the entire cooling industry has been arguing about: can you cool a high-density data center without consuming water, in a climate where you actually need cooling?

Phoenix, Arizona is the first test. A zero-water pilot project launching this year in a market where summer temperatures regularly exceed 110 degrees Fahrenheit and the municipal water supply depends on a Colorado River system that has been in sustained decline for two decades. If zero-water cooling works in Phoenix, it works everywhere in the continental United States.

Mt. Pleasant, Wisconsin is the second. A facility designed to operate without consumptive water use in a humid Midwestern climate where the thermal challenge is different. Humidity limits the effectiveness of certain dry cooling approaches.

The technology stack for zero-water cooling is not a mystery. Dry coolers, closed-loop liquid systems, and heat rejection without evaporation. The tradeoff is capacity and cost. Dry coolers sized for peak ambient temperatures in Phoenix require significantly more radiator surface area and fan power than an equivalent evaporative system.
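
The sizing penalty follows from basic heat exchanger arithmetic: for a fixed heat load, Q = U * A * dT, so required surface area scales inversely with the driving temperature difference. A dry cooler rejects against the ambient dry-bulb; an evaporative tower approaches the much lower wet-bulb. A sketch with assumed Phoenix design temperatures:

```python
# Why Phoenix dry coolers get big: required area A ~ Q / (U * dT), so less
# driving temperature difference means proportionally more surface.
# All temperatures below are assumed design values for illustration.
LOOP_RETURN_C = 55.0      # assumed warm-water cooling loop temperature
PEAK_DRY_BULB_C = 46.0    # assumed Phoenix design dry-bulb (~115 F)
PEAK_WET_BULB_C = 24.0    # assumed coincident wet-bulb

dt_dry = LOOP_RETURN_C - PEAK_DRY_BULB_C    # what a dry cooler works against
dt_evap = LOOP_RETURN_C - PEAK_WET_BULB_C   # what an evaporative tower works against

print(f"dry cooler driving dT:  {dt_dry:.0f} K")
print(f"evaporative driving dT: {dt_evap:.0f} K")
print(f"rough relative surface area, dry vs evaporative: {dt_evap / dt_dry:.1f}x")
```

Real selections add approach temperatures and different heat transfer coefficients on each side, but the direction holds: the thinner the temperature margin, the more radiator area and fan power a dry system needs.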

The results from these pilots will influence permitting decisions, facility design standards, and vendor selection for the next generation of builds. If zero-water performance holds at commercially viable cost points, the argument for new evaporative installations in water-stressed regions collapses.

The Biggest Barrier to Liquid Cooling Adoption Has Nothing to Do With Technology

Ask a cooling vendor what slows down liquid cooling deployment and you will hear about PUE, capex, and fluid compatibility. Ask the operator who just signed a purchase order for 500 racks of direct-to-chip cooling and the answer is different. They cannot find people who know how to install it, maintain it, and troubleshoot it when something goes wrong.

The workforce skills gap is the single most cited operational barrier to liquid cooling adoption in 2026. The data center industry spent two decades building a labor force trained on air-cooled infrastructure. CRAC units, raised floors, hot-aisle/cold-aisle containment. Those skills are mature and widely distributed.

Liquid cooling introduces an entirely different set of competencies. Plumbing and fluid dynamics replace airflow management. Leak detection systems require different monitoring protocols. Cold plate connections at the server level demand mechanical precision that a technician accustomed to swapping fans and filters has never been asked to deliver.

Some operators are solving this by partnering directly with cooling vendors for managed maintenance. Others are investing in internal training programs, pulling from adjacent trades like HVAC, plumbing, and industrial process cooling where the mechanical skills overlap.

The vendors who bundle training and certification into their sales process will have an advantage that does not show up on a spec sheet. The cooling hardware market has multiple credible suppliers. The cooling labor market does not have enough credible technicians. That imbalance will shape purchasing decisions as much as price or performance.