Deals May 8, 2026

Anthropic Signed $1.8 Billion With Akamai for Compute Capacity. Akamai Stock Jumped 27%. The Cooling Capacity to Deliver It Is the Next Question.

Anthropic disclosed a seven-year $1.8 billion compute capacity agreement with Akamai Technologies on May 8. DCD's coverage notes this is the largest single deal in Akamai's history. Akamai shares closed up 27% on the announcement, reflecting a market view that the CDN company is now positioned as a credible AI infrastructure partner.

The deal extends Anthropic's compute commitments alongside its existing arrangements with Google, Amazon Web Services, CoreWeave, and the SpaceX-xAI Colossus 1 facility. CEO Dario Amodei has publicly cited 80x year-over-year growth in annualized revenue as the demand signal driving these procurement decisions. Revenue from the Akamai commitment is expected to start in Q4 2026, contributing $20 million to $25 million in that quarter.
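
The disclosed figures imply a steep ramp. A quick sketch of the run-rate math, using only the numbers in the announcement (the flat-amortization comparison is our own framing, not a disclosed ramp schedule):

```python
# Back-of-envelope run-rate math for the disclosed deal terms.
# Inputs come from the announcement; the flat-ramp comparison is an assumption.

total_commitment = 1.8e9         # $1.8B total commitment
term_years = 7                   # seven-year agreement
q4_2026_revenue = (20e6, 25e6)   # disclosed Q4 2026 contribution range

avg_annual = total_commitment / term_years
print(f"Average annual run-rate: ${avg_annual / 1e6:.0f}M")  # ~$257M/yr

# A flat quarterly run-rate would be ~$64M; the first quarter delivers
# only about a third of that, implying most revenue lands in later years.
flat_quarter = avg_annual / 4
low, high = q4_2026_revenue
print(f"Q4 2026 at {low / flat_quarter:.0%}-{high / flat_quarter:.0%} of a flat quarterly run-rate")
```

In other words, the contract is heavily back-loaded: capacity has to come online before the revenue does, which is what makes the thermal buildout timeline below the binding constraint.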

What This Says About AI Inference Distribution

Akamai's core business is content delivery: pushing data from edge nodes close to end users. The Anthropic deal repurposes that edge footprint for AI inference. The architectural fit makes sense. Inference workloads benefit from proximity to users in a way that training workloads do not. A user querying Claude from Tokyo gets a better experience if the inference happens in Tokyo or nearby, not in Northern Virginia.
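
The proximity argument is simple propagation physics. A rough sketch, using the standard two-thirds-of-light-speed approximation for fiber (the distances are illustrative, not measured routes):

```python
# Idealized fiber round-trip-time estimate: why inference benefits from
# edge proximity. Assumes ~2/3 c propagation in glass and straight-line
# distances; real routes add routing and queuing overhead on top.

C_FIBER_KM_PER_MS = 200  # ~200 km per millisecond in fiber

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation time over fiber, ignoring routing overhead."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

print(f"Tokyo -> N. Virginia (~11,000 km): {rtt_ms(11_000):.0f} ms RTT")
print(f"Tokyo -> in-metro edge (~50 km):   {rtt_ms(50):.1f} ms RTT")
```

A hundred-plus milliseconds of unavoidable round-trip latency per token exchange is material for interactive inference, and no amount of compute in Northern Virginia can buy it back.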

The implication is that Anthropic and other foundation model providers are pushing inference compute outward from a handful of mega-campuses to a distributed network of edge sites. Akamai operates roughly 4,200 edge points of presence globally. Even if only a fraction of those are upgraded for AI inference, the cooling implications for those sites are substantial. Edge data centers historically run at low rack densities with air cooling and modest chilled water plants. AI inference at scale pushes those sites toward direct-to-chip cooling territory whether or not the existing infrastructure was designed for it.
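
To see why the retrofit pressure is real, consider a hypothetical per-site thermal comparison. None of these density figures come from the deal; they are typical industry assumptions used to illustrate the gap:

```python
# Illustrative per-site thermal math under hypothetical assumptions.
# Shows why AI inference pushes edge sites toward direct-to-chip cooling.

LEGACY_RACK_KW = 8    # assumed air-cooled edge rack density
AI_RACK_KW = 60       # assumed GPU inference rack density
RACKS_PER_SITE = 10   # assumed small edge deployment

legacy_site_kw = LEGACY_RACK_KW * RACKS_PER_SITE
ai_site_kw = AI_RACK_KW * RACKS_PER_SITE
print(f"Legacy site IT load: {legacy_site_kw} kW; AI inference load: {ai_site_kw} kW")
print(f"Thermal capacity multiplier per site: {ai_site_kw / legacy_site_kw:.1f}x")
```

Under these assumptions a converted site needs several times its original heat-rejection capacity, which is rarely achievable with the existing air-side plant alone.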

The Retrofit Wave Just Got Bigger

The Akamai-Anthropic deal sits inside a broader pattern of brownfield liquid-cooling retrofit work now in active deployment across the industry. Akamai will need to upgrade thermal capacity at a substantial portion of its edge footprint to deliver the compute Anthropic is paying for. The cooling vendor base supplying brownfield retrofits is well positioned for this kind of project, because the contracts are smaller per site but multiply across hundreds of locations.

The other side of the calculation is timing. Anthropic's Q4 2026 revenue start means Akamai needs the capacity online in roughly six months. That is an aggressive timeline for thermal infrastructure upgrades at scale. Cooling vendors with established retrofit playbooks at edge sites will get the first call, and those that have been investing in modular cooling solutions tailored to existing colocation and edge environments are the most likely beneficiaries.
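
The deployment cadence that timeline implies can be sketched directly. The upgrade fraction here is a hypothetical planning figure, not anything Akamai has disclosed:

```python
# Deployment-cadence sketch for the six-month window. The 5% upgrade
# fraction is an assumption for illustration, not a disclosed figure.

TOTAL_POPS = 4200        # Akamai's edge footprint, per the article
upgrade_fraction = 0.05  # assume only 5% of PoPs get AI retrofits
window_weeks = 26        # roughly six months to the Q4 2026 revenue start

sites = int(TOTAL_POPS * upgrade_fraction)
print(f"{sites} sites in {window_weeks} weeks ≈ {sites / window_weeks:.1f} sites/week")
```

Even a conservative 5% conversion rate means commissioning roughly eight sites per week, every week, for six months straight, which is why standardized, repeatable retrofit kits matter more here than bespoke engineering.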

The Anthropic Compute Stack Is Getting Strategic

Anthropic's stack of compute commitments is instructive when read together: Google for general-purpose cloud capacity, AWS for both training and inference, CoreWeave for purpose-built GPU clusters, Colossus 1 for the largest training runs, and now Akamai for edge inference. That stack reads as an explicit diversification strategy. Anthropic is not concentrating its compute with a single supplier; the company is building optionality against supply, geography, and political risk simultaneously.

The cooling vendors who serve each of those infrastructure layers have a fragmented view of the same customer. Google has its own thermal vertical. AWS designs its own cooling. CoreWeave specifies cooling separately for each cluster build. Colossus runs the SpaceX-xAI partnership's bespoke architecture. Akamai is now adding a fifth thermal architecture for the same workload. The cooling industry serving Anthropic indirectly is now five separate procurement conversations, each with different specifications and lead times. That fragmentation is the new normal as foundation model providers spread their compute commitments across suppliers rather than concentrating them with one.