Infrastructure April 28, 2026

HPE's Containerized Racks Support 400 kW Per Rack Without Fans. Era4 Is Building in Sheffield. Traditional Data Centers Take Three Years. This Takes Months.

Traditional data center projects run 24 to 36 months from ground break to first power-on. Permitting, civil works, structural systems, mechanical plant, electrical distribution, IT fit-out, commissioning. For an operator who committed to AI capacity in 2025 and needs it running in 2026, that timeline is a commercial problem with no engineering solution inside the traditional construction model. The answer the industry has arrived at is to parallelize the work. While civil teams prepare the site, the hardware is being integrated at a factory. When the site is ready, the containerized unit arrives and connects. Commissioning takes days.

HPE's current containerized direct liquid-cooled systems support up to 400 kW per rack using a fanless architecture. Every watt of heat goes to the liquid loop. There are no server fans. NVIDIA's roadmap projects that future GPU generations will require megawatt-level rack power, and HPE's 400 kW offering is designed as the first step on that trajectory. The containerized format gets that capacity to site in a fraction of the time a conventionally built facility requires.

What Fanless at 400 kW Actually Means

Removing fans from a 400 kW rack is not a marketing claim about noise reduction. It is a statement about thermal architecture. At 400 kW, the liquid cooling loop carries all heat rejection. Cold plates on GPU and CPU dies transfer heat to coolant circulating through manifolds and CDUs. No air is involved in the primary thermal path.

At standard CDU parameters, 20°C supply temperature and 45°C return, carrying 400 kW of thermal load requires approximately 3.8 liters per second of water flow per rack. That is manageable with industrial CDU units and properly sized primary-loop manifolding. The reason the fanless architecture matters at this density is not aesthetic. A rack of 40 servers, each with two 80mm axial fans running at moderate speed, consumes 3 to 5 kW in server fan power alone. Eliminating that load cuts electrical consumption that does no computation, a saving PUE does not even register because server fan power sits inside the IT-load denominator. More practically, removing fans at the rack level removes moving parts operating in a high-heat environment, which is where bearing failures and fan replacements concentrate in conventionally deployed systems. Fewer moving parts inside the thermal envelope means fewer unplanned maintenance events.
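
Both figures can be sanity-checked in a few lines: the flow from the energy balance Q = ṁ·c_p·ΔT, the fan load from a per-fan wattage estimate. A minimal sketch in Python, assuming plain water in the loop and 40 to 60 W per 80mm server fan at moderate speed (assumptions, not HPE figures):

    # Back-of-envelope checks for the figures above. Plain water assumed;
    # a glycol mix has lower specific heat and needs somewhat more flow.
    Q_RACK = 400_000.0          # rack thermal load, W
    C_P = 4186.0                # specific heat of water, J/(kg*K)
    DT = 45.0 - 20.0            # CDU return minus supply temperature, K
    RHO = 0.995                 # water density near loop temperature, kg/L

    m_dot = Q_RACK / (C_P * DT)                               # mass flow, kg/s
    print(f"coolant flow: {m_dot / RHO:.1f} L/s per rack")    # ~3.8 L/s

    # Parasitic fan load of the equivalent air-cooled rack:
    # 40 servers x 2 fans each, at an assumed 40 to 60 W per fan.
    FANS = 40 * 2
    for w in (40, 60):
        print(f"fan load at {w} W/fan: {FANS * w / 1000:.1f} kW")   # 3.2 to 4.8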

Era4 in Sheffield

Era4, a British AI infrastructure company based in Sheffield, is deploying containerized liquid-cooled systems with hardware scheduled to be operational by mid-2026. Ground preparation for the Sheffield project began in early 2026. The timeline is possible specifically because the hardware integration is happening at the factory concurrently with civil site work. Era4 is building AI infrastructure for customers who cannot operate on a 36-month construction timeline, and the containerized modular approach is what makes the schedule viable.

Contour Advanced Systems, based in Varsseveld, the Netherlands, is among the specialized manufacturers building the containerized units for deployments like Era4's across Europe. These are purpose-built enclosures with server racks, cooling, power distribution, and structured cabling already integrated at the factory. External requirements at the deployment site are the electrical feed, a water supply, and a water return for the liquid cooling loop. The containerized unit arrives with everything else complete.
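
To make the narrowness of that interface concrete, here is a hedged sketch of the connection list as a data structure; the field layout and the five-rack unit size are illustrative assumptions, not Contour or HPE specifications:

    from dataclasses import dataclass

    # Hypothetical interface between a containerized unit and its site.
    # Everything else arrives factory-integrated inside the enclosure.
    @dataclass
    class SiteInterface:
        electrical_feed_kw: float    # utility power to the unit's switchgear
        water_supply_temp_c: float   # facility water into the unit's CDU loop
        water_return_temp_c: float   # heated water back out to heat rejection
        water_flow_l_s: float        # loop flow sized to the IT load

    # An assumed five-rack unit at 400 kW per rack, reusing the
    # ~3.8 L/s-per-rack flow figure derived above.
    unit = SiteInterface(
        electrical_feed_kw=5 * 400,
        water_supply_temp_c=20.0,
        water_return_temp_c=45.0,
        water_flow_l_s=5 * 3.8,
    )
    print(unit)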

The Overprojection Problem Traditional Builds Create

Traditional data center design forces operators to commit to cooling and power infrastructure well ahead of knowing which workloads will arrive and at what scale. A facility permitted for 30 MW of IT load installs 30 MW of mechanical plant, whether the first year of operations loads 3 MW or 30 MW. The capital tied up in that overprovisioned capacity sits idle while the facility ramps toward full utilization, and the plant incurs operational overhead the whole time. For AI infrastructure specifically, where workload requirements shift on GPU generation cycles measured in 12 to 18 months, committing to fixed mechanical plant five years in advance is a procurement strategy that ages poorly.
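
The cost of carrying that idle plant is easy to put rough numbers on. A minimal sketch, with an assumed capex figure and ramp schedule (both illustrative, not drawn from any cited build):

    # Capital sitting idle when a 30 MW plant is installed on day one.
    # The $/MW figure and the ramp schedule are illustrative assumptions.
    CAPEX_PER_MW = 4_000_000       # assumed mechanical/electrical capex, $/MW
    PLANT_MW = 30                  # plant installed up front
    RAMP_MW = [3, 8, 15, 22, 30]   # assumed IT load by year of operation

    for year, load in enumerate(RAMP_MW, start=1):
        idle = PLANT_MW - load
        print(f"year {year}: {load:>2} MW loaded, {idle:>2} MW idle "
              f"-> ${idle * CAPEX_PER_MW / 1e6:.0f}M of plant capital unused")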

Containerized modular builds let operators add capacity in increments that match actual demand. Data center capacity, cooling capacity, and power delivery can scale independently as workloads grow. A deployment that starts at 2 MW of GPU cluster capacity can add a containerized unit to reach 4 MW without redesigning the primary cooling plant, because each containerized unit ships with its own integrated CDU and heat rejection equipment. The facility grows incrementally rather than all at once.
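
Under the modular model, the capacity plan reduces to a unit count. A sketch, again assuming a hypothetical five-rack, 2 MW containerized unit:

    import math

    # Each containerized unit ships with its own CDU and heat rejection,
    # so capacity grows in fixed steps with no central-plant redesign.
    RACK_KW = 400
    RACKS_PER_UNIT = 5                           # assumed unit size
    UNIT_MW = RACK_KW * RACKS_PER_UNIT / 1000    # 2.0 MW per unit

    def units_needed(target_mw: float) -> int:
        """Containerized units required to reach a target IT load."""
        return math.ceil(target_mw / UNIT_MW)

    for target_mw in (2, 4, 10):
        print(f"{target_mw} MW target -> {units_needed(target_mw)} unit(s)")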

Where Containerized Liquid Cooling Fits in the Broader Market

Containerized data centers were originally associated with edge deployments, oil platforms, military forward operating bases, and remote industrial sites. The association with extreme environments is accurate but incomplete. The same characteristics that make containerized systems deployable in those environments (factory integration, self-contained mechanical systems, rapid site commissioning) are exactly what AI operators need when they are trying to commission GPU capacity faster than conventional construction allows.

The $750 billion in data center capex committed through 2028 includes both conventional hyperscale builds and modular deployments serving operators who cannot access construction timelines that hyperscalers negotiate. For a regional AI cloud provider or an enterprise building internal GPU capacity, the containerized path from hardware commitment to operational cluster compresses to months rather than years. Era4's Sheffield deployment is a production example of what that timeline looks like in practice.

The operators who need GPU capacity in 2026 and do not have 36 months to build it should be running the numbers on containerized modular deployments. HPE's 400 kW fanless specification is not a theoretical ceiling. It is a current catalog offering. Era4 is building on it now.