Operations May 11, 2026

A Fire at NorthC's Almere Data Center Took Utrecht University Offline. The Recovery Is Delayed Six Days Waiting on a European Component.

A fire broke out around 8:45 a.m. on Thursday, May 7 at the rear of NorthC's Almere data center on Rondebeltweg in the Netherlands. Techzine has been tracking the recovery. The fire department initially escalated the response to GRIP 1, then downgraded it to GRIP 0 the following day once the fire was contained. NorthC expects to restore power by noon on May 13, six days after the incident, delayed by a critical component that needed to be shipped from a European supplier.

The customers affected include Utrecht University, public transport operator Transdev, the Dutch Chamber of Commerce, and Statistics Netherlands (CBS). Utrecht University had to cancel exams. Transdev faced communications outages. The blaze took out a portion of national digital infrastructure for the better part of a working week.

What This Tells Us About Cooling-Adjacent Fire Risk

NorthC has not disclosed the cause. The fire's location at the rear of the facility is consistent with an electrical issue in the power distribution or cooling support equipment areas. Server hall fires are rare; most data center fires originate in adjacent infrastructure: switchgear rooms, UPS battery installations, chiller plants, and cooling tower mechanical rooms. The 2021 OVHcloud fire in Strasbourg, the 2024 Iron Mountain incident in Singapore, and several smaller events over the last decade all originated outside the white space.

The cooling plant is one of the higher-risk zones because it contains rotating equipment, refrigerant under pressure, oils, and electrical loads operating in environments that are by design exposed to thermal stress. Adding to that risk, the move to liquid cooling has introduced new failure modes: high-pressure coolant loops, dielectric fluids in immersion tanks, and chemical reactivity in some two-phase systems. Operators that retrofit liquid cooling into existing facilities have to update their fire suppression strategy because the suppressants designed for air-cooled CRAH environments do not always cover the new risk profile.

The Recovery Time Tells Us About Supply Chain Fragility

The detail in the Techzine reporting that should worry every operator is the six-day recovery delayed by waiting for a European component. NorthC operates a network of data centers across Northern Europe and has access to deep vendor relationships. If a facility with that profile cannot get a replacement critical component faster than six days after a fire, the supply chain is materially more fragile than the redundancy diagrams suggest.

The component in question, based on the description that the damage involves emergency power systems, generators, UPS, and distribution panels plus over a kilometer of replacement cable, is likely either a transformer or a high-voltage switchgear assembly. Both carry lead times that have stretched industry-wide. Building cable, distribution panels, and certain types of UPS modules sit in the same extended backlogs, driven by the supply pressure now squeezing switchgear procurement across the industry.

The Operational Lessons

Operators reviewing the NorthC incident should pull three lessons. First, the recovery time after a single-asset failure is now measured in days, not hours, because component lead times have stretched beyond the spare parts inventory that most facilities carry. The N+1 redundancy model assumes the +1 can be restored quickly enough that a second failure within the recovery window is improbable. If recovery takes six days, that assumption breaks.
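The exposure created by a stretched recovery window can be put in rough numbers. The sketch below is a back-of-envelope model, not a claim about NorthC's actual equipment: it assumes an exponential failure model and a hypothetical 100,000-hour MTBF for the surviving power path, then compares the probability of a second failure during recovery windows of different lengths.

```python
import math

def p_second_failure(mtbf_hours: float, recovery_hours: float) -> float:
    """Probability that the surviving (now non-redundant) path fails
    at least once during the recovery window, under an exponential
    failure model: P = 1 - exp(-t / MTBF)."""
    return 1.0 - math.exp(-recovery_hours / mtbf_hours)

# Hypothetical MTBF of 100,000 hours (~11.4 years) for the remaining path.
MTBF = 100_000

for label, hours in [("4-hour swap", 4),
                     ("24-hour swap", 24),
                     ("6-day wait", 6 * 24)]:
    print(f"{label}: {p_second_failure(MTBF, hours):.5f}")
```

Because the exponent is small, the probability scales almost linearly with the window: under these hypothetical numbers, a six-day wait carries roughly 36 times the second-failure exposure of a four-hour spare swap. The absolute numbers depend entirely on the assumed MTBF; the ratio is the point.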

Second, the customers most affected by the outage were public-sector and infrastructure users that almost certainly believed they had geographic redundancy. They did not, because that level of geographic redundancy is expensive and not standard in university and statistical agency procurement. The actual failure radius of a major data center fire reaches further into critical infrastructure than those procurement decisions anticipated.

Third, the cooling plant is the under-discussed fire risk in data center operations. Operators reviewing their fire suppression strategy should focus on the chiller plant, the UPS battery room, and any liquid cooling equipment outside the white space. Those are the zones whose risk the NorthC incident has just made concrete. The cooling industry should expect the post-incident reports, once published, to include recommendations that propagate across European operations and probably to North America over the following 18 months.