DataCool, the JohnsonMarCraft HVAC Products division of Arizon Companies, launched three new air handler product lines on April 16. Alpine, Glacier, and Kodiak scale from 2,000 to 100,000 CFM and up to 300 tons of cooling capacity, with single-point 460/3 electrical connections, ECM fans, MERV 8-16 filtration, and both chilled water and DX coil options. Matt Polizzi, Vice President at DataCool, framed the launch around what he called a fundamental change in how data centers are being designed and operated under AI workloads.
The easy read is that this is a traditional HVAC vendor scaling up to meet AI demand. The more useful read is the specific market this product family is targeting, because it is not the one the headlines focus on.
300 tons is roughly 1.05 MW of thermal capacity. On tonnage alone, that covers somewhere between 70 and 100 racks at standard enterprise density of 10 to 15 kW per rack. For AI training clusters running H100s at 40 kW per rack, or the B200 generation above that, the same capacity covers around 26 racks, and in practice fewer, because per-rack airflow becomes the binding constraint well before the coil does.
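A back-of-envelope sketch of that arithmetic, assuming standard-air conditions and a 20 degF supply-to-return temperature rise; both figures are assumptions for illustration, not numbers from the launch material:

```python
# Back-of-envelope sizing: how many racks a 300-ton air handler can carry.
# Assumes standard air at sea level and a 20 degF rise across the racks;
# neither figure comes from DataCool's launch material.

TON_TO_KW = 3.517          # 1 ton of refrigeration = 12,000 BTU/hr = 3.517 kW
KW_TO_BTUH = 3412.14       # 1 kW = 3,412 BTU/hr

def racks_supported(unit_tons, rack_kw):
    """Racks covered by tonnage alone, ignoring airflow limits."""
    return unit_tons * TON_TO_KW / rack_kw

def cfm_per_rack(rack_kw, delta_t_f=20.0):
    """Airflow needed per rack: Q[BTU/hr] = 1.08 * CFM * dT for standard air."""
    return rack_kw * KW_TO_BTUH / (1.08 * delta_t_f)

print(racks_supported(300, 12))   # ~88 enterprise racks at 12 kW
print(racks_supported(300, 40))   # ~26 H100-class racks at 40 kW
print(cfm_per_rack(40))           # ~6,300 CFM for a single 40 kW rack
```

The airflow figure is why the AI rack count is the softer number: a 40 kW rack wants on the order of 6,000 CFM at that temperature rise, so a unit topping out at 100,000 CFM runs out of air delivery before it runs out of coil.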
Which is the point. DataCool is not positioning against direct-to-chip CDUs. The product family is aimed at the installed base that will not rip out its air cooling infrastructure to retrofit liquid in the next two years. More than 70% of global data center capacity sits in buildings designed for air cooling. That fleet still needs replacement units as the existing CRACs and AHUs hit end of life.
The more interesting target is the hybrid facility. A modern build often runs direct-to-chip liquid for the GPU racks and traditional air handling for the storage racks, network gear, and CPU-only compute that together still make up the majority of the silicon footprint. In those builds, air handlers with MERV 16 filtration, ECM fan efficiency, and integrated controls still matter. The storage side of the AI cluster is often overlooked, and it is the place where air cooling keeps a clear role.
The Alpine, Glacier, and Kodiak lines appear calibrated for this context. Modular scalable architecture, simplified installation with integral piping, and customizable controls integration are the features hybrid facilities actually use. The units need to be small enough to drop into a retrofit mechanical yard and large enough to carry the non-GPU thermal load without adding a third system type.
Two questions stand out. First, air handling efficiency is measured by fan energy per CFM per degree of temperature lift, and ECM fans are table stakes at this point. The differentiator is controls integration with the facility's DCIM and the ability to modulate flow against variable load. DataCool's press material mentions customizable controls but does not specify protocols. Operators buying into a long depreciation cycle will want BACnet, Modbus, and ideally a published API for the facility's digital twin. That detail is not in the launch material.
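To illustrate what modulating against variable load buys, here is a minimal sketch of the airflow and fan-power arithmetic an operator would want exposed over BACnet or Modbus. The rated figures, zone loads, and temperature rise are hypothetical, not DataCool specifications:

```python
# Hypothetical sketch: supply airflow and fan power versus a variable IT load.
# Rated values and loads are illustrative assumptions, not vendor data.

KW_TO_BTUH = 3412.14

def required_cfm(it_load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow to remove it_load_kw at a given air-side temperature rise."""
    return it_load_kw * KW_TO_BTUH / (1.08 * delta_t_f)

def fan_power_kw(cfm: float, rated_cfm: float, rated_kw: float) -> float:
    """Fan affinity law: power scales roughly with the cube of flow."""
    return rated_kw * (cfm / rated_cfm) ** 3

# Example: a unit rated 100,000 CFM with 75 kW of fan power at full flow,
# serving a zone whose load swings between 300 and 600 kW.
for load_kw in (300, 450, 600):
    cfm = required_cfm(load_kw)
    print(f"{load_kw} kW -> {cfm:,.0f} CFM, "
          f"~{fan_power_kw(cfm, 100_000, 75):.1f} kW fan power")
```

The cubic fan law is the economic argument for tight controls integration: running at roughly half of rated flow costs roughly an eighth of rated fan power, but only if the unit can actually see the load it is serving.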
Second, air-cooled facilities are hitting a different kind of wall. Community pushback on noise is real, and air handlers running at 60 to 80 decibels are part of the story. Vendors who differentiate on acoustic performance alongside thermal capacity will have the easier conversation with facility planners in residential-adjacent sites.
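The noise problem compounds with unit count, because independent sources add on an energy basis rather than arithmetically. A rough sketch with illustrative levels, not measured data:

```python
# Why a yard full of air handlers is louder than any single unit:
# incoherent sources combine logarithmically. Levels are illustrative.
import math

def combined_spl(levels_db):
    """Combine sound levels of independent sources, in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

print(combined_spl([75]))          # 75.0 dB, one unit
print(combined_spl([75] * 4))      # ~81 dB, four units
print(combined_spl([75] * 10))     # 85 dB, ten units
```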
Air cooling is not dead. It covers the majority of non-AI workloads and a meaningful slice of the AI support infrastructure. Vendors like DataCool, targeting the enterprise and mid-market side of the buildout, still have a growing market to ship into. The caution is that the total share of data center thermal load served by air is shrinking every quarter, and the products that ship in 2026 need to be designed for the 2030 market where direct-to-chip and immersion are the default for anything compute-dense. A 300-ton AHU is a useful piece of infrastructure. It is not the centerpiece of the AI cooling conversation.