Hardware · March 30, 2026

MiTAC Showed Up to CloudFest With a 256-GPU Liquid-Cooled Rack. The ODMs Are Coming for AI.

A Taiwanese server company that spent most of its life building boxes for other people's brands just rolled into Europa-Park and made a case that it belongs in the AI infrastructure conversation. MiTAC Computing Technology, a subsidiary of MiTAC Holdings (TWSE: 3706), used CloudFest 2026 to debut the MR1100 series, a 48U liquid-cooled rack built from the ground up for large-scale AI training. The timing says something. When second-tier ODMs start shipping purpose-built liquid-cooled racks with 256 GPUs inside, the thermal engineering supply chain needs to pay attention.

What MiTAC Actually Brought to CloudFest

The MR1100 is not a retrofit. It is a full 48U EIA rack designed around cold-plate direct liquid cooling, packed with up to 256 AMD Instinct MI355X GPUs spread across nodes that each hold eight accelerators alongside AMD EPYC 9005 Series CPUs. Memory tops out at 6 TB per node. Networking runs through AMD Pensando Pollara 400 AI NICs on a 400/800 Gb/s fabric. Every piece of the stack is AMD.

That GPU choice matters for cooling vendors. The MI355X draws 1,400 watts per accelerator. Eight of them in a single node means 11.2 kW from the GPUs alone, before you count the CPUs, memory, and networking silicon. A fully loaded 256-GPU rack pushes thermal densities that air cooling cannot practically handle. AMD knows this. The MI355X ships only in an OAM/UBB 2.0 form factor designed for direct-to-plate liquid cooling. The air-cooled variant, the MI350X, caps out at about 1,000 watts with correspondingly lower performance. If you want the full chip, you need plumbing.
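The arithmetic behind those density claims is worth making explicit. A minimal sketch using only the per-accelerator figure from the article; the node and rack counts are the MR1100's published configuration, and everything beyond GPU power is deliberately left out:

```python
# Back-of-envelope thermal load for an MR1100-class rack, using the
# article's figures: 1,400 W per MI355X, 8 GPUs per node, 256 per rack.
# CPU, memory, and NIC power are excluded, so real totals run higher.
GPU_TDP_W = 1400
GPUS_PER_NODE = 8
GPUS_PER_RACK = 256

node_gpu_kw = GPU_TDP_W * GPUS_PER_NODE / 1000   # GPU heat per node, kW
rack_gpu_kw = GPU_TDP_W * GPUS_PER_RACK / 1000   # GPU heat per rack, kW

print(f"GPU heat per node: {node_gpu_kw:.1f} kW")  # 11.2 kW
print(f"GPU heat per rack: {rack_gpu_kw:.1f} kW")  # 358.4 kW
```

Well over 350 kW of GPU heat in a single rack is the number that forces the cold plates: typical air-cooled racks top out at a few tens of kilowatts.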

MiTAC showed two other platforms at Booth H15 alongside the MR1100. The G4520G6 runs Intel Xeon 6 processors with up to eight double-width PCIe Gen5 GPUs, targeting cloud and HPC workloads. The TN85-B8261 is a dual-socket GPU server with four dual-slot GPUs and 24 DDR5-6400 RDIMM slots. Both are air-cooled. Both are conventional. The MR1100 is the one that breaks from what MiTAC has done before.

The Brand Behind the Brand

Most people in the data center world know TYAN, not MiTAC. That changed in October 2024 when MiTAC absorbed the TYAN brand entirely, consolidating everything under one name. The company entered server ODM work in 1999, acquired Tyan Computer in 2007, and has been building machines for the world's five largest server brands ever since. For decades the play was quiet contract manufacturing. Now MiTAC wants its own name on AI racks.

That pivot makes the MR1100 more interesting than a typical product launch. MiTAC is betting that the shift to liquid-cooled AI infrastructure opens a lane for ODMs willing to deliver complete rack-scale solutions rather than just motherboards and barebones. Supermicro, Wiwynn, and Quanta are all making similar moves. The difference is that MiTAC's CloudFest showing was specifically aimed at the European hosting and cloud market, not hyperscalers.

The Qarnot Case Study Deserves a Closer Look

MiTAC shared the CloudFest stage with Qarnot, a French cloud provider founded in 2010 that has built its entire business model around waste heat recovery. Qarnot distributes HPC servers across locations where the thermal output can be used directly for district heating or hot water in buildings. Its QBx modules run up to 24 processors per unit and transfer 95% of generated heat into usable warmth through aluminum cold plates.

The joint deployment uses MiTAC's OCP-compliant Capri 3 server. The results Qarnot claims are striking: a PUE of 1.01 and a 50% reduction in operational costs for clients across aerospace, automotive, energy, and banking in France and across Europe. A PUE of 1.01 means only about 1% of facility energy goes to anything beyond the IT load itself. The servers are the heating system.

For cooling system vendors, this model represents both an opportunity and a threat. Qarnot eliminates traditional cooling hardware entirely. No chillers. No CDUs. No rear-door heat exchangers. The cold plates on the servers connect directly to building heating loops. That works in northern and central Europe where heating demand is consistent. It works less well in Phoenix. But the broader point stands: when someone captures 95% of server heat and finds a buyer for it, the entire cooling value chain gets compressed into the cold plate itself.

Why This Matters for Cooling Infrastructure

The data center liquid cooling market hit roughly $6.65 billion in 2025 and analysts expect it to triple by the early 2030s. Cold plate technology dominated that market last year and is growing at a 35% CAGR, faster than immersion. Ecolab just announced it is acquiring CoolIT Systems, one of the largest cold plate DLC providers on the planet. The money is following the physics.

MiTAC's MR1100 fits squarely into this trajectory. The OCP compliance matters because it means the rack's cooling interfaces should interoperate with standardized CDU connections and manifold designs coming out of the OCP Cooling Environments project. That project now includes cold plate specs, CDU guidelines, and the emerging Technology Cooling System framework designed for deployments from 10 MW to 300+ MW. For a cooling vendor looking to sell into European cloud providers buying MiTAC racks, OCP compliance is the entry ticket.

The MI355X's 1,400-watt envelope is also a useful benchmark for anyone designing next-generation cold plates. Each accelerator module needs to reject that heat through a contact area defined by the OAM form factor. The thermal interface, flow rate, and pressure drop requirements at that power density are materially different from what a 300W CPU cold plate handles. Companies building cold plates for these modules, whether CoolIT, Chilldyne, or smaller specialists, are engineering for a fundamentally different thermal problem than they were three years ago.
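The scale of that problem can be sketched with the standard sensible-heat relation Q = ṁ·cp·ΔT. The 1,400 W load is from the article; the water-like coolant properties and the 5 K allowed temperature rise are assumptions for illustration, and real cold-plate designs also have to satisfy pressure-drop and die-temperature limits this sketch ignores:

```python
# Rough coolant-flow estimate for one 1,400 W OAM module.
# Assumes a water-based coolant (cp ~ 4,186 J/(kg*K)) and an
# allowed liquid temperature rise of 5 K across the cold plate.
Q_W = 1400.0        # heat load per accelerator, from the article
CP = 4186.0         # specific heat of water, J/(kg*K) -- assumption
DELTA_T_K = 5.0     # coolant temperature rise budget -- assumption

mass_flow = Q_W / (CP * DELTA_T_K)   # kg/s, from Q = m_dot * cp * dT
lpm = mass_flow * 60                 # approx. liters/min for water

print(f"{mass_flow:.4f} kg/s, roughly {lpm:.2f} L/min per module")
```

Roughly 4 L/min per module, times eight modules per node, times 32 nodes per rack, is why manifold and CDU sizing dominates rack-scale DLC design rather than the cold plate alone.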

The European Angle

CloudFest draws over 9,000 attendees to Rust, Germany, and its audience skews toward European hosting companies, cloud providers, and MSPs rather than American hyperscalers. MiTAC chose this venue deliberately. The company's booth partners included ScaleUp Technologies and ASBIS Enterprises, both European distributors. The Qarnot case study reinforces the message: MiTAC builds hardware that European operators can deploy under European sustainability expectations.

That positioning matters because the EU's recast Energy Efficiency Directive requires data centers above a 500 kW IT power threshold to report energy performance metrics annually. Operators who can demonstrate near-unity PUE and productive heat reuse have a regulatory advantage. A complete liquid-cooled rack from an ODM willing to partner on heat recovery, delivered through European channel partners, solves several problems at once for a mid-tier French or German cloud provider that does not have the engineering staff to design custom cooling loops from scratch.

MiTAC is still a small player in the branded AI infrastructure market. But the MR1100 is a real product with real specs targeting a real gap. The cooling vendors who supply cold plates, manifolds, and CDUs to this tier of the market should be watching what ships next.