Elon Musk officially launched the Terafab on Saturday at an event inside the old Seaholm power plant in Austin. The project is a joint venture between Tesla, SpaceX, and xAI. The stated ambition: build a semiconductor fabrication facility capable of producing custom AI chips at the 2-nanometer node, with an initial target of 100,000 wafer starts per month scaling to one million. The price tag sits between $20 billion and $25 billion. The primary site will be the north campus of Giga Texas in Austin.
Those are large numbers. The cooling implications are larger.
A semiconductor fabrication facility at this scale is one of the most thermally intensive industrial operations on Earth. EUV lithography machines, the tools required for 2nm production, each consume roughly 1 megawatt of power and generate extraordinary heat loads that must be managed with precision cooling loops maintaining temperature stability within fractions of a degree. Clean rooms operating at Class 1 standards require massive air handling systems running continuously. Process cooling for chemical baths, wafer rinse systems, and etching chambers adds another layer of thermal load.
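For a sense of scale, consider the water side of a single tool. A minimal back-of-envelope sketch, assuming essentially all of a tool's roughly 1 megawatt draw ends up as heat in its process cooling loop and that the loop tolerates a 5-kelvin temperature rise (an illustrative figure, not a vendor spec):

```python
# Back-of-envelope: chilled-water flow needed to carry away one EUV tool's heat.
# Assumptions (illustrative, not vendor specs): ~1 MW of electrical draw per tool,
# essentially all of it rejected into the process cooling loop, and a 5 K rise
# across that loop.

SPECIFIC_HEAT_WATER = 4186   # J/(kg*K)
DENSITY_WATER = 1000         # kg/m^3

def cooling_water_flow_m3_per_hr(heat_load_w: float, delta_t_k: float) -> float:
    """Water flow (m^3/h) needed to absorb heat_load_w with a delta_t_k rise."""
    mass_flow_kg_s = heat_load_w / (SPECIFIC_HEAT_WATER * delta_t_k)
    return mass_flow_kg_s / DENSITY_WATER * 3600

per_tool = cooling_water_flow_m3_per_hr(heat_load_w=1_000_000, delta_t_k=5)
print(f"~{per_tool:.0f} m^3/h of process cooling water per 1 MW tool")
# ~172 m^3/h, roughly 48 litres per second, for one lithography tool,
# before counting clean-room air handling or wet-process loads.
```

Multiply that by the dozens of lithography tools a fab at this volume would need, and the process cooling plant alone starts to rival a mid-sized district cooling system.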
TSMC's Fab 18 in Tainan, the facility producing Apple's most advanced chips, draws hundreds of megawatts and operates some of the most sophisticated industrial cooling infrastructure ever built. Intel's Fab 52 in Arizona required a dedicated water reclamation facility before ground was broken. The Terafab, if built to the scale Musk described, would face the same class of thermal engineering challenges.
Austin's climate makes those challenges harder. Summer temperatures regularly exceed 100 degrees Fahrenheit. The city has been managing drought conditions and water use restrictions for years. A 2nm fab running at volume production in central Texas will need either massive evaporative cooling capacity, pulling from an already stressed municipal water system, or a dry cooling architecture that trades water for electricity at a ratio that compounds the region's power grid concerns.
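How much water? A rough sketch, assuming heat is rejected through evaporative cooling towers, using the latent heat of vaporization at typical tower temperatures and a roughly 30 percent allowance for drift and blowdown. The heat-rejection figures are illustrative; neither Tesla nor the article gives a load for the site.

```python
# Rough sketch of evaporative cooling water draw. Assumptions: one kg of water
# evaporates per ~2.45 MJ absorbed (latent heat at typical tower temperatures),
# plus ~30% extra make-up water for drift and blowdown. Heat-rejection figures
# below are illustrative only.

LATENT_HEAT_J_PER_KG = 2.45e6   # J per kg of water evaporated
MAKEUP_FACTOR = 1.3             # drift + blowdown allowance

def makeup_water_m3_per_day(heat_reject_mw: float) -> float:
    """Daily make-up water (m^3) for a given heat-rejection load in MW."""
    evap_kg_s = heat_reject_mw * 1e6 / LATENT_HEAT_J_PER_KG
    return evap_kg_s * MAKEUP_FACTOR * 86400 / 1000   # kg/s -> m^3/day

for mw in (50, 150, 300):
    print(f"{mw:>3} MW rejected -> ~{makeup_water_m3_per_day(mw):,.0f} m^3/day of make-up water")
# 50 MW  -> ~2,300 m^3/day
# 150 MW -> ~6,900 m^3/day
# 300 MW -> ~13,800 m^3/day, roughly 3.6 million US gallons every day
```

Dry coolers sidestep that water draw, but they add fan power and give up performance on exactly the 100-degree afternoons when the load peaks.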
The Terafab plans to produce two chip families. The AI5 is a custom AI processor for terrestrial use, powering Tesla's Full Self-Driving systems, the Cybercab robotaxi, and the Optimus humanoid robot line. Small-batch production is targeted for 2026, with volume in 2027. The D3 is a space-optimized chip designed to run at higher temperatures with radiation hardening for orbital deployment.
The terrestrial chips will land in data centers. Musk framed the project as a response to what he called a ceiling on external chip capacity from TSMC, Samsung, and Micron. "There is a maximum rate at which they're comfortable expanding," he told investors during Tesla's Q4 2025 earnings call, "and that rate is much less than we would like." The production target of 100 to 200 billion custom AI and memory chips per year, if even partially achieved, feeds a pipeline of servers that need to be cooled somewhere.
xAI already operates one of the largest AI training clusters in the world at its Memphis facility. That campus drew scrutiny last year for its power consumption and thermal footprint. More custom silicon at higher densities means more thermal load per rack, more CDUs per row, more chilled water per facility. The chips do not cool themselves.
Here is where the Terafab story gets strange, and interesting. Musk said the "vast majority" of the facility's production will go toward D3 chips for orbital data centers. In January, SpaceX filed with the FCC for a license to launch one million data center satellites, each providing 100 kilowatts of onboard compute power. The rationale, per Musk: "Current AI technology advancement relies on massive terrestrial data centers that require enormous power and cooling. AI demand cannot be met by ground-based infrastructure alone."
The thermal logic of space-based compute is real, if distant. In orbit, solar irradiance is roughly five times the average that reaches a panel on Earth's surface, and waste heat can be radiated to deep space, whose effective background temperature sits near 3 kelvin, a heat sink terrestrial data centers cannot access. Chips designed to run hotter shrink the radiators each satellite must carry, which reduces launch mass and cost, which improves unit economics. The D3 chip's higher thermal operating range is a feature, not a limitation, in that context.
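That tradeoff is easy to put numbers on. In vacuum the only path for waste heat is radiation, and radiated power scales with the fourth power of radiator temperature. A minimal sketch, assuming a flat panel radiating from one face at emissivity 0.9 and ignoring absorbed sunlight and Earth infrared (so real radiators would run larger), with the 100-kilowatt-per-satellite load taken from the FCC filing:

```python
# Why a hotter chip shrinks the radiator: in vacuum, heat leaves only by radiation,
# and radiated power scales with T^4 (Stefan-Boltzmann). Assumptions: a flat panel
# radiating from one face, emissivity 0.9, absorbed solar and Earth infrared ignored.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2*K^4)
EMISSIVITY = 0.9

def radiator_area_m2(heat_w: float, radiator_temp_k: float) -> float:
    """Panel area needed to radiate heat_w at the given radiator temperature."""
    return heat_w / (EMISSIVITY * SIGMA * radiator_temp_k**4)

for temp_k in (300, 330, 360):   # ~27 C, ~57 C, ~87 C radiator temperatures
    area = radiator_area_m2(100_000, temp_k)   # 100 kW per satellite, per the filing
    print(f"radiator at {temp_k} K -> ~{area:.0f} m^2 for 100 kW")
# 300 K -> ~242 m^2; 330 K -> ~165 m^2; 360 K -> ~117 m^2.
# Letting the electronics, and therefore the radiator, run 60 K hotter roughly
# halves the panel area, which is the mass-and-launch-cost argument for the D3.
```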
Whether SpaceX can actually deploy a million compute satellites, manage them, and deliver latency-competitive AI inference from orbit is a question that belongs more to science fiction than to infrastructure planning. For the cooling industry, the near-term signal matters more than the long-term vision. Musk is telling the market that terrestrial cooling is a binding constraint on AI scaling. He is building a chip factory partly to route around that constraint.
The Terafab creates cooling demand at three levels. The fab itself needs industrial-scale process cooling in a water-stressed, heat-intensive climate. The terrestrial chips it produces will fill data center racks that need liquid cooling infrastructure. And the orbital ambition, even if it never fully materializes, signals that the people writing the largest checks in AI infrastructure view cooling as a bottleneck worth spending $20 billion to circumvent.
Cooling vendors serving the Austin-San Antonio corridor should be paying attention. Tesla is already hiring for semiconductor infrastructure roles at the site. Construction timelines for a facility this complex typically run three to five years, which means the cooling systems need to be specified, procured, and installed well before the first wafer moves through lithography.
The Terafab may or may not produce chips at the scale Musk described. His track record on timelines is, to put it generously, mixed. But the thermal engineering requirements are real regardless of whether production hits 100,000 wafer starts per month or 10,000. A 2nm fab in Austin, Texas, needs world-class cooling. That part is not negotiable.