Elon Musk presented the space component of Project Terafab at Giga Texas in March: electromagnetic mass drivers constructed on the lunar surface, used to accelerate AI-packed satellites to escape velocity without rocket propellant. The system would leverage the moon's vacuum environment, low gravity, and unobstructed solar access to power a distributed orbital AI compute network.
The concept is not new. Edward Fitch Northrup proposed lunar mass drivers in 1937. Gerard K. O'Neill popularized them in the 1970s as a mechanism for transporting lunar material to orbital construction sites. What is new is the application: using them to deploy AI data center hardware at scale, with the moon as a manufacturing and launch platform rather than a destination.
Solar panels in orbit generate roughly 5x more power than Earth-based panels due to the absence of atmospheric absorption and, in suitable orbits, day/night cycles. The moon's escape velocity is 2.38 km/s, versus Earth's 11.2 km/s — requiring dramatically less energy per kilogram launched. Electromagnetic tracks can stretch dozens of kilometers. No chemical propellant. No booster debris.
Reaching low Earth orbit from the ground takes roughly 9.4 km/s of delta-v (orbital velocity plus drag and gravity losses), and escaping Earth entirely takes 11.2 km/s, all fought through a thick atmosphere. Escaping the lunar surface takes 2.38 km/s in a vacuum. Because kinetic energy scales with the square of velocity, the per-kilogram energy gap is roughly twentyfold. A lunar mass driver running on solar power could in principle launch hardware continuously without the per-launch propellant cost of chemical rockets.
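The energy comparison is simple to check. A minimal sketch, using the idealized escape velocities quoted above and ignoring drag, gravity losses, and launcher inefficiency:

```python
def launch_energy_per_kg(v_m_per_s: float) -> float:
    """Kinetic energy (joules) to bring 1 kg to velocity v: E = 1/2 * v^2."""
    return 0.5 * v_m_per_s ** 2

EARTH_ESCAPE = 11_200  # m/s
LUNAR_ESCAPE = 2_380   # m/s

e_earth = launch_energy_per_kg(EARTH_ESCAPE)  # ~62.7 MJ/kg
e_moon = launch_energy_per_kg(LUNAR_ESCAPE)   # ~2.8 MJ/kg

print(f"Earth escape: {e_earth / 1e6:.1f} MJ/kg")
print(f"Lunar escape: {e_moon / 1e6:.1f} MJ/kg")
print(f"Ratio: {e_earth / e_moon:.0f}x")
```

The ratio comes out near 22x, which is why "categorically different" is fair even before counting the atmosphere.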
The cooling rationale is equally direct. On Earth, data centers compete with cities for electrical power and water. Liquid cooling, immersion cooling, and heat-rejection infrastructure are large, expensive, and resource-intensive. In orbit, deep space acts as an effectively unlimited heat sink: with no air to conduct or convect, passive radiator panels shed heat purely by radiation, at approximately 838 watts per square meter. No CDUs. No cooling towers. No water. StarCloud's $1.1 billion valuation was built on exactly this premise.
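The 838 W/m² figure is close to what the Stefan-Boltzmann law predicts for a near-ideal emitter at roughly 349 K radiating to cold space. A radiator-sizing sketch under that assumption; the 1 MW load and unit emissivity are illustrative, not figures from the announcement:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_flux(temp_k: float, emissivity: float = 1.0) -> float:
    """Heat flux radiated to deep space, W/m^2 (cold-background approximation)."""
    return emissivity * SIGMA * temp_k ** 4

def radiator_area(load_w: float, temp_k: float, emissivity: float = 1.0) -> float:
    """Panel area (m^2) needed to reject a given thermal load."""
    return load_w / radiated_flux(temp_k, emissivity)

print(f"Flux at 348 K: {radiated_flux(348):.0f} W/m^2")
print(f"Area for 1 MW at 348 K: {radiator_area(1e6, 348):.0f} m^2")
```

A megawatt of compute at that panel temperature needs on the order of 1,200 m² of radiator, which is why "no cooling towers" does not mean "no cooling hardware."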
Two electromagnetic track designs are under consideration. Railguns deliver a single high-power pulse and have well-understood engineering constraints. Coilguns accelerate cargo more gradually, using a sequence of switched electromagnets along the track, and are preferred for sensitive AI hardware that cannot survive the peak g-forces of a railgun launch.
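The g-force tradeoff is a function of track length. For a constant-acceleration run to exit velocity v over distance d, the acceleration is a = v² / 2d. A sketch at lunar escape velocity, with the track lengths chosen for illustration:

```python
G0 = 9.81  # standard gravity, m/s^2

def g_load(exit_velocity_m_s: float, track_length_m: float) -> float:
    """Sustained acceleration, in Earth g, for a constant-acceleration launch."""
    return exit_velocity_m_s ** 2 / (2 * track_length_m) / G0

for km in (1, 10, 30, 100):
    print(f"{km:>4} km track: {g_load(2_380, km * 1_000):,.0f} g")
```

A 30 km track holds the load under 10 g; a 1 km track pushes it near 290 g. That is the case for tracks stretching dozens of kilometers, and for coilguns over railguns when the cargo is electronics.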
The scale problem is significant. Musk's stated target of a distributed orbital network providing "1,000 times the power of current systems" would require launching over a million tons of material. Initial seeding of the lunar manufacturing base alone, before any mass driver construction or at-scale hardware production, works out to 135 Starship launches per day; the logistics stack is not yet practical. A demonstration is targeted for mid-2026.
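To put the tonnage in perspective, a rough flight count, assuming a nominal 100 t of payload per Starship flight (an illustrative figure, not one from the announcement):

```python
import math

PAYLOAD_T = 100  # tonnes per launch, assumed for illustration

def launches_needed(total_tonnes: float) -> int:
    """Flights required to lift a given mass at the assumed payload."""
    return math.ceil(total_tonnes / PAYLOAD_T)

n = launches_needed(1_000_000)
print(f"{n:,} launches; at 135/day, about {n / 135:.0f} days of continuous flight")
```

Ten thousand flights for the million tons, before any of it is manufactured into mass drivers or compute hardware.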
The people spending the most money on terrestrial cooling infrastructure — the hyperscalers — are also the ones funding the research that treats terrestrial cooling as a bottleneck worth circumventing. Nvidia's Vera Rubin Space-1 module, announced the same week, frames orbital data centers as the next compute frontier explicitly because ground-based thermal constraints limit AI scaling.
The terrestrial cooling market is real and growing at rates that will sustain it for decades. But the framing from the people at the top of the capital stack has shifted. They are not building liquid cooling infrastructure as the permanent solution. They are building it as the best available solution while the permanent solution gets engineered. The cooling industry should understand which side of that distinction it is on.