Texas data centers consumed an estimated 25 billion gallons of water in 2025. That figure could reach 161 billion gallons annually by 2030. Evaporative cooling towers are the reason. They reject heat by evaporating freshwater at industrial scale, and much of the terrestrial data center fleet depends on them. Every new gigawatt campus breaks ground with a water budget that competes directly against agriculture, residential supply, and municipal reserves.
StarCloud thinks the answer is 400 kilometers straight up.
The Redmond, Washington startup just raised a $170 million Series A at a $1.1 billion valuation, led by Benchmark and EQT Ventures. Macquarie Capital, NFX, Y Combinator, and 776 Ventures also participated. The round makes StarCloud the fastest unicorn in Y Combinator history, reaching that threshold 17 months after its demo day. Total capital raised now sits at $200 million.
The pitch: build data centers in orbit, where solar energy is abundant and cooling is free. The cooling part is the story that matters to this audience.
On Earth, rejecting waste heat from a 100 MW data center requires chillers, cooling towers, pumps, fans, and millions of gallons of water. The thermodynamic chain is long. In less efficient facilities, cooling can consume 30 to 40 percent of total electricity. Evaporative towers dump latent heat into the atmosphere by sacrificing freshwater. Dry cooling alternatives carry a fan power penalty that follows a cube law: double the airflow, draw eight times the fan electricity.
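That fan penalty is worth seeing in numbers. A minimal sketch of the affinity law, in Python with an illustrative baseline:

```python
# Fan affinity laws: airflow scales linearly with fan speed, while
# fan power scales with the cube of fan speed. Doubling airflow
# through a dry cooler therefore costs 2**3 = 8x the fan power.

def fan_power(base_power_kw: float, airflow_ratio: float) -> float:
    """Fan power after scaling airflow by `airflow_ratio`."""
    return base_power_kw * airflow_ratio ** 3

base = 100.0  # kW of fan power at nominal airflow (illustrative)
for ratio in (1.0, 1.5, 2.0):
    print(f"{ratio:.1f}x airflow -> {fan_power(base, ratio):6.1f} kW")
# 1.0x airflow ->  100.0 kW
# 1.5x airflow ->  337.5 kW
# 2.0x airflow ->  800.0 kW
```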
In vacuum, none of that exists.
Heat rejection in space works through a single mechanism: infrared radiation. A hot surface emits photons into the void. No medium required. No water. No fans. No compressors. No moving parts. The Stefan-Boltzmann law governs the rate. A 1-meter by 1-meter black plate at 20 degrees Celsius, radiating from both sides into deep space at 2.7 Kelvin, emits roughly 838 watts. That is approximately three times the peak electrical output per square meter of a terrestrial solar panel.
Read that again. A passive metal plate with no energy input rejects more thermal power per unit area than a solar panel generates. The heat sink is the universe itself, 2.7 Kelvin in every direction. You cannot build a colder reservoir on Earth. You cannot even come close.
The elegance is structural. Terrestrial cooling systems fight thermodynamics. They spend energy to move heat from a warm place to a slightly less warm place, and the gap between those temperatures determines efficiency. In orbit, the gap between a server running at 70 degrees Celsius and the cosmic microwave background is enormous. Radiative transfer scales with the fourth power of temperature. The hotter the component, the more aggressively it radiates. Self-regulating.
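Those figures are easy to verify. A minimal sketch, assuming an ideal two-sided black plate (emissivity 1) radiating against the 2.7 Kelvin background:

```python
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)
T_SPACE = 2.7        # cosmic microwave background, K

def net_flux(t_plate_k: float, sides: int = 2, emissivity: float = 1.0) -> float:
    """Net radiated watts per square meter of plate, summed over both sides."""
    return sides * emissivity * SIGMA * (t_plate_k**4 - T_SPACE**4)

print(f"20 C plate: {net_flux(293.15):.0f} W/m^2")  # ~838 W/m^2, the figure above
print(f"70 C plate: {net_flux(343.15):.0f} W/m^2")  # ~1572 W/m^2: the T^4 payoff
```

The second line is the self-regulation point: raising the radiator from 20 to 70 degrees Celsius nearly doubles the rejected flux.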
StarCloud launched Starcloud-1 in November 2025 carrying a single Nvidia H100 GPU. First data center-class GPU in orbit. Ever. The satellite trained NanoGPT on Shakespeare's complete works and ran Google's Gemma large language model in space. A proof of concept, not a production system. But the thermal validation mattered: the H100 ran its full training loop while the satellite's passive radiator handled waste heat rejection without any active cooling system.
CEO Philip Johnston spent two years at McKinsey working on satellite projects for national space agencies before founding the company. He is not a thermal engineer. He is a math-and-finance operator who looked at data center power consumption curves and concluded the terrestrial grid cannot keep up.
Starcloud-2 is scheduled for October 2026. Multiple H100 GPUs. Nvidia Blackwell hardware. An AWS server blade. A bitcoin mining computer. The satellite will carry what the company calls the largest commercial deployable radiator ever sent to space and will generate 100 times the power of Starcloud-1, roughly 8 kilowatts. That radiator is the thermal backbone. Every watt of compute dissipated through radiation alone.
StarCloud's broader thesis leans on space-based solar. In orbit, solar arrays can achieve a capacity factor above 95 percent. No atmosphere. No weather. No night cycle in a dawn-dusk sun-synchronous orbit. The median capacity factor for terrestrial solar in the United States is 24 percent. In northern Europe, under 10 percent. One square meter of solar panel in space produces roughly eight times the annual energy of the same panel on a rooftop in Berlin.
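A back-of-envelope version of that Berlin comparison, treating the capacity factors above as given and holding panel rating constant (the 0.2 kW per square meter rating and the 11 percent Berlin figure are round-number assumptions, not measured data):

```python
HOURS_PER_YEAR = 8760
PANEL_RATING_KW = 0.2  # ~20%-efficient panel, peak kW per m^2 (assumption)

def annual_kwh(capacity_factor: float) -> float:
    """Annual energy from 1 m^2 of panel at a given capacity factor."""
    return PANEL_RATING_KW * capacity_factor * HOURS_PER_YEAR

orbit = annual_kwh(0.95)   # >95% capacity factor claimed in orbit
berlin = annual_kwh(0.11)  # ~10-11% typical of northern Europe (assumption)
print(f"orbit: {orbit:.0f} kWh/yr, Berlin: {berlin:.0f} kWh/yr, "
      f"ratio: {orbit / berlin:.1f}x")
# orbit: 1664 kWh/yr, Berlin: 193 kWh/yr, ratio: 8.6x
```

The ratio lands near the "roughly eight times" figure even before counting the stronger unattenuated sunlight in orbit.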
Johnston projects that energy costs in orbit will be 10 times cheaper than terrestrial equivalents, even after accounting for launch costs. That projection depends on SpaceX's Starship reaching commercial cadence at around $500 per kilogram to orbit. Starship is not flying commercial payloads yet. Johnston has said he expects access to open up in 2028 and 2029, and that Starcloud-3 will be the first orbital data center to reach cost parity with ground facilities.
But strip out the energy argument for a moment. Focus only on cooling. A terrestrial data center with a PUE of 1.3 spends roughly 23 percent of its total energy budget on overhead, most of it cooling. For a 100 MW facility, that is on the order of 23 MW of cooling load. In orbit, that 23 MW goes to zero. The waste heat still exists, but it leaves through the radiator for free. No electricity consumed. No water consumed. The cooling term in PUE drops out, pushing the ratio toward 1.0.
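The 23 percent falls straight out of the PUE definition. A minimal sketch:

```python
# PUE = total facility power / IT power, so the overhead share of
# total power is (PUE - 1) / PUE.

def overhead_share(pue: float) -> float:
    """Fraction of total facility power that is non-IT overhead."""
    return (pue - 1.0) / pue

TOTAL_MW = 100.0
pue = 1.3
cooling_mw = TOTAL_MW * overhead_share(pue)  # treats overhead as cooling-dominated
print(f"overhead share: {overhead_share(pue):.1%} -> {cooling_mw:.0f} MW "
      f"of {TOTAL_MW:.0f} MW")
# overhead share: 23.1% -> 23 MW of 100 MW
```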
That is not an incremental improvement over direct liquid cooling or immersion. It is a different category.
Aetherflux plans to launch its first solar-powered orbital data center satellite by Q1 2027, with a power-beaming demonstration satellite going up in 2026. Google's Project Suncatcher pairs custom Trillium TPU v6e chips with Planet Labs satellite hardware for a 2027 demonstration. Aethero launched Nvidia's first space-based Jetson GPU in 2025. SpaceX, after acquiring xAI, has asked the US government for permission to build a million-satellite distributed compute network.
BIS Research projects the in-orbit data center market at $1.77 billion by 2029 and $39 billion by 2035.
Every one of these systems will use radiative cooling. There is no alternative in vacuum. The question is not whether space-based thermal rejection works. The Stefan-Boltzmann law settled that in the nineteenth century. The question is whether orbital compute can reach densities where the cooling advantage translates into real economic pressure on the terrestrial data center supply chain.
Eight kilowatts. That is what Starcloud-2 will generate. A single rack in a modern AI data center draws 40 to 120 kilowatts. StarCloud's entire next-generation satellite produces less power than one rack. The long-term plan calls for an 88,000-satellite constellation. Even at scale, the total compute capacity of that constellation would represent a fraction of what a single hyperscale campus in Virginia delivers today.
Latency is the other constraint. Low Earth orbit adds 4 to 8 milliseconds of round-trip delay. Fine for training workloads and batch inference. Unacceptable for real-time applications that need sub-millisecond response. Orbital data centers will not replace Equinix. They will serve a different workload profile entirely: long-running training jobs, batch processing, and compute tasks where energy cost matters more than latency.
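A rough light-travel floor for that added delay, assuming a simple up-and-back path (request up to the satellite, response back down) and illustrative slant ranges; real systems add switching and queuing on top:

```python
C_KM_PER_MS = 299.792  # speed of light, km per millisecond

def added_delay_ms(slant_range_km: float, legs: int = 2) -> float:
    """Minimum extra round-trip delay: one leg up, one leg down."""
    return legs * slant_range_km / C_KM_PER_MS

# Slant range equals altitude only when the satellite is directly
# overhead; at lower elevation angles it stretches well past 400 km.
for slant in (400, 600, 1200):
    print(f"slant {slant:>4} km -> {added_delay_ms(slant):.1f} ms added")
# slant  400 km -> 2.7 ms added
# slant  600 km -> 4.0 ms added
# slant 1200 km -> 8.0 ms added
```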
The radiator surface area problem also scales uncomfortably. At 838 watts per square meter, rejecting 1 megawatt of waste heat requires roughly 1,200 square meters of radiator. Deploying that much surface area on a satellite is a structural engineering challenge that gets harder with every order of magnitude. The ISS solar arrays span about 2,500 square meters total. A 10 MW orbital data center would need roughly 12,000 square meters of radiator at that flux, several times the ISS's entire array span, plus its own solar arrays on top. Running the radiators hotter shrinks the footprint, but not the trendline.
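The flux figure converts to radiator area directly. A sizing sketch at the two-sided 20-degree flux from above:

```python
FLUX_W_PER_M2 = 838.0  # two-sided net flux at 20 C, from the earlier sketch

def radiator_area_m2(waste_heat_mw: float) -> float:
    """Radiator area needed to reject a given waste-heat load."""
    return waste_heat_mw * 1e6 / FLUX_W_PER_M2

for mw in (1, 10):
    print(f"{mw:>2} MW -> {radiator_area_m2(mw):>7,.0f} m^2")
#  1 MW ->   1,193 m^2  (the 'roughly 1,200' above)
# 10 MW ->  11,933 m^2  (several ISS array spans)
```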
Orbital data centers will not kill the cooling tower. Not in five years. Probably not in ten. The density gap is too large. The launch economics are too immature. The total addressable compute in orbit will remain a rounding error against terrestrial capacity for the foreseeable future.
But that framing misses the point.
StarCloud and its competitors are building proof that compute does not have to consume water. That waste heat rejection does not require electricity. That the entire cooling stack, from chiller plants to evaporative towers to CDU loops, exists only because we insist on putting servers at the bottom of an atmosphere. The physics of radiative cooling in vacuum are favorable enough that a passive metal plate, consuming nothing, does work terrestrial operators buy with megawatts and millions of gallons.
The real threat to the cooling industry is not 88,000 satellites. It is the demonstration effect. When a startup can train a model in orbit with zero water and zero cooling energy, it resets the benchmark for what terrestrial operators should demand from their own thermal infrastructure. Every gallon of water consumed by an evaporative tower becomes harder to justify. Every megawatt spent on compressors looks more like a tax on geography than a law of physics.
StarCloud is not a cooling company. But it just raised $170 million at a billion-dollar valuation on the thesis that cooling should cost nothing. The terrestrial cooling industry should treat that as a provocation, not a curiosity.