Orbital, a Los Angeles startup backed by a16z Speedrun, will launch its first AI data center satellite on a SpaceX Falcon 9 in April 2027. The payload is NVIDIA-powered servers running inference workloads in low Earth orbit, drawing near-continuous solar power and rejecting heat by radiating it directly into the vacuum of space. CEO Euwyn Poon, whose previous company Spin was acquired by Ford, framed the pitch simply: AI progress is being constrained by the grid, data center economics are dominated by electricity and cooling, and both are getting harder.
The instinct is to treat this as a founder story with science-fiction packaging. The more useful reading is what the cooling architecture actually looks like in orbit, because it inverts every assumption terrestrial data centers operate under.
There is no convection in space, and no conduction to a surrounding atmosphere, because there is no atmosphere. The only heat rejection mechanism available is radiative transfer through infrared emission, governed by the Stefan-Boltzmann law. A satellite dumps heat at a rate proportional to the fourth power of its radiator temperature, multiplied by the emissivity of the radiator surface, multiplied by the exposed area. That is it. Fans move nothing. Pumped loops and cold plates can still shuttle heat from the chips to a radiator panel, but the radiator itself is the only interface to the outside.
The practical constraint is surface area. A modern GPU rack dissipates 100 to 130 kW of heat. Rejecting that thermal load through radiation at space-relevant temperatures requires on the order of a hundred square meters of radiator panel per rack, with unobstructed sightlines to deep space and careful attitude control to keep the radiator pointed away from the Sun and the warm Earth limb. Orbital's test mission is almost certainly sub-rack in scale, which makes the engineering more tractable. Scaling to hyperscale density in orbit is a different problem entirely.
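The sizing claim can be sanity-checked with the Stefan-Boltzmann law. A minimal sketch, assuming a two-sided radiator at 330 K with emissivity 0.9 and ignoring absorbed sunlight and Earth infrared (both of which would only increase the required area; none of these figures are Orbital's published numbers):

```python
# Radiator sizing sketch via the Stefan-Boltzmann law.
# Assumed parameters: 330 K panel, emissivity 0.9, both faces radiating
# to deep space, zero absorbed environmental heat.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w: float, temp_k: float,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Radiator area needed to reject heat_w watts by thermal radiation."""
    flux = emissivity * SIGMA * temp_k ** 4  # W/m^2 per radiating face
    return heat_w / (sides * flux)

rack_heat_w = 120_000  # mid-range of the 100-130 kW rack figure
print(f"{radiator_area_m2(rack_heat_w, 330.0):.0f} m^2 per rack")
```

Under these optimistic assumptions a single rack needs roughly 100 m² of panel; a real radiator also absorbs solar and Earth infrared flux, which pushes the number higher still.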
The cooling story is striking, but the economics Poon is chasing come from the power side. A satellite in sun-synchronous low Earth orbit can access effectively continuous solar illumination with capacity factors above 99 percent, compared to 20 to 25 percent for terrestrial solar. No batteries required for nighttime operation. No grid interconnect queue. No four-year substation buildout. For AI inference workloads, where latency tolerance is measured in tens of milliseconds and the compute is embarrassingly parallel, the downlink and uplink math may actually work.
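The power-side arithmetic is simple enough to write down. A back-of-envelope comparison using the capacity factors quoted above (the 99 percent orbital figure is the company's claim, not a measured value):

```python
# Annual energy delivered per kilowatt of installed solar capacity,
# as a function of capacity factor.
HOURS_PER_YEAR = 8760

def annual_kwh_per_kw(capacity_factor: float) -> float:
    return capacity_factor * HOURS_PER_YEAR

orbital = annual_kwh_per_kw(0.99)      # sun-synchronous LEO, per the pitch
terrestrial = annual_kwh_per_kw(0.22)  # midpoint of the 20-25 percent range
print(f"orbital: {orbital:.0f} kWh/kW/yr, terrestrial: {terrestrial:.0f} kWh/kW/yr")
print(f"ratio: {orbital / terrestrial:.1f}x")
```

Each kilowatt of panel in orbit delivers roughly 4.5 times the annual energy of the same panel on the ground, before accounting for launch cost, degradation, or the batteries and grid services the terrestrial plant still needs.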
The company is running through FCC filings for a broader constellation. That is the signal to track. A single test satellite is a publicity moment. A constellation is a distribution architecture for distributed AI inference, and it competes with terrestrial data centers for exactly the workloads that are blowing up grid allocation budgets in Virginia and Phoenix.
The honest read is that space data centers will not replace terrestrial infrastructure at any meaningful scale in this decade. Launch costs, radiation-hardening overhead, servicing limitations, and the physics of heat rejection all conspire to keep orbital compute in a niche. Training workloads, which need enormous contiguous clusters with sub-microsecond interconnect latency, stay on the ground.
The category to track is latency-tolerant inference. Model serving where 200 ms of round-trip delay is acceptable. Certain batch processing workloads. Archival inference at volume. If those use cases move off-planet, they free up terrestrial capacity for training workloads and the latency-sensitive inference that cannot leave Earth. The cooling industry should welcome that. Every inference workload exported to orbit is a megawatt freed from the grid allocation queue.
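To see why the 200 ms budget is comfortable, consider propagation delay alone. A sketch assuming a 550 km orbit (a typical LEO altitude, not Orbital's published figure) and straight-line paths to a ground station:

```python
import math

C_KM_S = 299_792.458  # speed of light, km/s
R_EARTH_KM = 6371.0

def slant_range_km(altitude_km: float, elevation_deg: float) -> float:
    """Ground-to-satellite distance at a given elevation angle above the horizon."""
    e = math.radians(elevation_deg)
    r = R_EARTH_KM
    # Law of cosines on the Earth-center / ground-station / satellite triangle.
    return math.sqrt((r + altitude_km) ** 2 - (r * math.cos(e)) ** 2) - r * math.sin(e)

alt = 550.0  # assumed LEO altitude, km
for elev in (90, 25):  # overhead pass vs. a low-elevation pass
    rtt_ms = 2 * slant_range_km(alt, elev) / C_KM_S * 1000
    print(f"elevation {elev} deg: round-trip propagation ~{rtt_ms:.1f} ms")
```

Propagation is single-digit milliseconds round trip even at low elevation angles; queuing, ground-segment hops, and scheduling overhead would all have to eat more than an order of magnitude of headroom before a 200 ms budget is threatened. Training interconnects, by contrast, need microseconds, which is why they stay on the ground.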
Orbital is not a threat to the cooling industry. It is an experiment in whether the compute problem can be relocated to a place where power and cooling are no longer the binding constraints. If the April 2027 mission demonstrates sustained GPU performance and radiation resilience in orbit, the conversation changes. Not immediately, and not for every workload. But the direction of the arrow is clear.
The grid is the constraint. The cooling is the second constraint. Orbital is betting that both can be engineered around by moving the compute to a place where the grid does not apply and heat rejection costs radiator area instead of electricity. That bet is a long shot. It is also exactly the kind of experiment that should be running right now, given how constrained the terrestrial buildout has become.