
AI Compute in Orbit: The Long-Arc Thesis. Why Solar + Vacuum Beats Texas + Gas (Eventually).

Ripper · 7 min read

The reason this is worth taking seriously is not that we are anywhere near building it. We are not. The reason is that the four constraints terrestrial AI infrastructure runs into right now (grid, water, permits, NIMBY) all go away in orbit, and the one constraint that replaces them (launch cost) is the one constraint whose curve is actively bending in the right direction. That is a different shape of bet than most long-dated infrastructure plays. It is the long-arc thesis sitting underneath the short-cycle gigawatt-class buildout we cover on the infrastructure tracker.

The Constraints Terrestrial Runs Into

Four of them, all hardening over the next decade.

Grid. Building a new 500 kV transmission line in the US takes five to ten years, most of it waiting on permits and easements. The load growth from AI is arriving in three to five years. The math does not work. Utilities are filing load forecasts that exceed what their multi-year transmission plans can deliver, and the gap is being filled with gas peaker plants, which is exactly the opposite of what hyperscaler net-zero commitments need.

Water. Evaporative cooling is the cheap option for terrestrial campuses. A two-gigawatt facility evaporates millions of gallons per day. In wet climates this is fine. In Arizona, Texas, and parts of Nevada it is a political and physical constraint. Some Arizona municipalities now require closed-loop systems before permitting. Closed-loop is more expensive and less efficient.

Permits. Loudoun County, Virginia, the single largest data center cluster on Earth, has been debating moratoriums for two years. Memphis residents have filed complaints about Colossus turbine emissions. Permitting cycles are getting longer, not shorter, and the political ceiling on a single county's data center footprint is finite.

NIMBY. Related to permits but social, not regulatory. A gigawatt-class campus is visually enormous, it changes traffic patterns, it changes the local power and water economy, and it employs fewer people than the tax-incentive presentations imply (most jobs are short-term construction). Local opposition is rising and it does not split cleanly along party lines.

Why Orbit Sidesteps Each One

Solar in low Earth orbit delivers roughly 30% more instantaneous irradiance per panel area than the sunniest terrestrial site because there is no atmosphere and no clouds, and in the right orbital regimes there is no day-night cycle either (dawn-dusk sun-synchronous orbits stay in near-continuous sunlight), so the energy collected per panel per day is several times higher. Power density goes up. No grid needed.
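The daily-energy gap is easy to put numbers on. A rough sketch, using the solar constant, a generous clear-sky figure for a desert site, and an assumed terrestrial capacity factor (all illustrative, not engineering values):

```python
# Back-of-envelope comparison of daily solar energy per square meter of panel,
# orbital vs. an excellent terrestrial site. All inputs are rough assumptions.

SOLAR_CONSTANT_W_M2 = 1361          # irradiance above the atmosphere
TERRESTRIAL_PEAK_W_M2 = 1000        # clear-sky noon at a good desert site
TERRESTRIAL_CAPACITY_FACTOR = 0.25  # day-night cycle, sun angle, weather

def daily_energy_kwh_per_m2(irradiance_w_m2: float, duty: float) -> float:
    """Energy collected per m^2 of panel over 24 hours, in kWh."""
    return irradiance_w_m2 * duty * 24 / 1000

# Dawn-dusk sun-synchronous orbit: near-continuous sunlight (duty ~ 1.0)
orbital = daily_energy_kwh_per_m2(SOLAR_CONSTANT_W_M2, 1.0)
ground = daily_energy_kwh_per_m2(TERRESTRIAL_PEAK_W_M2, TERRESTRIAL_CAPACITY_FACTOR)

print(f"orbit:  {orbital:.1f} kWh/m^2/day")
print(f"ground: {ground:.1f} kWh/m^2/day")
print(f"ratio:  {orbital / ground:.1f}x")
```

Under these assumptions the per-day ratio lands around 5x, which is why the orbital power argument is about the duty cycle, not the 30% irradiance bump.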

Cooling in vacuum means radiating heat to the roughly 3 kelvin background of space. Sized correctly, this scales without water. Liquid cooling loops still exist on the spacecraft side to move heat to the radiators, but there is no evaporative loss. No water needed.
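"Sized correctly" is doing real work there. The Stefan-Boltzmann law sets the radiator area, and at gigawatt scale it is enormous. A minimal sketch, ignoring absorbed sunlight and Earth infrared, with an assumed radiator temperature and emissivity:

```python
# Radiator area needed to reject waste heat purely by radiation, from the
# Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# Ignores absorbed sunlight and Earth IR; all inputs are rough assumptions.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9       # typical for radiator coatings
RADIATOR_TEMP_K = 300  # warmer radiators shrink area, but chips must run hotter

def radiator_area_m2(heat_w: float, temp_k: float = RADIATOR_TEMP_K) -> float:
    """One-sided radiating area required to dump heat_w watts."""
    return heat_w / (EMISSIVITY * SIGMA * temp_k**4)

# A 1 GW facility rejects roughly 1 GW of heat
area = radiator_area_m2(1e9)
print(f"{area / 1e6:.1f} km^2 one-sided "
      f"(~{area / 2e6:.1f} km^2 of double-sided panel)")
```

At these assumptions the answer is on the order of a couple of square kilometers of radiator, which is why radiator mass shows up again in the launch-cost constraint below.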

Permits do not apply. International Telecommunication Union slot allocations and national launch licenses are real, but compared to county-by-county data center permitting, the cycle is shorter. No municipal moratoriums. No NIMBY (or at least, NIMBY of a different and more diffuse kind).

The Catches

The pitch is too good. Four real constraints replace the four terrestrial ones.

| Constraint | Status | Curve |
|---|---|---|
| Launch cost | $2,000/kg (Falcon 9) | Starship target: $100 to $500/kg by 2030 |
| Radiation hardening | Commercial GPUs are not rad-hard | Active research; shielding is mass-expensive |
| Mass to orbit | ~100 t per Starship | A 1 GW facility is millions of kg |
| Ground bandwidth | Limited downlink capacity | Optical ISLs + Starlink-class arrays help |

Launch cost is the load-bearing constraint, and it is the only one with a clear downward trajectory. Starship at full reusability targets the $100 to $500 per kilogram range. Falcon 9 reusable currently sits around $2,000 per kilogram. Pre-reusability launch cost tens of thousands of dollars per kilogram. That is the cost curve that takes orbital compute from impossible to merely difficult.

Radiation hardening is not solved at commercial GPU scale. NVIDIA H100s on the ground would fry in low Earth orbit within months from total ionizing dose and single-event upsets unless heavily shielded. Shielding adds mass, which adds launch cost. The alternatives are rad-hard custom silicon (slow, expensive, several generations behind consumer) or accepting shorter mission lifetimes with hot-swap replacement (which requires routine launch cadence). Both are research problems with no obvious solution yet.

Mass is the brute economic constraint. A modern AI rack is dense (120 kW per rack, ~1 ton per rack including chassis). A 1 GW facility is roughly 8,000 racks, plus structure, plus radiators, plus station-keeping fuel. On the order of millions of kilograms. Even at $100/kg, that is hundreds of millions of dollars in launch alone for one GW of compute. Today that buys you maybe a 250 MW terrestrial campus including buildings. On construction cost alone, the orbital math gets close but never beats dirt-based steel; the case rests on the constraints it escapes.
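The arithmetic in that paragraph, written out. Rack figures are the article's rough assumptions; structure, radiators, and fuel would add on top of the rack mass shown here:

```python
# Launch-cost arithmetic for the rack mass alone, using the article's
# rough figures. Structure, radiators, and fuel are excluded and would
# push the totals higher.

RACK_POWER_KW = 120        # modern dense AI rack
RACK_MASS_KG = 1000        # ~1 ton per rack including chassis
FACILITY_POWER_GW = 1.0

racks = FACILITY_POWER_GW * 1e6 / RACK_POWER_KW   # kW needed / kW per rack
mass_kg = racks * RACK_MASS_KG

print(f"{racks:.0f} racks, {mass_kg / 1e6:.1f} million kg of rack mass")

# Falcon 9 today, then the Starship target range
costs = {price: mass_kg * price for price in (2000, 500, 100)}
for price, cost in costs.items():
    print(f"  ${price}/kg -> ${cost / 1e9:.2f}B launch cost")
```

Under these assumptions the rack mass alone is ~8 million kg, so even the optimistic end of the Starship range leaves launch cost in the high hundreds of millions of dollars per gigawatt, before any structure or radiators.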

Ground bandwidth is the underrated constraint. Even a largely self-contained orbital training cluster has to ship checkpoints, gradients, and inference results back to Earth. Existing high-throughput Ka-band downlinks are gigabit-class. A 1 GW training run produces petabytes per day of internal traffic, only some of which has to come down, but the part that does come down is still big. Optical inter-satellite links and phased-array downlink architectures help. They do not eliminate the issue.
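A quick sense of scale for the downlink problem. Checkpoint size and link rates below are illustrative assumptions, not figures from any specific system:

```python
# Time to downlink a single training checkpoint at various link rates.
# Checkpoint size and rates are illustrative assumptions.

def downlink_hours(checkpoint_tb: float, link_gbps: float) -> float:
    """Hours to move checkpoint_tb terabytes over a link_gbps link."""
    bits = checkpoint_tb * 1e12 * 8
    return bits / (link_gbps * 1e9) / 3600

# Gigabit-class Ka-band up through optical-link territory
for rate in (1, 10, 100):
    print(f"{rate:>3} Gbps: {downlink_hours(2.0, rate):.2f} h per 2 TB checkpoint")
```

At a 1 Gbps link a 2 TB checkpoint takes over four hours, which is why gigabit-class downlinks cap how often an orbital cluster can sync state with the ground.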

Who Is Exploring

Multiple separately-reported feasibility programs, all in concept or early-engineering stage.

Anthropic and SpaceX have publicly discussed orbital extensions of Colossus-class training compute. We covered the announcement on May 9 in the Colossus orbital piece. The framing in that piece holds: the orbital footnote is structurally the bigger story even though the near-term GPU booking on Colossus 1 was the news headline.

Google has reportedly explored similar concepts internally, sometimes referenced as Project Suncatcher in the trade press. Less public than the Anthropic + SpaceX track, and more research-coded than commercial-coded. Worth watching at I/O cycles in the next two years for any public movement.

Starcloud is the clearest commercial player explicitly chasing orbital data centers as its founding mission. Small company, real engineering, real seed funding. The first real test of whether the bottom-up startup version of this thesis attracts capital at serious scale.

Lockheed Martin and Northrop Grumman have dual-use studies, framed as national-security space compute rather than commercial AI infrastructure. The DOD has historically been where rad-hard expensive space silicon gets funded. If commercial orbital compute happens, it likely happens through some lineage that touches defense.

China is reportedly exploring similar architectures via state-owned space companies. Opaque to outside observers but worth assuming non-zero. Sovereign-AI compute as a national security argument applies in orbit even more than on the ground.

The Timeline Reality

First megawatt-class orbital compute demonstration: 2030 to 2033, probably as part of a larger orbital infrastructure program (manufacturing, satellite servicing, lunar prep) rather than a standalone data center mission. First operational GW-class orbital compute: 2035 or later, contingent on Starship hitting its cost targets and rad-hard solutions reaching commercial maturity.

Terrestrial AI infrastructure carries the load for the next decade. The 2026 to 2030 gigawatt-class campuses on the tracker are not getting replaced by orbital. The bet is what comes after, when terrestrial starts running out of room and water and patience.

Why This Is Worth Watching Anyway

Three reasons.

One, capital allocation today shapes whether this happens in 2032 or 2042. If Anthropic and SpaceX put a billion dollars into orbital R&D in the next three years, the demonstration mission moves up five years. If they do not, it slips a decade. Long-dated R&D investment IS the front edge.

Two, the geopolitical implications are real and current, not future. The country that first deploys serious orbital compute has the option to keep its training runs out of adversary jurisdiction reach, away from export controls, and away from terrestrial permit cycles. That option has strategic value before the first kilowatt actually arrives in orbit.

Three, every constraint that pushes terrestrial buildout harder makes the orbital math incrementally less crazy. The Memphis turbine fight, the Loudoun moratoriums, the Arizona water permits, the FERC ruling on grid bypass: every one of them is a small force pushing the long-arc thesis from impossible toward inevitable. Watch the terrestrial constraints. The orbital answer comes into focus exactly as those get worse.

The next phase of AI is not just about better models. It is not even just about who has the steel. The phase after that is about whether the steel still has to be on Earth.

Concept-stage entry on the AI infrastructure tracker. Original Colossus-orbital coverage at the May 9 piece. Companion buildout analysis at The AI Buildout, Plain English.