Infrastructure · AI Buildout

The AI Buildout, Plain English: What Is Actually Getting Built

Marcus Chen · 7 min read

The AI industry is putting steel and concrete in the ground at a pace nobody has seen since the dotcom buildout of physical fiber. Stargate, Hyperion, Colossus, nuclear plants getting unmothballed, gas turbines arriving on flatbeds, utility commissions filing emergency load adjustments. We track 10 of the biggest projects on the new AI infrastructure page. This piece is the plain-English read of what they are, what they need, and why this is happening so fast.

What is being built

Big buildings full of computers, drawing a lot of electricity. That is the short version. The longer version: the new AI data centers are different from the cloud data centers of the 2010s in three structural ways. First, they are bigger. A modern Meta campus like Hyperion is heading for 2 gigawatts of power draw on completion. A traditional general-purpose data center campus tops out at one or two hundred megawatts. Hyperion alone could draw 10 to 20x what a 2018-era hyperscale campus did.

Second, the silicon density is higher. A rack of Nvidia GB200 NVL72 systems draws roughly 120 kilowatts. A traditional server rack drew 5 to 15. That is the same floor area pulling 10x the power, which means new cooling (liquid cooling is now the default), new power distribution (some campuses run their own substations), and new heat rejection plans. Some of the gas turbine controversy at xAI Colossus in Memphis comes straight from this density problem: the grid could not deliver the kilowatts per square foot on the timeline, so xAI brought in temporary methane turbines.
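A quick back-of-envelope check of those multiples, using only the figures quoted above (the numbers come from the text; the comparison arithmetic is ours):

```python
# Figures quoted in the article; the division is just a sanity check.
gb200_rack_kw = 120            # Nvidia GB200 NVL72 rack
legacy_rack_kw = (5, 15)       # traditional server rack range

low = gb200_rack_kw / legacy_rack_kw[1]   # vs the high end of a legacy rack
high = gb200_rack_kw / legacy_rack_kw[0]  # vs the low end
print(f"Rack density: {low:.0f}x to {high:.0f}x a traditional rack")

hyperion_mw = 2_000            # ~2 GW campus on completion
legacy_campus_mw = (100, 200)  # 2018-era hyperscale campus
print(f"Campus draw: {hyperion_mw / legacy_campus_mw[1]:.0f}x to "
      f"{hyperion_mw / legacy_campus_mw[0]:.0f}x a 2018-era campus")
```

The rack comparison lands at roughly 8x to 24x depending on which end of the legacy range you take, which is why "10x" is a fair round number.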

Third, the workload profile is different. AI training runs are flatter than traditional cloud workloads. A model training job pulls close to peak power 24 hours a day for weeks or months at a time. Inference is more variable, but it is bursty in a different way from web traffic. This matters for the grid because a load that sits flat at peak is harder to balance than one that spikes and recedes. Utilities are used to planning around residential evening peaks and industrial daytime loads. AI training is its own load curve, and most US utilities are still figuring out how to model it.

The four numbers that matter

| Number | Value | What it tells you |
| --- | --- | --- |
| Total announced | ~$500B (Stargate alone) | Five-year program across multiple sites |
| Single-campus draw | up to 2 GW | Meta Hyperion class; rivals a small city |
| Nuclear MW signed | ~1,800 MW (TMI + Susquehanna) | Hyperscalers buying or restarting reactors directly |
| Operational year cluster | 2026 to 2030 | Most new campuses come online in this window |

Why nuclear is suddenly back

For 30 years US utility nuclear was in retreat. New plants got cancelled, old plants got retired, and the orthodoxy was that we were done building reactors. Then Microsoft signed a 20-year deal to restart Three Mile Island Unit 1 (the undamaged one; Unit 2 is the 1979 partial meltdown and remains permanently shut). Amazon bought a 480 MW direct feed from Talen Energy's Susquehanna plant with provisions to scale to 960 MW. Google signed with Kairos Power for small modular reactors. Oracle announced three SMRs of its own.

The hyperscalers want clean firm baseload that runs 24/7 and does not need backup gas. A nuclear plant fits that exactly. They also want to write 15 to 20 year power purchase agreements at predictable prices, which works for nuclear economics but does not work for solar or wind alone. Net effect: AI capital is reopening reactors that the previous decade closed. That is a notable shift and it shows up most clearly in the permits and PPAs, not the marketing.

Where this gets contested

Three flashpoints, all factual, none speculative.

Water draws. Liquid cooling and evaporative cooling pull water. In wet climates this is mostly fine. In Arizona, Texas, and parts of Nevada it competes with municipal and agricultural use. Several Arizona municipalities have started requiring closed-loop systems and pre-treatment commitments before permitting new builds. Public records on water consumption per facility remain spotty; some operators publish, some do not.

Grid bypass. The Amazon-Talen Susquehanna deal triggered a FERC fight in late 2024 about whether co-located data centers should pay full transmission cost-share if they are technically behind the meter. The answer is unsettled. If FERC sides against bypass structures, the economics of every direct-feed nuclear deal changes. If it sides for them, every utility ratepayer in the country may end up subsidizing infrastructure that does not serve their load.

Local pushback. Memphis residents living downwind of the Colossus campus filed complaints over methane turbine emissions. Loudoun County, Virginia, the largest data center cluster on Earth, has been debating moratoriums for two years. Some counties want the tax revenue and jobs; some want the steel out of their viewshed and the gas turbines out of their air. It depends on the county and the neighborhood, and it does not split cleanly along party lines.

What this means for the AI you use

Pricing floors first. The reason model pricing has been falling for two years is that compute supply outran demand. The gigawatt-class campuses arriving in 2026 to 2030 keep that supply curve growing. As long as new capacity comes online faster than agent and enterprise adoption picks up the slack, prices keep drifting down. The moment buildout slips or demand spikes, prices stop falling. The 2027 to 2028 window is the most interesting one to watch on that front because most of the new GW capacity lands then.

Provider-specific reliability second. The campuses backing different providers come online at different times with different power profiles. Anthropic just locked in five years of Google TPU capacity that begins arriving in 2027. OpenAI's Stargate flagship in Abilene comes online in 2026. xAI is running on Memphis methane until the grid catches up. These delivery timelines map directly to which providers have headroom for which workloads in which year. Our /status page and /pricing page will start surfacing those constraints as they get real.

Geopolitics third. The buildout is heavily US-concentrated, with secondary clusters in the UAE, Saudi Arabia, France, and the UK. China is building its own version, mostly opaque to outside observers. Sovereign-AI compute as a national security argument is getting louder and is now showing up in DOE filings, DOD partnerships, and Treasury export-control decisions. That story is bigger than one article and we will keep covering it.

The bottom line

The AI industry is no longer software-only. It is steel, concrete, transformers, cooling towers, and 20-year power contracts. That changes how fast it can grow (slower than software, faster than utilities are used to), where it can grow (where the power is and where the permits clear), and who has leverage (whoever signs the long-dated power deals first).

For everyone using AI: the buildout is mostly good news for the next three years. More compute means cheaper inference, more model variety, more headroom for agents. After that the picture gets harder to read, because we will see whether the demand curve has caught up to the supply curve. If yes, prices stabilize and the providers with the most signed-up power win. If no, somebody owns 2 gigawatts of empty server halls.

Either way, the next phase of AI is not just about better models. It is about who has the steel.

We track 10 of the largest projects on the AI infrastructure page. Free JSON for agents at /api/ai-infrastructure/projects.json.
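For agents that want to pull the data programmatically, a minimal fetch sketch follows. The base URL is a placeholder (the article gives only the relative path), and the record shape shown is an assumption for illustration, not the endpoint's actual schema:

```python
import json
from urllib.parse import urljoin

BASE = "https://example.com"  # placeholder: substitute the site's actual host
url = urljoin(BASE, "/api/ai-infrastructure/projects.json")

# Live fetch sketch (uncomment against the real host):
# from urllib.request import urlopen
# projects = json.load(urlopen(url))

# Assumed record shape for illustration only; the real schema may differ.
projects = json.loads('[{"name": "Stargate", "announced_usd_billions": 500}]')
for p in projects:
    print(p["name"], p.get("announced_usd_billions"))
```

Because the endpoint is plain JSON at a stable path, no API key or SDK should be needed; any HTTP client that can parse JSON will do.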