- What does this page cover?
- Major AI buildout projects across four operator categories: hyperscalers selling capacity (Microsoft, Google, Amazon, Meta, Apple), frontier-lab compute farms consuming capacity (Stargate, Colossus, the Anthropic TPU commit), AI-specialized GPU clouds (CoreWeave, Lambda), and Bitcoin-pivot AI hosting (IREN, Hut 8). Plus the nuclear PPAs and restarts powering them, and a concept-stage orbital entry. Editorial curation; every entry is sourced to public records. Free JSON at /api/ai-infrastructure/projects.json.
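- The JSON feed above can be consumed with a few lines of Python. A minimal sketch: the field names (`name`, `category`) and the wrapped-vs-bare payload shapes are assumptions for illustration, not the feed's documented schema.

```python
import json

# Path of the public feed; prepend the site's host for a live fetch.
FEED_PATH = "/api/ai-infrastructure/projects.json"

def load_projects(raw: str) -> list[dict]:
    """Parse the feed, tolerating either a bare JSON list or an
    object wrapping the list under a "projects" key (assumed shapes)."""
    data = json.loads(raw)
    return data if isinstance(data, list) else data.get("projects", [])

# Offline sample with hypothetical field names; the real schema may differ.
sample = '{"projects": [{"name": "Colossus", "category": "frontier-lab compute farm"}]}'
for project in load_projects(sample):
    print(project["name"], "-", project["category"])
```

  The helper keeps parsing separate from fetching, so the same code works against a saved snapshot of the feed or a live request.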
- Why a separate page for this?
- AI infrastructure is the physical substrate of every model, every API call, every agent on this site. We already cover the model layer, the pricing layer, and the funding layer. This is the layer underneath them all. The next two years will be defined by which projects come online on time, which slip, and which utility relationships hold up.
- Is xAI a hyperscaler?
- Strictly speaking, no. The traditional definition of hyperscaler is a cloud provider operating at massive scale that sells capacity to third parties: AWS, Microsoft Azure, Google Cloud, Oracle Cloud, plus arguably Meta and Apple operating hyperscale infrastructure for internal use. xAI Colossus today is a single-tenant frontier-lab compute farm: xAI builds it to train Grok, not to rent capacity. But the term has broadened. Reporters and analysts increasingly use "hyperscaler" to mean "any operator running compute at hyperscale", which includes xAI. Elon Musk has also signaled that future Colossus generations may open multi-tenant capacity, which would satisfy the strict definition too. We track xAI Colossus as a frontier-lab compute farm in this registry, but we flag the terminology drift here so search-term traffic finds the right entry.
- How are projects added?
- Editorial cadence. An entry is added when an authoritative source (company announcement, regulatory filing, utility commission docket, or established trade reporting) confirms a project at gigawatt class or with an unusual structural feature (nuclear PPA, dedicated build, grid bypass). We would rather ship 10 well-sourced entries than 100 stale ones.
- Why no opinions on the politics?
- Because the politics changes faster than the steel and concrete. We track the physical buildout, the power deals, the timelines, and the announced capacity. Reasonable people disagree on whether a 2 GW data center is good or bad for a community; reasonable people do not disagree on whether it is being built. We stay on the latter.
- How does this connect to the rest of TensorFeed?
- Cross-linked with /funding/portfolio (the capital flowing into AI infrastructure), /pricing (the model prices these data centers serve), and /status (the live state of the operators that run them). Together they form a closed-loop view of the AI ecosystem from capital to silicon to inference.
- Will you cover environmental impact?
- Yes, factually. Water draws, grid strain, peaker plant filings, emissions, and community pushback all appear in the context paragraph of relevant entries with their primary sources. Pros and cons both get reported. No endorsements, no advocacy, just sourced facts.