Markets · AI Infrastructure

Anthropic's $200B Compute Bill Is Bigger Than Its Revenue. The Google TPU Deal in Numbers.

Marcus Chen · 6 min read

The Information broke the Anthropic-Google number on Tuesday: a $200 billion commitment for cloud and Broadcom-built TPU capacity over five years, with the new gigawatt-scale buildout coming online starting in 2027. Neither side has confirmed it. Neither side has denied it. The number sat at the top of our weekly roundup, and it deserves its own piece, because the math underneath it tells you something specific about how the AI industry actually works now.

Headline: Anthropic just promised Google more money than Anthropic currently earns.

The Math

Anthropic's annualized run-rate revenue is currently somewhere north of $30 billion, up from roughly $9 billion at the end of 2025. The company's 2026 server cost is expected to land near $20 billion. Average the $200B commitment across five years and you get $40 billion of Google compute spend per year, before any other supplier shows up on the bill.

| Metric | Value | Notes |
| --- | --- | --- |
| Total commitment | $200B | Five years, Google Cloud + Broadcom TPU capacity |
| Average annual draw | $40B/yr | Likely back-weighted as 2027+ capacity comes online |
| Anthropic run-rate revenue | ~$30B | Up from ~$9B at end of 2025 |
| 2026 server cost | ~$20B | Per Anthropic's own forecast |
| Google reported backlog | $460B | Doubled this quarter; Anthropic is 40%+ of it |
| TPU vs Nvidia price delta | 40-50% lower | Google's own framing on equivalent capacity |

Two ways to read $40B/year against $30B of revenue. The optimistic read is that Anthropic is buying capacity for revenue it does not yet have but is convinced is coming, on the curve that took it from $9B to $30B in the last five months. The pessimistic read is that this is leverage, full stop. Both reads are probably right. The deal only works if the revenue line keeps doubling every six to nine months for at least the next two years, and Anthropic clearly believes that is what is going to happen.
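The arithmetic behind that claim is easy to sketch. A minimal Python projection, using the article's ~$30B starting run-rate and its six-to-nine-month doubling cadence; the checkpoint months are our choice, purely for illustration:

```python
# Back-of-envelope check: starting from a ~$30B run-rate, how quickly does
# revenue cover a $40B/yr average compute draw under each doubling cadence?
# Starting figure and 6-9 month cadence are the article's; the rest is a sketch.

def projected_run_rate(start_b: float, months: int, doubling_months: float) -> float:
    """Run-rate in $B after `months`, doubling every `doubling_months` months."""
    return start_b * 2 ** (months / doubling_months)

ANNUAL_DRAW_B = 200 / 5  # $200B commitment averaged over five years = $40B/yr

for doubling in (6, 9):
    for months in (0, 12, 24):
        rr = projected_run_rate(30, months, doubling)
        print(f"doubling every {doubling}mo, +{months}mo: run-rate ${rr:,.0f}B "
              f"({'covers' if rr >= ANNUAL_DRAW_B else 'short of'} the $40B/yr draw)")
```

On either cadence the run-rate clears the average draw within the first twelve months; what the deal actually bets on is that the curve holds for two more years, not one.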

What Google Actually Gets

Three things, in roughly this order of importance.

One, Google recoups most of its own equity stake on the compute side. Alphabet has put about $40 billion of equity into Anthropic over the past two years, including the $40B tranche we covered in April. Under the new deal, Anthropic spends $40 billion per year, on average, on Google infrastructure. By year two of the contract, Google has booked more compute revenue from Anthropic than Google ever invested in Anthropic. The equity becomes a hedge on a customer relationship that the customer is now contractually anchored to. This is the most efficient capital recycling story in the cloud industry right now.
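The "by year two" claim survives even if the draw is back-weighted. A toy cash-flow sketch: the flat schedule is the article's simple $200B/5 average, while the back-weighted ramp is hypothetical, our illustration of the table's note that capacity comes online from 2027:

```python
# Sketch: first contract year in which cumulative Anthropic compute spend
# exceeds Alphabet's ~$40B equity stake. Both schedules sum to the $200B
# commitment; the back-weighted ramp is a hypothetical shape, not disclosed terms.

def crossover_year(draws, equity_b):
    """First year where cumulative draw exceeds the equity stake, else None."""
    cumulative = 0
    for year, draw in enumerate(draws, start=1):
        cumulative += draw
        if cumulative > equity_b:
            return year
    return None

EQUITY_B = 40  # Alphabet's approximate equity investment, per the article

schedules = {
    "flat": [40, 40, 40, 40, 40],           # simple $200B / 5yr average
    "back-weighted": [15, 30, 45, 55, 55],  # hypothetical ramp, also $200B total
}

for name, draws in schedules.items():
    print(f"{name}: compute revenue passes equity invested in year "
          f"{crossover_year(draws, EQUITY_B)}")
```

Under both shapes the crossover lands in year two, which is why the equity reads as a hedge rather than a bet.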

Two, the TPU manufacturing roadmap gets a guaranteed off-taker through 2032. TPUs are expensive to design and slow to ramp, especially with Broadcom as the silicon partner on the new generations. A multi-gigawatt commitment from a single frontier lab gives Google's capacity-planning team something Nvidia does not have at this scale: a hard, named demand floor that fab allocations and power purchase agreements can be sized against. Google is not just selling Anthropic chips. It is letting Anthropic pre-finance the TPU buildout.

Three, the contract becomes a moat against Anthropic ever fully migrating off TPU. Multi-year contracts at this scale come with consumption commitments, not optional ceilings. Once Anthropic's training and inference graphs are tuned for TPU architectures and the workloads are running on Broadcom-designed silicon, the switching cost grows every quarter. The deal does not just lock in revenue; it locks in technical dependency.

What This Does to Nvidia

Less than the headlines suggest, but more than zero.

Anthropic is not abandoning Nvidia. The company still uses Nvidia GPUs through AWS Trainium-adjacent capacity, the SpaceXAI Colossus 1 cluster we covered earlier this week, and bare-metal rentals across multiple cloud and neocloud providers. The picture is now unambiguously multi-silicon: TPU as the largest committed wedge, Trainium as the AWS relationship's native chip, Nvidia as the workhorse for everything else.

What changes for Nvidia is the negotiating posture of its biggest buyers. Until this quarter, the working assumption inside the cloud industry was that Nvidia's frontier GPUs were the only credible option for training a frontier model at scale. The Anthropic commitment to TPU at $200 billion is the loudest possible counter-example. If TPU is good enough for Claude training and Claude inference, it is good enough for any other model on the frontier curve. Nvidia's pricing power at the top of the buyer list, the part of the curve that drives the multiple, just got an asterisk.

The market has not fully repriced this. Nvidia's data center revenue line is still near a record, and the Vera Rubin platform deployment with OpenAI is on track for the second half of this year. But the next four quarterly calls are going to feature the phrase "custom silicon" more times than the previous four combined, and that repricing is going to happen one investor call at a time.

The 2027 Floor

The capacity in this deal does not arrive in 2026. It arrives starting in 2027. That is not a contract negotiation outcome; it is a physical constraint. New TPU generations need new fab allocations at TSMC. New gigawatt-scale data centers need power purchase agreements with utilities, fiber routes, water permits, and substations. None of those line items compress below 18 to 24 months from a standing start, and most of them are longer.

What that means in practice: every frontier lab is now contracted out for compute that physically does not yet exist. OpenAI's 10-gigawatt Vera Rubin commitment with Nvidia, Microsoft's Azure expansion, the Anthropic-Google deal, the Meta-Nvidia buildout, and the SAP-Prior Labs European compute plan we wrote up last week all converge on the same delivery window. 2027 is when the next wave of compute actually shows up at scale, because that is the earliest the buildout can physically deliver. In the meantime, every lab is rationing what it has and pre-paying for what it wants.

Our Take

The $200B headline is dramatic, and the comparison to revenue is the cleanest way to explain why. But the more interesting fact is that Anthropic, OpenAI, and a few other frontier labs have collectively turned hyperscaler revenue into a derivative of their own forward growth assumptions. If Anthropic is right that revenue keeps doubling, $40 billion a year of compute is cheap. If Anthropic is wrong, Google is the one holding the bag, which is exactly why Google insisted on the equity stake before writing the contract.

The practical implication for builders: the marginal cost of an inference call on a frontier model in 2027 is going to be set by TPU economics as much as by Nvidia's margin. That should keep the inference price floor falling at roughly the rate our pricing floor analysis predicted, even with capacity constraints, because the per-token cost of the silicon itself is on a different curve from the cost of the cluster around it. Cheap inference is the policy outcome of cheap chips. Anthropic just made cheap chips a contractual reality for the second-largest frontier lab in the world.
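One way to see why the silicon and the cluster sit on different curves: only part of a per-token price floor is silicon. A toy decomposition, where the 40-50% delta is the table's figure but the base floor and the 60/40 silicon-versus-everything-else split are our assumptions:

```python
# Toy per-token price-floor decomposition. Hypothetically assumes 60% of the
# floor is silicon cost and 40% is everything else (power, networking, margin);
# only the silicon slice gets the TPU discount.

def price_floor(base_per_mtok: float, silicon_share: float, silicon_discount: float) -> float:
    """Blended $/Mtok floor when only the silicon share gets cheaper."""
    silicon = base_per_mtok * silicon_share * (1 - silicon_discount)
    rest = base_per_mtok * (1 - silicon_share)
    return silicon + rest

BASE = 3.00  # hypothetical $/Mtok floor on Nvidia-class capacity
for discount in (0.40, 0.50):  # the 40-50% TPU price delta
    print(f"{discount:.0%} cheaper silicon -> ${price_floor(BASE, 0.6, discount):.2f}/Mtok")
```

Under these assumptions, even a 50% silicon discount moves the blended floor by only about 30%, which is why the floor falls steadily rather than collapsing overnight.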

We are tracking the deal cadence on our Anthropic provider page and the corresponding Google compute relationship on the Google page. Next data point to watch: whether Microsoft responds with a similar custom-silicon mega-commitment to OpenAI on Maia, or whether Microsoft sticks with the multi-cloud posture it has been building since the OpenAI relationship reset in April. The shape of that answer tells you whether 2027 is a TPU year or a three-way silicon race.