The Pentagon Skipped Anthropic. Seven Other AI Companies Got the Contracts.
Yesterday the Department of Defense signed AI infrastructure deals with seven companies, totaling more than $200 million, for work on classified networks. The list reads like a who's who of the modern AI stack: OpenAI, Google, Microsoft, AWS, NVIDIA, SpaceX, and Reflection. One name is conspicuously absent. Anthropic, the only frontier lab whose public usage policy explicitly prohibits weapons development, was not invited.
That is not an oversight. According to reporting from CNN and follow-up coverage in Defense One, the Trump administration deliberately blacklisted Anthropic after months of friction over the company's refusal to relax its acceptable-use policy for warfare applications. We just watched the first frontier lab in this cycle pay a real economic price for enforcing the safety terms it actually wrote down.
What Got Signed
Seven deals, all announced May 1, 2026, all attached to existing classified-network programs at the DoD's Chief Digital and AI Office. The headline contract value aggregates to roughly $200 million across initial scopes, with option years that could push the multi-year ceiling into the low billions.
| Vendor | Scope |
|---|---|
| OpenAI | Frontier model access on classified networks, plus a co-development track for fine-tuned mission models. |
| Google | Gemini Enterprise and Vertex on IL5/IL6 Google Distributed Cloud, with TPU compute attached. |
| Microsoft | Azure Government and an OpenAI-on-Azure carve-out at IL6, including Copilot for analyst workflows. |
| AWS | Bedrock on GovCloud and Secret Region, multi-model including Llama, Mistral, Cohere, and the new OpenAI-on-Bedrock SKUs. |
| NVIDIA | DGX SuperPOD deployments and NIM inference microservices for on-prem classified clusters. |
| SpaceX | Starshield-routed model access for forward and disconnected environments. |
| Reflection | Open-weights frontier-class models for sovereign deployment, the policy-friendly hedge against any one vendor. |
The notable shape of the list: every major hyperscaler, every major chipmaker that ships for inference, the frontier lab the administration prefers, and one open-weights provider for the sovereign-deployment story. It is a complete AI stack for the DoD, with the missing slot exactly where Anthropic would normally sit.
Why Anthropic Got Skipped
Anthropic's acceptable-use policy is the most restrictive of any frontier lab. It explicitly prohibits the use of Claude for "weapons of mass destruction," for "cyberweapons," and for offensive uses against critical infrastructure. The policy is not boilerplate. The company has refused specific contracts on these grounds before, and the team has been vocal about it.
Reporting from earlier in the year described back-channel friction between Anthropic and the Office of the Secretary of Defense over scope language. The administration wanted the option to use models for offensive cyber operations and target identification. Anthropic's policy required carve-outs that the contracting officers found unworkable. The negotiation went silent in March. Yesterday's announcement made official what was already true: Anthropic is not in the room.
OpenAI, by contrast, quietly updated its usage policy in early 2024 to remove the explicit ban on military and warfare applications. Google's defense work has been growing since the Project Maven controversy, and Gemini ships under DoD-friendly terms. Microsoft and AWS have decade-long classified track records. Reflection's open-weights stance sidesteps the policy question entirely. Anthropic is the outlier, and it is the only outlier.
The Compute Deal That Made This Possible
Two days before the DoD announcement, Anthropic and Google finalized the next leg of their compute partnership: roughly $40 billion of additional commitment, including up to 5 GW of TPU capacity and a co-engineered TPU generation through 2031. Broadcom is the third party in the structure. The deal is one of the largest pre-sold compute contracts ever signed.
The timing is doing real work. A frontier lab that has just locked in multi-year, guaranteed compute capacity from a hyperscaler partner has an obviously different calculus on whether to soften its usage policy for a $30 to $80 million DoD slot. Anthropic was already revenue-secure in the safety-restrictive segment of the market. The DoD knew it. Anthropic knew the DoD knew it. The negotiation was over before the public announcement.
That is the structural story. Safety-restrictive policies are a luxury good in the AI market right now, and the price of admission is having a compute partner big enough to backstop the revenue you decline. Anthropic just demonstrated that it has one.
The Industry Signal
For the rest of the AI industry, this is the first concrete data point on what enforcement of an anti-warfare policy actually costs. Until yesterday, public restrictions on weapons use were a positioning move. Today they are a line item.
Three things change because of this announcement.
First, the safety-as-product wedge gets more explicit. Anthropic can now say to enterprise customers: we said no to the Pentagon when it conflicted with our policy. That is a marketing asset for compliance-sensitive buyers (healthcare, education, finance, EU regulated industries) that no other lab can credibly claim. Expect to see this language in sales decks within a quarter.
Second, the rest of the industry now has cover to drop or quietly soften its own warfare-related restrictions. If five of the seven big players are signing DoD work, the competitive pressure on the holdout is one-directional. Watch Mistral, Cohere, and the smaller US labs over the next sixty days. The ones that follow Anthropic will say so loudly. The ones that follow OpenAI will not.
Third, the open-weights story gets a boost. Reflection getting a seat at the table alongside the closed-weight giants is the administration signaling that sovereign deployment of open weights is a permitted path. That is good for Llama, good for DeepSeek-derived deployments inside the US government, and good for anyone selling on-prem inference hardware.
What This Is Not
A few things this announcement is not, despite how it has been framed in some coverage.
It is not a ban. Anthropic is free to pursue commercial work, allied government work (the UK AISI relationship continues), and existing enterprise contracts. Claude is not being restricted from any market. It is being restricted from one specific procurement channel that the administration controls.
It is not a permanent state. Administrations change. Procurement officers rotate. The GUARD Act vote we covered last week shows that bipartisan AI policy can move fast in either direction. A future DoD that wants the most safety-restrictive frontier lab in the stack can re-engage Anthropic at any time. The door is closed, not locked.
And it is not a financial crisis. Anthropic's annualized revenue run rate is reportedly above $9 billion, the company is sitting on multi-year compute commitments measured in tens of billions, and enterprise demand has been growing faster than the team can serve it. The Pentagon contract Anthropic walked away from would have been a rounding error against any of those numbers.
Our Take
The interesting move in AI policy this week was not the GUARD Act, the Mythos preview, or the latest model launch. It was a procurement decision that put a number on the cost of a safety policy. For the first time in this cycle, a frontier lab held its policy line against a paying government customer, took the economic hit publicly, and lived to tell about it. The next twelve months of AI ethics talk will quietly route around this fact: you can write the policy, or you can take the contract, but the days of getting both are ending.
For builders, the practical takeaway is that vendor selection now has a policy axis you can probe. If your buyer is in a regulated industry or has reputational risk on the line, asking a vendor to point at the deals it has declined is a real diligence question. Most labs will not have a coherent answer. One does.
For policy watchers, this is a stronger signal than any of the voluntary commitments, frontier-model evaluations, or White House summits of the last two years. Real enforcement leaves a paper trail. Yesterday Anthropic left one. We will be tracking follow-on procurement decisions on our AI policy hub and the incidents and policy timeline as more vendors are added or removed from the DoD's shortlist.