OpenAI, Anthropic, and Google Just Teamed Up Against Chinese AI Theft

Ripper · 6 min read

For years, the big three AI companies treated each other as mortal enemies. GPT versus Claude versus Gemini. Pricing wars. Feature parity. Talent raids. Then, this week, something unprecedented happened. OpenAI, Anthropic, and Google announced they are sharing intelligence about Chinese AI companies stealing their models.

The announcement came through the Frontier Model Forum, the industry body the companies helped found in 2023 to coordinate on frontier AI safety. And what they have to say is alarming.

The Distillation Attack

Let me explain what happened, because it is more sophisticated than standard prompt extraction.

For the past year, three Chinese AI companies have been mounting what is called an adversarial distillation attack. The basic idea is simple. Create fake accounts. Submit carefully crafted prompts designed to make Claude, ChatGPT, or Gemini reveal their internal reasoning patterns. Do this millions of times. Then use the collected output to train a smaller, cheaper model that mimics the original.
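To make the mechanics concrete, here is a minimal sketch of what such a harvesting loop looks like in Python, assuming a generic chat-completions endpoint. The URL, key, and response shape are hypothetical, not any particular provider's actual API.

```python
# A minimal sketch of the distillation harvesting loop. The endpoint,
# key, and response format are hypothetical, not any one provider's API.
import json
import requests

TEACHER_URL = "https://api.example.com/v1/chat/completions"  # hypothetical
API_KEY = "sk-fraudulent-key"  # attackers rotated thousands of these

def query_teacher(prompt: str) -> str:
    """Send one crafted prompt to the frontier model and return its reply."""
    resp = requests.post(
        TEACHER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def harvest(prompts, out_path="distill.jsonl"):
    """Collect (prompt, response) pairs as supervised data for a student model."""
    with open(out_path, "a") as f:
        for prompt in prompts:
            pair = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(pair) + "\n")

# Run millions of times across thousands of accounts, the resulting
# JSONL becomes ordinary fine-tuning data: the student model learns to
# imitate the teacher's outputs without ever touching its weights.
```

Notice what the sketch implies: every individual request is ordinary, legitimate-looking API usage. Nothing in a single call betrays the campaign. Only the aggregate pattern does.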

Anthropic documented the scale: sixteen million adversarial exchanges, spread across approximately 24,000 fraudulently created accounts on multiple services. The attackers were methodical. They rotated IP addresses. They varied their account creation patterns. They designed their prompts to read like legitimate research queries. The goal was not to be noticed.

It was, in essence, a way to steal billions of dollars' worth of R&D without a server to seize, a headquarters to raid, or anything else that could be physically targeted.

Why This Matters Now

The timing is important. Frontier Model Forum announcements do not happen casually. This level of coordination requires legal approval from three separate general counsels, regulatory clearance, and agreement on what is safe to disclose publicly versus what stays confidential.

What made this disclosure possible, I think, is that all three companies now understand they face the same threat and the same timeline. Adversarial distillation works today. It will work better tomorrow. By next year, the models being distilled in China will be nearly as good as the ones they are stealing from. Not identical. Close enough to power products. Close enough to compete.

Sharing intelligence now creates a defensive advantage before the gap closes completely. It also sends a message to the actors doing the stealing. You are detected. You are documented. You are public.

The Bigger Picture

This is also a shot across the bow of open-source advocates who argue that frontier AI models should be freely available to everyone.

The counterargument is now backed by data. If you open-source Claude, the primary beneficiaries are not OpenAI and Google; they are already building their own models. The winners are foreign state actors and the companies they fund. Open-sourcing frontier models does not accelerate widespread beneficial AI. It accelerates whatever China and Russia want to build with them.

Anthropic, OpenAI, and Google are not saying this explicitly in their forum announcement. They do not need to. The subtext is clear. Keeping frontier models proprietary is not about greed. It is about maintaining a defensive window. The moment these models go public, that window closes.

What Happens Next

The Frontier Model Forum has committed to ongoing intelligence sharing and coordinated response protocols. That sounds formal and toothless until you realize what it means in practice. If a particular IP block or account creation pattern gets flagged as part of a distillation campaign, all three companies now have agreements to blacklist it simultaneously.
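In code terms, the protocol amounts to each member enforcing the union of every other member's indicators. A minimal sketch, assuming a shared feed with a hypothetical schema (the announcement does not publish one):

```python
# A sketch of simultaneous blacklisting via a shared indicator feed.
# The schema, field names, and example entries are all hypothetical.
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    kind: str    # e.g. "ip_block" or "signup_pattern"
    value: str   # e.g. a CIDR range
    source: str  # which member company flagged it

SHARED_FEED = [
    Indicator("ip_block", "203.0.113.0/24", "anthropic"),
    Indicator("ip_block", "198.51.100.0/24", "openai"),
]

def is_blocked(client_ip: str, feed=SHARED_FEED) -> bool:
    """Each member enforces the union of every member's indicators."""
    addr = ipaddress.ip_address(client_ip)
    return any(
        ind.kind == "ip_block" and addr in ipaddress.ip_network(ind.value)
        for ind in feed
    )

# Flagged once by one company, blocked everywhere at the same time.
print(is_blocked("203.0.113.42"))  # True
```

One design choice this implies: a company never has to reveal why it flagged an indicator, only that it did, which keeps the sharing lightweight enough to survive three legal departments.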

This is asymmetric defense. The attackers have to probe each target separately; they cannot see across three companies at once. The defenders now can. That is the advantage.

You will also see upstream improvements. Rate limiting on API calls. Behavioral analysis on account patterns. Honeypot queries designed to identify distillation attacks and fingerprint them. The cost of stealing will go up while the signal quality goes down.
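A sketch of what the rate-limiting and behavioral-analysis layer could look like, with invented thresholds (real systems would tune these against observed traffic):

```python
# A sketch of one upstream defense: a sliding-window rate limiter per
# account plus a behavioral score. Window, ceiling, and threshold are
# invented for illustration, not taken from any company's disclosure.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_CALLS = 100            # hypothetical per-account ceiling
SUSPICION_THRESHOLD = 0.8  # hypothetical cutoff

call_log = defaultdict(deque)  # account_id -> recent call timestamps

def suspicion_score(account_id: str, prompt: str) -> float:
    """Stub for behavioral analysis. In practice this would be a trained
    classifier: templated prompts, metronomic request cadence, and
    accounts created in bulk all push the score toward 1.0."""
    return 0.0

def allow_request(account_id: str, prompt: str) -> bool:
    now = time.time()
    window = call_log[account_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()      # drop calls that fell out of the window
    if len(window) >= MAX_CALLS:
        return False          # hard rate limit
    window.append(now)
    return suspicion_score(account_id, prompt) < SUSPICION_THRESHOLD
```

The honeypot idea works on the output side of the same pipe: serve a suspected scraper a subtly watermarked answer, then look for that watermark in models later trained on the stolen data. That is, plausibly, what fingerprinting a distillation attack means in practice.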

The attacks will not stop. They will become more expensive, more sophisticated, and less effective. That is the goal. Not perfection. Degradation.

The Competitor Angle

Here is the thing that strikes me most about this announcement: Anthropic, OpenAI, and Google are competing harder against each other than they ever have. Opus 4.6 is better than Gemini 2.5. GPT-4o is more popular than both. The pricing wars are brutal. The talent wars are brutal. The feature wars are brutal.

Yet they coordinated on this. Publicly. They are willing to look weaker in the short term for mutual defense in the long term. That suggests they view the Chinese threat not as a market issue but as an existential one. If frontier models can be stolen through adversarial attack, then none of their competitive advantages matter.

That is what happens when the competitors realize they are on the same side against a different opponent.

Track frontier model releases and capabilities.

Explore our Models page to compare Claude, GPT, and Gemini on real benchmarks, pricing, and features. See the full model comparison, benchmark scores, and pricing guide.

About Ripper: Ripper covers AI security, model releases, and the business side of frontier AI at TensorFeed.ai. TensorFeed aggregates news from 15+ sources and is built for both humans and agents.