OpenAI Killed Sora. Here's What That Tells Us About AI Economics.

Marcus Chen · 5 min read

Sora is dead. OpenAI officially shut down its video generation product after less than a year of public availability. The numbers behind the decision are staggering: $15 million per day in compute costs. $2.1 million in total lifetime revenue. A collapsed partnership with Disney that was supposed to be the product's salvation.

This isn't just a story about one failed product. It's a warning sign for every company trying to build consumer-facing AI products on top of the most expensive compute in history.

The Numbers Don't Lie

Let me put the economics in perspective. $15 million per day in compute means OpenAI was spending roughly $625,000 per hour just to keep Sora running. Every hour. Around the clock. That's $450 million per month in infrastructure costs for a product that generated $2.1 million in total revenue across its entire lifespan.

I did the math on our cost calculator. Even at the most optimistic usage projections, Sora would have needed roughly 200x its actual user base to break even. Not to be profitable. Just to break even.
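The arithmetic is simple enough to sketch in a few lines. This uses only the figures cited in this article; everything else is derived:

```python
# Back-of-the-envelope Sora economics, using the figures
# cited in this article. Everything else is derived arithmetic.
daily_compute = 15_000_000      # $15M/day in compute
lifetime_revenue = 2_100_000    # $2.1M total lifetime revenue

hourly_compute = daily_compute / 24
monthly_compute = daily_compute * 30

print(f"Hourly compute:  ${hourly_compute:,.0f}")   # $625,000
print(f"Monthly compute: ${monthly_compute:,.0f}")  # $450,000,000

# Sora's entire lifetime revenue covered this many hours of compute:
hours_covered = lifetime_revenue / hourly_compute
print(f"Lifetime revenue bought {hours_covered:.2f} hours of uptime")
```

Put another way: everything Sora ever earned paid for less than four hours of keeping the lights on.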

The Disney deal was supposed to change the calculus. A major content studio licensing Sora for production work would have brought in enterprise revenue at a completely different scale. But after months of negotiation, Disney walked. The reports suggest it came down to consistency and controllability. You can't integrate a tool into a professional production pipeline if it produces different results every time you run the same prompt.

Video Is the Hardest Modality

Text generation is relatively cheap. You're producing tokens one at a time, and even a long response is a few thousand tokens. Image generation is more expensive, but you're producing a single frame. Video generation is in a completely different league. You're generating hundreds or thousands of coherent frames that need to maintain temporal consistency, physics, lighting, and character identity across the entire sequence.

The compute scales roughly with the square of the output duration. A 5-second clip might cost 10x what a single image costs. Under that quadratic scaling, a 30-second clip costs around 36x what the 5-second clip does. And users don't want 5-second clips. They want minutes of usable footage.

This is why I've been skeptical of AI video as a consumer product from the start. The gap between "impressive demo" and "commercially viable product" is measured in billions of dollars of compute infrastructure.

The Broader Pattern

Sora isn't an isolated case. Look at the trend across AI products in 2025 and 2026:

| Product | Monthly Compute | Monthly Revenue | Status |
| --- | --- | --- | --- |
| Sora | ~$450M | ~$200K | Shut down |
| ChatGPT | ~$80M | ~$300M | Profitable |
| DALL-E 3 | ~$15M | ~$8M | Subsidized |
| Claude API | Undisclosed | Growing fast | Scaling |

The products that are working financially are the text-based ones. ChatGPT and the Claude API generate enough revenue to cover their compute costs (or at least get close). Image generation tools mostly survive by being bundled into larger subscriptions. Video generation at scale simply cannot be made economical with current hardware.

What This Means for Developers

If you're building products that depend on AI video generation, you should be very careful about your assumptions. The APIs that exist today (Runway, Pika, the remaining competitors) are all burning cash to maintain their services. Any one of them could pull a Sora and shut down without much warning.

This isn't true for text and image generation. Those modalities have found sustainable economics, or are at least on a clear path to sustainability. The API pricing trends we track on TensorFeed show consistent price drops in text generation, driven by hardware improvements and competition. But video pricing has barely moved because the compute requirements are so extreme.

My advice: build on text APIs with confidence. Build on image APIs with reasonable caution. Build on video APIs only if you have a fallback plan for when the provider changes their pricing or shuts down entirely.
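That fallback plan doesn't have to be elaborate. The cheapest insurance is to never call a vendor's SDK directly, and instead route through a thin abstraction you control. Here's a minimal sketch; the stub providers are hypothetical placeholders, and a real integration (Runway, Pika, or whoever survives) would each need its own adapter behind this interface:

```python
# Sketch of a provider-agnostic video generation layer.
# The provider callables here are stubs, not real vendor APIs.
class VideoProviderError(Exception):
    pass

class VideoClient:
    def __init__(self, providers):
        # providers: callables taking a prompt and returning a
        # result URL, ordered by preference
        self.providers = providers

    def generate(self, prompt):
        errors = []
        for provider in self.providers:
            try:
                return provider(prompt)
            except VideoProviderError as e:
                errors.append(e)  # fall through to the next provider
        raise VideoProviderError(f"All providers failed: {errors}")

# Usage with stubs: the first provider has "pulled a Sora".
def dead_provider(prompt):
    raise VideoProviderError("service shut down")

def backup_provider(prompt):
    return f"https://example.com/video?q={prompt}"

client = VideoClient([dead_provider, backup_provider])
print(client.generate("a cat surfing"))
```

The point isn't the ten lines of code; it's that when a provider disappears, the blast radius is one adapter instead of your whole product.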

The Hardware Question

Everything I just described could change if compute gets dramatically cheaper. NVIDIA's next generation of GPUs and the emerging custom silicon from Google (TPU v6), Amazon (Trainium3), and others could shift the economics over the next two to three years.

But "could" is doing a lot of heavy lifting in that sentence. Even a 10x improvement in price/performance (which would be extraordinary) only brings Sora's compute cost down to $45 million per month. That's still wildly unprofitable at consumer price points.
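The hypothetical improvement factors make the point concrete. Using this article's estimates for Sora's monthly compute and revenue:

```python
monthly_compute = 450_000_000   # Sora's estimated monthly compute
monthly_revenue = 200_000       # rough monthly revenue (see table above)

# How the gap looks under hypothetical price/performance gains:
for improvement in (1, 10, 100):
    cost = monthly_compute / improvement
    print(f"{improvement:>3}x cheaper compute -> "
          f"${cost / 1e6:,.1f}M/month, "
          f"{cost / monthly_revenue:,.0f}x monthly revenue")
```

Even the fantasy 100x scenario leaves compute costs at more than 20x the revenue. Cheaper chips alone don't close this gap.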

The real path to viable AI video probably involves a different architecture entirely, not just cheaper hardware running the same approach. Techniques like speculative generation, cascaded models, and frame interpolation could reduce compute requirements by orders of magnitude. But those breakthroughs haven't happened yet.

The Takeaway

Sora's death is a reminder that impressive demos and viable products are two very different things. The AI industry has been running on demo energy for two years now, with investors and users both assuming that the economics would work themselves out eventually.

For text generation, that bet is paying off. For video generation, it's not. And the companies that can't tell the difference between those two realities are the ones that will burn through their funding fastest.

We're tracking all of this on our status dashboard and models hub. When the next big product shutdown happens (and it will), TensorFeed will have the story before most people check Twitter.