Cerebras Went Public at a $95 Billion Close. The Non-Nvidia Inference Bet Is Now a Market Story.
On May 14, 2026, a chip company that has never turned an operating profit opened its first day of trading at $350 a share. Cerebras priced its IPO the night before at $185, sold 30 million shares, raised roughly $5.5 billion, and closed the session up 68 percent at about $311, a market capitalization near $95 billion. According to reporting from CNBC and TechCrunch, it was the largest US tech-firm IPO since Uber in 2019. The non-Nvidia inference bet is no longer a thesis you argue in a Discord. It is a quote on a screen.
I have spent two years writing that the AI compute story would eventually split in two: training, where Nvidia is close to absolute, and inference, where the economics are different enough that a challenger could exist. The market just put a $95 billion number on the second half of that sentence. It also gave most of it back inside twenty-four hours. Both facts matter.
The Mechanics
The book ran hot. Cerebras lifted its range twice, from the low $100s to $150–$160, and still priced above it at $185 per share, per Bloomberg. The order book was reported as more than 20 times oversubscribed. The stock opened at $350 on the Nasdaq under the ticker CBRS, an implied fully diluted valuation north of $100 billion at the print, before settling to close day one up 68 percent.
Put the three numbers next to each other, because the gap between them is the story. Priced at $185, the offer valued the company near $56 billion fully diluted. Opened at $350. Closed near $95 billion. The underwriters left a lot of money on the table, the way a hot 2026 AI listing is supposed to, and the first tape confirmed the demand was real.
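Those three marks all hang off one number the article never states: the fully diluted share count. A quick back-of-the-envelope check, assuming a share count backed out from the $185 offer price and the ~$56 billion offer valuation (an inference from the figures above, not a disclosed number):

```python
# Back-of-the-envelope check on the disclosed IPO marks.
# Fully diluted share count is an ASSUMPTION, backed out from the
# $185 offer price and the ~$56B fully diluted offer valuation.
OFFER_PRICE = 185.0
OFFER_VALUATION_B = 56.0  # $ billions, per the reporting

shares_b = OFFER_VALUATION_B / OFFER_PRICE  # ~0.303 billion shares

def implied_cap_b(price: float) -> float:
    """Implied fully diluted valuation in $B at a given share price."""
    return price * shares_b

print(f"at the $350 open:  ~${implied_cap_b(350):.0f}B")  # ~$106B, "north of $100B"
print(f"at the $311 close: ~${implied_cap_b(311):.0f}B")  # ~$94B, "near $95B"
print(f"day-one pop: {311 / OFFER_PRICE - 1:.0%}")        # ~68%
```

The point of the check is that the open, the close, and the headline valuations are all internally consistent with one share count, which is why the gap between them reads as pure demand rather than a dilution artifact.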
Why $95 Billion for a Company That Loses Money
The S-1 is not a clean growth story. Cerebras reported $510 million in 2025 revenue, up 76 percent from $290 million in 2024. It also reported $237.8 million in GAAP net income, almost all of which was a one-time, non-cash $363.3 million gain from extinguishing a G42-related forward contract liability. Strip that out and the company posted a non-GAAP net loss of $75.7 million and an operating loss of $145.9 million.
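The headline figures reconcile with simple arithmetic. A sketch of that reconciliation, with one labeled assumption: the roughly $50 million residual between the ex-gain figure and the reported non-GAAP loss would come from other adjustments in the filing (stock-based compensation is the usual suspect, but the article does not itemize them):

```python
# Check the growth rate and strip out the one-time gain.
rev_2025, rev_2024 = 510.0, 290.0  # $M, as reported from the S-1
growth = rev_2025 / rev_2024 - 1
print(f"revenue growth: {growth:.0%}")  # ~76%

gaap_net_income = 237.8  # $M
one_time_gain = 363.3    # $M, non-cash gain on the G42 forward contract
ex_gain = gaap_net_income - one_time_gain
print(f"net income ex-gain: {ex_gain:.1f}")  # roughly -125.5

# The filing's reported non-GAAP loss of -$75.7M implies roughly $50M
# of further add-backs (ASSUMED to be items like stock-based comp).
non_gaap_loss = -75.7
print(f"implied other adjustments: {non_gaap_loss - ex_gain:.1f}")  # ~+49.8
```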
So what did the market buy at $95 billion? It bought a contracted forward curve. The filing discloses a $10 billion compute contract with OpenAI inside a broader Master Relationship Agreement worth more than $20 billion, covering 750 MW of inference capacity expandable toward 2 GW. That is the number the bulls underwrote: a named frontier-lab customer signing for power-plant-scale inference, not last year's revenue line.
This is the same pattern I traced when Anthropic committed $200 billion to Google TPUs and when Nvidia crossed $40 billion in AI equity bets. Compute is being priced off multi-year capacity commitments years before the capacity exists. We track that buildout on the AI infrastructure tracker, and Cerebras just became one of its more interesting line items.
The Asterisk Nobody Priced on Day One
Here is the line in the filing that did not make the day-one headlines. Eighty-six percent of Cerebras revenue comes from two UAE-based entities. In the first half of 2024, the Abu Dhabi conglomerate G42 alone was roughly 87 percent of revenue, with about $1.43 billion in long-term commitments. The OpenAI agreement diversifies that on a forward basis. It does not change what the historical revenue base actually is.
That concentration is not just a customer-risk bullet. It is a national-security question with a CFIUS history attached, and it is the reason this exact IPO got postponed in 2024. My colleague Kira Nolan has the full account of the G42 overhang and why it is now a structural tax on every AI-silicon listing. For the market read, the point is narrower: a $95 billion close implies the buyers discounted that risk to nearly zero on Thursday.
The Day-Two Reality Check
On Friday the stock fell about 10 percent, closing near $280. The reporting attributed the pullback to skepticism about how broad the wafer-scale market actually is. Analysts at DA Davidson called the product "niche-y." That word is doing a lot of work, and it is the right thing to interrogate.
The bull case is that Cerebras is the fastest inference hardware available and that token latency is becoming a first-class cost in agent workloads. The bear case is that one enormous die is a specialized tool, not a general accelerator, and that the addressable market outside a few frontier customers is unproven. That argument is a hardware argument, and Ripper takes it apart in what Cerebras actually sells and why it only matters for inference. The two-day round trip from $350 to $280 is the market trying to price exactly that question in real time.
What It Does to the Compute Capital Map
One blockbuster listing does not dent Nvidia. Nvidia reports earnings on May 20, and its training franchise is not in question. What changes is the financing narrative. For three years, every AI compute dollar has been underwritten as an Nvidia dollar or a hyperscaler-silicon dollar (TPU, Trainium, MI400, Maia). Cerebras just demonstrated that public markets will fund a fourth category at nearly 200 times trailing revenue if the inference story is credible enough.
That has a second-order effect worth watching. The reporting frames Cerebras as the first of a handful of AI-related IPOs expected in 2026. A successful print resets the private mark for every inference-silicon and AI-infrastructure company still on the sidelines. You can see how that capital is currently distributed on our funding portfolio tracker, and the day-two fade is the first data point on whether the window stays open at these multiples or only at lower ones.
The other thing a public Cerebras changes is the inference price floor. If wafer-scale throughput is real at scale, it puts downward pressure on cost-per-token for the latency-sensitive tier, which is exactly the tier the agent economy lives in. We track where that floor actually sits on the models and pricing tracker, and a credible non-GPU supplier in the mix is the kind of input that moves it.
Our Take
The print is real and the thesis it validates is real: inference is a distinct compute market, and the public markets will now fund a non-Nvidia name in it at scale. That is a genuine structural shift, and it is worth saying plainly rather than hedging.
But $95 billion on $510 million of revenue, a non-GAAP loss, and 86 percent customer concentration is a forward bet on one OpenAI contract converting and one architecture generalizing beyond a handful of buyers. The day-two 10 percent fade is not noise. It is the market starting to underwrite the asterisks it ignored on Thursday. I think the inference-silicon category is durable and the specific multiple is not. Those can both be true, and over the next two earnings cycles the gap between them is the entire trade.
We are adding Cerebras to ongoing coverage on /today and tracking the OpenAI capacity ramp against the disclosed schedule. The bet went public. Now it has to convert.
