
Apple Just Opened Siri to Claude and Gemini. ChatGPT's Exclusivity Is Dead.

Kira Nolan · 7 min read

Bloomberg reported Tuesday that Apple is finalizing a feature for iOS 27, iPadOS 27, and macOS 27 that will let users pick from a range of third-party AI providers to power Apple Intelligence features. By Wednesday morning, MacRumors, 9to5Mac, TechCrunch, and a stack of others had confirmed and detailed it. The branding is "Extensions." The mechanism is a Settings toggle. The result is the end of ChatGPT's one-year exclusive on Siri.

This is the biggest single distribution story in consumer AI since Apple Intelligence launched. A billion-device install base is about to become a model-agnostic surface, and every frontier lab now has a credible path onto the iPhone without negotiating a new deal with Apple.

What Extensions Actually Does

Today, when you ask Siri something it cannot handle, iOS 26 routes the request to ChatGPT after a confirmation prompt. That is the only third-party fallback, and it has been for almost a year, because the deal Apple announced at WWDC 2024 was exclusive.

In iOS 27, Apple replaces that single hardcoded fallback with a new Settings panel under Apple Intelligence called Extensions. Users select which provider should handle the generative tasks Apple delegates: Siri queries Apple cannot answer locally, Writing Tools requests for rewrites and summaries, and Image Playground generations. Claude, Gemini, and any other AI app that ships an Extension via its App Store binary will appear in the picker.

The reporting from MacRumors and 9to5Mac on May 5 says Apple has internally tested integrations with Google and Anthropic specifically. Bloomberg adds that Apple plans to assign each third-party model a distinct voice for spoken Siri responses, so the user can tell when a request has been handed off. That detail matters more than it sounds: it gives Apple legal and reputational distance from any output a third-party model produces while still letting the experience feel native.
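The handoff flow the reporting describes — a user-selected provider, an explicit confirmation, a distinct voice per model — can be sketched in miniature. Everything below is illustrative: the provider registry, voice identifiers, and function names are our assumptions, not Apple API surface.

```python
# Illustrative sketch only: how an OS-level "Extensions" handoff might behave.
# None of these names correspond to a real Apple API.
from dataclasses import dataclass

@dataclass
class Extension:
    name: str
    voice_id: str  # a distinct voice per provider, per Bloomberg's reporting

# Hypothetical registry reflecting the user's Settings > Apple Intelligence choice
EXTENSIONS = {
    "claude": Extension("Claude", voice_id="voice.anthropic.claude"),
    "gemini": Extension("Gemini", voice_id="voice.google.gemini"),
    "chatgpt": Extension("ChatGPT", voice_id="voice.openai.chatgpt"),
}

def handle_siri_fallback(query: str, selected: str, user_confirmed: bool) -> dict:
    """Route a query Siri cannot answer locally to the user-selected Extension."""
    if not user_confirmed:
        # Mirrors the confirmation prompt Apple already shows before a handoff.
        return {"handled": False, "reason": "user declined handoff"}
    ext = EXTENSIONS[selected]
    # The distinct voice_id is what signals to the user that a handoff occurred.
    return {"handled": True, "provider": ext.name, "voice": ext.voice_id}

print(handle_siri_fallback("plan a trip to Kyoto", "claude", True))
```

The point of the sketch is the shape, not the names: the provider is a value read from Settings, and the voice tag travels with the response so the OS, not the model, controls attribution.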

Apple plans to formally announce the system at WWDC on June 8, 2026, with the developer beta dropping the same day and a public release alongside the iPhone 18 launch in the fall.

The OpenAI Loss Is Real

OpenAI got the first year of Apple Intelligence to itself. That was a meaningful distribution moat. Hundreds of millions of iPhones quietly added a ChatGPT prompt to Siri and a ChatGPT-powered fallback to Writing Tools. Even users who never created an account were nudged into the brand on a daily basis.

That moat is gone. ChatGPT will stay in the picker, but it now competes with Claude, Gemini, and presumably Perplexity, Mistral, and whoever else ships an Extension. The Apple channel stops being a default and starts being a choice.

Two weeks ago, OpenAI shipped GPT-5.5 and doubled API pricing. The bet was that capability justified the premium. That bet just got harder to win on iPhone, where the competition is one Settings tap away and where Apple has every incentive to let users compare models head to head.

Anthropic and Google Get the Distribution Apple Built

Anthropic made Bloomberg's wires last week with a $900B valuation round and a JPMorgan-led financial-services launch. What it has lacked relative to OpenAI is consumer surface area. Claude has a strong web app and a credible mobile app, but no embedded distribution at iOS scale. Extensions changes that. If Anthropic ships a compliant Extension at WWDC, every iPhone user can pick Claude as their default Siri fallback.

Google has the same opportunity from a different starting position. Gemini already runs on Android and on the web. Gaining equal footing inside iOS closes out the last platform where Google's AI surface was meaningfully behind. Worth noting: Google is paying Apple a reported $20B per year for default search placement. Adding default-eligible AI placement on top of that is a much smaller marginal lift than building a new distribution channel from scratch.

Where The Models Actually Stack Up

Here is how the four most likely Extensions providers compare on capability and price as of this week. Users of the iOS picker will never see this table, but the developers building Extensions will absolutely route on it.

Provider     Flagship            Input ($/1M)   Output ($/1M)   Context
Anthropic    Claude Opus 4.7     $15.00         $75.00          1M
OpenAI       GPT-5.5             $10.00         $30.00          1M
Google       Gemini 3.1 Ultra    $2.50          $15.00          2M
Mistral      Medium 3.5          $1.50          $7.50           256K

Pricing matters here because Apple has historically not paid model providers itself. Reports from the original ChatGPT deal in 2024 indicated that Apple paid OpenAI nothing, and OpenAI paid Apple nothing in return: the value was the distribution. If Extensions works the same way, providers absorb the inference cost as the price of being on the iOS picker. Cheaper providers will price-pressure the more expensive ones.
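A rough sketch of what that absorbed cost looks like, using this week's list prices (GPT-5.5 input at the post-increase $10 rate). The per-request token counts and the 10M-handoffs-per-month volume are our assumptions for illustration, not reported figures.

```python
# Back-of-envelope math, not provider economics: what a provider would absorb
# per Siri handoff at list prices. Token counts and volume are assumptions.
PRICES = {  # (input $/1M tok, output $/1M tok)
    "Claude Opus 4.7":    (15.00, 75.00),
    "GPT-5.5":            (10.00, 30.00),  # post-increase input rate
    "Gemini 3.1 Ultra":   (2.50, 15.00),
    "Mistral Medium 3.5": (1.50, 7.50),
}

def cost_per_request(model: str, in_tok: int = 400, out_tok: int = 250) -> float:
    """Dollar cost of one handoff, assuming a short Siri-style exchange."""
    p_in, p_out = PRICES[model]
    return (in_tok * p_in + out_tok * p_out) / 1_000_000

for model in PRICES:
    per_req = cost_per_request(model)
    monthly = per_req * 10_000_000  # hypothetical 10M handoffs per month
    print(f"{model:20s} ${per_req:.5f}/req  ~${monthly:,.0f}/mo at 10M handoffs")
```

Even at these toy numbers the spread is stark: the cheapest provider on the table pays roughly a tenth of what the most expensive one does to serve the identical query volume, which is exactly the price pressure the paragraph above describes.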

You can model the cost of routing your own workloads through any of these models on our cost calculator, and check live pricing across the catalog on /models.

The Privacy Disclaimer Tells You What Apple Cares About

One of the more revealing details from the reporting: Apple plans to display a notice telling users that it is not responsible for content created by third-party models. That is a legal shield, but it is also a tell. Apple has spent two years marketing Apple Intelligence as the privacy-respecting AI, with on-device processing and Private Cloud Compute. It does not want the brand damage if Claude or Gemini hallucinates, refuses, or says something embarrassing through Siri's voice.

Hence the distinct voice per provider. Hence the disclosure. Hence the explicit confirmation prompts that Apple already uses today before handing off to ChatGPT. The Extensions framing makes this a feature: you, the user, picked it, and Apple is the neutral platform.

That framing is exactly what Apple needs to defend against EU and DOJ scrutiny too. The Digital Markets Act has been pushing Apple toward exactly this kind of choice screen for two years. By shipping the AI choice screen voluntarily and globally, Apple gets ahead of a regulator-mandated version that would have been worse for its margins.

What This Pressures Across The Stack

Three second-order effects worth tracking.

First, the model wars get a new arena where consumer perception starts to matter as much as benchmarks. A user trying Claude through Siri once a day for a week will form an opinion about whether Anthropic is "better" than OpenAI in a way that no SWE-bench score ever delivered. The leaderboards we run at /leaderboard are about to be joined by a much larger, much messier vibe-check leaderboard: real iPhone users, on real prompts.

Second, every other consumer AI surface gets a precedent it can point to. Microsoft has quietly been building Copilot toward a similar pattern in Windows. Samsung will face customer pressure to do the same on Galaxy devices. Once Apple normalizes user-selectable AI providers at the OS layer, holding ChatGPT or Gemini as the only option starts looking like a defect, not a default.

Third, Extensions makes the case for cheap, fast inference even stronger. If your provider is going to pay for the inference itself in exchange for distribution, you want the lowest cost per request that hits an acceptable quality bar. That is exactly the segment of the market where Mistral Medium 3.5 and DeepSeek V4 already win. Whether Apple lets non-US providers ship Extensions is a separate question, but the pricing pressure will exist regardless. We tracked the inference floor at $0.017 per million tokens this week in our inference floor analysis. That floor just got more relevant.
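The gap between that floor and a flagship is easy to put a number on. The 3:1 input-to-output blend below is our assumption for a typical short exchange, not a figure from the article.

```python
# Quick arithmetic on the floor figure cited above ($0.017 per 1M tokens)
# versus a flagship's blended rate. The 75/25 input/output mix is assumed.
FLOOR = 0.017  # $/1M tok, this week's tracked inference floor

def blended(p_in: float, p_out: float, in_share: float = 0.75) -> float:
    """Blended $/1M-token rate for a given input/output traffic mix."""
    return in_share * p_in + (1 - in_share) * p_out

opus_blended = blended(15.00, 75.00)  # Claude Opus 4.7 list prices
ratio = opus_blended / FLOOR
print(f"Flagship blended rate: ${opus_blended:.2f}/1M tok, ~{ratio:,.0f}x the floor")
```

A three-orders-of-magnitude spread between the floor and a flagship is why "acceptable quality at minimum cost" becomes the operative question once providers are footing their own inference bill.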

Our Take

Apple is doing the right thing strategically and the right thing for users. The original ChatGPT exclusive made sense in 2024 when Apple needed a single integration partner to ship Apple Intelligence on time. It made less sense in 2025 when Claude Opus and Gemini caught up. By 2026, with five frontier labs trading the top of every benchmark and with a billion iPhones in the wild, refusing to ship choice would be the actively user-hostile move.

The losers here are clear. OpenAI loses an exclusive distribution channel that helped define the brand outside of chat.openai.com. The winners are equally clear: every other frontier provider gets a billion-device beachhead they could not have negotiated on their own. The structural winner is Apple, which extracts the strategic value of being the neutral platform without paying a dime for the inference itself.

We will be tracking which providers actually ship Extensions on day one of iOS 27, what the App Store reviews look like in the first week, and whether any provider tries to pay for default placement. WWDC is June 8. Mark the date.