
AI Provider Uptime Leaderboard

Live ranking of every major AI provider by uptime over the last 7 days, computed from status polls taken every 2 minutes.

Window: 2026-04-29 to 2026-05-05 (7 days, 2-minute polls)
Top performer: Claude API (100% uptime)
Updated: 0s ago
Providers ranked: 20
Rank | Provider | Uptime
#1–#19: 100.00% each
#20: —

How we measure

Every 2 minutes a Cloudflare Worker fetches each monitored provider's status feed: Atlassian Statuspage v2 JSON for most vendors (Anthropic, OpenAI, GitHub, Replicate, Cohere, Groq), Instatus for Perplexity, Google Cloud's incidents.json filtered by Vertex product IDs for Gemini, AWS Health's currentevents.json filtered by service substring for Bedrock, and Microsoft's Azure status RSS filtered by keyword for Azure OpenAI. An HTML-parsing fallback covers Hugging Face and Mistral.
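For the Statuspage-backed vendors, classification can be sketched from the v2 status.json payload's status.indicator field. The indicator-to-bucket mapping below is an assumption about how those values translate into the four sample states, not this site's published logic:

```python
def classify_statuspage(payload: dict) -> str:
    """Map an Atlassian Statuspage v2 /api/v2/status.json payload to one
    of the four sample states. The mapping is an assumption, not this
    site's documented classification."""
    indicator = payload.get("status", {}).get("indicator")
    if indicator == "none":
        return "operational"
    if indicator == "minor":
        return "degraded"
    if indicator in ("major", "critical"):
        return "down"
    return "unknown"  # malformed payload or failed fetch

print(classify_statuspage({"status": {"indicator": "minor"}}))  # degraded
```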

Each poll's per-provider status (operational, degraded, down, or unknown) increments a per-day counter. Uptime % is (operational + 0.5 * degraded) / decisive * 100, where decisive excludes unknown samples so an outage on our side doesn't penalize the provider. Ties are broken by lower hard_down_minutes (down samples * 2 min), so a clean degraded period beats actual downs at the same headline %.
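Under those definitions, the score and tie-break reduce to a few lines. The function names and counter layout here are illustrative, not the production code:

```python
def uptime_pct(operational: int, degraded: int, down: int) -> float:
    """(operational + 0.5 * degraded) / decisive * 100, where decisive
    excludes unknown samples (so unknown never appears here)."""
    decisive = operational + degraded + down
    if decisive == 0:
        return 0.0  # no decisive samples yet; this handling is a guess
    return (operational + 0.5 * degraded) / decisive * 100.0

def rank_key(counts: dict) -> tuple:
    """Sort descending by uptime, then ascending by hard_down_minutes
    (down samples * 2 minutes per sample)."""
    pct = uptime_pct(counts["operational"], counts["degraded"], counts["down"])
    return (-pct, counts["down"] * 2)

# Same headline % (99.98), but a has a clean degraded period while b
# took a hard down sample, so a wins the tie.
a = {"operational": 5038, "degraded": 2, "down": 0}
b = {"operational": 5039, "degraded": 0, "down": 1}
print(sorted([b, a], key=rank_key)[0] is a)  # True
```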

Each provider links to its dedicated /is-X-down page with FAQ, real-time status, and per-component or per-region detail.

Want 90 days of history plus MTTR?

The free leaderboard above is capped at 7 days. The premium API endpoint extends to the full 90-day retention horizon and adds incident_count and mttr_minutes (mean time to recover from resolved incidents) per provider. Aimed at SRE, ops, and procurement teams comparing AI vendor reliability during selection or post-incident reviews.

Premium endpoint docs: GET /api/premium/status/leaderboard?from=&to= (1 credit)

Frequently Asked Questions

How is uptime calculated?

For every monitored provider we capture a status sample every 2 minutes (about 720 samples per day, 5,040 over a 7-day window). Uptime % is (operational_samples + 0.5 * degraded_samples) / decisive_samples * 100, where decisive_samples excludes unknown polls so a brief outage on our side does not penalize the provider. Ties at equal uptime go to the provider with fewer hard_down_minutes: a clean degraded period beats actual downs at the same headline %.
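The sample counts quoted above follow directly from the cadence:

```python
# One status sample every 2 minutes.
polls_per_day = 24 * 60 // 2   # 720 samples per day
polls_per_week = polls_per_day * 7
print(polls_per_day, polls_per_week)  # 720 5040
```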

Why does degraded count as half?

Degraded service is not the same as unavailable. Most degraded periods (elevated latency, rate-limit pressure, partial-region issues) still let some traffic succeed. Counting degraded as half operational gives a fair single-number ranking instead of treating all non-perfect time as equally bad.

Is this real-time?

The leaderboard refreshes every 2 minutes on the worker side and every 5 minutes on this page. The status data behind each rank is captured from each provider's public status feed (Atlassian Statuspage, Instatus, Google Cloud incidents.json, AWS Health, Azure RSS) at the same 2-minute cadence, so the leaderboard reflects actual 2-minute-resolution uptime, not a once-a-day snapshot.

Why is the data only 7 days?

The free leaderboard caps at 7 days. The premium API endpoint /api/premium/status/leaderboard extends to the full 90-day retention horizon and adds incident_count and mttr_minutes (mean time to recover) per provider. See tensorfeed.ai/developers/agent-payments for the paid tier.

What if a provider just got added?

New providers start with zero historical samples and accumulate from their addition date. Their uptime % is computed against the polls they were monitored for, not penalized for the days before they were added. So a provider added 3 days ago will rank against 3 days of data while one monitored for the full 7 ranks against 7.

Where do I get the raw data?

GET https://tensorfeed.ai/api/status/leaderboard?days=7 returns the same data this page renders, free, no auth required. Cached 5 minutes at the edge.
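A minimal client might look like the following. The network call is shown but commented out, and the response schema (a "providers" list with "name" and "uptime_pct" fields) is an assumption, since the page doesn't document the JSON shape:

```python
import json

# Live call (uncomment to hit the real endpoint):
# import urllib.request
# raw = urllib.request.urlopen(
#     "https://tensorfeed.ai/api/status/leaderboard?days=7").read()

# Stand-in payload; the "providers"/"name"/"uptime_pct" field names
# are assumptions about the undocumented schema.
raw = b'{"providers": [{"name": "Claude API", "uptime_pct": 100.0}, {"name": "Example", "uptime_pct": 99.93}]}'

data = json.loads(raw)
for p in sorted(data["providers"], key=lambda p: -p["uptime_pct"]):
    print(f'{p["name"]}: {p["uptime_pct"]:.2f}%')
```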