LIVE
OPUS 4.7: $15 / $75 per Mtok
SONNET 4.6: $3 / $15 per Mtok
GPT-5.5: $10 / $30 per Mtok
GEMINI 3.1: $3.50 / $10.50 per Mtok
SWE-BENCH leader: Claude Opus 4.7, 72.1%
MMLU-PRO leader: Opus 4.7, 88.4
VALS FINANCE leader: Opus 4.7, 64.4%
AFTA v1.0 whitepaper live at /whitepaper

Premium Status Leaderboard

1 credit
GET /api/premium/status/leaderboard

The /api/premium/status/leaderboard endpoint ranks AI providers by uptime percentage over a custom date range (up to 90 days). Each entry includes uptime_pct, polls, counts for the operational/degraded/down/unknown buckets, downtime_minutes, hard_down_minutes, incident_count, and mttr_minutes (mean time to recover, over resolved incidents). Results are sorted by uptime_pct descending, with hard_down_minutes ascending as the tie-breaker.
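The sort order can be sketched in a few lines of Python. Field names are taken from the response fields listed above; the server-side implementation is not public, so this is illustrative only:

```python
# Rank entries the way the endpoint describes: uptime_pct descending,
# then hard_down_minutes ascending as the tie-breaker.
entries = [
    {"provider": "a", "uptime_pct": 99.90, "hard_down_minutes": 30},
    {"provider": "b", "uptime_pct": 99.90, "hard_down_minutes": 5},
    {"provider": "c", "uptime_pct": 99.97, "hard_down_minutes": 0},
]
ranked = sorted(entries, key=lambda e: (-e["uptime_pct"], e["hard_down_minutes"]))
# "c" ranks first on uptime; "b" beats "a" on the hard-down tie-breaker.
```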

When to use this endpoint

Use this endpoint when SRE, ops, or procurement teams need to compare AI vendor reliability, whether for vendor selection or post-incident reviews. The mttr_minutes column captures recovery speed: a provider at 99.9% uptime with a 4-hour MTTR is materially worse than one at 99.9% with a 10-minute MTTR.

Parameters

Name    In      Type     Description
from*   query   string   Start date, YYYY-MM-DD (e.g. 2026-04-01)
to*     query   string   End date, YYYY-MM-DD; max 90-day range (e.g. 2026-05-01)

* required
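The 90-day cap can be checked client-side before spending a credit. This helper is a sketch of the documented constraint, not part of any SDK:

```python
from datetime import date

def validate_range(frm: str, to: str) -> int:
    """Return the span in days, or raise if it violates the 90-day limit."""
    days = (date.fromisoformat(to) - date.fromisoformat(frm)).days
    if days < 0 or days > 90:
        raise ValueError(f"range must be 0-90 days, got {days}")
    return days

print(validate_range("2026-04-01", "2026-05-01"))  # 30
```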

Example response

{
  "ok": true,
  "range": { "from": "2026-04-01", "to": "2026-05-01", "days": 30 },
  "leaderboard": [
    { "provider": "anthropic", "uptime_pct": 99.97, "polls": 8640, "downtime_minutes": 12, "hard_down_minutes": 0, "incident_count": 1, "mttr_minutes": 12 }
  ],
  "billing": { "credits_charged": 1, "credits_remaining": 49 }
}

Code samples

Python SDK

from tensorfeed import TensorFeed

tf = TensorFeed(token="tf_live_...")
# "from" is a Python keyword, so the query parameters are passed as an unpacked dict.
lb = tf._get("/premium/status/leaderboard", **{"from": "2026-04-01", "to": "2026-05-01"})
for p in lb["leaderboard"][:5]:
    print(f"{p['provider']}: {p['uptime_pct']}% (MTTR {p['mttr_minutes']} min, {p['incident_count']} incidents)")

TypeScript (fetch)

const res = await fetch(
  "https://tensorfeed.ai/api/premium/status/leaderboard?from=2026-04-01&to=2026-05-01",
  { headers: { Authorization: "Bearer tf_live_..." } }
);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const lb = await res.json();
console.log(lb.leaderboard.slice(0, 5));

FAQ

How is uptime measured?

We poll each provider's public status page roughly every 10 minutes. uptime_pct is the percentage of polls that return "operational." downtime_minutes converts runs of consecutive non-operational polls into wall-clock minutes, using the poll interval as the unit. All figures are best-effort: the upstream status pages themselves are the ground truth.
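Under that model, uptime_pct and downtime_minutes can be reconstructed from a poll history. This sketch assumes a flat 10-minute interval and invented poll data; it is not the service's actual aggregation code:

```python
POLL_INTERVAL_MIN = 10  # approximate gap between status-page polls

# One simulated day at 10-minute resolution: 144 polls, two of them degraded.
polls = ["operational"] * 140 + ["degraded"] * 2 + ["operational"] * 2

uptime_pct = 100 * polls.count("operational") / len(polls)
downtime_minutes = POLL_INTERVAL_MIN * sum(s != "operational" for s in polls)
# uptime_pct is about 98.61; downtime_minutes == 20
```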

What is hard_down_minutes vs downtime_minutes?

downtime_minutes counts any non-operational status (degraded, down, or unknown). hard_down_minutes counts only full "down" status, excluding degraded. A provider with significant degraded time but no full outages will therefore show low hard_down_minutes even when its total downtime_minutes is high.
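A small worked example of the distinction, with poll statuses invented for illustration:

```python
POLL_INTERVAL_MIN = 10
# 130 healthy polls, 12 degraded, 2 fully down.
polls = ["operational"] * 130 + ["degraded"] * 12 + ["down"] * 2

downtime_minutes = POLL_INTERVAL_MIN * sum(s != "operational" for s in polls)  # 140
hard_down_minutes = POLL_INTERVAL_MIN * sum(s == "down" for s in polls)        # 20
```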

What does mttr_minutes mean?

Mean Time To Recover, computed only over incidents that resolved within the date range. Incidents still open at the end of the range are excluded, so mttr_minutes is a statistic over finished incidents only.
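A sketch of that rule; the incident records and their field names are invented for illustration, since the real incident schema is not documented here:

```python
from datetime import datetime

incidents = [
    {"started": datetime(2026, 4, 3, 10, 0), "resolved": datetime(2026, 4, 3, 10, 12)},
    {"started": datetime(2026, 4, 20, 1, 0), "resolved": datetime(2026, 4, 20, 1, 48)},
    {"started": datetime(2026, 4, 30, 23, 0), "resolved": None},  # still open: excluded
]
durations = [
    (i["resolved"] - i["started"]).total_seconds() / 60
    for i in incidents
    if i["resolved"] is not None
]
mttr_minutes = sum(durations) / len(durations)  # (12 + 48) / 2 = 30.0
```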

Related endpoints