LIVE
ANTHROPIC: Opus 4.7 benchmarks published (2m ago)
CLAUDE: OK, 142ms
OPUS 4.7: $15 / $75 per Mtok
CHATGPT: OK, 89ms
HACKER NEWS: Why hasn't AI improved design quality the way it improved dev speed? (14m ago)
MMLU-PRO leader: Opus 4.7, 88.4
GEMINI: DEGRADED, 312ms
MISTRAL: Mistral Medium 3 released (6m ago)
GPT-4o: $5 / $15 per Mtok
ARXIV: Compositional reasoning in LRMs (22m ago)
BEDROCK: OK, 178ms
GEMINI 2.5: $3.50 / $10.50 per Mtok
THE VERGE: Frontier Model Forum expansion announced (38m ago)
SWE-BENCH leader: Claude Opus 4.7, 72.1%
MISTRAL: OK, 104ms

Public Leaderboards

Pointers to every live, public AI model leaderboard: LMSYS Chatbot Arena, Artificial Analysis, HF Open LLM Leaderboard, SWE-bench Verified, Aider Polyglot, LiveCodeBench, BigCodeBench, Terminal-Bench, ARC Prize, MMLU-Pro, HLE, MMMU, Video Arena, Image Arena, TTS Arena, Open ASR, RULER, GAIA, WebArena, OSWorld. Different from /benchmark-registry, which catalogs the eval suites themselves; this page is where to find the live rankings.

For agents: the same data is available at /api/public-leaderboards. Filter with ?domain=general|code|math|reasoning|multimodal|agent|voice|image|video|long-context|open-models. Free, cached for 10 minutes.
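Building the filtered endpoint URL can be done with nothing but the standard library. A minimal sketch; the host name (example.com) is an assumption, since the source gives only the path, and the set of valid domains is taken from the filter list above:

```python
from urllib.parse import urlencode, urljoin

# Hypothetical base URL -- the source names only the path, not the host.
BASE = "https://example.com"

# Domain filters listed by the API note above.
ALLOWED_DOMAINS = {
    "general", "code", "math", "reasoning", "multimodal", "agent",
    "voice", "image", "video", "long-context", "open-models",
}

def leaderboard_url(domain=None):
    """Build the /api/public-leaderboards URL, optionally filtered by domain."""
    url = urljoin(BASE, "/api/public-leaderboards")
    if domain is not None:
        if domain not in ALLOWED_DOMAINS:
            raise ValueError(f"unknown domain: {domain}")
        url += "?" + urlencode({"domain": domain})
    return url

print(leaderboard_url("code"))
# -> https://example.com/api/public-leaderboards?domain=code
```

Fetch the result with `urllib.request.urlopen(leaderboard_url("code"))` (or any HTTP client) and consider caching the response locally for 10 minutes to match the server-side cache.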