LIVE
Opus 4.7: $15 / $75 per Mtok
Sonnet 4.6: $3 / $15 per Mtok
GPT-5.5: $10 / $30 per Mtok
Gemini 3.1: $3.50 / $10.50 per Mtok
SWE-bench leader: Claude Opus 4.7, 72.1%
MMLU-Pro leader: Opus 4.7, 88.4
Vals Finance leader: Opus 4.7, 64.4%
AFTA v1.0 whitepaper live at /whitepaper

AI Benchmarks

Compare leading AI models across standardized benchmarks. Last updated 2026-05-10.

How do you know if Claude is smarter than GPT-4? How does the new Llama 4 stack up against Gemini 2.5? Benchmarks provide the answer. These standardized tests measure specific AI capabilities across diverse domains and let us compare models objectively. They're imperfect (benchmarks are often gamed), but they're the only shared language we have for understanding AI progress.

MMLU measures broad knowledge with multiple-choice questions spanning chemistry, history, law, and 50+ other domains. A score of 92 percent means the model answers 92 of 100 randomly sampled questions correctly across all topics. MMLU is the closest thing we have to a general intelligence test for AI. HumanEval tests code generation: the model writes functions to solve programming problems that humans created. GPQA (Graduate-Level Google-Proof Q&A) is deliberately hard, asking obscure questions that require deep expertise and resist a quick web search. MATH benchmarks raw mathematical reasoning. SWE-bench tests software engineering: given a failing test and a codebase, can the model write a patch that makes the test pass?
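As a concrete illustration of how a multiple-choice benchmark like MMLU is scored, here is a minimal sketch. The questions and answers are made up; real evaluation harnesses also handle prompt formatting and answer extraction, which this omits:

```python
# Minimal sketch of multiple-choice benchmark scoring (MMLU-style).
# Each item pairs the model's predicted answer letter with the gold letter;
# the score is simply the fraction answered correctly, as a percentage.

def score_multiple_choice(predictions, gold):
    """Return accuracy as a percentage, e.g. 92.0 means 92/100 correct."""
    if len(predictions) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Hypothetical 5-question run: the model gets 4 of 5 right.
preds = ["A", "C", "B", "D", "A"]
answers = ["A", "C", "B", "D", "B"]
print(score_multiple_choice(preds, answers))  # 80.0
```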

No single benchmark captures everything. A model that excels at MMLU might struggle with code. Benchmark questions have also leaked into training data, inflating scores. And real-world performance depends on your specific task, how you prompt, and how you integrate the model into your system. Use this data to narrow the field of candidates, then test the finalists on your actual workloads. We've also collected this data in our model comparison tool for side-by-side analysis.
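The "narrow the field, then test" approach can be sketched as a filter over per-benchmark scores. The model names, scores, and thresholds below are placeholders for illustration, not figures from this page's leaderboard:

```python
# Sketch: narrow the candidate pool by requiring minimum scores on the
# benchmarks that matter for your workload, then shortlist the survivors
# for hands-on testing on real tasks. All numbers are illustrative.

candidates = [
    {"model": "Model A", "mmlu_pro": 94.2, "swe_bench": 72.1},
    {"model": "Model B", "mmlu_pro": 88.7, "swe_bench": 65.0},
    {"model": "Model C", "mmlu_pro": 82.1, "swe_bench": 41.3},
]

# Cutoffs chosen to reflect what your workload actually needs.
thresholds = {"mmlu_pro": 85.0, "swe_bench": 60.0}

shortlist = [
    c["model"]
    for c in candidates
    if all(c[bench] >= cutoff for bench, cutoff in thresholds.items())
]
print(shortlist)  # ['Model A', 'Model B']
```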

MMLU-Pro: General knowledge and reasoning across 57 subjects. Max score: 100.

Rank  Model              Provider   Score        Released
#1    GPT-5.5            OpenAI     94.2 / 100   2026-04
#2    Claude Opus 4.7    Anthropic  93.8 / 100   2026-04
#3    Claude Opus 4.6    Anthropic  92.4 / 100   2026-03
#4    o1                 OpenAI     91.8 / 100   2025-09
#5    DeepSeek V4 Pro    DeepSeek   91.5 / 100   2026-04
#6    Gemini 2.5 Pro     Google     91.2 / 100   2026-01
#7    GPT-4.5            OpenAI     90.1 / 100   2025-12
#8    Llama 4 Maverick   Meta       89.3 / 100   2026-03
#9    Claude Sonnet 4.6  Anthropic  88.7 / 100   2026-02
#10   DeepSeek V3        DeepSeek   88.1 / 100   2025-12
#11   GPT-4o             OpenAI     87.2 / 100   2025-05
#12   Mistral Large      Mistral    86.8 / 100   2025-11
#13   o3-mini            OpenAI     86.3 / 100   2025-11
#14   Llama 4 Scout      Meta       85.9 / 100   2026-02
#15   DeepSeek V4 Flash  DeepSeek   85.2 / 100   2026-04
#16   Gemini 2.0 Flash   Google     84.5 / 100   2025-10
#17   Claude Haiku 4.5   Anthropic  82.1 / 100   2026-01
#18   Mistral Small      Mistral    78.4 / 100   2025-09
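For side-by-side analysis, leaderboard rows like the ones above can be treated as structured records. This sketch carries the top five MMLU-Pro entries from the table and picks the leader programmatically:

```python
# Top five MMLU-Pro rows from the table above, as structured records.
leaderboard = [
    {"rank": 1, "model": "GPT-5.5", "provider": "OpenAI", "score": 94.2},
    {"rank": 2, "model": "Claude Opus 4.7", "provider": "Anthropic", "score": 93.8},
    {"rank": 3, "model": "Claude Opus 4.6", "provider": "Anthropic", "score": 92.4},
    {"rank": 4, "model": "o1", "provider": "OpenAI", "score": 91.8},
    {"rank": 5, "model": "DeepSeek V4 Pro", "provider": "DeepSeek", "score": 91.5},
]

# The leader is simply the highest-scoring entry.
leader = max(leaderboard, key=lambda row: row["score"])
print(leader["model"], leader["score"])  # GPT-5.5 94.2

# Sanity check: ranks should follow descending score order.
scores = [row["score"] for row in leaderboard]
assert scores == sorted(scores, reverse=True)
```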