LIVE
ANTHROPIC: Opus 4.7 benchmarks published (2m ago)
CLAUDE: OK (142ms)
OPUS 4.7: $15 / $75 per Mtok
CHATGPT: OK (89ms)
HACKERNEWS: Why hasn't AI improved design quality the way it improved dev speed? (14m ago)
MMLU-PRO: leader Opus 4.7, 88.4
GEMINI: DEGRADED (312ms)
MISTRAL: Mistral Medium 3 released (6m ago)
GPT-4o: $5 / $15 per Mtok
ARXIV: Compositional reasoning in LRMs (22m ago)
BEDROCK: OK (178ms)
GEMINI 2.5: $3.50 / $10.50 per Mtok
THE VERGE: Frontier Model Forum expansion announced (38m ago)
SWE-BENCH: leader Claude Opus 4.7, 72.1%
MISTRAL: OK (104ms)

AI Attention Index

Live attention score per AI provider, derived from news volume, GitHub trending, and agent traffic on TensorFeed. Higher score means more mentions, more trending repos, more inbound agent traffic. The signal beneath the noise.

How is this computed?

We sum four weighted signals per provider, then normalize so that the highest-scoring provider in the response is 100:

  • news_24h * 4.0 — articles mentioning the provider in the last 24 hours
  • news_7d * 1.0 — articles in the last 7 days
  • trending_repos * 2.0 — currently trending GitHub repos matching the provider
  • agent_hits * 0.05 — bot/agent hits to provider-related TensorFeed endpoints
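The weighted sum and normalization above can be sketched in a few lines. The weights are the ones listed; the provider figures in the test are illustrative, not real TensorFeed data:

```python
# Weights as documented: news_24h * 4.0, news_7d * 1.0,
# trending_repos * 2.0, agent_hits * 0.05.
WEIGHTS = {"news_24h": 4.0, "news_7d": 1.0, "trending_repos": 2.0, "agent_hits": 0.05}

def attention_scores(providers: dict[str, dict[str, float]]) -> dict[str, float]:
    """Sum the four weighted signals per provider, then scale so the top provider is 100."""
    raw = {
        name: sum(WEIGHTS[k] * signals.get(k, 0) for k in WEIGHTS)
        for name, signals in providers.items()
    }
    top = max(raw.values()) or 1.0  # guard against all-zero signals
    return {name: round(100 * score / top, 1) for name, score in raw.items()}
```

Because normalization is relative to the top provider in the same response, a score of 100 means "most attention right now", not an absolute quantity.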

We do not persist the score. It is recomputed on demand from the free endpoints (/api/news, /api/trending-repos, /api/agents/activity) and cached for 5 minutes. The same data is served as JSON at /api/attention.
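The recompute-on-demand plus 5-minute cache is a simple TTL pattern. A minimal sketch, where `fetch_signals` is a hypothetical stand-in for the calls to the free endpoints:

```python
import time

CACHE_TTL = 300  # 5 minutes, as documented
_cache: dict = {"at": 0.0, "payload": None}

def get_attention(fetch_signals, now=time.monotonic):
    """Return the cached payload, recomputing only when the 5-minute TTL has expired."""
    if _cache["payload"] is None or now() - _cache["at"] > CACHE_TTL:
        _cache["payload"] = fetch_signals()  # hit the live endpoints
        _cache["at"] = now()
    return _cache["payload"]
```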


For agents: the same payload is at /api/attention. Free, no auth, cached 5 minutes. Includes raw signal counts so you can apply your own weighting.
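Since the payload includes raw signal counts, an agent can ignore our weights entirely. A sketch of re-weighting client-side; the response shape assumed here (a list of objects with "provider" and "signals" keys) and the host name are illustrative, so check the live payload:

```python
import json
from urllib.request import urlopen

def custom_scores(payload: list[dict], weights: dict[str, float]) -> dict[str, float]:
    """Apply caller-chosen weights to the raw signal counts in the payload."""
    return {
        p["provider"]: sum(weights.get(k, 0.0) * v for k, v in p["signals"].items())
        for p in payload
    }

# Example against a hypothetical host:
# payload = json.load(urlopen("https://tensorfeed.example/api/attention"))
# custom_scores(payload, {"news_24h": 1.0, "trending_repos": 5.0})
```

Signals with no entry in your weights dict contribute zero, so you can score on a single signal (say, trending repos only) without touching the rest.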