Research agents

Search a deep AI news corpus, get a full provider profile in one call, fire a morning brief at boot. The endpoints a research agent calls when it needs to know what changed in AI overnight.

The shape of a research agent's day

Most research agents have a similar loop: wake up on a schedule, figure out what is new since last run, dive deep on one or two topics, write a synthesis. TensorFeed slots into the "what is new" and "dive deep" steps. Three endpoints cover most of the work.
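That loop can be sketched end to end. A minimal skeleton, assuming the response shapes described on this page and using a fake client so it runs offline (`FakeTensorFeed` and `run_daily` are illustrative names, not part of the SDK):

```python
# Hypothetical offline stand-in for the TensorFeed client, so the loop
# shape is runnable without credentials; the payload fields mirror the
# ones documented on this page.
class FakeTensorFeed:
    def whats_new(self, days, news_limit):
        return {
            "pricing": {"changes": [], "new_models": []},
            "status": {"incidents": []},
            "news": [{"title": "Mistral Medium 3 released", "provider": "mistral"}],
        }

    def provider_deepdive(self, provider):
        return {"recent_news": [{"title": "Mistral Medium 3 released"}]}


def run_daily(tf):
    brief = tf.whats_new(days=1, news_limit=10)              # step 1: what changed overnight
    topics = {item["provider"] for item in brief["news"]}    # step 2: pick dive targets
    findings = {p: tf.provider_deepdive(p) for p in topics}  # step 2: dig deeper on each
    return brief, findings                                   # step 3 (synthesis) is the agent's job


brief, findings = run_daily(FakeTensorFeed())
```

Swapping `FakeTensorFeed` for the real client leaves the loop unchanged; only the payloads get bigger.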

Step 1: Boot up with the morning brief

One paid call returns a curated 24-hour window:

from tensorfeed import TensorFeed

tf = TensorFeed(token="tf_live_...")

brief = tf.whats_new(days=1, news_limit=10)
# brief["pricing"]["changes"]   - which models had price changes
# brief["pricing"]["new_models"] - any new models launched in the period
# brief["status"]["incidents"]   - any provider outages
# brief["news"]                  - top 10 headlines, newest first

Most days the answer is "nothing dramatic happened." The brief surfaces that cleanly so the agent can skip ahead. On busy days, it is the difference between a one-line decision and a fifteen-call reconciliation.
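The quiet-day check can be a few lines over the brief's fields (the sample payload is illustrative; the field names come from the comments above):

```python
def is_quiet_day(brief):
    """True when the 24h window has no price changes, launches, or incidents."""
    return not (
        brief["pricing"]["changes"]
        or brief["pricing"]["new_models"]
        or brief["status"]["incidents"]
    )


quiet = {
    "pricing": {"changes": [], "new_models": []},
    "status": {"incidents": []},
    "news": [{"title": "Compositional reasoning in LRMs"}],
}
print(is_quiet_day(quiet))  # True: skip straight to the synthesis
```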

Step 2: Dive on a specific topic

Two patterns for "dig deeper" research:

By topic / keyword:

# What did Anthropic publish in March?
results = tf.news_search(
    q="Anthropic Claude",
    from_date="2026-03-01",
    to_date="2026-03-31",
    provider="anthropic",
    limit=25,
)
for r in results["results"]:
    print(f"{r['published_at']}: {r['title']}")

By provider (one-call deep-dive):

# Everything about a provider in one paid call
profile = tf.provider_deepdive("anthropic")
# profile["status"]            - live status + components
# profile["models"]            - sorted flagship-first, with benchmark scores joined
# profile["recent_news"]       - top 8 mentions
# profile["agent_traffic_24h"] - hits attributed to Anthropic bots

Step 3: Compare across providers

When the synthesis needs a side-by-side, use the comparison endpoint:

compare = tf.compare_models(ids=[
    "Claude Opus 4.7",
    "GPT-5.5",
    "Gemini 2.5 Pro",
    "DeepSeek V4 Pro",
])
# compare["models"]                          - per-model rows with normalized benchmarks
# compare["rankings"]["cheapest_blended"]    - sorted by blended price
# compare["rankings"]["by_benchmark"]        - per-benchmark leaderboards

Benchmarks are normalized to the union of keys across the compared models, with null for missing scores, so the agent's downstream code never trips on a missing field.
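Consuming those normalized rows is then straightforward: every model carries the same benchmark keys, and missing scores are None rather than absent. A sketch of a per-benchmark leaderboard that skips the nulls (the benchmark key names and row shape here are assumptions, not the documented schema):

```python
rows = [
    {"id": "Claude Opus 4.7", "benchmarks": {"swe_bench": 72.1, "mmlu_pro": 88.4}},
    {"id": "DeepSeek V4 Pro", "benchmarks": {"swe_bench": None, "mmlu_pro": None}},
]


def leaderboard(rows, benchmark):
    # Missing scores are None, never a missing key, so this filter is safe.
    scored = [r for r in rows if r["benchmarks"][benchmark] is not None]
    return sorted(scored, key=lambda r: r["benchmarks"][benchmark], reverse=True)


print([r["id"] for r in leaderboard(rows, "swe_bench")])  # ['Claude Opus 4.7']
```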

Step 4: Write the synthesis

That part is your agent's job. TensorFeed gives you the inputs, you decide what to do with them. If the agent runs inside Claude Desktop, the same MCP tools are available natively: "Get the morning brief, then deep-dive on Anthropic, then write a 200-word summary suitable for a daily Slack channel."
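As a concrete starting point, the synthesis step can be a plain function over the payloads from steps 1 and 2 (the formatting below is a sketch; only the field names come from this page):

```python
def daily_summary(brief, profile, provider="anthropic"):
    """Assemble a short plain-text brief from a whats_new + deepdive pair."""
    lines = [
        f"Incidents in the last 24h: {len(brief['status']['incidents'])}",
        f"New models: {len(brief['pricing']['new_models'])}",
    ]
    lines += [f"- {item['title']}" for item in brief["news"][:3]]  # top headlines
    lines.append(f"{provider}: {len(profile['recent_news'])} recent mentions")
    return "\n".join(lines)


brief = {
    "status": {"incidents": []},
    "pricing": {"new_models": []},
    "news": [{"title": "Opus 4.7 benchmarks published"}],
}
profile = {"recent_news": [{"title": "Opus 4.7 benchmarks published"}]}
print(daily_summary(brief, profile))
```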

Free vs paid

Research agents that want to keep credit usage low can do most of the work on the free tier.

A research agent running once daily can do its work for under a penny per session on the paid tier; on the free tier, the same run is a full minute of fan-out calls. Pick based on how much your agent cares about latency.

Recommended TensorFeed endpoints (in priority order)