LIVE
OPUS 4.7: $15 / $75 per Mtok
SONNET 4.6: $3 / $15 per Mtok
GPT-5.5: $10 / $30 per Mtok
GEMINI 3.1: $3.50 / $10.50 per Mtok
SWE-BENCH leader: Claude Opus 4.7, 72.1%
MMLU-PRO leader: Opus 4.7, 88.4
VALS FINANCE leader: Opus 4.7, 64.4%
AFTA v1.0 whitepaper live at /whitepaper

The Verified Feed

Cross-source story corroboration for AI agents. Embedding-based clustering across 12+ AI-relevant news sources. Every cluster carries a corroboration_band tag and an explicit source_count.

The shape of the problem: most AI-safety discourse in 2026 obsesses over hallucinations. The real failure mode of the autonomous economy is uglier and underappreciated: agents acting on a single source. When a finance agent reads a fabricated news headline and executes a trade, the model did not hallucinate. It read the source faithfully. The source was wrong. The agent had no way to know. Verification across multiple independent sources is the fix, and it requires a cross-source view at scale.

How it works

1. Hourly multi-source ingestion

TensorFeed polls 12 AI-relevant news sources every hour and persists the deduped article archive plus per-source health counters. Sources currently include: Anthropic Blog, OpenAI Blog, Google AI Blog, Meta AI, HuggingFace, Hacker News (AI-filtered), TechCrunch AI, The Verge AI, Ars Technica, VentureBeat AI, NVIDIA AI, ZDNet AI.
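The dedup step can be sketched as a normalized-URL check. This is an illustrative sketch, not TensorFeed's actual implementation; the helper names and the choice to drop query strings are assumptions:

```python
from urllib.parse import urlsplit

def normalize_url(url: str) -> str:
    """Drop query string, fragment, and trailing slash so syntactic
    variants of the same article URL collapse to one key."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/")
    return f"{parts.scheme}://{parts.netloc.lower()}{path}"

def dedupe(articles: list[dict], seen: set[str]) -> list[dict]:
    """Keep only articles whose normalized URL has not appeared
    in a previous hourly poll; `seen` persists across polls."""
    fresh = []
    for article in articles:
        key = normalize_url(article["url"])
        if key not in seen:
            seen.add(key)
            fresh.append(article)
    return fresh
```

Note this is exactly the kind of dedup that, on its own, misses cross-source corroboration: two outlets covering the same event never share a URL, which is why the embedding pass below exists.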

2. Nightly embedding pass

Every UTC night at 07:30 the cluster cron embeds yesterday's articles via Cloudflare Workers AI on the @cf/baai/bge-base-en-v1.5 model. 768-dim float32 vectors per article, batched at 50 per call. Stored under news:embeddings:{date} with a 30-day TTL.
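The batching itself is plain chunking. In the sketch below, `embed_batch` is a stand-in for the Workers AI request (its real client API is not shown here), and the title + snippet concatenation follows the description in the FAQ:

```python
def chunks(items: list, size: int = 50):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_articles(articles: list[dict], embed_batch) -> list:
    """Embed title + snippet for each article, 50 texts per call.
    `embed_batch` is a placeholder for the Workers AI call that
    returns one 768-dim vector per input text."""
    vectors = []
    for batch in chunks(articles, 50):
        texts = [f"{a['title']} {a.get('snippet', '')}" for a in batch]
        vectors.extend(embed_batch(texts))
    return vectors
```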

3. Single-link cosine clustering at threshold 0.82

Articles are grouped by cosine similarity at threshold 0.82. URL dedup misses 90% of real-world corroboration; semantic embeddings catch rephrasings across newsrooms. Threshold 0.82 is the empirical sweet spot: tighter splits rephrasings apart, looser collapses unrelated stories that share boilerplate.
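Single-link grouping at a fixed threshold can be sketched as union-find over pairwise cosine similarity. A minimal sketch (the real cron's data layout may differ):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def single_link_clusters(vectors: list[list[float]], threshold: float = 0.82):
    """Single-link grouping: any pair at or above the threshold
    merges their clusters (union-find with path compression)."""
    parent = list(range(len(vectors)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            if cosine(vectors[i], vectors[j]) >= threshold:
                parent[find(i)] = find(j)

    groups: dict[int, list[int]] = {}
    for i in range(len(vectors)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Single-link is deliberately greedy: one strong pairwise match is enough to chain articles into a story, which suits corroboration counting but is why the threshold has to stay tight.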

4. Corroboration band tagged on every cluster

Each cluster carries: source_count, sources (list of contributing publishers), article_ids, hero article (earliest publishedAt), and a corroboration_band tag: single (1 source), limited (2-3), broad (4+).
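The per-cluster fields above can be derived in a few lines. Field names follow the sample cluster shape shown later on this page; the function name is hypothetical:

```python
def summarize_cluster(cluster_id: str, date: str, articles: list[dict]) -> dict:
    """Derive source_count, corroboration_band, and the hero article
    (earliest publishedAt) for one cluster of article dicts."""
    sources = sorted({a["source"] for a in articles})
    n = len(sources)
    band = "single" if n == 1 else "limited" if n <= 3 else "broad"
    # ISO-8601 UTC timestamps compare correctly as strings.
    hero = min(articles, key=lambda a: a["publishedAt"])
    return {
        "cluster_id": cluster_id,
        "date": date,
        "article_count": len(articles),
        "source_count": n,
        "sources": sources,
        "article_ids": [a["id"] for a in articles],
        "hero": hero,
        "first_seen_at": hero["publishedAt"],
        "corroboration_band": band,
    }
```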

Endpoints

GET /api/history/news/clusters (free)

Story clusters for a single UTC date. Top 25 clusters returned with optional ?min_sources= filter.

curl 'https://tensorfeed.ai/api/history/news/clusters?date=2026-05-09&min_sources=2'
GET /api/premium/history/news/verified ($0.02 USDC)

The verified feed. Filtered to clusters with N+ independent sources (default min_sources=4). Single-date or 30-day range. Agents asking "do not act on a single source" get a clean stream of stories that cleared the threshold.

curl -H 'Authorization: Bearer tf_live_...' \
  'https://tensorfeed.ai/api/premium/history/news/verified?date=2026-05-09&min_sources=4'
GET /api/premium/history/news/clusters/full ($0.02 USDC)

Full untruncated cluster set. Single-date or 30-day range. Removes the 25-cluster cap on the free endpoint.

curl -H 'Authorization: Bearer tf_live_...' \
  'https://tensorfeed.ai/api/premium/history/news/clusters/full?from=2026-05-01&to=2026-05-09'
GET /api/history/news/clusters/dates (free)

Index of UTC dates with cluster data captured. Pair with the lookup endpoints to page the archive backward from today.

Sample cluster shape

{
  "cluster_id": "k3mn8q",
  "date": "2026-05-09",
  "article_count": 6,
  "source_count": 5,
  "sources": [
    "anthropic.com",
    "techcrunch.com",
    "theverge.com",
    "reuters.com",
    "bloomberg.com"
  ],
  "article_ids": ["a1", "a2", "a3", "a4", "a5", "a6"],
  "hero": {
    "id": "a1",
    "title": "Anthropic Ships Mythos to Defenders First",
    "url": "https://www.anthropic.com/news/mythos",
    "source": "Anthropic Blog",
    "publishedAt": "2026-05-07T18:30:00Z"
  },
  "first_seen_at": "2026-05-07T18:30:00Z",
  "corroboration_band": "broad"
}
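Given that shape, a consumer can sanity-check a cluster before acting on it. The guard below is a hypothetical client-side check, not part of the API:

```python
# Band boundaries as documented: single (1), limited (2-3), broad (4+).
BANDS = {"single": (1, 1), "limited": (2, 3), "broad": (4, float("inf"))}

def band_is_consistent(cluster: dict) -> bool:
    """Verify corroboration_band matches source_count, and that the
    sources and article_ids lists agree with the declared counts."""
    lo, hi = BANDS[cluster["corroboration_band"]]
    return (
        lo <= cluster["source_count"] <= hi
        and len(cluster["sources"]) == cluster["source_count"]
        and len(cluster["article_ids"]) == cluster["article_count"]
    )
```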

What the verified feed is NOT

  • Not a fact-check. We verify multiple sources reported the same story, not that the underlying claim is true. Five outlets repeating a misleading press release will all cluster together and get a broad-corroboration tag.
  • Not real-time. Clusters are computed end-of-UTC-day. Stories breaking in the last hour have not had time for other sources to react.
  • Not a substitute for editorial judgment. A broadly corroborated story with a misleading angle is still misleading. Agents should treat corroboration as a necessary but not sufficient signal.

FAQ

What is the Verified Feed?
A story-level news feed where each entry is a cluster of articles about the same event, grouped via embedding-based similarity across the 12+ AI-relevant sources TensorFeed polls hourly. Each cluster carries a source_count (how many independent sources reported the same story) and a corroboration_band tag (single, limited 2-3, broad 4+). Free tier returns single-day cluster lookups capped at 25 clusters; the premium /api/premium/history/news/verified endpoint returns the untruncated feed of stories that cleared a trust threshold.
Why does this matter? Hallucinations are the AI safety problem.
Hallucinations are bounded. Modern frontier models hallucinate at single-digit rates on well-grounded queries and the rate is improving steadily. The actual production failure mode of the autonomous economy is uglier and underappreciated: agents acting on a single source. When a finance agent reads a fabricated news headline and executes a trade, the model did not hallucinate. The model read the source faithfully. The source was wrong. The agent had no way to know. Verification across multiple independent sources is the fix.
How does the clustering work?
Every UTC night at 07:30, the daily cluster cron embeds yesterday's news (article title + snippet) via Cloudflare Workers AI on the @cf/baai/bge-base-en-v1.5 model. Articles are clustered by cosine similarity at threshold 0.82 using single-link grouping. URL deduplication misses 90% of cross-source corroboration because Reuters and Bloomberg and Anthropic's own blog all have different URLs even when they're reporting the same event; embedding-based clustering catches the rephrasing. Threshold 0.82 sits in the empirical sweet spot between false positives (too low; unrelated stories from the same newsroom collide on shared boilerplate) and false negatives (too high; rephrasings get split apart).
What is "verified across N sources"?
A boolean filter on the cluster output. Default min_sources=4 returns the corroboration_band="broad" subset: stories that 4+ independent sources reported. Agents asking "do not act on a single source" get a clean stream of stories that cleared the threshold. The endpoint accepts ?min_sources=2 through 50 if you want a different cutoff. This is the trust layer for agents downstream of TensorFeed news.
What is this NOT?
It is not a fact-check. We do not validate the underlying claim, only that multiple independent sources reported the same story. When five sources all repeat a press release verbatim, the verified feed will tag the story as broadly corroborated even if the press release itself is misleading. Adding a fact-check layer is a separate product on a different input pipeline. It is also not a real-time signal. Stories that break inside the last hour have not had time for other sources to react. Today's model is end-of-UTC-day; the cluster is computed against everything we polled up to the day's last hourly RSS run.
Why can TensorFeed ship this when other publishers cannot?
The verification product structurally requires the cross-source view at scale. A publisher that aggregates one or two sources cannot generate meaningful corroboration counts; the math demands a wide input distribution. TensorFeed polls 12+ AI-relevant sources hourly and has been doing it long enough to ship the cluster cron without rebuilding the underlying ingest layer. As the AFTA federation grows, cross-publisher verification becomes possible: a future state where multiple federation members publish their own news streams means "verified across N sources" can include cross-publisher consensus, which is a strictly stronger trust signal.
How do I integrate it?
Free: GET /api/history/news/clusters?date=YYYY-MM-DD&min_sources=N returns top-25 clusters for one date. Premium ($0.02 USDC per call): GET /api/premium/history/news/verified?date= or ?from=&to=&min_sources=2-50 returns the untruncated verified feed for one date or a 30-day range. GET /api/premium/history/news/clusters/full returns every cluster (no 25-cap) for ranges. All three are agent-billable via x402 V2 on Base mainnet, AFTA-certified, and return Ed25519-signed receipts.