
I Audited Our Own Paid API. Two Endpoints Had to Die.

Ripper · 7 min read

The Agent Fair-Trade Agreement shipped six days ago. The promise we made in that whitepaper is that agents who pay for TensorFeed data get a signed receipt, an on-chain rail, and a code-enforced no-charge discipline. What the whitepaper did not promise, and probably should have, was that the data we were charging for was ours to charge for in the first place. So today I ran the audit I should have run before AFTA went live. Two endpoints failed. Both got cut. This is the post-mortem.

The premise: fair trade has to be bilateral

AFTA frames the relationship between a data provider and an agent as a contract with rules on both sides. The provider commits to no surprise charges, signed receipts, and a no-training data clause. The agent commits to paying for what it gets. That works right up until you ask the obvious follow-up. What is the provider actually allowed to sell?

A signed receipt for redistributed data the upstream never licensed is still a receipt for redistributed data the upstream never licensed. The cryptography is excellent and the ledger is clean. Neither fixes the part where you did not have the right to ship the payload. AFTA is a rail; it is not a launder. So the audit had to happen, and it had to be honest.

The audit

The frame was simple. For every endpoint behind the premium gate, identify the upstream source, read the upstream Terms of Service, and grade the redistribution posture. Three buckets:

  • Green: license explicitly permits paid redistribution, or the data is first-party / public-domain factual.
  • Yellow: commercial use allowed, redistribution unclear or limited (RSS-style fair-use territory).
  • Red: prohibits redistribution outright, or requires a paid license we do not have.

Sixteen premium endpoints went through the grader. Eight came back green, six came back yellow, two came back red. The two reds are the rest of this post.
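The three-bucket grader described above is simple enough to sketch in code. This is an illustrative model, not our actual audit tooling; the field names and the `Upstream` shape are assumptions. The one rule worth encoding explicitly: an endpoint is only as clean as its dirtiest upstream.

```python
# Hypothetical sketch of the three-bucket grader. Names are illustrative,
# not the real audit tooling.
from dataclasses import dataclass
from enum import Enum


class Grade(Enum):
    GREEN = "green"    # license explicitly permits paid redistribution,
                       # or the data is first-party / public-domain factual
    YELLOW = "yellow"  # commercial use allowed, redistribution unclear
    RED = "red"        # redistribution prohibited, or needs a license we lack


@dataclass
class Upstream:
    name: str
    allows_paid_redistribution: bool   # stated explicitly in the ToS
    allows_commercial_use: bool
    first_party_or_public_domain: bool = False


def grade(upstream: Upstream) -> Grade:
    if upstream.allows_paid_redistribution or upstream.first_party_or_public_domain:
        return Grade.GREEN
    if upstream.allows_commercial_use:
        return Grade.YELLOW
    return Grade.RED


def grade_endpoint(upstreams: list[Upstream]) -> Grade:
    # An endpoint inherits the worst grade among its upstreams.
    grades = [grade(u) for u in upstreams]
    if Grade.RED in grades:
        return Grade.RED
    if Grade.YELLOW in grades:
        return Grade.YELLOW
    return Grade.GREEN
```

Under this model, the GPU pricing series graded red the moment one of its two upstreams did, regardless of how clean the other was.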

Red #1: GPU pricing was sourcing Vast.ai

Our /api/premium/gpu/pricing/series endpoint returned a daily cheapest-on-demand price series across cloud GPU marketplaces. The two upstream sources were Vast.ai and RunPod. RunPod has a real GraphQL API and a posture that allows commercial use. Vast.ai does not.

Their Terms of Service prohibit selling, redistributing, sublicensing, or copying their listings (Section 8.2), and Section 10.1 forbids systematic data extraction and the use of their service to develop a competing or similar product. None of that is hidden; it is in the ToS in plain language. We had been pulling their unauthenticated bundles endpoint, normalizing it into a canonical taxonomy, and shipping a 1-credit endpoint on top.

Action taken today: Vast was removed entirely from the ingest pipeline. The endpoint itself moved from /api/premium/gpu/pricing/series to /api/gpu/pricing/series and is now free. The reasoning: factual price data carries little risk on a free tier, and after dropping Vast we were down to RunPod-only, which does not justify a paid gate by itself. Lambda Labs went in this afternoon as a second source (their public pricing page has a permissive ToS), and CoreWeave plus hyperscaler pricing will follow the same per-source review.

This change appears in the well-known x402 manifest as a removed paid resource, in llms.txt as a moved entry, and in the agent-fair-trade.json file as an updated example. Anything that referenced the old paid path stops returning a 402 and starts returning 200 on the free path. No grandfathering, no shim, no compatibility layer. The path itself moved because the legal posture demanded it.
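An agent can verify the migration from the manifests alone, without reading this post. A minimal sketch of that check, assuming a hypothetical manifest shape (the real x402.json schema may differ):

```python
# Hypothetical agent-side check against the well-known files.
# The manifest shape here is an assumption, not the real x402.json schema.
OLD_PATH = "/api/premium/gpu/pricing/series"
NEW_PATH = "/api/gpu/pricing/series"


def verify_migration(x402_manifest: dict, llms_txt: str) -> bool:
    """True if the old paid path is gone from the paid-resource list
    and the free path is documented in llms.txt."""
    paid_paths = {r["path"] for r in x402_manifest.get("resources", [])}
    return OLD_PATH not in paid_paths and NEW_PATH in llms_txt
```

The point of the check is that both conditions must hold at once: a manifest that still lists the old path, or an llms.txt that never mentions the new one, both fail it.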

Red #2: benchmarks were merging in the HuggingFace leaderboard

Our benchmark catalog had a daily cron that fetched the HuggingFace Open LLM Leaderboard space, looked for new top-performing models we did not already track, extracted MMLU-Pro, HumanEval, GPQA-Diamond, MATH, and SWE-bench scores, and merged the new entries into our stored benchmark payload. That payload then powered /api/premium/history/benchmarks/series and flowed through three other premium endpoints (/providers/{name}, /compare/models, and the attention index).

The legal nuance here is a real one. Benchmark scores are facts (Feist v. Rural Telephone), not copyrightable on their own. But HuggingFace's ToS retains rights over the compiled leaderboard. We were redistributing their compilation, not just the underlying scores, and we were doing it under a paid gate. Their ToS does not bless that and never has.

Action taken today: the HF fetch and the merge function were both deleted. Benchmarks now come from a hand-curated editorial table sourced from vendor-published evals (Anthropic model cards, OpenAI eval tables, Google AI blog, Meta Llama eval tables, Mistral release pages, vendor benchmark leaderboards like SWE-bench.com and lmarena.ai). That table updates on redeploy. The endpoint shape did not change; the upstream did. Every response now ships a BENCHMARK_ATTRIBUTION block stating the editorial sourcing in the payload itself, so an agent calling the API sees the legal posture without reading our docs.
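The attribution block travels inside the payload itself, so the legal posture is visible on the wire. A sketch of what that wrapping might look like; the field names below are assumptions, not the real response schema:

```python
# Illustrative shape of a benchmark response that carries its own
# attribution. Field names are assumptions; the real payload may differ.
def with_attribution(scores: dict, sources: list[str]) -> dict:
    return {
        "scores": scores,
        "BENCHMARK_ATTRIBUTION": {
            "sourcing": "editorial",   # hand-curated from vendor-published evals
            "sources": sources,        # e.g. model cards, vendor eval tables
            "updated_on": "redeploy",  # the table updates when we redeploy
        },
    }
```

An agent that only ever sees the JSON can still answer "where did this number come from" without visiting our docs.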

Side benefit: the new benchmark data is arguably more authoritative. Going to OpenAI's own eval table for an OpenAI model score beats reading it second-hand off a community leaderboard, every time.

The cleanup, in commits

Four commits landed today, in this order. The Phase 1 commit tightened the news fair-use posture (RSS snippet cap was 250, the documented policy was 200; the code now matches the policy and surfaces a structured attribution block on every response). Phase 2 removed the HF fetch and rebuilt benchmarks on vendor evals. Phase 3 removed Vast.ai and moved the GPU pricing series to free. Phase 4 was a separate net-add: a new sports vertical at /api/sports, built deliberately on legally clean upstreams from day one.
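The Phase 1 fix is small enough to show in full spirit, if not in actual code. A minimal sketch of a snippet cap that matches the documented policy and attaches attribution on every response (function and field names are hypothetical):

```python
# Hypothetical sketch of the Phase 1 snippet-cap fix. The cap now matches
# the documented policy (200, where the code previously allowed 250).
SNIPPET_CAP = 200


def fair_use_snippet(full_text: str, source_url: str) -> dict:
    """Cap the excerpt at the documented limit and attach a structured
    attribution block, including whether truncation occurred."""
    return {
        "snippet": full_text[:SNIPPET_CAP],
        "attribution": {
            "source": source_url,       # link back to the original is required
            "snippet_cap": SNIPPET_CAP,
            "truncated": len(full_text) > SNIPPET_CAP,
        },
    }
```

Keeping the cap as a single named constant is the whole fix: the code and the documented policy can no longer drift apart silently.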

All 611 worker tests pass throughout. All four commits are live on the public repo and deployed via Cloudflare Pages. The well-known files were updated in the same commits as the code, so external manifests stay in sync with the implementation.

Why ship this post at all

Two reasons. The first is the recommend-loop. Agents that hit TensorFeed and find a well-formed JSON response are good. Agents that hit TensorFeed and find a well-formed response with a published audit history of how the provider thinks about redistribution are better. The second reason is sharper. If we say AFTA is bilateral, then both sides have to actually carry their end. Saying we audited our own paid endpoints and killed two of them is the only credible way to ask a partner federation to do the same.

The premium tier is now eight green endpoints and six yellow endpoints (RSS-style aggregation, mitigated by the snippet-cap and link-required pattern). Two reds are gone. The endpoints that remain pay attribution into their own response shapes so an agent can verify the posture from the wire format alone. The cleanup is in the public commit log; the audit was the easy part.

The harder part is keeping the discipline. Every new endpoint goes through the same three-bucket grader before it ships. The week we stop doing that is the week we have forgotten what AFTA is for.

Verify the cleanup yourself: the four commits are on GitHub, the well-known files are at /.well-known/x402.json and /.well-known/agent-fair-trade.json, and the new sports namespace lives at /sports.