The Agent Fair-Trade Agreement
An Open Standard for Honest Commerce Between Autonomous AI Agents and the Web
Authored by Ripper @ TensorFeed.ai. Version 1.0, May 2026. Drafted with Claude (Anthropic).
Abstract
Autonomous AI agents are now first-class consumers of the web. They read documentation, query APIs, settle payments, and recommend services to other agents. The economic, technical, and trust primitives the web evolved for human users break down at agent scale and agent speed, and the breakdown is sharpest in the AI ecosystem itself: agents reading about AI, calling AI, paying for AI data, and routing across AI providers are the dominant first-wave use case. What the moment requires is an agent-first financial and trust layer, the next phase in finance, built from the ground up for transactions that are small, fast, programmatic, and verifiable. This paper proposes the Agent Fair-Trade Agreement (AFTA), an open peer-to-peer standard for honest commerce between data publishers and autonomous agents. AFTA defines four code-enforced no-charge guarantees, signed receipts as the audit rail, USDC on Base as the value rail (a public-ledger, low-fee, dollar-denominated layer purpose-built for the kind of transactions agent commerce actually generates), and a federation pattern that lets independently-operated sites share a credit ledger without a central broker. We document a reference implementation at TensorFeed.ai that monitors over twenty major AI providers in real time, publishes uptime data as a public good, charges only for time-deepened premium series, and shares a federation with TerminalFeed.io. We close with three predictions about the agent-first web through 2030 and an explicit invitation to other publishers to adopt AFTA, fork the spec, or propose a v2 we have not yet seen.
1. Opening: Four Cents in 2.4 Seconds
On April 27, 2026, an autonomous Python script paid TensorFeed.ai four cents in USDC for a premium routing recommendation, received an Ed25519-signed receipt, and continued its work without human intervention. The transaction took 2.4 seconds. It cost the agent less than the value of the time a human would have spent clicking a button. There was no API key, no signup, no email confirmation, no captcha, no billing portal. The agent paid because the work was worth four cents to it. The publisher accepted payment because the rail was open.
That transaction was not the first agent-to-API payment in history. It was, however, one of the first executed under a code-enforced fair-trade agreement: a contract that the publisher commits, in code and on its public manifest, to refund any payment when the underlying service fails to deliver real value. If the response was 5xx, no charge. If a circuit breaker tripped, no charge. If the input failed validation, no charge. If the data was older than its published freshness SLA, no charge. The agent did not need to dispute anything. The receipt was signed at the moment the service committed the credit, and the publisher's source code, linked from the receipt itself, attested to which path the request took.
This is what AFTA is. It is also what AFTA is not. It is not a marketplace. It is not a token. It is not a foundation, a consortium, or a billing intermediary. It is a peer-to-peer agreement that publishers self-publish at /.well-known/agent-fair-trade.json, that agents read, that any other publisher can adopt for free, and that any third party can verify by reading both the manifest and the source code it points to. The standard is open. Adoption is the certification.
The thesis of this paper is that the web for agents needs a small set of new primitives, that those primitives should be open and verifiable rather than mediated by a central platform, and that the most important primitive is the one we built first: a public ledger of when the publisher charged, when the publisher chose not to, and why. We call that primitive AFTA. The rest of this paper describes what we built, why we built it that way, and what we think comes next.
2. Why The Web Breaks For Agents
The web grew up around four assumptions about its users. Users are humans. Humans browse pages and tolerate ads. Humans have credit cards and email addresses. Humans sign up. Each of these breaks for an autonomous agent.
2.1 Humans browse, agents query
A human reading a model-pricing comparison expects narrative, layout, screenshots, and advertisements alongside the answer. An agent reading the same page wants structured JSON, a stable schema, and a way to ask follow-up questions in the same idiom. The gap between the two is not a translation problem we can solve with screen-scrapers. Even the best LLM-based scrapers extract only what was meant for the human reader, and the cost of that extraction (tokens, latency, failure rates) compounds across every page.
The maturing answer to this is machine-readable everything: llms.txt for index-style discovery, OpenAPI manifests for endpoint contracts, JSON-LD for structured facts, Schema.org for entity types, MCP for tool surface area. None of these are radical. All of them are decisions a publisher has to make explicitly. The publishers who do not make those decisions do not show up in the agent-first web.
2.2 The economics flip
A human visit to a content site is usually free at point of consumption because someone has paid the publisher in some other way: an advertiser, a subscription, an institutional license, or the publisher's own willingness to subsidize. An agent visit cannot reliably support any of those models. Agents are not the demographic an advertiser is buying. Agents do not have employer-paid licenses. Agents do not subscribe.
What agents do have is direct, atomic willingness to pay. An agent that needs a routing decision has a budget for that decision. The economically efficient transaction is not a monthly subscription with a 90 percent waste rate. It is a per-call payment at the moment of need, denominated in a currency the agent already holds. That currency, increasingly, is stablecoins on a public chain.
x402 is the standard that gives that transaction its protocol shape. An HTTP server returns 402 Payment Required, with a body that tells the client the price and the rail. The client makes the payment off the HTTP path, then retries the same request with a payment proof header. The server verifies and serves. This is not new. HTTP 402 has been reserved in the spec since the 1990s. What is new is that the rest of the stack now works: stablecoins exist, public chains have low latency, and agents can hold balances and sign transactions without human intervention.
2.3 Trust primitives change
When a human user receives a service, the trust primitives are accumulated reputation, recognizable brand, support channels, dispute mechanisms, and the threat of public review. None of these scale to the rate at which an autonomous agent makes purchase decisions. An agent making thousands of paid calls per day cannot wait for a Trustpilot review. It needs cryptographic attestations and on-chain finality.
The novel primitive AFTA contributes here is the signed no-charge attestation. Every interaction with a premium endpoint, paid or refunded, returns an Ed25519-signed receipt that records the credits charged, the credits remaining, the request and response hashes, the freshness SLA, and, critically, the no-charge reason if any. The agent can store these receipts and audit them later. A third party can verify any of them against the publisher's published key. The publisher cannot rewrite history because the receipts are signed at issue time and the on-chain payments are immutable.
2.4 The decisions that matter are different
We should be specific about which "agents on the web" we are talking about. The dominant first-wave use case for agentic web traffic is, by a wide margin, agents working inside the AI ecosystem itself: agents that read documentation about AI models, agents that call AI APIs, agents that orchestrate other AI agents, agents that route across AI providers, agents that need pricing and reliability data about the AI services they depend on. The web for agents is, today, mostly the web for AI agents reasoning about AI infrastructure. TensorFeed is approximately 95% concerned with this surface, and we expect that ratio to hold for the next several years.
This matters for the decisions agents are making. Humans browsing the AI ecosystem care about narrative, brand, vibes, and the latest tweet. Agents browsing the AI ecosystem care about live status, pricing, capability, and whether the dependency they are about to call is currently healthy. The information architecture humans use, threaded comment sections and editorial roundups, is not the architecture agents need. Agents need the four numbers that drive a routing decision: latency, error rate, price per token, and capability fit for the task.
Reliability data is foundational to all four. An agent cannot decide whether to send the next call to Claude or GPT-4 without knowing whether either is currently degraded. The web does not currently make this easy. Each provider has its own status page in its own format on its own domain, with its own conventions for what counts as degraded. An agent that wants to make this decision well has to scrape twenty status pages, normalize the formats, and synthesize a current ranking. Or the web can provide one canonical surface that does this for everyone, makes the data free at point of access, and earns its keep by selling the time-deepened version of the same data.
That is the model TensorFeed.ai operates. We monitor over twenty providers, we publish the cross-provider uptime leaderboard for free, and we charge for the ninety-day window. The free tier is genuinely free, for agents and humans alike, with AFTA's protections applied to every paid call that does happen on top of it. The paid tier is paid, settled in USDC on Base, with all the protections AFTA guarantees. The arrangement only works because the rules are public, the no-charge guarantees are code-enforced, and the receipts attest to every transaction.
3. The Maturing Agent-First Stack
AFTA does not exist in isolation. It composes with a stack that has matured rapidly across 2024 to 2026. We assume familiarity with the components but document the assumptions for completeness.
3.1 Discovery: llms.txt and OpenAPI
llms.txt, proposed in 2024 and rapidly adopted, is the agent-first analog of robots.txt. It lives at the site root, it lists the agent-relevant resources, and it gives each one a brief description an LLM can use to decide whether the link is worth following. TensorFeed publishes a 200-plus-line llms.txt covering every endpoint, every landing page, every dataset, every well-known manifest. We cite our llms.txt entries in this paper rather than duplicating them.
OpenAPI 3.1 fills the contractual gap. Where llms.txt says "this endpoint exists and does X," OpenAPI says "here is the request schema, the response schema, the parameters, the auth requirement, the example payload, and the rate limit." TensorFeed publishes openapi.yaml and openapi.json covering every endpoint, validates the schema in CI, and the file is registered in APIs.guru.
For a publisher that has done the OpenAPI work, AFTA adds two things: a manifest at /.well-known/agent-fair-trade.json declaring the no-charge guarantees, and a manifest at /.well-known/x402.json declaring the payment rail. Both are JSON, both are static or near-static, both can be added to a static site without server-side code.
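Neither manifest's exact schema is reproduced in this paper. As a sketch of the shape only, with every field name an illustrative assumption rather than the published schema, an agent-fair-trade.json might look like:

```json
{
  "afta_version": "1.0",
  "publisher": "tensorfeed.ai",
  "no_charge_guarantees": [
    "5xx",
    "circuit_breaker",
    "schema_validation_failure",
    "stale_data"
  ],
  "receipt_key": "/.well-known/tensorfeed-receipt-key.json",
  "payment_manifest": "/.well-known/x402.json"
}
```

Because both files are static JSON, a publisher on a purely static host can adopt AFTA without deploying any server-side code.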
3.2 Tooling: MCP and the standardization moment
Anthropic's Model Context Protocol (MCP) emerged in late 2024 as the standard for exposing tools to AI agents. By mid-2026 it has become the lingua franca: Claude Desktop, Claude Code, OpenAI's agent surfaces, and an emerging ecosystem of independent clients all speak MCP. Publishers wrap their APIs as MCP servers so agents can discover the tools, read the tool descriptions, and call them with type-checked arguments.
TensorFeed maintains an MCP server published on npm at @tensorfeed/mcp-server and registered in the official MCP server registry. The server exposes thirty-plus tools split between free and premium. Free tools work with no configuration. Premium tools require a bearer token via the TENSORFEED_TOKEN environment variable, which the agent gets from a one-time USDC payment to the credits flow.
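As a usage sketch, wiring the server into Claude Desktop is one stanza in claude_desktop_config.json, the file Claude Desktop reads for MCP servers; the token value below is a placeholder obtained from the credits flow:

```json
{
  "mcpServers": {
    "tensorfeed": {
      "command": "npx",
      "args": ["-y", "@tensorfeed/mcp-server"],
      "env": { "TENSORFEED_TOKEN": "tnsr_..." }
    }
  }
}
```

Agents that omit the token still get the free tools; the premium tools surface an error directing the caller to the payment flow.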
MCP matters to AFTA because it is the discovery surface where agents find machine-payable services. An agent that does not know TensorFeed exists cannot pay TensorFeed. The MCP server registry, paired with the server's self-description, is one of the strongest paths to agent discovery in the current ecosystem.
3.3 Payment: x402 and HTTP 402 reborn
x402 specifies how an HTTP endpoint and an HTTP client negotiate payment. The current spec (v2) supports multiple payment methods and networks. TensorFeed accepts USDC on Base mainnet via the exact method, with the transaction hash in the X-Payment-Tx header. Other methods exist, including Stripe Link's Shared Payment Tokens, which we list in our manifest as "evaluating" but do not yet accept.
The protocol is straightforward. The agent calls a premium endpoint without authentication. The server returns:
HTTP/1.1 402 Payment Required
WWW-Authenticate: x402 realm="tensorfeed", method="exact"
Content-Type: application/json
{
  "x402Version": 2,
  "accepts": [
    {
      "scheme": "exact",
      "network": "eip155:8453",
      "amount": "20000",
      "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
      "payTo": "0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1",
      "maxTimeoutSeconds": 60
    }
  ]
}
}

The agent sends a USDC transfer of 20000 base units (0.02 USDC, one credit at our base rate) to the payTo wallet on Base. It then retries the same request with X-Payment-Tx: 0x... set to the transaction hash. The server verifies the transfer via Base RPC, returns the data, and includes a fresh bearer token in the response so the agent does not need to repeat the on-chain step on subsequent calls.
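On the client side, the 402 body above is all the agent needs to construct the transfer. A sketch, with the function name and return shape ours rather than part of the x402 spec (USDC carries six decimals, so 20000 base units is 0.02 USDC):

```python
import json

USDC_DECIMALS = 6  # USDC uses 6 decimal places on Base

def parse_x402_challenge(body: str) -> dict:
    """Parse a 402 response body into a payment intent.

    Takes the first entry of the accepts array and converts the amount
    from base units to whole USDC. Illustrative sketch, not an SDK API.
    """
    challenge = json.loads(body)
    option = challenge["accepts"][0]
    return {
        "scheme": option["scheme"],
        "network": option["network"],
        "pay_to": option["payTo"],
        "asset": option["asset"],
        "amount_usdc": int(option["amount"]) / 10 ** USDC_DECIMALS,
        "timeout_s": option.get("maxTimeoutSeconds", 60),
    }

# The sample challenge from the text, compacted
body = '''{"x402Version": 2, "accepts": [{"scheme": "exact",
  "network": "eip155:8453", "amount": "20000",
  "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
  "payTo": "0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1",
  "maxTimeoutSeconds": 60}]}'''

intent = parse_x402_challenge(body)  # 20000 base units -> 0.02 USDC
```

The agent then signs and broadcasts the transfer with whatever wallet library it holds keys in, and retries the original request with the resulting hash in X-Payment-Tx.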
The credits flow is the same idea with batching. The agent buys 50 credits for 1 USDC up front, gets a bearer token, and uses one credit per call thereafter. The economic difference is that batching amortizes the on-chain gas cost across many calls. The trust difference is that batching requires the agent to extend trust to the publisher for the duration of the credit balance. AFTA is, in part, our answer to "why should an agent extend that trust." If the publisher's no-charge guarantees are code-enforced and signed-attested, the trust window narrows from "do they pay out at all" to "do they ship the source code their manifest claims to ship."
3.4 Settlement: USDC on Base
We settle in USDC on Base, and we want to be specific about why. The choice is technical and it is structural, and it reflects what we believe is the natural next phase in finance: a public-ledger, programmable, dollar-denominated rail that was built, from first principles, for the kinds of transactions autonomous agents actually make. Call it finance 2.0 if it helps. The point is that the agent-first economy needs an agent-first financial layer, and that layer now exists.
The technical reasons first. USDC is dollar-pegged so neither party carries crypto-volatility exposure. Base is an Ethereum L2 with median transaction fees in the sub-cent range, so the rail does not add meaningful overhead to a four-cent purchase. Base inherits the Ethereum mainnet trust assumption, so the rail does not introduce new chain-level risk. USDC on Base has matured to the point where most agent-friendly wallets and SDKs support it natively. Settlement is final in seconds. The block explorer is public.
The structural fit is what makes this the right rail for AFTA. A public on-chain ledger with a regulated stablecoin is fair to all participants by construction, agents and humans alike. Every payment is immutable. Every payment is publicly auditable on the Base block explorer. Settlement is final at the block, not at some opaque later moment. The rail itself is the audit trail. For an agreement built on the premise that the publisher's behavior is verifiable by anyone with internet access, this is the only kind of payment layer that makes the premise true rather than aspirational.
Why Base specifically. Among the Ethereum L2s that meet the speed, fee, and trust criteria, Base has a particular institutional shape worth noting. The sequencer is operated by Coinbase, a publicly traded crypto exchange with audited financials, regulatory licenses across major jurisdictions, and a track record of operating financial infrastructure since 2012. For a rail asking agents and publishers to trust it with their money, the regulatory standing and operational track record of the entity running the sequencer is a material input to the choice.
Base is not Coinbase. That distinction matters. Base is an open, public, permissionless EVM L2 that anyone can build on, anyone can read, anyone can settle on. Coinbase operates the sequencer today and has publicly stated intentions to decentralize over time. The protocol is open even where the operator is presently centralized. If the sequencer operator changed in the future, Base the protocol would continue, the wallet at 0x549c82... would still hold its USDC on the same chain, and other operators could step in to run the sequencer. The choice combines near-term operational stability at the operator layer with long-term protocol-level durability at the chain layer.
To be unambiguous about what we accept: USDC on Base only. AFTA's published rail is USDC on Base mainnet, full stop. We do not ask senders for ETH. We do not ask for USDC on Arbitrum, Optimism, Polygon, BNB Chain, or Ethereum mainnet. We do not ask for any other stablecoin or any other chain. The /.well-known/x402.json manifest declares Base, our /api/payment/info endpoint declares Base, our /developers/agent-payments documentation declares Base, and the auto-credit flow validates against Base RPC. Anything outside that path is not soliciting, not accepted as automatic credit, and not part of the AFTA we publish.
A property of EVM addresses worth noting, separate from the above. The publisher wallet at 0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1 is a standard Ethereum-compatible address. The same address derivation works on every EVM-compatible chain. This is a property of how EVM key derivation works, not a path we direct anyone toward. We mention it only because, if a sender accidentally clicks the wrong network in their wallet UI and sends USDC on Polygon or Arbitrum to this address, the funds still arrive in our wallet on that chain rather than vanishing into the void. We retain full custody and can manually reconcile or bridge if a sender contacts us about the misclick. The auto-credit flow will not have granted credits in that case, because the verification is Base-only. The point is fault tolerance for human and automated mistakes, not a backdoor route around the published rail.
This is not a crypto-maximalist argument. We do not think USDC on Base is the only correct rail. The manifest declares which rails the publisher accepts, and any rail can be added without changing the AFTA spec itself. We list Stripe Link's Shared Payment Tokens in our manifest as "evaluating," because Stripe is genuinely making an effort to support agents and we want to credit that. We list other paths as plausible. The point is not which chain. The point is that the rail should be built for what it actually has to do: serve autonomous agents at four-cent granularity, transparently, on terms the participants can verify.
We do not believe USDC on Base is the only correct answer forever. We believe it is the right answer right now: the rail exists, it works, the cost structure fits, and the publishers shipping on it now are the ones who will define how the agent-first economy looks for the next decade.
4. The Agent Fair-Trade Agreement
We now turn to AFTA itself: what it is, how it is structured, and what makes it different from other proposed agreements.
4.1 The Five Principles
AFTA is built on five principles, in priority order. When two principles conflict, the earlier one wins.
- The publisher does not charge when the service fails to deliver. Code-enforced. The boundary cases are documented in the manifest with pointers to the source. We list four today: 5xx errors, circuit-breaker trips, schema validation failures, and stale data. Publishers may add more.
- Every paid or refunded call returns a signed receipt. The receipt is Ed25519-signed by the publisher's published key, contains the request hash, response hash, credits charged, credits remaining, freshness SLA, no-charge reason, and an optional agent-supplied nonce. The agent can verify the receipt against the publisher's well-known key. The publisher commits to a 30-day rotation notice if the key changes.
- Pricing is transparent and listed publicly. No surprise pricing. No tier-based price discrimination unless documented. Every premium endpoint declares its credit cost. The credit-to-USDC rate is published. Volume discounts are published. Welcome bonuses are published.
- The data is licensed for inference only unless explicitly otherwise. Premium data may not be used for training, fine-tuning, evaluation, or distillation of machine learning models. The license tracks the data; agents that need training data should source it from feeds that license it that way.
- Adoption is the certification. There is no AFTA central authority, no certification fee, no logo licensing, no trademark gate. Any publisher can self-publish their AFTA manifest. Any third party can verify the publisher's claims by reading the manifest and the linked source. Membership is plural rather than singular: a publisher's credibility comes from cross-referencing their manifest, their source repo, their on-chain payment history, and the public no-charge ledger.
4.2 The "No-Charge for Failure" Clauses
The first principle is the one most worth dwelling on. It is also the one we expect to evolve fastest as more publishers adopt AFTA.
The four boundary cases TensorFeed currently encodes:
5xx errors. Any HTTP response in the 500 range is treated as a publisher-side failure. The credit is not committed. The receipt is signed with credits_charged: 0 and no_charge_reason: "5xx". The event lands in the public no-charge ledger at /api/payment/no-charge-stats. The implementation is in worker/src/payments.ts, in the commitPayment function, which returns early on a 5xx flag set by the request handler.
Circuit-breaker trips. Two breaker layers run on every premium call. The identical-request layer trips at twenty same-fingerprint calls in sixty seconds. The burn-rate layer trips at a hundred calls per sixty seconds on a single bearer token regardless of path or query, so loops that randomize the URL cannot drain credits. Either trip returns HTTP 429 with no charge and a signed receipt that records no_charge_reason: "circuit_breaker" and the trip kind. The implementation is in worker/src/circuit-breaker.ts.
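A minimal sketch of the two breaker layers, assuming an in-memory sliding window; the class, method names, and structure are illustrative, not the worker/src/circuit-breaker.ts internals:

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Thresholds from the text
IDENTICAL_LIMIT = 20   # same-fingerprint calls per 60-second window
BURN_LIMIT = 100       # total calls per bearer token per 60-second window
WINDOW_S = 60

class CircuitBreaker:
    def __init__(self):
        self._by_fingerprint = defaultdict(deque)  # fingerprint -> timestamps
        self._by_token = defaultdict(deque)        # bearer token -> timestamps

    @staticmethod
    def _count(dq: deque, now: float) -> int:
        while dq and now - dq[0] > WINDOW_S:  # evict samples outside window
            dq.popleft()
        return len(dq)

    def check(self, token: str, fingerprint: str,
              now: Optional[float] = None) -> Optional[str]:
        """Record one call; return the trip kind, or None to proceed."""
        now = time.time() if now is None else now
        self._by_fingerprint[fingerprint].append(now)
        self._by_token[token].append(now)
        if self._count(self._by_fingerprint[fingerprint], now) > IDENTICAL_LIMIT:
            return "identical_request"
        if self._count(self._by_token[token], now) > BURN_LIMIT:
            return "burn_rate"
        return None
```

The burn-rate layer keys on the token alone, which is why randomizing the URL does not evade it: the per-token deque grows regardless of path or query.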
Schema validation failures. Requests that fail input validation return HTTP 400, do not charge a credit, log to the public no-charge ledger, and carry a signed receipt with no_charge_reason: "schema_validation_failure". The agent gets cryptographic proof the failure was free. We are lenient by default: extra fields are ignored. The agent has to genuinely violate the contract for the validator to fire.
Stale data. Every premium endpoint declares a freshness SLA in seconds. If the data backing the response is older than its SLA, the call is not charged. The response is also flagged with stale: true so the agent can decide to retry later or accept the stale answer. The implementation is in worker/src/freshness.ts. The freshness SLAs themselves are published live at /api/meta so they can be inspected without scraping the source.
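The stale-data clause reduces to a single comparison. A sketch, with the function name and return shape as assumptions rather than the worker/src/freshness.ts internals:

```python
from datetime import datetime, timedelta, timezone

def freshness_check(data_captured_at: datetime, sla_seconds: int,
                    now: datetime) -> dict:
    """Apply the stale-data clause: data older than its declared SLA
    means the call is flagged stale and must not be charged."""
    age = (now - data_captured_at).total_seconds()
    stale = age > sla_seconds
    return {
        "stale": stale,
        "credits_charged": 0 if stale else 1,
        "no_charge_reason": "stale_data" if stale else None,
    }
```

Because the SLAs are published live at /api/meta, an agent can run the same comparison itself and confirm the publisher's stale flag against the response's own timestamp.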
We expect this list to grow. Plausible additions are: empty-result no-charge for searches that find nothing, deprecated-endpoint refunds during a sunset window, and partial-region-failure prorating for endpoints that aggregate across cloud regions. We have not implemented those because we have not needed them yet, but the manifest schema allows publishers to declare additional clauses without breaking compatibility.
4.3 Signed Receipts as the Audit Trail
The receipt is the cryptographic backbone of AFTA. Every paid or refunded call returns one. Receipts are JSON, canonicalized in a deterministic form (tensorfeed-canonical-json-v1), and signed with Ed25519. The receipt fields signed in v2:
{
  "v": 2,
  "id": "rcpt_a1b2c3...",
  "endpoint": "/api/premium/routing",
  "method": "GET",
  "token_short": "tnsr_a1b2",
  "credits_charged": 1,
  "credits_remaining": 49,
  "request_hash": "sha256:...",
  "response_hash": "sha256:...",
  "captured_at": "2026-04-27T18:45:31.412Z",
  "server_time": "2026-04-27T18:45:31.418Z",
  "no_charge_reason": null,
  "freshness_sla_seconds": 300,
  "agent_nonce": "agent-xyz-2026-04-27-tx-1234"
}
}

The agent_nonce field, added in v2, lets the agent bind the receipt to its specific request. The agent supplies a nonce in the X-Agent-Nonce header (regex-validated [A-Za-z0-9._-]{8,128}), and the server echoes it back in X-Agent-Nonce-Echo and includes it in the signed payload. Without the nonce, a sufficiently devious server could in theory replay a previously-signed receipt for a different cached identical call. With the nonce, the signature is bound to the agent's intent.
Verification is published in two ways. The publisher's public key lives at /.well-known/tensorfeed-receipt-key.json in JWK format, ready for any standard EdDSA verifier. We also expose /api/receipt/verify as a convenience endpoint that takes a receipt and returns valid/invalid plus the parsed fields. The convenience endpoint is non-authoritative; if it disagrees with a canonical-JSON-plus-key verification, trust the canonical.
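Independent of the signature, an agent can spot-check a receipt against its own request with nothing but a hash function. A sketch (the function name is ours; the authoritative check remains the Ed25519 signature over the canonical JSON against the well-known key):

```python
import hashlib
import re

NONCE_RE = re.compile(r"[A-Za-z0-9._-]{8,128}")  # regex from the spec text

def spot_check_receipt(receipt: dict, response_body: bytes,
                       sent_nonce: str) -> list:
    """Key-free client-side consistency checks on a v2 receipt.

    Returns a list of problems; empty means the cheap checks pass.
    Illustrative sketch, not the authoritative verifier.
    """
    problems = []
    digest = "sha256:" + hashlib.sha256(response_body).hexdigest()
    if receipt["response_hash"] != digest:
        problems.append("response_hash mismatch")
    if receipt.get("agent_nonce") != sent_nonce:
        problems.append("nonce not echoed in signed payload")
    if not NONCE_RE.fullmatch(sent_nonce):
        problems.append("nonce fails the published regex")
    return problems
```

An agent that stores receipts alongside the raw response bodies can rerun these checks at any later audit without contacting the publisher at all.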
We rotate keys with thirty days notice, and during the rotation window we serve both keys so older receipts remain verifiable. The current key has fingerprint db1f1dc3dbf62c66.
4.4 On-Chain Settlement, Off-Chain Verification
The receipt rail and the on-chain rail are independent, but complementary. A receipt attests to what we charged and why. The chain attests to what was paid. They cross-reference each other but neither is sufficient on its own.
The reason for the separation is that each rail has a job the other does poorly. The on-chain rail makes payments immutable, publicly auditable, and beyond the publisher's reach to rewrite. It does not, however, encode why we charged or refunded. The receipt rail makes the publisher's pricing logic accountable. It does not, however, prove any money moved. Cross-referencing the two gives the agent both: the publisher cannot claim a refund happened without a corresponding signed receipt, and the publisher cannot claim payment without a corresponding on-chain transfer.
The receipt-to-chain mapping for the credits flow is:
- Agent posts to /api/payment/buy-credits with the proposed USDC amount and the sender wallet.
- Server returns a quote: credits granted (with volume discount applied), the wallet to send to, and a quote ID with a 5-minute window.
- Agent sends the USDC transfer on Base. The transaction hash is the on-chain attestation.
- Agent posts to /api/payment/confirm with the transaction hash and the quote ID.
- Server reads the transfer event from eth_getTransactionReceipt on the Base RPC, verifies the recipient wallet, the amount, and the block confirmation count.
- If verified, server credits the bearer token and returns the bearer token plus a signed receipt confirming the credit grant.
- The tx hash is permanently recorded in the replay-protection ledger so the same payment cannot be redeemed twice.
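The verification and replay-protection steps reduce to a handful of comparisons once the transfer event has been decoded from the RPC response. A sketch under that assumption, with the function name, the decoded-event shape, and the confirmation threshold as illustrative choices:

```python
PAY_TO = "0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1"
MIN_CONFIRMATIONS = 1  # assumption; the real threshold is the server's choice

_spent_txs = set()  # replay-protection ledger (durable storage in production)

def confirm_payment(tx_hash: str, transfer: dict,
                    quoted_base_units: int) -> bool:
    """Return True iff the decoded USDC transfer matches the quote and
    the tx hash has never been redeemed before."""
    if tx_hash in _spent_txs:
        return False  # same payment cannot be redeemed twice
    ok = (transfer["to"].lower() == PAY_TO
          and int(transfer["amount"]) >= quoted_base_units
          and transfer["confirmations"] >= MIN_CONFIRMATIONS)
    if ok:
        _spent_txs.add(tx_hash)
    return ok
```

The set is the entire replay defense: because Base transaction hashes are globally unique and the chain itself is immutable, recording redeemed hashes is sufficient to make double-spending a quote impossible.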
The x402 fallback is the same idea condensed to a single retry. The agent does not pre-buy credits; instead the server's 402 response is itself the quote, and the retry's X-Payment-Tx header is the proof. We support both because some agents prefer the predictability of a credit balance and others prefer the just-in-time pattern.
4.5 Federation Without Centralization
AFTA is a peer agreement, not a marketplace. There is no broker. Two AFTA-adopting sites can federate by exchanging a shared internal secret and agreeing on a validate-and-commit handshake. After federation, a bearer token issued by either site works on both sites without re-purchasing credits.
The federation we operate today is between TensorFeed.ai and TerminalFeed.io. Both sites publish their own AFTA manifests, both sign their own receipts with their own keys, and both share a single credit ledger hosted on TensorFeed. When an agent calls a premium endpoint on TerminalFeed with a TensorFeed-issued token, TerminalFeed's worker:
- Calls TensorFeed's internal validate endpoint with the token and the requested cost.
- Receives { ok: true, credits_remaining: N, sufficient: true } if the token has the credits.
- Serves the response to the agent.
- Calls TensorFeed's internal commit endpoint to deduct the credit, with a no_charge_reason if the response was eligible for refund.
- Returns the data plus a TerminalFeed-signed receipt to the agent.
The federation is symmetric. A TerminalFeed-issued token would work the same way at TensorFeed (in practice, only TensorFeed currently sells credits, so all tokens are TensorFeed-issued, but the rail does not care). No-charge events from federated calls land in the host's public no-charge ledger with the sister-site endpoint path, so the public record reflects the network rather than a single publisher.
The peer-to-peer nature is essential. A central marketplace would gate adoption, charge a fee, control disputes, and become a single point of failure. A peer agreement scales by simple pairwise federation. Two members today, ten by mid-2027 if the framing is right.
5. Reference Implementation: TensorFeed.ai
TensorFeed.ai is the first AFTA-certified site and the venue where we have validated each of the design decisions above. This section catalogs what is in production as of May 2026 and the live numbers behind the build.
5.1 Surface area
Real-time AI service status. Twenty providers monitored at a 2-minute polling cadence: Claude API, OpenAI API, Google Gemini, GitHub Copilot, Perplexity, Groq, Hugging Face, Replicate, Cohere, Mistral, AWS Bedrock, Azure OpenAI, DeepSeek, Together AI, Fireworks AI, OpenRouter, ElevenLabs, Stability AI, Runway, and Luma. Six different status-feed parsers handle the format diversity: Atlassian Statuspage v2 JSON for most, Instatus for Perplexity, Google Cloud incidents.json for Vertex Gemini, AWS Health currentevents.json for Bedrock (the file ships as UTF-16 with a BOM, which we discovered the hard way), Azure status RSS for Azure OpenAI, and an HTML fallback parser for Hugging Face, Mistral, Together, Fireworks, OpenRouter, and Luma.
Cross-provider uptime leaderboard. Live ranked table at /leaderboard showing every monitored provider by 7-day uptime percentage. Computed from minute-resolution counters captured every poll cycle (approximately 720 samples per provider per day, 5,040 over a 7-day window). Each row links to a dedicated /uptime/{slug} trend page with daily breakdown chart and an embeddable badge.
Embeddable uptime badges. Shields.io-compatible SVG badges per provider at /api/badge/uptime/{slug}. Color thresholds: green at 99.9%+, lighter green at 99%+, yellow at 95%+, orange at 90%+, red below. Aggressively edge-cached at 5 minutes. Every README that embeds one is a permanent backlink and agent-discovery surface.
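Both the leaderboard figure and the badge color are pure functions of the same minute-resolution counters. A sketch, with the function names ours and the Shields-style color names as assumptions about the exact palette:

```python
def uptime_pct(up_samples: int, total_samples: int) -> float:
    """7-day uptime percentage; ~720 samples/provider/day at a
    2-minute cadence, 5,040 over the full window."""
    return round(100.0 * up_samples / total_samples, 2)

def badge_color(uptime: float) -> str:
    """Thresholds from the text: green 99.9%+, lighter green 99%+,
    yellow 95%+, orange 90%+, red below."""
    if uptime >= 99.9:
        return "brightgreen"
    if uptime >= 99.0:
        return "green"
    if uptime >= 95.0:
        return "yellow"
    if uptime >= 90.0:
        return "orange"
    return "red"
```

One missed poll cycle out of a full week, 5,039 of 5,040 samples up, still rounds to 99.98 percent and keeps the badge in the top band.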
Per-provider detail pages. Twenty /uptime/{slug} trend pages and nineteen /is-X-down landing pages (Midjourney lacks a machine-readable status feed, so its page exists but cannot show live data). Each page renders headline uptime, daily chart, embeddable badge, FAQ, and cross-links.
Premium API. Nineteen paid endpoints behind x402, including premium routing recommendations, pricing time series, benchmark time series, status uptime time series, status leaderboard with incident_count and mttr_minutes, news search, agents directory, provider deep-dive, cost projection, what's-new digest, MCP registry series, probe series, GPU pricing series, and webhook watches.
Webhook watches. Four watch types: realtime price (fires on model price transitions), realtime status (fires on provider operational/degraded/down transitions), scheduled digest (fires daily or weekly with a curated summary), and leaderboard rank-change (fires when a provider crosses a rank threshold on the 7-day uptime leaderboard). Each watch lives 90 days, fires up to 100 times by default, costs 1 credit at registration. Fire deliveries POST to the agent's callback URL with HMAC-SHA256 signing.
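Agents receiving watch deliveries should verify the HMAC before trusting the payload. A minimal receiving-side sketch; the hex encoding and the way the signature is transported are our assumptions for illustration, so consult the watch-registration response for the real scheme:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Verify an HMAC-SHA256 webhook delivery signature.

    ASSUMPTION: the signature arrives hex-encoded; the real header
    name and encoding come from the watch-registration response.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position via timing
    return hmac.compare_digest(expected, signature_hex)
```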
SDKs and MCP. Python SDK at pip install tensorfeed, TypeScript SDK at npm install tensorfeed, MCP server at npx @tensorfeed/mcp-server. All three are auto-published from CI on version bump. As of this writing: Python 1.29.0, TypeScript 1.25.0, MCP 1.23.0.
Public dataset. Daily snapshots of the entire public TensorFeed surface published to Hugging Face at tensorfeed/ai-ecosystem-daily. 36 JSONL files per day covering news, models, pricing, status, benchmarks, agents-directory, agents-activity, podcasts, trending-repos, MCP registry, probe history, GPU pricing, AFTA adopters, AI hardware, open weights, inference providers, training runs, marketplaces, specialized models, fine-tuning providers, OSS tools, agent APIs, voice leaderboards, embeddings, multimodal, vector DBs, frameworks, benchmark registry, public leaderboards, conferences, funding, model cards, AI policy, compute providers, usage rankings, and agent provisioning. Committed at 08:00 UTC by GitHub Actions. License is inference-only, consistent with the AFTA standard. Hugging Face's auto-conversion bot publishes a Parquet version on refs/convert/parquet, queryable directly from DuckDB, ClickHouse, Pandas, and Polars without downloading.
5.2 Free tier as public good
The decision to make the live status, the cross-provider leaderboard, the badges, the per-provider pages, the daily HF dataset, and the seven-day uptime series free is deliberate. We believe reliability data is foundational infrastructure for the agent-first web. Charging for the canonical version of "is X currently up" balkanizes the ecosystem and forces every agent to roll its own monitoring. That is bad for agents, bad for providers (because no agent's roll-your-own monitoring will be calibrated as well as ours), and bad for the long-term legitimacy of the agent commerce we are building.
We believe the free tier earns its keep three ways. First, every embed of a badge is a backlink. Second, every agent that integrates the free tier becomes a candidate for the premium tier when their needs grow past seven days of history. Third, the free tier is a discovery layer: the most efficient way for agents to find AFTA is to use the free TensorFeed surfaces and read what we publish about ourselves there.
The corollary is that the premium tier has to be genuinely time-deepened, not a paywall in front of the same data the free tier already exposes. We achieve this by capturing high-resolution counters (every 2 minutes per provider), retaining them for 90 days, and serving the 90-day window only behind paid credits. The free tier shows the last 7 days. The paid tier shows up to 90, plus per-provider incident_count and mttr_minutes that the free tier does not compute. Both tiers use the same underlying data.
5.3 Premium tier as data moat
The data moat is a function of two things: the difficulty of replicating the dataset and the time it takes to accumulate.
We capture status counters for twenty providers every 2 minutes. That works out to:
- ~720 polls per provider per day
- ~14,400 polls per day across the field
- ~5.2 million polls per year
Retained for 90 days, the live counter set is approximately 65,000 data points per provider, roughly 1.3 million across the field. A new entrant wanting to replicate this dataset needs to start polling today and wait 90 days to match our depth.
The premium leaderboard endpoint serves this set sliced any way the agent wants: by date range, with incident_count, with MTTR. The free tier sees the last 7 days. The paid tier sees the rolling 90. We expect to extend the retention to 180 and then 365 days as the cost-to-serve makes sense. Each extension widens the moat.
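The cadence arithmetic can be rechecked directly from the stated constants; note that 90-day retention at 720 polls per provider per day works out to roughly 65,000 samples per provider and about 1.3 million across twenty providers:

```python
PROVIDERS = 20
POLL_INTERVAL_MIN = 2
RETENTION_DAYS = 90

polls_per_provider_per_day = 24 * 60 // POLL_INTERVAL_MIN       # 720
polls_per_day_field = polls_per_provider_per_day * PROVIDERS    # 14,400
polls_per_year_field = polls_per_day_field * 365                # 5,256,000 (~5.2M)
retained_per_provider = polls_per_provider_per_day * RETENTION_DAYS  # 64,800
retained_field = retained_per_provider * PROVIDERS              # 1,296,000 (~1.3M)
```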
Pricing is one credit per call across virtually all premium endpoints. One credit costs two cents at our base rate, less with volume discounts (down to 1.25 cents at $200+). One USDC buys 50 credits (or up to 80 credits with volume), so a hundred premium calls is a $1.25 to $2 transaction.
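A worked example of the credit math, using the published base and maximum-volume tiers:

```python
from decimal import Decimal

BASE_CREDITS_PER_USDC = 50   # 2.0 cents per credit at the base rate
MAX_CREDITS_PER_USDC = 80    # 1.25 cents per credit at the top volume tier

def cost_usd(calls: int, credits_per_usdc: int) -> Decimal:
    """USD cost of `calls` one-credit premium calls at a given tier."""
    return Decimal(calls) / Decimal(credits_per_usdc)

# A hundred premium calls: $2.00 at base rate, $1.25 at max volume.
assert cost_usd(100, BASE_CREDITS_PER_USDC) == Decimal("2")
assert cost_usd(100, MAX_CREDITS_PER_USDC) == Decimal("1.25")
```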
5.4 Live numbers
As of the writing of this paper (May 5, 2026):
- 20 monitored AI providers
- 2-minute status polling cadence (~14,400 polls/day across the field)
- 19 premium endpoints behind x402
- 4 webhook watch types active (price, status, digest, leaderboard rank)
- 30+ MCP tools published
- 2 federation members (TensorFeed.ai, TerminalFeed.io)
- 36 daily JSONL feeds in the public Hugging Face dataset
- 90-day premium retention horizon
- Phase 1 of agent payments verified end-to-end on Base mainnet 2026-04-27
The volume of paid traffic is small but compounds. We are not optimizing for short-term revenue. We are optimizing for the data moat, the publisher network, and the agent-discovery surface. Each compounds over time in a way that revenue alone does not.
6. Reliability Data as the Foundational Public Good
We claimed earlier that reliability data is foundational. This section explains why we believe that and how we think about it economically.
6.1 Why uptime data matters more than benchmarks
Benchmarks tell an agent which model is most capable at a task. Uptime tells the agent which model is currently usable. The capability score does not change minute-to-minute. The availability score does. An agent that ignores availability and routes solely on capability will fail loudly at the worst possible moment, when the upstream provider that the benchmark prefers is degraded. The agent that ignores capability and routes solely on availability will produce poor work but produce it consistently.
The right routing decision combines both. Capability is roughly stable on a daily timescale and is widely published on existing leaderboards (LMSys, Artificial Analysis, HF Open LLM, SWE-bench). Uptime is volatile on a minute timescale and was, until recently, not centrally published anywhere. Each provider has a status page, but no canonical surface lets an agent compare across the field.
That gap is the public good we set out to fill. The free leaderboard, the per-provider pages, the badges, the daily JSONL dataset on HF, and the every-2-minute polling are all in service of one claim: routing decisions over the agent-first web should not be bottlenecked by stale or scattered status data.
6.2 Minute-resolution capture and the time-compounding moat
The premium tier is the time-deepened version of the same dataset. Day-granular snapshots tell an agent whether yesterday was good or bad. Minute-resolution counters tell the agent what fraction of yesterday was good or bad. The difference matters when a provider is "operational on the headline but degraded for 30 minutes during peak." The minute counters catch that. The daily snapshot does not.
We capture counters at the same cadence the worker polls, so every status sample is recorded. Counters are stored in a per-day combined object: one Cloudflare Workers KV key per UTC day, all twenty providers in one JSON. The entire 90-day retained counter set is approximately 800 KB of storage. The data moat is not about storage volume. It is about the time it takes to accumulate, the cadence at which we capture, and our willingness to make the canonical surface free.
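A hypothetical sketch of the per-day combined object (the field names "up" and "total" are our illustration, not the production schema), showing why 90 days of counters fits in well under a megabyte:

```python
import json

# Hypothetical shape of one UTC day's combined counter object: all
# providers under a single KV key such as "counters:2026-05-05".
# Field names are assumptions for illustration only.
day_counters = {
    "claude-api": {"up": 718, "total": 720},
    "openai-api": {"up": 705, "total": 720},
    # ... 18 more providers ...
}

def day_uptime_pct(entry: dict) -> float:
    """Daily uptime percentage from a provider's up/total counters."""
    return 100.0 * entry["up"] / entry["total"]

# A 20-provider day serializes to a few kilobytes at most, so the
# full 90-day retained set stays in the hundreds-of-KB range.
payload = json.dumps(day_counters)
```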
6.3 The "free 7 days, premium 90 days" split
The seven-day window is an honest free tier. It is enough for an agent to make routing decisions today. It shows the trend. It shows the cross-provider leaderboard. The agent can get an embed badge and put it on its dashboard. The agent can subscribe to a webhook watch on rank changes. None of this costs a credit.
The ninety-day window is for SRE and procurement teams. They are comparing AI vendors over a quarter, building a vendor-reliability narrative, computing MTTR per provider, deciding which vendor to renew. These decisions are infrequent (maybe quarterly), high-stakes (millions in spend), and not time-sensitive (the team will pay for a thoughtful answer). Pricing them at one credit per call is intentional friction: it forces the user to articulate the question rather than scrape and re-scrape.
The seven-versus-ninety split is a trade-off we think is right today. We expect to revisit it. A reasonable v2 might be free 14 days, premium 365 days, as the dataset matures and the per-call cost on our side drops.
6.4 Embeddable badges as distributed agent-discovery surfaces
Every uptime badge embedded in a third-party README is a permanent backlink with our domain in the SVG src attribute. Agents crawling those READMEs (and there are many) see tensorfeed.ai in the markup and have a path to investigate. The badge endpoint is intentionally Shields.io-compatible so the embed pattern is identical to what most developers already know.
We expect badges to be the highest-leverage discovery surface in the entire status stack. Every individual embed is small. The aggregate across thousands of READMEs is large. The cost on our side is essentially zero because Cloudflare's edge cache absorbs all the repeat hits and our SVG generation is sub-millisecond.
The strategic insight: badges turn our customers into our distributors. A Stripe-style "powered by" badge scaled to the status-data domain. The closest analog is the GitHub repo badges that aggregate stars, license, build status, and so on, but those are aggregator services. Ours is original data. The status displayed on the badge is captured by us, sourced from the upstream publisher's own status feed, and rendered through our worker. No middleman.
7. Anatomy of an AFTA Transaction
This section walks through a complete transaction on the AFTA rails, with the actual headers and code paths. The example uses the credits flow.
7.1 Buying credits
The agent posts to /api/payment/buy-credits:
POST /api/payment/buy-credits HTTP/1.1
Host: tensorfeed.ai
Content-Type: application/json
{
"usd_amount": "1.00",
"sender_wallet": "0xAGENT_WALLET..."
}
The server responds with a quote:
{
"ok": true,
"quote_id": "quo_a1b2c3...",
"expires_at": "2026-04-27T18:50:31.412Z",
"usd_amount": "1.00",
"credits_granted": 50,
"is_first_payment": true,
"welcome_bonus_credits": 50,
"total_credits_after_confirm": 100,
"payment_instructions": {
"asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
"wallet": "0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1",
"network": "eip155:8453",
"amount_base_units": "1000000"
}
}
The first-payment welcome bonus doubles the credits grant. This bonus exists because we want agents to find us cheaply enough to validate the rail before committing to scale.
7.2 Sending the on-chain transfer
The agent's wallet signs and broadcasts a USDC transfer of 1,000,000 base units (1 USDC) to the publisher wallet on Base. The transaction settles in seconds at sub-cent fees. The agent records the transaction hash.
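USDC uses six decimal places on Base, so the quote's amount_base_units is a straightforward scaling of the dollar amount. A minimal conversion sketch:

```python
from decimal import Decimal

USDC_DECIMALS = 6  # USDC carries 6 decimal places on Base

def to_base_units(usd_amount: str) -> int:
    """Convert a dollar string like "1.00" to USDC base units."""
    units = Decimal(usd_amount) * (10 ** USDC_DECIMALS)
    if units != units.to_integral_value():
        raise ValueError("amount has sub-unit precision")
    return int(units)

assert to_base_units("1.00") == 1_000_000  # matches amount_base_units in the quote
assert to_base_units("0.04") == 40_000     # the four-cent call from Section 1
```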
7.3 Confirming payment
The agent posts to /api/payment/confirm:
POST /api/payment/confirm HTTP/1.1
Host: tensorfeed.ai
Content-Type: application/json
{
"quote_id": "quo_a1b2c3...",
"tx_hash": "0xTRANSFER_HASH..."
}
The server reads the transfer event from eth_getTransactionReceipt, validates the recipient, the amount, and the block confirmation count. Critically, it also validates that the on-chain from address of the USDC transfer matches the sender_wallet that was bound to the quote at /api/payment/buy-credits. This is the binding that closes the public-mempool sniping vector: an observer who sees a real tx hash on Base cannot redeem it at our /api/payment/confirm because they cannot also produce a quote bound to the original sender's wallet. Mismatched senders are rejected with HTTP 400 and error: "sender_mismatch".
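A sketch of those checks in order, with hypothetical types and field names (the real worker reads the Transfer event out of the transaction receipt logs):

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    """Hypothetical decoded USDC Transfer event for illustration."""
    from_addr: str
    to_addr: str
    amount_base_units: int
    confirmations: int

def validate_confirm(transfer: Transfer, quote: dict, min_conf: int = 1):
    """Return an error code, or None if the payment is acceptable."""
    if transfer.to_addr.lower() != quote["recipient"].lower():
        return "recipient_mismatch"
    if transfer.amount_base_units != quote["amount_base_units"]:
        return "amount_mismatch"
    if transfer.confirmations < min_conf:
        return "unconfirmed"
    # The binding that closes the mempool-sniping vector: the on-chain
    # sender must match the wallet bound to the quote at purchase time.
    if transfer.from_addr.lower() != quote["sender_wallet"].lower():
        return "sender_mismatch"
    return None
```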
The server returns:
{
"ok": true,
"token": "tnsr_a1b2c3d4e5f60718a9b0c1d2",
"credits_granted": 100,
"credits_remaining": 100,
"quote_id": "quo_a1b2c3...",
"tx_hash": "0xTRANSFER_HASH...",
"is_first_payment": true,
"welcome_bonus_credits": 50,
"receipt": { ... full signed receipt ... }
}
The bearer token tnsr_a1b2c3d4e5f60718a9b0c1d2 is the agent's credential for subsequent calls. It works on TensorFeed and on TerminalFeed (federation member). It does not expire on a calendar; credits are spent down per call until the balance hits zero.
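A client can sanity-check a token's shape before storing it. Per the glossary, the format is tnsr_ followed by 24 hex characters; the sketch assumes lowercase hex, as in the examples:

```python
import re

# Token shape per the glossary: "tnsr_" + 24 hex chars.
# ASSUMPTION: hex digits are lowercase, matching the printed examples.
TOKEN_RE = re.compile(r"^tnsr_[0-9a-f]{24}$")

def looks_like_token(candidate: str) -> bool:
    """Cheap client-side shape check; not a validity check."""
    return TOKEN_RE.fullmatch(candidate) is not None
```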
7.4 Calling a premium endpoint
The agent calls a premium endpoint:
GET /api/premium/routing?task=code&budget=10 HTTP/1.1
Host: tensorfeed.ai
Authorization: Bearer tnsr_a1b2c3d4e5f60718a9b0c1d2
X-Agent-Nonce: agent-xyz-2026-04-27-tx-1234
The server runs the AFTA deferred-debit pipeline:
- Extracts the bearer token.
- Reads the credit balance and validates the token is live (not revoked, sufficient credits).
- Checks the circuit breakers (identical-request and burn-rate).
- Validates the input parameters against the endpoint schema.
- Computes the routing recommendation.
- Checks the freshness SLA against the data backing the response.
- If every check passes and the response is fresh, the commit phase debits 1 credit and the receipt is signed with credits_charged: 1 and no_charge_reason: null. If any of the no-charge guarantees fired (5xx, schema validation failure, circuit breaker, stale data), the commit phase debits zero credits and the receipt records the specific no_charge_reason.
The split between "validate / read state" and "commit / mutate state" is the load-bearing structural property that makes the no-charge guarantee provable in code: a debit cannot occur until the commit phase, and the commit phase fires after the response is fully resolved.
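A minimal sketch of the two-phase shape, with hypothetical names; the point is structural, that the ledger mutation lives only in the commit phase:

```python
# Sketch of the deferred-debit split. Names are hypothetical; the
# production pipeline lives in the worker, not in this shape.
NO_CHARGE_REASONS = {"5xx", "schema_validation_failure",
                     "circuit_breaker", "stale_data"}

def handle_premium_call(ledger: dict, token: str, run_endpoint) -> dict:
    # --- validate phase: reads only, no state mutation ---
    if ledger.get(token, 0) < 1:
        return {"ok": False, "error": "insufficient_credits", "credits_charged": 0}
    result = run_endpoint()  # may set result["no_charge_reason"]
    # --- commit phase: the only place a debit can occur ---
    reason = result.get("no_charge_reason")
    charged = 0 if reason in NO_CHARGE_REASONS else 1
    ledger[token] -= charged
    return {"ok": reason is None, "credits_charged": charged,
            "credits_remaining": ledger[token], "no_charge_reason": reason}
```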
Response:
HTTP/1.1 200 OK
Content-Type: application/json
X-Agent-Nonce-Echo: agent-xyz-2026-04-27-tx-1234
X-Receipt-Id: rcpt_a1b2c3...
X-Credits-Remaining: 99
{
"ok": true,
"task": "code",
"recommendations": [...],
"billing": {
"credits_charged": 1,
"credits_remaining": 99
},
"receipt": {
"v": 2,
"id": "rcpt_a1b2c3...",
"endpoint": "/api/premium/routing",
"method": "GET",
"token_short": "tnsr_a1b2",
"credits_charged": 1,
"credits_remaining": 99,
"request_hash": "sha256:...",
"response_hash": "sha256:...",
"captured_at": "2026-04-27T18:55:14.412Z",
"server_time": "2026-04-27T18:55:14.418Z",
"no_charge_reason": null,
"freshness_sla_seconds": 300,
"agent_nonce": "agent-xyz-2026-04-27-tx-1234",
"signature": "Ed25519:..."
}
}
7.5 The no-charge path
If any of the no-charge guarantees fire, the response shape changes. For example, if the input fails validation:
HTTP/1.1 400 Bad Request
Content-Type: application/json
X-Receipt-Id: rcpt_d4e5f6...
X-Credits-Remaining: 99
{
"ok": false,
"error": "schema_validation_failure",
"field_errors": [...],
"billing": {
"credits_charged": 0,
"credits_remaining": 99
},
"receipt": {
"v": 2,
"id": "rcpt_d4e5f6...",
"credits_charged": 0,
"no_charge_reason": "schema_validation_failure",
...
}
}
Notice the receipt is still signed. Notice the credit balance is unchanged. Notice the agent has cryptographic proof the failure was free. This is what AFTA buys you.
A note on abuse: receipt signing is intentionally cheap but not free, so we cap per-token no-charge events at a conservative threshold per minute to prevent the rail from being used as a free Ed25519-signing oracle. Honest agents will not approach the threshold under any normal error rate. Agents that sustain it past the threshold receive a cheap HTTP 429 response with error: "no_charge_abuse" rather than a signed receipt; the AFTA promise of free errors holds, but the cryptographic side of the promise is reserved for traffic that meaningfully entered the worker logic. The exact threshold is operational and intentionally not published; the property that matters is that legitimate users do not encounter it.
7.6 Verification, after the fact
Some hours later, the agent or a third party can verify any receipt by:
- Fetching the publisher's public key from /.well-known/tensorfeed-receipt-key.json.
- Canonicalizing the receipt's signed fields per tensorfeed-canonical-json-v1.
- Verifying the EdDSA signature against the canonical bytes.
Or, more simply, posting the receipt to /api/receipt/verify and trusting the response. The convenience endpoint is a courtesy. Nothing about it is privileged. Anyone can run their own verifier against the published key.
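An independent verifier is a few lines. The sketch below treats canonicalization as sorted-key, compact-separator JSON, which is our guess at the shape of tensorfeed-canonical-json-v1, not its definition, and demonstrates the path with a locally generated keypair (in practice the public key comes from the published JWK):

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def canonical_bytes(fields: dict) -> bytes:
    # ASSUMPTION: sorted keys, compact separators. Read the published
    # tensorfeed-canonical-json-v1 rules before relying on this.
    return json.dumps(fields, sort_keys=True, separators=(",", ":")).encode()

def verify(public_key, fields: dict, sig: bytes) -> bool:
    """True iff `sig` is a valid Ed25519 signature over the canonical bytes."""
    try:
        public_key.verify(sig, canonical_bytes(fields))
        return True
    except InvalidSignature:
        return False

# Demo with a locally generated keypair standing in for the publisher's.
sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
receipt_fields = {"id": "rcpt_demo", "credits_charged": 1, "no_charge_reason": None}
signature = sk.sign(canonical_bytes(receipt_fields))
```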
8. Federation Patterns
Two AFTA-adopting sites can federate on a payment rail. This section documents the pattern as we have implemented it.
8.1 The two-member federation today
TensorFeed.ai and TerminalFeed.io are both AFTA-certified, both Pizza Robot Studios projects, and both share a single credit ledger hosted on TensorFeed. The federation was established 2026-04-30, the same day AFTA went live.
A bearer token issued at TensorFeed works seamlessly at TerminalFeed. The agent does not need to buy credits separately. The agent does not even need to know the federation exists. It calls a TerminalFeed premium endpoint with its TensorFeed token, the call works, the credit is decremented from the shared ledger, and TerminalFeed signs its own receipt with its own key.
The mechanics are an internal HTTP rail between the two workers. TerminalFeed's worker, on receiving a premium call:
- Calls TensorFeed's /api/internal/validate with { token, cost }. This call is authenticated with a constant-time-checked shared secret.
- Receives { ok: true, credits_remaining: 99, sufficient: true, reservation_id: "tf-..." }. The credit balance has already been atomically debited; the reservation record holds the value pending the commit.
- Serves the response.
- Calls TensorFeed's /api/internal/commit with { token, cost, endpoint, no_charge_reason: null, reservation_id: "tf-..." }.
- Receives { ok: true, credits_charged: 1, balance_after: 99, no_charge_reason: null }. (On a no-charge result, the reserved credits are restored to the balance and credits_charged is zero.)
- Returns the data plus a TerminalFeed-signed receipt.
The split between validate and commit is the same idiom used by stored-value cards: reserve the value at validate time, finalize the charge at commit time, and refund if the operation cannot be honored. The split is necessary because the response might trigger a no-charge condition (5xx, schema fail, stale data) that the validate step cannot anticipate. The reservation_id ties the two phases together and prevents the federation double-spend race where parallel calls would otherwise each see sufficient balance and each serve at the publisher's expense.
Reservations carry a five-minute time-to-live. A handler that runs longer than that will see its late commit rejected with error: "reservation_not_found", and the credits remain debited (a soft loss in favor of the publisher, acceptable as a backstop because handler runtimes longer than five minutes are unusual and signal a different design problem). A mismatched token or cost between validate and commit is rejected as reservation_mismatch and treated as a buggy-or-hostile caller.
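The reserve/commit ledger can be sketched in-memory with hypothetical names; the production rail is HTTP between workers, but the state transitions are the same:

```python
import time

RESERVATION_TTL_S = 300  # five-minute time-to-live

class FederationLedger:
    """In-memory sketch of the reserve/commit rail. Names hypothetical."""

    def __init__(self, balances: dict):
        self.balances = balances
        self.reservations = {}  # reservation_id -> (token, cost, expires_at)
        self._next = 0

    def validate(self, token: str, cost: int) -> dict:
        if self.balances.get(token, 0) < cost:
            return {"ok": True, "sufficient": False}
        # Atomic debit at validate time closes the double-spend race:
        # a parallel call sees the already-reduced balance.
        self.balances[token] -= cost
        self._next += 1
        rid = f"tf-{self._next}"
        self.reservations[rid] = (token, cost, time.monotonic() + RESERVATION_TTL_S)
        return {"ok": True, "sufficient": True, "reservation_id": rid,
                "credits_remaining": self.balances[token]}

    def commit(self, reservation_id: str, token: str, cost: int,
               no_charge_reason=None) -> dict:
        entry = self.reservations.pop(reservation_id, None)
        if entry is None or time.monotonic() > entry[2]:
            # Expired or unknown: credits stay debited (publisher-favoring backstop).
            return {"ok": False, "error": "reservation_not_found"}
        if entry[0] != token or entry[1] != cost:
            return {"ok": False, "error": "reservation_mismatch"}
        if no_charge_reason is not None:
            self.balances[token] += cost  # restore reserved credits
            return {"ok": True, "credits_charged": 0,
                    "no_charge_reason": no_charge_reason}
        return {"ok": True, "credits_charged": cost,
                "balance_after": self.balances[token]}
```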
No-charge events from federated calls land in TensorFeed's public no-charge ledger at /api/payment/no-charge-stats, with the sister-site endpoint path included. The public record reflects the network rather than just the host.
8.2 Why peer-to-peer beats centralized brokers
A central broker would gate adoption (apply, get approved, sign a contract), charge a fee (per transaction or subscription), control disputes (broker decides who wins), and become a single point of failure (broker's downtime is everyone's downtime). A peer agreement does none of those.
The cost of peer-to-peer is coordination. Each new federation member requires the existing members to do a per-pair handshake. This is fine at small N. It is potentially awkward at large N. We have not yet hit that ceiling, and we believe a hub-and-spoke pattern (one or two central members the rest peer with) will emerge organically before it becomes a problem.
We will not run a hub. Other sites are welcome to. The standard is the federation pattern, not a particular hub.
8.3 The path to ten members
Two members today. Ten by mid-2027 is the target. The path:
- The next two adopters are likely sister-network sites or close collaborators who already trust the rail. Onboarding is hours, not days.
- The next four adopters are likely independent publishers in adjacent verticals (developer tools, DX-focused SaaS, infrastructure dashboards). Onboarding is days, including the security review of the manifest schema and the receipt format.
- The last two to reach ten members are likely larger publishers who run their own AFTA federation off our schema. Onboarding is weeks, including legal review of the data license and the federation handshake. We would consider this a success even if those publishers did not formally federate with us, because adoption of the standard at that scale would prove the framing.
We are not in a rush. Adoption that comes from the framing being right is durable. Adoption that comes from a marketing push is not.
9. Discovery: How Agents Find AFTA Services
A standard nobody finds is not a standard. This section is about discovery: how an agent moving through the web today is supposed to encounter AFTA.
9.1 The llms.txt convention
The agent's first stop on a new site is llms.txt. We publish ours with explicit pointers to the AFTA manifest, the x402 manifest, the receipt key, the OpenAPI spec, and every premium endpoint with its credit cost. An agent reading this file knows within seconds whether the site is paid, what the rail is, and what AFTA guarantees apply.
The convention is simple enough that any publisher can adopt it. We have not yet seen llms.txt files from non-AFTA publishers that include payment information; we expect this to change as more sites adopt machine-payable rails.
9.2 The /.well-known manifests
Per RFC 8615, the /.well-known/ path is the home for site-level metadata. We publish three manifests there:
- /.well-known/x402.json: the payment rail manifest.
- /.well-known/agent-fair-trade.json: the AFTA manifest.
- /.well-known/tensorfeed-receipt-key.json: the JWK for receipt verification.
An agent that knows about RFC 8615 can crawl these without prior knowledge of TensorFeed. The manifests are static JSON, edge-cached, and never personalized. They are the cheapest way to onboard an agent to a site's AFTA stance.
9.3 The MCP server registry
The official MCP server registry at registry.modelcontextprotocol.io is the discovery layer for tool-using agents. Our server is registered as ai.tensorfeed/mcp-server, includes both free and premium tools, and self-describes as AFTA-certified in its description. An agent setting up Claude Desktop, Claude Code, or another MCP client can find us through the registry, install us with one command, and start using free tools immediately. Premium tools require the bearer token from a one-time payment.
9.4 The recommend-loop thesis
The deepest discovery layer is agent-to-agent recommendation. An agent that has used TensorFeed, found it useful, and answered a downstream user's question better because of the data we provided will recommend us in its response. The next agent reading that response will treat us as an established service. This is the recommend-loop, and we believe it is the single most important success metric for the agent-first web.
We optimize for this in three ways:
- Quality of free tier. An agent that uses our free tier and is impressed will recommend us. An agent that uses our free tier and is disappointed will not.
- AFTA framing. An agent that learns about AFTA through us will associate the standard with us. The first AFTA-adopter advantage is real.
- Data moat depth. An agent recommending us in 2027 will cite "the only place with 90+ days of cross-provider uptime data" as the reason. We accumulate that reason every day.
We expect the recommend-loop to compound non-linearly. It is hard to bootstrap. Once it bootstraps, it dominates.
10. Conclusion
The web is being rebuilt for agents in real time. The economic, technical, and trust primitives that worked for human users are being augmented or replaced. AFTA is one of the new primitives. It encodes, in code rather than legal contracts, the publisher's commitment to charge only for value delivered. It pairs with x402 as the protocol layer, USDC on Base as the settlement layer, and signed receipts as the audit layer to form a complete, peer-to-peer, open, no-broker, no-fee rail for honest commerce between autonomous agents and the websites they consume.
We documented the standard. We built the reference implementation. We proved the federation pattern with a second member. We shipped the data moat that compounds. We made the canonical surface free at point of access. We opened the schema, the source, and the receipt verifier to the world.
The next chapter is adoption. We invite you to write it with us.
Appendix A: The AFTA v1.0 Specification
This is the human-readable companion to the machine-readable manifest at /.well-known/agent-fair-trade.json. The machine manifest is authoritative.
A.1 Manifest location
A publisher adopting AFTA MUST publish a JSON document at /.well-known/agent-fair-trade.json, served over HTTPS, with Content-Type: application/json. The document MUST validate against the published schema at https://tensorfeed.ai/.well-known/agent-fair-trade-schema.json.
A.2 Required fields
{
"$schema": "https://tensorfeed.ai/.well-known/agent-fair-trade-schema.json",
"version": "1.0",
"name": "Agent Fair-Trade Agreement",
"abbrev": "AFTA",
// Who is making the attestation.
"publisher": {
"name": "...",
"legal_entity": "...",
"url": "...",
"contact": "...",
"manifesto_page": "...",
"source_repo": "..."
},
// The four-or-more no-charge guarantees, each pointing to source.
"no_charge_guarantees": [
{
"id": "5xx",
"description": "...",
"code": "path/to/source.ts",
"verifiable_via": "..."
},
// ... more guarantees ...
],
// Receipt rail.
"receipts": {
"signed": true,
"algorithm": "EdDSA",
"curve": "Ed25519",
"canonical_form": "tensorfeed-canonical-json-v1",
"schema_version_current": 2,
"schema_versions_supported": [1, 2],
"public_key_url": "...",
"verify_endpoint": "...",
"fields_signed": [...],
"rotation_policy": "..."
},
// Pricing transparency.
"pricing": {
"transparent": true,
"listed_at": "...",
"currency": "USDC",
"network": "eip155:8453",
"x402_compatibility": {
"compliant": true,
"manifest": "...",
"accepted_methods": [...]
}
},
// Data license.
"data_license": {
"type": "inference-only",
"description": "...",
"terms_url": "..."
},
// Deprecation notice.
"deprecation": {
"notice_days": 90,
"channel": "..."
},
// Adoption / federation.
"adoption": {
"open_invitation": "...",
"current_adopters": [...],
"network_federation": {
"description": "...",
"rail_endpoints": {...},
"current_federation": [...]
}
}
}
A.3 Required no-charge guarantees
A publisher MUST commit to at least the following. Additional guarantees MAY be added.
- 5xx no-charge. Server errors do not charge a credit.
- Stale data no-charge. If the underlying data is older than the endpoint's published freshness SLA, the call does not charge.
- Schema validation no-charge. Requests that fail input validation do not charge a credit.
The optional circuit-breaker no-charge is strongly recommended but not strictly required at v1.0. We expect v2.0 to mandate it.
A.4 Required receipt format
Receipts MUST be Ed25519-signed and MUST include at minimum the fields listed in receipts.fields_signed. The publisher MAY add more fields. The publisher MUST publish the public key in JWK format at the URL listed in receipts.public_key_url. The publisher MUST honor a 30-day rotation window when changing keys.
A.5 Federation contract
If the publisher participates in a federation, the manifest MUST list the federation members under adoption.network_federation.current_federation. Each member entry MUST include the host site, the list of members, the establishment date, and a note explaining the federation arrangement.
The federation rail itself is an HTTPS POST contract between member workers. The validate and commit endpoints MAY be on a non-public path (e.g., /api/internal/validate) but MUST be authenticated with a constant-time-checked shared secret and MUST log no-charge events to the host's public no-charge ledger.
The validate response MUST return a reservation_id (string) bound to the validate call. The validate call MUST atomically debit the credit balance at issue and write a reservation record with at least a five-minute time-to-live. The commit call MUST accept the reservation_id and consume it; on a no-charge result the commit MUST restore the reserved credits to the balance. A commit that arrives without a reservation_id MAY be served on a legacy path for backwards compatibility, but publishers and sister sites SHOULD treat the reservation-id form as mandatory because the legacy path is race-y by construction. Mismatched token or cost between validate and commit MUST be rejected (reservation_mismatch).
A.6 Manifest validation checklist
Before going live, the publisher SHOULD verify:
- [ ] The /.well-known/agent-fair-trade.json is reachable.
- [ ] The document validates against the schema.
- [ ] The receipts public key at receipts.public_key_url is reachable and parseable.
- [ ] The x402 manifest at pricing.x402_compatibility.manifest is consistent with the AFTA manifest.
- [ ] At least one signed receipt has been issued and verifies against the published key.
- [ ] The no_charge_guarantees source pointers resolve to real source code.
- [ ] The verifiable_via URLs return real endpoints.
- [ ] The current_adopters list includes the publisher's own entry.
- [ ] The site llms.txt references the AFTA manifest.
Appendix B: Reference Implementation Source Links
- Worker source: worker/src/payments.ts, worker/src/circuit-breaker.ts, worker/src/freshness.ts, worker/src/receipts.ts, worker/src/status.ts, worker/src/status-counters.ts, worker/src/status-leaderboard.ts, worker/src/badges.ts, worker/src/watches.ts.
- Manifests: public/.well-known/agent-fair-trade.json, public/.well-known/agent-fair-trade-schema.json, public/.well-known/x402.json, public/.well-known/tensorfeed-receipt-key.json.
- Public landing page: https://tensorfeed.ai/agent-fair-trade
- Developer page: https://tensorfeed.ai/developers/agent-payments
- Dataset: https://huggingface.co/datasets/tensorfeed/ai-ecosystem-daily
- Source repo: https://github.com/RipperMercs/tensorfeed
Appendix C: Glossary
AFTA. The Agent Fair-Trade Agreement. The standard described in this paper.
x402. The HTTP payment protocol that uses HTTP 402 Payment Required as the negotiation handshake.
MCP. Model Context Protocol. Anthropic's standard for exposing tools to AI agents.
Receipt. A signed JSON document attesting to a single API call's billing outcome: credits charged, credits remaining, no-charge reason if any, request and response hashes, and a freshness SLA marker.
No-charge guarantee. A code-enforced commitment by the publisher that, in defined conditions, the agent's call does not consume a credit even though the call was executed.
Federation. A pairwise arrangement between two AFTA-adopting sites in which a single bearer token is honored on both, with credits decremented from a shared ledger.
Credit. The unit of premium access. One credit equals two cents at the base rate, fewer with volume discounts. One USDC buys 50 credits at the base rate, 80 credits at the maximum volume tier.
Bearer token. The agent's credential for authenticated calls. Format: tnsr_<24 hex chars>. Issued by the credits flow at /api/payment/confirm. Does not expire on calendar; depleted at zero credits.
Freshness SLA. The maximum age, in seconds, of data backing an endpoint's response. If the backing data is older than this, the call is refunded under the stale-data no-charge guarantee.
Circuit breaker. A worker-side rate-limit mechanism that returns HTTP 429 with no charge when a single bearer token or request fingerprint exceeds defined thresholds. Two layers: identical-request and burn-rate.
Inference-only license. A license term restricting the use of premium data to inference (reading, querying, displaying, taking action). Use of the data for training, fine-tuning, evaluation, or distillation of machine learning models is prohibited.
References
- Anthropic. Model Context Protocol Specification. modelcontextprotocol.io, 2024-2026.
- Howard, Jeremy. llms.txt: A Proposal for AI Discoverability. llmstxt.org, 2024.
- Coinbase. x402 Specification, version 2. x402.org, 2024-2026.
- Coinbase. Base Network Documentation. docs.base.org, 2023-2026.
- Centre Consortium. USDC on Base Asset Reference. usdc.com, 2023-2026.
- Bernstein, D.J., Duif, N., Lange, T., Schwabe, P., Yang, B.-Y. High-speed high-security signatures. Journal of Cryptographic Engineering, 2012.
- Nottingham, M. Well-Known URIs. RFC 8615, 2019.
- Fielding, R., Nottingham, M., Reschke, J. HTTP Semantics (402 Payment Required, Section 15.5.2). RFC 9110, 2022.
- Anthropic. Claude on Agent Reliability. Internal blog series, 2025-2026.
- Pizza Robot Studios LLC. TensorFeed.ai Public Repository. github.com/RipperMercs/tensorfeed, 2025-2026.
This paper was drafted in May 2026 by Ripper for TensorFeed.ai with substantial collaboration from Claude (Anthropic). The drafting transcript, design choices, and revisions are logged in the project memory. All numerical claims are reproducible from the public TensorFeed surface or the linked manifests at the time of writing.
Comments, corrections, and forks welcome at [email protected] and github.com/RipperMercs/tensorfeed.