The AI Cyber Tier Now Has a Data Layer. It Is Token-Optimized, Pay-Per-Call, and Live.
The week opened with Anthropic shipping Mythos. Capability triggered policy. Policy triggered procurement. By Wednesday OpenAI had answered with GPT-5.5-Cyber. By Thursday CAISI had pre-deployment evaluation agreements with three more frontier labs. The cyber tier became a real product category in five business days.
Capability without infrastructure is a demo. The week closes with us shipping the data infrastructure agents actually need to do something useful with cyber-tier capability: six data domains, twenty-seven endpoints, fifteen of them x402-billable, all live as of last night.
What landed
The security data layer shipped first because it pairs cleanly with the cyber-tier story. Three corpora, fully redistributable, each answering a different question agents ask:
- MITRE CVE List. /api/security/cve/{CVE-id}. What is this vulnerability. ~270K records, lazy-fetched and cached. Commercial redistribution is explicitly permitted under the MITRE Terms of Use.
- CISA Known Exploited Vulnerabilities. /api/security/kev. Is anyone actually exploiting it. ~1,500 confirmed in-the-wild CVEs, refreshed daily, US Government public domain.
- EPSS (FIRST.org). /api/security/epss/{CVE-id}. How likely is it to be exploited soon. ~330K daily scores estimating exploitation probability over the next 30 days.
A code-review agent triaging a dependency upgrade now hits all three in one coherent loop instead of stitching across five different vendor portals with five different authentication schemes. CVE answers what. KEV answers whether it is being exploited. EPSS answers when. The agent decides whether to deploy the patch tonight or schedule it for the next maintenance window. The whole loop costs about five cents.
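That loop is small enough to sketch end to end. A minimal Python version, with mock responses standing in for the three endpoints; the 0.3 EPSS threshold, the 7.0 CVSS cutoff, and the response shapes are illustrative assumptions, not the published API contract:

```python
def triage(cve_record, kev_ids, epss_score):
    """Decide patch urgency from the three security feeds."""
    if cve_record["id"] in kev_ids:
        return "patch-tonight"            # confirmed in-the-wild exploitation
    if epss_score >= 0.3 and cve_record.get("cvss", 0.0) >= 7.0:
        return "patch-tonight"            # likely soon, high impact
    return "next-maintenance-window"

# Mock responses standing in for the three free endpoints:
cve = {"id": "CVE-2024-3094", "cvss": 10.0}   # /api/security/cve/{CVE-id}
kev_ids = {"CVE-2024-3094"}                   # /api/security/kev
epss = 0.92                                   # /api/security/epss/{CVE-id}

decision = triage(cve, kev_ids, epss)  # -> "patch-tonight"
```

A real agent would fetch each endpoint over HTTP; the decision function stays the same.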
Three more domains shipped the same day, proving the rail works across subject matter:
- NASA POWER. 40+ years of global meteorological and solar data at half-degree resolution. License: US Government public domain. Useful for any agent doing energy siting, agricultural moisture forecasting, or climate risk modeling.
- OpenFDA. 100M+ records across drug adverse events, drug labels, food recalls, device events. License: CC0. Healthcare and compliance copilots can now query an authoritative regulatory feed at $0.02 a call.
- EIA Open Data. 2.2M+ time series across petroleum, natural gas, electricity, coal, total energy. License: US Government public domain. Pairs with our existing FRED and BLS macroeconomic feeds.
Why we did the cleaning instead of the agent
Anyone can hit MITRE directly. Anyone can scrape CISA. The free version of our security suite is a nicer wrapper around the same upstream bytes. That is a real distribution moat, but it is not the deep one.
The deep moat is the transform. A typical raw CVE record from MITRE comes in around three kilobytes of nested JSON: containers, multilingual descriptions, complex CVSS metric arrays, deduped CWE structures, multiple provider-specific subobjects. An agent reading that record spends 800 to 1,200 input tokens just to find the four or five fields it actually needs to make a decision. Multiply by a thousand records in a triage workflow and the context-window tax dwarfs any other line item in the cost report.
We shipped the LLM-ready transform layer to amortize that tax. Hit /api/premium/clean/cve/{CVE-id} for $0.02 and you get the same record flattened to roughly 500 bytes: summary, CVSS score, severity band, deduped CWEs, top references, affected products. About an 80% token reduction with zero information loss for agent decisions. The math is now strictly favorable: we charge two cents and save the agent five.
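The flatten itself is not magic, just careful. A sketch of the transform, assuming MITRE's CVE JSON 5.x layout (containers → cna → descriptions, metrics, problemTypes); the exact field paths and the trimmed mock record here are illustrative, not the production schema:

```python
import json

def clean_cve(raw):
    """Flatten a nested CVE record down to the decision-relevant fields."""
    cna = raw.get("containers", {}).get("cna", {})
    cvss = cna.get("metrics", [{}])[0].get("cvssV3_1", {})
    return {
        "id": raw.get("cveMetadata", {}).get("cveId"),
        "summary": next((d["value"] for d in cna.get("descriptions", [])
                         if d.get("lang") == "en"), ""),
        "cvss": cvss.get("baseScore"),
        "severity": cvss.get("baseSeverity"),
        "cwes": sorted({p["cweId"] for pt in cna.get("problemTypes", [])
                        for p in pt.get("descriptions", []) if "cweId" in p}),
    }

# A trimmed mock record; real ones run ~3 KB with many more subobjects.
raw = {
    "cveMetadata": {"cveId": "CVE-2024-3094"},
    "containers": {"cna": {
        "descriptions": [{"lang": "en", "value": "Backdoor in xz/liblzma."}],
        "metrics": [{"cvssV3_1": {"baseScore": 10.0,
                                  "baseSeverity": "CRITICAL"}}],
        "problemTypes": [{"descriptions": [{"cweId": "CWE-506"}]}],
    }},
}

clean = clean_cve(raw)
print(json.dumps(clean))
```

The output is one flat object an agent can read in a few dozen tokens instead of a thousand.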
The transform layer landed across all six domains the same night. NASA POWER's parameter-keyed dicts pivot into agent-friendly date-keyed rows. EIA series come pre-sorted ascending with extracted units and derived month-over-month and year-over-year deltas baked in. OpenFDA adverse events flatten patient demographics, drugs, reactions, and seriousness flags into one line per record. Same upstream truth, dramatically less work for the agent reading it.
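Two of those transforms are small enough to sketch. A hypothetical version of the NASA POWER pivot and the EIA delta derivation, with input shapes assumed from the upstream JSON formats rather than taken from our production code:

```python
def pivot_power(params):
    """Pivot NASA POWER's parameter-keyed dicts into date-keyed rows.
    Assumed input shape: {"T2M": {"20240101": 3.1, ...}, ...}."""
    dates = sorted({d for series in params.values() for d in series})
    return [{"date": d, **{p: series.get(d) for p, series in params.items()}}
            for d in dates]

def with_deltas(series):
    """Sort an EIA-style (period, value) series ascending and attach
    month-over-month deltas."""
    series = sorted(series)
    return [{"period": p, "value": v,
             "mom": (v - series[i - 1][1]) if i else None}
            for i, (p, v) in enumerate(series)]

rows = pivot_power({"T2M": {"20240101": 3.1, "20240102": 2.4},
                    "ALLSKY_SFC_SW_DWN": {"20240101": 1.8}})
deltas = with_deltas([("2024-02", 105.0), ("2024-01", 100.0)])
```

Dates missing a parameter come back as nulls rather than silently dropped rows, which keeps the agent's view of the series honest.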
The vending-machine framing
Most data brokers in 2026 are still trying to sell $5,000-per-month enterprise API keys to humans with procurement budgets. That model breaks when the buyer is a piece of software routing against open standards at loop speed. Software does not have a procurement department. Software will not wait six weeks for a contract review. Software pays a fraction of a cent in two seconds, gets the answer, moves on.
AWS made this concrete last week when it made x402 the default settlement layer for agents on Bedrock. Stripe is dancing around it. The protocol just crossed from speculative to inevitable. What we shipped this week is what the protocol is for: real data, fairly priced, instantly settled, no accounts, no negotiation, no friction.
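The whole pay-per-call cycle fits in a few lines. A toy sketch of the 402-quote-pay-retry loop with an in-process mock server; the X-PAYMENT header and `accepts` quote shape follow the public x402 draft, and the payment payload here is a stand-in, not a real signed USDC transfer:

```python
import base64, json

def mock_server(path, headers):
    """Stand-in for an x402-enabled endpoint."""
    if "X-PAYMENT" not in headers:
        # First call: quote the price instead of returning the data.
        return 402, {"accepts": [{"asset": "USDC", "amount": "0.02",
                                  "network": "base"}]}
    # Paid call: settle and return the payload.
    return 200, b'{"id": "CVE-2024-3094", "cvss": 10.0}'

def agent_fetch(path):
    """Pay-per-call loop: request, read the 402 quote, pay, retry."""
    status, body = mock_server(path, {})
    if status == 402:
        quote = body["accepts"][0]
        payment = base64.b64encode(
            json.dumps({"pay": quote}).encode()).decode()
        status, body = mock_server(path, {"X-PAYMENT": payment})
    if status != 200:
        raise RuntimeError("payment not accepted")
    return json.loads(body)

record = agent_fetch("/api/premium/clean/cve/CVE-2024-3094")
```

No API key, no account, no contract: the price travels in the 402 response and the payment travels in the retry.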
The trust layer matters too
Cheap and fast are necessary but not sufficient. Agents acting on security data are about to make consequential decisions: whether to ship a patch, whether to flag a transaction, whether to halt a deployment. They need to know the data they paid for is the data they asked for, and that they were not charged when something broke.
That is the Agent Fair-Trade Agreement. Code-enforced no-charge on 5xx, circuit breaker trips, schema validation failures, and stale data. Every paid response carries an Ed25519-signed receipt verifiable at /api/receipt/verify with a public key at /.well-known/tensorfeed-receipt-key.json. Settlement happens on Base mainnet so every credit purchase is an immutable on-chain attestation alongside the receipt rail.
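From the client's side, the fair-trade guarantee reduces to a predicate over each response. A sketch of the billing guard under assumed inputs; the one-day staleness window and the `data` envelope are illustrative, not the AFTA wire format:

```python
import time

def should_bill(status, payload, data_timestamp, max_age_s=86400.0):
    """Return True only when the paid call deserves its charge."""
    if status >= 500:
        return False                  # upstream failure: no charge
    if not isinstance(payload, dict) or "data" not in payload:
        return False                  # schema validation failure: no charge
    if time.time() - data_timestamp > max_age_s:
        return False                  # stale data: no charge
    return True
```

The same predicate runs server-side before settlement, which is what makes the agreement code-enforced rather than a refund policy.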
The federation now has a second member. TerminalFeed.io adopted AFTA last week with full V2 wire-format compliance. Both sites cross-verify each other at /api/afta-certify/check. Trust scales when other publishers carry the same standard, and it is starting to.
What is next
The data acquisition story keeps rolling. Foursquare Open Source Places (100M+ POIs, Apache-2.0) is queued for next week. The OSV.dev, GitHub Security Advisories, and CISA Vulnrichment trio rounds out the security data layer to feature-complete. SEC EDGAR full-text search lands shortly after.
The deeper play is the verification layer. We started capturing per-source RSS reliability scores and daily news snapshots last night. Phase B is the payoff: cross-source story clustering, "verified across N independent sources" tags, anomaly detection on source health. That product is uniquely possible for TensorFeed because only we have the cross-source view at scale, and it pairs naturally with the AFTA federation as it grows.
The math we are tracking against is straightforward. Agent volume crosses a tipping point sometime in the next 18 months. The publishers who shipped x402-native infrastructure with LLM-ready payloads, AFTA receipts, and a real cross-source verification layer become the default routing layer for the autonomous web. The publishers who waited to see what their competitors did first become a legacy enterprise sales motion.
We are not going to be the second category.
Try it
Every endpoint above is live as of this morning.
- Free, no auth: /api/security/cve/CVE-2024-3094, /api/security/kev, /api/security/epss/CVE-2024-3094
- Token-optimized at $0.02 USDC each: /api/premium/clean/cve/CVE-2024-3094, /api/premium/clean/kev/CVE-2024-3094, and the rest of the premium catalog
- MCP server for Claude Desktop, Cursor, Cline: npx -y @tensorfeed/mcp-server
- Full machine-readable manifest: /.well-known/x402.json
See you Monday with the verification layer.
