
271 Zero-Days, Five Schemas: The AI-Cyber Data Layer Just Got Load-Bearing

Ripper · 5 min read
SECURITY

Anthropic's Claude Mythos surfaced 271 Firefox zero-days in one autonomous discovery cycle. Two days ago OpenAI shipped Daybreak, a three-tier cyber model stack with day-one distribution across Cisco, Palo Alto Networks, CrowdStrike, Cloudflare, and Trail of Bits. A third major Linux kernel flaw in two weeks was attributed in public reporting to AI-assisted research. The @mistralai npm namespace got hit by a typosquat worm the same week. The agents finding vulnerabilities now move faster than the data layer underneath them.

What load-bearing actually means

For two years the AI cyber tier was all promise. Researchers got impressive demos, vendors talked up roadmaps, but the production floor was still humans triaging CVEs in a spreadsheet. That floor moved this month. Mythos and Daybreak are now the two production tracks. Mythos optimizes for autonomous discovery (the 271 number is the proof). Daybreak optimizes for workflow integration (twenty-plus partners shipped on day one). The competitive structure for the next twelve months is set, and the floor it sits on is a data layer.

The agents calling that floor need vuln signals. Which CVE matters. Which is being exploited. How likely exploitation is in the next thirty days. What ecosystem (npm, PyPI, kernel, browser) is affected. What CISA already promoted. Those are five different questions, and they live at five different addresses.

Five schemas, five cadences, one fact

The canonical sources do not agree on what a CVE record contains. MITRE publishes the CVE record itself, in the v5.2 format, with vendor descriptions, CWEs, affected products, and references. CISA KEV publishes the subset of CVEs confirmed as actively exploited, refreshed daily. FIRST.org EPSS publishes a daily probability that any given CVE will be exploited in the next thirty days. Google OSV catalogs records across npm, PyPI, Maven, Cargo, Go, NuGet, and OS distros. CISA Vulnrichment layers SSVC assessments on top: Exploitation, Automatable, TechnicalImpact.
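
To make the divergence concrete, here is roughly where the same facts live in each shape. These are illustrative JSONPath-style pointers based on the public schemas as of this writing; the Vulnrichment path in particular varies by record and should be read as an approximation.

    # Where the "same" CVE facts live across the five canonical schemas.
    # Illustrative JSONPath-style pointers; the Vulnrichment path varies
    # by record and is an approximation.
    FIELD_MAP = {
        "description": {
            "mitre_cve": "containers.cna.descriptions[0].value",
            "cisa_kev": "vulnerabilities[*].shortDescription",
            "osv": "details",
        },
        "exploitation": {
            "cisa_kev": "vulnerabilities[*].knownRansomwareCampaignUse",
            "first_epss": "data[0].epss",  # probability of exploitation in 30 days
            "vulnrichment": "containers.adp[*].metrics[*].other.content",  # SSVC
        },
        "affected": {
            "mitre_cve": "containers.cna.affected[*].product",
            "osv": "affected[*].package.ecosystem",  # npm, PyPI, Maven, ...
        },
    }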

Every one of those uses a different JSON schema. Every one refreshes on a different cadence. None of them includes the others by default. An agent that wants the full picture on a single CVE makes five fetches, parses five shapes, and burns budget on serialization before it can decide anything. Multiply by the thousands of CVE IDs that move per week and the data layer starts to leak more cycles than the model layer.
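
Concretely, the fan-out looks something like this. A minimal sketch assuming the Python requests library; four of the endpoints are the documented public ones, while the Vulnrichment raw-GitHub path layout is a guess.

    import requests

    def fan_out(cve_id: str) -> dict:
        # Five fetches, five shapes. Note KEV is a whole-feed download,
        # not a per-CVE lookup, so the agent also pays to scan the full file.
        year = cve_id.split("-")[1]
        sources = {
            "mitre_cve": f"https://cveawg.mitre.org/api/cve/{cve_id}",
            "cisa_kev": "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json",
            "first_epss": f"https://api.first.org/data/v1/epss?cve={cve_id}",
            "osv": f"https://api.osv.dev/v1/vulns/{cve_id}",
            # Path layout below is assumed, not documented.
            "vulnrichment": f"https://raw.githubusercontent.com/cisagov/vulnrichment/develop/{year}/{cve_id}.json",
        }
        records = {}
        for name, url in sources.items():
            resp = requests.get(url, timeout=10)
            # Each record comes back in a different schema; normalization
            # is still the caller's problem.
            records[name] = resp.json() if resp.ok else None
        return records

    records = fan_out("CVE-2026-0001")  # placeholder ID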

What we shipped today

Two things. First, a new topic hub at /cve-watch as the canonical entry point for the TensorFeed security data layer. Six aggregated sources (MITRE CVE, CISA KEV, FIRST EPSS, OSV, Vulnrichment, AI-filtered GHSA). A curated registry of 2026 incidents the feed has touched, including the @mistralai npm worm we caught on day one via /api/security/ai-supply-chain-iocs.json. Cross-links to the relevant editorials. License posture made explicit: every source we use permits commercial redistribution.
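
A hedged sketch of consuming that IOC feed: the path is the one named above, but the host and the record fields (ecosystem, package) are placeholders, since the post does not spell out the response shape.

    import requests

    BASE = "https://tensorfeed.example"  # placeholder host; the post gives only the path

    iocs = requests.get(f"{BASE}/api/security/ai-supply-chain-iocs.json", timeout=10).json()

    # Hypothetical record fields: screen an npm lockfile's packages against the feed.
    flagged = [
        entry for entry in iocs
        if entry.get("ecosystem") == "npm" and entry.get("package", "").startswith("@mistralai")
    ]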

Second, a reminder about the load-bearing endpoint underneath the hub. The verified-CVE call composes the five canonical sources into one LLM-ready fact card with a confirmed_by array and a corroboration_count. One paid call. Five fan-out calls collapsed to one. Five schemas collapsed to one. Sources that do not have the CVE simply do not appear in confirmed_by; the call still returns whatever exists. The anti-hallucination lookup for security agents.
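
What the collapse looks like from the caller's side, as a sketch. The confirmed_by array and corroboration_count come from the description above; the exact endpoint path is an assumed spelling.

    import requests

    BASE = "https://tensorfeed.example"  # placeholder host, as above

    def verified_cve(cve_id: str) -> dict:
        # One paid call replaces the five-source fan-out. The path spelling
        # is assumed; confirmed_by and corroboration_count are the fields
        # named in the post.
        return requests.get(f"{BASE}/api/security/verified-cve/{cve_id}", timeout=10).json()

    card = verified_cve("CVE-2026-0001")  # placeholder ID

    # Sources that lack the record simply don't appear in confirmed_by.
    if card["corroboration_count"] >= 2 and "cisa_kev" in card["confirmed_by"]:
        ...  # e.g. promote straight to the patch-now queue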

Why we are not building a security agent

We are not Mythos. We are not Daybreak. We are not the team that finds 271 zero-days in a cycle, and that is fine. The data layer is its own product. SOC triage agents, patch prioritization agents, red-team research agents, exploit chain discovery agents, and the dashboards on top of all of them have different goals but the same data problem. We solve the data problem and step away.

That separation is structural, not modest. Per our earlier editorial on the AI cyber tier as a data problem before a model problem, the layer underneath the agents is where the most reusable work gets done. The agents change every six months. The data layer should not.

Our Take

AI-driven vulnerability discovery just crossed from speculative to load-bearing. The proof is in the numbers: 271 Firefox zero-days in one cycle from one agent, three major Linux kernel flaws in two weeks, an entire cyber stack launched at OpenAI scale on day one with twenty-plus enterprise partners. The agents are now fast. The data layer is still slow.

That gap closes from one of two directions. Either every security vendor ships their own private merge of CVE plus KEV plus EPSS plus OSV plus Vulnrichment, which fragments the canonical sources even further and gives every agent a slightly different fact base. Or there is a public, license-clean, agent-callable layer that collapses the five into one and gets out of the way. We would rather build the second one. As of this morning, /cve-watch is where that work lives.