Datasets
Public daily-snapshot mirrors of the TensorFeed API on Hugging Face. Each snapshot is a point-in-time JSONL artifact suitable for RAG, evaluation, agent context, and time-series analysis. Inference-only license consistent with the AFTA standard.
tensorfeed / ai-ecosystem-daily
Daily JSONL snapshots of the entire public TensorFeed API, committed at 08:00 UTC via GitHub Actions. 36 feeds per day, ~900 records per snapshot.
Quick start
from datasets import load_dataset
# Load any single feed
news = load_dataset("tensorfeed/ai-ecosystem-daily", "news", split="train")
models = load_dataset("tensorfeed/ai-ecosystem-daily", "models", split="train")
gpu = load_dataset("tensorfeed/ai-ecosystem-daily", "gpu-pricing", split="train")
# Filter by date (records carry an ISO-8601 fetchedAt field, so string comparison works)
recent = news.filter(lambda x: x["fetchedAt"] >= "2026-05-01")
36 feeds per daily snapshot
Each feed is stored as a JSONL file under a YYYY-MM-DD/ directory, and each feed is loadable as its own config via load_dataset(repo, "feedname").
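Because each feed is plain JSONL, a snapshot can also be parsed without the datasets library at all. A minimal sketch, using invented records for illustration (a real file under YYYY-MM-DD/ holds one JSON object per line, ~900 across the snapshot):

```python
import json

# Two stand-in records mimicking the feed schema; not real snapshot data.
lines = [
    '{"title": "a", "fetchedAt": "2026-05-01T08:00:00Z"}',
    '{"title": "b", "fetchedAt": "2026-04-30T08:00:00Z"}',
]
records = [json.loads(line) for line in lines]

# ISO-8601 timestamps sort lexicographically, so plain string
# comparison filters by date, just like the quick start above.
recent = [r for r in records if r["fetchedAt"] >= "2026-05-01"]
print(len(recent))  # 1
```

This is handy in environments where pulling in the datasets dependency is overkill, e.g. a lightweight agent toolchain.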
Inference-only license
The dataset is released under TensorFeed's inference-only license. You may use it as input context for AI agents and LLM inference: RAG, evaluation, prompt context, agent toolchains. You may not use it as training data for foundation models without explicit written permission.
The license is part of the Agent Fair-Trade Agreement: the same standard that governs paid API access on tensorfeed.ai. Compliant agents get a perpetual usage right; non-compliant training pipelines do not.
If daily is too slow
This dataset is a daily mirror. The live API refreshes far more often: news every 10 minutes, status every 5 minutes, GPU pricing every 4 hours, and models and benchmarks daily.
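The per-feed cadences above can drive a simple refresh scheduler. A sketch under stated assumptions: the feed names and the CADENCES mapping below are illustrative, not an official client for the live API.

```python
from datetime import datetime, timedelta, timezone

# Illustrative refresh intervals matching the cadences described above.
CADENCES = {
    "news": timedelta(minutes=10),
    "status": timedelta(minutes=5),
    "models": timedelta(days=1),
    "benchmarks": timedelta(days=1),
    "gpu-pricing": timedelta(hours=4),
}

def feeds_due(last_fetch: dict, now: datetime) -> list:
    """Return the feeds whose refresh interval has elapsed since last fetch."""
    return [f for f, t in last_fetch.items() if now - t >= CADENCES[f]]

now = datetime(2026, 5, 1, 12, 0, tzinfo=timezone.utc)
# Suppose every feed was last fetched one hour ago:
last = {feed: now - timedelta(hours=1) for feed in CADENCES}
print(feeds_due(last, now))  # ['news', 'status']
```

Only the fast-moving feeds come due after an hour; the 4-hour and daily feeds stay cached, which keeps polling traffic proportional to each feed's actual update rate.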