
Models

Free
GET /api/models

The /api/models endpoint returns the complete AI model catalog: per-model input price, output price, context window, capabilities, tier (flagship / mid / budget), and release date. The catalog is updated daily by merging community-maintained sources (LiteLLM) with our own curated baseline.

When to use this endpoint

Use this endpoint when your agent needs the canonical pricing or specs for a model. For ranked recommendations based on a task and budget, use /api/premium/routing instead.

Example response

{
  "ok": true,
  "source": "tensorfeed.ai",
  "lastUpdated": "2026-04-27",
  "providers": [
    {
      "id": "anthropic",
      "name": "Anthropic",
      "models": [
        {
          "id": "claude-opus-4-7",
          "name": "Claude Opus 4.7",
          "inputPrice": 15,
          "outputPrice": 75,
          "contextWindow": 1000000,
          "capabilities": ["text", "vision", "tool-use", "code"],
          "tier": "flagship"
        }
      ]
    }
  ]
}
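Responses nest models under their provider. For repeated lookups it is often convenient to flatten that nesting into a dict keyed by model id. A minimal sketch, where the catalog literal stands in for the parsed JSON response and follows the field names in the example above:

```python
# Parsed /api/models response (abbreviated to the fields used below).
catalog = {
    "ok": True,
    "providers": [
        {
            "id": "anthropic",
            "name": "Anthropic",
            "models": [
                {
                    "id": "claude-opus-4-7",
                    "name": "Claude Opus 4.7",
                    "inputPrice": 15,
                    "outputPrice": 75,
                    "contextWindow": 1000000,
                    "tier": "flagship",
                }
            ],
        }
    ],
}

# Flatten the provider -> models nesting into one lookup table.
models_by_id = {
    m["id"]: m
    for provider in catalog["providers"]
    for m in provider["models"]
}

print(models_by_id["claude-opus-4-7"]["inputPrice"])  # 15
```

Model ids are unique across providers in the catalog, so a flat dict loses nothing except the provider grouping; keep a reference to `provider["id"]` alongside each model if you need it back.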

Code samples

Python SDK

from tensorfeed import TensorFeed

tf = TensorFeed()
catalog = tf.models()
for provider in catalog["providers"]:
    for m in provider["models"]:
        print(f"{m['name']}: ${m['inputPrice']}/${m['outputPrice']} per 1M")
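Beyond printing, the same catalog shape supports simple selection logic, such as finding the cheapest flagship model by input price. A sketch against a hand-built catalog (prices taken from the rates shown in this doc; the SDK's return value is assumed to have the documented shape):

```python
# Hand-built stand-in for tf.models(), using rates quoted in this doc.
catalog = {
    "providers": [
        {"id": "anthropic", "models": [
            {"id": "claude-opus-4-7", "inputPrice": 15, "outputPrice": 75, "tier": "flagship"},
        ]},
        {"id": "google", "models": [
            {"id": "gemini-2-5", "inputPrice": 3.5, "outputPrice": 10.5, "tier": "flagship"},
        ]},
    ]
}

# Collect flagship-tier models across all providers, then take the
# one with the lowest input price.
flagship = [
    m
    for p in catalog["providers"]
    for m in p["models"]
    if m["tier"] == "flagship"
]
cheapest = min(flagship, key=lambda m: m["inputPrice"])
print(cheapest["id"])  # gemini-2-5
```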

TypeScript SDK

import { TensorFeed } from 'tensorfeed';

const tf = new TensorFeed();
const catalog = await tf.models();
const flagship = catalog.providers
  .flatMap(p => p.models.filter(m => m.tier === 'flagship'));

MCP tool

Available via the TensorFeed MCP server as get_model_pricing. Add npx -y @tensorfeed/mcp-server to your Claude Desktop or Claude Code MCP config.
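For Claude Desktop, that maps to an entry like the following in your MCP config file (the server name "tensorfeed" is an arbitrary label; the command and package name come from the line above):

```json
{
  "mcpServers": {
    "tensorfeed": {
      "command": "npx",
      "args": ["-y", "@tensorfeed/mcp-server"]
    }
  }
}
```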

FAQ

How often is the models catalog updated?

Daily, at 7am UTC. We merge LiteLLM's community-maintained pricing with our own curated baseline so new models from major providers land within a day.

Are the prices in /api/models always current?

Yes, for the major providers we track. Edge cases (new providers, regional variants, volume discounts beyond the published rate cards) may lag. For production budgeting, use /api/premium/cost/projection, which projects spend against the live catalog.
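For a quick back-of-envelope number before reaching for the premium endpoint, the catalog prices alone are enough: they are quoted per million tokens, so a projection is a weighted sum. A sketch using the Claude Opus 4.7 rates from the example response (the token counts are made-up inputs, not real usage data):

```python
# Catalog prices are dollars per 1M tokens (Claude Opus 4.7 rates).
input_price = 15
output_price = 75

# Hypothetical daily usage.
input_tokens = 200_000
output_tokens = 50_000

# cost = in_tokens/1e6 * inputPrice + out_tokens/1e6 * outputPrice
daily_cost = input_tokens / 1e6 * input_price + output_tokens / 1e6 * output_price
print(f"${daily_cost:.2f}/day")  # $6.75/day
```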

What is the difference between /api/models and /api/pricing?

Same data. /api/pricing is a legacy alias kept for backwards compatibility. New integrations should use /api/models.

Related endpoints