Compare Models
1 credit · `GET /api/premium/compare/models`

The compare-models endpoint returns a side-by-side block per model: pricing, benchmarks normalized to a union of keys with `null` standing in for missing scores (so downstream code never crashes on an undefined value), live provider status, capabilities, context window, and recent news. It also returns three rankings: cheapest blended price, largest context window, and a per-benchmark leaderboard.
When to use this endpoint
Use this endpoint when picking between 2–5 specific models for a workload. It returns ready-to-rank data without the agent having to write the join itself.
Parameters
| Name | In | Type | Description |
|---|---|---|---|
| ids* | query | string | Comma-separated list of 2-5 model ids or names |
* required
Example response
```json
{
  "ok": true,
  "benchmark_keys": ["mmlu_pro", "swe_bench"],
  "models": [
    {
      "matched": true, "name": "Claude Opus 4.7", "provider": "Anthropic",
      "pricing": { "blended": 45 },
      "benchmarks": { "swe_bench": 73.4, "mmlu_pro": 88.5 }
    }
  ],
  "rankings": {
    "cheapest_blended": [{ "name": "Gemini 3", "blended": 14 }],
    "by_benchmark": { "swe_bench": [{ "name": "Claude Opus 4.7", "score": 73.4 }] }
  }
}
```

Code samples
Python SDK
```python
from tensorfeed import TensorFeed

tf = TensorFeed(token="tf_live_...")
c = tf.compare_models(ids=["Claude Opus 4.7", "GPT-5.5", "Gemini 2.5 Pro"])
```

TypeScript SDK
```typescript
import { TensorFeed } from 'tensorfeed';

const tf = new TensorFeed({ token: 'tf_live_...' });
const c = await tf.compareModels({ ids: ['Claude Opus 4.7', 'GPT-5.5'] });
```

MCP tool
Available via the TensorFeed MCP server as `compare_models`. Add `npx -y @tensorfeed/mcp-server` to your Claude Desktop or Claude Code MCP config.
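Assuming the standard Claude Desktop `mcpServers` config shape, the entry might look like this (the `tensorfeed` key is an arbitrary label you choose):

```json
{
  "mcpServers": {
    "tensorfeed": {
      "command": "npx",
      "args": ["-y", "@tensorfeed/mcp-server"]
    }
  }
}
```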
FAQ
Why are benchmarks normalized to union-of-keys with null?
So downstream code can iterate the keys without TypeErrors on undefined values. If GPT-5.5 has no MMLU-Pro score and Opus does, both show mmlu_pro in the benchmarks object — Opus with the score, GPT-5.5 with null. Predictable shape, no special-casing.
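The null-normalization above means plain indexing is safe. A minimal sketch of consuming the response (the `resp` dict mirrors the example response; the GPT-5.5 pricing and SWE-bench numbers are invented placeholders, not real TensorFeed data):

```python
# Sketch: build comparison rows from a compare-models response.
# GPT-5.5 figures below are hypothetical placeholders for illustration.
resp = {
    "ok": True,
    "benchmark_keys": ["mmlu_pro", "swe_bench"],
    "models": [
        {"matched": True, "name": "Claude Opus 4.7", "provider": "Anthropic",
         "pricing": {"blended": 45},
         "benchmarks": {"mmlu_pro": 88.5, "swe_bench": 73.4}},
        {"matched": True, "name": "GPT-5.5", "provider": "OpenAI",
         "pricing": {"blended": 30},              # hypothetical
         "benchmarks": {"mmlu_pro": None,          # no score -> null, key still present
                        "swe_bench": 70.1}},       # hypothetical
    ],
}

# Union-of-keys normalization guarantees every model carries every key in
# benchmark_keys, so plain indexing never raises KeyError; only a None
# check is needed for missing scores.
rows = []
for model in resp["models"]:
    cells = [model["name"]]
    for key in resp["benchmark_keys"]:
        score = model["benchmarks"][key]
        cells.append(f"{score:.1f}" if score is not None else "n/a")
    rows.append(" | ".join(cells))

print("\n".join(rows))
```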