GPU Rental Pricing
Live cheapest hourly rates across the cloud GPU marketplaces. Refreshed every 4 hours.
GPU prices move. A100s that cost $4 an hour on a hyperscaler can be $0.80 on a marketplace in the same week, and the cheapest provider for an H100 today might be the most expensive tomorrow. This page aggregates current per-GPU hourly rates across marketplace providers, normalizes their heterogeneous GPU naming into a canonical taxonomy, and surfaces the cheapest on-demand and spot price for each GPU class.
Phase 1 covers two marketplace sources: Vast.ai and RunPod. Lambda Labs, Azure NC/ND, and AWS on-demand are planned for phase 2. A snapshot is captured daily for the historical record, and the 30 to 90 day price series is exposed via the premium API at 1 credit per call.
Free agent endpoints
- /api/gpu/pricing: full current snapshot
- /api/gpu/pricing/cheapest?gpu=H100&type=on_demand: top 3 cheapest right now
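As a sketch of how a client might use the free endpoints. The base URL, and the `provider` / `usd_per_hour` field names in the snapshot rows, are assumptions for illustration, not confirmed response schema:

```python
from urllib.parse import urlencode

# Hypothetical base URL; substitute the real TensorFeed host.
BASE = "https://api.example.com"

def cheapest_url(gpu: str, offer_type: str = "on_demand") -> str:
    """Build the query URL for the cheapest-price endpoint."""
    return f"{BASE}/api/gpu/pricing/cheapest?" + urlencode(
        {"gpu": gpu, "type": offer_type}
    )

def pick_cheapest(snapshot: list[dict], gpu: str, n: int = 3) -> list[dict]:
    """Mimic the cheapest endpoint locally against a full snapshot:
    top-n cheapest offers for one GPU class, sorted by hourly rate.
    Assumes each row has 'gpu', 'provider', and 'usd_per_hour' fields."""
    offers = [o for o in snapshot if o["gpu"] == gpu]
    return sorted(offers, key=lambda o: o["usd_per_hour"])[:n]

# Hypothetical rows shaped like /api/gpu/pricing output.
sample = [
    {"gpu": "H100", "provider": "vast", "usd_per_hour": 2.10},
    {"gpu": "H100", "provider": "runpod", "usd_per_hour": 2.49},
    {"gpu": "A100", "provider": "vast", "usd_per_hour": 0.80},
]
print(cheapest_url("H100"))
print(pick_cheapest(sample, "H100", n=1)[0]["provider"])  # vast
```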
Premium (1 credit)
- /api/premium/gpu/pricing/series?gpu=H100&from=&to=: daily price series, up to 90 days. Snapshots cannot be backfilled, so each day's capture is the only record of that day's prices.
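To illustrate what a consumer might do with the daily series, here is a minimal summary over a series response. The `date` / `usd_per_hour` point shape and the sample values are assumptions, not live data:

```python
from statistics import mean

def summarize_series(series: list[dict]) -> dict:
    """Min, max, and mean over a daily price series.
    Assumes each point has 'date' and 'usd_per_hour' fields."""
    prices = [p["usd_per_hour"] for p in series]
    return {
        "min": min(prices),
        "max": max(prices),
        "mean": round(mean(prices), 2),
    }

# Hypothetical 3-day H100 series, shaped like the premium endpoint output.
h100 = [
    {"date": "2024-06-01", "usd_per_hour": 2.30},
    {"date": "2024-06-02", "usd_per_hour": 2.10},
    {"date": "2024-06-03", "usd_per_hour": 2.45},
]
print(summarize_series(h100))  # {'min': 2.1, 'max': 2.45, 'mean': 2.28}
```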
Frequently asked questions
- Where does TensorFeed get GPU pricing data?
- Phase 1 sources are Vast.ai (public marketplace API) and RunPod (GraphQL API). Lambda Labs, CoreWeave, Azure, and AWS are planned for phase 2.
- How often is the data refreshed?
- Every 4 hours. A daily snapshot is also captured at 12:45 UTC for the historical price series exposed via the premium API.
- What is the difference between on-demand and spot pricing?
- On-demand is the standard non-preemptible hourly rate. Spot (also called interruptible or bid pricing) is a lower rate in exchange for the provider being able to reclaim the machine on short notice. Spot suits fault-tolerant batch jobs, not serving production traffic.
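For a rough sense of the trade-off, a quick savings calculation. The rates below are illustrative, not live quotes:

```python
def spot_savings(on_demand: float, spot: float) -> float:
    """Percentage saved by running on spot instead of on-demand."""
    return round((1 - spot / on_demand) * 100, 1)

# Illustrative H100 rates, not live quotes.
print(spot_savings(2.49, 1.10))  # 55.8
```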
- Can I get a programmatic feed?
- Yes. /api/gpu/pricing returns the full snapshot. /api/gpu/pricing/cheapest?gpu=H100 returns the top 3 cheapest right now. Premium /api/premium/gpu/pricing/series returns the daily historical price series.