
Is Fireworks AI Down?

Live Fireworks AI status. Auto-refreshes every 2 minutes.

Fireworks AI is Operational

Fireworks AI is up and running normally. The chat completion and embeddings APIs are operational across the Fireworks model catalog.

Last checked: 07:08 AM

Frequently Asked Questions

Is Fireworks AI down right now?

No, Fireworks AI is not down right now. The Fireworks API is operational.
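You can also double-check from your own network with a lightweight request. This is a hedged sketch: the `/v1/models` path below is an assumed OpenAI-compatible endpoint, not a confirmed Fireworks URL, and a real request may require an API key (an auth failure would report "error" here even when the service is up).

```python
import urllib.request
import urllib.error

def interpret(status_code):
    """Translate an HTTP status code from the API into a rough health signal."""
    if 200 <= status_code < 300:
        return "up"
    if status_code in (429, 503):
        return "overloaded-or-degraded"
    return "error"

def check(url="https://api.fireworks.ai/inference/v1/models"):  # assumed endpoint
    """Make one GET request and classify the result; network errors mean unreachable."""
    req = urllib.request.Request(url, method="GET")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return interpret(resp.status)
    except urllib.error.HTTPError as err:
        return interpret(err.code)  # got a response, just not a 2xx
    except OSError:
        return "unreachable"  # DNS failure, timeout, connection refused
```

A single successful `check()` only tells you the endpoint answered once; sustained monitoring (as described below) is what distinguishes a blip from an incident.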

How do you monitor Fireworks AI?

We poll Fireworks' status page at status.fireworks.ai every 2 minutes. Fireworks publishes per-model uptime, which gives us a clear all-clear or active-incident signal.
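That polling loop can be sketched as follows. The `/api/v2/status.json` path and its `indicator` field are assumptions based on the common Statuspage JSON format, not confirmed details of Fireworks' status page.

```python
import json
import time
import urllib.request

STATUS_URL = "https://status.fireworks.ai/api/v2/status.json"  # assumed Statuspage-style endpoint

def classify(payload):
    """Map a Statuspage-style payload to an all-clear / active-incident signal."""
    indicator = payload.get("status", {}).get("indicator", "none")
    return "all-clear" if indicator == "none" else "active-incident"

def poll_once(url=STATUS_URL):
    """Fetch the status JSON once and classify it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return classify(json.load(resp))

# Poll every 2 minutes, as this page does:
# while True:
#     print(poll_once())
#     time.sleep(120)
```

Separating `classify` from the network fetch keeps the signal logic testable without hitting the live status page.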

Which Fireworks models are affected when Fireworks is down?

All Fireworks-hosted inference: DeepSeek V3.1, OpenAI GPT OSS 120B and 20B, Llama 3.3 70B Instruct, Qwen3 VL 30B Thinking, and the embeddings APIs (Nomic Embed, Qwen3 Embedding 8B). If you access the same models through DeepSeek or another host, those are independent inference paths.

What do I do when Fireworks AI is down?

If you need the same model on different infrastructure: Together AI hosts a similar catalog on separate inference; OpenRouter routes requests across multiple providers and can fail over automatically; or call the model owner's API directly. Check tensorfeed.ai/status for live status across every major AI provider.
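The failover pattern above can be sketched as a simple ordered retry. Everything here is illustrative: the base URLs follow each provider's commonly used OpenAI-compatible convention but are not verified, and `send` is a hypothetical hook standing in for a real chat-completion request.

```python
# Ordered list of inference hosts to try; all base URLs are assumptions.
PROVIDERS = [
    "https://api.fireworks.ai/inference/v1",  # primary
    "https://api.together.xyz/v1",            # fallback host
    "https://openrouter.ai/api/v1",           # multi-provider router
]

def complete_with_failover(prompt, send, providers=PROVIDERS):
    """Try each provider in order and return the first successful response.

    `send(base_url, prompt)` performs the actual request and raises on failure.
    """
    last_err = None
    for base_url in providers:
        try:
            return send(base_url, prompt)
        except Exception as err:
            last_err = err  # provider down or degraded; try the next one
    raise RuntimeError("all providers failed") from last_err
```

Injecting `send` as a parameter keeps the failover order testable with a stub, and lets you swap in whichever HTTP client or SDK you already use.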