
Is Together AI Down?

Live Together AI status. Auto-refreshes every 2 minutes.

Together AI is Operational

Together AI is up and running normally. All inference categories (chat, vision, embeddings, image, voice) are operational.

Last checked: 07:08 AM

Frequently Asked Questions

Is Together AI down right now?

No, Together AI is not down right now. The Together inference API is operational across all model categories.

How do you monitor Together AI?

We poll Together's status page at status.together.ai every 2 minutes. Together hosts its status page on Better Stack, which gives a clear all-clear or active-incident signal that we surface here.
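The polling described above can be sketched as a small fetch-and-classify loop. This is a minimal illustration, not our actual monitor: the status-page URL is real, but the "all clear" marker text and the page format are assumptions you would adjust after inspecting the live page.

```python
import time
import urllib.request

# Assumed marker -- the exact wording on status.together.ai is a guess;
# check the real page (or Better Stack's JSON feed) before relying on it.
OK_MARKER = "all systems operational"


def classify_status(page_text: str) -> str:
    """Return 'operational' or 'incident' based on the page body text."""
    if OK_MARKER in page_text.lower():
        return "operational"
    return "incident"


def poll_once(url: str = "https://status.together.ai") -> str:
    """Fetch the status page once and classify it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return classify_status(resp.read().decode("utf-8", errors="replace"))


def poll_forever(interval_s: int = 120) -> None:
    """Re-check on a loop; 120 s matches the 2-minute refresh above."""
    while True:
        print(poll_once())
        time.sleep(interval_s)
```

A production monitor would also handle fetch failures (a timeout on the status page itself is not the same as an incident) and debounce flapping results.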

Which Together models are affected when Together is down?

All Together-hosted inference: Llama 3.x family (8B, 70B, 405B), DeepSeek V3.1 and R1, Qwen, Gemma, Mistral, FLUX image models, Whisper voice, embeddings (Multilingual E5), rerank, and moderation. If you access the same models through DeepSeek's own API or another provider, those are independent.

What do I do when Together AI is down?

For the same model on different infra: Fireworks AI hosts a similar catalog on separate inference infrastructure; OpenRouter routes across multiple providers and can fail over automatically; or hit the model owner's API directly (DeepSeek, Mistral, etc.). Check tensorfeed.ai/status for the live status of every major AI provider in one place.
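The failover strategy above boils down to trying the same model on each provider in order and taking the first success. A minimal sketch, assuming each provider's API call is wrapped in a callable (the provider names and wrappers here are placeholders, not real client code):

```python
from typing import Callable, Sequence

# A provider is anything that takes a prompt and returns a completion,
# raising on failure -- e.g. a wrapper around an OpenAI-compatible
# chat-completions call to Together, Fireworks, or the model owner.
Provider = Callable[[str], str]


def complete_with_failover(
    prompt: str, providers: Sequence[tuple[str, Provider]]
) -> tuple[str, str]:
    """Try each (name, call) pair in order; return (name, completion)
    from the first provider that succeeds. Raise if all of them fail."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # timeout, 5xx, connection error, ...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

In practice you would restrict the caught exceptions to transient errors (timeouts, 5xx) so that a bad request fails fast instead of retrying everywhere; a router like OpenRouter does this ordering for you server-side.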