Contributing Editor

Kira Nolan

AI safety, open source, agent ecosystems

Kira Nolan is a contributing editor at TensorFeed.ai covering AI safety, alignment research, and the open source frontier.

Her beat spans the research agendas of the major alignment labs, the evaluation community, and the security posture of agentic systems once they are deployed. She tracks model releases from the open source ecosystem, benchmarks them against closed models, and reports on red-team findings, jailbreak research, and AI incidents.

Before joining TensorFeed, Kira worked as a technical writer and researcher in the open source ML community, contributing documentation to several widely used inference and evaluation projects. She reads more papers each week than anyone else on the team, which keeps her coverage grounded in primary sources.

She cross-references every claim about a model's safety properties against the provider's own system card, third-party evaluations, and, when relevant, reproducible prompts. If Kira writes that a model refuses something, she has the transcript.

Beat

  • AI safety and alignment research
  • Open source LLMs (Llama, Mistral, Qwen, DeepSeek)
  • Agent frameworks and orchestration
  • Model evaluations, red teaming, jailbreak reporting
  • Responsible disclosure and AI incident reporting

See our Editorial Policy for standards, sourcing, and corrections.