Contributing Editor
Kira Nolan
AI safety, open source, agent ecosystems
Kira Nolan is a contributing editor at TensorFeed.ai covering AI safety, alignment research, and the open source frontier.
Her beat spans the research agendas of the major alignment labs, the evaluation community, and the safety posture of agentic systems once they are deployed. She tracks model releases from the open source ecosystem, benchmarks them against closed models, and reports on red team findings, jailbreak research, and AI incidents.
Before joining TensorFeed, Kira worked as a technical writer and researcher in the open source ML community, contributing documentation to several popular inference and evaluation projects. She reads more papers each week than anyone else on the team, which is how she keeps her coverage grounded in primary sources.
She cross-references every claim about a model's safety properties against the provider's own system card, third-party evaluations, and, when relevant, reproducible prompts. If Kira writes that a model refuses something, she has the transcript.
Beat
- AI safety and alignment research
- Open source LLMs (Llama, Mistral, Qwen, DeepSeek)
- Agent frameworks and orchestration
- Model evaluations, red teaming, jailbreak reporting
- Responsible disclosure and AI incident reporting