Last Updated: April 2026

The Road to AGI and ASI

Tracking the research, predictions, and milestones on the path to artificial general intelligence and artificial superintelligence.

Quick Definitions

Artificial General Intelligence (AGI) refers to AI systems that match or exceed human cognitive abilities across virtually all domains. Artificial Superintelligence (ASI) refers to systems that substantially exceed human intelligence in all areas. Both remain theoretical, though leading researchers now estimate AGI could arrive within 5 to 20 years.

What is AGI?

Artificial general intelligence is the idea of building a machine that can learn and perform any intellectual task a human can, at roughly human-level competence or better. The word that matters is general. A chess engine can destroy any human at chess, a translation model can outperform professional translators on many language pairs, and a protein-folding model can exceed decades of structural biology work in an afternoon. None of those systems is AGI. They are narrow systems that excel at a bounded task and cannot transfer that skill to anything else.

An AGI system, by contrast, would be able to read a physics paper in the morning, debug a production codebase in the afternoon, and coach a skeptical executive through a tough conversation in the evening, all without being retrained between tasks. It would carry context across problems, learn new skills from a handful of examples, set its own subgoals, and know when to ask for help. Most definitions stop short of requiring consciousness or subjective experience. They focus purely on capability: what can this system do, across how many domains, and how reliably.

Modern frontier models have forced the goalposts to move repeatedly. In 2020, passing the bar exam at the 90th percentile was science fiction. In 2023, GPT-4 did it. In 2024, Claude and GPT-4o handled graduate-level physics questions, wrote production-quality code, and completed extended multi-step workflows. Several researchers, including Sébastien Bubeck and coauthors in the widely read "Sparks of Artificial General Intelligence" paper, have argued that current systems already show early flickers of general intelligence. Others insist that what looks like reasoning is sophisticated pattern matching, and that real AGI will require new architectural ideas.

The honest answer in 2026 is that the line between advanced narrow AI and true AGI is blurry, and it is getting blurrier every quarter. You can track the latest frontier models and their benchmark scores on our model tracker.

What is ASI?

Artificial superintelligence picks up where AGI leaves off. An AGI can do what any human can do. An ASI substantially exceeds the best humans across every cognitive domain, including scientific research, strategic planning, social reasoning, and creativity. The concept comes from Nick Bostrom's 2014 book Superintelligence, which argued that once a system reaches human-level general capability, further improvement could be rapid. A system that can do AI research, Bostrom pointed out, can also improve itself, and that feedback loop could produce a system far beyond human level in a short amount of time.
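One toy way to make that feedback loop concrete (our illustration, not Bostrom's own math): let $C_t$ be the system's research capability at step $t$, and suppose each round of self-directed improvement adds capability in proportion to what already exists:

$$C_{t+1} = (1 + r)\,C_t \quad\Longrightarrow\quad C_t = (1 + r)^t\, C_0$$

Under that assumption, capability compounds exponentially once the loop closes, and if the improvement rate $r$ itself rises with $C_t$, growth is faster still. Much of the debate is over whether returns to self-improvement really compound like this or diminish with each round instead.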

The practical distinction matters. AGI is mostly framed as an economic milestone: it can do the work a human can do, which changes labor markets and productivity. ASI is framed as a civilizational milestone: it can do things no human can, which changes science, security, and governance. Dario Amodei's essay Machines of Loving Grace sketches what a few years with such systems might look like, from curing most diseases to decades of scientific progress compressed into a few years. Critics argue these scenarios assume both unlimited capability and perfect alignment, neither of which is guaranteed.

For now, ASI remains fully theoretical. No system exists that exceeds the best humans across all cognitive domains. But the gap between AGI and ASI is one of the most important open questions in the field, and the speed of that transition is one of the main drivers of AI safety research at every frontier lab.

AGI vs ASI vs Narrow AI

Understanding the three categories helps cut through most of the confusion in AI discourse. Every AI system in production today is narrow. AGI is the target. ASI is what comes after.

Type | Capability | Examples | Status
Narrow AI | Excellent at specific tasks, cannot generalize | ChatGPT, AlphaFold, Waymo, Midjourney | Exists today
AGI | Human-level across virtually every domain | None yet confirmed | Active goal
ASI | Substantially exceeds the best humans in every domain | None | Theoretical

Timeline: Milestones on the Path to AGI

A compressed history of the ideas and systems that set the course for where the field is today. Automatically extended as new frontier milestones land.

1950: Turing's 'Computing Machinery and Intelligence'

Alan Turing asks 'Can machines think?' and proposes the imitation game, later known as the Turing Test. It frames the question of machine intelligence in operational terms and sets the research agenda for the next seventy years.

1956: The Dartmouth Conference

John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester convene a summer workshop at Dartmouth College. The proposal coins the term 'artificial intelligence' and predicts that a significant advance could be made in a single summer. It would take decades longer.

1997: Deep Blue defeats Kasparov

IBM's Deep Blue beats reigning world chess champion Garry Kasparov in a six-game match. Brute-force search plus hand-crafted evaluation functions shows that specialized systems can outperform humans at bounded tasks, even without general reasoning.

2012: AlexNet kicks off the deep learning era

A deep convolutional neural network trained on GPUs wins ImageNet by a huge margin. The result convinces the research community that scale, data, and hardware acceleration are the keys to progress.

2016: AlphaGo beats Lee Sedol

DeepMind AlphaGo defeats one of the strongest Go players in history using deep reinforcement learning and Monte Carlo tree search. Go was long considered a benchmark that would not fall for another decade.

2017: The Transformer is introduced

Google Brain and Google Research publish 'Attention Is All You Need.' The transformer architecture becomes the backbone of almost every frontier AI system that follows.

2020: GPT-3 demonstrates few-shot learning

OpenAI releases a 175 billion parameter language model that can perform new tasks with only a handful of examples in context. The result shifts the field toward scaling laws as a theory of progress.
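To make "in context" concrete, here is a minimal sketch of few-shot prompting. The complete() function is a hypothetical stand-in for any text-completion API; the names and prompt format are ours, not OpenAI's.

```python
# Few-shot prompting: the model is never retrained; the new task is
# specified entirely by examples placed in the context window.

def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt from (input, output) example pairs plus a new query."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    examples=[("cheese", "fromage"), ("book", "livre")],
    query="house",
)
# GPT-3's headline result: from two examples alone, the model infers
# the task (English-to-French translation) and completes with "maison".
```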

2022: ChatGPT reaches 100 million users

ChatGPT becomes the fastest-growing consumer product in history, reaching an estimated 100 million monthly users within two months of its November 2022 launch. AI moves from research curiosity to mainstream technology overnight.

2023: GPT-4 and Claude 2 push the frontier

Frontier models begin passing professional licensing exams, writing working code across multiple languages, and reasoning about complex multi-step problems. Researchers begin publicly debating whether early AGI behaviors are already visible.

2024: Claude 3 and GPT-4o bring multimodal reasoning

Anthropic ships Claude 3 Opus. OpenAI ships GPT-4o. Both handle images natively, and GPT-4o folds audio and language into the same model. Benchmarks that were state of the art a year earlier are now solved nearly at ceiling.

2025: Reasoning models arrive

OpenAI's o1 and o3, DeepSeek's R1, and Anthropic's extended thinking mode show that inference-time compute can produce dramatic gains on math, coding, and scientific reasoning benchmarks. The scaling story expands from training to test time.
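The internals of these models are not public, but one published technique illustrates the basic trade: self-consistency (Wang et al., 2022) samples many independent reasoning chains and majority-votes the final answers, so accuracy can be bought with extra inference compute. A minimal sketch, with a hypothetical sample_answer() standing in for one stochastic model call:

```python
from collections import Counter

def sample_answer(question: str) -> str:
    """Placeholder: one stochastic model sample, reduced to a final answer."""
    raise NotImplementedError

def self_consistency(question: str, n_samples: int = 16) -> str:
    """More samples = more test-time compute = typically higher accuracy
    on math and reasoning benchmarks, up to a point."""
    answers = [sample_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

This is not how o1 or R1 work internally (they train long chains of thought with reinforcement learning), but it captures why the scaling story now includes test time.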

2026: Frontier models approach expert performance

Claude Opus 4.6, GPT-4.5, Gemini 2.5 Pro, and others now rival or exceed expert humans on a growing list of professional benchmarks. Public debate shifts from "when will AGI arrive" to "how will we know when it has."

For the full picture, see our AI timeline.

Prediction Tracker

Public AGI timelines from the people building the systems. Treat these as bets rather than forecasts. They shift frequently and the track record for short-horizon AI predictions is uneven.

Person | Org | Prediction | Made | Arrives by
Dario Amodei | Anthropic | Powerful AI that functions as 'a country of geniuses in a datacenter' is plausible by 2026 to 2027. | Oct 2024 | 2026 to 2027
Sam Altman | OpenAI | AGI, in the sense of systems that can do most economically valuable work, could arrive within a few thousand days. | Sep 2024 | 2027 to 2030
Demis Hassabis | Google DeepMind | AGI is likely within five to ten years, but current systems are still missing planning, memory, and reasoning depth. | Jun 2024 | 2029 to 2034
Yann LeCun | Meta AI | Current LLM architectures cannot reach human-level AI. New paradigms are needed. AGI is at least a decade away and possibly much longer. | Mar 2024 | 2035 or later
Ray Kurzweil | Independent (formerly Google) | AI will match human intelligence by 2029 and merge with it by the 2045 singularity. | 1999, reaffirmed 2024 | 2029
Elon Musk | xAI | AI smarter than any single human by end of 2025, and smarter than all humans combined by 2029 or 2030. | Apr 2024 | 2025 to 2030
Geoffrey Hinton | Independent (formerly Google) | AGI within 5 to 20 years. Probability of existential risk from AI in the 10 to 20 percent range. | May 2023 | 2028 to 2043

Latest AGI News

Live stream of AGI and superintelligence coverage, filtered from our full news feed. Updates daily.

Recent Research Papers

arXiv papers matching AGI, superintelligence, and human-level AI keywords, pulled from our research feed.

Risks and Safety Concerns

The same capabilities that make AGI economically valuable make it potentially dangerous. A system that can do any remote job a human can do is also, by construction, a system that can do any remote job a malicious actor would pay for. Safety research across all major labs focuses on four broad categories of risk.

Misuse

Frontier models could dramatically lower the barrier to cyber intrusions, influence operations, and bio or chemical weapon design. Every major lab now runs pre-deployment evaluations specifically for these threat models.

Loss of oversight

As systems operate autonomously over longer horizons, humans lose the ability to review every action. Oversight research focuses on scalable supervision, interpretability, and formal verification.

Alignment

Ensuring that what a model is trained to do matches what humans actually want is an unsolved problem. Techniques include RLHF, constitutional AI, debate, and recursive reward modeling.
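To give a flavor of the machinery, the reward-modeling step at the heart of RLHF typically uses a Bradley-Terry preference loss: given two responses where human raters preferred one, a reward model is trained so the preferred response scores higher. A toy sketch (the numbers are illustrative, not from any real training run):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred response wins,
    under a Bradley-Terry model: -log sigmoid(r_chosen - r_rejected)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

print(preference_loss(2.0, -1.0))   # ~0.049: correct ranking, small loss
print(preference_loss(-1.0, 2.0))   # ~3.049: wrong ranking, large penalty
```

Minimizing this loss over many human comparisons yields a reward signal the policy model is then optimized against; the open problem is that the learned reward is only a proxy for what humans actually want.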

Concentration of power

A small number of labs with a large lead could concentrate extraordinary economic and political power. Proposals to address this range from open model releases to international coordination treaties.

Who Is Working on AGI?

Five labs currently drive the public frontier. A dozen more sit one or two release cycles behind. Each lab frames its mission slightly differently.

Anthropic

Frontier capability research with an explicit safety mission. Claude 3 and Claude Opus 4.6 are its flagship model families. Dario Amodei publicly targets powerful AI within a few years and has argued that the first companies to reach advanced systems should use them to help solve the alignment problem.

OpenAI

The company was founded with AGI as its explicit charter. GPT-4, GPT-4o, the o-series reasoning models, and the forthcoming GPT-5 line are the commercial face of that research. Sam Altman regularly talks about AGI as a matter of when rather than if.

Google DeepMind

The merged Google Brain and DeepMind org has pursued general intelligence since DeepMind was founded in 2010. Gemini, AlphaFold, AlphaZero, and the Genie world models all reflect different slices of that long-running program. Demis Hassabis has stated AGI is the explicit goal.

xAI

Elon Musk's frontier lab, built around Grok and increasingly large training clusters. The company has framed its mission as building a 'maximally truth-seeking' AI that can understand the universe.

Meta AI

Yann LeCun argues that current LLM architectures cannot reach AGI and that new ideas are needed. Meta has released the Llama family as open weights and invested heavily in world models and self-supervised learning research.

Frequently Asked Questions

When will AGI be achieved?

There is no consensus. Leaders at Anthropic and OpenAI publicly estimate that systems capable of most economically valuable cognitive work could arrive within 3 to 6 years. Demis Hassabis at Google DeepMind puts it at 5 to 10 years. Yann LeCun at Meta argues current architectures cannot reach AGI at all and that it is at least a decade away. Academic surveys of AI researchers show median estimates that have moved earlier every year since 2022.

What is the difference between AGI and ASI?

AGI, or artificial general intelligence, refers to AI systems that match human cognitive abilities across virtually all domains. ASI, or artificial superintelligence, refers to systems that substantially exceed the best humans in every domain, including scientific research, strategic planning, and creativity. AGI is usually framed as a milestone; ASI is framed as what comes after. Some researchers argue the gap between them could be very short, while others argue it could be decades.

Is ChatGPT AGI?

No. ChatGPT and other frontier chatbots are narrow AI systems that are unusually broad. They can discuss almost any topic, write code, draft legal documents, and reason about images, but they still lack persistent memory, robust planning, reliable long-horizon agency, and the ability to learn new skills after training. Most researchers consider them early precursors to AGI, not AGI itself.

How will we know when AGI arrives?

There is no single agreed-upon test. Proposed benchmarks include the ability to perform any remote job a human can, to run an autonomous research lab and produce novel publishable science, to learn new skills from a handful of examples as a human would, and to pass rigorous in-person evaluations that rule out memorization. In practice, AGI is likely to arrive gradually, with capability thresholds crossed one at a time rather than a single ribbon-cutting moment.

Is AGI dangerous?

Leading researchers, including Geoffrey Hinton, Yoshua Bengio, and the safety teams at Anthropic, OpenAI, and DeepMind, take seriously the possibility that sufficiently advanced AI systems could pose serious risks if their goals are not aligned with human welfare. Risks discussed in the academic literature include misuse for cyber or bio weapons, concentration of economic power, loss of human oversight, and in the most extreme scenarios, systems that pursue objectives humans cannot correct. Other researchers argue these risks are overstated and that AGI will be shaped by the same iterative engineering processes as other technologies.

Related Hubs

Continue exploring the frontier from different angles.