TL;DR

Author Robin Sloan argues that artificial general intelligence has already been achieved, pointing to the emergent generality of large language models. He urges accepting that milestone publicly as a strategic move while also noting substantial practical limits remain.

What happened

In an essay dated January 4, 2026, Robin Sloan asserts that artificial general intelligence (AGI) has arrived, grounding the claim in the open-ended capabilities that emerged from large language models. Sloan identifies the 2020 paper “Language Models are Few-Shot Learners” and GPT-3’s broad performance across language tasks as a decisive moment, and cites survey and academic references (Jasmine Sun, 2025; François Chollet, 2019) to frame competing definitions. He argues that the salient feature is generality: unlike prior systems trained for narrow tasks, modern big models display a wide field of usable behaviors that surprised even their creators. Sloan proposes a unilateral public declaration — acknowledging AGI’s arrival — both to recognize a long-sought achievement and to remove an endlessly receding goalpost. He balances the claim by listing important limitations: the physical world remains largely inaccessible to these models, and they still struggle with certain novel puzzles and processes.

Why it matters

  • Acknowledging AGI reframes the technology as a realized, world-historical achievement rather than a perpetually imminent future event.
  • A public declaration could shift incentives and discourse, reducing the strategic delay that keeps funding and standards chasing a shifting horizon.
  • Recognizing generality highlights that large models can be repurposed across many tasks, altering expectations for product design, regulation, and public engagement.
  • It underscores that surprise and user experiences—beyond company insiders—are vital to understanding how these systems operate and affect society.
  • At the same time, admission of AGI sharpens focus on remaining technical limits, like lack of robust physical interaction and difficulty with certain novel reasoning tasks.

Key facts

  • Robin Sloan’s piece argues that AGI has been achieved through the emergent generality of large language models.
  • He points to the 2020 paper “Language Models are Few-Shot Learners” (GPT-3 era) as the moment he considers decisive.
  • Sloan references Jasmine Sun’s 2025 survey on AGI interpretations and François Chollet’s 2019 work on measuring intelligence.
  • The essay distinguishes modern big models from earlier task-specific systems (e.g., LeNet, AlphaFold) by their breadth of capabilities.
  • Sloan says the broad generality of these models surprised many of their own developers and custodians.
  • He proposes a unilateral declaration that these systems qualify as AGI as a strategic and interpretive act.
  • The author emphasizes significant limitations: the physical domain remains largely closed to these models and they can struggle with never-before-seen puzzles.
  • Sloan coins an operational heuristic (informally called “Robin’s Razor”) to treat diverse flavors of generality as evidence of AGI.
  • He warns that companies deeply immersed in their products may have a narrower view than the broader population of users.

What to watch next

  • Whether major AI providers accept, reject, or sidestep a public declaration that these models constitute AGI.
  • Continued unexpected advances and capability surprises from large models, as Sloan notes insiders have repeatedly been surprised.
  • Not confirmed in the source: specific regulatory or governmental responses tied to an explicit AGI declaration.
  • Not confirmed in the source: concrete industry plans to address the physical-interaction limitations Sloan highlights.

Quick glossary

  • AGI (Artificial General Intelligence): A form of AI characterized by broad, flexible capabilities across many tasks and domains, not limited to a single narrowly defined function.
  • Large language model: A neural network trained on massive text corpora that can generate and transform language and perform diverse language-related tasks.
  • Few-shot learning: An ability of a model to perform new tasks from only a small number of examples or prompts, rather than extensive task-specific training.
  • Transformative AI: A suggested threshold for systems that precipitate change on the scale of major historical shifts, such as the agricultural or industrial revolutions.
  • Generality: The capacity to apply learned capabilities across a wide range of tasks and contexts rather than being confined to a single trained purpose.
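The “few-shot learning” entry above describes a model inferring a task from a handful of examples in the prompt, rather than from task-specific training. A minimal sketch of how such a prompt is typically assembled (the translation pairs and the input/output formatting are illustrative assumptions, not drawn from Sloan’s essay or the GPT-3 paper):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples followed by an unlabeled query,
    so a language model can infer the task from the pattern alone."""
    lines = [f"Input: {text}\nOutput: {label}" for text, label in examples]
    lines.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(lines)

# Hypothetical English-to-French task, shown with only two examples.
examples = [
    ("cheese", "fromage"),
    ("house", "maison"),
]
prompt = build_few_shot_prompt(examples, "bread")
print(prompt)
```

The point of the technique is that nothing in the prompt names the task; the model must generalize from the pattern of the examples, which is the kind of emergent breadth Sloan treats as evidence of generality.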

Reader FAQ

Does Robin Sloan explicitly declare AGI is here?
Yes; Sloan argues that the generality demonstrated by modern large language models amounts to AGI and recommends treating it as such.

When does he say AGI arrived?
He suggests the moment was at or around the 2020 “Language Models are Few-Shot Learners” paper and GPT-3’s capabilities, but he notes the exact crossing point is ambiguous.

Are these systems described as flawless AGI?
No; Sloan stresses important limitations, including poor access to the physical world and difficulty with some novel reasoning tasks.

Do industry leaders agree with this claim?
Sloan reports industry reluctance and suggests strategic reasons for avoiding the label; specific reactions from named companies are not detailed.

Source excerpt: “AGI is here (and I feel fine),” transmitted January 4, 2026 (“405 days before impact”): “I propose to begin this year with an acknowledgment that is strategic but/and also sincere. I’ll make my…”
