TL;DR
High-profile predictions that AI agents would begin doing substantive multi-step work for companies in 2025 largely failed to materialize. Technical shortcomings in agent products, expert skepticism, and overhyped expectations explain the gap between promises and reality.
What happened
Throughout 2024 and early 2025, prominent figures in AI argued that autonomous agents would begin performing substantive, multi-step tasks for businesses. Companies and commentators pointed to advances in code-oriented agents as evidence that the approach could generalize to other kinds of work, and some industry leaders forecast huge economic impact. Instead, launches in 2025 produced underwhelming results: agent products fell short of reliably completing complex, real-world tasks, with documented failures in routine web interactions. Several respected researchers argued that current large language models—the core technology behind these agents—are not yet suitable for the kind of dependable, independent work envisioned. The debate shifted from a near-term prediction of automated labor to a recognition that building trustworthy digital “employees” remains an unsolved engineering and product-design challenge. The author urges shifting attention from speculative futures to the concrete effects of AI systems already deployed.
Why it matters
- Overpromises can skew investment, hiring and product roadmaps toward unrealistic short-term goals.
- Reliance on today's LLM-based agents without robust safeguards risks operational failures in real tasks.
- Public discussion of AI impact should be grounded in demonstrated capabilities rather than speculation.
- Labor-disruption claims need empirical verification before driving policy and corporate decisions.
Key facts
- Sam Altman predicted in 2024 that AI agents might "join the workforce" in 2025.
- OpenAI executives and other industry leaders publicly suggested 2025 would be pivotal for agent deployments.
- Earlier agents specialized in programming tasks (e.g., code-assist systems) had shown promising multi-step performance.
- Products released in 2025, including some branded "agents," often failed to complete routine web tasks reliably.
- A cited example describes an agent repeatedly failing to select a drop-down value on a real estate site.
- Critics, including Gary Marcus, argued that LLM-based stacks are insufficient for dependable agent behavior.
- Other prominent technologists noted the industry had made optimistic predictions about agent timelines.
- The author recommends focusing in 2026 on concrete, observable impacts of existing AI tools rather than future hypotheticals.
What to watch next
- Pace and robustness of vendor releases that claim multi-step, autonomous agent capabilities (not confirmed in the source).
- Independent evaluations of agent reliability in real-world workflows and web interactions (not confirmed in the source).
- Labor-market studies measuring actual job displacement attributable to deployed AI systems (not confirmed in the source).
Quick glossary
- AI agent: A software system designed to perform multi-step tasks autonomously, often by interacting with users, systems, or web interfaces (a minimal loop is sketched after this list).
- Large language model (LLM): A neural network trained on large amounts of text to generate or analyze language; often used as the base for chatbots and agents.
- Chatbot: An interactive software application that responds to user queries, typically focused on conversation or information retrieval rather than autonomous multi-step task execution.
- Codex / code-assist agents: AI systems specialized in writing and reasoning about code; some showcased strong multi-step problem-solving within programming contexts.
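For readers unfamiliar with the pattern, an "agent" in this sense is usually built as a loop: a language model picks the next tool call, the tool runs, and the observation is fed back until the model declares the task finished. The sketch below is a minimal, hypothetical illustration of that loop in Python; every name in it (call_llm, search_listings, run_agent, TOOLS) is an assumption made for illustration, the model call is stubbed out, and it does not describe any specific vendor's product discussed in the source.

```python
# Hypothetical sketch of a generic "agent loop": an LLM repeatedly chooses a
# tool to call and reads back the result until it declares the task done.
# All names here are illustrative stand-ins, not a real vendor API.

from typing import Callable


def call_llm(history: list[str]) -> dict:
    """Stand-in for a real model call; here it always ends the task with a stubbed answer."""
    # A production agent would send `history` to a model endpoint and parse its reply.
    return {"tool": "finish", "args": {"answer": "stubbed result"}}


def search_listings(query: str) -> str:
    """Toy tool: pretend to query a listings site."""
    return f"3 listings found for '{query}'"


TOOLS: dict[str, Callable[..., str]] = {
    "search_listings": search_listings,
}


def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"TASK: {task}"]
    for _ in range(max_steps):  # cap steps so a failing agent cannot loop forever
        action = call_llm(history)  # model picks the next tool and its arguments
        if action["tool"] == "finish":
            return action["args"]["answer"]
        result = TOOLS[action["tool"]](**action["args"])  # execute the chosen tool
        history.append(f"{action['tool']} -> {result}")  # feed the observation back
    return "gave up: step budget exhausted"


if __name__ == "__main__":
    print(run_agent("find two-bedroom apartments under $2,000"))
```

The step cap is the kind of safeguard the "Why it matters" section alludes to: without one, an agent that keeps failing at a routine web interaction (such as the drop-down example above) can retry indefinitely.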
Reader FAQ
Did AI agents actually start doing large amounts of workplace tasks in 2025?
No; products marketed as agents in 2025 did not reliably take over major parts of jobs, according to reporting in the source.
Why did predictions about agents fail?
The source cites technical limitations of current LLM-based stacks, implementation failures in real tasks, and industry overoptimism.
Will agents join the workforce in 2026?
Not confirmed in the source.
Should policymakers worry about mass job displacement from AI now?
The author advises grounding concern in demonstrated impacts of existing systems; large-scale displacement claims require empirical support.

Sources
- Why didn't AI “join the workforce” in 2025?
- Why A.I. Didn't Transform Our Lives in 2025
- Why AI agents won't replace government workers anytime …
- Why AI Agents Didn't Take Over in 2025
Related posts
- Scientific production in the era of large language models — PDF release
- Nvidia aims to be the Android of generalist robotics platform
- Why AI Failed to Join the Workforce in 2025 — A Year in Review