TL;DR
Google principal engineer Jaana Dogan reported that Anthropic’s Claude Code produced, in about one hour, a distributed agent orchestration system comparable to one her Google team had spent the past year building. Claude Code’s creator, Boris Cherny, recommends enabling self-checking feedback loops and iterating in plan mode to improve output quality.
What happened
Jaana Dogan, a Principal Engineer at Google who works on the Gemini API, wrote on X that she tested Anthropic’s Claude Code by giving it a short, three-paragraph problem description. In roughly one hour, Claude Code returned a working implementation of a distributed agent orchestrator for coordinating multiple AI agents, which Dogan says matches what her team had been building over the prior year. Because she could not share internal Google details, Dogan used a simplified problem formulation based on public ideas. She also acknowledged that the output was not perfect and required refinement, and advised skeptics to try coding agents in domains where they have strong expertise. Dogan noted that Google permits Claude Code only for open-source projects, not internal work, and said her team is actively working on models and the surrounding harness. Separately, Claude Code creator Boris Cherny published workflow tips emphasizing self-verification and iterative planning to raise output quality.
Why it matters
- Demonstrates rapid advances in AI-assisted programming capabilities and their potential to compress development timelines.
- Highlights how large language models can produce multi-file, system-level code rather than just single-line suggestions.
- Underscores industry interplay: engineers may test competitor tools while firms continue internal development.
- Points to growing importance of verification workflows (self-checking, iteration) to improve AI-generated code quality.
Key facts
- Jaana Dogan is a Principal Engineer at Google responsible for the Gemini API.
- Dogan reported that Anthropic’s Claude Code produced a distributed agent orchestration system in about one hour.
- Google’s team had been working on that orchestration problem for roughly a year without reaching consensus on an approach.
- Dogan used a concise prompt of about three paragraphs and a simplified test problem based on public ideas.
- Dogan described Claude Code’s output as comparable to her team’s work but not perfect and in need of refinement.
- Google allows use of Claude Code for open-source projects but not for internal company work, according to Dogan.
- Boris Cherny, creator of Claude Code, recommends giving the tool ways to verify its own work; he says this feedback loop can double or triple output quality.
- Cherny’s workflow tips include starting in plan mode, iterating until the plan is stable, running background reviewers, using parallel instances, and integrating with tools like Slack, BigQuery, and Sentry.
- Cherny mentioned Opus 4.5 as his default model for Claude Code sessions.
What to watch next
- Whether Google’s Gemini will attain comparable end-to-end coding capabilities and on what timeline — not confirmed in the source.
- How widely teams will adopt self-checking feedback loops, background agents, and parallel-instance workflows as standard practice — not confirmed in the source.
- Policies companies set for using third-party coding tools on internal projects versus open-source work, and how those policies evolve — not confirmed in the source.
Quick glossary
- Distributed agent orchestrator: A system that coordinates multiple autonomous software agents to work together on tasks, managing communication, task allocation and fault handling.
- Prompt: Input text or instructions given to a language model to specify the task or desired output.
- Feedback loop / self-checking: An automated or human-in-the-loop process where generated output is reviewed and corrected, and the reviewer’s findings are used to refine subsequent outputs.
- Plan mode: A workflow step where the model outlines a high-level approach or sequence of actions before generating detailed code or content.
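To make the “distributed agent orchestrator” entry more concrete: the idea is a coordinator that fans tasks out to multiple agents, gathers their results, and handles failures. Neither Dogan’s nor Google’s design is public, so this is only a toy sketch under invented assumptions; the `agent` coroutine is a stand-in for a real model or tool call, and the retry policy is illustrative.

```python
import asyncio

# Toy orchestrator sketch (hypothetical; not the system from the article).
# A coordinator dispatches tasks to agents concurrently, collects results,
# and retries an agent that fails.

async def agent(name: str, task: str, attempt: int) -> str:
    """Stand-in agent; a real one would call a model or external tool."""
    await asyncio.sleep(0)  # yield control, as real I/O would
    if "flaky" in task and attempt == 0:
        raise RuntimeError(f"{name} failed on {task!r}")
    return f"{name} completed {task!r}"

async def run_with_retry(name: str, task: str, retries: int = 2) -> str:
    for attempt in range(retries + 1):
        try:
            return await agent(name, task, attempt)
        except RuntimeError:
            if attempt == retries:
                return f"{name} gave up on {task!r}"
    return ""  # unreachable; satisfies the type checker

async def orchestrate(tasks: list[str]) -> list[str]:
    # Fan out: one agent per task, all running concurrently.
    return list(await asyncio.gather(
        *(run_with_retry(f"agent-{i}", t) for i, t in enumerate(tasks))
    ))

print(asyncio.run(orchestrate(["parse logs", "flaky fetch"])))
```

A production system would add the pieces the glossary mentions, such as inter-agent communication, task allocation policies, and fault handling beyond simple retries; those are exactly the design decisions Dogan’s team reportedly spent a year on.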
Reader FAQ
Did Claude Code actually produce production-ready code?
Dogan said the output was not perfect and required refinement, so it was not described as production-ready.
Was Google using Claude Code for internal projects?
Dogan said Claude Code is allowed only for open-source projects at Google, not for internal work.
How long had Google been working on the orchestration problem?
Dogan said her team had been working on that problem for about a year.
What techniques does Claude Code’s creator recommend to improve results?
Boris Cherny recommends enabling self-verification, starting sessions in plan mode and iterating, running background reviewers, and using parallel instances; he also cited Opus 4.5 as his default model.
Will Google adopt Claude Code or similar tools internally?
Not confirmed in the source.

Sources
- Google engineer says Claude Code built in one hour what her team spent a year on
- My LLM coding workflow going into 2026 | by Addy Osmani