TL;DR
A veteran programmer argues that every previous wave of tools touted as eliminating coders has failed to do so, and that large language models (LLMs) are following the same pattern. While modest AI helpers may aid prototyping and completions, the hard work of turning ambiguous human intent into precise, maintainable programs still requires skilled developers.
What happened
Drawing on 43 years in programming, the author reviews recurring cycles in computing where new tooling—from early compilers and third‑generation languages to visual editors, No‑Code/Low‑Code platforms, and now large language models—was proclaimed a replacement for programmers. History shows those predictions were incorrect: each advance produced more software and more developers. The piece argues LLMs differ from past tools in that they are inconsistent, often produce defective code, and in many teams have slowed progress and reduced maintainability. The author attributes recent industry headcount reductions more to pandemic over‑hiring, higher borrowing costs, and heavy investment in data centers than to AI replacing people. He is skeptical about the long‑term viability of hyper‑scale LLMs given their cost and losses, and suggests the realistic near‑term role for AI is modest assistance—prototypes and inline completions—while human developers remain responsible for critical systems.
Why it matters
- History shows new tooling tends to expand software development rather than eliminate developers, implying continued demand for skilled programmers.
- Relying on LLMs without addressing real development bottlenecks can slow teams and harm software reliability and maintainability.
- Economic pressures and expensive infrastructure investments, not AI alone, have driven recent hiring changes in the sector.
- If hyper‑scale LLMs prove uneconomic over time, practical AI features will likely be constrained to smaller, task‑specific assistants.
Key facts
- The author reports 43 years working as a computer programmer.
- Past technologies predicted to replace programmers include Visual Basic, Delphi, Microsoft Office wizards, Executable UML, and No‑Code/Low‑Code platforms.
- Earlier predecessors cited include fourth‑ and fifth‑generation languages (4GLs, 5GLs), languages like Fortran and COBOL, and early compilers such as A‑0.
- Some now present LLMs as ending the need for programmers.
- The author contends that for most teams LLMs have slowed development and produced less reliable, less maintainable code.
- There is no credible evidence presented that AI has replaced software developers in significant numbers; recent layoffs are attributed to over‑hiring during the pandemic, rising borrowing costs, and large data‑center investments.
- LLM outputs are inconsistent; identical prompts are unlikely to produce identical programs and generated code commonly contains issues requiring human correction.
- The author is skeptical about the long‑term viability of hyper‑scale LLMs, citing their high build and operating costs and significant losses.
- A plausible near‑term role for AI is limited: generating prototypes and providing inline completion for production code.
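One reason identical prompts rarely yield identical programs is that LLMs typically sample from a probability distribution over next tokens rather than always picking the most likely one. The sketch below illustrates this mechanism with temperature‑scaled softmax sampling; the tokens and scores are invented for illustration, and real providers' sampling pipelines are more elaborate.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token from the temperature-adjusted distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and scores, for illustration only.
tokens = ["return", "print", "raise", "pass"]
logits = [2.0, 1.5, 0.5, 0.1]

# At low temperature the distribution concentrates on the top token;
# at high temperature it flattens, so repeated runs diverge more.
low_t = softmax_with_temperature(logits, 0.5)
high_t = softmax_with_temperature(logits, 2.0)
print(f"P(top token) at T=0.5: {low_t[0]:.2f}")
print(f"P(top token) at T=2.0: {high_t[0]:.2f}")

rng = random.Random()
print("sampled:", sample_token(tokens, logits, 1.0, rng))
```

Because each token is drawn stochastically, the same prompt can produce a different completion on every run unless the temperature is zero (greedy decoding) and the provider's stack is otherwise deterministic.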
What to watch next
- Empirical studies measuring whether LLMs improve or hinder team productivity and code quality (not confirmed in the source).
- Commercial outcomes for providers of hyper‑scale LLMs—whether their losses persist or business models evolve (not confirmed in the source).
- Employer hiring and training trends: whether organizations resume sustained developer recruitment and upskilling after current cycles (not confirmed in the source).
Quick glossary
- Large Language Model (LLM): A machine learning model trained on large text datasets to generate or complete natural language and code based on prompts.
- AGI (Artificial General Intelligence): A hypothetical form of AI that would possess general reasoning and learning abilities comparable to humans.
- No‑Code / Low‑Code: Platforms that let users build applications with minimal hand‑coding, often using visual interfaces and prebuilt components.
- Jevons Paradox: An economic observation that increased efficiency in using a resource can lead to greater overall consumption of that resource.
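The Jevons Paradox entry can be made concrete with a toy calculation (all numbers are invented for illustration): if tooling halves the cost of delivering a feature and demand for features more than doubles in response, total developer effort consumed rises even though each feature got cheaper.

```python
# Toy Jevons Paradox illustration; all figures are hypothetical.
# Suppose a new tool halves the cost of delivering one feature.
cost_before = 10.0   # developer-days per feature
cost_after = 5.0     # developer-days per feature with the tool

# Suppose cheaper features make organizations demand far more of them
# (elastic demand): demand more than doubles when cost halves.
features_before = 100
features_after = 300

effort_before = cost_before * features_before  # total developer-days
effort_after = cost_after * features_after     # total developer-days

print(effort_before, effort_after)
# Per-feature efficiency rose, yet total developer effort consumed grew.
```

This is the mechanism behind the historical pattern the author describes: tools that make software cheaper to build have tended to expand how much software gets built, not shrink the workforce building it.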
Reader FAQ
Will AI eliminate software developers?
The author says there is no credible evidence AI is replacing developers in significant numbers; historical patterns suggest demand persists.
Are LLMs reliable enough to produce production code without human oversight?
According to the piece, LLM‑generated code often has issues and can reduce maintainability; human review remains necessary.
Are hyper‑scale LLMs a sustainable long‑term model?
The author is skeptical, noting their high costs and reported losses; long‑term viability is questioned.
Is AGI imminent and will it solve the hard parts of programming?
The author argues AGI remains distant and that the core challenges—turning vague human intent into precise programs—require general intelligence not yet available.

Sources
- The Future of Software Development Is Software Developers
- Why Humans Are (Still) Part of Software Development's …
- The Essential Human Contribution and its Impact on Future …
- How Software Development is Changing Forever, and How …