TL;DR

A veteran software engineer says AI coding assistants can help with simple, direct queries but usually do not cut development time in complex, long-lived codebases. He and other practitioners report variable outputs, hidden edge cases, and significant manual rework when integrating AI suggestions.

What happened

Alain Dekker, a long-time software engineer, recounts mixed experiences using AI coding assistants in real-world development. He reports that these tools perform well on straightforward queries and conversational summaries, but frequently produce incomplete or incorrect code for non-trivial tasks. Dekker gives a concrete example in which a CoPilot suggestion for locating the PDF-associated application in the Windows registry omitted important registry subtleties and produced non-compiling Delphi code; he only resolved the issue after finding more complete guidance on StackOverflow. A positive counterexample involved a C# WebBrowser property (ScriptErrorsSuppressed) that fixed a specific problem quickly. Dekker also relays the experience of a test manager at a UK betting company, who found that AI-generated unit tests required multiple prompt iterations and substantial rewrites, making the tools a net time-waster for that team. He warns that AI's confident tone, IDE integration, and the concentration of power in a few firms raise learning, maintenance and governance concerns.

Why it matters

  • Simple AI suggestions can speed small, contained tasks, but they often fail to address complex, real-world edge cases.
  • Blindly accepting AI output risks introducing bugs into systems that are hard to update or maintain over many years.
  • Authoritative-sounding AI responses can create false confidence and reduce critical review by developers, especially less experienced ones.
  • Wider adoption embeds power and data with a few large tech vendors, raising privacy, copyright and control questions for teams and organizations.

Key facts

  • Author Alain Dekker is an experienced engineer who uses AI tools lightly, primarily CoPilot.
  • Dekker finds AI answers generally strong on simple, direct queries and conversational summaries.
  • A CoPilot suggestion about finding a PDF-associated app in the Windows registry missed important subtleties and produced non-compiling Delphi code.
  • A positive example: AI pointed out the ScriptErrorsSuppressed property in C# WebBrowser, which solved a specific warning issue.
  • A test manager at a major UK betting firm told Dekker that AI-generated unit tests often required multiple prompt iterations and full rewrites, becoming a net time-waster for experienced testers.
  • Dekker groups developers’ AI usage into three informal categories: light or no users, moderate users who review AI output, and users who rely on AI excessively.
  • Dekker expresses concern that AI baked into IDEs may reduce deep learning of harder problems among newer developers.
  • The article flags broader worries about data, copyright and concentration of power in major tech companies as AI spreads.

What to watch next

  • Whether IDE-embedded AI reduces developers' opportunity to learn deep problem-solving techniques over time, a concern Dekker raises.
  • How well AI suggestions integrate into large, legacy codebases that must be maintained for years.
  • How major vendors’ practices around data ingestion, privacy and product control evolve as AI tools become more ubiquitous.

Quick glossary

  • AI coding assistant: Software that suggests code, comments or documentation using machine learning models trained on code and text.
  • Greenfield project: A new software project started from scratch without constraints from legacy systems.
  • Legacy codebase: An existing, often older, collection of code that is actively maintained and may be hard to change or update.
  • Prompt engineering: The practice of crafting and refining input queries to get better outputs from AI models.
  • Integrated development environment (IDE): A software application that provides comprehensive facilities to programmers for software development, often including code editing, debugging and build tools.

Reader FAQ

Do AI coding assistants replace developers?
The source does not claim AI will replace developers; the author says AI reshapes work but is not a substitute for human review and judgment.

Do AI tools consistently save development time?
The source reports they save time on simple tasks but often do not reduce time in complex or long-lived codebases and can require substantial rework.

Are AI-generated code suggestions reliable out of the box?
No; Dekker and a test manager describe variable quality and cases where suggestions were incomplete, incorrect or required multiple prompt cycles to be usable.

Should teams rely on AI for writing unit tests?
According to a test manager cited in the article, AI-generated unit tests often needed many iterations and rewrites and were a net time-waster for experienced testers.

Sources

  • "Software engineer reveals the dirty little secret about AI coding assistants: They don't save much time" ('Stay in control and think for yourself'), by Alain Dekker, Fri 14 Nov 2025, 13:15 UTC
