TL;DR

A Sonar survey of more than 1,100 developers finds broad distrust of AI-produced code even as use of AI coding tools has become routine. Many developers report spending moderate or substantial effort reviewing AI output, yet fewer than half say they always check AI-generated code before committing it.

What happened

Sonar's State of Code Developer Survey, based on responses from more than 1,100 developers worldwide, highlights a widening verification bottleneck as AI coding tools proliferate. Although 96% of respondents say they do not fully trust AI-generated code to be functionally correct, only 48% report always validating AI-produced code before committing it. Tool use is frequent: 72% of developers who have tried AI coding assistants use them daily or multiple times per day, and developers estimate that 42% of their current codebase involves significant AI assistance, a share they expect to climb to 65% by 2027 (up from 6% in 2023). AI is being applied across project types, from prototypes (88%) to internal production (83%), customer-facing production (73%), and critical business services (58%). Sonar warns that time saved on generation is often reallocated to review: 95% of respondents perform at least some review, and 59% call that effort moderate or substantial. The survey also notes that 35% of developers use personal accounts for AI coding tools rather than corporate ones.

Why it matters

  • Widespread daily use of AI coding tools, coupled with low rates of always checking output, increases the risk of buggy or insecure code reaching production.
  • The need for significant review work can erode the productivity gains AI promises, creating a "verification bottleneck" in development workflows.
  • Heavy use of personal accounts for AI tools raises concerns about governance, IP handling, and compliance within organizations.
  • Developers expect AI's role in code creation to grow substantially by 2027, implying that verification and testing practices will become more central to engineering processes.

Key facts

  • Survey source: Sonar's State of Code Developer Survey, more than 1,100 developers worldwide.
  • 96% of respondents say they do not fully trust AI-generated code to be functionally correct.
  • Only 48% of developers say they always check AI-generated code before committing it.
  • 72% of developers who have tried AI coding tools use them every day or multiple times a day; 6% report using them less than once a week.
  • Developers estimate 42% of their code currently includes significant AI assistance, a share they expect to reach 65% by 2027 (up from 6% in 2023).
  • Types of projects using AI tools: prototypes 88%, internal production 83%, customer-facing production 73%, critical business services 58%.
  • Most-used tools reported: GitHub Copilot (75%), ChatGPT (74%), Claude/Claude Code (48%), Gemini/Duet AI (37%), Cursor (31%).
  • 95% of developers spend at least some effort reviewing, testing, or correcting AI output; 59% describe that effort as moderate or substantial.
  • 38% say reviewing AI-generated code takes more effort than reviewing human-written code, while 27% say the opposite.
  • 35% of developers report using AI coding tools from personal accounts rather than corporate ones.

What to watch next

  • Whether organizations tighten policies to require corporate accounts or audit AI-tool usage (not confirmed in the source).
  • Adoption rates of dedicated verification tooling and processes to address the reported review burden.
  • If the predicted rise to 65% AI-assisted code by 2027 materializes and how that affects deployment confidence.
  • Changes in vendor licensing or enterprise offerings intended to reduce reliance on personal accounts (not confirmed in the source).

Quick glossary

  • AI-generated code: Source code produced in whole or in part by machine learning models or AI assistants rather than written directly by a developer.
  • Verification debt: Extra time and effort required to review and understand code produced by automated tools, compared with code written by the developer.
  • Hallucination (in AI): When an AI model generates plausible-sounding but incorrect or fabricated information, including incorrect code.
  • Code review: The practice of examining source code changes to find bugs, improve quality, and ensure standards before merging or deployment.
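The glossary terms above can be made concrete with a small hypothetical example (the function names and the bug are illustrative, not taken from the survey): an AI assistant might produce a plausible-looking median function that is subtly wrong, and a code review with a basic test is what catches it.

```python
# Hypothetical AI-generated snippet: looks reasonable and passes a casual
# glance, but for even-length lists it returns the upper middle element
# instead of averaging the two middle elements.
def median_ai(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

# Corrected version a reviewer would commit after verification.
def median_reviewed(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# The minimal checks a review step would add. An odd-length input hides
# the bug; the even-length case exposes it.
assert median_ai([1, 2, 3]) == 2
assert median_ai([1, 2, 3, 4]) == 3        # wrong: true median is 2.5
assert median_reviewed([1, 2, 3, 4]) == 2.5
```

Bugs of this shape are exactly what "always check before committing" is meant to catch: the code runs, produces numbers, and fails only on inputs the generator never exercised.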

Reader FAQ

How many developers distrust AI-generated code?
According to the survey, 96% of respondents say they do not fully trust AI-generated code to be functionally correct.

Do developers review AI-generated code?
Most do: 95% spend at least some effort reviewing, testing, or correcting AI output, but only 48% always check it before committing.

Are AI coding tools widely used?
Yes. Among developers who have tried them, 72% use AI coding tools daily or multiple times per day; common tools include GitHub Copilot and ChatGPT.

Has AI reduced developer toil?
Many developers say AI reduces unwanted toil (75%), but the survey reports average time spent on toil remains about 23–25% regardless of AI usage.

Sources

  • Thomas Claburn, "Most devs don't trust AI-generated code, but fail to check it anyway: Developer survey from Sonar finds AI tool adoption has created a verification bottleneck" (AI + ML).
