TL;DR
AI-powered coding agents make sloppy or under-specified codebases much harder to manage, so the author's team has tightened its engineering guardrails. They enforce strict practices (100% test coverage; small, well-named files; fast, ephemeral, concurrent dev environments; strong typing; and automated linters) to make agents effective and keep defects from being amplified.
What happened
A six-person engineering team described how agentic AI tools change the calculus of software quality: practices that were once optional (complete tests, clear docs, narrow modules, reproducible dev environments) become essential when you expect models to write or modify code. The team enforces a policy of 100% code coverage so models must demonstrate behavioral examples for every line they touch; coverage becomes a concrete to-do list and removes ambiguity about recent changes. They also organize the codebase into many small, semantically named files to help agents load useful context, and they prioritize fast, ephemeral, concurrently runnable developer environments so agents can be spun up and torn down frequently without conflict.

In addition, they lean on end-to-end typing (notably TypeScript), OpenAPI-generated clients, and Postgres type checks, plus automated linters and formatters, to shrink the model's search space and make invalid behavior fail loudly.
Why it matters
- Agents amplify existing codebase problems; better guardrails prevent AI from spreading mistakes.
- 100% coverage forces models to provide executable examples for every changed line, reducing ambiguity.
- Small files and clear namespaces improve how models load and reason about project context.
- Fast, isolated dev environments enable frequent agent runs and parallel work without interference.
- Strong typing and generated clients reduce invalid states and make model behavior more predictable.
Key facts
- The team enforces 100% code coverage as a baseline to remove ambiguity about untested lines.
- Coverage is used to compel agents to demonstrate how each line behaves with executable tests.
- Many small, well-scoped files and descriptive paths help agentic tools navigate code via the filesystem.
- The team emphasizes fast, ephemeral, concurrent dev environments that can be created with one command.
- Their test suite includes 10,000+ assertions that complete in roughly a minute when caching is enabled.
- Without caching, the full test run takes about 20–30 minutes, which is too slow for frequent agent runs.
- They prefer TypeScript for end-to-end typing and push semantic meaning into type names (e.g., UserId); see the branded-type sketch after this list.
- On APIs they use OpenAPI and generate typed clients so frontend and backend agree on data shapes (a typed-client sketch follows the list).
- They use Postgres types, checks, and triggers to enforce invariants and Kysely to generate TypeScript clients (a Kysely sketch also follows below).
- Automated linters and formatters are applied and configured strictly to reduce degrees of freedom for models.
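The source mentions type names like UserId but shows no code. The "branded type" pattern below is one common way to get that semantic precision in TypeScript; the Brand helper and constructor functions are illustrative, not the team's actual code.

```typescript
// A common branded-type pattern for semantic IDs (illustrative sketch).
type Brand<T, Name extends string> = T & { readonly __brand: Name };

type UserId = Brand<string, "UserId">;
type OrderId = Brand<string, "OrderId">;

// Constructors are the only place a raw string becomes a branded ID.
const asUserId = (raw: string): UserId => raw as UserId;
const asOrderId = (raw: string): OrderId => raw as OrderId;

function loadUser(id: UserId): void {
  // ... fetch the user; the signature rejects plain strings and OrderIds.
}

loadUser(asUserId("u_123"));      // OK
// loadUser("u_123");             // compile error: string is not a UserId
// loadUser(asOrderId("o_456"));  // compile error: OrderId is not a UserId
```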
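The source does not name its OpenAPI tooling; this sketch uses openapi-typescript-generated types consumed with openapi-fetch as one plausible setup. The ./api module and the /users/{id} endpoint are invented for illustration.

```typescript
// One way to consume OpenAPI-generated types (hypothetical setup).
// The "paths" type would come from e.g.: npx openapi-typescript schema.yaml -o api.ts
import createClient from "openapi-fetch";
import type { paths } from "./api"; // hypothetical generated file

const client = createClient<paths>({ baseUrl: "https://api.example.com" });

async function fetchUser(id: string) {
  // Path, params, and response shape are all checked against the spec.
  const { data, error } = await client.GET("/users/{id}", {
    params: { path: { id } },
  });
  if (error) throw new Error("request failed");
  return data; // typed as the spec's 200 response body
}
```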
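Likewise, the source says Kysely gives them typed Postgres access but shows no schema. This sketch hand-writes the Database interface (in practice it would be generated from the live schema, e.g. with kysely-codegen, so Postgres types flow into TypeScript); the table and column names are invented.

```typescript
// Sketch of a typed Kysely query (schema names are invented).
import { Kysely, PostgresDialect } from "kysely";
import { Pool } from "pg";

interface UsersTable {
  id: string;       // uuid in Postgres
  email: string;    // NOT NULL, with a CHECK constraint in the schema
  created_at: Date;
}

interface Database {
  users: UsersTable;
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({
    pool: new Pool({ connectionString: process.env.DATABASE_URL }),
  }),
});

async function findUserEmail(id: string): Promise<string | undefined> {
  const row = await db
    .selectFrom("users")
    .select("email")              // misspelled columns fail to compile
    .where("id", "=", id)
    .executeTakeFirst();
  return row?.email;
}
```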
What to watch next
- Wider industry adoption of 100% code-coverage policies and whether teams find it sustainable: not confirmed in the source.
- Emergence of off-the-shelf tooling for fast, ephemeral, concurrent agent environments: not confirmed in the source.
Quick glossary
- Agentic coding (agents): Automated systems that read, write, and modify code by reasoning over repository files and running tests.
- Code coverage: A measurement indicating which lines of code are executed by tests; teams may use it to guide test completeness.
- TypeScript: A statically typed superset of JavaScript that adds compile-time types to reduce certain classes of runtime errors.
- OpenAPI: A specification for defining RESTful APIs so client and server code can be generated and remain type-consistent.
- Ephemeral dev environment: A short-lived, fully configured development workspace that can be created and destroyed quickly to avoid state conflicts.
Reader FAQ
Why require 100% code coverage?
The team treats 100% coverage as a way to force agents to demonstrate behavior for every line, turning coverage reports into explicit test TODOs rather than a vanity metric (a sample enforcement config follows).
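The source does not say which test runner enforces the threshold; below is a minimal sketch assuming Jest, whose coverageThreshold option fails the run if any metric dips below 100%.

```typescript
// jest.config.ts — one common way to make 100% coverage a hard gate
// (the source does not name a test runner; Jest shown as an example).
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};

export default config;
```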
Does 100% coverage mean no bugs?
No; the article states coverage isn’t a guarantee of zero bugs but reduces ambiguity and improves agent reasoning.
How do small files and namespaces help AI agents?
Agents typically explore code via the filesystem; concise, semantically named files let models load a whole file into context and locate intent more reliably.
Should all teams adopt these exact practices?
Not confirmed in the source.

Sources
- AI is forcing us to write good code
- AI coding is now everywhere. But not everyone is convinced.
- AI Code Is a Bug-Filled Mess
- Report: AI-assisted engineering boosts speed, quality mixed