TL;DR

A senior engineer describes hands-on practices for using AI tools such as Claude Code and Cursor in software development, arguing that they boost productivity and let experienced developers work at a higher level of abstraction. The piece also lists real risks, from junk output to dehumanizing permission prompts, and offers concrete remedies such as hooks, stronger permission scripts, and heavy testing.

What happened

The author, writing from the perspective of a senior engineer active in the open-source Python data ecosystem, lays out a two-track set of lessons about working with AI: big ideas about why experienced programmers should adopt it, and practical tips from daily workflows built around Claude Code (and Cursor). They argue AI makes development more enjoyable and enables work in areas they previously avoided, while acknowledging the costs: large volumes of low-quality output, erosion of deep understanding, slow reviews, and dehumanizing permission prompts.

To address these issues the author promotes climbing the abstraction ladder and automating routine interactions. Concrete tactics include hooks in Claude Code (the examples show a settings.json hook and a small Python script that enforces running tests via "uv run pytest"), custom permission logic implemented with regex and Python, subtle sound notifications to reduce constant status-checking, and workflow practices such as TDD-style prompting, heavy testing and benchmarking, targeted "grilling" questions, cleanup commands, and periodic agent-driven tech-debt reviews.
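The summary mentions but does not reproduce the author's settings.json hook and script. As a hedged sketch of the pattern, a Claude Code PreToolUse hook script along those lines might read the pending tool call as JSON on stdin and reject bare pytest invocations (the stdin payload shape and the exit-code-2 "block" convention follow Claude Code's documented hook interface; the specific rule and messages here are made up):

```python
import json
import sys


def check_command(command: str):
    """Decide whether a proposed shell command is allowed.

    Returns (allowed, message); the message explains a denial."""
    # Block bare "pytest" so the agent always uses the project environment.
    if command.strip().split()[:1] == ["pytest"]:
        return False, "Run tests with 'uv run pytest' instead of bare 'pytest'."
    return True, ""


def main():
    # Claude Code pipes the pending tool call to the hook as JSON on stdin.
    event = json.load(sys.stdin)
    allowed, message = check_command(event.get("tool_input", {}).get("command", ""))
    if not allowed:
        print(message, file=sys.stderr)
        sys.exit(2)  # exit code 2 blocks the call and feeds stderr back to the agent


# In a real hook, invoke main() under an `if __name__ == "__main__":` guard
# and register the script under "PreToolUse" in settings.json.
```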

Why it matters

  • Experienced engineers can gain leverage by automating low-level tasks and focusing on higher-level design and experimentation.
  • AI-generated code increases output but shifts the bottleneck toward review and validation, raising the need for better testing and review strategies.
  • Automation around permissions and tool use can reduce repetitive, dehumanizing prompts and restore control to developers.
  • Practical hooks and scripted checks can prevent common mistakes (e.g., incorrect test commands) and standardize workflows across AI agents.

Key facts

  • The author uses Claude Code and Cursor as examples of AI developer tools in daily workflows.
  • Reported downsides include LLMs producing a lot of low-quality output, concerns about lost understanding, slow review processes, and dehumanizing approval prompts.
  • The author recommends "climbing the abstraction hierarchy" — delegating simple tasks to automation so humans can focus on higher-level work.
  • Hooks in Claude Code are presented as dependable mechanisms; the author provides a sample settings.json hook invoking a Python script to enforce 'uv run pytest'.
  • The author has written a custom Python-based permission hook system that uses regex and arbitrary logic to grant or deny requested actions.
  • Sound hooks (using afplay on macOS) are used to notify when an agent needs input or is finished, reducing the urge to constantly check status.
  • Confidence in AI-generated changes is built through heavy investment in tests and benchmarks, TDD-like prompting, and targeted questioning ('grilling') of the agent.
  • The author uses a final cleanup phase (a /cleanup command or Skill) and occasionally asks a fresh agent to audit the project for technical debt.
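The author's permission hook itself is not shown in this summary. A minimal sketch of regex-based permission logic, with entirely hypothetical rule tables standing in for the author's real rules, could look like:

```python
import re

# Hypothetical rule tables; the author's actual rules are not in the summary.
ALLOW_PATTERNS = [
    r"^git (status|diff|log)\b",  # read-only git commands
    r"^uv run pytest\b",          # the sanctioned test command
]
DENY_PATTERNS = [
    r"\brm\s+-rf\b",              # destructive deletes
    r"^git push\b",               # pushing should stay a human decision
]


def decide(command: str) -> str:
    """Classify a proposed command as 'allow', 'deny', or 'ask'."""
    if any(re.search(p, command) for p in DENY_PATTERNS):
        return "deny"
    if any(re.search(p, command) for p in ALLOW_PATTERNS):
        return "allow"
    return "ask"  # anything unrecognized still goes to a manual prompt
```

Because the rules are arbitrary Python, they can grow beyond regexes into any logic the developer wants, which is the point the author makes about replacing repetitive approval prompts.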

What to watch next

  • Whether teams standardize Hooks and richer permission scripts as part of CI and developer tooling.
  • How code-review practices evolve to handle markedly higher volumes of AI-generated changes and whether test-heavy workflows become dominant.

Quick glossary

  • LLM: Large language model — a machine learning system trained to generate or predict text based on large datasets.
  • Hook: A programmable callback or script that runs at defined points in an agent or tool workflow to enforce rules, notify users, or modify behavior.
  • TDD: Test-driven development — a software practice where tests are written before code, guiding design and ensuring coverage.
  • Agent: An AI-driven system or process that performs tasks, makes decisions, or interacts with tools on behalf of a developer.
  • Benchmark: A standardized test or measurement used to compare performance characteristics such as speed and memory usage.

Reader FAQ

Should senior engineers be using AI for development?
The author argues yes: experienced developers can leverage AI to move faster and work at higher abstractions while avoiding low-value, repetitive tasks.

How can I avoid being overwhelmed by AI prompts and permission requests?
Implement hooks and automated permission logic to catch common patterns, and use subtle notifications so you don't need to constantly check the agent.
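As a sketch of the sound-notification idea, assuming macOS (afplay and the system sounds ship with the OS; the event names follow Claude Code's hook events, but the mapping and sound choices here are illustrative):

```python
import subprocess
import sys

# Hypothetical mapping from hook event names to built-in macOS sounds.
SOUNDS = {
    "Notification": "/System/Library/Sounds/Ping.aiff",  # agent is waiting for input
    "Stop": "/System/Library/Sounds/Glass.aiff",         # agent has finished
}


def play_for_event(event_name: str) -> bool:
    """Play a subtle sound for a hook event; returns True if one was played."""
    sound = SOUNDS.get(event_name)
    if sound is None or sys.platform != "darwin":
        return False
    # afplay ships with macOS; check=False so a missing file never breaks the hook.
    subprocess.run(["afplay", sound], check=False)
    return True
```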

Is it safe to trust AI-generated code without reading all of it?
The author recommends building confidence through tests, benchmarks, targeted questioning of the agent, and selective review rather than blind trust.

Will AI replace code reviewers or eliminate review work?
The source does not address this directly; it treats review and validation as the new bottleneck rather than something AI eliminates.

Sources

  • "AI Zealotry", a blog post written on 2025-12-31 (the piece summarized above).
