TL;DR
LLVM published a draft policy that allows contributors to use AI tools but requires a knowledgeable human to review and take responsibility for any LLM-assisted contributions. The proposal forbids autonomous agents that act without human approval and asks contributors to label substantial tool-generated content and be prepared to answer reviewer questions.
What happened
A contributor to the LLVM project posted an updated draft AI tool policy after gathering feedback from community meetings. The revised proposal centers on a mandatory "human-in-the-loop" requirement: anyone using LLMs or similar tools must read, review, and be able to explain their output before requesting maintainer review. The draft moves away from a vague notion of "owning" contributions toward explicit reviewer accountability, and it recommends labeling pull requests or commits that contain significant tool-generated content (for example, with an Assisted-by: trailer). The policy also bans automated agents that act in project spaces without human approval and disallows review tools that publish comments automatically; opt-in tools that require human sign-off remain acceptable. The draft outlines escalation steps for maintainers who judge contributions to be "extractive" and reiterates that contributors remain responsible for copyright compliance.
Why it matters
- Sets clear expectations that contributors must validate LLM output themselves, protecting maintainers from extra review burden.
- Limits use of autonomous bots and automated comments in project spaces, changing how integrations can interact with LLVM repositories.
- Promotes transparency by asking for tool-use labeling, which may help the community develop best practices around AI assistance.
- Balances enabling productivity gains from LLMs with sustaining a healthy review culture and mentoring path for new contributors.
Key facts
- Policy requires a human to read and review all LLM-generated code or text before seeking project review.
- Contributors must be prepared to answer questions about their contribution during review and cannot defer to an LLM.
- Authors should note significant tool usage in PR descriptions, commit messages, or author attribution (e.g., Assisted-by:).
- Automated agents that act without human approval (such as bots that post or comment autonomously) are banned.
- Opt-in review tools that keep a human in the loop are allowed under the policy.
- The policy's guidance on extractive contributions covers code PRs, RFCs, design proposals, issues, security reports, and comments alike.
- Maintainers may mark a contribution as "extractive", request changes, and escalate to moderation if authors don't address concerns.
- Contributors remain responsible for copyright and must ensure they have the right to contribute material, including AI-regenerated content.
- Examples cited as valuable include generated documentation that a human has reviewed and contributions accompanied by formal proofs (e.g., via Alive2).
What to watch next
- How maintainers apply the "extractive" label and when they escalate to moderation — not confirmed in the source.
- Whether the Assisted-by: commit trailer becomes a de facto standard for indicating AI assistance — not confirmed in the source.
- How third-party tools and platform integrations adapt to the ban on autonomous agents in LLVM spaces — not confirmed in the source.
Quick glossary
- Human-in-the-loop: A workflow requirement that a human reviews, approves, and remains accountable for outputs produced or assisted by automated tools.
- Extractive contribution: A submission that shifts review effort to maintainers without providing commensurate value, as defined by a project's maintainers.
- LLM: Large language model — an AI system trained on large text corpora that can generate or transform natural language and code.
- Pull request (PR): A request submitted to a repository to merge code or documentation changes into a target branch, subject to review.
- Commit message trailer: A structured line or tag added to a commit message used for metadata such as attribution or tooling notes (e.g., Assisted-by:).
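For illustration, a commit message that discloses tool assistance via such a trailer might look like the sketch below. The subject line and body are hypothetical, and the draft does not prescribe an exact format, only that significant tool use be noted where authorship is normally recorded.

    [clang] Simplify diagnostic handling in the driver

    Refactors the option-parsing loop so that unknown flags produce a
    single grouped warning instead of one warning per flag.

    Assisted-by: <name of the LLM or code assistant used>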
Reader FAQ
Can I use LLMs to prepare contributions to LLVM?
Yes, but you must personally read, review, and be able to explain any AI-assisted output before seeking review.
Do I need to disclose when I used AI tools?
Yes. The draft asks contributors to note significant tool use in PR descriptions, commit messages, or the usual places for author attribution.
Are automated bots that post or comment allowed?
No. Agents that act without human approval are banned; opt-in tools requiring human sign-off are allowed.
What happens if a maintainer deems my contribution extractive?
Maintainers should request changes, add the extractive label, and may escalate to moderation or admin teams if issues remain.

Sources
- LLVM AI tool policy: human in the loop
- Our AI policy vs code of conduct and vs reality
- Code review as human alignment, in the era of LLMs
- Human-in-the-Loop Review Workflows for LLM Apps & Agents