TL;DR
LLVM contributors may use AI tools, but the project's draft policy requires a human to review and take responsibility for any LLM-generated content. The proposal bans autonomous agents that act without human approval and asks authors to label substantial tool-assisted work to help maintainers triage reviews.
What happened
A draft policy circulated on the LLVM developer forum refines how contributors may use large language models and similar tools when submitting code, documentation, or design proposals. The author revised an earlier draft after community feedback and a round-table discussion, centering the rules on a "human-in-the-loop" requirement: contributors must personally read, review, and be prepared to answer questions about any LLM-assisted content before asking maintainers to review it. The draft discourages pushing the labor of verifying AI output onto reviewers by treating unvetted LLM output as potentially "extractive": work that costs maintainers more than it benefits the project. Contributors are asked to note tool usage in PR descriptions or commit messages (for example, with an Assisted-by: trailer). The draft also forbids autonomous agents that post or act without human sign-off, permits opt-in review tools that keep a human reviewer in the loop, and reiterates that contributors remain responsible for the copyright and provenance of their submissions.
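As a concrete sketch of the labeling convention (the commit subject and body below are invented for illustration; only the Assisted-by: trailer itself comes from the draft, and the tool name is deliberately left as a placeholder):

```
[ADT] Tighten an iterator bounds check in SmallVector

Describe the change in your own words, after personally reading,
testing, and reviewing any generated code.

Assisted-by: <tool name>
```

Git recognizes Key: value lines at the end of a commit message as trailers, the same mechanism behind Signed-off-by: and Reviewed-by:, so the label survives in git log and can be searched mechanically.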
Why it matters
- Protects maintainer time by discouraging submissions that offload review effort onto volunteers.
- Clarifies contributor responsibility: using AI doesn't remove the need for human validation or copyright checks.
- Limits automated agents that can generate noise or unwanted changes in project repositories and discussions.
- Promotes transparency through labeling to help the community understand and evaluate AI-assisted work.
Key facts
- The draft's central rule: contributors must read and review all LLM-generated code or text before asking others to review it.
- Contributors remain the author and are fully accountable for tool-assisted contributions.
- The policy asks authors to flag substantial tool usage in the PR description, commit message, or other authorship fields; the suggested convention is an Assisted-by: trailer, as in the example above.
- Autonomous agents that act without human approval (for example, posting PR comments automatically) are prohibited under the draft.
- Opt-in review tools that keep a human in the loop are acceptable according to the proposal.
- The policy applies to code, RFCs or design proposals, issues, security reports, comments, and other contribution types.
- The draft frames unvetted LLM output as an "extractive contribution" that can be rejected or labeled to deprioritize review.
- Maintainers are instructed to respond to extractive submissions with a standard request for changes and may escalate to moderation if issues persist.
- The policy restates that using AI to regenerate copyrighted material does not remove copyright obligations; contributors must ensure licensing compliance.
- Examples cited as acceptable include human-reviewed generated documentation and PRs that include verifiable proofs (e.g., from Alive2; see the sketch after this list).
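To make the Alive2 example concrete, here is a minimal sketch of an assumed workflow (not text from the draft): Alive2's alive-tv tool, given a file containing functions named src and tgt, checks that tgt refines src for all inputs, so a PR proposing an instruction-combining rewrite can ship a machine-checkable argument instead of asking reviewers to take the transformation on faith.

```llvm
; Claimed rewrite: multiplying by 2 can be replaced by a left shift.
; alive-tv proves @tgt refines @src for every i8 input (here the two
; are fully equivalent, since neither carries nsw/nuw flags).
define i8 @src(i8 %x) {
  %r = mul i8 %x, 2
  ret i8 %r
}

define i8 @tgt(i8 %x) {
  %r = shl i8 %x, 1
  ret i8 %r
}
```

If the rewrite were wrong for some input, alive-tv would report a counterexample instead, which is the kind of verifiable evidence the draft cites as a model contribution.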
What to watch next
- Whether LLVM will formally adopt this draft as official project policy (not confirmed in the source).
- How maintainers will operationalize enforcement across platforms like GitHub, Discourse, and Discord (not confirmed in the source).
- Whether and how other open-source projects adopt similar human-in-the-loop rules or labeling conventions (not confirmed in the source).
Quick glossary
- Human-in-the-loop: A workflow requirement that a person must review, approve, or be accountable for outputs generated by automated tools before those outputs are acted upon.
- Extractive contribution: A submission that requires more reviewer time to validate and integrate than the benefit it brings to the project.
- LLM (Large Language Model): A type of machine learning model trained on large text corpora that can generate human-like text or code.
- Pull request (PR): A request to merge proposed changes into a project's codebase, typically subject to review by maintainers.
Reader FAQ
Can I use LLMs to help write patches or documentation?
Yes, provided you personally review and stand behind the generated content before submitting it for review.
Are automated agents that post or comment allowed?
No. The draft explicitly bans agents that take action in project spaces without human approval.
How should I disclose that I used a tool?
The proposal asks contributors to note tool usage in PR descriptions or commit messages; the suggested convention is an Assisted-by: trailer, as illustrated in the commit-message example above.
Does using an AI tool change copyright responsibilities?
No. Contributors are responsible for ensuring they have the right to contribute material under the project license; AI use does not remove copyright obligations.
