Guidance for AI-led contributions
New contributor guidelines in the generative AI age
Seeing a surge in AI-led contributions, our core team is adapting Wagtail’s contribution guidelines so we can triage them accordingly. We ask that if you use generative AI for your contribution, you include a disclaimer, for example:
“This pull request includes code written with the assistance of AI. This code was reviewed and verified by me.”
Acceptable uses of LLMs
We recognize that generative AI can be a useful tool for contributors, but like any tool it should be used with critical thinking and good judgement when creating issues and pull requests. It can be particularly helpful for:
- Gaining understanding of the existing Wagtail code
- Assistance with writing comments
- Supplementing contributor knowledge for code, tests, and documentation
Unacceptable uses
We struggle when a contributor’s entire work (code changes, documentation updates, pull request descriptions) is LLM-generated. In this situation, we move from "AI-led" contributions towards "AI slop". Those contributors often mean well, but they don’t understand how their contribution is shaping up, nor the effort it takes us as maintainers to review it and provide meaningful feedback.
We will close unproductive pull requests and issues so we can focus our limited maintainer capacity elsewhere.
Behind the scenes
Wagtail is more than a decade old, and we have seen all sorts of drive-by contributions over the years, so “low-quality” ones are nothing new. We want people to try to contribute to Wagtail even with limited expertise, so we get new people into open source! What is new is how easy it has become for interested contributors to put in very minimal effort and still produce a legitimate-looking contribution.
We are interested in appropriate uses of AI, where it supports human contributors and meets our proposed AI guidelines:
- No AI dependency in Wagtail core: all opt-in via packages from Django’s vast ecosystem.
- Responsible approach to AI: high alignment with our values; ethical, sustainable, transparent, privacy-preserving.
- Model and provider agnostic: compatible with a wide range of open source models, not just flagship proprietary ones.
- Only the right AI: focus on the use cases where there is a definite improvement.
- Human in the loop: preserve the user’s autonomy and agency.
Contributors, however, will come with their preferred tools, and that is OK too. We are still working on our Guidance for AI-led contributions in issue #13390; the above is only a first draft, based on the work of Andrew Selzer and a draft of the Python Contributor’s guide.
We will keep working on this, and may even start discouraging, disallowing, or being much stricter about AI use in specific scenarios:
- In the writing of our RFCs, where the author’s understanding is crucial to success.
- In Google Summer of Code project proposals, where the author’s voice is completely lost when the writing or research is done by AI.
Where next with AI
For contributors and maintainers, we have an AI development tools trial going on behind the scenes. And for everyone, our Wagtail Space 2025 online event will feature our latest work on AI in the CMS!