Open source AI we use to work on Wagtail
What’s working for us right now
If you’ve been using AI tools but haven’t tried open source models yet, you’re missing out. Here’s a quick recap of what we’ve found works well with Wagtail projects right now, all things you can try in a matter of minutes.
Why it’s worth your time
Two reasons: agency, and capability. Frontier Large Language Models and $200 subscriptions can do a lot, yet they fall well short on critical aspects of our ethos when it comes to AI adoption. Open source models have a mixed track record too, but the sheer diversity of models and providers out there means you’ll likely find something that better suits your needs.
Working with open source models forces you to become aware of this much larger diversity of options. You learn a lot along the way, and you make more informed choices about AI adoption. And when future models come out or new techniques emerge, you’re better able to understand the implications.
Still not convinced? You might enjoy reading Owners, not renters: Mozilla’s open source AI strategy.
Inference providers
There are dozens (hundreds?) of options for running LLM inference APIs as a service. We’ll focus on the three we like most so far (a minimal connection sketch follows the list):
- Scaleway: the primary provider we use for development on Wagtail AI. We picked their service because of their clear environmental track record: solid commitments, and a data center on a low-emissions electricity grid.
- Neuralwatt: a promising option that fully reports the energy use of inference, giving you a much better understanding of the climate footprint of your AI usage.
- Or run it locally! Ollama is very popular. LM Studio has a gigantic range of models. Foundry Local from Microsoft is super simple to get started with.
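Most of these hosted providers expose an OpenAI-compatible API, so connecting is usually a few lines with the official openai Python client. Here’s a minimal sketch; the base URL and model name are illustrative (the Scaleway endpoint is our assumption here, check your provider’s docs for the exact values):

```python
# A minimal sketch: calling a hosted open source model through an
# OpenAI-compatible endpoint. Base URL and model name are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.scaleway.ai/v1",  # assumed endpoint; check your provider's docs
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistral-small-latest",  # whichever model your provider hosts
    messages=[{"role": "user", "content": "In one sentence, what is Wagtail?"}],
)
print(response.choices[0].message.content)
```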
Here’s a very tangible example of the benefit of this greater diversity of options: Neuralwatt provides a built-in dashboard that shows your AI inference energy use. No need to ballpark the energy use of AI coding agents; your AI provider tells you directly:

If you’re not sure where to start, try local AI first. Pick a very small model and see how it works. If you’re worried your laptop might not be up to the task, there are options for local AI on phones too; if a phone can run a model, your laptop likely can.
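The nice part is that the exact same client works locally: Ollama serves an OpenAI-compatible API on localhost, so going local is a two-line change. A minimal sketch, assuming you’ve already pulled a small model:

```python
# Same OpenAI client, pointed at a local Ollama server.
# Pull a small model first, e.g. `ollama pull qwen2.5:1.5b`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="qwen2.5:1.5b",  # any small model you have pulled locally
    messages=[{"role": "user", "content": "What is a Wagtail StreamField?"}],
)
print(response.choices[0].message.content)
```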
Open source models
Here as well, there is a plethora of options, with more coming out on a daily basis. Here are some of our go-to options:
- For general-purpose use: Mistral Small, or GPT-OSS. These are the main models we use to try out Wagtail AI features. They’re relatively small (so low-footprint, fast, and easy to run), and close to or at the “Pareto front” of performance relative to cost. Mistral have a good track record of transparency about their models’ footprint.
- For coding: Qwen3 Coder, or Kimi K2.5. Qwen3 Coder is available in a wide range of sizes, from 1.5B for local use as an autocompletion model, to data center size. Kimi K2.5 is a much larger 1000B model (in the top 10 of AI agents for coding!), but its architecture is optimized for low energy use (273.32 mWh per query).
- For quick local checks: Ministral 3B, or Liquid LFM2.5. We use these models to try out Wagtail AI and our llms.txt format for docs, and they pack a lot of capability for models of this size (see the sketch after this list).
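To make those quick local checks concrete, here’s a minimal sketch of the llms.txt experiment: fetch the docs file and ask a small local model a question grounded in it. The llms.txt URL and model name are placeholders, not real locations:

```python
# A quick local check: answer a question grounded in an llms.txt docs
# file, using a tiny local model. URL and model name are placeholders.
from urllib.request import urlopen

from openai import OpenAI

docs = urlopen("https://docs.wagtail.org/llms.txt").read().decode("utf-8")

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = client.chat.completions.create(
    model="ministral-3b",  # placeholder: any small model you run locally
    messages=[
        {"role": "system", "content": f"Answer using only these docs:\n{docs}"},
        {"role": "user", "content": "How do I add a page model in Wagtail?"},
    ],
)
print(response.choices[0].message.content)
```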
The beauty of open source models isn’t so much any specific model’s performance as the breadth of options. And it’s not just about capabilities: models like Apertus also lead the way in transparency, with open source data and training in addition to the model weights. Apertus focuses on supporting many languages, so it’s a great test of our support for AI capabilities in languages other than English.
One last illustration of the value of picking and choosing per use case: Holo2 is optimized for automated browsing, and we can use it to create text descriptions of Wagtail UI screenshots! See how it describes a Promptfoo screenshot of Wagtail’s keyboard shortcuts dialog, compared to Mistral Small and GPT-4.1: with the same prompt there are more details, and more opportunity to get precise descriptions of complex interfaces (a sketch of the API call follows the example output).

The image displays a computer interface window titled "Keyboard shortcuts," listing application shortcuts like search and toggle sidebar, and action shortcuts for saving changes and previewing. The layout is clean, with a dark background and white
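Generating descriptions like this takes a single multimodal chat call. Here’s a minimal sketch using the standard OpenAI-style vision message format; the endpoint and the Holo2 model name are assumptions, adjust them to wherever the model is hosted:

```python
# Describe a Wagtail UI screenshot with a vision-capable model.
# Endpoint and model name are assumptions; any provider hosting a
# vision model behind an OpenAI-compatible API works the same way.
import base64

from openai import OpenAI

with open("keyboard-shortcuts-dialog.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

client = OpenAI(base_url="https://your-provider.example/v1", api_key="YOUR_API_KEY")
response = client.chat.completions.create(
    model="holo2",  # placeholder model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this Wagtail UI screenshot in detail."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```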
Try it out
All this selecting of providers and models takes extra work, compared to just using the proprietary flagship from one of the big "labs". But it’s worth the effort, and within a few minutes you’ll have something up and running.
If you’re not sure what tool to use these models with, the OpenCode CLI coding harness has good support for open source models and providers. Continue.dev in your IDE does too. Mistral Vibe is very simple to set up, and comes with a free tier. Any LLM from Mozilla is excellent for integrating any provider or model into your application.
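As an example of that last option, here’s a minimal sketch with Any LLM; we’re assuming its completion() helper and "provider/model" naming here, so check the project’s docs for the current API:

```python
# A minimal sketch, assuming Any LLM's completion() helper and its
# "provider/model" string format (check the project docs to confirm).
from any_llm import completion

response = completion(
    model="mistral/mistral-small-latest",  # swap providers by changing the prefix
    messages=[{"role": "user", "content": "Suggest alt text for a hero image."}],
)
print(response.choices[0].message.content)
```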
And as for Wagtail, we’ll keep experimenting: keeping AI out of core, while using it where it makes sense.