Hey!
Summary of this email:
- Event Bus in your Database course published on the Standard plan.
- Opus 4.5 vs GPT-5.2 vs Gemini 3 (flash) for programming: Our favorite model.
⏱️ Estimated reading time: 2 quick minutes. 🧠 Knowledge density: Packed and then some.
🚏 Event Bus in your Database course published on the Standard plan
Building an event bus in the database gives us all the advantages of asynchrony on an infrastructure we already know.
We've just published the course on the Codely Pro Standard plan. We hope you enjoy it as much as we enjoyed putting it together. 🫶
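To make the idea concrete, here's a tiny TypeScript sketch of the pattern (the names and shapes are ours, not from the course, and an in-memory array stands in for the Postgres events table):

```typescript
// Hypothetical sketch of a database-backed event bus.
// In a real setup, `events` would be a Postgres table, publish() an INSERT,
// and consumePending() a SELECT over unprocessed rows.

type DomainEvent = {
  id: number;
  name: string;
  payload: Record<string, unknown>;
  processedAt: Date | null;
};

const events: DomainEvent[] = []; // stands in for the `events` table
let nextId = 1;

// Publisher: persist the event in the same store (and, with a real DB,
// the same transaction) as the business change.
function publish(name: string, payload: Record<string, unknown>): void {
  events.push({ id: nextId++, name, payload, processedAt: null });
}

// Consumer: a worker polls for unprocessed events, handles them
// asynchronously, and marks them as processed. Returns how many it handled.
function consumePending(
  handlers: Record<string, (e: DomainEvent) => void>
): number {
  const pending = events.filter((e) => e.processedAt === null);
  for (const event of pending) {
    handlers[event.name]?.(event);
    event.processedAt = new Date();
  }
  return pending.length;
}

// Usage: publish an event, then let the worker pick it up.
publish("user.registered", { userId: "42" });
const handled = consumePending({
  "user.registered": (e) => console.log(`send welcome email to ${e.payload.userId}`),
});
```

Because the events live next to your business data, publishing is transactional for free; a real consumer would typically claim rows with `SELECT ... FOR UPDATE SKIP LOCKED` so several workers can poll safely.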
✨ Opus 4.5 vs GPT-5.2 vs Gemini 3 (flash) for programming: Our favorite model
These past few weeks, new versions of the most popular models for programming have been released. We've been testing all of them and we have opinions on which is the best for what.
🧠 Claude Opus 4.5
If we had to pick just one model for programming, it would undoubtedly be Opus 4.5.
It's the most expensive of the three, but it has a big advantage: it's very efficient.
This means that for the same task, it uses fewer tokens than the alternatives. And therefore, it can end up being cheaper than other models that seemingly have a lower price.
On top of being very efficient, it seems to be the one that's been best optimized for programming. It handles both frontend and backend tasks like a champ.
If you combine it with tools like Claude Code, it gets even more powerful, since Claude Code mitigates Opus's biggest problem: it's not very fast.
The way to mitigate it is no mystery — when it needs to explore a lot of code, Claude Code spawns sub-agents that use the Haiku model. Haiku is faster and "dumber," but for the task of exploring files it works great.
🗣️ GPT-5.2
This model is OpenAI's response to Gemini 3. It's a very powerful model, but for programming it leaves a lot to be desired: it's more expensive, slower, and not as good.
But that's because it's not a model designed for programming. OpenAI has said that in a few weeks they'll release the Codex version of the model, which we do expect to blow all the benchmarks away.
So currently, GPT-5.2 is not a model we'd recommend for programming.
That said, as a generalist model for asking questions via chat, it's excellent.
⚡ Gemini 3
Chances are, at the pace they're going, next year Gemini will be the best model for programming. It's very close to Opus 4.5 in benchmarks, but in our tests Opus 4.5 works better for us (we're biased toward our TS + PG stack).
Buuut yesterday they launched Gemini 3 flash. Being a flash model, you might think it's much "dumber." But it's not. It's very good. It's very fast. It's very efficient. It's very cheap. In many benchmarks it's nearly on par with GPT-5.2.
It's not as powerful as Opus 4.5, but the gap isn't as big as you might think. Based on the tests we've run since yesterday, our conclusions are:
- Default to trying everything with Gemini 3 flash.
- If it fails, try it with Opus 4.5.
- Use Opus 4.5 when you know something is going to be really complex.
Obviously, this applies if you're using a tool that lets you use models from different families, like Cursor or VS Code. If you're using Claude Code, use Opus 4.5 for everything (at least until they release Sonnet 4.7).
We'll be discussing all of this and more tomorrow at Café con Codely at 9 AM CET. Live on our Twitch and YouTube. See you there!
And since you've made it this far into the newsletter, here's the joke of the week, which I know you've been waiting for (this one's brand new):
> Why can't C go to a Michelin star restaurant? Because it has no class! 😂 😂 😂
See you around!