Planet Python
Last update: February 05, 2026 07:44 PM UTC
February 05, 2026
Eli Bendersky
Rewriting pycparser with the help of an LLM
pycparser is my most widely used open source project (with ~20M daily downloads from PyPI [1]). It's a pure-Python parser for the C programming language, producing ASTs inspired by Python's own. Until very recently, it's been using PLY: Python Lex-Yacc for the core parsing.
In this post, I'll describe how I collaborated with an LLM coding agent (Codex) to rewrite pycparser around a hand-written recursive-descent parser and remove the dependency on PLY. It was an interesting experience, and this post contains a lot of detail, so it's quite long; if you're just interested in the final result, check out the latest code of pycparser - the main branch already has the new implementation.
The issues with the existing parser implementation
While pycparser has been working well overall, there were a number of nagging issues that persisted over years.
Parsing strategy: YACC vs. hand-written recursive descent
I began working on pycparser in 2008, and back then using a YACC-based approach for parsing a whole language like C seemed like a no-brainer to me. Isn't this what everyone does when writing a serious parser? Besides, the K&R2 book famously carries the entire grammar of the C language in an appendix - so it seemed like a simple matter of translating that to PLY-yacc syntax.
And indeed, it wasn't too hard, though there definitely were some complications in building the ASTs for declarations (C's gnarliest part).
Shortly after completing pycparser, I got more and more interested in compilation and started learning about the different kinds of parsers more seriously. Over time, I grew convinced that recursive descent is the way to go - producing parsers that are easier to understand and maintain (and are often faster!).
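To make the contrast concrete, here's a toy recursive-descent parser for arithmetic expressions - an illustrative sketch of the technique, not pycparser's actual code. Each grammar rule maps onto one method, which is what makes this style easy to read, step through, and extend:

import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):    # expr := term (('+' | '-') term)*
        node = self.term()
        while self.peek() in ("+", "-"):
            node = (self.eat(), node, self.term())
        return node

    def term(self):    # term := factor (('*' | '/') factor)*
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node

    def factor(self):  # factor := NUMBER | '(' expr ')'
        if self.peek() == "(":
            self.eat()
            node = self.expr()
            assert self.eat() == ")", "unbalanced parentheses"
            return node
        return int(self.eat())

print(Parser(tokenize("1 + 2 * (3 - 4)")).expr())
# ('+', 1, ('*', 2, ('-', 3, 4)))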
It all ties in to the benefits of dependencies in software projects as a function of effort. Using parser generators is a heavy conceptual dependency: it's really nice when you have to churn out many parsers for small languages. But when you have to maintain a single, very complex parser, as part of a large project - the benefits quickly dissipate and you're left with a substantial dependency that you constantly grapple with.
The other issue with dependencies
And then there are the usual problems with dependencies; dependencies get abandoned, and they may also develop security issues. Sometimes, both of these become true.
Many years ago, pycparser forked and started vendoring its own version of PLY. This was part of transitioning pycparser to a dual Python 2/3 code base when PLY was slower to adapt. I believe this was the right decision, since PLY "just worked" and I didn't have to deal with active (and very tedious in the Python ecosystem, where packaging tools are replaced faster than dirty socks) dependency management.
A couple of weeks ago this issue was opened for pycparser. It turns out that some old PLY code triggers security checks used by some Linux distributions; while this code was fixed in a later commit of PLY, PLY itself was apparently abandoned and archived in late 2025. And guess what? That happened in the middle of a large rewrite of the package, so re-vendoring the pre-archiving commit seemed like a risky proposition.
On the issue it was suggested that "hopefully the dependent packages move on to a non-abandoned parser or implement their own"; I originally laughed this idea off, but then it got me thinking... which is what this post is all about.
Growing complexity of parsing a messy language
The original K&R2 grammar for C had - famously - a single shift-reduce conflict, having to do with a dangling else belonging to the most recent if statement. And indeed, other than the famous lexer hack used to deal with C's type name / ID ambiguity, pycparser only had this single shift-reduce conflict.
But things got more complicated. Over the years, features were added that weren't strictly in the standard but were supported by all the industrial compilers. The more advanced C11 and C23 standards weren't beholden to the promises of conflict-free YACC parsing (since almost no industrial-strength compilers use YACC at this point), so all caution went out of the window.
The latest (PLY-based) release of pycparser has many reduce-reduce conflicts [2]; these are a severe maintenance hazard because the parsing rules essentially have to be tie-broken by their order of appearance in the code. This is very brittle; pycparser has only managed to maintain its stability and quality through its comprehensive test suite. Over time, it became harder and harder to extend, because YACC parsing rules have all kinds of spooky-action-at-a-distance effects. The straw that broke the camel's back was this PR, which again proposed to increase the number of reduce-reduce conflicts [3].
This - again - prompted me to think "what if I just dump YACC and switch to a hand-written recursive descent parser", and here we are.
The mental roadblock
None of the challenges described above are new; I've been pondering them for many years now, and yet biting the bullet and rewriting the parser didn't feel like something I'd like to get into. By my private estimates it'd take at least a week of deep heads-down work to port the gritty 2000 lines of YACC grammar rules to a recursive descent parser [4]. Moreover, it wouldn't be a particularly fun project either - I didn't feel like I'd learn much new, and my interests have shifted away from this project. In short, the potential well was just too deep.
Why would this even work? Tests
I've definitely noticed the improvement in capabilities of LLM coding agents in the past few months, and many reputable people online rave about using them for increasingly larger projects. That said, would an LLM agent really be able to accomplish such a complex project on its own? This isn't just a toy, it's thousands of lines of dense parsing code.
What gave me hope was the concept of conformance suites mentioned by Simon Willison. Agents seem to do well when there's a very clear and rigid goal function - such as a large, high-coverage conformance test suite.
And pycparser has a very extensive one: over 2500 lines of test code parsing various C snippets to ASTs with expected results, grown over a decade and a half of real issues and bugs reported by users.
I figured the LLM can either succeed or fail and throw its hands up in despair, but it's quite unlikely to produce a wrong port that would still pass all the tests. So I set it to run.
The initial port
I fired up Codex in pycparser's repository, and wrote this prompt just to make sure it understands me and can run the tests:
This repository implements a Python parser producing an AST. It has a very
extensive test suite; can you first verify you successfully run all the tests?
(with python3 -m unittest discover, from the root dir)
Codex figured it out (I gave it the exact command, after all!); my next prompt was the real thing [5]:
pycparser uses the PLY library for its lexer and parser, and its own AST
package for the ASTs created by the parser. I want to explore replacing the
Parser component (leaving the Lexer alone for now) with a hand written
recursive descent parser that will pass all the same tests. Can you try that?
Here Codex went to work and churned for over an hour. Having never observed an agent work for nearly this long, I kind of assumed it had gone off the rails and would fail sooner or later. So I was rather surprised and skeptical when it eventually came back with:
Replaced the PLY parse path with a handwritten recursive‑descent parser while
keeping the existing lexer and AST helpers intact; CParser now points to the
new RD implementation, with the old PLY version preserved as PLYCParser in
pycparser/c_parser.py. The RD parser includes a token stream, declarator/type
parsing, expression precedence handling, pragmas, and statement parsing to
match the previous AST output and coordinate behavior.
It took me a while to poke around the code and run it until I was convinced - it had actually done it! It wrote a new recursive descent parser with only ancillary dependencies on PLY, and that parser passed the test suite. After a few more prompts, we removed the ancillary dependencies and made the structure clearer. I hadn't looked too deeply into code quality at this point, but at least on the functional level - it succeeded. This was very impressive!
A quick note on reviews and branches
A change like the one described above is impossible to code-review as one PR in any meaningful way, so I used a different strategy. Before embarking on this path, I created a new branch, and once Codex finished the initial rewrite, I committed the change, knowing that I would review it in detail, piece by piece, later on.
Even though coding agents have their own notion of history and can "revert" certain changes, I felt much safer relying on Git. In the worst case if all of this goes south, I can nuke the branch and it's as if nothing ever happened. I was determined to only merge this branch onto main once I was fully satisfied with the code. In what follows, I had to git reset several times when I didn't like the direction in which Codex was going. In hindsight, doing this work in a branch was absolutely the right choice.
The long tail of goofs
Once I'd sufficiently convinced myself that the new parser actually worked, I used Codex to similarly rewrite the lexer and get rid of the PLY dependency entirely, deleting it from the repository. Then, I started looking more deeply into code quality - reading the code created by Codex and trying to wrap my head around it.
And - oh my - this was quite the journey. Much has been written about the code produced by agents, and much of it seems to be true. Maybe it's a setting I'm missing (I'm not using my own custom AGENTS.md yet, for instance), but Codex seems to be that eager programmer that wants to get from A to B whatever the cost. Readability, minimalism and code clarity are very much secondary goals.
Using raise/except for control flow? Yep. Abusing Python's dynamic typing (like having None, False and other values all mean different things for a given variable)? For sure. Spreading the logic of a complex function all over the place instead of putting all the key parts in a single match statement? You bet.
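To illustrate the value-overloading pattern, here's a made-up sketch (not actual Codex output):

# Hypothetical example: one return value where None, False and a real
# object each carry a distinct meaning - every caller must know the lore.
def lookup_type(name, table):
    if name not in table:
        return None        # "never heard of this name"
    if table[name] is None:
        return False       # "declared, but not yet resolved"
    return table[name]     # the actual type object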
Moreover, the agent is hilariously lazy. More than once I had to convince it to do something it initially said was impossible, and even insisted on in follow-up messages. The anthropomorphization here is mildly concerning, to be honest. I could never have imagined I would be writing something like the following to a computer, and yet - here we are: "Remember how we moved X to Y before? You can do it again for Z, definitely. Just try".
My process was to see how far I could get by instructing Codex to fix things, intervening myself (by rewriting code) as little as possible. I mostly succeeded in this, doing maybe 20% of the work myself.
My branch grew dozens of commits, falling into roughly these categories:
1. The code in X is too complex; why can't we do Y instead?
2. The use of X is needlessly convoluted; change Y to Z, and T to V in all instances.
3. The code in X is unclear; please add a detailed comment - with examples - to explain what it does.
Interestingly, after doing (3), the agent was often more effective in giving the code a "fresh look" and succeeding in either (1) or (2).
The end result
Eventually, after many hours spent in this process, I was reasonably pleased with the code. It's far from perfect, of course, but taking the essential complexities into account, it's something I could see myself maintaining (with or without the help of an agent). I'm sure I'll find more ways to improve it in the future, but I have a reasonable degree of confidence that this will be doable.
It passes all the tests, so I've been able to release a new version (3.00) without major issues so far. The only issue I've discovered is that some of CFFI's tests are overly precise about the phrasing of errors reported by pycparser; this was an easy fix.
The new parser is also faster, by about 30% based on my benchmarks! This is typical of recursive descent when compared with YACC-generated parsers, in my experience. After reviewing the initial rewrite of the lexer, I've spent a while instructing Codex on how to make it faster, and it worked reasonably well.
Followup - static typing
While working on this, it became quite obvious that static typing would make the process easier. LLM coding agents really benefit from closed loops with strict guardrails (e.g. a test suite to pass), and type-annotations act as such. For example, had pycparser already been type annotated, Codex would probably not have overloaded values to multiple types (like None vs. False vs. others).
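For instance - again a generic sketch, not pycparser code - annotating the earlier lookup_type makes the None-vs-False trick a checkable error:

# With a declared return type, a checker such as mypy or ty flags the
# bool-as-third-state trick from the earlier sketch.
def lookup_type(name: str, table: dict[str, str | None]) -> str | None:
    if name not in table:
        return None
    # return False  # error: bool is incompatible with "str | None"
    return table[name]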
In a followup, I asked Codex to type-annotate pycparser (running checks using ty), and this was also a back-and-forth because the process exposed some issues that needed to be refactored. Time will tell, but hopefully it will make further changes in the project simpler for the agent.
Based on this experience, I'd bet that coding agents will be somewhat more effective in statically typed languages like Go, TypeScript and especially Rust.
Conclusions
Overall, this project has been a really good experience, and I'm impressed with what modern LLM coding agents can do! While there's no reason to expect that progress in this domain will stop, even if it does - these are already very useful tools that can significantly improve programmer productivity.
Could I have done this myself, without an agent's help? Sure. But it would have taken me much longer, assuming that I could even muster the will and concentration to engage in this project. I estimate it would take me at least a week of full-time work (so 30-40 hours) spread over who knows how long to accomplish. With Codex, I put in an order of magnitude less work into this (around 4-5 hours, I'd estimate) and I'm happy with the result.
It was also fun. At least in one sense, my professional life can be described as the pursuit of focus, deep work and flow. It's not easy for me to get into this state, but when I do I'm highly productive and find it very enjoyable. Agents really help me here. When I know I need to write some code and it's hard to get started, asking an agent to write a prototype is a great catalyst for my motivation. Hence the meme at the beginning of the post.
Does code quality even matter?
One can't avoid a nagging question - does the quality of the code produced by agents even matter? Clearly, the agents themselves can understand it (if not today's agent, then at least next year's). Why worry about future maintainability if the agent can maintain it? In other words, does it make sense to just go full vibe-coding?
This is a fair question, and one I don't have an answer to. Right now, for projects I maintain and stand behind, it seems obvious to me that the code should be fully understandable and accepted by me, and the agent is just a tool helping me get to that state more efficiently. It's hard to say what the future holds here; it's going to be interesting, for sure.
[1] pycparser has a fair number of direct dependents, but the majority of downloads comes through CFFI, which itself is a major building block for much of the Python ecosystem.
[2] The table-building report says 177, but that's certainly an over-dramatization because it's common for a single conflict to manifest in several ways.
[3] It didn't help the PR's case that it was almost certainly vibe coded.
[4] There was also the lexer to consider, but this seemed like a much simpler job. My impression is that in the early days of computing, lex gained prominence because of strong regexp support which wasn't very common yet. These days, with excellent regexp libraries existing for pretty much every language, the added value of lex over a custom regexp-based lexer isn't very high. That said, it wouldn't make much sense to embark on a journey to rewrite just the lexer; the dependency on PLY would still remain, and besides, PLY's lexer and parser are designed to work well together. So it wouldn't help me much without tackling the parser beast.
[5] I decided to ask it to port the parser first, leaving the lexer alone. This was to split the work into reasonable chunks. Besides, I figured that the parser is the hard job anyway - if it succeeds in that, the lexer should be easy. That assumption turned out to be correct.
February 05, 2026 11:38 AM UTC
PyBites
Building Useful AI with Asif Pinjari
I interview a lot of professionals and developers, from 20-year veterans to people just starting out on their Python journey.
But my conversation with Asif Pinjari was different.
Asif is still a student (and a Teaching Assistant) at Northern Arizona University. Usually, when I talk to people at this stage of their life and career, they’re completely focused on passing tests or mastering syntax.
Asif, on the other hand, is doing something else: he’s doing things the Pybites way! He’s building with a focus on providing value.
We spent a lot of time discussing a problem I’m seeing quite often now: developers who limit themselves with AI. That is, they learn how to make an API call to OpenAI and call it a day.
But as Asif pointed out during the show, that’s not engineering. That’s just wrapping a product.
Using AI Locally
One of the most impressive parts of our chat was Asif’s take on privacy. He thought ahead and isn’t just blindly sending data to ChatGPT. He’s already caught on to the very real business constraint: Companies are terrified of their data leaking.
Instead of letting that be a barrier, he went down the rabbit hole of Local LLMs (using tools like Ollama) to build systems that run privately.
This to me is the difference between a student mindset and an engineer mindset.
- Student: “How do I get the code to run?”
- Engineer: “How do I solve the user’s problem as safely and securely as possible?”
It Started With a Calculator
We also traced this back to his childhood. Asif told a great story about being a kid and just staring at a calculator, trying to figure out how it knew the answer.
It reminded me that that kind of curiosity (the desire to look under the hood and understand the nuts and bolts) is exactly what’s missing these days. Living and breathing tech from a young age is exactly why so many of us got into tech in the first place!
Enjoy this episode. It’s inspired me to keep building, that’s for sure!
– Julian
Listen and Subscribe Here
February 05, 2026 10:20 AM UTC
Stéphane Wirtel
Certified AI Developer - A Journey of Growth, Grief, and New Beginning
✨ The Certification Just Arrived!
I just received word from Alyra – I’ve officially completed the AI Developer Certification, successfully mastering the fundamentals of Machine Learning and Deep Learning! 🎉
These past three months have been nothing short of transformative. Not just technically, but personally too. This certification represents far more than finishing a course — it’s about rediscovering mathematics, overcoming personal challenges, and proving to myself that I can learn deeply, even when everything feels impossible.
February 05, 2026 12:00 AM UTC
Peter Hoffmann
Local macOS Dev Setup: dnsmasq + caddy-snake for python projects
When working on a single web project, running flask run on a fixed port is
usually more than sufficient. However, as soon as you start developing
multiple services in parallel, this approach quickly becomes cumbersome:
ports collide, you have to remember which service runs on which port, and you
end up constantly starting, stopping, and restarting individual development
servers by hand.
Using a wildcard local domain (*.lan) through dnsmasq and a vhost proxy with
proper WSGI services solves these problems cleanly. Each project gets a stable,
memorable local subdomain instead of a port number, services can run side by
side without collisions, and process management becomes centralized and
predictable. The result is a local development setup with less friction.
dnsmasq on macOS Sonoma (Local DNS with .lan)
This is a concise summary of how to install and configure dnsmasq on macOS
Sonoma to resolve local development domains using a .lan wildcard (e.g.
*.lan → 127.0.0.1).
1. Install dnsmasq
Using Homebrew:
brew install dnsmasq
Homebrew (on Apple Silicon) installs dnsmasq and places the default config in:
/opt/homebrew/etc/dnsmasq.conf
2. Configure dnsmasq
Edit the configuration file:
sudo vim /opt/homebrew/etc/dnsmasq.conf
Add the following:
# Listen only on localhost
listen-address=127.0.0.1
bind-interfaces
# DNS port
port=53
# Wildcard domain for local development
address=/.lan/127.0.0.1
This maps any *.lan hostname to 127.0.0.1.
It's recommended not to use .dev, as it is a real Google-owned TLD and browsers
are hard-coded to use HTTPS for it. Also, don't use .local, as it is reserved
for mDNS (Bonjour).
3. Tell macOS to use dnsmasq
macOS ignores /etc/resolv.conf, so DNS must be configured per network interface.
Option A: System Settings (GUI)
- System Settings → Network
- Select your active interface (Wi-Fi / Ethernet)
- Details → DNS
- Add:
127.0.0.1
- Move it to the top of the DNS server list
Option B: Command line
networksetup -setdnsservers Wi-Fi 127.0.0.1
(Replace Wi-Fi with the correct interface name if needed.)
4. Start dnsmasq
Run it as a background service:
sudo brew services start dnsmasq
Or run it manually for debugging:
sudo dnsmasq --no-daemon
5. Flush DNS cache
This step is required on Sonoma:
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder
6. Test
dig foo.lan
ping foo.lan
Both should resolve to:
127.0.0.1
Result
You now have:
- dnsmasq running on 127.0.0.1
- Wildcard local DNS via *.lan
- Fully compatible behavior with macOS Sonoma
Caddy
Note: My initial plan was to use caddy with caddy-snake to run multiple
vhosts for python wsgi apps with the configuration below. But this did
not work out as expected because caddy-snake does not run multiple python
interpreters for the different projects, but only appends the site packages
from the python projects to sys.path for all projects and runs all of them in
the same python interpreter. This leads to problems with different python
versions, or incompatible Python requirements installed in the different venvs.
So the approach below only works if all apps use the same Python version and
their requirements are mutually compatible.
Caddyfile: host multiple WSGI services
As we want caddy to run wsgi services we need to build caddy-snake:
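One way to do this - assuming the plugin's module path is github.com/mliezun/caddy-snake (check the caddy-snake README for the canonical path and flags) - is via xcaddy; caddy-snake embeds CPython, hence CGO:

brew install go xcaddy
CGO_ENABLED=1 xcaddy build --with github.com/mliezun/caddy-snake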
The caddyfile now needs to be stored in $(brew --prefix)/etc/Caddyfile
{
auto_https off
}
http://foo.lan {
bind 127.0.0.1
route {
python {
module_wsgi app:app
working_dir /Users/you/dev/foo
venv /Users/you/dev/foo/.venv
}
}
log {
output stdout
format console
}
}
http://bar.lan {
bind 127.0.0.1
route {
python {
module_wsgi app:app
working_dir /Users/you/dev/bar
venv /Users/you/dev/bar/.venv
}
}
}
February 05, 2026 12:00 AM UTC
February 04, 2026
Django Weblog
Recent trends in the work of the Django Security Team
Yesterday, Django issued security releases mitigating six vulnerabilities of varying severity. Django is a secure web framework, and that hasn’t changed. What feels new is the remarkable consistency across the reports we receive now.
Almost every report now is a variation on a prior vulnerability. Instead of uncovering new classes of issues, these reports explore how an underlying pattern from a recent advisory might surface in a similar code path or under a slightly different configuration. These reports are often technically plausible but only sometimes worth fixing. Over time, this has shifted the Security Team’s work away from discovery towards deciding how far a given precedent should extend and whether the impact of the marginal variation rises to the level of a vulnerability.
Take yesterday’s releases:
We patched a “low” severity user enumeration vulnerability in the mod_wsgi authentication handler (CVE-2025-13473). It’s a straightforward variation on CVE-2024-39329, which affected authentication more generally.
We also patched two potential denial-of-service vulnerabilities when handling large, malformed inputs. One exploits inefficient string concatenation in header parsing under ASGI (CVE-2025-14550). Concatenating strings in a loop is known to be slow, and we’ve done fixes in public where the impact is low. The other one (CVE-2026-1285) exploits deeply nested entities. December’s vulnerability in the XML serializer (CVE-2025-64460) was about those very two themes.
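To make the concatenation point concrete, here is a generic sketch (not Django's actual code) of the quadratic pattern versus the linear one:

# Each iteration copies the whole accumulated string: O(n**2) overall.
def combine_slow(parts):
    out = ""
    for p in parts:
        out = out + p
    return out

# Accumulating in a list and joining once is linear.
def combine_fast(parts):
    return "".join(parts)

assert combine_slow(["a", "b", "c"]) == combine_fast(["a", "b", "c"]) == "abc"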
Finally, we also patched three potential SQL injection vulnerabilities. One envisioned a developer passing unsanitized user input to a niche feature of the PostGIS backend (CVE-2026-1207), much like CVE-2020-9402. Our security reporting policy assumes that developers are aware of the risks when passing unsanitized user input directly to the ORM. But the division between SQL statements and parameters is well ingrained, and the expectation is that Django will not fail to escape parameters. The last two vulnerabilities (CVE-2026-1287 and CVE-2026-1312) targeted user-controlled column aliases, the latest in a stream of reports stemming from CVE-2022-28346, involving unpacking **kwargs into .filter() and friends, including four security releases in a row in late 2025. You might ask, “who would unpack **kwargs into the ORM?!” But imagine letting users name aggregations in configurable reports. You would have something more like a parameter, and so you would appreciate some protection against crafted inputs.
On top of all that, on a nearly daily basis we get reports duplicating other pending reports, or even reports about vulnerabilities that have already been fixed and publicized. Clearly, reporters are using LLMs to generate (initially) plausible variations.
Security releases come with costs to the community. They interrupt our users’ development workflows, and they also severely interrupt ours.
There are alternatives. The long tail of reports about user-controlled aliases presents an obvious one: we can just re-architect that area. (Thanks to Simon Charette for a pull request doing just that!) Beyond that, there are more drastic alternatives. We can confirm fewer vulnerabilities by placing a higher value on a user's duty to validate inputs, placing a lower value on our prior precedents, or fixing lower severity issues publicly. The risk there is underreacting, or seeing our development workflow disrupted anyway when a decision not to confirm a vulnerability is challenged.
Reporters are clearly benefiting from our commitment to being consistent. For the moment, the Security Team hopes that reacting in a consistent way—even if it means sometimes issuing six patches—outweighs the cost of the security process. It’s something we’re weighing.
As always, keep the responsibly vetted reports coming to security@djangoproject.com.
February 04, 2026 04:00 PM UTC
Python Morsels
Is it a class or a function?
If a callable feels like a function, we often call it a function... even when it's not!
Classes are callable in Python
To create a new instance of a class in Python, we call the class:
>>> from collections import Counter
>>> counts = Counter()
Calling a class returns a new instance of that class:
>>> counts
Counter()
That is, an object whose type is that class:
>>> type(counts)
<class 'collections.Counter'>
In Python, the syntax for calling a function is the same as the syntax for making a new class instance.
Both functions and classes can be called. Functions are called to evaluate the code in that function, and classes are called to make a new instance of that class.
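Both kinds of calls look identical at the call site; a small sketch:

>>> from collections import Counter
>>> def make_counts():
...     return Counter()
...
>>> counts_a = Counter()        # calling a class: returns a new instance
>>> counts_b = make_counts()    # calling a function: runs its body
>>> callable(Counter), callable(make_counts)
(True, True)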
print
Is print a class or …
Read the full article: https://www.pythonmorsels.com/class-or-function/
February 04, 2026 03:30 PM UTC
Robin Wilson
Pharmacy late-night opening hours analysis featured in the Financial Times
Some data analysis I’ve done has been featured in the Financial Times today – see this article (the link may not work any more unless you have an FT subscription – sorry).
The brief story is that I had terrible back pain over Christmas, and spoke to an out-of-hours GP on the phone who prescribed me some muscle relaxant and some strong painkillers. This was at 9pm on a Sunday, so I asked her where my wife could go to pick up the prescription and was told that the closest pharmacy that was open was about 45-60mins drive away.
I’ve lived in Southampton for 18 years now, and years ago I knew there were multiple pharmacies in Southampton that were open until late at night – and one that I think was open 24 hours a day. I was intrigued to see how much this had changed, and managed to find a NHS dataset giving opening hours for all NHS community pharmacies. These data are available back to 2022 – so only ~3 years ago – but I thought I’d have a look.
The results shocked me: between 2022 and 2025, the number of pharmacies open past 9pm on a week day has decreased by approximately 95%! And there are now large areas of the country where there are no pharmacies open past 9pm on any week day.
I mentioned this to some friends, one of them put me in touch with a journalist, I sent them the data and was interviewed by them and the result is this article. In case you can’t access it, I’ll quote a couple of the key paragraphs here:
The number of late-night pharmacies in England has fallen more than 90 per cent in three years, according to an analysis of NHS data, raising fresh concerns about patient access to out-of-hours care.
…
The analysis of official NHS records was carried out by geospatial software engineer Robin Wilson and verified by the FT. Wilson, from Southampton, crunched the numbers after his wife had to make a two-hour round trip to collect his prescription for back pain that left him “immobilised”.
I’ll aim to write another post about how I did the analysis sometime, but it was mostly carried out using Python and pandas, along with some maps via GeoPandas and Folium. The charts and maps in the article were produced by the FT in their house style.
February 04, 2026 02:10 PM UTC
Real Python
Why You Should Attend a Python Conference
The idea of attending a Python conference can feel intimidating. You might wonder if you know enough, if you’ll fit in, or if it’s worth your time and money. In this guide, you’ll learn about the different types of Python conferences, what they actually offer, who they’re for, and how attending one can support your learning, confidence, and connection to the wider Python community.
Prerequisites
This guide is for all Python users who want to grow their Python knowledge, get involved with the Python community, or explore new professional opportunities. Your level of experience with Python doesn’t matter, and neither does whether you use Python professionally or as a hobbyist—regularly or only from time to time. If you use Python, you’re a Python developer, and Python conferences are for Python developers!
Brett Cannon, a CPython core developer, once said:
I came for the language, but I stayed for the community. (Source)
If you want to experience this feeling firsthand, then this guide is for you.
Get Your PDF: Click here to download a PDF with practical tips to help you feel prepared for your first Python conference.
Understand What Python Conferences Actually Offer
Attending a Python conference offers several distinct benefits that generally fall into three categories:
- Personal growth: Learn new concepts, tools, and best practices through talks, tutorials, and hands-on sessions that help you deepen your Python skills and build confidence.
- Community involvement: Meet other Python users in person, connect with open-source contributors and maintainers, and experience the collaborative culture that defines the Python community.
- Professional opportunities: Discover potential job openings, meet companies using Python across industries, and form professional connections that can lead to future projects or roles.
The following sections explore each category in more detail to help you recognize what matters most to you when choosing a Python conference.
Personal Growth
One of the biggest benefits of attending a Python conference is the opportunity for personal growth through active learning and engagement.
Python conferences are organized around a program of talks, tutorials, and collaborative sessions that expose you to new ideas, tools, and ways of thinking about Python. The number of program items can range from one at local meetups to over one hundred at larger conferences like PyCon US and EuroPython.
At larger events, you’re exposed to a wide breadth of topics to choose from, while at smaller events, you have fewer options but can usually attend all the sessions you’re interested in. Conference talks are an excellent opportunity to get exposed to new ideas, hear about new tools, or just listen to someone else talk about a topic you’re familiar with, which can be a very educational experience!
Most of these talks are later shared on YouTube, but attending in person allows you to participate in live Q&A sessions where speakers answer audience questions directly. You also have the chance to meet the speaker after the talk and ask follow-up questions that wouldn’t be possible when watching a recording.
Tutorials, on the other hand, are rarely recorded. They tend to be longer than talks and focus on hands-on coding, making them a brilliant way to gain practical, working knowledge of a Python feature or tool. Working through exercises with peers and asking questions in real time can help solidify your understanding of a topic.
Some conferences also include collaborative sprint events, where you get together with other attendees to contribute to open-source projects, typically with the guidance of the project maintainers themselves:
EuroPython Attendees Collaborating During the Sprints (Image: Braulio Lara)
Participating in sprints under the mentorship of the project maintainers is a great way to boost your confidence in your skills and get some open-source contributions under your belt.
Community Involvement
Developers are used to collaborating on open-source projects with people around the world, but working together online isn’t the same as having a face-to-face conversation. Python conferences fill that gap by giving developers a dedicated space to meet and connect in person.
Read the full article at https://realpython.com/python-conference-guide/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 04, 2026 02:00 PM UTC
Seth Michael Larson
Dumping Nintendo e‑Reader Card “ROMs”
The Nintendo e‑Reader was a peripheral released for the Game Boy Advance in 2001. The Nintendo e‑Reader allowed scanning “dotcode strips” to access extra content within games or to play mini-games. Today I'll show you how to use the GB Operator, a Game Boy ROM dumping tool, in order to access the ROM encoded onto e‑Reader card dotcodes.
I'll be demonstrating using a new entrant to e‑Reader game development for the venerable platform: Retro Dot Codes by Matt Greer. Matt regularly posts about his process developing and printing e‑Reader cards and games in 2026. I was a recipient for one of his free e‑Reader card giveaways and purchased Retro Dot Cards “Series 1” pack which I'm planning to open and play for the blog.
Dumping a Nintendo e-Reader card contents
The process is straightforward but requires a working GBA or equivalent (GBA, GBA SP, Game Boy Player, DS, or Analogue Pocket *), a Nintendo e-Reader cartridge, and a GBA ROM dumper like GB Operator. Launch the e‑Reader cartridge using a Game Boy Advance, Analogue Pocket, or Game Boy Player. The e-Reader software prompts you to scan the dotcodes.
The Solitaire card stores its program data on two “long” dotcode strips consisting of 28 “blocks” per-strip encoding 104 bytes-per-block for a total of 5824 bytes on two strips (2×28×104=5824). If you want to see approximately how large a dotcode strip is, open this page in a desktop browser. After scanning each side of the Solitaire card you can play Solitaire on your console:
I'll be honest, I was never into Solitaire as a kid, I was more “Space Cadet Pinball” on Windows... Anyways, let's archive the “ROM” of this game so even if we lose the physical card we can still play.
Turn off your device and connect the e‑Reader cartridge to the GB Operator. Following the steps I documented for Ubuntu, start “Epilogue Playback” software and dump the e‑Reader ROM and critically: the save data. The Nintendo e‑Reader supports saving your already scanned game in the save data of the cartridge so you can play again next time you boot without needing to re-scan the cards.
Now we have a .sav file. This file works as an archive of the
program, as we can load our e-Reader ROM and this save into a GBA emulator to play again.
Success!
Examining e-Reader Card ROMs
Now that we have the .sav file for the Solitaire ROM, let's see
what we can find inside. The file itself
is mostly empty, consisting almost entirely of 0xFF and 0x00 bytes:
>>> data = open("Solitaire.sav", "rb").read()
>>> len(data), hex(len(data))
(131072, '0x20000')
>>> sum(b == 0xFF for b in data)
118549
>>> sum(b == 0x00 for b in data)
8200
We know from the data limits of 2 dotcode strips that there's only 5824 bytes maximum for program data. If we look at the format of e‑Reader save files documented at caitsith2.com we can see what this data means. I've duplicated the page below, just in case:
ereader save format.txt
US E-reader save format
Base Address = 0x00000 (Bank 0, address 0x0000)
Offset Size Description
0x0000 0x6FFC Bank 0 continuation of save data.
0x6FFD 0x6004 All 0xFF
0xD000 0x0053 43617264 2D452052 65616465 72203230
30310000 67B72B2E 32333333 2F282D2E
31323332 302B2B30 31323433 322F2A2C
30333333 312F282C 30333233 3230292D
30303131 2F2D2320 61050000 80FD7700
000001
0xD053 0x0FAD All 0x00s
0xE000 0x1000 Repeat 0xD000-0xDFFF
0xF000 0x0F80 All 0xFFs
0xFF80 0x0080 All 0x00s
Base Address = 0x10000 (Bank 1, address 0x0000)
Offset Size Description
0x0000 0x04 CRC (calculated starting from 0x0004, and amount of data to calculate is
0x30 + [0x002C] + [0x0030].)
0x0004 0x24 Program Title (Null terminated) - US = Straight Ascii, Jap = Shift JIS
0x0028 0x04 Program Type
0x0204 = ARM/THUMB Code/data (able to use the GBA hardware directly, Linked to 0x02000000)
0x0C05 = 6502 code/data (NES limitations, 1 16K program rom + 1-2 8K CHR rom, mapper 0 and 1)
0x0400 = Z80 code/data (Linked to 0x0100)
0x002C 0x04 Program Size
= First 2 bytes of Program data, + 2
0x0030 0x04 Unknown
0x0034 Program Size Program Data (vpk compressed)
First 2 bytes = Size of vpk compressed data
0xEFFF 0x01 End of save area in bank 1. Resume save data in bank 0.
The CRC is calculated on Beginning of Program Title, to End of Program Data.
If the First byte of Program Title is 0xFF, then there is no save present.
If the CRC calculation does not match stored CRC, then the ereader comes up with
an ereader memory error.
CRC calculation Details
CRC table is calculated from polynomial 0x04C11DB7 (standard CRC32 polynomial)
with Reflection In. (Table entry 0 is 0, and 1 is 0x77073096...)
CRC calculation routine uses Initial value of 0xAA478422. The Calculation routine
is not a standard CRC32 routine, but a custom made one, Look in "crc calc.c" for
the complete calculation algorithm.
Revision history
v1.0 - First release
V1.1 - Updated/Corrected info about program type.
v1.2 - Updated info on Japanese text encoding
v1.3 - Info on large 60K+ vpk files.
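The "Reflection In" table described above is the standard reflected CRC-32 table (0xEDB88320 is 0x04C11DB7 bit-reversed); a quick check in Python reproduces the first entries the spec mentions, though the full checksum still needs the custom routine referenced above:

>>> def make_crc_table():
...     table = []
...     for i in range(256):
...         crc = i
...         for _ in range(8):
...             crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
...         table.append(crc)
...     return table
...
>>> table = make_crc_table()
>>> table[0], hex(table[1])
(0, '0x77073096')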
From this format specification we can see that
the program data starts around offset 0x10000
with the CRC, the program title, type, size,
and the program data which is compressed
using the VPK0 compression algorithm.
Searching through our save data, sure enough we see some
data at the offsets we expect like the program title and the
VPK0 magic bytes vpk0:
>>> hex(data.index(b"Solitaire\x00"))
'0x10004'
>>> hex(data.index(b"vpk0"))
'0x10036'
We know that the VPK0-compressed blob length is encoded in the two bytes before the magic header, little-endian. Let's grab that value and write the VPK0-compressed blob to a new file:
>>> vpk_idx = data.index(b"vpk0")
>>> vpk_len = int.from_bytes(
... data[vpk_idx-2:vpk_idx], "little")
>>> with open("Solitaire.vpk", "wb") as f:
... f.write(data[vpk_idx:vpk_idx+vpk_len])
In order to decompress the program data we'll need
a tool that can decompress VPK0. The e‑Reader development
tools repository points to nevpk.
You can download the source code
for multi-platform support and compile using cmake:
curl -L https://github.com/breadbored/nedclib/archive/749391c049756dc776b313c87da24b7f47b78eea.zip \
    -o nedclib.zip
unzip nedclib.zip
cd nedclib-749391c049756dc776b313c87da24b7f47b78eea
cmake . && make
# Now we can use the freshly built nevpk to decompress the program
# (adjust paths to where Solitaire.vpk lives and where the build put nevpk).
nevpk -d -i Solitaire.vpk -o Solitaire.bin
md5sum Solitaire.bin
3a898e8e1aedc091acbf037db6683a41  Solitaire.bin
This Solitaire.bin file is the original binary that Matt compiled
before compressing, adding headers, and printing the
program onto physical cards. Pretty cool that we can
reverse the process this far!
Nintendo e-Reader and Analogue Pocket
The Analogue Pocket is a hardware emulator that uses an FPGA to emulate multiple retro gaming consoles, including the GBA. One of the prominent features of this device is its cartridge slot, allowing you to play cartridges without dumping them to ROM files first.
But there's just one problem with using the Analogue Pocket with the Nintendo e-Reader. The cartridge slot is placed low on the device, making it impossible to insert oddly-shaped cartridges like the Nintendo e-Reader. Enter the E-Reader Extender! This project by Brian Hargrove extends the cartridge slot while giving your Analogue Pocket a big hug.
Playing Nintendo e-Reader games on Delta Emulator
The best retro emulator is the one you bring with you; for this reason, the Delta Emulator is my emulator of choice, as it runs on iOS. However, there are challenges to running e-Reader games on Delta: specifically, Delta only allows one save file per GBA ROM. This means that to change games you'd need to import a new e-Reader save file. Delta stores ROMs and saves by the ROM's checksum (MD5).
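For reference, computing that digest is a one-liner (the filename here is hypothetical):

>>> import hashlib
>>> with open("ereader.gba", "rb") as f:
...     print(hashlib.md5(f.read()).hexdigest())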
Thanks for keeping RSS alive! ♥
February 04, 2026 12:00 AM UTC
February 03, 2026
PyCoder’s Weekly
Issue #720: Subprocess, Memray, Callables, and More (Feb. 3, 2026)
#720 – FEBRUARY 3, 2026
Ending 15 Years of subprocess Polling
Python’s standard library subprocess module relies on busy-loop polling to determine whether a process has completed yet. Modern operating systems have callback mechanisms to do this, and Python 3.15 will now take advantage of these.
GIAMPAOLO RODOLA
Django: Profile Memory Usage With Memray
Memory usage can be hard to keep under control in Python projects. Django projects can be particularly susceptible to memory bloat, as they may import many large dependencies. Learn how to use memray to learn what is going on.
ADAM JOHNSON
B2B Authentication for any Situation - Fully Managed or BYO
What your sales team needs to close deals: multi-tenancy, SAML, SSO, SCIM provisioning, passkeys…What you’d rather be doing: almost anything else. PropelAuth does it all for you, at every stage. →
PROPELAUTH sponsor
Create Callable Instances With Python’s .__call__()
Learn Python callables: what “callable” means, how to use dunder call, and how to build callable objects with step-by-step examples.
REAL PYTHON course
Articles & Tutorials
The C-Shaped Hole in Package Management
System package managers and language package managers are solving different problems that happen to overlap in the middle. This causes complications when languages like Python depend on system libraries. This article is a deep dive on the different pieces involved and why it is the way it is.
ANDREW NESBITT
Use \z Not $ With Python Regular Expressions
The $ in a regular expression matches the end of a string, but in Python it matches both at the very end and just before a trailing \n. Python 3.14 added support for \z, which is widely supported by other regex engines, to get around this problem.
SETH LARSON
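A quick illustration of the difference (Python's existing \Z already anchors at the absolute end; per the article, 3.14 adds the \z spelling used by other engines):

>>> import re
>>> re.search(r"abc$", "abc\n")     # $ also matches before a trailing \n
<re.Match object; span=(0, 3), match='abc'>
>>> re.search(r"abc\Z", "abc\n")    # \Z (and 3.14's \z) match only the true end
>>>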
Python errors? Fix ‘em fast for FREE with Honeybadger
If you support web apps in production, you need intelligent logging with error alerts and de-duping. Honeybadger filters out the noise and transforms Python logs into contextual issues so you can find and fix errors fast. Get your FREE account →
HONEYBADGER sponsor
Speeding Up Pillow’s Open and Save
Hugo used the Tachyon profiler to examine the open and save calls in the Pillow image processing module. He found ways to optimize the calls and has submitted a PR; this post tells you about it.
HUGO VAN KEMENADE
Some Notes on Starting to Use Django
Julia has decided to add Django to her coding skills and has written some notes on her first experiences. See also the associated HN discussion.
JULIA EVANS
How Long Does It Take to Learn Python?
This guide breaks down how long it takes to learn Python, with realistic timelines, weekly study plans, and strategies to speed up your progress.
REAL PYTHON
uv Cheatsheet
uv cheatsheet that lists the most common and useful uv commands across project management, working with scripts, installing tools, and more!
MATHSPP.COM • Shared by Rodrigo Girão Serrão
What’s New in pandas 3
pandas 3.0 has just been released. This article uses a real‑world example to explain the most important differences between pandas 2 and 3.
MARC GARCIA
GeoPandas Basics: Maps, Projections, and Spatial Joins
Dive into GeoPandas with this tutorial covering data loading, mapping, CRS concepts, projections, and spatial joins for intuitive analysis.
REAL PYTHON
Things I’ve Learned in My 10 Years as an Engineering Manager
Non-obvious advice that Jampa wishes he’d learned sooner. Associated HN Discussion
JAMPA UCHOA
Django Views Versus the Zen of Python
Django’s generic class-based views often clash with the Zen of Python. Here’s why the base View class feels more Pythonic.
KEVIN RENSKERS
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
February 4, 2026
REALPYTHON.COM
Canberra Python Meetup
February 5, 2026
MEETUP.COM
Sydney Python User Group (SyPy)
February 5, 2026
SYPY.ORG
PyDelhi User Group Meetup
February 7, 2026
MEETUP.COM
PiterPy Meetup
February 10, 2026
PITERPY.COM
Leipzig Python User Group Meeting
February 10, 2026
MEETUP.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #720.
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
February 03, 2026 07:30 PM UTC
Mike Driscoll
Python Typing Book Kickstarter
Python has had type hinting support since Python 3.5, over TEN years ago! However, Python’s type annotations have changed repeatedly over the years. In Python Typing: Type Checking for Python Programmers, you will learn all you need to know to add type hints to your Python applications effectively.
You will also learn how to use Python type checkers, configure them, and set them up in pre-commit or GitHub Actions. This knowledge will give you the power to check your code and your team’s code automatically before merging, hopefully catching defects before they make it into your products.
What You’ll Learn
You will learn all about Python’s support for type hinting (annotations). Specifically, you will learn about the following topics:
- Variable annotations
- Function annotations
- Type aliases
- New types
- Generics
- Hinting callables
- Annotating TypedDict
- Annotating Decorators and Generators
- Using Mypy for type checking
- Mypy configuration
- Using ty for type checking
- ty configuration
- and more!
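To give a flavor of what these look like in practice, here's a tiny generic sketch (not an excerpt from the book; Python 3.10+):

from collections.abc import Callable

Matrix = list[list[float]]                       # a type alias

def scale(m: Matrix, factor: float) -> Matrix:   # function annotations
    return [[x * factor for x in row] for row in m]

on_done: Callable[[Matrix], None] | None = None  # annotating a callable variable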
Rewards to Choose From
There are several different rewards you can get in this Kickstarter:
- A signed paperback copy of the book (See Stretch Goals)
- An eBook copy of the book in PDF and ePub
- A t-shirt with the cover art from the book (See Stretch Goals)
- Other Python eBooks
The post Python Typing Book Kickstarter appeared first on Mouse Vs Python.
February 03, 2026 06:17 PM UTC
Django Weblog
Django security releases issued: 6.0.2, 5.2.11, and 4.2.28
In accordance with our security release policy, the Django team is issuing releases for Django 6.0.2, Django 5.2.11, and Django 4.2.28. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
CVE-2025-13473: Username enumeration through timing difference in mod_wsgi authentication handler
The django.contrib.auth.handlers.modwsgi.check_password() function for authentication via mod_wsgi allowed remote attackers to enumerate users via a timing attack.
Thanks to Stackered for the report.
This issue has severity "low" according to the Django security policy.
CVE-2025-14550: Potential denial-of-service vulnerability via repeated headers when using ASGI
When receiving duplicates of a single header, ASGIRequest allowed a remote attacker to cause a potential denial-of-service via a specifically created request with multiple duplicate headers. The vulnerability resulted from repeated string concatenation while combining repeated headers, which produced super-linear computation resulting in service degradation or outage.
Thanks to Jiyong Yang for the report.
This issue has severity "moderate" according to the Django security policy.
CVE-2026-1207: Potential SQL injection via raster lookups on PostGIS
Raster lookups on GIS fields (only implemented on PostGIS) were subject to SQL injection if untrusted data was used as a band index.
As a reminder, all untrusted user input should be validated before use.
Thanks to Tarek Nakkouch for the report.
This issue has severity "high" according to the Django security policy.
CVE-2026-1285: Potential denial-of-service vulnerability in django.utils.text.Truncator HTML methods
django.utils.text.Truncator.chars() and Truncator.words() methods (with html=True) and truncatechars_html and truncatewords_html template filters were subject to a potential denial-of-service attack via certain inputs with a large number of unmatched HTML end tags, which could cause quadratic time complexity during HTML parsing.
Thanks to Seokchan Yoon for the report.
This issue has severity "moderate" according to the Django security policy.
CVE-2026-1287: Potential SQL injection in column aliases via control characters
FilteredRelation was subject to SQL injection in column aliases via control characters, using a suitably crafted dictionary, with dictionary expansion, as the **kwargs passed to QuerySet methods annotate(), aggregate(), extra(), values(), values_list(), and alias().
Thanks to Solomon Kebede for the report.
This issue has severity "high" according to the Django security policy.
CVE-2026-1312: Potential SQL injection via QuerySet.order_by and FilteredRelation
QuerySet.order_by() was subject to SQL injection in column aliases containing periods when the same alias was, using a suitably crafted dictionary, with dictionary expansion, used in FilteredRelation.
Thanks to Solomon Kebede for the report.
This issue has severity "high" according to the Django security policy.
Affected supported versions
- Django main
- Django 6.0
- Django 5.2
- Django 4.2
Resolution
Patches to resolve these issues have been applied to Django's main, 6.0, 5.2, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2025-13473: Username enumeration through timing difference in mod_wsgi authentication handler
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2025-14550: Potential denial-of-service vulnerability via repeated headers when using ASGI
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-1207: Potential SQL injection via raster lookups on PostGIS
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-1285: Potential denial-of-service vulnerability in django.utils.text.Truncator HTML methods
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-1287: Potential SQL injection in column aliases via control characters
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
CVE-2026-1312: Potential SQL injection via QuerySet.order_by and FilteredRelation
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 4.2 branch
The following releases have been issued
- Django 6.0.2 (download Django 6.0.2 | 6.0.2 checksums)
- Django 5.2.11 (download Django 5.2.11 | 5.2.11 checksums)
- Django 4.2.28 (download Django 4.2.28 | 4.2.28 checksums)
The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
February 03, 2026 02:13 PM UTC
Real Python
Getting Started With Google Gemini CLI
This video course will teach you how to use Gemini CLI to bring Google’s AI-powered coding assistance directly into your terminal. After you authenticate with your Google account, this tool will be ready to help you analyze code, identify bugs, and suggest fixes—all without leaving your familiar development environment.
Imagine debugging code without switching between your console and browser, or picture getting instant explanations for unfamiliar projects. Like other command-line AI assistants, Google’s Gemini CLI brings AI-powered coding assistance directly into your command line, allowing you to stay focused in your development workflow.
Whether you’re troubleshooting a stubborn bug, understanding legacy code, or generating documentation, this tool acts as an intelligent pair-programming partner that understands your codebase’s context.
You’re about to install Gemini CLI, authenticate with Google’s free tier, and put it to work on an actual Python project. You’ll discover how natural language queries can help you understand code faster and catch bugs that might slip past manual review.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
February 03, 2026 02:00 PM UTC
PyBites
Coding can be super lonely
I hate coding solo.
Not in the moment or when I’m in the zone, I mean in the long run.
I love getting into that deep focus where I’m locked in and hours pass by in a second!
But I hate snapping out of it and not having anyone to chat with about it. (I’m lucky that’s not the case anymore though – thanks Bob!)
So it’s no surprise that many of the devs I chat with on Zoom calls or in person share the same sentiment.
Not everyone has a Bob though. Many people don’t have anyone in their circle that they can talk to about code.
- No one to share the hardships with.
- No one to troubleshoot problems with.
- No one to ask for a code review or feedback.
- No one to learn from experience with.
It can be a lonely experience.
And just as bad, it leads to stagnation. You can spend years coding in a silo and feel like you haven’t grown at all. That feeling of being a junior dev becomes unshakable.
When you work in isolation, you’re operating in a vacuum. Without external input, your vacuum becomes an echo chamber.
- Your bad habits become baked in.
- You don’t learn new ways of doing things (no new tricks!)
- And worst of all – you have no idea you’re even doing it.
As funny as it sounds, as devs I think we all need other devs around us who will create friction. Without the friction of other developers looking at your work, you don’t grow.
Some of my most memorable learning experiences in my first dev job were with my colleague, sharing ideas on a whiteboard and talking through code. (Thanks El!)
If you haven’t had the experience of this kind of community and support, then you’re missing out. Here’s what I want you to do this week:
- Go seek out a Code Review: Find someone more senior than you and ask them to give you their two cents on your coding logic. Note I’m suggesting logic and not your syntax. Let’s target your thought process!
- Build for Someone Else: Go build a tool for a colleague or a friend. The second another person uses your code it breaks the cycle/vacuum because you’re now accountable for the bugs, suggestions and UX.
- Public Accountability: Join our community, tell us what you’re going to build and document your progress! If no one is watching, it’s too easy to quit when the engineering gets hard (believe me, I know!).
At the end of the day, you don’t become a Senior Developer and break through to the next level of your Python journey by typing in a dark room alone (as enjoyable as that may be sometimes).
You become one by engaging with the community, sharing what you’re doing and learning from others.
If you’re stuck in a vacuum, join the community, reply to my welcome DM, and check out our community calendar.
- Sign up for our Accountability Sessions.
- Keep an eye out for Live Sessions Bob and I are hosting every couple of weeks

Julian
This was originally sent to our email list. Join here.
February 03, 2026 11:02 AM UTC
Python Bytes
#468 A bolt of Django
Topics covered in this episode:
- django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages
- pyleak
- More Django (three articles)
- Datastar
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=DhfAWhLrT78

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages (https://github.com/FarhanAliRaza/django-bolt)
- Farhan Ali Raza
- High-performance, fully typed API framework for Django
- Inspired by DRF, FastAPI, Litestar, and Robyn
- Django-Bolt docs: https://bolt.farhana.li
- Interview with Farhan on the Django Chat podcast
- And a walkthrough video: https://www.youtube.com/watch?v=Pukr-fT4MFY

Michael #2: pyleak (https://github.com/deepankarm/pyleak)
- Detect leaked asyncio tasks, threads, and event loop blocking, with stack traces, in Python. Inspired by goleak.
- Has patterns for context managers and decorators
- Checks for unawaited asyncio tasks, threads, and blocking of an asyncio loop
- Includes a pytest plugin so you can do @pytest.mark.no_leaks

Brian #3: More Django (three articles)
- Migrating From Celery to Django Tasks (Paul Traylor): a nice intro to how easy it is to get started with Django Tasks
- Some notes on starting to use Django (Julia Evans): a handful of reasons why Django is a great choice for a web framework: less magic than Rails, a built-in admin, a nice ORM, automatic migrations, nice docs, you can use SQLite in production, built-in email
- The definitive guide to using Django with SQLite in production: I’m gonna have to study this a bit. The conclusion states one of the benefits is “reduced complexity”, but it still seems like quite a bit to me.

Michael #4: Datastar (https://data-star.dev)
- Sent to us by Forrest Lanier
- Lots of work by Chris May
- Out on Talk Python soon.
- Official Datastar Python SDK: https://github.com/starfederation/datastar-python
- Datastar is a little like HTMX, but the single source of truth is your server, and events can be sent from the server automatically (using SSE), e.g.:
    yield SSE.patch_elements(f"""{(#HTML#)}{datetime.now().isoformat()}""")
- Why I switched from HTMX to Datastar (article)

Extras

Brian:
- Django Chat: Inverting the Testing Pyramid - Brian Okken (quite a fun interview)
- PEP 686 – Make UTF-8 mode default: now with status “Final” and slated for Python 3.15

Michael:
- Prayson Daniel’s Paper tracker
- Ice Cubes (open source Mastodon client for macOS)
- Rumdl for PyCharm, et al.
- cURL Gets Rid of Its Bug Bounty Program Over AI Slop Overrun
- Python Developers Survey 2026

Joke: Pushed to prod
February 03, 2026 08:00 AM UTC
Daniel Roy Greenfeld
We moved to Manila!
Last year we relocated to Metro Manila, Philippines for the foreseeable future. Audrey's mother is from here, and we wanted our daughter Uma to have the opportunity to spend time with her extended family and experience another line of her heritage.
Where are you living?
In Makati, a city that contains one of the major business districts in Metro Manila. Specifically, we’re in Salcedo Village, a neighborhood in the CBD made up of towering residential and business buildings with numerous shops, markets, and a few parks. This area allows for a walkable life, which is important to us coming from London.
What about the USA?
The USA is our homeland and we're US citizens. We still have family and friends there. We're hoping to visit the US at least once a year.
What about the UK?
We loved living in London, and have many good friends there. I really enjoyed working for Kraken Tech, but my time came to an end there so our visas were no longer valid. We hope to visit the UK (and the rest of Europe) as tourists, but without the family connection it's harder to justify than trips to the homeland.
What about your daughter?
Uma loves Manila and is in second grade at an international school within walking distance of our residence. We had looked into getting her into a local public school with a notable science program, but the paperwork required too much lead time. We do like the small class sizes at her current school, and how they accommodate the different learning speeds of students. She will probably stay there for a while.
For extracurricular activities she’s enjoying Brazilian Jiu-Jitsu, climbing, yoga, and swimming.
If I'm in Manila can I meet up with you?
Sure! Some options:
- We're long-time members of the Python Philippines community, so you can often find us at their events
- If you train in BJJ, I'm usually at Open Mat Makati quite a bit. Just let me know ahead of time so I can plan around it
- If you want to meet up for coffee, hit me up on social media. Manila is awesome for coffee shops!
February 03, 2026 06:41 AM UTC
February 02, 2026
Real Python
The Terminal: First Steps and Useful Commands for Python Developers
The terminal provides Python developers with direct control over their operating system through text commands. Instead of clicking through menus, you type commands to navigate folders, run scripts, install packages, and manage version control. This command-line approach is faster and more flexible than graphical interfaces for many development tasks.
By the end of this tutorial, you’ll understand that:
- Terminal commands like cd, ls, and mkdir let you navigate and organize your file system efficiently
- Virtual environments isolate project dependencies, keeping your Python installations clean and manageable
- pip installs, updates, and removes Python packages directly from the command line
- Git commands track changes to your code and create snapshots called commits
- The command prompt displays your current directory and indicates when the terminal is ready for input
This tutorial walks through the fundamentals of terminal usage on Windows, Linux, and macOS. The examples cover file system navigation, creating files and folders, managing packages with pip, and tracking code changes with Git.
Get Your Cheat Sheet: Click here to download a free cheat sheet of useful commands to get you started working with the terminal.
Install and Open the Terminal
Back in the day, the term terminal referred to some clunky hardware that you used to enter data into a computer. Nowadays, people are usually talking about a terminal emulator when they say terminal, and they mean some kind of terminal software that you can find on most modern computers.
Note: There are two other terms that you might hear now and then in combination with the terminal:
- A shell is the program that you interact with when running commands in a terminal.
- A command-line interface (CLI) is a program designed to run in a shell inside the terminal.
In other words, the shell provides the commands that you use in a command-line interface, and the terminal is the application that you run to access the shell.
If you’re using a Linux or macOS machine, then the terminal is already built in. You can start using it right away.
On Windows, you also have access to command-line applications like the Command Prompt. However, for this tutorial and terminal work in general, you should use the Windows terminal application instead.
Read on to learn how to install and open the terminal on Windows and how to find the terminal on Linux and macOS.
Windows
The Windows terminal is a modern and feature-rich application that gives you access to the command line, multiple shells, and advanced customization options. If you have Windows 11 or above, chances are that the Windows terminal is already present on your machine. Otherwise, you can download the application from the Microsoft Store or from the official GitHub repository.
Before continuing with this tutorial, you need to get the terminal working on your Windows computer. You can follow the Your Python Coding Environment on Windows: Setup Guide to learn how to install the Windows terminal.
After you install the Windows terminal, you can find it in the Start menu under Terminal. When you start the application, you should see a window that looks like this:
It can be handy to create a desktop shortcut for the terminal or pin the application to your taskbar for easier access.
Linux
You can find the terminal application in the application menu of your Linux distribution. Alternatively, you can press Ctrl+Alt+T on your keyboard or use the application launcher and search for the word Terminal.
After opening the terminal, you should see a window similar to the screenshot below:
How you open the terminal may also depend on which Linux distribution you’re using. Each one has a different way of doing it. If you have trouble opening the terminal on Linux, then the Real Python community will help you out in the comments below.
macOS
A common way to open the terminal application on macOS is by opening the Spotlight Search and searching for Terminal. You can also find the terminal app in the application folder inside Finder.
When you open the terminal, you see a window that looks similar to the image below:
Read the full article at https://realpython.com/terminal-commands/ »
February 02, 2026 02:00 PM UTC
February 01, 2026
Graham Dumpleton
Developer Advocacy in 2026
I got into developer advocacy in 2010 at New Relic, followed by a stint at Red Hat. When I moved to VMware, I expected things to continue much as before, but COVID disrupted those plans. When Broadcom acquired VMware, the writing was on the wall and though it took a while, I eventually got made redundant. That was almost 18 months ago. In the time since, I've taken an extended break with overseas travel and thoughts of early retirement. It's been a while therefore since I've done any direct developer advocacy.
One thing became clear during that time. I had no interest in returning to a 9-to-5 programming job in an office, working on some dull internal system. Ideally, I'd have found a company genuinely committed to open source where I could contribute to open source projects. But those opportunities are thin on the ground, and being based in Australia made it worse as such companies are typically in the US or Europe and rarely hire outside their own region.
Recently I've been thinking about getting back into developer advocacy. The job market makes this a difficult proposition though. Companies based in the US and Europe that might otherwise be good places to work tend to ignore the APAC region, and even when they do pay attention, they rarely maintain a local presence. They just send people out when they need to.
Despite the difficulties, I would also need to understand what I was getting myself into. How much had developer advocacy changed since I was doing it? What challenges would I face working in that space?
So I did what any sensible person does in 2026. I asked an AI to help me research the current state of the field. I started with broad questions across different topics, but one question stood out as an interesting starting point: What are the major forces that have reshaped developer advocacy in recent years?
This post looks at what the AI said and how it matched my own impressions.
Catching Up: What's Changed?
The AI came back with three main themes.
Force 1: AI Has Changed Everything
What the AI told me:
The data suggests a fundamental shift in how developers work. Around 84% of developers now use AI tools on a daily basis, with more than half relying on them for core development tasks. Developers are reporting 30-60% time savings on things like boilerplate generation, debugging, documentation lookup, and testing.
This has significant implications for developer advocacy. The traditional path—developer has a problem, searches Google, lands on Stack Overflow or your documentation, reads a tutorial—has been disrupted. Now, developers increasingly turn to AI assistants first. They describe their problem and get an immediate, contextual answer, often with working code included.
What this means is that your content now has two audiences: humans and AI systems. Your documentation isn't just being read by developers—it's being ingested, processed, and used by AI to generate answers. If the AI misrepresents your product or gives incorrect guidance, you may never even know it's happening.
There's even an emerging standard called llms.txt that's gaining traction. First proposed in late 2024, it's a file that tells AI systems what content to prioritise and how to interpret it. By 2026, documentation without this kind of AI-optimisation struggles to surface correctly when developers ask AI assistants for help. It's like SEO, but for language models instead of search engines.
My take:
Yep, no surprise here. I've seen this firsthand with my own mod_wsgi and wrapt open source projects, and it goes beyond what the AI described. In the past, people would ask questions via Stack Overflow, project-specific forums, or mailing lists if they did get stuck. These days, direct engagement is rare. Occasionally someone might raise a problem on GitHub, but those old communication channels have largely disappeared. People using your code just ask the AI instead.
As for the suggestion that I need to put in extra work to help AI systems, such as providing an llms.txt file, I'm not convinced. From what I can see, AI is already doing a good job of working things out on its own. Besides, project documentation, while supposedly authoritative, isn't the only source of information out there. I don't think an AI will give your llms.txt file any more credibility than the third-party content scattered across blogs, conference talks, and elsewhere.
Either way, AI has drastically changed things and now sits between a user and documentation for a project. It can be demoralising to write documentation when you don't know if a real person will ever actually read it anymore.
Force 2: Content Saturation Has Reached Crisis Levels
What the AI told me:
The volume of content being produced has exploded. Current estimates put global data production at over 403 million terabytes daily, and around 80% of developers report experiencing information overload. The symptoms show up in the data: lower click-through rates, shorter time-on-page, faster scrolling behaviour, declining email open rates.
The root cause? AI has made content creation trivially easy. Anyone can generate a tutorial, write a blog post, or produce documentation in minutes. The result is a flood of content, much of it generic, shallow, and nearly indistinguishable from everything else.
Developer communities have adapted. Developers have become sophisticated at filtering low-quality content. Generic "getting started" tutorials are largely ignored because AI can generate them on demand. Surface-level blog posts get scrolled past. The attention bar has risen significantly.
This creates a dual reality. The challenge: your content competes against unprecedented noise. The opportunity: genuinely valuable content stands out more than ever, precisely because it's rare.
My take:
I'd have to agree with the AI here. I'm definitely feeling overwhelmed by the sheer volume of content when researching topics these days. That said, I'd qualify that observation. It's particularly acute for topics with a lot of buzz around them, AI being the obvious example. The same applies when searching for information on large software projects with broad user bases, like popular programming languages or major web frameworks. But for niche areas, I don't see this as much of an issue. People aren't going to generate content for obscure projects just for the sake of it, especially when there's no hype driving attention.
When talking about developer advocacy as a job, the focus is usually a company's own product. For most companies, especially early-stage ones, they don't have the mind share yet, so they fit into that niche category.
That said, content saturation is still a real problem. I'm tired of the endless YouTube videos on the same topics and how self-proclaimed advocates jump on trends to outdo each other. Too often, the focus is just on shipping content, which means discussions end up shallow and repetitive with little substance.
Force 3: Platform Fragmentation Has Disrupted Communities
What the AI told me:
The platform landscape has shifted significantly. When you were last active, Slack dominated developer communities, Discord was primarily for gamers, and Twitter served as the town square for tech discourse.
None of that holds anymore.
Discord has overtaken Slack for developer communities. The drivers were primarily economic—Slack's pricing pushed communities toward free alternatives—and feature-related, with Discord offering better moderation tools, role management, and community features. Many developers who were daily Slack users between 2015-2020 have essentially stopped using Slack and are now primarily active in Discord channels.
The Twitter situation is more fragmented. Some developers moved to Mastodon, others to Bluesky, and many simply reduced their social media engagement altogether. LinkedIn has grown as a platform for technical content. There's no longer a single "town square" where developers reliably gather.
The practical implication: you can't rely on any single platform for community strategy. Presence across multiple spaces, with different approaches for each, is now necessary.
My take:
My age is probably showing here. The AI talks about people moving from Slack to Discord and the demise of Twitter. I still miss mailing lists. Back then, I found the asynchronous nature of mailing lists to be a much better forum for discussions with users. You could take your time understanding questions and drafting thoughtful responses. These days, with real-time discussion platforms, there's pressure to provide immediate answers, which often means less effort goes into truly understanding a user's problem.
To me, migrating between platforms to support users is inevitable, especially as technology changes. That doesn’t mean the new platforms will be better, though.
Of the disruptions, I felt the demise of Twitter most acutely. It provided more community interaction for me than any other forum. When everyone fled Twitter, I lost those connections, and the COVID-era shutdown of conferences compounded this. Overall, I no longer feel as connected to developer communities, especially the Python community, as I was in the past.
Initial Reflections
Having gone through these three forces, I'm left with mixed feelings. Nothing the AI said was really a surprise though.
The main challenge in getting back into developer advocacy is adapting to how AI has changed everything.
I don't see it as insurmountable though, especially since companies expanding their developer advocacy programs are typically niche players without a huge volume of content about their product already out there. The key is ensuring the content they do have on their own site addresses what users need, and expanding from there as necessary.
Relying solely on documentation isn't the answer either. When I've done developer advocacy in the past, I found that online interactive learning platforms could supplement documentation well. That's even more true now, as users aren't willing to spend much time reading through documentation. You need something to hook them, a way to quickly show how your product might help them. Interactive platforms where they can experiment with a product without installing it locally can make a real difference here.
What's Next
Right now I'm not sure what that next step is. I'll almost certainly need to find some sort of job, at least for the next few years before I can think about retiring completely. I still work on my own open source projects, but they don't pay the bills.
One of those projects is actually an interactive learning platform, exactly the sort of thing I've been talking about above. I've invested significant time on it, but it's something I've never really discussed here on my blog. As I think through what comes next, it seems like time to change that.
February 01, 2026 10:38 AM UTC
Tryton News
Tryton News February 2026
During the last month we focused on fixing bugs, improving behaviour, and addressing performance issues - building on the changes from our release last month. But we also added many new features, which we would like to introduce to you in this newsletter.
For an in depth overview of the Tryton issues please take a look at our issue tracker or see the issues and merge requests filtered by label.
Changes for the User
Sales, Purchases and Projects
We now add the optional gift card field to the product list view. This makes it easier to search for gift card products.
Now we clear the former quotation_date when copying sale records, as we already do with the sale_date.
We now display the origin field of requests in the purchase request list. When a purchase request does not come from a stock supply, it is useful for the user who takes action to know the origin of the request.
Accounting, Invoicing and Payments
Now we support allowance and charge in UBL invoices.
We now fill the buyer’s item identification (BuyersItemIdentification) in the UBL invoice, when sale product customer is activated.
On the invoice line we have properties like product_name to get related supplier and customer codes.
Now we add a cron scheduler to reconcile account move lines. On larger setups the number of accounts and parties to reconcile can be very large, and executing the reconciliation wizard would consume too much time, even with the automatic option.
In these cases it is better to run the reconciliation process as a scheduled task in the background.
We now add support for payment references on incoming invoices. As the invoice manages payment references, we fill them using information from the incoming document.
Now Tryton warns the user before creating an overpayment. Sometimes users book a payment directly as a move line without creating a payment record. If the line is not yet reconciled (it can be a partial payment), the line to pay still stands there, showing the full amount to pay. This can lead to overpaying a party without the user noticing.
So we now ensure that the amount being paid does not exceed the payable (or receivable) amount of the party.
This is not a guarantee against overpayment - the proper way to avoid it is to always use the payments functionality - but the warning will catch most mistakes.
Now we add support for Peppyrus webhooks in Tryton’s document incoming functionality.
We now set Belgian account 488 as deposit.
Now we add cy_vat as tax identifier type.
Stock, Production and Shipments
We now store the original planned date of requested internal shipments and productions.
For shipments created by sales we already store the original planned date to compute the delay. Now we do the same for the supplied shipments and productions.
Now we use a fields.Many2One to display either the product or the variant in the stock reporting instead of the former reference field. With this change the user is able to search for product or variant specific attributes. But the reference field is still useful to build the domain, so we keep it invisible.
We now add routings on BOM form to ease the setup.
Now we use the default warehouse when creating new product locations.
User Interface
Now we allow reordering tabs in Sao, the Tryton web client.
Now we use the default digit value to calculate the width of the float widget in Sao.
New Releases
We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 7.8 and 7.6.
Security
Please update your systems to take care of security related bugs we found last month. Mahdi Afshar and Abdulfatah Abdillahi have found that trytond sends the trace-back to the clients for unexpected errors. This trace-back may leak information about the server setup. Impact CVSS v3.0 Base Score: 4.3 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: None Scope: Unchanged Confidentiality: Low Integrity: None Availability: None Workaround A possible workaround is to configure an error handler which would remove the trace-back from the respo…
Abdulfatah Abdillahi has found that sao does not escape the completion values. The content of completion is generally the record name which may be edited in many ways depending on the model. The content may include some JavaScript which is executed in the same context as sao which gives access to sensitive data such as the session. Impact CVSS v3.0 Base Score: 7.3 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: Required Scope: Unchanged Confidentiality…
Mahdi Afshar has found that trytond does not enforce access rights for the route of the HTML editor (since version 6.0). Impact CVSS v3.0 Base Score: 7.1 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: None Scope: Unchanged Confidentiality: High Integrity: Low Availability: None Workaround A possible workaround is to block access to the html editor. Resolution All affected users should upgrade trytond to the latest version. Affected versions per ser…
Cédric Krier has found that trytond does not enforce access rights for data export (since version 6.0). Impact CVSS v3.0 Base Score: 6.5 Attack Vector: Network Attack Complexity: Low Privileges Required: Low User Interaction: None Scope: Unchanged Confidentiality: High Integrity: None Availability: None Workaround There is no workaround. Resolution All affected users should upgrade trytond to the latest version. Affected versions per series: trytond: 7.6: <= 7.6.10 7.4: <= 7.4.20 7.0: <=…
Changes for the System Administrator
Now we allow filtering the users to be notified by cron tasks. When notifying the subscribed users of a cron task, some messages may make sense only for some users. For example, if the message is about a specific company, we want to notify only the users having access to this company.
We now dump the action value of a cron notification as JSON if it is not already a string (that is, already serialized JSON).
Now we log an exception when retrieving a Binary field’s file ID from the file-store fails.
We now support 0 as the value of max_tasks_per_child.
ProcessPoolExecutor requires a positive number or None. But when using an environment variable to configure a startup script, it is complicated to express "no value" (that is, skip the argument). Now it is easier, because 0 is treated as None.
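To illustrate the convention, here is a generic sketch (not Tryton’s actual startup code; the environment variable name is invented):

    import os
    from concurrent.futures import ProcessPoolExecutor

    # ProcessPoolExecutor accepts a positive int or None for max_tasks_per_child,
    # so a configured 0 is mapped to None ("no limit per worker process").
    raw = int(os.environ.get("MAX_TASKS_PER_CHILD", "0"))  # hypothetical variable
    executor = ProcessPoolExecutor(max_tasks_per_child=raw or None)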
Changes for Implementers and Developers
We now log an exception when opening the XML file of a view (the view arch) fails.
Now we format dates used as record names with the contextual language.
We now add the general test PartyCheckReplaceMixin to check replaced fields of the replace party wizard.
February 01, 2026 07:00 AM UTC
January 31, 2026
EuroPython
Humans of EuroPython: Naa Ashiorkor Nortey
Behind every inspiring talk, networking session, and workshop at EuroPython lie countless hours of dedication from our amazing volunteers. From organizing logistics and securing speakers to welcoming attendees, these passionate community members make our conference possible year after year. Without their selfless commitment and hard work, EuroPython simply wouldn’t exist.
Here’s our recent conversation with Naa Ashiorkor Nortey, who led the EuroPython 2025 Speaker Mentorship Team, contributed to the Programme Team and mentored at the Humble Data workshop.
We appreciate your work on the conference, Naa!
Naa Ashiorkor Nortey, Speaker Mentorship Lead at EuroPython 2025
EP: Had you attended EuroPython before volunteering, or was volunteering your first experience with it?
My first experience volunteering at EuroPython was in 2023. I volunteered at the registration desk and as a session chair, and I’m still here volunteering.
EP: What’s one task you handled that attendees might not realize happens behind the scenes at EuroPython?
I can’t think of a specific task, but I would say that some attendees might not realise the number of hours volunteers put in for EuroPython. Usually, a form might be filled out with the number of hours a volunteer can dedicate in a week, but in reality the number of hours invested might be way more than that. There are volunteers in different time zones with different personal lives, so imagine making all that work.
EP: Was there a moment when you felt your contribution really made a difference?
Generally, showing up at the venue after months of planning, it just hit me how much difference my contribution makes. Specifically at EuroPython 2025, where I had the opportunity to lead the Speaker Mentorship Team. I interviewed one of the mentees during the conference. She mentioned that it was her first time speaking and highlighted how the speaker mentorship programme and her mentor greatly impacted her. At that moment, I felt my contribution really made a difference.
EP: What surprised you most about the volunteer experience?
The dedication and commitment of some of the volunteers were so inspiring.
EP: If you could describe the volunteer experience in three words, what would they be?
Fun learning experience.
EP: Do you have any tips for first-time EuroPython volunteers?
Don’t be afraid to volunteer, even if it involves leading one of the teams or contributing to a team you have no experience with. You can learn the skills needed in the team while volunteering. Everyone is supportive and ready to help. Communicate as much as you can and enjoy the experience.
EP: Thank you for the interview, Naa!
Armin Ronacher
Pi: The Minimal Agent Within OpenClaw
If you haven’t been living under a rock, you will have noticed this week that a project of my friend Peter went viral on the internet. It went by many names. The most recent one is OpenClaw but in the news you might have encountered it as ClawdBot or MoltBot depending on when you read about it. It is an agent connected to a communication channel of your choice that just runs code.
What you might be less familiar with is that what’s under the hood of OpenClaw is a little coding agent called Pi. And Pi happens to be, at this point, the coding agent I use almost exclusively. Over the last few weeks I became more and more of a shill for the little agent. After giving a talk on this recently, I realized that I had not actually written about Pi on this blog yet, so I feel I should give some context on why I’m obsessed with it, and how it relates to OpenClaw.
Pi is written by Mario Zechner and unlike Peter, who aims for “sci-fi with a touch of madness,” Mario is very grounded. Despite the differences in approach, both OpenClaw and Pi follow the same idea: LLMs are really good at writing and running code, so embrace this. In some ways I think that’s no accident, because Peter is the one who got me and Mario hooked on this idea, and on agents, last year.
What is Pi?
So Pi is a coding agent. And there are many coding agents. Really, I think you can pick almost any of them off the shelf at this point and experience what it’s like to do agentic programming. In reviews on this blog I’ve spoken positively about AMP, and one of the reasons I resonated so much with it is that it felt like a product built by people who both got addicted to agentic programming and had tried a few different things to see which ones work, rather than just building a fancy UI.
Pi is interesting to me because of two main reasons:
- First of all, it has a tiny core. It has the shortest system prompt of any agent that I’m aware of and it only has four tools: Read, Write, Edit, Bash.
- The second thing is that it makes up for its tiny core by providing an extension system that also allows extensions to persist state into sessions, which is incredibly powerful.
And a little bonus: Pi itself is written like excellent software. It doesn’t flicker, it doesn’t consume a lot of memory, it doesn’t randomly break, it is very reliable and it is written by someone who takes great care of what goes into the software.
Pi is also a collection of little components that you can build your own agent on top of. That’s how OpenClaw is built, and that’s also how I built my own little Telegram bot and how Mario built his mom. If you want to build your own agent connected to something, Pi, when pointed at itself and mom, will conjure one up for you.
What’s Not In Pi
To understand what’s in Pi, it’s even more important to understand what’s not in Pi, why it’s not there and, more importantly, why it won’t be. The most obvious omission is MCP: there is no MCP support at all. While you could build an extension for it, you can also do what OpenClaw does, which is to use mcporter. mcporter exposes MCP calls via a CLI interface or TypeScript bindings, and maybe your agent can do something with it. Or not, I don’t know :)
And this is not a lazy omission. This is from the philosophy of how Pi works. Pi’s entire idea is that if you want the agent to do something that it doesn’t do yet, you don’t go and download an extension or a skill or something like this. You ask the agent to extend itself. It celebrates the idea of code writing and running code.
That’s not to say that you cannot download extensions. It is very much supported. But instead of necessarily encouraging you to download someone else’s extension, you can also point your agent to an already existing extension, say like, build it like the thing you see over there, but make these changes to it that you like.
Agents Built for Agents Building Agents
When you look at what Pi, and by extension OpenClaw, are doing, you see an example of software that is malleable like clay. That malleability imposes requirements on the underlying architecture, constraints that really need to go into the core design.
So for instance, Pi’s underlying AI SDK is written so that a session can contain many different messages from many different model providers. It recognizes that the portability of sessions between model providers is somewhat limited, so it doesn’t lean too heavily into any model-provider-specific feature set that cannot be transferred to another.
The second is that, in addition to the model messages, it maintains custom messages in the session files. These can be used by extensions to store state, or by the system itself to maintain information that is either not sent to the AI at all, or only in part.
Because this system exists and extension state can also be persisted to disk, it has built-in hot reloading so that the agent can write code, reload, test it and go in a loop until your extension actually is functional. It also ships with documentation and examples that the agent itself can use to extend itself. Even better: sessions in Pi are trees. You can branch and navigate within a session which opens up all kinds of interesting opportunities such as enabling workflows for making a side-quest to fix a broken agent tool without wasting context in the main session. After the tool is fixed, I can rewind the session back to earlier and Pi summarizes what has happened on the other branch.
This all matters because of how, for instance, MCP works: on most model providers, tools for MCP, like any tool for the LLM, need to be loaded into the system context (or the tool section thereof) on session start. That makes it hard to impossible to fully reload what tools can do without trashing the complete cache or confusing the AI about why prior invocations behave differently.
Tools Outside The Context
An extension in Pi can register a tool to be available to the LLM to call and every once in a while I find this useful. For instance, despite my criticism of how Beads is implemented, I do think that giving an agent access to a to-do list is a very useful thing. And I do use an agent-specific issue tracker that works locally that I had my agent build itself. And because I wanted the agent to also manage to-dos, in this particular case I decided to give it a tool rather than a CLI. It felt appropriate for the scope of the problem and it is currently the only additional tool that I’m loading into my context.
But for the most part all of what I’m adding to my agent are either skills or TUI extensions to make working with the agent more enjoyable for me. Beyond slash commands, Pi extensions can render custom TUI components directly in the terminal: spinners, progress bars, interactive file pickers, data tables, preview panes. The TUI is flexible enough that Mario proved you can run Doom in it. Not practical, but if you can run Doom, you can certainly build a useful dashboard or debugging interface.
I want to highlight some of my extensions to give you an idea of what’s possible. While you can use them unmodified, the whole idea really is that you point your agent to one and remix it to your heart’s content.
/answer
I don’t use plan mode. I encourage the agent to ask questions and there’s a productive back and forth. But I don’t like structured question dialogs that happen if you give the agent a question tool. I prefer the agent’s natural prose with explanations and diagrams interspersed.
The problem: answering questions inline gets messy. So /answer reads the agent’s last response, extracts all the questions, and reformats them into a nice input box.
/todos
Even though I criticize Beads for its implementation, giving an agent a to-do list is genuinely useful. The /todos command brings up all items stored in .pi/todos as markdown files. Both the agent and I can manipulate them, and sessions can claim tasks to mark them as in progress.
/review
As more code is written by agents, it makes little sense to throw unfinished work at humans before an agent has reviewed it first. Because Pi sessions are trees, I can branch into a fresh review context, get findings, then bring fixes back to the main session.
The UI is modeled after Codex, which makes it easy to review commits, diffs, uncommitted changes, or remote PRs. The prompt pays attention to things I care about, so I get the call-outs I want (e.g., I ask it to call out newly added dependencies).
/control
An extension I experiment with but don’t actively use. It lets one Pi agent send prompts to another. It is a simple multi-agent system without complex orchestration which is useful for experimentation.
/files
Lists all files changed or referenced in the session. You can reveal them in Finder, diff in VS Code, quick-look them, or reference them in your prompt. shift+ctrl+r quick-looks the most recently mentioned file, which is handy when the agent produces a PDF.
Others have built extensions too: Nico’s subagent extension and interactive-shell which lets Pi autonomously run interactive CLIs in an observable TUI overlay.
Software Building Software
These are all just ideas of what you can do with your agent. The point of it mostly is that none of this was written by me, it was created by the agent to my specifications. I told Pi to make an extension and it did. There is no MCP, there are no community skills, nothing. Don’t get me wrong, I use tons of skills. But they are hand-crafted by my clanker and not downloaded from anywhere. For instance I fully replaced all my CLIs or MCPs for browser automation with a skill that just uses CDP. Not because the alternatives don’t work, or are bad, but because this is just easy and natural. The agent maintains its own functionality.
My agent has quite a few skills and, crucially, I throw skills away when I don’t need them. For instance, I gave it a skill to read Pi sessions that other engineers shared, which helps with code review. I have a skill to help the agent craft the commit messages and commit behavior I want, and to update changelogs. These were originally slash commands, but I’m currently migrating them to skills to see if this works equally well. I also have a skill that hopefully nudges Pi to use uv rather than pip, plus a custom extension that intercepts calls to pip and python and redirects them to uv instead.
Part of the fascination that working with a minimal agent like Pi gave me is that it makes you live that idea of using software that builds more software. That taken to the extreme is when you remove the UI and output and connect it to your chat. That’s what OpenClaw does and given its tremendous growth, I really feel more and more that this is going to become our future in one way or another.
January 30, 2026
Kevin Renskers
Django's test runner is underrated
Every podcast, blog post, Reddit thread, and every conference talk seems to agree: “just use pytest”. Real Python says most developers prefer it. Brian Okken’s popular book calls it “undeniably the best choice”. It’s treated like a rite of passage for Python developers: at some point you’re supposed to graduate from the standard library to the “real” testing framework.
I never made that switch for my Django projects. And after years of building and maintaining Django applications, I still don’t feel like I’m missing out.
What I actually want from tests
Before we get into frameworks, let me be clear about what I need from a test suite:
- Readable failures. When something breaks, I want to understand why in seconds, not minutes.
- Predictable setup. I want to know exactly what state my tests are running against.
- Minimal magic. The less indirection between my test code and what’s actually happening, the better.
- Easy onboarding. New team members should be able to write tests on day one without learning a new paradigm.
Django’s built-in test framework delivers all of this. And honestly? That’s enough for most projects.
Django tests are just Python’s unittest
Here’s something that surprises a lot of developers: Django’s test framework isn’t some exotic Django-specific system. Under the hood, it’s Python’s standard unittest module with a thin integration layer on top.
TestCase extends unittest.TestCase. The assertEqual, assertRaises, and other assertion methods? Straight from the standard library. Test discovery, setup and teardown, skip decorators? All standard unittest behavior.
What Django adds is integration: Database setup and teardown, the HTTP client, mail outbox, settings overrides.
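As a rough illustration of that glue (the /signup/ view here is hypothetical):

    from django.core import mail
    from django.test import TestCase, override_settings

    class SignupTests(TestCase):
        @override_settings(DEFAULT_FROM_EMAIL="hello@example.com")
        def test_signup_sends_welcome_email(self):
            # self.client and mail.outbox come from Django's integration layer;
            # outgoing mail is captured in memory instead of being sent.
            self.client.post("/signup/", {"email": "new@example.com"})
            self.assertEqual(len(mail.outbox), 1)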
This means when you choose Django’s test framework, you’re choosing Python’s defaults plus Django glue. When you choose pytest with pytest-django, you’re replacing the assertion style, the runner, and the mental model, then re-adding Django integration on top.
Neither approach is wrong. But it’s objectively more layers.
The self.assert* complaint
A common argument I hear against unittest-style tests is: “I can’t remember all those assertion methods”. But let’s be honest. We’re not writing tests in Notepad in 2026. Every editor has autocomplete. Type self.assert and pick from the list.
And in practice, how many assertion methods do you actually use? In my tests, it’s mostly assertEqual and assertRaises. Maybe assertTrue, assertFalse, and assertIn once in a while. That’s not a cognitive burden.
Here’s the same test in both styles:
# Django / unittest
self.assertEqual(total, 42)
with self.assertRaises(ValidationError):
    obj.full_clean()

# pytest
assert total == 42
with pytest.raises(ValidationError):
    obj.full_clean()
Yes, pytest’s assert is shorter. It’s a bit easier on the eyes. And I’ll be honest: pytest’s failure messages are better too. When an assertion fails, pytest shows you exactly what values differed with nice diffs. That’s genuinely useful.
But here’s what makes that work: pytest rewrites your code. It hooks into Python’s AST and transforms your test files before they run so it can produce those detailed failure messages from plain assert statements. That’s not necessarily bad - it’s been battle-tested for over a decade. But it is a layer of transformation between what you write and what executes, and I prefer to avoid magic when I can.
For me, unittest’s failure messages are good enough. When assertEqual fails, it tells me what it expected and what it got. That’s usually all I need. Better failure messages are nice, but they’re not worth adding dependencies and an abstraction layer for.
The missing piece: parametrized tests
If there’s one pytest feature people genuinely miss when using Django’s test framework, it’s parametrization. Writing the same test multiple times with different inputs feels wasteful.
But you really don’t need to switch to pytest just for that. The parameterized package solves this cleanly:
from django.test import SimpleTestCase
from django.utils.text import slugify  # assuming Django's slugify is under test
from parameterized import parameterized

class SlugifyTests(SimpleTestCase):
    @parameterized.expand([
        ("Hello world", "hello-world"),
        ("Django's test runner", "djangos-test-runner"),
        (" trim ", "trim"),
    ])
    def test_slugify(self, input_text, expected):
        self.assertEqual(slugify(input_text), expected)
Compare that to pytest:
import pytest
from django.utils.text import slugify  # assuming Django's slugify is under test

@pytest.mark.parametrize("input_text,expected", [
    ("Hello world", "hello-world"),
    ("Django's test runner", "djangos-test-runner"),
    (" trim ", "trim"),
])
def test_slugify(input_text, expected):
    assert slugify(input_text) == expected
Both are readable. Both work well. The difference is that parameterized is a tiny, focused library that does one thing. It doesn’t replace your test runner, introduce a new fixture system, or bring an ecosystem of plugins. It’s a decorator, not a paradigm shift.
Once I added parameterized, I realized pytest no longer solved a problem I actually had.
Side by side: common test patterns
Let’s look at how typical Django tests compare to pytest’s approach.
Database tests
# Django
from django.test import TestCase
from myapp.models import Article

class ArticleTests(TestCase):
    def test_article_str(self):
        article = Article.objects.create(title="Hello")
        self.assertEqual(str(article), "Hello")

# pytest + pytest-django
import pytest
from myapp.models import Article

@pytest.mark.django_db
def test_article_str():
    article = Article.objects.create(title="Hello")
    assert str(article) == "Hello"
With Django, database access simply works. TestCase wraps every test in a transaction and rolls it back afterward, giving you a clean slate without extra decorators. pytest-django takes the opposite approach: database access is opt-in. Different philosophies, but I find theirs annoying since most of my tests touch the database anyway, so I’d end up with @pytest.mark.django_db on almost every test.
View tests
# Django
from django.test import TestCase
from django.urls import reverse

class ViewTests(TestCase):
    def test_home_page(self):
        response = self.client.get(reverse("home"))
        self.assertEqual(response.status_code, 200)

# pytest + pytest-django
from django.urls import reverse

def test_home_page(client):
    response = client.get(reverse("home"))
    assert response.status_code == 200
In Django, self.client is right there on the test class. If you want to know where it comes from, follow the inheritance tree to TestCase. In pytest, client appears because you named your parameter client. That’s how fixtures work: injection happens by naming convention. If you didn’t know that, the code would be puzzling. And if you want to find where a fixture is defined, you might be hunting through conftest.py files across multiple directory levels.
What about fixtures?
Pytest’s fixture system is the other big feature people bring up. Fixtures compose, they handle setup and teardown automatically, and they can be scoped to function, class, module, or session.
But the mechanism is implicit. You’ve already seen the implicit injection in the view test example: name a parameter client and it appears, add db to your function signature and you get database access. Powerful, but also magic you need to learn.
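To make the mechanism concrete, here is a minimal sketch (the author fixture is invented for illustration; db is the fixture pytest-django provides for database access):

    # conftest.py
    import pytest
    from django.contrib.auth.models import User

    @pytest.fixture
    def author(db):  # requesting `db` opts this fixture into database access
        return User.objects.create(username="kevin")

    # test_articles.py
    def test_author_username(author):  # injected purely because the parameter is named `author`
        assert author.username == "kevin"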
For most Django tests, you need some objects in the database before your test runs. Django gives you two ways to do this:
- setUp() runs before each test method
- setUpTestData() runs once per test class, which is faster for read-only data
class ArticleTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.author = User.objects.create(username="kevin")

    def test_article_creation(self):
        article = Article.objects.create(title="Hello", author=self.author)
        self.assertEqual(article.author.username, "kevin")
If you need more sophisticated object creation, factory-boy works great with either framework.
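For instance, a tiny factory for the Article model from the earlier examples might look like this (a sketch assuming factory-boy’s Django integration):

    import factory
    from factory.django import DjangoModelFactory
    from myapp.models import Article

    class ArticleFactory(DjangoModelFactory):
        class Meta:
            model = Article

        # Produces "Article 0", "Article 1", ... for successive instances.
        title = factory.Sequence(lambda n: f"Article {n}")

Calling ArticleFactory() then creates and saves an Article, whether from a TestCase method or a pytest function.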
The fixture system solves a real problem - complex cross-cutting setup that needs to be shared and composed. My projects just haven’t needed that level of sophistication. And I’d rather not add the indirection until I do.
The hidden cost of flexibility
Pytest’s flexibility is a feature. It’s also a liability.
In small projects, pytest feels lightweight. But as projects grow, that flexibility can accumulate into complexity. Your conftest.py starts small, then grows into its own mini-framework. You add pytest-xdist for parallel tests (Django has --parallel built in). You write custom fixtures for DRF’s APIClient (DRF’s own APITestCase just works). You add a plugin for coverage, another for benchmarking. Each one makes sense in isolation.
Then a test fails in CI but not locally, and you’re debugging the interaction between three plugins and a fixture that depends on two other fixtures.
Django’s test framework doesn’t have this problem because it doesn’t have this flexibility. There’s one way to set up test data. There’s one test client. There’s one way to run tests in parallel. Boring, but predictable.
When I’m debugging a test failure, I want to debug my code, not my test infrastructure.
When I would recommend pytest
I’m not anti-pytest. If your team already has deep pytest expertise and established patterns, switching to Django’s runner would be a net negative. Switching costs are real. If I join a project that uses pytest? I use pytest. This is a preference for new projects, not a religion.
It’s also worth noting that pytest can run unittest-style tests without modification. You don’t have to rewrite everything if you want to try it. That’s a genuinely nice feature.
But if you’re starting fresh, or you’re the one making the decision? Make it a conscious choice. “Everyone uses pytest” can be a valid consideration, but it shouldn’t be the whole argument.
My rule of thumb
Start with Django’s test runner. It’s boring, it’s stable, and it works.
Add parameterized when you need parametrized tests.
Switch to pytest only when you can name the specific problem Django’s framework can’t solve. Not because a podcast told you to, but because you’ve hit an actual wall.
I’ve been building Django applications for a long time. I’ve tried both approaches. And I keep choosing boring.
Boring is a feature in test infrastructure.
The Python Coding Stack
Planning Meals, Weekly Shop, Alternative Constructors Using Class Methods
I’m sure we’re not the only family with this problem: deciding what meals to cook throughout the week. There seems to be just one dish that everyone loves, but we can hardly eat the same dish every day.
So we came up with a system, and I’m writing a Python program to implement it. We keep a list of meals we try out. Each family member assigns a score to each meal. Every Saturday, before we go to the supermarket for the weekly shop, we plan which meals we’ll cook on each day of the week. It’s not based solely on the preference ratings, of course, since my wife and I have the final say to ensure a good balance. Finally, the program provides us with the shopping list with the ingredients we need for all the week’s meals.
I know, we’ve reinvented the wheel. There are countless apps that do this. But the fun is in writing your own code to do exactly what you want.
I want to keep this article focussed on just one thing: alternative constructors using class methods. Therefore, I won’t go through the whole code in this post. Perhaps I’ll write about the full project in a future article.
So, here’s what you need to know to get our discussion started.
Do you learn best from one-to-one sessions? The Python Coding Place offers one-to-one lessons on Zoom. Try them out; we bet you’ll love them. Find out more about one-to-one private sessions.
Setting the Scene • Outlining the Meal and WeeklyMealPlanner Classes
Let me outline two of the classes in my code. The first is the Meal class. This class – you guessed it – deals with each meal. Here’s the class’s .__init__() method (Code Block #1 in the appendix):
The meal has a name so we can easily refer to it – hence the .name data attribute. And the meals I cook are different from the meals my wife cooks, which is why there’s a .person_cooking data attribute. On some days of the week, only one of us is available to prepare dinner, so this attribute becomes relevant!
There are also days when we have busy afternoons and evenings with children’s activities, so we need to cook a quick meal. The .quick_meal data attribute is a Boolean flag to help with planning for these hectic days.
Then there’s the .ingredients data attribute. You don’t need me to explain this one. And since each family member rates each meal, there’s a .ratings dictionary to keep track of the scores.
The class has more methods, such as add_ingredient(), remove_ingredient(), add_rating(), and more. There’s also code to save to and load from CSV and JSON files. But these are not necessary for today’s article, so I’ll leave them out.
There’s also a WeeklyMealPlanner class (Code Block #2):
The ._meals data attribute is a dictionary with the days of the week as keys and Meal instances as values. It’s defined as a non-public attribute to be used with the read-only property .meals. The .meals property returns a shallow copy of the ._meals dictionary. This makes it safer as it’s harder for a user to make changes directly to this dictionary. The dictionary is modified only through methods within WeeklyMealPlanner. I’ve omitted the rest of the methods in this class as they’re not needed for this article.
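To illustrate the protection, here’s a quick sketch using the WeeklyMealPlanner definition from Code Block #2 in the appendix:

planner = WeeklyMealPlanner()
meals_copy = planner.meals          # the property returns a new dict each time
meals_copy["Monday"] = "Lasagne"    # this mutates only the copy...
print(planner.meals)                # ...so the planner still reports {}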
You can read more about properties in Python in this article: The Properties of Python’s ‘property’
So, each time we try a new dish, we create a Meal object, and each family member rates it. This meal then goes into our collection of meals to choose from each week. On Saturday, we choose the meals we want for the week, put them in a WeeklyMealPlanner instance, and we’re almost ready to go…
At the Supermarket
Well, we’re almost ready to go to the supermarket at this point. So, here’s another class (Code Block #3):
A ShoppingList object has an .ingredients data attribute. This attribute is a dictionary. The keys are the ingredients, and the values are the quantities needed for each ingredient. I’m also showing the .add_ingredient() method, which I’ll need later on. So, you can create an instance of ShoppingList in the usual way (Code Block #4):
Then, you can add ingredients as needed. But this is annoying for us on a Saturday. Here’s why…
Do you want to master Python one article at a time? Then don’t miss out on the articles in The Club, which are exclusive to premium subscribers here on The Python Coding Stack.
Alternative Constructor
Before describing our Saturday problems, let’s briefly revisit what happens when you create an instance of a class. When you place parentheses after the class name, Python does two things: it creates a blank new object, and it initialises it. The creation of the new object almost always happens “behind the scenes”. The .__new__() method creates a new object, but you rarely need to override it. And the .__init__() method performs the object’s initialisation.
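Here’s a minimal sketch – not part of the meal planner code – that makes the two steps visible:

class Demo:
    def __new__(cls, *args, **kwargs):
        print("__new__: creating a blank object")
        return super().__new__(cls)

    def __init__(self, value):
        print("__init__: initialising the object")
        self.value = value

demo = Demo(42)
# __new__: creating a blank object
# __init__: initialising the object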
You can only have one .__init__() special method in a class. Does this mean there’s only one way to create an instance of a class?
Not quite, no. Although there’s no way to define more .__init__() methods, there are ways to create instances through different routes. The @singledispatchmethod decorator is a useful tool, but one I’ll discuss in a future post. Today, I want to talk about using class methods as alternative constructors.
Back to a typical Saturday in our household. We just finished choosing the seven dinners we plan to have this coming week, and we created a WeeklyMealPlanner instance. So we’d now have to create a ShoppingList instance using ShoppingList() and then go through all the meals we chose, entering their ingredients one by one.
Wouldn’t it be nice if we could just create a ShoppingList instance directly from the WeeklyMealPlanner instance? But that would require a different way to create an instance of ShoppingList.
Let’s define an alternative constructor, then (Code Block #5):
There’s a new method called .from_meal_planner(). However, this is not an instance method. It doesn’t belong to an instance of the class. Instead, it belongs to the class directly. The @classmethod decorator tells Python to treat this method as a class method. Note that the first parameter in this method is not self, as with the usual (instance) methods. Instead, you use cls, which is the parameter name used by convention to refer to the class.
Whereas self in an instance method represents the instance of a class, cls represents the class directly. So, unlike instance methods, class methods don’t have access to the instance. Therefore, class methods don’t have access to instance attributes.
The first line of this method creates an instance of the class. Look at the expression cls(), which comes after the = operator. Recall that cls refers to the class. So, cls is the same as ShoppingList in this example. But adding parentheses after the class creates an instance. You assign this new instance to the local variable shopping_list. You use cls rather than ShoppingList to make the class more robust in case you choose to subclass it later.
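A hypothetical subclass shows why – the inherited class method then builds the subclass, not the parent:

class VeganShoppingList(ShoppingList):
    pass

# assuming my_weekly_planner is a WeeklyMealPlanner instance (see Code Block #6)
vegan_list = VeganShoppingList.from_meal_planner(my_weekly_planner)
print(type(vegan_list))  # <class '__main__.VeganShoppingList'>, thanks to cls()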
Fast-forward to the end of this class method, and you’ll see that the method returns this new instance, shopping_list. However, it makes changes to the instance before returning it. The method fetches all the ingredients from each meal in the WeeklyMealPlanner instance and populates the .ingredients data attribute in the new ShoppingList instance.
In summary, the class method doesn’t have access to an instance through the self parameter. But since it has access to the class, the method uses the class to create a new instance and initialise it, adding steps to the standard .__init__() method.
Therefore, this class method creates and returns an instance of ShoppingList with its .ingredients data attribute populated with the ingredients you need for all the meals in the week.
You now have an alternative way of creating an instance of ShoppingList (Code Block #6):
This class now has two ways to create instances. The standard one using ShoppingList() and the alternative one using ShoppingList.from_meal_planner(). It’s common for class methods used as alternative constructors to have names starting with from_*.
You can have as many alternative constructors as you need in a class.
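For example, here’s a hypothetical from_json() constructor – it’s not part of my project’s code, but it shows how several from_* class methods can live side by side on one class:

import json

class ShoppingList:
    def __init__(self):
        self.ingredients = {}  # ingredient: quantity

    @classmethod
    def from_json(cls, json_string):
        # Another alternative constructor, alongside from_meal_planner()
        shopping_list = cls()
        shopping_list.ingredients = json.loads(json_string)
        return shopping_list

ShoppingList.from_json('{"pasta": 2, "tomatoes": 6}')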
Question: if it’s more useful to create a shopping list directly from the weekly meal planner, couldn’t you implement this logic directly in the .__init__() method? Yes, you could. But this would create a tight coupling between the two classes, ShoppingList and WeeklyMealPlanner. You can no longer use ShoppingList without an instance of WeeklyMealPlanner, and you can no longer easily create a blank ShoppingList instance.
Creating two constructors gives you the best of both worlds. ShoppingList is still flexible enough so you can use it as a standalone class or in conjunction with other classes in other projects. But you also have access to the alternative constructor ShoppingList.from_meal_planner() when you need it.
Alternative Constructors in the Wild
You may have already seen and used alternative constructors, perhaps without noticing.
Let’s consider dictionaries. The standard constructor is dict() – the name of the class followed by parentheses. As it happens, you have several options when using dict() – you can pass a mapping, or an iterable of pairs, or **kwargs. You can read more about these alternatives in this article: dict() is More Versatile Than You May Think.
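As a quick reminder, those three standard routes all build the same dictionary:

dict({"a": 1, "b": 2})      # from a mapping
dict([("a", 1), ("b", 2)])  # from an iterable of key-value pairs
dict(a=1, b=2)              # from keyword arguments
# each call creates {'a': 1, 'b': 2}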
But there’s another alternative constructor that doesn’t use the standard constructor dict() but still creates a dictionary. This is dict.fromkeys() (Code Block #7):
You can have a look at help(dict.fromkeys). You’ll see the documentation text refer to this method as a class method, just like the ShoppingList.from_meal_planner() class method you defined earlier.
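Note that dict.fromkeys() also accepts an optional second argument, used as the value for every key instead of the default None:

dict.fromkeys(["James", "Bob", "Mary", "Jane"], 0)
# {'James': 0, 'Bob': 0, 'Mary': 0, 'Jane': 0}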
And if you use the datetime module, you most certainly have used alternative constructors using class methods. The standard constructor when creating a datetime.datetime instance is the following (Code Block #8):
However, there are several class methods you can use as alternative constructors (Code Block #9):
Have a look at other datetime.datetime methods starting with from, such as fromtimestamp() and fromordinal().
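Here are those two in action (the fromordinal() value is the example used in the standard library docs):

datetime.datetime.fromtimestamp(0, tz=datetime.timezone.utc)
# datetime.datetime(1970, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)

datetime.datetime.fromordinal(730920)  # day 730,920 after 1 January of year 1
# datetime.datetime(2002, 3, 11, 0, 0)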
Your call…
The Python Coding Place offers something for everyone:
• a super-personalised one-to-one 6-month mentoring option: $4,750
• individual one-to-one sessions: $125
• a self-led route with access to 60+ hrs of exceptional video courses and a support forum: $400
Final Words
Python restricts you to defining only one .__init__() method. But there are still ways for you to create instances of a class through different routes. Class methods are a common way of creating alternative constructors for a class. You call them directly through the class and not through an instance of the class – ShoppingList.from_meal_planner(). The class method then creates an instance, modifies it as needed, and finally returns the customised instance.
Now, let me see what’s on tonight’s meal planner and, more importantly, whether it’s my turn to cook.
Code in this article uses Python 3.14
The code images used in this article are created using Snappify. [Affiliate link]
Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Appendix: Code Blocks
Code Block #1
class Meal:
    def __init__(
        self,
        name,
        person_cooking,
        quick_meal=False,
    ):
        self.name = name
        self.person_cooking = person_cooking
        self.quick_meal = quick_meal
        self.ingredients = {}  # ingredient: quantity
        self.ratings = {}  # person: rating

    # ... more methods
Code Block #2
class WeeklyMealPlanner:
    def __init__(self):
        self._meals = {}  # day: Meal

    @property
    def meals(self):
        return dict(self._meals)

    # ... more methods
Code Block #3
class ShoppingList:
    def __init__(self):
        self.ingredients = {}  # ingredient: quantity

    def add_ingredient(self, ingredient, quantity=1):
        if ingredient in self.ingredients:
            self.ingredients[ingredient] += quantity
        else:
            self.ingredients[ingredient] = quantity

    # ... more methods
Code Block #4
ShoppingList()
Code Block #5
class ShoppingList:
    def __init__(self):
        self.ingredients = {}  # ingredient: quantity

    @classmethod
    def from_meal_planner(cls, meal_planner: WeeklyMealPlanner):
        shopping_list = cls()
        for meal in meal_planner.meals.values():
            if meal is None:
                continue
            for ingredient, quantity in meal.ingredients.items():
                shopping_list.add_ingredient(ingredient, quantity)
        return shopping_list

    def add_ingredient(self, ingredient, quantity=1):
        if ingredient in self.ingredients:
            self.ingredients[ingredient] += quantity
        else:
            self.ingredients[ingredient] = quantity
Code Block #6
# if my_weekly_planner is an instance of 'WeeklyMealPlanner', then...
shopping_list = ShoppingList.from_meal_planner(my_weekly_planner)
Code Block #7
dict.fromkeys(["James", "Bob", "Mary", "Jane"])
# {'James': None, 'Bob': None, 'Mary': None, 'Jane': None}
Code Block #8
import datetime
datetime.datetime(2026, 1, 30)
# datetime.datetime(2026, 1, 30, 0, 0)
Code Block #9
datetime.datetime.today()
# datetime.datetime(2026, 1, 30, 12, 54, 2, 243976)
datetime.datetime.fromisoformat("2026-01-30")
# datetime.datetime(2026, 1, 30, 0, 0)
Real Python
The Real Python Podcast – Episode #282: Testing Python Code for Scalability & What's New in pandas 3.0
How do you create automated tests to check your code for degraded performance as data sizes increase? What are the new features in pandas 3.0? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
Python⇒Speed
The best Docker base image for your Python application (February 2026)
When you’re building a Docker image for your Python application, you’re building on top of an existing image—and there are many possible choices for the resulting container.
There are OS images like Ubuntu, and there are the many different variants of the python base image.
And now there’s a new choice, installing Python using uv, which allows you to use any base image you’d like.
Which one should you use? Which one is better? There are many choices, and it may not be obvious which is the best for your situation.
So to help you make a choice that fits your needs, in this article I’ll go through some of the relevant criteria, and suggest some reasonable defaults that will work for most people.
Read more...