
Planet Python

Last update: December 22, 2025 04:44 PM UTC

December 22, 2025


Real Python

SOLID Design Principles: Improve Object-Oriented Code in Python

A great approach to writing high-quality object-oriented Python code is to consistently apply the SOLID design principles. SOLID is a set of five object-oriented design principles that can help you write maintainable, flexible, and scalable code based on well-designed, cleanly structured classes. These principles are foundational best practices in object-oriented design.

In this tutorial, you’ll explore each of these principles with concrete examples and refactor your code so that it adheres to the principle at hand.

By the end of this tutorial, you’ll understand that:

  • You apply the SOLID design principles to write classes that you can confidently maintain, extend, test, and reason about.
  • You can apply SOLID principles to split responsibilities, extend via abstractions, honor subtype contracts, keep interfaces small, and invert dependencies.
  • You enforce the Single-Responsibility Principle by separating tasks into specialized classes, giving each class only one reason to change.
  • You satisfy the Open-Closed Principle by defining an abstract class with the required interface and adding new subclasses without modifying existing code.
  • You honor the Liskov Substitution Principle by making the subtypes preserve their expected behaviors.
  • You implement Dependency Inversion by making your classes depend on abstractions rather than on details.

Follow the examples to refactor each design, verify behaviors, and internalize how each SOLID design principle can improve your code.



The SOLID Design Principles in Python

When it comes to writing classes and designing their interactions in Python, you can follow a series of principles that will help you build better object-oriented code. One of the most popular and widely accepted sets of standards for object-oriented design (OOD) is known as the SOLID design principles.

If you’re coming from C++ or Java, you may already be familiar with these principles. Maybe you’re wondering if the SOLID principles also apply to Python code. The answer to that question is a resounding yes. If you’re writing object-oriented code, then you should consider applying these principles to your OOD.

But what are these SOLID design principles? SOLID is an acronym that encompasses five core principles applicable to object-oriented design. These principles are the following:

  1. Single-responsibility principle (SRP)
  2. Open–closed principle (OCP)
  3. Liskov substitution principle (LSP)
  4. Interface segregation principle (ISP)
  5. Dependency inversion principle (DIP)

You’ll explore each of these principles in detail and code real-world examples of how to apply them in Python. In the process, you’ll gain a strong understanding of how to write more straightforward, organized, scalable, and reusable object-oriented code by applying the SOLID design principles. To kick things off, you’ll start with the first principle on the list.

Single-Responsibility Principle (SRP)

The single-responsibility principle (SRP) comes from Robert C. Martin, more commonly known by his nickname Uncle Bob. Martin is a well-respected figure in software engineering and one of the original signatories of the Agile Manifesto. He introduced the principles that make up SOLID, although the acronym itself was coined later by Michael Feathers.

The single-responsibility principle states that:

A class should have only one reason to change.

This means that a class should have only one responsibility, as expressed through its methods. If a class takes care of more than one task, then you should separate those tasks into dedicated classes with descriptive names. Note that SRP isn’t only about responsibility but also about the reasons for changing the class implementation.

Note: You’ll find the SOLID design principles worded in various ways out there. In this tutorial, you’ll refer to them following the wording that Uncle Bob uses in his book Agile Software Development: Principles, Patterns, and Practices. So, all the direct quotes come from this book.

If you want to read alternate wordings in a quick roundup of these and related principles, then check out Uncle Bob’s The Principles of OOD.

This principle is closely related to the concept of separation of concerns, which suggests that you should divide your programs into components, each addressing a separate concern.

To illustrate the single-responsibility principle and how it can help you improve your object-oriented design, say that you have the following FileManager class:

Python file_manager_srp.py
from pathlib import Path
from zipfile import ZipFile

class FileManager:
    def __init__(self, filename):
        self.path = Path(filename)

    def read(self, encoding="utf-8"):
        return self.path.read_text(encoding)

    def write(self, data, encoding="utf-8"):
        self.path.write_text(data, encoding)

    def compress(self):
        with ZipFile(self.path.with_suffix(".zip"), mode="w") as archive:
            archive.write(self.path)

    def decompress(self):
        with ZipFile(self.path.with_suffix(".zip"), mode="r") as archive:
            archive.extractall()

In this example, your FileManager class has two different responsibilities. It manages files using the .read() and .write() methods. It also deals with ZIP archives by providing the .compress() and .decompress() methods.

This class violates the single-responsibility principle because there is more than one reason for changing its implementation (file I/O and ZIP handling). This implementation also makes code testing and code reuse harder.

To fix this issue and make your design more robust, you can split the class into two smaller, more focused classes, each with its own specific concern:
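A minimal sketch of that split, with one class per concern (the class names are illustrative):

```python
from pathlib import Path
from zipfile import ZipFile

class FileManager:
    """Handles file reading and writing only."""

    def __init__(self, filename):
        self.path = Path(filename)

    def read(self, encoding="utf-8"):
        return self.path.read_text(encoding)

    def write(self, data, encoding="utf-8"):
        self.path.write_text(data, encoding)

class ZipFileManager:
    """Handles ZIP archiving only."""

    def __init__(self, filename):
        self.path = Path(filename)

    def compress(self):
        with ZipFile(self.path.with_suffix(".zip"), mode="w") as archive:
            archive.write(self.path)

    def decompress(self):
        with ZipFile(self.path.with_suffix(".zip"), mode="r") as archive:
            archive.extractall()
```

Each class now has a single reason to change: FileManager changes only if the file I/O requirements change, and ZipFileManager only if the archiving requirements change.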

Read the full article at https://realpython.com/solid-principles-python/ »



December 22, 2025 02:00 PM UTC

Quiz: SOLID Design Principles: Improve Object-Oriented Code in Python

In this quiz, you’ll test your understanding of the SOLID Design Principles: Improve Object-Oriented Code in Python tutorial.

You will reason about behavior contracts, attribute invariants, and choosing composition or separate types over inheritance. For a refresher, you can watch the Design and Guidance: Object-Oriented Programming in Python course.



December 22, 2025 12:00 PM UTC


Nicola Iarocci

Rediscovering a 2021 podcast on Python, .NET, and open source

Yesterday, the kids came home for the Christmas holidays. Marco surprised me by telling me that on his flight from Brussels, he discovered and listened to “my podcast” on Spotify. I was stunned. I didn’t remember ever recording a podcast, even though I’ve given a few interviews here and there over the years.

During my usual morning walk today, I went to look for it, and there it was, an interview I had done in 2021 that I had completely forgotten about. I got over the initial embarrassment (it’s always strange to hear your own voice) and resisted the temptation to turn it off, listening to it all the way through. I must admit that it captures that moment in my professional life, and much of the content is still relevant, especially regarding my experience as an open-source author and maintainer and my transition from C# to Python and back.

I found copies of the podcast on many platforms, including a YouTube video (we actually video recorded it, who knew!), but they are all in Italian. I fed the video to MacWhisper, which transcribed it; I then asked Claude to translate it into English, removing pauses and repetitions; and finally, I ran it through Grammarly for a grammatical check. That’s what AI allows today: half an hour to go from an Italian audio podcast to a full English transcript, and that’s including a manual, pedantic review of Grammarly suggestions.

What follows is the full transcript, in English, of that 2021 interview. We touch on a variety of topics, including Python for .NET Developers, functional programming in Python, F#, and C#, the Eve REST framework and Flask, electronic invoicing, my open-source experience as an author and maintainer in both ecosystems, cross-platform development, and advice to newcomers.

I don’t really know how or why Marco found this relic of mine on Spotify, and I’m not brave enough to ask, but I’m grateful he dug it up. Also, many thanks to Mauro Servienti for hosting the interview.

DotNet Podcast - Interview with Nicola Iarocci (2021)

Hello everyone, and welcome to a new episode of DotNet Podcast, the Italian podcast dedicated to Microsoft technologies and more. You can find us on all major social platforms and podcasting services; all links are on our website, dotnetpodcast.com. Today we’re talking about Python and Eve (or Pyrrest) and, why not, electronic invoicing for .NET.

Today I have the pleasure of having Nicola Iarocci with me. I met Nicola at the Italian edition of SoCraTes in Rimini a few years ago. Nicola is the classic jack of all trades: developer, entrepreneur, fitness enthusiast, consultant, open-source lover, Microsoft MVP, MongoDB Master, and probably something else I’ve forgotten, because the list is very long. In the open source world, he’s known primarily for Eve (we’ll have him explain how to pronounce it), a REST framework for Python, and Electronic Invoicing .NET. I almost forgot: if you want to know anything about how Git works, Nicola knows it all.

Welcome Nicola! Did I forget anything?

Hi, thanks, welcome everyone. Well, no, I’d say probably yes, but I can’t tell you what, so I’d say the introduction is perfect. Thanks.

This podcast has always been oriented toward the .NET world, though lately we’re branching out a bit into various technologies and even topics not necessarily technical. But assuming our listeners are primarily .NET developers, briefly, what is Python, and why should or could a C# .NET developer be interested?

Sure, so first of all, Python fundamentally, let’s say from a general point of view, is not that different from the languages we’re used to in the .NET world, particularly C#, in the sense that it’s still a high-level language, object-oriented from its foundations, let’s say, from its roots. The main difference, perhaps, is that it’s fundamentally interpreted, so there’s an interpreter that executes the language and programs, although it’s also possible to do JIT compilation, especially for performance reasons.

I’ll add an important note, because people generally assume that a dynamic language can’t be strongly typed, which is not true. Python is a strongly typed language with dynamic semantics: types are checked at runtime rather than at compile time, and incompatible types are never silently coerced. I always clarify this because those coming from C-derived languages often think it’s very similar to JavaScript, when in reality there are clear differences.
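As a quick illustration of that point, here’s a minimal sketch showing that Python’s runtime type checks refuse the kind of implicit coercion JavaScript would happily perform:

```python
# Python is dynamically typed (no compile-time checks) but strongly
# typed: incompatible types raise an error instead of being coerced.
try:
    result = "1" + 1  # JavaScript would coerce this to "11"
except TypeError as exc:
    result = f"refused: {exc}"

print(result)  # prints a "refused: ..." message, not "11"
```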

It’s certainly a language designed to be more approachable for a general programmer or even a beginner, and this probably also explains why it’s had such success, I’d say almost incredible success, for example, in the world of scientific research, numerical computing, in the financial world and for those who need to do numerical analysis. Because, among other things, in that particular area it’s a language where performance is excellent, not because Python is a fast language (it’s not at all, being interpreted), but a very beautiful thing about Python is that you can write its libraries in C, with bindings that practically allow you to use libraries written in C as if they were Python libraries.

And this is the trick that allowed all these tools for numerical, scientific, and financial analysis to be written in a language that’s simple to grasp, even for those who perhaps aren’t born developers like us. For example, think of a researcher, a scientist locked in their laboratory who can understand Python, approach programming and immediately have powerful tools available, yet the language remains easily readable and understandable. It’s a perfect language, for example, for learning to program in general.

Then what else? Like our .NET languages, it obviously supports modules and packages. There’s an equivalent to NuGet: PyPI, where you can now find hundreds of thousands of packages to install.

One thing that personally interested me a lot about Python when I started studying it was the fact that it was cross-platform from the beginning. It was a language that, when I started studying it, was already twenty years old, so very mature. It also comes with an extensive standard library, a bit like the base class library in .NET, or .NET at this point with .NET Core, so it includes a lot of material out of the box. Practically, whatever you want to do, there’s certainly a standard way that allows you to do it.

The fact that it was cross-platform was very interesting to me back then, because I came from the .NET ecosystem, where cross-platform was still absolutely out of the question when I started looking at Python about 10 years ago. Then the world changes rapidly; it would also be nice if you let me tell you what happened in those years, but we probably don’t have time. But fundamentally, yes, the fact that it was open source from the beginning, 20 years ago, I really liked a lot.

I must admit that in those years we had to port our applications, desktop, standalone, and networked applications, and I was a bit frustrated by the old Microsoft world, to be clear. That sounds strange coming from a Microsoft MVP, but it also explains how I became one: I abandoned Microsoft, I must say quite disappointed, and moved to the Python world. There I was noticed by Microsoft people, who sort of recruited me into the MVP program.

Then I actually went back to doing a lot of C#, F#, in short, working in the .NET world. Actually, if I have to be honest, nowadays about 75-80% of my work is in C# and F#, and the remaining 20% is maintenance of my open-source Python packages.

So yes, let’s say this: Python, personally, gave me new perspectives to answer your question. You know, when you’re always inside, you always drink from the same source, you always drink the same water, and certainly it’s excellent water, but you don’t know the other flavors, the other tastes.

I’ll give you a trivial example: in C#, historically (keep in mind that I’m a certain age, so I started writing C# really too many years ago), I remember an episode that always stuck with me. I had always been taught not to use try-catch if possible, because exceptions in .NET were a performance problem; it was considered bad practice. In the Python world, it’s exactly the opposite: you write code without too many guards, you focus on the logic that solves the problem, and you catch and handle errors as needed.
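That “write the happy path, handle errors where they occur” style is known in the Python community as EAFP (easier to ask forgiveness than permission), as opposed to LBYL (look before you leap). A minimal sketch of the two styles, using a made-up config lookup:

```python
DEFAULT_PORT = 8080

# LBYL ("look before you leap"): guard clauses up front.
def get_port_lbyl(config):
    if "port" in config:
        return config["port"]
    return DEFAULT_PORT

# EAFP ("easier to ask forgiveness than permission"): write the
# happy path and handle the failure where it occurs.
def get_port_eafp(config):
    try:
        return config["port"]
    except KeyError:
        return DEFAULT_PORT

print(get_port_eafp({"port": 5432}))  # 5432
print(get_port_eafp({}))              # 8080
```

EAFP also avoids a race between the check and the use when state can change in between, which is one reason it’s idiomatic for file and dictionary access.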

Going back to the C# world, I started using this approach a lot, as I had in Python, and used it aggressively, even to the dismay of my colleagues who had stayed in the .NET world. And this is just one example. In the meantime, obviously, over these 10-15 years, try-catch has stopped being the performance problem it was 15 years ago.

I brought some things from the Zen of Python, a kind of decalogue that Python programmers try to follow, into the C# world. For example, “Explicit is better than implicit,” which is a classic of the Python world. So, even at the cost of writing one more line of code, make your intention exactly clear rather than hiding things behind too much automagic.
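The Zen of Python actually ships with the interpreter as an Easter egg; a small sketch that reads it programmatically (the `this` module stores the text ROT13-encoded):

```python
import codecs
import this  # importing the module prints the Zen of Python

# The raw text is also available on the module, ROT13-encoded:
zen = codecs.decode(this.s, "rot13")
print("Explicit is better than implicit." in zen)  # True
```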

So, all these little things here. Now, yes, the .NET world has given us a lot, it’s not even worth saying. Actually, now they’ll do the same thing here, too. When I moved there, and especially there, I think C# was already very mature. So I found myself… I remember another thing: when, for the first time, I didn’t understand how a REST call worked in Python, and I realized I could go to GitHub to see the source code of the framework I was using, which was Flask.

It’s a bit like if I, 10 or 12 years ago, had been able to look at the ASP.NET source code to understand exactly why things weren’t working as I thought they should. That, for me, was, as the English would say, paradigmatic. It really opened my heart, and I understood that was the world I wanted to be in. Then, you see, many of these arguments are a bit weak today, because C# and .NET are now fully open source. But better this way, thank God. A lot has changed in recent years.

Nicola has a t-shirt I couldn’t see well at first; it has the Superman logo, but it’s actually an F with a hashtag, so it’s an F# t-shirt. Is there a relationship between the functional world and Python, or is Python a traditional object-oriented world, let’s call it that?

Well, that’s a good question. Actually, Python was born entirely and purely object-oriented. So anything you use in Python is an object, but it has a strong predisposition also toward functional programming, let’s say. I have to be honest: if I want to do good quality functional programming, F# is a thousand times better than Python, but you can certainly do it.

Let’s say this is probably interesting as well. I wouldn’t have arrived at F# (let’s say) if I hadn’t gone through Python. Always for the discussions I was telling you about before, the approach in Python is to simplify things, break them down into… well, these are good practices used in all programming languages. But there’s a tendency, let’s say, to use functions rather than magic classes that incorporate state, for example.

In Python, it’s quite common, even if not everyone does it, because, being a complete object-oriented language, you can perfectly well maintain state inside objects as you would in a C# class, in a classic way. But you will surely have noticed, and I think the listeners too, that the trend in C#, as well as in recent versions, is toward functional programming in a manner I’d say is almost impetuous, so records, immutables, many other things like enhanced lambdas, etc., etc.

So, coming back to the .NET world from Python, I became much more curious about F# than I had been 10 years ago. So yes, in Python you can do decent functional programming and organize your code in a functional way, but it’s certainly a language that remains object-oriented in its orientation.
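To make the point concrete, here’s a small sketch of that lightweight functional style in Python: pure functions plus immutable records via frozen dataclasses, which are loosely analogous to C# records. The `Order` type and the numbers are invented for the example:

```python
from dataclasses import dataclass, replace
from functools import reduce

@dataclass(frozen=True)
class Order:
    """An immutable record; assigning to a field after creation raises."""
    item: str
    quantity: int
    unit_price: float

def total(orders):
    """A pure function: no hidden state, output depends only on input."""
    return reduce(lambda acc, o: acc + o.quantity * o.unit_price, orders, 0.0)

orders = [Order("book", 2, 10.0), Order("pen", 5, 1.5)]
print(total(orders))  # 27.5

# "Updates" produce new values instead of mutating in place:
bigger = replace(orders[0], quantity=3)
print(orders[0].quantity, bigger.quantity)  # 2 3
```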

And to tell you the truth, compared to C#, which has been adopting so much from F# in the last two or three versions, Python is not making this huge effort to become functional, and with this I have to say I quite agree. I really like the evolution of C#, especially for me, who obviously also does F#. I’ve actually tried, in some conferences, to do outreach in this sense: to encourage C# developers to move toward functional style, and also to illustrate F# and its principles.

But I also appreciate the fact that in the Python world they said, “OK, Python at its heart is an object-oriented language and its focus is there, let’s improve that type of paradigm.” There are other languages that do functional well. And in the .NET world this is very true: there’s C#, and we have the enormous fortune of also having F#.

Now there’s been an attempt at hybridization, let’s say, or rather to bring into the C# world everything that’s beautiful about the functional world, which sure is appreciable. I’m happy, I’m already using it a lot. But I think it can also create a lot of confusion for a newcomer to the language, who perhaps comes from object-oriented programming and finds themselves with many functional things and doesn’t quite understand what to do.

I adore the latest versions of C#, and I’m very happy about it, but what is the specific direction of C#?

Yes, I also have a sort of adoration for the latest version of C#, but I realize that having behind me… I started writing C# code in 1999 with Beta 1 of the .NET Framework. So I’ve been through all of them, so for me, the novelties are small things compared to the twenty years that have passed. I realize that the problem might be that, for a new developer who arrives today, they face, fundamentally, an almost insurmountable mountain to climb.

That’s exactly what I meant: it adds, let’s say, cognitive weight. One could ask oneself why one can’t just use a class instead of a record, for example, and what the fundamental difference is. Yes, there are reasons, but one of the really strong things about Python is its ease of access for those starting out in programming.

I’ve seen that Microsoft is making huge efforts in this regard, with video series on YouTube and sites to welcome people to C#. And I, in particular, have a foot in both camps: I’m well known in the Python world too, so much so that I’ve done presentations where I present Python to the C# programmer and C# and .NET to the Python programmer.

Especially now that C# is finally cross-platform and open source, it becomes much easier. Before, it was unthinkable to talk to a Python programmer about learning C#, which was Windows-only and .NET Framework-only, standalone, and so on. It was really unthinkable. Now some cracks are opening and I’m trying to slip in there, but it’s very hard, because we clearly have an image: we’re known as “dotNetters” doing enterprise and public administration work, heavy stuff installed on Windows, the GAC, all those problems that existed in the Windows world.

Many times, making people understand “look, now you can take a Linux machine and run a web application written in C# with the same ease as you would with Node or Python” is really a message that leaves people open-mouthed, and they don’t believe it, you have to show them. So the work to be done on our part is really a lot.

You touched on Python at the beginning of the discussion, saying, if I remember correctly, you used Flask as a framework, which I assume is a sort of counterpart to ASP.NET. More or less? What relationship is there between Flask, ASP.NET, Eve and the REST world in general? And why did you feel the need to write a framework for REST for Python?

Well, that’s an excellent question. When I came to Python, there was a framework called Django, which was very widely used and very famous, and did the equivalent of a modern ASP.NET, an ASP.NET Core. But it was too complex; it was exactly the classic behemoth in the old .NET style, where you had to take everything it came with, whether you needed it or not. What I needed was to implement the equivalent of a Web API application in the .NET world, so the controller… well, there’s no concept of controllers, but it’s the classic REST API concept.

Flask, for its part, defines itself as a microframework: it gives you the base, but not a full Web API out of the box. When I explain to C# programmers what exactly Eve is, I make this kind of comparison: imagine a .NET Web API; Eve gives you that, with the addition that there was this attempt to make it very easy to put an API online when what you need is basically a front end for a database, so not much business logic, but CRUD operations and things like that.

So, in this sense, Eve natively provides the REST interface toward clients, and it’s strongly opinionated. Keep in mind that when I wrote Eve, it was born as an internal project for my company, so we knew exactly that a POST would create a record, a PUT would replace one, and a PATCH would modify one. I didn’t create a framework that lets you do whatever you want; precise choices were made about how the verbs work and how modifications work.

And fundamentally, if those choices are okay with you, my framework lets you work agilely and quickly. If you don’t agree, use Flask or another framework, or build your own. So yes, I’d say the relationship is this: Flask could be, let’s say, the core of a Web API. Look, it brings to mind this new interesting feature in .NET 6, which is minimal APIs.

If you take the Flask homepage and look at the quick start example, it’s maybe ten lines of code, and you take a snippet that Microsoft is publishing on its social media (David Fowler comes to mind on Twitter; every now and then, he posts screenshots of these minimal APIs), they’re almost overlapping, it’s incredible, right?
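For reference, the Flask quick-start example mentioned above really is just a handful of lines; this is the canonical hello-world pattern from Flask’s documentation (it assumes Flask is installed, e.g. via pip install flask):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # One function per route; the return value becomes the response body.
    return "Hello, World!"

if __name__ == "__main__":
    app.run()  # serves on http://127.0.0.1:5000 by default
```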

And so the work they’re doing now in .NET 6 with minimal APIs is to strip away MVC and everything that was behind it, the routing and the verb-management machinery, and put it all in your hands so that you can make a pure API without the behemoth, without having to create a controller and so on.

So, Flask practically gives you that base that we’ll now have in .NET 6. Minimal APIs are something that, in my opinion, can be very interesting for those who want to start using C# to make Web APIs, for example. When I came from C#, I had the whole behemoth world to manage, and I found Flask, which gave me the building blocks to build what I wanted; that was exactly why I went to Flask and Python.

So finally in .NET 6 we’ll have something very similar, and it’s really impressive to compare Node code, Python code with Flask, and .NET 6 minimal API code. The effort from Microsoft is very evident: to make it interesting also for those coming from other stacks, all that complication (sorry if I repeat myself; I call it the behemoth), where you have to have a controller, a view, a site, a data layer, and all these affairs, is being thinned out to make everything lighter, and in my opinion it will be very interesting.

Exactly. One thing I noticed is that, finally, for the first time with .NET 6, the empty project template is truly empty.

Exactly, Flask has been like that for me: you just need a .py file with your four lines of API initialization, and then a nice little function that responds to a GET, and you’re done. Now we’ll have this in .NET 6 too, which is really interesting to me. Hopefully, then we’ll be able to make the rest of the world understand that .NET is no longer what it was 10 or 15 years ago, and here I emphasize this is the big commitment, in my opinion, the real challenge to win: communication.

But I have to say that on this I’m optimistic for another reason, and that’s performance. The performance of .NET Core, cross-platform, is really interesting, and it’s the reason why I actually went back to doing a lot of C#: with .NET Core I get cross-platform support and performance that I don’t have in Python, and now I’m also starting to have a language and a stack that are agile and very similar to what I use in Python or in Node, for example. But let’s not talk about Node, otherwise we’ll have a classic flame war.

Ok, in all of this, how does Electronic Invoicing .NET fit in?

Well, Electronic Invoicing is also the result of my Python experience. In Python, I embraced open source to the point of becoming first the creator and then the maintainer of some open-source projects, and I saw the incredible potential, let’s say the benefits, that come from making your code public.

And so going back to work in .NET (we’re talking about 2014-2015, because electronic invoicing, it must be said, is something that was imposed by law, I think in 2019, now I don’t remember, but actually the technical specifications were already in place for some years for the public administration world). So we had to do this thing internally.

Fundamentally, for those who don’t know the product, Electronic Invoicing for .NET is simply a deserializer and serializer of electronic invoices that puts in your hands an instance of a class that represents the electronic invoice and that, very handily, also forces you to validate it according to the technical specifications. So you can, before submitting your electronic invoice to the Revenue Agency, etc., already know whether it will be accepted, identify any errors, and tell your user how to correct them. This is the version in a nutshell.

And so, when we did this and I started working on this project, I proposed to my colleagues that we leave it open source, because it was evidently something that would be useful, certainly to a niche compared to Eve or Cerberus, which is another project of mine, since it’s dedicated only to the Italian public rather than the whole planet, and only to those who develop management software, and so on.

But why not? It seemed to me an interesting little game, also because, I’ll tell you the truth, it may seem a bit naive on my part, but it seemed to me the way to show and make .NET developers understand that open source, even from us peons, is perfectly possible. If you have an interesting project that can be useful, you can do it even if you come from the let’s call it closed enterprise world, like that of .NET, and it seemed to me a way to give an example and encourage others to take the same step.

In short, at the beginning it remained quiet for two or three years, and only we and four other unfortunate souls like us used it. Gradually it gained traction, and it clearly became a very important thing when the law then made electronic invoicing mandatory. I must say we had the advantage, you see, of being a project that by then was already a few years old (I think I open-sourced it in 2015), mature enough to be adopted by those who were panicking because they found themselves with three months to implement something. And after that, contributors arrived.

The thing about the whole Electronic Invoicing project that gives me the most satisfaction is that there are .NET developers who, as is evident from how they make pull requests and contribute to the code, have never done it before, but with enthusiasm (and obviously out of necessity) they get to work, throw themselves into it, and have contributed pieces of code that have proven very important.

So in my small way, in the .NET world, what usually happens in other worlds I come from, or rather that I return to, so Python, is happening.

This is a fascinating thing that recently happened to me too, because I have several open-source projects. On one of them, a guy who I later discovered was Australian started opening issues, then a couple of pull requests, and then more, and now I’m considering whether to make him a maintainer, because he’s contributing in a very important way. If the project hadn’t originally been open source (it didn’t need to be), there wouldn’t have been all these contributions, which I honestly had never anticipated.

Absolutely, I can confirm that from my experience too. To give a practical example on Electronic Invoicing, a guy comes to mind who contributed… I think I had implemented serialization to JSON, in addition to XML, for these electronic invoices, but I wasn’t at all interested in deserialization. Then the pull request arrived with this feature completely implemented, and so now I support bidirectional JSON. The same goes for the famous digital signatures, which are a very complex topic.

Electronic invoices can be sent as pure XML or with a digital signature. Actually, a large part of those features was contributed from the outside. They’re important features that I clearly didn’t work on myself; I contributed quality control and everything you want, but they added a lot of value to the project and obviously to the community.

And among them, it’s often the new contributors who, precisely because of their enthusiasm, end up contributing the most, including the more interesting features.

If I have a minute to tell another episode that comes to mind: with Eve, there was this guy. The project had already been on GitHub for 4-5 years; it was going very well and had widespread adoption, which I was very proud of. Then a pull request from this guy arrives with, I remember, something like 800 changes, a monster pull request. Looking at them, they were all changes to comments, what in Python are called docstrings: the inline documentation you put as comments to help other developers understand your code.

They were full of typos and errors because I’m obviously not a native English speaker: grammatical errors, typos, and other issues. It was super embarrassing, because I realized that my code, with all my errors, had been seen by who knows how many tens of thousands of programmers, and who knows how many laughs they’d had at my expense.

But the beautiful thing about this contribution was that he wrote to me, “Look, I’m not an expert programmer, so I thought of contributing in this way.” From that day, the Eve documentation has improved enormously. So a contribution that is not a code contribution, from an absolute non-expert, has had, and still has, immense value for me personally and for the whole project.

So this is also a message; I always tell this story. There are opportunities to contribute in a significant way for anyone, from the super programmer (the famous “10x developer”, if you use the English term in Italian too) to those who have just started. Indeed, beginners should be encouraged, because they’re the ones with so much enthusiasm, and you should give them confidence. The problem is that after a while being a maintainer becomes very demanding, and you start to delegate to someone who, I’m not saying replaces you, but maybe even that.

In fact, the next question is precisely that: if I understood correctly, both open source projects have some relationship with your work, but obviously the cognitive and managerial load is significantly higher than what your day job alone would generate. So what is your experience with open source governance, and in general with managing projects that start out a bit like, I don’t know, what we could call a playground, and then explode in your hands, leaving you saying, “oh my God, now what do I do?”

Yes, it’s actually a gigantic problem, because with Eve, thank God, we’ve now reached a maturity and stability of the framework that lets me rest on my laurels a bit. But note, that’s simply because I chose for the project to be mature and stable; I don’t want to take it to, say, a version 2.0. If I were forced, like .NET, to make a new major release every year, it would obviously be a 24-hour-a-day job.

I also confess that to address this problem I tried to make Eve somehow profitable, to get an income from the project itself through donations, like “buy me a coffee”. Not with the objective of living on it full-time, but of being able to pay myself half a day of work per week to dedicate exclusively to the project. Because if I could have dedicated, say, every Friday, 8 hours to open source, you would certainly have a project 10 times more beautiful than it is now; same with Cerberus, same with Electronic Invoicing.

As you can imagine, this didn’t have much success, because everyone is very good at installing packages, but when they have to reach for their wallets, they’re much less good at it. There was gratitude, which is very pleasant, but the long-term strategy is missing… that’s another topic; we don’t want to discuss now how to sustain maintenance…

There comes a moment when the cognitive commitment is really great; you have so much other business and other kinds of work to deal with, and it becomes a problem. As for the solution, I was lucky: a bit like what happened to you, I gradually let one of these contributors, even in a somewhat sly way, take control. It started with minimal pull requests, then he gained courage, and I sort of nurtured him.

In the end, when I was certain he knew the codebase very well and was also an expert, I fundamentally left Cerberus in his hands 100%. So I follow it from afar, I receive email notifications, and if he has some important change to make he asks me for advice, but he even has the rights to publish releases. Let’s say that now I’m simply a father watching his child grow from afar, so to speak.

For Eve, I’m still the main maintainer, but as I said, the choice for Eve was perhaps made with the idea of being able to continue living and earning, let’s say, my salary. That was ok. The product is mature. From now on, we accept pull requests for bug fixes or mature new features that make sense to incorporate, but I don’t foresee further development in any direction.

Electronic Invoicing, for me, is strategic because we use it every day in the company; we ultimately make management software, so there I remain the maintainer, and I do it gladly because there’s a need.

So, in general, hopefully we’ll get there in the .NET world too… Many have told me, “how nice, I’ll definitely make a donation; we’ll make a donation for Electronic Invoicing because we use it in the company.” I think not even one has arrived. In this respect the .NET world certainly still has to mature and become aware that for a product you use every day, and that somehow earns you income in the end (otherwise you wouldn’t use it), it makes sense to contribute to the long life of the project. Otherwise you risk finding yourself, at a certain point, with a maintainer who took his motorcycle to the Himalayas to go climbing, and a critical project that’s no longer updated, that has security problems, and so on. On this, there’s work to do.

Ok, to conclude, two last quick questions, staying in the open source world: what advice would you give to someone who wants to start an open source project, and to someone who wants to contribute to one?

Yes. So, for those who want to start, I’d say: don’t worry. As you said before, there were projects you had open-sourced that actually didn’t need to be. “They didn’t need to be open source.” This is really a problem of wrong perception, in the sense that there is certainly someone on planet Earth who has to solve the same problem you’re solving at that moment.

So even small, if you like trivial, projects that serve little purpose: thanks to search and discovery algorithms and tools, and the exposure from GitHub, Google, and so on, rest assured that if you’re doing a project for electronic invoicing, someone else has that problem. The first thing is that the history of open source is full of projects born as hobby projects, put on GitHub almost casually, for convenience, because “this way I have a remote backup”, that then exploded in the hands of their maintainers because they were successful.

But even if they aren’t successful, in the meantime it helps you acquire the know-how, which is not small, gain experience, and overcome the shyness of sharing your code and showing it in public. And this is the other very important thing: my programming style has evolved a lot, thanks also to seeing what others do.

Sure, it can also be, how to say, sometimes not humiliating, I’d say, but it certainly puts you in your place when you see that your code has been refactored in a much more performant or elegant way by others. But that’s where you learn. It’s a bit like the point about always drinking water from the same spring that I was telling you about. So I’d certainly say: first, don’t be afraid; throw anything on GitHub even if the code isn’t the best. Don’t sit there refining and cleaning it up, because maybe someone else will do it for you, and will even be grateful to you.

The other thing, for those who want to start contributing instead: as I was saying before, there are certainly projects you use every day, even professional frameworks… I’ll tell, I’d say, another episode here. When I was preparing a talk on Python and explaining how to use Python inside Visual Studio, which few people know lets you use all the Visual Studio features you’re used to to write Python code, I realized that the official documentation on the Microsoft Visual Studio site had some shortcomings in the pages dedicated to Python. So what did I do? I made a fork of that documentation, which is now finally all on GitHub, and contributed a fix.

And now, I don’t know if it’s still like this, but two or three years ago, if you went to the official Microsoft documentation for Python, you’d find my little face among the contributors. So there you go, I made a contribution even to an official Microsoft project. For free, because it’s the tool I use every day.

I’m convinced that a large part of us developers have had this experience: noticing a small error, a small problem, or a gap in a specification that doesn’t require an extremely complicated algorithm to solve. It can also be, as in that other episode I told you about, a docstring, a comment with a typo. They’re all experiences that add up and help you gain familiarity with the context. So don’t start by contributing, say, a Fibonacci optimization or who knows what.

So, really start from the trivial, from the simple, from what you do every day, because you know it very well and you’re already competent. After that, there’s time and a way to gain confidence; many of my contributors, as I was telling you, started like this. Then, gradually, they gained courage and went to examine the deeper code, to solve the more complicated issues that I myself didn’t feel like looking at, because I knew they were a hornet’s nest. Then a willing person arrived who did it in my place, riding the enthusiasm, which maybe I don’t have anymore, but they do: positive energy to spend.

Good, very interesting, I agree completely. It’s one of the most common barriers to entering the open source world, precisely because it’s tied to the idea that “I have to contribute,” and you have in your head that the contribution must be substantial, when, in the end, it’s often a small thing.

Exactly. The novice contributor, let’s say, is intimidated and thinks they should contribute something fundamental, otherwise it doesn’t make sense to contribute. On the other hand, you have a maintainer who is literally at the window waiting for those who contribute the small things, because it’s those many small things, which are only relatively small (it’s the remote developer’s perception that it’s a small thing), that take work off the maintainer’s shoulders and off the community’s.

So I can’t wait even for the very small, so-called small things to arrive. They actually have great value, as seen in the example of the 800 typo fixes in the Eve documentation.

I thank you for your availability. It’s been a very pleasant chat. I hope to have you as a guest again; as we discussed before we started, there’s an interesting topic we could cover, which is fitness for developers.

If you decide to do it, call me. Thanks so much to you and your whole team for what you do with this podcast. You do an excellent job.

Thanks so much, thanks so much, and thanks again for your availability. See you next time, bye everyone.

Bye-bye, bye everyone, thanks.

December 22, 2025 09:49 AM UTC


Zato Blog

Modern REST API Tutorial in Python

Great APIs don't win theoretical arguments - they simply work reliably and make developers' lives easier.

Here's a tutorial on what building production APIs is really about: creating interfaces that are practical to use while keeping your systems maintainable for years to come.

Sound intriguing? Read the modern REST API tutorial in Python here.

More resources

➤ Python API integration tutorials
What is a Network Packet Broker? How to automate networks in Python?
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?
Open-source iPaaS in Python

December 22, 2025 03:00 AM UTC


Armin Ronacher

A Year Of Vibes

2025 draws to a close and it’s been quite a year. Around this time last year, I wrote a post that reflected on my life. Had I written about programming, it might have aged badly, as 2025 has been a year like no other for my profession.

2025 Was Different

2025 was the year of changes. Not only did I leave Sentry and start my new company, it was also the year I stopped programming the way I did before. In June I finally felt confident enough to share that my way of working was different:

Where I used to spend most of my time in Cursor, I now mostly use Claude Code, almost entirely hands-off. […] If you would have told me even just six months ago that I’d prefer being an engineering lead to a virtual programmer intern over hitting the keys myself, I would not have believed it.

While I set out last year wanting to write more, that desire had nothing to do with agentic coding. Yet I published 36 posts — almost 18% of all posts on this blog since 2007. I also had around a hundred conversations with programmers, founders, and others about AI because I was fired up with curiosity after falling into the agent rabbit hole.

2025 was also a not so great year for the world. To make my peace with it, I started a separate blog to keep those thoughts apart from this one.

The Year Of Agents

It started with a growing obsession with Claude Code in April or May, resulting in months of building my own agents and using others’. Social media exploded with opinions on AI: some good, some bad.

Now I feel I have found a new stable status quo for how I reason about where we are and where we are going. I’m doubling down on code generation, file systems, programmatic tool invocation via an interpreter glue, and skill-based learning. Basically: what Claude Code innovated is still state of the art for me. That has worked very well over the last few months, and seeing foundation model providers double down on skills reinforces my belief in this approach.

I’m still perplexed by how TUIs made such a strong comeback. At the moment I’m using Amp, Claude Code, and Pi, all from the command line. Amp feels like the Apple or Porsche of agentic coding tools, Claude Code is the affordable Volkswagen, and Pi is the Hacker’s Open Source choice for me. They all feel like projects built by people who, like me, use them to an unhealthy degree to build their own products, but with different trade-offs.

I continue to be blown away by what LLMs paired with tool execution can do. At the beginning of the year I mostly used them for code generation, but now a large share of my agentic use is for day-to-day things. I’m sure we will see some exciting pushes towards consumer products in 2026. LLMs are now helping me organize my life, and I expect that to grow further.

The Machine And Me

Because LLMs now do more than help me program, I’m starting to rethink my relationship with these machines. I increasingly find it hard not to form parasocial bonds with some of the tools I use, which I find odd and discomforting. Most agents we use today do not have much of a memory and have little personality, but it’s easy to build yourself one that does. An LLM with memory is an experience that is hard to shake off.

It’s both fascinating and questionable. For two years I have tried to train myself to think of these models as mere token tumblers, but that reductive view no longer works for me. The systems we now create have human tendencies, but elevating them to a human level would be a mistake. I increasingly take issue with calling these machines “agents,” because agency and responsibility should remain with humans, yet I have no better word for them. Whatever they are becoming, they can trigger emotional responses in us that can be detrimental if we are not careful. Our inability to properly name these creations and place them in relation to us is a challenge I believe we need to solve.

Because of all this unintentional anthropomorphization, I’m really struggling at times to find the right words for how I’m working with these machines. I know that this is not just me; it’s others too. It creates even more discomfort when working with people who currently reject these systems outright. One of the most common comments I read in response to agentic coding tool articles is this rejection of giving the machine personality.

Opinions Everywhere

An unexpected aspect of using AI so much is that we talk far more about vibes than anything else. This way of working is less than a year old, yet it challenges half a century of software engineering experience. So there are many opinions, and it’s hard to say which will stand the test of time.

I found a lot of conventional wisdom I don’t agree with, but I have nothing to back up my opinions. How would I? I quite vocally shared my lack of success with MCP throughout the year, but I had little to back it up beyond “does not work for me.” Others swore by it. Similar with model selection. Peter, who got me hooked on Claude early in the year, moved to Codex and is happy with it. I don’t enjoy that experience nearly as much, though I started using it more. I have nothing beyond vibes to back up my preference for Claude.

It’s also important to know that some of the vibes come with intentional signalling. Plenty of people whose views you can find online have a financial interest in one product over another, for instance because they are investors in it or they are paid influencers. They might have become investors because they liked the product, but it’s also possible that their views are affected and shaped by that relationship.

Outsourcing vs Building Yourself

Pick up a library from any AI company today and you’ll notice they’re built with Stainless or Fern. The docs use Mintlify, the site’s authentication system might be Clerk. Companies now sell services you would have built yourself previously. This increase in outsourcing of core services to companies specializing in it meant that the bar for some aspects of the user experience has risen.

But with our newfound power from agentic coding tools, you can build much of this yourself. I had Claude build me an SDK generator for Python and TypeScript — partly out of curiosity, partly because it felt easy enough. As you might know, I’m a proponent of simple code and building it yourself. This makes me somewhat optimistic that AI has the potential to encourage building on fewer dependencies. At the same time, it’s not clear to me that we’re moving that way given the current trends of outsourcing everything.

Learnings and Wishes

This brings me not to predictions but to wishes for where we could put our energy next. I don’t really know what I’m looking for here, but I want to point at my pain points and give some context and food for thought.

New Kind Of Version Control

My biggest unexpected finding: we’re hitting limits of traditional tools for sharing code. The pull request model on GitHub doesn’t carry enough information to review AI generated code properly — I wish I could see the prompts that led to changes. It’s not just GitHub, it’s also git that is lacking.

With agentic coding, part of what makes the models work today is knowing the mistakes. If you steer it back to an earlier state, you want the tool to remember what went wrong. There is, for lack of a better word, value in failures. As humans we might also benefit from knowing the paths that did not lead us anywhere, but for machines this is critical information. You notice this when you are trying to compress the conversation history. Discarding the paths that led you astray means that the model will try the same mistakes again.

Some agentic coding tools have begun spinning up worktrees or creating checkpoints in git for restore, in-conversation branch and undo features. There’s room for UX innovation that could make these tools easier to work with. This is probably why we’re seeing discussions about stacked diffs and alternative version control systems like Jujutsu.

Will this change GitHub or will it create space for some new competition? I hope so. I increasingly want to better understand genuine human input and tell it apart from machine output. I want to see the prompts and the attempts that failed along the way. And then somehow I want to squash and compress it all on merge, but with a way to retrieve the full history if needed.

New Kind Of Review

This is related to the version control piece: current code review tools assign strict role definitions that just don’t work with AI. Take the GitHub code review UI: I regularly want to use comments on the PR view to leave notes for my own agents, but there is no guided way to do that. The review interface refuses to let me review my own code, I can only comment, but that does not have quite the same intention.

There is also the problem that an increased amount of code review now happens between me and my agents locally. For instance, the Codex code review feature on GitHub stopped working for me because it can only be bound to one organization at a time. So I now use Codex on the command line to do reviews, but that means a whole part of my iteration cycles is invisible to other engineers on the team. That doesn’t work for me.

Code review to me feels like it needs to become part of the VCS.

New Observability

I also believe that observability is up for grabs again. We now have both the need and opportunity to take advantage of it on a whole new level. Most people were not in a position where they could build their own eBPF programs, but LLMs can. Likewise, many observability tools shied away from SQL because of its complexity, but LLMs are better at it than any proprietary query language. They can write queries, they can grep, they can map-reduce, they remote-control LLDB. Anything that has some structure and text is suddenly fertile ground for agentic coding tools to succeed. I don’t know what the observability of the future looks like, but my strong hunch is that we will see plenty of innovation here. The better the feedback loop to the machine, the better the results.

I’m not even sure what I’m asking for here, but I think that one of the challenges in the past was that many cool ideas for better observability — specifically dynamic reconfiguration of services for more targeted filtering — were user-unfriendly because they were complex and hard to use. But now those might be the right solutions in light of LLMs because of their increased capabilities for doing this grunt work. For instance Python 3.14 landed an external debugger interface which is an amazing capability for an agentic coding tool.

Working With Slop

This may be a little more controversial, but what I haven’t managed this year is to give in to the machine. I still treat it like regular software engineering and review a lot. I also recognize that an increasing number of people are not working with this model of engineering but have instead completely given in to the machine. As crazy as that sounds, I have seen some people be quite successful with it. I don’t yet know how to reason about this, but it is clear to me that even though code is being generated in the end, the way of working in that new world is very different from the world I’m comfortable with. My suspicion is that because that world is here to stay, we might need some new social contracts to separate these out.

The most obvious version of this is the increased amount of these types of contributions to Open Source projects, which are quite frankly an insult to anyone who is not working in that model. I find reading such pull requests quite rage-inducing.

Personally, I’ve tried to attack this problem with contribution guidelines and pull request templates. But this seems a little like a fight against windmills. This might be something where the solution will not come from changing what we’re doing. Instead, it might come from vocal people who are also pro-AI engineering speaking out on what good behavior in an agentic codebase looks like. And it is not just to throw up unreviewed code and then have another person figure the shit out.

December 22, 2025 12:00 AM UTC

December 21, 2025


Ned Batchelder

Generating data shapes with Hypothesis

In my last blog post (A testing conundrum), I described trying to test my Hasher class which hashes nested data. I couldn’t get Hypothesis to generate usable data for my test. I wanted to assert that two equal data items would hash equally, but Hypothesis was finding pairs like [0] and [False]. These are equal but hash differently because the hash takes the types into account.

In the blog post I said,

If I had a schema for the data I would be comparing, I could use it to steer Hypothesis to generate realistic data. But I don’t have that schema...

I don’t want a fixed schema for the data Hasher would accept; I want tests that compare data generated from the same schema, so that a list of ints is never compared to a list of bools. Hypothesis is good at generating things randomly: usually it generates data, but we can also use it to generate schemas randomly!

Hypothesis basics

Before describing my solution, I’ll take a quick detour to describe how Hypothesis works.

Hypothesis calls their randomness machines “strategies”. Here is a strategy that will produce random integers between -99 and 1000:

import hypothesis.strategies as st
st.integers(min_value=-99, max_value=1000)

Strategies can be composed:

st.lists(st.integers(min_value=-99, max_value=1000), max_size=50)

This will produce lists of integers from -99 to 1000. The lists will have up to 50 elements.

Strategies are used in tests with the @given decorator, which takes a strategy and runs the test a number of times with different example data drawn from the strategy. In your test you check a desired property that holds true for any data the strategy can produce.

To demonstrate, here’s a test of sum() that checks that summing a list of numbers in two halves gives the same answer as summing the whole list:

from hypothesis import given, strategies as st

@given(st.lists(st.integers(min_value=-99, max_value=1000), max_size=50))
def test_sum(nums):
    # We don't have to test sum(), this is just an example!
    mid = len(nums) // 2
    assert sum(nums) == sum(nums[:mid]) + sum(nums[mid:])

By default, Hypothesis will run the test 100 times, each with a different randomly generated list of numbers.

Schema strategies

The solution to my data comparison problem is to have Hypothesis generate a random schema in the form of a strategy, then use that strategy to generate two examples. Doing this repeatedly gets us pairs of data with the same “shape”, which works well for our tests.

This is kind of twisty, so let’s look at it in pieces. We start with a list of strategies that produce primitive values:

primitives = [
    st.none(),
    st.booleans(),
    st.integers(min_value=-1000, max_value=10_000_000),
    st.floats(min_value=-100, max_value=100),
    st.text(max_size=10),
    st.binary(max_size=10),
]

Then a list of strategies that produce hashable values, which are all the primitives, plus tuples of any of the primitives:

def tuples_of(elements):
    """Make a strategy for tuples of some other strategy."""
    return st.lists(elements, max_size=3).map(tuple)

# List of strategies that produce hashable data.
hashables = primitives + [tuples_of(s) for s in primitives]

We want to be able to make nested dictionaries with leaves of some other type. This function takes a leaf-making strategy and produces a strategy to make those dictionaries:

def nested_dicts_of(leaves):
    """Make a strategy for recursive dicts with leaves from another strategy."""
    return st.recursive(
        leaves,
        lambda children: st.dictionaries(st.text(max_size=10), children, max_size=3),
        max_leaves=10,
    )

Finally, here’s our strategy that makes schema strategies:

nested_data_schemas = st.recursive(
    st.sampled_from(primitives),
    lambda children: st.one_of(
        children.map(lambda s: st.lists(s, max_size=5)),
        children.map(tuples_of),
        st.sampled_from(hashables).map(lambda s: st.sets(s, max_size=10)),
        children.map(nested_dicts_of),
    ),
    max_leaves=3,
)

For debugging, it’s helpful to generate an example strategy from this strategy, and then an example from that, many times:

for _ in range(50):
    print(repr(nested_data_schemas.example().example()))

Hypothesis is good at making data we’d never think to try ourselves. Here is some of what it made:

[None, None, None, None, None]
{}
[{False}, {False, True}, {False, True}, {False, True}]
{(1.9, 80.64553337755876), (-41.30770818038395, 9.42967906108538, -58.835811641800085), (31.102786990742203,), (28.2724197133397, 6.103515625e-05, -84.35107066147154), (7.436329211943294e-263,), (-17.335739410320514, 1.5029061311609365e-292, -8.17077562035881), (-8.029363284353857e-169, 49.45840191722425, -15.301768150196054), (5.960464477539063e-08, 1.1518373121077722e-213), (), (-0.3262457914511714,)}
[b'+nY2~\xaf\x8d*\xbb\xbf', b'\xe4\xb5\xae\xa2\x1a', b'\xb6\xab\xafEi\xc3C\xab"\xe1', b'\xf0\x07\xdf\xf5\x99', b'2\x06\xd4\xee-\xca\xee\x9f\xe4W']
{'fV': [81.37177374286324, 3.082323424992609e-212, 3.089885728465406e-151, -9.51475773638932e-86, -17.061851038597922], 'J»\x0c\x86肭|\x88\x03\x8aU': [29.549966208819654]}
[{}, -68.48316192397687]
None
['\x85\U0004bf04°', 'pB\x07iQT', 'TRUE', '\x1a5ùZâ\U00048752\U0005fdf8ê', '\U000fe0b9m*¤\U000b9f1e']
(14.232866652585258, -31.193835515904652, 62.29850355163285)
{'': {'': None, '\U000be8de§\nÈ\U00093608u': None, 'Y\U000709e4¥ùU)GE\U000dddc5¬': None}}
[{(), (b'\xe7', b'')}, {(), (b'l\xc6\x80\xdf\x16\x91', b'', b'\x10,')}, {(b'\xbb\xfb\x1c\xf6\xcd\xff\x93\xe0\xec\xed',), (b'g',), (b'\x8e9I\xcdgs\xaf\xd1\xec\xf7', b'\x94\xe6#', b'?\xc9\xa0\x01~$k'), (b'r', b'\x8f\xba\xe6\xfe\x92n\xc7K\x98\xbb', b'\x92\xaa\xe8\xa6s'), (b'f\x98_\xb3\xd7', b'\xf4+\xf7\xbcU8RV', b'\xda\xb0'), (b'D',), (b'\xab\xe9\xf6\xe9', b'7Zr\xb7\x0bl\xb6\x92\xb8\xad', b'\x8f\xe4]\x8f'), (b'\xcf\xfb\xd4\xce\x12\xe2U\x94mt',), (b'\x9eV\x11', b'\xc5\x88\xde\x8d\xba?\xeb'), ()}, {(b'}', b'\xe9\xd6\x89\x8b')}, {(b'\xcb`', b'\xfd', b'w\x19@\xee'), ()}]
((), (), ())

Finally writing the test

Time to use all of this in a test:

@given(nested_data_schemas.flatmap(lambda s: st.tuples(s, s)))
def test_same_schema(data_pair):
    data1, data2 = data_pair
    h1, h2 = Hasher(), Hasher()
    h1.update(data1)
    h2.update(data2)
    if data1 == data2:
        assert h1.digest() == h2.digest()
    else:
        # Strictly speaking, unequal data could produce equal hashes,
        # but it's very unlikely, so test for it anyway.
        assert h1.digest() != h2.digest()

Here I use the .flatmap() method to draw an example from the nested_data_schemas strategy and call the provided lambda with the drawn example, which is itself a strategy. The lambda uses st.tuples to make tuples with two examples drawn from the strategy. So we get one data schema, and two examples from it as a tuple passed into the test as data_pair. The test then unpacks the data, hashes them, and makes the appropriate assertion.
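The shape of that composition is easier to see in a smaller, self-contained sketch (the strategies and names here are illustrative, not the ones from this post):

```python
from hypothesis import strategies as st

# A strategy that yields strategies: each drawn "schema" is itself
# a strategy, standing in for nested_data_schemas above.
schemas = st.sampled_from([st.integers(), st.text()])

# flatmap draws one schema, then builds a new strategy from it, so
# both elements of each pair are drawn from the *same* schema.
pairs_same_schema = schemas.flatmap(lambda s: st.tuples(s, s))

pair = pairs_same_schema.example()
# Both elements share a type: either (int, int) or (str, str).
assert type(pair[0]) is type(pair[1])
```

The same draw-then-build trick scales up to the full nested-schema strategy used in the test.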

This works great: the tests pass. To check that the test was working well, I made some breaking tweaks to the Hasher class. If Hypothesis is configured to generate enough examples, it finds data examples demonstrating the failures.

I’m pleased with the results. Hypothesis is something I’ve been wanting to use more, so I’m glad I took this chance to learn more about it and get it working for these tests. To be honest, this is way more than I needed to test my Hasher class. But once I got started, I wanted to get it right, and learning is always good.

I’m a bit concerned that the standard setting (100 examples) isn’t enough to find the planted bugs in Hasher. There are many parameters in my strategies that could be tweaked to keep Hypothesis from wandering too broadly, but I don’t know how to decide what to change.

Actually

The code in this post is different from the actual code I ended up with. Mostly this is because I was working on the code while writing this post and discovered some problems that I wanted to fix. For example, the tuples_of function makes homogeneous tuples: varying lengths, with all elements of the same type. This is not the usual use of tuples (see Lists vs. Tuples). Adapting for heterogeneous tuples added more complexity, which was interesting to learn, but I didn't want to go back and add it here.

You can look at the final strategies.py to see that and other details, including type hints for everything, which was a journey of its own.

Postscript: AI assistance

I would not have been able to come up with all of this by myself. Hypothesis is very powerful, but requires a new way of thinking about things. It’s twisty to have functions returning strategies, and especially strategies producing strategies. The docs don’t have many examples, so it can be hard to get a foothold on the concepts.

Claude helped me by providing initial code, answering questions, debugging when things didn’t work out, and so on. If you are interested, this is one of the discussions I had with it.

December 21, 2025 04:43 PM UTC

December 19, 2025


Luke Plant

Help my website is too small

A jobs web site I belong to just emailed me, telling me that some of the links in my public profile on their site are “broken” and “thus have been removed”.

The evidence that these sites are broken? They are too small:

https://www.djangoproject.com/: response body too small (6220 bytes)

https://www.cciw.co.uk/: response body too small (3033 bytes)

The first is the home page of the Django web framework, and is, unsurprisingly, implemented using Django (see the djangoproject.com source code). The second is one of my own projects, and also implemented using Django (source also available for anyone who cares).

Checking in webdev tools on these sites gives very similar numbers to the above for the over-the-wire size of the initial HTML (though I get slightly higher figures), so this wasn’t a blip caused by downtime, as far as I can see.

Apparently, if your HTML is less than 7k, that obviously can't be a real website, let alone something as ridiculously small as 3k. Even with compression turned up all the way, it's clearly impossible to return anything more than an error message in less than 4k, right?

So please can Django get it sorted and add some bloat to their home page, and to their framework, and can someone also send me tips on bloating my own sites, so that my profile links can be counted as real websites? Thanks!

December 19, 2025 01:45 PM UTC


Real Python

The Real Python Podcast – Episode #277: Moving Towards Spec-Driven Development

What are the advantages of spec-driven development compared to vibe coding with an LLM? Are these recent trends a move toward declarative programming? This week on the show, Marc Brooker, VP and Distinguished Engineer at AWS, joins us to discuss specification-driven development and Kiro.



December 19, 2025 12:00 PM UTC

December 18, 2025


Django Weblog

Hitting the Home Stretch: Help Us Reach the Django Software Foundation's Year-End Goal!

As we wrap up another strong year for the Django community, we wanted to share an update and a thank you. This year, we raised our fundraising goal from $200,000 to $300,000, and we are excited to say we are now over 88% of the way there. That puts us firmly in the home stretch, and a little more support will help us close the gap and reach 100%.

So why the higher goal this year? We expanded the Django Fellows program to include a third Fellow. In August, we welcomed Jacob Tyler Walls as our newest Django Fellow. That extra capacity gives the team more flexibility and resilience, whether someone is taking parental leave, time off around holidays, or stepping away briefly for other reasons. It also makes it easier for Fellows to attend more Django events and stay connected with the community, all while keeping the project running smoothly without putting too much pressure on any one person.

We are also preparing to raise funds for an executive director role early next year. That work is coming soon, but right now, the priority is finishing this year strong.

We want to say a sincere thank you to our existing sponsors and to everyone who has donated so far. Your support directly funds stable Django releases, security work, community programs, and the long-term health of the framework. If you or your organization have end-of-year matching funds or a giving program, this is a great moment to put them to use and help push us past the finish line.

If you would like to help us reach that final stretch, you can find all the details on our fundraising page.

Other ways to support Django:

Thank you for helping support Django and the people who make it possible. We are incredibly grateful for this community and everything you do to keep Django strong.

December 18, 2025 10:04 PM UTC


Sumana Harihareswara - Cogito, Ergo Sumana

Python Software Foundation, National Science Foundation, And Integrity


December 18, 2025 07:43 PM UTC


Django Weblog

Introducing the 2026 DSF Board

Thank You to Our Outgoing Directors

We extend our gratitude to Thibaud Colas and Sarah Abderemane, who are completing their terms on the board. Their contributions shaped the foundation in meaningful ways, and the following highlights only scratch the surface of their work.

Thibaud served as President in 2025 and Secretary in 2024. He was instrumental in governance improvements, the Django CNA initiative, election administration, and creating our first annual report. He also led our birthday campaign and helped with the creation of several new working groups this year. His thoughtful leadership helped the board navigate complex decisions.

Sarah served as Vice President in 2025 and contributed significantly to our outreach efforts, working group coordination, and membership management. She also served as a point of contact for the Django CNA initiative alongside Thibaud.

Both Thibaud and Sarah did too many things to list here. They were amazing ambassadors for the DSF, representing the board at many conferences and events. They will be deeply missed, and we are happy to have their continued membership and guidance in our many working groups.

On behalf of the board, thank you both for your commitment to Django and the DSF. The community is better for your service.

Thank You to Our 2025 Officers

Thank you to Tom Carrick and Jacob Kaplan-Moss for their service as officers in 2025.

Tom served as Secretary, keeping our meetings organized and our records in order. Jacob served as Treasurer, providing careful stewardship of the foundation's finances. Their dedication helped guide the DSF through another successful year.

Welcome to Our Newly Elected Directors

We welcome Priya Pahwa and Ryan Cheley to the board, and congratulate Jacob Kaplan-Moss on his re-election.

2026 DSF Board Officers

The board unanimously elected our officers for 2026:

I'm honored to serve as President for 2026. The DSF has important work ahead, and I'm looking forward to building on the foundation that previous boards have established.

Our monthly board meeting minutes may be found at dsf-minutes, and December's minutes are available.

If you have a great idea for the upcoming year or feel something needs our attention, please reach out to us via our Contact the DSF page. We're always open to hearing from you.

December 18, 2025 06:50 PM UTC


Ned Batchelder

A testing conundrum

Update: I found a solution which I describe in Generating data shapes with Hypothesis.

In coverage.py, I have a class for computing the fingerprint of a data structure. It’s used to avoid doing duplicate work when re-processing the same data won’t add to the outcome. It’s designed to work for nested data, and to canonicalize things like set ordering. The slightly simplified code looks like this:

import hashlib
from typing import Any

class Hasher:
    """Hashes Python data for fingerprinting."""

    def __init__(self) -> None:
        self.hash = hashlib.new("sha3_256")

    def update(self, v: Any) -> None:
        """Add `v` to the hash, recursively if needed."""
        self.hash.update(str(type(v)).encode("utf-8"))
        match v:
            case None:
                pass
            case str():
                self.hash.update(v.encode("utf-8"))
            case bytes():
                self.hash.update(v)
            case int() | float():
                self.hash.update(str(v).encode("utf-8"))
            case tuple() | list():
                for e in v:
                    self.update(e)
            case dict():
                for k, kv in sorted(v.items()):
                    self.update(k)
                    self.update(kv)
            case set():
                self.update(sorted(v))
            case _:
                raise ValueError(f"Can't hash {v = }")
        self.hash.update(b".")

    def digest(self) -> bytes:
        """Get the full binary digest of the hash."""
        return self.hash.digest()

To test this, I had some basic tests like:

def test_string_hashing():
    # Same strings hash the same.
    # Different strings hash differently.
    h1 = Hasher()
    h1.update("Hello, world!")
    h2 = Hasher()
    h2.update("Goodbye!")
    h3 = Hasher()
    h3.update("Hello, world!")
    assert h1.digest() != h2.digest()
    assert h1.digest() == h3.digest()

def test_dict_hashing():
    # The order of keys doesn't affect the hash.
    h1 = Hasher()
    h1.update({"a": 17, "b": 23})
    h2 = Hasher()
    h2.update({"b": 23, "a": 17})
    assert h1.digest() == h2.digest()

The last line in the update() method adds a dot to the running hash. That was to solve a problem covered by this test:

def test_dict_collision():
    # Nesting matters.
    h1 = Hasher()
    h1.update({"a": 17, "b": {"c": 1, "d": 2}})
    h2 = Hasher()
    h2.update({"a": 17, "b": {"c": 1}, "d": 2})
    assert h1.digest() != h2.digest()

The most recent change to Hasher was to add the set() clause. There (and in dict()), we are sorting the elements to canonicalize them. The idea is that equal values should hash equally and unequal values should not. Sets and dicts are equal regardless of their iteration order, so we sort them to get the same hash.

I added a test of the set behavior:

def test_set_hashing():
    h1 = Hasher()
    h1.update({(1, 2), (3, 4), (5, 6)})
    h2 = Hasher()
    h2.update({(5, 6), (1, 2), (3, 4)})
    assert h1.digest() == h2.digest()
    h3 = Hasher()
    h3.update({(1, 2)})
    assert h1.digest() != h3.digest()

But I wondered if there was a better way to test this class. My small one-off tests weren’t addressing the full range of possibilities. I could read the code and feel confident, but wouldn’t a more comprehensive test be better? This is a pure function: inputs map to outputs with no side-effects or other interactions. It should be very testable.

This seemed like a good candidate for property-based testing. The Hypothesis library would let me generate data, and I could check that the desired properties of the hash held true.

It took me a while to get the Hypothesis strategies wired up correctly. I ended up with this, but there might be a simpler way:

from hypothesis import strategies as st

scalar_types = [
    st.none(),
    st.booleans(),
    st.integers(),
    st.floats(allow_infinity=False, allow_nan=False),
    st.text(),
    st.binary(),
]

scalars = st.one_of(*scalar_types)

def tuples_of(strat):
    return st.lists(strat, max_size=3).map(tuple)

hashable_types = scalar_types + [tuples_of(s) for s in scalar_types]

# Homogeneous sets: all elements same type.
homogeneous_sets = (
    st.sampled_from(hashable_types)
    .flatmap(lambda s: st.sets(s, max_size=5))
)

# Full nested Python data.
python_data = st.recursive(
    scalars,
    lambda children: (
        st.lists(children, max_size=5)
        | tuples_of(children)
        | homogeneous_sets
        | st.dictionaries(st.text(), children, max_size=5)
    ),
    max_leaves=10,
)

This doesn’t make completely arbitrary nested Python data: sets are forced to have elements all of the same type or I wouldn’t be able to sort them. Dictionaries only have strings for keys. But this works to generate data similar to the real data we hash. I wrote this simple test:

from hypothesis import given

@given(python_data)
def test_one(data):
    # Hashing the same thing twice.
    h1 = Hasher()
    h1.update(data)
    h2 = Hasher()
    h2.update(data)
    assert h1.digest() == h2.digest()

This didn’t find any failures, but this is the easy test: hashing the same thing twice produces equal hashes. The trickier test is to get two different data structures, and check that their equality matches their hash equality:

@given(python_data, python_data)
def test_two(data1, data2):
    h1 = Hasher()
    h1.update(data1)
    h2 = Hasher()
    h2.update(data2)

    if data1 == data2:
        assert h1.digest() == h2.digest()
    else:
        assert h1.digest() != h2.digest()

This immediately found problems, but not in my code:

> assert h1.digest() == h2.digest()
E AssertionError: assert b'\x80\x15\xc9\x05...' == b'\x9ap\xebD...'
E
E   At index 0 diff: b'\x80' != b'\x9a'
E
E   Full diff:
E   - (b'\x9ap\xebD...)'
E   + (b'\x80\x15\xc9\x05...)'
E Falsifying example: test_two(
E     data1=(False, False, False),
E     data2=(False, False, 0),
E )

Hypothesis found that (False, False, False) is equal to (False, False, 0), but they hash differently. This is correct. The Hasher class takes the types of the values into account in the hash. False and 0 are equal, but they are different types, so they hash differently. The same problem shows up for 0 == 0.0 and 0.0 == -0.0. The theory of my test was incorrect: some values that are equal should hash differently.
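A quick self-contained check (mine, not from the original post) shows why the type tag and string representation drive these values apart:

```python
# bool is a subclass of int, so these comparisons are all True...
assert False == 0 and 0 == 0.0 and 0.0 == -0.0

# ...but Hasher mixes str(type(v)) into the digest before each value,
# and the type strings differ:
assert str(type(False)) != str(type(0))
assert str(type(0)) != str(type(0.0))

# 0.0 and -0.0 share a type, but Hasher also feeds in str(v),
# and str(-0.0) is "-0.0" while str(0.0) is "0.0".
assert str(0.0) != str(-0.0)
```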

In my real code, this isn’t an issue. I won’t ever be comparing values like this to each other. If I had a schema for the data I would be comparing, I could use it to steer Hypothesis to generate realistic data. But I don’t have that schema, and I’m not sure I want to maintain that schema. This Hasher is useful as it is, and I’ve been able to reuse it in new ways without having to update a schema.

I could write a smarter equality check for use in the tests, but that would roughly approximate the code in Hasher itself. Duplicating product code in the tests is a good way to write tests that pass but don’t tell you anything useful.

I could exclude bools and floats from the test data, but those are actual values I need to handle correctly.

Hypothesis was useful in that it didn't find any failures other than the ones I described. I can't leave those tests in the automated test suite because I don't want to manually examine the failures, but at least this gave me more confidence that the code is good as it is now.

Testing is a challenge unto itself. This brought it home to me again. It’s not easy to know precisely what you want code to do, and it’s not easy to capture that intent in tests. For now, I’m leaving just the simple tests. If anyone has ideas about how to test Hasher more thoroughly, I’m all ears.

December 18, 2025 10:30 AM UTC


Eli Bendersky

Plugins case study: mdBook preprocessors

mdBook is a tool for easily creating books out of Markdown files. It's very popular in the Rust ecosystem, where it's used (among other things) to publish the official Rust book.

mdBook has a simple yet effective plugin mechanism that can be used to modify the book output in arbitrary ways, using any programming language or tool. This post describes the mechanism and how it aligns with the fundamental concepts of plugin infrastructures.

mdBook preprocessors

mdBook's architecture is pretty simple: your contents go into a directory tree of Markdown files. mdBook then renders these into a book, with one file per chapter. The book's output is HTML by default, but mdBook supports other outputs like PDF.

The preprocessor mechanism lets us register an arbitrary program that runs on the book's source after it's loaded from Markdown files; this program can modify the book's contents in any way it wishes before it all gets sent to the renderer for generating output.

Preprocessor flow for mdbook

The official documentation explains this process very well.

Sample plugin

I rewrote my classic "narcissist" plugin for mdBook; the code is available here.

In fact, there are two renditions of the same plugin there:

  1. One in Python, to demonstrate how mdBook can invoke preprocessors written in any programming language.
  2. One in Rust, to demonstrate how mdBook exposes an application API to plugins written in Rust (since mdBook is itself written in Rust).

Fundamental plugin concepts in this case study

Let's see how this case study of mdBook preprocessors measures against the Fundamental plugin concepts that were covered several times on this blog.

Discovery

Discovery in mdBook is very explicit. For every plugin we want mdBook to use, it has to be listed in the project's book.toml configuration file. For example, in the code sample for this post, the Python narcissist plugin is noted in book.toml as follows:

[preprocessor.narcissistpy]
command = "python3 ../preprocessor-python-narcissist/narcissist.py"

Each preprocessor is a command for mdBook to execute in a sub-process. Here it uses Python, but it can be anything else that can be validly executed.

Registration

For the purpose of registration, mdBook actually invokes the plugin command twice. The first time, it passes the arguments supports <renderer> where <renderer> is the name of the renderer (e.g. html). If the command returns 0, it means the preprocessor supports this renderer; otherwise, it doesn't.

In the second invocation, mdBook passes some metadata plus the entire book in JSON format to the preprocessor through stdin, and expects the preprocessor to return the modified book as JSON to stdout (using the same schema).

Hooks

In terms of hooks, mdBook takes a very coarse-grained approach. The preprocessor gets the entire book in a single JSON object (along with a context object that contains metadata), and is expected to emit the entire modified book in a single JSON object. It's up to the preprocessor to figure out which parts of the book to read and which parts to modify.

Given that books and other documentation typically have limited sizes, this is a reasonable design choice. Even tens of MiB of JSON-encoded data are very quick to pass between sub-processes via stdout and marshal/unmarshal. But we wouldn't be able to implement Wikipedia using this design.

Exposing an application API to plugins

This is tricky, given that the preprocessor mechanism is language-agnostic. mdBook does, however, offer some additional utilities to preprocessors implemented in Rust. These get access to mdBook's API to unmarshal the JSON representing the context metadata and the book's contents. mdBook offers the Preprocessor trait, which Rust preprocessors can implement, making it easier to wrangle the book's contents. See my Rust version of the narcissist preprocessor for a basic example of this.

Renderers / backends

Actually, mdBook has another plugin mechanism, but it's very similar conceptually to preprocessors. A renderer (also called a backend in some of mdBook's own doc pages) takes the same input as a preprocessor, but is free to do whatever it wants with it. The default renderer emits the HTML for the book; other renderers can do other things.

The idea is that a book can pass through multiple preprocessors, but ends up with a single renderer.

The data a renderer receives is exactly the same as a preprocessor - JSON encoded book contents. Due to this similarity, there's no real point getting deeper into renderers in this post.

December 18, 2025 10:10 AM UTC


Peter Bengtsson

Autocomplete using PostgreSQL instead of Elasticsearch

Here on my blog I have a site search. Before you search, there's autocomplete. The autocomplete is solved by using downshift in React and on the backend, there's an API /api/v1/typeahead?q=bla. Up until today, that backend was powered by Elasticsearch. Now it's powered by PostgreSQL. Here's how I implemented it.

Indexing

A cron job loops over all blog post titles and extracts portions of the words in the titles as singles, doubles, and triples. For each one, the blog post's popularity is accumulated onto the extracted keywords and combos.
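That extraction step can be sketched roughly like this (the function name and details are illustrative; the actual indexing code isn't shown in the post):

```python
def title_ngrams(title, max_words=3):
    # Lowercase the title and emit every run of 1..max_words
    # consecutive words as a candidate search term.
    words = title.lower().split()
    grams = []
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            grams.append(" ".join(words[i:i + n]))
    return grams

# For example, "Frank Zappa Biography" produces six terms:
# "frank", "zappa", "biography", "frank zappa",
# "zappa biography", and "frank zappa biography".
```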

These are then inserted into a Django ORM model that looks like this:


class SearchTerm(models.Model):
    term = models.CharField(max_length=100, db_index=True)
    popularity = models.FloatField(default=0.0)
    add_date = models.DateTimeField(auto_now=True)
    index_version = models.IntegerField(default=0)

    class Meta:
        unique_together = ("term", "index_version")
        indexes = [
            GinIndex(
                name="plog_searchterm_term_gin_idx",
                fields=["term"],
                opclasses=["gin_trgm_ops"],
            ),
        ]

The index_version is used like this, in the indexing code:


current_index_version = (
    SearchTerm.objects.aggregate(Max("index_version"))["index_version__max"]
    or 0
)
index_version = current_index_version + 1

...

SearchTerm.objects.bulk_create(bulk)

SearchTerm.objects.filter(index_version__lt=index_version).delete()

That means that I don't have to delete previous entries until new ones have been created. So if something goes wrong during the indexing, it doesn't break the API.
Essentially, there are about 13k entries in that model. For a very brief moment there are 2x13k entries and then back to 13k entries when the whole task is done.

The search is done with the LIKE operator.


peterbecom=# select term from plog_searchterm where term like 'za%';
            term
-----------------------------
 zahid
 zappa
 zappa biography
 zappa biography barry
 zappa biography barry miles
 zappa blog
(6 rows)

In Python, it's as simple as:


base_qs = SearchTerm.objects.all()
qs = base_qs.filter(term__startswith=term.lower())

But suppose someone searches for bio; we want that to match things like frank zappa biography, so what it actually does is:


from django.db.models import Q 

qs = base_qs.filter(
    Q(term__startswith=term.lower()) | Q(term__contains=f" {term.lower()}")
)

Typo tolerance

This is done with the % operator.


peterbecom=# select term from plog_searchterm where term % 'frenk';
  term
--------
 free
 frank
 freeze
 french
(4 rows)

In the Django ORM it looks like this:


base_qs = SearchTerm.objects.all()
qs = base_qs.filter(term__trigram_similar=term.lower())

And if that doesn't work, it gets even more desperate. It does this using the similarity() function. Looks like this in SQL:


peterbecom=# select term from plog_searchterm where similarity(term, 'zuppa') > 0.14;
       term
-------------------
 frank zappa
 zappa
 zappa biography
 radio frank zappa
 frank zappa blog
 zappa blog
 zurich
(7 rows)

Note on typo tolerance

Most of the time, the most basic query, i.e. the .filter(term__startswith=term.lower()) one, works and yields results. Only when it yields fewer results than the pagination size does the typo-tolerance query run; that's why it's only-if-needed. This means it might send two SQL SELECT queries from Python to PostgreSQL. In Elasticsearch, you usually don't do this: you send multiple queries at once and boost them differently.

It can be done with PostgreSQL too, using a UNION operator, so that you send one, more complex, query.
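For illustration, that single-round-trip variant might look something like the SQL below. This is my sketch against the table shown earlier, not the post's actual query; the rank column is an addition so prefix matches sort first, and the psycopg-style placeholders (`%(prefix)s`, with `%%` escaping the trigram operator) are assumptions.

```python
# Hypothetical one-round-trip autocomplete query: UNION the exact
# prefix match with the trigram fallback. A real query would also
# de-duplicate terms that match both branches.
AUTOCOMPLETE_SQL = """
SELECT term, popularity, 0 AS rank
  FROM plog_searchterm
 WHERE term LIKE %(prefix)s
UNION ALL
SELECT term, popularity, 1 AS rank
  FROM plog_searchterm
 WHERE term %% %(q)s
 ORDER BY rank, popularity DESC
 LIMIT 10
"""
```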

Speed

It's hard to measure the true performance of these things because they're so fast that it's more about the network speed.

On my fast MacBook Pro M4, I ran about 50 realistic queries and measured the time each took with this new PostgreSQL-based solution versus the previous Elasticsearch solution. Both take about 4ms per query. I suspect that 90% of that 4ms is serialization and transmission, not time spent inside the database itself.

The number of rows it searches is only, at the time of writing, 13,000+, so it's hard to get a feel for how much faster Elasticsearch would be than PostgreSQL. But with a GIN index in PostgreSQL, it would have to scale much, much larger before it felt too slow.

About Elasticsearch

Elasticsearch is better than PostgreSQL at full-text search, including n-grams. Elasticsearch is highly optimized for these kinds of things and has powerful ways to rank results by a combination of how well they matched and each entry's popularity. With PostgreSQL, that gets difficult.

But PostgreSQL is simple. It's solid and it doesn't take up nearly as much memory as Elasticsearch.

December 18, 2025 09:46 AM UTC


Talk Python to Me

#531: Talk Python in Production

Have you ever thought about getting your small product into production, but are worried about the cost of the big cloud providers? Or maybe you think your current cloud service is over-architected and costing you too much? Well, in this episode, we interview Michael Kennedy, author of "Talk Python in Production," a new book that guides you through deploying web apps at scale with right-sized engineering.

Episode sponsors:

- Seer: AI Debugging (code TALKPYTHON): talkpython.fm/seer-code-review
- Agntcy: talkpython.fm/agntcy
- Talk Python Courses: talkpython.fm/training

Links from the show:

- Christopher Trudeau, guest host: www.linkedin.com/in/christopherltrudeau/
- Michael's personal site: mkennedy.codes
- Talk Python in Production book: talkpython.fm/books/python-in-production
- glances: github.com/nicolargo/glances
- btop: github.com/aristocratos/btop
- Uptimekuma: uptimekuma.org
- Coolify: coolify.io
- Talk Python Blog: talkpython.fm/blog/
- Hetzner (€20 credit with link): hetzner.cloud/?ref=UQMdSwUenwRE
- OpalStack: www.opalstack.com
- Bunny.net CDN: bunny.net/cdn/
- Galleries from the book: github.com/mikeckennedy/talk-python-in-production-devops-book/tree/main/galleries
- Pandoc: pandoc.org
- Docker: www.docker.com
- Watch this episode on YouTube: www.youtube.com/watch?v=TTbvmC01YvI
- Episode #531 deep-dive: talkpython.fm/531
- Episode transcripts: talkpython.fm/episodes/transcript/531/talk-python-in-production

December 18, 2025 08:00 AM UTC


Seth Michael Larson

Delta emulator adds support for SEGA Genesis games

The Delta emulator which I've used for mobile retro-gaming in the past has added beta support for SEGA Genesis and Master System games! Riley and Shane made the announcement through the Delta emulator Patreon and also on Mastodon.

You can install the emulator on iOS through the “TestFlight” application to get access right away. I've done so and tested many of my favorite games, including the Sonic the Hedgehog and Streets of Rage trilogies, and found that the emulator handled them flawlessly.


Delta emulator loaded with SEGA Genesis ROMs

The addition of SEGA Genesis support in Delta is quite exciting for me as the Genesis was my first console. I've amassed quite the collection of SEGA Genesis ROMs from the Sonic Mega Collection on the GameCube and the SEGA Classics collection previously available on Steam. Now I can play any of these games on the go, but I'll probably need to buy a simple Bluetooth controller with a D-Pad for the hand ergonomics.

Unrelatedly: did you know that the AltStore is connected to the Fediverse now? Pretty cool stuff.

Have you tried the Delta emulator, or did you grow up playing SEGA Genesis games like me? Let me know your favorite game from this era!


Playing Sonic the Hedgehog 3 & Knuckles using “LOCK-ON Technology”



Thanks for keeping RSS alive! ♥

December 18, 2025 12:00 AM UTC

December 17, 2025


Sebastian Pölsterl

scikit-survival 0.26.0 released

I am pleased to announce that scikit-survival 0.26.0 has been released.

This is a maintenance release that adds support for Python 3.14 and includes updates to make scikit-survival compatible with new versions of pandas and osqp. It adds support for the pandas string dtype and for copy-on-write, which is going to become the default in pandas 3. In addition, sksurv.preprocessing.OneHotEncoder now supports converting columns with the object dtype.

With this release, the minimum supported versions are:

Package   Minimum Version
Python    3.11
pandas    2.0.0
osqp      1.0.2

Install

scikit-survival is available for Linux, macOS, and Windows and can be installed either

via pip:

pip install scikit-survival

or via conda:

conda install -c conda-forge scikit-survival

December 17, 2025 08:26 PM UTC


PyCharm

The Islands theme is now the default look across JetBrains IDEs starting with version 2025.3.
This update is more than a visual refresh. It’s our commitment to creating a soft, balanced environment designed to support focus and comfort throughout your workflow.

We began introducing the new theme earlier this year, gathering feedback, conducting research, and testing it hands-on with developers who use our IDEs every day.

The result is a modern, refined design shaped by real workflows and real feedback. It’s still the IDE you know, just softer, lighter, and more cohesive. 

Let’s take a closer look. Literally.

Softer, clearer, and easier on the eyes

The Islands theme introduces a clean, uncluttered layout with rounded corners and balanced spacing, making the UI feel softer and easier on the eyes. We’ve also made tool window borders more distinct, making it easier to resize elements and adjust the workspace to your liking.

“It’s a modern feel. The radius on the borders and more distinctive layers bring a fresh feeling to the UI.”

Instant tab recognition

When working with multiple files, finding your active tab should never slow you down. The Islands theme improves tab recognition, making the active one clearly visible and easier to spot at a glance. 

“The active tab is very obvious, which is really nice.”

Organized spaces for focus support

The new design introduces a clear separation between working areas, giving each part of the IDE – the editor, tool windows, and panels – its own visual space. This layout feels more organized and easier to navigate, helping you move around the IDE without losing focus or pace.

If you want even clearer visual emphasis on the editor, you can enable the Different tool window background option in Settings | Appearance under the Islands theme settings.

This is what we wanted to share about the new Islands theme, now the default look across all JetBrains IDEs. This thoughtful visual update, shaped by feedback from daily users and aligned with the latest design directions in macOS and Windows 11, offers a softer, clearer, and more comfortable environment. And we believe this helps you stay productive and focused on what matters most – your code.

December 17, 2025 07:41 PM UTC


Real Python

How to Build the Python Skills That Get You Hired

When you’re learning Python, the sheer volume of topics to explore can feel overwhelming because there’s so much you could focus on. Should you dive into web frameworks before exploring data science? Is test-driven development something you need right away? And which skills actually matter to employers in the age of AI-assisted software development?

By the end of this tutorial, you’ll have:

  • A clear understanding of which Python skills employers consistently look for
  • A personalized Python developer roadmap showing where you are and where you need to go
  • A weekly practice plan that makes consistent progress feel achievable

Python itself is relatively beginner-friendly, but its versatility makes it easy to wander without direction. Without a clear plan, you can spend months studying topics that won’t help you land your first developer job.

This guide will show you how to build a focused learning strategy that aligns with real job market demands. You’ll learn how to research what employers value, assess your current strengths and gaps, and structure a practice routine that turns scattered study sessions into steady progress.

Instead of guessing what to learn next, you’ll have a concrete document that shows you exactly where to focus:

The Python skills worksheet as a table with one row filled out and a link showing on hover

Work through this tutorial to identify the skills you need and set yourself up for success.

Get Your Downloads: Click here to download the free materials that will help you build the Python skills that get you hired.

Step 1: Identify the Python Skills Employers Value Most

Before you dive into another tutorial or framework, you need to understand what the job market actually rewards. Most Python learners make the mistake of studying everything that sounds interesting. You’ll make faster progress by focusing on the skills that appear in job posting after job posting.

Research Real Job Requirements

Start by opening five to ten current job listings for Python-related positions. Look for titles like Python Developer, Backend Engineer, Data Analyst, or Machine Learning Engineer on sites like Indeed, Stack Overflow Jobs, and LinkedIn. As you read through these postings, highlight the technical requirements that appear repeatedly. You’ll quickly start to notice patterns.

To illustrate, consider a few examples of different roles involving Python:

Despite these differences, nearly every job posting shares a common core. Employers want developers who understand Python fundamentals deeply. They should also be able to use version control with Git, write unit tests for their code, and debug problems systematically. Familiarity with DevOps practices and cloud platforms is often a plus. These professional practices matter as much as knowing any specific framework.

Increasingly, job postings also expect familiarity with AI coding tools like GitHub Copilot, Gemini CLI, Cursor, or Claude Code. Employers want developers who can use these tools productively while maintaining the judgment to review and validate AI-generated code.

Note: With AI tools handling more routine coding tasks, employers increasingly value developers who can think at the system level.

Understanding how components fit together, how to design scalable architectures, and how to make sound trade-offs between approaches matters more than ever. These system design skills are harder to outsource to AI because they require judgment about business requirements, user needs, and long-term maintainability.

Your informal survey will reflect what large-scale research confirms. The Stack Overflow Developer Survey ranks Python as one of the most widely used programming languages across all professional roles. The survey also reveals that Python appears in diverse fields, including finance, healthcare, education, and scientific research.

This trend is echoed by the TIOBE Index, a monthly ranking of programming language popularity, where Python consistently appears at or near the top:

TIOBE Index

Similarly, LinkedIn’s Workplace Learning Report 2023 named Python as one of the most in-demand technical skills globally. Python’s versatility means that mastering its fundamentals opens doors across multiple career paths.

Understand Different Developer Paths

Python is a phenomenally versatile language. On the one hand, school teachers choose it to help their pupils learn how to program, often starting with fun, visual tools like the built-in turtle graphics module. At the same time, Python runs major platforms like Instagram, plays a role in powering large services such as YouTube, and supports the development of generative AI models. It even once helped control the helicopter flying on Mars!

Note: Check out What Can I Do With Python? to discover how Python helps build software, power AI, automate tasks, drive robotics, and more.

Read the full article at https://realpython.com/python-skills/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 17, 2025 02:00 PM UTC


Python Morsels

Embrace whitespace

Well placed spaces and line breaks can greatly improve the readability of your Python code.

Table of contents

  1. Whitespace around operators
  2. Auto-formatters: both heroes and villains
  3. Using line breaks for implicit line continuation
  4. Separating sections with blank lines
  5. The whitespace is for us, not for Python
  6. Consider ruff, black, and other auto-formatters
  7. Whitespace is all about visual grouping

Whitespace around operators

Compare this:

result = a**2+b**2+c**2

To this:

result = a**2 + b**2 + c**2

I find the second one more readable because the operations we're performing are more obvious (as is the order of operations).

Too much whitespace can hurt readability though:

result = a ** 2 + b ** 2 + c ** 2

This seems like a step backward because we've lost those three groups we had before.

With both typography and visual design, more whitespace isn't always better.
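PEP 8 makes a similar suggestion: use less whitespace around the higher-priority operators to hint at grouping. A small self-contained illustration (the variable names here are mine, not from the article):

```python
x, y = 3.0, 4.0

# Tighter spacing around ** and * visually groups the higher-priority operations
hypot2 = x*x + y*y

# When all operators have the same priority, a single space around each works well
c = (x + y) * (x - y)

print(hypot2, c)  # prints 25.0 -7.0
```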

Auto-formatters: both heroes and villains

If you use an auto-formatter …

Read the full article: https://www.pythonmorsels.com/embrace-whitespace/

December 17, 2025 12:00 AM UTC


Armin Ronacher

What Actually Is Claude Code’s Plan Mode?

I’ve mentioned this a few times now, but when I started using Claude it was because Peter got me hooked on it. From the very beginning I became a religious user of what is colloquially called YOLO mode, which basically gives the agent all the permissions so I can just watch it do its stuff.

One consequence of YOLO mode though is that it didn’t work well together with the plan mode that Claude Code had. In the beginning it didn’t inherit all the tool permissions, so in plan mode it actually asked for approval all the time. I found this annoying and as a result I never really used plan mode.

Since I haven’t been using it, I ended up with other approaches. I’ve talked about this before, but it’s a version of iterating together with the agent on creating a handoff in the form of a markdown file. My approach has been getting the agent to ask me clarifying questions, taking these questions into an editor, answering them, and then doing a bunch of iterations until I’m decently happy with the end result.

That has been my approach, and I thought it was pretty popular these days. For instance, Mario’s pi, which I also use, does not have a plan mode, and Amp is removing theirs.

However today I had two interesting conversations with people who really like plan mode. As a non-user of plan mode, I wanted to understand how it works. So I specifically looked at the Claude Code implementation to understand what it does, how it prompts the agent, and how it steers the client. I wanted to use the tool loop just to get a better understanding of what I’m missing out on.

This post is basically just what I found out about how it works, and maybe it’s useful to someone who also does not use plan mode and wants to know what it actually does.

Plan Mode in Claude Code

First we need to agree on what a plan is in Claude Code. A plan in Claude Code is effectively a markdown file that Claude writes into its plans folder while in plan mode. The generated plan doesn’t have any extra structure beyond text. So at least up to that point, there really is not much of a difference between you asking it to write a markdown file and it creating its own internal markdown file.

There are, however, some other major differences. One is that there are recurring prompts reminding the agent that it’s in read-only mode. The tools for writing files through the agent’s built-in tools are actually still there. A little state machine governs entering and exiting plan mode. Interestingly, it seems like the edit file tool is actually used to manipulate the plan file. So the agent is seemingly editing its own plan file!

Because plan mode is also a tool (or at least the entering and exiting plan mode is), the agent can enter it itself. This has the same effect as if you were to press shift+tab. 1

To encourage the agent to write the plan file, there is a custom prompt injected when you enter it. There is no other enforcement from what I can tell. Other agents might do this differently.

When exiting plan mode it will read the plan file that it wrote to disk and then start working off that. So the path towards spec in the prompt always goes via the file system.

Can You Plan Mode Without Plan Mode?

This obviously raises the question: if the differences are not that significant and it is just “the prompt” and some workflow around it, how much would you have to write into the prompt yourself to get very similar behavior to what the plan mode in Claude Code does?

From a user experience point of view, you basically get two things.

  1. You get a markdown file, but you never get to see it because it’s hidden away in a folder. I would argue that putting the plan into a specific file has some benefits, because you can edit it.
  2. However, there is one thing you can’t really replicate: at the end, plan mode presents an approval prompt to the user. You cannot bring that user interface up trivially, because there is no way to trigger it without going through the exit-plan-mode flow, which requires the file to be in a specific location.

But if we ignore those parts and say that we just want similar behavior to what plan mode does from prompting alone, how much prompt do we have to write? What specifically is the delta of entering plan mode versus just writing stuff into the context manually?

The Prompt Differences

When entering plan mode a bunch of stuff is thrown into the context in addition to the system prompt. I don’t want to give the entire prompt here verbatim because it’s a little bit boring, but I want to break it down by roughly what it sends.

The first thing it sends is general information that is now in plan mode which is read-only:

Plan mode is active. The user indicated that they do not want you to execute yet — you MUST NOT make any edits (with the exception of the plan file mentioned below), run any non-readonly tools (including changing configs or making commits), or otherwise make any changes to the system. This supercedes any other instructions you have received.

Then there’s a little bit of stuff about how it should read and edit the plan mode file, but this is mostly just to ensure that it doesn’t create new plan files. Then it sets up workflow suggestions of how plans should be structured:

Phase 1: Initial Understanding

Goal: Gain a comprehensive understanding of the user’s request by reading through code and asking them questions.

  1. Focus on understanding the user’s request and the code associated with their request

  2. (Instructions here about parallelism for tasks)

Phase 2: Design

Goal: Design an implementation approach.

(Some tool instructions)

In the agent prompt:

  • Provide comprehensive background context from Phase 1 exploration including filenames and code path traces
  • Describe requirements and constraints
  • Request a detailed implementation plan

Phase 3: Review

Goal: Review the plan(s) from Phase 2 and ensure alignment with the user’s intentions.

  1. Read the critical files identified by agents to deepen your understanding
  2. Ensure that the plans align with the user’s original request
  3. Use TOOL_NAME to clarify any remaining questions with the user

Phase 4: Final Plan

Goal: Write your final plan to the plan file (the only file you can edit).

  • Include only your recommended approach, not all alternatives
  • Ensure that the plan file is concise enough to scan quickly, but detailed enough to execute effectively
  • Include the paths of critical files to be modified

I actually thought that there would be more to the prompt than this. In particular, I was initially under the assumption that the tools actually turn into read-only. But it is just prompt reinforcement that changes the behavior of the tools and also which tools are available. It is in fact just a rather short predefined prompt that enters plan mode. The tool to enter or exit plan mode is always available, and the same is true for edit and read files. The exiting of the plan mode tool has a description that instructs the agent to understand when it’s done planning:

Use this tool when you are in plan mode and have finished writing your plan to the plan file and are ready for user approval.

How This Tool Works

  • You should have already written your plan to the plan file specified in the plan mode system message
  • This tool does NOT take the plan content as a parameter - it will read the plan from the file you wrote
  • This tool simply signals that you’re done planning and ready for the user to review and approve
  • The user will see the contents of your plan file when they review it

When to Use This Tool IMPORTANT: Only use this tool when the task requires

planning the implementation steps of a task that requires writing code. For research tasks where you’re gathering information, searching files, reading files or in general trying to understand the codebase - do NOT use this tool.

Handling Ambiguity in Plans Before using this tool, ensure your plan is

clear and unambiguous. If there are multiple valid approaches or unclear requirements

So the system prompt is the same. It is just a little bit of extra verbiage with some UX around it. Given the length of the prompt, you can probably have a slash-command that just copy/pastes a version of this prompt into the context, but you will not get the UX around it.
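As a rough sketch of that slash-command idea: Claude Code picks up custom commands from markdown files in the .claude/commands folder, so you could seed one with a plan-mode-like prompt. The prompt text below is my own paraphrase of the behavior described in this post, not the verbatim plan-mode prompt:

```python
from pathlib import Path

# Hypothetical: create a custom /plan command for Claude Code.
# The prompt is a hand-written approximation of plan mode's instructions.
commands_dir = Path(".claude/commands")
commands_dir.mkdir(parents=True, exist_ok=True)

plan_prompt = (
    "You are in a read-only planning phase. Do not edit files or run\n"
    "non-readonly tools. Explore the code, ask clarifying questions,\n"
    "then write a concise implementation plan to PLAN.md, including\n"
    "the paths of the critical files to be modified.\n"
)
(commands_dir / "plan.md").write_text(plan_prompt)
```

The upside, as argued above, is that the resulting PLAN.md is a visible file you can read and edit; the downside is that you don't get the approval prompt UX.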

The thing I took from this prompt is the recommendations about how to use subtasks, plus some examples. I’m actually not sure whether that has a meaningful impact, because at least in the limited testing that I did, I don’t observe much of a difference between how plan mode invokes tools and how regular execution invokes them, but it’s quite possible that this comes down to my prompting style.

Why Does It Matter?

So you might ask why I even write about plan mode. The main motivation is that I am always quite interested in where the user experience in an agentic tool has to be enforced by the harness versus when that user experience comes naturally from the model.

Plan mode as it exists in Claude has a certain weirdness in my mind: it doesn’t come quite naturally to me. It might come naturally to others! But why can I not just ask the model to plan with me? Why do I have to switch the user interface into a different mode? Plan mode is just one of many examples where I think that, because we are already so used to writing or talking to machines, bringing more complexity into the user interface takes away some of the magic. I always want to look into whether just working with the model can accomplish something similar enough that I don’t actually need another user interaction, or a user interface that replicates something natural language could do.

This is particularly true because my workflow involves wanting to double check what these plans are, to edit them, and to manipulate them. I feel like I’m more in control of that experience if I have a file on disk somewhere that I can see, that I can read, that I can review, that I can edit before actually acting on it. The Claude integrated user experience is just a little bit too far away from me to feel natural. I understand that other people might have different opinions on this, but for me that experience really was triggered by the thought that if people have such a great experience with plan mode, I want to understand what I’m missing out on.

And now I know: I’m mostly a custom prompt to give it structure, and some system reminders and a handful of examples.

  1. This, incidentally, is also why it’s possible for the plan mode confirmation screen to come up, unprompted, with an error message that there is no plan.

December 17, 2025 12:00 AM UTC

December 16, 2025


PyCoder’s Weekly

Issue #713: Deprecations, Compression, Functional Programming, and More (Dec. 16, 2025)

#713 – DECEMBER 16, 2025
View in Browser »

The PyCoder’s Weekly Logo


Deprecations via Warnings Don’t Work for Libraries

Although a DeprecationWarning had been in place for 3 years and the documentation contained warnings, the recent removal of API end points in urllib3 v2.6 caused consternation. Seth examines why the information didn’t properly make its way downstream and what we might do about it in the future.
SETH LARSON

Module Compression Overview

A high-level overview of how to use the compression module, which is the new location for compression libraries in Python 3.14, and where the new zstd compression algorithm can be found.
RODRIGO GIRÃO SERRÃO
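As a quick sketch of what the relocation means in practice (version-guarded, since the compression package only exists on 3.14+; on older interpreters the same round-trip is shown with zlib instead):

```python
import sys

if sys.version_info >= (3, 14):
    # Python 3.14+: compression.zstd is the new home for Zstandard support
    from compression import zstd as codec
else:
    # On older versions, fall back to a long-standing stdlib codec
    # that exposes the same compress()/decompress() round-trip
    import zlib as codec

blob = codec.compress(b"a namespace, not just one algorithm")
assert codec.decompress(blob) == b"a namespace, not just one algorithm"
```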

10 Docker Containers on Local to Test a 1-line Change


Run one local service and connect it to your shared K8s cluster. No more mocks, no more docker-compose. Just fast, high-fidelity testing in a prod-like env. Read the docs →
SIGNADOT, INC. sponsor

Using Functional Programming in Python

Boost your Python skills with a quick dive into functional programming: what it is, how Python supports it, and why it matters.
REAL PYTHON course

pandas 3.0.0rc0 Released

GITHUB.COM/PANDAS-DEV

PEP 816: WASI Support (Draft)

PYTHON.ORG

DjangoCon Europe 2026 Call for Proposals

DJANGOCON.EU

Articles & Tutorials

Use Python for Scripting!

“Use the right tool” is nice in theory, but not when the tool acts a bit differently from machine to machine, and isn’t always installed. This post suggests using Python instead of shell scripting, especially when you need cross-OS compatibility.
JEAN NIKLAS L’ORANGE

Django 6.0 With Natalia Bidart

The Django Chat podcast interviews Natalia Bidart, a Django Fellow and the release manager for Django 6.0. They talk about the major features including template partials, queues, CSP support, modern email API, and the current work on Django 6.1.
DJANGO CHAT podcast

A “Frozen” Dictionary for Python

A frozen dictionary would disallow any changes to it. An immutable dictionary type could help with performance in certain situations. This article discusses the proposed change to Python.
JAKE EDGE

Estimates: A Necessary Evil?

Developers may hate doing estimates, but without them organizations run into problems prioritizing and communicating to clients. Read on to learn why estimates may be a necessary evil.
ERIK THORSELL

Millions of Locations for Thousands of Brands

“All The Places” is a site built in Python that scrapes the web for location information from thousands of brands’ websites. This post explores some of the data you can find there.
MARK LITWINTSCHIK

30 Things I’ve Learned From 30 Years as a Python Freelancer

Reuven has been freelancing for a long time, including both working and teaching Python and pandas. This post summarizes some of the key things he’s learned in the last 30 years.
REUVEN LERNER

The Rise and Rise of FastAPI

FastAPI has rapidly become the #1 most-starred backend framework on GitHub. This mini-documentary interviews one of its creators, Sebastián Ramirez.
CULTREPO video

Automate Python Package Releases

This post describes how Kevin has automated the creation of new releases for his Python packages updating both PyPI and GitHub.
KEVIN RENSKERS

Python Inner Functions: What Are They Good For?

Learn how to create inner functions in Python to access nonlocal names, build stateful closures, and create decorators.
REAL PYTHON

Quiz: Python Inner Functions: What Are They Good For?

REAL PYTHON

Publish an EPUB Book With Jupyter Book

This quick TIL article shows you how to configure a Jupyter Book to produce the EPUB format.
RODRIGO GIRÃO SERRÃO

Projects & Code

pyarud: Arabic Poetry Analysis

GITHUB.COM/CNEMRI

python-injection: Dependency Injection Framework

GITHUB.COM/100NM

Browse PyPI by Package Type

STACKTCO.COM • Shared by Matthias Wiemann

django-generic-notifications: Multi-Channel Notification

GITHUB.COM/LOOPWERK

react-router-routes: Python Helpers From a React Router

GITHUB.COM/ILOVEITALY

Events

Weekly Real Python Office Hours Q&A (Virtual)

December 17, 2025
REALPYTHON.COM

PyData Bristol Meetup

December 18, 2025
MEETUP.COM

PyLadies Dublin

December 18, 2025
PYLADIES.COM

Chattanooga Python User Group

December 19 to December 20, 2025
MEETUP.COM

PyKla Monthly Meetup

December 24, 2025
MEETUP.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #713.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

December 16, 2025 07:30 PM UTC


Real Python

Exploring Asynchronous Iterators and Iterables

When you write asynchronous code in Python, you’ll likely need to create asynchronous iterators and iterables at some point. Asynchronous iterators are what Python uses to control async for loops, while asynchronous iterables are objects that you can iterate over using async for loops.

Both tools allow you to iterate over awaitable objects without blocking your code. This way, you can perform different tasks asynchronously.
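As a minimal sketch of the protocol described above (the Countdown class here is my own illustration, not taken from the course): an object is an async iterator when it implements __aiter__ and an awaitable __anext__ that raises StopAsyncIteration when exhausted.

```python
import asyncio

class Countdown:
    """A minimal asynchronous iterator: counts down without blocking."""

    def __init__(self, start):
        self.current = start

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.current <= 0:
            raise StopAsyncIteration  # signals the end of an async for loop
        await asyncio.sleep(0)  # cooperatively yield control to the event loop
        self.current -= 1
        return self.current + 1

async def collect():
    # async for drives __anext__ until StopAsyncIteration
    return [n async for n in Countdown(3)]

print(asyncio.run(collect()))  # prints [3, 2, 1]
```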

In this video course, you’ll:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 16, 2025 02:00 PM UTC


Caktus Consulting Group

PydanticAI Agents Intro

In previous posts, we explored function calling and how it enables models to interact with external tools. However, manually defining schemas and managing the request/response loop can get tedious as an application grows. Agent frameworks can help here.
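As a reminder of the manual approach the post refers to, a hand-written tool schema and dispatcher might look like this (the shape follows the common OpenAI-style function-calling convention; the tool name and dispatch logic are illustrative, not from the post):

```python
# A hand-maintained tool schema: the kind of boilerplate agent frameworks absorb.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    # Manually routing a model's tool call back to a Python function.
    if tool_call["name"] == "get_weather":
        return f"Sunny in {tool_call['arguments']['city']}"
    raise ValueError(f"unknown tool: {tool_call['name']}")

print(dispatch({"name": "get_weather", "arguments": {"city": "Durham"}}))
```

Keeping schemas like this in sync with the functions they describe is exactly the tedium that frameworks such as PydanticAI aim to remove.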

December 16, 2025 01:00 PM UTC


Tryton News

Tryton Release 7.8

We are proud to announce the 7.8 release of Tryton.
This release provides many bug fixes, performance improvements and some fine tuning.
You can give it a try on the demo server, use the docker image or download it here.
As usual upgrading from previous series is fully supported.

Here is a list of the most noticeable changes:

Changes for the User

Client

We added a drop-down menu to the client containing the user’s notifications. When a user clicks a notification, it is marked as read for that user.
We also implemented an unread counter in the client, and a notification pop-up is raised when a new notification is sent by the server.

Now users can subscribe to a chat of documents by toggling the notification bell-icon.
The chat feature has been activated to many documents like sales, purchases and invoices.

Buttons that are executed on a selection of records are now displayed at the bottom of lists.

We implemented an easier way to search for empty relation fields:
The query Warehouse: = now returns records without a warehouse, instead of the former result of records whose warehouse has an empty name. The former result can still be obtained with the query "Warehouse.Record Name": =.

When exporting Many2One and Reference fields to CSV, we replaced the internal ID with the record name. The export of One2Many and Many2Many fields now uses a list of record names.

We also made it possible to import One2Many field content using a list of names (as for Many2Many fields).

Web

Keyboard shortcuts now also work in modals.

Server

Scheduled tasks can now generate user notifications.
Each user can subscribe to be notified by the scheduled tasks that generate notifications; these notifications appear in the client drop-down.

Accounting

On supplier invoices, it is now possible to set a payment reference and to validate it. By default, the Creditor Reference format is supported. On customer invoices, Tryton generates a payment reference automatically, using the Creditor Reference format by default and the structured communication for Belgian customers. The payment reference can be validated for defined formats like the Creditor Reference, and it can be used in payment rules.

Now we support the Belgian structured communication on invoices, payments and statement rules. And with this the reconciliation process can be automated.

When marking a group of payments as succeeded, Tryton now asks for the clearing date instead of just using today.

Now we store the address of the party in the SEPA mandate instead of using just the first party address.

We now added a button on the accounting category to add or remove multiple products easily.

Customs

Now we support customs agents. They define a party to whom the company is delegating the customs between two countries.

Incoterm

We now added also the old version of Incoterms 2000 because some companies and services are still using it.

Now we allow the modification of the incoterms on the customer shipment as long as it has not yet been shipped.

Product

We now make the list of variants for a product sortable. This is useful for e-commerce if you want to put a specific variant in front.

Now it is possible to set a different list price and gross price per variant without the need for a custom module.

We now made the volume and weight usable in price list formulas. This is useful to include taxes based on such criteria.

Production

Now we made it possible to define phantom bill-of-materials (BOM) to group common inputs or outputs for different BOMs. When used in a production, the phantom BOM is replaced by its corresponding materials.

We now made it possible to define a production as a disassembly. In this case the calculation from the BOM is inverted.

Purchasing

The create purchase wizard can no longer be run on purchase requests that have already been purchased.

Likewise, the create quotation wizard can no longer be run on purchase requests when it is no longer possible to create quotations for them.

It is now possible to create a new quotation for a purchase request that has already received one.

The client now opens the quotations created by the wizard.

We fine-tuned the supply system: When no supplier can supply on time, the system will now choose the fastest supplier.

Sales

Now we made it possible to encode refunding payments on the sale order.

We now allow grouping the invoices created for a sale rental with the invoices created for sale orders.

In the sale subscription lines we now implemented a summary column similar to sales.

Stock

We added two new stock reports that calculate the inventory and turnover of the stock. This is useful to optimize and fine-tune the order points.

Now we added the support for international shipping to the shipping services: DPD, Sendcloud and UPS.

Tryton now generates a default shipping description based on the customs categories of the shipped goods (with a fallback to “General Merchandise” for UPS). This is useful for international shipping.

We now implemented an un-split functionality to correct erroneous split moves.

Drop shipments in the done state can now be cancelled, as with the other shipment types.

Web Shop

A default Incoterm can now be defined per web shop, to be set on the sale orders.

Now we added a status URL to the sales coming from a web shop.

We now added the URL to each product that is published in a web shop.

We added a button on sales coming from the web shop to force an update from the web shop.

We made many improvements to extend our Shopify support.

New Modules

EDocument Peppol

The EDocument Peppol Module provides the foundation for sending and receiving
electronic documents on the Peppol network.

EDocument Peppol Peppyrus

The EDocument Peppol Peppyrus Module allows sending and receiving electronic
documents on the Peppol network thanks to the free Peppyrus service.

EDocument UBL

The EDocument UBL Module adds support for electronic documents in the UBL format.

Sale Rental

The Sale Rental Module manages rental orders.

Sale Rental Progress Invoice

The Sale Rental Progress Invoice Module allows creating progress invoices for
rental orders.

Stock Shipment Customs

The Stock Shipment Customs Module enables the generation of commercial
invoices for both customer and supplier return shipments.

Stock Shipping Point

The Stock Shipping Point Module adds a shipping point to shipments.

Changes for the System Administrator

Server

The server now streams JSON and gzip responses to reduce memory consumption.
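
The general technique can be sketched with the standard library: instead of serializing and compressing the whole payload in memory, fragments are compressed and yielded one by one. This is a minimal illustration, not Tryton's actual code:

```python
import json
import zlib

def stream_gzip_json(items):
    """Yield a gzip-compressed JSON array chunk by chunk.

    Illustrative sketch: the serializer emits JSON fragments and an
    incremental compressor flushes them, so memory usage stays bounded
    by the chunk size rather than the full response.
    """
    # wbits=31 makes zlib produce a gzip-compatible stream.
    compressor = zlib.compressobj(wbits=31)
    yield compressor.compress(b"[")
    for i, item in enumerate(items):
        prefix = b"," if i else b""
        yield compressor.compress(prefix + json.dumps(item).encode())
    yield compressor.compress(b"]")
    yield compressor.flush()
```

Concatenating the yielded chunks and decompressing them reproduces the full JSON document.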

trytond-console gains an option to execute a script from a file.

We replaced the [cron] clean_days configuration option with [cron] log_size. The storage of scheduled-task logs now depends only on their size and no longer on the task frequency.

The login process now sends the URL of the bus host, so clients no longer need to rely on the browser to manage the redirection, which wasn’t working on recent browsers anyway.

Login sessions are now valid only for the IP address of the client that generated them. This strengthens security against session leaks.
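
The idea behind IP-bound sessions can be sketched as follows; the class and method names are illustrative, not Tryton's API:

```python
import secrets

class SessionStore:
    """Sketch of IP-bound sessions: a key is only accepted when it is
    presented from the same client address that created it, limiting
    the damage of a leaked session key."""

    def __init__(self):
        self._sessions = {}

    def create(self, user, client_ip):
        # Generate an unguessable session key bound to the client address.
        key = secrets.token_hex(16)
        self._sessions[key] = (user, client_ip)
        return key

    def validate(self, key, client_ip):
        entry = self._sessions.get(key)
        # Reject unknown keys and keys replayed from another address.
        return entry is not None and entry[1] == client_ip
```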

The server now sets a Message-Id header on all sent emails.

Product

We added a timestamp parameter to the URLs of product images. This allows forcing a refresh of stale cached images.
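
The cache-busting technique behind this looks roughly like the following sketch; the parameter name `t` is an assumption for illustration:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def cache_busted(url, timestamp):
    """Append a timestamp query parameter to a URL.

    Illustrative sketch: changing the URL whenever the image changes
    forces browsers and proxies to re-fetch it instead of serving a
    stale cached copy.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["t"] = str(timestamp)  # hypothetical parameter name
    return urlunparse(parts._replace(query=urlencode(query)))
```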

Web Shop

We added routes to open products, variants, customers, and orders using their Shopify ID. This can be used to customize the admin UI to add a direct link to Tryton.

Changes for the Developer

Server

This release introduces notifications. Their messages are sent to the user via the bus as soon as they are created. They can be linked to a set of records or to an action that is opened when the user clicks on the notification.

It is now possible to configure a ModelSQL based on a table_query to be materialized. The configuration defines the interval at which the data must be refreshed, and a wizard lets the user force a refresh.
This is useful for optimizing queries whose data does not need to be exactly fresh but that could benefit from indexes.
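
The materialization idea can be illustrated with plain SQLite (the table and column names are made up for the example): the result of an expensive query is stored in a real, indexable table and rebuilt on demand rather than recomputed on every read.

```python
import sqlite3

# Illustrative base data: individual stock moves.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE move (product TEXT, quantity INTEGER)")
conn.executemany(
    "INSERT INTO move VALUES (?, ?)",
    [("apple", 3), ("apple", 2), ("pear", 7)],
)

def refresh_summary(conn):
    """Rebuild the materialized table from the underlying query."""
    conn.execute("DROP TABLE IF EXISTS move_summary")
    conn.execute(
        "CREATE TABLE move_summary AS "
        "SELECT product, SUM(quantity) AS total FROM move GROUP BY product"
    )
    # Indexing the materialized data is the point of the optimization:
    # a plain view over the query could not carry this index.
    conn.execute("CREATE INDEX idx_summary_product ON move_summary (product)")

refresh_summary(conn)
```

In Tryton, the refresh would happen at the configured interval or via the wizard; here it is just a function call.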

Models, wizards, and reports are now registered in the module's tryton.cfg file. This reduces the server's memory consumption: it no longer needs to import all installed modules, only the activated ones.
This is also a first step toward supporting typing with Tryton's modular design.

We added the multiple attribute to the <button> on tree views. When set, the button is shown at the bottom of the view.

We implemented the declaration of read-only wizards. Such wizards execute in a read-only transaction, so write access on the records is not needed.

The MemoryCache now stores only immutable structures. This prevents alteration of the cached data.
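
The general technique can be sketched as a recursive freeze helper (an illustration, not Tryton's exact code): values are converted to immutable counterparts before entering the cache, so no caller can mutate what others will read.

```python
from types import MappingProxyType

def freeze(value):
    """Recursively convert mutable containers into immutable ones.

    Illustrative sketch: dicts become read-only mapping proxies, lists
    become tuples, and sets become frozensets; scalars pass through.
    """
    if isinstance(value, dict):
        return MappingProxyType({k: freeze(v) for k, v in value.items()})
    if isinstance(value, (list, tuple)):
        return tuple(freeze(v) for v in value)
    if isinstance(value, set):
        return frozenset(freeze(v) for v in value)
    return value
```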

We added a new method to the Database to clear its cached properties. This is useful when writing tests that alter those properties.

We now use the SQL FILTER syntax for aggregate functions.
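
As a quick illustration of the FILTER clause, with a made-up schema and plain SQLite (version 3.30 or later): each aggregate counts only the rows matching its own predicate.

```python
import sqlite3

# Illustrative data: invoices in different states.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (state TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO invoice VALUES (?, ?)",
    [("posted", 100), ("posted", 50), ("draft", 30)],
)
# One pass over the table, one aggregate per predicate.
row = conn.execute(
    "SELECT "
    "  SUM(amount) FILTER (WHERE state = 'posted') AS posted_total, "
    "  SUM(amount) FILTER (WHERE state = 'draft') AS draft_total "
    "FROM invoice"
).fetchone()
```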

We now use the SQL EXISTS operator when searching Many2One fields with the where domain operator.
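
A minimal illustration of the EXISTS pattern, again with a made-up schema: parents are selected when a correlated subquery finds at least one matching child row.

```python
import sqlite3

# Illustrative data: parties and the sales that reference them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE party (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sale (id INTEGER PRIMARY KEY, party INTEGER, total INTEGER);
    INSERT INTO party VALUES (1, 'Ada'), (2, 'Ben');
    INSERT INTO sale VALUES (10, 1, 500);
""")
# EXISTS stops at the first matching row and needs no DISTINCT,
# unlike an equivalent JOIN-based search.
names = [name for (name,) in conn.execute(
    "SELECT name FROM party p "
    "WHERE EXISTS (SELECT 1 FROM sale s WHERE s.party = p.id)"
)]
```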

We introduced the trytond.model.sequence_reorder method, which updates the sequence field according to the current order of a record list.
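
The behavior can be sketched as follows: a simplified stand-in working on dictionaries, since the real method operates on Tryton records and its signature may differ.

```python
def sequence_reorder(records, start=1, step=1):
    """Rewrite each record's sequence field to match its list position.

    Illustrative sketch with hypothetical start/step parameters: after
    the call, iterating the list in order yields strictly increasing
    sequence values.
    """
    for position, record in enumerate(records):
        record["sequence"] = start + position * step
    return records
```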

We refactored trytond.config to add a cache. It is no longer necessary to copy the configuration into a global variable to avoid performance degradation.

We removed the has_window_functions function from the Database, because window functions are supported by all supported databases.

We added pair and unpair methods to trytond.tools, which are equivalent Python implementations of sql_pairing.
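
As an illustration of what such a pairing does, here is a classic pairing function (Szudzik's); Tryton's actual sql_pairing formula may differ, but the contract is the same: two non-negative integers map reversibly to one.

```python
from math import isqrt

def pair(x, y):
    # Szudzik's elegant pairing: each (x, y) with x, y >= 0 maps to a
    # unique non-negative integer. (Illustrative; not necessarily the
    # exact formula Tryton uses.)
    return x * x + x + y if x >= y else y * y + x

def unpair(z):
    # Invert pair(): the integer square root identifies the "shell"
    # z sits on, and the remainder recovers the original coordinates.
    root = isqrt(z)
    rest = z - root * root
    if rest < root:
        return rest, root
    return root, rest - root
```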

Proteus

We implemented support for total ordering in the Proteus Model.
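
Total ordering is typically provided by defining __eq__ and __lt__ and deriving the remaining comparisons, as in this illustrative sketch (not Proteus's actual code):

```python
from functools import total_ordering

@total_ordering
class Record:
    """Illustrative class: with __eq__ and __lt__ defined,
    functools.total_ordering fills in __le__, __gt__, and __ge__,
    so instances can be compared and sorted."""

    def __init__(self, id):
        self.id = id

    def __eq__(self, other):
        return isinstance(other, Record) and self.id == other.id

    def __lt__(self, other):
        return self.id < other.id
```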

Marketing

We now set the One-Click unsubscribe header on marketing emails to let recipients unsubscribe easily.
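
One-click unsubscription is specified by RFC 8058 through a pair of headers; here is a sketch with illustrative addresses and an illustrative URL:

```python
from email.message import EmailMessage

# Sketch of the RFC 8058 headers on an outgoing marketing email.
msg = EmailMessage()
msg["Subject"] = "Newsletter"
msg["To"] = "subscriber@example.com"
# Where to unsubscribe (the URL is made up for the example).
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe/abc123>"
# Tells compliant mail clients that a single POST to the URL suffices.
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Hello!")
```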

Sales

We renamed the advance payment conditions to lines for more coherence.

Web Shop

We updated the Shopify module to use the GraphQL API, because the REST API is now deprecated.


December 16, 2025 07:00 AM UTC