Planet Python
Last update: October 19, 2025 04:43 PM UTC
October 19, 2025
Hugo van Kemenade
Three times faster with lazy imports
PEP 810 proposes “explicit lazy imports” for Python 3.15:
Lazy imports defer the loading and execution of a module until the first time the imported name is used, in contrast to ‘normal’ imports, which eagerly load and execute a module at the point of the import statement.
By allowing developers to mark individual imports as lazy with explicit syntax, Python programs can reduce startup time, memory usage, and unnecessary work. This is particularly beneficial for command-line tools, test suites, and applications with large dependency graphs.
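For illustration, the explicit syntax proposed by the PEP is a soft keyword prefixed to the import statement (proposed syntax only, not valid in any released Python):

```
lazy import json
lazy from difflib import unified_diff
```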
It’s not been accepted yet, but let’s try out the reference implementation on one of my CLI tools, pypistats.
Setup #
First fetch the reference implementation. From a CPython checkout:
git remote add LazyImportsCabal https://github.com/LazyImportsCabal/cpython
git fetch LazyImportsCabal
gco lazy # see https://hugovk.dev/blog/2025/my-most-used-command-line-commands/
Because we want to install NumPy and pandas, let’s pretend to be Python 3.14 so we can use the binary wheels instead of having to build from source:
--- a/Include/patchlevel.h
+++ b/Include/patchlevel.h
/* Version parsed out into numeric values */
/*--start constants--*/
#define PY_MAJOR_VERSION 3
-#define PY_MINOR_VERSION 15
+#define PY_MINOR_VERSION 14
#define PY_MICRO_VERSION 0
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_ALPHA
#define PY_RELEASE_SERIAL 0
/* Version as a string */
-#define PY_VERSION "3.15.0a0"
+#define PY_VERSION "3.14.0a0"
/*--end constants--*/
--- a/configure.ac
+++ b/configure.ac
-m4_define([PYTHON_VERSION], [3.15])
+m4_define([PYTHON_VERSION], [3.14])
Build non-debug CPython with optimisations:
GDBM_CFLAGS="-I$(brew --prefix gdbm)/include" \
GDBM_LIBS="-L$(brew --prefix gdbm)/lib -lgdbm" \
./configure --enable-optimizations --with-lto \
--with-system-libmpdec --config-cache \
--with-openssl="$(brew --prefix openssl@3)" && make -s -j8
Install NumPy and pandas:
./python.exe -m pip install numpy pandas
And then an editable install of the CLI, because we’ll also test changing the imports:
./python.exe -m pip install -e ~/github/pypistats
Let’s check the dependencies with pipdeptree:
uvx "pipdeptree[graphviz]" --python ./python.exe --packages pypistats --graph-output svg > pipdeptree.svg
pypistats has seven direct dependencies, which result in a total of 41 dependencies six layers deep, not counting NumPy and pandas:
Benchmarks #
Let’s benchmark running pypistats --help, which is meant to be quick, using hyperfine:
brew install hyperfine
Inline imports #
In the pypistats CLI, I had already applied the trick of moving heavier imports into the functions that call them (the PEP calls these “inline imports”).
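As a minimal sketch of that trick (with stdlib modules standing in for pypistats’ real dependencies), the imports move from module level into the function body, so they only execute when that code path is actually hit:

```python
def pypi_stats_command():
    # Heavier imports happen here, on first call, rather than at
    # module import time, so a fast path like --help never pays for them.
    import json
    from datetime import date

    return json.dumps({"fetched": date(2025, 10, 19).isoformat()})

print(pypi_stats_command())
```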
Instead of the lazy keyword, I’m using the PYTHON_LAZY_IMPORTS env var here to make it easy to compare two different runs.
❯ hyperfine --warmup 10 --runs 20 --export-json out.json \
"./python.exe -m pypistats --help" \
"PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help"
Benchmark 1: ./python.exe -m pypistats --help
Time (mean ± σ): 46.2 ms ± 1.1 ms [User: 38.8 ms, System: 6.4 ms]
Range (min … max): 45.1 ms … 49.6 ms 20 runs
Benchmark 2: PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help
Time (mean ± σ): 35.3 ms ± 0.5 ms [User: 29.5 ms, System: 4.8 ms]
Range (min … max): 34.6 ms … 36.3 ms 20 runs
Summary
PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help ran
1.31 ± 0.04 times faster than ./python.exe -m pypistats --help
Plotted with plot_progression.py:
From 46 to 35 milliseconds, or, 1.31 times faster, not bad.
Fully lazy #
But we no longer need the inline imports trick with PEP 810!
I modified the CLI so all imports are at the top, and also removed if TYPE_CHECKING: guards. Here’s a diff.
❯ hyperfine --warmup 10 --runs 20 --export-json out2.json \
"./python.exe -m pypistats --help" \
"PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help"
Benchmark 1: ./python.exe -m pypistats --help
Time (mean ± σ): 104.1 ms ± 1.6 ms [User: 88.2 ms, System: 14.5 ms]
Range (min … max): 101.9 ms … 109.5 ms 20 runs
Benchmark 2: PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help
Time (mean ± σ): 35.7 ms ± 0.5 ms [User: 29.8 ms, System: 4.8 ms]
Range (min … max): 34.7 ms … 36.5 ms 20 runs
Summary
PYTHON_LAZY_IMPORTS=on ./python.exe -m pypistats --help ran
2.92 ± 0.06 times faster than ./python.exe -m pypistats --help
From 104 to 36 milliseconds, or 2.92 times faster, much better!
Header photo: “Lazy Man Fishing” at Cascade Locks on the Columbia River, 05/1973, in the U.S. National Archives, with no known copyright restrictions.
Daniel Roy Greenfeld
Using Asyncpg with FastAPI and Air
Recently I've been on a few projects using PostgreSQL where SQLAlchemy and SQLModel felt like overkill. Instead of using those libraries, I wrote SQL queries and ran them directly with asyncpg rather than going through an ORM.
Here's how I got it to work.
Defined a lifespan function for ASGIApp
Starlette-based ASGI frameworks like FastAPI (and by extension Air) can leverage lifespan functions, which are async generators. I've commented the lifespan function below for clarity.
from contextlib import asynccontextmanager
from os import environ
from typing import AsyncIterator
import asyncpg
from starlette.types import ASGIApp
DATABASE_URL = environ['DATABASE_URL']
@asynccontextmanager
async def lifespan(app: ASGIApp) -> AsyncIterator[None]:
"""A lifespan for maintaining the connection to the PostgreSQL DB
Without this, the connection will timeout and queries will fail.
"""
    # app.state is where the connection pool is created, which can
    # be accessed later inside of views. This is only run once during
    # app startup.
app.state.pool = await asyncpg.create_pool(
dsn=DATABASE_URL,
min_size=1,
max_size=10,
)
try:
        # This is where the app runs all the URL route functions.
yield
finally:
# This is run once when the app is shut down.
await app.state.pool.close()
Using the lifespan function
Just add the lifespan function to the app when it is instantiated.
Using the lifespan function for FastAPI projects
All you have to do is pass the lifespan callable to the FastAPI app instantiation.
from fastapi import FastAPI
# Adding the lifespan app
app = FastAPI(lifespan=lifespan)
@app.get('/users')
async def users(): # every function must be async
# Use the pool object to get the database connection object
async with app.state.pool.acquire() as conn:
results = await conn.fetch('SELECT * from users;')
# FastAPI responses automatically convert dicts to JSON
return {'count': len(results), 'users': results}
Using the lifespan function for Air projects
Air is powered by FastAPI (and Starlette), so it uses this lifespan function the same way as FastAPI.
import air
# Adding the lifespan app
app = air.Air(lifespan=lifespan)
@app.get('/users')
async def users(): # every function must be async
# Use the pool object to get the database connection object
async with app.state.pool.acquire() as conn:
users = await conn.fetch('SELECT * from users;')
# Air tags are converted to HTML during the response stage
# Jinja is also an option, but is outside the scope of this article
return air.layouts.mvpcss(
air.H1(f'Users: {len(users)}'),
air.Ul(
*[air.Li(u['email']) for u in users]
)
)
Incoming data
Changing data requires use of the conn.execute method. These examples also show how to use pydantic to validate the incoming data before we allow it to touch our database.
Adding data with FastAPI via asyncpg
As part of the request process for a REST API, FastAPI uses pydantic to validate incoming data. This results in a delightfully small view for accepting data.
from fastapi import FastAPI
from pydantic import BaseModel, EmailStr
# Adding the lifespan app
app = FastAPI(lifespan=lifespan)
class User(BaseModel):
email: EmailStr
@app.post('/users')
async def users_add(user: User):
# Get the conn object from the database connection pool
async with app.state.pool.acquire() as conn:
# Insert the record with an execute method
await conn.execute(
'INSERT INTO users (email, created_at) VALUES ($1, NOW())',
user.email
)
return user
Adding data with Air via asyncpg
There's no consistent standard within HTML for how to construct a form, much less how to respond to a bad submission. Therefore, in order to handle incoming data, Air needs a bit more code than FastAPI.
import air
from pydantic import BaseModel, EmailStr
# Adding the lifespan app
app = air.Air(lifespan=lifespan)
class User(BaseModel):
email: EmailStr
class UserForm(air.AirForm):
model = User
@app.post('/users')
async def users_add(request: air.Request):
# AirForms make handling incoming forms easier
form = await UserForm.from_request(request)
    # AirForms, once instantiated with data, have an `is_valid` property
    # which returns a boolean indicating whether or not the submitted
    # data has passed pydantic validation.
if form.is_valid:
# Get the conn object from the database connection pool
async with app.state.pool.acquire() as conn:
# Insert the record with an execute method
await conn.execute(
'INSERT INTO users (email, created_at) VALUES ($1, NOW())',
form.data.email
)
return air.layouts.mvpcss(
air.H1(f"User: {form.data.email}"),
)
# Simplistic handling of bad signup.
return air.RedirectResponse('/signup')
AirForms support reporting of bad data. I'll cover how to do that in a follow-up article.
October 18, 2025
Graham Dumpleton
Wrapping immutable objects
I am finally close to releasing wrapt version 2.0.0. The release has been delayed a bit as someone raised a number of questions about special Python dunder methods which the ObjectProxy class in wrapt didn't support.
Some of these were omissions due to the fact that the special methods did not exist when wrapt was first implemented, nor were they part of the initial Python 3 versions when support for Python 3 was added. In particular, the __matmul__, __rmatmul__ and __imatmul__ dunder methods, which underlie the matrix multiplication operators added in Python 3.5. In other cases, the lack of default support for specific dunder methods was a more complicated situation.
I am not going to dwell on the latter in this post, but as part of the overall discussion, the person who raised the questions also pointed out some issues with how the dunder methods for in-place operators were handled in the ObjectProxy class. That is the subject of this post, as it may be an interesting technical exploration.
An in-place operator in Python is an operator that modifies a variable directly without creating a new object. It combines an operation with assignment.
In Python, in-place operators include:
+= (add and assign)
-= (subtract and assign)
*= (multiply and assign)
/= (divide and assign)
//= (floor divide and assign)
%= (modulo and assign)
**= (exponentiate and assign)
@= (matrix multiply and assign, Python 3.5+)
&=, |=, ^= (bitwise operations and assign)
<<=, >>= (bit shift and assign)
An obvious example of using an in-place operator is on integers.
value = 1
value += 1
The result is that value ends up being set to 2.
Other examples are tuples and lists:
tuple_values = (1, 2, 3)
tuple_values += (4, 5, 6)
list_values = [1, 2, 3]
list_values += [4, 5, 6]
In each of these cases the members of the respective data structures are the integers from 1 to 6.
A potential trap for programmers new to Python arises when combining variable aliasing and in-place operators.
tuple_values_1 = (1, 2, 3)
tuple_values_2 = tuple_values_1
tuple_values_2 += (4, 5, 6)
list_values_1 = [1, 2, 3]
list_values_2 = list_values_1
list_values_2 += [4, 5, 6]
For the case of the tuples, tuple_values_1 ends up as (1, 2, 3) and tuple_values_2 as (1, 2, 3, 4, 5, 6).

For the lists however, both list_values_1 and list_values_2 end up being set to [1, 2, 3, 4, 5, 6].
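A quick runnable check of both outcomes:

```python
tuple_values_1 = (1, 2, 3)
tuple_values_2 = tuple_values_1
tuple_values_2 += (4, 5, 6)  # rebinds tuple_values_2 to a new tuple

list_values_1 = [1, 2, 3]
list_values_2 = list_values_1
list_values_2 += [4, 5, 6]  # mutates the shared list in place

assert tuple_values_1 == (1, 2, 3)
assert tuple_values_2 == (1, 2, 3, 4, 5, 6)
assert list_values_1 is list_values_2
assert list_values_1 == [1, 2, 3, 4, 5, 6]
```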
The reason for this is that although both variables for the tuple and the list initially point to the same object in each case, the alias or reference is broken for the tuple when the in-place operator is applied to it. This occurs because an instance of a tuple is immutable, whereas an instance of a list can be modified.
In practice what this means is that although you use the += operator, for an immutable type such as tuple it will be implemented under the covers as:
tuple_values_2 = tuple_values_2 + (4, 5, 6)
In other words, the variable which prior to that point had been an alias for the original tuple, is replaced with a new object which is the result of adding the extra member items to the tuple.
Although this highlights the problem of using in-place operators when using aliasing, there is actually a more insidious case where this can present itself.
Take for example the case of a class definition with attributes declared at class scope, where we then create an instance of the class, and apply the in-place operator to the attributes of the class via the instance.
class Class:
tuple_values = (1, 2, 3)
list_values = [1, 2, 3]
c = Class()
c.tuple_values += (4, 5, 6)
c.list_values += [4, 5, 6]
Without knowing better, you might think that the result will be that Class.tuple_values ends up being set to (1, 2, 3, 4, 5, 6) and that Class.list_values will similarly be set to [1, 2, 3, 4, 5, 6]. For the case of the tuple this isn't actually what happens.
Although Class.list_values does end up being set to [1, 2, 3, 4, 5, 6], the value of Class.tuple_values remains as (1, 2, 3).
The reason for this is that since the tuple is immutable, as mentioned above, it will actually effectively be implemented under the covers as:
c.tuple_values = c.tuple_values + (4, 5, 6)
And this is where the potentially non obvious happens.
To explain, think now what would happen if, instead of accessing the attribute via the instance of the class, you had accessed the class directly.
Class.tuple_values += (4, 5, 6)
As already noted, this would have been implemented as:
Class.tuple_values = Class.tuple_values + (4, 5, 6)
It is obvious then that the change would be applied to the class scoped attribute and Class.tuple_values would be (1, 2, 3, 4, 5, 6).
Look again now at what happened though when accessing the attribute via the instance of the class.
c.tuple_values = c.tuple_values + (4, 5, 6)
Because tuple is immutable, the existing value of the attribute is first read. At this point though, the instance of the class doesn't actually have an attribute tuple_values, so what happens is that it falls back to reading the class scoped attribute of the same name.
The value (4, 5, 6) is added to the value read from the class scoped attribute, and the result assigned back to the attribute. In doing the assignment though, since it was accessed via the class instance, instead of updating the class scoped attribute, it results in the creation of a new attribute on that instance of the class.
To explain it another way, prior to updating the attribute, if we had done:
print(vars(c))
the result would have been an empty dictionary, showing that the instance had no attributes, but after updating tuple_values we see:
{'tuple_values': (1, 2, 3, 4, 5, 6)}
indicating that the instance now has an attribute separate from the one defined on the class.
Just to complicate things even more, if you look at the attributes of the instance after updating list_values, you will see it defined on the instance as well. For it though, since a list is mutable, it is still an alias to the same list object defined as a class attribute. Confused yet?
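Putting all of that together, the shadowing behaviour can be verified directly:

```python
class Class:
    tuple_values = (1, 2, 3)
    list_values = [1, 2, 3]

c = Class()
assert vars(c) == {}  # the instance starts with no attributes of its own

c.tuple_values += (4, 5, 6)
c.list_values += [4, 5, 6]

# The tuple: a new instance attribute now shadows the class attribute.
assert Class.tuple_values == (1, 2, 3)
assert c.tuple_values == (1, 2, 3, 4, 5, 6)

# The list: an instance attribute was created too, but it is still the
# same object as the class attribute, which was mutated in place.
assert c.list_values is Class.list_values
assert Class.list_values == [1, 2, 3, 4, 5, 6]
```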
This is because although I said that for an immutable type it gets implemented under the covers as:
c.tuple_values = c.tuple_values + (4, 5, 6)
this isn't quite true.
It is closer to say that what occurs is:
if "tuple_values" not in vars(c):
    tmp = Class.tuple_values
else:
    tmp = c.tuple_values
tmp += (4, 5, 6)
c.tuple_values = tmp
but because tuple is immutable, this ends up being:
if "tuple_values" not in vars(c):
    tmp = Class.tuple_values
else:
    tmp = c.tuple_values
tmp = tmp + (4, 5, 6)
c.tuple_values = tmp
For the case of the list, it is similarly implemented as:
if "list_values" not in vars(c):
    tmp = Class.list_values
else:
    tmp = c.list_values
tmp += [4, 5, 6]
c.list_values = tmp
but since a list is mutable, it can be modified in place, meaning that since both the attribute on the instance and the class refer to the same list object, the change is seen when accessed via either.
There is no doubt I have explained this very badly, but if I haven't lost you, you might now be wondering what all this has got to do with using the ObjectProxy class in wrapt.
I am not going to go into the details of how one uses wrapt to monkey patch code, and I also question why anyone would try to wrap an instance of an immutable type in the first place, but the original problem that was raised boils down to the following code when using ObjectProxy in wrapt:
import wrapt

class Class:
    tuple_values = wrapt.ObjectProxy((1, 2, 3))

c = Class()
c.tuple_values += (4, 5, 6)
The point of ObjectProxy is that it acts as a transparent proxy for a wrapped object, where operations on the proxy object should end up with the same result as if the original object was used and it was not wrapped by the proxy.
If this were true, then we should expect that after the above code had executed, Class.tuple_values when accessed would be (1, 2, 3) and c.tuple_values would be (1, 2, 3, 4, 5, 6).
The problem is that this was not what was happening; instead the effective value was (1, 2, 3, 4, 5, 6) in both cases.
In other words, wrapt was breaking the rules of what should happen for an immutable type.
The first reason for this is that the ObjectProxy instance that replaces (wraps) the original immutable object is now a rather complicated class instance.
In order to handle the in-place operator for addition being applied to the proxy, the ObjectProxy class needs to implement the special dunder method __iadd__. When we say:
c.tuple_values += (4, 5, 6)
this gets translated into:
c.tuple_values = c.tuple_values.__iadd__((4, 5, 6))
In order to have the operation applied to the wrapped object, the __iadd__ method of ObjectProxy was implemented as:
def __iadd__(self, other):
self.__wrapped__ += other
return self
On face value this may seem to be correct, but fails for the case of an immutable object.
Going back to what we said occurs under the covers when we use +=, we now have:
if "tuple_values" not in vars(c):
    tmp = Class.tuple_values
else:
    tmp = c.tuple_values
tmp = tmp.__iadd__((4, 5, 6))
c.tuple_values = tmp
The original wrapped object at this point exists as the __wrapped__ attribute on the ObjectProxy instance referenced by the temporary value.
When += is executed, that calls __iadd__, which results in:
self.__wrapped__ += other
which, as explained, since the wrapped object is immutable, is implemented as:
self.__wrapped__ = self.__wrapped__ + other
Since both the attribute on the instance and the class reference the same proxy object, although we are replacing the tuple with the updated value, that replacement is only occurring against the __wrapped__ attribute of the ObjectProxy instance itself.
What is meant to happen, in order to be able to replace the original attribute reference, is that __iadd__ should return any new object to replace it, but as the code was written, it was always returning self. Thus, the original proxy object is what gets set as the attribute on the instance, as the temporary value reference doesn't change.
As much as this is an obscure corner case which in practice would probably never arise since wrapping immutable objects is a questionable use case, the question now is how to fix this and do something different when an immutable object is being wrapped.
Although Python has various builtin immutable types, there isn't a single test one can run to determine whether an object is immutable. What we can do though, for our specific case of the __iadd__ method implementation, is assume that if the wrapped object does not itself implement __iadd__, it is immutable for the purposes of that operation.
The simplistic approach for an immutable object would then be to discard the fact that an ObjectProxy wrapper was being used, and return the result of adding the wrapped object and the argument of +=.
def __iadd__(self, other):
if hasattr(self.__wrapped__, "__iadd__"):
self.__wrapped__ += other
return self
else:
return self.__wrapped__ + other
This isn't a good solution though, as there is going to be some reason ObjectProxy was used in the first place, and we have just thrown it away, losing any custom behaviour the proxy object implemented.
The next alternative is to return the result of the addition wrapped in a new instance of ObjectProxy.
def __iadd__(self, other):
if hasattr(self.__wrapped__, "__iadd__"):
self.__wrapped__ += other
return self
else:
return ObjectProxy(self.__wrapped__ + other)
Because any custom behaviour of an object proxy is going to be implemented by a class derived from ObjectProxy, we are again changing the expected overall behaviour of the proxy object, as only the ObjectProxy base class behaviour is preserved.
Python being a dynamic programming language with introspection capabilities built in does mean though that we can work out what class type was used to create the proxy object in the first place. We could therefore instead use:
def __iadd__(self, other):
if hasattr(self.__wrapped__, "__iadd__"):
self.__wrapped__ += other
return self
else:
return type(self)(self.__wrapped__ + other)
Unfortunately though, this can also fail.
The problem now is that a custom object proxy type derived from ObjectProxy could override __init__() such that instead of taking a single argument (the object to be wrapped), it takes one or more distinct arguments which are used in the creation of the wrapped object. If this is the case, creation of the new proxy object could fail due to a mismatched number of arguments, or a difference in what the arguments mean.
A further issue is that a custom object proxy could maintain additional state which, because it isn't transferred into the new proxy object, would be lost.
To cut to the solution, what can be done is for ObjectProxy to be implemented as:
class ObjectProxy:
...
@property
def __object_proxy__(self):
return ObjectProxy
def __iadd__(self, other):
if hasattr(self.__wrapped__, "__iadd__"):
self.__wrapped__ += other
return self
else:
return self.__object_proxy__(self.__wrapped__ + other)
That is, we add a property __object_proxy__ to ObjectProxy which returns a callable (by default ObjectProxy) that can be used to create a new instance of the proxy object for the modified wrapped object.
The reason for this convoluted approach is that although it is likely going to be a rare situation, it does allow for a custom object proxy to override how a new proxy object is created.
class CustomObjectProxy(ObjectProxy):
    def __init__(self, arg1, arg2, *, wrapped=None):
        """The arg1 and arg2 values are the inputs originally used to
        create the object to be wrapped. If wrapped is not None, then
        we adopt that as the wrapped object but still record the inputs
        for later use.
        """
        self._self_arg1 = arg1
        self._self_arg2 = arg2
        if wrapped is None:
            # Create object to be wrapped based on inputs.
            wrapped = ...
        super().__init__(wrapped)
@property
def __object_proxy__(self):
def __ctor__(wrapped):
# We need to override what __init__() does so construct
# class using more manual steps.
instance = ObjectProxy.__new__(CustomObjectProxy)
instance._self_arg1 = self._self_arg1
instance._self_arg2 = self._self_arg2
ObjectProxy.__init__(instance, wrapped)
return instance
return __ctor__
One caveat of having the __object_proxy__ property return ObjectProxy by default is that even if a custom object proxy type still accepts the wrapped object directly when being initialized, it would need to override __object_proxy__ to return the custom object proxy type, if it is ever intended to be used to wrap immutable objects.
The alternative is to define ObjectProxy as:
class ObjectProxy:
...
@property
def __object_proxy__(self):
return type(self)
This means it will work out of the box for custom object proxies which are initialized with the object to be wrapped, but then you have the problem mentioned before, where a derived class has a custom __init__() function which takes different arguments.
Either way, if not overridden, you will get either a silent failure resulting in different behaviour after an in-place operator is used, or, for the case of custom arguments to __init__(), an outright exception.
For now I am opting to return ObjectProxy rather than the type of a derived custom object proxy. Someone would already need to be doing something crazy to need to wrap immutable types, and so is highly unlikely to encounter the whole issue anyway. Thus returning ObjectProxy seems to be the slightly safer choice.
Having a solution, we now just need to update all the other in-place operators with a similar pattern of code. For example, in-place multiplication:
def __imul__(self, other):
if hasattr(self.__wrapped__, "__imul__"):
self.__wrapped__ *= other
return self
else:
return self.__object_proxy__(self.__wrapped__ * other)
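To see the whole pattern working end to end, here is a stripped-down illustrative proxy (not wrapt's actual ObjectProxy, and using type(self) rather than the ObjectProxy default discussed above) that only implements __iadd__:

```python
class MiniProxy:
    # Illustrative sketch only; wrapt's ObjectProxy is far more complete.
    def __init__(self, wrapped):
        self.__wrapped__ = wrapped

    @property
    def __object_proxy__(self):
        return type(self)

    def __iadd__(self, other):
        if hasattr(self.__wrapped__, "__iadd__"):
            self.__wrapped__ += other
            return self
        else:
            return self.__object_proxy__(self.__wrapped__ + other)

class Holder:
    tuple_values = MiniProxy((1, 2, 3))

h = Holder()
h.tuple_values += (4, 5, 6)

# The class attribute still wraps the original tuple; the instance got
# a brand new proxy, matching the semantics of an unwrapped tuple.
assert Holder.tuple_values.__wrapped__ == (1, 2, 3)
assert h.tuple_values.__wrapped__ == (1, 2, 3, 4, 5, 6)
assert h.tuple_values is not Holder.tuple_values
```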
Or at least, we are done for the pure Python implementation. Since wrapt actually uses a C extension, and only falls back to the pure Python implementation if the C extension is not available for some reason, all of this also had to be done for the C extension. How it is done there will be left to the reader's imagination.
The end result is that although I have never had anyone report this as a real world problem, and it was only reported as technically wrong, it will be addressed in wrapt version 2.0.0 as discussed above. Since I have not seen it cause problems in the real world, I will not be backporting the fix to version 1.17.X.
Anyway, hope this was an interesting exploration of a strange corner case.
October 17, 2025
Real Python
The Real Python Podcast – Episode #270: Evolving Teaching Python in the Classroom
How is teaching young students Python changing with the advent of LLMs? This week on the show, Kelly Schuster-Paredes from the Teaching Python podcast joins us to discuss coding and AI in the classroom.
Armin Ronacher
Building an Agent That Leverages Throwaway Code
In August I wrote about my experiments with replacing MCP (Model Context Protocol) with code. In the time since I utilized that idea for exploring non-coding agents at Earendil. And I’m not alone! In the meantime, multiple people have explored this space and I felt it was worth sharing some updated findings. The general idea is pretty simple. Agents are very good at writing code, so why don’t we let them write throw-away code to solve problems that are not related to code at all?
I want to show you how and what I’m doing to give you some ideas of what works and why this is much simpler than you might think.
Pyodide is the Dark Horse
The first thing you have to realize is that Pyodide is secretly becoming a pretty big deal for a lot of agentic interactions. What is Pyodide? Pyodide is an open source project that makes a standard Python interpreter available via a WebAssembly runtime. What is neat about it is that it has an installer called micropip that allows it to install dependencies from PyPI. It also targets the emscripten runtime environment, which means there is a pretty good standard Unix setup around the interpreter that you can interact with.
Getting Pyodide to run is shockingly simple if you have a Node environment. You can directly install it from npm. What makes this so cool is that you can also interact with the virtual file system, which allows you to create a persistent runtime environment that interacts with the outside world. You can also get hosted Pyodide at this point from a whole bunch of startups, but you can actually get this running on your own machine and infrastructure very easily if you want to.
The way I found this to work best is if you banish Pyodide into a web worker. This allows you to interrupt it in case it runs into time limits.
A big reason why Pyodide is such a powerful runtime is that Python has an amazing ecosystem of well established libraries that the models know about. From manipulating PDFs or Word documents to creating images, it’s all there.
File Systems Are King
Another vital ingredient to a code interpreter is having a file system.
Not just any file system though. I like to set up a virtual file system that I intercept so that I can provide it with access to remote resources from specific file system locations. For instance, you can have a folder on the file system that exposes files which are just resources that come from your own backend API. If the agent then chooses to read from those files, you can from outside the sandbox make a safe HTTP request to bring that resource into play. The sandbox itself does not have network access, so it’s only the file system that gates access to resources.
The reason the file system is so good is that agents just know so much about how they work, and you can provide safe access to resources through some external system outside of the sandbox. You can provide read-only access to some resources and write access to others, then access the created artifacts from the outside again.
Now actually doing that is a tad tricky because the emscripten file system is sync, and most of the interesting things you can do are async. The option that I ended up going with is to move the fetch-like async logic into another web worker and use Atomics.wait to block. If your entire Pyodide runtime is in a web worker, that’s not as bad as it looks.
That said, I wish the emscripten file system API were changed to support stack switching instead. While it’s now possible to hide async promises behind sync abstractions within Pyodide with call_sync, the same approach does not work for the emscripten JavaScript FS API.
I have a full example of this at the end, but the simplified pseudocode that I ended up with looks like this:
// main thread: wrap a worker so fetch() looks synchronous
fetch(url) {
const signalBuffer = new SharedArrayBuffer(4);
const signal = new Int32Array(signalBuffer);
const { port1, port2 } = new MessageChannel();
this.worker.postMessage({url, signalBuffer, port: port2}, [port2]);
Atomics.wait(signal, 0, 0); // park until worker flips the signal
const message = receiveMessageOnPort(port1); // MessageChannel gives the payload
port1.close();
if (message.message.status !== "ok") {
throw new Error(message.message.error.message);
}
return message.message.data;
}
// worker thread: perform async fetch, then wake the main thread
parentPort.on("message", async ({ url, signalBuffer, port }) => {
const signal = new Int32Array(signalBuffer);
try {
const bytes = await fetch(url).then(r => {
if (!r.ok) throw new Error(`HTTP ${r.status}`);
return r.arrayBuffer();
});
port.postMessage({ status: "ok", data: new Uint8Array(bytes) });
Atomics.store(signal, 0, 1); // mark success
} catch (error) {
port.postMessage({ status: "error", error: serialize(error) });
Atomics.store(signal, 0, -1); // mark failure
} finally {
Atomics.notify(signal, 0); // unblock the waiting main thread
port.close();
}
});
Durable Execution
Lastly now that you have agents running, you really need durable execution. I would describe durable execution as the idea of being able to retry a complex workflow safely without losing progress. The reason for this is that agents can take a very long time, and if they interrupt, you want to bring them back to the state they were in. This has become a pretty hot topic. There are a lot of startups in that space and you can buy yourself a tool off the shelf if you want to.
What is a little bit disappointing is that there is no truly simple durable execution system. By that I mean something that just runs on top of Postgres and/or Redis, in the same way that, for instance, pgmq does.
The easiest way to shoehorn this yourself is to use queues to restart your tasks and to cache away the intermediate results of your execution. Basically, you compose your task from multiple steps, and each step has a very simple cache key. It's really just that simple:
function myAgenticLoop(taskID, initialState) {
  let stepCount = 0;
  let state = initialState;
  while (stepCount < MAX_STEPS) {
    let cacheKey = `${taskID}:${stepCount}`;
    let cachedState = loadStateFromCache(cacheKey);
    if (cachedState !== null) {
      state = cachedState; // replay a step that already completed
    } else {
      state = runAgenticStep(state);
      storeStateInCache(cacheKey, state); // checkpoint before moving on
    }
    stepCount++;
    if (reachedEndCondition(state)) {
      break;
    }
  }
  return state;
}
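Here's roughly what that pseudocode looks like as runnable Python. A plain dict stands in for the durable store (Redis, Postgres, etc.), and all the names here are placeholders rather than a real library:

```python
MAX_STEPS = 10
_cache = {}  # stand-in for a durable store such as Redis or Postgres


def store_state(key, state):
    _cache[key] = state


def load_state(key):
    return _cache.get(key)  # None means "this step never completed"


def agentic_loop(task_id, initial_state, run_step, reached_end):
    state = initial_state
    for step in range(MAX_STEPS):
        key = f"{task_id}:{step}"
        cached = load_state(key)
        if cached is not None:
            state = cached  # replay: skip work that already happened
        else:
            state = run_step(state)
            store_state(key, state)  # checkpoint before moving on
        if reached_end(state):
            break
    return state
```

If the process dies and you rerun the same task ID, every completed step is served from the cache, so only the step that was in flight gets re-executed.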
You can improve on this greatly, but this is the general idea. The state is basically the conversation log and whatever else you need to keep around for the tool execution (e.g., whatever was thrown on the file system).
What Other Than Code?
What tools does an agent need that are not code? Well, the code needs to be able to do something interesting so you need to give it access to something. The most interesting access you can provide is via the file system, as mentioned. But there are also other tools you might want to expose. What Cloudflare proposed is connecting to MCP servers and exposing their tools to the code interpreter. I think this is a quite interesting approach and to some degree it’s probably where you want to go.
Some tools that I find interesting:
- Describe: a tool that just lets the agent run more inference, mostly on files that the code interpreter generated. For instance, if you have a zip file, it's quite fun to see the code interpreter use Python to unpack it. But if the unpacked file is a JPG, you will need to go back to inference to understand it.
- Help: a tool that just … brings up help. Again, this can be inference for basic RAG, or similar. I found it quite interesting to let the AI ask for help. For example, you want the help tool to allow a query like "Which Python code should I write to create a chart for the given XLSX file?" On the other hand, you can also just stash away some instructions in .md files on the virtual file system and have the code interpreter read them. It's all an option.
Putting it Together
If you want to see what this roughly looks like, I vibe-coded a simple version of this together. It uses a made-up example but it does show how a sandbox with very little tool availability can create surprising results: mitsuhiko/mini-agent.
When you run it, it looks up the current IP from a special network drive that triggers an async fetch, and then it (usually) uses pillow or matplotlib to make an image of that IP address. Pretty pointless, but a lot of fun!
What an agent run looks like
Prompt: Figure out the current ip address and make me a picture of it.
Step 1:
Stop reason: tool_use
Tool call: execute_python {
code: '\n' +
'# Read the current IP address\n' +
"with open('/network/current-ip', 'r') as f:\n" +
' ip_address = f.read().strip()\n' +
' \n' +
'print(f"Current IP Address: {ip_address}")\n'
}
Tool result: {
stdout: 'Current IP Address: 255.255.255.255',
stderr: '',
success: true
}
[Cache] Saved state to: agent-cache/task-0-step-1.json
Step 2:
Stop reason: tool_use
Tool call: execute_python {
code: '\n' +
'import matplotlib.pyplot as plt\n' +
'import matplotlib.patches as patches\n' +
'from matplotlib.patches import FancyBboxPatch\n' +
'import numpy as np\n' +
'\n' +
'# Create figure and axis\n' +
'fig, ax = plt.subplots(1, 1, figsize=(12, 8))\n' +
'ax.set_xlim(0, 10)\n' +
'ax.set_ylim(0, 10)\n' +
"ax.axis('off')\n" +
'\n' +
'# Add background gradient\n' +
'gradient = np.linspace(0, 1, 256).reshape(1, -1)\n' +
'gradient = np.vstack([gradient] * 256)\n' +
"ax.imshow(gradient, extent=[0, 10, 0, 10], aspect='auto', cmap='cool', alpha=0.3)\n" +
'\n' +
'# Title\n' +
"ax.text(5, 9, 'Current IP Address', fontsize=28, weight='bold', \n" +
" ha='center', va='top', color='#1a1a1a')\n" +
'\n' +
'# IP Address Box\n' +
'fancy_box = FancyBboxPatch((0.5, 4.5), 9, 2.5, \n' +
' boxstyle="round,pad=0.2", \n' +
" edgecolor='#2196F3', facecolor='#E3F2FD',\n" +
' linewidth=3)\n' +
'ax.add_patch(fancy_box)\n' +
'\n' +
'# IP Address Text\n' +
"ax.text(5, 5.75, ip_address, fontsize=48, weight='bold', \n" +
" ha='center', va='center', color='#1976D2', family='monospace')\n" +
'\n' +
'# Decorative elements - network nodes\n' +
'circle_positions = [(2, 2), (8, 2), (2, 8), (8, 8), (5, 1), (5, 9)]\n' +
'for pos in circle_positions:\n' +
" circle = patches.Circle(pos, 0.3, color='#FF5722', alpha=0.7)\n" +
' ax.add_patch(circle)\n' +
'\n' +
'# Connection lines\n' +
'for i in range(len(circle_positions)):\n' +
' for j in range(i+1, len(circle_positions)):\n' +
' x = [circle_positions[i][0], circle_positions[j][0]]\n' +
' y = [circle_positions[i][1], circle_positions[j][1]]\n' +
" ax.plot(x, y, color='#2196F3', alpha=0.2, linewidth=1)\n" +
'\n' +
'# Info text\n' +
"ax.text(5, 3.8, '🌐 Your Network Location 🌐', fontsize=14, \n" +
" ha='center', va='top', color='#555', style='italic')\n" +
'\n' +
'plt.tight_layout()\n' +
"plt.savefig('/output/ip_address.png', dpi=150, bbox_inches='tight', facecolor='white')\n" +
'print("✅ Picture saved to /output/ip_address.png")\n' +
'plt.close()\n' +
'\n' +
'print(f"\\n📍 Your IP Address: {ip_address}")\n'
}
Tool result: {
stdout: 'Loading Pillow, contourpy, cycler, fonttools, kiwisolver, matplotlib, numpy, packaging, pyparsing, python-dateutil, pytz, sixLoaded Pillow, contourpy, cycler, fonttools, kiwisolver, matplotlib, numpy, packaging, pyparsing, python-dateutil, pytz, six',
stderr: 'Traceback (most recent call last):\n' +
' File "/lib/python313.zip/_pyodide/_base.py", line 597, in eval_code_async\n' +
' await CodeRunner(\n' +
' ...<9 lines>...\n' +
' .run_async(globals, locals)\n' +
' File "/lib/python313.zip/_pyodide/_base.py", line 411, in run_async\n' +
' coroutine = eval(self.code, globals, locals)\n' +
' File "<exec>", line 8, in <module>\n' +
' File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 1613, in subplots\n' +
' fig = figure(**fig_kw)\n' +
' File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 934, in figure\n' +
' manager = new_figure_manager(\n' +
' num, figsize=figsize, dpi=dpi,\n' +
' facecolor=facecolor, edgecolor=edgecolor, frameon=frameon,\n' +
' FigureClass=FigureClass, **kwargs)\n' +
' File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 464, in new_figure_manager\n' +
' _warn_if_gui_out_of_main_thread()\n' +
' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^\n' +
' File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 441, in _warn_if_gui_out_of_main_thread\n' +
' canvas_class = cast(type[FigureCanvasBase], _get_backend_mod().FigureCanvas)\n' +
' ~~~~~~~~~~~~~~~~^^\n' +
' File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 280, in _get_backend_mod\n' +
' switch_backend(rcParams._get("backend")) # type: ignore[attr-defined]\n' +
' ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^\n' +
' File "/lib/python3.13/site-packages/matplotlib/pyplot.py", line 342, in switch_backend\n' +
' module = importlib.import_module(cbook._backend_module_name(newbackend))\n' +
' File "/lib/python313.zip/importlib/__init__.py", line 88, in import_module\n' +
' return _bootstrap._gcd_import(name[level:], package, level)\n' +
' ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n' +
' File "<frozen importlib._bootstrap>", line 1387, in _gcd_import\n' +
' File "<frozen importlib._bootstrap>", line 1360, in _find_and_load\n' +
' File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked\n' +
' File "<frozen importlib._bootstrap>", line 935, in _load_unlocked\n' +
' File "<frozen importlib._bootstrap_external>", line 1026, in exec_module\n' +
' File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed\n' +
' File "/lib/python3.13/site-packages/matplotlib/backends/backend_webagg.py", line 23, in <module>\n' +
' from js import document\n' +
"ImportError: cannot import name 'document' from 'js' (unknown location)\n",
success: false
}
[Cache] Saved state to: agent-cache/task-0-step-2.json
Step 3:
Stop reason: tool_use
Tool call: execute_python {
code: '\n' +
'from PIL import Image, ImageDraw, ImageFont\n' +
'import os\n' +
'\n' +
'# Read the current IP address\n' +
"with open('/network/current-ip', 'r') as f:\n" +
' ip_address = f.read().strip()\n' +
'\n' +
'# Create a new image with a nice gradient-like background\n' +
'width, height = 1200, 800\n' +
"image = Image.new('RGB', (width, height), color='white')\n" +
'draw = ImageDraw.Draw(image)\n' +
'\n' +
'# Draw background with gradient effect using rectangles\n' +
'for i in range(height):\n' +
' ratio = i / height\n' +
' r = int(30 + (100 * ratio))\n' +
' g = int(100 + (150 * ratio))\n' +
' b = int(200 + (55 * ratio))\n' +
' draw.rectangle([(0, i), (width, i+1)], fill=(r, g, b))\n' +
'\n' +
'# Draw main box\n' +
'box_margin = 100\n' +
'box_top = 250\n' +
'box_height = 300\n' +
'draw.rectangle(\n' +
' [(box_margin, box_top), (width - box_margin, box_top + box_height)],\n' +
' fill=(240, 248, 255),\n' +
' outline=(33, 150, 243),\n' +
' width=5\n' +
')\n' +
'\n' +
'# Draw rounded corners effect\n' +
'for i in range(20):\n' +
' draw.arc([(box_margin, box_top), (box_margin + 40, box_top + 40)], \n' +
' 0, 90, fill=(33, 150, 243), width=5)\n' +
'\n' +
'# Add decorative network nodes\n' +
'node_radius = 15\n' +
'nodes = [(150, 150), (1050, 150), (150, 650), (1050, 650), (600, 100), (600, 700)]\n' +
'for node in nodes:\n' +
' draw.ellipse(\n' +
' [(node[0] - node_radius, node[1] - node_radius),\n' +
' (node[0] + node_radius, node[1] + node_radius)],\n' +
' fill=(255, 87, 34),\n' +
' outline=(200, 50, 0),\n' +
' width=2\n' +
' )\n' +
'\n' +
'# Draw connection lines\n' +
'for i, node1 in enumerate(nodes):\n' +
' for node2 in nodes[i+1:]:\n' +
' draw.line([node1, node2], fill=(33, 150, 243, 50), width=1)\n' +
'\n' +
'# Try to use a large default font\n' +
'try:\n' +
' title_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 60)\n' +
' ip_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuMono-Bold.ttf", 90)\n' +
' subtitle_font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 30)\n' +
'except:\n' +
' title_font = ImageFont.load_default()\n' +
' ip_font = ImageFont.load_default()\n' +
' subtitle_font = ImageFont.load_default()\n' +
'\n' +
'# Draw title\n' +
'draw.text((width // 2, 80), "Current IP Address", font=title_font, \n' +
' fill=(255, 255, 255), anchor="mm")\n' +
'\n' +
'# Draw IP address\n' +
'draw.text((width // 2, box_top + box_height // 2), ip_address, font=ip_font, \n' +
' fill=(25, 118, 210), anchor="mm")\n' +
'\n' +
'# Draw subtitle\n' +
'draw.text((width // 2, box_top + box_height + 50), "🌐 Your Network Location 🌐", \n' +
' font=subtitle_font, fill=(100, 100, 100), anchor="mm")\n' +
'\n' +
'# Save the image\n' +
"output_path = '/output/ip_address.png'\n" +
'image.save(output_path)\n' +
'print(f"✅ Picture saved to {output_path}")\n' +
'print(f"\\n📍 Your IP Address: {ip_address}")\n'
}
Tool result: {
stdout: 'Pillow already loaded from default channelNo new packages to load✅ Picture saved to /output/ip_address.png📍 Your IP Address: 255.255.255.255',
stderr: '',
success: true
}
[Cache] Saved state to: agent-cache/task-0-step-3.json
Step 4:
Stop reason: end_turn
Final result: Perfect! 🎉 I've successfully:
1. **Found your IP address**: **255.255.255.255**
2. **Created a picture** showing your IP address with:
- A nice gradient blue background
- Your IP address prominently displayed in the center
- Decorative network nodes and connection lines
- Professional styling and formatting
The image has been saved to `/output/ip_address.png` and is ready for you to download!
[Cache] Saved state to: agent-cache/task-0-step-4.json
Total steps: 4
Making 1 file(s) available in ./output:
✓ ip_address.png
The same approach has also been leveraged by Anthropic and Cloudflare. There is some further reading that might give you more ideas:
- Claude Skills fully leverages code generation for working with documents and other interesting things. It comes with a (non-open-source) repository of example skills that the LLM and code executor can use: anthropics/skills
- Cloudflare's Code Mode, the idea of creating TypeScript bindings for MCP tools and having the agent write code to use them in a sandbox.
October 16, 2025
Ari Lamstein
Visualizing Border Patrol Encounters Under the Second Trump Administration
Summary: I just updated my Immigration Enforcement Streamlit App with Border Patrol data from the current administration. This post walks through what changed and why it matters.
Last week, I published Visualizing 25 Years of Border Patrol Data in Python, which explored long-term trends in Border Patrol “encounters” along the Southwest Land Border. At the time, the dataset ended in November 2024.
I’ve now extended the dataset to include the current fiscal year—and the results are striking.
What’s new?
The encounters module now integrates two sources:
- A historic spreadsheet covering FY2000–FY2024, published by the Office of Homeland Security Statistics (OHSS). This dataset was the topic of my last post.
- A CSV file from CBP’s Southwest Land Border Encounters page, which provides monthly data for FY2025. It currently includes data through August 2025.
You can view the updated visualization in my Immigration Enforcement Streamlit App.
What does the data show?
The lowest monthly encounter values in the entire 25-year dataset all occur after the start of President Trump’s second term, beginning in February 2025.
To visualize this, you can run:
import encounters

encounters.get_sw_border_encounters_graph()
The graph includes vertical lines marking presidential transitions, allowing readers to compare enforcement trends across administrations.
When sorting the dataset by monthly encounter totals, the seven lowest months all fall under the current administration. The next lowest values? They appear at the start of Trump’s first term in 2017.
This pattern echoes a point made by a DHS statistician during my earlier research: while the number of people attempting to cross the border is unknowable, it’s likely sensitive to changes in administration policy. The encounters dataset doesn’t capture causality, but it’s probably the best proxy we have—and it offers a clear visual correlation that invites further exploration.
To reproduce this analysis
You can sort the dataset by monthly encounter totals using:
df = encounters.get_sw_border_encounters()
df.sort_values('encounters').reset_index(drop=True).head(10)
This returns the ten lowest monthly values in the dataset:
 | date | encounters |
---|---|---|
0 | 2025-07-01 | 4596 |
1 | 2025-06-01 | 6068 |
2 | 2025-08-01 | 6321 |
3 | 2025-03-01 | 7180 |
4 | 2025-02-01 | 8353 |
5 | 2025-04-01 | 8377 |
6 | 2025-05-01 | 8725 |
7 | 2017-04-01 | 11130 |
8 | 2017-03-01 | 12200 |
9 | 2017-05-01 | 14520 |
These values are not only historically low—they’re also tightly clustered, suggesting a sustained shift rather than a one-off anomaly.
Why this matters
By extending the dataset and updating the app, I hope to support more timely, transparent conversations about immigration enforcement. The code is open-source, the data sources are documented, and the graph is annotated to support hypothesis generation—not partisan conclusions.
This project doesn’t claim causality, but it does offer a reproducible lens for exploring how policy shifts may correlate with enforcement outcomes. It’s designed to empower journalists, educators, and analysts to ask better questions—not just accept surface-level narratives.
What’s next?
I’m exploring ways to surface encounter demographics and regional breakdowns—details that could help contextualize enforcement patterns and policy shifts.
If you have ideas, feedback, or use cases you’d like to see supported, I’d love to hear from you. Feel free to reach out.
Python Bytes
#453 Python++
Topics covered in this episode:
- PyPI+ (https://pypiplus.com)
- uv-ship - a CLI-tool for shipping with uv (https://github.com/floRaths/uv-ship)
- How fast is 3.14? (https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it)
- air - a new web framework built with FastAPI, Starlette, and Pydantic (https://github.com/feldroy/air)
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=VwJFlrvT7sk

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training (https://training.talkpython.fm/)
- The Complete pytest Course (https://courses.pythontest.com/p/the-complete-pytest-course)
- Patreon Supporters (https://www.patreon.com/pythonbytes)

Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: PyPI+ (https://pypiplus.com)
- Very nice search and exploration tool for PyPI
- Minor but annoying bug: content-types ≠ content_types on PyPI+, but they are in Python itself. Minimum Python version seems to be interpreted as max Python version.
- See dependency graphs and more
- Examples: content-types (https://pypiplus.com/project/content-types/0.3.0/), jinja-partials (https://pypiplus.com/project/jinja_partials/0.3.0/), fastapi-chameleon (https://pypiplus.com/project/fastapi_chameleon/0.1.17/)

Brian #2: uv-ship - a CLI-tool for shipping with uv (https://github.com/floRaths/uv-ship)
- “uv-ship is a lightweight companion to uv that removes the risky parts of cutting a release. It verifies the repo state, bumps your project metadata and optionally refreshes the changelog. It then commits, tags & pushes the result, while giving you the chance to review every step.”

Michael #3: How fast is 3.14? (https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it)
- by Miguel Grinberg
- A big focus on threaded vs. non-threaded Python
- Sometimes it’s faster; other times, it’s slower

Brian #4: air - a new web framework built with FastAPI, Starlette, and Pydantic (https://github.com/feldroy/air)
- A very new project in alpha stage by Daniel & Audrey Feldroy, the “Two Scoops of Django” people.
- Air Tags are an interesting thing.
- The Why? page (https://feldroy.github.io/air/why/) is also amazing:
  - “Don't use AIR”
  - “Every release could break your code! If you have to ask why you should use it, it's probably not for you.”
  - “If you want to use Air, you can. But we don't recommend it.”
  - “It'll likely infect you, your family, and your codebase with an evil web framework mind virus, …”

Extras

Brian:
- Python 3.15a1 is available (https://discuss.python.org/t/python-3-15-alpha-1/104358); uv python install 3.15 already works (https://github.com/astral-sh/uv/releases/tag/0.9.3)
- Python lazy imports you can use today (https://pythontest.com/python-lazy-imports-now/) - one of two blog posts I threatened to write recently
- Testing against Python 3.14 (https://pythontest.com/testing-with-python-3-14) - the other one
- Free Threading has some trove classifiers (https://pythontest.com/testing-with-python-3-14/#supporting-314t--free-threaded-python)

Michael:
- Blog post about the book: Talk Python in Production book is out! (https://mkennedy.codes/posts/talk-python-in-production-book-is-out/) In particular, the extras are interesting.
- AI Usage TUI (https://github.com/mikeckennedy/aiusage)
- Show me your ls (https://mkennedy.codes/posts/show-me-your-ls/)
- Helium Browser (https://helium.computer) is interesting, and Python plays a big role: GitHub lists the languages as Python 97.4% 👀, Shell 1.9%, Other 0.7%
- Smallest Python release? 3.13.9 (https://docs.python.org/release/3.13.9/whatsnew/changelog.html#python-3-13-9)

Joke: An unforgivable crime (https://x.com/itsfoss2/status/1973983885005324607)
October 15, 2025
Real Python
Polars vs pandas: What's the Difference?
Polars and pandas both provide DataFrame-based data analysis in Python, but they differ in syntax, performance, and features. In this tutorial on Polars vs pandas, you’ll compare their method chaining styles, run timed performance tests, explore LazyFrame optimizations in Polars, convert data between the two libraries, and create plots with their built-in tools. You’ll also examine scenarios where each library’s strengths make it the better choice.
By the end of this tutorial, you’ll understand that:
- Polars expressions and contexts let you build clear, optimized query pipelines without mutating your original data.
- LazyFrames with query optimization in Polars can outperform pandas for grouped and aggregated workloads.
- Streaming in Polars enables processing datasets that don’t fit in memory, which pandas can’t handle natively.
- .to_pandas() and from_pandas() let you convert between DataFrame formats, and Narwhals offers a library-agnostic API.
- Built-in plotting uses Altair for Polars and Matplotlib for pandas, allowing quick visualization directly from DataFrames.
To get the most out of this tutorial, it’s recommended that you already have a basic understanding of how to work with both pandas and Polars DataFrames, as well as Polars LazyFrames.
To complete the examples in this tutorial, you’ll use various tools and the Python REPL. You’ll use the command line to run some scripts that time your code and reveal how pandas and Polars compare. You’ll also take advantage of the plotting capabilities of Jupyter Notebook.
Much of the data you’ll use will be random and self-generated. You’ll also use a cleansed and reformatted Apache Parquet version of some freely available retail data from the UC Irvine Machine Learning Repository. Parquet files are optimized to store data and analyze it efficiently. This enables you to achieve optimal performance from the pandas and Polars libraries.
Before you start, you should download the online_retail.parquet file from the tutorial downloadables and place it into your project directory.
You’ll need to install the pandas and Polars libraries, as well as PyArrow, Matplotlib, Vega-Altair, and Narwhals, to make sure your code has everything it needs to run. You’ll also use NumPy, which is currently installed automatically when you install pandas.
You may also want to consider creating your own virtual environment within your project folder to install the necessary libraries. This will prevent them from interfering with your current setup.
You can install the required libraries using these commands at your command prompt:
$ python -m pip install polars \
pandas \
pyarrow \
narwhals \
altair \
jupyterlab \
matplotlib
All the code examples are provided in the downloadable materials for this tutorial, which you can download by clicking the link below:
Get Your Code: Click here to download the free sample code you’ll use to learn the differences between Polars and pandas.
Now that you’re set up, it’s time to get started and learn about the main differences between Polars and pandas.
Take the Quiz: Test your knowledge with our interactive “Polars vs pandas: What's the Difference?” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Polars vs pandas: What's the Difference?Take this quiz to test your knowledge of the Polars vs pandas tutorial and review the key differences between these open-source Python libraries.
Do Polars and pandas Use the Same Syntax?
There are similarities between Polars and pandas. For example, they both support Series and DataFrames and can perform many of the same data analysis computations. However, there are some differences in their syntax.
To explore this, you’ll use the order details in your online_retail.parquet file to analyze both pandas and Polars DataFrames. This file contains the following data:
Column Name | Description |
---|---|
InvoiceNo | Invoice number |
StockCode | Stock code of item |
Description | Item description |
Quantity | Quantity purchased |
InvoiceDate | Date invoiced |
UnitPrice | Item price |
CustomerID | Customer identifier |
Country | Country where the purchase was made |
Next, you’ll analyze some of this data with pandas and then with Polars.
Using Index-Based Syntax in pandas
Suppose you want a DataFrame with a new Total column that contains the total cost of each purchase. You also want to apply filtering so you can concentrate on specific data.
To achieve this, you might write the following pandas code in your REPL:
pandas_polars_demo.py
>>> import pandas as pd
>>> orders_pandas = pd.read_parquet("online_retail.parquet")
>>> orders_pandas["Total"] = (
... orders_pandas["Quantity"] * orders_pandas["UnitPrice"]
... )
>>> orders_pandas[["InvoiceNo", "Quantity", "UnitPrice", "Total"]][
... orders_pandas["Total"] > 100
... ].head(3)
InvoiceNo Quantity UnitPrice Total
46 536371 80 2.55 204.0
65 536374 32 10.95 350.4
82 536376 48 3.45 165.6
This code uses pandas index-based syntax, inspired by NumPy, on which pandas was originally built. First, you add a new Total column to your DataFrame. The column is calculated by multiplying the values of the Quantity and UnitPrice columns together. This operation permanently changes your original DataFrame.
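For contrast, the same result can be produced without mutating the original DataFrame by chaining assign() and query(). This is a sketch on a tiny made-up DataFrame, since the Parquet file isn't reproduced here:

```python
import pandas as pd

# Hypothetical mini version of the retail data
orders = pd.DataFrame({
    "InvoiceNo": ["536371", "536374", "536376"],
    "Quantity": [80, 32, 2],
    "UnitPrice": [2.55, 10.95, 3.45],
})

# Method chaining: derive Total and filter, leaving `orders` unchanged
big_orders = (
    orders
    .assign(Total=lambda df: df["Quantity"] * df["UnitPrice"])
    .query("Total > 100")
    [["InvoiceNo", "Quantity", "UnitPrice", "Total"]]
)
```

After this runs, big_orders holds the two rows whose total exceeds 100, while orders still has no Total column, which is closer in spirit to the expression-based style Polars enforces.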
Read the full article at https://realpython.com/polars-vs-pandas/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Polars vs pandas: What's the Difference?
In this quiz, you’ll test your understanding of the Polars vs pandas: What’s the Difference? tutorial. Most of the answers are somewhere in the tutorial, but you may need to use some of the links to clarify some of the questions.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Lucas Cimon
Spotlight on pdfly, the Swiss Army knife for PDF files
Project documentation: pdfly.readthedocs.io
pdfly is the youngest project of the py-pdf organization. It was created by Martin Thoma in 2022.
It's simply a CLI tool to manipulate PDF files, written in Python and based on the fpdf2 & pypdf libraries.
I'm a maintainer of the project 🙂
What can it do & what's next?
Find out by reading the full article
— Permalink
Seth Michael Larson
Re(blog, tweet, toot, skoot, skeеt, post)
Have you noticed the similar terms used for sharing someone else's content with attribution from your own account? Reblogging was the original term for “blogging” another user's content, first developed by project “reBlog” and popularized by Tumblr. Remember that “blog” is a truncation of “weblog” (... should it have been ’blog to capture the shortening?)
Anyway, here's a railroad diagram of all the different words I could think of:
Tweet was coined for Twitter, a “micro-blogging” platform, so retweets were what reblogging was called on that platform. That naming has since changed, but wouldn't you know it: tweets used to be known as “twits”. Given Twitter's owner, that name makes more sense now than ever.
“Toots” are an elephant-themed “tweet” for Mastodon, with “boost” being the official term. “Skoot” was initially proposed for Bluesky, but cleverer folks suggested “skeеt”, much to Jay's dismay (stop trying to make skoot happen).
Now you see less “platform-specific” terminology being thrown around, like “post” and “repost”. Personally, I'm not a fan: check your posts at the door, we're goin’ ‘bloggin’! 🏄︎
I and many other blogs publish a “blogroll”, or a list of other blogs and pages that we've “reblogged” to our own website. If you're interested, give those a read and discover something new by surfing the web like we did in the 90s.
Thanks for keeping RSS alive! ♥
October 14, 2025
The Python Coding Stack
Creating a Singleton Class in Python And Why You (Probably) Don’t Need It
If you spend long enough in the programming world, you’ll come across the term singleton at some point. And if you hadn’t seen this term yet, well, now you have!
In Python, you don’t need singleton classes. Almost never. But creating one is a great exercise in understanding how Python creates objects. And discussing alternatives to a singleton class helps you explore other aspects of Python.
“Hey Stephen, sounds great, but you haven’t told us what this singleton thing is yet!”
Fair point – here we go.
What’s a Singleton?
The singleton pattern is a design pattern in which a class can produce only one instance. Not two, not three, just one. And each time you try to create a new instance of a singleton class, you get the same object again.
Let me pick a trivial example of when you may need this. You’re writing a game. Perhaps several players can play games simultaneously. And you need a leaderboard. But you only want one leaderboard. You don’t want each game to create its own leaderboard. Otherwise, what’s the point of the leaderboard?
There are other examples in programming when singletons appear to be the solution: creating a connection to a database or to a hardware device – you only want one connection – creating a logger or a configuration manager. But they sound too serious and proper. So I’ll stick with the leaderboard in a game example for this discussion.
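Before building the leaderboard, here's the pattern in its most minimal form: overriding .__new__() so that every attempt to create an instance hands back the same object. This is a bare-bones sketch for illustration, not the implementation the article develops.

```python
class Singleton:
    _instance = None  # the one shared instance, created lazily

    def __new__(cls):
        # Only create the object the first time; reuse it after that
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = Singleton()
b = Singleton()
print(a is b)  # → True: both names refer to the same object
```

However many times you call Singleton(), you get the object created on the first call, which is exactly the "not two, not three, just one" behavior described above.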
Creating a Leaderboard • First Attempt
Let’s say you have a Game class and you create Game instances each time a game is played. I won’t write this class as it’s not needed, and I want to keep this article relatively brief (famous last words!).
The Game class needs to access a leaderboard. Each Game instance needs to access a leaderboard – the same leaderboard. Let’s say you create a class to manage this leaderboard:

You add the .scores data attribute, which is a dictionary, in the class’s .__init__() method. If this is all you need, you definitely don’t need a class. But you add some methods to manage the leaderboard:
Now you have a bit more functionality. Let’s review these methods briefly:
- .add_score() adds a score to the leaderboard, as its name implies. If the player already exists in the .scores dictionary, you add to their points tally. If the player doesn’t exist yet, you add them to the dictionary. There are neater ways to write this method, but this will do here.
- .get_leaderboard() returns a sorted list containing the players in order, from those with the highest number of points to the lowest. If you’re not familiar with sorted(), its key parameter, and lambda functions, you can read one of the most popular articles on The Python Coding Stack: The Key To The key Parameter in Python.
- .display() displays the leaderboard, using .get_leaderboard() along the way.
- .reset() resets the leaderboard. By the way, you see why choosing descriptive names matters! I didn’t include the safety verifications and steps you might want to take for this method.
Looks good? Let’s try it out. For simplicity, you can just add these lines to the same script where you define your class:
You create an instance of Leaderboard and call .add_score() three times. If we had a Game class, the first line, which creates a Leaderboard instance and assigns it to an identifier, would be included in Game.__init__(), but here I’m just creating this variable in the main script. Here’s the displayed leaderboard:
Kate: 15
Stephen: 14
All seems fine. Kate is leading with 15 points. I’m second (also last) with 14 points.
But, later in your program, possibly within a different Game instance, you write the following:
And here’s the output now:
Kate: 15
Stephen: 14
Now Dealing With ‘another_leaderboard’
Sarah: 13
Max: 7
Recall that the first two lines of the output are from the code you wrote earlier.
But where’s Kate in the new leaderboard? And where am I? This code creates a new Leaderboard object, unrelated to the previous one. You can confirm this by showing the identity of each object, using id(), or by using the is operator:
The outputs from these three calls to print() are below:
4347130752
4347064592
False
The two objects have different identities. They’re not the same object. The identity values you get will be different from mine, but what matters here is that they’re different from each other.
You could make sure you don’t call Leaderboard() again in your code. But this is not always possible or straightforward. And you’d still need to make sure your documentation makes this really clear. And will your users read the documentation? Who knows. You may be opening the door to bugs.
Creating a Leaderboard • Second Attempt Using a Singleton Class
Now, you may be thinking: “Hey, I read about this great design pattern: the singleton pattern. Surely, this is a great time to use it…”.
The singleton is an important pattern in some languages. (Spoiler alert: it’s not so useful in Python – you’ll see why later). So let’s try to implement it in Python.
Let’s start with this question: “What’s the first thing that happens when you create an instance of a class in Python?”
If you answered: “Python calls its .__init__() method to initialise the object”, then you’re not quite right. Something else happens first. The .__init__() method initialises a “blank” instance of the class. But what creates that instance in the first place? It’s not .__init__().

It’s .__new__(). In most cases, you don’t need to worry about defining .__new__() when you define a class. The default behaviour is good enough. But in this case, you want to have more control over what happens when you create a new instance of the Leaderboard class:
You add a few things to the class definition:
- You add the class attribute ._instance. This attribute belongs to the class, so each instance won’t have its own version, unlike the data attributes you create in .__init__(). The leading underscore in its name is a convention indicating that this attribute is meant for internal use only and that users of this class shouldn’t access it directly.
- You define the special method .__new__(). Let’s explore the code in this method further.
When you don’t define a .__new__() special method, Python uses the default implementation, which is inherited from the base class object. All Python classes inherit from the object class. However, since you define a .__new__() method in your class, Python uses this method instead of the default .__new__().
But then you need to call super().__new__(), which creates the new blank object using object.__new__(). However, you only do this if the ._instance class attribute is None – that’s what if cls._instance is None: does. Let’s understand what’s happening here.
The first time you create an instance of the class, you create the new blank instance since ._instance is None at first. You then assign this new instance to the class attribute ._instance. The .__new__() method should return the instance, which is then passed on to .__init__(). But we’ll get there later.
What happens if you try to create a new Leaderboard object again? The second time your code calls Leaderboard.__new__(), the class attribute ._instance is no longer None. It now refers to an object of the class. Therefore, the code bypasses the creation of a new object and returns the existing one.
So, is the problem solved? Let’s find out. Here’s a reminder of the code used to explore this class (with a few extra print() calls):
Here’s the output now:
Show leaderboard:
Kate: 15
Stephen: 14
Show leaderboard:
Sarah: 13
Max: 7
4344263552
4344263552
True
There’s good news and bad news – which one do you want first?
Let’s start with the good news. The variable names leaderboard and another_leaderboard refer to the same object. Notice how the identity returned by id() is the same, and the is expression returns True. They’re the same object. When you call Leaderboard() the second time, your code doesn’t create a new instance. It uses the existing one.
Great.
But the leaderboards displayed are still different. Why?
You now have the same object – you’re not creating a new one. But you’re reinitialising it the second time you call Leaderboard(). When you call a class to create an instance by using Leaderboard() (with parentheses), Python first calls the class’s .__new__(). But you dealt with that already – it doesn’t create a new instance. However, the instance returned by .__new__() is passed to the class’s .__init__().
And here’s a reminder of what your Leaderboard.__init__() does:
Your instance already exists. It already has a .scores data attribute with some data in it. However, when you call .__init__() again, you overwrite .scores with an empty dictionary. Any data you already had is gone.
So, you now need to fix this, too. What a pain:
If you haven’t seen the built-in hasattr() function yet, it stands for has attribute. You pass an object and the name of an attribute, and it returns True or False depending on whether that object has that attribute!
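To see hasattr() on its own, here’s a quick sketch (Point is just an illustrative class, not part of the leaderboard code):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
print(hasattr(p, "x"))       # True: the instance has an x attribute
print(hasattr(p, "colour"))  # False: no such attribute
```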
In this case, you pass self as the first argument. And self is the name that refers to the instance you’re dealing with. You also pass the string “initialised” as the second argument.
- If it’s the first time .__init__() is called on this object, the object won’t have the attribute .initialised since it’s created in the .__init__() method itself. Therefore, the code within the if block runs, creating .scores and .initialised.
- The second time you try to initialise the same object – and it will be the same object because of the code you wrote in .__new__() – the object will already have the .initialised data attribute. Therefore, the rest of the code in .__init__() won’t run.
You won’t overwrite .scores when you call Leaderboard() a second time, or a third time… You’re ensuring that an object can only be initialised once.
Run the code now:
Show leaderboard:
Kate: 15
Stephen: 14
Show leaderboard:
Kate: 15
Stephen: 14
Sarah: 13
Max: 7
4336350080
4336350080
True
There’s still only one Leaderboard object. But now, you’re not overwriting any data it already has, either. As you can see, when you call another_leaderboard = Leaderboard(), you don’t create a new instance. Instead, you reuse the one you already have. And you don’t erase its existing data, but add on to it.

Now, you can only have one instance of the Leaderboard class. It’s a singleton!
That’s a lot of work to create a usable singleton class.
And I won’t open the can of worms of the scenario where you may have multiple concurrent threads. Two or more threads may access .__new__() at the same time. And that’s not good!
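For completeness, one common way to guard .__new__() against that race is double-checked locking with a threading.Lock – a minimal sketch under those assumptions, not the article’s code:

```python
import threading

class Leaderboard:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        # The outer check skips the lock on the common path; the inner
        # check ensures only one thread ever creates the instance.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

print(Leaderboard() is Leaderboard())  # True
```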
The Python Coding Stack is getting bigger and better, and not just because there are more of you reading every week. I’ll send out an email announcing more articles, more goodies and more value soon. Stay tuned.
And make sure you upgrade to a paid subscription so you don’t miss anything – now is a good time to upgrade, before monthly and yearly subscription prices go up (they only go up for new paid subscribers, never for existing ones).
We’ve Learnt Stuff By Creating a Singleton
Creating a singleton class, as you did above, is a useful exercise to help you understand how Python creates and initialises new objects. From a learning perspective, it’s a great exercise.
But, do you need this in your code?
Generally, no.
It’s a lot of work.
There’s an easier way to achieve the same effect in Python (which may not exist in other programming languages).
And maybe you don’t really need a single global object that you refer to from all over your program.
Let’s explore some options. And no, I won’t cover all the options. I said I want to keep this article brief (but I’m already on 2k+ words). My aim here is to start you off on a journey to think about what goes where, when, and how…
Alternative to the Singleton Class • Move The Class to a New Module
Let’s roll back the changes to the Leaderboard class. Delete the .__new__() method and the ._instance class attribute. And revert to the original, simpler .__init__(). However, place the class in its own script. Let’s call it leaderboard.py:
Note how .__new__() is no longer there and .__init__() simply creates the .scores data attribute.
There’s also one more line of code after the class definition – and only one. You create an instance of the class.
Now, let’s go back to your main script, which could be anywhere else in your program. Let’s call the main script main.py:
The class defined in leaderboard.py is not a singleton class. It’s a regular class. However, you create the one and only instance you’ll ever create within leaderboard.py. Then, you simply import this instance using from leaderboard import leaderboard. The variable name and module name don’t have to be the same, of course, but in this case they are.
Ah, what if you import the module more than once? I can hear you think… Python only loads a module once. Here’s a demonstration. Add the following print() call to leaderboard.py:
Now, go back to main.py and import leaderboard a second time anywhere else in your code:
Run this code. How many times do you see the text leaderboard.py loaded in your output?
leaderboard.py loaded
Show leaderboard:
Kate: 15
Stephen: 14
Show leaderboard:
Kate: 15
Stephen: 14
Sarah: 13
Max: 7
Once. Modules are only loaded once. And the final output still shows the correct, combined leaderboard.
And there you go – you can only have one instance of Leaderboard now, without any of the hassle of messing around with .__new__() and .__init__().
Modules in Python give you a shortcut to create singleton-like behaviour.
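You can see this caching with any module – repeated imports hand back the same object stored in sys.modules (json is just a convenient stand-in here):

```python
import sys
import json
import json as json_again

# Both names are bound to the same cached module object,
# which also lives in the sys.modules cache.
print(json is json_again)           # True
print(sys.modules["json"] is json)  # True
```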
Sometimes, You Can Simplify Further
In fact, do you really need the Leaderboard class? Sometimes, you do, or you prefer to have everything relating to the leaderboard in a single object. In that case, the solutions in the previous section and in the one after this one are great.
But can you live with a simpler option?
Note that some functions’ names are updated to make them more readable since they’re no longer methods in a class.
Sure, this option may not always be possible. And some purists will scoff at these functions accessing and mutating a global variable (although you can modify the functions to accept the dictionary as one of the arguments, if you prefer).
The leading underscore in _scores indicates that you don’t intend this variable to be used by the user. It’s not meant for public use. Users should only access it through the functions.
If you prefer, you can place _scores and the functions in a separate module and import them. As you saw earlier, Python imports a module only once. Therefore, anything defined within a module is effectively a singleton! In Python, the behaviour of modules makes creating a single instance of a class to share across your code much easier – even trivial. Other languages don’t have this option, which is why the singleton pattern exists.
So, if you think a singleton class is the solution to your problem, consider whether this simpler option will do!
Need More Flexibility and Future-Proof Code?
Here’s yet another option. Create a file called game_services.py:
You can also define the Leaderboard class within this module, if you prefer, but here I’m leaving it in its own module, leaderboard.py. The GameServices class has a single data attribute, .leaderboard, which contains an instance of Leaderboard. This instance of Leaderboard is created when you create an instance of GameServices, which you do in the final line in this script.
You’re using composition in this case. The GameServices class has access to Leaderboard by having a Leaderboard instance as one of its attributes. You can read more about composition in this pair of articles: Choose Your Fighter • Let’s Play (#1 in Inheritance vs Composition Pair) and Choose Your Fighter • Inheritance vs. Composition (#2 in Inheritance vs Composition Pair).
Back in main.py, you can now import this game_services instance:
At first sight, this version seems similar to the first alternative I presented above, just a bit more complex. However, instead of creating an instance of Leaderboard that is then used elsewhere, in this version, the Leaderboard instance is included in a container, the GameServices object. You then use the GameServices instance wherever needed.
There’s more boilerplate code, but you also get more flexibility with this version. What if you want to replace the leaderboard with a different one for testing purposes? The classic singleton class is hard to test. This option simplifies things because you can assign a new Leaderboard to game_services.leaderboard or create a separate GameServices object for this purpose. Nothing else needs to change in your code.
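As a sketch of what that swap could look like in a test – FakeLeaderboard is a hypothetical test double, not part of the article’s code:

```python
class FakeLeaderboard:
    """Hypothetical test double: records scores but never prints."""
    def __init__(self):
        self.scores = {}

    def add_score(self, player, score):
        self.scores[player] = self.scores.get(player, 0) + score

# In a test you could then do:
#     game_services.leaderboard = FakeLeaderboard()
fake = FakeLeaderboard()
fake.add_score("Kate", 15)
print(fake.scores)  # {'Kate': 15}
```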
You can also use a different implementation of Leaderboard – say, an AlternativeLeaderboard class you want to experiment with. It’s easier and safer to make these changes when the leaderboard is included in the GameServices object.
And what if you later decide you want multiple leaderboards? Perhaps one for a version of the game and a different leaderboard for another version of the game? You no longer want a singleton! But with this version of the code, you can easily create another data attribute in GameServices. Sure, you’d be able to do so if using Leaderboard directly, as in the first example. But this option makes it safer and easier to expand your code.
And perhaps, you have other services you want to share, not just a leaderboard. You can also add more data attributes.
Final Words
Note how the alternatives to the singleton class use standard classes that don’t need a .__new__() and extra work in .__init__(), or they don’t use classes at all. They rely on composition within another class and on the fact that Python loads a module only once per program, so instances created in a module are effectively singletons when used elsewhere in the program.
There are other alternatives for situations where you may be tempted to use a singleton. And there may be some less common scenarios when the answer is still to create a singleton class.
So, I’m not stating that you absolutely never need to create a singleton class in Python. But in most cases, there are neater and more Pythonic alternatives.
Still, creating a singleton class, as we did earlier in this article, is a useful learning exercise!
This publication is entirely supported by its readers – there are no adverts, no sponsors! But it takes a lot of effort and time to get one of these articles out. If you want to support this publication further, and get exclusive articles, videos, and more goodies, you can become a paid subscriber.
You can also support this publication by making a one-off contribution of any amount you wish.
Photo by Marek Piwnicki: https://www.pexels.com/photo/born-by-the-stars-17806401/
Code in this article uses Python 3.14
The code images used in this article are created using Snappify. [Affiliate link]
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Further reading related to this article’s topic:
Choose Your Fighter • Let’s Play (#1 in Inheritance vs Composition Pair) and Choose Your Fighter • Inheritance vs. Composition (#2 in Inheritance vs Composition Pair)
Appendix: Code Blocks
Code Block #1
class Leaderboard:
    def __init__(self):
        self.scores = {}
Code Block #2
class Leaderboard:
    def __init__(self):
        self.scores = {}

    def add_score(self, player, score):
        if player in self.scores:
            self.scores[player] += score
        else:
            self.scores[player] = score

    def get_leaderboard(self):
        return sorted(
            self.scores.items(),
            key=lambda item: item[1],
            reverse=True,
        )

    def display(self):
        for player, score in self.get_leaderboard():
            print(f"{player}: {score}")

    def reset(self):
        # You may want to add a confirmation step in a real application,
        # or save a backup to a file first
        self.scores.clear()
Code Block #3
# ...

leaderboard = Leaderboard()
leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
leaderboard.display()
Code Block #4
# ...

# later in the game, or in another game instance...
print("\nNow Dealing With ‘another_leaderboard’")
another_leaderboard = Leaderboard()
another_leaderboard.add_score("Max", 7)
another_leaderboard.add_score("Sarah", 13)
another_leaderboard.display()
Code Block #5
# ...
print(id(leaderboard))
print(id(another_leaderboard))
print(leaderboard is another_leaderboard)
Code Block #6
class Leaderboard:
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    # ...
Code Block #7
# ...

leaderboard = Leaderboard()
leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
leaderboard.display()

# later in the game, or in another game instance...
another_leaderboard = Leaderboard()
another_leaderboard.add_score("Max", 7)
another_leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
another_leaderboard.display()

print()
print(id(leaderboard))
print(id(another_leaderboard))
print(leaderboard is another_leaderboard)
Code Block #8
class Leaderboard:
    # ...

    def __init__(self):
        self.scores = {}

    # ...
Code Block #9
class Leaderboard:
    # ...

    def __init__(self):
        # Prevent reinitialisation
        if not hasattr(self, "initialised"):
            self.scores = {}
            self.initialised = True

    # ...
Code Block #10
# leaderboard.py

class Leaderboard:
    def __init__(self):
        self.scores = {}

    def add_score(self, player, score):
        if player in self.scores:
            self.scores[player] += score
        else:
            self.scores[player] = score

    def get_leaderboard(self):
        return sorted(
            self.scores.items(),
            key=lambda item: item[1],
            reverse=True,
        )

    def display(self):
        for player, score in self.get_leaderboard():
            print(f"{player}: {score}")

    def reset(self):
        # You may want to add a confirmation step in a real application,
        # or save a backup to a file first
        self.scores.clear()

# Create a single instance of Leaderboard
leaderboard = Leaderboard()
Code Block #11
# main.py
from leaderboard import leaderboard

leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
leaderboard.display()

# later in the game, or in another game instance...
leaderboard.add_score("Max", 7)
leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
leaderboard.display()
Code Block #12
# leaderboard.py

print("leaderboard.py loaded")

class Leaderboard:
    # ...
Code Block #13
# main.py
from leaderboard import leaderboard

leaderboard.add_score("Stephen", 10)
leaderboard.add_score("Kate", 15)
leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
leaderboard.display()

# Note, we usually never place imports in the middle of a file,
# but this is just to illustrate that the singleton instance
# is shared even if we import it again.
from leaderboard import leaderboard

# later in the game, or in another game instance...
leaderboard.add_score("Max", 7)
leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
leaderboard.display()
Code Block #14
_scores = {}

def add_score(player, score):
    if player in _scores:
        _scores[player] += score
    else:
        _scores[player] = score

def get_leaderboard():
    return sorted(
        _scores.items(),
        key=lambda item: item[1],
        reverse=True,
    )

def display_leaderboard():
    for player, score in get_leaderboard():
        print(f"{player}: {score}")

def reset_leaderboard():
    # You may want to add a confirmation step in a real application,
    # or save a backup to a file first
    _scores.clear()

add_score("Stephen", 10)
add_score("Kate", 15)
add_score("Stephen", 4)
print("Show leaderboard:")
display_leaderboard()

# later in the game, or in another game instance...
add_score("Max", 7)
add_score("Sarah", 13)
print("\nShow leaderboard:")
display_leaderboard()
Code Block #15
# game_services.py
from leaderboard import Leaderboard

class GameServices:
    def __init__(self):
        self.leaderboard = Leaderboard()

game_services = GameServices()
Code Block #16
# main.py
from game_services import game_services

game_services.leaderboard.add_score("Stephen", 10)
game_services.leaderboard.add_score("Kate", 15)
game_services.leaderboard.add_score("Stephen", 4)
print("Show leaderboard:")
game_services.leaderboard.display()

# later in the game, or in another game instance...
game_services.leaderboard.add_score("Max", 7)
game_services.leaderboard.add_score("Sarah", 13)
print("\nShow leaderboard:")
game_services.leaderboard.display()
PyCoder’s Weekly
Issue #704: Python 3.14 Released and More (Oct. 14, 2025)
#704 – OCTOBER 14, 2025
View in Browser »
Python 3.14: Exploring the New Features
Python 3.14 is here! Christopher Trudeau returns to discuss the new version with Real Python team member Bartosz Zaczyński. This year, Bartosz coordinated the series of preview articles with members of the Real Python team and wrote the showcase tutorial, “Python 3.14: Cool New Features for You to Try.” Christopher’s video course, “What’s New in Python 3.14”, covers the topics from the article and shows the new features in action.
REAL PYTHON podcast
Python 3.14: Cool New Features for You to Try
Learn what’s new in Python 3.14, including an upgraded REPL, template strings, lazy annotations, and subinterpreters, with examples to try in your code. Also available as a video course.
REAL PYTHON
Python 3.14 Is Here. How Fast Is It?
A comprehensive deep dive comparison on performance figures between Python versions and variations, including the JIT and free-threaded mode.
MIGUEL GRINBERG
Free-Threaded Python Library Compatibility Checker
A heat map and table summarizing Python free-threaded compatibility in a variety of common Python packages.
FT-CHECKER.COM
Python 3.14: 3 asyncio Changes
asyncio changes are often overlooked; in the latest 3.14 release, there are three new asyncio features and changes.
JAMIE CHANG • Shared by Jamie Chang
Python Jobs
Senior Python Developer (Houston, TX, USA)
Articles & Tutorials
Why It Took 4 Years to Get a Lock Files Specification
Lock files have always been tool specific, ranging from the simple format found in requirements.txt to more complex mechanisms in Poetry and uv. In an attempt to normalize the format for interchange, there is a new PEP. This article talks about why it took so long to get there. See also the associated HN discussion.
BRETT CANNON
The Complete PySpark SQL Guide
PySpark handles large datasets but its syntax has a steep learning curve. PySpark SQL solves this by enabling familiar SQL-style DataFrame operations. This walkthrough teaches you everything from loading data to window functions and pandas UDFs.
CODECUT.AI • Shared by Khuyen Tran
Functions: A Complete Reference
This article is a complete reference for all things related to functions in Python, including structure, signatures, *args, **kwargs, using functions as objects, and more.
RODRIGO GIRÃO SERRÃO
“Open Infra Is Not Free”, PyPI Security, & Grants Pause
“The Python Software Foundation has joined the OpenSSF and other organizations in signing the ‘Open Infrastructure Is Not Free: A Joint Statement on Sustainable Stewarding’.”
PSF
Alberta’s Pipelines
Canada’s province of Alberta is oil rich and hosts many pipelines. The province publishes lots of open data on the infrastructure. This post shows you how to explore it.
MARK LITWINTSCHIK
Lazy Imports Using wrapt
PEP 810 proposes adding explicit lazy imports to Python, but you can already achieve this with third-party libraries. This post shows you how, using wrapt.
GRAHAM DUMPLETON
Pyx: The Other Side of the uv Coin
Talk Python interviews Charlie Marsh and they talk about Astral’s new product, pyx, a service that intends to make package management faster.
KENNEDY & MARSH podcast
pytest-check 2.6.0 Release
A new version of pytest-check is out and this quick post shows you the key new feature: changes to how check.raises() works.
PYTHONTEST
Django: One ORM to Rule All Databases
Comparing the Django ORM support across official database backends, so you don’t have to learn it the hard way.
PAOLO MELCHIORRE
Mastering Python Context Managers
Go beyond just using open() and learn how Python context managers work and where they are useful.
MOH HAZIANE
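For a taste of the protocol the article covers: a context manager is just an object with __enter__ and __exit__ methods – a minimal timer sketch (Timer is an illustrative name, not from the article):

```python
import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.elapsed = time.perf_counter() - self.start
        return False  # don't suppress exceptions

with Timer() as t:
    total = sum(range(100_000))
print(t.elapsed >= 0.0)  # True
```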
= Alignment in String Formatting
A quick TIL article on how to use = in string formatting to do alignment with numbers.
ROD
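In short, the = option in a format spec places any fill characters between the sign and the digits:

```python
x = -42
print(f"{x:=8}")  # '-     42': sign first, padding before the digits
print(f"{x:08}")  # '-0000042': a 0 fill defaults to '=' alignment
print(f"{x:>8}")  # '     -42': compare with plain right-alignment
```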
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
October 15, 2025
REALPYTHON.COM
Workshop: Creating Python Communities
October 15 to October 16, 2025
PYTHON-GM.ORG
Swiss Python Summit 2025
October 16 to October 18, 2025
PYTHON-SUMMIT.CH
PyCon NL 2025
October 16 to October 17, 2025
PYCON-NL.ORG
PyData Bristol Meetup
October 16, 2025
MEETUP.COM
PyLadies Dublin
October 16, 2025
PYLADIES.COM
PyCon Thailand 2025
October 17 to October 19, 2025
PYCON.ORG
PyCon Finland 2025
October 17 to October 18, 2025
PLONECONF.ORG
PyConES 2025
October 17 to October 20, 2025
PYCON.ORG
PyBay 2025
October 18 to October 19, 2025
PYBAY.ORG
Python Brasil 2025
October 21 to October 28, 2025
PYTHONBRASIL.ORG.BR
PyCon Davao 2025
October 25 to October 27, 2025
DURIANPY.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #704.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Docs Editorial Board
Meeting Minutes: Oct 14, 2025
Meeting Minutes from Python Docs Editorial Board: Oct 14, 2025
Real Python
Python Descriptors
Descriptors are a specific Python feature that power a lot of the magic hidden under the language’s hood. If you’ve ever thought that Python descriptors are an advanced topic with few practical applications, then this video course is the perfect tool to help you understand this powerful feature. You’ll come to understand why Python descriptors are such an interesting topic and discover the kinds of use cases where you can apply them.
By the end of this video course, you’ll know:
- What Python descriptors are
- Where they’re used in Python’s internals
- How to implement your own descriptors
- When to use Python descriptors
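As a small preview of the machinery involved, here’s a minimal data descriptor sketch (Positive and Account are illustrative names only, not from the course):

```python
class Positive:
    """Data descriptor that rejects non-positive values."""
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        obj.__dict__[self.name] = value

class Account:
    balance = Positive()

account = Account()
account.balance = 100
print(account.balance)  # 100
```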
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
October 13, 2025
Talk Python Blog
Talk Python in Production Story
If you were a website that started from modest beginnings and grew over ten years to support a ton of features and a variety of users, wouldn’t it be great if someone told your story?
My 15 minutes in the spotlight are here! Michael just published his book telling the story in Talk Python in Production. I’ll let Michael tell you all about it. Check out his post giving you the full back story on why the book was written and how it’s different than most DevOps books for Python devs.
Jacob Perkins
Monitoring Celery Tasks with Sentry
Sentry is a great tool for monitoring Celery tasks, and alerting when they fail or don’t run on time. But it requires a bit of work to set up properly. Below is some sample code for setting up Sentry monitoring of periodic tasks, followed by an explanation.
import math

import sentry_sdk
from celery import signals
from sentry_sdk import monitor
from sentry_sdk.integrations.celery import CeleryIntegration

@signals.beat_init.connect  # if you use beats
@signals.celeryd_init.connect
def init_sentry(**kwargs):
    sentry_sdk.init(
        dsn=...,
        integrations=[
            CeleryIntegration(monitor_beat_tasks=False)
        ],
    )

@signals.worker_shutdown.connect
@signals.task_postrun.connect
def flush_sentry(**kwargs):
    sentry_sdk.flush(timeout=5)

def add_periodic_task(celery, schedule, task):
    max_runtime = math.ceil(schedule * 4 / 60)
    monitor_config = {
        "recovery_threshold": 1,
        "failure_issue_threshold": 10,
        "checkin_margin": max_runtime,
        "max_runtime": max_runtime,
        "schedule": {
            "type": "interval",
            "value": math.ceil(schedule / 60.0),
            "unit": "minute",
        },
    }
    name = task.__name__
    task = monitor(monitor_slug=name, monitor_config=monitor_config)(task)
    celery.add_periodic_task(schedule, celery.task(task).s(), name=name)
Initialize Sentry
The init_sentry function must be called before any tasks start executing. The sentry docs for celery recommend using the celeryd_init signal. And if you use celery beats for periodic task execution, then you also need to initialize on the beat_init signal.
Monitoring Beats Tasks
In this example, I’m setting monitor_beat_tasks=False to show how you can do manual monitoring. monitor_beat_tasks=True is much simpler, and doesn’t require any code like in add_periodic_task. But in my experience, it’s not reliable when using async celery functions. The automatic beats monitoring uses some celery signals that likely don’t get executed correctly under async conditions. But manual monitoring isn’t that hard with a function wrapper, as shown above.
Adding a Periodic Task
The add_periodic_task function takes a Celery instance, a periodic interval in seconds, and a function to execute. This function can be normal or async. It then does the following:
- Calculates a max_runtime in minutes, so that Sentry knows when a task has gone over time. This is also used for checkin_margin, giving the task plenty of buffer time before an issue is created. You should adjust these according to your needs.
- Creates a monitor_config for Sentry, specifying the following:
  - schedule in minutes (rounded up, because Sentry doesn’t handle schedules in seconds)
  - the number of failures allowed before creating an issue (I put 10, but you should adjust as needed)
  - how many successful checkins are required before the issue is marked as resolved (1 is the default, but adjust as needed)
- Wraps the function in the Sentry monitor decorator, using the function’s name as the monitor_slug. With default beats monitoring, the slug is set to the full package.module.function path, which can be quite long and becomes hard to scan when you have many tasks.
- Schedules the task in Celery.
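To make the interval arithmetic concrete, here is what those values work out to for a hypothetical task scheduled every 900 seconds (15 minutes):

```python
import math

schedule = 900  # hypothetical periodic interval, in seconds

# max_runtime is four times the interval, rounded up to whole minutes;
# it doubles as the checkin_margin buffer
max_runtime = math.ceil(schedule * 4 / 60)

# Sentry wants the schedule itself in whole minutes, rounded up
value = math.ceil(schedule / 60.0)

print(max_runtime, value)  # → 60 15
```

So a 15-minute task gets a 60-minute runtime budget and check-in margin before Sentry flags it.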
Sentry Flush
While this may not be strictly necessary, calling sentry_sdk.flush on the worker_shutdown and task_postrun signals ensures that events are sent to Sentry when a Celery task completes.
Monitoring your crons
Once this is all set up and running, you should be able to go to Insights > Crons in your Sentry web UI, and see all your Celery tasks. Double-check your monitor settings to make sure they’re correct, then sit back and relax while Sentry keeps track of how your tasks are running.
Real Python
How to Use Python: Your First Steps
If you’re starting fresh with programming and wondering how to use Python, this tutorial will give you the foundation you need to get started with Python step by step. You’ll learn how to install Python, run your first program, and understand the language’s clear syntax.
You’ll also explore variables, loops, functions, classes, and error handling while discovering tools, coding style, and libraries that help you write effective Python code from day one.
By the end of this tutorial, you’ll understand that:
- You can install Python on Windows, macOS, and Linux using binaries, package managers, or distributions like Anaconda.
- You can use the Python REPL to experiment interactively before writing full scripts, modules, and projects.
- Built-in data types like strings, lists, dictionaries, and sets provide powerful ways to structure information.
- You can handle errors with syntax checks, exceptions, and debugging practices to keep your code running smoothly.
- Tools like editors, integrated development environments (IDEs), and AI assistants can boost your productivity when writing Python code.
You’ll go through your first steps with clear examples so that you can start coding in Python confidently and build on solid ground.
Get Your Code: Click here to download the free sample code that shows you how to use Python.
Take the Quiz: Test your knowledge with our interactive “How to Use Python: Your First Steps” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Use Python: Your First Steps
Review the basics of Python with this quiz. Practice syntax, keywords, variables, errors, and tools every beginner should know.
Why Should You Use Python?

Python is a high-level, interpreted, interactive, and object-oriented programming language that’s a great choice as a first language because its code reads like English. It’s flexible, powerful, and allows you to do many things, both big and small.
With Python, you can write basic programs and scripts, as well as create complex and large-scale enterprise solutions. Here’s a sampling of its uses:
- Building desktop applications, including GUI applications, CLI tools, and even games
- Doing mathematical and scientific data analysis
- Building web applications
- Administering computer systems and automating tasks
- Performing DevOps tasks
Note: To learn more about the tasks you can perform with Python, check out the What Can I Do With Python? tutorial.
You’ll find Python across many high-traffic websites. For example, Reddit is written in Python. Dropbox’s earliest prototypes were in Python, and it remains central there. YouTube uses Python among its back-end languages. Meanwhile, Instagram runs on Django, and Pinterest has historically used Python with a modified Django stack.
Python offers many features that make it attractive as your first programming language:
- Readable, beginner-friendly syntax: Python’s design favors code readability, so you spend more time learning programming ideas and less time fighting syntax.
- Accessible: People of all ages, from school children to retirees, have learned Python, and so can you.
- Batteries included: The standard library ships with modules for file processing, networking, mathematics, date and time processing, testing, and more.
- Large community and abundant resources: There’s a vast ecosystem of tutorials, videos, forums, and local meetups for every topic and skill level.
- Proven in the real world: From startups to enterprises and research labs, Python powers production systems, data pipelines, and AI tooling across industries.
- Versatile and scalable: It can be used for quick scripts and automation, as well as web applications, data analysis, machine learning, and even game development.
- Free and cross-platform: Python runs on Windows, macOS, and Linux, and it’s free for both personal and commercial use.
- Open source: Python source code is publicly available under the Python Software Foundation License Version 2, which grants broad rights to use, modify, and distribute, including in proprietary software. Additionally, anyone can contribute to its development.
Compared to other programming languages, Python offers several key features:
- Interpreted: It’s portable and quicker to experiment with than compiled languages.
- Multiparadigm: It lets you write code in different styles, including object-oriented, imperative, and functional.
- Dynamically typed: It checks variable types at runtime, so you don’t need to declare them explicitly.
- Strongly typed: It won’t let unsafe operations on incompatible types go unnoticed.
There’s a lot more to learn about Python. But by now, you should have a better idea of why Python is so popular and why you should consider learning to program with it.
How Do You Install and Run Python?
Before you can learn how to use Python, you need to install it. Python works on Linux, macOS, Windows, and several other platforms. You can download and install the latest version from the official download page. You also have the option to install and use different Python versions for different projects.
Note: For a complete guide on installing Python on your computer, check out the How to Install Python on Your System: A Guide tutorial.
To check what Python version has been installed globally on your operating system, open the terminal or command line and run the following:
Read the full article at https://realpython.com/python-first-steps/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: How to Use Python: Your First Steps
Ready to check your foundation in Python? This quiz reviews core beginner topics like variables, keywords, strings, and errors. You’ll also practice commands and tools that help you get started quickly.
As you work through the questions, focus on recognizing patterns and understanding why an answer is correct. For a guided introduction, see How to Use Python: Your First Steps.
Brian Okken
Python lazy imports you can use today
There’s a proposal, PEP 810 – Explicit lazy imports for Python to natively support lazy importing starting in Python 3.15.
However, it has not been accepted yet, and even if it is accepted, 3.15 is a year away. What do we do now?
The techniques covered in this post allow you to use lazy importing now, with Python 3.13, Python 3.12, …, really every version of Python.
Let’s look at a small code example that imports something.
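One long-standing technique, which may or may not be the one the post settles on, is simply to move the import into the function that needs it. The module is loaded on the first call instead of at program startup, and sys.modules caches it, so every later call pays only a dictionary lookup:

```python
def dump_report(data):
    # json is imported on the first call, not when this module is loaded;
    # subsequent calls hit the sys.modules cache
    import json
    return json.dumps(data)

print(dump_report({"ok": True}))  # → {"ok": true}
```

The name dump_report is just an illustration; the point is that startup no longer pays for an import the program might never use.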
Talk Python to Me
#523: Pyrefly: Fast, IDE-friendly typing for Python
Python typing got fast enough to feel invisible. Pyrefly is a new, open source type checker and IDE language server from Meta, written in Rust, with a focus on instant feedback and real-world DX. Today, we will dig into what it is, why it exists, and how it plays with the rest of the typing ecosystem. We have Abby Mitchell, Danny Yang, and Kyle Into from Pyrefly here to dive into the project.

Episode sponsors: Sentry Error Monitoring (code TALKPYTHON), Agntcy, Talk Python Courses

Links from the show:
Abby Mitchell: linkedin.com/in/abby-mitchell
Danny Yang: linkedin.com/in/yangdanny97
Kyle Into: linkedin.com/in/kyleinto
Pyrefly: pyrefly.org
Pyrefly documentation, installation guide, and IDE guide: pyrefly.org/en/docs
Pyrefly GitHub repository: github.com/facebook/pyrefly
Pyrefly VS Code extension: marketplace.visualstudio.com
Introducing Pyrefly (Meta engineering blog): engineering.fb.com
Pyrefly on PyPI: pypi.org/project/pyrefly
InfoQ coverage of the Pyrefly announcement: infoq.com
Pyrefly Discord invite: discord.gg/Cf7mFQtW7W
Python typing conformance suite: github.com/python/typing/tree/main/conformance
Typing conformance leaderboard: htmlpreview.github.io
Watch this episode on YouTube: youtube.com/watch?v=P4RKxl_giH4
Episode deep-dive and transcripts: talkpython.fm/523
October 12, 2025
Anwesha Das
ssh version output in stderr
Generally Linux commands print their version on stdout, for example git --version or python --version. But not ssh: ssh -V prints its version to stderr.
To test it you can do the following:
git version on stdout
> git --version 2> error 1> output
> cat output
git version 2.51.0
ssh version on stderr
> ssh -V 2>> error 1>> output
> cat error
OpenSSH_9.9p1, OpenSSL 3.2.4 11 Feb 2025
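The practical consequence for scripts is that you have to capture the stderr stream. Here is a minimal Python sketch using a child process that, like ssh -V, writes its version to stderr; for the real thing you would run ["ssh", "-V"] instead:

```python
import subprocess
import sys

# Simulate a tool that reports its version on stderr, as ssh -V does
proc = subprocess.run(
    [sys.executable, "-c", "import sys; print('MyTool 1.0', file=sys.stderr)"],
    capture_output=True,
    text=True,
)
print("stdout:", repr(proc.stdout))  # empty
print("stderr:", repr(proc.stderr))  # the version line ends up here
```

If you only read proc.stdout, the version appears to be missing entirely.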
Hope this will be helpful.
October 11, 2025
Paolo Melchiorre
My Django On The Med 2025 🏖️
A summary of my experience at Django On The Med 2025 told through the posts I published on Mastodon during the conference.
Hugo van Kemenade
Releasing Python 3.14.0
Prologue #
I livetooted the release of Python 3.14.0. Here it is in blogpost form!
One week #
Only one week left until the release of Python 3.14.0 final!
What are you looking forward to?
#Python
#Python314
Tue, Sep 30, 2025, 15:19 EEST
Three days #
Three days until release and a bug in the Linux kernel has turned a dozen buildbots red…
It’s already been fixed in the kernel, but will take some time to bubble up. We’ll skip that test for relevant kernel versions in the meantime.
#Python
#Python314
Sat, Oct 4, 2025, 16:15 EEST
Green #
And back to green!
#Python
#Python314
Sun, Oct 5, 2025, 16:58 EEST
Release day! #
First off, check blockers and buildbots.
A new release-blocker appeared yesterday (because of course) but it can wait until 3.14.1.
Three deferred-blockers are also waiting until 3.14.1.
A new tier-2 buildbot failure appeared yesterday (because of course) but it had previously been offline for a month and will need some reconfiguration. Can ignore.
OK, let’s make a Python!
#Python
#Python314
#release
Tue, Oct 7, 2025, 11:40 EEST
run_release.py #
Next up, merge and backport the final change to What’s New in Python 3.14 to declare it latest stable.
Now start run_release.py, the main release automation script, which does a bunch of pre-checks, runs blurb to create a merged changelog, bumps some numbers, and pushes a branch and tag to my fork. It’ll go upstream at the end of a successful build.
Then kick off the CI to build source zips, docs and Android binaries.
#Python
#Python314
#release
Tue, Oct 7, 2025, 12:43 EEST
Installers #
(That’s actually the second CI attempt; we had to update some script arguments following an Android test runner update.)
This build takes about half an hour.
I’ve also informed the Windows and macOS release managers about the tag and they will start up installer builds.
This takes a few hours, so I’ve got time to finish up the release notes.
PEP 101 is the full process, but much is automated and we don’t need to follow it all manually.
#Python
#Python314
#release
Tue, Oct 7, 2025, 12:52 EEST
Windows #
The Windows build has been started.
The jobs with profile-guided optimisation (PGO) build once, then collect a profile by running the tests, and then build again using that profile, to see how ‘real’ code executes and optimises for that.
Meanwhile, the docs+source+Android build has finished and the artifacts have been copied to where they need to go with SBOMs created.
#Python
#Python314
#release
Tue, Oct 7, 2025, 13:50 EEST
macOS #
The Windows build is ready and macOS is underway.
#Python
#Python314
#release
Tue, Oct 7, 2025, 15:36 EEST
Final steps #
macOS installer done, next on to the final publishing and announcing steps.
#Python
#Python314
#release
Tue, Oct 7, 2025, 17:02 EEST
🚀 It’s out! #
🥧 Please install and enjoy Python 3.14!
#Python
#Python314
#release
Tue, Oct 7, 2025, 17:27 EEST
Finally #
And the last few tasks: announce also on the blog & mailing lists, update the PEP & downloads landing page, fix Discourse post links, unlock the 3.14 branch for the core team to start landing PRs that didn’t need to be in the RC, and eat the pie.
A HUGE thanks to @sovtechfund Fellowship for allowing me to dedicate my time on getting this out 🎉
#Python
#Python314
#release
Tue, Oct 7, 2025, 19:28 EEST
Django Weblog
2026 DSF Board Nominations
Nominations are open for the elections of the 2026 Django Software Foundation Board of Directors. The Board guides the direction of the marketing, governance and outreach activities of the Django community. We provide funding, resources, and guidance to Django events on a global level.
The Board of Directors consists of seven volunteers who are elected to two-year terms. This is an excellent opportunity to help advance Django. We can’t do it without volunteers, such as yourself. Anyone, including current Board members, DSF members, or the public at large, can apply to the Board. It is open to all.
How to apply
If you are interested in helping to support the development of Django we’d enjoy receiving your application for the Board of Directors. Please fill out the 2026 DSF Board Nomination form by 23:59 on October 31, 2025 Anywhere on Earth to be considered.
Submit your nomination for the 2026 Board
If you have any questions about applying, the work, or the process in general please don’t hesitate to reach out on the Django forum or via email to foundation@djangoproject.com.
Thank you for your time, and we look forward to working with you in 2026!
The 2025 DSF Board of Directors.