Planet Python
Last update: October 09, 2025 09:43 PM UTC
October 09, 2025
Everyday Superpowers
Why I switched from HTMX to Datastar
In 2022, David Guillot delivered an inspiring DjangoCon Europe talk, showcasing a web app that looked and felt as dynamic as a React app. Yet he and his team had done something bold. They converted it from React to HTMX, cutting their codebase by almost 70% while significantly improving its capabilities.
Since then, teams everywhere have discovered the same thing: turning a single-page app into a multi-page hypermedia app often slashes lines of code by 60% or more while improving both developer and user experience.
I saw similar results when I switched my projects from HTMX to Datastar. It was exciting to reduce my code while building real-time, multi-user applications without needing WebSockets or complex frontend state management.
While preparing my FlaskCon 2025 talk, I hit a wall. I was juggling HTMX and AlpineJS to keep pieces of my UI in sync, but they fell out of step. I lost hours debugging why my component wasn’t updating. Neither library communicates with the other. Since they are different libraries created by different developers, you are the one responsible for helping them work together.
Managing the dance of initializing components at various times and orchestrating events between them meant writing more code than I wanted and spending more time than I could spare.
Knowing that Datastar had the capabilities of both libraries with a smaller download, I thought I’d give it a try. It handled my use case without breaking a sweat, and the resulting code was much easier to understand.
I appreciate that there’s less code to download and maintain. Having a library handle all of this in under 11 KB is great for improving page load performance, especially for users on mobile devices. The less you need to download, the better off you are.
But that's just the starting point.
As I incorporated Datastar into my project at work, I began to appreciate Datastar’s API. It feels significantly lighter than HTMX. I find that I need to add fewer attributes to achieve the desired results.
For example, most interactions with HTMX require you to create an attribute to define the URL to hit, what element to target with the response, and then you might need to add more to customize how HTMX behaves, like this:
<span hx-target="#rebuild-bundle-status-button"
      hx-select="#rebuild-bundle-status-button"
      hx-swap="outerHTML"
      hx-trigger="click"
      hx-get="/rebuild/status-button"></span>
One doesn’t always need all of these, but I find it common to have two or three attributes every time[2]{And then there are the times I need to remember to look up the ancestry chain to see if any attribute changes the way I’m expecting things to work. Those are confusing bugs when they happen!}.
With Datastar, I regularly use just one attribute, like this:
<span data-on-click="@get('/rebuild/status-button')"></span>
This gives me less to think about when I return months later and need to recall how this works.
The primary difference between HTMX and Datastar is that HTMX is a front-end library that advances the HTML specification, while Datastar is a server-driven library that aims to create high-performance, web-native, live-updating web applications.
In HTMX, you describe its behavior by adding attributes to the element that triggers the request, even if it updates something far away on the page. That’s powerful, but it means your logic is scattered across multiple layers. Datastar flips that: the server decides what should change, keeping all your update logic in one place.
To cite an example from HTMX’s documentation:
<div>
  <div id="alert"></div>
  <button hx-get="/info"
          hx-select="#info-details"
          hx-swap="outerHTML"
          hx-select-oob="#alert">
    Get Info!
  </button>
</div>
When the button is pressed, it sends a GET request to `/info`, replaces the button with the element in the response that has the ID 'info-details', and then retrieves the element in the response with the ID 'alert', replacing the element with the same ID on the page.
This is a lot for that button element to know. To author this code, you need to know what information you’re going to return from the server, which is decided outside of the HTML you’re editing. This is where HTMX loses the “locality of behavior” I like so much.
Datastar, on the other hand, expects the server to define the behavior, which keeps the update logic in one place and works better in practice.
To replicate the behavior above, you have options. The first option keeps the HTML similar to above:
<div>
  <div id="alert"></div>
  <button id="info-details"
          data-on-click="@get('/info')">
    Get Info!
  </button>
</div>
In this case, the server can return an HTML string with two root elements that have the same IDs as the elements they’re updating:
<p id="info-details">These are the details you are looking for…</p>
<div id="alert">Alert! This is a test.</div>
I love this option because it’s simple and performant.
A better option would change the HTML to treat it as a component.
What is this component? It appears to be a way for the user to get more information about a specific item.
What happens when the user clicks the button? It seems like either the information appears, or there is no information and we render an error instead. Either way, the component becomes static.
Maybe we could split the component into each state. First, the placeholder:
<!-- info-component-placeholder.html -->
<div id="info-component">
  <button data-on-click="@get('/product/{{product.id}}/info')">
    Get Info!
  </button>
</div>
Then the server could render the information the user requests…
<!-- info-component-get.html -->
<div id="info-component">
  {% if alert %}<div id="alert">{{ alert }}</div>{% endif %}
  <p>{{product.additional_information}}</p>
</div>
…and Datastar will update the page to reflect the changes.
This particular example is a little wonky, but I hope you get the idea. Thinking at a component level is better as it prevents you from entering an invalid state or losing track of the user's state.
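To make the server side concrete, here’s a minimal sketch of a view that renders the requested component state, using the same datastar-py Django helpers that appear in the shopping-cart example below. The view name, URL wiring, and get_product helper are hypothetical stand-ins:

from datastar_py.django import DatastarResponse, ServerSentEventGenerator as SSE
from django.template.loader import render_to_string


def product_info(request, product_id):
    # Hypothetical lookup helper; substitute your own data access.
    product = get_product(product_id)
    context = {
        "product": product,
        "alert": None if product.additional_information else "No information available.",
    }
    # The server decides what changes: render the component's new state
    # and let Datastar patch it into the page by ID.
    return DatastarResponse([
        SSE.patch_elements(render_to_string("info-component-get.html", context)),
    ])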
One of the amazing things from David Guillot's talk is how his app updated the count of favored items even though that element was very far away from the component that changed the count.
David’s team accomplished that by having HTMX trigger a JavaScript event, which in turn triggered the remote component to issue a GET request to update itself with the most up-to-date count.
With Datastar, you can update multiple components at once, even in a synchronous function.
If we have a component that allows someone to add an item to a shopping cart:
<form id="purchase-item"
data-on-submit="@post('/add-item', {contentType: 'form'})">"
>
<input type=hidden name="cart-id" value="{{cart.id}}">
<input type=hidden name="item-id" value="{{item.id}}">
<fieldset>
<button data-on-click="$quantity -= 1">-</button>
<label>Quantity
<input name=quantity type=number data-bind-quantity value=1>
</label>
<button data-on-click="$quantity += 1">+</button>
</fieldset>
<button type=submit>Add to cart</button>
{% if msg %}
<p class=message>{{msg}}</p>
{% endif %}
</form>
And another one that shows the current count of items in the cart:
<div id="cart-count">
<svg viewBox="0 0 10 10" xmlns="http://www.w3.org/2000/svg">
<use href="#shoppingCart">
</svg>
{{count}}
</div>
Then a developer can update them both in the same request. This is one way it could look in Django:
from datastar_py.consts import ElementPatchMode
from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)
from django.template.loader import render_to_string


def add_item(request):
    # skipping all the important state updates
    return DatastarResponse([
        SSE.patch_elements(
            render_to_string('purchase-item.html', context=dict(cart=cart, item=item, msg='Item added!'))
        ),
        SSE.patch_elements(
            render_to_string('cart-count.html', context=dict(count=item_count))
        ),
    ])
Being a part of the Datastar Discord, I appreciate that Datastar isn't just a helper script. It’s a philosophy about building apps with the web’s own primitives, letting the browser and the server do what they’re already great at.
Where HTMX is trying to push the HTML spec forward, Datastar is more interested in promoting the adoption of web-native features, such as CSS view transitions, Server-Sent Events, and web components, where appropriate.
This has been a massive eye-opener for me, as I’ve long wanted to leverage each of these technologies, and now I’m seeing the benefits.
One of the biggest wins I achieved with Datastar was by refactoring a complicated AlpineJS component and extracting a simple web component that I reused in multiple places[3]{I’ll talk more about this in an upcoming post.}.
I especially appreciate this because there are times when it's best to rely on JavaScript to accomplish a task. But it doesn't mean you have to reach for a tool like React to achieve it. Creating custom HTML elements is a great pattern to accomplish tasks with high locality of behavior and the ability to reuse them across your app.
However, Datastar provides you with even more capabilities.
Apps built with collaboration as a first-class feature stand out from the rest, and Datastar is up to the challenge.
To accomplish this with HTMX, most developers either "pull" information from the server by polling every few seconds or write custom WebSocket code, which increases complexity.
Datastar uses a simple web technology called Server-Sent Events (SSE) to allow the server to "push" updates to connected clients. When something changes, such as a user adding a comment or a status change, the server can immediately update browsers with minimal additional code.
You can now build live dashboards, admin panels, and collaborative tools without crafting custom JavaScript. Everything flows from the server, through HTML.
Additionally, if a client’s connection is interrupted, the browser will automatically attempt to reconnect without requiring additional code, and it can even tell the server, "This is the last event I received." It’s wonderful.
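As a rough sketch of what pushing looks like, the view below assumes DatastarResponse also accepts a generator of events to keep an SSE connection open, an assumption worth verifying against the datastar-py documentation. The get_cart_count helper is hypothetical:

import asyncio

from datastar_py.django import DatastarResponse, ServerSentEventGenerator as SSE
from django.template.loader import render_to_string


async def cart_updates(request):
    # Assumption: DatastarResponse can stream a generator of events,
    # holding the connection open for server-pushed updates.
    async def events():
        while True:
            count = await get_cart_count(request)  # stand-in for real state
            yield SSE.patch_elements(
                render_to_string("cart-count.html", {"count": count})
            )
            await asyncio.sleep(1)  # or await a change notification instead

    return DatastarResponse(events())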
Being a part of the Datastar community on Discord has helped me appreciate the Datastar vision of making web apps. They aim to have push-based UI updates, reduce complexity, and leverage tools like web components to handle more complex situations locally. It’s common for the community to help newcomers by helping them realize they’re overcomplicating things.
Here are some of the tips I’ve picked up:
- Don’t be afraid to re-render the whole component and send it down the pipe. It’s easier, it probably won’t affect performance too much, you get better compression ratios, and it’s incredibly fast for the browser to parse HTML strings.
- The server is the source of truth and is more powerful than the browser. Let it handle the majority of the state. You probably don’t need reactive signals as much as you think you do.
- Web components are great for encapsulating logic into a custom element with high locality of behavior. A great example of this is the star field animation in the header of the Datastar website. The `<ds-starfield>` element encapsulates all the code to animate the star field and exposes three attributes to change its internal state. Datastar drives the attributes whenever the range input changes or the mouse moves over the element.
But what I’m most excited about are the possibilities that Datastar enables. The community is routinely creating projects that push well beyond the limits experienced by developers using other tools.
The examples page includes a database monitoring demo that leverages Hypermedia to significantly improve the speed and memory footprint of a demo presented at a JavaScript conference.
The one million checkbox experiment was too much for the server it started on. Anders Murphy used Datastar to create one billion checkboxes on an inexpensive server.
But the one that most inspired me was a web app that displayed data from every radar station in the United States. When a blip changed on a radar, the corresponding dot in the UI would change within 100 milliseconds. This means that *over 800,000 points are being updated per second*. Additionally, the user could scrub back in time for up to an hour (with under a 700 millisecond delay). Can you imagine this as a Hypermedia app? This is what Datastar enables.
I’m still in what I consider my discovery phase of Datastar. Replacing the standard HTMX functionality of ajaxing updates to a UI was quick and easy to implement. Now I’m learning and experimenting with different patterns to use Datastar to achieve more and more.
For decades, I’ve been interested in ways I could provide better user experiences with real-time updates, and I love that Datastar enables me to do push-based updates, even in synchronous code.
HTMX filled me with so much joy when I started using it. But I haven’t felt like I lost anything since switching to Datastar. In fact, I feel like I’ve gained so much more.
If you’ve ever felt the joy of using HTMX, I bet you’ll feel the same leap again with Datastar. It’s like discovering what the web was meant to do all along.
Read more...
Mike Driscoll
An Intro to Python 3.14’s New Features
Python 3.14 came out this week and has many new features and improvements. For the full details behind the release, the documentation is the best source. However, you will find a quick overview of the major changes here.
As with most Python releases, backwards compatibility is rarely broken. However, there has been a push to clean up the standard library, so be sure to check out what was removed and what has been deprecated. In general, most of the items in these lists are things the majority of Python users do not use anyway.
But enough with that. Let’s learn about the big changes!
Release Changes in 3.14
The biggest change to come to Python in a long time is the free-threaded build of Python. While free-threaded Python existed in 3.13, it was considered experimental at that time. Now in 3.14, free-threading is officially supported, but still optional.
Free-threading is a build option: you can turn it on when you build Python. There is still debate about turning free-threading on by default, but that had not been decided at the time of writing this article.
Another new change in 3.14 is an experimental just-in-time (JIT) compiler for macOS and Windows release binaries. Currently, the JIT compiler is NOT recommended in production. If you’d like to test it out, you can set PYTHON_JIT=1 as an environment variable. When running with JIT enabled, you may see Python perform 10% slower or up to 20% faster, depending on workload.
Note that native debuggers and profilers (gdb and perf) are not able to unwind JIT frames, although Python’s own pdb and profile modules work fine with them. Free-threaded builds do not support the JIT compiler, though.
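If you want to check whether the JIT is actually available and running, 3.14 adds an introspection namespace, sys._jit, per the release notes. It is private, so treat this sketch as something to verify against your own build:

# Query JIT status at runtime (sys._jit is new, and private, in 3.14).
import sys

print(sys._jit.is_available())  # compiled with JIT support?
print(sys._jit.is_enabled())    # enabled, e.g. via PYTHON_JIT=1?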
The last item of note is that PGP (Pretty Good Privacy) signatures are not provided for Python 3.14 or newer versions. Instead, users must use Sigstore verification materials. Releases have been signed using Sigstore since Python 3.11.
Python Interpreter Improvements
There are a slew of new improvements to the Python interpreter in 3.14. Here is a quick listing along with links:
- PEP 649 and PEP 749: Deferred evaluation of annotations
- PEP 734: Multiple interpreters in the standard library
- PEP 750: Template strings
- PEP 758: Allow except and except* expressions without parentheses
- PEP 765: Control flow in finally blocks
- PEP 768: Safe external debugger interface for CPython
- A new type of interpreter
- Free-threaded mode improvements
- Improved error messages
- Incremental garbage collection
Let’s talk about the top three a little. Deferred evaluation of annotations refers to type annotations. In the past, the type annotations added to functions, classes, and modules were evaluated eagerly. That is no longer the case. Instead, the annotations are stored in special-purpose annotate functions and evaluated only when necessary, unless from __future__ import annotations is used at the top of the module.
The reason for this change is to improve the performance and usability of type annotations in Python. You can use the new annotationlib module to inspect deferred annotations. Here is an example from the documentation:
>>> from annotationlib import get_annotations, Format
>>> def func(arg: Undefined):
...     pass
>>> get_annotations(func, format=Format.VALUE)
Traceback (most recent call last):
...
NameError: name 'Undefined' is not defined
>>> get_annotations(func, format=Format.FORWARDREF)
{'arg': ForwardRef('Undefined', owner=<function func at 0x...>)}
>>> get_annotations(func, format=Format.STRING)
{'arg': 'Undefined'}
Another interesting change is the addition of multiple interpreters in the standard library. The complete formal definition of this new feature can be found in PEP 734. This feature has been available in Python for more than 20 years, but only through the C API. Starting in Python 3.14, you can now use the new concurrent.interpreters module.
Why would you want to use multiple Python interpreters?
- They support a more human-friendly concurrency model
- They provide true multi-core parallelism
These interpreters provide isolated “processes” that run in parallel with no sharing by default.
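As a quick taste, here is a minimal sketch based on the API described in PEP 734 (worth double-checking against the final 3.14 documentation):

# Run code in an isolated subinterpreter (Python 3.14+, PEP 734).
from concurrent import interpreters

interp = interpreters.create()   # spawn an isolated interpreter
interp.exec("print('hello from a subinterpreter')")  # run code inside it
interp.close()                   # tear it down when finished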
Another feature to highlight is template string literals (t-strings). Full details can be found in PEP 750. Brett Cannon, a core developer of the Python language, posted a good introductory article about these new t-strings on his blog. A template string, or t-string, is a new mechanism for custom string processing. However, unlike an f-string, a t-string returns an object that represents the static and the interpolated parts of the string.
Here’s a quick example from the documentation:
>>> variety = 'Stilton'
>>> template = t'Try some {variety} cheese!'
>>> type(template)
<class 'string.templatelib.Template'>
>>> list(template)
['Try some ', Interpolation('Stilton', 'variety', None, ''), ' cheese!']
You can use t-strings to sanitize SQL, improve logging, implement custom, lightweight DSLs, and more!
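To make that concrete, here is a small sketch of a t-string processor that HTML-escapes interpolated values while leaving the static parts alone. The safe_html name is illustrative; the iteration behavior and Interpolation attributes follow the documentation example above:

# Escape interpolated values in a t-string (Python 3.14+).
import html
from string.templatelib import Interpolation, Template


def safe_html(template: Template) -> str:
    parts = []
    for item in template:                # yields strings and Interpolations
        if isinstance(item, Interpolation):
            parts.append(html.escape(str(item.value)))  # escape dynamic parts
        else:
            parts.append(item)           # keep static text as-is
    return "".join(parts)


user_input = "<script>alert('xss')</script>"
print(safe_html(t"<p>Comment: {user_input}</p>"))
# <p>Comment: &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>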
Standard Library Improvements
Python’s standard library has several significant improvements. Here are the ones highlighted by the Python documentation:
- PEP 784: Zstandard support in the standard library
- Asyncio introspection capabilities
- Concurrent safe warnings control
- Syntax highlighting in the default interactive shell, and color output in several standard library CLIs
If you do much compression in Python, then you will be happy that Python has added Zstandard support in addition to the zip and tar archive support that has been there for many years.
Compressing a string using Zstandard can be accomplished with only a few lines of code:
from compression import zstd
import math

data = str(math.pi).encode() * 20
compressed = zstd.compress(data)
ratio = len(compressed) / len(data)
print(f"Achieved compression ratio of {ratio}")
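Decompression lives in the same module, so a quick round-trip check looks like this (a minimal sketch using the compression.zstd API from PEP 784):

from compression import zstd

data = b"Python 3.14 adds Zstandard support to the standard library."
compressed = zstd.compress(data)
assert zstd.decompress(compressed) == data  # round trip recovers the bytes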
Another neat addition to the Python standard library is asyncio introspection via a new command-line interface. You can now use the following commands to introspect a running process:
- python -m asyncio ps PID
- python -m asyncio pstree PID
The ps sub-command will inspect the given process ID and display information about the current asyncio tasks. You will see a task table as output, which contains a listing of all tasks, their names and coroutine stacks, and which tasks are awaiting them.
The pstree sub-command fetches the same information but renders it as a visual async call tree instead, showing the coroutine relationships in a hierarchical format. The pstree command is especially useful for debugging stuck or long-running async programs.
One other neat update to Python is that the default REPL shell now highlights Python syntax. You can change the color theme using an experimental API, _colorize.set_theme(), which can be called interactively or in the PYTHONSTARTUP script. The REPL also supports import auto-completion, which means you can start typing the name of a module and then hit Tab to complete it.
Wrapping Up
Python 3.14 looks to be an exciting release with many performance improvements. The team has also laid more groundwork for continuing to improve Python’s speed.
The latest version of Python has many other improvements to modules that aren’t listed here. To see all the nitty-gritty details, check out the What’s New in Python 3.14 page in the documentation.
Drop a comment to let us know what you think of Python 3.14 and what you are excited to see in upcoming releases!
The post An Intro to Python 3.14’s New Features appeared first on Mouse Vs Python.
October 08, 2025
Real Python
Python 3.14: Cool New Features for You to Try
Python 3.14 was released on October 7, 2025. While many of its biggest changes happen under the hood, there are practical improvements you’ll notice right away. This version sharpens the language’s tools, boosts ergonomics, and opens doors to new capabilities without forcing you to rewrite everything.
In this tutorial, you’ll explore features like:
- A smarter, more colorful REPL experience
- Error messages that guide you toward fixes
- Safer hooks for live debugging
- Template strings (t-strings) for controlled interpolation
- Deferred annotation evaluation to simplify typing
- New concurrency options like subinterpreters and a free-threaded build
If you want to try out the examples, make sure you run Python 3.14 or a compatible preview release.
Note: On Unix systems, when you create a new virtual environment with the new Python 3.14, you’ll spot a quirky alias:
(venv) $ 𝜋thon
Python 3.14.0 (main, Oct 7 2025, 17:32:06) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
This feature is exclusive to the 3.14 release as a tribute to the mathematical constant π (pi), whose rounded value, 3.14, is familiar to most people.
As you read on, you’ll find detailed examples and explanations for each feature. Along the way, you’ll get tips on how they can streamline your coding today and prepare you for what’s coming next.
Get Your Code: Click here to download the free sample code that you’ll use to learn about the new features in Python 3.14.
Take the Quiz: Test your knowledge with our interactive “Python 3.14: Cool New Features for You to Try” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python 3.14: Cool New Features for You to Try
In this quiz, you'll test your understanding of the new features introduced in Python 3.14. By working through this quiz, you'll review the key updates and improvements in this version of Python.
Developer Experience Improvements
Python 3.14 continues the trend of refining the language’s ergonomics. This release enhances the built-in interactive shell with live syntax highlighting and smarter autocompletion. It also improves syntax and runtime error messages, making them clearer and more actionable. While these upgrades don’t change the language itself, they boost your productivity as you write, test, and debug code.
Even Friendlier Python REPL
Python’s interactive interpreter, also known as the REPL, has always been the quickest way to try out a snippet of code, debug an issue, or explore a third-party library. It can even serve as a handy calculator or a bare-bones data analysis tool. Although your mileage may vary, you typically start the REPL by running the python command in your terminal without passing any arguments:
$ python
Python 3.14.0 (main, Oct 7 2025, 17:32:06) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
The humble prompt, which consists of three chevrons (>>>), invites you to type a Python statement or an expression for immediate evaluation. As soon as you press Enter, you’ll instantly see the computed result without having to create any source files or configure a project workspace. After each result, the familiar prompt returns, ready to accept your next command:
>>> 2 + 2
4
>>>
For years, the stock Python REPL remained intentionally minimal. It was fast and reliable, but lacked the polish of alternative shells built by the community, like IPython, ptpython, or bpython.
That started to change in Python 3.13, which adopted a modern REPL based on PyREPL borrowed from the PyPy project. This upgrade introduced multiline editing, smarter history browsing, and improved Tab completion, while keeping the simplicity of the classic REPL.
Python 3.14 takes the interactive shell experience to the next level, introducing two new features:
- Syntax highlighting: Real-time syntax highlighting with configurable color themes
- Code completion: Autocompletion of module names inside import statements
Together, these improvements make the built-in REPL feel closer to a full-fledged code editor while keeping it lightweight and always available. The Python REPL now highlights code as you type. Keywords, strings, comments, numbers, and operators each get their own color, using ANSI escape codes similar to those that already color prompts and tracebacks in Python 3.13:
Python 3.14 Syntax Highlighting in the REPL
Notice how the colors shift as you type, once the interactive shell has enough context to parse your input. In particular, tokens such as the underscore (_) are recognized as soft keywords only in the context of pattern matching, and Python highlights them in a distinct color to set them apart. This colorful output also shows up in the Python debugger (pdb) when you set a breakpoint() on a given line of code, for example.
Additionally, a few of the standard-library modules can now take advantage of this new syntax-coloring capability of the Python interpreter:
Colorful Output in Python 3.14's Standard-Library Modules
The argparse module displays a colorful help message, the calendar module highlights the current day, and the json module pretty-prints and colorizes JSON documents. Finally, the unittest module provides colorful output for failed assertions to make reading and diagnosing them easier.
Read the full article at https://realpython.com/python314-new-features/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Python 3.14: Cool New Features for You to Try
In this quiz, you’ll test your understanding of Python 3.14: Cool New Features for You to Try. By working through this quiz, you’ll review the key updates and improvements in this version of Python.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
October 07, 2025
PyCoder’s Weekly
Issue #703: PEP 8, Error Messages in Python 3.14, splitlines(), and More (Oct. 7, 2025)
#703 – OCTOBER 7, 2025
View in Browser »
Python Violates PEP 8
PEP 8 outlines the preferred coding style for Python. It often gets wielded as a cudgel in online conversations. This post talks about what PEP 8 says and where it often gets ignored.
AL SWEIGART
Python 3.14 Preview: Better Syntax Error Messages
Python 3.14 includes ten improvements to error messages, which help you catch common coding mistakes and point you in the right direction.
REAL PYTHON
Free Course: Build a Durable AI Agent with Temporal and Python
Curious about how to build an AI agent that actually works in production? This free hands-on course shows you how with Python and Temporal. Learn to orchestrate workflows, recover from failures, and deliver a durable chatbot agent that books trips and generates invoices. Explore Tutorial →
TEMPORAL sponsor
Why splitlines() Instead of split("\n")?
To split text into lines in Python you should use the splitlines() method, not the split() method, and this post shows you why.
TREY HUNNER
Python Jobs
Senior Python Developer (Houston, TX, USA)
Articles & Tutorials
Advice on Beginning to Learn Python
What’s changed about learning Python over the last few years? What new techniques and updated advice should beginners have as they start their journey? This week on the show, Stephen Gruppetta and Martin Breuss return to discuss beginning to learn Python.
REAL PYTHON podcast
Winning a Bet About six
In 2020, Seth Larson and Andrey Petrov made a bet about whether six, the Python 2 compatibility shim, would still be in the top 20 PyPI downloads. Seth won, but probably only because of a single library still using it.
SETH LARSON
Show Off Your Python Chops: Win the 2025 Table & Plotnine Contests
Showcase your Python data skills! Submit your best Plotnine charts and table summaries to the 2025 Contests. Win swag, boost your portfolio, and get recognized by the community. Deadline: Oct 17, 2025. Submit now!
POSIT sponsor
Durable Python Execution With Temporal
Talk Python interviews Mason Egger to discuss Temporal, a durable execution platform that enables developers to build scalable applications without sacrificing productivity or reliability.
KENNEDY & EGGER podcast
Astral’s ty: A New Blazing-Fast Type Checker for Python
Learn to use ty, an ultra-fast Python type checker written in Rust. Get setup instructions, run type checks, and fine-tune custom rules in personal projects.
REAL PYTHON
What Is “Good Taste” in Software Engineering?
This opinion piece talks about the difference between skill and taste when writing software. What “clean code” means to one may not be the same as to others.
SEAN GOEDECKE
Modern Python Linting With Ruff
Ruff is a blazing-fast, modern Python linter with a simple interface that can replace Pylint, isort, and Black—and it’s rapidly becoming popular.
REAL PYTHON course
Introducing tdom: HTML Templating With t‑strings
Python 3.14 introduces t-strings, and this article shows you tdom, a new HTML DOM toolkit that takes advantage of them to produce safer output.
DAVE PECK
Full Text Search With Django and SQLite
A walkthrough of how to build full text search to power the search functionality of a blog using Django and SQLite.
TIMO ZIMMERMANN
Projects & Code
subprocesslib: Like pathlib for the subprocess Module
PYPI.ORG • Shared by Antoine Cezar
Python Implementation of the Cap’n Web Protocol
GITHUB.COM/ABILIAN • Shared by Stefane Fermigier
Events
Weekly Real Python Office Hours Q&A (Virtual)
October 8, 2025
REALPYTHON.COM
PyCon Africa 2025
October 8 to October 13, 2025
PYCON.ORG
Wagtail Space 2025
October 8 to October 11, 2025
ZOOM.US
PyCon Hong Kong 2025
October 11 to October 13, 2025
PYCON.HK
PyCon NL 2025
October 16 to October 17, 2025
PYCON-NL.ORG
PyCon Thailand 2025
October 17 to October 19, 2025
PYCON.ORG
PyCon Finland 2025
October 17 to October 18, 2025
PLONECONF.ORG
PyConES 2025
October 17 to October 20, 2025
PYCON.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #703.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Python Morsels
Python 3.14's best new features
Python 3.14 includes syntax highlighting, improved error messages, enhanced support for concurrency and parallelism, t-strings, and more!

Table of contents
- Very important but not my favorites
- Python 3.14: now in color!
- My tiny contribution
- Beginner-friendly error messages
- Tab completion for import statements
- Standard library improvements
- Cleaner multi-exception catching
- Concurrency improvements
- External debugger interface
- T-strings (template strings)
- Try out Python 3.14 yourself
Very important but not my favorites
I'm not going to talk about the experimental free-threading mode, the just-in-time compiler, or other performance improvements. I'm going to focus on features that you can use right after you upgrade.
Python 3.14: now in color!
One of the most immediately …
Read the full article: https://www.pythonmorsels.com/python314/
Real Python
What's New in Python 3.14
Python 3.14 was published on October 7, 2025. While many of its biggest changes happen under the hood, there are practical improvements you’ll notice right away. This version sharpens the language’s tools, boosts ergonomics, and opens doors to new capabilities without forcing you to rewrite everything.
In this video course, you’ll explore features like:
- A smarter, more colorful REPL experience
- Error messages that guide you toward fixes
- Safer hooks for live debugging
- Template strings (t-strings) for controlled interpolation
- Deferred annotation evaluation to simplify typing
- New concurrency options like subinterpreters and a free-threaded build
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Seth Michael Larson
Is the "Nintendo Classics" collection a good value?
Nintendo Classics is a collection of hundreds of retro video games from Nintendo (and Sega) consoles from the NES to the GameCube. Nintendo Classics is included with the Nintendo Switch Online (NSO) subscription, which starts at $20/year (~$1.66/month) for individual users.
Looking at the prices of retro games these days, this seems like an incredible value for players that want to play these games. This post is sharing a dataset that I've curated about Nintendo Classics games and mapping their value to actual physical prices of the same games, with some interesting queries.
For example, here's a graph showing the total value (in $USD) of Nintendo Classics over time:
The dataset was generated from the tables provided on Wikipedia (CC-BY-SA). The dataset doesn't contain pricing information; instead, it only links to the corresponding Pricecharting pages. This page only shares approximate aggregate price information, not prices of individual games. This page will be automatically updated over time as Nintendo announces more games are coming to Nintendo Classics. This page was last updated October 7th, 2025.
How many games and value per platform?
There are 8 unique platforms on Nintendo Classics, each with its own collection of games. The below table includes the value of both added and announced-but-not-added games. You can see that the total value of games in Nintendo Classics is many thousands of dollars if genuine physical copies were purchased instead. Here's a graph showing the total value of each platform changing over time:
And here's the data for all published and announced games as a table:
Platform | Games | Total Value | Value per Game |
---|---|---|---|
NES | 91 | $1980 | $21 |
SNES | 83 | $3600 | $43 |
Game Boy (GB/GBC) | 41 | $1615 | $39 |
Nintendo 64 (N64) | 42 | $1130 | $26 |
Sega Genesis | 51 | $2910 | $57 |
Game Boy Advance (GBA) | 30 | $930 | $31 |
GameCube | 9 | $640 | $71 |
Virtual Boy | 14 | $2580 | $184 |
All Platforms | 361 | $15385 | $42 |
View SQL query
SELECT platform, COUNT(*), SUM(price), SUM(price)/COUNT(*)
FROM games
GROUP BY platform;
How much value is in each Nintendo Classics tier?
There are multiple "tiers" of Nintendo Classics each with a different up-front price (for the console itself) and ongoing price for the Nintendo Switch Online (NSO) subscription.
Certain collections require specific hardware, such as Virtual Boy requiring either the recreation ($100) or cardboard ($30) Virtual Boy headset and GameCube collection requiring a Switch 2 ($450). All other collections work just fine with a Switch Lite ($100). All platforms beyond NES, SNES, Game Boy, and Game Boy Color require NSO + Expansion Pass.
Platforms | Requires | Price | Games | Games Value |
---|---|---|---|---|
NES, SNES, GB, GBC | Switch Lite & NSO * | $100 + $20/Yr | 215 | $7195 |
+N64, Genesis, GBA | Switch Lite & NSO+EP | $100 + $50/Yr | 338 | $12165 |
+Virtual Boy | Switch Lite, NSO+EP, & VB | $130 + $50/Yr | 352 | $14745 |
+GameCube | Switch 2 & NSO+EP | $450 + $50/Yr | 361 | $15385 |
* I wanted to highlight that Nintendo Switch Online (NSO) without Expansion Pack has the option to actually pay $3 monthly rather than $20 yearly. This doesn't make sense if you're paying for a whole year anyway, but if you want to just play a game in the NES, SNES, GB, or GBC collections you can pay $3 for a month of NSO and play games for very cheap.
How often are games added to Nintendo Classics?
Nintendo Classics tends to add a few games per platform every year. Usually, when a platform is first announced, a whole slew of games is added during the announcement, with a slow drip-feed of games coming later.
Here's the break-down per year how many games were added to each platform:
Platform | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 | 2025 |
---|---|---|---|---|---|---|---|---|
NES | 30 | 30 | 8 | 2 | 5 | 4 | 12 | |
SNES | 25 | 18 | 13 | 9 | 1 | 9 | 8 | |
N64 | 10 | 13 | 8 | 8 | 3 | |||
Genesis | 20 | 17 | 8 | 3 | 3 | |||
Game Boy | 19 | 16 | 6 | |||||
GBA | 13 | 12 | 5 | |||||
GameCube | 9 | |||||||
Virtual Boy | ||||||||
All Platforms | 30 | 55 | 26 | 55 | 43 | 53 | 60 | 34 |
View SQL query
SELECT platform, STRFTIME('%Y', added_date) AS year, COUNT(*)
FROM games
GROUP BY platform, year
ORDER BY platform, year DESC;
What are the rarest or most valuable games in Nintendo Classics?
There are a bunch of valuable and rare games available in Nintendo Classics. Here are the top-50 most expensive games that are available in the collection:
View SQL query
SELECT platform, name, price FROM games
ORDER BY price DESC LIMIT 50;
Who publishes their games to Nintendo Classics?
Nintendo Classics has more publishers than just Nintendo and Sega. Looking at which third-party publishers are publishing their games to Nintendo Classics can give you a hint at what future games might make their way to the collection:
Publisher | Games | Value |
---|---|---|
Capcom | 17 | $1055 |
Xbox Game Studios | 13 | $245 |
Koei Tecmo | 13 | $465 |
City Connection | 11 | $240 |
Konami | 10 | $505 |
Bandai Namco Entertainment | 9 | $190 |
Sunsoft | 7 | $155 |
Natsume Inc. | 7 | $855 |
G-Mode | 7 | $190 |
Arc System Works | 6 | $110 |
View SQL query
SELECT publisher, COUNT(*) AS num_games, SUM(price)
FROM games WHERE publisher NOT IN ('Nintendo', 'Sega')
GROUP BY publisher
ORDER BY num_games DESC LIMIT 20;
What games have been removed from Nintendo Classics?
There's only been one game that's been removed from Nintendo Classics so far. There likely will be more in the future:
Platform | Game | Added Date | Removed Date |
---|---|---|---|
SNES | Super Soccer | 2019-09-05 | 2025-03-25 |
View SQL query:
SELECT platform, name, added_date, removed_date
FROM games WHERE removed_date IS NOT NULL;
This site uses the MIT licensed ChartJS for the line chart visualization.
Thanks for keeping RSS alive! ♥
October 06, 2025
Ari Lamstein
Visualizing 25 Years of Border Patrol Data with Python
I recently had the chance to speak with a statistician at the Department of Homeland Security (DHS) about my Streamlit app that visualizes trends in US Immigration Enforcement data (link). Our conversation helped clarify a question I’d raised in an earlier post—one that emerged from a surprising pattern in the data.
A Surprising Pattern
The first graph in my post showed how the number of detainees in ICE custody has changed over time, broken down by the arresting agency: ICE (Immigration and Customs Enforcement) or CBP (Customs and Border Protection). The agency-level split revealed an unexpected trend.
As I noted in the post:
Equally interesting is the agency-level data: since Trump took office ICE detentions are sharply up, but CBP detentions are down. I am not sure why CBP detentions are down.
A Potential Answer
This person suggested that CBP arrests might reflect not just enforcement capacity, but the number of people attempting to cross the border illegally—a figure that could fluctuate based on how welcoming an administration appears to be toward immigration.
This was a new lens for me. I hadn’t considered that attempted border crossings might rise or fall with shifts in presidential tone or policy. Given that one of Trump’s central campaign promises in 2024 was to crack down on illegal immigration (link), it felt like a hypothesis worth exploring.
The Data: USBP Encounters
While we can’t directly measure how many people attempt to cross the border illegally, DHS publishes a dataset that records each time the US Border Patrol (USBP) encounters a “removable alien”—a term DHS uses for individuals subject to removal under immigration law. This dataset can serve as a rough proxy for attempted illegal crossings.
The data is available on this page and is published as an Excel workbook titled “CBP Encounters – USBP – November 2024.” It covers October 1999 through November 2024, spanning five presidential administrations. While it doesn’t include data from the current administration (which began in January 2025), it does offer a historical view of enforcement trends.
The workbook contains 16 sheets; this analysis focuses on the “Monthly Region” tab. In this sheet, “Region” refers to the part of the border where the encounter occurred: Coastal Border, Northern Land Border, or Southwest Land Border.
The Analysis
To support this analysis, I created a new Python module called encounters. It’s available in my existing immigration_enforcement repo, along with the dataset and example workbooks. I’ve tagged the version of the code used in this post as usbp_encounters_post, so people will always be able to run the examples below—even if the repo evolves. You’re welcome to clone it and use it as a foundation for your own analysis.
One important detail: this dataset records dates using fiscal years, which run from October 1 to September 30. For example, October of FY2020 corresponds to October 2019 on the calendar. To simplify analysis, the function encounters.get_monthly_region_df reads in the “Monthly Region” sheet and automatically converts all fiscal year dates to calendar dates, as the sketch below illustrates.
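The conversion rule itself is simple; here is a minimal sketch inferred from the description above (the actual module may implement it differently):

# Fiscal years run October 1 - September 30, so October-December of
# FY2020 fall in calendar year 2019 (illustrative helper, not from the repo).
def fiscal_to_calendar_year(fiscal_year: int, month: int) -> int:
    return fiscal_year - 1 if month >= 10 else fiscal_year

assert fiscal_to_calendar_year(2020, 10) == 2019  # October FY2020 -> Oct 2019
assert fiscal_to_calendar_year(2020, 4) == 2020   # April FY2020 -> Apr 2020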
To preview the data, we can load the “Monthly Region” sheet using the encounters module like this:
import encounters

df = encounters.get_monthly_region_df()
df.head()
This returns:
 | date | region | quantity |
---|---|---|---|
0 | 1999-10-01 | Coastal Border | 740 |
1 | 1999-10-01 | Northern Land Border | 1250 |
2 | 1999-10-01 | Southwest Land Border | 87820 |
3 | 1999-11-01 | Coastal Border | 500 |
4 | 1999-11-01 | Northern Land Border | 960 |
To visualize the data, we can use Plotly to create a time series of encounters by region:
import plotly.express as px

px.line(
    df,
    x="date",
    y="quantity",
    color="region",
    title="USBP Border Encounters Over Time",
    color_discrete_sequence=px.colors.qualitative.T10,
)
From this graph, a few patterns stand out:
- Encounters are overwhelmingly concentrated at the Southwest Land Border.
- Until around 2015, the data shows a strong seasonal rhythm, typically dipping in December and peaking in March.
- After 2015, variability increases sharply, with both the lowest (2017) and highest (2023) values occurring in this period.
A Better Graph
Since the overwhelming majority of encounters occur at the Southwest Land Border, it makes sense to focus the visualization there. To explore how encounter trends align with presidential transitions, we can annotate the graph to show when administrations changed. The function encounters.get_monthly_encounters_graph handles this:
encounters.get_monthly_encounters_graph(annotate_administrations=True)
This annotated graph appears to support what the DHS statistician suggested: encounter numbers sometimes shift dramatically between administrations. The change is especially pronounced for the Trump and Biden administrations:
- The lowest value (April 2017) occurred shortly after Trump took office.
- The transition from Trump to Biden marks one of the sharpest increases in the dataset.
- The highest value (December 2023) occurred during Biden’s administration.
Potential Policy Link
While I’m not an expert on immigration policy, Wikipedia offers summaries of the immigration policies under both the Trump and Biden administrations.
It describes Trump’s policies as aiming to reduce both legal and illegal immigration—through travel bans, lower refugee admissions, and stricter enforcement measures. And the page on Biden’s immigration policy begins:
“The immigration policy Joe Biden initially focused on reversing many of the immigration policies of the previous Trump administration.”
The contrast between these two approaches is stark, and it’s at least plausible that the low number of encounters at the start of Trump’s first term, and the spike in encounters at the start of Biden’s term, reflect responses to these shifts.
Future Work
This post is just a first step in analyzing Border Patrol Encounter data. Looking ahead, here are a few directions I’m excited to explore:
- Integrate this graph into my existing Immigration Enforcement Streamlit app (link).
- Incorporate more timely data. While this dataset is only published annually, DHS appears to release monthly updates here. Finding a way to surface those numbers in the app would make it more responsive to current trends.
- Explore other dimensions of the dataset. Beyond raw encounter counts, the data includes details like citizenship, family status, and where encounters happen. These facets could offer deeper insight into enforcement patterns and humanitarian implications.
While comments on my blog are disabled, I welcome hearing from readers. You can contact me here.
Real Python
It's Almost Time for Python 3.14 and Other Python News
Python 3.14 nears release with new features in sight, and Django 6.0 alpha hints at what’s next for the web framework. Several PEPs have landed, including improvements to type annotations and support for the free-threaded Python effort.
Plus, the Python Software Foundation announced new board members, while Real Python dropped a bundle of fresh tutorials and updates. Read on to learn what’s new in the world of Python this month!
Join Now: Click here to join the Real Python Newsletter and you’ll never miss another Python tutorial, course, or news update.
Python 3.14 Reaches Release Candidate 3
Python 3.14.0rc3 was announced in September, bringing the next major version of Python one step closer to final release. This release candidate includes critical bug fixes, final tweaks to new features, and overall stability improvements.
Python 3.14 is expected to introduce new syntax options, enhanced standard-library modules, and performance boosts driven by internal C API changes. For the complete list of changes in Python 3.14, consult the official What’s new in Python 3.14 documentation.
The release also builds upon ongoing work toward making CPython free-threaded, an effort that will eventually allow better use of multicore CPUs. Developers are encouraged to test their projects with the RC to help identify regressions or issues before the official release.
The final release, 3.14.0, is scheduled for October 7. Check out Real Python’s series about the new features you can look forward to in Python 3.14.
Django 6.0 Alpha Released
Django 6.0 alpha 1 is out! This first public preview gives early access to the upcoming features in Django’s next major version. Although not production-ready, the alpha includes significant internal updates and deprecations, setting the stage for future capabilities.
Some of the early changes include enhanced async support, continued cleanup of old APIs, and the groundwork for upcoming improvements in database backend integration. Now is a great time for Django developers to test their apps and provide feedback before Django 6.0 is finalized.
Django 5.2.6, 5.1.12, and 4.2.24 were released separately with important security fixes. If you maintain Django applications, then these updates are strongly recommended.
Read the full article at https://realpython.com/python-news-october-2025/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Brian Okken
pytest-check 2.6.0 release
There’s a new release of pytest-check. Version 2.6.0.
This is a cool contribution from the community.
The problem
In July, bluenote10 reported that check.raises() doesn’t behave like pytest.raises() in that the AssertionError returned from check.raises() doesn’t have a queryable value.
Example of pytest.raises():
with pytest.raises(Exception) as e:
    do_something()
assert str(e.value) == "<expected error message>"
We’d like check.raises() to act similarly:
with check.raises(Exception) as e:
    do_something()
assert str(e.value) == "<expected error message>"
But that didn’t work prior to 2.6.0. The issue was that the value returned from check.raises() didn’t have any .value attribute.
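With 2.6.0 installed, a minimal test exercising the new behavior might look like this (a sketch; do_something and the error message are placeholders):

# Requires pytest-check >= 2.6.0, where the context manager result
# exposes .value just like pytest.raises().
from pytest_check import check


def do_something():
    raise RuntimeError("kaboom")


def test_error_message():
    with check.raises(RuntimeError) as e:
        do_something()
    assert str(e.value) == "kaboom"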
Talk Python to Me
#522: Data Sci Tips and Tricks from CodeCut.ai
Today we’re turning tiny tips into big wins. Khuyen Tran, creator of CodeCut.ai, has shipped hundreds of bite-size Python and data science snippets across four years. We dig into open-source tools you can use right now, cleaner workflows, and why notebooks and scripts don’t have to be enemies. If you want faster insights with fewer yak-shaves, this one’s packed with takeaways you can apply before lunch. Let’s get into it.

Links from the show:

- Khuyen Tran (LinkedIn): https://www.linkedin.com/in/khuyen-tran-1ab926151/
- Khuyen Tran (GitHub): https://github.com/khuyentran1401/
- CodeCut: https://codecut.ai/
- Production-ready Data Science Book (discount code TalkPython): https://codecut.ai/production-ready-data-science/
- Why UV Might Be All You Need: https://codecut.ai/why-uv-might-all-you-need/
- How to Structure a Data Science Project for Readability and Transparency: https://codecut.ai/how-to-structure-a-data-science-project-for-readability-and-transparency-2/
- Stop Hard-coding: Use Configuration Files Instead: https://codecut.ai/stop-hard-coding-in-a-data-science-project-use-configuration-files-instead/
- Simplify Your Python Logging with Loguru: https://codecut.ai/simplify-your-python-logging-with-loguru/
- Git for Data Scientists: Learn Git Through Practical Examples: https://codecut.ai/git-for-data-scientists-learn-git-through-practical-examples/
- Marimo (A Modern Notebook for Reproducible Data Science): https://codecut.ai/marimo-a-modern-notebook-for-reproducible-data-science/
- Text Similarity & Fuzzy Matching Guide: https://codecut.ai/text-similarity-fuzzy-matching-guide/
- Loguru (Python logging made simple): https://github.com/Delgan/loguru
- Hydra: https://hydra.cc/
- Marimo: https://marimo.io/
- Quarto: https://quarto.org/
- Show Your Work! Book: https://austinkleon.com/show-your-work/
- Watch this episode on YouTube: https://www.youtube.com/watch?v=lypo8Ul4NhU
- Episode #522 deep-dive: https://talkpython.fm/episodes/show/522/data-sci-tips-and-tricks-from-codecut.ai#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/522/data-sci-tips-and-tricks-from-codecut.ai
Rodrigo Girão Serrão
Functions: a complete reference | Pydon't 🐍
This article serves as a complete reference for all the non-trivial things you should know about Python functions.
Functions are the basic building block of any Python program you write, and yet, many developers don't leverage their full potential. You will fix that by reading this article.
Knowing how to use the keyword def is just the first step towards knowing how to define and use functions in Python.
As such, this Pydon't covers everything else there is to learn:
- How to structure and organise functions.
- How to work with a function signature, including parameter order, *args and **kwargs, and the special syntax introduced by * and /.
- What anonymous functions are, how to define them with the keyword lambda, and when to use them.
- What it means for functions to be objects and how to leverage that in your code.
- How closures seem to defy a fundamental rule of scoping in Python.
- How to leverage closures to create the decorator pattern.
- What the keyword yield is and what generator functions are.
- What the keyword async is and what asynchronous functions are.
- How partial function application allows you to create new functions from existing functions.
- How the term “function” is overloaded and how you can create your own objects that behave like functions.
Bookmark this reference for later or download the “Pydon'ts – write elegant Python code” ebook for free. The ebook contains this chapter and many others, including hundreds of tips to help you write better Python code. Download the ebook “Pydon'ts – write elegant Python code” here.
What goes into a function and what doesn't
Do not overcrowd your functions with logic for four or five different things. A function should do a single thing, and it should do it well, and the name of the function should clearly tell you what your function does.
If you are unsure about whether some piece of code should be a single function or multiple functions, it's best to err on the side of too many functions. That is because a function is a modular piece of code, and the smaller your functions are, the easier it is to compose them together to create more complex behaviours.
Consider the function process_order defined below, an exaggerated example that breaks these best practices to make the point clearer. While it is not incredibly long, it does too many things:
def process_order(order):
    # Validate the order:
    for item, quantity, price in order:
        if quantity <= 0:
            raise ValueError(f"Cannot buy 0 or less of {item}.")
        if price <= 0:
            raise ValueError("Price must be positive.")

    # Write the receipt:
    total = 0
    with open("receipt.txt", "w") as f:
        for item, quantity, price in order:
            # This week, yoghurts and batteries are on sale.
            if "yoghurt" in item:
                price *= 0.8
            elif "batteries" in item:
                price *= 0.5
            # Write this line of the receipt:
            partial = price * quantity
            f.write(f"{item:>15} --- {quantity:>3}...
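The excerpt is truncated above. As a hedged sketch of the advice (my own refactoring, not the article's, with the receipt line format past the truncation assumed), the same behaviour splits naturally into single-purpose functions:

DISCOUNTS = {"yoghurt": 0.8, "batteries": 0.5}  # this week's sale

def validate_order(order):
    """Raise ValueError for non-positive quantities or prices."""
    for item, quantity, price in order:
        if quantity <= 0:
            raise ValueError(f"Cannot buy 0 or less of {item}.")
        if price <= 0:
            raise ValueError("Price must be positive.")

def discounted_price(item, price):
    """Apply this week's sale prices."""
    for name, factor in DISCOUNTS.items():
        if name in item:
            return price * factor
    return price

def write_receipt(order, path="receipt.txt"):
    """Write one receipt line per item (line format assumed)."""
    with open(path, "w") as f:
        for item, quantity, price in order:
            price = discounted_price(item, price)
            f.write(f"{item:>15} --- {quantity:>3} --- {price * quantity:>8.2f}\n")

def process_order(order):
    """Each step is now a small, composable, testable function."""
    validate_order(order)
    write_receipt(order)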
October 05, 2025
Paolo Melchiorre
Django: one ORM to rule all databases 💍
Comparing the Django ORM support across official database backends, so you don’t have to learn it the hard way.
Christian Ledermann
Python Code Quality Tools Beyond Linting
The landscape of Python software quality tooling is currently defined by two contrasting forces: high-velocity convergence and deep specialization. The recent, rapid adoption of Ruff has solved the long-standing community problem of coordinating dozens of separate linters and formatters, establishing a unified, high-performance axis for standard code quality.
A second category of tools continues to operate in necessary, but isolated, silos. Tools dedicated to architectural enforcement and deep structural metrics, such as:
- import-linter (Layered architecture enforcement)
- tach (Dependency visualization and enforcement)
- complexipy, radon, lizard (Metrics for overall and cognitive complexity)
- module_coupling_metrics, lcom, and cohesion (Metrics for coupling and class cohesion)
- pyscn - Python Code Quality Analyzer (Module dependencies, clone detection, complexity)
These projects address fundamental challenges of code maintainability, evolvability, and architectural debt that extend beyond the scope of fast, stylistic linting. The success of Ruff now presents the opportunity to foster a cross-tool discussion focused not just on syntax, but on structure.
Specialized quality tools are vital for long-term maintainability and risk assessment. Tools like import-linter and tach mitigate technical risk by enforcing architectural rules, preventing systemic decay, and reducing change costs. Complexity and cohesion metrics from tools such as complexipy, lcom, and cohesion quantitatively flag overly complex or highly coupled components, acting as early warning systems for technical debt. By analysing the combined outputs, risk assessment shifts to predictive modelling: integrating data from individual tools (e.g., import-linter violations, complexipy scores) creates a multi-dimensional risk score. Overlaying these results, such as identifying modules that are both low in cohesion and involved in tach-flagged dependency cycles, generates a "heat map" of technical debt. This unified approach, empirically validated against historical project data like bug frequency and commit rates, can yield a predictive risk assessment. It identifies modules that are not just theoretically complex but empirically confirmed sources of instability, transforming abstract quality metrics into concrete, prioritized refactoring tasks for the riskiest codebase components.
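As a hedged sketch of that idea (the metric layout, weights, and module names below are invented for illustration, not the real output format of any of these tools):

# Hypothetical per-module metrics, roughly the kind the tools above report.
metrics = {
    "app/orders.py":  {"complexity": 38, "cohesion": 0.2, "in_cycle": True},
    "app/users.py":   {"complexity": 12, "cohesion": 0.7, "in_cycle": False},
    "app/billing.py": {"complexity": 25, "cohesion": 0.3, "in_cycle": True},
}

def risk_score(m, max_complexity=50):
    """Blend normalized complexity, lack of cohesion, and cycle membership."""
    complexity = min(m["complexity"] / max_complexity, 1.0)
    incohesion = 1.0 - m["cohesion"]
    in_cycle = 1.0 if m["in_cycle"] else 0.0
    return 0.4 * complexity + 0.4 * incohesion + 0.2 * in_cycle

# The "heat map": rank modules so the riskiest surface first.
for module, m in sorted(metrics.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{module:16} risk={risk_score(m):.2f}")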
Reasons to Connect
Bring the maintainers and core users of these diverse tools into a shared discussion.
Increasing Tool Visibility and Sustainability: Specialized tools often rely on small, dedicated contributor pools and suffer from knowledge isolation, confining technical debate to their specific GitHub repository. A broader discussion provides these projects with critical outreach, exposure to a wider user base, and a stronger pipeline of new contributors, ensuring their long-term sustainability.
Let's start the conversation on how to 'measure' maintainable, architecturally sound Python code.
And keep Goodhart's law in mind: "When a measure becomes a target, it ceases to be a good measure" ;-)
Daniel Roy Greenfeld
Using pyinstrument to profile Air apps
Air is built on FastAPI, so we can adapt pyinstrument's FastAPI instructions. However, because profilers reveal a LOT of internal data, in our example we gate the profiler behind an environment variable.
You will need both air and pyinstrument to get this working:
# preferred
uv add "air[standard]" pyinstrument
# old school
pip install "air[standard]" pyinstrument
And here's how to use pyinstrument to find bottlenecks:
import asyncio
from os import getenv

import air
from pyinstrument import Profiler

app = air.Air()

# Use an environment variable to control if we are profiling.
# This is a value that should never be set in production.
if getenv("PROFILING"):

    @app.middleware("http")
    async def profile_request(request: air.Request, call_next):
        profiling = request.query_params.get("profile", False)
        if profiling:
            profiler = Profiler()
            profiler.start()
            await call_next(request)
            profiler.stop()
            return air.responses.HTMLResponse(profiler.output_html())
        else:
            return await call_next(request)

@app.page
async def index(pause: float = 0):
    if pause:
        await asyncio.sleep(pause)
    title = f"Pausing for {pause} seconds"
    return air.layouts.mvpcss(
        air.Title(title),
        air.H1(title),
        # Provide three options for testing the profiler.
        air.P("Using asyncio.sleep to simulate bottlenecks"),
        air.Ol(
            air.Li(
                air.A(
                    "Pause for 0.1 seconds",
                    href="/?profile=1&pause=0.1",
                    target="_blank",
                )
            ),
            air.Li(
                air.A(
                    "Pause for 0.3 seconds",
                    href="/?profile=1&pause=0.3",
                    target="_blank",
                )
            ),
            air.Li(
                air.A(
                    "Pause for 1.0 seconds",
                    href="/?profile=1&pause=1.0",
                    target="_blank",
                )
            ),
        ),
    )
Running the test app:
Rather than exporting the environment variable, for this kind of thing I like to prefix the CLI command with PROFILING=1, which sets the variable for just this run of the project. By doing so we trigger pyinstrument:
PROFILING=1 fastapi dev main.py
Once you have it running, check it out here: http://localhost:8000
Screenshots: see the original post for the pyinstrument profiler screenshots.
Graham Dumpleton
Lazy imports using wrapt
PEP 810 (explicit lazy imports) was recently published for Python. The idea of this PEP is to add explicit syntax for implementing lazy imports for modules in Python.
lazy import json
Lazily importing modules in Python is not a new idea, and a number of packages have been available to achieve a similar result; they just lacked an explicit language syntax to hide what is actually going on under the covers.
When I saw this PEP, it made me realise that a new feature I added to wrapt for the upcoming 2.0.0 release can be used to implement a lazy import with little effort.
For those who only know of wrapt as a package for implementing Python decorators, the ability to implement decorators the way it does was merely one outcome of the true reason for wrapt existing. The actual reason wrapt was created was to be able to perform monkey patching of Python code.
One key aspect of being able to monkey patch Python code is to be able to have the ability to wrap target objects in Python with a wrapper which acts as a transparent object proxy for the original object. By extending the object proxy type, one can then intercept access to the target object to perform specific actions.
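As a small illustration of my own (not from the wrapt docs), here is a custom proxy that intercepts use of the wrapped object, in this case timing calls to a wrapped callable:

import time
import wrapt

class TimedCallable(wrapt.ObjectProxy):
    """Transparent proxy that times every call to the wrapped callable."""

    def __call__(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            return self.__wrapped__(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print(f"{self.__wrapped__.__name__} took {elapsed:.6f}s")

# The proxy remains a drop-in replacement for the original callable.
timed_sorted = TimedCallable(sorted)
print(timed_sorted([3, 1, 2]))  # prints the timing line, then [1, 2, 3]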
For the purpose of this discussion we can ignore the step of creating a custom object proxy type and look at just how the base object proxy works.
import wrapt
import graphlib
print(type(graphlib))
print(id(graphlib.TopologicalSorter), graphlib.TopologicalSorter)
xgraphlib = wrapt.ObjectProxy(graphlib)
print(type(xgraphlib))
print(id(graphlib.TopologicalSorter), xgraphlib.TopologicalSorter)
In this example we import a module called graphlib from the Python standard library. We then access the class graphlib.TopologicalSorter from that module and print it out.
When we wrap the same module with an object proxy, the aim is that anything you could do with the original module can also be done via the object proxy. The output from the above is thus:
<class 'module'>
35852012560 <class 'graphlib.TopologicalSorter'>
<class 'ObjectProxy'>
35852012560 <class 'graphlib.TopologicalSorter'>
verifying that in both cases the TopologicalSorter is in fact the same object, even though for the proxy the apparent type is different.
The new feature added in wrapt version 2.0.0 is a lazy object proxy. Instead of passing the target object to the object proxy when it is created, you pass a function; this function is called to create or otherwise obtain the target object the first time the proxy object is accessed.
Using this feature we can easily implement lazy module importing.
import sys

import wrapt

def lazy_import(name):
    return wrapt.LazyObjectProxy(lambda: __import__(name, fromlist=[""]))

graphlib = lazy_import("graphlib")

print("sys.modules['graphlib'] =", sys.modules.get("graphlib", None))
print(type(graphlib))
print(graphlib.TopologicalSorter)
print("sys.modules['graphlib'] =", sys.modules.get("graphlib", None))
Running this, the output is:
sys.modules['graphlib'] = None
<class 'LazyObjectProxy'>
<class 'graphlib.TopologicalSorter'>
sys.modules['graphlib'] = <module 'graphlib' from '.../lib/python3.13/graphlib.py'>
One key thing to note here is that when the lazy import is set up, no changes are made to sys.modules. It is only later, when the module is truly imported, that you see an entry in sys.modules for that module name.
Some lazy module importers work by injecting a fake module object for the target module into sys.modules. This has to be done right up front when the application is started. Because the fake entry exists, when import is later used to import that module, Python thinks it has already been imported, and thus what is added into the scope where import is used is the fake module; the actual module is not imported at that point.
What then happens is that when code attempts to use something from the module, an overridden __getattr__ special dunder method on the fake module object gets triggered, which on first use causes the actual module to be imported.
That sys.modules is modified and a fake module added is one of the criticisms one sees of such lazy module importers. That is, the change they make is global to the whole application, which could have implications, such as where side effects of importing a module are expected to be immediate.
With the way the wrapt example above works, no global change to sys.modules is required; instead the impact is only local to the scope where the lazy import was made.
Reducing the impact to just the scope where the lazy import was used is actually one of the goals of the PEP. The example using wrapt shows that it can be done, but it means you can't use an import statement; the PEP aims to still allow that, albeit requiring the new lazy keyword on the import. Either way, the code where you want a lazy import needs to be different.
The other thing the PEP should avoid is the module reference in the importing scope being any sort of fake module object. Initially the module reference would effectively be a placeholder, but as soon as it is used, the actual module would be imported and the placeholder replaced.
For the wrapt example the module reference would always be a proxy object, although technically, with a bit of stack-diving trickery, you could also replace the module reference with the actual module as a side effect of the first use. This sort of trick is left as an exercise for the reader.
October 04, 2025
Paolo Melchiorre
My DjangoCon US 2025
A summary of my experience at DjangoCon US 2025 told through the posts I published on Mastodon during the conference.
Rodrigo Girão Serrão
TIL #134 – = alignment in string formatting
Today I learned how to use the equals sign to align numbers when doing string formatting in Python.
There are three main alignment options in Python's string formatting:
| Character | Meaning |
|---|---|
| < | align left |
| > | align right |
| ^ | centre |
However, numbers have a fourth option: =.
On the surface, it looks like it doesn't do anything:
x = 73
print(f"@{x:10}@")   # @        73@
print(f"@{x:=10}@")  # @        73@
But that's because = influences the alignment of the sign. If I make x negative, we already see something:
x = -73
print(f"@{x:10}@")   # @       -73@
print(f"@{x:=10}@")  # @-       73@
So, the equals sign = aligns a number to the right but aligns its sign to the left.
That may look weird, but I guess that's useful if you want to pad a number with 0s:
x = -73
print(f"@{x:0=10}@")  # @-000000073@
In fact, there is a shortcut for this type of alignment, which is to just put a zero immediately to the left of the width when aligning a number:
x = -73
print(f"@{x:010}@") # @-000000073@
The zero immediately to the left of the width changes the default alignment of numbers to be = instead of >.
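One more example of my own (not from the TIL): since the format spec is [[fill]align][sign][width], the = alignment composes with an explicit fill character and a sign option:

x = 73
print(f"@{x:*=+10}@")  # @+*******73@  (sign first, then fill, then digits)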
October 03, 2025
Luke Plant
Breaking “provably correct” Leftpad
I don’t know much about formal methods, so a while back I read Hillel Wayne’s post Let’s prove Leftpad with interest. However:
I know Donald Knuth’s famous quote: “Beware of bugs in the above code; I have only proved it correct, not tried it”
I also know how it turned out that code that had been proved correct harboured a bug not found for decades.
So I thought I’d take a peek and do some testing on these Leftpad implementations that are all “provably correct”.
Methodology
I’ll pick a few, simple, perfectly ordinary inputs at random, and work out what I think the output should be. This is a pretty trivial problem so I’m expecting that all the implementations will match my output. [narrator: He is expecting no such thing]
I’m also expecting that, even if for some reason I’ve made a mistake, all the implementations will at least match each other. [narrator: More lies] They’ve all been proved correct, right?
Here are my inputs and expected outputs. I’m going to pad to a length of 10, and use - as padding so it can be seen and counted more easily than spaces.
| Item | Input | Length | Expected padding | Expected Output |
|---|---|---|---|---|
| 1 | 𝄞 | 1 | 9 | ---------𝄞 |
| 2 | Å | 1 | 9 | ---------Å |
| 3 | 𓀾 | 1 | 9 | ---------𓀾 |
| 4 | אֳֽ֑ | 1 | 9 | ---------אֳֽ֑ |
| 5 | résumé | 6 | 4 | ----résumé |
| 6 | résumé | 6 | 4 | ----résumé |
[“ordinary”, “random” - I think my work here is done…]
I’ve used a monospace font so that the right hand side of the outputs all line up as you’d expect.
Entry 6 is not a mistake, by the way, it just does “e acute” in a different way to entry 5. Nothing to see here, move along…
Implementations
Not all of the implementations were that easy to run. In fact, many of them didn’t take any kind of “string” as input, but vectors or lists or such things, and it wasn’t obvious to me how to pass strings in. So I discounted them.
For the ones I could run, I attempted to do so by embedding the test inputs directly in the program, if possible.
Liquid Haskell
Embedding the characters directly in Haskell source code kept getting me “lexical error in string/character literal”, so I wrote a small driver program that read from a file.
Java
The leftpad function provided didn’t take a string, but a char[]. Thankfully, it’s easy to convert from String objects, using the .toCharArray() function. So I did that.
Lean4
There is a handy online playground, and the implementation had a helpful #eval block that I could modify to get output. You can play with it here.
Rust
The code here had loads of extra lines regarding specs etc. which I stripped so I could easily run it, which worked fine.
More tricky was that the code didn’t take a string, but some Vec<Thingy>. As I know nothing about Rust, I got ChatGPT to tell me how to convert from a string to that. It gave me two options; I picked the one that looked simpler and less <<angry>>. I didn’t deliberately pick the one which made Rust look even worse than all the others, out of peevish resentment for every time someone has rewritten some Python code (my go-to language) in Rust and made it a million times faster – that’s a ridiculous suggestion.
Some competition!
To make things interesting, let’s compare these provably correct implementations with one vibe-coded by ChatGPT, in some random language, like, say, um, Swift. It gave me this code:
import Foundation

func leftPad(_ string: String, length: Int, pad: Character = " ") -> String {
    let paddingCount = max(0, length - string.count)
    return String(repeating: pad, count: paddingCount) + string
}
You can play with it online here.
Results
Here are the results, green for correct and red for … less correct.
| Input | Reference | Java | Haskell | Lean | Rust | Swift |
|---|---|---|---|---|---|---|
| 𝄞 | ---------𝄞 | --------𝄞 | ---------𝄞 | ---------𝄞 | ------𝄞 | ---------𝄞 |
| Å | ---------Å | --------Å | --------Å | --------Å | -------Å | ---------Å |
| 𓀾 | ---------𓀾 | --------𓀾 | ---------𓀾 | ---------𓀾 | ------𓀾 | ---------𓀾 |
| אֳֽ֑ | ---------אֳֽ֑ | ------אֳֽ֑ | ------אֳֽ֑ | ------אֳֽ֑ | --אֳֽ֑ | ---------אֳֽ֑ |
| résumé | ----résumé | ----résumé | ----résumé | ----résumé | --résumé | ----résumé |
| résumé | ----résumé | --résumé | --résumé | --résumé | résumé | ----résumé |
And pivoted the other way around so you can compare individual inputs more easily:
| Language | 𝄞 | Å | 𓀾 | אֳֽ֑ | résumé | résumé |
|---|---|---|---|---|---|---|
| Reference | ---------𝄞 | ---------Å | ---------𓀾 | ---------אֳֽ֑ | ----résumé | ----résumé |
| Java | --------𝄞 | --------Å | --------𓀾 | ------אֳֽ֑ | ----résumé | --résumé |
| Haskell | ---------𝄞 | --------Å | ---------𓀾 | ------אֳֽ֑ | ----résumé | --résumé |
| Lean | ---------𝄞 | --------Å | ---------𓀾 | ------אֳֽ֑ | ----résumé | --résumé |
| Rust | ------𝄞 | -------Å | ------𓀾 | --אֳֽ֑ | --résumé | résumé |
| Swift | ---------𝄞 | ---------Å | ---------𓀾 | ---------אֳֽ֑ | ----résumé | ----résumé |
Comments
Rust, as expected, gets nul points. What can I say?
Vibe-coding with Swift: 💯
Other than that, we can see:
- The only item that all implementations (apart from Rust) get correct is entry 5, the first of the two résumé options.
- Java is mostly consistent with the others, but it appears it doesn’t like musical notation, or Egyptian hieroglyphics (item 3 is “standing mummy”), which seems a little rude.
The score so far:
Fancy-pants languages and formal verification: 0
Vibe-coding it with ChatGPT: 1
Explanation
OK, I’ve had my fun now :-)
(The original “Let’s Prove Leftpad” project was done “because it is funny”, and this post is in the same spirit. I want to be especially clear that I’m not actually a fan of vibe-coding).
What’s actually going on here? There are two main issues, both tied up with the concept of “the length of a string”.
(If you already know enough about Unicode, or don’t care about the details, you can skip to the “What went wrong?” section to continue discussion regarding formal verification).
First:
What is a character?
Strings are composed of “characters”, but what are they?
Most modern computer languages, and all the ones I included above, use Unicode as the basis for answering this. Unicode, at its heart, is a list of “code points” (although it is a bit more than this). A code point, however, is not exactly a character.
Many of the code points you use most often, like Latin Capital Letter A U+0041, are exactly what you think of as a character. But many are not. These include:
- Combining characters, which are used to add accents and marks to other characters. These have some complexity:
  - First, whether you regard the accent as part of the character or not can be a matter of debate. For example, in é, “e acute”, you might think of this as a different letter to e, or an e plus an acute accent. In some languages, this distinction is critical, e.g. in Turkish ç is not just “c with some decoration”, it’s an entirely distinct letter.
  - Second, Unicode often has multiple ways of generating these accented characters, either “pre-composed” as a single code point, or as a combination of multiple code points. So, there is both:
    - é: Latin Small Letter E With Acute U+00E9
    - é: Latin Small Letter E U+0065 + Combining Acute Accent U+0301
- Control characters such as bidi isolation characters.
- And probably more.
So, Unicode has another concept, called the “extended grapheme cluster”, or “user-perceived character”, which more closely maps to what you might think of as a “character”. That’s the concept I’m implicitly using for my claim of what leftpad should output.
Secondly, there is the question of:
How does a programming language handle strings?
Different languages have different fundamental answers to the question of “what is a string?”, different internal representations of them (how the data is actually stored), and different ways of exposing strings to the programmer.
Some languages, especially performance oriented ones, provide little to zero insulation from the internal representation, while others provide a fairly strong abstraction. Some languages, like Haskell, provide multiple string types (which can be used with string literals in your code with the OverloadedStrings extension).
At this point, as well as “code points”, we’ve got to consider “encodings”. If you have a code point, even a “simple” one like U+0041 (A), you have said nothing about how you are going to store or transmit that data. An encoding is a system for doing that. Two relevant ones here:
UTF-8 is probably the most common/popular one. It uses anything from 1 to 4 bytes to express code points. It has lots of useful properties, but an important one is backwards compatibility with ASCII: if you happen to be limited to the 128 characters found in ASCII, then UTF-8 encoded Unicode text is one byte per code point, and is byte-for-byte the same as ASCII.
UTF-16 is an encoding where most code points (specifically those in the Basic Multilingual Plane or BMP) take up 2 bytes, and the remainder can be specified using 4 bytes and a system called “surrogate pairs”.
UTF-16 exists because originally Unicode had fewer than 65,536 code points, meaning you could represent every code point with just two bytes, and it was thought we would never need more.
In terms of languages today, with some simplification we can say the following:
In Haskell, Lean, Python, and many other languages, strings are sequences of Unicode code points.
There is little attempt to hide the idea of code points from you, although in some, like Python, the internal representation itself might be hidden (see PEP 393).
In Java and Javascript, strings are sequences of UTF-16 items.
Originally strings were intended to be sequences of code points, but when Unicode grew, to avoid breaking all existing Java code, it morphed into the UTF-16 compromise.
In Rust, strings are UTF-8 encoded Unicode code points (see String).
Rust does also have an easily accessible chars method/concept, which corresponds to a Unicode code point. I didn’t use this above - Rust would have behaved the same as Haskell/Lean if I had.
In Swift, strings are sequences of user-perceived characters.
I’m not a user of Swift, but from the docs it appears to do a pretty good job of abstracting away from the “sequence of code points” paradigm. For example, iterating over a string gets you the “characters” (i.e. extended grapheme clusters) one by one, and the .count property gives you the number of such characters. It does also have a .length property that gives you the number of code points.
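To make these length paradigms concrete, here is a quick Python check of my own (Python's len counts code points; the encode calls approximate Java's UTF-16 units and the byte length my Rust invocation used; the standard library has no built-in grapheme-cluster count):

clef = "\U0001D11E"             # 𝄞 MUSICAL SYMBOL G CLEF (one code point)
resume_1 = "r\u00e9sum\u00e9"   # precomposed é
resume_2 = "re\u0301sume\u0301" # e followed by combining acute accent

for s in (clef, resume_1, resume_2):
    print(
        len(s),                           # code points: 1, 6, 8
        len(s.encode("utf-16-le")) // 2,  # UTF-16 units: 2, 6, 8
        len(s.encode("utf-8")),           # UTF-8 bytes: 4, 8, 10
    )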
Putting them together
These differences, between them, explain the differences in output above. In more detail:
- The treble clef 𝄞 is a single code point, but it is outside the BMP and requires 2 UTF-16 items. So Java considers the length to be 2, and only 8 padding characters were added, in contrast to other languages.
- I created the Å using two code points (although there is a pre-composed version of this character).
- The אֳֽ֑ is Hebrew, and is composed of an Aleph followed by multiple vowel marks and a cantillation mark, bringing it up to 4 code points.
- résumé was spelled in two different ways, one with precomposed é, which is one code point, the other with combining characters where each é requires two code points.
- None of the inputs was wholly in the ASCII range, so encoding them as UTF-8 requires more bytes, which is why Rust (as I used it) behaved as it did.
What went wrong?
For me, the biggest issue is not the “code points” vs “characters” debate, which is responsible for most of the variation shown, but the issue that resulted in the difference in the Java output i.e. UTF-16. All of the others (if I hadn’t stitched up Rust) would have resulted in the same output at least.
Apparently, nothing in the process of doing the formal verification forced the implementations to converge, and I think it is pretty fair to conclude that at least one of the implementations must be faulty, given that they produce different output.
So what went wrong?
Lies, damned lies and natural language
English (or any natural language) is at the heart of the problem here. We should start with the phrase “provably correct”. It has a technical definition, but I’m not convinced those English words help us. The post accompanying the LiquidHaskell entry for Leftpad puts it this way:
My eyes roll whenever I read the phrase “proved X (a function, a program) correct”.
There is no such thing as “correct”.
There are only “specifications” or “properties”, and proofs that ensure that your code matches those specifications or properties.
For these reasons, I think I’d prefer talking about “formally verified” functions – it at least prompts you to ask “what does that mean”, and maybe suggests that you should be thinking about what, specifically, has been verified.
The next bit of English to trip us up is “the length of a string”. It’s extremely easy to imagine this is an easy concept, but in the presence of Unicode it really isn’t.
Hillel’s original informal requirements don’t actually use that phrase; instead they use the pseudo-code max(n, len(str)). Looking at the implementations, it appears people have subconsciously interpreted this as “the length of the string”, and then assumed that the length or size functions their language provides do “the right thing”.
We could conclude this is in fact a problem with informal requirements – it was at the level of interpreting those requirements that this went wrong. Therefore, we need more formal specifications and verification, not less. But I don’t think this gets round the fact that we’ve got to translate at some point, and at that point you’ve got trouble.
What is correct?
The issue I haven’t addressed so far is whether my reference output and the Swift implementation are actually “correct”. The reality is that you can make arguments for different things.
Implicitly, I’m arguing that left pad should be used for visual alignment in a fixed width context, and the implementation that does the best at that is the best one. I think that is a pretty reasonable case. But I’m sure you could make a case for other output – there isn’t actually anything that says what left pad should be used for. It’s possible that there are use cases where “the language’s underlying concept of the length of a string, whatever that may be” is the most important thing.
In addition, I was hiding the fact that “fixed width” is yet another lie:
I was originally going to use a flag character like 🏴 as one of my inputs, which is a single “extended grapheme cluster” that uses no fewer than 7 code points. It also results in 14 UTF-16 units in Java. The problem was that this character, like most emojis and many other characters from wide scripts like Chinese, takes up a double width even with a monospace font.
To maintain the subterfuge of “look how these all line up neatly in the correct output”, I was forced to use other examples. In other words, the example use case I was relying on to prove that these leftpad implementations were broken, is itself a broken concept. But I would still maintain that my reference output is closer to what you would expect leftpad to do.
A big point is this: even if we argue that a given implementation is “correct” (in that it does what its specifications say it does), that doesn’t mean you are using it correctly. Are you using it for its intended purpose and context? That seems like a really hard question to answer even for leftpad, and many other real world functions are similar.
So, I’m not sure what my final conclusion is, other than “programming is hard, ~~let’s go shopping~~ let’s eat chocolate” (alternative suggested by my wife, that’s my plan for the evening then).
Confessions and corrections
The Swift implementation was indeed written by ChatGPT, and it got it right first time, with just the prompt “Implement leftpad in Swift”. However:
Swift is the only language I know where an implementation that does what I wanted it to do is that simple.
I followed up by getting ChatGPT to produce a Python version, and it had all the same problems as Haskell/Lean and similar.
I noticed that Swift doesn’t calculate string lengths the way I would need for my use case for some characters, such as Zero Width Space U+200B and Right-To-Left Isolate U+2067, which I would need to count as zero length.
As mentioned, the other way to use the Rust version has the same behaviour as the Haskell/Lean/etc versions. ChatGPT did actually point out to me that this way was better, and the other way (the one I used) was adequate only if the input was limited to ASCII.
I do feel slightly justified in my treatment of this solution however - the provided code didn’t give a function that actually took a string, nor did it force you to use it in a specific way. It dodged the hard part, so I punished it.
Response
Hillel was kind enough to look at this post, and had this response to add:
In general, formally verified code can “go wrong” in two ways: the proven properties don’t match what we need, or the proof depends on assumptions that are not true in practice. This is a good example of the former. An example of the latter would be the assumption that the output string is storable in memory. None of the formally verified functions will correctly render leftpad("-", 1e300, "foo"). This is why we always need to be careful when talking about “proving correctness”. In formal methods, “correct” always means “conforms to a certain specification under certain assumptions”, which is very different from the colloquial use of “correct” (does what you want and doesn’t have bugs).
He also pointed out that padding/alignment functionality available in standard libraries, like Python’s Format Specification Mini-Language and Javascript’s padStart, has similar issues.
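For example (a quick check of my own), Python's own padding counts code points, so the two résumé spellings from above pad differently:

precomposed = "r\u00e9sum\u00e9"   # 6 code points
combining = "re\u0301sume\u0301"   # 8 code points
print(precomposed.rjust(10, "-"))  # ----résumé
print(combining.rjust(10, "-"))    # --résumé (only 2 dashes: len() sees 8)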
Mariatta
Disabling Signup in Django allauth
Django allauth
Django allauth is a popular third party package that provides a lot of functionality for handling user authentication, with support for social authentication, email verification, multi-factor authentication, and more.
It is a powerful library that greatly expands the built-in Django authentication system. It comes with its own basic forms and models for user registration, login, logout, and password management.
I like using it because often I just want to get a new Django project up and running quickly without having to write up all the authentication-related views, forms, and templates myself. I’m using django-allauth in the PyLadiesCon Portal, and in my personal project Secret Codes.
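The aggregated excerpt ends before the how-to. For reference, django-allauth's documented extension point for this is the account adapter; a minimal sketch, with the module path and class name being my own choices:

# myproject/adapters.py
from allauth.account.adapter import DefaultAccountAdapter

class NoSignupAccountAdapter(DefaultAccountAdapter):
    def is_open_for_signup(self, request):
        # Returning False disables new registrations;
        # existing users can still log in as normal.
        return False

# settings.py
# ACCOUNT_ADAPTER = "myproject.adapters.NoSignupAccountAdapter"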
Real Python
The Real Python Podcast – Episode #268: Advice on Beginning to Learn Python
What's changed about learning Python over the last few years? What new techniques and updated advice should beginners have as they start their journey? This week on the show, Stephen Gruppetta and Martin Breuss return to discuss beginning to learn Python.
October 01, 2025
Real Python
Python 3.14 Preview: Better Syntax Error Messages
Python 3.14 brings a fresh batch of improvements to error messages that’ll make debugging feel less like detective work and more like having a helpful colleague point out exactly what went wrong. These refinements build on the clearer tracebacks introduced in recent releases and focus on the mistakes Python programmers make most often.
By the end of this tutorial, you’ll understand that:
- Python 3.14’s improved error messages help you debug code more efficiently.
- There are ten error message enhancements in 3.14 that cover common mistakes, from keyword typos to misusing async with.
- These improvements can help you catch common coding mistakes faster.
- Python’s error messages have evolved from version 3.10 through 3.14.
- Better error messages accelerate your learning and development process.
There are many other improvements and new features coming in Python 3.14.
To try any of the examples in this tutorial, you need to use Python 3.14. The tutorials How to Install Python on Your System: A Guide and Managing Multiple Python Versions With pyenv walk you through several options for adding a new version of Python to your system.
Get Your Code: Click here to download the free sample code that you’ll use to learn about the error message improvements in Python 3.14.
Take the Quiz: Test your knowledge with our interactive “Python 3.14 Preview: Better Syntax Error Messages” quiz. You’ll receive a score upon completion to help you track your learning progress. The quiz explores how Python 3.14 improves error messages with clearer explanations, actionable hints, and better debugging support for developers.
Better Error Messages in Python 3.14
When Python 3.9 introduced a new parsing expression grammar (PEG) parser for the language, it opened the door to better error messages in Python 3.10. Python 3.11 followed with even better error messages, and that same effort continued in Python 3.12.
Python 3.13 refined these messages further with improved formatting and clearer explanations, making multiline errors more readable and adding context to complex error situations. These improvements build upon PEP 657, which introduced fine-grained error locations in tracebacks in Python 3.11.
Now, Python 3.14 takes another step forward, alongside other significant changes like PEP 779, which makes the free-threaded build officially supported, and PEP 765, which disallows using return, break, or continue to exit a finally block. What makes the error message enhancements in Python 3.14 special is their focus on common mistakes.
Each improved error message follows a consistent pattern:
- It identifies the mistake.
- It explains what’s wrong in plain English.
- It suggests a likely fix when possible.
The error message improvements in Python 3.14 cover SyntaxError, ValueError, and TypeError messages. Here’s an overview of the ten improvements you’ll explore in this tutorial:
1. Keyword Typos (SyntaxError)
   ⭕️ Previous: invalid syntax
   ✅ Improved: invalid syntax. Did you mean 'for'?
2. elif After else (SyntaxError)
   ⭕️ Previous: invalid syntax
   ✅ Improved: 'elif' block follows an 'else' block
3. Conditional Expressions (SyntaxError)
   ⭕️ Previous: invalid syntax
   ✅ Improved: expected expression after 'else', but statement is given
4. String Closure (SyntaxError)
   ⭕️ Previous: invalid syntax
   ✅ Improved: invalid syntax. Is this intended to be part of the string?
5. String Prefixes (SyntaxError)
   ⭕️ Previous: invalid syntax
   ✅ Improved: 'b' and 'f' prefixes are incompatible
6. Unpacking Errors (ValueError)
   ⭕️ Previous: too many values to unpack (expected 2)
   ✅ Improved: too many values to unpack (expected 2, got 3)
7. as Targets (SyntaxError)
   ⭕️ Previous: invalid syntax
   ✅ Improved: cannot use list as import target
8. Unhashable Types (TypeError)
   ⭕️ Previous: unhashable type: 'list'
   ✅ Improved: cannot use 'list' as a dict key (unhashable type: 'list')
9. math Domain Errors (ValueError)
   ⭕️ Previous: math domain error
   ✅ Improved: expected a nonnegative input, got -1.0
10. async with Errors (TypeError)
    ⭕️ Previous: 'TaskGroup' object does not support the context manager protocol
    ✅ Improved: object does not support the context manager protocol...Did you mean to use 'async with'?
The examples in this tutorial show both the error messages from Python 3.13 and the improved messages in Python 3.14, so you can see the differences even if you haven’t installed 3.14 yet.
For a general overview of Python’s exception system, see Python’s Built-in Exceptions: A Walkthrough With Examples, or to learn about raising exceptions, check out Python’s raise: Effectively Raising Exceptions in Your Code.
Clearer Keyword Typo Suggestions
A typo is usually a tiny mistake, sometimes just one extra letter, but it’s enough to break your code completely. Typos that involve Python keywords are among the most common syntax errors in Python code.
In Python 3.13 and earlier, a typo in a keyword produces a generic syntax error that offers no guidance about what might be wrong:
Python 3.13
>>> forr i in range(5):
File "<python-input-0>", line 1
forr i in range(5):
^
SyntaxError: invalid syntax
The error points to the problem area with a helpful caret symbol (^), which at least tells you where Python found the error. However, it doesn’t suggest what you might have meant. The message “invalid syntax” is technically correct but not particularly helpful in practice.
You have to figure out on your own that forr should actually be for. It might be obvious once you spot it, but finding that single wrong letter can take a surprisingly long time when you’re focused on logic rather than spelling.
Python 3.14 recognizes when you type something close to a Python keyword and offers a helpful suggestion that immediately points you to the fix:
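Based on the improved message quoted in the overview above, the same session under Python 3.14 should look roughly like this (reconstructed output, not captured from a 3.14 build):
Python 3.14
>>> forr i in range(5):
  File "<python-input-0>", line 1
    forr i in range(5):
    ^^^^
SyntaxError: invalid syntax. Did you mean 'for'?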
Read the full article at https://realpython.com/python314-error-messages/ »
Django Weblog
Django security releases issued: 5.2.7, 5.1.13, and 4.2.25
In accordance with our security release policy, the Django team is issuing releases for Django 5.2.7, Django 5.1.13, and Django 4.2.25. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
CVE-2025-59681: Potential SQL injection in QuerySet.annotate(), alias(), aggregate(), and extra() on MySQL and MariaDB
QuerySet.annotate(), QuerySet.alias(), QuerySet.aggregate(), and QuerySet.extra() methods were subject to SQL injection in column aliases, using a suitably crafted dictionary, with dictionary expansion, as the **kwargs passed to these methods on MySQL and MariaDB.
Thanks to sw0rd1ight for the report.
This issue has severity "high" according to the Django security policy.
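As a hedged sketch of the risky pattern the advisory describes (the model and alias key are invented; patched versions validate aliases and raise ValueError), the lesson is to never build annotation kwargs from untrusted input:

from django.db.models import Count
from myapp.models import Article  # hypothetical model for illustration

# The **kwargs keys become SQL column aliases. CPython permits
# non-identifier string keys through dictionary expansion, which is
# how a crafted alias could reach the SQL on MySQL and MariaDB:
untrusted_aliases = {'total") FROM evil; --': Count("id")}
Article.objects.annotate(**untrusted_aliases)  # patched Django rejects this alias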
CVE-2025-59682: Potential partial directory-traversal via archive.extract()
The django.utils.archive.extract() function, used by startapp --template and startproject --template, allowed partial directory-traversal via an archive with file paths sharing a common prefix with the target directory.
Thanks to stackered for the report.
This issue has severity "low" according to the Django security policy.
Affected supported versions
- Django main
- Django 6.0 (currently at alpha status)
- Django 5.2
- Django 5.1
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 6.0 (currently at alpha status), 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2025-59681: Potential SQL injection in QuerySet.annotate(), alias(), aggregate(), and extra() on MySQL and MariaDB
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 5.1 branch
- On the 4.2 branch
CVE-2025-59682: Potential partial directory-traversal via archive.extract()
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 5.1 branch
- On the 4.2 branch
The following releases have been issued
- Django 5.2.7 (download Django 5.2.7 | 5.2.7 checksums)
- Django 5.1.13 (download Django 5.1.13 | 5.1.13 checksums)
- Django 4.2.25 (download Django 4.2.25 | 4.2.25 checksums)
The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
Real Python
Quiz: Python 3.14 Preview: Better Syntax Error Messages
This quiz helps you get familiar with the upgraded error messages in Python 3.14. You’ll review new keyword typo suggestions, improved math errors, string prefix feedback, and more.
Put your understanding to the test and discover how Python’s improved error messages can help you debug code faster.