Planet Python
Last update: October 09, 2025 09:43 PM UTC
October 09, 2025
Everyday Superpowers
Why I switched from HTMX to Datastar
In 2022, David Guillot delivered an inspiring DjangoCon Europe talk, showcasing a web app that looked and felt as dynamic as a React app. Yet he and his team had done something bold. They converted it from React to HTMX, cutting their codebase by almost 70% while significantly improving its capabilities.
Since then, teams everywhere have discovered the same thing: turning a single-page app into a multi-page hypermedia app often slashes lines of code by 60% or more while improving both developer and user experience.
I saw similar results when I switched my projects from HTMX to Datastar. It was exciting to reduce my code while building real-time, multi-user applications without needing WebSockets or complex frontend state management.
While preparing my FlaskCon 2025 talk, I hit a wall. I was juggling HTMX and AlpineJS to keep pieces of my UI in sync, but they fell out of step. I lost hours debugging why my component wasn’t updating. Neither library communicates with the other. Since they are different libraries created by different developers, you are the one responsible for helping them work together.
Managing the dance of initializing components at different times and orchestrating events between them meant writing more code than I wanted and spending more time than I could spare.
Knowing that Datastar offered the capabilities of both libraries in a smaller download, I thought I’d give it a try. It handled the job without breaking a sweat, and the resulting code was much easier to understand.
I appreciate that there’s less code to download and maintain. Having a library handle all of this in under 11 KB is great for improving page load performance, especially for users on mobile devices. The less you need to download, the better off you are.
But that's just the starting point.
As I incorporated Datastar into my project at work, I began to appreciate Datastar’s API. It feels significantly lighter than HTMX. I find that I need to add fewer attributes to achieve the desired results.
For example, most interactions with HTMX require you to create an attribute to define the URL to hit, what element to target with the response, and then you might need to add more to customize how HTMX behaves, like this:
<span hx-target="#rebuild-bundle-status-button"
hx-select="#rebuild-bundle-status-button"
hx-swap="outerHTML"
hx-trigger="click"
hx-get="/rebuild/status-button"></span>
One doesn’t always need all of these, but I find it common to have two or three attributes every time[2]{And then there are the times I need to remember to look up the ancestry chain to see if any attribute changes the way I’m expecting things to work. Those are confusing bugs when they happen!}.
With Datastar, I regularly use just one attribute, like this:
<span data-on-click="@get('/rebuild/status-button')"></span>
This gives me less to think about when I return months later and need to recall how this works.
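To make the round trip concrete, here’s a rough sketch of what the server side of that single attribute could look like. It assumes a Django view and the datastar-py helpers shown later in this post; the template name is made up.

from django.template.loader import render_to_string

from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)


def rebuild_status_button(request):
    # Render a fragment whose root element has
    # id="rebuild-bundle-status-button" (hypothetical template name).
    html = render_to_string("status_button.html", {"status": "rebuilding"})
    # Datastar patches the element on the page whose id matches the
    # root element of the returned fragment.
    return DatastarResponse([SSE.patch_elements(html)])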
The primary difference between HTMX and Datastar is that HTMX is a front-end library that aims to advance the HTML specification, while Datastar is a server-driven library for building high-performance, web-native, live-updating web applications.
In HTMX, you describe its behavior by adding attributes to the element that triggers the request, even if it updates something far away on the page. That’s powerful, but it means your logic is scattered across multiple layers. Datastar flips that: the server decides what should change, keeping all your update logic in one place.
To cite an example from HTMX’s documentation:
<div>
<div id="alert"></div>
<button hx-get="/info"
hx-select="#info-details"
hx-swap="outerHTML"
hx-select-oob="#alert">
Get Info!
</button>
</div>
When the button is pressed, it sends a GET request to `/info`, replaces the button with the element in the response that has the ID 'info-details', and then retrieves the element in the response with the ID 'alert', replacing the element with the same ID on the page.
This is a lot for that button element to know. To author this code, you need to know what the server is going to return, which is defined outside the HTML you’re editing. This is where HTMX loses the “locality of behavior” I like so much.
Datastar, on the other hand, expects the server to define the behavior, and in my experience that works better.
To replicate the behavior above, you have options. The first option keeps the HTML similar to above:
<div>
<div id="alert"></div>
<button id="info-details"
data-on-click="@get('/info')">
Get Info!
</button>
</div>
In this case, the server can return an HTML string with two root elements that have the same IDs as the elements they’re updating:
<p id="info-details">These are the details you are looking for…</p>
<div id="alert">Alert! This is a test.</div>
I love this option because it’s simple and performant.
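Here’s a hedged sketch of what that first option could look like on the server, again assuming Django and the datastar-py helpers shown later in this post, with a made-up template that contains both root elements (#info-details and #alert):

from django.template.loader import render_to_string

from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)


def info(request):
    # info_and_alert.html is hypothetical; it renders both top-level
    # elements, #info-details and #alert.
    html = render_to_string("info_and_alert.html", {
        "details": "These are the details you are looking for…",
        "alert": "Alert! This is a test.",
    })
    # Each top-level element in the fragment is matched by id against the
    # page and patched in a single response.
    return DatastarResponse([SSE.patch_elements(html)])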
A better option would change the HTML to treat it as a component.
What is this component? It appears to be a way for the user to get more information about a specific item.
What happens when the user clicks the button? It seems like either the information appears, or there is no information and we render an error instead. Either way, the component becomes static.
Maybe we could split the component into one template per state. First, the placeholder:
<!-- info-component-placeholder.html -->
<div id="info-component">
<button data-on-click="@get('/product/{{product.id}}/info')">
Get Info!
</button>
</div>
Then the server could render the information the user requests…
<!-- info-component-get.html -->
<div id="info-component">
{% if alert %}<div id="alert">{{ alert }}</div>{% endif %}
<p>{{product.additional_information}}</p>
</div>
…and Datastar will update the page to reflect the changes.
This particular example is a little wonky, but I hope you get the idea. Thinking at a component level is better as it prevents you from entering an invalid state or losing track of the user's state.
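For completeness, the server side of the component approach could look something like this. It’s a sketch, not the article’s code: it assumes Django, the two templates above, and a hypothetical Product model.

from django.shortcuts import get_object_or_404
from django.template.loader import render_to_string

from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)

from .models import Product  # hypothetical model


def product_info(request, product_id):
    product = get_object_or_404(Product, pk=product_id)
    context = {"product": product}
    if not product.additional_information:
        context["alert"] = "No additional information is available."
    # The fragment's root id (info-component) matches the placeholder on
    # the page, so the whole component is swapped to its new state.
    html = render_to_string("info-component-get.html", context)
    return DatastarResponse([SSE.patch_elements(html)])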
One of the amazing things from David Guillot's talk is how his app updated the count of favorited items even though that element was very far away from the component that changed the count.
David’s team accomplished that by having HTMX trigger a JavaScript event, which in turn triggered the remote component to issue a GET request to update itself with the most up-to-date count.
With Datastar, you can update multiple components at once, even in a synchronous function.
If we have a component that allows someone to add an item to a shopping cart:
<form id="purchase-item"
data-on-submit="@post('/add-item', {contentType: 'form'})">"
>
<input type=hidden name="cart-id" value="{{cart.id}}">
<input type=hidden name="item-id" value="{{item.id}}">
<fieldset>
<button data-on-click="$quantity -= 1">-</button>
<label>Quantity
<input name=quantity type=number data-bind-quantity value=1>
</label>
<button data-on-click="$quantity += 1">+</button>
</fieldset>
<button type=submit>Add to cart</button>
{% if msg %}
<p class=message>{{msg}}</p>
{% endif %}
</form>
And another one that shows the current count of items in the cart:
<div id="cart-count">
<svg viewBox="0 0 10 10" xmlns="http://www.w3.org/2000/svg">
<use href="#shoppingCart">
</svg>
{{count}}
</div>
Then a developer can update them both in the same request. This is one way it could look in Django:
from datastar_py.consts import ElementPatchMode
from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)
from django.template.loader import render_to_string


def add_item(request):
    # skipping all the important state updates
    return DatastarResponse([
        SSE.patch_elements(
            render_to_string('purchase-item.html', context=dict(cart=cart, item=item, msg='Item added!'))
        ),
        SSE.patch_elements(
            render_to_string('cart-count.html', context=dict(count=item_count))
        ),
    ])
Being a part of the Datastar Discord, I appreciate that Datastar isn't just a helper script. It’s a philosophy about building apps with the web’s own primitives, letting the browser and the server do what they’re already great at.
Where HTMX is trying to push the HTML spec forward, Datastar is more interested in promoting the adoption of web-native features, such as CSS view transitions, Server-Sent Events, and web components, where appropriate.
This has been a massive eye-opener for me, as I’ve long wanted to leverage each of these technologies, and now I’m seeing the benefits.
One of the biggest wins I achieved with Datastar was by refactoring a complicated AlpineJS component and extracting a simple web component that I reused in multiple places[3]{I’ll talk more about this in an upcoming post.}.
I especially appreciate this because there are times when it's best to rely on JavaScript to accomplish a task. But it doesn't mean you have to reach for a tool like React to achieve it. Creating custom HTML elements is a great pattern to accomplish tasks with high locality of behavior and the ability to reuse them across your app.
However, Datastar provides you with even more capabilities.
Apps built with collaboration as a first-class feature stand out from the rest, and Datastar is up to the challenge.
To accomplish this with HTMX, most developers either "pull" information from the server by polling every few seconds or write custom WebSocket code, which increases complexity.
Datastar uses a simple web technology called Server-Sent Events (SSE) to allow the server to "push" updates to connected clients. When something changes, such as a user adding a comment or a status change, the server can immediately update browsers with minimal additional code.
You can now build live dashboards, admin panels, and collaborative tools without crafting custom JavaScript. Everything flows from the server, through HTML.
Additionally, if a client's connection is interrupted, the browser will automatically attempt to reconnect without requiring additional code, and it can even notify the server, "This is the last event I received." It's wonderful.
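Here’s a rough sketch of what pushing updates over a long-lived SSE connection could look like with datastar-py in Django. I’m assuming DatastarResponse accepts a generator of events for long-lived streams (check the datastar-py docs for the exact pattern), and wait_for_next_comment() is a made-up stand-in for real change detection.

import time

from django.template.loader import render_to_string

from datastar_py.django import (
    DatastarResponse,
    ServerSentEventGenerator as SSE,
)


def wait_for_next_comment():
    # Stand-in for real change detection (database LISTEN/NOTIFY, a
    # message queue, etc.).
    time.sleep(5)
    return {"author": "demo", "body": "Hello from the server!"}


def comment_stream(request):
    def events():
        while True:
            comment = wait_for_next_comment()
            html = render_to_string("comment.html", {"comment": comment})
            # Push the freshly rendered component to the connected browser
            # holding this stream open.
            yield SSE.patch_elements(html)

    return DatastarResponse(events())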
Being a part of the Datastar community on Discord has helped me appreciate the Datastar vision of making web apps. They aim to have push-based UI updates, reduce complexity, and leverage tools like web components to handle more complex situations locally. It’s common for the community to help newcomers by helping them realize they’re overcomplicating things.
Here are some of the tips I’ve picked up:
- Don’t be afraid to re-render the whole component and send it down the pipe. It’s easier, it probably won’t affect performance too much, you get better compression ratios, and it’s incredibly fast for the browser to parse HTML strings.
- The server is the source of truth and is more powerful than the browser. Let it handle the majority of the state. You probably don’t need the reactive signals as much as you think you do.
- Web components are great for encapsulating logic into a custom element with high locality of behavior. A great example of this is the star field animation in the header of the Datastar website. The `<ds-starfield>` element encapsulates all the code to animate the star field and exposes three attributes to change its internal state. Datastar drives the attributes whenever the range input changes or the mouse moves over the element.
But what I’m most excited about are the possibilities that Datastar enables. The community is routinely creating projects that push well beyond the limits experienced by developers using other tools.
The examples page includes a database monitoring demo that leverages Hypermedia to significantly improve the speed and memory footprint of a demo presented at a JavaScript conference.
The one million checkbox experiment was too much for the server it started on. Anders Murphy used Datastar to create one billion checkboxes on an inexpensive server.
But the one that most inspired me was a web app that displayed data from every radar station in the United States. When a blip changed on a radar, the corresponding dot in the UI would change within 100 milliseconds. This means that *over 800,000 points are being updated per second*. Additionally, the user could scrub back in time for up to an hour (with under a 700 millisecond delay). Can you imagine this as a Hypermedia app? This is what Datastar enables.
I’m still in what I consider my discovery phase of Datastar. Replacing the standard HTMX functionality of ajaxing updates to a UI was quick and easy to implement. Now I’m learning and experimenting with different patterns to use Datastar to achieve more and more.
For decades, I’ve been interested in ways I could provide better user experiences with real-time updates, and I love that Datastar enables me to do push-based updates, even in synchronous code.
HTMX filled me with so much joy when I started using it. But I haven’t felt like I lost anything since switching to Datastar. In fact, I feel like I’ve gained so much more.
If you’ve ever felt the joy of using HTMX, I bet you’ll feel the same leap again with Datastar. It’s like discovering what the web was meant to do all along.
Read more...
Mike Driscoll
An Intro to Python 3.14’s New Features
Python 3.14 came out this week and has many new features and improvements. For the full details behind the release, the documentation is the best source. However, you will find a quick overview of the major changes here. As with most Python releases, backwards compatibility is rarely broken. However, there has been a push to […]
The post An Intro to Python 3.14’s New Features appeared first on Mouse Vs Python.
October 08, 2025
Real Python
Python 3.14: Cool New Features for You to Try
Learn what's new in Python 3.14, including an upgraded REPL, template strings, lazy annotations, and subinterpreters, with examples to try in your code.
Quiz: Python 3.14: Cool New Features for You to Try
In this quiz, you'll test your understanding of the new features introduced in Python 3.14. By working through this quiz, you'll review the key updates and improvements in this version of Python.
October 07, 2025
PyCoder’s Weekly
Issue #703: PEP 8, Error Messages in Python 3.14, splitlines(), and More (Oct. 7, 2025)
Python Morsels
Python 3.14's best new features
Python 3.14 includes syntax highlighting, improved error messages, enhanced support for concurrency and parallelism, t-strings and more!

Table of contents
- Very important but not my favorites
- Python 3.14: now in color!
- My tiny contribution
- Beginner-friendly error messages
- Tab completion for import statements
- Standard library improvements
- Cleaner multi-exception catching
- Concurrency improvements
- External debugger interface
- T-strings (template strings)
- Try out Python 3.14 yourself
Very important but not my favorites
I'm not going to talk about the experimental free-threading mode, the just-in-time compiler, or other performance improvements. I'm going to focus on features that you can use right after you upgrade.
Python 3.14: now in color!
One of the most immediately …
Read the full article: https://www.pythonmorsels.com/python314/
Real Python
What's New in Python 3.14
Covers Python 3.14's key changes: free-threading, subinterpreters, t-strings, lazy annotations, new REPL features, and improved error messages.
Seth Michael Larson
Is the "Nintendo Classics" collection a good value?
October 06, 2025
Ari Lamstein
Visualizing 25 Years of Border Patrol Data with Python
I recently had the chance to speak with a statistician at the Department of Homeland Security (DHS) about my Streamlit app that visualizes trends in US Immigration Enforcement data (link). Our conversation helped clarify a question I’d raised in an earlier post—one that emerged from a surprising pattern in the data. A Surprising Pattern The […]
Real Python
It's Almost Time for Python 3.14 and Other Python News
The final release of Python 3.14 is almost here! Plus, there's Django 6.0 alpha, key PEP updates, PSF board results, and fresh Real Python resources.
Brian Okken
pytest-check 2.6.0 release
There’s a new release of pytest-check. Version 2.6.0.
This is a cool contribution from the community.
The problem
In July, bluenote10 reported that `check.raises()` doesn’t behave like `pytest.raises()` in that the AssertionError returned from `check.raises()` doesn’t have a queryable `value`.
Example of `pytest.raises()`:
with pytest.raises(Exception) as e:
    do_something()
assert str(e.value) == "<expected error message>"
We’d like `check.raises()` to act similarly:
with check.raises(Exception) as e:
    do_something()
assert str(e.value) == "<expected error message>"
But that didn’t work prior to 2.6.0. The issue was that the value returned from `check.raises()` didn’t have any `.value` attribute.
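Here’s a minimal test showing the behaviour this release enables; it’s a sketch assuming pytest-check 2.6.0 or later is installed.

from pytest_check import check


def do_something():
    raise ValueError("<expected error message>")


def test_error_message_is_queryable():
    with check.raises(ValueError) as e:
        do_something()
    # As of pytest-check 2.6.0 the captured exception exposes .value,
    # mirroring pytest.raises().
    assert str(e.value) == "<expected error message>"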
Talk Python to Me
#522: Data Sci Tips and Tricks from CodeCut.ai
Today we’re turning tiny tips into big wins. Khuyen Tran, creator of CodeCut.ai, has shipped hundreds of bite-size Python and data science snippets across four years. We dig into open-source tools you can use right now, cleaner workflows, and why notebooks and scripts don’t have to be enemies. If you want faster insights with fewer yak-shaves, this one’s packed with takeaways you can apply before lunch. Let’s get into it.
Rodrigo Girão Serrão
Functions: a complete reference | Pydon't 🐍
This article serves as a complete reference for all the non-trivial things you should know about Python functions.
Functions are the basic building block of any Python program you write, and yet, many developers don't leverage their full potential. You will fix that by reading this article.
Knowing how to use the keyword `def` is just the first step towards knowing how to define and use functions in Python. As such, this Pydon't covers everything else there is to learn:
- How to structure and organise functions.
- How to work with a function signature, including parameter order, `*args` and `**kwargs`, and the special syntax introduced by `*` and `/`.
- What anonymous functions are, how to define them with the keyword `lambda`, and when to use them.
- What it means for functions to be objects and how to leverage that in your code.
- How closures seem to defy a fundamental rule of scoping in Python.
- How to leverage closures to create the decorator pattern.
- What the keyword `yield` is and what generator functions are.
- What the keyword `async` is and what asynchronous functions are.
- How partial function application allows you to create new functions from existing functions.
- How the term “function” is overloaded and how you can create your own objects that behave like functions.
Bookmark this reference for later or download the “Pydon'ts – write elegant Python code” ebook for free. The ebook contains this chapter and many others, including hundreds of tips to help you write better Python code. Download the ebook “Pydon'ts – write elegant Python code” here.
What goes into a function and what doesn't
Do not overcrowd your functions with logic for four or five different things. A function should do a single thing, and it should do it well, and the name of the function should clearly tell you what your function does.
If you are unsure about whether some piece of code should be a single function or multiple functions, it's best to err on the side of too many functions. That is because a function is a modular piece of code, and the smaller your functions are, the easier it is to compose them together to create more complex behaviours.
Consider the function `process_order` defined below, an exaggerated example that breaks these best practices to make the point clearer. While it is not incredibly long, it does too many things:
def process_order(order):
    # Validate the order:
    for item, quantity, price in order:
        if quantity <= 0:
            raise ValueError(f"Cannot buy 0 or less of {item}.")
        if price <= 0:
            raise ValueError(f"Price must be positive.")

    # Write the receipt:
    total = 0
    with open("receipt.txt", "w") as f:
        for item, quantity, price in order:
            # This week, yoghurts and batteries are on sale.
            if "yoghurt" in item:
                price *= 0.8
            elif "batteries" in item:
                price *= 0.5
            # Write this line of the receipt:
            partial = price * quantity
            f.write(f"{item:>15} --- {quantity:>3}...
October 05, 2025
Paolo Melchiorre
Django: one ORM to rule all databases 💍
Comparing the Django ORM support across official database backends, so you don’t have to learn it the hard way.
Christian Ledermann
Python Code Quality Tools Beyond Linting
The landscape of Python software quality tooling is currently defined by two contrasting forces: high-velocity convergence and deep specialization. The recent, rapid adoption of Ruff has solved the long-standing community problem of coordinating dozens of separate linters and formatters, establishing a unified, high-performance axis for standard code quality.
A second category of tools continues to operate in necessary, but isolated, silos. Tools dedicated to architectural enforcement and deep structural metrics, such as:
- import-linter (Layered architecture enforcement)
- tach (Dependency visualization and enforcement)
- complexipy, radon, lizard (Metrics for overall and cognitive complexity)
- module_coupling_metrics, lcom, and cohesion (Metrics for coupling and class cohesion)
- pyscn - Python Code Quality Analyzer (Module dependencies, clone detection, complexity)
These projects address fundamental challenges of code maintainability, evolvability, and architectural debt that extend beyond the scope of fast, stylistic linting. The success of Ruff now presents the opportunity to foster a cross-tool discussion focused not just on syntax, but on structure.
Specialized quality tools are vital for long-term maintainability and risk assessment. Tools like `import-linter` and `tach` mitigate technical risk by enforcing architectural rules, preventing systemic decay, and reducing change costs. Complexity and cohesion metrics from tools such as `complexipy`, `lcom`, and `cohesion` quantitatively flag overly complex or highly coupled components, acting as early warning systems for technical debt. By analysing the combined outputs, risk assessment shifts to predictive modelling: integrating data from individual tools (e.g., `import-linter` violations, `complexipy` scores) creates a multi-dimensional risk score. Overlaying these results, such as identifying modules that are both low in cohesion and involved in `tach`-flagged dependency cycles, generates a "heat map" of technical debt. This unified approach, empirically validated against historical project data like bug frequency and commit rates, can yield a predictive risk assessment. It identifies modules that are not just theoretically complex but empirically confirmed sources of instability, transforming abstract quality metrics into concrete, prioritized refactoring tasks for the riskiest codebase components.
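As a purely illustrative sketch of that multi-dimensional risk score (the metric names, numbers, and weights below are hypothetical, not the output of import-linter, complexipy, or tach):

metrics_by_module = {
    "billing/invoices.py": {"complexity": 38, "cohesion": 0.21, "in_cycle": True},
    "accounts/models.py": {"complexity": 12, "cohesion": 0.74, "in_cycle": False},
}

WEIGHTS = {"complexity": 0.5, "low_cohesion": 0.3, "in_cycle": 0.2}


def risk_score(m):
    # Normalise each signal to roughly 0-1, then take a weighted sum.
    complexity = min(m["complexity"] / 50, 1.0)
    low_cohesion = 1.0 - m["cohesion"]
    cycle = 1.0 if m["in_cycle"] else 0.0
    return (WEIGHTS["complexity"] * complexity
            + WEIGHTS["low_cohesion"] * low_cohesion
            + WEIGHTS["in_cycle"] * cycle)


for module, m in sorted(metrics_by_module.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{module:22} risk={risk_score(m):.2f}")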
Reasons to Connect
Bring the maintainers and core users of these diverse tools into a shared discussion.
Increasing Tool Visibility and Sustainability: Specialized tools often rely on small, dedicated contributor pools and suffer from knowledge isolation, confining technical debate to their specific GitHub repository. A broader discussion provides these projects with critical outreach, exposure to a wider user base, and a stronger pipeline of new contributors, ensuring their long-term sustainability.
Let's start the conversation on how to 'measure' maintainable and architecturally sound Python code.
And keep Goodhart's law: "When a measure becomes a target, it ceases to be a good measure" in mind ;-)
Daniel Roy Greenfeld
Using pyinstrument to profile Air apps
Quick instructions for a drop-in Air middleware for identifying performance bottlenecks in Air apps
Graham Dumpleton
Lazy imports using wrapt
PEP 810 (explicit lazy imports) was recently released for Python. The idea with this PEP is to add explicit syntax for implementing lazy imports for modules in Python.
lazy import json
Lazily importing modules in Python is not a new idea and there have been a number of packages available to achieve a similar result, they just lacked an explicit language syntax to hide what is actually going on under the covers.
When I saw this PEP it made me realise that a new feature I added into `wrapt` for the upcoming 2.0.0 release can be used to implement a lazy import with little effort.
For those who only know of `wrapt` as a package for implementing Python decorators, it should be known that the ability to implement decorators using the approach it does was merely one outcome of the true reason for `wrapt` existing.
The actual reason `wrapt` was created was to be able to perform monkey patching of Python code.
One key aspect of being able to monkey patch Python code is to be able to have the ability to wrap target objects in Python with a wrapper which acts as a transparent object proxy for the original object. By extending the object proxy type, one can then intercept access to the target object to perform specific actions.
For the purpose of this discussion we can ignore the step of creating a custom object proxy type and look at just how the base object proxy works.
import wrapt
import graphlib
print(type(graphlib))
print(id(graphlib.TopologicalSorter), graphlib.TopologicalSorter)
xgraphlib = wrapt.ObjectProxy(graphlib)
print(type(xgraphlib))
print(id(graphlib.TopologicalSorter), xgraphlib.TopologicalSorter)
In this example we import a module called `graphlib` from the Python standard library. We then access from that module the class `graphlib.TopologicalSorter` and print it out.
When we wrap the same module with an object proxy, the aim is that anything you could do with the original module, could also be done via the object proxy. The output from the above is thus:
<class 'module'>
35852012560 <class 'graphlib.TopologicalSorter'>
<class 'ObjectProxy'>
35852012560 <class 'graphlib.TopologicalSorter'>
verifying that in both cases the `TopologicalSorter` is in fact the same object, even though for the proxy the apparent type is different.
The new feature which has been added for `wrapt` version 2.0.0 is a lazy object proxy. That is, instead of passing the target object to be wrapped when the proxy is created, you pass a function. This function is called to create or otherwise obtain the target object to be wrapped the first time the proxy object is accessed.
Using this feature we can easily implement lazy module importing.
import sys
import wrapt
def lazy_import(name):
    return wrapt.LazyObjectProxy(lambda: __import__(name, fromlist=[""]))
graphlib = lazy_import("graphlib")
print("sys.modules['graphlib'] =", sys.modules.get("graphlib", None))
print(type(graphlib))
print(graphlib.TopologicalSorter)
print("sys.modules['graphlib'] =", sys.modules.get("graphlib", None))
Running this the output is:
sys.modules['graphlib'] = None
<class 'LazyObjectProxy'>
<class 'graphlib.TopologicalSorter'>
sys.modules['graphlib'] = <module 'graphlib' from '.../lib/python3.13/graphlib.py'>
One key thing to note here is that when the lazy import is set up, no changes have been made to `sys.modules`. It is only later, when the module is truly imported, that you see an entry in `sys.modules` for that module name.
Some lazy module importers work by injecting into `sys.modules` a fake module object for the target module. This has to be done right up front when the application is started. Because the fake entry exists, when `import` is later used to import that module it thinks it has already been imported, and thus what is added into the scope where `import` is used is the fake module, with the actual module not being imported at that point.
What then happens is that when code attempts to use something from the module, an overridden `__getattr__` special dunder method on the fake module object gets triggered, which on the first use causes the actual module to then be imported.
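For illustration, a bare-bones version of that fake-module approach might look like the sketch below (this is not wrapt's code, and real lazy importers handle many more edge cases):

import importlib
import sys
import types


class _LazyModule(types.ModuleType):
    def __getattr__(self, attr):
        # First attribute access: drop the fake entry so the real import
        # runs, then delegate to (and cache) the real module.
        real = self.__dict__.get("_real_module")
        if real is None:
            del sys.modules[self.__name__]
            real = importlib.import_module(self.__name__)
            self.__dict__["_real_module"] = real
        return getattr(real, attr)


def register_lazy(name):
    # Install the fake module up front; this change is global to the
    # whole application, which is the criticism mentioned below.
    sys.modules[name] = _LazyModule(name)


register_lazy("graphlib")
import graphlib  # finds the fake entry, so nothing is really imported yet

print(graphlib.TopologicalSorter)  # first use triggers the real import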
That `sys.modules` is modified and a fake module added is one of the criticisms one sees about such lazy module importers. That is, the change they make is global to the whole application, which could have implications such as where side effects of importing a module are expected to be immediate.
With the way the `wrapt` example works above, no global change is required to `sys.modules`, and instead impacts are only local to the scope where the lazy import was made.
Reducing the impacts to just the scope where the lazy import was used is actually one of the goals of the PEP. The example using `wrapt` shows that it can be done, but it means you can't use an `import` statement; the PEP, by contrast, still allows one, albeit with a new `lazy` keyword when doing the import. Either way, the code where you want to have a lazy import needs to be different.
The other thing the PEP should avoid is the module reference, in the scope where the module is imported, being any sort of fake module object. Initially the module reference would effectively be a placeholder, but as soon as it is used, the actual module would be imported and the placeholder replaced.
For the `wrapt` example the module reference would always be a proxy object, although technically, with a bit of stack-diving trickery, you could also replace the module reference with the actual module as a side effect of the first use. This sort of trick is left as an exercise for the reader.
October 04, 2025
Paolo Melchiorre
My DjangoCon US 2025
A summary of my experience at DjangoCon US 2025 told through the posts I published on Mastodon during the conference.
Rodrigo Girão Serrão
TIL #134 – = alignment in string formatting
Today I learned how to use the equals sign to align numbers when doing string formatting in Python.
There are three main alignment options in Python's string formatting:
Character | Meaning
--- | ---
`<` | align left
`>` | align right
`^` | centre
However, numbers have a fourth option: `=`.
On the surface, it looks like it doesn't do anything:
x = 73
print(f"@{x:10}@")   # @        73@
print(f"@{x:=10}@")  # @        73@
But that's because `=` influences the alignment of the sign. If I make `x` negative, we already see something:
x = -73
print(f"@{x:10}@")   # @       -73@
print(f"@{x:=10}@")  # @-       73@
So, the equals sign `=` aligns a number to the right but aligns its sign to the left.
That may look weird, but I guess that's useful if you want to pad a number with 0s:
x = -73
print(f"@{x:0=10}@")  # @-000000073@
In fact, there is a shortcut for this type of alignment, which is to just put a zero immediately to the left of the width when aligning a number:
x = -73
print(f"@{x:010}@") # @-000000073@
The zero immediately to the left changes the default alignment of numbers to be `=` instead of `>`.
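To see all four alignment options side by side (a quick sketch; the expected output is in the comments):

x = -73
for spec in ("<8", ">8", "^8", "=8"):
    print(f"{spec} -> @{x:{spec}}@")

# <8 -> @-73     @
# >8 -> @     -73@
# ^8 -> @  -73   @
# =8 -> @-     73@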
October 03, 2025
Luke Plant
Breaking “provably correct” Leftpad
Why? Because it’s fun.
Mariatta
Disabling Signup in Django allauth
Django allauth
Django allauth is a popular third party package that provides a lot of functionality for handling user authentication, with support for social authentication, email verification, multi-factor authentication, and more.
It is a powerful library that greatly expands the built-in Django authentication system. It comes with its own basic forms and models for user registration, login, logout, and password management.
I like using it because often I just wanted to get a new Django project up and running quickly without having to write up all the authentication-related views, forms, and templates myself. I’m using `django-allauth` in PyLadiesCon Portal, and in my personal project Secret Codes.
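The excerpt stops before the how-to, but for context, one common way to close signups with django-allauth is to override the account adapter's is_open_for_signup() hook. This is a hedged sketch and not necessarily the approach the post itself takes:

# myproject/adapters.py
from allauth.account.adapter import DefaultAccountAdapter


class NoSignupAccountAdapter(DefaultAccountAdapter):
    def is_open_for_signup(self, request):
        # Returning False makes allauth reject new registrations.
        return False


# settings.py
ACCOUNT_ADAPTER = "myproject.adapters.NoSignupAccountAdapter"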
Real Python
The Real Python Podcast – Episode #268: Advice on Beginning to Learn Python
What's changed about learning Python over the last few years? What new techniques and updated advice should beginners have as they start their journey? This week on the show, Stephen Gruppetta and Martin Breuss return to discuss beginning to learn Python.
October 01, 2025
Real Python
Python 3.14 Preview: Better Syntax Error Messages
Python 3.14 includes ten improvements to error messages, which help you catch common coding mistakes and point you in the right direction.
Django Weblog
Django security releases issued: 5.2.7, 5.1.13, and 4.2.25
In accordance with our security release policy, the Django team is issuing releases for Django 5.2.7, Django 5.1.13, and Django 4.2.25. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.
CVE-2025-59681: Potential SQL injection in QuerySet.annotate(), alias(), aggregate(), and extra() on MySQL and MariaDB
QuerySet.annotate(), QuerySet.alias(), QuerySet.aggregate(), and QuerySet.extra() methods were subject to SQL injection in column aliases, using a suitably crafted dictionary, with dictionary expansion, as the **kwargs passed to these methods on MySQL and MariaDB.
Thanks to sw0rd1ight for the report.
This issue has severity "high" according to the Django security policy.
CVE-2025-59682: Potential partial directory-traversal via archive.extract()
The django.utils.archive.extract() function, used by startapp --template and startproject --template, allowed partial directory-traversal via an archive with file paths sharing a common prefix with the target directory.
Thanks to stackered for the report.
This issue has severity "low" according to the Django security policy.
Affected supported versions
- Django main
- Django 6.0 (currently at alpha status)
- Django 5.2
- Django 5.1
- Django 4.2
Resolution
Patches to resolve the issue have been applied to Django's main, 6.0 (currently at alpha status), 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.
CVE-2025-59681: Potential SQL injection in QuerySet.annotate(), alias(), aggregate(), and extra() on MySQL and MariaDB
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 5.1 branch
- On the 4.2 branch
CVE-2025-59682: Potential partial directory-traversal via archive.extract()
- On the main branch
- On the 6.0 branch
- On the 5.2 branch
- On the 5.1 branch
- On the 4.2 branch
The following releases have been issued
- Django 5.2.7 (download Django 5.2.7 | 5.2.7 checksums)
- Django 5.1.13 (download Django 5.1.13 | 5.1.13 checksums)
- Django 4.2.25 (download Django 4.2.25 | 4.2.25 checksums)
The PGP key ID used for this release is Jacob Walls: 131403F4D16D8DC7
General notes regarding security reporting
As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.
Real Python
Quiz: Python 3.14 Preview: Better Syntax Error Messages
Explore how Python 3.14 improves error messages with clearer explanations, actionable hints, and better debugging support for developers.