Planet Python
Last update: October 29, 2025 09:43 PM UTC
October 29, 2025
Patrick Altman
How We Continually Deliver Software
We currently have five different web applications in production and they all share a very similar stack - Django/Vue/Docker/PostgreSQL (some with Redis/django-rq for background tasks).
We have developed a set of GitHub Actions for Continuous Integration / Continuous Delivery that take care of this basic workflow:

- Every commit, either on main or a feature branch, runs:
  - Python Linting
  - Vue/JS Testing
  - Build the Docker image, and then on that image run:
    - Python tests
    - Check for missing migrations
  - Push the image / tags after rebuilding without the dev mode flag
- Then, if on main, it follows through with a deployment to a QA app on Heroku.
We have a second workflow for handling releases.
When a release is generated/published in GitHub, the workflow:
- Pulls the latest image from the GitHub Container Registry
- Pushes the tagged image to Heroku
- Executes release commands, but this time against a Production app on Heroku

Results
These two pipelines enable us to work really fast. They speed up code reviews, since most of the testing is done automatically, allowing us to focus on just the business rules and architecture being put into place. They speed up end-to-end testing and user feedback, since code is automatically deployed to a QA test instance that won't interfere with or interrupt production. And finally, they speed up getting releases out to production, which we do as needed, often a few times a day!
Open Source
The two yaml files configuring these were hundreds of lines long, with lots of duplication except for a few things. We were copying them around when we'd start a new web app and then tweaking them. They'd invariably get out of sync, and it was becoming a burden to maintain.
So we extracted the actions and workflows into wedgworth/actions, which is now open source, so if you like our workflow, feel free to use it (or fork and tweak it to suit your needs).
Now each project looks like this:
ci.yaml
name: Test / Build / Deploy to QA
on:
  push:
    branches: "**"
    tags-ignore: "**"
jobs:
  test-and-build:
    name: CI
    uses: wedgworth/actions/.github/workflows/test.yml@v7.0.0
    with:
      python-src-dir: myapp
    secrets:
      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
      CR_UN: ${{ secrets.CR_UN }}
      CR_PAT: ${{ secrets.CR_PAT }}
      SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
  deploy-qa:
    name: CD
    needs: [test-and-build]
    if: ${{ github.event.ref == 'refs/heads/main' }}
    uses: wedgworth/actions/.github/workflows/deploy.yml@v7.0.0
    with:
      app-name: my-heroku-app-qa
      processes: web release
    secrets:
      HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
      CR_UN: ${{ secrets.CR_UN }}
      CR_PAT: ${{ secrets.CR_PAT }}
release.yaml
name: Publish and Release Image
on:
  release:
    types: [published]
jobs:
  release:
    name: Release
    uses: wedgworth/actions/.github/workflows/release.yml@v7.0.0
    with:
      app-name: my-heroku-app-prod
      processes: web release
    secrets:
      HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
      CR_UN: ${{ secrets.CR_UN }}
      CR_PAT: ${{ secrets.CR_PAT }}

We still copy and paste these, but they are extremely stable.
We just need to set python-src-dir, app-name, and processes.
These do use runners from namespace.so, which are not free (but cheap!) and run much faster than the GitHub runners, especially when doing Docker builds.
There might be a way to make these configurable, so if you like what you see but want to use the GitHub runners, we'd welcome a pull request to make this more generally useful. Otherwise, feel free to fork it and run your own copies.
Happy building!
Antonio Cuni
Inside SPy, part 1: Motivations and Goals
This is the first of a series of posts in which I will try to give a deep explanation of SPy, including motivations, goals, rules of the language, differences with Python, and implementation details.
This post focuses primarily on the problem space: why Python is fundamentally hard to optimize, what trade-offs existing solutions require, and where current approaches fall short. Subsequent posts in this series will explore the solutions in depth. For now, let's start with the essential question: what is SPy?
Before diving in, I want to express my gratitude to my employer, Anaconda, for giving me the opportunity to dedicate 100% of my time to this open-source project.
Real Python
Logging in Python
Logging in Python lets you record important information about your program's execution. You use the built-in logging module to capture logs, which provide insights into application flow, errors, and usage patterns. With Python logging, you can create and configure loggers, set log levels, and format log messages without installing additional packages. You can also generate log files to store records for later analysis.
By the end of this tutorial, you'll understand that:
- Logging involves recording program execution information for later analysis.
- You can use logging to debug, perform analysis, and monitor usage patterns.
- Logging in Python works by configuring loggers and setting log levels.
- Using a logging library provides structured logging and control over log output.
- You should prefer logging over print() because it decreases the maintenance burden and allows you to manage log levels.
You'll do the coding for this tutorial in the Python standard REPL. If you prefer Python files, then you'll find a full logging example as a script in the materials of this tutorial. You can download this script by clicking the link below:
Get Your Code: Click here to download the free sample code that you'll use to learn about logging in Python.
Take the Quiz: Test your knowledge with our interactive "Logging in Python" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Logging in Python: In this quiz, you'll test your understanding of Python's logging module. With this knowledge, you'll be able to add logging to your applications, which can help you debug errors and analyze performance.
If you're curious about an alternative to Python's built-in logging module, then check out How to Use Loguru for Simpler Python Logging. While the standard library's logging requires explicit configuration of handlers, formatters, and log levels, Loguru comes pre-configured after installing it with pip.
Starting With Python's Logging Module
The logging module in Python's standard library is a ready-to-use, powerful module that's designed to meet the needs of beginners as well as enterprise teams.
Note: Since logs offer a variety of insights, the logging module is often used by other third-party Python libraries, too. Once you're more advanced in the practice of logging, you can integrate your log messages with the ones from those libraries to produce a homogeneous log for your application.
To leverage this versatility, it's a good idea to get a better understanding of how the logging module works under the hood. For example, you could take a stroll through the logging module's source code.
The main component of the logging module is something called the logger. You can think of the logger as a reporter in your code that decides what to record, at what level of detail, and where to store or send these records.
Exploring the Root Logger
To get a first impression of how the logging module and a logger work, open the Python standard REPL and enter the code below:
>>> import logging
>>> logging.warning("Remain calm!")
WARNING:root:Remain calm!
The output shows the severity level before each message along with root, which is the name the logging module gives to its default logger. This output shows the default format that can be configured to include things like a timestamp or other details.
In the example above, you're sending a message on the root logger. The log level of the message is WARNING. Log levels are an important aspect of logging. By default, there are five standard severity levels for logging events. Each has a corresponding function that can be used to log events at that level of severity.
Note: There's also a NOTSET log level, which you'll encounter later in this tutorial when you learn about custom logging handlers.
Here are the five default log levels, in order of increasing severity:
| Log Level | Function | Description |
|---|---|---|
| DEBUG | logging.debug() | Provides detailed information that's valuable to you as a developer. |
| INFO | logging.info() | Provides general information about what's going on with your program. |
| WARNING | logging.warning() | Indicates that there's something you should look into. |
| ERROR | logging.error() | Alerts you to an unexpected problem that's occurred in your program. |
| CRITICAL | logging.critical() | Tells you that a serious error has occurred and may have crashed your app. |
The logging module provides you with a default logger that allows you to get started with logging without needing to do much configuration. However, the logging functions listed in the table above reveal a quirk that you may not expect:
>>> logging.debug("This is a debug message")
>>> logging.info("This is an info message")
>>> logging.warning("This is a warning message")
WARNING:root:This is a warning message
>>> logging.error("This is an error message")
ERROR:root:This is an error message
>>> logging.critical("This is a critical message")
CRITICAL:root:This is a critical message
Notice that the debug() and info() messages didn't get logged. This is because, by default, the logging module logs the messages with a severity level of WARNING or above. You can change that by configuring the logging module to log events of all levels.
Adjusting the Log Level
To set up your basic logging configuration and adjust the log level, the logging module comes with a basicConfig() function. As a Python developer, this camel-cased function name may look unusual to you, as it doesn't follow the PEP 8 naming conventions.
That's because it was adopted from Log4j, a logging utility in Java. It's a known issue in the package, but by the time it was decided to add it to the standard library, it had already been adopted by users, and changing it to meet PEP 8 requirements would cause backwards compatibility issues.
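As a quick illustration of what the article goes on to cover, calling basicConfig() lowers the threshold so that lower-severity messages appear too:
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.debug("This is a debug message")
DEBUG:root:This is a debug message
Note that basicConfig() only takes effect if the root logger hasn't been configured yet, so run it in a fresh session before any logging calls.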
Read the full article at https://realpython.com/python-logging/ »
October 28, 2025
Python Morsels
__dict__: where Python stores attributes
Most Python objects store their attributes in a __dict__ dictionary. Modules and classes always use __dict__, but not everything does.
Table of contents
- A class with some attributes
- The __dict__ attribute
- Modules have a __dict__ attribute
- Classes also have a __dict__ attribute
- Accessing the __dict__ attribute
- Not all objects have a __dict__ attribute
- Inspecting the attributes of any Python object
- Most objects store their attributes in a __dict__ dictionary
A class with some attributes
We have a class here, called Product:
class Product:
def __init__(self, name, price):
self.name = name
self.price = price
def display_price(self):
return f"${self.price:,.2f}"
And we have two instances of this class:
>>> duck = Product("rubber duck", price=1)
>>> mug = Product("mug", price=5)
Each of these class instances has its own separate data (a name attribute and a price attribute):
>>> duck.name
'rubber duck'
>>> mug.price
5
Where are these attributes actually stored? Where does their data live?
The __dict__ attribute
Each of these class instances …
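As a preview of where the article is headed: each instance keeps its attributes in its own __dict__ dictionary, which you can inspect directly in the REPL:
>>> duck.__dict__
{'name': 'rubber duck', 'price': 1}
>>> mug.__dict__
{'name': 'mug', 'price': 5}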
Read the full article: https://www.pythonmorsels.com/__dict__/
PyCoderâs Weekly
Issue #706: Quasars, Faking Data, GIL-free Web, and More (Oct. 28, 2025)
#706 - OCTOBER 28, 2025
View in Browser »
Investigating Quasars With Polars and marimo
Learn to visualize quasar redshift data by building an interactive marimo dashboard using Polars, pandas, and Matplotlib. You’ll practice retrieving, cleaning, and displaying data in your notebook. You’ll also build interactive UI components that live-update visualizations in the notebook.
REAL PYTHON course
Faker: Generate Realistic Test Data in Python
If you want to generate test data with specific types (bool, float, text, integers) and realistic characteristics (names, addresses, colors, emails, phone numbers, locations), Faker can help you do that.
KHUYEN TRAN • Shared by Khuyen Tran
Level up Your AI Development
Level up your AI with proven patterns for durability, retries, and reliability. Temporal's AI Cookbook gives you the recipes to go from prototype to production.
TEMPORAL sponsor
The Future of Python Web Services Looks GIL-free
This is another free-threaded Python 3.14 benchmarking article, but this time, instead of toy calculation problems, the measurements emulate how web frameworks work.
GIOVANNI BARILLARI
Python Jobs
Senior Python Developer (Houston, TX, USA)
Python Video Course Instructor (Anywhere)
Python Tutorial Writer (Anywhere)
Articles & Tutorials
pytest Fixtures: How to Use & Organize Them
Fixtures make your life as a developer easier when using pytest. Learn how to use them in different ways to organize your test suite more effectively, and get a glimpse of how Streamlit and Pydantic use them.
PATRICKM.DE • Shared by Patrick Müller
Async Django: A Solution in Search of a Problem?
This opinion piece argues that, while a technical marvel, async Django has been quietly rejected by the community it was built for, with the vast majority of developers sticking to simpler, proven solutions.
KEVIN RENSKERS
Build Faster with GitHub Actions Using Depot
Depot supercharges GitHub Actions with high-performance runners and remote caching that cut build times dramatically. Learn how to integrate Depot into your workflow for faster, more efficient CI pipelines.
DEPOT sponsor
What Can I Do With Python?
Learn how Python builds software, powers AI, automates tasks, and drives robotics. Discover tools and projects to guide your programming journey.
REAL PYTHON
Django bulk_update Memory Issue
Recently, Anže had to write a Django migration to update hundreds of thousands of database objects; it didn't go as smoothly as planned.
ANŽE PEČAR
When Should You Use .__repr__() vs .__str__() in Python?
Find out when to choose Python’s __repr__() vs __str__() in your classes so your objects show helpful information for debugging and user output.
REAL PYTHON
Lore
Wisdoms, aphorisms, and pointed observations that Redowan frequently quotes in conversations about software, philosophy, and ways of working.
REDOWAN DELOWAR
T-strings: Python’s Fifth String Formatting Technique?
Python's new t-strings may look like f-strings, but they work in a totally different way, allowing you to delay string interpolation.
TREY HUNNER
Three Times Faster With Lazy Imports
PEP 810 proposes adding explicit lazy importing to Python. This article shows just what that can do for your program’s startup times.
HUGO VAN KEMENADE
CPython Core Dev Sprint 2025 at Arm Cambridge
Arm's Cambridge headquarters hosted a week of development for Python core contributors; this post describes what work got done.
PYTHON SOFTWARE FOUNDATION
Announcing PSF Community Service Award Recipients!
This post announces three new Service Award Recipients: Katie McLaughlin, Sarah Kuchinsy, and Rodrigo Girão Serrão.
PYTHON SOFTWARE FOUNDATION
Best Practices for Using Python & uv Inside Docker
ASHISH BHATIA
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
October 29, 2025
REALPYTHON.COM
PyCon Sweden
October 30 to November 1, 2025
PYCON.SE
PyCon FR 2025
October 30 to November 3, 2025
PYCON.FR
PyDay + Django Birthday
October 31 to November 1, 2025
PYLADIES.CO
Django Girls BogotĂĄ Workshop
November 1 to November 2, 2025
DJANGOGIRLS.ORG
PyDelhi User Group Meetup
November 1, 2025
MEETUP.COM
JupyterCon 2025
November 3 to November 6, 2025
LINUXFOUNDATION.ORG
PyCon Mini Tokai 2025
November 8 to November 9, 2025
PYCON.JP
PyCon Chile 2025
November 8 to November 10, 2025
PYCON.CL
Happy Pythoning!
This was PyCoder’s Weekly Issue #706.
View in Browser »
[ Subscribe to PyCoder's Weekly: Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Christian Ledermann
Scratching the Itch, Paying the Debt: How Community Keeps Legacy Open Source Projects Alive
Introduction
Every developer has that one project that started as a personal solution and unexpectedly found a life of its own. For me, that was FastKML, a library I built in 2012 to "scratch my own itch." I needed to embed maps into a website, and at the time, KML was the de facto standard for visualizing geospatial data on the web. GeoJSON existed but was still in its infancy and unsupported by OpenLayers, which was then the best tool for embedding maps.
Other Python libraries for KML existed, but most were either limited in scope, lacked Python 3 support, or didn't meet my performance needs. Performance was crucial, so I built FastKML using lxml instead of the slower XML DOM used by many contemporaries.
As FastKML evolved, it depended on Shapely for geometry handling, an excellent library, but one that required C extensions and added installation complexity. That led to the birth of PyGeoIf, a pure Python implementation of basic geospatial objects. PyGeoIf aimed to serve as a lightweight, dependency-free substitute for Shapely when users didn't need all of its advanced geometry operations. The API mirrored Shapely's closely, making migration as simple as replacing
from pygeoif import ...
with
from shapely import ...
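For instance, basic geometry code written against one API generally runs unchanged against the other. A small sketch of the shared Shapely-style interface:

from pygeoif import Point  # swap in `from shapely import Point` when you
                           # need Shapely's advanced operations

p = Point(1.0, 2.0)
print(p.x, p.y)  # 1.0 2.0
print(p.wkt)     # WKT representation, e.g. 'POINT (1.0 2.0)'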
Over the years, both projects aged gracefully, but not without technical debt. They bore the marks of an earlier Python era: Python 2/3 compatibility hacks (at the very beginning Python 2.6 was still in use), missing type hints, and occasionally ambiguous function signatures.
Still, they worked. The test coverage exceeded 95%, bugs were rare, and they continued solving real problems for users long after I had moved on to other roles outside GIS. To my surprise, the packages remained popular; downloads were steady, and employers still asked about them. But I knew the code looked dated, and if I had to review it today, it wouldn't pass.
Fast forward to 2020. The geospatial landscape had changed; GeoJSON had overtaken KML, Python's ecosystem had matured, and I had learned a great deal about clean code and maintainability. It was time to modernize these legacy projects for the new decade.
The Need for Change
Modernization wasn't just a matter of adding type hints or updating syntax; it was about bringing two long-lived projects in line with modern development practices. The original codebases had served well for years, but they were increasingly difficult to extend. Function signatures were ambiguous, internal logic was tangled, and adding new features often caused shotgun surgery, requiring edits across multiple unrelated files.
The Shapely API had evolved too, fully embracing PEP 8 naming conventions and adopting more expressive methods. To remain compatible, PyGeoIf needed to evolve alongside it. Meanwhile, Python itself had transformed: type hints, static analysis, and property-based testing were now standard practice rather than novelty.
Drivers of Change
The single most important motivator was the introduction of type hints in Python. Type annotations have revolutionized how Python code is written, reviewed, and maintained, enhancing readability and catching subtle bugs early through tools like mypy.
The first step was static analysis with tools like mypy, which immediately flagged legacy Python 2 compatibility hacks, ambiguous function signatures, and missing type hints. Extending the tests in tandem ensured that each refactor preserved correctness.
Beyond that, the desire for clearer APIs, more maintainable structures, and modern testing techniques pushed the modernization effort forward. I wanted code that not only worked but was readable, testable, and future-proof.
A Tale of Two Refactors
For PyGeoIf, version 0.7 had been released in 2017. Four years later, in September 2021, I published the first beta of the 1.0 series: fully type-annotated, statically checked with mypy, and tested using property-based testing with Hypothesis, with the tests themselves improved through mutation testing with MutMut. By September 2022, version 1.0 was stable, and by October 2025, it had matured to version 1.5.
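To give a sense of the property-based style, here is a hedged sketch (not PyGeoIf's actual test suite): Hypothesis generates many coordinate pairs and checks an invariant for all of them.

from hypothesis import given, strategies as st
from pygeoif import Point

coords = st.floats(allow_nan=False, allow_infinity=False)

@given(coords, coords)
def test_point_preserves_coordinates(x, y):
    # The property: constructing a Point never alters its coordinates.
    p = Point(x, y)
    assert (p.x, p.y) == (x, y)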
For FastKML, after a long silence since version 0.11 in 2015, I released version 0.12 in September 2021, incorporating long-neglected pull requests and minor improvements. A month later came FastKML 1.0 alpha 1 on PyPI. What I thought would be a quick release became an 18-iteration journey spanning three years, culminating in version 1.0 in November 2024; finally the library I had envisioned years earlier.
Reflecting on Contributions and Community Support
Over the past few years of developing PyGeoIf and FastKML, the journey has been shaped not only by personal effort but also by the support and engagement of the open-source community. One striking example of this has been Hacktoberfest contributions which consistently provided motivation and tangible progress.
These contributions may seem small individually, but collectively they have kept the momentum going. Seeing community members engage with the projects during Hacktoberfest has been a continuing source of encouragement, reminding me that every bit of contribution helps make the software more robust, maintainable, and welcoming to others.
The positive impact goes beyond the specific changes. Hacktoberfest contributions have:
- Encouraged ongoing improvement by motivating incremental updates.
- Highlighted the value of community participation in maintaining and modernizing open-source projects.
- Reinforced a sense of shared purpose, showing that even small efforts can collectively advance a project.
This ongoing collaboration has made the development process more rewarding and sustainable, reinforcing a simple but powerful lesson: in open source, community engagement isn't just about code; it's about inspiration and momentum.
Hacktoberfest contributions aren't just code; they're encouragement. They spark incremental improvements, highlight the value of shared effort, and inspire continued development. Seeing others invest their time and ideas in these projects has been a constant source of motivation to keep improving, testing, and refining.
Hacktoberfest and the Power of Community
Developing PyGeoIf and FastKML has been a journey of learning, coding, and refining, but it's the community contributions, especially during Hacktoberfest, that have truly kept the momentum alive.
October has consistently brought a wave of engagement: pre-commit hooks, bug fixes, minor enhancements, and automated improvements. Each contribution, no matter how small, reinforced the sense of progress and reminded me that open-source thrives on collaboration.
Looking forward, this collaborative energy continues to fuel future features and refinements. Hacktoberfest has proven that even small contributions can make a big difference, both in the code and in the spirit of the community.
Will Kahn-Greene
Open Source Project Maintenance 2025
Every October, I do a maintenance pass on all my projects. At a minimum, that involves dropping support for whatever Python version is no longer supported and adding support for the most recently released Python version. While doing that, I go through the issue tracker, answer questions, and fix whatever I can fix. Then I release new versions. Then I think about which projects I should deprecate and figure out a deprecation plan for them.
This post covers the 2025 round.
TL;DR
sphinx-js -- transferred to pyodide organization
crashstats-tools and siggen -- transferred to the Mozilla crash ingestion team, which I'm no longer on
paul-mclendahand -- deprecated and archived
pip-stale -- deprecated and archived
everett -- released v3.5.0, then deprecated and archived
fillmore -- released v2.2.0, then deprecated and archived
kent -- released v2.2.0
markus -- released v5.2.0
bleach -- released v6.3.0
Read more… (7 min remaining to read)
Real Python
Speed Up Python With Concurrency
Concurrency is the act of having your computer do multiple things at the same time. If you’ve heard a lot of talk about asyncio being added to Python but are curious how it compares to other concurrency methods or are wondering what concurrency is and how it might speed up your program, you’ve come to the right place.
In this course, you’ll learn the following:
- How I/O-bound programs are affected by latency
- Which concurrent programming patterns to use
- What the differences are between the Python concurrency libraries
- How to write code that uses the threading, asyncio, and multiprocessing libraries
Sample code was tested using Python 3.8.5. Since much of the asyncio library has been in flux since Python 3.4, it’s recommended to use at least Python 3.7 for the asyncio portions of the course.
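As a taste of the asyncio portion, here's a minimal sketch (not from the course materials) of how overlapping I/O-bound waits speeds things up:

import asyncio

async def fetch(i):
    await asyncio.sleep(1)  # stand-in for an I/O-bound call like a web request
    return i

async def main():
    # The three "requests" wait concurrently, so this takes about
    # one second rather than three.
    print(await asyncio.gather(*(fetch(i) for i in range(3))))

asyncio.run(main())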
October 27, 2025
PyCharm
The State of Django 2025
Welcome to the highlights and key takeaways from the recently released Django Developers Survey. Now in its fourth year, this annual collaboration between the Django Software Foundation and PyCharm tabulates responses from over 4,600 Django developers worldwide. If you work with Python and the web more broadly, there's a lot to learn from what's happening in the vibrant Django ecosystem.
My name is Will Vincent, and I’m a longtime contributor to the Django community as well as a Developer Advocate at PyCharm. Over the last six years, I’ve co-written the weekly Django News newsletter alongside Jeff Triplett and co-hosted the Django Chat podcast with Carlton Gibson; in both venues, we find a seemingly inexhaustible supply of topics, packages, and people to discuss.
Django is celebrating its 20th anniversary this year and is settling in quite nicely, thank you, to its mature status. Backwards-incompatible changes are exceedingly rare, even as new feature versions (5.2, 6.0, 6.1, etc.) are released every eight months, double-digit PRs are merged into core each week, and the global community has never been stronger.
This thriving ecosystem exists thanks to the ongoing work of Django's maintainers, reviewers, and mentors. Each year, PyCharm joins forces with the Django Software Foundation to support that work through the Annual Django Fundraiser.
Until November 11, 2025, you can get 30% off PyCharm Professional, and JetBrains will donate all proceeds to the DSF, directly funding the people who make Django stronger with every release. Over the past nine years, the campaign has raised more than $330,000 for Django's continued growth and stability.
One final point before we dive into the results: despite being used by millions of developers and some of the world's largest companies, Django itself remains largely unaware of its real-world usage. By design, there is no analytics tracking on the official Django website and no concrete metric of downloads aside from the admittedly imperfect measure of PyPI Stats.
This survey has become one of the primary ways, if not the primary way, for the community to understand current Django usage. In recent years, survey results led to the Redis cache backend receiving official support in Django 4.0. More recently, MongoDB saw solid usage numbers and prioritized releasing an official django-mongodb-backend package for the first time this year.
In short, this survey is essential and provides the best glimpse any of us have into the actual usage trends and future feature desires of the wider Django community.
Key Django trends in 2025
Let’s take a look at the notable and sometimes surprising trends in this year’s Django survey.
HTMX + Alpine.js Are Ascendant
React and jQuery remain the two most popular JavaScript frameworks to use with Django, but momentum continues to build for HTMX and Alpine.js. These technologies favor a server-rendered template approach with interactivity sprinkled in.
Twenty years ago, when Django was first released, single-page applications (SPAs) were rare. Most websites relied on a hypermedia approach of server-rendered templates; the introduction of jQuery in 2006 provided a way to sprinkle in JavaScript-powered interactivity without needing to become a JavaScript expert.
Fast forward ten years, and many web frameworks, including Django, were being used to power RESTful API backends consumed by dedicated JavaScript frontends, such as React, Angular, and Vue.
But since the Django Survey began in 2021, the pendulum has shifted back towards server-side templates. HTMX has grown from 5% in 2021 to 24%, while Alpine.js has grown from 3% to 14% usage. At the same time, React and jQuery have consistently declined from 37% in 2021 to 32% for React and 26% for jQuery. It is interesting to note that Vue, the third-most popular JavaScript framework, has also declined over this time period from 28% to 17%.
The forthcoming Django 6.0 release adds official support for template partials, further cementing the HTMX/Alpine.js combination as a viable alternative for developers. The release of this new feature also speaks to one of the strengths of the Django ecosystem, which is the thousands of third-party packages available. Some eventually make their way into core, as this one did, first starting as django-template-partials by Carlton Gibson and formally brought into core with the help of Farhan Ali Raza during his Google Summer of Code program this year.
What does this all mean for Django? It speaks to Django’s maturity and continued evolution that it can support multiple frontend patterns in web development: API backends via django-rest-framework or django-ninja for developers who prefer a SPA architecture, and also server-rendered templates enhanced by HTMX, Alpine.js, and soon template partials. Django continues to iterate to meet the needs of modern web developers, while retaining the stability and security that make it so indispensable to millions of existing users.
AI Usage is Growing
A majority of respondents (79%) still rely on official documentation as their primary learning resource, followed by Stack Overflow (39%), and both AI tools and YouTube (38%). For AI tools, this is a remarkable rise considering the category didn’t even exist several years ago. It is also worth noting that blogs (33%) and books (22%) now trail well behind.
For Django development, 69% reported using ChatGPT, followed by 34% for GitHub Copilot, 15% for Anthropic Claude, and 9% for JetBrains AI Assistant. The most popular tasks for AI were autocomplete (56%), generating code (51%), and writing boilerplate code (44%). We will likely see even greater rates of adoption in this area in next year’s survey results.
Anecdotally, many hallway conversations at DjangoCon Europe and DjangoCon US this year centered on AI tooling. The options available (chat, autocomplete, and agents) are all relatively new, and there is yet to be a community consensus on how to best utilize them for Django development, despite ongoing discussions around AI Agent Rules and related topics on the Django Forum.
Django Developers are Experienced
In marked contrast to the Python Survey released earlier this year, which showed exactly half (50%) of respondents had less than two years of professional experience, Django developers are a very experienced bunch: 30% of respondents report 11+ years of experience, followed by 26% for 6-10 years, and 21% for 3-5 years. That means that 77%, or three out of four, Django developers have at least three years of professional coding experience.
An overwhelming majority of respondents (82%) use Django professionally, in addition to personal usage. Roughly half (51%) report using Django for backend APIs with Django REST Framework, while a full 80% perform full-stack development, no doubt enhanced by the growing server-rendered templating options.
Strong Support for Type Hints
Perhaps it should come as no surprise, given the relative experience of respondents to this survey, that there was overwhelming support for Type Hints: 63% reported already using type hints in their Django code, with another 17% planning to, resulting in a remarkable 80% overall rate.
When asked if type hints should be added to Django core (an ongoing point of discussion on the Django Steering Council), 84% said yes, with 45% indicating a willingness to contribute themselves.
Django, like Python, has long strived to be both welcoming to newcomers while also providing the more advanced tools experienced programmers often prefer, such as type hints.
PostgreSQL Paces the Field
When it comes to favored database backends, it is no surprise that those with built-in support reign supreme, starting with PostgreSQL at 76%, followed by SQLite at 42%, MySQL at 27%, and MariaDB at 9%. These percentages have remained remarkably consistent over the past four years.
Oracle continues to enjoy relative growth in usage, climbing from 2% in 2021 and 2022 to 10% in 2023 and 9% in 2024. Newer entrants, such as MongoDB, also deserve attention: even without official support, it managed an 8% share in 2023, indicating the desire for NoSQL options powered by Django. This survey result was a key component in the Mongo team’s decision to invest in an official Django MongoDB backend, which was fully released this year.
It will be interesting to track database support in the coming years, given a resurgence in interest around using SQLite in production (not just for local development) as well as NoSQL options from MongoDB, and to monitor whether Oracle continues to maintain its usage share.
Popular Third-Party Packages
When asked for their top five favorite third-party Django packages, there was a very long tail of responses, reflecting both the depth and breadth of packages in the Django ecosystem. Resources such as djangopackages.org, the awesome-django repo, and the new Django Ecosystem page highlight that Django’s secret sauce is its ecosystem of third-party apps and add-ons.
Notably, Django REST Framework was the runaway favorite at 49%, followed by `django-debug-toolbar` at 27%, `django-celery` at 26%, `django-cors-headers` at 19%, `django-filter` at 18%, and `django-allauth` at 18%. Many other packages received support after these top few, again speaking to the breadth of options available to Django developers.
The Latest Django Version Reigns Supreme
An overwhelming majority of respondents (75%) report being on the latest version of Django, which is impressive given that feature releases occur approximately every eight months. For example, Django 5.1 was released in August 2024, Django 5.2 in April 2025, and Django 6.0 will come out in December 2025.
Despite the regular release schedule, Django takes great efforts to remain stable and has a well-established deprecation and removal policy; breaking changes are rare.
It is also worth noting that certain feature releases, historically those ending in .2, such as 3.2, 4.2, and 5.2, are designated as Long-Term Support (LTS) releases, receiving all security and data loss fixes for three years.
Although updating every LTS release is one option, it is heartening to see so many Django developers opting for the latest release, as this ensures you are receiving the latest and greatest version of the framework. It is also far easier to update incrementally, with every feature release, rather than waiting a few years in between.
pytest Prevails
In the words of Django co-creator Jacob Kaplan-Moss, “Code without tests is broken by design.” Django has its own testing framework built on top of Python’s unittest library, which provides extra features tailored for web applications. Many developers also use `pytest`, similarly popular in the broader Python community, for even more testing help.
The survey showed that `pytest` remains the most popular option for testing Django projects, at 39%, followed closely by `unittest`, at 33%. Two Django-specific plugins, `pytest-django` and `django-test-plus`, also received strong support. The `coverage` library was used by 21% of developers; it provides a useful way to measure the test coverage present in a project. Further down the list were end-to-end testing options, such as Selenium and Playwright.
These results are consistent with others in the Python ecosystem: `unittest` and `pytest` are, by far, the two dominant ways to test Python libraries, so it is no surprise to see both rank so highly here.
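For a flavor of what this looks like in practice, here is a minimal sketch of a pytest-style Django test using the `pytest-django` plugin (the model and its `__str__()` method are hypothetical):

import pytest

from myapp.models import Product  # hypothetical model

@pytest.mark.django_db  # marker provided by the pytest-django plugin
def test_product_str():
    product = Product.objects.create(name="Mug", price=5)
    assert str(product) == "Mug"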
Actionable Ideas
Now that you’ve read my take on the highlights from this year’s results, what are the next steps? First, know that Django is a mature, boring technology by design; you can carry on being productive with your work, updating to the latest versions of Python and Django, and have confidence that the rug won’t be pulled out from under you with breaking changes.
But the broader Python and open-source ecosystems continue to innovate and mutate, and there are definitely productivity gains to be had if you experiment a little. In that spirit, here are four actionable ideas you can pursue:
Action 1: Try out HTMX
If you haven't yet taken the time to find out what the excitement is about, head over to the Examples section on the HTMX website to see common UI improvements. It is almost as easy as copy and paste for many interactive elements; there's no need to fire up a dedicated JavaScript framework to achieve similar results.
Action 2: Experiment with AI
The momentum is clearly swinging towards some flavor of AI tools becoming part of the standard Django developer workflow, although there is no clear consensus on what exactly that entails.
On one end of the spectrum are developers who want minimal to no assistance: catch typos and obvious language errors, nothing else, thank you. A step up are autocomplete options of various degrees, followed by chat-assisted programming, which involves sharing either code snippets or entire codebases and then asking the LLM questions about them. The final frontier, at the moment, is agents that can take a prompt and attempt to solve it on their own.
Most Django developers are somewhere in the middle, experimenting with these new AI tools, but not fully sold. As the tools and the IDE integrations improve over the next year, it will be interesting to see what next year's survey respondents report in terms of their AI usage.
Action 3: Update to the Latest Version of Django
The best way to take advantage of all that Django and Python have to offer is to be on the latest release. Both are mature and rarely implement breaking changes, so this has never been easier. In production codebases with tests, updates should be as straightforward as updating the version number, running the test suite, and fixing any errors that emerge.
Staying current is like performing maintenance on your car: much easier to do a bit every so often rather than wait a few years before something breaks. It also means you are on the most secure and performant version of your tools.
Action 4: Stay Informed of the Django Ecosystem
Django is a batteries-included framework and ecosystem: there is a lot going on. This can feel overwhelming at times, but the good news is that there are resources in whatever medium you prefer to keep you informed, from the official Django website to podcasts, newsletters, conferences, and more. The recently launched Django ecosystem page is a great starting point.
Interested in learning more? Check out the complete Django Developers Survey Results here.
Real Python
Using Python Optional Arguments When Defining Functions
You define Python functions with optional arguments to make them flexible and reusable. By assigning default values, using *args for variable arguments, or **kwargs for keyword arguments, you let your functions handle different inputs without rewriting code. This tutorial shows you how and why to use Python optional arguments, and how to avoid common pitfalls when setting defaults.
By the end of this tutorial, you'll understand that:
- Parameters are names in a function definition, while arguments are the values you pass when calling the function
- You can assign default values to parameters so that arguments become optional
- You should avoid mutable data types like lists or dictionaries as default values to prevent unexpected behavior
- You can use *args to collect any number of positional arguments and **kwargs to collect keyword arguments
- Python raises TypeError when you omit required arguments and SyntaxError when you misorder parameters with defaults
Defining your own functions is an essential skill for writing clean and effective code. Once you master Python's optional arguments, you'll be able to define functions that are more powerful and more flexible.
To get the most out of this tutorial, you'll need some familiarity with defining functions with required arguments.
Get Your Code: Click here to download the free sample code that you'll use to learn about the optional arguments in functions.
Take the Quiz: Test your knowledge with our interactive "Using Python Optional Arguments When Defining Functions" quiz. You'll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Using Python Optional Arguments When Defining Functions: Practice Python function parameters, default values, *args, **kwargs, and safe optional arguments with quick questions and short code tasks.
Creating Functions in Python for Reusing Code
You can think of a function as a mini-program that runs within another program or within another function. The main program calls the mini-program and sends information that the mini-program will need as it runs. When the function completes all of its actions, it may send some data back to the main program that has called it.
The primary purpose of a function is to allow you to reuse the code within it whenever you need it, using different inputs if required.
When you use functions, you're extending your Python vocabulary. This lets you express the solution to your problem in a clearer and more succinct way.
In Python, by convention, you should name a function using lowercase letters with words separated by an underscore, such as do_something(). These conventions are described in PEP 8, which is Python's style guide. You'll need to add parentheses after the function name when you call it. Since functions represent actions, it's a best practice to start your function names with a verb to make your code more readable.
Defining Functions With No Input Parameters
In this tutorial, you'll use the example of a basic program that creates and maintains a shopping list and prints it out when you're ready to go to the supermarket.
Start by creating a new Python script you'll call optional_params.py and add a shopping list:
optional_params.py
shopping_list = {
"Bread": 1,
"Milk": 2,
"Chocolate": 1,
"Butter": 1,
"Coffee": 1,
}
You're using a dictionary to store the item name as the key and the quantity you need to buy of each item as the value. You can define a function to display the shopping list:
optional_params.py
shopping_list = {
"Bread": 1,
"Milk": 2,
"Chocolate": 1,
"Butter": 1,
"Coffee": 1,
}
def show_list():
for item_name, quantity in shopping_list.items():
print(f"{quantity}x {item_name}")
show_list()
When you run this script, you'll get a printout of the shopping list:
$ python optional_params.py
1x Bread
2x Milk
1x Chocolate
1x Butter
1x Coffee
The function you've defined has no input parameters, as the parentheses in the function signature are empty. The signature is the first line in the function definition:
def show_list():
You don't need any input parameters in this example since the dictionary shopping_list is a global variable. This means that it can be accessed from everywhere in the program, including from within the function definition. This is called the global scope.
Note: You can read more about scope in Python Scope & the LEGB Rule: Resolving Names in Your Code.
Using global variables in this way is not a good practice. It can lead to several functions making changes to the same data structure, which can lead to bugs that are hard to find. You'll see how to improve on this later on in this tutorial when you pass the dictionary to the function as an argument.
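The fix the tutorial is heading toward looks roughly like this (a sketch of the idea, not the article's exact code): pass the dictionary in as an argument instead of reading the global.

def show_list(shopping_list):
    # The function now works on whatever dictionary it's given.
    for item_name, quantity in shopping_list.items():
        print(f"{quantity}x {item_name}")

show_list({"Bread": 1, "Milk": 2})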
Read the full article at https://realpython.com/python-optional-arguments/ »
Real Python
Quiz: Using Python Optional Arguments When Defining Functions
You’ll revisit how Python handles parameters and arguments—from default values and their order to flexible patterns like *args and **kwargs. You’ll also see when a simple Boolean flag can make your function calls clearer and more expressive.
In this quiz, you’ll test your understanding of how mutable default argument values can lead to unexpected behavior. You’ll also practice unpacking sequences and mappings in function calls and formatting output with flags. For a deeper dive, check out the guide to optional arguments.
Talk Python to Me
#525: NiceGUI Goes 3.0
Building a UI in Python usually means choosing between "quick and limited" or "powerful and painful." What if you could write modern, component-based web apps in pure Python and still keep full control? NiceGUI, pronounced "nice guy," sits on FastAPI with a Vue/Quasar front end, gives you real components and live updates over websockets, and it's running in production at Zauberzeug, a German robotics company. On this episode, I'm talking with NiceGUI's creators, Rodja Trappe and Falko Schindler, about how it works, where it shines, and what's coming next. With version 3.0 releasing around the same time this episode comes out, we spend the end of the episode celebrating the 3.0 release.

Episode sponsors: Posit (https://talkpython.fm/connect), Agntcy (https://talkpython.fm/agntcy), Talk Python Courses (https://talkpython.fm/training)

Links from the show:
- Rodja Trappe: https://github.com/rodja
- Falko Schindler: https://github.com/falkoschindler
- NiceGUI 3.0.0 release: https://github.com/zauberzeug/nicegui/releases/tag/v3.0.0
- Full LLM/Agentic AI docs instructions for NiceGUI: https://github.com/zauberzeug/nicegui/wiki#chatgpt-and-other-llms
- Zauberzeug: https://zauberzeug.com
- NiceGUI: https://nicegui.io
- NiceGUI GitHub repository: https://github.com/zauberzeug/nicegui/
- NiceGUI authentication examples: https://github.com/zauberzeug/nicegui/blob/main/examples/authentication
- NiceGUI v3.0.0rc1 release: https://github.com/zauberzeug/nicegui/releases/tag/v3.0.0rc1
- Valkey: https://valkey.io
- Caddy web server: https://caddyserver.com
- JustPy: https://justpy.io
- Tailwind CSS: https://tailwindcss.com
- Quasar ECharts v5 demo: https://quasar-echarts-v5.netlify.app
- AG Grid: https://www.ag-grid.com
- Quasar Framework: https://quasar.dev
- NiceGUI interactive image documentation: https://nicegui.io/documentation/interactive_image
- NiceGUI 3D scene documentation: https://nicegui.io/documentation/scene#3d_scene
- Watch this episode on YouTube: https://www.youtube.com/watch?v=74UXonJfl6o
- Episode #525 deep-dive: https://talkpython.fm/episodes/show/525/nicegui-goes-3.0#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/525/nicegui-goes-3.0
Python Bytes
#455 Gilded Python and Beyond
Topics covered in this episode:

- Cyclopts: A CLI library
- The future of Python web services looks GIL-free
- Free-threaded GC
- Polite lazy imports for Python package maintainers
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=exSYX16Hk8M

About the show

Sponsored by us! Support our work through:

- Our courses at Talk Python Training: https://training.talkpython.fm/
- The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
- Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts

- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: Cyclopts: A CLI library (https://www.reddit.com/r/Python/comments/18hn2t1/cyclopts_a_cli_library_that_fixes_13_annoying/)

- A CLI library that fixes 13 annoying issues in Typer.
- Much of Cyclopts was inspired by the excellent Typer library (https://typer.tiangolo.com/).
- Despite its popularity, Typer has some traits that I (and others) find less than ideal. Part of this stems from Typer's age, with its first release in late 2019, soon after Python 3.8's release. Because of this, most of its API was initially designed around assigning proxy default values to function parameters, which made the decorated command functions difficult to use outside of Typer. With the introduction of Annotated in Python 3.9, type hints could be annotated directly, allowing for the removal of these proxy defaults.
- The 13 (each compared in detail under https://cyclopts.readthedocs.io/en/latest/): Argument vs Option, Positional or Keyword Arguments, Choices, Default Command, Docstring Parsing, Decorator Parentheses, Optional Lists, Keyword Multiple Values, Flag Negation, Help Defaults, Validation, Union/Optional Support, Adding a Version Flag, Documentation.

Brian #2: The future of Python web services looks GIL-free (https://blog.baro.dev/p/the-future-of-python-web-services-looks-gil-free)

- By Giovanni Barillari.
- "Python 3.14 was released at the beginning of the month. This release was particularly interesting to me because of the improvements on the 'free-threaded' variant of the interpreter. Specifically, the two major changes when compared to the free-threaded variant of Python 3.13 are:
  - Free-threaded support now reached phase II, meaning it's no longer considered experimental.
  - The implementation is now completed, meaning that the workarounds introduced in Python 3.13 to make code sound without the GIL are now gone, and the free-threaded implementation now uses the same adaptive interpreter (PEP 659) as the GIL-enabled variant. These facts, plus additional optimizations, shrink the performance penalty considerably, moving from a 35% penalty to a 5-10% difference."
- Lots of benchmark data, both ASGI and WSGI.
- Lots of great thoughts in the "Final Thoughts" section, including:
  - "On asynchronous protocols like ASGI, despite the fact the concurrency model doesn't change that much - we shift from one event loop per process, to one event loop per thread - just the fact we no longer need to scale memory allocations just to use more CPU is a massive improvement."
  - "... for everybody out there coding a web application in Python: simplifying the concurrency paradigms and the deployment process of such applications is a good thing."
  - "... to me the future of Python web services looks GIL-free."

Michael #3: Free-threaded GC (https://labs.quansight.org/blog/free-threaded-gc-3-14)

- The free-threaded build of Python uses a different garbage collector implementation than the default GIL-enabled build.
- The default GC: in the standard CPython build, every object that supports garbage collection (like lists or dictionaries) is part of a per-interpreter, doubly-linked list. The list pointers are contained in a PyGC_Head structure.
- The free-threaded GC takes a different approach: it scraps the PyGC_Head structure and the linked list entirely. Instead, it allocates these objects from a special memory heap managed by the "mimalloc" library. This allows the GC to find and iterate over all collectible objects using mimalloc's data structures, without needing to link them together manually.
- The free-threaded GC does NOT support "generations".
- By marking all objects reachable from known roots, it can identify a large set of objects that are definitely alive and exclude them from the more expensive cycle-finding part of the GC process.
- Overall speedup of free-threaded GC collection is between 2 and 12 times over the 3.13 version.

Brian #4: Polite lazy imports for Python package maintainers (https://pythontest.com/polite-lazy-imports-python-packages/)

- Will McGugan commented on a LinkedIn post by Bob Belderbos regarding lazy importing: "I'm excited about this PEP. I wrote a lazy loading mechanism for Textual's widgets. Without it, the entire widget library would be imported even if you needed just one widget. Having this as a core language feature would make me very happy." (See https://github.com/Textualize/textual/blob/main/src/textual/widgets/__init__.py)
- Well, I was excited about Will's example for how to, essentially, allow users of your package to import only the part they need, when they need it.
- So I wrote up my thoughts and an explainer for how this works.
- Special thanks to Trey Hunner's "Every dunder method in Python" (https://www.pythonmorsels.com/every-dunder-method/), which I referenced to understand the difference between __getattr__() and __getattribute__().

Extras

Brian:

- Started writing a book on Test Driven Development.
  - Should have an announcement in a week or so.
  - I want to give folks access while I'm writing it, so I'll be opening it up for early access as soon as I have 2-3 chapters ready to review. Sign up for the pythontest newsletter (https://pythontest.com/newsletter/) if you'd like to be informed right away when it's ready. Or stay tuned here.

Michael:

- New course!!! Agentic AI Programming for Python: https://training.talkpython.fm/courses/agentic-ai-programming-for-python
- I'll be on Vanishing Gradients as a guest talking book + AI for data scientists: https://luma.com/bzm5etak
- OpenAI launches ChatGPT Atlas: https://openai.com/index/introducing-chatgpt-atlas/
- https://github.com/jamesabel/ismain by James Abel
- Pets in PyCharm: https://github.com/stillya/vpet

Joke: You're absolutely right (https://x.com/Mayhem4Markets/status/1980001528464175463)
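For a flavor of the Cyclopts topic above, here is a minimal sketch based on the App/command pattern from the project's documentation (treat it as an illustration; details may vary by version):

from cyclopts import App

app = App()

@app.command
def greet(name: str, *, excited: bool = False):
    """Greet someone by name."""
    print(f"Hello, {name}{'!' if excited else '.'}")

if __name__ == "__main__":
    app()  # e.g. `python cli.py greet World --excited`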
October 26, 2025
Brian Okken
Polite lazy imports for Python package maintainers
If you are a maintainer of a Python package, it’s nice if you pay attention to the time it takes to import your package.
Further, if you’ve got a Python package with multiple components where it’s probable that many users will only use part of the package, then it’s super nice if you set up your __init__.py files for lazy importing.
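The mechanics here (and what Textual's widgets package uses) rest on module-level __getattr__ from PEP 562. A minimal sketch, with hypothetical package and submodule names:

# mypackage/__init__.py
from importlib import import_module

_lazy_submodules = {"widgets", "markdown"}  # hypothetical submodules

def __getattr__(name):
    # Called only when `name` isn't found the normal way, so each
    # submodule is imported on first access rather than at package import.
    if name in _lazy_submodules:
        return import_module(f".{name}", __name__)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

def __dir__():
    # Advertise the lazy names so tab completion still works.
    return sorted(list(globals()) + list(_lazy_submodules))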
Previously - lazy importing other packages
In Python lazy imports you can use today, I discussed:
Rodrigo Girão Serrão
TIL #135 – Build the Python documentation
Today I learned how to build the Python documentation to preview changes I wanted to make.
If you're not on Windows, all it takes is to run make -C Doc venv htmllive to build the Python documentation locally and to preview it.
This command will build the documentation, start a local server to browse the docs, and also watch for changes in the documentation source files to live-reload while you edit!
I needed this because the Python 3.14 documentation for the module concurrent.interpreters had a terrible-looking "See also" callout with elements that were grossly misaligned:
This makes me want to cry.
However, since I don't know rST, only Markdown, the issue wasn't obvious to me:
.. seealso::

   :class:`~concurrent.futures.InterpreterPoolExecutor`
      combines threads with interpreters in a familiar interface.

    .. XXX Add references to the upcoming HOWTO docs in the seealso block.

   :ref:`isolating-extensions-howto`
      how to update an extension module to support multiple interpreters

   :pep:`554`
   :pep:`734`
   :pep:`684`
After some Googling, it turns out the problem is the comment .. XXX Add references....
Since it's indented four spaces, it's being interpreted as a blockquote!
The fix was just deleting a single space from the left of .. XXX ....
However, I did not stop there! I went above and beyond, capitalising the sentences and adding a full stop to the one that didn't have it!
In the end, the "See also" callout was looking better:
What a work of art.
The Python Coding Stack
Impostors • How Even The Python Docs Get This Wrong* • [Club]
Can you spot all the errors in the following paragraph? There are several:
Two functions that enable you to work effectively with loops in Python are zip() and enumerate(). Along with the range() function, they’re some of the most common tools you’ll see in for loops. And when you master them, you can start exploring the functions in the itertools module.
The correct number of errors in this text is either four or zero. Confused? I don’t blame you. And here’s a bit more confusion for you. It doesn’t matter either way.
Let’s talk about impostors in Python.
It’s likely that one of the first examples you saw when learning about for loops used the range() function as part of the for statement.
But you were fooled. That example, whichever one it was, didn’t do such a thing.
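The teaser stops there, but you can check the premise in a REPL: range(), zip(), and enumerate() are classes, not functions, so calling them constructs instances.

>>> type(range), type(zip), type(enumerate)
(<class 'type'>, <class 'type'>, <class 'type'>)
>>> import types
>>> isinstance(len, types.BuiltinFunctionType)   # len really is a function
True
>>> isinstance(range, types.BuiltinFunctionType)  # range is not
False
>>> isinstance(range(5), range)  # calling range builds a range instance
True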
October 25, 2025
Django Weblog
On the Air for Django's 20th Birthday: Special Event Station W2D
Back in July, we celebrated a very special occasion: Django's 20th birthday! To mark the occasion, three amateur radio operators (including myself) spent the next 14 days, operating evenings and weekends, broadcasting a special event call sign: W2D.
Over those two weeks, we completed 1,026 radio contacts with radio operators in 47 geopolitical entities (for example, the continental US, Alaska and Hawaii are considered separate entities). The US Federal Communications Commission (FCC) issues special event "call signs" for these types of events. We selected W2D for 20 years of Django, but the reference to "Web 2.0" during Django's early years was a bonus!
Over 7,000 lookups were counted on a main callsign lookup site as radio operators checked into what W2D was about. Ham radio is a very popular activity, with more than 750,000 licensed hams in the US!
We created a custom certificate inspired by the design of the Django admin interface for those who made contact with us (certificates are common / expected for events like this in the radio hobby). Here is a sample one, other amateurs contacting the event were able to generate/download their own Django admin inspired certificate from a Django site (which does repeat for those who contacted us multiple times):
Thank you to the amateur radio operators who made the event possible and of course those who contacted us! Thanks to you this was a fun time for us all. Additionally, thank you to the Django Software Foundation and its members who make the Django Web Framework and its community possible.
This screenshot shows 3 other stations (ON7EQ from Belgium, PC2J from the Netherlands, and WA4NFO from the US) all calling W2D on "20 meters" (14 MHz, so named because each wavelength is about 20 meters long). All of the orange bubbles in the map show the other stations receiving the signal from W2D, which was transmitted with 30 watts of RF power. The antenna is an approximately 63-foot-long piece of wire running between a balcony and a fence post.
This map shows approximate locations of each geopolitical entity worked during the special event and a count of contacts made in each.
Check out our birthday website for more events. Up next: PyDay + Cumple Django, organized by PyLadies Colombia in Bogotá.
October 24, 2025
Giampaolo Rodola
Wheels for free-threaded Python now available in psutil
With the release of psutil 7.1.2, wheels for free-threaded Python are now available. This milestone was achieved largely through a community effort, as several internal refactorings to the C code were required to make it possible (see issue #2565). Many of these changes were contributed by Lysandros Nikolaou. Thanks to him for the effort and for bearing with me in code reviews! ;-)
What is free-threaded Python?
Free-threaded Python (available since Python 3.13) refers to Python builds that are compiled with the GIL (Global Interpreter Lock) disabled, allowing true parallel execution of Python bytecodes across multiple threads. This is particularly beneficial for CPU-bound applications, as it enables better utilization of multi-core processors.
The state of free-threaded wheels
According to Hugo van Kemenade's free-threaded wheels tracker, the adoption of free-threaded wheels among the top 360 most-downloaded PyPI packages with C extensions is still limited. Only 128 out of these 360 packages provide wheels compiled for free-threaded Python, meaning they can run on Python builds with the GIL disabled. This shows that, while progress has been made, most popular packages with C extensions still do not offer ready-made wheels for free-threaded Python.
What it means for users
When a library author provides a wheel, users can install a pre-compiled binary package without having to build it from source. This is especially important for packages with C extensions, like psutil, which is largely written in C. Such packages often have complex build requirements and require installing a C compiler. On Windows, that means installing Visual Studio or the Build Tools, which can take several gigabytes and significant setup effort. Providing wheels spares users from this hassle, makes installation far simpler, and is effectively essential for the users of that package. You basically pip install psutil and you're done.
What it means for library authors
Currently, universal wheels for free-threaded Python do not exist: each wheel must be built specifically for a Python version, so right now authors must create separate wheels for Python 3.13 and 3.14. That already means distributing a lot of files:
psutil-7.1.2-cp313-cp313t-macosx_10_13_x86_64.whl
psutil-7.1.2-cp313-cp313t-macosx_11_0_arm64.whl
psutil-7.1.2-cp313-cp313t-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl
psutil-7.1.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl
psutil-7.1.2-cp313-cp313t-win_amd64.whl
psutil-7.1.2-cp313-cp313t-win_arm64.whl
psutil-7.1.2-cp314-cp314t-macosx_10_15_x86_64.whl
psutil-7.1.2-cp314-cp314t-macosx_11_0_arm64.whl
psutil-7.1.2-cp314-cp314t-manylinux2010_x86_64.manylinux_2_12_x86_64.manylinux_2_28_x86_64.whl
psutil-7.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl
psutil-7.1.2-cp314-cp314t-win_amd64.whl
psutil-7.1.2-cp314-cp314t-win_arm64.whl
This also multiplies CI jobs and slows down the test matrix (see build.yml). A true universal wheel would greatly reduce this overhead, allowing a single wheel to support multiple Python versions and platforms. Hopefully, Python 3.15 will simplify this process. Two competing proposals, PEP 803 and PEP 809, aim to standardize wheel naming and metadata to allow producing a single wheel that covers multiple Python versions. That would drastically reduce distribution complexity for library authors, and it's fair to say it's essential for free-threaded CPython to truly succeed.
How to install free-threaded psutil
You can now install psutil for free-threaded Python directly via pip:
pip install psutil --only-binary=:all:
This ensures you get the pre-compiled wheels without triggering a source build.
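If you want to verify what you ended up with, here is a quick sanity check (a sketch; sys._is_gil_enabled() exists on Python 3.13+):

import sys
import sysconfig

# True if the interpreter was built free-threaded (GIL disabled at build time).
print("free-threaded build:", bool(sysconfig.get_config_var("Py_GIL_DISABLED")))

# On 3.13+, whether the GIL is actually enabled right now; extensions that
# aren't marked free-threading-safe can cause it to be re-enabled at runtime.
if hasattr(sys, "_is_gil_enabled"):
    print("GIL enabled at runtime:", sys._is_gil_enabled())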
External links
Real Python
The Real Python Podcast â Episode #271: Benchmarking Python 3.14 & Enabling Asyncio to Scale
How does Python 3.14 perform under a few hand-crafted benchmarks? Does the performance of asyncio scale on the free-threaded build? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
Graham Dumpleton
Detecting object wrappers
It should not need to be said, but monkey patching is evil.
At least that is the mantra we like to recite, but the reality is that for some things in Python it is the only practical solution.
The best example of this and the reason that wrapt was created in the first place, is to instrument existing Python code to collect metrics about its performance when run in production.
Since one cannot expect a customer for an application performance monitoring (APM) service to modify their code, as well as code of the third party dependencies they may use, transparently reaching in and monkey patching code at runtime is the best one can do.
Doing this can be fraught with danger, and one has to be very cautious about how you monkey patch code and what the patches do. It is all well and good if you are doing this only for your own code, so any issues that crop up only affect yourself, but if you are applying such changes to a customer's application code in order for them to use your service, you have to be even more careful.
This caution needs to be elevated to the next level again for wrapt since it is the go to package for monkey patching Python code in such situations.
With such caution in mind, the latest version of wrapt was marked as a major version update. In general I thought the changes were good and everything should still be compatible for use cases I knew of, but you never know what strange things people do. My memory also isn't the best and I will not necessarily remember all the tricks even I have used in the past when using wrapt and how they might be affected. So a major version update it was, just to be safe.
I naively thought that everything should then be good and users of wrapt would be diligent and ensure they tested their usage of this major new version before updating. Unfortunately, one major SaaS vendor using wrapt wasn't pinning their dependencies against the major version. This resulted in their customers unknowingly being upgraded to the new major version of wrapt and although I don't know the full extent of it, apparently this did cause a bit of unexpected havoc for some of their users.
The reason for the problems that arose was that monkey patching was being done dynamically at time of use of some code, rather than at time of import. This is nothing strange in itself when doing monkey patching, but the issue was that the code needed to check whether the monkey patches had already been applied and, if they had, not apply them a second time. Detecting this situation is a bit tricky and the simple solution may not always work. Everything was made worse by the fact that wrapt made changes to the class hierarchy for the object proxies it provides.
Object proxy hierarchy
Prior to wrapt version 2.0.0, the class hierarchy for the object proxy and function wrappers was as follows.
class ObjectProxy: ...
class CallableObjectProxy(ObjectProxy): ...
class _FunctionWrapperBase(ObjectProxy): ...
class BoundFunctionWrapper(_FunctionWrapperBase): ...
class FunctionWrapper(_FunctionWrapperBase): ...
Decorators and generic function wrappers used the FunctionWrapper class. If needing to create your own custom object proxies you would derive your custom wrapper class from ObjectProxy.
The _FunctionWrapperBase was an internal implementation detail and would never be used directly. Except for some corner cases the BoundFunctionWrapper class should also never need to be used directly.
In wrapt version 2.0.0, this hierarchy was changed, with some new proxy class types thrown in as well. The end result looks as follows.
class BaseObjectProxy: ...
class ObjectProxy(BaseObjectProxy): ...
class AutoObjectProxy(BaseObjectProxy): ...
class LazyObjectProxy(AutoObjectProxy): ...
class CallableObjectProxy(BaseObjectProxy): ...
class _FunctionWrapperBase(BaseObjectProxy): ...
class BoundFunctionWrapper(_FunctionWrapperBase): ...
class FunctionWrapper(_FunctionWrapperBase): ...
The reason for introducing BaseObjectProxy was that early on in the life of wrapt some bad choices were made as to what default proxy methods were added to the ObjectProxy class. One of these was for the special __iter__() method.
This method presented problems because some code out there, rather than simply attempting to iterate over an object and catching the resulting exception when it wasn't iterable, would try to optimise things, or change behaviour, based on whether the __iter__() method existed on an object. Even though a wrapped object might not be iterable or define that method, the object proxy always provided it, which would cause problems for code that checked for the existence of __iter__().
It was quite a long time before this mistake of adding __iter__() was noticed and it couldn't just be backed out as that would then break code which became dependent on that proxy method existing. Taking the proxy method away would have forced people to create a custom object proxy class of their own which added it back for their use case, or at the minimum they would need to add it to their existing custom object proxy class.
In wrapt version 2.0.0 it was decided to finally try and address this mistake. The new BaseObjectProxy class is the same as the original ObjectProxy class except that the proxy method for __iter__() has been removed. The only thing in the ObjectProxy class in version 2.0.0 is the addition of the __iter__() method to keep backward compatibility with code out there using wrapt.
The recommended approach going forward is that the ObjectProxy class should effectively be ignored and when creating a custom object proxy, you should instead inherit from BaseObjectProxy. If you need a proxy method for __iter__(), you should add it explicitly in your custom object proxy.
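As a concrete sketch of that recommendation (assuming wrapt 2.x, where BaseObjectProxy exists):

import wrapt

class CustomObjectProxy(wrapt.BaseObjectProxy):
    # Opt back in to iteration explicitly; BaseObjectProxy deliberately
    # omits the __iter__() proxy method that the old ObjectProxy provided.
    def __iter__(self):
        return iter(self.__wrapped__)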
Testing for an object proxy
I thought the above changes were reasonable and should allow existing code using wrapt to continue to work. I have to admit though that I did forget one situation where the change may cause a problem. This is when testing whether an object is already wrapped by an object proxy, in order to avoid applying the same object proxy a second time.
In part I probably overlooked it as it isn't a reliable test to use in the first place if you used the simple way of doing it. As such, when I have had to do it in the past, I have avoided the simple way and used a more complicated test.
The simple test in this case that I am referring to is a test of the form:
if not isinstance(object, ObjectProxy):
    ... add custom object proxy to object
Prior to wrapt 2.0.0 this test could be applied to an object to test whether it was wrapped by a custom object proxy, including FunctionWrapper or BoundFunctionWrapper as used by decorators, or a generic function wrapper. The test relied on checking whether the object was actually a wrapper derived from the ObjectProxy base class.
As I understand it, this is the test that was being used in the instrumentation package used by the SaaS product that had all the issues when it suddenly started using wrapt version 2.0.0 due to not being pinned against the major version of wrapt.
With wrapt version 2.0.0 this check would start failing for FunctionWrapper and BoundFunctionWrapper as they no longer derive from ObjectProxy, but from BaseObjectProxy instead.
Even if these changes to the class hierarchy hadn't been made in wrapt, this test was already fragile and could have broken at some point for other reasons.
Take for example if the original code which was being patched decided to start using wrapt themselves to add a decorator to the function which later code then tried to monkey patch. Since the decorator using wrapt would pass the test for already being wrapped, the code doing the monkey patching would wrongly assume that its own object proxy had already been applied and not add it. Thus the target function would not be instrumented as intended.
Another case is where there were multiple packages trying to monkey patch the same function for different purposes. If both used wrapt to do the monkey patching, but each with their own custom object proxy, the first to apply its monkey patch would win: the second, testing whether it should apply its own monkey patch, would see the existing proxy and skip it.
In other words, the test is not specific enough and would detect the presence of any object proxy wrapping the target object. This isn't all though, and other issues can also arise, as we will get to later.
Anyway, the end result of this test not working as intended when the SaaS package started using wrapt version 2.0.0 was that every time it tested whether it had already applied the generic function wrapper using FunctionWrapper, it thought it hadn't, and so it would add it again. As the number of times the wrapper was added grew, so did the application's memory usage. It seems the problem was only noticed when the number of nested function wrappers became so great that the Python call stack size was exceeded when the wrapped function was called. Up till then, memory usage, performance, and possibly any captured metrics data would all have been affected.
The quick fix to ensure this code would also work with wrapt version 2.0.0 would be to use something like the following.
import wrapt

BASE_OBJECT_PROXY = wrapt.ObjectProxy

if hasattr(wrapt, "BaseObjectProxy"):
    BASE_OBJECT_PROXY = wrapt.BaseObjectProxy

if not isinstance(object, BASE_OBJECT_PROXY):
    ... add custom object proxy to object
This is still going to be fragile though, as noted above, even if it fixes the immediate problem.
If using a custom object proxy class, it would be better to test for the specific type of that custom object proxy.
if not isinstance(object, CustomObjectProxy):
    object = CustomObjectProxy(object)
This is better because the custom object proxy is your type and so it is properly testing that it is in fact your wrapper.
If they were using FunctionWrapper, they would likely not have encountered an issue had they used:
if not isinstance(object, FunctionWrapper):
    object = FunctionWrapper(object, ...)
but then it still isn't detecting whether it was their specific wrapper.
Depending on how the monkey patch is being applied, it would still be better to create your own empty custom function wrapper class type:
class CustomFunctionWrapper(FunctionWrapper): pass

if not isinstance(object, CustomFunctionWrapper):
    object = CustomFunctionWrapper(object, ...)
Testing explicitly against your own custom object type is therefore better, but still not foolproof.
Nested function wrappers
Where this can still fall apart is where multiple wrappers are applied to the same target function. If your custom wrapper was added first, but another was then applied on top, you will not see your wrapper when you come back and test the outermost object against your custom type.
There are two things that might help in this situation.
The first is that for wrapt object proxies at least, when an attribute does not exist on the wrapper itself, the lookup falls through to the wrapped object.
This means that if you add a uniquely named attribute to a custom object proxy, you can test whether your object proxy wraps an object by looking up that attribute. Because attribute lookup falls through from the wrapper to the wrapped object, if you have multiple wrappers the lookup will propagate all the way down to the original wrapped object.
class CustomObjectProxy(ObjectProxy):
    __custom_object_proxy_marker__ = True

object = ObjectProxy(CustomObjectProxy(None))

if not hasattr(object, "__custom_object_proxy_marker__"):
    object = CustomObjectProxy(object)
Thus your custom object proxy would not be applied a second time, even though it was nested below another.
As noted though, this only works for wrapt object proxies. Or at least, it requires any object proxy in the chain to propagate attribute lookup to the wrapped object.
This will not work, for example, if there is a function wrapper created using a nested function, as is typically done with simple decorators, even if the implementation of those decorators uses functools.wraps().
That said, if functools.wraps() is used with a conventional function wrapper mixed in with use of an object proxy using wrapt, another option does exist.
This is because Python at one point (I'm not sure when) introduced the convention that any function wrapper (e.g., decorators) should expose a __wrapped__ attribute which provides access to the wrapped object.
I can't remember the exact purpose in requiring this, but the wrapt object proxy supports it, and functools.wraps() also ensures it is provided.
Where the __wrapped__ attribute exists, what we can therefore do is traverse the chain of wrappers ourselves, looking for the type of our custom wrapper type.
found = False
wrapper = object

while wrapper is not None:
    if isinstance(wrapper, CustomObjectProxy):
        found = True
        break
    wrapper = getattr(wrapper, "__wrapped__", None)

if not found:
    object = CustomObjectProxy(object)
If you are using the generic FunctionWrapper class, rather than needing to create a derived version for every different use case, you could use a named function wrapper with a name attribute that is checked.

class CustomFunctionWrapper(FunctionWrapper):
    def __init__(self, name, wrapped, *args, **kwargs):
        super().__init__(wrapped, *args, **kwargs)
        # Use wrapt's _self_ attribute convention so the name is stored
        # on the wrapper itself rather than the wrapped object.
        self._self_name = name
found = False
wrapper = object

while wrapper is not None:
    if isinstance(wrapper, CustomFunctionWrapper):
        if wrapper._self_name == "wrapper-type":
            found = True
            break
    wrapper = getattr(wrapper, "__wrapped__", None)

if not found:
    object = CustomFunctionWrapper("wrapper-type", object, ...)
The end result is that if you want to be as resilient as possible, you should always use a custom object proxy type when you need to detect a specific wrapper of your own. This includes creating your own derived version of FunctionWrapper, with an optional name attribute to distinguish different use cases if needed.
This should then be done in combination with traversing any chain of wrappers looking for it, rather than assuming your wrapper will be topmost.
What's to be learned
The most important thing to learn from what happened in this case is that if a package appears to use the SemVer strategy for versioning, then believe that when a major version update occurs, there is a good chance it has API incompatibilities. Sure, it isn't guaranteed that minor versions will not also inadvertently break things, but a major version sure is a big red flag.
So look at how packages use versioning and consider at least pinning versions to a major version.
Finally, always be very cautious when doing monkey patching and try to design things to be as bulletproof as possible, especially if the end target of the monkey patches is code run by other people, as is the case for APM service instrumentation. Your customers will always appreciate you more when you don't break their applications.
Seth Michael Larson
Easily create co-authored commits with GitHub handles
You can add co-authors to a GitHub commit using the Co-authored-by field in the git commit message. But what if your co-author doesn't have a public email address listed on GitHub?
No problem, you can use this handy script to automatically discover a user's display name and per-account "noreply" email address that'll mark their account as a co-author without a public email address.
#!/bin/bash
login="$@"
# Fetch the user's numeric account ID and display name from the GitHub API.
read id display_name < <(echo $(curl -s "https://api.github.com/users/$login" | jq -r '.id, .name'))
# GitHub's per-account noreply address has the form <id>+<login>@users.noreply.github.com.
echo "Co-authored-by: $display_name <$id+$login@users.noreply.github.com>"
I've added this script to my PATH in a file named coauthoredby, so I can call the script like so:
$ coauthoredby sethmlarson
Co-authored-by: Seth Michael Larson <18519037+sethmlarson@users.noreply.github.com>
And this can be used auto-magically with multi-line git commits, so if I'm trying to credit Quentin Pradet as a co-author I'd do this:
$ git commit -m "Fixing bugs as usual
>
> $(coauthoredby pquentin)"
Resulting in this git commit message:
$ git log
Author: Seth Michael Larson <sethmichaellarson@gmail.com>
Date: Fri Oct 24 11:07:55 2025 -0500
Fixing bugs as usual
Co-authored-by: Quentin Pradet <42327+pquentin@users.noreply.github.com>
Thanks for keeping RSS alive! ♥
October 23, 2025
"Michael Kennedy's Thoughts on Technology"
Course: Agentic AI for Python Devs
I just published a brand new course over at Talk Python: Agentic AI Programming for Python Devs.
This course teaches you how to collaborate with agentic AI tools, not just chatbots or autocomplete, but AI that can understand your entire project, execute commands, run tests, format code, and build complete features autonomously. You’ll learn to guide these tools like you would a talented junior developer on your team, setting up the right guardrails and roadmaps so they consistently deliver well-structured, maintainable code that matches your standards. Think of it as pair programming with an AI partner who learns your preferences, follows your conventions, and gets more effective the better you communicate.
I think you’ll find it immensely valuable. Check it out over at talkpython.fm/agentic-ai
PyCharm
Why Performance Matters in Python Development
This is a guest post from Dido Grigorov, a deep learning engineer and Python programmer with 17 years of experience in the field.
Pythonâs flexibility and ease of use make it a go-to language for developers, but its interpreted nature and global interpreter lock (GIL) can lead to performance bottlenecks, especially in large-scale or resource-intensive applications. Whether you’re building web servers, data pipelines, or real-time systems, optimizing Python code can save time, reduce costs, and improve the user experience.
Drawing from practical examples and insights from the Python community, this article explores proven performance hacks to help your applications run faster and more smoothly.
Understanding Python’s performance characteristics
Python’s interpreted nature fundamentally shapes its performance profile. Unlike compiled languages such as C, C++, or Rust, which translate source code directly into machine code before execution, Python follows a multi-step process that introduces inherent overhead.
When you run a Python script, the interpreter first compiles your source code into bytecode, a lower-level, platform-independent representation. This bytecode is then executed by the Python Virtual Machine (PVM), which translates each instruction into machine code at runtime.
This interpretation layer, while providing Python’s trademark flexibility and cross-platform compatibility, comes with a performance cost. Each line of code must be processed and translated during execution, creating a bottleneck that becomes particularly pronounced in computation-heavy scenarios.
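You can inspect this bytecode layer directly with the standard library's dis module:

import dis

def add(a, b):
    return a + b

# Show the bytecode instructions the PVM will interpret one at a time.
dis.dis(add)
# Typical output (exact opcodes vary by Python version):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    0 (+)
#   RETURN_VALUE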
The global interpreter lock challenge
Compounding Python’s interpreted nature is the global interpreter lock (GIL), arguably one of the most significant performance limitations in CPython. The GIL is a mutex (mutual exclusion lock) that ensures only one thread can execute Python bytecode at any given moment, even on multi-core systems. While this design simplifies memory management and prevents race conditions in CPython’s internals, it effectively serializes execution for CPU-bound tasks.
The GIL’s impact varies significantly depending on your workload. I/O-bound operations, such as file reading, network requests, or database queries, can release the GIL during blocking operations, allowing other threads to proceed. However, CPU-intensive tasks like mathematical computations, data processing, or algorithmic operations remain largely single-threaded, unable to leverage multiple CPU cores through traditional threading.
With and without GIL
CPU cores detected: 32
Workers: 4
----------------------------------------
=== GIL-BOUND: pure Python loop ===
Single thread (py): 0.86s
Threads x4 (py): 3.24s
Processes x4 (py): 0.85s
----------------------------------------
=== GIL-RELEASED: PBKDF2 in C (hashlib) ===
Single thread (pbkdf2): 0.25s
Threads x4 (pbkdf2): 0.67s
Processes x4 (pbkdf2): 0.31s
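The script behind those numbers isn't included in the post, but a minimal sketch with the same shape might look like this (worker counts, loop sizes, and PBKDF2 parameters are assumptions):

import hashlib
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

WORKERS = 4

def py_loop(_=None, n=5_000_000):
    # GIL-bound: pure Python bytecode, which never releases the GIL.
    total = 0
    for i in range(n):
        total += i
    return total

def pbkdf2(_=None):
    # GIL-released: hashlib's C implementation drops the GIL while it works.
    return hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 200_000)

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

def run_pool(pool_cls, fn):
    with pool_cls(max_workers=WORKERS) as ex:
        list(ex.map(fn, range(WORKERS)))

if __name__ == "__main__":
    timed("Single thread (py)", py_loop)
    timed("Threads x4 (py)", lambda: run_pool(ThreadPoolExecutor, py_loop))
    timed("Processes x4 (py)", lambda: run_pool(ProcessPoolExecutor, py_loop))
    timed("Single thread (pbkdf2)", pbkdf2)
    timed("Threads x4 (pbkdf2)", lambda: run_pool(ThreadPoolExecutor, pbkdf2))
    timed("Processes x4 (pbkdf2)", lambda: run_pool(ProcessPoolExecutor, pbkdf2))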
Beyond language limitations: Code-level performance issues
While Python’s architectural constraints are well-documented, many performance problems stem from suboptimal coding practices that are entirely within developers’ control. These inefficiencies can dwarf the overhead introduced by interpretation and the GIL, making code optimization both critical and highly impactful.
Common performance pitfalls include (a couple of these are illustrated in the sketch after this list):
- Unnecessary looping
- String concatenation in loops
- Not using built-in functions or libraries
- Misusing lists instead of appropriate data structures
- Overusing global variables
- Not using generators for large datasets
- Ignoring the GIL in CPU-bound tasks
- Excessive function calls or recursion
- Not profiling or benchmarking
- Overusing list comprehensions for side effects
- Repeatedly accessing object attributes
- Not using enumerate() for index tracking
- Inefficient dictionary key checks
- Copying large data structures unnecessarily
- Ignoring exception handling overhead
- Overusing regular expressions
- Not batching I/O operations
- Dynamic type checking overhead
- Not using context managers for resources
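To make a couple of these concrete, here is a small sketch comparing string concatenation in a loop against str.join(), and list membership against set membership (exact timings will vary by machine and Python version):

import timeit

words = [str(i) for i in range(10_000)]

def concat_loop():
    out = ""
    for w in words:
        out += w           # builds a progressively longer string each pass
    return out

def join_once():
    return "".join(words)  # one allocation pass over all the pieces

haystack_list = list(range(10_000))
haystack_set = set(haystack_list)      # O(1) average membership checks

print("concat :", timeit.timeit(concat_loop, number=200))
print("join   :", timeit.timeit(join_once, number=200))
print("in list:", timeit.timeit(lambda: 9_999 in haystack_list, number=20_000))
print("in set :", timeit.timeit(lambda: 9_999 in haystack_set, number=20_000))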
The critical importance of optimization
Understanding why performance optimization matters requires examining its impact across multiple dimensions of software development and deployment.
Resource efficiency and infrastructure costs
Unoptimized code creates a cascading effect on system resources. CPU-intensive operations that could run in seconds might take minutes, tying up processors and preventing other tasks from executing efficiently. Memory inefficiencies can lead to excessive RAM usage, forcing systems to rely on slower disk-based virtual memory. Poor I/O patterns can saturate network connections or overwhelm storage systems.
In cloud environments, these inefficiencies translate directly into financial costs. Cloud providers charge based on resource consumption: CPU time, memory usage, storage operations, and network bandwidth.
A poorly optimized algorithm that runs 10 times slower doesn’t just delay results; it increases your cloud bill by an order of magnitude. For organizations processing large datasets or serving high-traffic applications, these costs can quickly escalate from hundreds to thousands of dollars monthly.
Scalability and system limits
Performance issues that seem manageable during development or small-scale testing become exponentially worse under production loads. An application that handles dozens of users might crumble under thousands. A data processing pipeline that works with gigabytes of data might fail entirely when faced with terabytes.
Scalability challenges often emerge at predictable inflection points. Database queries that perform adequately with thousands of records may time out with millions. Web applications that respond quickly to individual requests may become unresponsive when handling concurrent users. Background processing jobs that complete within acceptable timeframes during off-peak hours may fall behind during high-demand periods, creating growing backlogs that eventually overwhelm the system.
Poor performance characteristics make horizontal scaling (adding more servers or instances) less effective and more expensive. Instead of smoothly distributing load across additional resources, inefficient code often creates bottlenecks that prevent proper load distribution. This forces organizations to over-provision resources or implement complex workarounds that increase system complexity and operational overhead.
User experience and competitive advantage
In user-facing applications, performance directly impacts user satisfaction and business outcomes. Research consistently shows that users abandon applications and websites that respond slowly, with abandonment rates increasing dramatically for delays exceeding 2-3 seconds. Mobile applications face even stricter performance expectations, as users expect instant responsiveness despite potentially limited processing power and network connectivity.
Development velocity and technical debt
Counter-intuitively, investing in performance optimization often accelerates development velocity over time. Slow test suites discourage frequent testing, leading to longer feedback cycles and increased bug detection costs. Development environments that respond sluggishly reduce programmer productivity and increase context-switching overhead. Build and deployment processes that take excessive time slow down iteration cycles and delay feature releases.
Performance problems also compound over time, creating technical debt that becomes increasingly expensive to address. Code that performs adequately during initial development may degrade as features are added, data volumes grow, or usage patterns evolve. Early investment in performance-conscious design and optimization practices prevents this degradation and maintains system responsiveness as complexity increases.
Operational stability and reliability
Performance and reliability are intimately connected. Systems operating near their performance limits are inherently fragile, with little headroom to handle traffic spikes, data volume increases, or unexpected load patterns. What appears to be a reliability issue â application crashes, timeouts, or service unavailability â often traces back to performance problems that exhaust system resources or exceed operational thresholds.
Optimized code provides operational resilience by maintaining acceptable performance even under stressful conditions. This resilience translates into better uptime, fewer emergency interventions, and more predictable system behavior. Organizations with performance-optimized systems spend less time firefighting production issues and more time delivering value to users.
The strategic imperative
Performance optimization isn’t merely a technical consideration; it’s a strategic business imperative that affects costs, user satisfaction, competitive positioning, and operational efficiency. While Python’s interpreted nature and GIL limitations create inherent performance constraints, the majority of performance issues stem from code-level inefficiencies that skilled developers can identify and address.
The key is approaching performance optimization systematically, with clear metrics, targeted improvements, and continuous monitoring. Rather than premature optimization that complicates code without measurable benefit, effective performance work focuses on identifying actual bottlenecks, implementing proven optimization techniques, and measuring results to ensure improvements deliver real value.
Dispelling common performance myths
Before exploring optimization techniques, it’s crucial to address widespread misconceptions that can lead developers down unproductive paths or cause them to dismiss Python entirely for performance-critical applications.
Myth 1: Python is universally slow
This oversimplification ignores Python’s nuanced performance characteristics and the critical distinction between different types of computational workloads. Python’s performance varies dramatically depending on the nature of the task and how the code leverages the broader Python ecosystem.
For I/O-bound operations (tasks that spend most of their time waiting for external resources like file systems, databases, or network services), Python’s interpreted overhead becomes largely irrelevant. When your program spends 95% of its time waiting for a database query to complete or a web API to respond, the few milliseconds of interpretation overhead pale in comparison to the I/O latency. In these scenarios, Python’s expressiveness and rapid development capabilities far outweigh any performance concerns.
Myth 2: The GIL eliminates all concurrency benefits
The global interpreter lock’s impact on performance is frequently misunderstood, leading developers to either avoid threading entirely or attempt inappropriate parallelization strategies. The reality is more nuanced and depends heavily on workload characteristics.
For I/O-bound tasks, threading remains highly effective despite the GIL. When a thread initiates an I/O operation (whether reading from disk, making a network request, or querying a database), it releases the GIL, allowing other threads to execute Python code. This means that a multi-threaded web scraper can achieve significant performance improvements by processing multiple requests concurrently, even though each individual request involves Python code execution.
The distinction is clear when comparing CPU-bound versus I/O-bound scenarios. A single-threaded program making sequential HTTP requests will be limited by network latency, regardless of CPU speed. Adding threads allows multiple requests to proceed simultaneously, dramatically reducing total execution time. Conversely, a CPU-intensive calculation like computing prime numbers or processing images will see minimal benefit from threading due to GIL contention.
Understanding this distinction enables appropriate tool selection. Threading excels for I/O-bound parallelism, while CPU-bound tasks benefit from multiprocessing (which bypasses the GIL by using separate processes) or specialized libraries that release the GIL during computation-intensive operations.
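A quick way to see that distinction for yourself, simulating I/O with time.sleep() (which, like real blocking I/O, releases the GIL):

import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(_):
    time.sleep(0.5)  # stands in for a network call or disk read

start = time.perf_counter()
for i in range(8):
    fake_io(i)
print(f"sequential: {time.perf_counter() - start:.2f}s")  # ~4.0s

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as ex:
    list(ex.map(fake_io, range(8)))
print(f"threaded:   {time.perf_counter() - start:.2f}s")  # ~0.5s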
Myth 3: Hardware upgrades substitute for code optimization
The temptation to solve performance problems by upgrading hardware is understandable, but often ineffective and economically wasteful. While faster processors, additional memory, or improved storage can provide linear performance improvements, they cannot overcome fundamental algorithmic inefficiencies that exhibit quadratic, cubic, or exponential time complexity.
Consider a sorting algorithm comparison: Upgrading from a 2GHz to a 4GHz processor will make Bubble Sort run twice as fast, but switching from Bubble Sort (O(nÂČ)) to Quick Sort (O(n log n)) provides exponentially greater improvements as data size increases. For sorting 10,000 elements, the algorithmic improvement might yield a 100x speed gain, while the hardware upgrade provides only a 2x improvement.
This principle scales to real-world applications. A database query with poor indexing won’t be significantly helped by faster hardware, but adding appropriate indexes can reduce execution time from minutes to milliseconds. A web application that loads entire datasets into memory for simple filtering operations will struggle regardless of available RAM, but implementing database-level filtering can reduce both memory usage and execution time by orders of magnitude.
Hardware upgrades also carry ongoing costs in cloud environments. A poorly optimized application might require a server instance with 16 CPU cores and 64GB of RAM, while an optimized version could run efficiently on 2 cores and 8GB of RAM. Over time, the cost difference becomes substantial, especially when multiplied across multiple environments or scaled to handle increasing load.
Myth 4: Performance optimization is about intuition and experience
The belief that experienced developers can reliably identify performance bottlenecks through code review or intuition leads to wasted effort and suboptimal results. Human intuition about code performance is notoriously unreliable, especially in high-level languages like Python where the relationship between source code and actual execution patterns is complex.
Profiling tools provide objective, quantitative data about where programs actually spend their time, often revealing surprising results. That nested loop you suspected was the bottleneck might account for only 2% of execution time, while an innocuous-looking string operation consumes 60% of runtime. Library calls that appear lightweight might involve substantial overhead, while seemingly expensive operations might be highly optimized.
Tools like cProfile, line_profiler, and memory_profiler remove guesswork by providing detailed breakdowns of function call frequency, execution time, and resource usage. These tools not only identify actual bottlenecks but also quantify the potential impact of optimizations, helping prioritize improvement efforts.
Profiling also reveals performance patterns that aren’t immediately obvious from source code inspection. Garbage collection overhead, import costs, attribute access patterns, and memory allocation behavior all contribute to runtime performance in ways that static analysis cannot predict. Data-driven optimization based on profiling results consistently outperforms intuition-based approaches.
Moving beyond misconceptions
Recognizing these myths enables more strategic thinking about Python performance optimization. Rather than dismissing Python for performance-sensitive applications or attempting blanket optimizations, developers can make informed decisions about when and how to optimize based on actual usage patterns and measured performance characteristics.
The key insight is that Python performance optimization is contextual and empirical. Understanding your specific workload characteristics, measuring actual performance bottlenecks, and selecting appropriate optimization strategies based on data rather than assumptions will yield far better results than following generic performance advice or avoiding Python entirely due to perceived limitations.
The critical role of profiling and benchmarking
Effective performance optimization relies on measurement, not guesswork. Without systematic profiling and benchmarking, optimization efforts often miss the actual bottlenecks and waste valuable development time on improvements that don’t matter.
Why intuition fails
Developer intuition about performance is notoriously unreliable. We naturally focus on complex-looking code, assuming it’s slow, while overlooking simple operations that actually dominate runtime. Python’s high-level abstractions make this worse: an innocent list comprehension might trigger massive memory allocations, while an apparently expensive algorithm contributes negligibly to total execution time.
Profiling: Finding the real bottlenecks
Profiling tools provide objective data about where your program actually spends time. Tools like cProfile show which functions are called most frequently and consume the most time, while line_profiler pinpoints specific statements within functions. Memory profilers track allocation patterns and identify memory leaks that trigger excessive garbage collection.
This data-driven approach replaces guesswork with facts. Instead of optimizing code that looks slow, you optimize the functions that actually dominate your application’s runtime.
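As a sketch of the line-level workflow (assumes `pip install line_profiler`; run with `kernprof -l -v script.py`, which injects a `profile` builtin):

try:
    profile  # provided by kernprof when running under line_profiler
except NameError:
    def profile(func):  # no-op fallback so the script also runs plain
        return func

@profile
def parse_rows(rows):
    out = []
    for row in rows:
        out.append(row.strip().split(","))
    return out

if __name__ == "__main__":
    parse_rows(["a, 1", "b, 2"] * 10_000)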
Benchmarking: Measuring success
Benchmarking quantifies whether your optimizations actually work. Without systematic measurement, you can’t know if changes improve performance, by how much, or under what conditions.
Effective benchmarking requires running multiple iterations to account for system variability, using representative datasets that match production workloads, and testing both isolated functions (microbenchmarks) and complete workflows (end-to-end benchmarks).
Benchmarking with cProfile
Benchmarking with cProfile reveals slow functions and call counts. For example, profiling a naive Fibonacci function shows millions of recursive calls, but memoization reduces this dramatically:
# Naive Fibonacci
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

def process_data(n):
    results = []
    for i in range(n):
        results.append(fib(i) * 2)
    return results

import cProfile
cProfile.run("process_data(30)")
# Output: 4356620 function calls in 4.239 seconds

# With memoization
def fib_memo(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_memo(n-1, memo) + fib_memo(n-2, memo)
    return memo[n]

def process_data_fast(n):
    return [fib_memo(i) * 2 for i in range(n)]

cProfile.run("process_data_fast(30)")
# Output: 91 function calls in 0.000 seconds
Time measured:
Naive Fibonacci: Output: 4,356,620 function calls in 4.239 seconds
4356620 function calls (64 primitive calls) in 1.646 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
4356586/30 1.646 0.000 1.646 0.055 <ipython-input-45-6e8c229d1114>:3(fib)
1 0.000 0.000 1.646 1.646 <ipython-input-45-6e8c229d1114>:8(process_data)
1 0.000 0.000 1.646 1.646 <string>:1(<module>)
1 0.000 0.000 1.646 1.646 {built-in method builtins.exec}
30 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
With memoization: 91 function calls in 0.000 seconds. Memoization helps by storing the results of expensive function calls and reusing them when the same inputs occur again.
91 function calls (35 primitive calls) in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <ipython-input-46-d2e1a597ad5b>:12(process_data_fast)
1 0.000 0.000 0.000 0.000 <ipython-input-46-d2e1a597ad5b>:13(<listcomp>)
86/30 0.000 0.000 0.000 0.000 <ipython-input-46-d2e1a597ad5b>:4(fib_memo)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
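Incidentally, the hand-rolled memo dict above is what the standard library's functools.lru_cache gives you for free; a sketch:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    if n <= 1:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

# Same collapse from exponential to linear work as the memo-dict version.
print(fib_cached(30))  # 832040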
Benchmarking with time and timeit
- time: Offers quick runtime estimates for rough comparisons.
- timeit: Ideal for benchmarking small snippets and running them multiple times for accuracy. For instance, comparing summation methods shows sum(range(n)) is fastest:
import timeit
n = 1000000
setup = "n = 1000000"
loop_time = timeit.timeit("total = 0\nfor i in range(n): total += i", setup=setup, number=10)
sum_time = timeit.timeit("sum(range(n))", setup=setup, number=10)
print(f"Loop: {loop_time:.4f}s, Built-in Sum: {sum_time:.4f}s")
Time measured:
- Loop: 2.0736s
- Built-in sum: 0.9875s
How to optimize Python code with PyCharm
Steps to profile Python code in PyCharm
1. Open your project
Open the Python project you want to analyze in PyCharm.
2. Choose the file or function to profile
Open the Python file you want to profile. You can profile a full script or specific test functions.
3. Run with the PyCharm profiler
You have two main options:
Option A: Profile a script
- Right-click the Python script (e.g. main.py).
- Select: Profile ‘<script name>’
Option B: Use the menu
- Go to Run | Profile…
- Choose your script or configuration.
4. Wait for execution
The script will run normally, but PyCharm will collect profiling data in the background.
5. View profiling results
Once execution completes:
- PyCharm will open the Profiler window.
- You’ll then see:
  - The function call hierarchy
  - The execution time per function
  - The number of calls
  - CPU time vs. wall time
  - Call graphs (optional)
You can sort by time, filter functions, and drill down into nested calls.
Understanding the profiling report
The PyCharm IDE runs a Python script (test.py) that compares the performance of a naive Fibonacci implementation with a memoized version. The built-in profiler’s Statistics tab is open, revealing that the naive fib function was called over 4.3 million times, taking 603 ms in total, while the memoized version completed almost instantly.
We benchmark naive versus memoized Fibonacci calculations. The built-in profiler is open in the Flame Graph view, visually illustrating that the naive fib function dominates the execution time, while the memoized version’s impact is negligible, highlighting the performance gains from memoization.
PyCharm spotlights a classic bottleneck: a naive Fibonacci thrashes in recursion while the memoized path sails through. The Call Tree view lays it bare, with two deep fib branches swallowing essentially the entire 603 ms runtime:
- It attributes ~100% of total time to process_data() → fib(), split across the left/right recursive calls (~301 ms each).
- This symmetry is typical of the exponential fib(n-1) + fib(n-2) pattern without caching.
- Since the memoized version doesn’t meaningfully appear in the tree, it indicates near-zero overhead after warm-up, evidence that memoization collapses the exponential recursion into linear-time lookups.
- Takeaway: for overlapping subproblems (like Fibonacci), memoization transforms runtime from O(φ^n) to O(n) and eliminates the recursion-dominated hotspots you see here.
In the PyCharm profiler’s Method List view, the naive Fibonacci routine makes its dominance clear: fib() consumes nearly all of the 603 ms execution time, while process_data() and the module’s entry point simply act as conduits:
The table shows fib() at 301 ms per branch, totaling the full runtime, which aligns with the exponential recursion tree where both subcalls expand almost equally.
The "Own Execution Time" column confirms that most of the cost is in the recursive body itself, not in overhead from process_data() or I/O.
No memoized calls appear here because the fast version runs so quickly it barely registers on the profiler. The pattern is a textbook example of performance profiling: identifying the hotspot and confirming that optimization should target the recursive core rather than the surrounding code.
The PyCharm profiler’s Call Graph view paints a clear picture: fib() dominates the execution time in bright red, clocking 603 ms over 4.3 million calls, while everything else barely registers.
The red nodes in the graph highlight performance hotspots, with fib() responsible for 100% of the total runtime. Both process_data() and the moduleâs main entry point route directly into this costly recursion, confirming thereâs no other significant time sink.
Green nodes, like fib_memo() and list append(), indicate negligible runtime impact, which aligns with the near-instant memoized execution path.
The visualization makes the optimization priority obvious: replace or cache the expensive fib() calls, and the program’s runtime collapses from hundreds of milliseconds to near zero.
In summary
Optimize when your code creates noticeable bottlenecks in critical processes, especially as workloads grow. Key triggers include rising cloud costs from inefficient resource usage, user-facing delays (even 100 ms can hurt engagement), and when moving from prototype to production.
Performance optimization delivers efficiency gains, enables scalability for growth, controls costs, provides competitive advantages, ensures reliability under pressure, and improves user satisfaction.
How to approach optimization
Follow a systematic approach. Profile first to identify real bottlenecks rather than guessing, leverage established libraries like NumPy and pandas, prioritize algorithmic improvements (better Big O complexity beats micro-optimizations), and always measure results to verify improvements.
CPU-bound vs I/O-bound operations
- CPU-bound tasks are limited by processor speed and include computations like mathematical calculations, image processing, and machine learning training. Python’s global interpreter lock (GIL) restricts true multithreading for these tasks.
- I/O-bound operations involve waiting for input/output like file reads, network requests, or database queries. The CPU remains mostly idle during these operations, making them less affected by the GIL.
General optimization principles
- Measure first: Use profiling tools (cProfile, timeit, line_profiler) to identify bottlenecks. Optimizing without data is guesswork.
- Choose the right tool: Match algorithms, data structures, and concurrency models to your task (CPU-bound vs. I/O-bound).
- Minimize overhead: Reduce Python’s interpretive and dynamic nature where it slows you down.
- Trade-offs: Balance speed, memory, and readability based on your needs.