
Planet Python

Last update: February 18, 2025 09:43 PM UTC

February 18, 2025


PyCoder’s Weekly

Issue #669: Joining Strings, MongoDB in Django, Mobile Wheels, and More (Feb. 18, 2025)

#669 – FEBRUARY 18, 2025
View in Browser »



How to Join Strings in Python

In this tutorial, you’ll learn how to use Python’s built-in .join() method to combine string elements from an iterable into a single string with a specified separator. You’ll also learn about common pitfalls, and how CPython makes .join() work efficiently.
REAL PYTHON

Creating the MongoDB Database Backend for Django

Django supports a number of relational databases, but to go NoSQL you need third-party tools. This is about to change: a backend for MongoDB is in development. This article covers the history of MongoDB and Django and how the new code is structured.
JIB ADEGUNLOYE

Postgres, Now with Built-in Warehousing


Why manage two databases when one does it all? Crunchy Data Warehouse keeps your transactional database running smoothly while adding warehouse features like querying object storage, BI tool connections, and more. Scale efficiently with the Postgres you trust, without the complexity →
CRUNCHY DATA sponsor

PyPI Now Supports iOS and Android Wheels

PyPI now supports iOS and Android wheels, making it easier for Python developers to distribute mobile packages.
SARAH GOODING • Shared by Sarah Gooding

Python Release 3.14.0a5

PYTHON.ORG

PyPy v7.3.18 Released

PYPY.ORG

PEP 765: Disallow Return/Break/Continue That Exit a Finally Block (Accepted)

PYTHON.ORG

Quiz: How to Join Strings in Python

REAL PYTHON

Quiz: Python for Loops: The Pythonic Way

REAL PYTHON

Python Jobs

Backend Software Engineer (Anywhere)

Brilliant.org

More Python Jobs >>>

Articles & Tutorials

Charlie Marsh: Accelerating Python Tooling With Ruff and uv

Are you looking for fast tools to lint your code and manage your projects? How is the Rust programming language being used to speed up Python tools? This week on the show, we speak with Charlie Marsh about his company, Astral, and their tools, uv and Ruff.
REAL PYTHON podcast

Managing Django’s Queue

Carlton is one of the core developers of Django. This post talks about staying on top of the incoming pull-requests, bug fixes, and everything else in the development queue.
CARLTON GIBSON

Unify Distributed Data from Edge-to-Cloud


Meet HiveMQ Pulse: Built to organize distributed data into a structured namespace for seamless access from edge-to-cloud. Gain insights from distributed devices and systems, with a single source of truth for your data. Get early access →
HIVEMQ sponsor

Shipping Software on Time and on Budget

This detailed post covers the things you can do to get better at delivering on time and on budget. It also includes a lot of good references.
CARLTON GIBSON

Great Tables

Talk Python To Me interviews Rich Iannone and Michael Chow from Posit. They discuss the transformative power of data tables with the Great Tables library.
KENNEDY, IANNONE, & CHOW podcast

pytest-mock: Mocking in pytest

pytest-mock is currently the #3 pytest plugin. It is a wrapper around unittest.mock. This episode covers what mocking is and how to do it well in pytest.
BRIAN OKKEN podcast
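As a rough illustration of what the plugin provides (the module and function names here are hypothetical), the mocker fixture wraps unittest.mock and undoes any patches automatically when the test ends:

import requests

def get_temperature(city):
    response = requests.get(f"https://api.example.com/weather/{city}")
    return response.json()["temp"]

def test_get_temperature(mocker):
    # mocker.patch wraps unittest.mock.patch and is undone after the test.
    fake_response = mocker.Mock()
    fake_response.json.return_value = {"temp": 21}
    mocker.patch("requests.get", return_value=fake_response)
    assert get_temperature("oslo") == 21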

Tail-Call Interpreter Added to CPython

New code for a tail-call interpreter has been added to the Python 3.14 alpha. It is an opt-in feature for now, but promises performance improvements.
PYTHON.ORG

Python Free-Threading Guide

This is a centralized collection of documentation and trackers around compatibility with free-threaded CPython for the Python open source ecosystem.
QUANSIGHT

re.Match.groupdict

This quick TIL post shows how you can use the .groupdict() method from a regex match to get a dictionary with all named groups.
RODRIGO GIRÃO SERRÃO
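A minimal illustration of the idea (the pattern and input string are made up for this example):

import re

match = re.search(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})",
                  "Released on 2025-02-18.")
if match:
    # All named groups end up as keys in a regular dictionary.
    print(match.groupdict())  # {'year': '2025', 'month': '02', 'day': '18'}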

The 10-Step Checklist for Continuous Delivery

Learn how to implement Continuous Delivery with this 10-step guide featuring actionable insights, examples, and best practices.
ANTHONY CAMPOLO

Exploring ICEYE’s Satellite Imagery

This article does a deep-dive data analysis of satellite imagery of an airport. It uses pandas, geopandas, PyTorch, and more.
MARK LITWINTSCHIK

Terminal Colours Are Tricky

Choosing just the right palette for your terminal can be tricky. This article talks about the why and how.
JULIA EVANS

Projects & Code

Validoopsie: Data Validation Made Effortless!

GITHUB.COM/AKMALSOLIEV • Shared by Akmal Soliev

tea-tasting: Statistical Analysis of A/B Tests

GITHUB.COM/E10V

pyquery: A jQuery-Like Library for Python

GITHUB.COM/GAWEL

arq: Fast Job Queuing and RPC With Asyncio and Redis

GITHUB.COM/PYTHON-ARQ

micropie: Ultra-Micro Python Web Framework

GITHUB.COM/PATX

Events

Weekly Real Python Office Hours Q&A (Virtual)

February 19, 2025
REALPYTHON.COM

Workshop: Creating Python Communities

February 20 to February 21, 2025
PYTHON-GM.ORG

PyData Bristol Meetup

February 20, 2025
MEETUP.COM

PyLadies Dublin

February 20, 2025
PYLADIES.COM

Django Girls Koforidua

February 21 to February 23, 2025
DJANGOGIRLS.ORG

Python Weekend Abuja

February 21, 2025
CODECAMPUS.COM.NG

DjangoCongress JP 2025

February 22 to February 23, 2025
DJANGOCONGRESS.JP

PyConf Hyderabad 2025

February 22 to February 24, 2025
PYCONFHYD.ORG

PyCon Namibia

February 24 to February 28, 2025
PYCON.ORG

PyCon APAC 2025

March 1 to March 3, 2025
PYTHON.PH


Happy Pythoning!
This was PyCoder’s Weekly Issue #669.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

February 18, 2025 07:30 PM UTC


Real Python

Concatenating Strings in Python Efficiently

Python string concatenation is a fundamental operation that combines multiple strings into a single string. In Python, you can concatenate strings using the + operator or append them with +=. For more efficient concatenation, especially when working with lists of strings, the .join() method is recommended. Other techniques include using StringIO for large datasets and the print() function for quick screen output.
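As a small sketch of the techniques mentioned above (the sample data is made up), each approach produces the same result, but .join() and StringIO scale better when many pieces are involved:

from io import StringIO

words = ["Python", "string", "concatenation"]

# The + operator and += work fine for a handful of strings.
sentence = words[0] + " " + words[1]
sentence += " " + words[2]

# str.join() is the usual choice for combining an iterable of strings.
joined = " ".join(words)

# StringIO can build up large amounts of text incrementally.
buffer = StringIO()
for i, word in enumerate(words):
    if i:
        buffer.write(" ")
    buffer.write(word)
built = buffer.getvalue()

print(sentence == joined == built)  # True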

By the end of this video course, you’ll understand that you can:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 18, 2025 02:00 PM UTC


PyCharm

Which Is the Best Python Web Framework: Django, Flask, or FastAPI?


Search for Python web frameworks, and three names will consistently come up: Django, Flask, and FastAPI. Our latest Python Developer Survey Results confirm that these three frameworks remain developers’ top choices for backend web development with Python.

All three frameworks are open-source and compatible with the latest versions of Python. 

But how do you determine which web framework is best for your project? Here, we’ll look at the pros and cons of each and compare how they stack up against one another.

Django

Django is a “batteries included”, full-stack web framework used by the likes of Instagram, Spotify, and Dropbox, to name but a few. Pitched as “the web framework for perfectionists with deadlines”, the Django framework was designed to make it easier and quicker to build robust web apps.

First made available as an open-source project in 2005, Django is a mature project that remains in active development 20 years later. It’s suitable for many web applications, including social media, e-commerce, news, and entertainment sites.

Django follows a model-view-template (MVT) architecture, where each component has a specific role. Models are responsible for handling the data and defining its structure. The views manage the business logic, processing requests and fetching the necessary data from the models. Finally, templates present this data to the end user – similar to views in a model-view-controller (MVC) architecture. 
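To make the MVT split concrete, here is a schematic sketch (the Article model, view, and template names are illustrative, and a real project also needs settings and URL configuration):

# models.py: the model defines the data and its structure.
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

# views.py: the view holds the business logic and fetches data from the model.
from django.shortcuts import render

def article_list(request):
    articles = Article.objects.order_by("-published")
    return render(request, "articles/list.html", {"articles": articles})

# templates/articles/list.html presents the data to the user, e.g.:
#   {% for article in articles %}<h2>{{ article.title }}</h2>{% endfor %}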

As a full-stack web framework, Django can be used to build an entire web app (from database to HTML and JavaScript frontend).

Alternatively, you can use the Django REST Framework to combine Django with a frontend framework (such as React) to build both mobile and browser-based apps.

Explore our comprehensive Django guide, featuring an overview of prerequisite knowledge, a structured learning path, and additional resources to help you master the framework. 

Django advantages

There are plenty of reasons why Django remains one of the most widely used Python web frameworks, including:

Django disadvantages

Despite its many advantages, there are a few reasons you might want to look at options other than Django when developing your next web app.

Flask

Flask is a Python-based micro-framework for backend web development. However, don’t let the term “micro” deceive you. As we’ll see, Flask isn’t limited to smaller web apps.

Instead, Flask is designed with a simple core based on Werkzeug WSGI (Web Server Gateway Interface) and Jinja2 templates. Well-known users of Flask include Netflix, Airbnb, and Reddit.

Flask was initially created as an April Fools’ Day joke and released as an open-source project in 2010, a few years after Django. The micro-framework’s approach is fundamentally different from Django’s. While Django takes a “batteries included” style and comes with a lot of the functionality you may need for building web apps, Flask is much leaner.

The philosophy behind the micro-framework is that everyone has their preferences, so developers should be free to choose their own components. For this reason, Flask doesn’t include a database, ORM (object-relational mapper), or ODM (object-document mapper). 

When you build a web app with Flask, very little is decided for you upfront. This can have significant benefits, as we’ll discuss below.
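A minimal Flask application looks roughly like this (the route and message are invented for illustration); everything else, from the database layer to form handling, is yours to choose and add:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/hello/<name>")
def hello(name):
    # Flask maps the URL rule to this view function.
    return jsonify(message=f"Hello, {name}!")

if __name__ == "__main__":
    app.run(debug=True)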

Flask advantages

We’ve seen usage of Flask grow steadily over the last five years through our State of the Developer Ecosystem survey – it overtook Django for the first time in 2021. 

Some reasons for choosing Flask as a backend web framework include:

Flask disadvantages

While Flask has a lot to offer, there are a few things to consider before you decide to use it in your next web development project.

FastAPI

As the name suggests, FastAPI is a micro-framework for building high-performance web APIs with Python. Despite being relatively new – it was first released as an open-source project in 2018 – FastAPI has quickly become popular among developers, ranking third in our list of the most popular Python web frameworks since 2021.

FastAPI is based on Uvicorn, an ASGI (Asynchronous Server Gateway Interface) server, and Starlette, a web micro-framework. FastAPI adds data validation, serialization, and documentation to streamline building web APIs.
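As a rough sketch of that workflow (the Item model and route are invented for this example), declaring a request body with type hints gives you validation, serialization, and interactive docs without extra code:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/")
async def create_item(item: Item):
    # The request body is parsed and validated against Item automatically,
    # and the endpoint appears in the generated OpenAPI docs at /docs.
    return {"name": item.name, "price_with_tax": item.price * 1.1}

You would run this with an ASGI server such as Uvicorn (for example, uvicorn main:app --reload, assuming the file is named main.py).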

When developing FastAPI, the micro-framework’s creator drew on the experiences of working with many different frameworks and tools. Whereas Django was developed before frontend JavaScript web frameworks (such as React or Vue.js) were prominent, FastAPI was designed with this context in mind. 

The emergence of OpenAPI (formerly Swagger) as a format for structuring and documenting APIs in the preceding years provided an industry standard that FastAPI could leverage.

Beyond the implicit use case of creating RESTful APIs, FastAPI is ideal for applications that require real-time responses, such as messaging platforms and dashboards. Its high performance and asynchronous capabilities make it a good fit for data-intensive apps, including machine learning models, data processing, and analytics.

FastAPI advantages

FastAPI first received its own category in our State of the Developer Ecosystem survey in 2021, with 14% of respondents using the micro-framework. 

Since then, usage has increased to 20%, alongside a slight dip in the use of Flask and Django. 

These are some of the reasons why developers are choosing FastAPI:

FastAPI disadvantages

Before deciding that FastAPI is the best framework for your next project, bear in mind the following:

Choosing between Flask, Django, and FastAPI

So, which is the best Python web framework? As with many programming things, the answer is “it depends”.

The right choice hinges on answering a few questions: What kind of app are you building? What are your priorities? How do you expect your project to grow in the future?

All three popular Python web frameworks come with unique strengths, so assessing them in the context of your application will help you make the best decision. 

Django is a great option if you need standard web app functionality out of the box, making it suitable for projects that require a more robust structure. It’s particularly advantageous if you’re using a relational database, as its ORM simplifies data management and provides built-in security features. However, the extensive functionality may feel overwhelming for smaller projects or simple applications.

Flask, on the other hand, offers greater flexibility. Its minimalist design enables developers to pick and choose the extensions and libraries they want, making it suitable for projects where you need to customize features. This approach works well for startups or MVPs, where your requirements might change and evolve rapidly. While Flask is easy to get started with, keep in mind that building more intricate applications will mean exploring various extensions.

FastAPI is a strong contender when speed is of the essence, especially for API-first or machine learning projects. It uses modern Python features like type hints to provide automatic data validation and documentation. FastAPI is an excellent choice for applications that need high performance, like microservices or data-driven APIs. Despite this, it may not be as feature-rich as Django or Flask in terms of built-in functionality, which means you might need to implement additional features manually. 

For a deeper comparison between Django and the different web frameworks, check out our other guides, including:

Python web framework overview

Design philosophy
  Django: Full-stack framework designed for web apps with relational databases.
  Flask: Lightweight backend micro-framework.
  FastAPI: Lightweight micro-framework for building web APIs.

Ease of use
  Django: “Batteries included” approach means everything you need is in the box, accelerating development. That said, the amount of functionality available can present a steep learning curve.
  Flask: As Flask is a micro-framework, there is less code to familiarize yourself with upfront. High levels of flexibility to choose your preferred libraries and extensions. However, having less functionality built in requires more external dependencies.
  FastAPI: Like Flask, less functionality is built in than with Django. Type hints and validation speed up development and reduce errors. Compatible with OpenAPI for automatic API reference docs.

Extensibility
  Django: Largest selection of compatible packages out of the three.
  Flask: Large number of compatible packages.
  FastAPI: Fewer compatible packages than Flask or Django.

Performance
  Django: Good, but not as fast as Flask or FastAPI.
  Flask: Slightly faster than Django but not as performant as FastAPI.
  FastAPI: Fastest of the three.

Scalability
  Django: Monolithic design can limit scalability. Support for async processing can improve performance under high load.
  Flask: Highly scalable thanks to a lightweight and modular design.
  FastAPI: Highly scalable thanks to a lightweight and modular design.

Security
  Django: Many cybersecurity defenses built in.
  Flask: Client-side cookies secured by default. Other security protections need to be added, and dependencies should be checked for vulnerabilities.
  FastAPI: Support for OAuth 2.0 out of the box. Other security protections need to be added, and dependencies should be checked for vulnerabilities.

Maturity
  Django: Open source since 2005 and receives regular updates.
  Flask: Open source since 2010 and receives regular updates.
  FastAPI: Open source since 2018 and receives regular updates.

Community
  Django: Large and active following.
  Flask: Active and likely to keep growing as Flask remains popular.
  FastAPI: Smaller following than Django or Flask.

Documentation
  Django: The most active and robust official documentation.
  Flask: Extensive official documentation.
  FastAPI: The least active official documentation, given its age.

Further reading

Start your web development project with PyCharm

Regardless of your primary framework, you can access all the essential web development tools in a single IDE. PyCharm provides built-in support for Django, FastAPI, and Flask, while also offering top-notch integration with frontend frameworks like React, Angular, and Vue.js.

February 18, 2025 10:00 AM UTC


Python Software Foundation

Where is the PSF? 2025 Edition

Where to Find the PSF Online

One of the main ways we reach people for news and information about the PSF and Python is on social media. There’s been a lot of uncertainty around X as well as some other platforms popping up, so we wanted to share a brief round-up of other places you can find us:

As always, if you are looking for technical support rather than news about the foundation, we have collected links and resources here for people who are new or looking to get deeper into the Python programming language: https://www.python.org/about/gettingstarted/

You can also ask questions about Python or the PSF on Python’s Discuss forum. The PSF category is the best place to reach us on the forum!

 

Where to Find PyCon US Online

Here’s where you can go for updates and information specific to PyCon US:

 

Where to Find PyPI Online

Here’s where you can go for updates and information specific to PyPI:

Thank you for keeping in touch, and see you around the Internet!

February 18, 2025 06:30 AM UTC

February 17, 2025


Real Python

Python News Roundup: February 2025

The new year has brought a flurry of activity to the Python community. New bugfix releases of Python 3.12 and 3.13 show that the developers seemingly never sleep. A new type of interpreter is slated for the upcoming Python 3.14 as part of ongoing efforts to improve Python’s performance.

Poetry takes a giant leap toward compatibility with other project management tools with the release of version 2. If you’re interested in challenging yourself with some programming puzzles, check out the new season of Coding Quest.

Time to jump in! Enjoy this tour of what’s happening in the world of Python!

Poetry Version 2 Adds Compatibility

Poetry is a trusted and powerful project and dependency manager for Python. Initially created by Sébastien Eustace in 2018, it reached its Version 1 milestone in 2019. Since then, it has grown to be one of the most commonly used tools for managing Python projects.

On January 5, 2025, the Poetry team announced the release of Poetry 2.0.0. This major release comes with many updates. One of the most requested changes is compatibility with PEP 621, which describes how to specify project metadata in pyproject.toml.

Most of the common tools for project management, including setuptools, uv, Hatch, Flit, and PDM, use pyproject.toml and the project table in a consistent way, as defined in PEP 621. With Poetry on board as well, you can more simply migrate your project from one tool to another.
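For reference, a minimal PEP 621-style project table looks something like this (the package name and dependencies are placeholders); this is the shape of metadata that Poetry 2 can now share with the other tools:

[project]
name = "example-package"            # placeholder name
version = "1.0.0"
description = "A minimal PEP 621 project table."
requires-python = ">=3.9"
dependencies = [
    "requests>=2.31",
]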

This improved compatibility with the rest of the Python ecosystem comes at a price. There are a few breaking changes in Poetry 2 compared to earlier versions. If you’re already using Poetry, you should take care when updating to the latest version.

The changelog describes all changes, and you can read the documentation for advice on how to migrate your existing projects to the new style of configuration.

The Python Team Releases Bugfix Versions for 3.12 and 3.13

Read the full article at https://realpython.com/python-news-february-2025/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 17, 2025 02:00 PM UTC


Python Bytes

#420 90% Done in 50% of the Available Time

Topics covered in this episode:

• PEP 772 – Packaging governance process
• Official Django MongoDB Backend Now Available in Public Preview
• Developer Philosophy
• Python 3.13.2 released
• Extras
• Joke

Watch on YouTube

About the show

Sponsored by us! Support our work through:

• Our courses at Talk Python Training
• The Complete pytest Course
• Patreon Supporters

Connect with the hosts

• Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
• Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
• Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We’ll never share it.

Brian #1: PEP 772 – Packaging governance process

• Draft, created 21-Jan, by Barry Warsaw, Deb Nicholson, and Pradyun Gedam.
• “As Python packaging has matured, several interrelated problems with the current way of managing the technical development, decision making and processes have become apparent.”
• “This PEP proposes a Python Packaging Council with broad authority over packaging standards, tools, and implementations. Like the Python Steering Council, the Packaging Council seeks to exercise this authority as rarely as possible; instead, they use this power to establish standard processes.”
• The PEP discusses PyPA, the Packaging-WG, interoperability standards, the Python Steering Council, and expectations of an elected Packaging Council.
• It includes a specification covering composition (5 people), mandate, responsibilities, delegations, process, terms, etc.

Michael #2: Official Django MongoDB Backend Now Available in Public Preview

• Over the last few years, Django developers have increasingly used MongoDB, presenting an opportunity for an official MongoDB-built Python package to make integrating both technologies as painless as possible.
• Features:
  • The ability to use Django models with confidence. Developers can use Django models to represent MongoDB documents, with support for Django forms, validations, and authentication.
  • Django admin support. The package allows users to fire up the Django admin page as they normally would, with full support for migrations and database schema history.
  • Native connecting from settings.py. Just as with any other database provider, developers can customize the database engine in settings.py to get MongoDB up and running.
  • MongoDB-specific querying optimizations. Field lookups have been replaced with aggregation calls (aggregation stages and aggregate operators), JOIN operations are represented through $lookup, and it’s possible to build indexes right from Python.
  • Limited advanced functionality. While still in development, the package already has support for time series, projections, and XOR operations.
  • Aggregation pipeline support. Raw querying allows aggregation pipeline operators. Since aggregation is a superset of what traditional MongoDB Query API methods provide, it gives developers more functionality.

Brian #3: Developer Philosophy

• By qntm.
• Intended as “advice for junior developers about personal dev philosophy”, but these are great tips for anyone to keep in mind.
• The items:
  • Avoid, at all costs, arriving at a scenario where the ground-up rewrite starts to look attractive. This is less about “don’t do rewrites” and more about noticing the warning signs ahead of time.
  • Aim to be 90% done in 50% of the available time. Great quote: “The first 90% of the job takes 90% of the time. The last 10% of the job takes the other 90% of the time.”
  • Automate good practices.
  • Think about pathological data. “Nobody cares about the golden path. Edge cases are our entire job.” Brian’s note: but also think about the happy path. Documenting and testing what you think of as the happy path is a testing start and helps others understand your idea of how things are supposed to work.
  • There’s usually a simpler way to write it.
  • Write code to be testable.
  • It is insufficient for code to be provably correct; it should be obviously, visibly, trivially correct. Brian’s note: even if it’s obviously, visibly, trivially correct, it will still break. So test it anyway.

Michael #4: Python 3.13.2 released

• Python 3.13’s second maintenance release; about 250 changes went into this update.
• Also Python 3.12.9, Python 3.12’s ninth maintenance release. Just 180 changes for 3.12, but it’s still worth upgrading.
• For us, it’s simply rebuilding our Docker base (i.e. --no-cache) with these lines:

  RUN curl -LsSf https://astral.sh/uv/install.sh | sh
  RUN --mount=type=cache,target=/root/.cache uv venv --python 3.13 /venv

Extras

Brian:

• Still thinking about pytest plugins a lot.
• The top pytest plugin list has been updated for February and is starting to include things without “pytest” in the name, like Hypothesis and Syrupy. Eventually I’ll have to add “looking at trove classifiers” as part of the search, but for now, let me know if your favorite is missing. The list also includes T&C podcast episode links if I’ve covered a plugin on the show; there are two so far.

Michael:

• There’s a new release of PyScript out. The highlight is new PyGame-CE support. Go play!
• PEP 2026 – Calendar versioning for Python: rejected. :(
• PEP 759 – External Wheel Hosting: withdrawn.

Joke: Pride Versioning

February 17, 2025 08:00 AM UTC


Quansight Labs Blog

Mastering DuckDB when you're used to pandas or Polars

It's not as scary as you think

February 17, 2025 12:00 AM UTC

February 14, 2025


Kay Hayen

Nuitka this week #16

Hey Nuitka users! This started out as the idea of a weekly update, but that hasn’t happened, so we are switching to writing up whenever something interesting happens and pushing it out relatively immediately.

Nuitka Onefile Gets More Flexible: --onefile-cache-mode and {PROGRAM_DIR}

We’ve got a couple of exciting updates to Nuitka’s onefile mode that give you more control and flexibility in how you deploy your applications. These enhancements stem from real-world needs and demonstrate Nuitka’s commitment to providing powerful and adaptable solutions.

Taking Control of Onefile Unpacking: --onefile-cache-mode

Onefile mode is fantastic for creating single-file executables, but the management of the unpacking directory where the application expands has sometimes been a bit… opaque. Previously, Nuitka would decide whether to clean up this directory based on whether the path used runtime-dependent variables. This made sense in theory, but in practice, it could lead to unexpected behavior and made debugging onefile issues harder.

Now, you have complete control! The new --onefile-cache-mode option lets you explicitly specify what happens to the unpacking directory:

This gives you the power to choose the behavior that best suits your needs. No more guessing!

Relative Paths with {PROGRAM_DIR}

Another common request, particularly from users deploying applications in more restricted environments, was the ability to specify the onefile unpacking directory relative to the executable itself. Previously, you were limited to absolute paths or paths relative to the user’s temporary directory space.

We’ve introduced a new variable, {PROGRAM_DIR}, that you can use in the --onefile-tempdir-spec option. This variable is dynamically replaced at runtime with the full path to the directory containing the onefile executable.

For example:

nuitka --onefile --onefile-tempdir-spec="{PROGRAM_DIR}/.myapp_data" my_program.py

This would create a directory named .myapp_data inside the same directory as my_program.exe (or my_program on Linux/macOS) and unpack the application there. This is perfect for creating truly self-contained applications where all data and temporary files reside alongside the executable.

Nuitka Commercial and Open Source

These features, like many enhancements to Nuitka, originated from a request by a Nuitka commercial customer. This highlights the close relationship between the commercial offerings and the open-source core. While commercial support helps drive development and ensures the long-term sustainability of Nuitka, the vast majority of features are made freely available to all users.

Give it a Try!

This change will be in 2.7 and is currently

We encourage you to try out these new features and let us know what you think! As always, bug reports, feature requests, and contributions are welcome on GitHub.

February 14, 2025 11:00 PM UTC


Django Weblog

DjangoCongress JP 2025 Announcement and Live Streaming!

DjangoCongress JP 2025, to be held on Saturday, February 22, 2025 at 10 am (Japan Standard Time), will be broadcast live!

It will be streamed on the following YouTube Live channels:

This year there will be talks not only about Django, but also about FastAPI and other asynchronous web topics. There will also be talks on Django core development, Django Software Foundation (DSF) governance, and other topics from around the world. Simultaneous translation will be provided in both English and Japanese.

Schedule

ROOM1
ROOM2

A public viewing of the event will also be held in Tokyo. A reception will also be held, so please check the following connpass page if you plan to attend.

Registration (connpass page): DjangoCongress JP 2025パブリックビューイング

February 14, 2025 10:12 PM UTC


Eli Bendersky

Decorator JITs - Python as a DSL

Spend enough time looking at Python programs and packages for machine learning, and you'll notice that the "JIT decorator" pattern is pretty popular. For example, this JAX snippet:

import jax.numpy as jnp
import jax

@jax.jit
def add(a, b):
  return jnp.add(a, b)

# Use "add" as a regular Python function
... = add(...)

Or the Triton language for writing GPU kernels directly in Python:

import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr,
               y_ptr,
               output_ptr,
               n_elements,
               BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)

In both cases, the function decorated with jit doesn't get executed by the Python interpreter in the normal sense. Instead, the code inside is more like a DSL (Domain Specific Language) processed by a special purpose compiler built into the library (JAX or Triton). Another way to think about it is that Python is used as a meta language to describe computations.

In this post I will describe some implementation strategies used by libraries to make this possible.

Preface - where we're going

The goal is to explain how different kinds of jit decorators work by using a simplified, educational example that implements several approaches from scratch. All the approaches featured in this post will be using this flow:

[Figure: flow of Python source --> Expr IR --> LLVM IR --> Execution]

These are the steps that happen when a Python function wrapped with our educational jit decorator is called:

  1. The function is translated to an "expression IR" - Expr.
  2. This expression IR is converted to LLVM IR.
  3. Finally, the LLVM IR is JIT-executed.

Steps (2) and (3) use llvmlite; I've written about llvmlite before, see this post and also the pykaleidoscope project. For an introduction to JIT compilation, be sure to read this and maybe also the series of posts starting here.

First, let's look at the Expr IR. Here we'll make a big simplification - only supporting functions that define a single expression, e.g.:

def expr2(a, b, c, d):
    return (a + d) * (10 - c) + b + d / c

Naturally, this can be easily generalized - after all, LLVM IR can be used to express fully general computations.

Here are the Expr data structures:

class Expr:
    pass

@dataclass
class ConstantExpr(Expr):
    value: float

@dataclass
class VarExpr(Expr):
    name: str
    arg_idx: int

class Op(Enum):
    ADD = "+"
    SUB = "-"
    MUL = "*"
    DIV = "/"

@dataclass
class BinOpExpr(Expr):
    left: Expr
    right: Expr
    op: Op

To convert an Expr into LLVM IR and JIT-execute it, we'll use this function:

def llvm_jit_evaluate(expr: Expr, *args: float) -> float:
    """Use LLVM JIT to evaluate the given expression with *args.

    expr is an instance of Expr. *args are the arguments to the expression, each
    a float. The arguments must match the arguments the expression expects.

    Returns the result of evaluating the expression.
    """
    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()
    llvm.initialize_native_asmparser()

    cg = _LLVMCodeGenerator()
    modref = llvm.parse_assembly(str(cg.codegen(expr, len(args))))

    target = llvm.Target.from_default_triple()
    target_machine = target.create_target_machine()
    with llvm.create_mcjit_compiler(modref, target_machine) as ee:
        ee.finalize_object()
        cfptr = ee.get_function_address("func")
        cfunc = CFUNCTYPE(c_double, *([c_double] * len(args)))(cfptr)
        return cfunc(*args)

It uses the _LLVMCodeGenerator class to actually generate LLVM IR from Expr. This process is straightforward and covered extensively in the resources I linked to earlier; take a look at the full code here.

My goal with this architecture is to make things simple, but not too simple. On one hand - there are several simplifications: only single expressions are supported, very limited set of operators, etc. It's very easy to extend this! On the other hand, we could have just trivially evaluated the Expr without resorting to LLVM IR; I do want to show a more complete compilation pipeline, though, to demonstrate that an arbitrary amount of complexity can be hidden behind these simple interfaces.

With these building blocks in hand, we can review the strategies used by jit decorators to convert Python functions into Exprs.

AST-based JIT

Python comes with powerful code reflection and introspection capabilities out of the box. Here's the astjit decorator:

def astjit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise ASTJITError("Keyword arguments are not supported")
        source = inspect.getsource(func)
        tree = ast.parse(source)

        emitter = _ExprCodeEmitter()
        emitter.visit(tree)
        return llvm_jit_evaluate(emitter.return_expr, *args)

    return wrapper

This is a standard Python decorator. It takes a function and returns another function that will be used in its place (functools.wraps ensures that function attributes like the name and docstring of the wrapper match the wrapped function).

Here's how it's used:

from astjit import astjit

@astjit
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

print(some_expr(2, 16, 3))

After astjit is applied to some_expr, what some_expr holds is the wrapper. When some_expr(2, 16, 3) is called, the wrapper is invoked with *args = [2, 16, 3].

The wrapper obtains the AST of the wrapped function, and then uses _ExprCodeEmitter to convert this AST into an Expr:

class _ExprCodeEmitter(ast.NodeVisitor):
    def __init__(self):
        self.args = []
        self.return_expr = None
        self.op_map = {
            ast.Add: Op.ADD,
            ast.Sub: Op.SUB,
            ast.Mult: Op.MUL,
            ast.Div: Op.DIV,
        }

    def visit_FunctionDef(self, node):
        self.args = [arg.arg for arg in node.args.args]
        if len(node.body) != 1 or not isinstance(node.body[0], ast.Return):
            raise ASTJITError("Function must consist of a single return statement")
        self.visit(node.body[0])

    def visit_Return(self, node):
        self.return_expr = self.visit(node.value)

    def visit_Name(self, node):
        try:
            idx = self.args.index(node.id)
        except ValueError:
            raise ASTJITError(f"Unknown variable {node.id}")
        return VarExpr(node.id, idx)

    def visit_Constant(self, node):
        return ConstantExpr(node.value)

    def visit_BinOp(self, node):
        left = self.visit(node.left)
        right = self.visit(node.right)
        try:
            op = self.op_map[type(node.op)]
            return BinOpExpr(left, right, op)
        except KeyError:
            raise ASTJITError(f"Unsupported operator {node.op}")

When _ExprCodeEmitter finishes visiting the AST it's given, its return_expr field will contain the Expr representing the function's return value. The wrapper then invokes llvm_jit_evaluate with this Expr.

Note how our decorator interjects into the regular Python execution process. When some_expr is called, instead of the standard Python compilation and execution process (code is compiled into bytecode, which is then executed by the VM), we translate its code to our own representation and emit LLVM from it, and then JIT execute the LLVM IR. While it seems kinda pointless in this artificial example, in reality this means we can execute the function's code in any way we like.

AST JIT case study: Triton

This approach is almost exactly how the Triton language works. The body of a function decorated with @triton.jit gets parsed to a Python AST, which then - through a series of internal IRs - ends up in LLVM IR; this in turn is lowered to PTX by the NVPTX LLVM backend. Then, the code runs on a GPU using a standard CUDA pipeline.

Naturally, the subset of Python that can be compiled down to a GPU is limited; but it's sufficient to run performant kernels, in a language that's much friendlier than CUDA and - more importantly - lives in the same file with the "host" part written in regular Python. For example, if you want testing and debugging, you can run Triton in "interpreter mode" which will just run the same kernels locally on a CPU.

Note that Triton lets us import names from the triton.language package and use them inside kernels; these serve as the intrinsics for the language - special calls the compiler handles directly.

Bytecode-based JIT

Python is a fairly complicated language with a lot of features. Therefore, if our JIT has to support some large portion of Python semantics, it may make sense to leverage more of Python's own compiler. Concretely, we can have it compile the wrapped function all the way to bytecode, and start our translation from there.

Here's the bytecodejit decorator that does just this [1]:

def bytecodejit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise BytecodeJITError("Keyword arguments are not supported")

        expr = _emit_exprcode(func)
        return llvm_jit_evaluate(expr, *args)

    return wrapper


def _emit_exprcode(func):
    bc = func.__code__
    stack = []
    for inst in dis.get_instructions(func):
        match inst.opname:
            case "LOAD_FAST":
                idx = inst.arg
                stack.append(VarExpr(bc.co_varnames[idx], idx))
            case "LOAD_CONST":
                stack.append(ConstantExpr(inst.argval))
            case "BINARY_OP":
                right = stack.pop()
                left = stack.pop()
                match inst.argrepr:
                    case "+":
                        stack.append(BinOpExpr(left, right, Op.ADD))
                    case "-":
                        stack.append(BinOpExpr(left, right, Op.SUB))
                    case "*":
                        stack.append(BinOpExpr(left, right, Op.MUL))
                    case "/":
                        stack.append(BinOpExpr(left, right, Op.DIV))
                    case _:
                        raise BytecodeJITError(f"Unsupported operator {inst.argval}")
            case "RETURN_VALUE":
                if len(stack) != 1:
                    raise BytecodeJITError("Invalid stack state")
                return stack.pop()
            case "RESUME" | "CACHE":
                # Skip nops
                pass
            case _:
                raise BytecodeJITError(f"Unsupported opcode {inst.opname}")

The Python VM is a stack machine; so we emulate a stack to convert the function's bytecode to Expr IR (a bit like an RPN evaluator). As before, we then use our llvm_jit_evaluate utility function to lower Expr to LLVM IR and JIT execute it.

Using this JIT is as simple as the previous one - just swap astjit for bytecodejit:

from bytecodejit import bytecodejit

@bytecodejit
def some_expr(a, b, c):
    return b / (a + 2) - c * (b - a)

print(some_expr(2, 16, 3))

Bytecode JIT case study: Numba

Numba is a compiler for Python itself. The idea is that you can speed up specific functions in your code by slapping a numba.njit decorator on them. What happens next is similar in spirit to our simple bytecodejit, but of course much more complicated because it supports a very large portion of Python semantics.

Numba uses the Python compiler to emit bytecode, just as we did; it then converts it into its own IR, and then to LLVM using llvmlite [2].

By starting with the bytecode, Numba makes its life easier (no need to rewrite the entire Python compiler). On the other hand, it also makes some analyses harder, because by the time we're in bytecode, a lot of semantic information existing in higher-level representations is lost. For example, Numba has to sweat a bit to recover control flow information from the bytecode (by running it through a special interpreter first).

Tracing-based JIT

The two approaches we've seen so far are similar in many ways - both rely on Python's introspection capabilities to compile the source code of the JIT-ed function to some extent (one to AST, the other all the way to bytecode), and then work on this lowered representation.

The tracing strategy is very different. It doesn't analyze the source code of the wrapped function at all - instead, it traces its execution by means of specially-boxed arguments, leveraging overloaded operators and functions, and then works on the generated trace.

The code implementing this for our simple demo is surprisingly compact:

def tracejit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs:
            raise TraceJITError("Keyword arguments are not supported")

        argspec = inspect.getfullargspec(func)

        argboxes = []
        for i, arg in enumerate(args):
            if i >= len(argspec.args):
                raise TraceJITError("Too many arguments")
            argboxes.append(_Box(VarExpr(argspec.args[i], i)))

        out_box = func(*argboxes)
        return llvm_jit_evaluate(out_box.expr, *args)

    return wrapper

Each runtime argument of the wrapped function is assigned a VarExpr, and that is placed in a _Box, a placeholder class which lets us do operator overloading:

@dataclass
class _Box:
    expr: Expr

_Box.__add__ = _Box.__radd__ = _register_binary_op(Op.ADD)
_Box.__sub__ = _register_binary_op(Op.SUB)
_Box.__rsub__ = _register_binary_op(Op.SUB, reverse=True)
_Box.__mul__ = _Box.__rmul__ = _register_binary_op(Op.MUL)
_Box.__truediv__ = _register_binary_op(Op.DIV)
_Box.__rtruediv__ = _register_binary_op(Op.DIV, reverse=True)

The remaining key function is _register_binary_op:

def _register_binary_op(opcode, reverse=False):
    """Registers a binary opcode for Boxes.

    If reverse is True, the operation is registered as arg2 <op> arg1,
    instead of arg1 <op> arg2.
    """

    def _op(arg1, arg2):
        if reverse:
            arg1, arg2 = arg2, arg1
        box1 = arg1 if isinstance(arg1, _Box) else _Box(ConstantExpr(arg1))
        box2 = arg2 if isinstance(arg2, _Box) else _Box(ConstantExpr(arg2))
        return _Box(BinOpExpr(box1.expr, box2.expr, opcode))

    return _op

To understand how this works, consider this trivial example:

@tracejit
def add(a, b):
    return a + b

print(add(1, 2))

After the decorated function is defined, add holds the wrapper function defined inside tracejit. When add(1, 2) is called, the wrapper runs:

  1. For each argument of add itself (that is a and b), it creates a new _Box holding a VarExpr. This denotes a named variable in the Expr IR.
  2. It then calls the wrapped function, passing it the boxes as runtime parameters.
  3. When (the wrapped) add runs, it invokes a + b. This is caught by the overloaded __add__ operator of _Box, and it creates a new BinOpExpr with the VarExprs representing a and b as children. This BinOpExpr is then returned [3].
  4. The wrapper unboxes the returned Expr and passes it to llvm_jit_evaluate to emit LLVM IR from it and JIT execute it with the actual runtime arguments of the call: 1, 2.

This might be a little mind-bending at first, because there are two different executions that happen:

  • The first is calling the wrapped add function itself, letting the Python interpreter run it as usual, but with special arguments that build up the IR instead of doing any computations. This is the tracing step.
  • The second is lowering this IR our tracing step built into LLVM IR and then JIT executing it with the actual runtime argument values 1, 2; this is the execution step.

This tracing approach has some interesting characteristics. Since we don't have to analyze the source of the wrapped functions but only trace through the execution, we can "magically" support a much richer set of programs, e.g.:

@tracejit
def use_locals(a, b, c):
    x = a + 2
    y = b - a
    z = c * x
    return y / x - z

print(use_locals(2, 8, 11))

This just works with our basic tracejit. Since Python variables are placeholders (references) for values, our tracing step is oblivious to them - it follows the flow of values. Another example:

@tracejit
def use_loop(a, b, c):
    result = 0
    for i in range(1, 11):
        result += i
    return result + b * c

print(use_loop(10, 2, 3))

This also just works! The created Expr will be a long chain of BinOpExpr additions of i's runtime values through the loop, added to the BinOpExpr for b * c.

This last example also leads us to a limitation of the tracing approach; the loop cannot be data-dependent - it cannot depend on the function's arguments, because the tracing step has no concept of runtime values and wouldn't know how many iterations to run through; or at least, it doesn't know this unless we want to perform the tracing run for every runtime execution [4].

The tracing approach is useful in several domains, most notably automatic differentiation (AD). For a slightly deeper taste, check out my radgrad project.

Tracing JIT case study: JAX

The JAX ML framework uses a tracing approach very similar to the one described here. The first code sample in this post shows the JAX notation. JAX cleverly wraps Numpy with its own version which is traced (similar to our _Box, but JAX calls these boxes "tracers"), letting you write regular-feeling Numpy code that can be JIT optimized and executed on accelerators like GPUs and TPUs via XLA. JAX's tracer builds up an underlying IR (called jaxpr) which can then be emitted to XLA ops and passed to XLA for further lowering and execution.

For a fairly deep overview of how JAX works, I recommend reading the autodidax doc.

As mentioned earlier, JAX has some limitations with things like data-dependent control flow in native Python. This won't work, because there's control flow that depends on a runtime value (count):

import jax

@jax.jit
def sum_datadep(a, b, count):
    total = a
    for i in range(count):
        total += b
    return total

print(sum_datadep(10, 3, 3))

When sum_datadep is executed, JAX will throw an exception, saying something like:

This concrete value was not available in Python because it depends on the value of the argument count.

As a remedy, JAX has its own built-in intrinsics from the jax.lax package. Here's the example rewritten in a way that actually works:

import jax
from jax import lax

@jax.jit
def sum_datadep_fori(a, b, count):
    def body(i, total):
        return total + b

    return lax.fori_loop(0, count, body, a)

fori_loop (and many other built-ins in the lax package) is something JAX can trace through, generating a corresponding XLA operation (XLA has support for While loops, to which this lax.fori_loop can be lowered).

The tracing approach has clear benefits for JAX as well; because it only cares about the flow of values, it can handle arbitrarily complicated Python code, as long as the flow of values can be traced. Just like the local variables and data-independent loops shown earlier, but also things like closures. This makes meta-programming and templating easy [5].

Code

The full code for this post is available on GitHub.


[1]Once again, this is a very simplified example. A more realistic translator would have to support many, many more Python bytecode instructions.
[2]In fact, llvmlite itself is a Numba sub-project and is maintained by the Numba team, for which I'm grateful!
[3]For a fun exercise, try adding constant folding to the wrapped _op: when both its arguments are constants (not boxes), instead of placing each in a _Box(ConstantExpr(...)), it could perform the mathematical operation on them and return a single constant box. This is a common optimization in compilers!
[4]

In all the JIT approaches shown in this post, the expectation is that compilation happens once, but the compiled function can be executed many times (perhaps in a loop). This means that the compilation step cannot depend on the runtime values of the function's arguments, because it has no access to them. You could say that it does, but that's just for the very first time the function is run (in the tracing approach); it has no way of knowing their values the next times the function will run.

JAX has some provisions for cases where a function is invoked with a small set of runtime values and we want to separately JIT each of them.

[5]A reader pointed out that TensorFlow's AutoGraph feature combines the AST and tracing approaches. TF's eager mode performs tracing, but it also uses AST analyses to rewrite Python loops and conditions into builtins like tf.cond and tf.while_loop.

February 14, 2025 09:49 PM UTC


Hugo van Kemenade

Improving licence metadata

What?

PEP 639 defines a spec on how to document licences used in Python projects.

Instead of using a Trove classifier such as “License :: OSI Approved :: BSD License”, which is imprecise (for example, which BSD licence?), the SPDX licence expression syntax is used.

How?

pyproject.toml

Change pyproject.toml as follows.

I usually use Hatchling as a build backend, and support was added in 1.27:

 [build-system]
 build-backend = "hatchling.build"
 requires = [
 "hatch-vcs",
- "hatchling",
+ "hatchling>=1.27",
 ]

Replace the freeform license field with a valid SPDX license expression, and add license-files which points to the licence files in the repo. There’s often only one, but if you have more than one, list them all:

 [project]
 ...
-license = { text = "MIT" }
+license = "MIT"
+license-files = [ "LICENSE" ]

Optionally delete the deprecated licence classifier:

 classifiers = [
 "Development Status :: 5 - Production/Stable",
 "Intended Audience :: Developers",
- "License :: OSI Approved :: MIT License",
 "Operating System :: OS Independent",

For example, see humanize#236 and prettytable#350.

Upload

Then make sure to use a PyPI uploader that supports this.

I recommend using Trusted Publishing which I use with pypa/gh-action-pypi-publish to deploy from GitHub Actions. I didn’t need to make any changes here, just make a release as usual.

Result

PyPI

PyPI shows the new metadata:

Screenshot of PyPI showing licence expression: BSD-3-Clause

pip

pip can also show you the metadata:

❯ pip install prettytable==3.13.0
❯ pip show prettytable
Name: prettytable
Version: 3.13.0
...
License-Expression: BSD-3-Clause
Location: /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages
Requires: wcwidth
Required-by: norwegianblue, pypistats

Thank you!

A lot of work went into this. Thank you to PEP authors Philippe Ombredanne for creating the first draft in 2019, to C.A.M. Gerlach for the second draft in 2021, and especially to Karolina Surma for getting the third draft over the finish line and helping with the implementation.

And many projects were updated to support this, thanks to the maintainers and contributors of at least:


Header photo: Amelia Earhart’s 1932 pilot licence in the San Diego Air and Space Museum Archive, with no known copyright restrictions.

February 14, 2025 03:11 PM UTC


Real Python

The Real Python Podcast – Episode #239: Behavior-Driven vs Test-Driven Development & Using Regex in Python

What is behavior-driven development, and how does it work alongside test-driven development? How do you communicate requirements between teams in an organization? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 14, 2025 12:00 PM UTC


Daniel Roy Greenfeld

Building a playing card deck

Today is Valentine's Day. That makes it the perfect day for a blog post showing how to not just build a deck of cards, but also show off cards from the hearts suit.
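A quick sketch of the idea (not the code from the linked post): build a 52-card deck and pull out the hearts.

from dataclasses import dataclass

SUITS = ["clubs", "diamonds", "hearts", "spades"]
RANKS = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]

@dataclass(frozen=True)
class Card:
    rank: str
    suit: str

deck = [Card(rank, suit) for suit in SUITS for rank in RANKS]
hearts = [card for card in deck if card.suit == "hearts"]
print(len(deck), len(hearts))  # 52 13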

February 14, 2025 09:50 AM UTC

February 13, 2025


Bojan Mihelac

Prefixed Parameters for Django querystring tag

An overview of Django 5.1's new querystring tag and how to add support for prefixed parameters.

February 13, 2025 09:37 PM UTC


Peter Bengtsson

get in JavaScript is the same as property in Python

Prefix a method in a JavaScript object or class with `get` and it can be read like a plain property, without call brackets. Just like Python's `property` decorator.
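On the Python side of that comparison, property turns a method into attribute-style access (a small illustration, not code from the linked post):

class Person:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    @property
    def full_name(self):
        # Read like a plain attribute, no call brackets needed.
        return f"{self.first} {self.last}"

p = Person("Ada", "Lovelace")
print(p.full_name)  # Ada Lovelace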

February 13, 2025 12:41 PM UTC


EuroPython

EuroPython February 2025 Newsletter

Hey ya 👋 

Hope you're all having a fantastic February. We sure have been busy and got some exciting updates for you as we gear up for EuroPython 2025, which is taking place once again in the beautiful city of Prague. So let's dive right in!

🗃️ Community Voting on Talks & Workshops

EuroPython 2025 is right around the corner and our programme team is hard at work putting together an amazing lineup. But we need your help to shape the conference! We received over 572 fantastic proposals, and now it's time for Community Voting! 🎉 If you've attended EuroPython before or submitted a proposal this year, you're eligible to vote.

📢 More votes = a stronger, more diverse programme! Spread the word and get your EuroPython friends to cast their votes too.

🏃The deadline is Monday next week, so don’t miss your chance!

🗳️ Vote now: https://ep2025.europython.eu/programme/voting

🧐Call for Reviewers

Want to play a key role in building an incredible conference? Join our review team and help select the best talks for EuroPython 2025! Whether you're a Python expert or an enthusiastic community member, your insights matter.

We’d like to also thank the over 100 people who have already signed up to review! For those who haven’t done so yet, please remember to accept your Pretalx link and get your reviews in by Monday 17th February.

You can already start reviewing proposals, and each review takes as little as 5 minutes. We encourage reviewers to go through at least 20-30 proposals, but if you can do more, even better! With almost 600 submissions to pick from, your help ensures we curate a diverse and engaging programme.

If you're passionate about Python and want to contribute, we'd love to have you. Sign up here: forms.gle/4GTJjwZ1nHBGetM18.

🏃The deadline is Monday next week, so don’t delay!

Got questions? Reach out to us at programme@europython.eu

📣 Community Outreach

EuroPython isn't just present at other Python events; we actively support them too! As a community sponsor, we love helping local PyCons grow and thrive, giving back to the community and strengthening Python events across Europe! 🐍💙

PyCon + Web in Berlin
The EuroPython team had a fantastic time at PyCon + Web in Berlin, meeting fellow Pythonistas, exchanging ideas, and spreading the word about EuroPython 2025. It was great to connect with speakers, organizers, and attendees. 

Ever wondered how long it takes to walk from Berlin to Prague? A huge thank you to our co-organizers, Cheuk, Artur, and Cristián, for answering that in their fantastic lightning talk about EuroPython!


FOSDEM 2025
We had some members of the EuroPython team at FOSDEM 2025, connecting with the open-source community and spreading the Python love! 🎉 We enjoyed meeting fellow enthusiasts, sharing insights about the EuroPython Society, and giving away the first EuroPython 2025 stickers. If you stopped by—thank you and we hope to see you in Prague this July.


🦒 Speaker Mentorship Programme

The signups for The Speaker Mentorship Programme closed on 22nd January 2025. We’re excited to have matched 43 mentees with 24 mentors from our community. We had an increase in the number of mentees who signed up and that’s amazing! We’re glad to be contributing to the journey of new speakers in the Python community. A massive thank you to our mentors for supporting the mentees and to our mentees; we’re proud of you for taking this step in your journey as a speaker. 

26 mentees submitted at least 1 proposal. Out of this number, 13 mentees submitted 1 proposal, 9 mentees submitted 2 proposals, 2 mentees submitted 3 proposals, 1 mentee submitted 4 proposals and lastly, 1 mentee submitted 5 proposals. We wish our mentees the best of luck. We look forward to the acceptance of their proposals.

In a few weeks, we will host an online panel session with 2–3 experienced community members who will share their advice with first-time speakers. At the end of the panel, there will be a Q&A session to answer all the participants’ questions.

You can watch the recording of the previous year’s workshop here:

💰Sponsorship

EuroPython is one of the largest Python conferences in Europe, and it wouldn’t be possible without our sponsors. We are so grateful for the companies who have already expressed interest. If you’re interested in sponsoring EuroPython 2025 as well, please reach out to us at sponsoring@europython.eu.

🎤 EuroPython Speakers Share Their Experiences

We asked our past speakers to share their experiences speaking at EuroPython. These videos have been published on YouTube as shorts, and we've compiled them into brief clips for you to watch.

A big thanks goes to Sebastian Witowski, Jan Smitka, Yuliia Barabash, Jodie Burchell, Max Kahan, and Cheuk Ting Ho for sharing their experiences.

Why You Should Submit a Proposal for EuroPython? Part 2

Why You Should Submit a Proposal for EuroPython? Part 3

📊 EuroPython Society Board Report 

The EuroPython conference wouldn’t be what it is without the incredible volunteers who make it all happen. 💞 Behind the scenes, there’s also the EuroPython Society—a volunteer-led non-profit that manages the fiscal and legal aspects of running the conference, oversees its organization, and works on a few smaller projects like the grants programme. To keep everyone in the loop and promote transparency, the Board is sharing regular updates on what we’re working on.

The January board report is ready: https://europython-society.org/board-report-for-january-2025/

🐍 Upcoming Events in the Python Community

That's all for now! Keep an eye on your inbox and our website for more news and announcements. We're counting down the days until we can come together in Prague to celebrate our shared love for Python. 🐍❤️

Cheers,
The EuroPython Team

February 13, 2025 08:36 AM UTC

February 12, 2025


Kay Hayen

Nuitka Release 2.6

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.

This release has all-around improvements, with a lot of effort spent on bug fixes in the memory leak domain, and preparatory actions for scalability improvements.

Bug Fixes

Package Support

New Features

Optimization

Anti-Bloat

Organizational

Tests

Cleanups

Summary

This is a major release that consolidates Nuitka big time.

The scalability work has progressed. Even if there are no immediately visible effects yet, the next releases will have them, as this is the main area of improvement these days.

The memory leaks found were very important and very old. This is the first time that asyncio should be working perfectly with Nuitka; it was usable before, but compatibility is now much higher.

Also, this release produces much nicer help output and better handling of plugin help: during --help, you no longer need tricks to see the options of a plugin that is not (yet) enabled. The user interface is hopefully cleaner as a result.

February 12, 2025 11:00 PM UTC


Giampaolo Rodola

psutil: drop Python 2.7 support

About dropping Python 2.7 support in psutil, 3 years ago I stated:

Not a chance, for many years to come. [Python 2.7] currently represents 7-10% of total downloads, meaning around 70k / 100k downloads per day.

Only 3 years later, and to my surprise, downloads for Python 2.7 dropped to 0.36%! As such, as of psutil 7.0.0, I finally decided to drop support for Python 2.7!

The numbers

These are downloads per month:

$ pypinfo --percent psutil pyversion
Served from cache: False
Data processed: 4.65 GiB
Data billed: 4.65 GiB
Estimated cost: $0.03

| python_version | percent | download_count |
| -------------- | ------- | -------------- |
| 3.10           |  23.84% |     26,354,506 |
| 3.8            |  18.87% |     20,862,015 |
| 3.7            |  17.38% |     19,217,960 |
| 3.9            |  17.00% |     18,798,843 |
| 3.11           |  13.63% |     15,066,706 |
| 3.12           |   7.01% |      7,754,751 |
| 3.13           |   1.15% |      1,267,008 |
| 3.6            |   0.73% |        803,189 |
| 2.7            |   0.36% |        402,111 |
| 3.5            |   0.03% |         28,656 |
| Total          |         |    110,555,745 |

According to pypistats.org, Python 2.7 downloads represent 0.28% of the total, around 15,000 downloads per day.

The pain

Maintaining 2.7 support in psutil had become increasingly difficult, but it was still possible. For example, I could still run tests by using old PYPI backports, and GitHub Actions could still be tweaked to run tests and produce 2.7 wheels on Linux and macOS. Not on Windows though, for which I had to use a separate service (Appveyor). Still, the amount of hacks in the psutil source code necessary to support Python 2.7 piled up over the years and became quite big. Some disadvantages that come to mind:

psutil-6.1.1-cp27-cp27m-macosx_10_9_x86_64.whl
psutil-6.1.1-cp27-none-win32.whl
psutil-6.1.1-cp27-none-win_amd64.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27m-manylinux2010_x86_64.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_i686.whl
psutil-6.1.1-cp27-cp27mu-manylinux2010_x86_64.whl

The removal

The removal was done in PR-2841, which removed around 1500 lines of code (nice!). It felt liberating. In the docs I still made the promise that the 6.1.* series will keep supporting Python 2.7 and will receive critical bug fixes only (no new features), maintained in a dedicated python2 branch. I explicitly kept the setup.py script compatible with Python 2.7 in terms of syntax, so that, when the tarball is fetched from PYPI, pip install psutil emits an informative error message. A user trying to install psutil on Python 2.7 will see:

$ pip2 install psutil
As of version 7.0.0 psutil no longer supports Python 2.7.
Latest version supporting Python 2.7 is psutil 6.1.X.
Install it with: "pip2 install psutil==6.1.*".
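A guard along these lines near the top of setup.py is enough to produce such a message, as long as the file stays Python 2 syntax-compatible (a minimal sketch, not the actual psutil code):

import sys

if sys.version_info[0] == 2:
    sys.exit(
        "As of version 7.0.0 psutil no longer supports Python 2.7.\n"
        "Latest version supporting Python 2.7 is psutil 6.1.X.\n"
        'Install it with: "pip2 install psutil==6.1.*".'
    )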

As the informative message states, users that are still on Python 2.7 can still use psutil with:

pip2 install psutil==6.1.*

Related tickets

February 12, 2025 11:00 PM UTC


EuroPython Society

Board Report for January 2025

The top priority for the board in January was finishing the hiring of our event manager. We’re super excited to introduce Anežka Müller! Anežka is a freelance event manager and a longtime member of the Czech Python community. She’s a member of the Pyvec board, co-organizes PyLadies courses, PyCon CZ, Brno Pyvo, and Brno Python Pizza. She’ll be working closely with the board and OPS team, mainly managing communication with service providers. Welcome onboard!

Our second priority was onboarding teams. We’re happy that we already have the Programme team in place—they started early and launched the Call for Proposals at the beginning of January. We’ve onboarded a few more teams and are in the process of bringing in the rest.

Our third priority was improving our grant programme in order to support more events with our limited budget and to make it clearer and more transparent. We went through past data, came up with a new proposal, discussed it, voted on it, and have already published it on our blog.

Individual reports:

Artur

Mia

Cyril

Aris

Ege

Shekhar

Anders

February 12, 2025 03:08 PM UTC


Python Morsels

Avoid over-commenting in Python

When do you need a comment in Python and when should you consider an alternative to commenting?

Table of contents

  1. Documenting instead of commenting
  2. Non-obvious variables and values
  3. Unnamed code blocks
  4. Missing variables due to embedded operations
  5. Indexes instead of variables
  6. Comment the "why" more than the "what"
  7. Consider whether there's a better alternative to your comments

Documenting instead of commenting

Here is a comment I would not write in my code:

def first_or_none(iterable):
    # Return the first item in given iterable (or None if empty).
    for item in iterable:
        return item
    return None

That comment seems to describe what this code does... so why would I not write it?

I do like that comment, but I would prefer to write it as a docstring instead:

def first_or_none(iterable):
    """Return the first item in given iterable (or None if empty)."""
    for item in iterable:
        return item
    return None

Documentation strings are for conveying the purpose of a function, class, or module, typically at a high level. Unlike comments, they can be read by Python's built-in help function:

>>> help(first_or_none)
Help on function first_or_none in module __main__:

first_or_none(iterable)
    Return the first item in given iterable (or None if empty).

Docstrings are also read by other documentation-oriented tools, like Sphinx.

Non-obvious variables and values

Here's a potentially helpful comment:

Read the full article: https://www.pythonmorsels.com/avoid-comments/

February 12, 2025 03:05 PM UTC


Real Python

Python Keywords: An Introduction

Python keywords are reserved words with specific functions and restrictions in the language. Currently, Python has thirty-five keywords and four soft keywords. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.

By the end of this tutorial, you’ll understand that:

  • There are 35 keywords and four soft keywords in Python.
  • You can get a list of all keywords using keyword.kwlist from the keyword module (see the snippet after this list).
  • Soft keywords in Python act as keywords only in specific contexts.
  • print and exec were keywords in Python 2 but have been turned into built-in functions in Python 3.
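For example, on Python 3.12 the keyword module reports both lists directly:

>>> import keyword
>>> len(keyword.kwlist)
35
>>> keyword.softkwlist
['_', 'case', 'match', 'type']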

In this article, you’ll find a basic introduction to all Python keywords and soft keywords along with other resources that will be helpful for learning more about each keyword.

Get Your Cheat Sheet: Click here to download a free cheat sheet that summarizes the main keywords in Python.

Take the Quiz: Test your knowledge with our interactive “Python Keywords: An Introduction” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

Python Keywords: An Introduction

In this quiz, you'll test your understanding of Python keywords and soft keywords. These reserved words have specific functions and restrictions in Python, and understanding how to use them correctly is fundamental for building Python programs.

Python Keywords

Python keywords are special reserved words that have specific meanings and purposes and can’t be used for anything but those specific purposes. These keywords are always available—you’ll never have to import them into your code.

Python keywords are different from Python’s built-in functions and types. The built-in functions and types are also always available, but they aren’t as restrictive as the keywords in their usage.

An example of something you can’t do with Python keywords is assign something to them. If you try, then you’ll get a SyntaxError. You won’t get a SyntaxError if you try to assign something to a built-in function or type, but it still isn’t a good idea. For a more in-depth explanation of ways keywords can be misused, check out Invalid Syntax in Python: Common Reasons for SyntaxError.
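For instance (the exact error wording varies slightly between Python versions):

>>> True = 0
  File "<stdin>", line 1
SyntaxError: cannot assign to True
>>> list = [1, 2, 3]  # no error, but it now shadows the built-in list type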

There are thirty-five keywords in Python. Here’s a list of them, each linked to its relevant section in this tutorial:

Two keywords have additional uses beyond their initial use cases. The else keyword is used not only in conditional statements but also with loops and with try and except. The as keyword is most commonly used in import statements, but it is also used with the with keyword.

The list of Python keywords and soft keywords has changed over time. For example, the await and async keywords weren’t added until Python 3.7. Also, both print and exec were keywords in Python 2.7 but were turned into built-in functions in Python 3 and no longer appear in the keywords list.

Python Soft Keywords

As mentioned above, you’ll get an error if you try to assign something to a Python keyword. Soft keywords, on the other hand, aren’t that strict. They syntactically act as keywords only in certain conditions.

This new capability was made possible thanks to the introduction of the PEG parser in Python 3.9, which changed how the interpreter reads the source code.

Leveraging the PEG parser allowed for the introduction of structural pattern matching in Python. In order to use intuitive syntax, the authors picked match, case, and _ for the pattern matching statements. Notably, match and case are widely used for this purpose in many other programming languages.

To prevent conflicts with existing Python code that already used match, case, and _ as variable or function names, Python developers decided to introduce the concept of soft keywords.
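A short illustration: match still works as an ordinary name, yet acts as a keyword when it introduces a pattern matching block (Python 3.10+):

match = {"status": 404}  # fine: here match is just a variable name

def describe(status):
    match status:  # here match acts as a (soft) keyword
        case 200:
            return "OK"
        case 404:
            return "Not Found"
        case _:
            return "Unknown"

print(describe(match["status"]))  # Not Found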

Currently, there are four soft keywords in Python:

You can use the links above to jump to the soft keywords you’d like to read about, or you can continue reading for a guided tour.

Value Keywords: True, False, None

There are three Python keywords that are used as values. These values are singleton values that can be used over and over again and always reference the exact same object. You’ll most likely see and use these values a lot.
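Because these values are singletons, separate references to the same keyword always compare identical with is:

>>> first = None
>>> second = None
>>> first is second
True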

There are a few terms used in the sections below that may be new to you. They’re defined here, and you should be aware of their meaning before proceeding:

Read the full article at https://realpython.com/python-keywords/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 12, 2025 02:00 PM UTC


EuroPython Society

Changes in the Grants Programme for 2025

TL;DR:

Background:

The EPS introduced a Grant Programme in 2017. Since then, we have granted almost EUR 350k through the programme, partly via EuroPython Finaid and by directly supporting other Python events and projects across Europe. In the last two years, the Grant Programme has grown to EUR 100k per year, with even more requests coming in.

With this growth come new challenges in how to distribute funds fairly so that more events can benefit. Looking at data from the past two years, we’ve often been close to or over our budget. The guidelines haven’t been updated in a while. As grant requests become more complex, we’d like to simplify and clarify the process, and better explain it on our website.

We would also like to acknowledge that EuroPython, when traveling around Europe, has an additional impact on the host country, and we’d like to set aside part of the budget for the local community.

The Grant Programme is also a primary funding source for EuroPython Finaid. To that end, we aim to allocate 30% of the total Grant Programme budget to Finaid, an increase from the previous 25%.

Changes:

Using 2024 data and the budget available for Community Grants (60% of the total), we simulated different budget caps and found a sweet spot at EUR 6,000, at which we are able to support all the requests, with most grants falling below that limit. For 2025 we expect to receive a similar or larger number of requests.


| Grant     | 2024        | 6k cap      | 5k cap      | 4k cap      | 3.5k cap    | 3k cap      |
| --------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| Grant #1  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #2  | € 8,000.00  | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #3  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #4  | € 5,000.00  | € 5,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #5  | € 10,000.00 | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #6  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #7  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  |
| Grant #8  | € 5,000.00  | € 5,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #9  | € 6,000.00  | € 6,000.00  | € 5,000.00  | € 4,000.00  | € 3,500.00  | € 3,000.00  |
| Grant #10 | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  | € 2,900.00  |
| Grant #11 | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  |
| Grant #12 | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  |
| Grant #13 | € 450.00    | € 450.00    | € 450.00    | € 450.00    | € 450.00    | € 450.00    |
| Grant #14 | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  | € 3,000.00  |
| Grant #15 | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  | € 1,000.00  |
| Grant #16 | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  | € 2,000.00  |
| Grant #17 | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,500.00  | € 3,000.00  |
| Grant #18 | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  | € 1,500.00  |
| SUM       | € 66,350.00 | € 60,350.00 | € 57,350.00 | € 52,350.00 | € 48,350.00 | € 43,850.00 |
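A tiny sketch of that simulation: take the 2024 amounts from the table above, apply each candidate cap, and sum the results (the board's actual analysis may have been done differently):

amounts_2024 = [4000, 8000, 4000, 5000, 10000, 4000, 1000, 5000, 6000,
                2900, 2000, 3000, 450, 3000, 1000, 2000, 3500, 1500]

for cap in (6000, 5000, 4000, 3500, 3000):
    total = sum(min(amount, cap) for amount in amounts_2024)
    print(f"cap EUR {cap:,}: total EUR {total:,}")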



We are introducing a special 10% pool of money to be used on projects in the host country (in 2025 that's again the Czech Republic). This pool is set aside at the beginning of the year, with the caveat that we would like to deploy it in the first half of the year. Whatever is left unused goes back to the Community Pool to be used in the second half of the year.

Expected outcome:

February 12, 2025 01:16 PM UTC


Real Python

Quiz: Python Keywords: An Introduction

In this quiz, you’ll test your understanding of Python Keywords.

Python keywords are reserved words with specific functions and restrictions in the language. These keywords are always available in Python, which means you don’t need to import them. Understanding how to use them correctly is fundamental for building Python programs.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

February 12, 2025 12:00 PM UTC


Zato Blog

Modern REST API Tutorial in Python

Modern REST API Tutorial in Python

Great APIs don't win theoretical arguments - they simply work reliably and make developers' lives easier.

Here's a tutorial on what building production APIs is really about: creating interfaces that are practical to use while keeping your systems maintainable for years to come.

Sound intriguing? Read the modern REST API tutorial in Python here.

Modern REST API tutorial in Python

More resources

➤ Python API integration tutorials
What is a Network Packet Broker? How to automate networks in Python?
What is an integration platform?
Python Integration platform as a Service (iPaaS)
What is an Enterprise Service Bus (ESB)? What is SOA?
Open-source iPaaS in Python

February 12, 2025 08:00 AM UTC


Kushal Das

pass using stateless OpenPGP command line interface

Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole spec here.

Installation

cargo install rsop rsop-oct

Then I copied the bash script from my repository to somewhere on my PATH.

The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.

Usage

Here nothing changed in my daily pass usage, except the number of times I type my PIN :)

February 12, 2025 05:26 AM UTC