
Planet Python

Last update: November 05, 2025 07:43 PM UTC

November 05, 2025


TestDriven.io

Cursor vs. Claude for Django Development

This article looks at how Cursor and Claude compare when developing a Django application.

November 05, 2025 07:42 PM UTC


Real Python

Python MarkItDown: Convert Documents Into LLM-Ready Markdown

The MarkItDown library lets you quickly turn PDFs, Office files, images, HTML, audio, and URLs into LLM-ready Markdown. In this tutorial, you’ll compare MarkItDown with Pandoc, run it from the command line, use it in Python code, and integrate conversions into AI-powered workflows.

By the end of this tutorial, you’ll understand that:

  • You can install MarkItDown with pip using the [all] specifier to pull in optional dependencies.
  • The CLI’s results can be saved to a file using the -o or --output command-line option followed by a target path.
  • The .convert() method reads the input document and converts it to Markdown text.
  • You can connect MarkItDown’s MCP server to clients like Claude Desktop to expose on-demand conversions to chats.
  • MarkItDown can integrate with LLMs to generate image descriptions and extract text from images with OCR and custom prompts.
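The `.convert()` workflow summarized above can be sketched in a few lines. This is a minimal, hedged example assuming the `markitdown` package is installed (for instance with the `[all]` extras); the `sample.pdf` filename is a hypothetical placeholder:

```python
from markitdown import MarkItDown

md = MarkItDown()  # the converter object; plugins can be toggled via enable_plugins
result = md.convert("sample.pdf")  # hypothetical input file; also accepts other formats
print(result.text_content)  # the document rendered as LLM-ready Markdown
```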

To decide whether to use MarkItDown or another library—such as Pandoc—for your Markdown conversion tasks, consider these factors:

  • Choose MarkItDown if you want fast Markdown conversion for documentation, blogs, or LLM input.
  • Choose Pandoc if you need high visual fidelity, fine-grained layout control, or broader input/output format support.

Your choice depends on whether you value speed, structure, and AI-pipeline integration over full formatting fidelity or wide-format support. MarkItDown isn’t intended for perfect, high-fidelity conversions for human consumption. This is especially true for complex document layouts or richly formatted content, in which case you should use Pandoc.

Get Your Code: Click here to download the free sample code that shows you how to use Python MarkItDown to convert documents into LLM-ready Markdown.

Take the Quiz: Test your knowledge with our interactive “Python MarkItDown: Convert Documents Into LLM-Ready Markdown” quiz. You’ll receive a score upon completion to help you track your learning progress:



Start Using MarkItDown

MarkItDown is a lightweight Python utility for converting various file formats into Markdown content. This tool is useful when you need to feed large language models (LLMs) and AI-powered text analysis pipelines with specific content that’s stored in other file formats. This lets you take advantage of Markdown’s high token efficiency.

The library supports a wide list of input formats, including the following:

  • PDF
  • PowerPoint
  • Word
  • Excel
  • Images
  • HTML
  • Text-based formats (CSV, JSON, XML)

The relevance of MarkItDown lies in its minimal setup and its ability to handle multiple input file formats. In the following sections, you’ll learn how to install and set up MarkItDown in your Python environment and explore its command-line interface (CLI) and main features.

Installation

To get started with MarkItDown, you need to install the library from the Python Package Index (PyPI) using pip. Before running the command below, make sure you create and activate a Python virtual environment to avoid cluttering your system Python installation:

Shell
(venv) $ python -m pip install 'markitdown[all]'

This command installs MarkItDown and all its optional dependencies in your current Python environment. After the installation finishes, you can verify that the package is working correctly:

Shell
(venv) $ markitdown --version
markitdown 0.1.3

This command should display the installed version of MarkItDown, confirming a successful installation. That should be it! You’re all set up to start using the library.

Note: If you’re running the latest Python 3.14 release, pip might install an outdated version of MarkItDown instead of the current stable one. This happens because the library’s own dependencies haven’t been built for Python 3.14 yet, so pip falls back to the earliest compatible version it finds.

To fix this, you can install MarkItDown in a Python 3.13 or earlier environment. Check out pyenv to manage multiple versions of Python.

Alternatively, MarkItDown also supports several optional dependencies that enhance its capabilities. You can install them selectively according to your needs. Below is a list of some available optional dependencies:

  • pptx for PowerPoint files
  • docx for Word documents
  • xlsx and xls for modern and older Excel workbooks
  • pdf for PDF files
  • outlook for Outlook messages
  • az-doc-intel for Azure Document Intelligence
  • audio-transcription for audio transcription of WAV and MP3 files
  • youtube-transcription for fetching YouTube video transcripts

If you only need a subset of dependencies, then you can install them with a command like the following:

Shell
(venv) $ python -m pip install 'markitdown[pdf,pptx,docx]'

This command installs only the dependencies needed for processing PDF, PPTX, and DOCX files. This way, you avoid cluttering your environment with artifacts that you won’t use or need in your code.

Read the full article at https://realpython.com/python-markitdown/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 05, 2025 02:00 PM UTC


Django Weblog

Django security releases issued: 5.2.8, 5.1.14, and 4.2.26

In accordance with our security release policy, the Django team is issuing releases for Django 5.2.8, Django 5.1.14, and Django 4.2.26. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2025-64458: Potential denial-of-service vulnerability in HttpResponseRedirect and HttpResponsePermanentRedirect on Windows

NFKC normalization in Python is slow on Windows. As a consequence, HttpResponseRedirect, HttpResponsePermanentRedirect, and redirect were subject to a potential denial-of-service attack via certain inputs with a very large number of Unicode characters.

Thanks to Seokchan Yoon (https://ch4n3.kr/) for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2025-64459: Potential SQL injection via _connector keyword argument in QuerySet and Q objects

The methods QuerySet.filter(), QuerySet.exclude(), and QuerySet.get(), and the class Q() were subject to SQL injection when using a suitably crafted dictionary, with dictionary expansion, as the _connector argument.

Thanks to cyberstan for the report.

This issue has severity "high" according to the Django security policy.

Affected supported versions

  • Django main
  • Django 6.0 (currently at beta status)
  • Django 5.2
  • Django 5.1
  • Django 4.2

Resolution

Patches to resolve these issues have been applied to Django's main, 6.0 (currently at beta status), 5.2, 5.1, and 4.2 branches. The patches may be obtained from the following changesets.

CVE-2025-64458: Potential denial-of-service vulnerability in HttpResponseRedirect and HttpResponsePermanentRedirect on Windows

CVE-2025-64459: Potential SQL injection via _connector keyword argument in QuerySet and Q objects

The following releases have been issued:

  • Django 5.2.8
  • Django 5.1.14
  • Django 4.2.26

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum. Please see our security policies for further information.

November 05, 2025 12:00 PM UTC


Real Python

Quiz: Python MarkItDown: Convert Documents Into LLM-Ready Markdown

In this quiz, you’ll test your understanding of the Python MarkItDown: Convert Documents Into LLM-Ready Markdown tutorial.

By working through this quiz, you’ll revisit how to install MarkItDown, convert documents to Markdown for your LLM workflows, and more.



November 05, 2025 12:00 PM UTC


PyCharm

10 Smart Performance Hacks For Faster Python Code

This is a guest post from Dido Grigorov, a deep learning engineer and Python programmer with 17 years of experience in the field.


In the rapidly evolving domain of software development, Python has established itself as a premier language, renowned for its simplicity, readability, and versatility. It underpins a vast range of applications, from web development to artificial intelligence and data engineering. However, beneath its elegant syntax lies a potential challenge: performance bottlenecks that can transform otherwise efficient scripts into noticeably sluggish processes.

Whether the task involves processing large datasets, developing real-time systems, or refining computational efficiency, optimizing Python code for speed can be a decisive factor in achieving superior results.

This guide presents 10 rigorously tested performance-enhancement strategies. Drawing upon Python’s built-in capabilities, efficient data structures, and low-level optimization techniques, it offers practical methods to accelerate code execution without compromising the language’s characteristic clarity and elegance. Supported by empirical benchmarks and illustrative code examples, these techniques demonstrate how incremental improvements can yield substantial performance gains – empowering developers to transition from proficient practitioners to true experts in high-performance Python programming.

Let’s dive in and turbocharge your Python prowess!

Hack 1: Leverage sets for membership testing

When you need to check whether an element exists within a collection, using a list can be inefficient – especially as the size of the list grows. Membership testing with a list (x in some_list) requires scanning each element one by one, resulting in linear time complexity (O(n)):

import time

big_list = list(range(1000000))
big_set = set(big_list)

start = time.time()
print(999999 in big_list)
print(f"List lookup: {time.time() - start:.6f}s")

start = time.time()
print(999999 in big_set)
print(f"Set lookup: {time.time() - start:.6f}s")

Time measured: see the benchmark figure in the original post.

In contrast, sets in Python are implemented as hash tables, which allow for constant-time (O(1)) lookups on average. This means that checking whether a value exists in a set is significantly faster, especially when dealing with large datasets.

For tasks like filtering duplicates, validating input, or cross-referencing elements between collections, sets are far more efficient than lists. They not only speed up membership tests but also make operations like unions, intersections, and differences much faster and more concise.

By switching from lists to sets for membership checks – particularly in performance-critical code – you can achieve meaningful speed gains with minimal changes to your logic.
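The set operations mentioned above (de-duplication, intersection, difference) in a short self-contained sketch; the `allowed_ids`/`seen_ids` names are purely illustrative:

```python
allowed_ids = {1, 2, 3, 4, 5}
seen_ids = [5, 3, 3, 9, 1, 5]

unique_seen = set(seen_ids)          # de-duplicate in one step
valid = unique_seen & allowed_ids    # intersection of the two sets
invalid = unique_seen - allowed_ids  # difference: seen but not allowed

print(sorted(valid))    # [1, 3, 5]
print(sorted(invalid))  # [9]
```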

Hack 2: Avoid unnecessary copies

Copying large objects like lists, dictionaries, or arrays can be costly in both time and memory. Each copy creates a new object in memory, which can lead to significant overhead, especially when working with large datasets or within tight loops.

Whenever possible, modify objects in place instead of creating duplicates. This reduces memory usage and improves performance by avoiding the overhead of allocating and populating new structures. Many built-in data structures in Python provide in-place methods (e.g. sort, append, update) that eliminate the need for copies.

import time

numbers = list(range(1000000))

def modify_list(lst):
    lst[0] = 999
    return lst

start = time.time()
result = modify_list(numbers)
print(f"In-place: {time.time() - start:.4f}s")

def copy_list(lst):
    new_lst = lst.copy()
    new_lst[0] = 999
    return new_lst

start = time.time()
result = copy_list(numbers)
print(f"Copy: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.

In performance-critical code, being mindful of when and how objects are duplicated can make a noticeable difference. By working with references and in-place operations, you can write more efficient and memory-friendly code, particularly when handling large or complex data structures.

Hack 3: Use __slots__ for memory efficiency

By default, Python classes store instance attributes in a dynamic dictionary (__dict__), which offers flexibility but comes with memory overhead and slightly slower attribute access.

Using __slots__ allows you to explicitly declare a fixed set of attributes for a class. This eliminates the need for a __dict__, reducing memory usage – which is especially beneficial when creating many instances of a class. It also leads to marginally faster attribute access due to the simplified internal structure.

While __slots__ does restrict dynamic attribute assignment, this trade-off is often worthwhile in memory-constrained environments or performance-sensitive applications. For lightweight classes or data containers, applying __slots__ is a simple way to make your code more efficient.

import time

class Point:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y

start = time.time()
points = [Point(i, i+1) for i in range(1000000)]
print(f"With slots: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.
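The trade-off described above, namely that __slots__ drops the per-instance __dict__ and blocks dynamic attribute assignment, can be seen directly. A small sketch with two illustrative classes:

```python
class PlainPoint:
    def __init__(self, x, y):
        self.x, self.y = x, y

class SlottedPoint:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x, self.y = x, y

plain = PlainPoint(1, 2)
slotted = SlottedPoint(1, 2)

plain.z = 3  # fine: stored in plain.__dict__
print(hasattr(slotted, '__dict__'))  # False: no per-instance dict to pay for

try:
    slotted.z = 3  # __slots__ forbids attributes not listed
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```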

Hack 4: Use math functions instead of operators

For numerical computations, Python’s math module provides functions that are implemented in C, offering better performance and precision than equivalent operations written in pure Python.

For example, using math.sqrt() is typically faster and more accurate than raising a number to the power of 0.5 using the exponentiation (**) operator. Similarly, functions like math.sin(), math.exp(), and math.log() are highly optimized for speed and reliability.

These performance benefits become especially noticeable in tight loops or large-scale calculations. By relying on the math module for heavy numerical work, you can achieve both faster execution and more consistent results – making it the preferred choice for scientific computing, simulations, or any math-heavy code.


PyCharm makes it even easier to take advantage of the math module by providing intelligent code completion. Simply typing math. triggers a dropdown list of all available mathematical functions and constants – such as sqrt(), sin(), cos(), log(), pi, and many more – along with inline documentation. 

This not only speeds up development by reducing the need to memorize function names, but also encourages the use of optimized, built-in implementations over custom or operator-based alternatives. By leveraging these hints, developers can quickly explore the full breadth of the module and write cleaner, faster numerical code with confidence.

import math
import time

numbers = list(range(10000000))

start = time.time()
roots = [math.sqrt(n) for n in numbers]
print(f"Math sqrt: {time.time() - start:.4f}s")

start = time.time()
roots = [n ** 0.5 for n in numbers]
print(f"Operator: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.

Hack 5: Pre-allocate memory with known sizes

When building lists or arrays dynamically, Python resizes them in the background as they grow. While convenient, this resizing involves memory allocation and data copying, which adds overhead – especially in large or performance-critical loops.

If you know the final size of your data structure in advance, pre-allocating memory can significantly improve performance. By initializing a list or array with a fixed size, you avoid repeated resizing and allow Python (or libraries like NumPy) to manage memory more efficiently.

This technique is particularly valuable in numerical computations, simulations, and large-scale data processing, where even small optimizations can add up. Pre-allocation helps reduce fragmentation, improves cache locality, and ensures more predictable performance.

import time

start = time.time()
result = [0] * 1000000
for i in range(1000000):
    result[i] = i
print(f"Pre-allocated: {time.time() - start:.4f}s")

start = time.time()
result = []
for i in range(1000000):
    result.append(i)
print(f"Dynamic: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.

Hack 6: Avoid exception handling in hot loops

While Python’s exception handling is powerful and clean for managing unexpected behavior, it’s not designed for high-frequency use inside performance-critical loops. Raising and catching exceptions involves stack unwinding and context switching, which are relatively expensive operations.

In hot loops – sections of code that run repeatedly or process large volumes of data – using exceptions for control flow can significantly degrade performance. Instead, use conditional checks (if, in, is, etc.) to prevent errors before they occur. This proactive approach is much faster and leads to more predictable execution.

Reserving exceptions for truly exceptional cases, rather than expected control flow, results in cleaner and faster code – especially in tight loops or real-time applications where performance matters.

import time

numbers = list(range(10000000))

# Both loops compute the same total; only the error-handling strategy differs.
start = time.time()
total = 0
for i in numbers:
    if i % 2 != 0:
        total += i / (i % 2)
    else:
        total += i
print(f"Conditional: {time.time() - start:.4f}s")

start = time.time()
total = 0
for i in numbers:
    try:
        total += i / (i % 2)
    except ZeroDivisionError:
        total += i
print(f"Exception: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.

Hack 7: Use local functions for repeated logic

When a specific piece of logic is used repeatedly within a function, defining it as a local (nested) function – also known as a closure – can improve performance and organization. Local functions benefit from faster name resolution because Python looks up variables more quickly in local scopes than in global ones.

In addition to the performance gain, local functions help encapsulate logic, making your code cleaner and more modular. They can also capture variables from the enclosing scope, allowing you to write more flexible and reusable inner logic without passing extra arguments.

This technique is particularly useful in functions that apply the same operation multiple times, such as loops, data transformations, or recursive processes. By keeping frequently used logic local, you reduce both runtime overhead and cognitive load.
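The variable-capture behavior described above can be shown in a tiny sketch; the `make_scaler` name is illustrative:

```python
def make_scaler(factor):
    # The inner function captures 'factor' from the enclosing scope,
    # so no extra argument has to be passed on each call.
    def scale(value):
        return value * factor
    return scale

double = make_scaler(2)
triple = make_scaler(3)
print(double(10), triple(10))  # 20 30
```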

Hint: Use AI Assistant’s Suggest Refactoring

If you’re using PyCharm (or any JetBrains product) with the AI Assistant plugin, one particularly powerful tool is Suggest Refactoring. With it, you can select a segment of code, invoke the AI Assistant, and ask it to propose cleaner or more efficient alternatives – all in one go. 

The assistant shows you a “refactored” version of your code, lets you view the diff (what would change), and you can accept either selected snippets or the whole block. This helps maintain consistency, enforce best practices, and catch opportunities for improvement you might otherwise miss.


How to use Suggest Refactoring

Here are step-by-step instructions (as per JetBrains’ documentation) on how to use this feature:

  1. Select the code fragment you want to refactor.
  2. When the popup appears (e.g. small lightbulb or context menu), click the AI Assistant icon.
  3. Choose Suggest Refactoring in the menu.
  4. The AI Chat pane then opens with its proposed refactorings. In it, you can:
    • Click Show Diff to compare the original against the proposed code.
    • Or if you prefer, you can select Apply Immediately to skip the diff and apply the suggestion directly.
  5. If you like the suggested changes, click Accept on individual snippets (in the gutter) or Accept All to replace the entire selected fragment.
  6. If you don’t like the suggestions, you can always close the diff or dialog without applying.

import time

def outer():
    def add_pair(a, b):
        return a + b
    result = 0
    for i in range(10000000):
        result = add_pair(result, i)
    return result

start = time.time()
result = outer()
print(f"Local function: {time.time() - start:.4f}s")

def add_pair(a, b):
    return a + b

start = time.time()
result = 0
for i in range(10000000):
    result = add_pair(result, i)
print(f"Global function: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.

Hack 8: Use itertools for combinatorial operations

When dealing with permutations, combinations, Cartesian products, or other iterator-based tasks, Python’s itertools module provides a suite of highly efficient, C-optimized tools tailored for these use cases.

Functions like product(), permutations(), combinations(), and combinations_with_replacement() generate elements lazily, meaning they don’t store the entire result in your computer’s memory. This allows you to work with large or infinite sequences without the performance or memory penalties of manual implementations.

In addition to being fast, itertools functions are composable and memory-efficient, making them ideal for complex data manipulation, algorithm development, and problem-solving tasks like those found in simulations, search algorithms, or competitive programming. When performance and scalability matter, itertools is a go-to solution.

import time
from itertools import product

items = [1, 2, 3] * 10

start = time.time()
result = list(product(items, repeat=2))
print(f"Itertools: {time.time() - start:.4f}s")

start = time.time()
result = []
for x in items:
    for y in items:
        result.append((x, y))
print(f"Loops: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.
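The lazy generation mentioned earlier is easiest to see with an infinite iterator: itertools.count() never ends, yet islice() can take a finite window from it without materializing anything. A small sketch:

```python
from itertools import count, islice

# count() yields 0, 1, 2, ... forever; the generator expression filters lazily,
# and islice() pulls just the first five matches, so no full list is ever built.
evens = (n for n in count() if n % 2 == 0)
first_five = list(islice(evens, 5))
print(first_five)  # [0, 2, 4, 6, 8]
```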

Hack 9: Use bisect for sorted list operations

When working with sorted lists, using linear search or manual insertion logic can be inefficient – especially as the list grows. Python’s bisect module provides fast, efficient tools for maintaining sorted order using binary search.

With functions like bisect_left(), bisect_right(), and insort(), you can perform insertions and searches in O(log n) time, as opposed to the O(n) complexity of a simple scan. This is particularly useful in scenarios like maintaining leaderboards, event timelines, or implementing efficient range queries.

By using bisect, you avoid re-sorting after every change and gain a significant performance boost when working with dynamic, sorted data. It’s a lightweight and powerful tool that brings algorithmic efficiency to common list operations.

import bisect
import time

numbers = list(range(0, 1000000, 2))  # already sorted, as bisect requires

start = time.time()
bisect.insort(numbers, 75432)
print(f"Bisect: {time.time() - start:.4f}s")

start = time.time()
for i, num in enumerate(numbers):
    if num > 75432:
        numbers.insert(i, 75432)
        break
print(f"Loop: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.
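Besides insort(), the bisect_left()/bisect_right() searches mentioned above are worth a quick sketch of their own; the `scores` list is illustrative:

```python
import bisect

scores = [10, 20, 20, 30, 40]  # bisect only works on sorted sequences

# bisect_left: first index where the value could be inserted while staying
# sorted; this doubles as a binary search for the first equal element.
left = bisect.bisect_left(scores, 20)
# bisect_right: insertion point after any equal elements.
right = bisect.bisect_right(scores, 20)

print(left, right)   # 1 3
print(right - left)  # 2 occurrences of 20, found in O(log n)
```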

Hack 10: Avoid repeated function calls in loops

Calling the same function multiple times inside a loop – especially if the function is expensive or produces the same result each time – can lead to unnecessary overhead. Even relatively fast functions can accumulate significant cost when called repeatedly in large loops.

To optimize, compute the result once outside the loop and store it in a local variable. This reduces function call overhead and improves runtime efficiency, particularly in performance-critical sections of code.

This technique is simple but effective. It not only speeds up execution but also enhances code clarity by signaling that the value is constant within the loop’s context. Caching function results is one of the easiest ways to eliminate redundant computation and make your code more efficient.

import time

def expensive_operation():
    time.sleep(0.001)
    return 42

start = time.time()
cached_value = expensive_operation()
result = 0
for i in range(1000):
    result += cached_value
print(f"Cached: {time.time() - start:.4f}s")

start = time.time()
result = 0
for i in range(1000):
    result += expensive_operation()
print(f"Repeated: {time.time() - start:.4f}s")

Time measured: see the benchmark figure in the original post.
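When the function does take arguments, the manual caching above generalizes to the standard library's functools.lru_cache decorator, which memoizes results per argument tuple. A sketch with an illustrative `expensive_square` function:

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def expensive_square(n):
    time.sleep(0.001)  # simulate a costly computation
    return n * n

start = time.time()
results = [expensive_square(i % 10) for i in range(1000)]  # only 10 distinct inputs
print(f"lru_cache: {time.time() - start:.4f}s")  # ~10 slow calls, 990 cache hits
```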

In summary

From leveraging the inherent efficiency of Python’s built-in functions and data structures to employing memory-conscious techniques with __slots__, these ten Python performance strategies provide a practical set of tools for enhancing execution speed. 

The methods explored include utilizing sets for rapid membership checks, avoiding unnecessary data copies and exception-handling overhead in hot loops, preferring the C-implemented math module over operator-based equivalents, and pre-allocating memory when sizes are known in advance.

Specialized modules such as itertools and bisect further streamline complex tasks, while adherence to best practices – such as keeping frequently used logic in local scope and caching the results of repeated function calls – ensures lean, efficient code execution. Empirical benchmarks demonstrate that even minor adjustments can yield significant time savings in large-scale operations, reinforcing the principle that effective optimization does not necessitate a complete code rewrite.

Whether refining a standalone script or scaling a production-level application, these techniques, when applied judiciously, can significantly enhance performance while conserving system resources. Ultimately, the most effective optimizations strike a balance between speed and clarity.

About the author

Dido Grigorov

Dido is a seasoned Deep Learning Engineer and Python programmer with an impressive 17 years of experience in the field. He is currently pursuing advanced studies at the prestigious Stanford University, where he is enrolled in a cutting-edge AI program, led by renowned experts such as Andrew Ng, Christopher Manning, Fei-Fei Li and Chelsea Finn, providing Dido with unparalleled insights and mentorship.

Dido’s passion for Artificial Intelligence is evident in his dedication to both work and experimentation. Over the years, he has developed a deep expertise in designing, implementing, and optimizing machine learning models. His proficiency in Python has enabled him to tackle complex problems and contribute to innovative AI solutions across various domains.

November 05, 2025 10:50 AM UTC

November 04, 2025


Python Morsels

__slots__ for optimizing classes

Most Python objects store their attributes in a __dict__ dictionary. Modules and classes always use __dict__, but not everything does.

Table of contents

  1. How are class attributes stored by default?
  2. Using __slots__ to restrict class attributes
  3. Why use __slots__?
  4. Saving memory usage with __slots__
  5. Memory usage comparison with and without __slots__
  6. Summary

How are class attributes stored by default?

Here we have a class called Point in a points.py file:

class Point:
    def __init__(self, x, y, z):
        (self.x, self.y, self.z) = (x, y, z)


def point_path_from_file(filename):
    with open(filename) as lines:
        return [
            Point(*map(float, point_line.split()))
            for point_line in lines
        ]

And here's an instance of this Point class:

>>> p = Point(1, 2, 3)

Normally, classes store their attributes in a dictionary called __dict__.

>>> p.__dict__
{'x': 1, 'y': 2, 'z': 3}

We have a class here where every instance has x, y, and z attributes. But we could add other attributes to any instance of this Point class, and another key-value pair will appear in this __dict__ dictionary.

For example if we add a w attribute:

>>> p.w = 4

Our __dict__ dictionary will now have a w attribute:

>>> p.__dict__
{'x': 1, 'y': 2, 'z': 3, 'w': 4}

This is how classes work by default – unless you use __slots__.

Using __slots__ to restrict class attributes

To use __slots__, we …

Read the full article: https://www.pythonmorsels.com/__slots__/

November 04, 2025 10:30 PM UTC


Rodrigo Girão Serrão

A generator, duck typing, and a branchless conditional walk into a bar

A generator, duck typing, and a branchless conditional walk into a bar.

What's your favourite line of Python code?

My friend Aaron is quite a character. One day, he was giving a talk and said that everyone must have their favourite line of code. And he was absolutely sure of what he was saying. His conviction is that everyone is so deep into their craft that they naturally feel strongly about some lines of code. So much so that one of them is their personal favourite.

The caveat is that he was talking about APL, not about Python, and with that context in mind the idea of “everyone having a favourite line of code” makes much more sense, believe it or not.

But that got me thinking... Do I have a favourite line of Python code? What is it?

I like generators

There are lots of things I like about the Python language, like the fact that it is very beginner-friendly, it has a very vibrant community, or that everything is an object and you can interact with the core language by implementing the right dunder methods.

One other thing I like in general, and that translates well into the world of Python, is laziness. A pinch of laziness in a programmer is a good thing because it will force you to think about the best way of doing things, and Python has a category of objects that are inherently lazy, which are generators.

Generators are objects you can iterate over – so, they're objects you can use in for loops, for example – but that only generate the values that you need on demand. Kind of like the built-in range, if you think about it.
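A minimal sketch of such a lazy object, here a hypothetical countdown generator function:

```python
def countdown(n):
    # Execution pauses at each 'yield' and resumes on the next request,
    # so values are produced one at a time, on demand.
    while n > 0:
        yield n
        n -= 1

gen = countdown(3)
print(next(gen))  # 3
print(list(gen))  # [2, 1] – the remaining values
```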

range is lazy

If you run range(10), that small expression runs in a tiny fraction of a second because it just instantiates an object called “range” and doesn't really compute the integers from 0 to 9, inclusive. You can see that if you print the object:

import time

start = time.perf_counter()

r = range(10)
print(r)  # range(0, 10)

end = time.perf_counter()
print(f"Done in {end - start:.4f}s.")  # Done in 0.0001s.

Creating a range with a gazillion integers is equally fast since, again, you're not computing the integers themselves. You are just instantiating the class range:

import time

start = time.perf_counter()

r = range(999_999_999_999_999_999)  # A gazillion.
print(r)  # range(0, 999999999999999999)

end = time.perf_counter()
print(f"Done in {end - start:.4f}s.")  # Done in 0.0001s.

The story becomes completely different if, instead of a range, you want a list with all those integers. A list must hold its elements, so creating a larger list takes up more memory and more time:

import time

start = time.perf_counter()

r = list(range(10))  # <-- list(...)
print(r)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

end = time.perf_counter()
print(f"Done in {end - start:.4f}s.")  # Done in 0.0001s.

Creating a list with 10 elements seems to be quite fast, but what if you want a...

November 04, 2025 08:26 PM UTC


PyCoder’s Weekly

Issue #707: Python Infrastructure, Concurrency, Django in 2025, and More (Nov. 4, 2025)

#707 – NOVEMBER 4, 2025


Michael Kennedy: Managing Your Own Python Infrastructure

How do you deploy your Python application without getting locked into an expensive cloud-based service? This week on the show, Michael Kennedy from the Talk Python podcast returns to discuss his new book, “Talk Python in Production.”
REAL PYTHON podcast

Speed Up Python With Concurrency

Learn what concurrency means in Python and why you might want to use it. You’ll see a simple, non-concurrent approach and then look into why you’d want threading, asyncio, or multiprocessing.
REAL PYTHON course

Modern, Self-Hosted Authentication


Keep your users, your data, and your stack with PropelAuth BYO. Easily add Enterprise authentication features like Enterprise SSO, SCIM, and session management. Keep your sales team happy and give your CISO peace of mind. Learn more here →
PROPELAUTH sponsor

The State of Django 2025

Develop with Django? The answers from the annual Django survey show you how over 4,600 developers are using it today and give you actionable ideas to implement in your projects right now.
JETBRAINS.COM • Shared by Evgeniia

PSF Withdraws $1.5M Proposal for US Gov’t Grant

PYTHON SOFTWARE FOUNDATION

Django Is Now a CVE Numbering Authority (CNA)

DJANGO SOFTWARE FOUNDATION

PSF Board Office Hour Sessions for 2026

PYTHON SOFTWARE FOUNDATION

Python Jobs

Python Video Course Instructor (Anywhere)

Real Python

Python Tutorial Writer (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

Regex, Pregex, or Pyparsing?

Parsing messy support tickets? This post walks through real-world examples of Python techniques for extracting structured data from unstructured text. It compares the re module for classic pattern matching, pregex for cleaner and more readable regex construction, and pyparsing for more complex structures.
KHUYEN TRAN • Shared by Khuyen Tran
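As a tiny taste of the `re` approach from that comparison, here's a sketch that pulls two fields out of a made-up support-ticket string (the ticket text and field choices are assumptions, not from the article):

```python
import re

# Hypothetical unstructured ticket text.
ticket = "Order #12345 failed on 2025-11-04 for user alice@example.com"

order_id = re.search(r"#(\d+)", ticket).group(1)            # capture the digits
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", ticket).group(0)  # rough email match

print(order_id)  # 12345
print(email)     # alice@example.com
```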

MicroPython Used in a PlayStation Game

Tibor is a game written using MicroPython. If that isn’t cool enough, the coders figured out how to embed it in the Unreal Engine and then used that to convert it to a PS5 game.
KREŠIMIR ŠPES

The free data science code editor, built by the creators of RStudio.

Posit’s new Integrated Development Environment enables data scientists to work effectively with Python, R, and other languages within a single, unified interface, streamlining development workflows and reducing context switching.
POSIT sponsor

Django: Introducing django-http-compression

HTTP supports response compression, which can significantly reduce the size of responses, thereby decreasing bandwidth usage and load times for users. This post describes a library that helps you achieve this.
ADAM JOHNSON

Logging in Python

If you use Python’s print() function to get information about the flow of your programs, logging is the natural next step. Create your first logs and curate them to grow with your projects.
REAL PYTHON
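The first step the article describes, replacing `print()` calls with leveled log records, looks roughly like this:

```python
import logging

# Configure the root logger once, near program start.
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s:%(name)s:%(message)s",
)
logger = logging.getLogger(__name__)

logger.info("Normal program flow, where you might have used print()")
logger.warning("Something unexpected, but the program keeps going")
logger.debug("Filtered out: DEBUG is below the configured INFO level")
```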

Using Starlark for Complex Configuration

If you need something deeper than TOML, Starlark is a Python-like configuration language which now has Python bindings through a Rust plug-in. This article shows you its capabilities.
BITE CODE!

Please Don’t Break Things

Does this need to be a breaking change? In this opinion piece, David asks whether a change being made to a library was really necessary, and weighs the cost to those of us using it.
DAVID VUJIC • Shared by David Vujic

What Caused the Large AWS Outage?

Last week, a major AWS outage hit thousands of sites & apps, and even a Premier League soccer game. This post is an overview of what caused this high-profile, global outage.
GERGELY OROSZ

How Does Python’s OrderedDict Maintain Order?

This article takes a closer look at the inner workings of OrderedDict and explains what it takes to implement an ordered dictionary in Python.
PIGLEI • Shared by piglei
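As a refresher on what the article digs into, `OrderedDict` adds reordering operations that a plain `dict` lacks:

```python
from collections import OrderedDict

d = OrderedDict(a=1, b=2, c=3)
d.move_to_end("a")                # send an existing key to the back
print(list(d))                    # ['b', 'c', 'a']

first_in = d.popitem(last=False)  # pop from the front, FIFO-style
print(first_in)                   # ('b', 2)
```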

Why Performance Matters in Python Development

A deep dive from a deep-learning engineer on why performance in Python is important, how it is changing, and the myths of Python performance.
DIDO GRIGOROV

Using Python Optional Arguments When Defining Functions

Use Python optional arguments to handle variable inputs. Learn to build flexible functions and avoid common errors when setting defaults.
REAL PYTHON
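One of the common errors the tutorial warns about is using a mutable default value, which is created once at definition time and then shared across calls:

```python
# Anti-pattern: the default list is created once and reused on every call.
def append_bad(item, items=[]):
    items.append(item)
    return items

# Idiomatic fix: use None as a sentinel and create a fresh list per call.
def append_good(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  <- surprising: the same list was reused
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```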

Quiz: Using Python Optional Arguments When Defining Functions

Practice Python function parameters, default values, *args, **kwargs, and safe optional arguments with quick questions and short code tasks.
REAL PYTHON

Unicode Footguns in Python

Understanding Unicode equivalence and the deceptive nature of glyphs
VIVIS DEV
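A small example of the Unicode equivalence issue the article describes:

```python
import unicodedata

# "é" can be a single code point or "e" plus a combining accent.
single = "\u00e9"     # é as one code point (NFC form)
combined = "e\u0301"  # e + COMBINING ACUTE ACCENT (NFD form)

print(single == combined)                                # False: same glyph, different code points
print(unicodedata.normalize("NFC", combined) == single)  # True after normalization
print(unicodedata.normalize("NFD", single) == combined)  # True the other way too
```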

Projects & Code

hyperflask: Full Stack Web Framework

GITHUB.COM/HYPERFLASK

caniscrape: Analyze a Website’s Anti-Bot Protections

GITHUB.COM/ZA1815

python-diskcache: Python Disk-Backed Cache

GITHUB.COM/GRANTJENKS

pytogether: Collaborative Python IDE in the Browser

GITHUB.COM/SJRIZ

Events

Weekly Real Python Office Hours Q&A (Virtual)

November 5, 2025
REALPYTHON.COM

Canberra Python Meetup

November 6, 2025
MEETUP.COM

Sydney Python User Group (SyPy)

November 6, 2025
SYPY.ORG

PyCon Mini Tokai 2025

November 8 to November 9, 2025
PYCON.JP

PyCon Chile 2025

November 8 to November 10, 2025
PYCON.CL

Django Girls Chongoene #2

November 8 to November 9, 2025
DJANGOGIRLS.ORG

PyCon Ireland 2025

November 15 to November 17, 2025
PYCON.IE

PyCon Wroclaw 2025

November 15 to November 16, 2025
PYCONWROCLAW.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #707.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

November 04, 2025 07:30 PM UTC


Real Python

Building UIs in the Terminal With Python Textual

Have you ever wanted to create an app with an appealing interface that works in the command line? Welcome to Textual, a Python toolkit and framework for creating beautiful, functional text-based user interface (TUI) applications. The Textual library provides a powerful and flexible framework for building TUIs. It offers a variety of features that allow you to create interactive and engaging console applications.

In this video course, you’ll learn how to create, style, and enhance Textual apps with layouts, events, and actions.

By the end of this video course, you’ll understand that:


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 04, 2025 02:00 PM UTC


Python Software Foundation

Connecting the Dots: Understanding the PSF’s Current Financial Outlook

As the PSF heads into our end-of-year fundraiser, we want to share information to help “connect the dots” and show a more complete picture of the PSF’s current financial outlook. You’ve heard from us on subjects related to our financial position from several different angles recently (a list of those posts is below). We’ve prioritized proactive communications because we believe in transparency, we have trust in our community, and we value keeping you informed; we know how invested in and impacted by our work you are. We now want to pull those threads together in order to create some shared clarity on the big picture and, hopefully, inspire you to action to support our fundraising efforts.

The dots

Many groups, organizers, and individuals in the Python community and beyond are experiencing the impacts of the current financial environment, including inflation, reduced sponsorship, economic pressure in the tech sector, and global/local uncertainty and conflict. Unfortunately, the PSF has felt these effects as well, in a number of ways. We’ve been doing our best to share how the current environment impacts our areas of service to the community as the PSF navigates these challenges over the past couple of years:

To briefly summarize: the PSF’s assets and yearly revenue have declined and costs have increased, while the demand and need for our work have continued to multiply.

Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program. A PSF-run PyCon US is also an essential program for the PSF to deliver value to our sponsors. Unfortunately, PyCon US has run at a loss for three years—and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing conditions in the US and the broader economy have reduced our attendance. Because PyCon US is still a 2000+ person event, we must secure venue contracts for event spaces that can accommodate that number of people, years in advance. Those contracts come with a lot of requirements, such as union labor, required vendors, and many more details (iykyk) that, in the end, amount to a hefty spend.

Meanwhile, Python usage has continued to surge (which is wonderful!), but rather than keep pace, corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed. We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. 

We’ve also been seeking out grant opportunities where we find good fits with our mission. We made it far along in one large U.S. Government grant process, but ultimately decided to withdraw our application because it conflicted with our values and mission. The community's supportive response to that decision has been heartening and brought in an unexpected surge of material support totaling $135K+ USD from 1400+ donors, which includes 270+ new PSF members! The PSF is astounded by, and deeply appreciative of, the outpouring of solidarity in both words and actions. This remarkable show of support reminds us of the community’s strength and reinforces our resolve in the decision to withdraw from the grant process, even as the $1.5M gap from the grant remains.

Our 2024 Annual Impact Report provides a window on the current economic outlook for the PSF, with a loss in net income and a dip in the growth of assets in 2024. Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited “levers to pull” when it comes to budgeting and long-term sustainability. As you can see from the categories mentioned, each of these expense areas leads directly to the services we provide the community. Additionally, we have several sources of assets with donor restrictions (i.e. earmarked funds), meaning we can’t shift those funds to cover other areas of need. 


 

What does this mean? 

Overall, the PSF is facing significant financial challenges, but we are actively monitoring the situation and taking action where we can. This post is our way of “raising the flag” early and calling in the community proactively. We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn’t shift in the next year. 

What we’re doing

Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term. Some of these changes and efforts include:
The PSF’s end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more “oomph” behind the campaign. We’ll be doing our regular fundraising activities; we’ll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients. 

What you can do

So, what can you do to help us gain sponsors to ensure critical infrastructure, our community, and more can stay supported and sustainable?
  1. If your company is using Python to build its products and services, check to see if they already sponsor the PSF on our Sponsors page.
  2. If not, reach out to your organization's internal decision-makers and impress on them just how important it is for us to power the future of Python together, and send them our sponsor prospectus.
  3. Point out the various benefits they will receive from sponsoring the PSF. Mention that PyCon US 2026 is coming up next spring, where they can connect with the community, recruit, and understand the current direction of the Python language!
  4. Remind them to reach out to sponsors@python.org if they have any questions or would like a walk-through of our sponsorship program.
As the PSF prepares for our end-of-year fundraiser, we want to emphasize the importance of our community's support. Your relentless passion for Python and our community, along with your individual donations, memberships, stories, advocacy, and more, all make a huge impact and keep our tiny-but-mighty PSF team inspired. Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of “why I support the PSF” stories will make all the difference in our end-of-year fundraiser. 

If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations. If you have questions about the PSF’s current financial outlook, the steps we’re taking, or how you can get involved, we welcome you to join the PSF Board Office Hours, join the conversation on Discuss, or email psf@python.org. As ever, we are incredibly grateful to be in community with each of you, and we’re honored to have your support. 

November 04, 2025 08:36 AM UTC


Seth Michael Larson

GameCube Nintendo Classics and storage size

If you're into GameCube collecting and archiving, you may already know that GameCube ISOs or "ROMs" are ~1.3 GB in size, regardless of the game contained within the .iso file. This is because GameCube ROMs are all copies of the same disc format: the GameCube Game disc (DOL-6).

The GameCube Game disc is an 8 cm miniDVD-based disc with a fixed storage capacity of 1.5 GB. Compare this to cartridges, which use memory-mapping controllers (MMCs) and can contain different amounts of ROM depending on the size of the game data itself.

This was a concern raised by some GameCube players on Switch: given the prices of microSD Express storage (~28 ¢/GB) and the size of the GameCube game library (>650 titles total, 45 first-party), storage requirements could increase quickly as new GameCube titles are added.

Luckily, looking at the data about the GameCube Nintendo Classics application on the Switch, we can see that the ROMs in use are "trimmed", so that their size is less than 1.3 GB:

Date        Titles                            Games  Storage  Storage/Game
2025-06-03  F-Zero GX                         3      3.5 GB   1.16 GB
            Legend of Zelda: The Wind Waker
            Soulcalibur II
2025-07-03  Super Mario Strikers              4      ??? GB   ??? GB
2025-08-21  Chibi-Robo!                       5      6.9 GB   1.38 GB
2025-10-30  Luigi's Mansion                   6      7.2 GB   1.2 GB
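The Storage/Game column is just total storage divided by the cumulative number of games; a quick check of the rows with known storage:

```python
# (storage in GB, cumulative number of games) per update, from the table above
entries = {
    "2025-06-03": (3.5, 3),
    "2025-08-21": (6.9, 5),
    "2025-10-30": (7.2, 6),
}
for date, (storage_gb, games) in entries.items():
    print(f"{date}: {storage_gb / games:.2f} GB per game")
```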

Luigi's Mansion in particular is known to only require ~100 MB of data on the 1.3 GB disc. Animal Crossing for the GameCube is also legendarily small due to starting life as an N64 game, requiring only 50 MB of data.

It'll be interesting to see what happens for the first multi-disc game to be added to Nintendo Classics. Notably, Namco already has a GameCube game in Nintendo Classics: Soulcalibur II. For this reason, I suspect that the first multi-disc game will be one of these three published by Namco:



Thanks for keeping RSS alive! ♥

November 04, 2025 12:00 AM UTC

November 03, 2025


Real Python

A Close Look at a FastAPI Example Application

This example project showcases important features of the FastAPI web framework, including automatic validation and documentation. FastAPI is an excellent choice for both beginners building their first API and experienced developers diving deep into API design.

In this tutorial, you’ll explore a FastAPI example application by building a randomizer API that can shuffle lists, pick random items, and generate random numbers.

By the end of this tutorial, you’ll understand that:

  • Path parameters and type hints work together for automatic request validation.
  • Request bodies handle complex data in your API endpoints.
  • Asynchronous programming improves your API’s performance.
  • CORS configuration enables secure cross-origin requests.

To follow along with this tutorial, you should be comfortable defining Python functions, working with decorators, and have a basic understanding of CRUD and JSON.

Get Your Code: Click here to download the free sample code that you’ll use to take a close look at a FastAPI example application.

Take the Quiz: Test your knowledge with our interactive “A Close Look at a FastAPI Example Application” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

A Close Look at a FastAPI Example Application

Practice FastAPI basics with path parameters, request bodies, async endpoints, and CORS. Build confidence to design and test simple Python web APIs.

Set Up a FastAPI Example Project

Before diving into code, you’ll need to properly set up your development environment. This involves installing FastAPI and creating your first endpoint. The goal of these first lines of code is to verify that everything works correctly.

Install FastAPI

FastAPI requires two main components to run your application: the framework itself and an ASGI (Asynchronous Server Gateway Interface) server. The recommended way to install FastAPI includes these two components and all the standard dependencies you’ll need to get started with your FastAPI project.

First, select your operating system below and use your platform-specific command to set up a virtual environment:

Windows PowerShell
PS> python -m venv venv
PS> .\venv\Scripts\activate
(venv) PS>
Shell
$ python -m venv venv
$ source venv/bin/activate
(venv) $

With your virtual environment activated, install FastAPI with all its standard dependencies:

Shell
(venv) $ python -m pip install "fastapi[standard]"

This command installs FastAPI along with Uvicorn as the ASGI server. The [standard] extra includes a predefined set of optional dependencies that enhance FastAPI’s functionality. For example, it provides the FastAPI CLI, which you’ll use later to run the development server.

Create Your First API Endpoint

With FastAPI installed, you can create a minimal API with one API endpoint to test your setup. This first endpoint will help you understand the basic structure of a FastAPI application. You’ll see how FastAPI uses Python decorators to define routes and how it automatically handles JSON serialization for your response data.

Start by creating a new file named main.py in your project’s directory:

Python main.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def home():
    return {"message": "Welcome to the Randomizer API"}

You create your FastAPI application named app by instantiating the FastAPI class. The @app.get("/") decorator defines the home() function as a route handler. That way, FastAPI will call the home() function when someone sends a GET request to your API’s root URL.
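FastAPI's real routing machinery is more sophisticated, but the registration idea behind the decorator can be sketched in plain Python. This is a hypothetical illustration, not FastAPI's actual implementation:

```python
# A decorator factory that records handlers in a route table keyed by
# HTTP method and path, mimicking what @app.get("/") does conceptually.
routes = {}

def get(path):
    def decorator(func):
        routes[("GET", path)] = func  # map method + path to the handler
        return func
    return decorator

@get("/")
def home():
    return {"message": "Welcome to the Randomizer API"}

# Dispatching a request then becomes a dictionary lookup:
handler = routes[("GET", "/")]
print(handler())  # {'message': 'Welcome to the Randomizer API'}
```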

To see what this looks like in action, hop over to the terminal and run your FastAPI application with the following command:

Shell
(venv) $ fastapi dev main.py

FastAPI  Starting development server 🚀
         Searching for package file structure from directories with __init__.py files
         Importing from /Users/rp/projects/fastapi

module   🐍 main.py

  code   Importing the FastAPI app object from the module with the following code:
         from main import app

   app   Using import string: main:app

server   Server started at http://127.0.0.1:8000
server   Documentation at http://127.0.0.1:8000/docs

   tip   Running in development mode, for production use: fastapi run

         Logs:

  INFO   Will watch for changes in these directories: ['/Users/rp/projects/fastapi']
  INFO   Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
  INFO   Started reloader process [85470] using WatchFiles
  INFO   Started server process [85474]
  INFO   Waiting for application startup.
  INFO   Application startup complete.

The fastapi dev command is part of the FastAPI CLI. It starts your application in development mode with automatic reloading. This means that any changes you make to your code will automatically restart the server.

Read the full article at https://realpython.com/fastapi-python-web-apis/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 03, 2025 02:00 PM UTC


Reuven Lerner

Want to learn uv?

You’ve probably heard about uv:

But using uv isn’t just about learning a few commands. It’s about changing how you think about packaging, and what commands you run on a regular basis.

That’s why I’ve created a free uv crash course. Every day, for 12 days, you’ll get insights into how to (and how not to) use uv in your workflow. Every article includes oodles of examples, context, and ideas for switching to uv from other tools.

Check it out, for free: https://uvcrashcourse.com

The post Want to learn uv? appeared first on Reuven Lerner.

November 03, 2025 01:15 PM UTC


Real Python

Quiz: A Close Look at a FastAPI Example Application

In this quiz, you’ll test your understanding of the FastAPI example project that can shuffle lists, pick random items, and generate random numbers.

By working through this quiz, you’ll revisit how path parameters and type hints enable automatic validation, how request bodies model data, how async endpoints improve performance, and how CORS allows safe cross-origin requests.

To go deeper, read A Close Look at a FastAPI Example Application. You can also review Python functions, decorators, CRUD, and JSON.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 03, 2025 12:00 PM UTC


eGenix.com

PyDDF Python Herbst Sprint 2025

This announcement is for a Python sprint in Düsseldorf, Germany, and was originally published in German.

Announcement

Python Meeting Autumn Sprint 2025 in
Düsseldorf

Saturday, November 15, 2025, 10:00-18:00
Sunday, November 16, 2025, 10:00-18:00

Atos Information Technology GmbH, Am Seestern 1, 40547 Düsseldorf

Information

The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend with the kind support of Atos Deutschland.

The sprint takes place on the weekend of November 15-16, 2025, at the Atos office, Am Seestern 1, in Düsseldorf. Several topic areas have already been suggested as inspiration. Naturally, participants can propose and work on additional topics.

Registration, costs, and further information

Everything else, including registration, can be found on the Meetup sprint page:

IMPORTANT: Without registration, we cannot prepare building access. Spontaneous registration on the day of the sprint will therefore probably not work.

Participants should also register in the PyDDF Telegram group, since that's where we coordinate:

About the Python Meeting Düsseldorf

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after each meeting, offers a good overview of the presentations.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.

Have fun!

Marc-André Lemburg, eGenix.com

November 03, 2025 09:00 AM UTC


Hugo van Kemenade

Python Core Sprint 2025

🐍🏃In September, the annual Python Core Sprint was hosted by Arm in Cambridge, UK!

The plan: put 35 core developers and 13 special guests in a room for a week, and see what they cook up.

Monday highlights #

We kicked off the first day with a round of five-word intros (mine: “three”, “dot”, “fourteen”, “release”, “manager”), lots of talks, and lots of discussion about talks:

I did some sprint spring cleaning of our PyPI projects, dropping support for then-almost-EOL Python 3.9:

And because Mariatta wasn’t with us, here’s the all-important Python T-shirt census:

A room of people at desks with monitors. Steve has a mic and is asking a question to Brett who’s standing at and presenting.

The sprint room

Tuesday highlights #

Release day?

I’d originally planned to release Python 3.14.0rc3 on the Tuesday, but the morning was full of presentations, and the afternoon had an early departure for the social event, so I moved it to Wednesday instead.

Tania Allard gave a presentation about the different types of mentorship and how we can improve, followed by an open discussion.

Gregory P. Smith gave a demo on how we can use tools like Claude with CPython.

Tania, Jannis Leidel, Carol Willing and I discussed the User Success Workgroup and we came up with some ideas on next steps.

We ended the day with a punting tour on the river Cam and dinner at Jesus College, thank you, Arm!

And Thomas Wouters gave a fun session of his Feuding Pythonistas game (spoiler: people are wrong on the internet).

Python/tech t-shirt census:

People sitting in a small boat, with a man standing at the back with a long pole in the water, in front of grand Cambridge college

Punting on the Cam

Wednesday highlights #

Release day?

The Steering Council asked for an extra day to decide about a possible typing revert (python/steering-council#307), so not today.

Lightning talks:

Carol, Adam, Thomas, Petr Viktorin and I discussed a number of docs topics.

I released the Python Docs Sphinx Theme with more translations.

We had a Q&A session with the Steering Council, three in-person and two joining remotely.

Jacob Coffee and I looked into upgrading the Python Insider and PSF blogs into something a little more modern.

T-shirt census:

Steering Council Q&A: Greg, Pablo and Donghee on stools and Barry and Emily on screen.

The Python Steering Council

Thursday highlights #

Release day? Yes!

The Steering Council decided not to revert, so full steam ahead with the release.

Savannah Ostrowski, release manager for 3.16 and 3.17, shadowed the release to see what the process looks like (not as bad as it looks in PEP 101).

Time for a couple of quick PRs and an interview with Pablo Galindo Salgado and Łukasz Langa on the core.py podcast, along with 29 others!

T-census:

The 3.14 release room, two laptops on a table and the release CI build shown on a screen. The laptop with the “365 PARTYGIRL” sticker isn’t mine.

The 3.14.0rc3 release in progress

Friday highlights #

I went to Manchester to attend PyCon UK. Some highlights:

The conference also included sprints, and Adam and I ran the CPython sprint. We had a big table full of contributors and a few made their very first contributions, which is always rewarding for all involved!

Another roomful of people working at laptops around tables

PyCon UK sprint

Some numbers for me during the week:

Thank you #

Huge thanks to Diego Russo and Arm for arranging and hosting us. The core sprint is always a highlight of the year and an incredibly productive week.

Read writeups by Diego and Antonio, and I recommend listening to Łukasz and Pablo’s core.py podcast for interviews with 18 (part one) and 12 sprinters (part two). They’re long, but it’s fascinating to hear all the different things everyone is working on.

Header photo by Arm

November 03, 2025 08:58 AM UTC


Python Bytes

#456 You're so wrong

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://pyfound.blogspot.com/2025/10/NSF-funding-statement.html?featured_on=pythonbytes">The PSF has withdrawn a $1.5 million proposal to US government grant program</a></strong></li> <li><strong><a href="https://www.reddit.com/r/Python/comments/1oh7dcw/a_binary_serializer_for_pydantic_models_7_smaller/?featured_on=pythonbytes">A Binary Serializer for Pydantic Models</a></strong></li> <li><strong><a href="https://www.pythonmorsels.com/t-strings-in-python/?featured_on=pythonbytes">T-strings: Python's Fifth String Formatting Technique?</a></strong></li> <li><strong><a href="https://github.com/antoniorodr/Cronboard?featured_on=pythonbytes">Cronboard</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=9dVJDj_HLy8' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="456">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! 
Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? 
Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Brian #1:</strong> <a href="https://pyfound.blogspot.com/2025/10/NSF-funding-statement.html?featured_on=pythonbytes">The PSF has withdrawn a $1.5 million proposal to US government grant program</a></p> <ul> <li><a href="https://simonwillison.net/2025/Oct/27/psf-withdrawn-proposal/?featured_on=pythonbytes">Related post from Simon Willison</a></li> <li>ARS Technica: <a href="https://arstechnica.com/tech-policy/2025/10/python-foundation-rejects-1-5-million-grant-over-trump-admins-anti-dei-rules/?featured_on=pythonbytes">Python plan to boost software security foiled by Trump admin’s anti-DEI rules</a></li> <li>The Register: <a href="https://www.theregister.com/2025/10/27/python_foundation_abandons_15m_nsf/?featured_on=pythonbytes">Python Foundation goes ride or DEI, rejects government grant with strings attached</a></li> <li>In Jan 2025, the PSF submitted a proposal for a US NSF grant under the Safety, Security, and Privacy of Open Source Ecosystems program. After months of work by the PSF, the proposal was recommended for funding.</li> <li>If the PSF accepted it, however, they would need to agree to the some terms and conditions, including, affirming that the PSF doesn't support diversity. The restriction wouldn't just be around the security work, but around all activity of the PSF as a whole. And further, that any deemed violation would give the NSF the right to ask for the money back.</li> <li>That just won't work, as the PSF would have already spent the money.</li> <li>The PSF mission statement includes "The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers." 
The money would have obviously been very valuable, but the restrictions are just too unacceptable.</li> <li>The PSF withdrew the proposal. This couldn't have been an easy decision, that was a lot of money, but I think the PSF did the right thing.</li> </ul> <p><strong>Michael #2:</strong> <a href="https://www.reddit.com/r/Python/comments/1oh7dcw/a_binary_serializer_for_pydantic_models_7_smaller/?featured_on=pythonbytes">A Binary Serializer for Pydantic Models</a></p> <ul> <li><strong>7× Smaller Than JSON</strong></li> <li>A compact binary serializer for Pydantic models that dramatically reduces RAM usage compared to JSON.</li> <li>The library is designed for high-load systems (e.g., Redis caching), where millions of models are stored in memory and every byte matters.</li> <li>It serializes Pydantic models into a minimal binary format and deserializes them back with zero extra metadata overhead.</li> <li><strong>Target Audience:</strong> This project is intended for developers working with: <ul> <li>high-load APIs</li> <li>in-memory caches (Redis, Memcached)</li> <li>message queues</li> <li>cost-sensitive environments where object size matters</li> </ul></li> </ul> <p><strong>Brian #3: <a href="https://www.pythonmorsels.com/t-strings-in-python/?featured_on=pythonbytes">T-strings: Python's Fifth String Formatting Technique?</a></strong></p> <ul> <li>Trey Hunner</li> <li>Python 3.14 has t-strings. How do they fit in with the rest of the string story?</li> <li>History <ul> <li>percent-style (%) strings - been around for a very long time</li> <li><code>string.Template</code> - and <code>t.substitute()</code> - from Python 2.4, but I don’t think I’ve ever used them</li> <li>bracket variables and <code>.format()</code> - Since Python 2.6</li> <li>f-strings - Python 3.6 - Now I feel old. These still seem new to me</li> <li>t-strings - Python 3.14, but a totally different beast. 
These don’t return strings.</li> </ul></li> <li>Trey then covers a problem with f-strings in that the substitution happens at definition time.</li> <li>t-strings have substitution happen later. This is essentially “lazy string interpolation”</li> <li>This still takes a bit to get your head around, but I appreciate Trey taking a whack at the explanation.</li> </ul> <p><strong>Michael #4: <a href="https://github.com/antoniorodr/Cronboard?featured_on=pythonbytes">Cronboard</a></strong></p> <ul> <li>Cronboard is a terminal application that allows you to manage and schedule cronjobs on local and remote servers.</li> <li>With Cronboard, you can easily add, edit, and delete cronjobs, as well as view their status.</li> <li><strong>✨ Features</strong> <ul> <li>✔️ Check cron jobs</li> <li>✔️ Create cron jobs with validation and human-readable feedback</li> <li>✔️ Pause and resume cron jobs</li> <li>✔️ Edit existing cron jobs</li> <li>✔️ Delete cron jobs</li> <li>✔️ View formatted last and next run times</li> <li>✔️ Accepts <code>special expressions</code> like <code>@daily</code>, <code>@yearly</code>, <code>@monthly</code>, etc.</li> <li>✔️ Connect to servers over SSH, using a password or SSH keys</li> <li>✔️ Choose another user to manage cron jobs if you have the permissions to do so (<code>sudo</code>)</li> </ul></li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://discuss.python.org/t/pep-810-explicit-lazy-imports/104131/466?featured_on=pythonbytes">PEP 810: Explicit lazy imports, has been unanimously accepted by steering council</a></li> <li><a href="https://courses.pythontest.com/lean-tdd?featured_on=pythonbytes">Lean TDD</a> book will be written in the open. TOC, some details, and a 10 page introduction are now available. Hoping for the first pass to be complete by the end of the year. 
<ul> <li>I’d love feedback to help make it a great book, and keep it small-ish, on a very limited budget.</li> </ul></li> </ul> <p><strong>Joke: <a href="https://x.com/PR0GRAMMERHUM0R/status/1977403921149649083?featured_on=pythonbytes">You are so wrong!</a></strong></p>

November 03, 2025 08:00 AM UTC


Django Weblog

Announcing DjangoCon Europe 2026 in Athens, Greece! ☀️🏖️🏛️🇬🇷

We’re excited to share that DjangoCon Europe returns in 2026 — this time in the historic and sun-soaked city of Athens, Greece 🇬🇷, with three days of talks from April 15–17 followed by two days of sprints over the weekend!

Panorama of Athens with the city skyline. The Mediterranean sea in the background, and the Acropolis in the foreground

Photo by Rafael Hoyos Weht on Unsplash

DjangoCon Europe is one of the longest-running Django events worldwide, now in its 18th edition - and 15th country!

What’s on the agenda

We’re preparing a mix of Django and Python talks, hands-on workshops, and opportunities to collaborate, learn, and celebrate our community. Whether you're new to Django or a long-time Djangonaut, DjangoCon Europe is designed to help you build new skills and connect with others who care about open-source software.

Athens provides the perfect backdrop — a lively, accessible city full of culture 🏛️, great food 😊, and spring sunshine ☀️.

Join us in Athens

DjangoCon Europe thrives because people across our community take part. As the organizers prepare the programme, there will be many ways to get involved:

Stay updated

We’ll share details on proposals, tickets, sponsorship packages, and sprints in the coming weeks, via our newsletter on the conference website.

Sign up for updates

We are also on:

November 03, 2025 07:00 AM UTC


Armin Ronacher

Absurd Workflows: Durable Execution With Just Postgres

It’s probably no surprise to you that we’re building agents somewhere. Everybody does it. Building a good agent, however, brings back some of the historic challenges involving durable execution.

Entirely unsurprisingly, a lot of people are now building durable execution systems. Many of these, however, are incredibly complex and require you to sign up for another third-party service. I generally try to avoid bringing in extra complexity if I can avoid it, so I wanted to see how far I can go with just Postgres. To this end, I wrote Absurd [1], a tiny SQL-only library with a very thin SDK to enable durable workflows on top of just Postgres — no extension needed.

Durable Execution 101

Durable execution (or durable workflows) is a way to run long-lived, reliable functions that can survive crashes, restarts, and network failures without losing state or duplicating work. Durable execution can be thought of as the combination of a queue system and a state store that remembers the most recently seen execution state.

Because Postgres is excellent at queues thanks to SELECT ... FOR UPDATE SKIP LOCKED, you can use it for the queue (e.g., with pgmq). And because it’s a database, you can also use it to store the state.
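As a sketch of that claim-a-task pattern (the `tasks` table and its columns here are made up for illustration and are not Absurd's actual schema), a worker can atomically grab the next pending row like this:

```python
# Hypothetical schema: tasks(id, payload, state, enqueued_at, claimed_at).
# The UPDATE claims exactly one pending row; SKIP LOCKED makes concurrent
# workers skip rows already locked by another worker instead of blocking,
# so no two workers ever claim the same task.
CLAIM_SQL = """
UPDATE tasks
   SET state = 'running', claimed_at = now()
 WHERE id = (
     SELECT id
       FROM tasks
      WHERE state = 'pending'
      ORDER BY enqueued_at
      LIMIT 1
        FOR UPDATE SKIP LOCKED
 )
RETURNING id, payload
"""

def claim_next_task(conn):
    """Claim one task using any DB-API connection (e.g. a psycopg connection)."""
    with conn.cursor() as cur:
        cur.execute(CLAIM_SQL)
        row = cur.fetchone()  # None when the queue is empty
    conn.commit()
    return row
```

Because the locking clause sits inside the subquery, the claim and the state change happen in a single statement, which is what makes plain Postgres a workable queue.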

The state is important. With durable execution, instead of running your logic in memory, the goal is to decompose a task into smaller pieces (step functions) and record every step and decision. When the process stops (whether it fails, intentionally suspends, or a machine dies) the engine can replay those events to restore the exact state and continue where it left off, as if nothing happened.
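The replay idea can be illustrated with a toy in-memory version (a simplification of the concept, not Absurd's actual code; here a plain dict stands in for the Postgres-backed checkpoint store):

```python
checkpoints = {}  # stands in for the Postgres-backed checkpoint store

def step(key, fn):
    # Completed steps are served from the store instead of re-running.
    if key in checkpoints:
        return checkpoints[key]
    result = fn()
    checkpoints[key] = result  # persisted before moving on
    return result

calls = []

def task():
    a = step("fetch", lambda: calls.append("fetch") or 41)
    b = step("compute", lambda: calls.append("compute") or a + 1)
    return b

assert task() == 42
assert task() == 42                    # a re-run replays both steps
assert calls == ["fetch", "compute"]   # the actual work ran only once
```

If the process died between the two steps, a fresh run would replay `"fetch"` from the store and execute only `"compute"`, which is exactly the "continue where it left off" behaviour described above.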

Absurd At A High Level

Absurd at the core is a single .sql file (absurd.sql) which needs to be applied to a database of your choice. That SQL file’s goal is to move the complexity of SDKs into the database. SDKs then make the system convenient by abstracting the low-level operations in a way that leverages the ergonomics of the language you are working with.

The system is very simple: A task dispatches onto a given queue from where a worker picks it up to work on. Tasks are subdivided into steps, which are executed in sequence by the worker. Tasks can be suspended or fail, and when that happens, they execute again (a run). The result of a step is stored in the database (a checkpoint). To avoid repeating work, checkpoints are automatically loaded from the state storage in Postgres again.

Additionally, tasks can sleep or suspend for events and wait until they are emitted. Events are cached, which means they are race-free.

With Agents

What is the relationship of agents with workflows? Normally, workflows are DAGs defined by a human ahead of time. AI agents, on the other hand, define their own adventure as they go. That means they are basically a workflow with mostly a single step that iterates over changing state until it determines that it has completed. Absurd enables this by automatically counting up steps if they are repeated:

absurd.registerTask({name: "my-agent"}, async (params, ctx) => {
  let messages = [{role: "user", content: params.prompt}];
  let step = 0;
  while (step++ < 20) {
    const { newMessages, finishReason } = await ctx.step("iteration", async () => {
      return await singleStep(messages);
    });
    messages.push(...newMessages);
    if (finishReason !== "tool-calls") {
      break;
    }
  }
});

This defines a single task named my-agent, and it has just a single step. The return value is the changed state, but the current state is passed in as an argument. Every time the step function is executed, the data is looked up first from the checkpoint store. The first checkpoint will be iteration, the second iteration#2, iteration#3, etc. Each state only stores the new messages it generated, not the entire message history.
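The step-name counting can be sketched in a few lines (my own toy version of the idea, not Absurd's implementation):

```python
from collections import Counter

seen = Counter()

def checkpoint_key(name):
    # Repeated step names get suffixed keys ("iteration", "iteration#2", ...)
    # so each pass through the loop checkpoints under its own key.
    seen[name] += 1
    n = seen[name]
    return name if n == 1 else f"{name}#{n}"

assert checkpoint_key("iteration") == "iteration"
assert checkpoint_key("iteration") == "iteration#2"
assert checkpoint_key("iteration") == "iteration#3"
```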

If a step fails, the task fails and will be retried. And because of checkpoint storage, if you crash in step 5, the first 4 steps will be loaded automatically from the store. Steps are never retried, only tasks.

How do you kick it off? Simply enqueue it:

await absurd.spawn("my-agent", {
  prompt: "What's the weather like in Boston?"
}, {
  maxAttempts: 3,
});

And if you are curious, this is an example implementation of the singleStep function used above:

Single step function
async function singleStep(messages) {
  const result = await generateText({
    model: anthropic("claude-haiku-4-5"),
    system: "You are a helpful agent",
    messages,
    tools: {
      getWeather: { /* tool definition here */ }
    },
  });

  const newMessages = (await result.response).messages;
  const finishReason = await result.finishReason;

  if (finishReason === "tool-calls") {
    const toolResults = [];
    for (const toolCall of result.toolCalls) {
      /* handle tool calls here */
      if (toolCall.toolName === "getWeather") {
        const toolOutput = await getWeather(toolCall.input);
        toolResults.push({
          toolName: toolCall.toolName,
          toolCallId: toolCall.toolCallId,
          type: "tool-result",
          output: {type: "text", value: toolOutput},
        });
      }
    }
    newMessages.push({
      role: "tool",
      content: toolResults
    });
  }

  return { newMessages, finishReason };
}

Events and Sleeps

And like Temporal and other solutions, you can yield if you want. If you want to come back to a problem in 7 days, you can do so:

await ctx.sleep(60 * 60 * 24 * 7); // sleep for 7 days

Or if you want to wait for an event:

const eventName = `email-confirmation-${userId}`;
try {
  const payload = await ctx.waitForEvent(eventName, {timeout: 60 * 5});
  // handle event payload
} catch (e) {
  if (e instanceof TimeoutError) {
    // handle timeout
  } else {
    throw e;
  }
}

Which someone else can emit:

const eventName = `email-confirmation-${userId}`;
await absurd.emitEvent(eventName, { confirmedAt: new Date().toISOString() });

That’s it!

Really, that’s it. There is really not much to it. It’s just a queue and a state store — that’s all you need. There is no compiler plugin and no separate service or whole runtime integration. Just Postgres. That’s not to throw shade on these other solutions; they are great. But not every problem necessarily needs to scale to that level of complexity, and you can get quite far with much less. Particularly if you want to build software that other people should be able to self-host, that might be quite appealing.

  1. It’s named Absurd because durable workflows are absurdly simple, but have been overcomplicated in recent years.

November 03, 2025 12:00 AM UTC

November 02, 2025


Stefanie Molin

Becoming a Core Developer

Throughout your open source journey, you have no doubt been interacting with the core development team of the projects to which you have been contributing. Have you ever wondered how people become core developers of a project? In this post, I share my journey to becoming a core developer of numpydoc.

November 02, 2025 04:50 PM UTC


Django Weblog

Five ways to discover Django packages

With tens of thousands of available add-ons, it can be hard to discover which packages might be helpful for your projects. But there are a lot of options available to navigate this ecosystem – here are a few.

New ✨ Ecosystem page

Our new Django’s ecosystem page showcases third-party apps and add-ons recommended by the Django Steering Council.

State of Django

The 2025 State of Django survey is out, and we get to see how people who responded to the survey are ranking packages! Here are their answers to “What are your top five favorite third-party Django packages?”

Responses Package
49% djangorestframework
27% django-debug-toolbar
26% django-celery
19% django-cors-headers
18% django-filter
18% django-allauth
15% pytest-django
15% django-redis
14% django-extensions
14% django-crispyforms
13% djangorestframework-simplejwt
12% django-channels
12% django-storages
12% django-environ
11% django-celery-beat
10% django-ninja
10% None / I’m not sure
7% django-import-export
7% Wagtail
6% dj-database-url
5% django-silk
5% django-cookiecutter
5% dj-rest-auth
5% django-models-utils
4% django-taggit
4% django-rest-swagger
3% django-polymorphic
3% django-configurations
3% django-compressor
3% django-multitenant
3% pylint-django
2% django-braces
2% model-bakery
2% Djoser
1% django-money
1% dj-rest-knox
8% Other

Thank you to JetBrains, who created this State of Django survey with the Django Software Foundation! They are currently running a big promotion campaign: until November 11, 2025, get PyCharm for 30% off. All money goes to the Django Software Foundation!

Django Packages

Django Packages is a directory of reusable Django apps, tools, and frameworks, categorized and ranked by popularity. It has thousands of options that are easily comparable with category "grids".

Awesome Django

Awesome Django is more of a community-maintained curated list of Django resources and packages. It’s frequently updated, currently with 198 different package entries.

Reddit and newsletters

The r/django subreddit often covers new tools and packages developers are experimenting with. And here are newsletters that often highlight new packages or “hidden gems” in Django’s ecosystem:

November 02, 2025 02:29 PM UTC


Brian Okken

Announcing the Lean TDD book

There are many great ideas that I’ve gotten from TDD, Lean, Pragmatic, and more. For the past few years, I’ve really wanted to write a book about TDD, with an emphasis on using Lean teachings to cut out all the waste, and then grow TDD into more of a project-wide process.

To motivate myself to finally get it done, I’ve started. And I’m using a holistic, project-wide process.

November 02, 2025 12:00 AM UTC

November 01, 2025


The Python Coding Stack

And Now You Know Your ABC

I may have already mentioned in a previous post that I’ve rekindled an old passion: track and field athletics. I’m even writing about it in another publication: Back on the Track (look out for the new series on sprint biomechanics, if you’re so inclined!)

This post is inspired by a brief thought that crossed my mind when I considered writing software to assist my club—in the end, I chose not to. (But I already volunteer three hours a week coaching members of the youth team. So no, I don’t feel guilty.)

Here’s the task. I want to create a class to represent a track and field event held in a competition. This class allows you to enter the raw results as reported by the officials—the athlete’s bib number and the time they clocked in a race. It then computes the event’s full results.

Athletes and Events

A good starting point is to define two classes, Athlete and Event. I’ll focus on the Event class in this post, so I’ll keep the Athlete class fairly basic:

All code blocks are available in text format at the end of this article • #1 • The code images used in this article are created using Snappify. [Affiliate link]

There’s more we could add to make this class more complete. But I won’t. You can create a list of athletes that are competing in the track and field meeting:

#2

To make it easy and efficient to obtain the Athlete object just from a bib number, you can create a bib_to_athlete dictionary:

#3

The dictionary’s keys are the bib numbers as integers and the values are the Athlete objects. Therefore, print(bib_to_athlete[259].name) displays “Carl Lewis”.

Now, let’s move to the Event class:

#4

Let’s go through the data attributes in Event.__init__():

So, let’s create the Result class at the top of the script:

#5

We’ll show an example of Event and Result instances soon. But first we need some methods. Let’s get back to the Event class and define .add_result(), which allows you to add a raw result in the format provided by the officials—the officials only provide the bib number and the performance, such as the time clocked by the athlete:

#6

You add checks for the scenarios when a bib number doesn’t match any athlete in the whole meeting or when the bib number doesn’t match any athlete in the specific event. The dictionary .get() method returns None by default if the key is not present in the dictionary.

If the bib number is valid, you create a Result instance and add it to .results.

Once you add all the results from the officials, you’re ready to finalise the results:

#7

This method sorts the list of results based on the value of the .performance data attribute.

If you’re not familiar with lambda functions, you can read What’s All the Fuss About ‘lambda’ Functions in Python. And to read more about the key parameter in .sort(), see The Key to the ‘key’ Parameter in Python.

Let’s try this class to check that everything is as we expect:

#8

You create an Event instance for the 100m sprint race. Then you add three individual results. Finally, you finalise the results to get the athletes and their positions in the race.

Since .finalise_results() sorts based on the performance, the athlete with the fastest time takes the first position in the list, and so on. Here’s the output from the for loop displaying the athletes in .results in order:

Usain Bolt: 9.58
Carl Lewis: 9.86
Jesse Owens: 10.3

Usain Bolt is first with the fastest time, followed by Carl Lewis, and Jesse Owens in third. Everything seems to be working well… But “It works” are the two most dangerous words in programming!

Subscribe now

It’s Time For The Long Jump

The long jump results come in next from the officials. You know how to input the data and finalise the results now:

#9

And here are the results displayed:

Carl Lewis: 8.87
Mike Powell: 8.95

But, but… Mike Powell’s 8.95m is better than Carl Lewis’s 8.87m. The order is wrong!

Of course! In the 100m sprint the fastest time wins, and the fastest time is the lowest number. But in the long jump it’s the longest jump that wins, and that’s the largest number!

The .finalise_results() method no longer works for the long jump or for other field events. For field events, the sorting needs to happen in reverse order.

There are several ways to deal with this problem. There are always several ways to solve a problem in programming.

The simplest option is to add an .is_field_event Boolean data attribute to Event and then an if statement in .finalise_results(). And if this were the only difference between track events and field events, this would indeed be a fine option.

However, if there are more differences to account for, the extra fields and if statements scattered throughout the class make the class harder to maintain.
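A minimal sketch of that flag-based version might look like this (trimmed to the parts that change; Result is the same two-attribute class from earlier, and .add_result() is omitted for brevity):

```python
class Result:
    def __init__(self, athlete, performance):
        self.athlete = athlete
        self.performance = performance

class Event:
    def __init__(self, event_name, participants, is_field_event=False):
        self.event_name = event_name
        self.participants = participants
        self.is_field_event = is_field_event
        self.results = []

    def finalise_results(self):
        # Field events rank largest-first; track events smallest-first.
        self.results.sort(
            key=lambda item: item.performance,
            reverse=self.is_field_event,
        )

long_jump = Event("Long Jump", [], is_field_event=True)
long_jump.results = [Result("Carl Lewis", 8.87), Result("Mike Powell", 8.95)]
long_jump.finalise_results()
assert [r.performance for r in long_jump.results] == [8.95, 8.87]
```

It works here, but every further difference between event types would add another flag and another branch.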

So, let’s look at another option.

Creating Separate Classes for Track Events and Field Events

You could create two classes instead of one: TrackEvent and FieldEvent. Each class will take care of getting the finalise_results() method right and also deal with any other differences we may find between track events and field events.

However, you don’t want to create two unrelated classes since these classes will have a lot in common. Sure, you can copy and paste the code that’s common between the two classes, but you don’t need me to tell you that’s not a good idea. Will you remember to make changes in both places when you decide to change the implementation later?

You also want to make sure they have the same data attribute names for similar things. For example, you don’t want one class to have .finalise_results() and the other .final_results(), say, or change the spelling by mistake. Now, I know you’ll pay attention when creating these methods to make sure this doesn’t happen, but why take the risk?

And when you come back to your code in six months’ time (or a colleague starts working on the code) and you decide you need another class that follows the same principles, will you remember what methods you need to include? You can spend time studying your old classes, of course. But wouldn’t you like to make your life a bit simpler? Of course you would.

Inheritance offers one solution. If you need a refresher on inheritance, you can read the fifth article in the classic Harry Potter OOP series: “You Have Your Mother’s Eyes” • Inheritance in Python Classes.

However, TrackEvent shouldn’t inherit from FieldEvent since a track event is not a field event. The same applies the other way around. These classes are siblings, so they can’t have a parent-child relationship.

Instead, they could both inherit from a common parent. So, let’s keep the Event class as a common parent for both TrackEvent and FieldEvent.

However, the only purpose of Event is to serve as a starting point for the two child classes. You no longer want a user to create instances of Event now that you have TrackEvent and FieldEvent. They should only create instances of TrackEvent or FieldEvent. How can you make this clear in your code and perhaps even prevent users from creating an instance of the parent class, Event?

Abstract Base Classes (ABCs)

The answer is Abstract Base Classes, often shortened to ABCs. The ABC acronym gives the impression these are as easy as ABC—they’re not, but there’s no reason they need to be difficult, either. The title of this article is also a reference to a rhyme my children used to sing when they were toddlers learning their ABC!

Let’s refresh your memory about different terms often used to describe the inheritance relationship between classes:

Whatever terms you choose to use, they refer to the same things!

So, an ABC is a base class, since other classes are derived from it. That deals with the BC in ABC. And it’s abstract because you’re never going to create a concrete instance of this class.

You’re not meant to create an instance of an abstract base class and often you’ll be prevented from doing so. But you can use it to derive other classes.

Let’s turn Event into an abstract base class:

#10

The changes from the previous version are highlighted. Let’s go through each change:

Here’s what you’re effectively stating when you create this ABC with the .finalise_results() abstract method: Any class derived from the Event ABC must include a .finalise_results() method.

Let’s explore this by defining TrackEvent and FieldEvent. At first, you’ll keep them simple:

#11

You define the two new classes TrackEvent and FieldEvent. They inherit from Event, which is an abstract base class. You just include pass as the body of each class for now. This means that these classes inherit everything from the parent class and have no changes or additions. They’re identical to the parent class.

Note that you now use TrackEvent and FieldEvent to create instances for the 100m race and the long jump.

However, you get an error when you try to create these instances:

Traceback (most recent call last):
  File ... line 53, in <module>
    track_100m = TrackEvent(
        "100m Sprint",
        [bib_to_athlete[259], bib_to_athlete[161], bib_to_athlete[362]]
    )
TypeError: Can't instantiate abstract class TrackEvent without
    an implementation for abstract method 'finalise_results'

The abstract base class includes the method .finalise_results(), which is marked as an abstract method. Python is expecting a concrete implementation of this method. Any class that inherits from the Event ABC must have a concrete implementation of this method. Let’s fix this:

#12

In TrackEvent, you include the same code you had in the original Event.finalise_results() since this algorithm works well for track events where the smallest numbers (the fastest times) represent the best performances.

However, you pass reverse=True to .sort() in FieldEvent.finalise_results() since the largest numbers (longest distances) represent the best performances in this case.

You can now try these new classes on the 100m race results and the long jump results you used earlier:

#13

You now use the new derived classes TrackEvent and FieldEvent in this code instead of Event. You also add two new printouts to separate the results.

Here’s the output now:

100m Sprint Results:
Usain Bolt: 9.58
Carl Lewis: 9.86
Jesse Owens: 10.3
​
Long Jump Results:
Mike Powell: 8.95
Carl Lewis: 8.87

The 100m results show the fastest times (smallest values) as the best performances. The long jump results show the longer jump (larger value) as the best performance. All as it should be!

Wind Readings and Breaking Ties

But there are more differences we need to account for. In some track and field events, the wind reading matters. In these events, if the wind reading is larger than 2.0 m/s, the results still stand but the performances cannot be used for official records.

But does this affect track events or field events? So should you account for this in the TrackEvent class or in the FieldEvent class?

It’s not so simple. Wind readings matter in some track events, but not all. And they also matter in some field events, but not all. So, you have a different subset of events to account for now. The 100m, 200m, 110m hurdles and 100m hurdles, which are all track events, are in the same category as the long jump and triple jump, which are field events.
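The wind rule itself is simple to express as a check (a hedged sketch; the constant and function names are my own, and the +2.0 m/s limit is the one stated above):

```python
# Marks still count for placing regardless of wind, but a tailwind
# above +2.0 m/s makes the performance ineligible for records.
WIND_LIMIT_MS = 2.0

def is_record_eligible(wind_reading):
    return wind_reading <= WIND_LIMIT_MS

assert is_record_eligible(1.9)
assert is_record_eligible(-0.5)      # headwinds are always record-legal
assert not is_record_eligible(2.1)
```

The hard part, of course, isn't the check but deciding which classes it belongs to.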

But before we find a solution for this, here’s something else to mess things up even more. What happens when there’s a tie—when two athletes have the same performance value? The tie-breaking rules also depend on the event. Let’s ignore the track events here, since depending on the timing system used, it’s either the officials who decide or the higher-precision times from the automatic timing system that break the tie.

But what about the field events? In most of them, if there’s a tie, the next best performance is taken into account. However, the rules are different for the high jump and pole vault events where there’s a count back system used. Explaining the rules of track and field is not the purpose of this article, so I won’t!

So that’s yet another subset to consider: tie breaks in the vertical jumps are different from tie breaks in the horizontal jumps and throws.
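For the horizontal jumps and throws, that next-best rule can be sketched with a sort key built from each athlete's full series of attempts (the names and marks below are made up for illustration):

```python
# Each athlete's attempts; sorting each series descending means ties on
# the best mark fall through to the second-best attempt, and so on.
series = {
    "Athlete A": [8.10, 7.95, 8.10],
    "Athlete B": [8.10, 8.02, 7.88],
}

def ranking_key(attempts):
    return sorted(attempts, reverse=True)

ranked = sorted(series, key=lambda name: ranking_key(series[name]), reverse=True)
assert ranked == ["Athlete A", "Athlete B"]  # A wins on the second-best mark
```

The vertical jumps would still need their separate count-back logic, which a key like this doesn't capture.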

How can we account for all these subsets of events?

To ABC or Not To ABC

There are always many solutions to the same problem. You can extend the idea of using ABCs to cater for all options. But the Venn diagram of which event falls under which category is a bit complicated in this case.

The 100m, 200m, 100m hurdles and 110m hurdles are all track events affected by wind readings. The long jump and triple jump are also affected by wind readings, yet they’re field events. The discus and other throw events are field events—so the longest throw wins—but aren’t affected by high wind readings. And the long jump, triple jump, and the throws have a next-best jump/throw tie-breaking rule. But the pole vault and high jump are field events not affected by the wind but with different tie-breaking rules.

Are you still with me? Confused? Can you think of an abstract base class structure to account for all these combinations? It’s not impossible, but you’ll need several layers in the hierarchy.

Instead, I’ll explore a different route in a second article, which I’ll publish soon!


The follow-up article will be part of The Club here at The Python Coding Stack. These are the articles for paid subscribers. So, if you’d like to read about a different way—arguably, a better way—of merging all these requirements into our classes, join The Club by upgrading to a paid subscription.


Final Words for This Article • Ready for Part 2?

Inheritance is a great tool. And abstract base classes enable you to organise your hierarchies by creating common base classes for concrete classes you may need to define.

However, inheritance hierarchies can get quite complex. Since inheritance provides a tight coupling between classes, deep hierarchies can cause a bit of a headache.

Still, abstract base classes provide a great tool to make code cleaner, more readable, and more robust. In the follow-up to this article (coming soon), we’ll look at another tool you can use along with inheritance to solve these complex relationships. So join The Club to carry on reading about the track and field classes and how mixins and composition can help manage the complexity.

Image by TianaZZ from Pixabay


Code in this article uses Python 3.14

The code images used in this article are created using Snappify. [Affiliate link]

Join The Club, the exclusive area for paid subscribers for more Python posts, videos, a members’ forum, and more.

Subscribe now

You can also support this publication by making a one-off contribution of any amount you wish.

Support The Python Coding Stack


For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

Further reading related to this article’s topic:


Appendix: Code Blocks

Code Block #1
class Athlete:
    def __init__(self, name, bib_number):
        self.name = name
        self.bib_number = bib_number
Code Block #2
# ...

list_of_athletes = [
    Athlete("Carl Lewis", 259),
    Athlete("Jesse Owens", 161),
    Athlete("Usain Bolt", 362),
    Athlete("Mike Powell", 412),
    Athlete("Florence Griffith-Joyner", 263),
    Athlete("Allyson Felix", 298),
    Athlete("David Rudisha", 177),
    # ...  Add more athletes as needed
]
Code Block #3
# ...

bib_to_athlete = {
    athlete.bib_number: athlete for athlete in list_of_athletes
}
Code Block #4
# ...

class Event:
    def __init__(self, event_name, participants):
        self.event_name = event_name
        self.participants = participants
        self.results = []
Code Block #5
class Result:
    def __init__(self, athlete, performance):
        self.athlete = athlete
        self.performance = performance

# ...
Code Block #6
# ...

class Event:
    # ...
        
    def add_result(self, bib_number, performance):
        athlete = bib_to_athlete.get(bib_number)
        if not athlete or athlete not in self.participants:
            raise ValueError(f"Invalid bib number {bib_number}")
        self.results.append(
            Result(athlete, performance)
        )
Code Block #7
# ...

class Event:
    # ...
    
    def finalise_results(self):
        self.results.sort(key=lambda item: item.performance)
Code Block #8
# ...

track_100m = Event(
    "100m Sprint",
    [bib_to_athlete[259], bib_to_athlete[161], bib_to_athlete[362]]
)

track_100m.add_result(259, 9.86)
track_100m.add_result(161, 10.3)
track_100m.add_result(362, 9.58)
track_100m.finalise_results()

for result in track_100m.results:
    print(f"{result.athlete.name}: {result.performance}")
Code Block #9
# ...

field_long_jump = Event(
    "Long Jump",
    [bib_to_athlete[412], bib_to_athlete[259]]
)

field_long_jump.add_result(412, 8.95)
field_long_jump.add_result(259, 8.87)
field_long_jump.finalise_results()

for result in field_long_jump.results:
    print(f"{result.athlete.name}: {result.performance}")
Code Block #10
from abc import ABC, abstractmethod

class Result:
    def __init__(self, athlete, performance):
        self.athlete = athlete
        self.performance = performance

class Athlete:
    def __init__(self, name, bib_number):
        self.name = name
        self.bib_number = bib_number

class Event(ABC):
    def __init__(self, event_name, participants):
        self.event_name = event_name
        self.participants = participants
        self.results = []

    def add_result(self, bib_number, performance):
        athlete = bib_to_athlete.get(bib_number)
        if not athlete or athlete not in self.participants:
            raise ValueError(f"Invalid bib number {bib_number}")
        self.results.append(
            Result(athlete, performance)
        )
    
    @abstractmethod
    def finalise_results(self):
        pass

# ...
Code Block #11
# ...

class TrackEvent(Event):
    pass

class FieldEvent(Event):
    pass

# ...

track_100m = TrackEvent(
    "100m Sprint",
    [bib_to_athlete[259], bib_to_athlete[161], bib_to_athlete[362]]
)

# ...

field_long_jump = FieldEvent(
    "Long Jump",
    [bib_to_athlete[412], bib_to_athlete[259]]
)

# ...
Code Block #12
# ...

class TrackEvent(Event):
    def finalise_results(self):
        self.results.sort(key=lambda item: item.performance)

class FieldEvent(Event):
    def finalise_results(self):
        self.results.sort(
            key=lambda item: item.performance,
            reverse=True,
        )

# ...
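The two overrides hinge on `list.sort`'s `reverse` flag: track events rank by lowest time first, while field events rank by longest distance first. A quick standalone check of the two orderings, using the sample times and distances from the code above:

```python
# Track: lower time is better, so the default ascending sort wins.
times = [9.86, 10.3, 9.58]
times.sort()
print(times)  # [9.58, 9.86, 10.3]

# Field: longer distance is better, so sort in descending order.
distances = [8.95, 8.87]
distances.sort(reverse=True)
print(distances)  # [8.95, 8.87]
```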
Code Block #13
# ...

track_100m = TrackEvent(
    "100m Sprint",
    [bib_to_athlete[259], bib_to_athlete[161], bib_to_athlete[362]]
)

track_100m.add_result(259, 9.86)
track_100m.add_result(161, 10.3)
track_100m.add_result(362, 9.58)
track_100m.finalise_results()

print("100m Sprint Results:")
for result in track_100m.results:
    print(f"{result.athlete.name}: {result.performance}")

field_long_jump = FieldEvent(
    "Long Jump",
    [bib_to_athlete[412], bib_to_athlete[259]]
)

field_long_jump.add_result(412, 8.95)
field_long_jump.add_result(259, 8.87)
field_long_jump.finalise_results()

print("\nLong Jump Results:")
for result in field_long_jump.results:
    print(f"{result.athlete.name}: {result.performance}")

For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!

Also, are you interested in technical writing? Would you like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.

And you can find out more about me at stephengruppetta.com

November 01, 2025 12:52 PM UTC


Zero to Mastery

[October 2025] Python Monthly Newsletter 🐍

71st issue of Andrei's Python Monthly: uv is the Future, Python Hyperflask, Python 3.14 is here, and much more. Read the full newsletter to get up-to-date with everything you need to know from last month.

November 01, 2025 10:00 AM UTC


Talk Python to Me

#526: Building Data Science with Foundation LLM Models

Today, we're talking about building real AI products with foundation models. Not toy demos, not vibes. We'll get into the boring dashboards that save launches, evals that change your mind, and the shift from analyst to AI app builder. Our guide is Hugo Bowne-Anderson, educator, podcaster, and data scientist, who's been in the trenches from scalable Python to LLM apps. If you care about shipping LLM features without burning the house down, stick around.

Episode sponsors

Posit: talkpython.fm/ppm
NordStellar: talkpython.fm/nordstellar
Talk Python Courses: talkpython.fm/training

Links from the show

Hugo Bowne-Anderson: x.com/hugobowne
Vanishing Gradients Podcast: vanishinggradients.fireside.fm
Fundamentals of Dask: High Performance Data Science Course: training.talkpython.fm
Building LLM Applications for Data Scientists and Software Engineers: maven.com
marimo: a next-generation Python notebook: marimo.io
DevDocs (Offline aggregated docs): devdocs.io
Elgato Stream Deck: elgato.com
Sentry's Seer: talkpython.fm/seer
The End of Programming as We Know It: oreilly.com
LorikeetCX AI Concierge: lorikeetcx.ai
Text to SQL & AI Query Generator: text2sql.ai
Inverse relationship enthusiasm for AI and traditional projects: oreilly.com

Watch this episode on YouTube: youtube.com/watch?v=_LFdKjsKdPE
Episode #526 deep-dive: talkpython.fm/526
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython
Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

November 01, 2025 08:00 AM UTC